WorldWideScience

Sample records for linear scaling computation

  1. On Numerical Stability in Large Scale Linear Algebraic Computations

    Czech Academy of Sciences Publication Activity Database

    Strakoš, Zdeněk; Liesen, J.

    2005-01-01

    Vol. 85, No. 5 (2005), pp. 307-325. ISSN 0044-2267. R&D Projects: GA AV ČR 1ET400300415. Institutional research plan: CEZ:AV0Z10300504. Keywords: linear algebraic systems; eigenvalue problems; convergence; numerical stability; backward error; accuracy; Lanczos method; conjugate gradient method; GMRES method. Subject RIV: BA - General Mathematics. Impact factor: 0.351, year: 2005.

  2. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-03-27

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, and for this, optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.

  3. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming.

    Science.gov (United States)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-08-01

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, and for this, optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab and is provided as supplementary information. hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.
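
    The following minimal sketch (not the authors' AILP implementation) illustrates only the LP subproblem that such an IP/LP alternation solves after each reaction-deletion step: given a set of reactions forced to zero, look for a non-trivial steady-state flux distribution S v = 0, v >= 0. The stoichiometric matrix, bounds, and target reaction below are made up for illustration; SciPy's linprog is assumed to be available.

        import numpy as np
        from scipy.optimize import linprog

        # Toy stoichiometric matrix (3 metabolites x 5 reactions), invented for illustration.
        S = np.array([
            [ 1, -1,  0,  0,  0],
            [ 0,  1, -1, -1,  0],
            [ 0,  0,  1,  0, -1],
        ], dtype=float)
        n_rxn = S.shape[1]

        def flux_mode_lp(knocked_out, target=4):
            """Maximize flux through `target` subject to S v = 0 and 0 <= v <= 10,
            with the knocked-out reactions fixed to zero (the IP-supplied constraint)."""
            c = np.zeros(n_rxn)
            c[target] = -1.0                      # linprog minimizes, so negate the objective
            bounds = [(0.0, 10.0)] * n_rxn
            for j in knocked_out:
                bounds[j] = (0.0, 0.0)
            return linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")

        res = flux_mode_lp(knocked_out=[3])       # e.g. the IP step proposed blocking reaction 3
        print("feasible:", res.status == 0, "flux vector:", res.x)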

  4. Design of large scale applications of secure multiparty computation : secure linear programming

    NARCIS (Netherlands)

    Hoogh, de S.J.A.

    2012-01-01

    Secure multiparty computation is a basic concept of growing interest in modern cryptography. It allows a set of mutually distrusting parties to perform a computation on their private information in such a way that as little as possible is revealed about each private input. The early results of

  5. Linear programming computation

    CERN Document Server

    PAN, Ping-Qi

    2014-01-01

    With emphasis on computation, this book is a real breakthrough in the field of LP. In addition to conventional topics, such as the simplex method, duality, and interior-point methods, all deduced in a fresh and clear manner, it introduces the state of the art by highlighting brand-new and advanced results, including efficient pivot rules, Phase-I approaches, reduced simplex methods, deficient-basis methods, face methods, and pivotal interior-point methods. In particular, it covers the determination of the optimal solution set, the feasible-point simplex method, the decomposition principle for solving large-scale problems, and the controlled-branch method based on the generalized reduced simplex framework for solving integer LP problems.
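
    As a small, concrete illustration of the kind of problem the book treats (not an example from the book), the following solves a two-variable LP with SciPy's HiGHS backend, which provides both simplex and interior-point solvers.

        from scipy.optimize import linprog

        c = [-3.0, -5.0]                 # maximize 3x + 5y  ->  minimize -3x - 5y
        A_ub = [[1.0, 0.0],              # x        <= 4
                [0.0, 2.0],              # 2y       <= 12
                [3.0, 2.0]]              # 3x + 2y  <= 18
        b_ub = [4.0, 12.0, 18.0]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
        print(res.x, -res.fun)           # optimal point [2, 6] with objective value 36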

  6. Introduction to computational linear algebra

    CERN Document Server

    Nassif, Nabil; Erhel, Jocelyne

    2015-01-01

    Introduction to Computational Linear Algebra introduces the reader with a background in basic mathematics and computer programming to the fundamentals of dense and sparse matrix computations, with illustrative examples. The textbook is a synthesis of conceptual and practical topics in "Matrix Computations." The book's learning outcomes are twofold: to understand state-of-the-art computational tools to solve matrix computation problems (BLAS primitives, MATLAB® programming) as well as essential mathematical concepts needed to master the topics of numerical linear algebra. It is suitable for s...

  7. Topics in computational linear optimization

    DEFF Research Database (Denmark)

    Hultberg, Tim Helge

    2000-01-01

    Linear optimization has been an active area of research ever since the pioneering work of G. Dantzig more than 50 years ago. This research has produced a long sequence of practical as well as theoretical improvements of the solution techniques available for solving linear optimization problems... of high quality solvers and the use of algebraic modelling systems to handle the communication between the modeller and the solver. This dissertation features four topics in computational linear optimization: A) automatic reformulation of mixed 0/1 linear programs, B) direct solution of sparse unsymmetric... systems of linear equations, C) reduction of linear programs and D) integration of algebraic modelling of linear optimization problems in C++. Each of these topics is treated in a separate paper included in this dissertation. The efficiency of solving mixed 0-1 linear programs by linear programming based...

  8. Computational linear and commutative algebra

    CERN Document Server

    Kreuzer, Martin

    2016-01-01

    This book combines, in a novel and general way, an extensive development of the theory of families of commuting matrices with applications to zero-dimensional commutative rings, primary decompositions and polynomial system solving. It integrates the Linear Algebra of the Third Millennium, developed exclusively here, with classical algorithmic and algebraic techniques. Even the experienced reader will be pleasantly surprised to discover new and unexpected aspects in a variety of subjects including eigenvalues and eigenspaces of linear maps, joint eigenspaces of commuting families of endomorphisms, multiplication maps of zero-dimensional affine algebras, computation of primary decompositions and maximal ideals, and solution of polynomial systems. This book completes a trilogy initiated by the uncharacteristically witty books Computational Commutative Algebra 1 and 2 by the same authors. The material treated here is not available in book form, and much of it is not available at all. The authors continue to prese...

  9. Computational aspects of linear control

    CERN Document Server

    2002-01-01

    Many devices (we say dynamical systems or simply systems) behave like black boxes: they receive an input, this input is transformed following some laws (usually a differential equation) and an output is observed. The problem is to regulate the input in order to control the output, that is, to obtain a desired output. Such a mechanism, where the input is modified according to the output measured, is called feedback. The study and design of such automatic processes is called control theory. As we will see, the term system embraces any device and control theory has a wide variety of applications in the real world. Control theory is an interdisciplinary domain at the junction of differential and difference equations, system theory and statistics. Moreover, the solution of a control problem involves many topics of numerical analysis and leads to many interesting computational problems: linear algebra (QR, SVD, projections, Schur complement, structured matrices, localization of eigenvalues, computation of the...

  10. Preface: Introductory Remarks: Linear Scaling Methods

    Science.gov (United States)

    Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.

    2008-07-01

    It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop is that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover—that is, at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up

  11. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  12. Computer Program For Linear Algebra

    Science.gov (United States)

    Krogh, F. T.; Hanson, R. J.

    1987-01-01

    Collection of routines provided for basic vector operations. The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN-callable routines that employ standard techniques to perform the basic operations of numerical linear algebra.
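
    For illustration, the same level-1 operations can be reached from Python through SciPy's wrappers around a FORTRAN BLAS; this is not the NASA code itself, only a sketch of the operations the library standardizes.

        import numpy as np
        from scipy.linalg import blas

        x = np.array([1.0, 2.0, 3.0])
        y = np.array([4.0, 5.0, 6.0])

        dot = blas.ddot(x, y)             # level-1 BLAS: dot product x'y      -> 32.0
        nrm = blas.dnrm2(x)               # level-1 BLAS: Euclidean norm ||x|| -> 3.7416...
        y2  = blas.daxpy(x, y, a=2.0)     # level-1 BLAS: y <- a*x + y         -> [6, 9, 12]
        print(dot, nrm, y2)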

  13. Energy conserving, linear scaling Born-Oppenheimer molecular dynamics.

    Science.gov (United States)

    Cawkwell, M J; Niklasson, Anders M N

    2012-10-07

    Born-Oppenheimer molecular dynamics simulations with long-term conservation of the total energy and a computational cost that scales linearly with system size have been obtained simultaneously. Linear scaling with a low pre-factor is achieved using density matrix purification with sparse matrix algebra and a numerical threshold on matrix elements. The extended Lagrangian Born-Oppenheimer molecular dynamics formalism [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] yields microcanonical trajectories with the approximate forces obtained from the linear scaling method that exhibit no systematic drift over hundreds of picoseconds and which are indistinguishable from trajectories computed using exact forces.
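
    A minimal dense sketch of density matrix purification of the kind referred to above is given below, using the standard McWeeny iteration and checked against diagonalization. It is only illustrative: the paper's sparse matrix algebra, numerical thresholding, and extended Lagrangian dynamics are not reproduced, and the toy Hamiltonian is made up.

        import numpy as np

        rng = np.random.default_rng(0)
        n, n_occ = 8, 3
        H = rng.standard_normal((n, n)); H = 0.5 * (H + H.T)     # toy symmetric "Hamiltonian"

        eps = np.linalg.eigvalsh(H)
        mu = 0.5 * (eps[n_occ - 1] + eps[n_occ])                 # chemical potential in the gap
        lam = np.max(np.abs(eps - mu))                           # bound on spectral radius of H - mu*I

        # Initial guess with eigenvalues mapped into [0, 1]: P0 = (I - (H - mu*I)/lam) / 2
        P = 0.5 * (np.eye(n) - (H - mu * np.eye(n)) / lam)

        for _ in range(50):                  # McWeeny step P <- 3P^2 - 2P^3 pushes eigenvalues to 0 or 1
            P2 = P @ P
            P = 3.0 * P2 - 2.0 * (P2 @ P)

        _, V = np.linalg.eigh(H)             # reference density matrix from the lowest n_occ states
        P_exact = V[:, :n_occ] @ V[:, :n_occ].T
        print(np.trace(P), np.max(np.abs(P - P_exact)))          # ~3.0 and a tiny error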

  14. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  15. Linear scaling of density functional algorithms

    International Nuclear Information System (INIS)

    Stechel, E.B.; Feibelman, P.J.; Williams, A.R.

    1993-01-01

    An efficient density functional algorithm (DFA) that scales linearly with system size will revolutionize electronic structure calculations. Density functional calculations are reliable and accurate in determining many condensed matter and molecular ground-state properties. However, because current DFAs, including methods related to that of Car and Parrinello, scale with the cube of the system size, density functional studies are not routinely applied to large systems. Linear scaling is achieved by constructing functions that are both localized and fully occupied, thereby eliminating the need to calculate global eigenfunctions. It is, however, widely believed that exponential localization requires the existence of an energy gap between the occupied and unoccupied states. Despite this, the authors demonstrate that linear scaling can still be achieved for metals. Using a linear scaling algorithm, they have explicitly constructed localized, almost fully occupied orbitals for the quintessential metallic system, jellium. The algorithm is readily generalizable to any system geometry and Hamiltonian. They will discuss the conceptual issues involved, convergence properties, and scaling for their new algorithm.

  16. Common Nearly Best Linear Estimates of Location and Scale ...

    African Journals Online (AJOL)

    Common nearly best linear estimates of location and scale parameters of normal and logistic distributions, which are based on complete samples, are considered. Here, the population from which the samples are drawn is either normal or logistic population or a fusion of both distributions and the estimates are computed ...

  17. Numerical computation of linear instability of detonations

    Science.gov (United States)

    Kabanov, Dmitry; Kasimov, Aslan

    2017-11-01

    We propose a method to study linear stability of detonations by direct numerical computation. The linearized governing equations together with the shock-evolution equation are solved in the shock-attached frame using a high-resolution numerical algorithm. The computed results are processed by the Dynamic Mode Decomposition technique to generate dispersion relations. The method is applied to the reactive Euler equations with simple-depletion chemistry as well as more complex multistep chemistry. The results are compared with those known from normal-mode analysis. We acknowledge financial support from King Abdullah University of Science and Technology.

  18. Experimental quantum computing to solve systems of linear equations.

    Science.gov (United States)

    Cai, X-D; Weedbrook, C; Su, Z-E; Chen, M-C; Gu, Mile; Zhu, M-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei

    2013-06-07

    Solving linear systems of equations is ubiquitous in all areas of science and engineering. With rapidly growing data sets, such a task can be intractable for classical computers, as the best known classical algorithms require a time proportional to the number of variables N. A recently proposed quantum algorithm shows that quantum computers could solve linear systems in a time scale of order log(N), giving an exponential speedup over classical computers. Here we realize the simplest instance of this algorithm, solving 2×2 linear equations for various input vectors on a quantum computer. We use four quantum bits and four controlled logic gates to implement every subroutine required, demonstrating the working principle of this algorithm.

  19. Computational Complexity of Bosons in Linear Networks

    Science.gov (United States)

    2017-03-01

    ...is between one and two orders-of-magnitude more efficient than current heralded multiphoton sources based on spontaneous parametric downconversion...expected to perform tasks intractable for a classical computer, yet requiring minimal non-classical resources as compared to full-scale quantum computers...implementations to date employed sources based on inefficient processes—spontaneous parametric downconversion—that only simulate heralded single...

  20. Frequency scaling of linear super-colliders

    International Nuclear Information System (INIS)

    Mondelli, A.; Chernin, D.; Drobot, A.; Reiser, M.; Granatstein, V.

    1986-06-01

    The development of electron-positron linear colliders in the TeV energy range will be facilitated by the development of high-power rf sources at frequencies above 2856 MHz. Present S-band technology, represented by the SLC, would require a length in excess of 50 km per linac to accelerate particles to energies above 1 TeV. By raising the rf driving frequency, the rf breakdown limit is increased, thereby allowing the length of the accelerators to be reduced. Currently available rf power sources set the realizable gradient limit in an rf linac at frequencies above S-band. This paper presents a model for the frequency scaling of linear colliders, with luminosity scaled in proportion to the square of the center-of-mass energy. Since wakefield effects are the dominant deleterious effect, a separate single-bunch simulation model is described which calculates the evolution of the beam bunch with specified wakefields, including the effects of using programmed phase positioning and Landau damping. The results presented here have been obtained for a SLAC structure, scaled in proportion to wavelength

  1. Linear circuit theory matrices in computer applications

    CERN Document Server

    Vlach, Jiri

    2014-01-01

    Basic Concepts; Nodal and Mesh Analysis; Matrix Methods; Dependent Sources; Network Transformations; Capacitors and Inductors; Networks with Capacitors and Inductors; Frequency Domain; Laplace Transformation; Time Domain; Network Functions; Active Networks; Two-Ports; Transformers; Modeling and Numerical Methods; Sensitivities; Modified Nodal Formulation; Fourier Series and Transformation; Appendix: Scaling of Linear Networks.

  2. Topics in linear optical quantum computation

    Science.gov (United States)

    Glancy, Scott Charles

    This thesis covers several topics in optical quantum computation. A quantum computer is a computational device which is able to manipulate information by performing unitary operations on some physical system whose state can be described as a vector (or mixture of vectors) in a Hilbert space. The basic unit of information, called the qubit, is considered to be a system with two orthogonal states, which are assigned logical values of 0 and 1. Photons make excellent candidates to serve as qubits. They have little interaction with the environment. Many operations can be performed using very simple linear optical devices such as beam splitters and phase shifters. Photons can easily be processed through circuit-like networks. Operations can be performed in very short times. Photons are ideally suited for the long-distance communication of quantum information. The great difficulty in constructing an optical quantum computer is that photons naturally interact weakly with one another. This thesis first gives a brief review of two early approaches to optical quantum computation. It will describe how any discrete unitary operation can be performed using a single photon and a network of beam splitters, and how the Kerr effect can be used to construct a two-photon logic gate. Second, this work provides a thorough introduction to the linear optical quantum computer developed by Knill, Laflamme, and Milburn. It then presents this author's results on the reliability of this scheme when implemented using imperfect photon detectors. This author finds that quantum computers of this sort cannot be built using current technology. Third, this dissertation describes a method for constructing a linear optical quantum computer using nearly orthogonal coherent states of light as the qubits. It shows how a universal set of logic operations can be performed, including calculations of the fidelity with which these operations may be accomplished. It discusses methods for reducing and...

  3. Graph-based linear scaling electronic structure theory

    Energy Technology Data Exchange (ETDEWEB)

    Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.; Swart, Pieter J.; Germann, Timothy C.; Bock, Nicolas [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Mniszewski, Susan M.; Mohd-Yusof, Jamal; Wall, Michael E.; Djidjev, Hristo [Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Rubensson, Emanuel H. [Division of Scientific Computing, Department of Information Technology, Uppsala University, Box 337, SE-751 05 Uppsala (Sweden)

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  4. Extreme Scale Computing Studies

    Science.gov (United States)

    2010-12-01

    ...systems that would fall under the Exascale rubric. In this chapter, we first discuss the attributes by which achievement of the label “Exascale” may be...Carrington, and E. Strohmaier. A Genetic Algorithms Approach to Modeling the Performance of Memory-bound Computations. Reno, NV, November 2007. ACM/IEEE... genetic stochasticity (random mating, mutation, etc.). Outcomes are thus stochastic as well, and ecologists wish to ask questions like, “What is the...

  5. Novel algorithm of large-scale simultaneous linear equations

    International Nuclear Information System (INIS)

    Fujiwara, T; Hoshi, T; Yamamoto, S; Sogabe, T; Zhang, S-L

    2010-01-01

    We review our recently developed methods of solving large-scale simultaneous linear equations and applications to electronic structure calculations both in one-electron theory and many-electron theory. This is the shifted COCG (conjugate orthogonal conjugate gradient) method based on the Krylov subspace, and the most important issue for applications is the shift equation and the seed switching method, which greatly reduce the computational cost. The applications to nano-scale Si crystals and the double orbital extended Hubbard model are presented.

  6. Parallel computation for solving the tridiagonal linear system of equations

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Fujii, Minoru; Fujimura, Toichiro; Nakamura, Yasuhiro; Nanba, Katsumi.

    1981-09-01

    Recently, applications of parallel computation to scientific calculations have increased with the need for high-speed calculation in large-scale programs. At the JAERI computing center, an array processor FACOM 230-75 APU has been installed to study the applicability of parallel computation to nuclear codes. We made numerical experiments using the APU on methods for solving tridiagonal linear equations, an important problem in scientific calculations. Referring to recent papers on parallel methods, we investigate eight of them: the Gauss elimination method, parallel Gauss method, accelerated parallel Gauss method, Jacobi method, recursive doubling method, cyclic reduction method, Chebyshev iteration method, and conjugate gradient method. The computing time and accuracy were compared among the methods on the basis of the numerical experiments. As a result, the cyclic reduction method is found to be the best in both computing time and accuracy, with the Gauss elimination method second. (author)
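
    Of the methods named above, the sequential baseline (Gauss elimination specialized to a tridiagonal matrix, often called the Thomas algorithm) fits in a few lines; the sketch below is an illustrative re-implementation checked against a dense solve, not the JAERI code, and parallel schemes such as cyclic reduction are not shown.

        import numpy as np

        def thomas(a, b, c, d):
            """Solve T x = d for tridiagonal T with sub-diagonal a[1:], diagonal b, super-diagonal c[:-1]."""
            n = len(b)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                       # forward elimination
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):              # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        n = 10
        a = np.full(n, -1.0); b = np.full(n, 2.0); c = np.full(n, -1.0)   # 1-D Laplacian stencil
        d = np.ones(n)
        T = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
        print(np.max(np.abs(thomas(a, b, c, d) - np.linalg.solve(T, d))))  # ~1e-15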

  7. Polarized atomic orbitals for linear scaling methods

    Science.gov (United States)

    Berghold, Gerd; Parrinello, Michele; Hutter, Jürg

    2002-02-01

    We present a modified version of the polarized atomic orbital (PAO) method [M. S. Lee and M. Head-Gordon, J. Chem. Phys. 107, 9085 (1997)] to construct minimal basis sets optimized in the molecular environment. The minimal basis set derives its flexibility from the fact that it is formed as a linear combination of a larger set of atomic orbitals. This approach significantly reduces the number of independent variables to be determined during a calculation, while retaining most of the essential chemistry resulting from the admixture of higher angular momentum functions. Furthermore, we combine the PAO method with linear scaling algorithms. We use the Chebyshev polynomial expansion method, the conjugate gradient density matrix search, and the canonical purification of the density matrix. The combined scheme overcomes one of the major drawbacks of standard approaches for large nonorthogonal basis sets, namely numerical instabilities resulting from ill-conditioned overlap matrices. We find that the condition number of the PAO overlap matrix is independent from the condition number of the underlying extended basis set, and consequently no numerical instabilities are encountered. Various applications are shown to confirm this conclusion and to compare the performance of the PAO method with extended basis-set calculations.

  8. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research]|[Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
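
    A heavily simplified sketch of the Monte Carlo idea (sample-average approximation of a tiny two-stage problem with recourse, solved as one deterministic-equivalent LP) is shown below; the decomposition and importance-sampling machinery of the work is not reproduced, and the cost data and demand distribution are invented.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(1)
        N = 200                                    # sampled demand scenarios
        d = rng.uniform(50.0, 150.0, size=N)       # uncertain second-stage demand
        c, q = 1.0, 3.0                            # first-stage unit cost, shortfall penalty

        # Variables z = [x, y_1, ..., y_N]; minimize c*x + (1/N) * sum_s q*y_s
        obj = np.concatenate(([c], np.full(N, q / N)))
        # Recourse constraints x + y_s >= d_s, written as -x - y_s <= -d_s
        A_ub = np.hstack([-np.ones((N, 1)), -np.eye(N)])
        b_ub = -d

        res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (N + 1), method="highs")
        print("first-stage decision x =", res.x[0], " estimated expected cost =", res.fun)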

  9. Computer codes for designing proton linear accelerators

    International Nuclear Information System (INIS)

    Kato, Takao

    1992-01-01

    Computer codes for designing proton linear accelerators are discussed from the viewpoint of not only designing but also construction and operation of the linac. The codes are divided into three categories according to their purposes: 1) design code, 2) generation and simulation code, and 3) electric and magnetic fields calculation code. The role of each category is discussed on the basis of experience at KEK (the design of the 40-MeV proton linac and its construction and operation, and the design of the 1-GeV proton linac). We introduce our recent work relevant to three-dimensional calculation and supercomputer calculation: 1) tuning of MAFIA (three-dimensional electric and magnetic fields calculation code) for supercomputer, 2) examples of three-dimensional calculation of accelerating structures by MAFIA, 3) development of a beam transport code including space charge effects. (author)

  10. Stochastic linear programming models, theory, and computation

    CERN Document Server

    Kall, Peter

    2011-01-01

    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...

  11. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters on the scale of 1000s of processors, used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  12. Parameter Scaling in Non-Linear Microwave Tomography

    DEFF Research Database (Denmark)

    Jensen, Peter Damsgaard; Rubæk, Tonny; Talcoth, Oskar

    2012-01-01

    Non-linear microwave tomographic imaging of the breast is a challenging computational problem. The breast is heterogeneous and contains several high-contrast and lossy regions, resulting in large differences in the measured signal levels. This implies that special care must be taken when the imaging problem is formulated. Under such conditions, microwave imaging systems will most often be considerably more sensitive to changes in the electromagnetic properties in certain regions of the breast. The result is that the parameters might not be reconstructed correctly in the less sensitive regions... introduced as a measure of the sensitivity. The scaling of the parameters is shown to improve performance of the microwave imaging system when applied to reconstruction of images from 2-D simulated data and measurement data.

  13. Fuzzy multiple linear regression: A computational approach

    Science.gov (United States)

    Juang, C. H.; Huang, X. H.; Fleming, J. W.

    1992-01-01

    This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, and this is followed by examples to illustrate the new procedure.

  14. Supervised scale-regularized linear convolutionary filters

    DEFF Research Database (Denmark)

    Loog, Marco; Lauze, Francois Bernard

    2017-01-01

    ...also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we solve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic...

  15. Computing with linear equations and matrices

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1983-01-01

    Systems of linear equations and matrices arise in many disciplines. The equations may accurately represent conditions satisfied by a system or, more likely, provide an approximation to a more complex system of non-linear or differential equations. The system may involve a few or many thousand unknowns and each individual equation may involve few or many of them. Over the past 50 years a vast literature on methods for solving systems of linear equations and the associated problems of finding the inverse or eigenvalues of a matrix has been produced. These lectures cover those methods which have been found to be most useful for dealing with such types of problem. References are given where appropriate and attention is drawn to the possibility of improved methods for use on vector and parallel processors. (orig.)

  16. Asynchronous Multiparty Computation with Linear Communication ...

    Indian Academy of Sciences (India)

    ARPITA PATRA

    2013-05-22

    May 22, 2013 ... MPC offers more than Traditional Crypto! > MPC goes BEYOND traditional Crypto. > Models the distributed computing applications that simultaneously demand usability and privacy of sensitive data ...

  17. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control Theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have been available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed to use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  18. Aether: leveraging linear programming for optimal cloud computing in genomics.

    Science.gov (United States)

    Luber, Jacob M; Tierney, Braden T; Cofer, Evan M; Patel, Chirag J; Kostic, Aleksandar D

    2018-05-01

    Across biology, we are seeing rapid developments in scale of data production without a corresponding increase in data analysis capabilities. Here, we present Aether (http://aether.kosticlab.org), an intuitive, easy-to-use, cost-effective and scalable framework that uses linear programming to optimally bid on and deploy combinations of underutilized cloud computing resources. Our approach simultaneously minimizes the cost of data analysis and provides an easy transition from users' existing HPC pipelines. Data utilized are available at https://pubs.broadinstitute.org/diabimmune and with EBI SRA accession ERP005989. Source code is available at (https://github.com/kosticlab/aether). Examples, documentation and a tutorial are available at http://aether.kosticlab.org. chirag_patel@hms.harvard.edu or aleksandar.kostic@joslin.harvard.edu. Supplementary data are available at Bioinformatics online.

  19. Large-scale linear programs in planning and prediction.

    Science.gov (United States)

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  20. Small-scale quantum information processing with linear optics

    International Nuclear Information System (INIS)

    Bergou, J.A.; Steinberg, A.M.; Mohseni, M.

    2005-01-01

    Full text: Photons are the ideal systems for carrying quantum information. Although performing large-scale quantum computation on optical systems is extremely demanding, non-scalable linear-optics quantum information processing may prove essential as part of quantum communication networks. In addition, the efficient (scalable) linear-optical quantum computation proposal relies on the same optical elements. Here, by constructing multirail optical networks, we experimentally study two central problems in quantum information science, namely optimal discrimination between nonorthogonal quantum states, and controlling decoherence in quantum systems. Quantum mechanics forbids deterministic discrimination between nonorthogonal states. This is one of the central features of quantum cryptography, which leads to secure communications. Quantum state discrimination is an important primitive in quantum information processing, since it determines the limitations of a potential eavesdropper, and it has applications in quantum cloning and entanglement concentration. In this work, we experimentally implement generalized measurements in an optical system and demonstrate the first optimal unambiguous discrimination between three non-orthogonal states with a success rate of 55 %, to be compared with the 25 % maximum achievable using projective measurements. Furthermore, we present the first realization of unambiguous discrimination between a pure state and a nonorthogonal mixed state. In a separate experiment, we demonstrate how decoherence-free subspaces (DFSs) may be incorporated into a prototype optical quantum algorithm. Specifically, we present an optical realization of the two-qubit Deutsch-Jozsa algorithm in the presence of random noise. By introducing localized turbulent airflow we produce collective optical dephasing, leading to large error rates, and demonstrate that, using DFS encoding, the error rate in the presence of decoherence can be reduced from 35 % to essentially its pre...

  1. The RANDOM computer program: A linear congruential random number generator

    Science.gov (United States)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) The RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) The RANCYCLE and the ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
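
    The recurrence such a program tests is short enough to show directly. The sketch below is a plain-Python LCG, not the FORTRAN code; the parameters are the well-known Park-Miller "minimal standard" choice, used here only as an example and not necessarily those selected in the document.

        def lcg(seed, a=16807, c=0, m=2**31 - 1):
            """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m."""
            x = seed
            while True:
                x = (a * x + c) % m
                yield x / m                        # uniform variate in (0, 1)

        gen = lcg(seed=12345)
        print([round(next(gen), 6) for _ in range(5)])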

  2. On computation of Groebner bases for linear difference systems

    Energy Technology Data Exchange (ETDEWEB)

    Gerdt, Vladimir P. [Laboratory of Information Technologies, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation)]. E-mail: gerdt@jinr.ru

    2006-04-01

    In this paper, we present an algorithm for computing Groebner bases of linear ideals in a difference polynomial ring over a ground difference field. The input difference polynomials generating the ideal are also assumed to be linear. The algorithm is an adaptation to difference ideals of our polynomial algorithm based on Janet-like reductions.

  3. On computation of Groebner bases for linear difference systems

    International Nuclear Information System (INIS)

    Gerdt, Vladimir P.

    2006-01-01

    In this paper, we present an algorithm for computing Groebner bases of linear ideals in a difference polynomial ring over a ground difference field. The input difference polynomials generating the ideal are also assumed to be linear. The algorithm is an adaptation to difference ideals of our polynomial algorithm based on Janet-like reductions

  4. Mathematical models of non-linear phenomena, processes and systems: from molecular scale to planetary atmosphere

    CERN Document Server

    2013-01-01

    This book consists of twenty-seven chapters, which can be divided into three large categories: articles with a focus on the mathematical treatment of non-linear problems, including the methodologies, algorithms and properties of analytical and numerical solutions to particular non-linear problems; theoretical and computational studies dedicated to the physics and chemistry of non-linear micro- and nano-scale systems, including molecular clusters, nano-particles and nano-composites; and papers focused on non-linear processes in medico-biological systems, including mathematical models of ferments, amino acids, blood fluids and polynucleic chains.

  5. Discrete linear canonical transform computation by adaptive method.

    Science.gov (United States)

    Zhang, Feng; Tao, Ran; Wang, Yue

    2013-07-29

    The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process.

  6. A convex optimization approach for solving large scale linear systems

    Directory of Open Access Journals (Sweden)

    Debora Cores

    2017-01-01

    The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function for solving large-scale linear systems of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations regardless of the characteristics of the coefficient matrix. For finding the global minimizers of this new convex function, any low-cost iterative optimization technique could be applied. In particular, we propose to use the low-cost, globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
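
    The sketch below shows only the general idea (solve A x = b by minimizing a convex merit function with a low-cost gradient method); the plain least-squares function and SciPy's nonlinear CG routine stand in for the paper's non-quadratic function and SPG method, and the test matrix is random.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        A = rng.standard_normal((120, 80))         # rectangular system, full column rank
        b = A @ rng.standard_normal(80)            # consistent right-hand side

        f    = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2     # convex merit function
        grad = lambda x: A.T @ (A @ x - b)                        # its gradient

        res = minimize(f, np.zeros(80), jac=grad, method="CG", options={"maxiter": 5000})
        print("residual norm:", np.linalg.norm(A @ res.x - b))    # small for a consistent system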

  7. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  8. Computer-Administered Interviews and Rating Scales

    Science.gov (United States)

    Garb, Howard N.

    2007-01-01

    To evaluate the value of computer-administered interviews and rating scales, the following topics are reviewed in the present article: (a) strengths and weaknesses of structured and unstructured assessment instruments, (b) advantages and disadvantages of computer administration, and (c) the validity and utility of computer-administered interviews…

  9. Non-Linear Interactive Stories in Computer Games

    DEFF Research Database (Denmark)

    Bangsø, Olav; Jensen, Ole Guttorm; Kocka, Tomas

    2003-01-01

    The paper introduces non-linear interactive stories (NOLIST) as a means to generate varied and interesting stories for computer games automatically. We give a compact representation of a NOLIST based on the specification of atomic stories, and show how to build an object-oriented Bayesian network...

  10. Utilizing encoding in scalable linear optics quantum computing

    International Nuclear Information System (INIS)

    Hayes, A J F; Gilchrist, A; Myers, C R; Ralph, T C

    2004-01-01

    We present a scheme which offers a significant reduction in the resources required to implement linear optics quantum computing. The scheme is a variation of the proposal of Knill, Laflamme and Milburn, and makes use of an incremental approach to the error encoding to boost probability of success

  11. Turbulence Spreading into Linearly Stable Zone and Transport Scaling

    International Nuclear Information System (INIS)

    Hahm, T.S.; Diamond, P.H.; Lin, Z.; Itoh, K.; Itoh, S.-I.

    2003-01-01

    We study the simplest problem of turbulence spreading corresponding to the spatio-temporal propagation of a patch of turbulence from a region where it is locally excited to a region of weaker excitation, or even local damping. A single model equation for the local turbulence intensity I(x, t) includes the effects of local linear growth and damping, spatially local nonlinear coupling to dissipation, and spatial scattering of turbulence energy induced by nonlinear coupling. In the absence of dissipation, the front propagation into the linearly stable zone occurs with the property of rapid progression at small t, followed by slower subdiffusive progression at late times. The turbulence radial spreading into the linearly stable zone reduces the turbulent intensity in the linearly unstable zone, and introduces an additional dependence on rho* (= rho_i/a) into the turbulent intensity and the transport scaling. These are in broad, semi-quantitative agreement with a number of global gyrokinetic simulation results with and without zonal flows. The front propagation stops when the radial flux of fluctuation energy from the linearly unstable region is balanced by local dissipation in the linearly stable region.

  12. Polarization properties of linearly polarized parabolic scaling Bessel beams

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Mengwen; Zhao, Daomu, E-mail: zhaodaomu@yahoo.com

    2016-10-07

    The intensity profiles for the dominant polarization, cross polarization, and longitudinal components of modified parabolic scaling Bessel beams with linear polarization are investigated theoretically. The transverse intensity distributions of the three electric components are intimately connected to the topological charge. In particular, the intensity patterns of the cross polarization and longitudinal components near the apodization plane reflect the sign of the topological charge. - Highlights: • We investigated the polarization properties of modified parabolic scaling Bessel beams with linear polarization. • We studied the evolution of transverse intensity profiles for the three components of these beams. • The intensity patterns of the cross polarization and longitudinal components can reflect the sign of the topological charge.

  13. Linear-scaling evaluation of the local energy in quantum Monte Carlo

    International Nuclear Information System (INIS)

    Austin, Brian; Aspuru-Guzik, Alan; Salomon-Ferrer, Romelia; Lester, William A. Jr.

    2006-01-01

    For atomic and molecular quantum Monte Carlo calculations, most of the computational effort is spent in the evaluation of the local energy. We describe a scheme for reducing the computational cost of the evaluation of the Slater determinants and correlation function for the correlated molecular orbital (CMO) ansatz. A sparse representation of the Slater determinants makes possible efficient evaluation of molecular orbitals. A modification to the scaled distance function facilitates a linear scaling implementation of the Schmidt-Moskowitz-Boys-Handy (SMBH) correlation function that preserves the efficient matrix multiplication structure of the SMBH function. For the evaluation of the local energy, these two methods lead to asymptotic linear scaling with respect to the molecule size

  14. Large-scale computing with Quantum Espresso

    International Nuclear Information System (INIS)

    Giannozzi, P.; Cavazzoni, C.

    2009-01-01

    This paper gives a short introduction to Quantum Espresso: a distribution of software for atomistic simulations in condensed-matter physics, chemical physics, materials science, and to its usage in large-scale parallel computing.

  15. Computational applications of DNA physical scales

    DEFF Research Database (Denmark)

    Baldi, Pierre; Chauvin, Yves; Brunak, Søren

    1998-01-01

    The authors study from a computational standpoint several different physical scales associated with structural features of DNA sequences, including dinucleotide scales such as base stacking energy and propeller twist, and trinucleotide scales such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example we construct a strand-invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models...

  16. Computational applications of DNA structural scales

    DEFF Research Database (Denmark)

    Baldi, P.; Chauvin, Y.; Brunak, Søren

    1998-01-01

    Studies several different physical scales associated with the structural features of DNA sequences from a computational standpoint, including dinucleotide scales, such as base stacking energy and propeller twist, and trinucleotide scales, such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example, we construct a strand-invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models...

  17. A large-scale computer facility for computational aerodynamics

    International Nuclear Information System (INIS)

    Bailey, F.R.; Balhaus, W.F.

    1985-01-01

    The combination of computer system technology and numerical modeling have advanced to the point that computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. To provide for further advances in modeling of aerodynamic flow fields, NASA has initiated at the Ames Research Center the Numerical Aerodynamic Simulation (NAS) Program. The objective of the Program is to develop a leading-edge, large-scale computer facility, and make it available to NASA, DoD, other Government agencies, industry and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. The Program will establish an initial operational capability in 1986 and systematically enhance that capability by incorporating evolving improvements in state-of-the-art computer system technologies as required to maintain a leadership role. This paper briefly reviews the present and future requirements for computational aerodynamics and discusses the Numerical Aerodynamic Simulation Program objectives, computational goals, and implementation plans

  18. Application of Nearly Linear Solvers to Electric Power System Computation

    Science.gov (United States)

    Grant, Lisa L.

    To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
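    The comparison described above can be mimicked with off-the-shelf tools. The sketch below builds a hypothetical symmetric diagonally dominant test system and contrasts a direct sparse LU solve with a preconditioned conjugate-gradient solve; the Jacobi preconditioner is only a stand-in for the chain/low-stretch-spanning-tree method discussed in the thesis:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu, cg, LinearOperator

# Hypothetical symmetric diagonally dominant test system (a shifted 1D grid
# Laplacian), standing in for the admittance-like matrices of power flow.
n = 5000
main = 2.01 * np.ones(n)
off = -np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")
b = np.ones(n)

# Baseline: direct solve via sparse LU factorization.
x_lu = splu(A).solve(b)

# Iterative solve: conjugate gradients with a simple Jacobi preconditioner,
# a stand-in for the low-stretch spanning-tree preconditioner of the thesis.
M = LinearOperator((n, n), matvec=lambda v: v / main)
x_cg, info = cg(A, b, M=M)

print("CG converged:", info == 0,
      "| max |x_cg - x_lu| =", np.max(np.abs(x_cg - x_lu)))
```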

  19. Linear Polarization Properties of Parsec-Scale AGN Jets

    Directory of Open Access Journals (Sweden)

    Alexander B. Pushkarev

    2017-12-01

    We used 15 GHz multi-epoch Very Long Baseline Array (VLBA) polarization sensitive observations of 484 sources within a time interval 1996–2016 from the MOJAVE program, and also from the NRAO data archive. We have analyzed the linear polarization characteristics of the compact core features and regions downstream, and their changes along and across the parsec-scale active galactic nuclei (AGN) jets. We detected a significant increase of fractional polarization with distance from the radio core along the jet as well as towards the jet edges. Compared to quasars, BL Lacs have a higher degree of polarization and exhibit more stable electric vector position angles (EVPAs) in their core features and a better alignment of the EVPAs with the local jet direction. The latter is accompanied by a higher degree of linear polarization, suggesting that compact bright jet features might be strong transverse shocks, which enhance magnetic field regularity by compression.

  20. Design techniques for large scale linear measurement systems

    International Nuclear Information System (INIS)

    Candy, J.V.

    1979-03-01

    Techniques to design measurement schemes for systems modeled by large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species, extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented

  1. Reconnection Scaling Experiment (RSX): Magnetic Reconnection in Linear Geometry

    Science.gov (United States)

    Intrator, T.; Sovinec, C.; Begay, D.; Wurden, G.; Furno, I.; Werley, C.; Fisher, M.; Vermare, L.; Fienup, W.

    2001-10-01

    The linear Reconnection Scaling Experiment (RSX) at LANL is a new experiment that can create MHD-relevant plasmas to look at the physics of magnetic reconnection. This experiment can scale many relevant parameters because the guns that generate the plasma and current channels do not depend on equilibrium or force balance for startup. We describe the experiment and initial electrostatic and magnetic probe data. Two parallel current channels sweep down a long plasma column, and probe data accumulated over many shots give 3D movies of magnetic reconnection. Our initial data aim to define an operating regime free from kink instabilities that might otherwise confuse the data and degrade shot repeatability. We compare this with two-fluid MHD NIMROD simulations of the single-current-channel kink stability boundary for a variety of experimental conditions.

  2. Adaptive phase measurements in linear optical quantum computation

    International Nuclear Information System (INIS)

    Ralph, T C; Lund, A P; Wiseman, H M

    2005-01-01

    Photon counting induces an effective non-linear optical phase shift in certain states derived by linear optics from single photons. Although this non-linearity is non-deterministic, it is sufficient in principle to allow scalable linear optics quantum computation (LOQC). The most obvious way to encode a qubit optically is as a superposition of the vacuum and a single photon in one mode - so-called 'single-rail' logic. Until now this approach was thought to be prohibitively expensive (in resources) compared to 'dual-rail' logic where a qubit is stored by a photon across two modes. Here we attack this problem with real-time feedback control, which can realize a quantum-limited phase measurement on a single mode, as has been recently demonstrated experimentally. We show that with this added measurement resource, the resource requirements for single-rail LOQC are not substantially different from those of dual-rail LOQC. In particular, with adaptive phase measurements an arbitrary qubit state α|0⟩ + β|1⟩ can be prepared deterministically

  3. Some computer simulations based on the linear relative risk model

    International Nuclear Information System (INIS)

    Gilbert, E.S.

    1991-10-01

    This report presents the results of computer simulations designed to evaluate and compare the performance of the likelihood ratio statistic and the score statistic for making inferences about the linear relative risk model. The work was motivated by data on workers exposed to low doses of radiation, and the report includes illustration of several procedures for obtaining confidence limits for the excess relative risk coefficient based on data from three studies of nuclear workers. The computer simulations indicate that with small sample sizes and highly skewed dose distributions, asymptotic approximations to the score statistic or to the likelihood ratio statistic may not be adequate. For testing the null hypothesis that the excess relative risk is equal to zero, the asymptotic approximation to the likelihood ratio statistic was adequate, but use of the asymptotic approximation to the score statistic rejected the null hypothesis too often. Frequently the likelihood was maximized at the lower constraint, and when this occurred, the asymptotic approximations for the likelihood ratio and score statistics did not perform well in obtaining upper confidence limits. The score statistic and likelihood ratio statistics were found to perform comparably in terms of power and width of the confidence limits. It is recommended that with modest sample sizes, confidence limits be obtained using computer simulations based on the score statistic. Although nuclear worker studies are emphasized in this report, its results are relevant for any study investigating linear dose-response functions with highly skewed exposure distributions. 22 refs., 14 tabs
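    As an illustration of the simulation-based calibration recommended above, the sketch below computes a score test for the linear excess relative risk model mu_i = E_i (1 + beta d_i) and calibrates it by Monte Carlo rather than by the asymptotic normal approximation. The baseline expected counts, doses, sample size, and true risk coefficient are hypothetical and are not taken from the report:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort: baseline expected counts E_i and (skewed) doses d_i.
E = rng.uniform(0.5, 5.0, size=50)
d = rng.gamma(2.0, 0.05, size=50)

def score_stat(y, E, d, beta0):
    """Standardized score statistic for H0: beta = beta0 in the linear
    excess-relative-risk model mu_i = E_i * (1 + beta * d_i)."""
    mu = E * (1.0 + beta0 * d)
    U = np.sum(E * d * (y / mu - 1.0))
    I = np.sum((E * d) ** 2 / mu)          # expected information
    return U / np.sqrt(I)

# "Observed" data, simulated here with a true excess relative risk of 0.5.
y_obs = rng.poisson(E * (1.0 + 0.5 * d))

# Calibrate the score test of H0: beta = 0 by Monte Carlo instead of relying
# on the asymptotic normal approximation.
z_obs = score_stat(y_obs, E, d, beta0=0.0)
z_null = np.array([score_stat(rng.poisson(E), E, d, 0.0) for _ in range(5000)])
print("score statistic = %.2f, Monte Carlo one-sided p = %.4f"
      % (z_obs, np.mean(z_null >= z_obs)))
```

    Inverting such simulated tests over a grid of beta0 values gives the kind of simulation-based confidence limits the report recommends.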

  4. Fault tolerance in parity-state linear optical quantum computing

    International Nuclear Information System (INIS)

    Hayes, A. J. F.; Ralph, T. C.; Haselgrove, H. L.; Gilchrist, Alexei

    2010-01-01

    We use a combination of analytical and numerical techniques to calculate the noise threshold and resource requirements for a linear optical quantum computing scheme based on parity-state encoding. Parity-state encoding is used at the lowest level of code concatenation in order to efficiently correct errors arising from the inherent nondeterminism of two-qubit linear-optical gates. When combined with teleported error-correction (using either a Steane or Golay code) at higher levels of concatenation, the parity-state scheme is found to achieve a saving of approximately three orders of magnitude in resources when compared to the cluster state scheme, at a cost of a somewhat reduced noise threshold.

  5. From linear optical quantum computing to Heisenberg-limited interferometry

    International Nuclear Information System (INIS)

    Lee, Hwang; Kok, Pieter; Williams, Colin P; Dowling, Jonathan P

    2004-01-01

    The working principles of linear optical quantum computing are based on photodetection, namely, projective measurements. The use of photodetection can provide efficient nonlinear interactions between photons at the single-photon level, which is technically problematic otherwise. We report an application of such a technique to prepare quantum correlations as an important resource for Heisenberg-limited optical interferometry, where the sensitivity of phase measurements can be improved beyond the usual shot-noise limit. Furthermore, using such nonlinearities, optical quantum non-demolition measurements can now be carried out easily at the single-photon level

  6. Linear optical quantum computing in a single spatial mode.

    Science.gov (United States)

    Humphreys, Peter C; Metcalf, Benjamin J; Spring, Justin B; Moore, Merritt; Jin, Xian-Min; Barbieri, Marco; Kolthammer, W Steven; Walmsley, Ian A

    2013-10-11

    We present a scheme for linear optical quantum computing using time-bin-encoded qubits in a single spatial mode. We show methods for single-qubit operations and heralded controlled-phase (cphase) gates, providing a sufficient set of operations for universal quantum computing with the Knill-Laflamme-Milburn [Nature (London) 409, 46 (2001)] scheme. Our protocol is suited to currently available photonic devices and ideally allows arbitrary numbers of qubits to be encoded in the same spatial mode, demonstrating the potential for time-frequency modes to dramatically increase the quantum information capacity of fixed spatial resources. As a test of our scheme, we demonstrate the first entirely single spatial mode implementation of a two-qubit quantum gate and show its operation with an average fidelity of 0.84±0.07.

  7. ONETEP: linear-scaling density-functional theory with plane-waves

    International Nuclear Information System (INIS)

    Haynes, P D; Mostofi, A A; Skylaris, C-K; Payne, M C

    2006-01-01

    This paper provides a general overview of the methodology implemented in onetep (Order-N Electronic Total Energy Package), a parallel density-functional theory code for large-scale first-principles quantum-mechanical calculations. The distinctive features of onetep are linear-scaling in both computational effort and resources, obtained by making well-controlled approximations which enable simulations to be performed with plane-wave accuracy. Titanium dioxide clusters of increasing size designed to mimic surfaces are studied to demonstrate the accuracy and scaling of onetep

  8. Offset linear scaling for H-mode confinement

    International Nuclear Information System (INIS)

    Miura, Yukitoshi; Tamai, Hiroshi; Suzuki, Norio; Mori, Masahiro; Matsuda, Toshiaki; Maeda, Hikosuke; Takizuka, Tomonori; Itoh, Sanae; Itoh, Kimitaka.

    1992-01-01

    An offset linear scaling for the H-mode confinement time is examined based on single parameter scans on the JFT-2M experiment. A regression study is done for various devices with an open divertor configuration, such as JET, DIII-D, and JFT-2M. The scaling law for the thermal energy is given in MKSA units as $W_{\mathrm{th}} = 0.0046\,R^{1.9} I_P^{1.1} B_T^{0.91} \sqrt{A} + 2.9\times 10^{-8}\, I_P^{1.0} R^{0.87} \sqrt{A}\, P$, where R is the major radius, I_P is the plasma current, B_T is the toroidal magnetic field, A is the average mass number of plasma and neutral beam particles, and P is the heating power. This fitting has a root mean square error (RMSE) similar to that of the power law scaling. The result is also compared with the H-mode in other configurations. The W_th of the closed divertor H-mode on ASDEX shows slightly better values than that of the open divertor H-mode. (author)
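    A small helper that evaluates the offset-linear fit as reconstructed above; since the formula is transcribed from the abstract, treat the coefficient layout as illustrative rather than authoritative, and note that the example parameter values are invented:

```python
import math

def w_th_offset_linear(R, I_p, B_t, A, P):
    """Offset-linear H-mode fit for the thermal stored energy, in MKSA units
    (R in m, I_p in A, B_t in T, P in W; returns W_th in J), following the
    scaling quoted in the abstract above."""
    offset = 0.0046 * R**1.9 * I_p**1.1 * B_t**0.91 * math.sqrt(A)
    incremental = 2.9e-8 * I_p**1.0 * R**0.87 * math.sqrt(A) * P
    return offset + incremental

# Illustrative (hypothetical) JFT-2M-like parameters.
print("W_th = %.1f kJ" % (w_th_offset_linear(R=1.3, I_p=2.5e5, B_t=1.3,
                                             A=2.0, P=1.0e6) / 1e3))
```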

  9. Linear-scaling quantum mechanical methods for excited states.

    Science.gov (United States)

    Yam, ChiYung; Zhang, Qing; Wang, Fan; Chen, GuanHua

    2012-05-21

    The poor scaling of many existing quantum mechanical methods with respect to the system size hinders their applications to large systems. In this tutorial review, we focus on latest research on linear-scaling or O(N) quantum mechanical methods for excited states. Based on the locality of quantum mechanical systems, O(N) quantum mechanical methods for excited states are comprised of two categories, the time-domain and frequency-domain methods. The former solves the dynamics of the electronic systems in real time while the latter involves direct evaluation of electronic response in the frequency-domain. The localized density matrix (LDM) method is the first and most mature linear-scaling quantum mechanical method for excited states. It has been implemented in time- and frequency-domains. The O(N) time-domain methods also include the approach that solves the time-dependent Kohn-Sham (TDKS) equation using the non-orthogonal localized molecular orbitals (NOLMOs). Besides the frequency-domain LDM method, other O(N) frequency-domain methods have been proposed and implemented at the first-principles level. Except one-dimensional or quasi-one-dimensional systems, the O(N) frequency-domain methods are often not applicable to resonant responses because of the convergence problem. For linear response, the most efficient O(N) first-principles method is found to be the LDM method with Chebyshev expansion for time integration. For off-resonant response (including nonlinear properties) at a specific frequency, the frequency-domain methods with iterative solvers are quite efficient and thus practical. For nonlinear response, both on-resonance and off-resonance, the time-domain methods can be used, however, as the time-domain first-principles methods are quite expensive, time-domain O(N) semi-empirical methods are often the practical choice. Compared to the O(N) frequency-domain methods, the O(N) time-domain methods for excited states are much more mature and numerically stable, and

  10. Using Linear Algebra to Introduce Computer Algebra, Numerical Analysis, Data Structures and Algorithms (and To Teach Linear Algebra, Too).

    Science.gov (United States)

    Gonzalez-Vega, Laureano

    1999-01-01

    Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)

  11. Linear Text vs. Non-Linear Hypertext in Handheld Computers: Effects on Declarative and Structural Knowledge, and Learner Motivation

    Science.gov (United States)

    Son, Chanhee; Park, Sanghoon; Kim, Minjeong

    2011-01-01

    This study compared linear text-based and non-linear hypertext-based instruction in a handheld computer regarding effects on two different levels of knowledge (declarative and structural knowledge) and learner motivation. Forty four participants were randomly assigned to one of three experimental conditions: linear text, hierarchical hypertext,…

  12. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  13. Recent development of linear scaling quantum theories in GAMESS

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Cheol Ho [Kyungpook National Univ., Daegu (Korea, Republic of)

    2003-06-01

    Linear scaling quantum theories are reviewed especially focusing on the method adopted in GAMESS. The three key translation equations of the fast multipole method (FMM) are deduced from the general polypolar expansions given earlier by Steinborn and Rudenberg. Simplifications are introduced for the rotation-based FMM that lead to a very compact FMM formalism. The OPS (optimum parameter searching) procedure, a stable and efficient way of obtaining the optimum set of FMM parameters, is established with complete control over the tolerable error ε. In addition, a new parallel FMM algorithm requiring virtually no inter-node communication is suggested, which is suitable for the parallel construction of Fock matrices in electronic structure calculations.

  14. Scaling laws for e+/e- linear colliders

    International Nuclear Information System (INIS)

    Delahaye, J.P.; Guignard, G.; Raubenheimer, T.; Wilson, I.

    1999-01-01

    Design studies of a future TeV e+e- Linear Collider (TLC) are presently being made by five major laboratories within the framework of a world-wide collaboration. A figure of merit is defined which enables an objective comparison of these different designs. This figure of merit is shown to depend only on a small number of parameters. General scaling laws for the main beam parameters and linac parameters are derived and prove to be very effective when used as guidelines to optimize the linear collider design. By adopting appropriate parameters for beam stability, the figure of merit becomes nearly independent of accelerating gradient and RF frequency of the accelerating structures. In spite of the strong dependence of the wake fields with frequency, the single-bunch emittance blow-up during acceleration along the linac is also shown to be independent of the RF frequency when using equivalent trajectory correction schemes. In this situation, beam acceleration using high-frequency structures becomes very advantageous because it enables high accelerating fields to be obtained, which reduces the overall length and consequently the total cost of the linac. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  15. Parallel computation of transverse wakes in linear colliders

    International Nuclear Information System (INIS)

    Zhan, Xiaowei; Ko, Kwok.

    1996-11-01

    SLAC has proposed the detuned structure (DS) as one possible design to control the emittance growth of long bunch trains due to transverse wakefields in the Next Linear Collider (NLC). The DS consists of 206 cells with tapering from cell to cell of the order of a few microns to provide Gaussian detuning of the dipole modes. The decoherence of these modes leads to two orders of magnitude reduction in wakefield experienced by the trailing bunch. To model such a large heterogeneous structure realistically is impractical with finite-difference codes using structured grids. The authors have calculated the wakefield in the DS on a parallel computer with a finite-element code using an unstructured grid. The parallel implementation issues are presented along with simulation results that include contributions from higher dipole bands and wall dissipation

  16. A Computer-Based Visual Analog Scale,

    Science.gov (United States)

    1992-06-01

    The scale is manipulated with keys on the computer keyboard or other input device. The initial position of the arrow is always in the center of the scale to prevent biasing the response.

  17. Computer simulations for the nano-scale

    International Nuclear Information System (INIS)

    Stich, I.

    2007-01-01

    A review of methods for computations for the nano-scale is presented. The paper should provide a convenient starting point into computations for the nano-scale as well as a more in-depth presentation for those already working in the field of atomic/molecular-scale modeling. The argument is divided into chapters covering the methods for description of the (i) electrons, (ii) ions, and (iii) techniques for efficient solving of the underlying equations. A fairly broad view is taken covering the Hartree-Fock approximation, density functional techniques and quantum Monte-Carlo techniques for electrons. The customary quantum chemistry methods, such as post Hartree-Fock techniques, are only briefly mentioned. Description of both classical and quantum ions is presented. The techniques cover Ehrenfest, Born-Oppenheimer, and Car-Parrinello dynamics. The strong and weak points of both principal and technical nature are analyzed. In the second part we introduce a number of applications to demonstrate the different approximations and techniques introduced in the first part. They cover a wide range of applications such as non-simple liquids, surfaces, molecule-surface interactions, applications in nanotechnology, etc. These more in-depth presentations, while certainly not exhaustive, should provide information on technical aspects of the simulations, typical parameters used, and ways of analysis of the huge amounts of data generated in these large-scale supercomputer simulations. (author)

  18. Extreme Scale Computing to Secure the Nation

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D L; McGraw, J R; Johnson, J R; Frincke, D

    2009-11-10

    absence of nuclear testing, a program to: (1) Support a focused, multifaceted program to increase the understanding of the enduring stockpile; (2) Predict, detect, and evaluate potential problems of the aging of the stockpile; (3) Refurbish and re-manufacture weapons and components, as required; and (4) Maintain the science and engineering institutions needed to support the nation's nuclear deterrent, now and in the future'. This program continues to fulfill its national security mission by adding significant new capabilities for producing scientific results through large-scale computational simulation coupled with careful experimentation, including sub-critical nuclear experiments permitted under the CTBT. To develop the computational science and the computational horsepower needed to support its mission, SBSS initiated the Accelerated Strategic Computing Initiative, later renamed the Advanced Simulation & Computing (ASC) program (sidebar: 'History of ASC Computing Program Computing Capability'). The modern 3D computational simulation capability of the ASC program supports the assessment and certification of the current nuclear stockpile through calibration with past underground test (UGT) data. While an impressive accomplishment, continued evolution of national security mission requirements will demand computing resources at a significantly greater scale than we have today. In particular, continued observance and potential Senate confirmation of the Comprehensive Test Ban Treaty (CTBT) together with the U.S. administration's promise for a significant reduction in the size of the stockpile and the inexorable aging and consequent refurbishment of the stockpile all demand increasing refinement of our computational simulation capabilities. Assessment of the present and future stockpile with increased confidence of the safety and reliability without reliance upon calibration with past or future test data is a long-term goal of the ASC program. This

  19. Design and analysis of tubular permanent magnet linear generator for small-scale wave energy converter

    Science.gov (United States)

    Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young

    2017-05-01

    This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter. The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on analytical solutions, parametric analysis is performed to meet the design specifications of a wave-energy converter (WEC). Then, 2-D FEA is employed to validate the analytical method. Finally, the experimental result confirms the predictions of the analytical and finite element analysis (FEA) methods under regular and irregular wave conditions.

  20. Minimization of Linear Functionals Defined on Solutions of Large-Scale Discrete Ill-Posed Problems

    DEFF Research Database (Denmark)

    Elden, Lars; Hansen, Per Christian; Rojas, Marielba

    2003-01-01

    The minimization of linear functionals defined on the solutions of discrete ill-posed problems arises, e.g., in the computation of confidence intervals for these solutions. In 1990, Elden proposed an algorithm for this minimization problem based on a parametric-programming reformulation involving the solution of a sequence of trust-region problems, and using matrix factorizations. In this paper, we describe MLFIP, a large-scale version of this algorithm where a limited-memory trust-region solver is used on the subproblems. We illustrate the use of our algorithm in connection with an inverse heat

  1. Forecasting the EMU inflation rate: Linear econometric vs. non-linear computational models using genetic neural fuzzy systems

    DEFF Research Database (Denmark)

    Kooths, Stefan; Mitze, Timo Friedel; Ringhut, Eric

    2004-01-01

    This paper compares the predictive power of linear econometric and non-linear computational models for forecasting the inflation rate in the European Monetary Union (EMU). Various models of both types are developed using different monetary and real activity indicators. They are compared according...

  2. Computation of Normal Conducting and Superconducting Linear Accelerator (LINAC) Availabilities

    International Nuclear Information System (INIS)

    Haire, M.J.

    2000-01-01

    A brief study was conducted to roughly estimate the availability of a superconducting (SC) linear accelerator (LINAC) as compared to a normal conducting (NC) one. Potentially, SC radio frequency cavities have substantial reserve capability, which allows them to compensate for failed cavities, thus increasing the availability of the overall LINAC. In the initial SC design, there is a klystron and associated equipment (e.g., power supply) for every cavity of an SC LINAC. On the other hand, a single klystron may service eight cavities in the NC LINAC. This study modeled that portion of the Spallation Neutron Source LINAC (between 200 and 1,000 MeV) that is initially proposed for conversion from NC to SC technology. Equipment common to both designs was not evaluated. Tabular fault-tree calculations and computer event-driven simulation (EDS) computations were performed. The estimated gain in availability when using the SC option ranges from 3 to 13% under certain equipment, conditions, and spatial separation requirements. The availability of an NC LINAC is estimated to be 83%. Tabular fault-tree calculations and computer EDS modeling gave the same 83% answer to within one-tenth of a percent for the NC case. Tabular fault-tree calculations of the availability of the SC LINAC (where a klystron and associated equipment drive a single cavity) give 97%, whereas EDS computer calculations give 96%, a disagreement of only 1%. This result may be somewhat fortuitous because of limitations of tabular fault-tree calculations. For example, tabular fault-tree calculations cannot handle spatial effects (separation distance between failures), equipment network configurations, and some failure combinations. Various equipment configurations were examined with EDS computer modeling. When there is a klystron and associated equipment for every cavity and adjacent-cavity failure can be tolerated, the SC availability was estimated to be 96%. SC availability decreased as
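    The gain from reserve capability can be reproduced with a toy calculation: a series chain of RF stations fails if any single station fails, while a redundant design tolerates up to k station failures. The station availability and station count below are hypothetical, chosen only to illustrate the kind of arithmetic behind the tabular fault-tree estimates, not the numbers used in the study:

```python
from math import comb

def series_availability(a, n):
    """All n independent stations must be up (no reserve capability)."""
    return a ** n

def k_tolerant_availability(a, n, k):
    """System is up if at most k of n independent stations are down."""
    return sum(comb(n, j) * (1 - a) ** j * a ** (n - j) for j in range(k + 1))

a_station = 0.999      # hypothetical availability of one klystron/cavity station
n_stations = 200       # hypothetical number of stations in the modeled section

print("series (no reserve):    %.3f" % series_availability(a_station, n_stations))
print("tolerates 5 failures:   %.3f" % k_tolerant_availability(a_station, n_stations, 5))
```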

  3. Grey scale, the 'crispening effect', and perceptual linearization

    NARCIS (Netherlands)

    Belaïd, N.; Martens, J.B.

    1998-01-01

    One way of optimizing a display is to maximize the number of distinguishable grey levels, which in turn is equivalent to perceptually linearizing the display. Perceptual linearization implies that equal steps in grey value evoke equal steps in brightness sensation. The key to perceptual

  4. Linearly scaling and almost Hamiltonian dielectric continuum molecular dynamics simulations through fast multipole expansions

    Energy Technology Data Exchange (ETDEWEB)

    Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul, E-mail: tavan@physik.uni-muenchen.de [Lehrstuhl für BioMolekulare Optik, Ludwig-Maximilians-Universität München, Oettingenstr. 67, 80538 München (Germany)

    2015-11-14

    Hamiltonian Dielectric Solvent (HADES) is a recent method [S. Bauer et al., J. Chem. Phys. 140, 104103 (2014)] which enables atomistic Hamiltonian molecular dynamics (MD) simulations of peptides and proteins in dielectric solvent continua. Such simulations become rapidly impractical for large proteins, because the computational effort of HADES scales quadratically with the number N of atoms. If one tries to achieve linear scaling by applying a fast multipole method (FMM) to the computation of the HADES electrostatics, the Hamiltonian character (conservation of total energy, linear, and angular momenta) may get lost. Here, we show that the Hamiltonian character of HADES can be almost completely preserved, if the structure-adapted fast multipole method (SAMM) as recently redesigned by Lorenzen et al. [J. Chem. Theory Comput. 10, 3244-3259 (2014)] is suitably extended and is chosen as the FMM module. By this extension, the HADES/SAMM forces become exact gradients of the HADES/SAMM energy. Their translational and rotational invariance then guarantees (within the limits of numerical accuracy) the exact conservation of the linear and angular momenta. Also, the total energy is essentially conserved—up to residual algorithmic noise, which is caused by the periodically repeated SAMM interaction list updates. These updates entail very small temporal discontinuities of the force description, because the employed SAMM approximations represent deliberately balanced compromises between accuracy and efficiency. The energy-gradient corrected version of SAMM can also be applied, of course, to MD simulations of all-atom solvent-solute systems enclosed by periodic boundary conditions. However, as we demonstrate in passing, this choice does not offer any serious advantages.

  5. X-BAND LINEAR COLLIDER R and D IN ACCELERATING STRUCTURES THROUGH ADVANCED COMPUTING

    International Nuclear Information System (INIS)

    Li, Z

    2004-01-01

    This paper describes a major computational effort that addresses key design issues in the high gradient accelerating structures for the proposed X-band linear collider, GLC/NLC. Supported by the US DOE's Accelerator Simulation Project, SLAC is developing a suite of parallel electromagnetic codes based on unstructured grids for modeling RF structures with higher accuracy and on a scale previously not possible. The new simulation tools have played an important role in the R and D of X-Band accelerating structures, in cell design, wakefield analysis and dark current studies

  6. Accurate and Efficient Parallel Implementation of an Effective Linear-Scaling Direct Random Phase Approximation Method.

    Science.gov (United States)

    Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian

    2018-05-08

    An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations (Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016, 144, 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017, 13, 1647-1655) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
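    The memory-saving precontraction mentioned above can be illustrated with plain numpy: a three-center integral tensor carrying two atomic-orbital indices is contracted with an occupied-space Cholesky-like factor of the density, shrinking one AO dimension to the much smaller number of occupied orbitals. The shapes and random tensors are placeholders, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n_aux, n_ao, n_occ = 300, 120, 15              # hypothetical basis dimensions

B = rng.standard_normal((n_aux, n_ao, n_ao))   # 3-center integrals (P, mu, nu)
L = rng.standard_normal((n_ao, n_occ))         # Cholesky-like factor of the density

# Precontraction: B_half[P, mu, i] = sum_nu B[P, mu, nu] * L[nu, i]
B_half = np.einsum("pmn,ni->pmi", B, L)

# The stored tensor shrinks by roughly n_ao / n_occ.
print("memory ratio (full / precontracted): %.1f" % (B.size / B_half.size))
```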

  7. Developing ontological model of computational linear algebra - preliminary considerations

    Science.gov (United States)

    Wasielewska, K.; Ganzha, M.; Paprzycki, M.; Lirkov, I.

    2013-10-01

    The aim of this paper is to propose a method for application of ontologically represented domain knowledge to support Grid users. The work is presented in the context provided by the Agents in Grid system, which aims at development of an agent-semantic infrastructure for efficient resource management in the Grid. Decision support within the system should provide functionality beyond the existing Grid middleware, specifically, help the user to choose optimal algorithm and/or resource to solve a problem from a given domain. The system assists the user in at least two situations. First, for users without in-depth knowledge about the domain, it should help them to select the method and the resource that (together) would best fit the problem to be solved (and match the available resources). Second, if the user explicitly indicates the method and the resource configuration, it should "verify" if her choice is consistent with the expert recommendations (encapsulated in the knowledge base). Furthermore, one of the goals is to simplify the use of the selected resource to execute the job; i.e., provide a user-friendly method of submitting jobs, without required technical knowledge about the Grid middleware. To achieve the mentioned goals, an adaptable method of expert knowledge representation for the decision support system has to be implemented. The selected approach is to utilize ontologies and semantic data processing, supported by multicriterial decision making. As a starting point, an area of computational linear algebra was selected to be modeled, however, the paper presents a general approach that shall be easily extendable to other domains.

  8. Linear-scaling implementation of the direct random-phase approximation

    International Nuclear Information System (INIS)

    Kállay, Mihály

    2015-01-01

    We report the linear-scaling implementation of the direct random-phase approximation (dRPA) for closed-shell molecular systems. As a bonus, linear-scaling algorithms are also presented for the second-order screened exchange extension of dRPA as well as for the second-order Møller–Plesset (MP2) method and its spin-scaled variants. Our approach is based on an incremental scheme which is an extension of our previous local correlation method [Rolik et al., J. Chem. Phys. 139, 094105 (2013)]. The approach extensively uses local natural orbitals to reduce the size of the molecular orbital basis of local correlation domains. In addition, we also demonstrate that using natural auxiliary functions [M. Kállay, J. Chem. Phys. 141, 244113 (2014)], the size of the auxiliary basis of the domains and thus that of the three-center Coulomb integral lists can be reduced by an order of magnitude, which results in significant savings in computation time. The new approach is validated by extensive test calculations for energies and energy differences. Our benchmark calculations also demonstrate that the new method enables dRPA calculations for molecules with more than 1000 atoms and 10 000 basis functions on a single processor

  9. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.

  10. A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.

    Science.gov (United States)

    Röhl, Annika; Bockmayr, Alexander

    2017-01-03

    Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. The minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks in case there exist several ones. This allows identifying common reactions that are present in all subnetworks, and reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose a MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but even all minimum subnetworks satisfying the required properties.
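    A schematic MILP of the kind described above (not necessarily the authors' exact formulation) introduces a binary indicator y_r for every reaction and minimizes the number of active reactions subject to steady state, flux bounds, and a user-specified biological requirement, here exemplified by a minimum biomass flux:

```latex
% Schematic minimum-subnetwork MILP; S is the stoichiometric matrix,
% y_r = 1 iff reaction r is kept active (illustrative formulation).
\begin{align*}
\min_{v,\;y}\quad & \textstyle\sum_{r} y_r \\
\text{s.t.}\quad  & S\,v = 0, \\
                  & \ell_r\, y_r \;\le\; v_r \;\le\; u_r\, y_r, \qquad \forall r, \\
                  & v_{\mathrm{biomass}} \;\ge\; \mu_{\min} \quad \text{(example biological requirement)}, \\
                  & y_r \in \{0,1\}, \qquad \forall r.
\end{align*}
```

    One standard way to enumerate all minimum subnetworks, as the abstract says is possible, is to re-solve after adding integer cuts that exclude previously found reaction sets; whether the authors use exactly this device is not stated in the abstract.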

  11. Scaling linear colliders to 5 TeV and above

    International Nuclear Information System (INIS)

    Wilson, P.B.

    1997-04-01

    Detailed designs exist at present for linear colliders in the 0.5-1.0 TeV center-of-mass energy range. For linear colliders driven by discrete rf sources (klystrons), the rf operating frequencies range from 1.3 GHz to 14 GHz, and the unloaded accelerating gradients from 21 MV/m to 100 MV/m. Except for the collider design at 1.3 GHz (TESLA) which uses superconducting accelerating structures, the accelerating gradients vary roughly linearly with the rf frequency. This correlation between gradient and frequency follows from the necessity to keep the ac "wall plug" power within reasonable bounds. For linear colliders at energies of 5 TeV and above, even higher accelerating gradients and rf operating frequencies will be required if both the total machine length and ac power are to be kept within reasonable limits. An rf system for a 5 TeV collider operating at 34 GHz is outlined, and it is shown that there are reasonable candidates for microwave tube sources which, together with rf pulse compression, are capable of supplying the required rf power. Some possibilities for a 15 TeV collider at 91 GHz are briefly discussed

  12. General rigid motion correction for computed tomography imaging based on locally linear embedding

    Science.gov (United States)

    Chen, Mianyi; He, Peng; Feng, Peng; Liu, Baodong; Yang, Qingsong; Wei, Biao; Wang, Ge

    2018-02-01

    Patient motion can damage the quality of computed tomography images, which are typically acquired in cone-beam geometry. Rigid patient motion is characterized by six geometric parameters and is more challenging to correct than in fan-beam geometry. We extend our previous rigid patient motion correction method based on the principle of locally linear embedding (LLE) from fan-beam to cone-beam geometry and accelerate the computational procedure with the graphics processing unit (GPU)-based all scale tomographic reconstruction Antwerp toolbox. The major merit of our method is that we need neither fiducial markers nor motion-tracking devices. The numerical and experimental studies show that the LLE-based patient motion correction is capable of calibrating the six parameters of the patient motion simultaneously, reducing patient motion artifacts significantly.
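    The LLE principle exploited here is that raw projection data acquired under smoothly varying motion lie on a low-dimensional manifold parameterized by the motion. The toy sketch below demonstrates only that principle with scikit-learn; the shifted Gaussian "projections" are purely synthetic and unrelated to the authors' cone-beam data or their actual estimation procedure:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)

# Synthetic "projections": a Gaussian profile translated by an unknown shift.
x = np.linspace(-5.0, 5.0, 200)
shifts = np.sort(rng.uniform(-2.0, 2.0, size=150))
X = np.exp(-0.5 * (x[None, :] - shifts[:, None]) ** 2)
X += 0.01 * rng.standard_normal(X.shape)

# A one-dimensional LLE embedding should recover the shifts up to sign/scale.
emb = LocallyLinearEmbedding(n_neighbors=12, n_components=1)
y = emb.fit_transform(X).ravel()

corr = abs(np.corrcoef(y, shifts)[0, 1])
print("correlation between LLE coordinate and true shift: %.3f" % corr)
```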

  13. An {Mathematical expression} iteration bound primal-dual cone affine scaling algorithm for linear programming

    NARCIS (Netherlands)

    J.F. Sturm; J. Zhang (Shuzhong)

    1996-01-01

    In this paper we introduce a primal-dual affine scaling method. The method uses a search-direction obtained by minimizing the duality gap over a linearly transformed conic section. This direction neither coincides with known primal-dual affine scaling directions (Jansen et al., 1993;

  14. A non-linear programming approach to the computer-aided design of regulators using a linear-quadratic formulation

    Science.gov (United States)

    Fleming, P.

    1985-01-01

    A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a non-linear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer-aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer.
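    A minimal sketch of this kind of design loop for a hypothetical second-order plant: a fixed-structure gain (feedback from one state only) is tuned by a general-purpose optimizer to minimize an integral quadratic cost evaluated through a Lyapunov equation. Integral quadratic constraints would be handled analogously with a constrained optimizer; all matrices below are invented for illustration and are not from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize

# Hypothetical plant and quadratic weights.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[0.1]])

def lq_cost(params):
    # Fixed controller structure: feedback from the first state only.
    K = np.array([[params[0], 0.0]])
    Acl = A - B @ K
    if np.max(np.linalg.eigvals(Acl).real) >= -1e-6:
        return 1e6                       # penalize unstable closed loops
    # Acl' P + P Acl + Q + K' R K = 0  =>  J = E[x0' P x0] = trace(P) for E[x0 x0'] = I
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    return np.trace(P)

res = minimize(lq_cost, x0=[1.0], method="Nelder-Mead")
print("optimal fixed-structure gain:", res.x, "cost:", res.fun)
```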

  15. Iterative algorithms for large sparse linear systems on parallel computers

    Science.gov (United States)

    Adams, L. M.

    1982-01-01

    Algorithms for assembling in parallel the sparse system of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering are developed. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed and results of this model for the algorithms are given.

  16. On the non-linear scale of cosmological perturbation theory

    CERN Document Server

    Blas, Diego; Konstandin, Thomas

    2013-01-01

    We discuss the convergence of cosmological perturbation theory. We prove that the polynomial enhancement of the non-linear corrections expected from the effects of soft modes is absent in equal-time correlators like the power or bispectrum. We first show this at leading order by resumming the most important corrections of soft modes to an arbitrary skeleton of hard fluctuations. We derive the same result in the eikonal approximation, which also allows us to show the absence of enhancement at any order. We complement the proof by an explicit calculation of the power spectrum at two-loop order, and by further numerical checks at higher orders. Using these insights, we argue that the modification of the power spectrum from soft modes corresponds at most to logarithmic corrections. Finally, we discuss the asymptotic behavior in the large and small momentum regimes and identify the expansion parameter pertinent to non-linear corrections.

  17. On the non-linear scale of cosmological perturbation theory

    International Nuclear Information System (INIS)

    Blas, Diego; Garny, Mathias; Konstandin, Thomas

    2013-04-01

    We discuss the convergence of cosmological perturbation theory. We prove that the polynomial enhancement of the non-linear corrections expected from the effects of soft modes is absent in equal-time correlators like the power or bispectrum. We first show this at leading order by resumming the most important corrections of soft modes to an arbitrary skeleton of hard fluctuations. We derive the same result in the eikonal approximation, which also allows us to show the absence of enhancement at any order. We complement the proof by an explicit calculation of the power spectrum at two-loop order, and by further numerical checks at higher orders. Using these insights, we argue that the modification of the power spectrum from soft modes corresponds at most to logarithmic corrections. Finally, we discuss the asymptotic behavior in the large and small momentum regimes and identify the expansion parameter pertinent to non-linear corrections.

  18. On the non-linear scale of cosmological perturbation theory

    Energy Technology Data Exchange (ETDEWEB)

    Blas, Diego [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Garny, Mathias; Konstandin, Thomas [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2013-04-15

    We discuss the convergence of cosmological perturbation theory. We prove that the polynomial enhancement of the non-linear corrections expected from the effects of soft modes is absent in equal-time correlators like the power or bispectrum. We first show this at leading order by resumming the most important corrections of soft modes to an arbitrary skeleton of hard fluctuations. We derive the same result in the eikonal approximation, which also allows us to show the absence of enhancement at any order. We complement the proof by an explicit calculation of the power spectrum at two-loop order, and by further numerical checks at higher orders. Using these insights, we argue that the modification of the power spectrum from soft modes corresponds at most to logarithmic corrections. Finally, we discuss the asymptotic behavior in the large and small momentum regimes and identify the expansion parameter pertinent to non-linear corrections.

  19. Scale-dependent three-dimensional charged black holes in linear and non-linear electrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Rincon, Angel; Koch, Benjamin [Pontificia Universidad Catolica de Chile, Instituto de Fisica, Santiago (Chile); Contreras, Ernesto; Bargueno, Pedro; Hernandez-Arboleda, Alejandro [Universidad de los Andes, Departamento de Fisica, Bogota, Distrito Capital (Colombia); Panotopoulos, Grigorios [Universidade de Lisboa, CENTRA, Instituto Superior Tecnico, Lisboa (Portugal)

    2017-07-15

    In the present work we study the scale dependence at the level of the effective action of charged black holes in Einstein-Maxwell as well as in Einstein-power-Maxwell theories in (2 + 1)-dimensional spacetimes without a cosmological constant. We allow for scale dependence of the gravitational and electromagnetic couplings, and we solve the corresponding generalized field equations imposing the null energy condition. Certain properties, such as horizon structure and thermodynamics, are discussed in detail. (orig.)

  20. Decentralised stabilising controllers for a class of large-scale linear ...

    Indian Academy of Sciences (India)

    subsystems resulting from a new aggregation-decomposition technique. The method has been illustrated through a numerical example of a large-scale linear system consisting of three subsystems each of the fourth order. Keywords. Decentralised stabilisation; large-scale linear systems; optimal feedback control; algebraic ...

  1. The Scaling LInear Macroweather model (SLIM): using scaling to forecast global scale macroweather from months to decades

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.; Hébert, R.

    2015-03-01

    At scales of ≈ 10 days (the lifetime of planetary scale structures), there is a drastic transition from high frequency weather to low frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models; so that in GCM macroweather forecasts, the weather is a high frequency noise. But neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic so that even a two parameter model can outperform GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the enormous stochastic memories that it implies. Since macroweather intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the Scaling LInear Macroweather model (SLIM). SLIM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as the Linear Inverse Modelling, LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes there is no low frequency memory, SLIM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful forecasts of natural macroweather variability is to first remove the low frequency anthropogenic component. A previous attempt to use fGn for forecasts had poor results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent agreement with hindcasts and these show some skill even at decadal scales. We also compare

  2. Non-linear variability in geophysics scaling and fractals

    CERN Document Server

    Lovejoy, S

    1991-01-01

    consequences of broken symmetry - here parity - is studied. In this model, turbulence is dominated by a hierarchy of helical (corkscrew) structures. The authors stress the unique features of such pseudo-scalar cascades as well as the extreme nature of the resulting (intermittent) fluctuations. Intermittent turbulent cascades were also the theme of a paper by us in which we show that universality classes exist for continuous cascades (in which an infinite number of cascade steps occur over a finite range of scales). This result is the multiplicative analogue of the familiar central limit theorem for the addition of random variables. Finally, an interesting paper by Pasmanter investigates the scaling associated with anomalous diffusion in a chaotic tidal basin model involving a small number of degrees of freedom. Although the statistical literature is replete with techniques for dealing with those random processes characterized by both exponentially decaying (non-scaling) autocorrelations and exponentially decaying...

  3. Computation of Optimal Monotonicity Preserving General Linear Methods

    KAUST Repository

    Ketcheson, David I.

    2009-07-01

    Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.

  4. A depth-first search algorithm to compute elementary flux modes by linear programming.

    Science.gov (United States)

    Quek, Lake-Ee; Nielsen, Lars K

    2014-07-30

    The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible, and it remains demanding even for moderately sized models. Here, a depth-first search algorithm uses linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints.
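    The LP feasibility test at the heart of such an enumeration can be sketched with scipy: given a toy, entirely hypothetical stoichiometric matrix, we ask whether a steady-state flux vector exists that carries flux through a chosen reaction while other reactions are forced to zero, which is the kind of sub-problem a depth-first search dispatches to the LP solver as it descends:

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix (rows: metabolites, cols: reactions); hypothetical.
S = np.array([
    [ 1, -1,  0, -1,  0],
    [ 0,  1, -1,  0,  0],
    [ 0,  0,  0,  1, -1],
])
n_rxn = S.shape[1]

def flux_mode_through(target, forced_zero=()):
    """LP feasibility test: is there a steady-state flux vector that carries
    flux through `target` while the reactions in `forced_zero` are disabled?"""
    bounds = [(0.0, 100.0)] * n_rxn
    bounds[target] = (1.0, 100.0)          # force the reaction of interest on
    for r in forced_zero:
        bounds[r] = (0.0, 0.0)             # reactions removed along this branch
    res = linprog(c=np.ones(n_rxn), A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=bounds, method="highs")
    return res.x if res.success else None

print(flux_mode_through(target=2))                   # feasible pathway
print(flux_mode_through(target=2, forced_zero=[1]))  # infeasible -> None
```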

  5. The linearly scaling 3D fragment method for large scale electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States); Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States); Lee, Byounghak, E-mail: ZZhao@lbl.go [Physics Department, Texas State University (United States)

    2009-07-01

    The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  6. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    Science.gov (United States)

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank, up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  7. Simulation of electron energy loss spectra of nanomaterials with linear-scaling density functional theory

    International Nuclear Information System (INIS)

    Tait, E W; Payne, M C; Ratcliff, L E; Haynes, P D; Hine, N D M

    2016-01-01

    Experimental techniques for electron energy loss spectroscopy (EELS) combine high energy resolution with high spatial resolution. They are therefore powerful tools for investigating the local electronic structure of complex systems such as nanostructures, interfaces and even individual defects. Interpretation of experimental electron energy loss spectra is often challenging and can require theoretical modelling of candidate structures, which themselves may be large and complex, beyond the capabilities of traditional cubic-scaling density functional theory. In this work, we present functionality to compute electron energy loss spectra within the onetep linear-scaling density functional theory code. We first demonstrate that simulated spectra agree with those computed using conventional plane wave pseudopotential methods to a high degree of precision. The ability of onetep to tackle large problems is then exploited to investigate convergence of spectra with respect to supercell size. Finally, we apply the novel functionality to a study of the electron energy loss spectra of defects on the (1 0 1) surface of an anatase slab and determine concentrations of defects which might be experimentally detectable. (paper)

  8. Expectation propagation for large scale Bayesian inference of non-linear molecular networks from perturbation data.

    Science.gov (United States)

    Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger

    2017-01-01

    Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity and thus make large scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.

  9. Computer Based Dose Control System on Linear Accelerator

    International Nuclear Information System (INIS)

    Taxwim; Djoko-SP; Widi-Setiawan; Agus-Budi Wiyatna

    2000-01-01

    Accelerator technology has been used for radiotherapy. Dokter Karyadi Hospital in Semarang uses an electron or X-ray linear accelerator (Linac) for cancer therapy. One of the control parameters of a linear accelerator is the dose rate, that is, the particle current or rate of photons delivered to the target. The dose rate in the Linac has been controlled by adjusting the repetition rate of the anode pulse train of the electron source, and presently this control is still purely proportional. To enhance the quality of the control result (minimal stationary error, speed and stability), a dose control system has been designed using the PID (Proportional Integral Differential) control algorithm and the derived transfer function of the controlled object. The PID control system is implemented by feeding it the dose error (the difference between the output dose and the dose-rate set point). The output of the control system is used to correct the repetition-rate set point of the electron source anode pulse train. (author)
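    As a generic illustration of the control law described (not the hospital system's actual code; the gains, sampling period and numeric values below are placeholder assumptions), a discrete PID update acting on the dose-rate error and producing a correction to the repetition-rate set point could look like this:

    ```python
    class PID:
        """Discrete PID controller acting on the dose-rate error."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measured):
            error = setpoint - measured            # dose-rate error
            self.integral += error * self.dt       # integral term
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            # The output is used to correct the pulse repetition-rate set point.
            return (self.kp * error
                    + self.ki * self.integral
                    + self.kd * derivative)

    # Placeholder gains; real values would come from the identified transfer function.
    controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
    correction = controller.update(setpoint=100.0, measured=92.0)
    print(correction)
    ```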

  10. Computer software for linear and nonlinear regression in organic NMR

    International Nuclear Information System (INIS)

    Canto, Eduardo Leite do; Rittner, Roberto

    1991-01-01

    Calculations involving two-variable linear regressions require specific procedures that are generally not familiar to chemists. To meet the need for fast and efficient handling of NMR data, a self-explanatory, PC-portable software package has been developed, which allows the user to produce and use diskette-recorded tables containing chemical shifts or any other substituent physical-chemical measurements and constants (σT, σoR, Es, ...)
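    As an illustration of the kind of two-parameter fit such a program automates, here is a hypothetical least-squares regression of chemical shifts against two substituent constants; all numbers are made up for the example and the variable names are not those of the package.

    ```python
    import numpy as np

    # Hypothetical substituent constants (x1, x2) and observed chemical shifts (ppm).
    x1 = np.array([0.00, 0.23, 0.45, 0.10, 0.66])
    x2 = np.array([0.00, -0.15, 0.12, -0.30, 0.20])
    shift = np.array([7.26, 7.41, 7.73, 7.18, 7.95])

    # Least-squares fit of the model: shift = a*x1 + b*x2 + c
    A = np.column_stack([x1, x2, np.ones_like(x1)])
    (a, b, c), residuals, rank, _ = np.linalg.lstsq(A, shift, rcond=None)
    print(f"a = {a:.3f}, b = {b:.3f}, intercept = {c:.3f}")
    ```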

  11. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  12. A general algorithm for computing distance transforms in linear time

    NARCIS (Netherlands)

    Meijster, A.; Roerdink, J.B.T.M.; Hesselink, W.H.; Goutsias, J; Vincent, L; Bloomberg, DS

    2000-01-01

    A new general algorithm for computing distance transforms of digital images is presented. The algorithm consists of two phases. Both phases consist of two scans, a forward and a backward scan. The first phase scans the image column-wise, while the second phase scans the image row-wise. Since the …
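    For intuition, a classical two-scan city-block (L1) distance transform in the same forward/backward spirit is sketched below; the algorithm of the paper is more general (it covers Euclidean and other metrics via separate column-wise and row-wise phases), so this is a simplified stand-in rather than the published method.

    ```python
    import numpy as np

    def city_block_distance_transform(binary):
        """Two-scan city-block distance transform.
        binary: 2D array, nonzero = foreground (distance 0), zero = background."""
        INF = binary.size  # larger than any possible distance in the image
        d = np.where(binary > 0, 0, INF).astype(np.int64)
        rows, cols = d.shape
        # Forward scan: propagate distances from the top-left.
        for i in range(rows):
            for j in range(cols):
                if i > 0:
                    d[i, j] = min(d[i, j], d[i - 1, j] + 1)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i, j - 1] + 1)
        # Backward scan: propagate distances from the bottom-right.
        for i in range(rows - 1, -1, -1):
            for j in range(cols - 1, -1, -1):
                if i < rows - 1:
                    d[i, j] = min(d[i, j], d[i + 1, j] + 1)
                if j < cols - 1:
                    d[i, j] = min(d[i, j], d[i, j + 1] + 1)
        return d

    img = np.zeros((5, 7), dtype=int)
    img[2, 3] = 1                      # single foreground pixel
    print(city_block_distance_transform(img))
    ```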

  13. Linear systems solvers - recent developments and implications for lattice computations

    International Nuclear Information System (INIS)

    Frommer, A.

    1996-01-01

    We review the numerical analysis community's understanding of Krylov subspace methods for solving (non-Hermitian) systems of equations and discuss its implications for lattice gauge theory computations, using the example of the Wilson fermion matrix. Our thesis is that mature methods like QMR, BiCGStab or restarted GMRES are close to optimal for the Wilson fermion matrix. Consequently, preconditioning appears to be the crucial issue for further improvements. (orig.)
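    As a generic illustration of the solver classes discussed, here applied to a random diagonally dominant sparse system rather than the Wilson fermion matrix (the matrix and tolerances are assumptions made only for the example):

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import gmres, bicgstab

    n = 2000
    # Diagonally dominant random sparse matrix as a stand-in for a non-Hermitian operator.
    A = sp.random(n, n, density=5.0 / n, format="csr", random_state=0)
    A = A + sp.identity(n) * 4.0
    b = np.random.default_rng(1).standard_normal(n)

    x_gmres, info_g = gmres(A, b, restart=30)
    x_bicg, info_b = bicgstab(A, b)
    print(info_g, np.linalg.norm(A @ x_gmres - b))   # info == 0 means converged
    print(info_b, np.linalg.norm(A @ x_bicg - b))
    ```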

  14. Linear time algorithms to construct populations fitting multiple constraint distributions at genomic scales.

    Science.gov (United States)

    Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi

    2017-10-09

    Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, the outcome of breeding operations can be predicted using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear time Simulation using Best-fit Algorithms (SimBA) for two classes of problems where each co-fits two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is here demonstrated to accurately fit the target distributions, allowing efficient large scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes, for which we evaluate both a linear time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available on http://researcher.watson.ibm.com/project/5669.

  15. Computed tomography appearances in the linear sebaceous naevus syndrome

    International Nuclear Information System (INIS)

    Levin, S.; Robinson, R.O.; Aicardi, J.; Hoare, R.D.

    1984-01-01

    The appearances on C.T. of the head in 11 cases of the linear sebaceous naevus syndrome are presented. From these it is argued that the pathology represents a disorder of neuronal migration and organisation rather than differentiation. Since three of the cases had evidence of CNS tumour, in addition to two in the literature, it would appear that a further aspect of this disorder is a failure of growth constraint. This is paralleled by a similar tendency in the skin lesion, which should be considered premalignant. Normal intelligence is rare in this condition and co-exists only with a normal C.T. scan. The early onset of fits is not necessarily associated with subsequent mental handicap. (orig.)

  16. Computation of magnetic field in DC brushless linear motors built with NdFeB magnets

    International Nuclear Information System (INIS)

    Basak, A.; Shirkoohi, G.H.

    1990-01-01

    A software package based on the finite element technique has been used to compute three-dimensional magnetic fields and static forces developed in brushless d.c. linear motors. Two different types of permanent magnets, one of them the high-energy neodymium-iron-boron type, were used as the field flux source in the computer models. Motors with the same specifications as the computer models were built, and the experimental results obtained from them are compared with the computed results.

  17. Scaling ion traps for quantum computing

    CSIR Research Space (South Africa)

    Uys, H

    2010-09-01

    Full Text Available The design, fabrication and preliminary testing of a chipscale, multi-zone, surface electrode ion trap is reported. The modular design and fabrication techniques used are anticipated to advance scalability of ion trap quantum computing architectures...

  18. Solving large-scale sparse eigenvalue problems and linear systems of equations for accelerator modeling

    International Nuclear Information System (INIS)

    Gene Golub; Kwok Ko

    2009-01-01

    The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve on the algorithms so that the ever increasing problem sizes required by the physics application can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties in the previous methods, with the goal of enabling accelerator simulations, for this class of problems, on what was then the world's largest unclassified supercomputer at NERSC. Specifically, a new method, the Hermitian/skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
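    For reference, in its standard textbook form the splitting referred to here writes A = H + S with

    \[ H = \tfrac{1}{2}\left(A + A^{*}\right), \qquad S = \tfrac{1}{2}\left(A - A^{*}\right), \]

    and alternates, for a parameter \( \alpha > 0 \),

    \[ (\alpha I + H)\, x^{(k+1/2)} = (\alpha I - S)\, x^{(k)} + b, \qquad (\alpha I + S)\, x^{(k+1)} = (\alpha I - H)\, x^{(k+1/2)} + b . \]

    This is the generic form found in the literature; the exact variant developed under the award may differ in details.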

  19. Parallel Quasi Newton Algorithms for Large Scale Non Linear Unconstrained Optimization

    International Nuclear Information System (INIS)

    Rahman, M. A.; Basarudin, T.

    1997-01-01

    This paper discusses the Quasi Newton (QN) method for solving non-linear unconstrained minimization problems. One important aspect of the QN method is the choice of the matrix Hk, which must be positive definite and satisfy the QN condition. Our interest here is in parallel QN methods suited to the solution of large-scale optimization problems. QN methods become less attractive for large-scale problems because of their storage and computational requirements; however, it is often the case that the Hessian is a sparse matrix. In this paper we include a mechanism for reducing the Hessian update while preserving the Hessian properties. One major motivation of our research is that a QN method may be good at solving certain types of minimization problems, but degenerate in efficiency when applied to other categories of problems. For this reason, we use an algorithm containing several direction strategies which are processed in parallel. We attempt to parallelize the algorithm by exploring different search directions generated by various QN updates during the minimization process. Different line search strategies are employed simultaneously in the process of locating the minimum along each direction. The code of the algorithm is written in the Occam 2 language and runs on a transputer machine.
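    For reference, the classical BFGS update on which such QN methods build (shown here in its standard form; the paper's parallel variants concern how and when such updates and line searches are applied) is

    \[ s_k = x_{k+1} - x_k, \qquad y_k = \nabla f(x_{k+1}) - \nabla f(x_k), \]

    \[ B_{k+1} = B_k - \frac{B_k s_k s_k^{\mathsf T} B_k}{s_k^{\mathsf T} B_k s_k} + \frac{y_k y_k^{\mathsf T}}{y_k^{\mathsf T} s_k}, \]

    which keeps \( B_{k+1} \) positive definite whenever \( y_k^{\mathsf T} s_k > 0 \).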

  20. The multilevel fast multipole algorithm (MLFMA) for solving large-scale computational electromagnetics problems

    CERN Document Server

    Ergul, Ozgur

    2014-01-01

    The Multilevel Fast Multipole Algorithm (MLFMA) for Solving Large-Scale Computational Electromagnetic Problems provides a detailed and instructional overview of implementing MLFMA. The book: presents a comprehensive treatment of the MLFMA algorithm, including basic linear algebra concepts, recent developments on the parallel computation, and a number of application examples; covers solutions of electromagnetic problems involving dielectric objects and perfectly-conducting objects; discusses applications including scattering from airborne targets, scattering from red …

  1. EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.

    Science.gov (United States)

    Jarvis, John J.; And Others

    Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…

  2. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    International Nuclear Information System (INIS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Neese, Frank; Valeev, Edward F.

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate

  3. The role of dendritic non-linearities in single neuron computation

    Directory of Open Access Journals (Sweden)

    Boris Gutkin

    2014-05-01

    Full Text Available Experiment has demonstrated that summation of excitatory post-synaptic potentials (EPSPs) in dendrites is non-linear. The sum of multiple EPSPs can be larger than their arithmetic sum, a superlinear summation due to the opening of voltage-gated channels and similar to somatic spiking: the so-called dendritic spike. The sum of multiple EPSPs can also be smaller than their arithmetic sum, because the synaptic current necessarily saturates at some point. While these observations are well explained by biophysical models, the impact of dendritic spikes on computation remains a matter of debate. One reason is that dendritic spikes may fail to make the neuron spike; similarly, dendritic saturations are sometimes presented as a glitch which should be corrected by dendritic spikes. We provide solid arguments against this claim and show that dendritic saturations as well as dendritic spikes enhance single neuron computation, even when they cannot directly make the neuron fire. To explore the computational impact of dendritic spikes and saturations, we use a binary neuron model in conjunction with Boolean algebra. We demonstrate using these tools that a single dendritic non-linearity, either spiking or saturating, combined with somatic non-linearity, enables a neuron to compute linearly non-separable Boolean functions (lnBfs). These functions are impossible to compute when summation is linear, and the exclusive OR is a famous example of lnBfs. Importantly, the implementation of these functions does not require the dendritic non-linearity to make the neuron spike. Next, we show that reduced and realistic biophysical models of the neuron are capable of computing lnBfs. Within these models, and contrary to the binary model, the dendritic and somatic non-linearities are tightly coupled. Yet we show that these neuron models are capable of linearly non-separable computations.
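    A toy numerical illustration of the binary-neuron argument (the weights, thresholds and the two target functions below are our own illustrative choices, not taken from the paper): a unit with two thresholded, spike-like subunits reproduces the feature-binding function (x1 AND x2) OR (x3 AND x4), and a unit with two saturating subunits reproduces (x1 OR x2) AND (x3 OR x4); both targets are linearly non-separable.

    ```python
    from itertools import product

    def spiking_neuron(x):
        # Two dendritic subunits with a local threshold (a "dendritic spike").
        d1 = 1 if x[0] + x[1] >= 2 else 0
        d2 = 1 if x[2] + x[3] >= 2 else 0
        return 1 if d1 + d2 >= 1 else 0          # somatic threshold

    def saturating_neuron(x):
        # Two dendritic subunits that saturate at 1.
        d1 = min(x[0] + x[1], 1)
        d2 = min(x[2] + x[3], 1)
        return 1 if d1 + d2 >= 2 else 0          # somatic threshold

    def feature_binding(x):        # (x1 AND x2) OR (x3 AND x4), linearly non-separable
        return 1 if (x[0] and x[1]) or (x[2] and x[3]) else 0

    def feature_discrimination(x):  # (x1 OR x2) AND (x3 OR x4), linearly non-separable
        return 1 if (x[0] or x[1]) and (x[2] or x[3]) else 0

    inputs = list(product([0, 1], repeat=4))
    assert all(spiking_neuron(x) == feature_binding(x) for x in inputs)
    assert all(saturating_neuron(x) == feature_discrimination(x) for x in inputs)
    print("both non-linear dendritic units reproduce their lnBf targets on all 16 inputs")
    ```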

  4. A national-scale model of linear features improves predictions of farmland biodiversity.

    Science.gov (United States)

    Sullivan, Martin J P; Pearce-Higgins, James W; Newson, Stuart E; Scholefield, Paul; Brereton, Tom; Oliver, Tom H

    2017-12-01

    Modelling species distribution and abundance is important for many conservation applications, but it is typically performed using relatively coarse-scale environmental variables such as the area of broad land-cover types. Fine-scale environmental data capturing the most biologically relevant variables have the potential to improve these models. For example, field studies have demonstrated the importance of linear features, such as hedgerows, for multiple taxa, but the absence of large-scale datasets of their extent prevents their inclusion in large-scale modelling studies. We assessed whether a novel spatial dataset mapping linear and woody-linear features across the UK improves the performance of abundance models of 18 bird and 24 butterfly species across 3723 and 1547 UK monitoring sites, respectively. Although improvements in explanatory power were small, the inclusion of linear features data significantly improved model predictive performance for many species. For some species, the importance of linear features depended on landscape context, with greater importance in agricultural areas. Synthesis and applications. This study demonstrates that a national-scale model of the extent and distribution of linear features improves predictions of farmland biodiversity. The ability to model spatial variability in the role of linear features such as hedgerows will be important in targeting agri-environment schemes to maximally deliver biodiversity benefits. Although this study focuses on farmland, data on the extent of different linear features are likely to improve species distribution and abundance models in a wide range of systems and also can potentially be used to assess habitat connectivity.

  5. Proceedings of the meeting on large scale computer simulation research

    International Nuclear Information System (INIS)

    2004-04-01

    The meeting to summarize the collaboration activities for FY2003 on the Large Scale Computer Simulation Research was held January 15-16, 2004 at Theory and Computer Simulation Research Center, National Institute for Fusion Science. Recent simulation results, methodologies and other related topics were presented. (author)

  6. A simplified density matrix minimization for linear scaling self-consistent field theory

    International Nuclear Information System (INIS)

    Challacombe, M.

    1999-01-01

    A simplified version of the Li, Nunes and Vanderbilt [Phys. Rev. B 47, 10891 (1993)] and Daw [Phys. Rev. B 47, 10895 (1993)] density matrix minimization is introduced that requires four fewer matrix multiplies per minimization step relative to previous formulations. The simplified method also exhibits superior convergence properties, such that the bulk of the work may be shifted to the quadratically convergent McWeeny purification, which brings the density matrix to idempotency. Both orthogonal and nonorthogonal versions are derived. The AINV algorithm of Benzi, Meyer, and Tuma [SIAM J. Sci. Comp. 17, 1135 (1996)] is introduced to linear scaling electronic structure theory, and found to be essential in transformations between orthogonal and nonorthogonal representations. These methods have been developed with an atom-blocked sparse matrix algebra that achieves sustained floating-point rates as high as 50% of the theoretical peak, and implemented in the MondoSCF suite of linear scaling SCF programs. For the first time, linear scaling Hartree-Fock theory is demonstrated with three-dimensional systems, including water clusters and estane polymers. The nonorthogonal minimization is shown to be uncompetitive with minimization in an orthonormal representation. An early onset of linear scaling is found for both minimal and double zeta basis sets, and crossovers with a highly optimized eigensolver are achieved. Calculations with up to 6000 basis functions are reported. The scaling of errors with system size is investigated for various levels of approximation. copyright 1999 American Institute of Physics
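    For reference, the McWeeny purification mentioned above iterates, in its standard form,

    \[ P_{k+1} = 3P_k^{2} - 2P_k^{3}, \]

    which drives the eigenvalues of the density matrix towards 0 and 1 and hence converges quadratically to an idempotent matrix \( P^2 = P \), provided the starting eigenvalues are sufficiently close to those values.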

  7. A Maple package for computing Groebner bases for linear recurrence relations

    International Nuclear Information System (INIS)

    Gerdt, Vladimir P.; Robertz, Daniel

    2006-01-01

    A Maple package for computing Groebner bases of linear difference ideals is described. The underlying algorithm is based on Janet and Janet-like monomial divisions associated with finite difference operators. The package can be used, for example, for automatic generation of difference schemes for linear partial differential equations and for reduction of multiloop Feynman integrals. These two possible applications are illustrated by simple examples of the Laplace equation and a one-loop scalar integral of propagator type

  8. A Maple package for computing Groebner bases for linear recurrence relations

    Energy Technology Data Exchange (ETDEWEB)

    Gerdt, Vladimir P. [Laboratory of Information Technologies, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation)]. E-mail: gerdt@jinr.ru; Robertz, Daniel [Lehrstuhl B fuer Mathematik, RWTH Aachen, Templergraben 64, D-52062 Aachen (Germany)]. E-mail: daniel@momo.math.rwth-aachen.de

    2006-04-01

    A Maple package for computing Groebner bases of linear difference ideals is described. The underlying algorithm is based on Janet and Janet-like monomial divisions associated with finite difference operators. The package can be used, for example, for automatic generation of difference schemes for linear partial differential equations and for reduction of multiloop Feynman integrals. These two possible applications are illustrated by simple examples of the Laplace equation and a one-loop scalar integral of propagator type.

  9. Comparison of computer codes for evaluation of double-supply-frequency pulsations in linear induction pumps

    International Nuclear Information System (INIS)

    Kirillov, Igor R.; Obukhov, Denis M.; Ogorodnikov, Anatoly P.; Araseki, Hideo

    2004-01-01

    The paper describes and compares three computer codes that are able to estimate the double-supply-frequency (DSF) pulsations in annular linear induction pumps (ALIPs). The DSF pulsations are the result of the interaction between the magnetic field and the currents induced in the liquid metal, both of which change with the supply frequency. They may be of some concern for the operation of electromagnetic pumps (EMPs) and need to be evaluated during pump design. The results of computer simulation are compared with experimental ones for the annular linear induction pump ALIP-1.

  10. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    KAUST Repository

    Kabanov, Dmitry; Kasimov, Aslan R.

    2018-01-01

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.

  11. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    KAUST Repository

    Kabanov, Dmitry I.

    2017-12-08

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.

  12. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    KAUST Repository

    Kabanov, Dmitry

    2018-03-20

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
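    A generic sketch of the dynamic mode decomposition step on a snapshot matrix (standard exact DMD applied to synthetic data; the shock-fitting integration of the linearized reactive Euler equations that produces the snapshots in the paper is not reproduced here, and the synthetic signal below is purely illustrative):

    ```python
    import numpy as np

    def dmd(snapshots, dt, r):
        """Exact DMD: returns continuous-time rates log(lambda)/dt of the best-fit
        linear operator and the corresponding spatial modes. r should not exceed
        the numerical rank of the snapshot matrix."""
        X, X2 = snapshots[:, :-1], snapshots[:, 1:]
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
        Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
        lam, W = np.linalg.eig(Atilde)
        modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
        return np.log(lam) / dt, modes

    # Synthetic data: two damped oscillations with distinct spatial structures.
    dt, t = 0.1, np.arange(101) * 0.1
    x = np.linspace(0, 1, 200)[:, None]
    data = (np.sin(2 * np.pi * x) * (np.exp(-0.3 * t) * np.cos(2.0 * t))
            + np.cos(2 * np.pi * x) * (np.exp(-0.3 * t) * np.sin(2.0 * t))
            + np.sin(4 * np.pi * x) * (np.exp(-0.1 * t) * np.cos(5.0 * t))
            + np.cos(4 * np.pi * x) * (np.exp(-0.1 * t) * np.sin(5.0 * t)))
    rates, modes = dmd(data, dt, r=4)
    print(np.sort_complex(rates))  # close to -0.3 +/- 2i and -0.1 +/- 5i
    ```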

  13. No-go theorem for passive single-rail linear optical quantum computing.

    Science.gov (United States)

    Wu, Lian-Ao; Walther, Philip; Lidar, Daniel A

    2013-01-01

    Photonic quantum systems are among the most promising architectures for quantum computers. It is well known that for dual-rail photons effective non-linearities and near-deterministic non-trivial two-qubit gates can be achieved via the measurement process and by introducing ancillary photons. While in principle this opens a legitimate path to scalable linear optical quantum computing, the technical requirements are still very challenging and thus other optical encodings are being actively investigated. One of the alternatives is to use single-rail encoded photons, where entangled states can be deterministically generated. Here we prove that even for such systems universal optical quantum computing using only passive optical elements such as beam splitters and phase shifters is not possible. This no-go theorem proves that photon bunching cannot be passively suppressed even when extra ancilla modes and an arbitrary number of photons are used. Our result provides useful guidance for the design of optical quantum computers.

  14. Development of a small-scale computer cluster

    Science.gov (United States)

    Wilhelm, Jay; Smith, Justin T.; Smith, James E.

    2008-04-01

    An increase in demand for computing power in academia has created the need for high performance machines. The computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits to its performance, a cluster of computers with the proper software can multiply the performance of a single computer. Cluster computing has therefore become a much sought after technology. Typical desktop computers could be used for cluster computing, but are not intended for constant full speed operation and take up more space than rack mount servers. Specialty computers that are designed to be used in clusters meet high availability and space requirements, but can be costly. A market segment exists where custom built desktop computers can be arranged in a rack mount situation, gaining the space savings of traditional rack mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components which multiplies the performance of a single desktop machine, while minimizing occupied space and still remaining cost effective.

  15. Hardy inequality on time scales and its application to half-linear dynamic equations

    Directory of Open Access Journals (Sweden)

    Řehák Pavel

    2005-01-01

    Full Text Available A time-scale version of the Hardy inequality is presented, which unifies and extends well-known Hardy inequalities in the continuous and in the discrete setting. An application in the oscillation theory of half-linear dynamic equations is given.
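    For orientation, the classical continuous Hardy inequality that the time-scale result unifies with its discrete counterpart reads, for \( p > 1 \) and \( f \ge 0 \),

    \[ \int_0^{\infty} \left( \frac{1}{x}\int_0^{x} f(t)\,dt \right)^{p} dx \;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^{\infty} f(x)^{p}\, dx . \]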

  16. Multi-Repeated Projection Lithography for High-Precision Linear Scale Based on Average Homogenization Effect

    Directory of Open Access Journals (Sweden)

    Dongxu Ren

    2016-04-01

    Full Text Available A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect that periodically superposes the light intensity of different locations of pitches in the mask to make a consistent energy distribution at a specific wavelength, from which the accuracy of a linear scale can be improved precisely using the average pitch with different step distances. The method's theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, static positioning error, and lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear scale projection lithography splicing process. It was analyzed and confirmed that increasing the repeat exposure number of a single stripe could improve accuracy, as could adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the multi-repeated photolithography method readily achieves a pitch accuracy of 43 nm at any 10 locations within 1 m, and the whole-length accuracy of the linear scale is less than 1 µm/m.

  17. Scale of association: hierarchical linear models and the measurement of ecological systems

    Science.gov (United States)

    Sean M. McMahon; Jeffrey M. Diez

    2007-01-01

    A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...

  18. Efficient Computation of Multiscale Entropy over Short Biomedical Time Series Based on Linear State-Space Models

    Directory of Open Access Journals (Sweden)

    Luca Faes

    2017-01-01

    Full Text Available The most common approach to assess the dynamical complexity of a time series across multiple temporal scales makes use of the multiscale entropy (MSE and refined MSE (RMSE measures. In spite of their popularity, MSE and RMSE lack an analytical framework allowing their calculation for known dynamic processes and cannot be reliably computed over short time series. To overcome these limitations, we propose a method to assess RMSE for autoregressive (AR stochastic processes. The method makes use of linear state-space (SS models to provide the multiscale parametric representation of an AR process observed at different time scales and exploits the SS parameters to quantify analytically the complexity of the process. The resulting linear MSE (LMSE measure is first tested in simulations, both theoretically to relate the multiscale complexity of AR processes to their dynamical properties and over short process realizations to assess its computational reliability in comparison with RMSE. Then, it is applied to the time series of heart period, arterial pressure, and respiration measured for healthy subjects monitored in resting conditions and during physiological stress. This application to short-term cardiovascular variability documents that LMSE can describe better than RMSE the activity of physiological mechanisms producing biological oscillations at different temporal scales.
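    For reference, the coarse-graining step underlying MSE-type measures replaces the series \( x_1, \dots, x_N \) at scale factor \( \tau \) by averages over non-overlapping windows,

    \[ y_j^{(\tau)} = \frac{1}{\tau} \sum_{i=(j-1)\tau + 1}^{j\tau} x_i , \qquad 1 \le j \le \lfloor N/\tau \rfloor , \]

    and an entropy rate is then estimated at each scale; the contribution described above is to carry out this multiscale step analytically within a linear state-space representation instead of on short empirical series.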

  19. Scilab software as an alternative low-cost computing in solving the linear equations problem

    Science.gov (United States)

    Agus, Fahrul; Haviluddin

    2017-02-01

    Numerical computation packages are widely used both in teaching and research. These packages include licensed (proprietary) and open source (non-proprietary) software. One of the reasons to use such a package is the complexity of the mathematical functions involved (e.g., linear problems); in addition, the number of variables in linear or non-linear functions has increased. The aim of this paper was to reflect on key aspects related to the method, didactics and creative praxis in the teaching of linear equations in higher education. If implemented, this could contribute to better learning in mathematics (i.e., solving simultaneous linear equations), which is essential for future engineers. The focus of this study was to introduce the numerical computation package Scilab as an alternative low-cost computing programming environment. In this paper, activities related to the mathematical models were proposed using the Scilab software. In this experiment, four numerical methods, namely Gaussian Elimination, Gauss-Jordan, Inverse Matrix, and Lower-Upper (LU) Decomposition, were implemented. The results of this study show that routines for these numerical methods were created and explored using Scilab procedures, and that these routines can be used as teaching material for the course.
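    A minimal sketch of one of the routines mentioned, written here in Python rather than Scilab purely for illustration: naive Gaussian elimination with partial pivoting, checked against a library solver.

    ```python
    import numpy as np

    def gauss_solve(A, b):
        """Solve A x = b by Gaussian elimination with partial pivoting."""
        A = A.astype(float)
        b = b.astype(float)
        n = len(b)
        for k in range(n - 1):
            p = k + np.argmax(np.abs(A[k:, k]))       # pivot row
            A[[k, p]], b[[k, p]] = A[[p, k]].copy(), b[[p, k]].copy()
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):                # back substitution
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
    b = np.array([8.0, -11.0, -3.0])
    print(gauss_solve(A, b), np.linalg.solve(A, b))   # both give [2., 3., -1.]
    ```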

  20. Linear arrangement of nano-scale magnetic particles formed in Cu-Fe-Ni alloys

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Sung, E-mail: k3201s@hotmail.co [Department of Materials Engineering (SEISAN), Yokohama National University, 79-5 Tokiwadai, Hodogayaku, Yokohama, 240-8501 (Japan); Takeda, Mahoto [Department of Materials Engineering (SEISAN), Yokohama National University, 79-5 Tokiwadai, Hodogayaku, Yokohama, 240-8501 (Japan); Takeguchi, Masaki [Advanced Electron Microscopy Group, National Institute for Materials Science (NIMS), Sakura 3-13, Tsukuba, 305-0047 (Japan); Bae, Dong-Sik [School of Nano and Advanced Materials Engineering, Changwon National University, Gyeongnam, 641-773 (Korea, Republic of)

    2010-04-30

    The structural evolution of nano-scale magnetic particles formed in Cu-Fe-Ni alloys on isothermal annealing at 878 K has been investigated by means of transmission electron microscopy (TEM), energy dispersive X-ray spectroscopy (EDS), electron energy-loss spectroscopy (EELS) and field-emission scanning electron microscopy (FE-SEM). Phase decomposition of Cu-Fe-Ni occurred after an as-quenched specimen received a short anneal, and nano-scale magnetic particles were formed randomly in the Cu-rich matrix. A striking feature was observed in which two or more nano-scale particles with a cubic shape were aligned linearly along <1,0,0> directions, and this trend was more pronounced at later stages of the precipitation. Large numbers of <1,0,0> linear chains of precipitates extended in three dimensions in the late stages of annealing.

  1. Deterministic linear-optics quantum computing based on a hybrid approach

    International Nuclear Information System (INIS)

    Lee, Seung-Woo; Jeong, Hyunseok

    2014-01-01

    We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources

  2. Deterministic linear-optics quantum computing based on a hybrid approach

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Woo; Jeong, Hyunseok [Center for Macroscopic Quantum Control, Department of Physics and Astronomy, Seoul National University, Seoul, 151-742 (Korea, Republic of)

    2014-12-04

    We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources.

  3. Solution of the Schrodinger Equation for a Diatomic Oscillator Using Linear Algebra: An Undergraduate Computational Experiment

    Science.gov (United States)

    Gasyna, Zbigniew L.

    2008-01-01

    A computational experiment is proposed in which a linear algebra method is applied to the solution of the Schrodinger equation for a diatomic oscillator. Calculations of the vibration-rotation spectrum for the HCl molecule are presented and the results show excellent agreement with experimental data. (Contains 1 table and 1 figure.)
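    A brief sketch of the linear-algebra idea on a simpler stand-in problem (a finite-difference Hamiltonian for a dimensionless harmonic potential diagonalized with numpy; the article itself uses a basis-set formulation and real HCl parameters, which are not reproduced here):

    ```python
    import numpy as np

    # Dimensionless harmonic oscillator: H = -1/2 d^2/dx^2 + 1/2 x^2
    # Exact eigenvalues are n + 1/2.
    n_grid, x_max = 1000, 8.0
    x = np.linspace(-x_max, x_max, n_grid)
    dx = x[1] - x[0]

    # Second-derivative operator by central finite differences.
    main = np.full(n_grid, -2.0)
    off = np.ones(n_grid - 1)
    d2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2

    H = -0.5 * d2 + np.diag(0.5 * x**2)
    energies = np.linalg.eigvalsh(H)[:4]
    print(energies)   # approximately [0.5, 1.5, 2.5, 3.5]
    ```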

  4. Automatic computation of moment magnitudes for small earthquakes and the scaling of local to moment magnitude

    Science.gov (United States)

    Edwards, Benjamin; Allmann, Bettina; Fäh, Donat; Clinton, John

    2010-10-01

    Moment magnitudes (MW) are computed for small and moderate earthquakes using a spectral fitting method. 40 of the resulting values are compared with those from broadband moment tensor solutions and found to match with negligible offset and scatter for available MW values of between 2.8 and 5.0. Using the presented method, MW are computed for 679 earthquakes in Switzerland with a minimum ML = 1.3. A combined bootstrap and orthogonal L1 minimization is then used to produce a scaling relation between ML and MW. The scaling relation has a polynomial form and is shown to reduce the dependence of the predicted MW residual on magnitude relative to an existing linear scaling relation. The computation of MW using the presented spectral technique is fully automated at the Swiss Seismological Service, providing real-time solutions within 10 minutes of an event through a web-based XML database. The scaling between ML and MW is explored using synthetic data computed with a stochastic simulation method. It is shown that the scaling relation can be explained by the interaction of attenuation, the stress-drop and the Wood-Anderson filter. For instance, it is shown that the stress-drop controls the saturation of the ML scale, with low-stress drops (e.g. 0.1-1.0 MPa) leading to saturation at magnitudes as low as ML = 4.
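    For reference, the conversion from seismic moment \( M_0 \) (in N m) to moment magnitude used in this kind of work is the standard Hanks and Kanamori relation

    \[ M_{\mathrm W} = \tfrac{2}{3}\left( \log_{10} M_0 - 9.1 \right) , \]

    so that the spectral fit effectively estimates \( M_0 \) and the magnitude follows from it.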

  5. Three-point phase correlations: A new measure of non-linear large-scale structure

    CERN Document Server

    Wolstenhulme, Richard; Obreschkow, Danail

    2015-01-01

    We derive an analytical expression for a novel large-scale structure observable: the line correlation function. The line correlation function, which is constructed from the three-point correlation function of the phase of the density field, is a robust statistical measure allowing the extraction of information in the non-linear and non-Gaussian regime. We show that, in perturbation theory, the line correlation is sensitive to the coupling kernel F_2, which governs the non-linear gravitational evolution of the density field. We compare our analytical expression with results from numerical simulations and find a very good agreement for separations r>20 Mpc/h. Fitting formulae for the power spectrum and the non-linear coupling kernel at small scales allow us to extend our prediction into the strongly non-linear regime. We discuss the advantages of the line correlation relative to standard statistical measures like the bispectrum. Unlike the latter, the line correlation is independent of the linear bias. Furtherm...

  6. On the interaction of small-scale linear waves with nonlinear solitary waves

    Science.gov (United States)

    Xu, Chengzhu; Stastna, Marek

    2017-04-01

    In the study of environmental and geophysical fluid flows, linear wave theory is well developed and its application has been considered for phenomena of various length and time scales. However, due to the nonlinear nature of fluid flows, in many cases results predicted by linear theory do not agree with observations. One such case is internal wave dynamics. While small-amplitude wave motion may be approximated by linear theory, large amplitude waves tend to be solitary-like. In some cases, when the wave is highly nonlinear, even weakly nonlinear theories fail to predict the wave properties correctly. We study the interaction of small-scale linear waves with nonlinear solitary waves using highly accurate pseudospectral simulations that begin with a fully nonlinear solitary wave and a train of small-amplitude waves initialized from linear waves. The solitary wave then interacts with the linear waves through either an overtaking collision or a head-on collision. During the collision, there is a net energy transfer from the linear wave train to the solitary wave, resulting in an increase in the kinetic energy carried by the solitary wave and a phase shift of the solitary wave with respect to a freely propagating solitary wave. At the same time the linear waves are greatly reduced in amplitude. The percentage of energy transferred depends primarily on the wavelength of the linear waves. We found that after one full collision cycle, the longest waves may retain as much as 90% of the kinetic energy they had initially, while the shortest waves lose almost all of their initial energy. We also found that a head-on collision is more efficient in destroying the linear waves than an overtaking collision. On the other hand, the initial amplitude of the linear waves has very little impact on the percentage of energy that can be transferred to the solitary wave. Because of the nonlinearity of the solitary wave, these results provide some insight into wave-mean flow …

  7. Development of small scale cluster computer for numerical analysis

    Science.gov (United States)

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two personal computers were successfully networked together to form a small scale cluster. Each of the processors involved is a multicore processor with four cores, giving the cluster eight cores in total. The cluster runs the Ubuntu 14.04 Linux environment with an MPI implementation (MPICH2). Two main tests were conducted in order to test the cluster: a communication test and a performance test. The communication test was done to make sure that the computers are able to pass the required information without any problem, using a simple MPI Hello program written in the C language. Additionally, a performance test was done to show that the cluster's computational performance is much better than that of a single-CPU computer. In this performance test, four runs were made by executing the same code using a single node, 2 processors, 4 processors, and 8 processors. The results show that with additional processors, the time required to solve the problem decreases; the time required for the calculation is roughly halved when the number of processors is doubled. To conclude, we successfully developed a small scale cluster computer using common hardware which is capable of higher computing power when compared to a single-CPU machine, and this can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics analysis.
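    The communication check described (the authors' version is a C MPI program) has a near-identical Python analogue via mpi4py, shown here only as an illustrative stand-in:

    ```python
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()      # this process's id
    size = comm.Get_size()      # total number of processes
    name = MPI.Get_processor_name()
    print(f"Hello from rank {rank} of {size} on {name}")
    # Run with e.g.: mpiexec -n 8 python hello_mpi.py
    ```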

  8. Fast computation of the Maslov index for hyperbolic linear systems with periodic coefficients

    International Nuclear Information System (INIS)

    Chardard, F; Dias, F; Bridges, T J

    2006-01-01

    The Maslov index is a topological property of periodic orbits of finite-dimensional Hamiltonian systems that is widely used in semiclassical quantization, quantum chaology, stability of waves and classical mechanics. The Maslov index is determined from the analysis of a linear Hamiltonian system with periodic coefficients. In this paper, a numerical scheme is devised to compute the Maslov index for hyperbolic linear systems when the phase space has a low dimension. The idea is to compute on the exterior algebra of the ambient vector space, where the Lagrangian subspace representing the unstable subspace is reduced to a line. When the exterior algebra is projectified the Lagrangian subspace always forms a closed loop. The idea is illustrated by application to Hamiltonian systems on a phase space of dimension 4. The theory is used to compute the Maslov index for the spectral problem associated with periodic solutions of the fifth-order Korteweg de Vries equation

  9. Extreme Scale Computing for First-Principles Plasma Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Choogn-Seock [Princeton University

    2011-10-12

    World superpowers are in the middle of the “Computnik” race. The US Department of Energy (and the National Nuclear Security Administration) wishes to launch exascale computer systems into the scientific (and national security) world by 2018. The objective is to solve important scientific problems and to predict the outcomes using the most fundamental scientific laws, which would not be possible otherwise. Being chosen into the next “frontier” group can be of great benefit to a scientific discipline. An extreme scale computer system requires different types of algorithms and a different programming philosophy from those we have been accustomed to. Only a handful of scientific codes are blessed to be capable of scalable usage of today’s largest computers in operation at petascale (using more than 100,000 cores concurrently). Fortunately, a few magnetic fusion codes are competing well in this race using the “first principles” gyrokinetic equations. These codes are beginning to study the fusion plasma dynamics in full-scale realistic diverted device geometry in a natural nonlinear multiscale fashion, including the large scale neoclassical and small scale turbulence physics, but excluding some ultrafast dynamics. In this talk, most of the above mentioned topics will be introduced at an executive level. Representative properties of the extreme scale computers, modern programming exercises to take advantage of them, and different philosophies in the data flows and analyses will be presented. Examples of the multi-scale multi-physics scientific discoveries made possible by solving the gyrokinetic equations on extreme scale computers will be described. Future directions into “virtual tokamak experiments” will also be discussed.

  10. Linear Scaling Solution of the Time-Dependent Self-Consistent-Field Equations

    Directory of Open Access Journals (Sweden)

    Matt Challacombe

    2014-03-01

    Full Text Available A new approach to solving the Time-Dependent Self-Consistent-Field equations is developed based on the double quotient formulation of Tsiper (2001, J. Phys. B). Dual channel, quasi-independent non-linear optimization of these quotients is found to yield convergence rates approaching those of the best case (single channel) Tamm-Dancoff approximation. This formulation is variational with respect to matrix truncation, admitting linear scaling solution of the matrix-eigenvalue problem, which is demonstrated for bulk excitons in the polyphenylene vinylene oligomer and the (4,3) carbon nanotube segment.

  11. Error analysis of dimensionless scaling experiments with multiple points using linear regression

    International Nuclear Information System (INIS)

    Guercan, Oe.D.; Vermare, L.; Hennequin, P.; Bourdelle, C.

    2010-01-01

    A general method of error estimation in the case of multiple point dimensionless scaling experiments, using linear regression and standard error propagation, is proposed. The method reduces to the previous result of Cordey (2009 Nucl. Fusion 49 052001) in the case of a two-point scan. On the other hand, if the points follow a linear trend, it explains how the estimated error decreases as more points are added to the scan. Based on the analytical expression that is derived, it is argued that for a low number of points, adding points to the ends of the scanned range, rather than the middle, results in a smaller error estimate. (letter)

  12. Multi-scale analysis of lung computed tomography images

    CERN Document Server

    Gori, I; Fantacci, M E; Preite Martinez, A; Retico, A; De Mitri, I; Donadio, S; Fulcheri, C

    2007-01-01

    A computer-aided detection (CAD) system for the identification of lung internal nodules in low-dose multi-detector helical Computed Tomography (CT) images was developed in the framework of the MAGIC-5 project. The three modules of our lung CAD system, a segmentation algorithm for lung internal region identification, a multi-scale dot-enhancement filter for nodule candidate selection and a multi-scale neural technique for false positive finding reduction, are described. The results obtained on a dataset of low-dose and thin-slice CT scans are shown in terms of free response receiver operating characteristic (FROC) curves and discussed.

  13. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  14. Reliability in Warehouse-Scale Computing: Why Low Latency Matters

    DEFF Research Database (Denmark)

    Nannarelli, Alberto

    2015-01-01

    Warehouse sized buildings are nowadays hosting several types of large computing systems: from supercomputers to large clusters of servers to provide the infrastructure to the cloud. Although the main target, especially for high-performance computing, is still to achieve high throughput … the limiting factor of these warehouse-scale data centers is the power dissipation. Power is dissipated not only in the computation itself, but also in heat removal (fans, air conditioning, etc.) to keep the temperature of the devices within the operating ranges. The need to keep the temperature low within …

  15. A general digital computer procedure for synthesizing linear automatic control systems

    International Nuclear Information System (INIS)

    Cummins, J.D.

    1961-10-01

    The fundamental concepts required for synthesizing a linear automatic control system are considered. A generalized procedure for synthesizing automatic control systems is demonstrated. This procedure has been programmed for the Ferranti Mercury and the IBM 7090 computers. Details of the programmes are given. The procedure uses the linearized set of equations which describe the plant to be controlled as the starting point. Subsequent computations determine the transfer functions between any desired variables. The programmes also compute the root and phase loci for any linear (and some non-linear) configurations in the complex plane, the open loop and closed loop frequency responses of a system, the residues of a function of the complex variable 's' and the time response corresponding to these residues. With these general programmes available the design of 'one point' automatic control systems becomes a routine scientific procedure. Also dynamic assessments of plant may be carried out. Certain classes of multipoint automatic control problems may also be solved with these procedures. Autonomous systems, invariant systems and orthogonal systems may also be studied. (author)

  16. Large scale computing in theoretical physics: Example QCD

    International Nuclear Information System (INIS)

    Schilling, K.

    1986-01-01

    The limitations of the classical mathematical analysis of Newton and Leibniz appear to be more and more overcome by the power of modern computers. Large scale computing techniques - which resemble closely the methods used in simulations within statistical mechanics - allow to treat nonlinear systems with many degrees of freedom such as field theories in nonperturbative situations, where analytical methods do fail. The computation of the hadron spectrum within the framework of lattice QCD sets a demanding goal for the application of supercomputers in basic science. It requires both big computer capacities and clever algorithms to fight all the numerical evils that one encounters in the Euclidean world. The talk will attempt to describe both the computer aspects and the present state of the art of spectrum calculations within lattice QCD. (orig.)

  17. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  18. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    Science.gov (United States)

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L 1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
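
    To illustrate the kind of LP that fastclime targets, the hedged Python sketch below solves a single CLIME column, min ||w||_1 subject to ||Sw - e_j||_inf <= lambda, with an off-the-shelf solver (scipy.optimize.linprog). This is a naive reference formulation for orientation only, not the parametric simplex path the package implements, and all sizes and the regularization value are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def clime_column(S, j, lam):
        """Solve one CLIME column:  min ||w||_1  s.t.  ||S w - e_j||_inf <= lam."""
        p = S.shape[0]
        e = np.zeros(p); e[j] = 1.0
        # Split w = wp - wm with wp, wm >= 0; the objective is sum(wp) + sum(wm).
        c = np.ones(2 * p)
        A_ub = np.vstack([np.hstack([ S, -S]),    #  S w - e_j <=  lam
                          np.hstack([-S,  S])])   # -S w + e_j <=  lam
        b_ub = np.concatenate([lam + e, lam - e])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
        return res.x[:p] - res.x[p:]

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 20))
    S = np.cov(X, rowvar=False) + 1e-3 * np.eye(20)   # well-conditioned sample covariance
    print(np.round(clime_column(S, 0, lam=0.1), 3))   # first column of the precision estimate
    ```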

  19. Optimization and large scale computation of an entropy-based moment closure

    Science.gov (United States)

    Kristopher Garrett, C.; Hauck, Cory; Hill, Judith

    2015-12-01

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.
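
    For orientation, a hedged sketch of the standard entropy-based closure the abstract refers to (the notation below is assumed here, not taken from the paper): the unknown kinetic density is replaced by the minimizer of an entropy functional under the moment constraints, and the resulting multipliers must be found by solving a strictly convex dual optimization problem in every spatial cell at every time step, which is why MN is so much more expensive than the spectral PN closure.

    ```latex
    % Entropy-based (M_N) moment closure for a kinetic density f on the unit sphere,
    % with moment basis m (spherical harmonics) and entropy density \eta:
    \hat{f} \;=\; \operatorname*{arg\,min}_{g}
      \left\{ \int_{\mathbb{S}^2} \eta\bigl(g(\Omega)\bigr)\,\mathrm{d}\Omega
      \;:\; \int_{\mathbb{S}^2} \mathbf{m}(\Omega)\,g(\Omega)\,\mathrm{d}\Omega = \mathbf{u} \right\},
    \qquad
    \hat{f}(\Omega) \;=\; \eta_*'\bigl(\hat{\boldsymbol{\alpha}}(\mathbf{u})^{\top}\mathbf{m}(\Omega)\bigr),
    % where \eta_* is the Legendre dual of \eta and \hat{\alpha}(u) solves the dual problem.
    ```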

  20. Dual linear structured support vector machine tracking method via scale correlation filter

    Science.gov (United States)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

    Adaptive tracking-by-detection methods based on structured support vector machines (SVM) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy of object scale estimation, which limits the overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, composed of a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.

  1. Computer-aided design studies of the homopolar linear synchronous motor

    Science.gov (United States)

    Dawson, G. E.; Eastham, A. R.; Ong, R.

    1984-09-01

    The linear induction motor (LIM), as an urban transit drive, can provide good grade-climbing capabilities and propulsion/braking performance that is independent of steel wheel-rail adhesion. In view of its 10-12 mm airgap, the LIM is characterized by a low power factor-efficiency product of order 0.4. A synchronous machine offers high efficiency and controllable power factor. An assessment of the linear homopolar configuration of this machine is presented as an alternative to the LIM. Computer-aided design studies using the finite element technique have been conducted to identify a suitable machine design for urban transit propulsion.

  2. Self-consistent field theory based molecular dynamics with linear system-size scaling

    Energy Technology Data Exchange (ETDEWEB)

    Richters, Dorothee [Institute of Mathematics and Center for Computational Sciences, Johannes Gutenberg University Mainz, Staudinger Weg 9, D-55128 Mainz (Germany); Kühne, Thomas D., E-mail: kuehne@uni-mainz.de [Institute of Physical Chemistry and Center for Computational Sciences, Johannes Gutenberg University Mainz, Staudinger Weg 7, D-55128 Mainz (Germany); Technical and Macromolecular Chemistry, University of Paderborn, Warburger Str. 100, D-33098 Paderborn (Germany)

    2014-04-07

    We present an improved field-theoretic approach to the grand-canonical potential suitable for linear scaling molecular dynamics simulations using forces from self-consistent electronic structure calculations. It is based on an exact decomposition of the grand canonical potential for independent fermions and relies neither on the ability to localize the orbitals nor on the Hamilton operator being well-conditioned. Hence, this scheme enables highly accurate all-electron linear scaling calculations even for metallic systems. The inherent energy drift of Born-Oppenheimer molecular dynamics simulations, arising from an incomplete convergence of the self-consistent field cycle, is circumvented by means of a properly modified Langevin equation. The predictive power of the present approach is illustrated using the example of liquid methane under extreme conditions.

  3. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    DEFF Research Database (Denmark)

    Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian

    2015-01-01

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL ...

  4. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    Science.gov (United States)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them towards running large scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications, and to utilize the power of Graphics Processing Units (GPU). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable visitors to volunteer their computing resources towards running advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large scale hydrological simulations and model runs in an open and integrated environment.

  5. 3D artefact for concurrent scale calibration in Computed Tomography

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; De Chiffre, Leonardo

    2016-01-01

    A novel artefact for calibration of the scale in 3D X-ray Computed Tomography (CT) is presented. The artefact comprises a carbon fibre tubular structure on which a number of reference ruby spheres are glued. The artefact is positioned and scanned together with the workpiece inside the CT scanner...

  6. Communication: An effective linear-scaling atomic-orbital reformulation of the random-phase approximation using a contracted double-Laplace transformation

    International Nuclear Information System (INIS)

    Schurkus, Henry F.; Ochsenfeld, Christian

    2016-01-01

    An atomic-orbital (AO) reformulation of the random-phase approximation (RPA) correlation energy is presented, allowing the steep computational scaling to be reduced to linear, so that large systems can be studied on simple desktop computers with fully numerically controlled accuracy. Our AO-RPA formulation introduces a contracted double-Laplace transform and employs the overlap-metric resolution-of-the-identity. First timings of our pilot code illustrate the reduced scaling with systems comprising up to 1262 atoms and 10 090 basis functions.

  7. Isotropic-resolution linear-array-based photoacoustic computed tomography through inverse Radon transform

    Science.gov (United States)

    Li, Guo; Xia, Jun; Li, Lei; Wang, Lidai; Wang, Lihong V.

    2015-03-01

    Linear transducer arrays are readily available for ultrasonic detection in photoacoustic computed tomography. They offer low cost, hand-held convenience, and conventional ultrasonic imaging. However, the elevational resolution of linear transducer arrays, which is usually determined by the weak focus of the cylindrical acoustic lens, is about one order of magnitude worse than the in-plane axial and lateral spatial resolutions. Therefore, conventional linear scanning along the elevational direction cannot provide high-quality three-dimensional photoacoustic images due to the anisotropic spatial resolutions. Here we propose an innovative method to achieve isotropic resolutions for three-dimensional photoacoustic images through combined linear and rotational scanning. In each scan step, we first elevationally scan the linear transducer array, and then rotate the linear transducer array along its center in small steps, and scan again until 180 degrees have been covered. To reconstruct isotropic three-dimensional images from the multiple-directional scanning dataset, we use the standard inverse Radon transform originating from X-ray CT. We acquired a three-dimensional microsphere phantom image through the inverse Radon transform method and compared it with a single-elevational-scan three-dimensional image. The comparison shows that our method improves the elevational resolution by up to one order of magnitude, approaching the in-plane lateral-direction resolution. In vivo rat images were also acquired.
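
    A hedged Python sketch of the reconstruction step described above, using the standard filtered back-projection (inverse Radon transform) from scikit-image; the array sizes and the random sinogram are placeholders rather than the authors' data, and the parameter names follow recent scikit-image releases.

    ```python
    import numpy as np
    from skimage.transform import iradon

    # Stack of elevational projections: one column per rotation angle of the
    # linear array, with the angles covering 180 degrees in small steps.
    n_elev, n_angles = 128, 180
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = np.random.default_rng(0).random((n_elev, n_angles))  # placeholder data

    # Filtered back-projection recovers an isotropic-resolution slice in the
    # elevational plane, exactly as in X-ray CT.
    slice_image = iradon(sinogram, theta=theta, filter_name="ramp", circle=True)
    print(slice_image.shape)
    ```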

  8. Effect of Variable Spatial Scales on USLE-GIS Computations

    Science.gov (United States)

    Patil, R. J.; Sharma, S. K.

    2017-12-01

    Use of appropriate spatial scale is very important in Universal Soil Loss Equation (USLE) based spatially distributed soil erosion modelling. This study aimed at assessment of annual rates of soil erosion at different spatial scales/grid sizes and analysing how changes in spatial scales affect USLE-GIS computations using simulation and statistical variabilities. Efforts have been made in this study to recommend an optimum spatial scale for further USLE-GIS computations for management and planning in the study area. The present research study was conducted in Shakkar River watershed, situated in Narsinghpur and Chhindwara districts of Madhya Pradesh, India. Remote Sensing and GIS techniques were integrated with Universal Soil Loss Equation (USLE) to predict spatial distribution of soil erosion in the study area at four different spatial scales viz; 30 m, 50 m, 100 m, and 200 m. Rainfall data, soil map, digital elevation model (DEM) and an executable C++ program, and satellite image of the area were used for preparation of the thematic maps for various USLE factors. Annual rates of soil erosion were estimated for 15 years (1992 to 2006) at four different grid sizes. The statistical analysis of four estimated datasets showed that sediment loss dataset at 30 m spatial scale has a minimum standard deviation (2.16), variance (4.68), percent deviation from observed values (2.68 - 18.91 %), and highest coefficient of determination (R2 = 0.874) among all the four datasets. Thus, it is recommended to adopt this spatial scale for USLE-GIS computations in the study area due to its minimum statistical variability and better agreement with the observed sediment loss data. This study also indicates large scope for use of finer spatial scales in spatially distributed soil erosion modelling.
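
    For context, the computation behind these USLE-GIS estimates is a per-cell product of factor rasters, A = R * K * LS * C * P, repeated at each grid size. The Python/NumPy sketch below shows that raster arithmetic with purely illustrative factor values, not the Shakkar watershed data.

    ```python
    import numpy as np

    # Hypothetical USLE factor rasters on a common grid (e.g. 30 m cells).
    shape = (400, 400)
    rng = np.random.default_rng(0)
    R  = np.full(shape, 620.0)           # rainfall erosivity
    K  = rng.uniform(0.20, 0.35, shape)  # soil erodibility
    LS = rng.uniform(0.5, 4.0, shape)    # slope length-steepness (from the DEM)
    C  = rng.uniform(0.05, 0.4, shape)   # cover management
    P  = np.ones(shape)                  # support practice

    A = R * K * LS * C * P               # annual soil loss per cell
    print(A.mean(), A.max())
    ```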

  9. Parallel Computational Fluid Dynamics 2007 : Implementations and Experiences on Large Scale and Grid Computing

    CERN Document Server

    2009-01-01

    At the 19th Annual Conference on Parallel Computational Fluid Dynamics held in Antalya, Turkey, in May 2007, the most recent developments and implementations of large-scale and grid computing were presented. This book, comprised of the invited and selected papers of this conference, details those advances, which are of particular interest to CFD and CFD-related communities. It also offers the results related to applications of various scientific and engineering problems involving flows and flow-related topics. Intended for CFD researchers and graduate students, this book is a state-of-the-art presentation of the relevant methodology and implementation techniques of large-scale computing.

  10. Final Report for 'Implimentation and Evaluation of Multigrid Linear Solvers into Extended Magnetohydrodynamic Codes for Petascale Computing'

    International Nuclear Information System (INIS)

    Vadlamani, Srinath; Kruger, Scott; Austin, Travis

    2008-01-01

    Extended magnetohydrodynamic (MHD) codes are used to model the large, slow-growing instabilities that are projected to limit the performance of the International Thermonuclear Experimental Reactor (ITER). The multiscale nature of the extended MHD equations requires an implicit approach. The current linear solvers needed for the implicit algorithm scale poorly because the resultant matrices are so ill-conditioned. A new solver is needed, especially one that scales to the petascale. The most successful scalable parallel processor solvers to date are multigrid solvers. Applying multigrid techniques to a set of equations whose fundamental modes are dispersive waves is a promising solution to CEMM problems. For Phase 1, we implemented multigrid preconditioners from the HYPRE project of the Center for Applied Scientific Computing at LLNL via PETSc of the DOE SciDAC TOPS for the real matrix systems of the extended MHD code NIMROD, which is one of the primary modeling codes of the OFES-funded Center for Extended Magnetohydrodynamic Modeling (CEMM) SciDAC. We successfully implemented the multigrid solvers on the fusion test problem that allows for real matrix systems, and in the process learned about the details of NIMROD data structures and the difficulties of inverting NIMROD operators. The further success of this project will allow for efficient usage of future petascale computers at the National Leadership Facilities: Oak Ridge National Laboratory, Argonne National Laboratory, and the National Energy Research Scientific Computing Center. The project will be a collaborative effort between computational plasma physicists and applied mathematicians at Tech-X Corporation, applied mathematicians at Front Range Scientific Computations, Inc. (who are collaborators on the HYPRE project), and other computational plasma physicists involved with the CEMM project.
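
    The report concerns HYPRE/PETSc multigrid preconditioners inside NIMROD; as a much smaller illustration of the idea behind multigrid (smoothing on the fine grid combined with a coarse-grid correction), here is a minimal two-grid cycle for a 1-D Poisson problem in Python/NumPy. All names and sizes are illustrative and unrelated to the NIMROD implementation.

    ```python
    import numpy as np

    def poisson_1d(n):
        """Dense 1-D Poisson matrix (Dirichlet BCs), mesh width h = 1/(n+1)."""
        h = 1.0 / (n + 1)
        return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

    def weighted_jacobi(A, b, x, sweeps=3, omega=2.0 / 3.0):
        d = np.diag(A)
        for _ in range(sweeps):
            x = x + omega * (b - A @ x) / d
        return x

    def two_grid(A, b, x, Ac, P):
        """Pre-smooth, coarse-grid correction, post-smooth."""
        x = weighted_jacobi(A, b, x)
        r = b - A @ x
        ec = np.linalg.solve(Ac, P.T @ r)   # exact coarse solve (stand-in for recursion)
        return weighted_jacobi(A, b, x + P @ ec)

    n, nc = 63, 31                          # fine / coarse interior points
    P = np.zeros((n, nc))                   # linear-interpolation prolongation
    for j in range(nc):
        i = 2 * j + 1
        P[i - 1, j], P[i, j], P[i + 1, j] = 0.5, 1.0, 0.5

    A = poisson_1d(n)
    Ac = P.T @ A @ P                        # Galerkin coarse-grid operator
    b, x = np.ones(n), np.zeros(n)
    for _ in range(10):
        x = two_grid(A, b, x, Ac, P)
    print(np.linalg.norm(b - A @ x))        # residual drops by orders of magnitude
    ```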

  11. Some Comparisons of Complexity in Dictionary-Based and Linear Computational Models

    Czech Academy of Sciences Publication Activity Database

    Gnecco, G.; Kůrková, Věra; Sanguineti, M.

    2011-01-01

    Roč. 24, č. 2 (2011), s. 171-182 ISSN 0893-6080 R&D Projects: GA ČR GA201/08/1744 Grant - others:CNR - AV ČR project 2010-2012(XE) Complexity of Neural-Network and Kernel Computational Models Institutional research plan: CEZ:AV0Z10300504 Keywords : linear approximation schemes * variable-basis approximation schemes * model complexity * worst-case errors * neural networks * kernel models Subject RIV: IN - Informatics, Computer Science Impact factor: 2.182, year: 2011

  12. Can Dictionary-based Computational Models Outperform the Best Linear Ones?

    Czech Academy of Sciences Publication Activity Database

    Gnecco, G.; Kůrková, Věra; Sanguineti, M.

    2011-01-01

    Roč. 24, č. 8 (2011), s. 881-887 ISSN 0893-6080 R&D Projects: GA MŠk OC10047 Grant - others:CNR - AV ČR project 2010-2012(XE) Complexity of Neural-Network and Kernel Computational Models Institutional research plan: CEZ:AV0Z10300504 Keywords : dictionary-based approximation * linear approximation * rates of approximation * worst-case error * Kolmogorov width * perceptron networks Subject RIV: IN - Informatics, Computer Science Impact factor: 2.182, year: 2011

  13. Robust fault detection of linear systems using a computationally efficient set-membership method

    DEFF Research Database (Denmark)

    Tabatabaeipour, Mojtaba; Bak, Thomas

    2014-01-01

    In this paper, a computationally efficient set-membership method for robust fault detection of linear systems is proposed. The method computes an interval outer-approximation of the output of the system that is consistent with the model, the bounds on noise and disturbance, and the past measurements. ... The method is trivially parallelizable. The method is demonstrated for fault detection of a hydraulic pitch actuator of a wind turbine. We show the effectiveness of the proposed method by comparing our results with two zonotope-based set-membership methods.

  14. Solving linear systems in FLICA-4, thermohydraulic code for 3-D transient computations

    International Nuclear Information System (INIS)

    Allaire, G.

    1995-01-01

    FLICA-4 is a computer code, developed at the CEA (France), devoted to steady state and transient thermal-hydraulic analysis of nuclear reactor cores, for small size problems (around 100 mesh cells) as well as for large ones (more than 100000), on either standard workstations or vector super-computers. As in other time-implicit codes, the most time- and memory-consuming part of FLICA-4 is the routine dedicated to solving the linear system (the size of which is of the order of the number of cells). Therefore, the efficiency of the code is crucially influenced by the optimization of the algorithms used in assembling and solving linear systems: direct methods such as the Gauss (or LU) decomposition for moderate size problems, and iterative methods such as the preconditioned conjugate gradient for large problems. 6 figs., 13 refs
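
    As a small illustration of the trade-off described above (direct LU factorization for moderate problems, preconditioned conjugate gradient for large ones), here is a hedged Python/SciPy sketch on a stand-in sparse system; the matrix, its size, and the Jacobi preconditioner are illustrative and are not FLICA-4 code.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Symmetric positive-definite stand-in for the per-time-step linear system.
    n = 10000
    A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.random.default_rng(0).standard_normal(n)

    # Direct (LU) solve: appropriate for moderate problem sizes.
    x_direct = spla.splu(A).solve(b)

    # Preconditioned conjugate gradient: appropriate for large problems.
    d = A.diagonal()
    M = spla.LinearOperator((n, n), matvec=lambda v: v / d)   # Jacobi preconditioner
    x_cg, info = spla.cg(A, b, M=M)
    print(info, np.linalg.norm(A @ x_cg - b))
    ```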

  15. Proceedings of the conference on computer codes and the linear accelerator community

    International Nuclear Information System (INIS)

    Cooper, R.K.

    1990-07-01

    The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned

  16. Proceedings of the conference on computer codes and the linear accelerator community

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, R.K. (comp.)

    1990-07-01

    The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned.

  17. Recent advances toward a general purpose linear-scaling quantum force field.

    Science.gov (United States)

    Giese, Timothy J; Huang, Ming; Chen, Haoyuan; York, Darrin M

    2014-09-16

    Conspectus: There is a need in the molecular simulation community to develop new quantum mechanical (QM) methods that can be routinely applied to the simulation of large molecular systems in complex, heterogeneous condensed phase environments. Although conventional methods, such as the hybrid quantum mechanical/molecular mechanical (QM/MM) method, are adequate for many problems, there remain other applications that demand a fully quantum mechanical approach. QM methods are generally required in applications that involve changes in electronic structure, such as when chemical bond formation or cleavage occurs, when molecules respond to one another through polarization or charge transfer, or when matter interacts with electromagnetic fields. A full QM treatment, rather than QM/MM, is necessary when these features present themselves over a wide spatial range that, in some cases, may span the entire system. Specific examples include the study of catalytic events that involve delocalized changes in chemical bonds, charge transfer, or extensive polarization of the macromolecular environment; drug discovery applications, where the wide range of nonstandard residues and protonation states are challenging to model with purely empirical MM force fields; and the interpretation of spectroscopic observables. Unfortunately, the enormous computational cost of conventional QM methods limits their practical application to small systems. Linear-scaling electronic structure methods (LSQMs) make possible the calculation of large systems but are still too computationally intensive to be applied with the degree of configurational sampling often required to make meaningful comparison with experiment. In this work, we present advances in the development of a quantum mechanical force field (QMFF) suitable for application to biological macromolecules and condensed phase simulations. QMFFs leverage the benefits provided by the LSQM and QM/MM approaches to produce a fully QM method that is able to ...

  18. Challenges in scaling NLO generators to leadership computers

    Science.gov (United States)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

    Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even using all 16 cores. We will describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run-times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.

  19. DISTING: A web application for fast algorithmic computation of alternative indistinguishable linear compartmental models.

    Science.gov (United States)

    Davidson, Natalie R; Godfrey, Keith R; Alquaddoomi, Faisal; Nola, David; DiStefano, Joseph J

    2017-05-01

    We describe and illustrate use of DISTING, a novel web application for computing alternative structurally identifiable linear compartmental models that are input-output indistinguishable from a postulated linear compartmental model. Several computer packages are available for analysing the structural identifiability of such models, but DISTING is the first to be made available for assessing indistinguishability. The computational algorithms embedded in DISTING are based on advanced versions of established geometric and algebraic properties of linear compartmental models, embedded in a user-friendly graphic model user interface. Novel computational tools greatly speed up the overall procedure. These include algorithms for Jacobian matrix reduction, submatrix rank reduction, and parallelization of candidate rank computations in symbolic matrix analysis. The application of DISTING to three postulated models with respectively two, three and four compartments is given. The 2-compartment example is used to illustrate the indistinguishability problem; the original (unidentifiable) model is found to have two structurally identifiable models that are indistinguishable from it. The 3-compartment example has three structurally identifiable indistinguishable models. It is found from DISTING that the four-compartment example has five structurally identifiable models indistinguishable from the original postulated model. This example shows that care is needed when dealing with models that have two or more compartments which are neither perturbed nor observed, because the numbering of these compartments may be arbitrary. DISTING is universally and freely available via the Internet. It is easy to use and circumvents tedious and complicated algebraic analysis previously done by hand. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Development of a computational algorithm for the linearization of decay and transmutation chains

    International Nuclear Information System (INIS)

    Cruz L, C. A.; Francois L, J. L.

    2017-09-01

    One of the most widely used methodologies for solving the Bateman equations in burnup problems is the TTA (Transmutation Trajectory Analysis) method. In this method, a decay network is broken down into linear elements known as trajectories, through a process known as linearization. In this work an alternative algorithm to find and construct these trajectories is presented, which considers three aspects of linearization: the a priori information about the elements that make up the decay and transmutation network, the use of a new notation, and the functions for the treatment of text strings (which are common in most programming languages). One of the main advantages of the algorithm is that it can condense the information of a decay and transmutation network into only two vectors. From these it is possible to determine how many linear chains can be extracted from the network and even their length (in case they are not cyclic). Unlike the Depth First Search method, which is widely used for the linearization process, the method proposed in the present work does not have a backward routine and instead uses a compilation process, since it completes chain fragments instead of going back to the beginning of the trajectories. The developed algorithm can be applied in a general way to information search and to the linearization of the computational data structures known as trees. It can also be applied to engineering problems where one seeks to calculate the concentration of some substance as a function of time, starting from linear balance differential equations. (Author)
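
    For contrast with the compilation-style algorithm described above, the conventional linearization can be pictured as a straightforward depth-first enumeration of trajectories. The Python sketch below does exactly that on a made-up four-nuclide network (not actual decay data); the method proposed in the abstract arrives at the same chains without the backward routine.

    ```python
    # Illustrative acyclic decay/transmutation network: parent -> list of daughters.
    network = {
        "A": ["B", "C"],
        "B": ["D"],
        "C": ["D"],
        "D": [],          # stable end point
    }

    def linear_chains(network, start):
        """Enumerate every linear chain (trajectory) reachable from `start`."""
        chains = []

        def walk(node, path):
            path = path + [node]
            daughters = network.get(node, [])
            if not daughters:
                chains.append(path)
                return
            for d in daughters:
                walk(d, path)

        walk(start, [])
        return chains

    for chain in linear_chains(network, "A"):
        print(" -> ".join(chain))   # A -> B -> D  and  A -> C -> D
    ```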

  1. A Non-Linear Digital Computer Model Requiring Short Computation Time for Studies Concerning the Hydrodynamics of the BWR

    Energy Technology Data Exchange (ETDEWEB)

    Reisch, F; Vayssier, G

    1969-05-15

    This non-linear model serves as one of the blocks in a series of codes to study the transient behaviour of BWR or PWR type reactors. This program is intended to be the hydrodynamic part of the BWR core representation or the hydrodynamic part of the PWR heat exchanger secondary side representation. The equations have been prepared for the CSMP digital simulation language. By using the most suitable integration routine available, the ratio of simulation time to real time is about one on an IBM 360/75 digital computer. Use of the slightly different language DSL/40 on an IBM 7044 computer takes about four times longer. The code has been tested against the Eindhoven loop with satisfactory agreement.

  2. High-speed linear optics quantum computing using active feed-forward.

    Science.gov (United States)

    Prevedel, Robert; Walther, Philip; Tiefenbacher, Felix; Böhi, Pascal; Kaltenbaek, Rainer; Jennewein, Thomas; Zeilinger, Anton

    2007-01-04

    As information carriers in quantum computing, photonic qubits have the advantage of undergoing negligible decoherence. However, the absence of any significant photon-photon interaction is problematic for the realization of non-trivial two-qubit gates. One solution is to introduce an effective nonlinearity by measurements resulting in probabilistic gate operations. In one-way quantum computation, the random quantum measurement error can be overcome by applying a feed-forward technique, such that the future measurement basis depends on earlier measurement results. This technique is crucial for achieving deterministic quantum computation once a cluster state (the highly entangled multiparticle state on which one-way quantum computation is based) is prepared. Here we realize a concatenated scheme of measurement and active feed-forward in a one-way quantum computing experiment. We demonstrate that, for a perfect cluster state and no photon loss, our quantum computation scheme would operate with good fidelity and that our feed-forward components function with very high speed and low error for detected photons. With present technology, the individual computational step (in our case the individual feed-forward cycle) can be operated in less than 150 ns using electro-optical modulators. This is an important result for the future development of one-way quantum computers, whose large-scale implementation will depend on advances in the production and detection of the required highly entangled cluster states.

  3. Computerized implementation of higher-order electron-correlation methods and their linear-scaling divide-and-conquer extensions.

    Science.gov (United States)

    Nakano, Masahiko; Yoshikawa, Takeshi; Hirata, So; Seino, Junji; Nakai, Hiromi

    2017-11-05

    We have implemented linear-scaling divide-and-conquer (DC)-based higher-order coupled-cluster (CC) and Møller-Plesset perturbation theories (MPPT), as well as their combinations, automatically by means of the tensor contraction engine, which is a computerized symbolic algebra system. The DC-based energy expressions of the standard CC and MPPT methods and of the CC methods augmented with a perturbation correction were proposed for up to high excitation orders [e.g., CCSDTQ, MP4, and CCSD(2)_TQ]. The numerical assessment for hydrogen halide chains, polyene chains, and the first coordination sphere (C1) model of photoactive yellow protein has revealed that the DC-based correlation methods provide reliable correlation energies with significantly less computational cost than that of the conventional implementations. © 2017 Wiley Periodicals, Inc.

  4. Linear and Nonlinear Optical Properties of Micrometer-Scale Gold Nanoplates

    International Nuclear Information System (INIS)

    Liu Xiao-Lan; Peng Xiao-Niu; Yang Zhong-Jian; Li Min; Zhou Li

    2011-01-01

    Micrometer-scale gold nanoplates have been synthesized in high yield through a polyol process. The morphology, crystal structure and linear optical extinction of the gold nanoplates have been characterized. These gold nanoplates are single-crystalline with triangular, truncated triangular and hexagonal shapes, exhibiting strong surface plasmon resonance (SPR) extinction in the visible and near-infrared (NIR) region. The linear optical properties of gold nanoplates are also investigated by theoretical calculations. We further investigate the nonlinear optical properties of the gold nanoplates in solution by Z-scan technique. The nonlinear absorption (NLA) coefficient and nonlinear refraction (NLR) index are measured to be 1.18×10² cm/GW and −1.04×10⁻³ cm²/GW, respectively. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  5. pulver: an R package for parallel ultra-rapid p-value computation for linear regression interaction terms.

    Science.gov (United States)

    Molnos, Sophie; Baumbach, Clemens; Wahl, Simone; Müller-Nurasyid, Martina; Strauch, Konstantin; Wang-Sattler, Rui; Waldenberger, Melanie; Meitinger, Thomas; Adamski, Jerzy; Kastenmüller, Gabi; Suhre, Karsten; Peters, Annette; Grallert, Harald; Theis, Fabian J; Gieger, Christian

    2017-09-29

    Genome-wide association studies allow us to understand the genetics of complex diseases. Human metabolism provides information about the disease-causing mechanisms, so it is usual to investigate the associations between genetic variants and metabolite levels. However, only considering genetic variants and their effects on one trait ignores the possible interplay between different "omics" layers. Existing tools only consider single-nucleotide polymorphism (SNP)-SNP interactions, and no practical tool is available for large-scale investigations of the interactions between pairs of arbitrary quantitative variables. We developed an R package called pulver to compute p-values for the interaction term in a very large number of linear regression models. Comparisons based on simulated data showed that pulver is much faster than the existing tools. This is achieved by using the correlation coefficient to test the null-hypothesis, which avoids the costly computation of inversions. Additional tricks are a rearrangement of the order, when iterating through the different "omics" layers, and implementing this algorithm in the fast programming language C++. Furthermore, we applied our algorithm to data from the German KORA study to investigate a real-world problem involving the interplay among DNA methylation, genetic variants, and metabolite levels. The pulver package is a convenient and rapid tool for screening huge numbers of linear regression models for significant interaction terms in arbitrary pairs of quantitative variables. pulver is written in R and C++, and can be downloaded freely from CRAN at https://cran.r-project.org/web/packages/pulver/ .
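
    To make concrete what pulver accelerates, here is a hedged, deliberately naive reference computation of a single interaction-term p-value in Python with statsmodels, on synthetic data with hypothetical variable names; pulver's contribution is performing the equivalent test for millions of variable pairs through the correlation-coefficient shortcut instead of fitting one regression at a time.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 500
    snp = rng.integers(0, 3, n).astype(float)   # e.g. genotype dosage
    meth = rng.standard_normal(n)               # e.g. DNA methylation level
    y = 0.5 * snp + 0.3 * meth + 0.2 * snp * meth + rng.standard_normal(n)

    # y ~ intercept + snp + meth + snp:meth
    X = sm.add_constant(np.column_stack([snp, meth, snp * meth]))
    fit = sm.OLS(y, X).fit()
    print(fit.pvalues[3])                       # p-value of the interaction term
    ```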

  6. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Charara, Ali; Keyes, David E.

    2018-01-01

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorization in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.

  7. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir

    2018-02-24

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorization in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
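
    A hedged Python/NumPy sketch of the core idea behind the tile low-rank format used by HiCMA, namely compressing data-sparse off-diagonal tiles with a truncated SVD; the kernel, tile size, and tolerance below are illustrative and this is not HiCMA code.

    ```python
    import numpy as np

    def compress_tile(tile, tol):
        """Truncated SVD compression of one off-diagonal tile: tile ~= U @ V."""
        U, s, Vt = np.linalg.svd(tile, full_matrices=False)
        k = max(1, int(np.sum(s > tol * s[0])))   # numerical rank at tolerance tol
        return U[:, :k] * s[:k], Vt[:k, :]        # 2*nb*k numbers instead of nb*nb

    rng = np.random.default_rng(0)
    nb = 256
    # A smooth kernel on two well-separated point clusters yields a data-sparse
    # (numerically low-rank) tile, as in covariance matrices.
    x = rng.uniform(0.0, 1.0, nb)
    y = rng.uniform(5.0, 6.0, nb)
    tile = 1.0 / np.abs(x[:, None] - y[None, :])
    U, V = compress_tile(tile, 1e-8)
    print(U.shape, V.shape, np.linalg.norm(tile - U @ V) / np.linalg.norm(tile))
    ```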

  8. Universal Linear Scaling of Permeability and Time for Heterogeneous Fracture Dissolution

    Science.gov (United States)

    Wang, L.; Cardenas, M. B.

    2017-12-01

    Fractures are dynamically changing over geological time scale due to mechanical deformation and chemical reactions. However, the latter mechanism remains poorly understood with respect to the expanding fracture, which leads to a positively coupled flow and reactive transport processes, i.e., as a fracture expands, so does its permeability (k) and thus flow and reactive transport processes. To unravel this coupling, we consider a self-enhancing process that leads to fracture expansion caused by acidic fluid, i.e., CO2-saturated brine dissolving calcite fracture. We rigorously derive a theory, for the first time, showing that fracture permeability increases linearly with time [Wang and Cardenas, 2017]. To validate this theory, we resort to direct simulation that solves the Navier-Stokes and Advection-Diffusion equations with a moving mesh according to the dynamic dissolution process in two-dimensional (2D) fractures. We find that k first increases slowly until the dissolution front breaks through the outflow boundary, at which point we observe a rapid k increase, i.e., the linear time-dependence of k occurs. The theory agrees well with numerical observations across a broad range of Peclet and Damkohler numbers through homogeneous and heterogeneous 2D fractures. Moreover, the theory of a linear scaling relationship between k and time matches well with experimental observations of three-dimensional (3D) fractures' dissolution. To further attest to our theory's universality for 3D heterogeneous fractures across a broad range of roughness and correlation length of the aperture field, we develop a depth-averaged model that simulates the process-based reactive transport. The simulation results show that, regardless of a wide variety of dissolution patterns such as the presence of dissolution fingers and preferential dissolution paths, the linear scaling relationship between k and time holds. Our theory sheds light on predicting permeability evolution in many geological settings when the self ...

  9. Large scale particle simulations in a virtual memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.

    1983-01-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence reduce the required computing time. (orig.)
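
    A minimal Python/NumPy sketch of the locality idea described above: occasionally sorting particles by cell index so that charge accumulation and particle pushing walk memory nearly sequentially. The array sizes and fields are illustrative, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_particles, n_cells = 1_000_000, 1024
    x = rng.uniform(0.0, 1.0, n_particles)       # particle positions
    v = rng.standard_normal(n_particles)         # particle velocities

    cell = np.minimum((x * n_cells).astype(np.int64), n_cells - 1)
    order = np.argsort(cell, kind="stable")      # a nominal, occasional re-sort
    x, v, cell = x[order], v[order], cell[order]

    # Charge accumulation now touches memory almost sequentially.
    rho = np.bincount(cell, minlength=n_cells)
    print(rho[:5])
    ```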

  10. Real time computer control of a nonlinear Multivariable System via Linearization and Stability Analysis

    International Nuclear Information System (INIS)

    Raza, K.S.M.

    2004-01-01

    This paper demonstrates that if a complicated nonlinear, non-square, state-coupled multivariable system is smartly linearized and subjected to a thorough stability analysis, then the design objectives can be achieved with a controller that is quite simple (in terms of resource usage and execution time) and very efficient (in terms of robustness). A further aim is to implement this controller on a computer in a real-time environment. Therefore, a nonlinear mathematical model of the system is first derived. The multivariable system is then decoupled, and linearization and stability analysis techniques are employed to develop a linearized and mathematically sound control law. Nonlinearities such as saturation in the actuators are also catered for. The controller is then discretized using Runge-Kutta integration. Finally, the discretized control law is programmed on a computer in a real-time environment. The program is written in RT-Linux using GNU C for the real-time realization of the control scheme. The real-time processes, such as sampling and controlled actuation, and the non-real-time processes, such as the graphical user interface and display, are programmed as different tasks. The issue of inter-process communication between real-time and non-real-time tasks is addressed carefully. The results of this research pursuit are presented graphically. (author)
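
    A hedged Python/NumPy sketch of the final step described above, a linearized state-feedback law stepped with Runge-Kutta integration; the plant matrices, gain, and sampling period are invented for illustration and are not the system studied in the paper.

    ```python
    import numpy as np

    # Hypothetical linearized plant  dx/dt = A x + B u  with state feedback u = -K x.
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    B = np.array([[0.0], [1.0]])
    K = np.array([[4.0, 2.0]])     # gain assumed to come from the off-line stability analysis
    dt = 0.01                      # real-time sampling period

    def f(x):
        u = -K @ x                 # saturate here if actuator limits apply
        return A @ x + B @ u

    def rk4_step(x, dt):
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    x = np.array([[1.0], [0.0]])
    for _ in range(500):           # 5 s of simulated closed-loop response
        x = rk4_step(x, dt)
    print(x.ravel())               # state decays toward the origin
    ```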

  11. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  12. Elongation cutoff technique armed with quantum fast multipole method for linear scaling.

    Science.gov (United States)

    Korchowiec, Jacek; Lewandowski, Jakub; Makowski, Marcin; Gu, Feng Long; Aoki, Yuriko

    2009-11-30

    A linear-scaling implementation of the elongation cutoff technique (ELG/C) that speeds up Hartree-Fock (HF) self-consistent field calculations is presented. The cutoff method avoids the known bottleneck of the conventional HF scheme, that is, diagonalization, because it operates within the low dimension subspace of the whole atomic orbital space. The efficiency of ELG/C is illustrated for two model systems. The obtained results indicate that the ELG/C is a very efficient sparse matrix algebra scheme. Copyright 2009 Wiley Periodicals, Inc.

  13. A critical oscillation constant as a variable of time scales for half-linear dynamic equations

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel

    2010-01-01

    Roč. 60, č. 2 (2010), s. 237-256 ISSN 0139-9918 R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : dynamic equation * time scale * half-linear equation * (non)oscillation criteria * Hille-Nehari criteria * Kneser criteria * critical constant * oscillation constant * Hardy inequality Subject RIV: BA - General Mathematics Impact factor: 0.316, year: 2010 http://link.springer.com/article/10.2478%2Fs12175-010-0009-7

  14. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    Directory of Open Access Journals (Sweden)

    Xiaocui Wu

    2015-02-01

    Full Text Available The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at half-hourly and daily scale, while the overall performance of both TL-LUEn and TL-LUE were significantly better (p < 0.0001) than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and TL-LUE and MOD17 became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types while TL-LUE outperformed MOD17 slightly for all these non-forest types at daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by the correction of the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale while TL-LUE could be regionally used at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.

  15. X-ray beam hardening correction for measuring density in linear accelerator industrial computed tomography

    International Nuclear Information System (INIS)

    Zhou Rifeng; Wang Jue; Chen Weimin

    2009-01-01

    Because X-ray attenuation is approximately proportional to material density, it is possible to measure the inner density accurately from Industrial Computed Tomography (ICT) images. In practice, however, a number of factors, including the non-linear effects of beam hardening and diffuse scattered radiation, complicate the quantitative measurement of density variations in materials. This paper builds on the linearization method of beam hardening correction and uses polynomial fitting coefficients, obtained from the curvature of polychromatic beam data for iron, to fit other materials. Through theoretical deduction, the paper proves that the density measurement error is less than 2% if pre-filters are used to confine the spectrum of the linear accelerator mainly to the range 0.3 MeV to 3 MeV. An experiment was set up on an ICT system with a 9 MeV electron linear accelerator. The result is satisfactory. This technique makes the beam hardening correction easy and simple, and it is valuable for ICT density measurement and for using the CT images to recognize materials. (authors)
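
    A hedged Python/NumPy sketch of the linearization idea: fit a polynomial that maps measured polychromatic attenuation back to an equivalent path length, using made-up calibration numbers rather than the paper's iron data, and apply it to projection values before reconstruction.

    ```python
    import numpy as np

    # Hypothetical step-wedge calibration: measured polychromatic attenuation
    # ln(I0/I) versus known thickness (cm); the sub-linear shape is the cupping
    # signature of beam hardening.
    thickness  = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
    poly_atten = np.array([0.0, 0.42, 0.80, 1.48, 2.05, 2.55, 2.98])

    coeff = np.polyfit(poly_atten, thickness, deg=3)   # linearization polynomial

    def correct(p):
        """Map measured attenuation to an equivalent linear path length."""
        return np.polyval(coeff, p)

    print(correct(1.48))   # ~2.0, i.e. the hardening is removed before reconstruction
    ```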

  16. Cosmological large-scale structures beyond linear theory in modified gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bernardeau, Francis; Brax, Philippe, E-mail: francis.bernardeau@cea.fr, E-mail: philippe.brax@cea.fr [CEA, Institut de Physique Théorique, 91191 Gif-sur-Yvette Cédex (France)

    2011-06-01

    We consider the effect of modified gravity on the growth of large-scale structures at second order in perturbation theory. We show that modified gravity models changing the linear growth rate of fluctuations are also bound to change, although mildly, the mode coupling amplitude in the density and reduced velocity fields. We present explicit formulae which describe this effect. We then focus on models of modified gravity involving a scalar field coupled to matter, in particular chameleons and dilatons, where it is shown that there exists a transition scale around which the existence of an extra scalar degree of freedom induces significant changes in the coupling properties of the cosmic fields. We obtain the amplitude of this effect for realistic dilaton models at the tree-order level for the bispectrum, finding them to be comparable in amplitude to those obtained in the DGP and f(R) models.

  17. Scaling versus asymptotic scaling in the non-linear σ-model in 2D. Continuum version

    International Nuclear Information System (INIS)

    Flyvbjerg, H.

    1990-01-01

    The two-point function of the O(N)-symmetric non-linear σ-model in two dimensions is large-N expanded and renormalized, neglecting terms of O(1/N²). At finite cut-off, universal, analytical expressions relate the magnetic susceptibility and the dressed mass to the bare coupling. Removing the cut-off, a similar relation gives the renormalized coupling as a function of the mass gap. In the weak-coupling limit these relations reproduce the results of renormalization group improved weak-coupling perturbation theory to two-loop order. The constant left unknown, when the renormalization group is integrated, is determined here. The approach to asymptotic scaling is studied for various values of N. (orig.)

  18. New computational method for non-LTE, the linear response matrix

    International Nuclear Information System (INIS)

    Fournier, K.B.; Grasiani, F.R.; Harte, J.A.; Libby, S.B.; More, R.M.; Zimmerman, G.B.

    1998-01-01

    My coauthors have done extensive theoretical and computational calculations that lay the groundwork for a linear response matrix method to calculate non-LTE (local thermodynamic equilibrium) opacities. I will briefly review some of their work and list references. Then I will describe what has been done to utilize this theory to create a computational package that rapidly calculates mild non-LTE emission and absorption opacities suitable for use in hydrodynamic calculations. The opacities are obtained by performing table look-ups on data that has been generated with a non-LTE package. This scheme is currently under development. We can see that it offers a significant computational speed advantage. It is suitable for mild non-LTE, quasi-steady conditions. And it offers a new insertion path for high-quality non-LTE data. Currently, the linear response matrix data file is created using XSN. These data files could be generated by more detailed and rigorous calculations without changing any part of the implementation in the hydro code. The scheme is running in Lasnex and is being tested and developed.

  19. Computation of the Short-Time Linear Canonical Transform with Dual Window

    Directory of Open Access Journals (Sweden)

    Lei Huang

    2017-01-01

    Full Text Available The short-time linear canonical transform (STLCT), which maps the time domain signal into the joint time and frequency domain, has recently attracted some attention in the area of signal processing. However, its applications are still limited due to the fact that the selection of coefficients of the short-time linear canonical series (STLCS) is not unique, because the time and frequency elementary functions (together known as the basis functions of the STLCS) do not constitute an orthogonal basis. To solve this problem, this paper investigates a dual window solution. First, the nonorthogonality problem suffered by the original window is resolved by an orthogonality condition with a dual window. Then, based on the obtained condition, a dual window computation approach for the GT is extended to the STLCS. In addition, simulations verify the validity of the proposed condition and solutions. Furthermore, some possible applied directions are discussed.

  20. Z-score linear discriminant analysis for EEG based brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    Full Text Available Linear discriminant analysis (LDA) is one of the most popular classification algorithms for brain-computer interfaces (BCI). LDA assumes a Gaussian distribution of the data, with equal covariance matrices for the concerned classes; however, this assumption does not usually hold in actual BCI applications, where heteroscedastic class distributions are commonly observed. This paper proposes an enhanced version of LDA, namely z-score linear discriminant analysis (Z-LDA), which introduces a new decision boundary definition strategy to handle heteroscedastic class distributions. Z-LDA defines the decision boundary through a z-score that uses both the mean and the standard deviation of the projected data, so the boundary can adaptively adjust to fit heteroscedastic distributions. Results derived from both a simulation dataset and two actual BCI datasets consistently show that Z-LDA achieves significantly higher average classification accuracies than conventional LDA, indicating the superiority of the newly proposed decision boundary definition strategy.
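    A minimal Python sketch of the z-score decision rule described in this record, assuming a two-class problem and an already-fitted LDA projection vector w; the toy data, the projection vector and all names are illustrative, not taken from the paper:

```python
import numpy as np

def zlda_predict(X, w, class_stats):
    """Assign each sample to the class whose projected z-score magnitude is smallest.

    X           : (n_samples, n_features) data matrix
    w           : (n_features,) LDA projection vector (assumed already fitted)
    class_stats : dict {label: (mean, std)} of the projected training data per class
    """
    z = X @ w                                    # project onto the discriminant axis
    labels = list(class_stats.keys())
    # |z-score| with respect to each class; pick the class with the smallest value
    scores = np.stack([np.abs((z - m) / s) for m, s in class_stats.values()], axis=1)
    return np.array(labels)[np.argmin(scores, axis=1)]

# toy heteroscedastic two-class problem (one tight class, one broad class)
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 0.5, size=(100, 2))
X1 = rng.normal(2.0, 2.0, size=(100, 2))
w = np.array([1.0, 1.0]) / np.sqrt(2)            # stand-in for a fitted LDA direction
stats = {0: ((X0 @ w).mean(), (X0 @ w).std()),
         1: ((X1 @ w).mean(), (X1 @ w).std())}
pred = zlda_predict(np.vstack([X0, X1]), w, stats)
true = np.array([0] * 100 + [1] * 100)
print((pred == true).mean())                     # accuracy on the toy data
```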

  1. Numerical computation of the linear stability of the diffusion model for crystal growth simulation

    Energy Technology Data Exchange (ETDEWEB)

    Yang, C.; Sorensen, D.C. [Rice Univ., Houston, TX (United States); Meiron, D.I.; Wedeman, B. [California Institute of Technology, Pasadena, CA (United States)

    1996-12-31

    We consider a computational scheme for determining the linear stability of a diffusion model arising from the simulation of crystal growth. The process of a needle crystal solidifying into an undercooled liquid can be described by a pair of diffusion equations with appropriate initial and boundary conditions, in which U_t and U_a denote the temperatures of the liquid and solid, respectively, and α represents the thermal diffusivity. At the solid-liquid interface, the motion of the interface, denoted by r, and the temperature field are related by a conservation relation involving the unit outward-pointing normal n to the interface. A basic stationary solution to this free boundary problem can be obtained by writing the equations of motion in a moving frame and transforming the problem to parabolic coordinates. This is known as the Ivantsov parabola solution. Linear stability theory applied to this stationary solution gives rise to an eigenvalue problem.

  2. Computational biology in the cloud: methods and new insights from computing at scale.

    Science.gov (United States)

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available.

  3. Linear and Non-linear Numerical Sea-keeping Evaluation of a Fast Monohull Ferry Compared to Full Scale Measurements

    DEFF Research Database (Denmark)

    Wang, Zhaohui; Folsø, Rasmus; Bondini, Francesca

    1999-01-01

    , full-scale measurements have been performed on board a 128 m monohull fast ferry. This paper deals with the results from these full-scale measurements. The primary results considered are pitch motion, midship vertical bending moment and vertical acceleration at the bow. Previous comparisons between...

  4. A multiple-scaling method for the computation of threaded structures

    International Nuclear Information System (INIS)

    Andrieux, S.; Leger, A.

    1989-01-01

    The numerical computation of threaded structures usually leads to very large finite element problems. It is therefore very difficult to carry out parametric studies, especially in non-linear cases involving plasticity or unilateral contact conditions. Nevertheless, these parametric studies are essential in many industrial problems, for instance for the evaluation of various repair processes for the closure studs of PWRs. It is well known that such repairs generally involve several modifications of the thread geometry, of the number of active threads, of the flange clamping conditions, and so on. This paper is devoted to the description of a two-scale method which easily allows parametric studies. The main idea of this method consists of dividing the problem into a global part and a local part. The local problem is solved by the finite element method on the precise geometry of the thread for a set of elementary loadings. The global problem is formulated at the gudgeon scale and is reduced to a one-dimensional one. The resolution of this global problem involves an insignificant computational cost. A post-processing step then gives the stress field at the thread scale anywhere in the assembly. After recalling some principles of the two-scale approach, the method is described. Validation by comparison with a direct finite element computation and some further applications are presented

  5. Scaling law for noise variance and spatial resolution in differential phase contrast computed tomography

    International Nuclear Information System (INIS)

    Chen Guanghong; Zambelli, Joseph; Li Ke; Bevins, Nicholas; Qi Zhihua

    2011-01-01

    Purpose: The noise variance versus spatial resolution relationship in differential phase contrast (DPC) projection imaging and computed tomography (CT) is derived and compared to that of conventional absorption-based x-ray projection imaging and CT. Methods: The scaling law for DPC-CT is theoretically derived and subsequently validated with phantom results from an experimental Talbot-Lau interferometer system. Results: For the DPC imaging method, the noise variance in the differential projection images follows the same inverse-square law with spatial resolution as in conventional absorption-based x-ray projections. However, both in theory and in experimental results, the noise variance in DPC-CT scales with spatial resolution following an inverse linear relationship at fixed slice thickness. Conclusions: The scaling law in DPC-CT implies a smaller noise, and therefore dose, penalty for moving to higher spatial resolutions compared to conventional absorption-based CT in order to maintain the same contrast-to-noise ratio.

  6. Efficient reconfigurable hardware architecture for accurately computing success probability and data complexity of linear attacks

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Kavun, Elif Bilge; Tischhauser, Elmar

    2012-01-01

    An accurate estimation of the success probability and data complexity of linear cryptanalysis is a fundamental question in symmetric cryptography. In this paper, we propose an efficient reconfigurable hardware architecture to compute the success probability and data complexity of Matsui's Algorithm...... block lengths ensures that any empirical observations are not due to differences in statistical behavior for artificially small block lengths. Rather surprisingly, we observed in previous experiments a significant deviation between the theory and practice for Matsui's Algorithm 2 for larger block sizes...

  7. Quadratic-linear model in fractionated cancer radiotherapy. Equations for a computer program

    International Nuclear Information System (INIS)

    Burgos, D.; Bullejos, J.; Garcia Puche, J.L.; Pedraza, V.

    1990-01-01

    Knowledge of the equivalence between different treatment schemes producing the same iso-effect is essential in clinical cancer radiotherapy. For this purpose, the set of ideas derived from the quadratic-linear (Q-L) model proposed for analysing the cell survival curve under radiation is very useful. The iso-effect caused by several irradiation schedules is defined through the extrapolated tolerance dose (ETD). Because the equations for the ETD are complex, a computer program has been developed. In this paper, the iso-effect equations for well-defined therapeutic situations and the flow diagram proposed for their resolution are studied. (Author)
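    The record does not reproduce the iso-effect equations themselves; the sketch below therefore uses the standard linear-quadratic extrapolated (biologically effective) dose formula, ETD = n·d·(1 + d/(α/β)), as a stand-in to show how a small program compares fractionation schemes. The α/β value and the schedules are assumptions, not values from the paper.

```python
ALPHA_BETA = 3.0   # Gy, assumed alpha/beta ratio (e.g. late-responding tissue)

def etd(n, d, ab=ALPHA_BETA):
    """Extrapolated (biologically effective) dose for n fractions of d Gy each."""
    return n * d * (1.0 + d / ab)

def isoeffective_fractions(n1, d1, d2, ab=ALPHA_BETA):
    """Number of fractions of size d2 giving the same iso-effect as n1 fractions of d1."""
    return etd(n1, d1, ab) / (d2 * (1.0 + d2 / ab))

print(etd(30, 2.0))                           # 100.0 Gy_3 for a 30 x 2 Gy schedule
print(isoeffective_fractions(30, 2.0, 3.0))   # ~16.7 fractions of 3 Gy for the same iso-effect
```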

  8. Superconducting resonators as beam splitters for linear-optics quantum computation.

    Science.gov (United States)

    Chirolli, Luca; Burkard, Guido; Kumar, Shwetank; Divincenzo, David P

    2010-06-11

    We propose and analyze a technique for producing a beam-splitting quantum gate between two modes of a ring-resonator superconducting cavity. The cavity has two integrated superconducting quantum interference devices (SQUIDs) that are modulated by applying an external magnetic field. The gate is accomplished by applying a radio frequency pulse to one of the SQUIDs at the difference of the two mode frequencies. Departures from perfect beam splitting only arise from corrections to the rotating wave approximation; an exact calculation gives a fidelity of >0.9992. Our construction completes the toolkit for linear-optics quantum computing in circuit quantum electrodynamics.

  9. Computer codes for three dimensional mass transport with non-linear sorption

    International Nuclear Information System (INIS)

    Noy, D.J.

    1985-03-01

    The report describes the mathematical background and data input to finite element programs for three dimensional mass transport in a porous medium. The transport equations are developed and sorption processes are included in a general way so that non-linear equilibrium relations can be introduced. The programs are described and a guide given to the construction of the required input data sets. Concluding remarks indicate that the calculations require substantial computer resources and suggest that comprehensive preliminary analysis with lower dimensional codes would be important in the assessment of field data. (author)

  10. Large Scale Computing and Storage Requirements for High Energy Physics

    International Nuclear Information System (INIS)

    Gerber, Richard A.; Wasserman, Harvey

    2010-01-01

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  11. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years

  12. Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud

    Directory of Open Access Journals (Sweden)

    A. Paulin Florence

    2016-01-01

    Full Text Available Cloud computing is a new technology which supports resource sharing on a “Pay as you go” basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is a part of IaaS, and all computational requests have to be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme is also used for this purpose. In this paper we devise a methodology which analyzes the behavior of a given cloud request and identifies the type of algorithm involved. Once the type of algorithm is identified, its time complexity is calculated from its asymptotic notation. Using a best-fit strategy, the appropriate host is identified and the incoming job is allocated to it. From the estimated time complexity, the required clock frequency of the host is determined. The CPU frequency is then scaled up or down accordingly using the DVFS scheme, enabling energy savings of up to 55% of the total power consumption.
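    An illustrative Python sketch of the frequency-selection step described above: estimate an operation count from the identified asymptotic complexity, derive the clock frequency needed to meet a deadline, and pick the lowest DVFS step that satisfies it. The complexity table, the cycles-per-operation factor and the frequency steps are all assumptions, not values from the paper.

```python
import math

def required_frequency(n, complexity, cycles_per_op, deadline_s):
    """Clock frequency (Hz) needed to finish an n-sized job within the deadline."""
    ops = {"O(n)": n, "O(n log n)": n * math.log2(n), "O(n^2)": n ** 2}[complexity]
    return ops * cycles_per_op / deadline_s

def pick_dvfs_step(f_required, steps_hz):
    """Lowest available DVFS frequency step that still meets the requirement."""
    feasible = [f for f in sorted(steps_hz) if f >= f_required]
    return feasible[0] if feasible else max(steps_hz)   # fall back to the fastest step

f_req = required_frequency(n=10**6, complexity="O(n log n)", cycles_per_op=10, deadline_s=0.5)
print(pick_dvfs_step(f_req, steps_hz=[0.8e9, 1.2e9, 1.6e9, 2.2e9]))   # lowest feasible step
```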

  13. Large-scale parallel genome assembler over cloud computing environment.

    Science.gov (United States)

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of the traditional HPC cluster.

  14. An algebraic approach to linear-optical schemes for deterministic quantum computing

    International Nuclear Information System (INIS)

    Aniello, Paolo; Cagli, Ruben Coen

    2005-01-01

    Linear-optical passive (LOP) devices and photon counters are sufficient to implement universal quantum computation with single photons, and particular schemes have already been proposed. In this paper we discuss the link between the algebraic structure of LOP transformations and quantum computing. We first show how to decompose the Fock space of N optical modes in finite-dimensional subspaces that are suitable for encoding strings of qubits and invariant under LOP transformations (these subspaces are related to the spaces of irreducible unitary representations of U(N)). Next we show how to design in algorithmic fashion LOP circuits which implement any quantum circuit deterministically. We also present some simple examples, such as the circuits implementing a cNOT gate and a Bell state generator/analyser

  15. Large-scale dynamo action due to α fluctuations in a linear shear flow

    Science.gov (United States)

    Sridhar, S.; Singh, Nishant K.

    2014-12-01

    We present a model of large-scale dynamo action in a shear flow that has stochastic, zero-mean fluctuations of the α parameter. This is based on a minimal extension of the Kraichnan-Moffatt model, to include a background linear shear and Galilean-invariant α-statistics. Using the first-order smoothing approximation we derive a linear integro-differential equation for the large-scale magnetic field, which is non-perturbative in the shearing rate S and the α-correlation time τ_α. The white-noise case, τ_α = 0, is solved exactly, and it is concluded that the necessary condition for dynamo action is identical to the Kraichnan-Moffatt model without shear; this is because white noise does not allow for memory effects, whereas shear needs time to act. To explore memory effects we reduce the integro-differential equation to a partial differential equation, valid for slowly varying fields when τ_α is small but non-zero. Seeking exponential modal solutions, we solve the modal dispersion relation and obtain an explicit expression for the growth rate as a function of the six independent parameters of the problem. A non-zero τ_α gives rise to new physical scales, and dynamo action is completely different from the white-noise case; e.g. even weak α fluctuations can give rise to a dynamo. We argue that, at any wavenumber, both Moffatt drift and shear always contribute to increasing the growth rate. Two examples are presented: (a) a Moffatt drift dynamo in the absence of shear and (b) a shear dynamo in the absence of Moffatt drift.

  16. Theory and computation of disturbance invariant sets for discrete-time linear systems

    Directory of Open Access Journals (Sweden)

    Kolmanovsky Ilya

    1998-01-01

    Full Text Available This paper considers the characterization and computation of invariant sets for discrete-time, time-invariant, linear systems with disturbance inputs whose values are confined to a specified compact set but are otherwise unknown. The emphasis is on determining maximal disturbance-invariant sets X that belong to a specified subset Γ of the state space. Such d-invariant sets have important applications in control problems where there are pointwise-in-time state constraints of the form χ(t) ∈ Γ. One purpose of the paper is to unite and extend in a rigorous way disparate results from the prior literature. In addition there are entirely new results. Specific contributions include: exploitation of the Pontryagin set difference to clarify conceptual matters and simplify mathematical developments, special properties of maximal invariant sets and conditions for their finite determination, algorithms for generating concrete representations of maximal invariant sets, practical computational questions, extension of the main results to general Lyapunov stable systems, applications of the computational techniques to the bounding of state and output response. Results on Lyapunov stable systems are applied to the implementation of a logic-based, nonlinear multimode regulator. For plants with disturbance inputs and state-control constraints it enlarges the constraint-admissible domain of attraction. Numerical examples illustrate the various theoretical and computational results.
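    For intuition, here is a minimal Python sketch of the iterative computation of a maximal disturbance-invariant set in the simplest possible setting: a scalar system x⁺ = a·x + w with |w| ≤ w_max and constraint |x| ≤ x_max, where every set is a symmetric interval and the Pontryagin difference reduces to shrinking the interval by w_max. This is only a one-dimensional illustration of the idea, not the polyhedral algorithms of the paper.

```python
def maximal_invariant_interval(a, w_max, x_max, tol=1e-12, max_iter=1000):
    """Half-width b of the maximal disturbance-invariant interval [-b, b] inside
    |x| <= x_max for x+ = a*x + w with |w| <= w_max; returns None if it is empty."""
    b = x_max
    for _ in range(max_iter):
        if b < w_max:                # even x = 0 can be pushed outside: empty set
            return None
        # Pontryagin difference [-b, b] minus [-w_max, w_max], then pre-image under
        # x -> a*x, intersected with the current interval
        b_new = min(b, (b - w_max) / abs(a)) if a != 0 else b
        if abs(b_new - b) < tol:     # finitely determined: the iteration has converged
            return b_new
        b = b_new
    return b

print(maximal_invariant_interval(a=0.5, w_max=0.2, x_max=1.0))   # 1.0: [-1, 1] is already invariant
print(maximal_invariant_interval(a=0.9, w_max=0.3, x_max=1.0))   # None: no invariant set fits in [-1, 1]
```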

  17. Neural Computations in a Dynamical System with Multiple Time Scales.

    Science.gov (United States)

    Mi, Yuanyuan; Lin, Xiaohan; Wu, Si

    2016-01-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what is the computational benefit for the brain to have such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered, which are persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.

  18. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    organization of the brain in continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups...... of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference however poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome...... these computational limitations and apply Bayesian stochastic block models for un-supervised data-driven clustering of whole-brain connectivity in full image resolution. We implement high-performance software that allows us to efficiently apply stochastic blockmodelling with MCMC sampling on large complex networks...

  19. The PAC-MAN model: Benchmark case for linear acoustics in computational physics

    Science.gov (United States)

    Ziegelwanger, Harald; Reiter, Paul

    2017-10-01

    Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well-known example of such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.

  20. Fast and accurate algorithm for the computation of complex linear canonical transforms.

    Science.gov (United States)

    Koç, Aykut; Ozaktas, Haldun M; Hesselink, Lambertus

    2010-09-01

    A fast and accurate algorithm is developed for the numerical computation of the family of complex linear canonical transforms (CLCTs), which represent the input-output relationship of complex quadratic-phase systems. Allowing the linear canonical transform parameters to be complex numbers makes it possible to represent paraxial optical systems that involve complex parameters. These include lossy systems such as Gaussian apertures, Gaussian ducts, or complex graded-index media, as well as lossless thin lenses and sections of free space and any arbitrary combinations of them. Complex-ordered fractional Fourier transforms (CFRTs) are a special case of CLCTs, and therefore a fast and accurate algorithm to compute CFRTs is included as a special case of the presented algorithm. The algorithm is based on decomposition of an arbitrary CLCT matrix into real and complex chirp multiplications and Fourier transforms. The samples of the output are obtained from the samples of the input in approximately N log N time, where N is the number of input samples. A space-bandwidth product tracking formalism is developed to ensure that the number of samples is information-theoretically sufficient to reconstruct the continuous transform, but not unnecessarily redundant.
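    A naive Python sketch of the chirp-multiplication/FFT/chirp-multiplication factorization mentioned above, restricted to real LCT parameters with b > 0 and without the space-bandwidth tracking that the paper's algorithm provides; the sampling choices below are assumptions made only to line the kernel up with the FFT grid.

```python
import numpy as np

def lct_chirp_fft_chirp(f, dt, a, b, d):
    """Naive chirp-FFT-chirp evaluation of a linear canonical transform (real a, b, d; b > 0).

    Approximates F(u) = sqrt(1/(1j*b)) * exp(1j*pi*d*u^2/b)
                        * sum_n exp(-2j*pi*u*t_n/b) * exp(1j*pi*a*t_n^2/b) * f_n * dt
    on the output grid u_m = m * b / (N * dt), chosen so that u_m * t_n / b = m * n / N.
    """
    N = len(f)
    n = np.arange(-N // 2, N - N // 2)          # centered sample indices
    t = n * dt
    u = n * (b / (N * dt))
    g = f * np.exp(1j * np.pi * a * t**2 / b)   # input chirp multiplication
    G = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(g)))   # centered DFT
    return np.sqrt(1 / (1j * b)) * np.exp(1j * np.pi * d * u**2 / b) * G * dt, u

# sanity check: (a, b, d) = (0, 1, 0) is an ordinary Fourier transform up to a constant
# phase, so a unit Gaussian should map to a unit Gaussian in magnitude
t = np.arange(-128, 128) * 0.0625
F, u = lct_chirp_fft_chirp(np.exp(-np.pi * t**2), dt=0.0625, a=0.0, b=1.0, d=0.0)
print(round(abs(F[128]), 3))                    # ~1.0 at u = 0
```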

  1. The Front-End Readout as an Encoder IC for Magneto-Resistive Linear Scale Sensors

    Directory of Open Access Journals (Sweden)

    Trong-Hieu Tran

    2016-09-01

    Full Text Available This study proposes a front-end readout circuit as an encoder chip for magneto-resistance (MR) linear scales. A typical MR sensor consists of two major parts: one is its base structure, also called the magnetic scale, which is embedded with multiple grid MR electrodes, while the other is an “MR reader” stage with magnets inside that moves on the rails of the base. As the stage is in motion, the magnetic interaction between the moving stage and the base causes the magneto-resistances of the grid electrodes to vary. In this study, a front-end readout IC chip is successfully designed and realized to acquire these temporally varying resistances as electrical signals while the stage is in motion. The acquired signals are in fact sinusoids and co-sinusoids, which are further deciphered by the front-end readout circuit via newly designed programmable gain amplifiers (PGAs) and analog-to-digital converters (ADCs). The PGA is particularly designed to amplify the signals up to the full dynamic range and up to 1 MHz. A 12-bit successive approximation register (SAR) ADC for analog-to-digital conversion is designed with a linearity performance of ±1 least significant bit (LSB) over the input range of 0.5–2.5 V peak to peak. The chip was fabricated in the Taiwan Semiconductor Manufacturing Company (TSMC) 0.35-micron complementary metal oxide semiconductor (CMOS) technology for verification, with a chip size of 6.61 mm², while the power consumption is 56 mW from a 5-V power supply. The measured integral non-linearity (INL) is −0.79–0.95 LSB while the differential non-linearity (DNL) is −0.68–0.72 LSB. The effective number of bits (ENOB) of the designed ADC is validated as 10.86 for converting the input analog signal to its digital counterpart. Experimental validation was conducted. A digital decoder is orchestrated to decipher the harmonic outputs from the ADC via interpolation to the position of the moving stage. It was found that the displacement
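    The final decoding step described in this record, turning the digitized sine/cosine channels into a stage position, can be illustrated with a short Python sketch; the arctangent-plus-unwrapping scheme, the 12-bit mid-scale offset and the 2 mm pitch below are assumptions for illustration, not the chip's actual decoder.

```python
import numpy as np

def decode_position(sin_counts, cos_counts, pitch_mm, adc_mid=2048):
    """Convert digitized sine/cosine samples (12-bit ADC counts) into displacement in mm."""
    s = np.asarray(sin_counts, dtype=float) - adc_mid   # remove the assumed mid-scale offset
    c = np.asarray(cos_counts, dtype=float) - adc_mid
    phase = np.unwrap(np.arctan2(s, c))                 # continuous phase across scale periods
    return phase / (2 * np.pi) * pitch_mm               # one full period = one magnetic pitch

# simulated readings for a stage moving 3.5 pitches along a 2 mm-pitch scale
x = np.linspace(0, 3.5, 200)                            # displacement in pitch units
sin_ch = 2048 + 1800 * np.sin(2 * np.pi * x)
cos_ch = 2048 + 1800 * np.cos(2 * np.pi * x)
print(decode_position(sin_ch, cos_ch, pitch_mm=2.0)[-1])   # ~7.0 mm
```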

  2. Local Ray-Based Traveltime Computation Using the Linearized Eikonal Equation

    KAUST Repository

    Almubarak, Mohammed S.

    2013-05-01

    The computation of traveltimes plays a critical role in the conventional implementations of Kirchhoff migration. Finite-difference-based methods are considered one of the most effective approaches for traveltime calculations and are therefore widely used. However, these eikonal solvers are mainly used to obtain early-arrival traveltime. Ray tracing can be used to pick later traveltime branches, besides the early arrivals, which may lead to an improvement in velocity estimation or in seismic imaging. In this thesis, I improved the accuracy of the solution of the linearized eikonal equation by constructing a linear system of equations (LSE) based on finite-difference approximation, which is of second-order accuracy. The ill-conditioned LSE is initially regularized and subsequently solved to calculate the traveltime update. Numerical tests proved that this method is as accurate as the second-order eikonal solver. Later arrivals are picked using ray tracing. These traveltimes are binned to the nearest node on a regular grid and empty nodes are estimated by interpolating the known values. The resulting traveltime field is used as an input to the linearized eikonal algorithm, which improves the accuracy of the interpolated nodes and yields a local ray-based traveltime. This is a preliminary study and further investigation is required to test the efficiency and the convergence of the solutions.
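    Only as an illustration of the regularize-and-solve step mentioned above (not of the finite-difference construction of the linearized eikonal system itself), the sketch below solves an ill-conditioned sparse least-squares problem with Tikhonov damping via SciPy; the matrix and right-hand side are random placeholders, not a real traveltime system.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

# Placeholder for the finite-difference LSE  A * dtau = r  obtained by linearizing the
# eikonal equation around a background traveltime field (A and r are random stand-ins).
rng = np.random.default_rng(0)
A = sparse_random(500, 400, density=0.01, random_state=0, format="csr")
r = rng.normal(size=500)

# Damped (Tikhonov-regularized) least squares: minimize ||A*dtau - r||^2 + damp^2 * ||dtau||^2
dtau = lsqr(A, r, damp=1e-2)[0]
print(dtau.shape)   # (400,) traveltime updates, one per grid node in this toy setup
```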

  3. An electron beam linear scanning mode for industrial limited-angle nano-computed tomography

    Science.gov (United States)

    Wang, Chengxiang; Zeng, Li; Yu, Wei; Zhang, Lingli; Guo, Yumeng; Gong, Changcheng

    2018-01-01

    Nano-computed tomography (nano-CT), which utilizes X-rays to investigate the inner structure of small objects and has been widely used in biomedical research, electronic technology, geology, materials science, etc., is a high-spatial-resolution, non-destructive research technique. A traditional nano-CT scanning mode requires very high mechanical precision and stability of the object manipulator to achieve high-resolution imaging, which is difficult to reach when the scanned object is continuously rotated. To reduce the scanning time and attain stable, high-resolution imaging in industrial non-destructive testing, we study an electron beam linear scanning mode for a nano-CT system that avoids the mechanical vibration and object movement caused by continuous rotation of the object. Furthermore, to further shorten the scanning time and to study how small the scanning range can be made while retaining acceptable spatial resolution, an alternating iterative algorithm based on ℓ0 minimization is applied to the limited-angle nano-CT reconstruction problem under the electron beam linear scanning mode. The experimental results confirm the feasibility of the electron beam linear scanning mode of the nano-CT system.

  4. Theoretical explanation of present mirror experiments and linear stability of larger scaled machines

    International Nuclear Information System (INIS)

    Berk, H.L.; Baldwin, D.E.; Cutler, T.A.; Lodestro, L.L.; Maron, N.; Pearlstein, L.D.; Rognlien, T.D.; Stewart, J.J.; Watson, D.C.

    1976-01-01

    A quasilinear model for the evolution of the 2XIIB mirror experiment is presented and shown to reproduce the time evolution of the experiment. From quasilinear theory it follows that the energy lifetime is the Spitzer electron drag time for T_e ≲ 0.1 T_i. By computing the stability boundary of the DCLC mode, with warm plasma stabilization, the electron temperature is predicted as a function of radial scale length. In addition, the effect of finite length corrections to the Alfven cyclotron mode is assessed

  5. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both computational algorithms and computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from intensive computational requirement for detailed modeling investigations of real-world reservoirs. This paper presents the application of a massive parallel-computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance

  6. A computer tool for a minimax criterion in binary response and heteroscedastic simple linear regression models.

    Science.gov (United States)

    Casero-Alonso, V; López-Fidalgo, J; Torsney, B

    2017-01-01

    Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. TU-FG-201-04: Computer Vision in Autonomous Quality Assurance of Linear Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Yu, H; Jenkins, C; Yu, S; Yang, Y; Xing, L [Stanford University, Stanford, CA (United States)

    2016-06-15

    Purpose: Routine quality assurance (QA) of linear accelerators represents a critical and costly element of a radiation oncology center. Recently, a system was developed to autonomously perform routine quality assurance on linear accelerators. The purpose of this work is to extend this system and contribute computer vision techniques for obtaining quantitative measurements for a monthly multi-leaf collimator (MLC) QA test specified by TG-142, namely leaf position accuracy, and to demonstrate extensibility for additional routines. Methods: Grayscale images of a picket fence delivery on a radioluminescent phosphor coated phantom are captured using a CMOS camera. Collected images are processed to correct for camera distortions, rotation and alignment, reduce noise, and enhance contrast. The location of each MLC leaf is determined through logistic fitting and a priori modeling based on knowledge of the delivered beams. Using the data collected and the criteria from TG-142, a decision is made on whether or not the leaf position accuracy of the MLC passes or fails. Results: The locations of all MLC leaf edges are found for three different picket fence images in a picket fence routine to a precision of 0.1 mm (1 pixel). The program to correct for image alignment and determine leaf positions requires a runtime of 21–25 seconds for a single picket, and 44–46 seconds for a group of three pickets, on a standard workstation CPU (2.2 GHz Intel Core i7). Conclusion: MLC leaf edges were successfully found using techniques in computer vision. With the addition of computer vision techniques to the previously described autonomous QA system, the system is able to quickly perform complete QA routines with minimal human contribution.
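    A small Python sketch of the logistic-fitting idea used to locate MLC leaf edges from a 1-D intensity profile across an edge; the sigmoid model, the pixel size and the synthetic profile below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k, lo, hi):
    """Sigmoid edge model: low plateau -> high plateau with its inflection (edge) at x0."""
    return lo + (hi - lo) / (1.0 + np.exp(-k * (x - x0)))

def fit_leaf_edge(profile, pixel_mm):
    """Estimate a leaf-edge position (mm) from a 1-D intensity profile across the edge."""
    x = np.arange(len(profile), dtype=float)
    p0 = [len(profile) / 2.0, 1.0, profile.min(), profile.max()]   # rough initial guess
    (x0, k, lo, hi), _ = curve_fit(logistic, x, profile, p0=p0)
    return x0 * pixel_mm

# synthetic edge profile with 0.1 mm pixels and the true edge at pixel 30
x = np.arange(64, dtype=float)
profile = logistic(x, 30.0, 2.0, 10.0, 200.0) + np.random.default_rng(1).normal(0, 2, 64)
print(fit_leaf_edge(profile, pixel_mm=0.1))   # ~3.0 mm
```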

  8. Application of a Statistical Linear Time-Varying System Model of High Grazing Angle Sea Clutter for Computing Interference Power

    Science.gov (United States)

    2017-12-08

    Statistical linear time-varying system model of high grazing angle sea clutter for computing interference power: one of the sinc factors describing the beam is approximated using the Dirichlet kernel to facilitate computation of the interference-power integral, the resultant autocorrelation is then found by substitution of the intermediate expressions, and Python code is used to generate the accompanying figures.

  9. Effect of cellulosic fiber scale on linear and non-linear mechanical performance of starch-based composites.

    Science.gov (United States)

    Karimi, Samaneh; Abdulkhani, Ali; Tahir, Paridah Md; Dufresne, Alain

    2016-10-01

    Cellulosic nanofibers (NFs) from kenaf bast were used to reinforce glycerol-plasticized thermoplastic starch (TPS) matrices at varying contents (0-10 wt%). The composites were prepared by a casting/evaporation method. Raw fiber (RF) reinforced TPS films were prepared with the same contents and conditions. The aim of the study was to investigate the effects of filler dimension and loading on the linear and non-linear mechanical performance of the fabricated materials. The obtained results clearly demonstrated that the NF-reinforced composites had significantly greater mechanical performance than the RF-reinforced counterparts. This was attributed to the high aspect ratio and nano dimension of the reinforcing agents, as well as their compatibility with the TPS matrix, resulting in strong fiber/matrix interaction. Tensile strength and Young's modulus increased by 313% and 343%, respectively, with increasing NF content from 0 to 10 wt%. Dynamic mechanical analysis (DMA) revealed an upward trend in the glass transition temperature of amylopectin-rich domains in the composites. The most pronounced change was a +18.5°C shift in the temperature position for the film reinforced with 8% NF. This finding implied efficient dispersion of the nanofibers in the matrix and their ability to form a network and restrict the mobility of the system. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Linear velocity fields in non-Gaussian models for large-scale structure

    Science.gov (United States)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields in two types of physically motivated non-Gaussian models are examined for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  11. Simulations of nanocrystals under pressure: Combining electronic enthalpy and linear-scaling density-functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Corsini, Niccolò R. C., E-mail: niccolo.corsini@imperial.ac.uk; Greco, Andrea; Haynes, Peter D. [Department of Physics and Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Hine, Nicholas D. M. [Department of Physics and Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Cavendish Laboratory, J. J. Thompson Avenue, Cambridge CB3 0HE (United Kingdom); Molteni, Carla [Department of Physics, King' s College London, Strand, London WC2R 2LS (United Kingdom)

    2013-08-28

    We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett.94, 145501 (2005)], it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures and their persistence on depressurization is assessed.

  12. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    Science.gov (United States)

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
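    As a minimal illustration of the interval-set robust counterpart discussed above: with nonnegative decision variables, box uncertainty in a constraint row is handled by tightening each uncertain coefficient to its upper bound. The tiny LP below (objective, nominal coefficients and deviations) is invented for illustration and is not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# maximize 3*x1 + 2*x2  subject to  a1*x1 + a2*x2 <= 10,  x >= 0,
# where the nominal row (2, 1) may deviate by +/-0.3 in each entry (interval uncertainty set)
c = [-3.0, -2.0]                     # linprog minimizes, so negate the objective
a_nom = np.array([2.0, 1.0])
delta = np.array([0.3, 0.3])

nominal = linprog(c, A_ub=[a_nom], b_ub=[10.0], bounds=[(0, None)] * 2)
# Robust counterpart for the interval set: since x >= 0, the worst case is a_nom + delta
robust = linprog(c, A_ub=[a_nom + delta], b_ub=[10.0], bounds=[(0, None)] * 2)

print(nominal.x, -nominal.fun)       # nominal plan (feasible only for the nominal data)
print(robust.x, -robust.fun)         # more conservative plan, feasible for every realization
```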

  13. Materials and nanosystems : interdisciplinary computational modeling at multiple scales

    International Nuclear Information System (INIS)

    Huber, S.E.

    2014-01-01

    Over the last five decades, computer simulation and numerical modeling have become valuable tools complementing the traditional pillars of science, experiment and theory. In this thesis, several applications of computer-based simulation and modeling shall be explored in order to address problems and open issues in chemical and molecular physics. Attention shall be paid especially to the different degrees of interrelatedness and multiscale-flavor, which may - at least to some extent - be regarded as inherent properties of computational chemistry. In order to do so, a variety of computational methods are used to study features of molecular systems which are of relevance in various branches of science and which correspond to different spatial and/or temporal scales. Proceeding from small to large measures, first, an application in astrochemistry, the investigation of spectroscopic and energetic aspects of carbonic acid isomers shall be discussed. In this respect, very accurate and hence at the same time computationally very demanding electronic structure methods like the coupled-cluster approach are employed. These studies are followed by the discussion of an application in the scope of plasma-wall interaction which is related to nuclear fusion research. There, the interactions of atoms and molecules with graphite surfaces are explored using density functional theory methods. The latter are computationally cheaper than coupled-cluster methods and thus allow the treatment of larger molecular systems, but yield less accuracy and especially reduced error control at the same time. The subsequently presented exploration of surface defects at low-index polar zinc oxide surfaces, which are of interest in materials science and surface science, is another surface science application. The necessity to treat even larger systems of several hundreds of atoms requires the use of approximate density functional theory methods. Thin gold nanowires consisting of several thousands of

  14. Linear Subpixel Learning Algorithm for Land Cover Classification from WELD using High Performance Computing

    Science.gov (United States)

    Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.

    2017-12-01

    In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark object (D) classes. Because of the sheer nature of the data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into four classes, namely forest, farmland, water and urban areas (with NPP-VIIRS - National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite - nighttime lights data) over California, USA, using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, which is a 6% improvement of the unmixing-based classification relative to per-pixel based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
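    A compact Python sketch of fully constrained least-squares unmixing for a single pixel (non-negative abundances that sum to one); the 4-band endmember spectra below are invented numbers, and a general-purpose SLSQP solver stands in for whatever optimized implementation the authors used at scale.

```python
import numpy as np
from scipy.optimize import minimize

def fcls_unmix(pixel, endmembers):
    """Abundances a minimizing ||E a - x||^2 subject to a >= 0 and sum(a) = 1."""
    n = endmembers.shape[1]
    res = minimize(
        lambda a: np.sum((endmembers @ a - pixel) ** 2),
        x0=np.full(n, 1.0 / n),
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
        method="SLSQP",
    )
    return res.x

# invented 4-band endmember spectra for substrate (S), vegetation (V) and dark (D) classes
E = np.array([[0.30, 0.05, 0.02],
              [0.35, 0.08, 0.02],
              [0.40, 0.45, 0.03],
              [0.45, 0.30, 0.03]])
x = E @ np.array([0.6, 0.3, 0.1])            # a pixel built from a known mixture
print(np.round(fcls_unmix(x, E), 3))         # recovers approximately [0.6, 0.3, 0.1]
```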

  15. Linear perturbation theory for tidal streams and the small-scale CDM power spectrum

    Science.gov (United States)

    Bovy, Jo; Erkal, Denis; Sanders, Jason L.

    2017-04-01

    Tidal streams in the Milky Way are sensitive probes of the population of low-mass dark matter subhaloes predicted in cold dark matter (CDM) simulations. We present a new calculus for computing the effect of subhalo fly-bys on cold streams based on the action-angle representation of streams. The heart of this calculus is a line-of-parallel-angle approach that calculates the perturbed distribution function of a stream segment by undoing the effect of all relevant impacts. This approach allows one to compute the perturbed stream density and track in any coordinate system in minutes for realizations of the subhalo distribution down to 10^5 M⊙, accounting for the stream's internal dispersion and overlapping impacts. We study the statistical properties of density and track fluctuations with large suites of simulations of the effect of subhalo fly-bys. The one-dimensional density and track power spectra along the stream trace the subhalo mass function, with higher mass subhaloes producing power only on large scales, while lower mass subhaloes cause structure on smaller scales. We also find significant density and track bispectra that are observationally accessible. We further demonstrate that different projections of the track all reflect the same pattern of perturbations, facilitating their observational measurement. We apply this formalism to data for the Pal 5 stream and make a first rigorous determination of 10^{+11}_{-6} dark matter subhaloes with masses between 10^6.5 and 10^9 M⊙ within 20 kpc from the Galactic centre [corresponding to 1.4^{+1.6}_{-0.9} times the number predicted by CDM-only simulations or to fsub(r matter is clumpy on the smallest scales relevant for galaxy formation.

  16. Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Conrad, Patrick [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Bigoni, Daniele [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Parno, Matthew [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2017-06-09

    QUEST (\\url{www.quest-scidac.org}) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT

  17. Accuracy of Linear Measurements in Stitched Versus Non-Stitched Cone Beam Computed Tomography Images

    International Nuclear Information System (INIS)

    Srimawong, P.; Krisanachinda, A.; Chindasombatjaroen, J.

    2012-01-01

    Cone beam computed tomography (CBCT) images are useful in clinical dentistry, and linear measurements are necessary for accurate treatment planning. Therefore, the accuracy of linear measurements on CBCT images needs to be verified. The current stitching program in the Kodak 9000C 3D system automatically combines up to three localized volumes to construct larger images with a small voxel size. The purpose of this study was to assess the accuracy of linear measurements from stitched and non-stitched CBCT images in comparison with direct measurements. This study was performed on 10 dry human mandibles. Gutta-percha rods were used to mark reference points defining 10 vertical and horizontal distances. Direct measurements with a digital caliper served as the gold standard. All distances on CBCT images obtained with and without the stitching program were measured and compared with the direct measurements. The intraclass correlation coefficients (ICC) were calculated. The ICC of the direct measurements were 0.998 to 1.000. The intraobserver ICC for both non-stitched and stitched CBCT images were 1.000, indicating strong agreement for a single observer. The intermethod ICC between direct measurements and non-stitched CBCT images and between direct measurements and stitched CBCT images ranged from 0.972 to 1.000 and 0.967 to 0.998, respectively. There were no statistically significant differences between direct measurements and stitched or non-stitched CBCT images (P > 0.05). The results showed that linear measurements on non-stitched and stitched CBCT images were highly accurate, with no statistical difference compared to direct measurements. The ICC values for vertical distances were slightly higher than those for horizontal distances, indicating that measurements in the vertical orientation were more accurate than those in the horizontal orientation; however, the differences were not statistically significant. Stitching

  18. Computed-tomographic and conventional linear-tomographic evaluation of tracheobronchial lesions for laser photoresection

    International Nuclear Information System (INIS)

    Pearlberg, J.L.; Sandler, M.A.; Kvale, P.; Beute, G.H.; Madrazo, B.L.

    1985-01-01

    Laser therapy is a new modality for treatment of airway lesions. The authors examined 18 patients prior to laser photoresection of tracheobronchial lesions. Thirteen had cancers involving the distal trachea, carina, and/or proximal bronchi; five had benign lesions of the middle or proximal trachea. Each patient was examined by conventional linear tomography (CLT) and computed tomography (CT). CT was valuable in patients who had lesions of the distal trachea, carina, and/or proximal bronchi. Its particular usefulness, and its advantage relative to CLT, consisted in its ability to delineate vascular structures adjacent to the planned area of photoresection. Neither CLT nor CT was helpful in evaluation of benign lesions of the proximal trachea

  19. Global identifiability of linear compartmental models--a computer algebra algorithm.

    Science.gov (United States)

    Audoly, S; D'Angiò, L; Saccomani, M P; Cobelli, C

    1998-01-01

    A priori global identifiability deals with the uniqueness of the solution for the unknown parameters of a model and is, thus, a prerequisite for parameter estimation of biological dynamic models. Global identifiability is however difficult to test, since it requires solving a system of algebraic nonlinear equations which increases both in nonlinearity degree and number of terms and unknowns with increasing model order. In this paper, a computer algebra tool, GLOBI (GLOBal Identifiability) is presented, which combines the topological transfer function method with the Buchberger algorithm, to test global identifiability of linear compartmental models. GLOBI allows for the automatic testing of a priori global identifiability of general structure compartmental models from general multi input-multi output experiments. Examples of usage of GLOBI to analyze a priori global identifiability of some complex biological compartmental models are provided.

  20. Riemann-problem and level-set approaches for two-fluid flow computations I. Linearized Godunov scheme

    NARCIS (Netherlands)

    B. Koren (Barry); M.R. Lewis; E.H. van Brummelen (Harald); B. van Leer

    2001-01-01

    textabstractA finite-volume method is presented for the computation of compressible flows of two immiscible fluids at very different densities. The novel ingredient in the method is a two-fluid linearized Godunov scheme, allowing for flux computations in case of different fluids (e.g., water and

  1. Computer software for linear and nonlinear regression in organic NMR; Programa de computador para regressao linear e nao linear em R.M.N. organica

    Energy Technology Data Exchange (ETDEWEB)

    Canto, Eduardo Leite do; Rittner, Roberto [Universidade Estadual de Campinas, SP (Brazil). Inst. de Quimica

    1992-12-31

    Calculations involving two-variable linear regressions require specific procedures generally not familiar to chemists. To meet the need for fast and efficient handling of NMR data, a self-explanatory, PC-portable software package has been developed, which allows the user to produce and use diskette-recorded tables containing chemical shifts or any other substituent physical-chemical measurements and constants ({sigma}{sub T}, {sigma}{sup o}{sub R}, E{sub s}, ...) 9 refs., 1 fig.
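
    The kind of two-variable regression the record describes (chemical shift against substituent constants) reduces to an ordinary least-squares fit; a minimal sketch with NumPy, assuming purely illustrative shift values and sigma constants rather than data from the program above:

        import numpy as np

        # Hypothetical data: chemical shifts (ppm) and inductive/resonance
        # substituent constants for a small series of substituents.
        shift = np.array([7.26, 7.41, 7.78, 6.98, 7.10])
        sigma_I = np.array([0.00, 0.47, 0.65, 0.27, 0.10])
        sigma_R = np.array([0.00, -0.25, 0.15, -0.55, -0.10])

        # Two-variable linear regression: shift = a*sigma_I + b*sigma_R + c
        X = np.column_stack([sigma_I, sigma_R, np.ones_like(sigma_I)])
        coef, _, _, _ = np.linalg.lstsq(X, shift, rcond=None)
        a, b, c = coef

        pred = X @ coef
        r2 = 1.0 - np.sum((shift - pred) ** 2) / np.sum((shift - shift.mean()) ** 2)
        print(f"shift = {a:.3f}*sigma_I + {b:.3f}*sigma_R + {c:.3f}  (R^2 = {r2:.3f})")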

  2. SCALE-4 [Standardized Computer Analyses for Licensing Evaluation]: An improved computational system for spent-fuel cask analysis

    International Nuclear Information System (INIS)

    Parks, C.V.

    1989-01-01

    The purpose of this paper is to provide specific information regarding the improvements available with Version 4.0 of the SCALE system and to discuss the future of SCALE within the current computing and regulatory environment. The emphasis is on the improvements in SCALE-4 over those available in SCALE-3. 10 refs., 1 fig., 1 tab

  3. Lightweight computational steering of very large scale molecular dynamics simulations

    International Nuclear Information System (INIS)

    Beazley, D.M.

    1996-01-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages

  4. Computational optimization of catalyst distributions at the nano-scale

    International Nuclear Information System (INIS)

    Ström, Henrik

    2017-01-01

    Highlights: • Macroscopic data sampled from a DSMC simulation contain statistical scatter. • Simulated annealing is evaluated as an optimization algorithm with DSMC. • Proposed method is more robust than a gradient search method. • Objective function uses the mass transfer rate instead of the reaction rate. • Combined algorithm is more efficient than a macroscopic overlay method. - Abstract: Catalysis is a key phenomenon in a great number of energy processes, including feedstock conversion, tar cracking, emission abatement and optimizations of energy use. Within heterogeneous, catalytic nano-scale systems, the chemical reactions typically proceed at very high rates at a gas–solid interface. However, the statistical uncertainties characteristic of molecular processes pose efficiency problems for computational optimizations of such nano-scale systems. The present work investigates the performance of a Direct Simulation Monte Carlo (DSMC) code with a stochastic optimization heuristic for evaluations of an optimal catalyst distribution. The DSMC code treats molecular motion with homogeneous and heterogeneous chemical reactions in wall-bounded systems and algorithms have been devised that allow optimization of the distribution of a catalytically active material within a three-dimensional duct (e.g. a pore). The objective function is the outlet concentration of computational molecules that have interacted with the catalytically active surface, and the optimization method used is simulated annealing. The application of a stochastic optimization heuristic is shown to be more efficient within the present DSMC framework than using a macroscopic overlay method. Furthermore, it is shown that the performance of the developed method is superior to that of a gradient search method for the current class of problems. Finally, the advantages and disadvantages of different types of objective functions are discussed.
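
    The coupling of a noisy, particle-based objective with simulated annealing can be illustrated independently of any DSMC code; the sketch below is a generic annealing loop over a binary catalyst-placement vector with an artificial, scatter-contaminated objective, and all site counts, rates and the cooling schedule are assumptions for illustration only:

        import math
        import random

        random.seed(1)
        N_SITES = 30          # candidate wall patches along the duct
        N_ACTIVE = 8          # amount of catalytic material available

        def noisy_objective(layout):
            # Toy stand-in for a DSMC evaluation: reward placing catalyst early
            # along the duct, plus statistical scatter like a particle simulation.
            signal = sum((N_SITES - i) / N_SITES for i, on in enumerate(layout) if on)
            return signal + random.gauss(0.0, 0.05)

        def neighbour(layout):
            # Move one unit of catalyst from a random active site to an inactive one.
            new = layout[:]
            i = random.choice([k for k, on in enumerate(new) if on])
            j = random.choice([k for k, on in enumerate(new) if not on])
            new[i], new[j] = False, True
            return new

        layout = [k < N_ACTIVE for k in range(N_SITES)]
        best, best_score = layout, noisy_objective(layout)
        T = 1.0
        for step in range(2000):
            cand = neighbour(layout)
            delta = noisy_objective(cand) - noisy_objective(layout)
            if delta > 0 or random.random() < math.exp(delta / T):
                layout = cand
                score = noisy_objective(layout)
                if score > best_score:
                    best, best_score = layout, score
            T *= 0.999           # geometric cooling schedule
        print(best_score, [i for i, on in enumerate(best) if on])

    Because the acceptance test tolerates occasional "wrong" moves, the loop is robust to the statistical scatter in the objective, which is the property the record highlights relative to gradient search.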

  5. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    Energy Technology Data Exchange (ETDEWEB)

    Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Max Planck Institute for Chemical Energy Conversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F., E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24061 (United States)

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Möller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in
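
    The elementary sparse-map operations mentioned (inversion, chaining, intersection) can be pictured with a plain dictionary-of-sets sketch; this is an illustration of the concept only, not the authors' code library, and the index sets are invented:

        from collections import defaultdict

        # A sparse map from index set A to index set B: for each a, the set of
        # "connected" b indices (e.g. basis shells overlapping an atom).
        def invert(smap):
            inv = defaultdict(set)
            for a, bs in smap.items():
                for b in bs:
                    inv[b].add(a)
            return dict(inv)

        def chain(ab, bc):
            # Composition A -> B -> C: a connects to c if some shared b exists.
            ac = defaultdict(set)
            for a, bs in ab.items():
                for b in bs:
                    ac[a] |= bc.get(b, set())
            return dict(ac)

        def intersect(m1, m2):
            return {a: m1[a] & m2.get(a, set()) for a in m1}

        atom_to_shell = {0: {0, 1}, 1: {2}, 2: {3, 4}}
        shell_to_aux = {0: {10}, 1: {10, 11}, 2: {12}, 3: {13}, 4: {13, 14}}

        print(invert(atom_to_shell))               # shell -> atoms
        print(chain(atom_to_shell, shell_to_aux))  # atom -> auxiliary functions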

  6. A computer tool for daily application of the linear quadratic model

    International Nuclear Information System (INIS)

    Macias Jaen, J.; Galan Montenegro, P.; Bodineau Gil, C.; Wals Zurita, A.; Serradilla Gil, A.M.

    2001-01-01

    The aim of this paper is to indicate the relevance of the A.S.A.R.A. criterion (As Short As Reasonably Achievable) in the optimization of a fractionated radiotherapy schedule, and to present a Windows computer program as an easy tool to: evaluate the Biological Equivalent Dose (BED) of a fractionated schedule; compare different treatments; and compensate a treatment when a delay has occurred, using a version of the Linear Quadratic model that takes into account accelerated repopulation. Conclusions: Delays in the normal radiotherapy schedule have to be controlled as much as possible, since overall treatment time can be a very important parameter for delivering the treatment properly, particularly when the tumour is fast growing, and such delays therefore need to be evaluated. The A.S.A.R.A. criterion is useful to indicate the relevance of this aspect, and computer tools like this one can help to achieve it. (author)
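
    A minimal sketch of the kind of calculation such a tool performs, using the standard linear-quadratic expression for the biologically effective dose with a repopulation time factor; the default parameter values (alpha, alpha/beta, kick-off time, potential doubling time) are illustrative assumptions, not values from the program described above:

        import math

        def bed(n_fractions, dose_per_fraction, alpha_beta,
                overall_time=None, kickoff_time=28.0, alpha=0.3, t_pot=5.0):
            """Biologically effective dose (Gy) for n fractions of d Gy.

            BED = n*d*(1 + d/(alpha/beta)) - ln(2)*(T - Tk)/(alpha*Tpot),
            where the repopulation term applies only when the overall time T
            exceeds the kick-off time Tk (illustrative default parameters).
            """
            core = n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)
            if overall_time is not None and overall_time > kickoff_time:
                core -= math.log(2.0) * (overall_time - kickoff_time) / (alpha * t_pot)
            return core

        # Compare a standard schedule with one delayed by a week.
        print(bed(30, 2.0, alpha_beta=10.0, overall_time=39))  # 30 x 2 Gy in 39 days
        print(bed(30, 2.0, alpha_beta=10.0, overall_time=46))  # same dose, 7-day delay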

  7. Design of a linear detector array unit for high energy x-ray helical computed tomography and linear scanner

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jeong Tae; Park, Jong Hwan; Kim, Gi Yoon [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon (Korea, Republic of); Kim, Dong Geun [Medical Imaging Department, ASTEL Inc., Seongnam (Korea, Republic of); Park, Shin Woong; Yi, Yun [Dept. of Electronics and Information Eng, Korea University, Seoul (Korea, Republic of); Kim, Hyun Duk [Research Center, Luvantix ADM Co., Ltd., Daejeon (Korea, Republic of)

    2016-11-15

    A linear detector array unit (LdAu) was proposed and designed for high-energy X-ray 2-D and 3-D imaging systems for industrial non-destructive testing. Specifically for 3-D imaging, a helical CT with a 15 MeV linear accelerator and a curved detector is proposed. The arc-shaped detector can be formed by many LdAus, all of which are arranged to face the focal spot when the source-to-detector distance is fixed depending on the application. An LdAu is composed of 10 modules, and each module has 48 channels of CdWO{sub 4} (CWO) blocks and Si PIN photodiodes with 0.4 mm pitch. This modular design was made for easy manufacturing and maintenance. Through Monte Carlo simulation, the CWO detector thickness was optimally determined to be 17 mm. The silicon PIN photodiodes were designed as 48-channel arrays, fabricated with NTD (neutron transmutation doping) wafers of high resistivity, and showed excellent leakage-current properties below 1 nA at 10 V reverse bias. To minimize low-voltage breakdown, the edges of the active layer and the guard ring were designed with a curved shape. The data acquisition system was also designed and fabricated as three independent functional boards: a sensor board, a capture board and a communication board to a PC. This paper describes the design of the detectors (CWO blocks and Si PIN photodiodes) and the 3-board data acquisition system with their simulation results.

  8. Simulation of large scale air detritiation operations by computer modeling and bench-scale experimentation

    International Nuclear Information System (INIS)

    Clemmer, R.G.; Land, R.H.; Maroni, V.A.; Mintz, J.M.

    1978-01-01

    Although some experience has been gained in the design and construction of 0.5 to 5 m^3/s air-detritiation systems, little information is available on the performance of these systems under realistic conditions. Recently completed studies at ANL have attempted to provide some perspective on this subject. A time-dependent computer model was developed to study the effects of various reaction and soaking mechanisms that could occur in a typically-sized fusion reactor building (approximately 10^5 m^3) following a range of tritium releases (2 to 200 g). In parallel with the computer study, a small (approximately 50 liter) test chamber was set up to investigate cleanup characteristics under conditions which could also be simulated with the computer code. Whereas results of computer analyses indicated that only approximately 10^-3 percent of the tritium released to an ambient enclosure should be converted to tritiated water, the bench-scale experiments gave evidence of conversions to water greater than 1%. Furthermore, although the amounts (both calculated and observed) of soaked-in tritium are usually only a very small fraction of the total tritium release, the soaked tritium is significant, in that its continuous return to the enclosure extends the cleanup time beyond the predicted value in the absence of any soaking mechanisms
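
    A minimal sketch of a time-dependent cleanup model in the same spirit: a well-mixed enclosure processed at a fixed volumetric rate, with a small fraction of the airborne tritium soaked into surfaces and slowly re-emitted. The enclosure size is taken from the record, but the rate constants, release size and time step are assumptions chosen only to show the qualitative effect of soaking on cleanup time:

        V = 1.0e5          # enclosure volume, m^3
        Q = 5.0            # detritiation flow, m^3/s
        k_soak = 1.0e-6    # airborne -> surface soaking rate, 1/s (assumed)
        k_emit = 1.0e-7    # surface -> airborne re-emission rate, 1/s (assumed)

        air, wall = 2.0, 0.0       # grams of tritium airborne / soaked
        dt, t = 10.0, 0.0          # 10 s explicit Euler steps
        while t < 5 * 24 * 3600:   # five days
            d_air = (-Q / V * air) - k_soak * air + k_emit * wall
            d_wall = k_soak * air - k_emit * wall
            air += d_air * dt
            wall += d_wall * dt
            t += dt

        print(f"after 5 days: {air:.3e} g airborne, {wall:.3e} g soaked")

    With k_emit set to zero the airborne inventory decays on the simple V/Q time constant; the small re-emission term is what stretches the tail of the cleanup, which is the behaviour the record reports.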

  9. Scaling behavior of ground-state energy cluster expansion for linear polyenes

    Science.gov (United States)

    Griffin, L. L.; Wu, Jian; Klein, D. J.; Schmalz, T. G.; Bytautas, L.

    Ground-state energies for linear-chain polyenes are additively expanded in a sequence of terms for chemically relevant conjugated substructures of increasing size. The asymptotic behavior of the large-substructure limit (i.e., high-polymer limit) is investigated as a means of characterizing the rapidity of convergence and consequent utility of this energy cluster expansion. Consideration is directed to computations via: simple Hückel theory, a refined Hückel scheme with geometry optimization, restricted Hartree-Fock self-consistent field (RHF-SCF) solutions of fixed bond-length Pariser-Parr-Pople (PPP)/Hubbard models, and ab initio SCF approaches with and without geometry optimization. The cluster expansion in what might be described as the more "refined" approaches appears to lead to qualitatively more rapid convergence: exponentially fast as opposed to an inverse power at the simple Hückel or SCF-Hubbard levels. The substructural energy cluster expansion then seems to merit special attention. Its possible utility in making accurate extrapolations from finite systems to extended polymers is noted.

  10. Scaling to Nanotechnology Limits with the PIMS Computer Architecture and a new Scaling Rule

    Energy Technology Data Exchange (ETDEWEB)

    Debenedictis, Erik P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    We describe a new approach to computing that moves towards the limits of nanotechnology using a newly formulated scaling rule. This is in contrast to the current computer industry scaling away from von Neumann's original computer at the rate of Moore's Law. We extend Moore's Law to 3D, which leads generally to architectures that integrate logic and memory. To keep power dissipation constant through a 2D surface of the 3D structure requires using adiabatic principles. We call our newly proposed architecture Processor In Memory and Storage (PIMS). We propose a new computational model that integrates processing and memory into "tiles" that comprise logic, memory/storage, and communications functions. Since the programming model will be relatively stable as a system scales, programs represented by tiles could be executed in a PIMS system built with today's technology or could become the "schematic diagram" for implementation in an ultimate 3D nanotechnology of the future. We build a systems software approach that offers advantages over and above the technological and architectural advantages. First, the algorithms may be more efficient in the conventional sense of having fewer steps. Second, the algorithms may run with higher power efficiency per operation by being a better match for the adiabatic scaling rule. The performance analysis based on demonstrated ideas in physical science suggests an 80,000x improvement in cost per operation for the (arguably) general purpose function of emulating neurons in Deep Learning.

  11. An accurate and linear-scaling method for calculating charge-transfer excitation energies and diabatic couplings

    Energy Technology Data Exchange (ETDEWEB)

    Pavanello, Michele [Department of Chemistry, Rutgers University, Newark, New Jersey 07102-1811 (United States); Van Voorhis, Troy [Department of Chemistry, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139-4307 (United States); Visscher, Lucas [Amsterdam Center for Multiscale Modeling, VU University, De Boelelaan 1083, 1081 HV Amsterdam (Netherlands); Neugebauer, Johannes [Theoretische Organische Chemie, Organisch-Chemisches Institut der Westfaelischen Wilhelms-Universitaet Muenster, Corrensstrasse 40, 48149 Muenster (Germany)

    2013-02-07

    Quantum-mechanical methods that are both computationally fast and accurate are not yet available for electronic excitations having charge transfer character. In this work, we present a significant step forward towards this goal for those charge transfer excitations that take place between non-covalently bound molecules. In particular, we present a method that scales linearly with the number of non-covalently bound molecules in the system and is based on a two-pronged approach: The molecular electronic structure of broken-symmetry charge-localized states is obtained with the frozen density embedding formulation of subsystem density-functional theory; subsequently, in a post-SCF calculation, the full-electron Hamiltonian and overlap matrix elements among the charge-localized states are evaluated with an algorithm which takes full advantage of the subsystem DFT density partitioning technique. The method is benchmarked against coupled-cluster calculations and achieves chemical accuracy for the systems considered for intermolecular separations ranging from hydrogen-bond distances to tens of Angstroms. Numerical examples are provided for molecular clusters comprised of up to 56 non-covalently bound molecules.

  12. Electrostatic interactions in finite systems treated with periodic boundary conditions: application to linear-scaling density functional theory.

    Science.gov (United States)

    Hine, Nicholas D M; Dziedzic, Jacek; Haynes, Peter D; Skylaris, Chris-Kriton

    2011-11-28

    We present a comparison of methods for treating the electrostatic interactions of finite, isolated systems within periodic boundary conditions (PBCs), within density functional theory (DFT), with particular emphasis on linear-scaling (LS) DFT. Often, PBCs are not physically realistic but are an unavoidable consequence of the choice of basis set and the efficacy of using Fourier transforms to compute the Hartree potential. In such cases the effects of PBCs on the calculations need to be avoided, so that the results obtained represent the open rather than the periodic boundary. The very large systems encountered in LS-DFT make the demands of the supercell approximation for isolated systems more difficult to manage, and we show cases where the open boundary (infinite cell) result cannot be obtained from extrapolation of calculations from periodic cells of increasing size. We discuss, implement, and test three very different approaches for overcoming or circumventing the effects of PBCs: truncation of the Coulomb interaction combined with padding of the simulation cell, approaches based on the minimum image convention, and the explicit use of open boundary conditions (OBCs). We have implemented these approaches in the ONETEP LS-DFT program and applied them to a range of systems, including a polar nanorod and a protein. We compare their accuracy, complexity, and rate of convergence with simulation cell size. We demonstrate that corrective approaches within PBCs can achieve the OBC result more efficiently and accurately than pure OBC approaches.
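
    Of the three approaches compared, the minimum image convention is the simplest to illustrate; the sketch below shows the standard wrap for pair displacements in a cubic periodic cell and is not taken from the ONETEP implementation:

        import numpy as np

        def minimum_image(r_i, r_j, box_length):
            """Shortest displacement vector from particle j to particle i
            in a cubic periodic cell of side box_length."""
            d = r_i - r_j
            return d - box_length * np.round(d / box_length)

        L = 20.0
        a = np.array([1.0, 1.0, 1.0])
        b = np.array([19.0, 19.0, 19.0])
        d = minimum_image(a, b, L)
        print(d, np.linalg.norm(d))   # [2, 2, 2] rather than [-18, -18, -18]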

  13. Energy harvesting with stacked dielectric elastomer transducers: Nonlinear theory, optimization, and linearized scaling law

    Science.gov (United States)

    Tutcuoglu, A.; Majidi, C.

    2014-12-01

    Using principles of damped harmonic oscillation with continuous media, we examine electrostatic energy harvesting with a "soft-matter" array of dielectric elastomer (DE) transducers. The array is composed of infinitely thin and deformable electrodes separated by layers of insulating elastomer. During vibration, it deforms longitudinally, resulting in a change in the capacitance and electrical enthalpy of the charged electrodes. Depending on the phase of electrostatic loading, the DE array can function as either an actuator that amplifies small vibrations or a generator that converts these external excitations into electrical power. Both cases are addressed with a comprehensive theory that accounts for the influence of viscoelasticity, dielectric breakdown, and electromechanical coupling induced by Maxwell stress. In the case of a linearized Kelvin-Voigt model of the dielectric, we obtain a closed-form estimate for the electrical power output and a scaling law for DE generator design. For the complete nonlinear model, we obtain the optimal electrostatic voltage input for maximum electrical power output.

  14. Introducing PROFESS 2.0: A parallelized, fully linear scaling program for orbital-free density functional theory calculations

    Science.gov (United States)

    Hung, Linda; Huang, Chen; Shin, Ilgyou; Ho, Gregory S.; Lignères, Vincent L.; Carter, Emily A.

    2010-12-01

    Orbital-free density functional theory (OFDFT) is a first principles quantum mechanics method to find the ground-state energy of a system by variationally minimizing with respect to the electron density. No orbitals are used in the evaluation of the kinetic energy (unlike Kohn-Sham DFT), and the method scales nearly linearly with the size of the system. The PRinceton Orbital-Free Electronic Structure Software (PROFESS) uses OFDFT to model materials from the atomic scale to the mesoscale. This new version of PROFESS allows the study of larger systems with two significant changes: PROFESS is now parallelized, and the ion-electron and ion-ion terms scale quasilinearly, instead of quadratically as in PROFESS v1 (L. Hung and E.A. Carter, Chem. Phys. Lett. 475 (2009) 163). At the start of a run, PROFESS reads the various input files that describe the geometry of the system (ion positions and cell dimensions), the type of elements (defined by electron-ion pseudopotentials), the actions you want it to perform (minimize with respect to electron density and/or ion positions and/or cell lattice vectors), and the various options for the computation (such as which functionals you want it to use). Based on these inputs, PROFESS sets up a computation and performs the appropriate optimizations. Energies, forces, stresses, material geometries, and electron density configurations are some of the values that can be output throughout the optimization. New version program summary: Program Title: PROFESS; Catalogue identifier: AEBN_v2_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBN_v2_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 68 721; No. of bytes in distributed program, including test data, etc.: 1 708 547; Distribution format: tar.gz; Programming language: Fortran 90; Computer

  15. Accuracy of linear measurement using cone-beam computed tomography at different reconstruction angles

    International Nuclear Information System (INIS)

    Nikneshan, Nikneshan; Aval, Shadi Hamidi; Bakhshalian, Neema; Shahab, Shahriyar; Mohammadpour, Mahdis; Sarikhani, Soodeh

    2014-01-01

    This study was performed to evaluate the effect of changing the orientation of a reconstructed image on the accuracy of linear measurements using cone-beam computed tomography (CBCT). Forty-two titanium pins were inserted in seven dry sheep mandibles. The length of these pins was measured using a digital caliper with readability of 0.01 mm. Mandibles were radiographed using a CBCT device. When the CBCT images were reconstructed, the orientation of slices was adjusted to parallel (i.e., 0 degrees), +10 degrees, +12 degrees, -12 degrees, and -10 degrees with respect to the occlusal plane. The length of the pins was measured by three radiologists, and the accuracy of these measurements was reported using descriptive statistics and one-way analysis of variance (ANOVA); p<0.05 was considered statistically significant. The differences in radiographic measurements ranged from -0.64 to +0.06 at the orientation of -12 degrees, -0.66 to -0.11 at -10 degrees, -0.51 to +0.19 at 0 degrees, -0.64 to +0.08 at +10 degrees, and -0.64 to +0.1 at +12 degrees. The mean absolute values of the errors were greater at negative orientations than at the parallel position or at positive orientations. The observers underestimated most of the variables by 0.5-0.1 mm (83.6%). In the second set of observations, the reproducibility at all orientations was greater than 0.9. Changing the slice orientation in the range of -12 degrees to +12 degrees reduced the accuracy of linear measurements obtained using CBCT. However, the error value was smaller than 0.5 mm and was, therefore, clinically acceptable.

  16. Performance study of monochromatic synchrotron X-ray computed tomography using a linear array detector

    Energy Technology Data Exchange (ETDEWEB)

    Kazama, Masahiro; Takeda, Tohoru; Itai, Yuji [Tsukuba Univ., Ibaraki (Japan). Inst. of Clinical Medicine; Akiba, Masahiro; Yuasa, Tetsuya; Hyodo, Kazuyuki; Ando, Masami; Akatsuka, Takao

    1997-09-01

    Monochromatic x-ray computed tomography (CT) using synchrotron radiation (SR) is being developed for detection of non-radioactive contrast materials at low concentration for application in clinical diagnosis. A new SR-CT system with improved contrast resolution was constructed using a linear array detector, which provides a wide dynamic range, and a double monochromator. The performance of this system was evaluated in a phantom and a rat model of brain ischemia. This system consists of a silicon (111) double crystal monochromator, an x-ray shutter, an ionization chamber, x-ray slits, a scanning table for the target organ, and an x-ray linear array detector. The research was carried out at the BLNE-5A bending magnet beam line of the Tristan Accumulation Ring in KEK, Japan. In this experiment, the reconstructed image of the spatial-resolution phantom clearly showed the 1 mm holes. At 1 mm slice thickness, the above K-edge image of the phantom showed contrast resolution at a concentration of 200 {mu}g/ml of iodine-based contrast materials, whereas the K-edge energy subtraction image showed contrast resolution at a concentration of 500 {mu}g/ml of contrast materials. The cerebral arteries filled with iodine microspheres were clearly revealed, and the ischemic regions at the right temporal lobe and frontal lobe were depicted as non-vascular regions. The measured minimal detectable concentration of iodine on the above K-edge image is about 6 times higher than the expected value of 35.3 {mu}g/ml because of the high dark current of this detector. Thus, a CCD detector cooled by liquid nitrogen, to improve the dynamic range of the detector, is under construction. (author)

  17. Linear correlation of interfacial tension at water-solvent interface, solubility of water in organic solvents, and SE* scale parameters

    International Nuclear Information System (INIS)

    Mezhov, E.A.; Khananashvili, N.L.; Shmidt, V.S.

    1988-01-01

    A linear correlation has been established between the solubility of water in water-immiscible organic solvents and the interfacial tension at the water-solvent interface on the one hand and the parameters of the SE* and π* scales for these solvents on the other hand. This allows us, using the known tabulated SE* or π* parameters for each solvent, to predict the values of the interfacial tension and the solubility of water for the corresponding systems. We have shown that the SE* scale allows us to predict these values more accurately than other known solvent scales, since in contrast to other scales it characterizes solvents found in equilibrium with water

  18. Canonical-ensemble extended Lagrangian Born-Oppenheimer molecular dynamics for the linear scaling density functional theory.

    Science.gov (United States)

    Hirakawa, Teruo; Suzuki, Teppei; Bowler, David R; Miyazaki, Tsuyoshi

    2017-10-11

    We discuss the development and implementation of a constant temperature (NVT) molecular dynamics scheme that combines the Nosé-Hoover chain thermostat with the extended Lagrangian Born-Oppenheimer molecular dynamics (BOMD) scheme, using a linear scaling density functional theory (DFT) approach. An integration scheme for this canonical-ensemble extended Lagrangian BOMD is developed and discussed in the context of the Liouville operator formulation. Linear scaling DFT canonical-ensemble extended Lagrangian BOMD simulations are tested on bulk silicon and silicon carbide systems to evaluate our integration scheme. The results show that the conserved quantity remains stable with no systematic drift even in the presence of the thermostat.

  19. Homogenization of linear viscoelastic three phase media: internal variable formulation versus full-field computation

    International Nuclear Information System (INIS)

    Blanc, V.; Barbie, L.; Masson, R.

    2011-01-01

    Homogenization of linear viscoelastic heterogeneous media is here extended from two-phase inclusion-matrix media to three-phase inclusion-matrix media. With each phase obeying a compressible Maxwellian behaviour, this analytic method leads to an equivalent elastic homogenization problem in the Laplace-Carson space. For some particular microstructures, such as the Hashin composite sphere assemblage, an exact solution is obtained. The inversion of the Laplace-Carson transforms of the overall stress-strain behaviour gives in such cases an internal variable formulation. As expected, the number of these internal variables and their evolution laws are modified to take into account the third phase. Moreover, evolution laws of averaged stresses and strains per phase can still be derived for three-phase media. Results of this model are compared to full-field computations of representative volume elements using the finite element method, for various concentrations and sizes of inclusion. Relaxation and creep test cases are performed in order to compare predictions of the effective response. The internal variable formulation is shown to yield accurate predictions in both cases. (authors)

  20. Demonstration of feed-forward control for linear optics quantum computation

    International Nuclear Information System (INIS)

    Pittman, T.B.; Jacobs, B.C.; Franson, J.D.

    2002-01-01

    One of the main requirements in linear optics quantum computing is the ability to perform single-qubit operations that are controlled by classical information fed forward from the output of single-photon detectors. These operations correspond to predetermined combinations of phase corrections and bit flips that are applied to the postselected output modes of nondeterministic quantum logic devices. Corrections of this kind are required in order to obtain the correct logical output for certain detection events, and their use can increase the overall success probability of the devices. In this paper, we report on the experimental demonstration of the use of this type of feed-forward system to increase the probability of success of a simple nondeterministic quantum logic operation from approximately (1/4) to (1/2). This logic operation involves the use of one target qubit and one ancilla qubit which, in this experiment, are derived from a parametric down-conversion photon pair. Classical information describing the detection of the ancilla photon is fed forward in real time and used to alter the quantum state of the output photon. A fiber-optic delay line is used to store the output photon until a polarization-dependent phase shift can be applied using a high-speed Pockels cell

  1. Stimulated Emission Computed Tomography (NSECT) images enhancement using a linear filter in the frequency domain

    Energy Technology Data Exchange (ETDEWEB)

    Viana, Rodrigo S.S.; Tardelli, Tiago C.; Yoriyaz, Helio, E-mail: hyoriyaz@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Jackowski, Marcel P., E-mail: mjack@ime.usp.b [University of Sao Paulo (USP), SP (Brazil). Dept. of Computer Science

    2011-07-01

    In recent years, a new technique for in vivo spectrographic imaging of stable isotopes was presented as Neutron Stimulated Emission Computed Tomography (NSECT). In this technique, a beam of fast neutrons stimulates stable nuclei in a sample, which emit characteristic gamma radiation. The photon energy is unique and is used to identify the emitting nuclei. The emitted gamma energy spectra can be used for reconstruction of the target tissue image and for determination of the tissue elemental composition. Due to the stochastic nature of the photon emission process in the irradiated tissue, one of the most suitable algorithms for tomographic reconstruction is the Expectation-Maximization (E-M) algorithm, since its formulation simultaneously considers the probabilities of photon emission and detection. However, a disadvantage of this algorithm is the introduction of noise in the reconstructed image as the number of iterations increases. This increase can be caused either by features of the algorithm itself or by the low sampling rate of the projections used for tomographic reconstruction. In this work, a linear filter in the frequency domain was used in order to improve the quality of the reconstructed images. (author)

  2. Stimulated Emission Computed Tomography (NSECT) images enhancement using a linear filter in the frequency domain

    International Nuclear Information System (INIS)

    Viana, Rodrigo S.S.; Tardelli, Tiago C.; Yoriyaz, Helio; Jackowski, Marcel P.

    2011-01-01

    In recent years, a new technique for in vivo spectrographic imaging of stable isotopes was presented as Neutron Stimulated Emission Computed Tomography (NSECT). In this technique, a beam of fast neutrons stimulates stable nuclei in a sample, which emit characteristic gamma radiation. The photon energy is unique and is used to identify the emitting nuclei. The emitted gamma energy spectra can be used for reconstruction of the target tissue image and for determination of the tissue elemental composition. Due to the stochastic nature of the photon emission process in the irradiated tissue, one of the most suitable algorithms for tomographic reconstruction is the Expectation-Maximization (E-M) algorithm, since its formulation simultaneously considers the probabilities of photon emission and detection. However, a disadvantage of this algorithm is the introduction of noise in the reconstructed image as the number of iterations increases. This increase can be caused either by features of the algorithm itself or by the low sampling rate of the projections used for tomographic reconstruction. In this work, a linear filter in the frequency domain was used in order to improve the quality of the reconstructed images. (author)
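
    A minimal sketch of frequency-domain linear filtering of a noisy reconstruction, in the spirit of the work described in the two records above; the specific filter used there is not stated, so a Gaussian low-pass mask and a toy phantom are assumed here:

        import numpy as np

        def lowpass_filter(image, sigma_frac=0.15):
            """Attenuate high spatial frequencies with a Gaussian mask in k-space."""
            ny, nx = image.shape
            ky = np.fft.fftfreq(ny)[:, None]
            kx = np.fft.fftfreq(nx)[None, :]
            mask = np.exp(-(kx**2 + ky**2) / (2.0 * sigma_frac**2))
            spectrum = np.fft.fft2(image)
            return np.real(np.fft.ifft2(spectrum * mask))

        # Noisy toy "reconstruction": a disc plus additive noise.
        rng = np.random.default_rng(0)
        y, x = np.mgrid[-64:64, -64:64]
        phantom = (x**2 + y**2 < 30**2).astype(float)
        noisy = phantom + 0.3 * rng.standard_normal(phantom.shape)
        smoothed = lowpass_filter(noisy)
        print(noisy.std(), smoothed.std())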

  3. Piecewise-linear and bilinear approaches to nonlinear differential equations approximation problem of computational structural mechanics

    OpenAIRE

    Leibov Roman

    2017-01-01

    This paper presents a bilinear approach to the problem of approximating systems of nonlinear differential equations. Sometimes linearization of the right-hand sides of nonlinear differential equations is extremely difficult or even impossible; piecewise-linear approximation of the nonlinear differential equations can then be used. Bilinear differential equations make it possible to improve on the behavior of piecewise-linear differential equations and to reduce the errors at the borders between different linear differential equation systems ...

  4. Quantization of liver tissue in dual kVp computed tomography using linear discriminant analysis

    Science.gov (United States)

    Tkaczyk, J. Eric; Langan, David; Wu, Xiaoye; Xu, Daniel; Benson, Thomas; Pack, Jed D.; Schmitz, Andrea; Hara, Amy; Palicek, William; Licato, Paul; Leverentz, Jaynne

    2009-02-01

    Linear discriminant analysis (LDA) is applied to dual-kVp CT and used for tissue characterization. The potential to quantitatively model both malignant and benign, hypo-intense liver lesions is evaluated by analysis of portal-phase, intravenous CT scan data obtained on human patients. Masses with an a priori classification are mapped to a distribution of points in basis material space. The degree of localization of tissue types in the material basis space is related to both quantum noise and real compositional differences. The density maps are analyzed with LDA and studied with system simulations to differentiate these factors. The discriminant analysis is formulated so as to incorporate the known statistical properties of the data. Effective kVp separation and mAs relate to the precision of tissue localization. Bias in the material position is related to the degree of X-ray scatter and the partial-volume effect. Experimental data and simulations demonstrate that for single-energy (HU) imaging or image-based decomposition, pixel values of water-like tissues depend on proximity to other iodine-filled bodies. Beam-hardening errors cause a shift in image value on the scale of the difference sought between cancerous and cystic lesions. In contrast, projection-based decomposition, or its equivalent when implemented on a carefully calibrated system, can provide accurate data. On such a system, LDA may provide novel quantitative capabilities for tissue characterization in dual-energy CT.
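
    A hedged sketch of the classification step: scikit-learn's LinearDiscriminantAnalysis applied to hypothetical two-material (water-like/iodine-like) basis densities for two lesion classes; the class means and noise levels are invented for illustration and do not come from the study above:

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)

        # Hypothetical basis-material densities (water-like, iodine-like) for two
        # lesion classes; the spread mimics quantum noise in the decomposition.
        cyst   = rng.normal(loc=[1.00, 0.000], scale=[0.02, 0.004], size=(200, 2))
        tumour = rng.normal(loc=[1.01, 0.012], scale=[0.02, 0.004], size=(200, 2))
        X = np.vstack([cyst, tumour])
        y = np.array([0] * 200 + [1] * 200)

        lda = LinearDiscriminantAnalysis().fit(X, y)
        print("training accuracy:", lda.score(X, y))
        print("class of a new ROI:", lda.predict([[1.005, 0.010]]))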

  5. PCG: A software package for the iterative solution of linear systems on scalar, vector and parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Joubert, W. [Los Alamos National Lab., NM (United States); Carey, G.F. [Univ. of Texas, Austin, TX (United States)

    1994-12-31

    A great need exists for high performance numerical software libraries transportable across parallel machines. This talk concerns the PCG package, which solves systems of linear equations by iterative methods on parallel computers. The features of the package are discussed, as well as the techniques used to obtain high performance and transportability across architectures. Representative numerical results are presented for several machines including the Connection Machine CM-5, Intel Paragon and Cray T3D parallel computers.
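
    The core operation such a package performs, preconditioned conjugate-gradient solution of a sparse symmetric positive definite system, can be sketched with SciPy; this stands in for, and is unrelated to, the PCG package itself, and the Jacobi preconditioner and 1-D Poisson test matrix are illustrative choices:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import cg, LinearOperator

        # 1-D Poisson matrix (symmetric positive definite, tridiagonal).
        n = 1000
        A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)

        # Jacobi (diagonal) preconditioner approximating A^{-1}.
        inv_diag = 1.0 / A.diagonal()
        M = LinearOperator((n, n), matvec=lambda v: inv_diag * v)

        x, info = cg(A, b, M=M)
        print("converged" if info == 0 else f"cg returned info={info}",
              "residual:", np.linalg.norm(b - A @ x))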

  6. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    Science.gov (United States)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modeling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solve each local sub-problem through very fast linear network programming algorithms, and (c) the substantial
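
    The per-time-step linearization idea, solving each local water-energy allocation as a small linear program, can be sketched as follows; the decision variables, coefficients and bounds are purely illustrative and not taken from the framework described above:

        import numpy as np
        from scipy.optimize import linprog

        # One monthly step: release to hydropower (x0), supply to water demand (x1),
        # and spill (x2), all in hm^3.  Maximize hydropower benefit plus demand
        # satisfaction (linprog minimizes, so the objective is negated).
        c = np.array([-3.0, -5.0, 0.0])

        storage, inflow = 120.0, 40.0
        demand = 50.0

        A_ub = np.array([[1.0, 1.0, 1.0],   # total outflow cannot exceed available water
                         [0.0, 1.0, 0.0]])  # supplied water cannot exceed the demand
        b_ub = np.array([storage + inflow, demand])
        bounds = [(0.0, 90.0),              # turbine capacity
                  (0.0, None),
                  (0.0, None)]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        print(res.x, -res.fun)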

  7. Consensus for linear multi-agent system with intermittent information transmissions using the time-scale theory

    Science.gov (United States)

    Taousser, Fatima; Defoort, Michael; Djemai, Mohamed

    2016-01-01

    This paper investigates the consensus problem for linear multi-agent system with fixed communication topology in the presence of intermittent communication using the time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens at a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to intermittent information transmissions. The time-scale theory provides a powerful tool to combine continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
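
    A toy simulation of the behaviour described, four agents on a fixed path graph whose coupling is only active during intermittent communication windows, is sketched below; it illustrates the effect of intermittent transmissions but not the time-scale formalism itself, and the duty cycle and step size are assumptions:

        import numpy as np

        # Graph Laplacian of a 4-agent path graph (fixed topology).
        L = np.array([[ 1, -1,  0,  0],
                      [-1,  2, -1,  0],
                      [ 0, -1,  2, -1],
                      [ 0,  0, -1,  1]], dtype=float)

        x = np.array([4.0, -1.0, 2.5, 0.0])   # initial states
        dt = 0.01
        for step in range(4000):
            t = step * dt
            communicating = (t % 1.0) < 0.6    # relative data available 60% of the time
            if communicating:
                x = x + dt * (-L @ x)          # continuous-time consensus update
            # otherwise states are simply held (no relative information available)

        print(x, "consensus value ~", x.mean())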

  8. Theoretical science and the future of large scale computing

    International Nuclear Information System (INIS)

    Wilson, K.G.

    1983-01-01

    The author describes the application of computer simulation to physical problems. In this connection, the FORTRAN language is considered. Furthermore, the application of computer networks to the processing of experimental data is described. (HSI).

  9. Bond-based linear indices in QSAR: computational discovery of novel anti-trichomonal compounds

    Science.gov (United States)

    Marrero-Ponce, Yovani; Meneses-Marcel, Alfredo; Rivera-Borroto, Oscar M.; García-Domenech, Ramón; De Julián-Ortiz, Jesus Vicente; Montero, Alina; Escario, José Antonio; Barrio, Alicia Gómez; Pereira, David Montero; Nogal, Juan José; Grau, Ricardo; Torrens, Francisco; Vogel, Christian; Arán, Vicente J.

    2008-08-01

    Trichomonas vaginalis (Tv) is the causative agent of the most common, non-viral, sexually transmitted disease in women and men worldwide. Since 1959, metronidazole (MTZ) has been the drug of choice in the systemic treatment of trichomoniasis. However, resistance to MTZ in some patients and the great cost associated with the development of new trichomonacidals make necessary the development of computational methods that shorten the drug discovery pipeline. Toward this end, bond-based linear indices, new TOMOCOMD-CARDD molecular descriptors, and linear discriminant analysis were used to discover novel trichomonacidal chemicals. The obtained models, using non-stochastic and stochastic indices, are able to classify correctly 89.01% (87.50%) and 82.42% (84.38%) of the chemicals in the training (test) sets, respectively. These results validate the models for their use in ligand-based virtual screening. In addition, they show large Matthews correlation coefficients (C) of 0.78 (0.71) and 0.65 (0.65) for the training (test) sets, respectively. The results of predictions on the 10% full-out cross-validation test also evidence the robustness of the obtained models. Later, both models are applied to the virtual screening of 12 compounds already proved against Tv. As a result, they correctly classify 10 out of 12 (83.33%) and 9 out of 12 (75.00%) of the chemicals, respectively, which is the most important criterion for validating the models. Besides, these classification functions are applied to a library of seven chemicals in order to find novel antitrichomonal agents. These compounds are synthesized and tested for in vitro activity against Tv. As a result, experimental observations approached theoretical predictions, with a correct classification of 85.71% (6 out of 7) of the chemicals being obtained. Moreover, out of the seven compounds that are screened, synthesized and biologically assayed, six compounds (VA7-34, VA7-35, VA7-37, VA7-38, VA7-68, VA7-70) show
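
    The Matthews correlation coefficient quoted for these models is computed directly from the binary confusion matrix; a minimal sketch with invented counts:

        import math

        def matthews_cc(tp, tn, fp, fn):
            """Matthews correlation coefficient from binary confusion-matrix counts."""
            num = tp * tn - fp * fn
            den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
            return num / den if den else 0.0

        # Illustrative counts for an active/inactive trichomonacidal classifier.
        print(matthews_cc(tp=41, tn=40, fp=5, fn=5))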

  10. INTRANS. A computer code for the non-linear structural response analysis of reactor internals under transient loads

    International Nuclear Information System (INIS)

    Ramani, D.T.

    1977-01-01

    The 'INTRANS' system is a general purpose computer code designed to perform linear and non-linear structural stress and deflection analysis of impacting or non-impacting nuclear reactor internals components coupled with the reactor vessel, shield building, and external as well as internal gapped spring support systems. This paper describes a unique computational procedure for evaluating the dynamic response of reactor internals, discretised as a beam and lumped-mass structural system and subjected to external transient loads such as seismic and LOCA time-history forces. The computational procedure is outlined in the INTRANS code, which computes component flexibilities of a discrete lumped-mass planar model of reactor internals by idealising an assemblage of finite elements consisting of linear elastic beams with bending, torsional and shear stiffnesses, interacting with an external or internal, linear as well as non-linear, multi-gapped spring support system. The method of analysis is based on the displacement method, and the code uses the fourth-order Runge-Kutta numerical integration technique as the basis for the solution of the dynamic equilibrium equations of motion of the system. During the computing process, the dynamic response of each lumped mass is calculated at each instant of time using the well-known step-by-step procedure. At any instant of time, the transient dynamic motions of the system are held stationary and, based on the predicted motions and internal forces of the previous instant, the complete response at any time step of interest may then be computed. Using this iterative process, the relationship between motions and internal forces is satisfied step by step throughout the time interval
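
    A minimal sketch of the step-by-step fourth-order Runge-Kutta integration idea applied to a single lumped mass on a damped spring with an additional gapped (piecewise-linear) support; this is not the INTRANS model, and the mass, stiffness, gap and load values are assumptions for illustration:

        import numpy as np

        m, k, c = 100.0, 4.0e4, 200.0       # mass, primary stiffness, damping (assumed)
        k_gap, gap = 2.0e5, 0.005           # gapped support engages beyond 5 mm (assumed)

        def force(t):
            return 1.0e3 * np.sin(30.0 * t)          # illustrative transient load

        def accel(t, x, v):
            f_gap = 0.0
            if abs(x) > gap:                          # contact: extra support stiffness
                f_gap = k_gap * (abs(x) - gap) * np.sign(x)
            return (force(t) - k * x - c * v - f_gap) / m

        def rk4_step(t, x, v, dt):
            k1x, k1v = v, accel(t, x, v)
            k2x, k2v = v + 0.5*dt*k1v, accel(t + 0.5*dt, x + 0.5*dt*k1x, v + 0.5*dt*k1v)
            k3x, k3v = v + 0.5*dt*k2v, accel(t + 0.5*dt, x + 0.5*dt*k2x, v + 0.5*dt*k2v)
            k4x, k4v = v + dt*k3v,     accel(t + dt,     x + dt*k3x,     v + dt*k3v)
            x += dt/6.0 * (k1x + 2*k2x + 2*k3x + k4x)
            v += dt/6.0 * (k1v + 2*k2v + 2*k3v + k4v)
            return x, v

        t, x, v, dt = 0.0, 0.0, 0.0, 1.0e-4
        peak = 0.0
        while t < 2.0:
            x, v = rk4_step(t, x, v, dt)
            t += dt
            peak = max(peak, abs(x))
        print("peak displacement [m]:", peak)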

  11. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Anthony B., E-mail: acosta@northwestern.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Green, Jason R., E-mail: jason.green@umb.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125 (United States)

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N{sup 2} (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.

  12. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    International Nuclear Information System (INIS)

    Costa, Anthony B.; Green, Jason R.

    2013-01-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N^2 (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra
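
    The Gram-Schmidt (QR) reorthonormalization at the heart of these calculations is compact; a hedged sketch for the two-dimensional Hénon map using NumPy's QR routine, unrelated to the Scalapack and MAGMA implementations described in the records above:

        import numpy as np

        def henon(x, y, a=1.4, b=0.3):
            return 1.0 - a * x * x + y, b * x

        def jacobian(x, y, a=1.4, b=0.3):
            return np.array([[-2.0 * a * x, 1.0],
                             [b,            0.0]])

        x, y = 0.1, 0.1
        Q = np.eye(2)                       # orthonormal tangent vectors
        log_r = np.zeros(2)
        n_steps = 100_000
        for _ in range(n_steps):
            Q = jacobian(x, y) @ Q          # evolve tangent vectors
            Q, R = np.linalg.qr(Q)          # re-orthonormalize (Gram-Schmidt step)
            log_r += np.log(np.abs(np.diag(R)))
            x, y = henon(x, y)

        print("Lyapunov exponents:", log_r / n_steps)   # roughly +0.42 and -1.62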

  13. Instructional Supports for Representational Fluency in Solving Linear Equations with Computer Algebra Systems and Paper-and-Pencil

    Science.gov (United States)

    Fonger, Nicole L.; Davis, Jon D.; Rohwer, Mary Lou

    2018-01-01

    This research addresses the issue of how to support students' representational fluency--the ability to create, move within, translate across, and derive meaning from external representations of mathematical ideas. The context of solving linear equations in a combined computer algebra system (CAS) and paper-and-pencil classroom environment is…

  14. A linear programming computational framework integrates phosphor-proteomics and prior knowledge to predict drug efficacy.

    Science.gov (United States)

    Ji, Zhiwei; Wang, Bing; Yan, Ke; Dong, Ligang; Meng, Guanmin; Shi, Lei

    2017-12-21

    In recent years, the integration of 'omics' technologies, high-performance computation, and mathematical modeling of biological processes marks that systems biology has started to fundamentally impact the way drug discovery is approached. The LINCS public data warehouse provides detailed information about cell responses to various genetic and environmental stressors. It can be greatly helpful in developing new drugs and therapeutics, as well as in addressing the lack of effective drugs, drug resistance, and relapse in cancer therapies. In this study, we developed a Ternary status based Integer Linear Programming (TILP) method to infer cell-specific signaling pathway networks and predict compounds' treatment efficacy. The novelty of our study is that phosphor-proteomic data and prior knowledge are combined for modeling and optimizing the signaling network. To test the power of our approach, a generic pathway network was constructed for the human breast cancer cell line MCF7, and the TILP model was used to infer MCF7-specific pathways from a set of phosphor-proteomic data collected for ten representative small-molecule chemical compounds (most of them studied in breast cancer treatment). Cross-validation indicated that the MCF7-specific pathway network inferred by TILP was reliable in predicting a compound's efficacy. Finally, we applied TILP to re-optimize the inferred cell-specific pathways and predict the outcomes of five small compounds (carmustine, doxorubicin, GW-8510, daunorubicin, and verapamil), which are rarely used clinically for breast cancer. In the simulation, the proposed approach allows us to identify a compound's treatment efficacy qualitatively and quantitatively, and the cross-validation analysis indicated good accuracy in predicting the effects of the five compounds. In summary, the TILP model is useful for discovering new drugs for clinical use, and also for elucidating the potential mechanisms of a compound on its targets.
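
    A tiny integer linear program in the same spirit (choosing which candidate network edges to keep subject to consistency constraints) can be written with scipy.optimize.milp, available in SciPy 1.9+; the variables, constraints and data below are purely illustrative and do not reproduce the TILP formulation:

        import numpy as np
        from scipy.optimize import milp, LinearConstraint, Bounds

        # Binary variables: keep edge_i in the cell-specific network (1) or not (0).
        # Objective: keep as few edges as possible while explaining the data.
        c = np.ones(4)

        # Illustrative constraints: edge 0 or 1 must explain observation A
        # (x0 + x1 >= 1); edges 2 and 3 must jointly explain observation B
        # (x2 + x3 >= 2, i.e. both are required).
        constraints = LinearConstraint(np.array([[1, 1, 0, 0],
                                                 [0, 0, 1, 1]]),
                                       lb=[1, 2], ub=[np.inf, np.inf])

        res = milp(c=c,
                   constraints=constraints,
                   integrality=np.ones(4),           # all variables integer
                   bounds=Bounds(lb=0, ub=1))        # ...and binary
        print(res.x, res.fun)                        # e.g. [1. 0. 1. 1.] with cost 3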

  15. On Feature Extraction from Large Scale Linear LiDAR Data

    Science.gov (United States)

    Acharjee, Partha Pratim

    Airborne light detection and ranging (LiDAR) can generate co-registered elevation and intensity maps over large terrains. The co-registered 3D map and intensity information can be used efficiently for different feature extraction applications. In this dissertation, we developed two algorithms for feature extraction and demonstrated uses of the extracted features in practical applications. One of the developed algorithms can map still and flowing waterbody features, and the other can extract building features and estimate solar potential on rooftops and facades. Remote sensing capabilities, the distinguishing characteristics of laser returns from water surfaces, and specific data collection procedures give LiDAR data an edge in this application domain. Furthermore, water surface mapping solutions must work on extremely large datasets, from a thousand square miles to hundreds of thousands of square miles. National and state-wide map generation/upgradation and hydro-flattening of LiDAR data for many other applications are two leading needs of water surface mapping. These call for as much automation as possible. Researchers have developed many semi-automated algorithms using multiple semi-automated tools and human interventions. This work describes a consolidated algorithm and toolbox developed for large-scale, automated water surface mapping. Geometric features such as the flatness of the water surface and the higher elevation change at the water-land interface, and optical properties such as dropouts caused by specular reflection and bimodal intensity distributions, were some of the linear LiDAR features exploited for water surface mapping. Large-scale data handling capabilities are incorporated by automated and intelligent windowing, by resolving boundary issues, and by integrating all results into a single output. The whole algorithm is developed as an ArcGIS toolbox using Python libraries. Testing and validation are performed on large datasets to determine the effectiveness of the toolbox and results are

  16. Evaluation of non-linear blending in dual-energy computed tomography

    International Nuclear Information System (INIS)

    Holmes, David R.; Fletcher, Joel G.; Apel, Anja; Huprich, James E.; Siddiki, Hassan; Hough, David M.; Schmidt, Bernhard; Flohr, Thomas G.; Robb, Richard; McCollough, Cynthia; Wittmer, Michael; Eusemann, Christian

    2008-01-01

    Dual-energy CT scanning has significant potential for disease identification and classification. However, it dramatically increases the amount of data collected and therefore impacts the clinical workflow. One way to simplify image review is to fuse CT datasets of different tube energies into a single blended dataset with desirable properties. A non-linear blending method based on a modified sigmoid function was compared to a standard 0.3 linear blending method. The methods were evaluated in both a liver phantom and patient study. The liver phantom contained six syringes of known CT contrast which were placed in a bovine liver. After scanning at multiple tube currents (45, 55, 65, 75, 85, 95, 105, and 115 mAs for the 140-kV tube), the datasets were blended using both methods. A contrast-to-noise ratio (CNR) measure was calculated for each syringe. In addition, all eight scans were normalized using the effective dose and statistically compared. In the patient study, 45 dual-energy CT scans were retrospectively mixed using the 0.3 linear blending and modified sigmoid blending functions. The scans were compared visually by two radiologists. For the 15, 45, and 64 HU syringes, the non-linear blended images exhibited similar CNR to the linear blended images; however, for the 79, 116, and 145 HU syringes, the non-linear blended images consistently had a higher CNR across dose settings. The radiologists qualitatively preferred the non-linear blended images of the phantom. In the patient study, the radiologists preferred non-linear blending in 31 of 45 cases with a strong preference in bowel and liver cases. Non-linear blending of dual energy data can provide an improvement in CNR over linear blending and is accompanied by a visual preference for non-linear blended images. Further study on selection of blending parameters and lesion conspicuity in non-linear blended images is being pursued.
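
    As a numerical illustration of the two blending strategies compared above, the sketch below mixes synthetic low- and high-kV values with a fixed 0.3 weight and with a modified-sigmoid weight, and reports a contrast-to-noise ratio for each. The sigmoid centre and width, the HU values and the noise levels are illustrative placeholders, not the parameters used in the study.

      # Linear (w = 0.3) vs. modified-sigmoid blending of dual-energy CT values.
      import numpy as np

      def linear_blend(low_kv, high_kv, w=0.3):
          return w * low_kv + (1.0 - w) * high_kv

      def sigmoid_blend(low_kv, high_kv, centre=150.0, width=50.0):
          ref = 0.5 * (low_kv + high_kv)                     # value steering the local weight
          w = 1.0 / (1.0 + np.exp(-(ref - centre) / width))  # more low-kV weight at high HU
          return w * low_kv + (1.0 - w) * high_kv

      def cnr(roi, background):
          return abs(roi.mean() - background.mean()) / background.std()

      rng = np.random.default_rng(0)
      low = rng.normal(140, 20, 10_000)     # synthetic high-contrast, noisier low-kV ROI
      high = rng.normal(100, 10, 10_000)    # synthetic lower-contrast high-kV ROI
      bg_low, bg_high = rng.normal(40, 20, 10_000), rng.normal(40, 10, 10_000)

      for name, blend in [("linear 0.3", linear_blend), ("sigmoid", sigmoid_blend)]:
          print(name, round(cnr(blend(low, high), blend(bg_low, bg_high)), 2))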

  17. Thresholds, switches and hysteresis in hydrology from the pedon to the catchment scale: a non-linear systems theory

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Hysteresis is a rate-independent non-linearity that is expressed through thresholds, switches, and branches. Exceedance of a threshold, or the occurrence of a turning point in the input, switches the output onto a particular output branch. Rate-independent branching on a very large set of switches with non-local memory is the central concept in the new definition of hysteresis. Hysteretic loops are a special case. A self-consistent mathematical description of hydrological systems with hysteresis demands a new non-linear systems theory of adequate generality. The goal of this paper is to establish this and to show how this may be done. Two results are presented: a conceptual model for the hysteretic soil-moisture characteristic at the pedon scale and a hysteretic linear reservoir at the catchment scale. Both are based on the Preisach model. A result of particular significance is the demonstration that the independent domain model of the soil moisture characteristic due to Childs, Poulavassilis, Mualem and others, is equivalent to the Preisach hysteresis model of non-linear systems theory, a result reminiscent of the reduction of the theory of the unit hydrograph to linear systems theory in the 1950s. A significant reduction in the number of model parameters is also achieved. The new theory implies a change in modelling paradigm.
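
    The Preisach construction referred to above can be made concrete with a few lines of code: the output is a weighted superposition of elementary relay hysterons, each switching up at its upper threshold and down at its lower threshold, which reproduces rate-independent branching. The thresholds and weights below are illustrative and are not calibrated to any soil-moisture or reservoir data.

      # Minimal Preisach-type hysteresis operator built from two-state relays.
      import numpy as np

      class Relay:
          def __init__(self, lower, upper, state=-1):
              self.lower, self.upper, self.state = lower, upper, state

          def update(self, u):
              if u >= self.upper:
                  self.state = +1
              elif u <= self.lower:
                  self.state = -1
              return self.state            # rate-independent: depends only on input history

      class Preisach:
          def __init__(self, relays, weights):
              self.relays, self.weights = relays, np.asarray(weights, float)

          def __call__(self, u):
              return float(np.dot(self.weights, [r.update(u) for r in self.relays]))

      # uniform grid of switch thresholds over the Preisach half-plane (lower <= upper)
      thresholds = [(a, b) for a in np.linspace(0, 1, 8) for b in np.linspace(0, 1, 8) if a <= b]
      model = Preisach([Relay(a, b) for a, b in thresholds], np.ones(len(thresholds)))

      u = np.concatenate([np.linspace(0, 1, 50), np.linspace(1, 0.3, 35), np.linspace(0.3, 1, 35)])
      y = [model(v) for v in u]            # increasing and decreasing branches differ
      print(min(y), max(y))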

  18. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models.

    Science.gov (United States)

    Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E

    2014-05-01

    The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with those of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
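
    The abstract does not give the exact design matrices, so the sketch below shows only one plausible form of a general-linear-model predictor consistent with the description: log daily volume regressed on log initial volume and treatment day, evaluated by leave-one-out cross-validation on synthetic volumes.

      # Hedged sketch of a GLM-style predictor of daily tumor volume (synthetic data).
      import numpy as np

      rng = np.random.default_rng(1)
      n_tumors, n_days = 10, 30
      v0 = rng.uniform(5.0, 40.0, n_tumors)               # initial volumes, cm^3 (synthetic)
      days = np.arange(n_days)
      vols = v0[:, None] * np.exp(-0.02 * days) * rng.lognormal(0, 0.05, (n_tumors, n_days))

      def fit_predict(train_idx, test_idx):
          X = np.column_stack([np.ones(n_days * len(train_idx)),
                               np.repeat(np.log(v0[train_idx]), n_days),
                               np.tile(days, len(train_idx))])
          beta, *_ = np.linalg.lstsq(X, np.log(vols[train_idx]).ravel(), rcond=None)
          Xt = np.column_stack([np.ones(n_days), np.full(n_days, np.log(v0[test_idx])), days])
          return np.exp(Xt @ beta)

      rel_err = [np.mean(np.abs(fit_predict(np.delete(np.arange(n_tumors), i), i) - vols[i]) / vols[i])
                 for i in range(n_tumors)]
      print("mean leave-one-out relative error:", round(float(np.mean(rel_err)), 3))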

  19. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra.

    Science.gov (United States)

    Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C

    2010-09-21

    We present calculations of formation energies of defects in an ionic solid (Al(2)O(3)) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.

  20. How many accelerograms to use and how to deal with scattering for transient non-linear seismic computations?

    International Nuclear Information System (INIS)

    Viallet, E.; Heinfling, G.

    2005-01-01

    Due to the increased capabilities of computers, it is nowadays possible to perform dynamic non-linear computations of structures to evaluate their ultimate behavior under seismic loads using refined finite element models. Nevertheless, one key parameter for such complex computations is the input load (i.e. the input time histories), which may lead to important discrepancies in the results and is therefore difficult to deal with for engineering purposes (variability, number of time histories to use...). In this situation, the number of accelerograms to be used and the way to deal with the results have to be carefully assessed. The objective of this study is to give some elements concerning (i) the number of accelerograms to be used for transient non-linear computations and (ii) the way to account for the scattering of results. For this purpose, some simplified non-linear models are used. These models represent characteristic types of non-linearities, such as: - a reinforced concrete (RC) structure model (with plastic non-linearity), - a PWR core model (with impact non-linearity). For each type of non-linearity, different sets of accelerograms are used (artificial and natural ones). Each set is composed of a relatively high number of accelerograms in order to get proper trends. The results are expressed in terms of the average and standard deviation values of the characteristic parameter for each non-linearity (i.e. ductility drift for the RC structure model and impact force for the PWR core model). The results show that a relatively large number of time histories may be necessary to get proper predictions of the average value of the characteristic non-linear parameter under consideration. In that situation, it would be difficult to deal with such a requirement for complex studies on real structures. Nevertheless, it may be necessary to perform transient non-linear seismic computations for design analyses, but with a reduced number of calculations. For this purpose, the previous results are analyzed
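
    The central question of the record above, how many time histories are needed for a stable estimate of the mean non-linear response, can be illustrated with elementary ensemble statistics. The sketch below uses synthetic ductility values (not results from the study) and a simple normal-approximation rule for the required number of records.

      # How many accelerograms for a given precision on the mean response? (synthetic data)
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      ductility = rng.lognormal(mean=1.0, sigma=0.4, size=30)   # responses from 30 records

      m, s, n = ductility.mean(), ductility.std(ddof=1), len(ductility)
      ci = stats.t.interval(0.90, df=n - 1, loc=m, scale=s / np.sqrt(n))
      print(f"mean = {m:.2f}, 90% CI = ({ci[0]:.2f}, {ci[1]:.2f})")

      target = 0.10                                             # +/-10% on the mean
      z = stats.norm.ppf(0.95)                                  # two-sided 90% confidence
      n_required = int(np.ceil((z * s / (target * m)) ** 2))
      print("records needed for +/-10% at 90% confidence:", n_required)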

  1. Accuracy and Reliability of Cone-Beam Computed Tomography for Linear and Volumetric Mandibular Condyle Measurements. A Human Cadaver Study.

    Science.gov (United States)

    García-Sanz, Verónica; Bellot-Arcís, Carlos; Hernández, Virginia; Serrano-Sánchez, Pedro; Guarinos, Juan; Paredes-Gallardo, Vanessa

    2017-09-20

    The accuracy of Cone-Beam Computed Tomography (CBCT) on linear and volumetric measurements on condyles has only been assessed on dry skulls. The aim of this study was to evaluate the reliability and accuracy of linear and volumetric measurements of mandibular condyles in the presence of soft tissues using CBCT. Six embalmed cadaver heads were used. CBCT scans were taken, followed by the extraction of the condyles. The water displacement technique was used to calculate the volumes of the condyles and three linear measurements were made using a digital caliper, these measurements serving as the gold standard. Surface models of the condyles were obtained using a 3D scanner, and superimposed onto the CBCT images. Condyles were isolated on the CBCT render volume using the surface models as reference and volumes were measured. Linear measurements were made on CBCT slices. The CBCT method was found to be reliable for both volumetric and linear measurements (CV  0.90). Highly accurate values were obtained for the three linear measurements and volume. CBCT is a reliable and accurate method for taking volumetric and linear measurements on mandibular condyles in the presence of soft tissue, and so a valid tool for clinical diagnosis.

  2. Computational Tools for Probing Interactions in Multiple Linear Regression, Multilevel Modeling, and Latent Curve Analysis

    Science.gov (United States)

    Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.

    2006-01-01

    Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…

  3. Single-polymer dynamics under constraints: scaling theory and computer experiment

    International Nuclear Information System (INIS)

    Milchev, Andrey

    2011-01-01

    The relaxation, diffusion and translocation dynamics of single linear polymer chains in confinement is briefly reviewed with emphasis on the comparison between theoretical scaling predictions and observations from experiment or, most frequently, from computer simulations. Besides cylindrical, spherical and slit-like constraints, related problems such as the chain dynamics in a random medium and the translocation dynamics through a nanopore are also considered. Another particular kind of confinement is imposed by polymer adsorption on attractive surfaces or selective interfaces-a short overview of single-chain dynamics is also contained in this survey. While both theory and numerical experiments consider predominantly coarse-grained models of self-avoiding linear chain molecules with typically Rouse dynamics, we also note some recent studies which examine the impact of hydrodynamic interactions on polymer dynamics in confinement. In all of the aforementioned cases we focus mainly on the consequences of imposed geometric restrictions on single-chain dynamics and try to check our degree of understanding by assessing the agreement between theoretical predictions and observations. (topical review)

  4. Computer modelling of a linear turbine for extracting energy from slow-flowing waters

    International Nuclear Information System (INIS)

    Raykov, Plamen

    2014-01-01

    The aim of the paper is to describe the main relationships in the process of designing linear chain turbines with blades, and their accompanying devices, for obtaining energy from slow-flowing waters. Based on the shortcomings of previous types of linear turbines, a new concept for the arrangement of the blade angles with respect to the flowing water was developed. The dependencies between the geometrical parameters of the newly designed linear water turbine and the force applied by the flowing water to the blades are obtained. The optimal relationship between the velocity of the stream water and the extracted power is calculated. The ratios between the power characteristics of the extracted energy for different blade speeds and inclination angles are presented. On the basis of the theoretical results, a new linear turbine prototype with inclined blades was designed. Key words: water power system, blade-chain devices, linear turbines

  5. Linear and non-linear optics of nano-scale 2‧,7‧dichloro-fluorescein/FTO optical system: Bandgap and dielectric analysis

    Science.gov (United States)

    Iqbal, Javed; Yahia, I. S.; Zahran, H. Y.; AlFaify, S.; AlBassam, A. M.; El-Naggar, A. M.

    2016-12-01

    2‧,7‧-dichloro-fluorescein (DCF) is a promising organic semiconductor material for different technological applications such as solar cells, photodiodes and Schottky diodes. A DCF thin film on conductive glass (FTO glass) was prepared by a low-cost spin coating technique. The spectrophotometric data, namely the absorbance, reflectance and transmittance, were recorded in the 350-2500 nm wavelength range at normal incidence. The linear refractive index (n) and absorption index (k) were computed using Fresnel's equations. The optical band gap was evaluated, and two band gaps were found: (1) the first, equal to 3.4 eV, is related to the band gap of the FTO/glass substrate, and (2) the second, equal to 2.25 eV, is related to the absorption edge of DCF. The non-linear parameters, such as the non-linear refractive index (n2) and the optical susceptibility χ(3), were evaluated by a spectroscopic method based on the refractive index. Both n2 and χ(3) increase rapidly with increasing wavelength, with a redshift of the absorption. Our work presents a new idea for using FTO glass in a new generation of optical devices and technology.
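
    As a sketch of how n and k can be recovered from spectrophotometric data, the snippet below applies the standard single-layer relations often used alongside Fresnel's equations. The film thickness, reflectance and transmittance values are placeholders, not data from the study.

      # Recovering the linear optical constants n and k from R and T (illustrative values).
      import numpy as np

      d = 200e-9                                       # film thickness in metres (assumed)
      wavelength = np.linspace(400e-9, 2500e-9, 5)
      R = np.array([0.18, 0.15, 0.12, 0.10, 0.09])     # illustrative reflectance spectrum
      T = np.array([0.55, 0.65, 0.75, 0.82, 0.85])     # illustrative transmittance spectrum

      alpha = np.log((1.0 - R) ** 2 / T) / d           # absorption coefficient (1/m)
      k = alpha * wavelength / (4.0 * np.pi)           # absorption (extinction) index
      n = (1.0 + R) / (1.0 - R) + np.sqrt(4.0 * R / (1.0 - R) ** 2 - k ** 2)

      for lam, nn, kk in zip(wavelength, n, k):
          print(f"{lam * 1e9:6.0f} nm  n = {nn:5.2f}  k = {kk:8.5f}")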

  6. Non-linear optics of nano-scale pentacene thin film

    Science.gov (United States)

    Yahia, I. S.; Alfaify, S.; Jilani, Asim; Abdel-wahab, M. Sh.; Al-Ghamdi, Attieh A.; Abutalib, M. M.; Al-Bassam, A.; El-Naggar, A. M.

    2016-07-01

    We present new ways to investigate the linear and non-linear optical properties of a nanostructured pentacene thin film deposited by the thermal evaporation technique. Pentacene is a key material in organic semiconductor technology. The existence of the nano-structured thin film was confirmed by atomic force microscopy and X-ray diffraction. The wavelength-dependent transmittance and reflectance were calculated to observe the optical behavior of the pentacene thin film. Anomalous dispersion was observed around a wavelength of λ = 800 nm. The non-linear refractive index of the deposited films was investigated. The linear optical susceptibility of the pentacene thin film was calculated, and the non-linear optical susceptibility was found to be about 6 × 10^-13 esu. The advantage of this work is the use of a spectroscopic method to calculate the linear and non-linear optical response of pentacene thin films rather than the expensive Z-scan technique. The calculated optical behavior of the pentacene thin films could be exploited in organic thin-film based advanced optoelectronic devices such as telecommunications devices.

  7. A large-scale evaluation of computational protein function prediction

    NARCIS (Netherlands)

    Radivojac, P.; Clark, W.T.; Oron, T.R.; Schnoes, A.M.; Wittkop, T.; Kourmpetis, Y.A.I.; Dijk, van A.D.J.; Friedberg, I.

    2013-01-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be

  8. A computer literacy scale for newly enrolled nursing college students: development and validation.

    Science.gov (United States)

    Lin, Tung-Cheng

    2011-12-01

    Increasing application and use of information systems and mobile technologies in the healthcare industry require increasing nurse competency in computer use. Computer literacy is defined as basic computer skills, whereas computer competency is defined as the computer skills necessary to accomplish job tasks. Inadequate attention has been paid to computer literacy and computer competency scale validity. This study developed a computer literacy scale with good reliability and validity and investigated the current computer literacy of newly enrolled students to develop computer courses appropriate to students' skill levels and needs. This study referenced Hinkin's process to develop a computer literacy scale. Participants were newly enrolled first-year undergraduate students, with nursing or nursing-related backgrounds, currently attending a course entitled Information Literacy and Internet Applications. Researchers examined reliability and validity using confirmatory factor analysis. The final version of the developed computer literacy scale included six constructs (software, hardware, multimedia, networks, information ethics, and information security) and 22 measurement items. Confirmatory factor analysis showed that the scale possessed good content validity, reliability, convergent validity, and discriminant validity. This study also found that participants earned the highest scores for the network domain and the lowest score for the hardware domain. With increasing use of information technology applications, courses related to hardware topic should be increased to improve nurse problem-solving abilities. This study recommends that emphases on word processing and network-related topics may be reduced in favor of an increased emphasis on database, statistical software, hospital information systems, and information ethics.

  9. Linear programming using Matlab

    CERN Document Server

    Ploskas, Nikolaos

    2017-01-01

    This book offers a theoretical and computational presentation of a variety of linear programming algorithms and methods with an emphasis on the revised simplex method and its components. A theoretical background and mathematical formulation is included for each algorithm as well as comprehensive numerical examples and corresponding MATLAB® code. The MATLAB® implementations presented in this book  are sophisticated and allow users to find solutions to large-scale benchmark linear programs. Each algorithm is followed by a computational study on benchmark problems that analyze the computational behavior of the presented algorithms. As a solid companion to existing algorithmic-specific literature, this book will be useful to researchers, scientists, mathematical programmers, and students with a basic knowledge of linear algebra and calculus.  The clear presentation enables the reader to understand and utilize all components of simplex-type methods, such as presolve techniques, scaling techniques, pivoting ru...

  10. Computational methods for criticality safety analysis within the scale system

    International Nuclear Information System (INIS)

    Parks, C.V.; Petrie, L.M.; Landers, N.F.; Bucholz, J.A.

    1986-01-01

    The criticality safety analysis capabilities within the SCALE system are centered around the Monte Carlo codes KENO IV and KENO V.a, which are both included in SCALE as functional modules. The XSDRNPM-S module is also an important tool within SCALE for obtaining multiplication factors for one-dimensional system models. This paper reviews the features and modeling capabilities of these codes along with their implementation within the Criticality Safety Analysis Sequences (CSAS) of SCALE. The CSAS modules provide automated cross-section processing and user-friendly input that allow criticality safety analyses to be done in an efficient and accurate manner. 14 refs., 2 figs., 3 tabs

  11. Linear programming

    CERN Document Server

    Solow, Daniel

    2014-01-01

    This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.

  12. Developing a New Computer Game Attitude Scale for Taiwanese Early Adolescents

    Science.gov (United States)

    Liu, Eric Zhi-Feng; Lee, Chun-Yi; Chen, Jen-Huang

    2013-01-01

    With ever increasing exposure to computer games, gaining an understanding of the attitudes held by young adolescents toward such activities is crucial; however, few studies have provided scales with which to accomplish this. This study revisited the Computer Game Attitude Scale developed by Chappell and Taylor in 1997, reworking the overall…

  13. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals

    Energy Technology Data Exchange (ETDEWEB)

    Zuehlsdorff, T. J., E-mail: tjz21@cam.ac.uk; Payne, M. C. [Cavendish Laboratory, J. J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Hine, N. D. M. [Department of Physics, University of Warwick, Coventry CV4 7AL (United Kingdom); Haynes, P. D. [Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Department of Physics, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Thomas Young Centre for Theory and Simulation of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom)

    2015-11-28

    We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.

  14. Testing linear growth rate formulas of non-scale endogenous growth models

    NARCIS (Netherlands)

    Ziesemer, Thomas

    2017-01-01

    Endogenous growth theory has produced formulas for steady-state growth rates of income per capita which are linear in the growth rate of the population. Depending on the details of the models, slopes and intercepts are positive, zero or negative. Empirical tests have taken over the assumption of

  15. High-performance small-scale solvers for linear Model Predictive Control

    DEFF Research Database (Denmark)

    Frison, Gianluca; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd

    2014-01-01

    , with the two main research areas of explicit MPC and tailored on-line MPC. State-of-the-art solvers in this second class can outperform optimized linear-algebra libraries (BLAS) only for very small problems, and do not explicitly exploit the hardware capabilities, relying on compilers for that. This approach...

  16. Towards TeV-scale electron-positron collisions: the Compact Linear Collider (CLIC)

    Science.gov (United States)

    Doebert, Steffen; Sicking, Eva

    2018-02-01

    The Compact Linear Collider (CLIC), a future electron-positron collider at the energy frontier, has the potential to change our understanding of the universe. Proposed to follow the Large Hadron Collider (LHC) programme at CERN, it is conceived for precision measurements as well as for searches for new phenomena.

  17. The Computer Program LIAR for Beam Dynamics Calculations in Linear Accelerators

    International Nuclear Information System (INIS)

    Assmann, R.W.; Adolphsen, C.; Bane, K.; Raubenheimer, T.O.; Siemann, R.H.; Thompson, K.

    2011-01-01

    Linear accelerators are the central components of the proposed next generation of linear colliders. They need to provide acceleration of up to 750 GeV per beam while maintaining very small normalized emittances. Standard simulation programs, mainly developed for storage rings, do not meet the specific requirements for high energy linear accelerators. We present a new program LIAR ('LInear Accelerator Research code') that includes wakefield effects, a 6D coupled beam description, specific optimization algorithms and other advanced features. Its modular structure allows it to be used and extended easily for different purposes. The program is available for UNIX workstations and Windows PCs. It can be applied to a broad range of accelerators. We present examples of simulations for SLC and NLC.

  18. A method for computing the stationary points of a function subject to linear equality constraints

    International Nuclear Information System (INIS)

    Uko, U.L.

    1989-09-01

    We give a new method for the numerical calculation of stationary points of a function when it is subject to equality constraints. An application to the solution of linear equations is given, together with a numerical example. (author). 5 refs
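
    The abstract gives no details of the new method, so the sketch below only illustrates the underlying task with a standard null-space reduction: for a quadratic function subject to linear equality constraints Ax = b, write x = x_p + Z y with A Z = 0 and require the reduced gradient to vanish, which turns the stationarity condition into a single linear solve. The matrices are illustrative.

      # Stationary point of f(x) = 0.5 x^T Q x - c^T x subject to A x = b (null-space method).
      import numpy as np
      from scipy.linalg import lstsq, null_space

      Q = np.array([[4.0, 1.0, 0.0],
                    [1.0, 3.0, 1.0],
                    [0.0, 1.0, 2.0]])
      c = np.array([1.0, 0.0, 1.0])
      A = np.array([[1.0, 1.0, 1.0]])          # constraint x1 + x2 + x3 = 1
      b = np.array([1.0])

      x_p, *_ = lstsq(A, b)                    # any particular solution of A x = b
      Z = null_space(A)                        # columns span the null space of A

      y = np.linalg.solve(Z.T @ Q @ Z, Z.T @ (c - Q @ x_p))
      x = x_p + Z @ y

      print("stationary point:", x)
      print("constraint residual:", A @ x - b)        # ~0
      print("reduced gradient:", Z.T @ (Q @ x - c))   # ~0 at a stationary point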

  19. Lattice QCD - a challenge in large scale computing

    International Nuclear Information System (INIS)

    Schilling, K.

    1987-01-01

    The computation of the hadron spectrum within the framework of lattice QCD sets a demanding goal for the application of supercomputers in basic science. It requires both big computer capacities and clever algorithms to fight all the numerical evils that one encounters in the Euclidean space-time world. The talk will attempt to introduce the present state of the art of spectrum calculations by lattice simulations. (orig.)

  20. Non-Linear Metamodeling Extensions to the Robust Parameter Design of Computer Simulations

    Science.gov (United States)

    2016-09-15

    The combined-array RSM approach has been applied to a piston simulation [11] and an economic order quantity inventory model [12, 13]. A textbook ...are limited when applied to simulations. In the former case, the mean and variance models can be inadequate due to a high level of non-linearity...highly non-linear nature of typical simulations. In the multi-response RPD problem, the objective is to find the optimal control parameter levels

  1. Multi-scale and multi-domain computational astrophysics.

    Science.gov (United States)

    van Elteren, Arjen; Pelupessy, Inti; Zwart, Simon Portegies

    2014-08-06

    Astronomical phenomena are governed by processes on all spatial and temporal scales, ranging from days to the age of the Universe (13.8 Gyr) as well as from kilometre size up to the size of the Universe. This enormous range in scales is contrived, but as long as there is a physical connection between the smallest and largest scales it is important to be able to resolve them all, and for the study of many astronomical phenomena this governance is present. Although covering all these scales is a challenge for numerical modellers, the most challenging aspect is the equally broad and complex range in physics, and the way in which these processes propagate through all scales. In our recent effort to cover all scales and all relevant physical processes on these scales, we have designed the Astrophysics Multipurpose Software Environment (AMUSE). AMUSE is a Python-based framework with production quality community codes and provides a specialized environment to connect this plethora of solvers to a homogeneous problem-solving environment. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  2. Modification of grey scale in computer tomographic images

    International Nuclear Information System (INIS)

    Hemmingsson, A.; Jung, B.

    1980-01-01

    Optimum perception of minute but relevant attenuation differences in CT images often requires display window settings so narrow that a considerable fraction of the image appears completely black or white and consequently without structure. In order to improve the display characteristics two principles of grey scale modification are presented. In one method the pixel contents are displayed unchanged within a selectable attenuation band but moved towards the limits of the band for pixels that are outside it. In the other the grey scale is arranged to a constant number of pixels per grey scale interval. (Auth.)

  3. The Great Chains of Computing: Informatics at Multiple Scales

    Directory of Open Access Journals (Sweden)

    Kevin Kirby

    2011-10-01

    Full Text Available The perspective from which information processing is pervasive in the universe has proven to be an increasingly productive one. Phenomena from the quantum level to social networks have commonalities that can be usefully explicated using principles of informatics. We argue that the notion of scale is particularly salient here. An appreciation of what is invariant and what is emergent across scales, and of the variety of different types of scales, establishes a useful foundation for the transdiscipline of informatics. We survey the notion of scale and use it to explore the characteristic features of information statics (data), kinematics (communication), and dynamics (processing). We then explore the analogy to the principles of plenitude and continuity that feature in Western thought, under the name of the "great chain of being", from Plato through Leibniz and beyond, and show that the pancomputational turn is a modern counterpart of this ruling idea. We conclude by arguing that this broader perspective can enhance informatics pedagogy.

  4. Imprint of non-linear effects on HI intensity mapping on large scales

    Energy Technology Data Exchange (ETDEWEB)

    Umeh, Obinna, E-mail: umeobinna@gmail.com [Department of Physics and Astronomy, University of the Western Cape, Cape Town 7535 (South Africa)

    2017-06-01

    Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structures of the Universe up to the largest possible scales. This is achieved by using low angular resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive for the first time the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of HI brightness temperature both in real and redshift space. We show how mode coupling at nonlinear order due to nonlinear bias parameters and redshift space distortion terms modulates the power spectrum on large scales. The large scale modulation may be understood to be due to the effective bias parameter and effective shot noise.

  5. Large scale computing in the Energy Research Programs

    International Nuclear Information System (INIS)

    1991-05-01

    The Energy Research Supercomputer Users Group (ERSUG) comprises all investigators using resources of the Department of Energy Office of Energy Research supercomputers. At the December 1989 meeting held at Florida State University (FSU), the ERSUG executive committee determined that the continuing rapid advances in computational sciences and computer technology demanded a reassessment of the role computational science should play in meeting DOE's commitments. Initial studies were to be performed for four subdivisions: (1) Basic Energy Sciences (BES) and Applied Mathematical Sciences (AMS), (2) Fusion Energy, (3) High Energy and Nuclear Physics, and (4) Health and Environmental Research. The first two subgroups produced formal subreports that provided a basis for several sections of this report. Additional information provided in the AMS/BES is included as Appendix C in an abridged form that eliminates most duplication. Additionally, each member of the executive committee was asked to contribute area-specific assessments; these assessments are included in the next section. In the following sections, brief assessments are given for specific areas, a conceptual model is proposed that the entire computational effort for energy research is best viewed as one giant nation-wide computer, and then specific recommendations are made for the appropriate evolution of the system

  6. Molecular Science Computing Facility Scientific Challenges: Linking Across Scales

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, Wibe A.; Windus, Theresa L.

    2005-07-01

    The purpose of this document is to define the evolving science drivers for performing environmental molecular research at the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and to provide guidance associated with the next-generation high-performance computing center that must be developed at EMSL's Molecular Science Computing Facility (MSCF) in order to address this critical research. The MSCF is the pre-eminent computing facility, supported by the U.S. Department of Energy's (DOE's) Office of Biological and Environmental Research (BER), tailored to provide the fastest time-to-solution for current computational challenges in chemistry and biology, as well as providing the means for broad research in the molecular and environmental sciences. The MSCF provides integral resources and expertise to emerging EMSL Scientific Grand Challenges and Collaborative Access Teams that are designed to leverage the multiple integrated research capabilities of EMSL, thereby creating a synergy between computation and experiment to address environmental molecular science challenges critical to DOE and the nation.

  7. Comparison of radiation absorbed dose in target organs in maxillofacial imaging with panoramic, conventional linear tomography, cone beam computed tomography and computed tomography

    Directory of Open Access Journals (Sweden)

    Panjnoush M.

    2009-12-01

    Full Text Available Background and Aim: The objective of this study was to measure and compare the tissue absorbed dose in the thyroid gland, salivary glands, eye and skin in maxillofacial imaging with panoramic, conventional linear tomography, cone beam computed tomography (CBCT) and computed tomography (CT). Materials and Methods: Thermoluminescent dosimeters (TLD) were implanted in 14 sites of a RANDO phantom to measure the average tissue absorbed dose in the thyroid gland, parotid glands, submandibular glands, sublingual gland, lenses and buccal skin. The Promax (PLANMECA, Helsinki, Finland) unit was selected for the panoramic, conventional linear tomography and cone beam computed tomography examinations, and the spiral Hispeed/Fxi (General Electric, USA) was selected for the CT examination. The average tissue absorbed doses were used for the calculation of the equivalent and effective doses in each organ. Results: The average absorbed dose for panoramic imaging ranged from 0.038 mGy (buccal skin) to 0.308 mGy (submandibular gland), for linear tomography from 0.048 mGy (lens) to 0.510 mGy (submandibular gland), for CBCT from 0.322 mGy (thyroid gland) to 1.144 mGy (parotid gland), and for CT from 2.495 mGy (sublingual gland) to 3.424 mGy (submandibular gland). The total effective dose in CBCT is 5 times greater than in panoramic imaging and 4 times greater than in linear tomography, and in CT it is 30 and 22 times greater than in panoramic imaging and linear tomography, respectively. The total effective dose in CT is 6 times greater than in CBCT. Conclusion: For obtaining 3-dimensional (3D) information in the maxillofacial region, CBCT delivers a lower dose than CT and should be preferred over medical CT imaging. Furthermore, during maxillofacial imaging, the salivary glands receive the highest dose of radiation.

  8. On computation of C-stationary points for equilibrium problems with linear complementarity constraints via homotopy method

    Czech Academy of Sciences Publication Activity Database

    Červinka, Michal

    2010-01-01

    Roč. 2010, č. 4 (2010), s. 730-753 ISSN 0023-5954 Institutional research plan: CEZ:AV0Z10750506 Keywords : equilibrium problems with complementarity constraints * homotopy * C-stationarity Subject RIV: BC - Control Systems Theory Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/MTR/cervinka-on computation of c-stationary points for equilibrium problems with linear complementarity constraints via homotopy method.pdf

  9. 3D fast adaptive correlation imaging for large-scale gravity data based on GPU computation

    Science.gov (United States)

    Chen, Z.; Meng, X.; Guo, L.; Liu, G.

    2011-12-01

    In recent years, large-scale gravity data sets have been collected and employed to enhance the gravity problem-solving abilities of tectonics studies in China. Aiming at large-scale data and the requirement of rapid interpretation, previous authors have carried out a lot of work, including fast gradient module inversion and Euler deconvolution depth inversion, 3-D physical property inversion using stochastic subspaces and equivalent storage, and fast inversion using wavelet transforms and a logarithmic barrier method. So it can be said that 3-D gravity inversion has been greatly improved in the last decade. Many authors added different kinds of a priori information and constraints to deal with non-uniqueness, using models composed of a large number of contiguous cells of unknown property, and obtained good results. However, due to long computation times, instability and other shortcomings, 3-D physical property inversion has not yet been widely applied to large-scale data. In order to achieve 3-D interpretation with high efficiency and precision for geological and ore bodies and obtain their subsurface distribution, there is an urgent need to find a fast and efficient inversion method for large-scale gravity data. As an entirely new geophysical inversion method, 3D correlation imaging has developed rapidly thanks to the advantages of requiring no a priori information and demanding only a small amount of computer memory. This method was proposed to image the distribution of equivalent excess masses of anomalous geological bodies with high resolution both longitudinally and transversely. In order to transform the equivalent excess masses into real density contrasts, we adopt adaptive correlation imaging for gravity data. After each 3D correlation imaging step, we convert the equivalent masses into density contrasts according to the linear relationship, and then carry out a forward gravity calculation for each rectangular cell. Next, we compare the forward gravity data with the real data, and

  10. A nearly-linear computational-cost scheme for the forward dynamics of an N-body pendulum

    Science.gov (United States)

    Chou, Jack C. K.

    1989-01-01

    The dynamic equations of motion of an n-body pendulum with spherical joints are derived to be a mixed system of differential and algebraic equations (DAE's). The DAE's are kept in implicit form to save arithmetic and preserve the sparsity of the system and are solved by the robust implicit integration method. At each solution point, the predicted solution is corrected to its exact solution within given tolerance using Newton's iterative method. For each iteration, a linear system of the form J ΔX = E has to be solved. The computational cost for solving this linear system directly by LU factorization is O(n^3), and it can be reduced significantly by exploiting the structure of J. It is shown that by recognizing the recursive patterns and exploiting the sparsity of the system, the multiplicative and additive computational costs for solving J ΔX = E are O(n) and O(n^2), respectively. The formulation and solution method for an n-body pendulum is presented. The computational cost is shown to be nearly linearly proportional to the number of bodies.
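
    The cost argument can be illustrated with a banded stand-in for the chain-structured Newton matrix: because J couples only neighbouring bodies, the correction system J ΔX = E can be solved in O(n) operations with a banded solver instead of O(n^3) for dense LU. The tridiagonal matrix below is a placeholder for the real block-structured Jacobian.

      # O(n) banded solve of J dx = E for a chain-structured (tridiagonal) stand-in for J.
      import numpy as np
      from scipy.linalg import solve_banded

      n = 100_000                               # "bodies"; dense LU at this size is impractical
      ab = np.zeros((3, n))                     # banded storage: super-, main- and sub-diagonal
      ab[0, 1:] = -1.0                          # superdiagonal
      ab[1, :] = 4.0                            # main diagonal (diagonally dominant)
      ab[2, :-1] = -1.0                         # subdiagonal
      E = np.ones(n)

      dx = solve_banded((1, 1), ab, E)          # cost grows linearly with n
      print(dx[:3], dx[-3:])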

  11. Measuring Students' Writing Ability on a Computer-Analytic Developmental Scale: An Exploratory Validity Study

    Science.gov (United States)

    Burdick, Hal; Swartz, Carl W.; Stenner, A. Jackson; Fitzgerald, Jill; Burdick, Don; Hanlon, Sean T.

    2013-01-01

    The purpose of the study was to explore the validity of a novel computer-analytic developmental scale, the Writing Ability Developmental Scale. On the whole, collective results supported the validity of the scale. It was sensitive to writing ability differences across grades and sensitive to within-grade variability as compared to human-rated…

  12. DESIGN OF EDUCATIONAL PROBLEMS ON LINEAR PROGRAMMING USING SYSTEMS OF COMPUTER MATHEMATICS

    Directory of Open Access Journals (Sweden)

    Volodymyr M. Mykhalevych

    2013-11-01

    Full Text Available From the perspective of the theory of educational problems, the paper considers the substitution, under conditions of ICT use, of an educational problem of one discipline by an educational problem of another discipline. Through the example of mathematical problems of linear programming, it is shown that the student's method of operation in the course of solving an educational problem is decisive in attributing that problem to a specific discipline: linear programming, informatics, mathematical modeling, optimization methods, automatic control theory, calculus, etc. The necessity of renovating linear programming educational problems is substantiated, with the purpose of freeing students from bulky, repetitive arithmetic calculations and notes, which often become a barrier to a deeper understanding of the key ideas underlying the algorithms they use.
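
    As an illustration of delegating the bulky arithmetic to the computer, a generic textbook-style product-mix problem (not one of the paper's own tasks) can be solved in a few lines:

      # maximise 3x + 5y  s.t.  x + 2y <= 14,  3x - y >= 0,  x - y <= 2,  x, y >= 0
      from scipy.optimize import linprog

      res = linprog(
          c=[-3, -5],                       # linprog minimises, so negate the objective
          A_ub=[[1, 2], [-3, 1], [1, -1]],  # "3x - y >= 0" rewritten as "-3x + y <= 0"
          b_ub=[14, 0, 2],
          bounds=[(0, None), (0, None)],
      )
      print("x, y =", res.x, " objective =", -res.fun)

    Freed from the simplex tableau arithmetic, the discussion can then focus on modelling, duality and the interpretation of the solution, which is the pedagogical point the record argues for.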

  13. Effect of Process Parameters on Friction Model in Computer Simulation of Linear Friction Welding

    Directory of Open Access Journals (Sweden)

    A. Yamileva

    2014-07-01

    Full Text Available The friction model is an important part of a numerical model of linear friction welding, and its selection determines the accuracy of the results. Existing models employ the classical Amontons-Coulomb law, where the friction coefficient is either constant or linearly dependent on a single parameter. Determination of the coefficient of friction is a time-consuming process that requires a lot of experiments, so the feasibility of determining a more complex dependence should be assessed by analysing the effect of the approximating law used in the friction model on the simulation results.

  14. Computer design of porous active materials at different dimensional scales

    Science.gov (United States)

    Nasedkin, Andrey

    2017-12-01

    The paper presents a mathematical and computer modeling of effective properties of porous piezoelectric materials of three types: with ordinary porosity, with metallized pore surfaces, and with nanoscale porosity structure. The described integrated approach includes the effective moduli method of composite mechanics, simulation of representative volumes, and finite element method.

  15. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohr, Bernd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schulz, Martin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pasccci, Valerio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brunst, Holger [Dresden Univ. of Technology (Germany)

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  16. Vanishing-Overhead Linear-Scaling Random Phase Approximation by Cholesky Decomposition and an Attenuated Coulomb-Metric.

    Science.gov (United States)

    Luenser, Arne; Schurkus, Henry F; Ochsenfeld, Christian

    2017-04-11

    A reformulation of the random phase approximation within the resolution-of-the-identity (RI) scheme is presented, that is competitive to canonical molecular orbital RI-RPA already for small- to medium-sized molecules. For electronically sparse systems drastic speedups due to the reduced scaling behavior compared to the molecular orbital formulation are demonstrated. Our reformulation is based on two ideas, which are independently useful: First, a Cholesky decomposition of density matrices that reduces the scaling with basis set size for a fixed-size molecule by one order, leading to massive performance improvements. Second, replacement of the overlap RI metric used in the original AO-RPA by an attenuated Coulomb metric. Accuracy is significantly improved compared to the overlap metric, while locality and sparsity of the integrals are retained, as is the effective linear scaling behavior.

  17. Fault-tolerant linear optical quantum computing with small-amplitude coherent States.

    Science.gov (United States)

    Lund, A P; Ralph, T C; Haselgrove, H L

    2008-01-25

    Quantum computing using two coherent states as a qubit basis is a proposed alternative architecture with lower overheads but has been questioned as a practical way of performing quantum computing due to the fragility of diagonal states with large coherent amplitudes. We show that using error correction only small amplitudes (alpha>1.2) are required for fault-tolerant quantum computing. We study fault tolerance under the effects of small amplitudes and loss using a Monte Carlo simulation. The first encoding level resources are orders of magnitude lower than the best single photon scheme.

  18. A large-scale linear complementarity model of the North American natural gas market

    International Nuclear Information System (INIS)

    Gabriel, Steven A.; Jifang Zhuang; Kiet, Supat

    2005-01-01

    The North American natural gas market has seen significant changes recently due to deregulation and restructuring. For example, third party marketers can contract for transportation and purchase of gas to sell to end-users. While the intent was a more competitive market, the potential for market power exists. We analyze this market using a linear complementarity equilibrium model including producers, storage and peak gas operators, third party marketers and four end-use sectors. The marketers are depicted as Nash-Cournot players determining supply to meet end-use consumption, all other players are in perfect competition. Results based on National Petroleum Council scenarios are presented. (Author)
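
    The market model itself is large, but the mathematical object it produces, a linear complementarity problem (find z >= 0 with w = Mz + q >= 0 and z^T w = 0), can be illustrated on a toy instance. The sketch below uses a simple projected Gauss-Seidel sweep; the actual model is solved with specialised complementarity software, and the matrix and vector here are illustrative only.

      # Toy LCP solved by projected Gauss-Seidel (M positive definite, so the sweep converges).
      import numpy as np

      M = np.array([[ 2.0, -1.0,  0.0],
                    [-1.0,  2.0, -1.0],
                    [ 0.0, -1.0,  2.0]])
      q = np.array([-1.0, 0.5, -1.5])

      z = np.zeros(3)
      for sweep in range(200):
          for i in range(3):
              r = q[i] + M[i] @ z - M[i, i] * z[i]   # residual excluding the i-th unknown
              z[i] = max(0.0, -r / M[i, i])

      w = M @ z + q
      print("z =", z, " w =", w, " complementarity =", z @ w)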

  19. Noise analysis of genome-scale protein synthesis using a discrete computational model of translation

    Energy Technology Data Exchange (ETDEWEB)

    Racle, Julien; Hatzimanikatis, Vassily, E-mail: vassily.hatzimanikatis@epfl.ch [Laboratory of Computational Systems Biotechnology, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland); Swiss Institute of Bioinformatics (SIB), CH-1015 Lausanne (Switzerland); Stefaniuk, Adam Jan [Laboratory of Computational Systems Biotechnology, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland)

    2015-07-28

    Noise in genetic networks has been the subject of extensive experimental and computational studies. However, very few of these studies have considered noise properties using mechanistic models that account for the discrete movement of ribosomes and RNA polymerases along their corresponding templates (messenger RNA (mRNA) and DNA). The large size of these systems, which scales with the number of genes, mRNA copies, codons per mRNA, and ribosomes, is responsible for some of the challenges. Additionally, one should be able to describe the dynamics of ribosome exchange between the free ribosome pool and those bound to mRNAs, as well as how mRNA species compete for ribosomes. We developed an efficient algorithm for stochastic simulations that addresses these issues and used it to study the contribution and trade-offs of noise to translation properties (rates, time delays, and rate-limiting steps). The algorithm scales linearly with the number of mRNA copies, which allowed us to study the importance of genome-scale competition between mRNAs for the same ribosomes. We determined that noise is minimized under conditions maximizing the specific synthesis rate. Moreover, sensitivity analysis of the stochastic system revealed the importance of the elongation rate in the resultant noise, whereas the translation initiation rate constant was more closely related to the average protein synthesis rate. We observed significant differences between our results and the noise properties of the most commonly used translation models. Overall, our studies demonstrate that the use of full mechanistic models is essential for the study of noise in translation and transcription.

  20. Dynamics of complexation of a charged dendrimer by linear polyelectrolyte: Computer modelling

    NARCIS (Netherlands)

    Lyulin, S.V.; Darinskii, A.A.; Lyulin, A.V.

    2007-01-01

    Brownian-dynamics simulations have been performed for complexes formed by a charged dendrimer and a long oppositely charged linear polyelectrolyte when overcharging phenomenon is always observed. After a complex formation the orientational mobility of the individual dendrimer bonds, the fluctuations

  1. Transforming an Introductory Linear Algebra Course with a TI-92 Hand-Held Computer.

    Science.gov (United States)

    Quesada, Antonio R.

    2003-01-01

    Describes how the introduction of the TI-92 transformed a traditional first semester linear algebra course into a matrix-oriented course that emphasized conceptual understanding, relevant applications, and numerical issues. Indicates an increase in students' overall performance as they found the calculator very useful, believed it helped them…

  2. Standardizing Scale Height Computation of Maven Ngims Neutral Data and Variations Between Exobase and Homeopause Scale Heights

    Science.gov (United States)

    Elrod, M. K.; Slipski, M.; Curry, S.; Williamson, H. N.; Benna, M.; Mahaffy, P. R.

    2017-12-01

    The MAVEN NGIMS team produces a level 3 product which includes the computation of the Ar scale height and atmospheric temperature at 200 km. In the latest version (v05_r01) this has been revised to include scale height fits for CO2, N2, O and CO. Members of the MAVEN team have used various methods to compute scale heights, leading to significant variations in scale height values depending on the fits and techniques, even within a few orbits or, occasionally, within the same pass. Additionally, fitting scale heights in a very stable atmosphere, such as on the day side versus the night side, can give different results based on the boundary conditions. Currently, most methods compute only Ar scale heights, as Ar is the most stable species and reacts least with the instrument. The NGIMS team has chosen to expand these fitting techniques to include fitted scale heights for CO2, N2, CO, and O. Having compared multiple techniques, a simple fit method was found to be the most reliable under most conditions. We have settled on a fitting method that takes the exobase altitude of the CO2 atmosphere as the highest point for fitting, uses the periapsis as the lowest point, and then fits log(density) against altitude. The slope of this fit is -1/H, where H is the scale height of the atmosphere for each species. Since this altitude range lies between the homopause and the exobase, each species has a different scale height there. This is being released as a new standardization for the level 3 product, with the understanding that scientists and team members will continue to compute more precise scale heights and temperatures as needed based on science and model demands. It is released in the PDS NGIMS level 3 v05 files for August 2017. Additionally, we are examining these scale heights for variations seasonally, diurnally, and above and below the exobase. The atmosphere is significantly more stable on the dayside than on the nightside. We have also found
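
    The simple fit described above reduces to a straight-line fit of log(density) against altitude, with scale height H = -1/slope; together with the species mass and local gravity, H also gives a temperature through T = m g H / k_B. The snippet below demonstrates this on synthetic Ar densities, not on NGIMS data.

      # Scale height (and implied temperature) from a log(density) vs. altitude fit.
      import numpy as np

      H_true = 11.0                                       # km, synthetic Ar scale height
      alt = np.linspace(160.0, 220.0, 40)                 # altitudes between periapsis and exobase
      rng = np.random.default_rng(3)
      density = 1e9 * np.exp(-(alt - alt[0]) / H_true) * rng.lognormal(0, 0.03, alt.size)

      slope, intercept = np.polyfit(alt, np.log(density), 1)
      H_fit = -1.0 / slope                                # km
      T = 6.64e-26 * 3.71 * (H_fit * 1e3) / 1.380649e-23  # m_Ar * g_Mars * H / k_B, in kelvin
      print(f"fitted scale height: {H_fit:.2f} km, implied temperature: {T:.0f} K")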

  3. Non-linear corrections to the cosmological matter power spectrum and scale-dependent galaxy bias: implications for parameter estimation

    International Nuclear Information System (INIS)

    Hamann, Jan; Hannestad, Steen; Melchiorri, Alessandro; Wong, Yvonne Y Y

    2008-01-01

    We explore and compare the performances of two non-linear correction and scale-dependent biasing models for the extraction of cosmological information from galaxy power spectrum data, especially in the context of beyond-ΛCDM (CDM: cold dark matter) cosmologies. The first model is the well known Q model, first applied in the analysis of Two-degree Field Galaxy Redshift Survey data. The second, the P model, is inspired by the halo model, in which non-linear evolution and scale-dependent biasing are encapsulated in a single non-Poisson shot noise term. We find that while the two models perform equally well in providing adequate correction for a range of galaxy clustering data in standard ΛCDM cosmology and in extensions with massive neutrinos, the Q model can give unphysical results in cosmologies containing a subdominant free-streaming dark matter whose temperature depends on the particle mass, e.g., relic thermal axions, unless a suitable prior is imposed on the correction parameter. This last case also exposes the danger of analytic marginalization, a technique sometimes used in the marginalization of nuisance parameters. In contrast, the P model suffers no undesirable effects, and is the recommended non-linear correction model also because of its physical transparency
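
    Written out explicitly, the two corrections take the forms below (the Q-model follows the 2dFGRS convention with the parameter A held fixed, conventionally at about 1.4 h/Mpc; the P-model adds a single non-Poisson shot-noise-like constant). The bias and correction parameters, and the stand-in linear spectrum, are illustrative only.

      # Q-model and P-model corrections applied to a linear power spectrum P_lin(k).
      import numpy as np

      def q_model(k, p_lin, b=1.8, Q=10.0, A=1.4):
          return b**2 * (1.0 + Q * k**2) / (1.0 + A * k) * p_lin

      def p_model(k, p_lin, b=1.8, P_shot=3000.0):
          return b**2 * p_lin + P_shot

      k = np.logspace(-2, 0, 5)                      # wavenumbers in h/Mpc
      p_lin = 2.0e4 * k / (1.0 + (k / 0.02)**2)      # crude stand-in for a linear spectrum
      print(np.round(q_model(k, p_lin), 1))
      print(np.round(p_model(k, p_lin), 1))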

  4. Non-linear corrections to the cosmological matter power spectrum and scale-dependent galaxy bias: implications for parameter estimation

    Science.gov (United States)

    Hamann, Jan; Hannestad, Steen; Melchiorri, Alessandro; Wong, Yvonne Y. Y.

    2008-07-01

    We explore and compare the performances of two non-linear correction and scale-dependent biasing models for the extraction of cosmological information from galaxy power spectrum data, especially in the context of beyond-ΛCDM (CDM: cold dark matter) cosmologies. The first model is the well known Q model, first applied in the analysis of Two-degree Field Galaxy Redshift Survey data. The second, the P model, is inspired by the halo model, in which non-linear evolution and scale-dependent biasing are encapsulated in a single non-Poisson shot noise term. We find that while the two models perform equally well in providing adequate correction for a range of galaxy clustering data in standard ΛCDM cosmology and in extensions with massive neutrinos, the Q model can give unphysical results in cosmologies containing a subdominant free-streaming dark matter whose temperature depends on the particle mass, e.g., relic thermal axions, unless a suitable prior is imposed on the correction parameter. This last case also exposes the danger of analytic marginalization, a technique sometimes used in the marginalization of nuisance parameters. In contrast, the P model suffers no undesirable effects, and is the recommended non-linear correction model also because of its physical transparency.
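
    For orientation only, the following sketch evaluates the two correction shapes as they are commonly written in the literature: a multiplicative Q-model correction of the form b^2 P_lin(k) (1 + Q k^2)/(1 + A k), and a P-model consisting of b^2 P_lin(k) plus a constant non-Poisson shot-noise term. Both functional forms, the parameter values, and the toy linear spectrum are assumptions inferred from the abstract, not taken from the paper.

```python
import numpy as np

def q_model(k, p_lin, b=1.2, Q=10.0, A=1.4):
    """Multiplicative non-linear/bias correction (Q-model-like form; assumed)."""
    return b**2 * p_lin * (1.0 + Q * k**2) / (1.0 + A * k)

def p_model(k, p_lin, b=1.2, P=500.0):
    """Constant non-Poisson shot-noise correction (P-model-like form; assumed)."""
    return b**2 * p_lin + P

k = np.logspace(-2, 0, 50)                    # wavenumbers, illustrative range
p_lin = 2.0e4 * k / (1.0 + (k / 0.02)**2)     # toy linear spectrum, not a real cosmology

for name, model in [("Q model", q_model), ("P model", p_model)]:
    ratio = model(k, p_lin) / (1.2**2 * p_lin)
    print(f"{name}: correction factor ranges {ratio.min():.2f} to {ratio.max():.2f}")
```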

  5. Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model

    OpenAIRE

    Taylor, Douglas J.; Muller, Keith E.

    1995-01-01

    The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting ...
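
    Because the variance entering a power calculation is itself an estimate, the computed power inherits sampling variability. The sketch below propagates a chi-square-distributed variance estimate through a standard F-test power formula to obtain approximate percentile bounds; the degrees of freedom, effect size, and the simulation-based interval are illustrative assumptions rather than the authors' exact method.

```python
import numpy as np
from scipy.stats import f, ncf, chi2

def glm_power(effect_ss, sigma2, dfn, dfd, alpha=0.05):
    """Power of an F-test with noncentrality lambda = effect_ss / sigma2."""
    crit = f.ppf(1.0 - alpha, dfn, dfd)
    return ncf.sf(crit, dfn, dfd, effect_ss / sigma2)

# Illustrative setup: fixed effect sum of squares, variance estimated with df_est d.o.f.
effect_ss, sigma2_true, dfn, dfd, df_est = 12.0, 4.0, 2, 27, 20
rng = np.random.default_rng(1)

# Sampling distribution of the variance estimate: sigma2_hat ~ sigma2 * chi2(df)/df
sigma2_hat = sigma2_true * chi2.rvs(df_est, size=10_000, random_state=rng) / df_est
powers = glm_power(effect_ss, sigma2_hat, dfn, dfd)

lo, hi = np.percentile(powers, [2.5, 97.5])
print(f"Power at the true sigma^2: {glm_power(effect_ss, sigma2_true, dfn, dfd):.3f}")
print(f"Approximate 95% interval for the computed power: [{lo:.3f}, {hi:.3f}]")
```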

  6. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models

    International Nuclear Information System (INIS)

    Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.

    2014-01-01

    Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography
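
    As a minimal illustration of the modelling framework above, the sketch below fits the simplest of the listed models, a constant-rate linear model of tumor volume versus treatment day, and scores it with leave-one-out prediction error; the synthetic series and the error metric are assumptions for illustration, not the study's data or its general linear models.

```python
import numpy as np

def loo_rmse_linear(days, volumes):
    """Leave-one-out RMSE of a constant-rate (straight-line) volume model."""
    errors = []
    for i in range(len(days)):
        mask = np.arange(len(days)) != i
        slope, intercept = np.polyfit(days[mask], volumes[mask], 1)
        errors.append(volumes[i] - (slope * days[i] + intercept))
    return float(np.sqrt(np.mean(np.square(errors))))

# Synthetic daily volumes (cm^3) shrinking over a 35-day course (illustrative)
days = np.arange(35, dtype=float)
rng = np.random.default_rng(2)
volumes = 40.0 * np.exp(-0.02 * days) + rng.normal(0.0, 0.8, size=days.size)

print(f"Leave-one-out RMSE of the linear reference model: {loo_rmse_linear(days, volumes):.2f} cm^3")
```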

  7. Scale-up and optimization of biohydrogen production reactor from laboratory-scale to industrial-scale on the basis of computational fluid dynamics simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xu; Ding, Jie; Guo, Wan-Qian; Ren, Nan-Qi [State Key Laboratory of Urban Water Resource and Environment, Harbin Institute of Technology, 202 Haihe Road, Nangang District, Harbin, Heilongjiang 150090 (China)

    2010-10-15

    The objective of conducting experiments in a laboratory is to gain data that helps in designing and operating large-scale biological processes. However, the scale-up and design of industrial-scale biohydrogen production reactors is still uncertain. In this paper, an established and proven Eulerian-Eulerian computational fluid dynamics (CFD) model was employed to perform hydrodynamics assessments of an industrial-scale continuous stirred-tank reactor (CSTR) for biohydrogen production. The merits of the laboratory-scale CSTR and industrial-scale CSTR were compared and analyzed on the basis of CFD simulation. The outcomes demonstrated that there are many parameters that need to be optimized in the industrial-scale reactor, such as the velocity field and stagnation zone. According to the results of hydrodynamics evaluation, the structure of industrial-scale CSTR was optimized and the results are positive in terms of advancing the industrialization of biohydrogen production. (author)

  8. Hybrid MPI-OpenMP Parallelism in the ONETEP Linear-Scaling Electronic Structure Code: Application to the Delamination of Cellulose Nanofibrils.

    Science.gov (United States)

    Wilkinson, Karl A; Hine, Nicholas D M; Skylaris, Chris-Kriton

    2014-11-11

    We present a hybrid MPI-OpenMP implementation of Linear-Scaling Density Functional Theory within the ONETEP code. We illustrate its performance on a range of high performance computing (HPC) platforms comprising shared-memory nodes with fast interconnect. Our work has focused on applying OpenMP parallelism to the routines which dominate the computational load, attempting where possible to parallelize different loops from those already parallelized within MPI. This includes 3D FFT box operations, sparse matrix algebra operations, calculation of integrals, and Ewald summation. While the underlying numerical methods are unchanged, these developments represent significant changes to the algorithms used within ONETEP to distribute the workload across CPU cores. The new hybrid code exhibits much-improved strong scaling relative to the MPI-only code and permits calculations with a much higher ratio of cores to atoms. These developments result in a significantly shorter time to solution than was possible using MPI alone and facilitate the application of the ONETEP code to systems larger than previously feasible. We illustrate this with benchmark calculations from an amyloid fibril trimer containing 41,907 atoms. We use the code to study the mechanism of delamination of cellulose nanofibrils when undergoing sonication, a process which is controlled by a large number of interactions that collectively determine the structural properties of the fibrils. Many energy evaluations were needed for these simulations, and as these systems comprise up to 21,276 atoms this would not have been feasible without the developments described here.

  9. Genome-scale regression analysis reveals a linear relationship for promoters and enhancers after combinatorial drug treatment

    KAUST Repository

    Rapakoulia, Trisevgeni

    2017-08-09

    Motivation: Drug combination therapy for treatment of cancers and other multifactorial diseases has the potential of increasing the therapeutic effect, while reducing the likelihood of drug resistance. In order to reduce time and cost spent in comprehensive screens, methods are needed which can model additive effects of possible drug combinations. Results: We here show that the transcriptional response to combinatorial drug treatment at promoters, as measured by single molecule CAGE technology, is accurately described by a linear combination of the responses of the individual drugs at a genome wide scale. We also find that the same linear relationship holds for transcription at enhancer elements. We conclude that the described approach is promising for eliciting the transcriptional response to multidrug treatment at promoters and enhancers in an unbiased genome wide way, which may minimize the need for exhaustive combinatorial screens.
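
    The additivity claim above can be checked with an ordinary least-squares fit of the combination response against the two single-drug responses across promoters. The sketch below does this on synthetic expression changes; the variable names, weights, and simulated data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_promoters = 5_000

# Synthetic log-fold-change responses to each drug alone (illustrative)
resp_a = rng.normal(0.0, 1.0, n_promoters)
resp_b = rng.normal(0.0, 1.0, n_promoters)

# Simulated combination response: roughly additive plus measurement noise
resp_combo = 0.9 * resp_a + 0.8 * resp_b + rng.normal(0.0, 0.3, n_promoters)

# Fit resp_combo ~ w_a * resp_a + w_b * resp_b (no intercept, as a simple check)
X = np.column_stack([resp_a, resp_b])
(w_a, w_b), *_ = np.linalg.lstsq(X, resp_combo, rcond=None)

pred = X @ np.array([w_a, w_b])
r2 = 1.0 - np.sum((resp_combo - pred) ** 2) / np.sum((resp_combo - resp_combo.mean()) ** 2)
print(f"Fitted weights: {w_a:.2f}, {w_b:.2f}; R^2 of the linear combination: {r2:.3f}")
```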

  10. A program to compute geographical positions of underwater artifact based on linear measurements

    Digital Repository Service at National Institute of Oceanography (India)

    Ganesan, P.

    System) or any hydrographic post processing software, excellent site plans and other related maps can be prepared on any convenient scale. This user friendly program enables the marine archaeologists to process their field measurements much faster...

  11. Cross Validated Temperament Scale Validities Computed Using Profile Similarity Metrics

    Science.gov (United States)

    2017-04-27

    U.S. Army Research Institute for the Behavioral & Social Sciences. ... respondent's scale score is equal to the mean of the non-reversed and recoded-reversed items. Table 1 portrays the conventional scoring algorithm on

  12. The Study of Non-Linear Acceleration of Particles during Substorms Using Multi-Scale Simulations

    International Nuclear Information System (INIS)

    Ashour-Abdalla, Maha

    2011-01-01

    To understand particle acceleration during magnetospheric substorms we must consider the problem on multiple scales, ranging from the large-scale changes in the entire magnetosphere to the microphysics of wave-particle interactions. In this paper we present two examples that demonstrate the complexity of substorm particle acceleration and its multi-scale nature. The first substorm provided us with an excellent example of ion acceleration. On March 1, 2008 four THEMIS spacecraft were in a line extending from 8 R_E to 23 R_E in the magnetotail during a very large substorm during which ions were accelerated to >500 keV. We used a combination of global magnetohydrodynamic and large-scale kinetic simulations to model the ion acceleration and found that the ions gained energy by non-adiabatic trajectories across the substorm electric field in a narrow region extending across the magnetotail between x = -10 R_E and x = -15 R_E. In this strip, called the 'wall region', the ions move rapidly in azimuth and gain 100s of keV. In the second example we studied the acceleration of electrons associated with a pair of dipolarization fronts during a substorm on February 15, 2008. During this substorm three THEMIS spacecraft were grouped in the near-Earth magnetotail (x ≈ -10 R_E) and observed electron acceleration of >100 keV accompanied by intense plasma waves. We used the MHD simulations and analytic theory to show that adiabatic motion (betatron and Fermi acceleration) was insufficient to account for the electron acceleration and that kinetic processes associated with the plasma waves were important.

  13. Flexible non-linear predictive models for large-scale wind turbine diagnostics

    DEFF Research Database (Denmark)

    Bach-Andersen, Martin; Rømer-Odgaard, Bo; Winther, Ole

    2017-01-01

    We demonstrate how flexible non-linear models can provide accurate and robust predictions on turbine component temperature sensor data using data-driven principles and only a minimum of system modeling. The merits of different model architectures are evaluated using data from a large set...... of turbines operating under diverse conditions. We then go on to test the predictive models in a diagnostic setting, where the output of the models are used to detect mechanical faults in rotor bearings. Using retrospective data from 22 actual rotor bearing failures, the fault detection performance...... of the models are quantified using a structured framework that provides the metrics required for evaluating the performance in a fleet wide monitoring setup. It is demonstrated that faults are identified with high accuracy up to 45 days before a warning from the hard-threshold warning system....

  14. Study of vibrations and stabilization of linear collider final doublets at the sub-nanometer scale

    International Nuclear Information System (INIS)

    Bolzon, B.

    2007-11-01

    CLIC is one of the current high-energy linear collider projects. Vertical beam sizes of 0.7 nm at the collision point and fast ground motion of a few nanometers require active stabilization of the final doublets to a fifth of a nanometer above 4 Hz. The majority of my work concerned the vibration and active stabilization study of cantilevered, slender beams chosen to be representative of the CLIC final doublets. In the first part, the measured performance of different types of vibration sensors, combined with appropriate instrumentation, showed that accurate measurements of ground motion are possible from 0.1 Hz up to 2000 Hz on a quiet site. In addition, electrochemical sensors that in principle meet the CLIC specifications can be incorporated in the active stabilization at a fifth of a nanometer. In the second part, an experimental and numerical study of beam vibrations validated the accuracy of the numerical prediction, which was then incorporated in the simulation of the active stabilization. A study of the impact of ground motion and acoustic noise on beam vibrations also showed that active stabilization is necessary at least up to 1000 Hz. In the third part, active stabilization of a beam at its first two resonances is demonstrated down to amplitudes of a tenth of a nanometer above 4 Hz, using in parallel a commercial system performing passive and active stabilization of the clamping. The last part concerns the study of a support for the final doublets of a linear collider prototype under finalization, the ATF2 prototype. This work showed that the relative motion between this support and the ground is below the imposed tolerances (6 nm above 0.1 Hz) with appropriate boundary conditions. (author)

  15. Guaranteed and computable bounds of the limit load for variational problems with linear growth energy functionals

    Czech Academy of Sciences Publication Activity Database

    Haslinger, Jaroslav; Repin, S.; Sysala, Stanislav

    2016-01-01

    Roč. 61, č. 5 (2016), s. 527-564 ISSN 0862-7940 R&D Projects: GA MŠk LQ1602 Institutional support: RVO:68145535 Keywords : functionals with linear growth * limit load * truncation method * perfect plasticity Subject RIV: BA - General Mathematics Impact factor: 0.618, year: 2016 http://link.springer.com/article/10.1007/s10492-016-0146-6

  16. An algorithm for computing the hull of the solution set of interval linear equations

    Czech Academy of Sciences Publication Activity Database

    Rohn, Jiří

    2011-01-01

    Roč. 435, č. 2 (2011), s. 193-201 ISSN 0024-3795 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords : interval linear equations * solution set * interval hull * algorithm * absolute value inequality Subject RIV: BA - General Mathematics Impact factor: 0.974, year: 2011

  17. Computer programs for the solution of systems of linear algebraic equations

    Science.gov (United States)

    Sequi, W. T.

    1973-01-01

    FORTRAN subprograms for the solution of systems of linear algebraic equations are described, listed, and evaluated in this report. Procedures considered are direct solution, iteration, and matrix inversion. Both in-core methods and those which utilize auxiliary data storage devices are considered. Some of the subroutines evaluated require the entire coefficient matrix to be in core, whereas others account for banding or sparseness of the system. General recommendations relative to equation solving are made, and on the basis of tests, specific subprograms are recommended.

  18. Theory and computation of disturbance invariant sets for discrete-time linear systems

    Directory of Open Access Journals (Sweden)

    Ilya Kolmanovsky

    1998-01-01

    . One purpose of the paper is to unite and extend in a rigorous way disparate results from the prior literature. In addition there are entirely new results. Specific contributions include: exploitation of the Pontryagin set difference to clarify conceptual matters and simplify mathematical developments, special properties of maximal invariant sets and conditions for their finite determination, algorithms for generating concrete representations of maximal invariant sets, practical computational questions, extension of the main results to general Lyapunov stable systems, applications of the computational techniques to the bounding of state and output response. Results on Lyapunov stable systems are applied to the implementation of a logic-based, nonlinear multimode regulator. For plants with disturbance inputs and state-control constraints it enlarges the constraint-admissible domain of attraction. Numerical examples illustrate the various theoretical and computational results.

  19. A reduced-scaling density matrix-based method for the computation of the vibrational Hessian matrix at the self-consistent field level

    International Nuclear Information System (INIS)

    Kussmann, Jörg; Luenser, Arne; Beer, Matthias; Ochsenfeld, Christian

    2015-01-01

    An analytical method to calculate the molecular vibrational Hessian matrix at the self-consistent field level is presented. By analysis of the multipole expansions of the relevant derivatives of Coulomb-type two-electron integral contractions, we show that the effect of the perturbation on the electronic structure due to the displacement of nuclei decays at least as r −2 instead of r −1 . The perturbation is asymptotically local, and the computation of the Hessian matrix can, in principle, be performed with O(N) complexity. Our implementation exhibits linear scaling in all time-determining steps, with some rapid but quadratic-complexity steps remaining. Sample calculations illustrate linear or near-linear scaling in the construction of the complete nuclear Hessian matrix for sparse systems. For more demanding systems, scaling is still considerably sub-quadratic to quadratic, depending on the density of the underlying electronic structure

  20. Quantum, classical, and hybrid QM/MM calculations in solution: General implementation of the ddCOSMO linear scaling strategy

    International Nuclear Information System (INIS)

    Lipparini, Filippo; Scalmani, Giovanni; Frisch, Michael J.; Lagardère, Louis; Stamm, Benjamin; Cancès, Eric; Maday, Yvon; Piquemal, Jean-Philip; Mennucci, Benedetta

    2014-01-01

    We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed in the various steps: the different relative weights of such contributions are then discussed for both ddCOSMO and the fastest available alternative discretization to the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens significantly new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute

  1. Quantum, classical, and hybrid QM/MM calculations in solution: General implementation of the ddCOSMO linear scaling strategy

    Energy Technology Data Exchange (ETDEWEB)

    Lipparini, Filippo, E-mail: flippari@uni-mainz.de [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); Sorbonne Universités, UPMC Univ. Paris 06, UMR 7616, Laboratoire de Chimie Théorique, F-75005 Paris (France); Sorbonne Universités, UPMC Univ. Paris 06, Institut du Calcul et de la Simulation, F-75005 Paris (France); Scalmani, Giovanni; Frisch, Michael J. [Gaussian, Inc., 340 Quinnipiac St. Bldg. 40, Wallingford, Connecticut 06492 (United States); Lagardère, Louis [Sorbonne Universités, UPMC Univ. Paris 06, Institut du Calcul et de la Simulation, F-75005 Paris (France); Stamm, Benjamin [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); CNRS, UMR 7598 and 7616, F-75005 Paris (France); Cancès, Eric [Université Paris-Est, CERMICS, Ecole des Ponts and INRIA, 6 and 8 avenue Blaise Pascal, 77455 Marne-la-Vallée Cedex 2 (France); Maday, Yvon [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); Institut Universitaire de France, Paris, France and Division of Applied Maths, Brown University, Providence, Rhode Island 02912 (United States); Piquemal, Jean-Philip [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7616, Laboratoire de Chimie Théorique, F-75005 Paris (France); CNRS, UMR 7598 and 7616, F-75005 Paris (France); Mennucci, Benedetta [Dipartimento di Chimica e Chimica Industriale, Università di Pisa, Via Risorgimento 35, 56126 Pisa (Italy)

    2014-11-14

    We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed in the various steps: the different relative weights of such contributions are then discussed for both ddCOSMO and the fastest available alternative discretization to the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens significantly new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute.

  2. Computing the Skewness of the Phylogenetic Mean Pairwise Distance in Linear Time

    DEFF Research Database (Denmark)

    Tsirogiannis, Constantinos; Sandel, Brody Steven

    2014-01-01

    The phylogenetic Mean Pairwise Distance (MPD) is one of the most popular measures for computing the phylogenetic distance between a given group of species. More specifically, for a phylogenetic tree T and for a set of species R represented by a subset of the leaf nodes of T, the MPD of R is equal...... to the average cost of all possible simple paths in T that connect pairs of nodes in R. Among other phylogenetic measures, the MPD is used as a tool for deciding if the species of a given group R are closely related. To do this, it is important to compute not only the value of the MPD for this group but also...
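
    The MPD itself is straightforward to compute from a matrix of pairwise path distances; the contribution of the paper is computing its higher moments in linear time. The brute-force sketch below simply averages the pairwise distances for a species subset R, using an arbitrary toy distance matrix as an assumption.

```python
import numpy as np
from itertools import combinations

def mean_pairwise_distance(dist, leaves_in_r):
    """Average path distance over all unordered pairs of leaves in R."""
    pairs = list(combinations(leaves_in_r, 2))
    return float(np.mean([dist[i, j] for i, j in pairs]))

# Toy symmetric matrix of path distances between 6 leaves (illustrative, not from a real tree)
dist = np.array([
    [0, 2, 4, 6, 6, 7],
    [2, 0, 4, 6, 6, 7],
    [4, 4, 0, 6, 6, 7],
    [6, 6, 6, 0, 2, 5],
    [6, 6, 6, 2, 0, 5],
    [7, 7, 7, 5, 5, 0],
], dtype=float)

print(f"MPD of R = {{0, 1, 3}}: {mean_pairwise_distance(dist, [0, 1, 3]):.2f}")
```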

  3. Neural Computations in a Dynamical System with Multiple Time Scales

    Directory of Open Access Journals (Sweden)

    Yuanyuan Mi

    2016-09-01

    Full Text Available Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at single neurons, and short-term facilitation (STF) and depression (STD) at neuronal synapses. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what the computational benefit for the brain to have such variability in short-term dynamics is. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in their dynamics. Three computational tasks are considered, which are persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.

  4. Exact spectrum of non-linear chirp scaling and its application in geosynchronous synthetic aperture radar imaging

    Directory of Open Access Journals (Sweden)

    Chen Qi

    2013-07-01

    Full Text Available Non-linear chirp scaling (NLCS) is a feasible method to deal with the time-variant frequency modulation (FM) rate problem in synthetic aperture radar (SAR) imaging. However, approximations in the derivation of the NLCS spectrum lead to performance decline in some cases. Presented is the exact spectrum of the NLCS function. Simulation with a geosynchronous synthetic aperture radar (GEO-SAR) configuration is implemented. The results show that using the presented spectrum can significantly improve imaging performance, and the NLCS algorithm is suitable for GEO-SAR imaging after modification.

  5. Designing of analog computer prototype for linear differential equation. Pt. 2

    International Nuclear Information System (INIS)

    Tiyono Wijoyo.

    1978-01-01

    In this second report, the circuits of the system in the analog computer prototype have been modified, and a system of electromagnetic switches is used to replace the manual switches used previously, so that higher reliability can be achieved. (author)

  6. Computing the Skewness of the Phylogenetic Mean Pairwise Distance in Linear Time

    DEFF Research Database (Denmark)

    Tsirogiannis, Constantinos; Sandel, Brody Steven

    2013-01-01

    to the average cost of all possible simple paths in T that connect pairs of nodes in R. Among other phylogenetic measures, the MPD is used as a tool for deciding if the species of a given group R are closely related. To do this, it is important to compute not only the value of the MPD for this group but also...

  7. Use of personal computers in performing a linear modal analysis of a large finite-element model

    International Nuclear Information System (INIS)

    Wagenblast, G.R.

    1991-01-01

    This paper presents the use of personal computers in performing a dynamic frequency analysis of a large (2,801 degrees of freedom) finite-element model. Large model linear time history dynamic evaluations of safety related structures were previously restricted to mainframe computers using direct integration analysis methods. This restriction was a result of the limited memory and speed of personal computers. With the advances in memory capacity and speed of the personal computers, large finite-element problems now can be solved in the office in a timely and cost effective manner. Presented in three sections, this paper describes the procedure used to perform the dynamic frequency analysis of the large (2,801 degrees of freedom) finite-element model on a personal computer. Section 2.0 describes the structure and the finite-element model that was developed to represent the structure for use in the dynamic evaluation. Section 3.0 addresses the hardware and software used to perform the evaluation and the optimization of the hardware and software operating configuration to minimize the time required to perform the analysis. Section 4.0 explains the analysis techniques used to reduce the problem to a size compatible with the hardware and software memory capacity and configuration

  8. Large-scale computer-mediated training for management teachers

    Directory of Open Access Journals (Sweden)

    Gilly Salmon

    1997-01-01

    Full Text Available In 1995/6 the Open University Business School (OUBS) trained 187 tutors in the UK and Continental Western Europe in Computer Mediated Conferencing (CMC) for management education. The medium chosen for the training was FirstClass™. In 1996/7 the OUBS trained a further 106 tutors in FirstClass™ using an improved version of the previous year's training. The online training was based on a previously developed model of learning online. The model was tested both by means of the structure of the training programme and the improvements made. The training programme was evaluated and revised for the second cohort. A comparison was made between the two training programmes.

  9. Using Amazon's Elastic Compute Cloud to scale CMS' compute hardware dynamically.

    CERN Document Server

    Melo, Andrew Malone

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud-computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost-structure and transient nature of EC2 services makes them inappropriate for some CMS production services and functions. We also found that the resources are not truly on-demand, as limits and caps on usage are imposed. Our trial workflows allow us t...

  10. Multiple linear regression to develop strength scaled equations for knee and elbow joints based on age, gender and segment mass

    DEFF Research Database (Denmark)

    D'Souza, Sonia; Rasmussen, John; Schwirtz, Ansgar

    2012-01-01

    and valuable ergonomic tool. Objective: To investigate age and gender effects on the torque-producing ability in the knee and elbow in older adults. To create strength scaled equations based on age, gender, upper/lower limb lengths and masses using multiple linear regression. To reduce the number of dependent...... flexors. Results: Males were significantly stronger than females across all age groups. Elbow peak torque (EPT) was better preserved from 60s to 70s whereas knee peak torque (KPT) reduced significantly (P...). Gender, thigh mass and age best...... predicted KPT (R2=0.60). Gender, forearm mass and age best predicted EPT (R2=0.75). Good cross-validation was established for both elbow and knee models. Conclusion: This cross-sectional study of muscle strength created and validated strength scaled equations of EPT and KPT using only gender, segment mass...
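
    A minimal version of the strength-scaling regressions described above is an ordinary least-squares fit of peak torque on gender, segment mass, and age. The sketch below does this with numpy on synthetic data; the coefficients and data are invented for illustration and do not reproduce the study's equations.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 120

gender = rng.integers(0, 2, n)                 # 0 = female, 1 = male (illustrative coding)
age = rng.uniform(60, 80, n)                   # years
thigh_mass = rng.normal(7.5, 1.2, n) + gender  # kg, males slightly heavier on average

# Synthetic knee peak torque (Nm): stronger for males, declines with age (illustrative)
kpt = 40 + 35 * gender + 9 * thigh_mass - 0.9 * age + rng.normal(0, 8, n)

# OLS fit: KPT ~ intercept + gender + thigh_mass + age
X = np.column_stack([np.ones(n), gender, thigh_mass, age])
beta, *_ = np.linalg.lstsq(X, kpt, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((kpt - pred) ** 2) / np.sum((kpt - kpt.mean()) ** 2)

print("Coefficients [intercept, gender, thigh_mass, age]:", np.round(beta, 2))
print(f"R^2 = {r2:.2f}")
```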

  11. Maintaining SCALE as a reliable computational system for criticality safety analysis

    International Nuclear Information System (INIS)

    Bowmann, S.M.; Parks, C.V.; Martin, S.K.

    1995-01-01

    Accurate and reliable computational methods are essential for nuclear criticality safety analyses. The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer code system was originally developed at Oak Ridge National Laboratory (ORNL) to enable users to easily set up and perform criticality safety analyses, as well as shielding, depletion, and heat transfer analyses. Over the fifteen-year life of SCALE, the mainstay of the system has been the criticality safety analysis sequences that have featured the KENO-IV and KENO-V.A Monte Carlo codes and the XSDRNPM one-dimensional discrete-ordinates code. The criticality safety analysis sequences provide automated material and problem-dependent resonance processing for each criticality calculation. This report details configuration management, which is essential because SCALE consists of more than 25 computer codes (referred to as modules) that share libraries of commonly used subroutines. Changes to a single subroutine in some cases affect almost every module in SCALE! Controlled access to program source and executables and accurate documentation of modifications are essential to maintaining SCALE as a reliable code system. The modules and subroutine libraries in SCALE are programmed by a staff of approximately ten Code Managers. The SCALE Software Coordinator maintains the SCALE system and is the only person who modifies the production source, executables, and data libraries. All modifications must be authorized by the SCALE Project Leader prior to implementation

  12. Computationally Efficient Amplitude Modulated Sinusoidal Audio Coding using Frequency-Domain Linear Prediction

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jensen, Søren Holdt

    2006-01-01

    A method for amplitude modulated sinusoidal audio coding is presented that has low complexity and low delay. This is based on a subband processing system, where, in each subband, the signal is modeled as an amplitude modulated sum of sinusoids. The envelopes are estimated using frequency......-domain linear prediction and the prediction coefficients are quantized. As a proof of concept, we evaluate different configurations in a subjective listening test, and this shows that the proposed method offers significant improvements in sinusoidal coding. Furthermore, the properties of the frequency...

  13. An improved multiple linear regression and data analysis computer program package

    Science.gov (United States)

    Sidik, S. M.

    1972-01-01

    NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.

  14. Direct Computation of Sound Radiation by Jet Flow Using Large-scale Equations

    Science.gov (United States)

    Mankbadi, R. R.; Shih, S. H.; Hixon, D. R.; Povinelli, L. A.

    1995-01-01

    Jet noise is directly predicted using large-scale equations. The computational domain is extended in order to directly capture the radiated field. As in conventional large-eddy-simulations, the effect of the unresolved scales on the resolved ones is accounted for. Special attention is given to boundary treatment to avoid spurious modes that can render the computed fluctuations totally unacceptable. Results are presented for a supersonic jet at Mach number 2.1.

  15. Strength and reversibility of stereotypes for a rotary control with linear scales.

    Science.gov (United States)

    Chan, Alan H S; Chan, W H

    2008-02-01

    Using real mechanical controls, this experiment studied strength and reversibility of direction-of-motion stereotypes and response times for a rotary control with horizontal and vertical scales. Thirty-eight engineering undergraduates (34 men and 4 women) ages 23 to 47 years (M=29.8, SD=7.7) took part in the experiment voluntarily. The effects of instruction of change of pointer position and control plane on movement compatibility were analyzed with precise quantitative measures of strength and a reversibility index of stereotype. Comparisons of the strength and reversibility values of these two configurations with those of rotary control-circular display, rotary control-digital counter, four-way lever-circular display, and four-way lever-digital counter were made. The results of this study provided significant implications for the industrial design of control panels for improved human performance.

  16. Quantifying feedforward control: a linear scaling model for fingertip forces and object weight.

    Science.gov (United States)

    Lu, Ying; Bilaloglu, Seda; Aluru, Viswanath; Raghavan, Preeti

    2015-07-01

    The ability to predict the optimal fingertip forces according to object properties before the object is lifted is known as feedforward control, and it is thought to occur due to the formation of internal representations of the object's properties. The control of fingertip forces to objects of different weights has been studied extensively by using a custom-made grip device instrumented with force sensors. Feedforward control is measured by the rate of change of the vertical (load) force before the object is lifted. However, the precise relationship between the rate of change of load force and object weight and how it varies across healthy individuals in a population is not clearly understood. Using sets of 10 different weights, we have shown that there is a log-linear relationship between the fingertip load force rates and weight among neurologically intact individuals. We found that after one practice lift, as the weight increased, the peak load force rate (PLFR) increased by a fixed percentage, and this proportionality was common among the healthy subjects. However, at any given weight, the level of PLFR varied across individuals and was related to the efficiency of the muscles involved in lifting the object, in this case the wrist and finger extensor muscles. These results quantify feedforward control during grasp and lift among healthy individuals and provide new benchmarks to interpret data from neurologically impaired populations as well as a means to assess the effect of interventions on restoration of feedforward control and its relationship to muscular control. Copyright © 2015 the American Physiological Society.
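
    The reported log-linear relationship (a fixed percentage increase in peak load force rate per unit increase in weight) amounts to fitting a straight line to log(PLFR) versus weight. The sketch below recovers such a fit from synthetic lifts; the set of weights, the baseline PLFR, and the growth rate are illustrative assumptions.

```python
import numpy as np

weights_g = np.linspace(200, 1100, 10)          # 10 object weights in grams (illustrative)
rng = np.random.default_rng(5)

# Synthetic peak load force rates: fixed percentage increase per gram, plus noise
true_base, pct_per_gram = 8.0, 0.0012           # N/s and fractional growth rate (assumed)
plfr = true_base * np.exp(pct_per_gram * weights_g) * rng.lognormal(sigma=0.05, size=10)

# Log-linear fit: log(PLFR) = a + b * weight
b, a = np.polyfit(weights_g, np.log(plfr), 1)
print(f"Estimated PLFR increase per 100 g: {100 * (np.exp(100 * b) - 1):.1f}%")
```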

  17. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    International Nuclear Information System (INIS)

    Evans, D; Fisk, I; Holzman, B; Pordes, R; Tiradani, A; Melo, A; Sheldon, P; Metson, S

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost-structure and transient nature of EC2 services makes them inappropriate for some CMS production services and functions. We also found that the resources are not truly 'on-demand' as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a University, and conclude that it is most cost effective to purchase dedicated resources for the 'base-line' needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.

  18. Algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations with the use of parallel computations

    Energy Technology Data Exchange (ETDEWEB)

    Moryakov, A. V., E-mail: sailor@orc.ru [National Research Centre Kurchatov Institute (Russian Federation)

    2016-12-15

    An algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations is presented. The algorithm for systems of first-order differential equations is implemented in the EDELWEISS code with the possibility of parallel computations on supercomputers employing the MPI (Message Passing Interface) standard for the data exchange between parallel processes. The solution is represented by a series of orthogonal polynomials on the interval [0, 1]. The algorithm is characterized by simplicity and the possibility to solve nonlinear problems with a correction of the operator in accordance with the solution obtained in the previous iterative process.

  19. QALMA: A computational toolkit for the analysis of quality protocols for medical linear accelerators in radiation therapy

    Science.gov (United States)

    Rahman, Md Mushfiqur; Lei, Yu; Kalantzis, Georgios

    2018-01-01

    Quality Assurance (QA) for a medical linear accelerator (linac) is one of the primary concerns in external beam radiation therapy. Continued advancements in clinical accelerators and computer control technology make QA procedures more complex and time consuming, and adequate software accompanied by specific phantoms is often required. To ameliorate that matter, we introduce QALMA (Quality Assurance for Linac with MATLAB), a MATLAB toolkit which aims to simplify the quantitative analysis of QA for the linac, including Star-Shot analysis, the Picket Fence test, the Winston-Lutz test, Multileaf Collimator (MLC) log file analysis, and verification of the light and radiation field coincidence test.

  20. An assessment of future computer system needs for large-scale computation

    Science.gov (United States)

    Lykos, P.; White, J.

    1980-01-01

    Data ranging from specific computer capability requirements to opinions about the desirability of a national computer facility are summarized. It is concluded that considerable attention should be given to improving the user-machine interface. Otherwise, increased computer power may not improve the overall effectiveness of the machine user. Significant improvement in throughput requires highly concurrent systems plus the willingness of the user community to develop problem solutions for that kind of architecture. An unanticipated result was the expression of need for an on-going cross-disciplinary users group/forum in order to share experiences and to more effectively communicate needs to the manufacturers.

  1. Elastic Multi-scale Mechanisms: Computation and Biological Evolution.

    Science.gov (United States)

    Diaz Ochoa, Juan G

    2018-01-01

    Explanations based on low-level interacting elements are valuable and powerful since they contribute to identify the key mechanisms of biological functions. However, many dynamic systems based on low-level interacting elements with unambiguous, finite, and complete information of initial states generate future states that cannot be predicted, implying an increase of complexity and open-ended evolution. Such systems are like Turing machines, that overlap with dynamical systems that cannot halt. We argue that organisms find halting conditions by distorting these mechanisms, creating conditions for a constant creativity that drives evolution. We introduce a modulus of elasticity to measure the changes in these mechanisms in response to changes in the computed environment. We test this concept in a population of predators and predated cells with chemotactic mechanisms and demonstrate how the selection of a given mechanism depends on the entire population. We finally explore this concept in different frameworks and postulate that the identification of predictive mechanisms is only successful with small elasticity modulus.

  2. Clinically Practical Approach for Screening of Low Muscularity Using Electronic Linear Measures on Computed Tomography Images in Critically Ill Patients.

    Science.gov (United States)

    Avrutin, Egor; Moisey, Lesley L; Zhang, Roselyn; Khattab, Jenna; Todd, Emma; Premji, Tahira; Kozar, Rosemary; Heyland, Daren K; Mourtzakis, Marina

    2017-12-06

    Computed tomography (CT) scans performed during routine hospital care offer the opportunity to quantify skeletal muscle and predict mortality and morbidity in intensive care unit (ICU) patients. Existing methods of muscle cross-sectional area (CSA) quantification require specialized software, training, and time commitment that may not be feasible in a clinical setting. In this article, we explore a new screening method to identify patients with low muscle mass. We analyzed 145 scans of elderly ICU patients (≥65 years old) using a combination of measures obtained with a digital ruler, commonly found on hospital radiological software. The psoas and paraspinal muscle groups at the level of the third lumbar vertebra (L3) were evaluated by using 2 linear measures each and compared with an established method of CT image analysis of total muscle CSA in the L3 region. There was a strong association between linear measures of psoas and paraspinal muscle groups and total L3 muscle CSA (R^2 = 0.745, P < 0.001). Linear measures, age, and sex were included as covariates in a multiple logistic regression to predict those with low muscle mass; receiver operating characteristic (ROC) area under the curve (AUC) of the combined psoas and paraspinal linear index model was 0.920. Intraclass correlation coefficients (ICCs) were used to evaluate intrarater and interrater reliability, resulting in scores of 0.979 (95% CI: 0.940-0.992) and 0.937 (95% CI: 0.828-0.978), respectively. A digital ruler can reliably predict L3 muscle CSA, and these linear measures may be used to identify critically ill patients with low muscularity who are at risk for worse clinical outcomes. © 2017 American Society for Parenteral and Enteral Nutrition.
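
    The screening model described above is essentially a logistic regression of a low-muscularity label on the linear measures, age, and sex, summarized by a ROC curve. The sketch below reproduces that generic workflow with scikit-learn on synthetic measurements; the feature names, coefficients, and data are assumptions, not the study's values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n = 300

sex = rng.integers(0, 2, n)                          # 0 = female, 1 = male
age = rng.uniform(65, 90, n)
psoas_index = rng.normal(5.0, 1.2, n) + 0.8 * sex    # combined linear measure, cm (illustrative)
paraspinal_index = rng.normal(9.0, 1.5, n) + 1.0 * sex

# Synthetic "low muscularity" label driven mostly by the linear indices (assumed coefficients)
logit = 8.6 - 1.2 * psoas_index - 0.6 * paraspinal_index + 0.03 * age + 0.5 * sex
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([psoas_index, paraspinal_index, age, sex])
model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"In-sample ROC AUC of the screening model: {auc:.3f}")
```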

  3. Computer simulation of plasma behavior in open-ended linear theta machines. Scientific report 81-5

    Energy Technology Data Exchange (ETDEWEB)

    Stover, E. K.

    1981-04-01

    Zero-dimensional and one-dimensional fluid plasma computer models have been developed to study the behavior of linear theta pinch plasmas. Computer simulation results generated from these codes are compared with data obtained from two theta pinch experiments so that significant machine plasma behavior can be identified. The experiments examined are a collisional experiment, T_i ≈ 50 eV, n_e ≈ 10^17 cm^-3, where the plasma mean-free-path was significantly less than the plasma column length, and a hot ion species experiment, T_i ≈ 3 keV, n_e ≈ 10^16 cm^-3, where the ion mean-free-path was on the order of the plasma column length.

  4. Computer simulation of plasma behavior in open-ended linear theta machines. Scientific report 81-5

    International Nuclear Information System (INIS)

    Stover, E.K.

    1981-04-01

    Zero-dimensional and one-dimensional fluid plasma computer models have been developed to study the behavior of linear theta pinch plasmas. Computer simulation results generated from these codes are compared with data obtained from two theta pinch experiments so that significant machine plasma behavior can be identified. The experiments examined are a collisional experiment, T_i ≈ 50 eV, n_e ≈ 10^17 cm^-3, where the plasma mean-free-path was significantly less than the plasma column length, and a hot ion species experiment, T_i ≈ 3 keV, n_e ≈ 10^16 cm^-3, where the ion mean-free-path was on the order of the plasma column length.

  5. Enabling Wide-Scale Computer Science Education through Improved Automated Assessment Tools

    Science.gov (United States)

    Boe, Bryce A.

    There is a proliferating demand for newly trained computer scientists as the number of computer science related jobs continues to increase. University programs will only be able to train enough new computer scientists to meet this demand when two things happen: when there are more primary and secondary school students interested in computer science, and when university departments have the resources to handle the resulting increase in enrollment. To meet these goals, significant effort is being made to both incorporate computational thinking into existing primary school education, and to support larger university computer science class sizes. We contribute to this effort through the creation and use of improved automated assessment tools. To enable wide-scale computer science education we do two things. First, we create a framework called Hairball to support the static analysis of Scratch programs targeted for fourth, fifth, and sixth grade students. Scratch is a popular building-block language utilized to pique interest in and teach the basics of computer science. We observe that Hairball allows for rapid curriculum alterations and thus contributes to wide-scale deployment of computer science curriculum. Second, we create a real-time feedback and assessment system utilized in university computer science classes to provide better feedback to students while reducing assessment time. Insights from our analysis of student submission data show that modifications to the system configuration support the way students learn and progress through course material, making it possible for instructors to tailor assignments to optimize learning in growing computer science classes.

  6. National Scale Rainfall Map Based on Linearly Interpolated Data from Automated Weather Stations and Rain Gauges

    Science.gov (United States)

    Alconis, Jenalyn; Eco, Rodrigo; Mahar Francisco Lagmay, Alfredo; Lester Saddi, Ivan; Mongaya, Candeze; Figueroa, Kathleen Gay

    2014-05-01

    In response to the slew of disasters that devastates the Philippines on a regular basis, the national government put in place a program to address this problem. The Nationwide Operational Assessment of Hazards, or Project NOAH, consolidates the diverse scientific research being done and pushes the knowledge gained to the forefront of disaster risk reduction and management. Current activities of the project include installing rain gauges and water level sensors, conducting LIDAR surveys of critical river basins, geo-hazard mapping, and running information education campaigns. Approximately 700 automated weather stations and rain gauges installed in strategic locations in the Philippines form the groundwork for the rainfall visualization system in the Project NOAH web portal at http://noah.dost.gov.ph. The system uses near real-time data from these stations installed in critical river basins. The sensors record the amount of rainfall in a particular area as point data updated every 10 to 15 minutes. The sensor sends the data to a central server either via GSM network or satellite data transfer for redundancy. The web portal displays the sensors as a placemarks layer on a map. When a placemark is clicked, it displays a graph of the rainfall data for the past 24 hours. The rainfall data is harvested by batch determined by a one-hour time frame. The program uses linear interpolation as the methodology implemented to visually represent a near real-time rainfall map. The algorithm allows very fast processing which is essential in near real-time systems. As more sensors are installed, precision is improved. This visualized dataset enables users to quickly discern where heavy rainfall is concentrated. It has proven invaluable on numerous occasions, such as last August 2013 when intense to torrential rains brought about by the enhanced Southwest Monsoon caused massive flooding in Metro Manila. Coupled with observations from Doppler imagery and water level sensors along the
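
    A gridded rainfall map from scattered gauge readings can be produced with exactly the kind of linear interpolation described above. The sketch below uses scipy's Delaunay-based linear interpolation on synthetic station data; the station coordinates and one-hour rainfall totals are invented for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(7)

# Synthetic station locations (lon, lat) and 1-hour rainfall totals in mm (illustrative)
stations = np.column_stack([rng.uniform(120.9, 121.2, 50), rng.uniform(14.4, 14.8, 50)])
rain_mm = 30.0 * np.exp(-200.0 * np.sum((stations - [121.0, 14.6]) ** 2, axis=1))

# Regular output grid covering the area of interest
lon, lat = np.meshgrid(np.linspace(120.9, 121.2, 100), np.linspace(14.4, 14.8, 100))

# Piecewise-linear (Delaunay triangulation) interpolation; NaN outside the convex hull
grid_rain = griddata(stations, rain_mm, (lon, lat), method="linear")

print(f"Interpolated peak rainfall: {np.nanmax(grid_rain):.1f} mm")
```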

  7. Computational Modelling and Optimal Control of Ebola Virus Disease with non-Linear Incidence Rate

    Science.gov (United States)

    Takaidza, I.; Makinde, O. D.; Okosun, O. K.

    2017-03-01

    The 2014 Ebola outbreak in West Africa has exposed the need to connect modellers and those with relevant data as pivotal to better understanding of how the disease spreads and quantifying the effects of possible interventions. In this paper, we model and analyse the Ebola virus disease with a non-linear incidence rate. The epidemic model created is used to describe how the Ebola virus could potentially evolve in a population. We perform an uncertainty analysis of the basic reproductive number R_0 to quantify its sensitivity to other disease-related parameters. We also analyse the sensitivity of the final epidemic size to the time control interventions (education, vaccination, quarantine and safe handling) and provide the cost effective combination of the interventions.
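
    A common way to realize a non-linear incidence rate in such compartmental models is a saturated term of the form beta*S*I/(1 + alpha*I). The sketch below integrates a minimal SIR-type system with that incidence and reports a crude R_0; the functional form and all parameter values are assumptions for illustration, not the paper's calibrated model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (all assumed): transmission, saturation, removal rate, population
beta, alpha, gamma, N = 0.35, 0.02, 0.15, 10_000.0

def sir_nonlinear(t, y):
    """SIR dynamics with saturated (non-linear) incidence beta*S*I / (N * (1 + alpha*I))."""
    S, I, R = y
    new_infections = beta * S * I / (N * (1.0 + alpha * I))
    return [-new_infections, new_infections - gamma * I, gamma * I]

sol = solve_ivp(sir_nonlinear, (0.0, 365.0), [N - 1.0, 1.0, 0.0], dense_output=True)

r0 = beta / gamma  # basic reproductive number of the linearized (early-epidemic) dynamics
print(f"R_0 = {r0:.2f}, final epidemic size = {sol.y[2, -1]:.0f} of {N:.0f}")
```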

  8. Naturalness in low-scale SUSY models and "non-linear" MSSM

    CERN Document Server

    Antoniadis, I; Ghilencea, D M

    2014-01-01

    In MSSM models with various boundary conditions for the soft breaking terms (m_soft) and for a Higgs mass of 126 GeV, there is a (minimal) electroweak fine-tuning Δ ≈ 800 to 1000 for the constrained MSSM and Δ ≈ 500 for non-universal gaugino masses. These values, often regarded as unacceptably large, may indicate a problem of supersymmetry (SUSY) breaking, rather than of SUSY itself. A minimal modification of these models is to lower the SUSY breaking scale in the hidden sector (√f) to a few TeV, which we show restores naturalness to more acceptable levels, Δ ≈ 80, for the most conservative case of low tan β and ultraviolet boundary conditions as in the constrained MSSM. This is done without introducing additional fields in the visible sector, unlike other models that attempt to reduce Δ. In the present case Δ is reduced due to additional (effective) quartic Higgs couplings proportional to the ratio m_soft/√f of the visible to the hidden sector SUSY breaking...

  9. The large-scale gravitational bias from the quasi-linear regime.

    Science.gov (United States)

    Bernardeau, F.

    1996-08-01

    It is known that in gravitational instability scenarios the nonlinear dynamics induces non-Gaussian features in cosmological density fields that can be investigated with perturbation theory. Here, I derive the expression of the joint moments of cosmological density fields taken at two different locations. The results are valid when the density fields are filtered with a top-hat filter window function, and when the distance between the two cells is large compared to the smoothing length. In particular I show that it is possible to get the generating function of the coefficients C_{p,q} defined by <δ^p(x_1) δ^q(x_2)>_c = C_{p,q} <δ(x_1) δ(x_2)> <δ^2>^(p+q-2), where δ(x) is the local smoothed density field. It is then possible to reconstruct the joint density probability distribution function (PDF), generalizing for two points what has been obtained previously for the one-point density PDF. I discuss the validity of the large-separation approximation in an explicit numerical Monte Carlo integration of the C_{2,1} parameter as a function of |x_1 - x_2|. A straightforward application is the calculation of the large-scale "bias" properties of the over-dense (or under-dense) regions. The properties and the shape of the bias function are presented in detail and successfully compared with numerical results obtained in an N-body simulation with CDM initial conditions.

  10. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yang; Sivalingam, Kantharuban; Neese, Frank, E-mail: Frank.Neese@cec.mpg.de [Max Planck Institut für Chemische Energiekonversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F. [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24014 (United States)

    2016-03-07

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling “partially contracted” NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient “electron pair prescreening” that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed

  11. Improved Linear Algebra Methods for Redshift Computation from Limited Spectrum Data - II

    Science.gov (United States)

    Foster, Leslie; Waagen, Alex; Aijaz, Nabella; Hurley, Michael; Luis, Apolo; Rinsky, Joel; Satyavolu, Chandrika; Gazis, Paul; Srivastava, Ashok; Way, Michael

    2008-01-01

    Given photometric broadband measurements of a galaxy, Gaussian processes may be used with a training set to solve the regression problem of approximating the redshift of this galaxy. However, in practice solving the traditional Gaussian processes equation is too slow and requires too much memory. We employed several methods to avoid this difficulty using algebraic manipulation and low-rank approximation, and were able to quickly approximate the redshifts in our testing data within 17 percent of the known true values using limited computational resources. The accuracy of one method, the V Formulation, is comparable to the accuracy of the best methods currently used for this problem.
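
    The low-rank idea behind such methods can be sketched as follows. This is a generic subset-of-regressors (Nystrom-style) Gaussian process regression, not the paper's "V Formulation"; the kernel, the number of inducing points and the synthetic data are illustrative assumptions only.

```python
# Hedged sketch: low-rank GP regression via a subset of inducing points.
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between two sets of input vectors."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))                              # synthetic photometric inputs
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=2000)    # stand-in "redshifts"

m, noise = 100, 0.1**2                       # inducing points, noise variance (assumed)
Xm = X[rng.choice(len(X), m, replace=False)]
Knm, Kmm = rbf(X, Xm), rbf(Xm, Xm)

# Solve an m x m system instead of the full n x n one: O(n m^2) instead of O(n^3).
alpha = np.linalg.solve(noise * Kmm + Knm.T @ Knm, Knm.T @ y)

X_test = rng.normal(size=(5, 5))
print(rbf(X_test, Xm) @ alpha)               # approximate predictive mean
```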

  12. Non-linear HVAC computations using least square support vector machines

    International Nuclear Information System (INIS)

    Kumar, Mahendra; Kar, I.N.

    2009-01-01

    This paper aims to demonstrate application of least square support vector machines (LS-SVM) to model two complex heating, ventilating and air-conditioning (HVAC) relationships. The two applications considered are the estimation of the predicted mean vote (PMV) for thermal comfort and the generation of psychrometric chart. LS-SVM has the potential for quick, exact representations and also possesses a structure that facilitates hardware implementation. The results show very good agreement between function values computed from conventional model and LS-SVM model in real time. The robustness of LS-SVM models against input noises has also been analyzed.
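
    For reference, LS-SVM regression reduces training to a single linear solve, which is part of what makes it attractive for fast evaluation. The sketch below assumes an RBF kernel and synthetic stand-in data rather than the paper's HVAC/PMV measurements; the regularization and kernel parameters are arbitrary.

```python
# Hedged sketch: LS-SVM regression trained by one linear solve of the dual system.
import numpy as np

def rbf_kernel(A, B, sigma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))                          # stand-in inputs (not HVAC data)
y = np.sin(X.sum(axis=1)) + 0.05 * rng.normal(size=200)

gamma = 10.0                                            # regularization (assumed)
K = rbf_kernel(X, X)
n = len(y)

# LS-SVM dual system:  [ 0   1^T         ] [ b     ]   [ 0 ]
#                      [ 1   K + I/gamma ] [ alpha ] = [ y ]
A = np.zeros((n + 1, n + 1))
A[0, 1:], A[1:, 0] = 1.0, 1.0
A[1:, 1:] = K + np.eye(n) / gamma
sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
b, alpha = sol[0], sol[1:]

y_hat = K @ alpha + b                                   # fitted values on the training data
print(float(np.mean((y_hat - y) ** 2)))
```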

  13. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

    Directory of Open Access Journals (Sweden)

    Daniel Durstewitz

    2017-06-01

    Full Text Available The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable to recover

  14. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    Science.gov (United States)

    O'Shea, Brian

    2015-04-01

    High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  15. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    OpenAIRE

    Qiang Liu; Yi Qin; Guodong Li

    2018-01-01

    Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most of the large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal...

  16. Non-linear heat transfer computer code by finite element method

    International Nuclear Information System (INIS)

    Nagato, Kotaro; Takikawa, Noboru

    1977-01-01

    The computer code THETA-2D, for calculating temperature distributions by the two-dimensional finite element method, was developed for the analysis of heat transfer in high-temperature structures. Numerical experiments were performed on the numerical integration of the differential equation of heat conduction. In these experiments the Runge-Kutta method produced an unstable solution, while a stable solution was obtained by the β method with a β value of 0.35. In high-temperature structures, radiative heat transfer cannot be neglected. To introduce a radiative heat transfer term, a functional neglecting radiative heat transfer was derived first; the radiative term was then added after discretization by the variational method. Five model calculations were carried out with the computer code. A calculation of steady heat conduction was performed; with an estimated initial temperature of 1,000 degrees C, a reasonable heat balance was obtained. In the case of a steady-unsteady temperature calculation, the time integration by THETA-2D turned out to underestimate the enthalpy change. With a one-dimensional model, the temperature distribution was calculated in a structure whose heat conductivity depends on temperature. A calculation with a model that has an internal void was performed. Finally, a model calculation for a complex system was carried out. (Kato, T.)
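
    The β-weighted (theta-method) time stepping mentioned above can be illustrated on a one-dimensional finite-difference stand-in for the heat conduction problem. This is not the THETA-2D code; the geometry, material data and time step are assumptions chosen only so that β = 0.35 gives a stable march.

```python
# Hedged sketch: theta-weighted time integration of C dT/dt + K T = 0.
import numpy as np

# 1D rod, fixed 1000 C at x = 0, insulated at x = L; units are arbitrary.
nx, L, k, rho_c = 51, 1.0, 1.0, 1.0
dx, dt, beta = L / (nx - 1), 5e-4, 0.35    # beta < 0.5 is only conditionally stable

# Conduction matrix (interior nodes plus a zero-flux ghost node at the far end).
K = np.zeros((nx, nx))
for i in range(1, nx - 1):
    K[i, i - 1:i + 2] = (k / dx**2) * np.array([-1.0, 2.0, -1.0])
K[-1, -2:] = (k / dx**2) * np.array([-1.0, 1.0])

T = np.zeros(nx)
T[0] = 1000.0                              # Dirichlet boundary node
I = np.eye(nx)
A = rho_c * I / dt + beta * K              # implicit part of the theta scheme
B = rho_c * I / dt - (1.0 - beta) * K      # explicit part

for _ in range(2000):
    T = np.linalg.solve(A, B @ T)          # (C/dt + bK) T_new = (C/dt - (1-b)K) T_old
    T[0] = 1000.0                          # re-impose the fixed temperature

print(T[::10])                             # temperature profile after 2000 steps
```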

  17. Towards an integrated multiscale simulation of turbulent clouds on PetaScale computers

    International Nuclear Information System (INIS)

    Wang Lianping; Ayala, Orlando; Parishani, Hossein; Gao, Guang R; Kambhamettu, Chandra; Li Xiaoming; Rossi, Louis; Orozco, Daniel; Torres, Claudio; Grabowski, Wojciech W; Wyszogrodzki, Andrzej A; Piotrowski, Zbigniew

    2011-01-01

    The development of precipitating warm clouds is affected by several effects of small-scale air turbulence including enhancement of droplet-droplet collision rate by turbulence, entrainment and mixing at the cloud edges, and coupling of mechanical and thermal energies at various scales. Large-scale computation is a viable research tool for quantifying these multiscale processes. Specifically, top-down large-eddy simulations (LES) of shallow convective clouds typically resolve scales of turbulent energy-containing eddies while the effects of turbulent cascade toward viscous dissipation are parameterized. Bottom-up hybrid direct numerical simulations (HDNS) of cloud microphysical processes resolve fully the dissipation-range flow scales but only partially the inertial subrange scales. It is desirable to systematically decrease the grid length in LES and increase the domain size in HDNS so that they can be better integrated to address the full range of scales and their coupling. In this paper, we discuss computational issues and physical modeling questions in expanding the ranges of scales realizable in LES and HDNS, and in bridging LES and HDNS. We review our on-going efforts in transforming our simulation codes towards PetaScale computing, in improving physical representations in LES and HDNS, and in developing better methods to analyze and interpret the simulation results.

  18. Intra- and inter-observer variability and accuracy in the determination of linear and angular measurements in computed tomography

    International Nuclear Information System (INIS)

    Christiansen, E.L.; Thompson, J.R.; Kopp, S.

    1986-01-01

    The observer variability and accuracy of linear and angular computed tomography (CT) software measurements in the transaxial plane were investigated for the temporomandibular joint with the General Electric 8800 CT/N Scanner. A dried and measured human mandible was embedded in plastic and scanned in vitro. Sixteen observers participated in the study. The following measurements were tested: inter- and extra-condylar distances, transverse condylar dimension, condylar angulation, and the plastic base of the specimen. Three frozen cadaveric heads were similarly scanned and measured in situ. Intra- and inter-observer variabilities were lowest for the specimen base and highest for condylar angulation. Neuroradiologists had the lowest variability as a group, and the radiology residents and paramedical personnel had the highest, but the differences were small. No significant difference was found between CT and macroscopic measurement of the mandible. In situ measurement by CT of condyles with structural changes in the transaxial plane was, however, subject to substantial error. It was concluded that transaxial linear measurements of the condylar processes free of significant structural changes had an error and an accuracy well within acceptable limits. The error for angular measurements was significantly greater than the error for linear measurements.

  19. Mathematical Model and Computational Analysis of Selected Transient States of Cylindrical Linear Induction Motor Fed via Frequency Converter

    Directory of Open Access Journals (Sweden)

    Andrzej Rusek

    2008-01-01

    Full Text Available The mathematical model of a cylindrical linear induction motor (C-LIM) fed via a frequency converter is presented in the paper. The model was developed in order to analyze the transient states numerically. Problems concerning the dynamics of AC machines, especially the linear induction motor, are presented in [1 – 7]. Development of the C-LIM mathematical model is based on the circuit method and on an analogy to the rotary induction motor. The analogy between (a) the stator and rotor windings of a rotary induction motor and (b) the winding of the primary part of the C-LIM (inductor) and the closed current circuits in the external secondary part of the C-LIM (race) is taken into consideration. The equations of the C-LIM mathematical model are presented in matrix form together with equations expressing each vector separately. A computational analysis of selected transient states of the C-LIM fed via a frequency converter is presented in the paper. Two typical examples of C-LIM operation are considered for the analysis: (a) starting the motor at various static loads and various synchronous velocities and (b) reversing the motor under the same operating conditions. Results of the simulation are presented as transient responses including the transient electromagnetic force, transient linear velocity and transient phase current.

  20. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    Full Text Available A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing will be involved in this framework: multiple local distributed computing environments connected by local network to form a grid-based cluster-to-cluster distributed computing environment. To successfully perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models using various scales. These correlated multi-scale structural system tasks are distributed among clusters and connected together in a multi-level hierarchy and then coordinated over the internet. The software framework for supporting the multi-scale structural simulation approach is also presented. The program architecture design allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to perform the proposed concept. The simulation results show that the software framework can increase the speedup performance of the structural analysis. Based on this result, the proposed grid-computing framework is suitable to perform the simulation of the multi-scale structural analysis.

  1. Quantum triangulations moduli space, quantum computing, non-linear sigma models and Ricci flow

    CERN Document Server

    Carfora, Mauro

    2017-01-01

    This book discusses key conceptual aspects and explores the connection between triangulated manifolds and quantum physics, using a set of case studies ranging from moduli space theory to quantum computing to provide an accessible introduction to this topic. Research on polyhedral manifolds often reveals unexpected connections between very distinct aspects of mathematics and physics. In particular, triangulated manifolds play an important role in settings such as Riemann moduli space theory, strings and quantum gravity, topological quantum field theory, condensed matter physics, critical phenomena and complex systems. Not only do they provide a natural discrete analogue to the smooth manifolds on which physical theories are typically formulated, but their appearance is also often a consequence of an underlying structure that naturally calls into play non-trivial aspects of representation theory, complex analysis and topology in a way that makes the basic geometric structures of the physical interactions involv...

  2. Reduced linear noise approximation for biochemical reaction networks with time-scale separation: The stochastic tQSSA+

    Science.gov (United States)

    Herath, Narmada; Del Vecchio, Domitilla

    2018-03-01

    Biochemical reaction networks often involve reactions that take place on different time scales, giving rise to "slow" and "fast" system variables. This property is widely used in the analysis of systems to obtain dynamical models with reduced dimensions. In this paper, we consider stochastic dynamics of biochemical reaction networks modeled using the Linear Noise Approximation (LNA). Under time-scale separation conditions, we obtain a reduced-order LNA that approximates both the slow and fast variables in the system. We mathematically prove that the first and second moments of this reduced-order model converge to those of the full system as the time-scale separation becomes large. These mathematical results, in particular, provide a rigorous justification to the accuracy of LNA models derived using the stochastic total quasi-steady state approximation (tQSSA). Since, in contrast to the stochastic tQSSA, our reduced-order model also provides approximations for the fast variable stochastic properties, we term our method the "stochastic tQSSA+". Finally, we demonstrate the application of our approach on two biochemical network motifs found in gene-regulatory and signal transduction networks.

  3. XVIS: Visualization for the Extreme-Scale Scientific-Computation Ecosystem Final Scientific/Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States); Maynard, Robert [Kitware, Inc., Clifton Park, NY (United States)

    2017-10-27

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project brought together collaborators from predominant DOE projects for visualization on accelerators and combining their respective features into a new visualization toolkit called VTK-m.

  4. Preliminary Development of a Free Piston Expander–Linear Generator for Small-Scale Organic Rankine Cycle (ORC Waste Heat Recovery System

    Directory of Open Access Journals (Sweden)

    Gaosheng Li

    2016-04-01

    Full Text Available A novel free piston expander-linear generator (FPE-LG) integrated unit was proposed to recover waste heat efficiently from a vehicle engine. This integrated unit can be used in a small-scale Organic Rankine Cycle (ORC) system and can directly convert the thermodynamic energy of the working fluid into electric energy. The conceptual design of the free piston expander (FPE) was introduced and discussed. A cam plate and the corresponding valve train were used to control the inlet and outlet valve timing of the FPE. The working principle of the FPE-LG was proven to be feasible using an air test rig. The indicated efficiency of the FPE was obtained from the p–V indicator diagram. The dynamic characteristics of the in-cylinder flow field during the intake and exhaust processes of the FPE were analyzed with Fluent software and 3D numerical simulation models using a computational fluid dynamics method. Results show that the indicated efficiency of the FPE can reach 66.2% and the maximal electric power output of the FPE-LG can reach 22.7 W when the working frequency is 3 Hz and the intake pressure is 0.2 MPa. Two large-scale vortices are formed during the intake process because of the non-uniform distribution of velocity and pressure. The vortex flow converts pressure energy and kinetic energy into thermodynamic energy of the working fluid, which weakens the power capacity of the working fluid.

  5. Large-scale simulations with distributed computing: Asymptotic scaling of ballistic deposition

    International Nuclear Information System (INIS)

    Farnudi, Bahman; Vvedensky, Dimitri D

    2011-01-01

    Extensive kinetic Monte Carlo simulations are reported for ballistic deposition (BD) in (1 + 1) dimensions. The large system sizes L observed for the onset of asymptotic scaling (L ≅ 2^12) explain the widespread discrepancies in previous reports for exponents of BD in one and likely in higher dimensions. The exponents obtained directly from our simulations, α = 0.499 ± 0.004 and β = 0.336 ± 0.004, capture the exact values α = 1/2 and β = 1/3 for the one-dimensional Kardar-Parisi-Zhang equation. An analysis of our simulations suggests a criterion for identifying the onset of true asymptotic scaling, which enables a more informed evaluation of exponents for BD in higher dimensions. These simulations were made possible by the Simulation through Social Networking project at the Institute for Advanced Studies in Basic Sciences in 2007, which was re-launched in November 2010.
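
    The deposition rule itself is simple to state. The sketch below is a minimal (1 + 1)-dimensional BD simulation that tracks the interface width; the system size and particle count are small illustrative choices, far below the L ≈ 2^12 the abstract reports as needed for clean asymptotic scaling.

```python
# Hedged sketch: ballistic deposition on a periodic 1D substrate.
import numpy as np

rng = np.random.default_rng(0)
L, n_particles = 512, 200_000
h = np.zeros(L, dtype=int)          # column heights

widths = []
for t in range(n_particles):
    i = rng.integers(L)
    # A particle dropped on column i sticks at the highest of the three candidates.
    h[i] = max(h[(i - 1) % L], h[i] + 1, h[(i + 1) % L])
    if (t + 1) % 10_000 == 0:
        widths.append(float(h.std()))   # interface width W(L, t)

print(widths)   # W grows roughly as t^beta before saturating near L^alpha
```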

  6. Linear stability of resistive MHD modes: axisymmetric toroidal computation of the outer region matching data

    International Nuclear Information System (INIS)

    Pletzer, A.; Bondeson, A.; Dewar, R.L.

    1993-11-01

    The quest to determine accurately the stability of tearing and resistive interchange modes in two-dimensional toroidal geometry led to the development of the PEST-3 code, which is based on solving the singular, zero-frequency ideal MHD equation in the plasma bulk and determining the outer data Δ', Γ' and A' needed to match the outer region solutions to those arising in the inner layers. No assumption regarding the aspect ratio, the number of rational surfaces or the pressure is made a priori. This approach is numerically less demanding than solving the full set of resistive equations, and has the major advantage of allowing non-MHD theories to be used for the non-ideal layers. Good convergence is ensured by the variational Galerkin scheme used to compute the outer matching data. To validate the code, we focus on the growth rate calculations of resistive kink modes, which are reproduced in good agreement with those obtained by the full resistive MHD code MARS. (author) 11 figs., 27 refs

  7. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2018-05-01

    Full Text Available Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most of the large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal computer, a Graphics Processing Unit (GPU)-based high-performance computing method using the OpenACC application was adopted to parallelize the shallow water model. An unstructured data management method was presented to control the data transportation between the GPU and CPU (Central Processing Unit) with minimum overhead, and then both computation and data were offloaded from the CPU to the GPU, which exploited the computational capability of the GPU as much as possible. The parallel model was validated using various benchmarks and real-world case studies. The results demonstrate that speed-ups of up to one order of magnitude can be achieved in comparison with the serial model. The proposed parallel model provides a fast and reliable tool with which to quickly assess flood hazards in large-scale areas and, thus, has a bright application prospect for dynamic inundation risk identification and disaster assessment.

  8. Identifiability of large-scale non-linear dynamic network models applied to the ADM1-case study.

    Science.gov (United States)

    Nimmegeers, Philippe; Lauwers, Joost; Telen, Dries; Logist, Filip; Impe, Jan Van

    2017-06-01

    In this work, both the structural and practical identifiability of the Anaerobic Digestion Model no. 1 (ADM1) is investigated, which serves as a relevant case study of large non-linear dynamic network models. The structural identifiability is investigated using the probabilistic algorithm, adapted to deal with the specifics of the case study (i.e., a large-scale non-linear dynamic system of differential and algebraic equations). The practical identifiability is analyzed using a Monte Carlo parameter estimation procedure for a 'non-informative' and 'informative' experiment, which are heuristically designed. The model structure of ADM1 has been modified by replacing parameters by parameter combinations, to provide a generally locally structurally identifiable version of ADM1. This means that in an idealized theoretical situation, the parameters can be estimated accurately. Furthermore, the generally positive structural identifiability results can be explained from the large number of interconnections between the states in the network structure. This interconnectivity, however, is also observed in the parameter estimates, making uncorrelated parameter estimations in practice difficult. Copyright © 2017. Published by Elsevier Inc.

  9. Exploiting the atmosphere's memory for monthly, seasonal and interannual temperature forecasting using Scaling LInear Macroweather Model (SLIMM)

    Science.gov (United States)

    Del Rio Amador, Lenin; Lovejoy, Shaun

    2016-04-01

    Traditionally, most models for predicting atmospheric behavior in the macroweather and climate regimes follow a deterministic approach. However, modern ensemble forecasting systems using stochastic parameterizations are in fact deterministic/stochastic hybrids that combine both elements to yield a statistical distribution of future atmospheric states. Nevertheless, the result is both highly complex (numerically and theoretically) and theoretically eclectic. In principle, it should be advantageous to exploit higher-level, turbulence-type scaling laws. Concretely, in the case of Global Circulation Models (GCMs), due to sensitive dependence on initial conditions, there is a deterministic predictability limit of the order of 10 days. When these models are coupled with ocean, cryosphere and other process models to make long range climate forecasts, the high frequency "weather" is treated as a driving noise in the integration of the modelling equations. Following Hasselmann (1976), this has led to stochastic models that directly generate the noise and model the low frequencies using systems of integer-ordered linear ordinary differential equations; the most well-known are the Linear Inverse Models (LIM). For annual global scale forecasts, they are somewhat superior to the GCMs and have been presented as a benchmark for surface temperature forecasts with horizons up to decades. A key limitation of the LIM approach is that it assumes that the temperature has only short range (exponential) decorrelations. In contrast, an increasing body of evidence shows that - as with the models - the atmosphere respects a scale invariance symmetry leading to power laws with potentially enormous memories, so that LIM greatly underestimates the memory of the system. In this talk we show that, due to the relatively low macroweather intermittency, the simplest scaling models - fractional Gaussian noise - can be used for making greatly improved forecasts.
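
    The key ingredient is the long (power-law) memory of fractional Gaussian noise. The toy sketch below (not the SLIMM code) builds the fGn autocovariance for an assumed Hurst exponent H = 0.85, simulates a series, and forms the optimal linear predictor of the next value by Gaussian conditioning.

```python
# Hedged sketch: forecasting one step ahead with long-memory fractional Gaussian noise.
import numpy as np

def fgn_autocov(k, H, sigma2=1.0):
    """Autocovariance of fractional Gaussian noise at integer lags k."""
    k = np.abs(k).astype(float)
    return 0.5 * sigma2 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

H, n = 0.85, 400                      # Hurst exponent and series length (assumed)
lags = np.arange(n + 1)
gamma = fgn_autocov(lags, H)

# Covariance matrix of (x_1, ..., x_{n+1}) and one simulated sample path.
C = gamma[np.abs(np.subtract.outer(lags, lags))]
rng = np.random.default_rng(2)
x = np.linalg.cholesky(C) @ rng.standard_normal(n + 1)

# Condition x_{n+1} on the observed past x_1..x_n (optimal linear predictor).
C_pp, c_fp = C[:n, :n], C[n, :n]
x_hat = c_fp @ np.linalg.solve(C_pp, x[:n])
print(f"predicted {x_hat:+.3f} vs realised {x[n]:+.3f}")
```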

  10. Tuneable resolution as a systems biology approach for multi-scale, multi-compartment computational models.

    Science.gov (United States)

    Kirschner, Denise E; Hunt, C Anthony; Marino, Simeone; Fallahi-Sichani, Mohammad; Linderman, Jennifer J

    2014-01-01

    The use of multi-scale mathematical and computational models to study complex biological processes is becoming increasingly productive. Multi-scale models span a range of spatial and/or temporal scales and can encompass multi-compartment (e.g., multi-organ) models. Modeling advances are enabling virtual experiments to explore and answer questions that are problematic to address in the wet-lab. Wet-lab experimental technologies now allow scientists to observe, measure, record, and analyze experiments focusing on different system aspects at a variety of biological scales. We need the technical ability to mirror that same flexibility in virtual experiments using multi-scale models. Here we present a new approach, tuneable resolution, which can begin providing that flexibility. Tuneable resolution involves fine- or coarse-graining existing multi-scale models at the user's discretion, allowing adjustment of the level of resolution specific to a question, an experiment, or a scale of interest. Tuneable resolution expands options for revising and validating mechanistic multi-scale models, can extend the longevity of multi-scale models, and may increase computational efficiency. The tuneable resolution approach can be applied to many model types, including differential equation, agent-based, and hybrid models. We demonstrate our tuneable resolution ideas with examples relevant to infectious disease modeling, illustrating key principles at work. © 2014 The Authors. WIREs Systems Biology and Medicine published by Wiley Periodicals, Inc.

  11. Modeling Fire Occurrence at the City Scale: A Comparison between Geographically Weighted Regression and Global Linear Regression.

    Science.gov (United States)

    Song, Chao; Kwan, Mei-Po; Zhu, Jiping

    2017-04-08

    An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, including geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects, with global linear regression models (LM) for modeling fire risk at the city scale. The results show that the road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicates that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities or road construction are only clustered in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus governments can use the results to manage fire safety at the city scale.

  12. Regularized iterative integration combined with non-linear diffusion filtering for phase-contrast x-ray computed tomography.

    Science.gov (United States)

    Burger, Karin; Koehler, Thomas; Chabior, Michael; Allner, Sebastian; Marschner, Mathias; Fehringer, Andreas; Willner, Marian; Pfeiffer, Franz; Noël, Peter

    2014-12-29

    Phase-contrast x-ray computed tomography has a high potential to become clinically implemented because of its complementarity to conventional absorption contrast. In this study, we investigate noise-reducing but resolution-preserving analytical reconstruction methods to improve differential phase-contrast imaging. We apply the non-linear Perona-Malik filter on phase-contrast data prior to or after filtered-backprojection reconstruction. Secondly, the Hilbert kernel is replaced by regularized iterative integration followed by ramp-filtered backprojection as used for absorption-contrast imaging. Combining the Perona-Malik filter with this integration algorithm successfully reveals relevant sample features, quantitatively confirmed by significantly increased structural similarity indices and contrast-to-noise ratios. With this concept, phase-contrast imaging can be performed at a considerably lower dose.
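
    For orientation, the Perona-Malik step referred to above updates each pixel with edge-stopping conductances so that noise is smoothed while boundaries are preserved. The sketch below applies it to a synthetic phantom; the conductance function, kappa, step size and iteration count are illustrative choices, not the study's settings.

```python
# Hedged sketch: explicit Perona-Malik anisotropic diffusion on a noisy phantom.
import numpy as np

def perona_malik(img, n_iter=100, kappa=0.3, dt=0.15):
    """Edge-preserving smoothing; periodic boundaries via np.roll keep the sketch short."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)     # edge-stopping conductance
    for _ in range(n_iter):
        dn = np.roll(u, -1, 0) - u              # differences to the four neighbours
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(0)
phantom = np.zeros((128, 128))
phantom[32:96, 32:96] = 1.0                     # a simple square "sample"
noisy = phantom + 0.2 * rng.standard_normal(phantom.shape)

print(float(np.abs(noisy - phantom).mean()))                 # error before filtering
print(float(np.abs(perona_malik(noisy) - phantom).mean()))   # error after filtering
```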

  13. Performing three-dimensional neutral particle transport calculations on tera scale computers

    International Nuclear Information System (INIS)

    Woodward, C.S.; Brown, P.N.; Chang, B.; Dorr, M.R.; Hanebutte, U.R.

    1999-01-01

    A scalable, parallel code system to perform neutral particle transport calculations in three dimensions is presented. To utilize the hyper-cluster architecture of emerging tera scale computers, the parallel code successfully combines MPI message passing with other parallel programming paradigms. The code's capabilities are demonstrated by a shielding calculation containing over 14 billion unknowns. This calculation was accomplished on the IBM SP ''ASCI Blue-Pacific'' computer located at Lawrence Livermore National Laboratory (LLNL)

  14. A review of parallel computing for large-scale remote sensing image mosaicking

    OpenAIRE

    Chen, Lajiao; Ma, Yan; Liu, Peng; Wei, Jingbo; Jie, Wei; He, Jijun

    2015-01-01

    Interest in image mosaicking has been spurred by a wide variety of research and management needs. However, for large-scale applications, remote sensing image mosaicking usually requires significant computational capabilities. Several studies have attempted to apply parallel computing to improve image mosaicking algorithms and to speed up calculation process. The state of the art of this field has not yet been summarized, which is, however, essential for a better understanding and for further ...

  15. Computational challenges of large-scale, long-time, first-principles molecular dynamics

    International Nuclear Information System (INIS)

    Kent, P R C

    2008-01-01

    Plane wave density functional calculations have traditionally been able to use the largest available supercomputing resources. We analyze the scalability of modern projector-augmented wave implementations to identify the challenges in performing molecular dynamics calculations of large systems containing many thousands of electrons. Benchmark calculations on the Cray XT4 demonstrate that global linear-algebra operations are the primary reason for limited parallel scalability. Plane-wave related operations can be made sufficiently scalable. Improving parallel linear-algebra performance is an essential step to reaching longer timescales in future large-scale molecular dynamics calculations

  16. Accuracy and reliability of linear cephalometric measurements from cone-beam computed tomography scans of a dry human skull.

    Science.gov (United States)

    Berco, Mauricio; Rigali, Paul H; Miner, R Matthew; DeLuca, Stephelynn; Anderson, Nina K; Will, Leslie A

    2009-07-01

    The purpose of this study was to determine the accuracy and reliability of 3-dimensional craniofacial measurements obtained from cone-beam computed tomography (CBCT) scans of a dry human skull. Seventeen landmarks were identified on the skull. CBCT scans were then obtained, with 2 skull orientations during scanning. Twenty-nine interlandmark linear measurements were made directly on the skull and compared with the same measurements made on the CBCT scans. All measurements were made by 2 operators on 4 separate occasions. The method errors were 0.19, 0.21, and 0.19 mm in the x-, y- and z-axes, respectively. Repeated measures analysis of variance (ANOVA) showed no significant intraoperator or interoperator differences. The mean measurement error was -0.01 mm (SD, 0.129 mm). Five measurement errors were found to be statistically significantly different; however, all measurement errors were below the known voxel size and clinically insignificant. No differences were found in the measurements from the 2 CBCT scan orientations of the skull. CBCT allows for clinically accurate and reliable 3-dimensional linear measurements of the craniofacial complex. Moreover, skull orientation during CBCT scanning does not affect the accuracy or the reliability of these measurements.

  17. Improvement of resolution in full-view linear-array photoacoustic computed tomography using a novel adaptive weighting method

    Science.gov (United States)

    Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza

    2017-03-01

    Linear-array-based photoacoustic computed tomography is a popular methodology for deep and high resolution imaging. However, issues such as phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration due to acoustic attenuation and the assumption of a constant speed of sound (SoS) can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as minimum variance (MV) can improve the resolution at the focal point by eliminating the side-lobes. Moreover, the invisibility of directional objects emitting parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can degrade resolution. In this study, we propose a full-view array-level weighting algorithm in which different weights are assigned to different positions of the linear array based on an orientation algorithm which uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over existing reconstruction methods.
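
    The coherence-factor weighting mentioned above can be written in a few lines for a single pixel of a delay-and-sum reconstruction. The array geometry, speed of sound and the stand-in channel data below are assumptions for illustration, not the simulation setup of the study.

```python
# Hedged sketch: coherence-factor-weighted delay-and-sum value for one pixel.
import numpy as np

c, fs = 1500.0, 40e6                       # speed of sound (m/s), sampling rate (Hz)
n_elem = 64
elem_x = (np.arange(n_elem) - n_elem / 2) * 0.3e-3   # element x positions (m)

rng = np.random.default_rng(0)
n_samples = 2000
sinogram = 0.01 * rng.standard_normal((n_elem, n_samples))   # stand-in channel data

pixel = np.array([0.0, 20e-3])             # (x, z) position of the pixel (m)

delays = np.hypot(pixel[0] - elem_x, pixel[1]) / c           # time of flight per element
idx = np.round(delays * fs).astype(int)
samples = sinogram[np.arange(n_elem), idx]                   # delayed channel samples

das = samples.sum()                                          # plain delay-and-sum
cf = das**2 / (n_elem * (samples**2).sum() + 1e-12)          # coherence factor in [0, 1]
print(cf * das)                                              # CF-weighted pixel value
```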

  18. Design, development and integration of a large scale multiple source X-ray computed tomography system

    International Nuclear Information System (INIS)

    Malcolm, Andrew A.; Liu, Tong; Ng, Ivan Kee Beng; Teng, Wei Yuen; Yap, Tsi Tung; Wan, Siew Ping; Kong, Chun Jeng

    2013-01-01

    X-ray Computed Tomography (CT) allows visualisation of the physical structures in the interior of an object without physically opening or cutting it. This technology supports a wide range of applications in the non-destructive testing, failure analysis or performance evaluation of industrial products and components. Of the numerous factors that influence the performance characteristics of an X-ray CT system the energy level in the X-ray spectrum to be used is one of the most significant. The ability of the X-ray beam to penetrate a given thickness of a specific material is directly related to the maximum available energy level in the beam. Higher energy levels allow penetration of thicker components made of more dense materials. In response to local industry demand and in support of on-going research activity in the area of 3D X-ray imaging for industrial inspection the Singapore Institute of Manufacturing Technology (SIMTech) engaged in the design, development and integration of large scale multiple source X-ray computed tomography system based on X-ray sources operating at higher energies than previously available in the Institute. The system consists of a large area direct digital X-ray detector (410 x 410 mm), a multiple-axis manipulator system, a 225 kV open tube microfocus X-ray source and a 450 kV closed tube millifocus X-ray source. The 225 kV X-ray source can be operated in either transmission or reflection mode. The body of the 6-axis manipulator system is fabricated from heavy-duty steel onto which high precision linear and rotary motors have been mounted in order to achieve high accuracy, stability and repeatability. A source-detector distance of up to 2.5 m can be achieved. The system is controlled by a proprietary X-ray CT operating system developed by SIMTech. The system currently can accommodate samples up to 0.5 x 0.5 x 0.5 m in size with weight up to 50 kg. These specifications will be increased to 1.0 x 1.0 x 1.0 m and 100 kg in future

  19. Linearization Method and Linear Complexity

    Science.gov (United States)

    Tanaka, Hidema

    We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG) because it calculates linear complexity using the algebraic expression of its algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N (≅ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity. Therefore, the linear complexity is generally given as an estimated value. On the other hand, because the linearization method calculates from the algorithm of the PRNG, it can determine the lower bound of linear complexity.
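
    For comparison, the O(N^2) Berlekamp-Massey baseline referred to above is easy to state: it processes the observed output bits and returns the length of the shortest LFSR that generates them. The bit string in the example below is an arbitrary illustrative input, not data from the paper.

```python
# Hedged sketch: Berlekamp-Massey over GF(2), returning the linear complexity.
def berlekamp_massey(s):
    n = len(s)
    c, b = [1] + [0] * n, [1] + [0] * n   # current and previous connection polynomials
    L, m = 0, -1
    for i in range(n):
        # Discrepancy between the observed bit and the current LFSR prediction.
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n - shift + 1):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

bits = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1]   # arbitrary example sequence
print(berlekamp_massey(bits))
```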

  20. A SUB-GRID VOLUME-OF-FLUIDS (VOF) MODEL FOR MIXING IN RESOLVED SCALE AND IN UNRESOLVED SCALE COMPUTATIONS

    International Nuclear Information System (INIS)

    Vold, Erik L.; Scannapieco, Tony J.

    2007-01-01

    A sub-grid mix model based on a volume-of-fluids (VOF) representation is described for computational simulations of the transient mixing between reactive fluids, in which the atomically mixed components enter into the reactivity. The multi-fluid model allows each fluid species to have independent values for density, energy, pressure and temperature, as well as independent velocities and volume fractions. Fluid volume fractions are further divided into mix components to represent their 'mixedness' for more accurate prediction of reactivity. Time dependent conversion from unmixed volume fractions (denoted cf) to atomically mixed (af) fluids by diffusive processes is represented in resolved scale simulations with the volume fractions (cf, af mix). In unresolved scale simulations, the transition to atomically mixed materials begins with a conversion from unmixed material to a sub-grid volume fraction (pf). This fraction represents the unresolved small scales in the fluids, heterogeneously mixed by turbulent or multi-phase mixing processes, and this fraction then proceeds in a second step to the atomically mixed fraction by diffusion (cf, pf, af mix). Species velocities are evaluated with a species drift flux, ρ_i u_{di} = ρ_i (u_i - u), used to describe the fluid mixing sources in several closure options. A simple example of mixing fluids during 'interfacial deceleration mixing' with a small amount of diffusion illustrates the generation of atomically mixed fluids in two cases, for resolved scale simulations and for unresolved scale simulations. Application to reactive mixing, including Inertial Confinement Fusion (ICF), is planned for future work.

  1. An accurate and computationally efficient small-scale nonlinear FEA of flexible risers

    OpenAIRE

    Rahmati, MT; Bahai, H; Alfano, G

    2016-01-01

    This paper presents a highly efficient small-scale, detailed finite-element modelling method for flexible risers which can be effectively implemented in a fully-nested (FE2) multiscale analysis based on computational homogenisation. By exploiting cyclic symmetry and applying periodic boundary conditions, only a small fraction of a flexible pipe is used for a detailed nonlinear finite-element analysis at the small scale. In this model, using three-dimensional elements, all layer components are...

  2. Multi-scale data visualization for computational astrophysics and climate dynamics at Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Ahern, Sean; Daniel, Jamison R; Gao, Jinzhu; Ostrouchov, George; Toedte, Ross J; Wang, Chaoli

    2006-01-01

    Computational astrophysics and climate dynamics are two principal application foci at the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL). We identify a dataset frontier that is shared by several SciDAC computational science domains and present an exploration of traditional production visualization techniques enhanced with new enabling research technologies such as advanced parallel occlusion culling and high resolution small multiples statistical analysis. In collaboration with our research partners, these techniques will allow the visual exploration of a new generation of peta-scale datasets that cross this data frontier along all axes

  3. MATH INSTRUCTIONAL MEDIA DESIGN USING COMPUTER FOR COMPLETION OF TWO-VARIABLES LINEAR EQUATION SYSTEM BY ELIMINATION METHOD

    Directory of Open Access Journals (Sweden)

    Nurbaiti

    2017-03-01

    Full Text Available Science and technology have evolved rapidly in many fields of knowledge, including mathematics. Such development can contribute to improvements in the learning process that encourage students and teachers to enhance their abilities and performance. In delivering the material on the system of linear equations in two variables (SPLDV), the conventional teaching method, in which teachers are the center of the learning process, is still widely practiced. This method tends to make students bored and causes difficulties in understanding the concepts they are learning. Therefore, in order to make learning SPLDV easy, an interesting, interactive medium that students and teachers can apply is necessary. This medium is designed using the MATLAB GUI and named the students' electronic worksheets (e-LKS). The program is intended to help students find and understand the SPLDV concepts more easily. The program is also expected to improve students' motivation and creativity in learning the material. Based on testing using the System Usability Scale (SUS), the design of this interactive mathematics learning medium for the system of linear equations in two variables (SPLDV) receives grade B (excellent), meaning that this learning medium is suitable for use by Junior High School students of grade VIII.
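
    The elimination procedure named in the title can itself be written down in a few lines. The coefficients below form an arbitrary worked example (2x + 3y = 8, x - y = -1) and are not content from the e-LKS.

```python
# Hedged sketch: solving a two-variable linear system (SPLDV) by elimination.
def solve_by_elimination(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by eliminating x.

    Assumes a1 != 0 and that the system has a unique solution.
    """
    # Scale both equations so the x-coefficients match, then subtract them.
    e1 = (a1 * a2, b1 * a2, c1 * a2)
    e2 = (a2 * a1, b2 * a1, c2 * a1)
    by, cy = e1[1] - e2[1], e1[2] - e2[2]
    y = cy / by
    x = (c1 - b1 * y) / a1          # back-substitute into the first equation
    return x, y

print(solve_by_elimination(2, 3, 8, 1, -1, -1))   # expected (1.0, 2.0)
```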

  4. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab

  5. On the Evaluation of Computational Results Obtained from Solving Systems of Linear Equations with Matlab: The Dual Affine Scaling Interior Point

    International Nuclear Information System (INIS)

    Murfi, Hendri; Basaruddin, T.

    2001-01-01

    The interior point method for linear programming has gained extraordinary interest as an alternative to the simplex method since Karmarkar presented a polynomial-time algorithm for linear programming based on the interior point method. In implementations of this algorithm, two factors have a heavy impact on performance: the data structure and the method used to solve the linear equation system within the algorithm. This paper describes solving the linear equation system in a variant of the algorithm called the dual-affine scaling algorithm. Next, we experimentally evaluate the results of several methods, both direct and iterative. The experimental evaluation used Matlab.
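
    A minimal sketch of the dual-affine scaling iteration helps show where the linear equation system arises: each step solves a system with the matrix A S^{-2} A^T. The tiny LP below is an illustrative example and the step factor 0.95 is a common but assumed choice; this is not the paper's Matlab implementation.

```python
# Hedged sketch: dual affine scaling for  max b^T y  s.t.  A^T y <= c
# (the dual of  min c^T x, Ax = b, x >= 0).
import numpy as np

# Primal LP: min x1 + 2 x2 + 3 x3  s.t.  x1 + x2 + x3 = 1,  x >= 0.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])

y = np.zeros(1)                            # dual-feasible start: s = c - A^T y > 0
x = np.zeros(3)
for _ in range(30):
    s = c - A.T @ y                        # dual slacks
    S2inv = np.diag(1.0 / s**2)
    dy = np.linalg.solve(A @ S2inv @ A.T, b)   # the central linear system of each step
    x = S2inv @ A.T @ dy                   # primal estimate recovered from dy
    ds = -A.T @ dy
    if np.all(ds >= 0):                    # no blocking slack: stop
        break
    step = 0.95 * np.min(-s[ds < 0] / ds[ds < 0])
    y = y + step * dy

print(y)   # approaches the dual optimum y* = 1
print(x)   # approaches the primal optimum (1, 0, 0)
```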

  6. Interaural Level Difference Dependent Gain Control and Synaptic Scaling Underlying Binaural Computation

    Science.gov (United States)

    Xiong, Xiaorui R.; Liang, Feixue; Li, Haifu; Mesik, Lukas; Zhang, Ke K.; Polley, Daniel B.; Tao, Huizhong W.; Xiao, Zhongju; Zhang, Li I.

    2013-01-01

    Binaural integration in the central nucleus of inferior colliculus (ICC) plays a critical role in sound localization. However, its arithmetic nature and underlying synaptic mechanisms remain unclear. Here, we showed in mouse ICC neurons that the contralateral dominance is created by a “push-pull”-like mechanism, with contralaterally dominant excitation and more bilaterally balanced inhibition. Importantly, binaural spiking response is generated apparently from an ipsilaterally-mediated scaling of contralateral response, leaving frequency tuning unchanged. This scaling effect is attributed to a divisive attenuation of contralaterally-evoked synaptic excitation onto ICC neurons with their inhibition largely unaffected. Thus, a gain control mediates the linear transformation from monaural to binaural spike responses. The gain value is modulated by interaural level difference (ILD) primarily through scaling excitation to different levels. The ILD-dependent synaptic scaling and gain adjustment allow ICC neurons to dynamically encode interaural sound localization cues while maintaining an invariant representation of other independent sound attributes. PMID:23972599

  7. Quantitative analysis of scaling error compensation methods in dimensional X-ray computed tomography

    DEFF Research Database (Denmark)

    Müller, P.; Hiller, Jochen; Dai, Y.

    2015-01-01

    X-ray Computed Tomography (CT) has become an important technology for quality control of industrial components. As with other technologies, e.g., tactile coordinate measurements or optical measurements, CT is influenced by numerous quantities which may have negative impact on the accuracy...... errors of the manipulator system (magnification axis). This article also introduces a new compensation method for scaling errors using a database of reference scaling factors and discusses its advantages and disadvantages. In total, three methods for the correction of scaling errors – using the CT ball...

  8. THE STRUCTURE AND LINEAR POLARIZATION OF THE KILOPARSEC-SCALE JET OF THE QUASAR 3C 345

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, David H.; Wardle, John F. C.; Marchenko, Valerie V., E-mail: roberts@brandeis.edu [Department of Physics MS-057, Brandeis University, Waltham, MA 02454-0911 (United States)

    2013-02-01

    Deep Very Large Array imaging of the quasar 3C 345 at 4.86 and 8.44 GHz has been used to study the structure and linear polarization of its radio jet on scales ranging from 2 to 30 kpc. There is a 7-8 Jy unresolved core with spectral index α ≈ -0.24 (I_ν ∝ ν^α). The jet (typical intensity 15 mJy beam^-1) consists of a 2.5 arcsec straight section containing two knots, and two additional non-co-linear knots at the end. The jet's total projected length is about 27 kpc. The spectral index of the jet varies over -1.1 ≲ α ≲ -0.5. The jet diverges with a semi-opening angle of about 9°, and is nearly constant in integrated brightness over its length. A faint feature northeast of the core does not appear to be a true counter-jet, but rather an extended lobe of this FR-II radio source seen in projection. The absence of a counter-jet is sufficient to place modest constraints on the speed of the jet on these scales, requiring β ≳ 0.5. Despite the indication of jet precession in the total intensity structure, the polarization images suggest instead a jet re-directed at least twice by collisions with the external medium. Surprisingly, the electric vector position angles in the main body of the jet are neither longitudinal nor transverse, but make an angle of about 55° with the jet axis in the middle, while along the edges the vectors are transverse, suggesting a helical magnetic field. There is no significant Faraday rotation in the source, so that is not the cause of the twist. The fractional polarization in the jet averages 25% and is higher at the edges. In a companion paper, Roberts and Wardle show that differential Doppler boosting in a diverging relativistic velocity field can explain the electric vector pattern in the jet.

  9. Inferring Large-Scale Terrestrial Water Storage Through GRACE and GPS Data Fusion in Cloud Computing Environments

    Science.gov (United States)

    Rude, C. M.; Li, J. D.; Gowanlock, M.; Herring, T.; Pankratius, V.

    2016-12-01

    Surface subsidence due to depletion of groundwater can lead to permanent compaction of aquifers and damaged infrastructure. However, studies of such effects on a large scale are challenging and compute intensive because they involve fusing a variety of data sets beyond direct measurements from groundwater wells, such as gravity change measurements from the Gravity Recovery and Climate Experiment (GRACE) or surface displacements measured by GPS receivers. Our work therefore leverages Amazon cloud computing to enable these types of analyses spanning the entire continental US. Changes in groundwater storage are inferred from surface displacements measured by GPS receivers stationed throughout the country. Receivers located on bedrock are anti-correlated with changes in water levels from elastic deformation due to loading, while stations on aquifers correlate with groundwater changes due to poroelastic expansion and compaction. Correlating linearly detrended equivalent water thickness measurements from GRACE with linearly detrended and Kalman filtered vertical displacements of GPS stations located throughout the United States helps compensate for the spatial and temporal limitations of GRACE. Our results show that the majority of GPS stations are negatively correlated with GRACE in a statistically relevant way, as most GPS stations are located on bedrock in order to provide stable reference locations and measure geophysical processes such as tectonic deformations. Additionally, stations located on the Central Valley California aquifer show statistically significant positive correlations. Through the identification of positive and negative correlations, deformation phenomena can be classified as loading or poroelastic expansion due to changes in groundwater. This method facilitates further studies of terrestrial water storage on a global scale. This work is supported by NASA AIST-NNX15AG84G (PI: V. Pankratius) and Amazon.
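
    A toy version of the correlation step described above: both series are linearly detrended and then correlated. The synthetic GRACE-like and GPS-like series are stand-ins that encode the anti-correlation expected for a bedrock site; the Kalman filtering step is omitted.

```python
# Hedged sketch: detrend two monthly series and compute their correlation.
import numpy as np

def detrend(t, x):
    """Remove the best-fit straight line from a series."""
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

rng = np.random.default_rng(0)
t = np.arange(120)                                    # monthly epochs (10 years)
water = np.sin(2 * np.pi * t / 12) + 0.01 * t         # synthetic storage: seasonal + trend
gps_up = (-0.8 * np.sin(2 * np.pi * t / 12) + 0.002 * t
          + 0.1 * rng.standard_normal(t.size))        # synthetic vertical displacement

r = np.corrcoef(detrend(t, water), detrend(t, gps_up))[0, 1]
print(f"correlation after detrending: {r:+.2f}")      # negative, as for elastic loading
```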

  10. Large scale inverse problems computational methods and applications in the earth sciences

    CERN Document Server

    Scheichl, Robert; Freitag, Melina A; Kindermann, Stefan

    2013-01-01

    This book is the second volume of a three-volume series recording the "Radon Special Semester 2011 on Multiscale Simulation & Analysis in Energy and the Environment" taking place in Linz, Austria, October 3-7, 2011. The volume addresses the common ground in the mathematical and computational procedures required for large-scale inverse problems and data assimilation in forefront applications.

  11. Large-scale computer networks and the future of legal knowledge-based systems

    NARCIS (Netherlands)

    Leenes, R.E.; Svensson, Jorgen S.; Hage, J.C.; Bench-Capon, T.J.M.; Cohen, M.J.; van den Herik, H.J.

    1995-01-01

    In this paper we investigate the relation between legal knowledge-based systems and large-scale computer networks such as the Internet. On the one hand, researchers of legal knowledge-based systems have claimed huge possibilities, but despite the efforts over the last twenty years, the number of

  12. Electronic cleansing for computed tomography (CT) colonography using a scale-invariant three-material model

    NARCIS (Netherlands)

    Serlie, Iwo W. O.; Vos, Frans M.; Truyen, Roel; Post, Frits H.; Stoker, Jaap; van Vliet, Lucas J.

    2010-01-01

    A well-known reading pitfall in computed tomography (CT) colonography is posed by artifacts at T-junctions, i.e., locations where air-fluid levels interface with the colon wall. This paper presents a scale-invariant method to determine material fractions in voxels near such T-junctions. The proposed

  13. CT crown for on-machine scale calibration in Computed Tomography

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; De Chiffre, Leonardo

    2016-01-01

    A novel artefact for on-machine calibration of the scale in 3D X-ray Computed Tomography (CT) is presented. The artefact comprises an invar disc on which several reference ruby spheres are positioned at different heights using carbon fibre rods. The artefact is positioned and scanned together...

  14. Large-scale computation in solid state physics - Recent developments and prospects

    International Nuclear Information System (INIS)

    DeVreese, J.T.

    1985-01-01

    During the past few years an increasing interest in large-scale computation has been developing. Several initiatives were taken to evaluate and exploit the potential of 'supercomputers' like the CRAY-1 (or XMP) or the CYBER-205. In the U.S.A., the Lax report appeared in 1982, and subsequently (1984) the National Science Foundation announced a program to promote large-scale computation at the universities. In Europe, too, several CRAY and CYBER-205 systems have been installed. Although the presently available mainframes are the result of a continuous growth in speed and memory, they might have induced a discontinuous transition in the evolution of the scientific method: between theory and experiment a third methodology, 'computational science', has become or is becoming operational.

  15. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    Science.gov (United States)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  16. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Wasserman, Harvey

    2011-03-31

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) Larger allocations of computational resources; (2) Continued support for standard application software packages; (3) Adequate job turnaround time and throughput; and (4) Guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a

  17. The stability of mechanical calibration for a kV cone beam computed tomography system integrated with linear accelerator

    International Nuclear Information System (INIS)

    Sharpe, Michael B.; Moseley, Douglas J.; Purdie, Thomas G.

    2006-01-01

    The geometric accuracy and precision of an image-guided treatment system were assessed. Image guidance is performed using an x-ray volume imaging (XVI) system integrated with a linear accelerator and treatment planning system. Using an amorphous silicon detector and x-ray tube, volumetric computed tomography images are reconstructed from kilovoltage radiographs by filtered backprojection. Image fusion and assessment of geometric targeting are supported by the treatment planning system. To assess the limiting accuracy and precision of image-guided treatment delivery, a rigid spherical target embedded in an opaque phantom was subjected to 21 treatment sessions over a three-month period. For each session, a volumetric data set was acquired and loaded directly into an active treatment planning session. Image fusion was used to ascertain the couch correction required to position the target at the prescribed iso-center. Corrections were validated independently using megavoltage electronic portal imaging to record the target position with respect to symmetric treatment beam apertures. An initial calibration cycle followed by repeated image-guidance sessions demonstrated the XVI system could be used to relocate an unambiguous object to within less than 1 mm of the prescribed location. Treatment could then proceed within the mechanical accuracy and precision of the delivery system. The calibration procedure maintained excellent spatial resolution and delivery precision over the duration of this study, while the linear accelerator was in routine clinical use. Based on these results, the mechanical accuracy and precision of the system are ideal for supporting high-precision localization and treatment of soft-tissue targets

  18. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    Science.gov (United States)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for increasingly large problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  19. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations

    Science.gov (United States)

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-01

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact, at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated with nucleation due to the small, finite size of typical simulation boxes. In this work the time-scale problem is addressed with a recently developed enhanced sampling method while simultaneously correcting for finite-size effects. We demonstrate our approach by studying the condensation of argon, and show that characteristic nucleation times on the order of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.
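
    The rate-recovery part of such an approach can be sketched in a few lines, assuming the infrequent-metadynamics style of analysis in which biased simulation time is rescaled by the accumulated acceleration factor exp(βV); the unit system, array names and the use of a simple mean first-passage time are illustrative assumptions, not a reproduction of the paper's protocol:

      import numpy as np

      KB = 0.0083144621  # Boltzmann constant in kJ/(mol K), assumed unit system

      def rescaled_escape_time(bias_kjmol, dt_ps, temperature_k):
          """Map the biased simulation time of one run onto an approximate unbiased
          escape time via the acceleration factor exp(beta * V_bias(t))."""
          beta = 1.0 / (KB * temperature_k)
          return float(np.sum(dt_ps * np.exp(beta * np.asarray(bias_kjmol))))

      def nucleation_rate(escape_times):
          """Assuming Poisson statistics, the rate is the inverse mean first-passage time."""
          return 1.0 / np.mean(escape_times)

      # times = [rescaled_escape_time(bias, dt, T) for bias in bias_series_per_run]
      # rate = nucleation_rate(times)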

  20. Front-end vision and multi-scale image analysis multi-scale computer vision theory and applications, written in Mathematica

    CERN Document Server

    Romeny, Bart M Haar

    2008-01-01

    Front-End Vision and Multi-Scale Image Analysis is a tutorial in multi-scale methods for computer vision and image processing. It builds on the cross fertilization between human visual perception and multi-scale computer vision (`scale-space') theory and applications. The multi-scale strategies recognized in the first stages of the human visual system are carefully examined, and taken as inspiration for the many geometric methods discussed. All chapters are written in Mathematica, a spectacular high-level language for symbolic and numerical manipulations. The book presents a new and effective

  1. Advanced computational workflow for the multi-scale modeling of the bone metabolic processes.

    Science.gov (United States)

    Dao, Tien Tuan

    2017-06-01

    Multi-scale modeling of the musculoskeletal system plays an essential role in the deep understanding of complex mechanisms underlying biological phenomena and processes such as bone metabolic processes. Current multi-scale models suffer from the isolation of sub-models at each anatomical scale. The objective of the present work was to develop a new fully integrated computational workflow for simulating bone metabolic processes at multiple scales. The organ-level model employs multi-body dynamics to estimate body boundary and loading conditions from body kinematics. The tissue-level model uses the finite element method to estimate tissue deformation and mechanical loading under body loading conditions. Finally, the cell-level model includes the bone remodeling mechanism through an agent-based simulation under tissue loading. A case study on the bone remodeling process in the human jaw was performed and presented. The developed multi-scale model of the human jaw was validated using literature-based data at each anatomical level. Simulation outcomes fall within the literature-based ranges of values for estimated muscle force, tissue loading and cell dynamics during the bone remodeling process. This study opens perspectives for accurately simulating bone metabolic processes using a fully integrated computational workflow, leading to a better understanding of musculoskeletal system function across multiple length scales and providing new informative data for clinical decision support and industrial applications.

  2. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Patrick [Oregon State Univ., Corvallis, OR (United States)

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  3. On linear correlation between interfacial tension of water-solvent interface, solubility of water in organic solvents, and parameters of diluent effect scale

    International Nuclear Information System (INIS)

    Mezhov, Eh.A.; Khananashvili, N.L.; Shmidt, V.S.

    1988-01-01

    A linear correlation is established between the solubility of water in water-immiscible organic solvents and the interfacial tension of the water-solvent interface, on the one hand, and the parameters of the solvent effect scale and the π* parameters of these solvents, on the other. This makes it possible, using tabulated solvent-effect parameters or the π* value of each solvent, to predict the interfacial tension and water solubility for the corresponding systems. It is shown that the solvent effect scale predicts these values more accurately than other known solvent scales because, in contrast to the other scales, it characterizes solvents that are in equilibrium with water.

  4. Computer-aided mass detection in mammography: False positive reduction via gray-scale invariant ranklet texture features

    International Nuclear Information System (INIS)

    Masotti, Matteo; Lanconelli, Nico; Campanini, Renato

    2009-01-01

    In this work, gray-scale invariant ranklet texture features are proposed for false positive reduction (FPR) in computer-aided detection (CAD) of breast masses. Two main considerations are at the basis of this proposal. First, false positive (FP) marks surviving our previous CAD system seem to be characterized by specific texture properties that can be used to discriminate them from masses. Second, our previous CAD system achieves invariance to linear/nonlinear monotonic gray-scale transformations by encoding regions of interest into ranklet images through the ranklet transform, an image transformation similar to the wavelet transform, yet dealing with pixels' ranks rather than with their gray-scale values. Therefore, the new FPR approach proposed herein defines a set of texture features which are calculated directly from the ranklet images corresponding to the regions of interest surviving our previous CAD system, hence, ranklet texture features; then, a support vector machine (SVM) classifier is used for discrimination. As a result of this approach, texture-based information is used to discriminate FP marks surviving our previous CAD system; at the same time, invariance to linear/nonlinear monotonic gray-scale transformations of the new CAD system is guaranteed, as ranklet texture features are calculated from ranklet images that have this property themselves by construction. To emphasize the gray-scale invariance of both the previous and new CAD systems, training and testing are carried out without any in-between parameters' adjustment on mammograms having different gray-scale dynamics; in particular, training is carried out on analog digitized mammograms taken from a publicly available digital database, whereas testing is performed on full-field digital mammograms taken from an in-house database. Free-response receiver operating characteristic (FROC) curve analysis of the two CAD systems demonstrates that the new approach achieves a higher reduction of FP marks
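
    As a rough illustration of the classification stage only, the sketch below rank-transforms a region of interest (a simplified stand-in for the ranklet transform, preserving only the invariance to monotonic gray-scale changes), computes a few texture statistics, and trains a support vector machine to separate false-positive marks from masses; the feature set and all names are illustrative assumptions, not the system described above:

      import numpy as np
      from scipy.stats import rankdata
      from sklearn.svm import SVC

      def rank_texture_features(roi):
          """Toy gray-scale-invariant features: rank-transform the ROI (invariant to
          monotonic intensity changes) and summarize its texture with a few statistics."""
          r = rankdata(roi, method="average").reshape(roi.shape)
          r = (r - r.min()) / (r.max() - r.min() + 1e-12)
          gx, gy = np.gradient(r)
          return np.array([r.std(), np.abs(gx).mean(), np.abs(gy).mean(),
                           np.percentile(r, 75) - np.percentile(r, 25)])

      def train_fpr_classifier(rois, labels):
          """labels: 1 = true mass, 0 = false-positive mark from the first CAD stage."""
          X = np.stack([rank_texture_features(roi) for roi in rois])
          return SVC(kernel="rbf", gamma="scale").fit(X, labels)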

  5. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files

  6. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.

  7. Quasistatic zooming of FDTD E-field computations: the impact of down-scaling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Van de Kamer, J.B.; Kroeze, H.; De Leeuw, A.A.C.; Lagendijk, J.J.W. [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands)

    2001-05-01

    Due to current computer limitations, regional hyperthermia treatment planning (HTP) is practically limited to a resolution of 1 cm, whereas a millimetre resolution is desired. Using the centimetre resolution E-vector-field distribution, computed with, for example, the finite-difference time-domain (FDTD) method, and the millimetre resolution patient anatomy, it is possible to obtain a millimetre resolution SAR distribution in a volume of interest (VOI) by means of quasistatic zooming. To compute the required low-resolution E-vector-field distribution, a low-resolution dielectric geometry is needed, which is constructed by down-scaling the millimetre resolution dielectric geometry. In this study we have investigated which down-scaling technique results in a dielectric geometry that yields the best low-resolution E-vector-field distribution as input for quasistatic zooming. A segmented 2 mm resolution CT data set of a patient has been down-scaled to 1 cm resolution using three different techniques: 'winner-takes-all', 'volumetric averaging' and 'anisotropic volumetric averaging'. The E-vector-field distributions computed for those low-resolution dielectric geometries have been used as input for quasistatic zooming. The resulting zoomed-resolution SAR distributions were compared with a reference: the 2 mm resolution SAR distribution computed with the FDTD method. For both a simple phantom and the complex partial patient geometry, down-scaling using 'anisotropic volumetric averaging' resulted in zoomed-resolution SAR distributions that best approximated the corresponding high-resolution SAR distribution (correlations of 97% and 96%, and absolute averaged differences of 6% and 14%, respectively). (author)
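
    The difference between two of the down-scaling strategies can be illustrated with a small sketch, assuming the fine-resolution dielectric property (e.g. conductivity) is stored in a 3D array and the coarse grid is an integer multiple of the fine one; this is an illustrative reconstruction, not the authors' implementation, and only isotropic averaging is shown rather than the anisotropic variant:

      import numpy as np

      def downscale(fine, factor, mode="volumetric"):
          """Down-scale a 3D property grid (e.g. conductivity) by an integer factor.
          'volumetric' averages all fine voxels in each coarse cell; 'winner' keeps
          the most frequent value instead (assumes a small set of tissue values)."""
          nx, ny, nz = (s // factor for s in fine.shape)
          blocks = fine[:nx * factor, :ny * factor, :nz * factor].reshape(
              nx, factor, ny, factor, nz, factor)
          if mode == "volumetric":
              return blocks.mean(axis=(1, 3, 5))
          flat = blocks.transpose(0, 2, 4, 1, 3, 5).reshape(nx, ny, nz, -1)
          out = np.empty((nx, ny, nz))
          for idx in np.ndindex(nx, ny, nz):
              values, counts = np.unique(flat[idx], return_counts=True)
              out[idx] = values[counts.argmax()]
          return out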

  8. [The research on bidirectional reflectance computer simulation of forest canopy at pixel scale].

    Science.gov (United States)

    Song, Jin-Ling; Wang, Jin-Di; Shuai, Yan-Min; Xiao, Zhi-Qiang

    2009-08-01

    Computer simulation uses computer graphics to generate a realistic 3D structural scene of vegetation and to simulate the canopy radiation regime with the radiosity method. In the present paper, the authors extend this computer simulation model to simulate forest canopy bidirectional reflectance at the pixel scale. Trees, however, are tall, complex structures with many branches, so hundreds of thousands or even millions of facets are needed to build a realistic forest scene, which is too many for the radiosity method to compute. To make the radiosity method applicable to a forest scene at the pixel scale, the authors simplify the structure of the forest crowns by abstracting them as ellipsoids, and assign optical characteristics to the ellipsoid surface facets based on the optical properties of the tree components and the internal photon transport within real crowns. Following the idea of geometric-optics models, a gap model is then incorporated to obtain the forest canopy bidirectional reflectance at the pixel scale. The simulation results agree with the GOMS model and with Multi-angle Imaging SpectroRadiometer (MISR) multi-angle remote sensing BRF data, although some problems remain to be solved. The authors conclude that the study has important value for the application of multi-angle remote sensing and the inversion of vegetation canopy structure parameters.

  9. Real-time axial motion detection and correction for single photon emission computed tomography using a linear prediction filter

    International Nuclear Information System (INIS)

    Saba, V.; Setayeshi, S.; Ghannadi-Maragheh, M.

    2011-01-01

    We have developed an algorithm for real-time detection and complete correction of patient motion effects during single photon emission computed tomography. The algorithm is based on a linear prediction filter (LPC). The new prediction of projection data algorithm (PPDA) detects most motions - such as those of the head, legs, and hands - by comparing the predicted and measured frame data. When the data acquisition for a specific frame is completed, the accuracy of the acquired data is evaluated by the PPDA. If patient motion is detected, the scanning procedure is stopped. After the patient is returned to the correct position, data acquisition is repeated only for the corrupted frame and the scanning procedure is continued. Various experimental data were used to validate the motion detection algorithm; in all, the proposed method was tested with approximately 100 test cases. The PPDA shows promising results. Using the PPDA enables us to prevent the scanner from collecting disturbed data during the scan and to replace them with motion-free data by real-time rescanning of the corrupted frames. As a result, the effects of patient motion are corrected in real time. (author)
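
    The core idea - predict the next projection frame from previous frames with a linear predictor and flag motion when the measured frame deviates too much from the prediction - can be sketched as follows; the least-squares fit, the relative-residual test and the threshold are illustrative assumptions rather than the published algorithm:

      import numpy as np

      def fit_linear_predictor(frames, order=3):
          """Least-squares coefficients predicting frame t from frames t-1 .. t-order.
          `frames` is an array of shape (n_frames, n_pixels) of past projections."""
          n = len(frames)
          X = np.stack([frames[order - k - 1:n - k - 1].ravel() for k in range(order)],
                       axis=1)
          y = frames[order:].ravel()
          coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
          return coeffs

      def motion_detected(past_frames, new_frame, coeffs, threshold=0.15):
          """Flag motion when the measured frame deviates too much from the prediction."""
          order = len(coeffs)
          pred = sum(c * f for c, f in zip(coeffs, past_frames[-1:-order - 1:-1]))
          resid = np.linalg.norm(new_frame - pred) / (np.linalg.norm(new_frame) + 1e-12)
          return resid > threshold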

  10. Diagnostic accuracy of full-body linear X-ray scanning in multiple trauma patients in comparison to computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Joeres, A.P.W.; Heverhagen, J.T.; Bonel, H. [Inselspital - University Hospital Bern (Switzerland). Univ. Inst. of Diagnostic, Interventional and Pediatric Radiology; Exadaktylos, A. [Inselspital - University Hospital Bern (Switzerland). Dept. of Emergency Medicine; Klink, T. [Inselspital - University Hospital Bern (Switzerland). Univ. Inst. of Diagnostic, Interventional and Pediatric Radiology; Wuerzburg Univ. (Germany). Inst. of Diagnostic and Interventional Radiology

    2016-02-15

    The purpose of this study was to evaluate the diagnostic accuracy of full-body linear X-ray scanning (LS) in multiple trauma patients in comparison to 128-multislice computed tomography (MSCT). A total of 106 multiple trauma patients (female: 33; male: 73) were retrospectively included in this study. All patients underwent LS of the whole body, including the extremities, and MSCT covering the neck, thorax, abdomen, and pelvis. The diagnostic accuracy of LS for the detection of fractures of the truncal skeleton and pneumothoraces was evaluated in comparison to MSCT by two observers in consensus. Extremity fractures detected by LS were documented. The overall sensitivity of LS was 49.2%, the specificity was 93.3%, the positive predictive value was 91%, and the negative predictive value was 57.5%. The overall sensitivity for vertebral fractures was 16.7%, and the specificity was 100%. The sensitivity was 48.7% and the specificity 98.2% for all other fractures. Pneumothoraces were detected in 12 patients by CT, but not by LS. Forty extremity fractures were detected by LS, of which 4 fractures were dislocated, and 2 were fully covered by MSCT. The diagnostic accuracy of LS is limited in the evaluation of acute trauma of the truncal skeleton. LS allows fast whole-body X-ray imaging, and may be valuable for detecting extremity fractures in trauma patients in addition to MSCT.

  11. Comparative evaluation of the accuracy of linear measurements between cone beam computed tomography and 3D microtomography

    Directory of Open Access Journals (Sweden)

    Francesca Mangione

    2013-09-01

    OBJECTIVE: The aim of this study was to evaluate the influence of artifacts on the accuracy of linear measurements obtained with a cone beam computed tomography (CBCT) system commonly used in dental clinical practice, using a microCT system as the reference standard. MATERIALS AND METHODS: Ten cylindrical bovine bone samples, each containing one implant so as to provide both reference points and image-quality degradation, were scanned with the CBCT and microCT systems. Using the software of the two systems, two diameters taken at different levels were measured for each cylindrical sample, with different points of the implants serving as references. The results were analyzed by ANOVA, and a statistically significant difference was found. RESULTS AND DISCUSSION: Based on these results, the measurements made with the two instruments are not yet statistically comparable, although similar performances, with differences that were not statistically significant, were obtained for some samples. CONCLUSION: With improvements in the hardware and software of CBCT systems, the two instruments may provide similar performances in the near future.

  12. Automatic computation of moment magnitudes for small earthquakes and the scaling of local to moment magnitude

    OpenAIRE

    Edwards, Benjamin; Allmann, Bettina; Fäh, Donat; Clinton, John

    2017-01-01

    Moment magnitudes (MW) are computed for small and moderate earthquakes using a spectral fitting method. 40 of the resulting values are compared with those from broadband moment tensor solutions and found to match with negligible offset and scatter for available MW values of between 2.8 and 5.0. Using the presented method, MW are computed for 679 earthquakes in Switzerland with a minimum ML= 1.3. A combined bootstrap and orthogonal L1 minimization is then used to produce a scaling relation bet...

  13. Large-scale theoretical calculations in molecular science - design of a large computer system for molecular science and necessary conditions for future computers

    Energy Technology Data Exchange (ETDEWEB)

    Kashiwagi, H [Institute for Molecular Science, Okazaki, Aichi (Japan)

    1982-06-01

    A large computer system was designed and established for molecular science under the leadership of molecular scientists. Features of the computer system are an automated operation system and an open self-service system. Large-scale theoretical calculations have been performed to solve many problems in molecular science, using the computer system. Necessary conditions for future computers are discussed on the basis of this experience.

  14. Large-scale theoretical calculations in molecular science - design of a large computer system for molecular science and necessary conditions for future computers

    International Nuclear Information System (INIS)

    Kashiwagi, H.

    1982-01-01

    A large computer system was designed and established for molecular science under the leadership of molecular scientists. Features of the computer system are an automated operation system and an open self-service system. Large-scale theoretical calculations have been performed to solve many problems in molecular science, using the computer system. Necessary conditions for future computers are discussed on the basis of this experience. (orig.)

  15. Multi Scale Finite Element Analyses By Using SEM-EBSD Crystallographic Modeling and Parallel Computing

    International Nuclear Information System (INIS)

    Nakamachi, Eiji

    2005-01-01

    A crystallographic homogenization procedure is introduced into the conventional static-explicit and dynamic-explicit finite element formulations to develop a multi-scale (double-scale) analysis code that predicts the plastic-strain-induced texture evolution, yield loci and formability of sheet metal. The double-scale structure consists of a crystal aggregation (the micro-structure) and a macroscopic elastic-plastic continuum. First, crystal morphologies are measured with an SEM-EBSD apparatus, and a unit cell of the micro-structure is defined that satisfies the periodicity condition at the real scale of the polycrystal. Next, this crystallographic homogenization FE code is applied to 3N pure-iron and 'benchmark' aluminum A6022 polycrystal sheets. The results reveal that the initial crystal orientation distribution (the texture) strongly affects the plastic-strain-induced texture and anisotropic hardening evolution and the sheet deformation. Since the multi-scale finite element analysis requires a large computation time, a parallel computing technique using a PC cluster is developed for faster calculation. In this parallelization scheme, a dynamic workload balancing technique is introduced for quick and efficient calculations

  16. A new way of estimating compute-boundedness and its application to dynamic voltage scaling

    DEFF Research Database (Denmark)

    Venkatachalam, Vasanth; Franz, Michael; Probst, Christian W.

    2007-01-01

    Many dynamic voltage scaling algorithms rely on measuring hardware events (such as cache misses) for predicting how much a workload can be slowed down with acceptable performance loss. The events measured, however, are at best indirectly related to execution time and clock frequency. By relating these two indicators logically, we propose a new way of predicting a workload's compute-boundedness that is based on direct observation, and only requires measuring the total execution cycles for the two highest clock frequencies. Our predictor can be used to develop dynamic voltage scaling algorithms...
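
    One simple way to see why two cycle counts can suffice (an illustrative reconstruction of the idea, not necessarily the authors' exact model): if execution time splits into a frequency-scaled compute part and a frequency-independent memory part, T(f) = C_cpu/f + T_mem, then the measured cycle count c(f) = f·T(f) = C_cpu + T_mem·f is linear in f, so two (cycles, frequency) measurements determine both terms:

      def compute_boundedness(cycles_hi, f_hi, cycles_lo, f_lo):
          """Split the work into a frequency-scaled and a frequency-independent part
          from two (total cycles, clock frequency) measurements, and return the
          compute-bound fraction of execution time at the higher frequency."""
          t_mem = (cycles_hi - cycles_lo) / (f_hi - f_lo)   # memory-bound time, seconds
          c_cpu = cycles_hi - t_mem * f_hi                  # frequency-scaled cycles
          t_total_hi = cycles_hi / f_hi
          return (c_cpu / f_hi) / t_total_hi                # 1.0 means fully compute-bound

      # e.g. compute_boundedness(3.0e9, 2.0e9, 2.6e9, 1.6e9) -> about 0.33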

  17. Multi-scale computation methods: Their applications in lithium-ion battery research and development

    International Nuclear Information System (INIS)

    Shi Siqi; Zhao Yan; Wu Qu; Gao Jian; Liu Yue; Ju Wangwei; Ouyang Chuying; Xiao Ruijuan

    2016-01-01

    Based upon advances in theoretical algorithms, modeling and simulations, and computer technologies, the rational design of materials, cells, devices, and packs in the field of lithium-ion batteries is being realized incrementally and will at some point trigger a paradigm revolution by combining calculations and experiments linked by a big shared database, enabling accelerated development of the whole industrial chain. Theory and multi-scale modeling and simulation, as supplements to experimental efforts, can help greatly to close some of the current experimental and technological gaps, as well as predict path-independent properties and help to fundamentally understand path-independent performance in multiple spatial and temporal scales. (topical review)

  18. Advancing nanoelectronic device modeling through peta-scale computing and deployment on nanoHUB

    International Nuclear Information System (INIS)

    Haley, Benjamin P; Luisier, Mathieu; Klimeck, Gerhard; Lee, Sunhee; Ryu, Hoon; Bae, Hansang; Saied, Faisal; Clark, Steve

    2009-01-01

    Recent improvements to existing HPC codes NEMO 3-D and OMEN, combined with access to peta-scale computing resources, have enabled realistic device engineering simulations that were previously infeasible. NEMO 3-D can now simulate 1 billion atom systems, and, using 3D spatial decomposition, scale to 32768 cores. Simulation time for the band structure of an experimental P doped Si quantum computing device fell from 40 minutes to 1 minute. OMEN can perform fully quantum mechanical transport calculations for real-world UTB FETs on 147,456 cores in roughly 5 minutes. Both of these tools power simulation engines on the nanoHUB, giving the community access to previously unavailable research capabilities.

  19. Multi-Scale Computational Modeling of Ni-Base Superalloy Brazed Joints for Gas Turbine Applications

    Science.gov (United States)

    Riggs, Bryan

    Brazed joints are commonly used in the manufacture and repair of aerospace components including high temperature gas turbine components made of Ni-base superalloys. For such critical applications, it is becoming increasingly important to account for the mechanical strength and reliability of the brazed joint. However, material properties of brazed joints are not readily available and methods for evaluating joint strength such as those listed in AWS C3.2 have inherent challenges compared with testing bulk materials. In addition, joint strength can be strongly influenced by the degree of interaction between the filler metal (FM) and the base metal (BM), the joint design, and presence of flaws or defects. As a result, there is interest in the development of a multi-scale computational model to predict the overall mechanical behavior and fitness-for-service of brazed joints. Therefore, the aim of this investigation was to generate data and methodology to support such a model for Ni-base superalloy brazed joints with conventional Ni-Cr-B based FMs. Based on a review of the technical literature a multi-scale modeling approach was proposed to predict the overall performance of brazed joints by relating mechanical properties to the brazed joint microstructure. This approach incorporates metallurgical characterization, thermodynamic/kinetic simulations, mechanical testing, fracture mechanics and finite element analysis (FEA) modeling to estimate joint properties based on the initial BM/FM composition and brazing process parameters. Experimental work was carried out in each of these areas to validate the multi-scale approach and develop improved techniques for quantifying brazed joint properties. Two Ni-base superalloys often used in gas turbine applications, Inconel 718 and CMSX-4, were selected for study and vacuum furnace brazed using two common FMs, BNi-2 and BNi-9. Metallurgical characterization of these brazed joints showed two primary microstructural regions; a soft

  20. A Multi-Scale Computational Study on the Mechanism of Streptococcus pneumoniae Nicotinamidase (SpNic)

    OpenAIRE

    Ion, Bogdan; Kazim, Erum; Gauld, James

    2014-01-01

    Nicotinamidase (Nic) is a key zinc-dependent enzyme in NAD metabolism that catalyzes the hydrolysis of nicotinamide to give nicotinic acid. A multi-scale computational approach has been used to investigate the catalytic mechanism, substrate binding and roles of active site residues of Nic from Streptococcus pneumoniae (SpNic). In particular, density functional theory (DFT), molecular dynamics (MD) and ONIOM quantum mechanics/molecular mechanics (QM/MM) methods have been employed. The o...

  1. Standing Together for Reproducibility in Large-Scale Computing: Report on reproducibility@XSEDE

    OpenAIRE

    James, Doug; Wilkins-Diehr, Nancy; Stodden, Victoria; Colbry, Dirk; Rosales, Carlos; Fahey, Mark; Shi, Justin; Silva, Rafael F.; Lee, Kyo; Roskies, Ralph; Loewe, Laurence; Lindsey, Susan; Kooper, Rob; Barba, Lorena; Bailey, David

    2014-01-01

    This is the final report on reproducibility@xsede, a one-day workshop held in conjunction with XSEDE14, the annual conference of the Extreme Science and Engineering Discovery Environment (XSEDE). The workshop's discussion-oriented agenda focused on reproducibility in large-scale computational research. Two important themes capture the spirit of the workshop submissions and discussions: (1) organizational stakeholders, especially supercomputer centers, are in a unique position to promote, enab...

  2. Multi-scale computational model of three-dimensional hemodynamics within a deformable full-body arterial network

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Nan [Department of Bioengineering, Stanford University, Stanford, CA 94305 (United States); Department of Biomedical Engineering, King’s College London, London SE1 7EH (United Kingdom); Humphrey, Jay D. [Department of Biomedical Engineering, Yale University, New Haven, CT 06520 (United States); Figueroa, C. Alberto, E-mail: alberto.figueroa@kcl.ac.uk [Department of Biomedical Engineering, King’s College London, London SE1 7EH (United Kingdom)

    2013-07-01

    In this article, we present a computational multi-scale model of fully three-dimensional and unsteady hemodynamics within the primary large arteries in the human. Computed tomography image data from two different patients were used to reconstruct a nearly complete network of the major arteries from head to foot. A linearized coupled-momentum method for fluid–structure-interaction was used to describe vessel wall deformability and a multi-domain method for outflow boundary condition specification was used to account for the distal circulation. We demonstrated that physiologically realistic results can be obtained from the model by comparing simulated quantities such as regional blood flow, pressure and flow waveforms, and pulse wave velocities to known values in the literature. We also simulated the impact of age-related arterial stiffening on wave propagation phenomena by progressively increasing the stiffness of the central arteries and found that the predicted effects on pressure amplification and pulse wave velocity are in agreement with findings in the clinical literature. This work demonstrates the feasibility of three-dimensional techniques for simulating hemodynamics in a full-body compliant arterial network.

  3. Cerebral methodology based computing to estimate real phenomena from large-scale nuclear simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2011-01-01

    Our final goal is to estimate real phenomena from large-scale nuclear simulations by using computing processes. Here, 'large-scale' means that the simulations involve such a variety of scales and such physical complexity that corresponding experiments and/or theories do not exist. In the nuclear field, estimating real phenomena from simulations is indispensable for improving the safety and security of nuclear power plants. The analysis of the uncertainty included in simulations is needed to reveal the sensitivity to uncertainty due to randomness, to reduce the uncertainty due to lack of knowledge, and to arrive at a degree of certainty through verification and validation (V and V) and uncertainty quantification (UQ) processes. To realize this, we propose 'Cerebral Methodology based Computing (CMC)', a set of computing processes with deductive and inductive approaches modeled on human reasoning. Our idea is to execute deductive and inductive simulations corresponding to these deductive and inductive approaches. We have established a prototype system and applied it to a thermal displacement analysis of a nuclear power plant. The result shows that our idea is effective in reducing the uncertainty and obtaining a degree of certainty. (author)

  4. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, building on an analysis of the characteristics and shortcomings of the genetic algorithm and the support vector machine. In the cloud computing environment, the SVM parameters are first optimized by a parallel genetic algorithm, and the optimized parallel SVM model is then used to predict traffic flow. Using traffic flow data from the Haizhu District of Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and a parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
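
    A minimal single-machine sketch of the GA-SVM idea (not the authors' cloud or MPI implementation): a tiny evolutionary search with selection and mutation tunes the SVR penalty C and kernel width gamma, scoring each candidate by cross-validated fit on historical flow data; the parameter ranges and all names are illustrative assumptions:

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)

      def ga_svm(X, y, pop_size=20, generations=10):
          """Evolve (log10 C, log10 gamma) pairs; fitness is the mean cross-validated
          R^2 of an RBF-kernel SVR trained on the historical traffic flow data."""
          def fitness(ind):
              model = SVR(kernel="rbf", C=10.0 ** ind[0], gamma=10.0 ** ind[1])
              return cross_val_score(model, X, y, cv=3).mean()

          pop = rng.uniform(low=[-1.0, -4.0], high=[3.0, 1.0], size=(pop_size, 2))
          for _ in range(generations):
              scores = np.array([fitness(ind) for ind in pop])
              parents = pop[np.argsort(scores)[-pop_size // 2:]]               # selection
              children = parents[rng.integers(len(parents), size=pop_size - len(parents))]
              children = children + rng.normal(scale=0.2, size=children.shape)  # mutation
              pop = np.vstack([parents, children])
          best = pop[np.argmax([fitness(ind) for ind in pop])]
          return SVR(kernel="rbf", C=10.0 ** best[0], gamma=10.0 ** best[1]).fit(X, y)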

  5. Large-scale particle simulations in a virtual-memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.

    1982-08-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time
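
    A minimal sketch of the kind of locality-preserving sort described above (illustrative, not the paper's code): particles are reordered by their grid-cell index so that particles that are neighbours in space are also neighbours in memory before the charge-accumulation and particle-pushing passes.

      import numpy as np

      def sort_particles_by_cell(x, y, vx, vy, cell_size, nx):
          """Reorder particle arrays so particles in the same grid cell are contiguous,
          turning random memory/page accesses into mostly sequential ones."""
          ix = (x / cell_size).astype(int)
          iy = (y / cell_size).astype(int)
          order = np.argsort(iy * nx + ix, kind="stable")   # flattened cell index
          return x[order], y[order], vx[order], vy[order]

      # Re-sorting every few time steps is usually enough, since particles move
      # only a small fraction of a cell per step.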

  6. Computer aided heat transfer analysis in a laboratory scaled heat exchanger unit

    International Nuclear Information System (INIS)

    Gunes, M.

    1998-01-01

    In this study, a laboratory-scale heat exchanger unit and software developed to analyze heat transfer, intended especially for use in heat transfer courses, are presented. The analyses carried out in the software, using sample values measured in the heat exchanger, are: (1) determination of the heat transfer rate, logarithmic mean temperature difference and overall heat transfer coefficient; (2) determination of the convection heat transfer coefficient inside and outside the tube and the effect of fluid velocity on these; (3) investigation of the relationship between the Nusselt, Reynolds and Prandtl numbers using multiple non-linear regression analysis. Results are displayed on the screen graphically.
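
    As a small worked illustration of the first and third analyses (standard textbook formulas with assumed variable names, not the program's actual code): the heat duty, log-mean temperature difference and overall coefficient follow from the measured inlet/outlet temperatures, and a correlation of the form Nu = a·Re^b·Pr^c can be fitted by linear least squares in log space.

      import numpy as np

      def lmtd_counterflow(th_in, th_out, tc_in, tc_out):
          """Logarithmic mean temperature difference for a counter-flow arrangement."""
          d1, d2 = th_in - tc_out, th_out - tc_in
          return (d1 - d2) / np.log(d1 / d2)

      def overall_u(m_dot_hot, cp_hot, th_in, th_out, area, lmtd):
          """Overall heat transfer coefficient U = Q / (A * LMTD), in W/(m^2 K)."""
          q = m_dot_hot * cp_hot * (th_in - th_out)   # heat duty from the hot stream, W
          return q / (area * lmtd)

      def fit_nu_correlation(nu, re, pr):
          """Fit Nu = a * Re**b * Pr**c by least squares on the log-linear form."""
          A = np.column_stack([np.ones_like(re), np.log(re), np.log(pr)])
          coef, *_ = np.linalg.lstsq(A, np.log(nu), rcond=None)
          return np.exp(coef[0]), coef[1], coef[2]    # a, b, c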

  7. Linear algebra

    CERN Document Server

    Edwards, Harold M

    1995-01-01

    In his new undergraduate textbook, Harold M Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.

  8. Auto-Scaling of Geo-Based Image Processing in an OpenStack Cloud Computing Environment

    OpenAIRE

    Sanggoo Kang; Kiwon Lee

    2016-01-01

    Cloud computing is a base platform for the distribution of large volumes of data and high-performance image processing on the Web. Despite wide applications in Web-based services and their many benefits, geo-spatial applications based on cloud computing technology are still developing. Auto-scaling realizes automatic scalability, i.e., the scale-out and scale-in processing of virtual servers in a cloud computing environment. This study investigates the applicability of auto-scaling to geo-bas...

  9. Reproducibility and accuracy of linear measurements on dental models derived from cone-beam computed tomography compared with digital dental casts

    NARCIS (Netherlands)

    Waard, O. de; Rangel, F.A.; Fudalej, P.S.; Bronkhorst, E.M.; Kuijpers-Jagtman, A.M.; Breuning, K.H.

    2014-01-01

    INTRODUCTION: The aim of this study was to determine the reproducibility and accuracy of linear measurements on 2 types of dental models derived from cone-beam computed tomography (CBCT) scans: CBCT images, and Anatomodels (InVivoDental, San Jose, Calif); these were compared with digital models

  10. COVAR: Computer Program for Multifactor Relative Risks and Tests of Hypotheses Using a Variance-Covariance Matrix from Linear and Log-Linear Regression

    Directory of Open Access Journals (Sweden)

    Leif E. Peterson

    1997-11-01

    A computer program for multifactor relative risks, confidence limits, and tests of hypotheses using regression coefficients and a variance-covariance matrix obtained from a previous additive or multiplicative regression analysis is described in detail. Data used by the program can be stored and input from an external disk-file or entered via the keyboard. The output contains a list of the input data, point estimates of single or joint effects, confidence intervals and tests of hypotheses based on a minimum modified chi-square statistic. Availability of the program is also discussed.
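
    The underlying computation is standard and can be illustrated in a few lines (a generic sketch, not the COVAR program itself): for a multiplicative (log-linear) model, the relative risk for a covariate contrast x is exp(x'β), its variance on the log scale is x'Σx, and a Wald-type chi-square statistic is used below in place of the program's minimum modified chi-square; all names are illustrative.

      import numpy as np
      from scipy.stats import norm, chi2

      def relative_risk(beta, cov, x, alpha=0.05):
          """Joint relative risk, (1 - alpha) confidence limits and a Wald chi-square
          test for the covariate contrast x under a multiplicative (log-linear) model."""
          x = np.asarray(x, dtype=float)
          log_rr = float(x @ beta)              # x' beta
          var = float(x @ cov @ x)              # x' Sigma x
          z = norm.ppf(1.0 - alpha / 2.0)
          ci = (np.exp(log_rr - z * np.sqrt(var)), np.exp(log_rr + z * np.sqrt(var)))
          chi_sq = log_rr ** 2 / var            # 1-degree-of-freedom Wald statistic
          return np.exp(log_rr), ci, chi_sq, chi2.sf(chi_sq, df=1)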

  11. A dual computed tomography linear accelerator unit for stereotactic radiation therapy: a new approach without cranially fixated stereotactic frames

    International Nuclear Information System (INIS)

    Uematsu, Minoru; Fukui, Toshiharu; Shioda, Akira; Tokumitsu, Hideyuki; Takai, Kenji; Kojima, Tadaharu; Asai, Yoshiko; Kusano, Shoichi

    1996-01-01

    Purpose: To perform stereotactic radiation therapy (SRT) without cranially fixated stereotactic frames, we developed a dual computed tomography (CT) linear accelerator (linac) treatment unit. Methods and Materials: This unit is composed of a linac, CT, and motorized table. The linac and CT are set up at opposite ends of the table, which is suitable for both machines. The gantry axis of the linac is coaxial with that of the CT scanner. Thus, the center of the target detected with the CT can be matched easily with the gantry axis of the linac by rotating the table. Positioning is confirmed with the CT for each treatment session. Positioning and treatment errors with this unit were examined by phantom studies. Between August and December 1994, 8 patients with 11 lesions of primary or metastatic brain tumors received SRT with this unit. All lesions were treated with 24 Gy in three fractions to 30 Gy in 10 fractions to the 80% isodose line, with or without conventional external beam radiation therapy. Results: Phantom studies revealed that treatment errors with this unit were within 1 mm after careful positioning. The position was easily maintained using two tiny metallic balls as vertical and horizontal marks. Motion of patients was negligible using a conventional heat-flexible head mold and dental impression. The overall time for a multiple noncoplanar arcs treatment for a single isocenter was less than 1 h on the initial treatment day and usually less than 20 min on subsequent days. Treatment was outpatient-based and well tolerated with no acute toxicities. Satisfactory responses have been documented. Conclusion: Using this treatment unit, multiple fractionated SRT is performed easily and precisely without cranially fixated stereotactic frames

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  13. Heuristic algorithms for joint optimization of unicast and anycast traffic in elastic optical network–based large–scale computing systems

    Directory of Open Access Journals (Sweden)

    Markowski Marcin

    2017-09-01

    In recent years elastic optical networks have been perceived as a prospective choice for future optical networks due to better adjustment and utilization of optical resources than is possible with traditional wavelength division multiplexing networks. In this paper we investigate the elastic architecture as the communication network for distributed data centers. We address the problem of optimizing routing and spectrum assignment for large-scale computing systems based on an elastic optical architecture; in particular, we concentrate on anycast user-to-data-center traffic optimization. We assume that the computational resources of the data centers are limited. For this offline problem we formulate an integer linear programming model and propose several heuristics, including a meta-heuristic algorithm based on a tabu search method. We report computational results, presenting the quality of the approximate solutions and the efficiency of the proposed heuristics, and we also analyze and compare some data center allocation scenarios.
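
    A generic tabu-search skeleton for the anycast part of such a problem (purely illustrative; the cost function, neighbourhood and data structures are assumptions, not the paper's algorithm): each request is assigned to one data center, a move re-assigns a single request, and recently reversed moves are forbidden for a few iterations.

      import random

      def tabu_search(requests, data_centers, cost, iters=500, tenure=20):
          """cost(assignment) -> float, where assignment maps each request to a data center."""
          assign = {r: random.choice(data_centers) for r in requests}
          best, best_cost = dict(assign), cost(assign)
          tabu = {}  # (request, data_center) -> iteration until which that move is forbidden
          for it in range(iters):
              moves = [(cost({**assign, r: dc}), r, dc)
                       for r in requests for dc in data_centers
                       if dc != assign[r] and tabu.get((r, dc), -1) < it]
              if not moves:
                  break
              c, r, dc = min(moves, key=lambda m: m[0])
              tabu[(r, assign[r])] = it + tenure   # forbid moving this request straight back
              assign[r] = dc
              if c < best_cost:
                  best, best_cost = dict(assign), c
          return best, best_cost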

  14. Linear algebra

    CERN Document Server

    Liesen, Jörg

    2015-01-01

    This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...

  15. Complex terrain wind resource estimation with the wind-atlas method: Prediction errors using linearized and nonlinear CFD micro-scale models

    DEFF Research Database (Denmark)

    Troen, Ib; Bechmann, Andreas; Kelly, Mark C.

    2014-01-01

    Using the Wind Atlas methodology to predict the average wind speed at one location from measured climatological wind frequency distributions at another nearby location we analyse the relative prediction errors using a linearized flow model (IBZ) and a more physically correct fully non-linear 3D flow model (CFD) for a number of sites in very complex terrain (large terrain slopes). We first briefly describe the Wind Atlas methodology as implemented in WAsP and the specifics of the “classical” model setup and the new setup allowing the use of the CFD computation engine. We discuss some known...

  16. Communication: An improved linear scaling perturbative triples correction for the domain based local pair-natural orbital based singles and doubles coupled cluster method [DLPNO-CCSD(T)]

    KAUST Repository

    Guo, Yang

    2018-01-04

    In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases and in particular for small-gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that, overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) only amounts to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).

  17. Communication: An improved linear scaling perturbative triples correction for the domain based local pair-natural orbital based singles and doubles coupled cluster method [DLPNO-CCSD(T)

    KAUST Repository

    Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G.; Minenkov, Yury; Cavallo, Luigi; Neese, Frank

    2018-01-01

    In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases and in particular for small-gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that, overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) only amounts to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).

  18. Computational Fluid Dynamics Study on the Effects of RATO Timing on the Scale Model Acoustic Test

    Science.gov (United States)

    Nielsen, Tanner; Williams, B.; West, Jeff

    2015-01-01

    The Scale Model Acoustic Test (SMAT) is a 5% scale test of the Space Launch System (SLS), which is currently being designed at Marshall Space Flight Center (MSFC). The purpose of this test is to characterize and understand a variety of acoustic phenomena that occur during the early portions of lift off, one being the overpressure environment that develops shortly after booster ignition. The SLS lift off configuration consists of four RS-25 liquid engines on the core stage, with two solid boosters, one connected to each side. Past experience with scale model testing at MSFC (in ER42) has shown that there is a delay in the ignition of the Rocket Assisted Take Off (RATO) motor, which is used as the 5% scale analog of the solid boosters, after the signal to ignite is given. This delay can range from 0 to 16.5 ms. While such a small delay may be insignificant in the case of the full-scale SLS, it can significantly alter the data obtained during the SMAT due to the much smaller geometry. The speed of sound of the air and combustion gas constituents is not scaled, and therefore the SMAT pressure waves propagate at approximately the same speed as occurs during full scale. However, the SMAT geometry is much smaller, allowing the pressure waves to move down the exhaust duct, through the trench, and impact the vehicle model much faster than occurs at full scale. To better understand the effect of the RATO timing simultaneity on the SMAT IOP test data, a computational fluid dynamics (CFD) analysis was performed using the Loci/CHEM CFD software program. Five different timing offsets, based on RATO ignition delay statistics, were simulated. A variety of results and comparisons will be given, assessing the overall effect of RATO timing simultaneity on the SMAT overpressure environment.

  19. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), which is the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  20. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    Science.gov (United States)

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

    Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center, and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workloads called workflows, whose successful management in terms of energy saving is still in its early stages. WorkflowSim is currently one of the most advanced simulators for research on workflow processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and network costs as well as quality of service, and it incorporates the preeminent strategy for on-host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resource utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as a mechanism overlapped with the DVFS intra-host technique.

  1. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    Directory of Open Access Journals (Sweden)

    Iván Tomás Cotes-Ruiz

    Full Text Available Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center, and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workloads called workflows, whose successful management in terms of energy saving is still in its early stages. WorkflowSim is currently one of the most advanced simulators for research on workflow processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and network costs as well as quality of service, and it incorporates the preeminent strategy for on-host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resource utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as a mechanism overlapped with the DVFS intra-host technique.
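    As a back-of-the-envelope illustration of why DVFS saves (or sometimes does not save) energy, the sketch below uses the common dynamic-power model P ≈ C·V²·f and a CPU-bound time model t ≈ work/f; the constants and the frequency/voltage pairs are invented for illustration and are not WorkflowSim parameters:

```python
# Hypothetical DVFS energy estimate for a CPU-bound task: dynamic power is
# modelled as P_dyn = C_eff * V^2 * f and execution time as work / f.
# All constants and (frequency, voltage) pairs are illustrative only.
def energy_joules(work_cycles, freq_hz, volt, c_eff=1e-9, p_static=2.0):
    t = work_cycles / freq_hz                  # seconds (CPU-bound assumption)
    p_dyn = c_eff * volt ** 2 * freq_hz        # dynamic power in watts
    return (p_dyn + p_static) * t

governors = {"performance": (2.4e9, 1.20),     # made-up operating points
             "powersave":   (1.2e9, 0.90)}
for name, (f, v) in governors.items():
    print(name, round(energy_joules(4.8e9, f, v), 2), "J")
# Note: with a non-negligible static power, running slower is not always
# cheaper -- exactly the trade-off a DVFS-aware scheduler has to evaluate.
```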

  2. Large Scale Document Inversion using a Multi-threaded Computing System

    Science.gov (United States)

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2018-01-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU in computation as a massive parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, a vast amount of information is flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment, with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full text searches or document retrieval, a large number of documents will require a tremendous amount of time to create the index. The performance of document inversion can be improved by a multi-threaded or multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: Information systems → Information retrieval; Computing methodologies → Massively parallel and high-performance simulations.
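    The core of document inversion described above is straightforward to express serially; the following Python sketch builds a hash-based inverted index in time linear in the number of tokens (the record's contribution is the SPMD GPU/CUDA parallelisation, which is not shown here):

```python
# Serial sketch of hash-based document inversion, linear in the number of
# tokens; the record's GPU/CUDA SPMD version parallelises this over documents.
from collections import defaultdict

def build_inverted_index(docs):
    """docs: dict doc_id -> text. Returns term -> sorted list of doc_ids."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():      # naive tokenisation
            index[term].add(doc_id)            # hash-based posting insertion
    return {term: sorted(ids) for term, ids in index.items()}

docs = {1: "GPU computing for document inversion",
        2: "inverted index for full text search"}
print(build_inverted_index(docs)["inversion"])  # -> [1]
```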

  3. Large Scale Document Inversion using a Multi-threaded Computing System.

    Science.gov (United States)

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2017-06-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU in computation as a massive parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, a vast amount of information is flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment, with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full text searches or document retrieval, a large number of documents will require a tremendous amount of time to create the index. The performance of document inversion can be improved by a multi-threaded or multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: Information systems → Information retrieval; Computing methodologies → Massively parallel and high-performance simulations.

  4. Characterization of the Scale Model Acoustic Test Overpressure Environment using Computational Fluid Dynamics

    Science.gov (United States)

    Nielsen, Tanner; West, Jeff

    2015-01-01

    The Scale Model Acoustic Test (SMAT) is a 5% scale test of the Space Launch System (SLS), which is currently being designed at Marshall Space Flight Center (MSFC). The purpose of this test is to characterize and understand a variety of acoustic phenomena that occur during the early portions of lift off, one being the overpressure environment that develops shortly after booster ignition. The pressure waves that propagate from the mobile launcher (ML) exhaust hole are defined as the ignition overpressure (IOP), while the portion of the pressure waves that exit the duct or trench are the duct overpressure (DOP). Distinguishing the IOP and DOP in scale model test data has been difficult in past experiences and in early SMAT results, due to the effects of scaling the geometry. The speed of sound of the air and combustion gas constituents is not scaled, and therefore the SMAT pressure waves propagate at approximately the same speed as occurs in full scale. However, the SMAT geometry is twenty times smaller, allowing the pressure waves to move down the exhaust hole, through the trench and duct, and impact the vehicle model much faster than occurs at full scale. The DOP waves impact portions of the vehicle at the same time as the IOP waves, making it difficult to distinguish the different waves and fully understand the data. To better understand the SMAT data, a computational fluid dynamics (CFD) analysis was performed with a fictitious geometry that isolates the IOP and DOP. The upper and lower portions of the domain were segregated to accomplish the isolation in such a way that the flow physics were not significantly altered. The Loci/CHEM CFD software program was used to perform this analysis.

  5. Comparison of x ray computed tomography number to proton relative linear stopping power conversion functions using a standard phantom.

    Science.gov (United States)

    Moyers, M F

    2014-06-01

    Adequate evaluation of the results from multi-institutional trials involving light ion beam treatments requires consideration of the planning margins applied to both targets and organs at risk. A major uncertainty that affects the size of these margins is the conversion of x ray computed tomography numbers (XCTNs) to relative linear stopping powers (RLSPs). Various facilities engaged in multi-institutional clinical trials involving proton beams have been applying significantly different margins in their patient planning. This study was performed to determine the variance in the conversion functions used at proton facilities in the U.S.A. wishing to participate in National Cancer Institute sponsored clinical trials. A simplified method of determining the conversion function was developed using a standard phantom containing only water and aluminum. The new method was based on the premise that all scanners have their XCTNs for air and water calibrated daily to constant values but that the XCTNs for high density/high atomic number materials are variable with different scanning conditions. The standard phantom was taken to 10 different proton facilities and scanned with the local protocols resulting in 14 derived conversion functions which were compared to the conversion functions used at the local facilities. For tissues within ±300 XCTN of water, all facility functions produced converted RLSP values within ±6% of the values produced by the standard function and within 8% of the values from any other facility's function. For XCTNs corresponding to lung tissue, converted RLSP values differed by as great as ±8% from the standard and up to 16% from the values of other facilities. For XCTNs corresponding to low-density immobilization foam, the maximum to minimum values differed by as much as 40%. The new method greatly simplifies determination of the conversion function, reduces ambiguity, and in the future could promote standardization between facilities. Although it
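    The conversion function discussed above is, in practice, a piecewise-linear mapping from XCTN to RLSP anchored at a few calibration points. A minimal sketch, assuming made-up calibration values (not the values derived from the water/aluminum standard phantom):

```python
# Piecewise-linear conversion from x-ray CT number (XCTN) to relative linear
# stopping power (RLSP). The calibration points are invented for illustration
# and are NOT the values obtained from the standard phantom in the study.
import numpy as np

xctn_points = np.array([-1000.0, -700.0, 0.0, 60.0, 1000.0, 2000.0])
rlsp_points = np.array([  0.001,   0.30, 1.0, 1.06,   1.55,    2.10])

def xctn_to_rlsp(xctn):
    return np.interp(xctn, xctn_points, rlsp_points)

print(xctn_to_rlsp([-750.0, 40.0, 300.0]))  # lung-like, soft-tissue, bone-like
```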

  6. Linear-Algebra Programs

    Science.gov (United States)

    Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.

    1982-01-01

    The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. It is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
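    The same BLAS routines remain the workhorse of numerical software today; as a small illustration (using SciPy's low-level BLAS wrappers rather than the original FORTRAN interface), two Level-1 calls:

```python
# Two BLAS Level-1 operations called through SciPy's low-level wrappers
# (illustrating the style of BLAS usage, not the original FORTRAN API).
import numpy as np
from scipy.linalg import blas

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
dot = blas.ddot(x, y)            # inner product x.y
y = blas.daxpy(x, y, a=2.0)      # y <- 2*x + y (updated in place and returned)
print(dot, y)                    # 32.0 [ 6.  9. 12.]
```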

  7. Large-scale simulations of error-prone quantum computation devices

    International Nuclear Information System (INIS)

    Trieu, Doan Binh

    2009-01-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2±0.2) × 10⁻⁶. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431±0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced technology, i

  8. Large-scale simulations of error-prone quantum computation devices

    Energy Technology Data Exchange (ETDEWEB)

    Trieu, Doan Binh

    2009-07-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10⁻⁶. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced

  9. Multi-Agent System Supporting Automated Large-Scale Photometric Computations

    Directory of Open Access Journals (Sweden)

    Adam Sȩdziwy

    2016-02-01

    Full Text Available The technologies related to green energy, smart cities and similar areas, which have been developing dynamically in recent years, frequently face problems of a computational rather than a technological nature. One example is the ability to accurately predict the weather conditions for PV farms or wind turbines. Another group of issues is related to the complexity of the computations required to obtain an optimal setup of a solution being designed. In this article, we present a case representing the latter group of problems, namely designing large-scale power-saving lighting installations. The term “large-scale” refers to an entire city area, containing tens of thousands of luminaires. Although a simple power reduction for a single street, giving limited savings, is relatively easy, it becomes infeasible for tasks covering thousands of luminaires described by precise coordinates (instead of simplified layouts). To overcome this critical issue, we propose introducing a formal representation of the computing problem and applying a multi-agent system to perform the design-related computations in parallel. An important measure introduced in the article, indicating optimization progress, is entropy. It also allows optimization to be terminated when the solution is satisfactory. The article contains the results of real-life calculations made with the help of the presented approach.
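    As a hypothetical sketch of using entropy as a convergence/termination measure for a population of candidate solutions (illustrative only, not the agent system from the record):

```python
# Shannon entropy of binned candidate-solution costs as a convergence measure:
# when the agents' solutions collapse into few bins, entropy drops and the
# optimisation can be terminated. Numbers and threshold are illustrative.
import math
from collections import Counter

def entropy(values, bins=10):
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0            # avoid zero bin width
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

costs = [103.2, 101.9, 102.0, 102.1, 101.9, 102.0]   # hypothetical agent results
h = entropy(costs)
print("entropy (bits):", round(h, 3))
if h < 1.0:
    print("solutions have converged; terminate the optimisation")
```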

  10. Computational Fluid Dynamics for nuclear applications: from CFD to multi-scale CMFD

    Energy Technology Data Exchange (ETDEWEB)

    Yadigaroglu, G. [Swiss Federal Institute of Technology-Zurich (ETHZ), Nuclear Engineering Laboratory, ETH-Zentrum, CLT CH-8092 Zurich (Switzerland)]. E-mail: yadi@ethz.ch

    2005-02-01

    New trends in computational methods for nuclear reactor thermal-hydraulics are discussed; traditionally, these have been based on the two-fluid model. Although CFD computations for single phase flows are commonplace, Computational Multi-Fluid Dynamics (CMFD) is still under development. One-fluid methods coupled with interface tracking techniques provide interesting opportunities and enlarge the scope of problems that can be solved. For certain problems, one may have to conduct 'cascades' of computations at increasingly finer scales to resolve all issues. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water and a proposed CMFD initiative to numerically model Critical Heat Flux (CHF) illustrate such cascades. For the venting problem, a variety of tools are used: a system code for system behaviour; an interface-tracking method (Volume of Fluid, VOF) to examine the behaviour of large bubbles; direct-contact condensation can be treated either by Direct Numerical Simulation (DNS) or by analytical methods.

  11. Computational Fluid Dynamics for nuclear applications: from CFD to multi-scale CMFD

    International Nuclear Information System (INIS)

    Yadigaroglu, G.

    2005-01-01

    New trends in computational methods for nuclear reactor thermal-hydraulics are discussed; traditionally, these have been based on the two-fluid model. Although CFD computations for single phase flows are commonplace, Computational Multi-Fluid Dynamics (CMFD) is still under development. One-fluid methods coupled with interface tracking techniques provide interesting opportunities and enlarge the scope of problems that can be solved. For certain problems, one may have to conduct 'cascades' of computations at increasingly finer scales to resolve all issues. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water and a proposed CMFD initiative to numerically model Critical Heat Flux (CHF) illustrate such cascades. For the venting problem, a variety of tools are used: a system code for system behaviour; an interface-tracking method (Volume of Fluid, VOF) to examine the behaviour of large bubbles; direct-contact condensation can be treated either by Direct Numerical Simulation (DNS) or by analytical methods

  12. Development and application of a computer model for large-scale flame acceleration experiments

    International Nuclear Information System (INIS)

    Marx, K.D.

    1987-07-01

    A new computational model for large-scale premixed flames is developed and applied to the simulation of flame acceleration experiments. The primary objective is to circumvent the necessity for resolving turbulent flame fronts; this is imperative because of the relatively coarse computational grids which must be used in engineering calculations. The essence of the model is to artificially thicken the flame by increasing the appropriate diffusivities and decreasing the combustion rate, but to do this in such a way that the burn velocity varies with pressure, temperature, and turbulence intensity according to prespecified phenomenological characteristics. The model is particularly aimed at implementation in computer codes which simulate compressible flows. To this end, it is applied to the two-dimensional simulation of hydrogen-air flame acceleration experiments in which the flame speeds and gas flow velocities attain or exceed the speed of sound in the gas. It is shown that many of the features of the flame trajectories and pressure histories in the experiments are simulated quite well by the model. Using the comparison of experimental and computational results as a guide, some insight is developed into the processes which occur in such experiments. 34 refs., 25 figs., 4 tabs
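    The flame-thickening idea can be checked with a simple scaling argument: multiplying the diffusivity by a factor F and dividing the reaction rate by F leaves the burn velocity s_L ∝ √(D·ω) unchanged while the flame thickness δ ∝ D/s_L grows by F, so the front can be resolved on a coarse grid. A small numerical sketch with arbitrary values (not the model's actual phenomenological correlations):

```python
# Scaling check of the artificially thickened flame: multiply diffusivity by F
# and divide the reaction rate by F. The burn velocity s_L ~ sqrt(D * w) is
# unchanged while the thickness delta ~ D / s_L grows by F (arbitrary numbers).
import math

def flame(D, w):
    s_L = math.sqrt(D * w)        # scaling estimate of laminar burn velocity
    return s_L, D / s_L           # (burn velocity, flame thickness)

D, w, F = 2.0e-5, 5.0e3, 10.0     # diffusivity, reaction rate, thickening factor
print("original :", flame(D, w))
print("thickened:", flame(F * D, w / F))   # same speed, F times thicker
```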

  13. Large Scale Beam-beam Simulations for the CERN LHC using Distributed Computing

    CERN Document Server

    Herr, Werner; McIntosh, E; Schmidt, F

    2006-01-01

    We report on a large scale simulation of beam-beam effects for the CERN Large Hadron Collider (LHC). The stability of particles which experience head-on and long-range beam-beam effects was investigated for different optical configurations and machine imperfections. To cover the interesting parameter space required computing resources not available at CERN. The necessary resources were available in the LHC@home project, based on the BOINC platform. At present, this project makes more than 60000 hosts available for distributed computing. We shall discuss our experience using this system during a simulation campaign of more than six months and describe the tools and procedures necessary to ensure consistent results. The results from this extended study are presented and future plans are discussed.

  14. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    Science.gov (United States)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the Teraflops-scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that in combination with sufficient resolution and advanced adaptive techniques may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithmic techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) and one percent of the power required by convention

  15. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    Science.gov (United States)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are an order of magnitude larger than those of present day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require rapid turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences on deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that will arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment based on market forces. We will present how we enabled high-tolerance computing in order to achieve large-scale computing as well as operational cost savings.

  16. Computational Fluid Dynamics Modeling Of Scaled Hanford Double Shell Tank Mixing - CFD Modeling Sensitivity Study Results

    International Nuclear Information System (INIS)

    Jackson, V.L.

    2011-01-01

    The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full-scale.

  17. Full-color large-scaled computer-generated holograms using RGB color filters.

    Science.gov (United States)

    Tsuchiyama, Yasuhiro; Matsushima, Kyoji

    2017-02-06

    A technique using RGB color filters is proposed for creating high-quality full-color computer-generated holograms (CGHs). The fringe of these CGHs is composed of more than a billion pixels. The CGHs reconstruct full-parallax three-dimensional color images with a deep sensation of depth caused by natural motion parallax. The simulation technique as well as the principle and challenges of high-quality full-color reconstruction are presented to address the design of filter properties suitable for large-scaled CGHs. Optical reconstructions of actual fabricated full-color CGHs are demonstrated in order to verify the proposed techniques.

  18. A computationally inexpensive CFD approach for small-scale biomass burners equipped with enhanced air staging

    International Nuclear Information System (INIS)

    Buchmayr, M.; Gruber, J.; Hargassner, M.; Hochenauer, C.

    2016-01-01

    Highlights: • Time-efficient CFD model to predict biomass boiler performance. • Boundary conditions for numerical modeling are provided by measurements. • Tars in the product from primary combustion were considered. • Simulation results were validated by experiments on a real-scale reactor. • Very good agreement between experimental and simulation results. - Abstract: Computational Fluid Dynamics (CFD) is an upcoming technique for the optimization and design of biomass combustion systems. So far, an accurate simulation of biomass combustion can only be achieved with high computational effort. This work presents an accurate, time-efficient CFD approach for small-scale biomass combustion systems equipped with enhanced air staging. The model can handle the high amount of biomass tars in the primary combustion product at very low primary air ratios. Gas-phase combustion in the freeboard was modelled with the Steady Flamelet Model (SFM) together with a detailed heptane combustion mechanism. The advantage of the SFM is that complex combustion chemistry can be taken into account at low computational effort because only two additional transport equations have to be solved to describe the chemistry in the reacting flow. Boundary conditions for primary combustion product composition were obtained from the fuel bed by experiments. The fuel bed data were used as the fuel inlet boundary condition for the gas-phase combustion model. The numerical and experimental investigations were performed for different operating conditions and varying wood-chip moisture on a specially designed real-scale reactor. The numerical predictions were validated with experimental results and a very good agreement was found. With the presented approach, accurate results can be provided within 24 h using a standard Central Processing Unit (CPU) consisting of six cores. Case studies e.g. for combustion geometry improvement can be realized effectively due to the short calculation

  19. Multi-scale computation methods: Their applications in lithium-ion battery research and development

    Science.gov (United States)

    Siqi, Shi; Jian, Gao; Yue, Liu; Yan, Zhao; Qu, Wu; Wangwei, Ju; Chuying, Ouyang; Ruijuan, Xiao

    2016-01-01

    Based upon advances in theoretical algorithms, modeling and simulations, and computer technologies, the rational design of materials, cells, devices, and packs in the field of lithium-ion batteries is being realized incrementally and will at some point trigger a paradigm revolution by combining calculations and experiments linked by a big shared database, enabling accelerated development of the whole industrial chain. Theory and multi-scale modeling and simulation, as supplements to experimental efforts, can help greatly to close some of the current experimental and technological gaps, as well as predict path-independent properties and help to fundamentally understand path-independent performance in multiple spatial and temporal scales. Project supported by the National Natural Science Foundation of China (Grant Nos. 51372228 and 11234013), the National High Technology Research and Development Program of China (Grant No. 2015AA034201), and Shanghai Pujiang Program, China (Grant No. 14PJ1403900).

  20. Analysis of a computational benchmark for a high-temperature reactor using SCALE

    International Nuclear Information System (INIS)

    Goluoglu, S.

    2006-01-01

    Several proposed advanced reactor concepts require methods to address effects of double heterogeneity. In doubly heterogeneous systems, heterogeneous fuel particles in a moderator matrix form the fuel region of the fuel element and thus constitute the first level of heterogeneity. Fuel elements themselves are also heterogeneous with fuel and moderator or reflector regions, forming the second level of heterogeneity. The fuel elements may also form regular or irregular lattices. A five-phase computational benchmark for a high-temperature reactor (HTR) fuelled with uranium or reactor-grade plutonium has been defined by the Organization for Economic Cooperation and Development, Nuclear Energy Agency (OECD NEA), Nuclear Science Committee, Working Party on the Physics of Plutonium Fuels and Innovative Fuel Cycles. This paper summarizes the analysis results using the latest SCALE code system (to be released in CY 2006 as SCALE 5.1). (authors)

  1. Linear DNA vaccine prepared by large-scale PCR provides protective immunity against H1N1 influenza virus infection in mice.

    Science.gov (United States)

    Wang, Fei; Chen, Quanjiao; Li, Shuntang; Zhang, Chenyao; Li, Shanshan; Liu, Min; Mei, Kun; Li, Chunhua; Ma, Lixin; Yu, Xiaolan

    2017-06-01

    Linear DNA vaccines provide effective vaccination. However, their application is limited by high cost and small scale of the conventional polymerase chain reaction (PCR) generally used to obtain sufficient amounts of DNA effective against epidemic diseases. In this study, a two-step, large-scale PCR was established using a low-cost DNA polymerase, RKOD, expressed in Pichia pastoris. Two linear DNA vaccines encoding influenza H1N1 hemagglutinin (HA) 1, LEC-HA, and PTO-LEC-HA (with phosphorothioate-modified primers), were produced by the two-step PCR. Protective effects of the vaccines were evaluated in a mouse model. BALB/c mice were immunized three times with the vaccines or a control DNA fragment. All immunized animals were challenged by intranasal administration of a lethal dose of influenza H1N1 virus 2 weeks after the last immunization. Sera of the immunized animals were tested for the presence of HA-specific antibodies, and the total IFN-γ responses induced by linear DNA vaccines were measured. The results showed that the DNA vaccines but not the control DNA induced strong antibody and IFN-γ responses. Additionally, the PTO-LEC-HA vaccine effectively protected the mice against the lethal homologous mouse-adapted virus, with a survival rate of 100% versus 70% in the LEC-HA-vaccinated group, showing that the PTO-LEC-HA vaccine was more effective than LEC-HA. In conclusion, the results indicated that the linear H1N1 HA-coding DNA vaccines induced significant immune responses and protected mice against a lethal virus challenge. Thus, the low-cost, two-step, large-scale PCR can be considered a potential tool for rapid manufacturing of linear DNA vaccines against emerging infectious diseases. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Nuclear resonant scattering measurements on ⁵⁷Fe by multichannel scaling with a 64-pixel silicon avalanche photodiode linear-array detector.

    Science.gov (United States)

    Kishimoto, S; Mitsui, T; Haruki, R; Yoda, Y; Taniguchi, T; Shimazaki, S; Ikeno, M; Saito, M; Tanaka, M

    2014-11-01

    We developed a silicon avalanche photodiode (Si-APD) linear-array detector for use in nuclear resonant scattering experiments using synchrotron X-rays. The Si-APD linear array consists of 64 pixels (pixel size: 100 × 200 μm²) with a pixel pitch of 150 μm and depletion depth of 10 μm. An ultrafast frontend circuit allows the X-ray detector to obtain a high output rate of >10⁷ cps per pixel. High-performance integrated circuits achieve multichannel scaling over 1024 continuous time bins with a 1 ns resolution for each pixel without dead time. The multichannel scaling method enabled us to record a time spectrum of the 14.4 keV nuclear radiation at each pixel with a time resolution of 1.4 ns (FWHM). This method was successfully applied to nuclear forward scattering and nuclear small-angle scattering on ⁵⁷Fe.
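    Multichannel scaling as described above amounts to histogramming photon arrival times, measured relative to the excitation pulse, into 1024 bins of 1 ns for each pixel. A synthetic-data sketch (the decay constant is only indicative; no measured ⁵⁷Fe data are used):

```python
# Multichannel scaling reduced to its core: per-pixel histograms of photon
# arrival times in 1024 bins of 1 ns. Arrival times are synthetic samples
# from an exponential decay, not measured 57Fe data.
import numpy as np

n_pixels, n_bins, bin_ns = 64, 1024, 1.0
rng = np.random.default_rng(0)
arrival_ns = rng.exponential(scale=141.0, size=5000)   # ~141 ns nuclear lifetime
pixel_id = rng.integers(0, n_pixels, size=arrival_ns.size)

spectra = np.zeros((n_pixels, n_bins), dtype=np.int64)
for p in range(n_pixels):
    hist, _ = np.histogram(arrival_ns[pixel_id == p],
                           bins=n_bins, range=(0.0, n_bins * bin_ns))
    spectra[p] += hist
print(spectra.shape, int(spectra.sum()))
```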

  3. Fan-out Estimation in Spin-based Quantum Computer Scale-up.

    Science.gov (United States)

    Nguyen, Thien; Hill, Charles D; Hollenberg, Lloyd C L; James, Matthew R

    2017-10-17

    Solid-state spin-based qubits offer good prospects for scaling based on their long coherence times and nexus to large-scale electronic scale-up technologies. However, high-threshold quantum error correction requires a two-dimensional qubit array operating in parallel, posing significant challenges in fabrication and control. While architectures incorporating distributed quantum control meet this challenge head-on, most designs rely on individual control and readout of all qubits with high gate densities. We analysed the fan-out routing overhead of a dedicated control line architecture, basing the analysis on a generalised solid-state spin qubit platform parameterised to encompass Coulomb confined (e.g. donor based spin qubits) or electrostatically confined (e.g. quantum dot based spin qubits) implementations. The spatial scalability under this model is estimated using standard electronic routing methods and present-day fabrication constraints. Based on reasonable assumptions for qubit control and readout we estimate 10²-10⁵ physical qubits, depending on the quantum interconnect implementation, can be integrated and fanned-out independently. Assuming relatively long control-free interconnects the scalability can be extended. Ultimately, the universal quantum computation may necessitate a much higher number of integrated qubits, indicating that higher dimensional electronics fabrication and/or multiplexed distributed control and readout schemes may be the preferred strategy for large-scale implementation.

  4. Large-scale computation at PSI scientific achievements and future requirements

    International Nuclear Information System (INIS)

    Adelmann, A.; Markushin, V.

    2008-11-01

    ' (SNSP-HPCN) is discussing this complex. Scientific results which are made possible by PSI's engagement at CSCS (named Horizon) are summarised and PSI's future high-performance computing requirements are evaluated. The data collected shows the current situation and a 5 year extrapolation of the users' needs with respect to HPC resources is made. In consequence this report can serve as a basis for future strategic decisions with respect to a non-existing HPC road-map for PSI. PSI's institutional HPC area started hardware-wise approximately in 1999 with the assembly of a 32-processor LINUX cluster called Merlin. Merlin was upgraded several times, lastly in 2007. The Merlin cluster at PSI is used for small scale parallel jobs, and is the only general purpose computing system at PSI. Several dedicated small scale clusters followed the Merlin scheme. Many of the clusters are used to analyse data from experiments at PSI or CERN, because dedicated clusters are most efficient. The intellectual and financial involvement of the procurement (including a machine update in 2007) results in a PSI share of 25 % of the available computing resources at CSCS. The (over) usage of available computing resources by PSI scientists is demonstrated. We actually get more computing cycles than we have paid for. The reason is the fair share policy that is implemented on the Horizon machine. This policy allows us to get cycles, with a low priority, even when our bi-monthly share is used. Five important observations can be drawn from the analysis of the scientific output and the survey of future requirements of main PSI HPC users: (1) High Performance Computing is a main pillar in many important PSI research areas; (2) there is a lack in the order of 10 times the current computing resources (measured in available core-hours per year); (3) there is a trend to use in the order of 600 processors per average production run; (4) the disk and tape storage growth is dramatic; (5) small HPC clusters located

  5. Large-scale computation at PSI scientific achievements and future requirements

    Energy Technology Data Exchange (ETDEWEB)

    Adelmann, A.; Markushin, V

    2008-11-15

    and Networking' (SNSP-HPCN) is discussing this complex. Scientific results which are made possible by PSI's engagement at CSCS (named Horizon) are summarised and PSI's future high-performance computing requirements are evaluated. The data collected shows the current situation and a 5 year extrapolation of the users' needs with respect to HPC resources is made. In consequence this report can serve as a basis for future strategic decisions with respect to a non-existing HPC road-map for PSI. PSI's institutional HPC area started hardware-wise approximately in 1999 with the assembly of a 32-processor LINUX cluster called Merlin. Merlin was upgraded several times, lastly in 2007. The Merlin cluster at PSI is used for small scale parallel jobs, and is the only general purpose computing system at PSI. Several dedicated small scale clusters followed the Merlin scheme. Many of the clusters are used to analyse data from experiments at PSI or CERN, because dedicated clusters are most efficient. The intellectual and financial involvement of the procurement (including a machine update in 2007) results in a PSI share of 25 % of the available computing resources at CSCS. The (over) usage of available computing resources by PSI scientists is demonstrated. We actually get more computing cycles than we have paid for. The reason is the fair share policy that is implemented on the Horizon machine. This policy allows us to get cycles, with a low priority, even when our bi-monthly share is used. Five important observations can be drawn from the analysis of the scientific output and the survey of future requirements of main PSI HPC users: (1) High Performance Computing is a main pillar in many important PSI research areas; (2) there is a lack in the order of 10 times the current computing resources (measured in available core-hours per year); (3) there is a trend to use in the order of 600 processors per average production run; (4) the disk and tape storage growth

  6. Massively parallel and linear-scaling algorithm for second-order Møller–Plesset perturbation theory applied to the study of supramolecular wires

    DEFF Research Database (Denmark)

    Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro

    2017-01-01

    We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide–Expand–Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide–Expand–Consolidate formalism is designed to reduce the steep computational scaling of conventiona...

  7. Development of computational infrastructure to support hyper-resolution large-ensemble hydrology simulations from local-to-continental scales

    Data.gov (United States)

    National Aeronautics and Space Administration — Development of computational infrastructure to support hyper-resolution large-ensemble hydrology simulations from local-to-continental scales A move is currently...

  8. Computational investigation of large-scale vortex interaction with flexible bodies

    Science.gov (United States)

    Connell, Benjamin; Yue, Dick K. P.

    2003-11-01

    The interaction of large-scale vortices with flexible bodies is examined with particular interest paid to the energy and momentum budgets of the system. Finite difference direct numerical simulation of the Navier-Stokes equations on a moving curvilinear grid is coupled with a finite difference structural solver of both a linear membrane under tension and linear Euler-Bernoulli beam. The hydrodynamics and structural dynamics are solved simultaneously using an iterative procedure with the external structural forcing calculated from the hydrodynamics at the surface and the flow-field velocity boundary condition given by the structural motion. We focus on an investigation into the canonical problem of a vortex-dipole impinging on a flexible membrane. It is discovered that the structural properties of the membrane direct the interaction in terms of the flow evolution and the energy budget. Pressure gradients associated with resonant membrane response are shown to sustain the oscillatory motion of the vortex pair. Understanding how the key mechanisms in vortex-body interactions are guided by the structural properties of the body is a prerequisite to exploiting these mechanisms.

  9. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Science.gov (United States)

    Dong, Xianlei; Bollen, Johan

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.

  10. Visual analysis of inter-process communication for large-scale parallel computing.

    Science.gov (United States)

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also the communication between the different processes, which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.

  11. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Directory of Open Access Journals (Sweden)

    Xianlei Dong

    Full Text Available Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.

  12. A linear programming manual

    Science.gov (United States)

    Tuey, R. C.

    1972-01-01

    Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
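
    As a hedged illustration of the material such a manual covers (this is not code from the manual itself; the problem data are arbitrary), the sketch below solves a small linear program and its dual with SciPy, showing the primal and dual optima coinciding:

```python
# Minimal sketch: a primal/dual pair of linear programs solved with SciPy.
import numpy as np
from scipy.optimize import linprog

# Primal: maximize 3x1 + 5x2  s.t.  x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18, x >= 0
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")

# Dual: minimize b^T y  s.t.  A^T y >= c, y >= 0  (rewritten as -A^T y <= -c)
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3, method="highs")

print("primal optimum:", -primal.fun)   # 36.0
print("dual optimum:  ", dual.fun)      # 36.0, illustrating strong duality
```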

  13. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: IV. Generalized matrix analysis of linear compartment systems.

    Science.gov (United States)

    Langenbucher, Frieder

    2005-01-01

    A linear system comprising n compartments is completely defined by the rate constants between any of the compartments and the initial condition, i.e., in which compartment(s) the drug is present at the beginning. The generalized solution is the set of time profiles of drug amount in each compartment, described by polyexponential equations. Based on standard matrix operations, an Excel worksheet computes the rate constants and the coefficients, and finally the full time profiles for a specified range of time values.
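
    A minimal sketch of the same computation in NumPy rather than Excel (the rate constants and initial condition are illustrative assumptions, not values from the paper): eigendecomposition of the compartment rate matrix yields the exponents and coefficients of the polyexponential time profiles directly.

```python
# Minimal sketch: time profiles of a linear compartment system dA/dt = K*A
# via eigendecomposition of the rate matrix K.
import numpy as np

k12, k21, k10 = 0.5, 0.2, 0.1          # hypothetical first-order rate constants (1/h)
K = np.array([[-(k12 + k10), k21],
              [k12,         -k21]])    # compartment rate matrix
A0 = np.array([100.0, 0.0])            # drug amount initially in compartment 1

lam, V = np.linalg.eig(K)              # eigenvalues are the polyexponential exponents
coeff = V @ np.diag(np.linalg.solve(V, A0))   # eigenvector columns scaled by V^{-1} A0

t = np.linspace(0.0, 24.0, 7)
profiles = coeff @ np.exp(np.outer(lam, t))   # A_i(t) = sum_j c_ij * exp(lam_j * t)
print(np.round(profiles, 3))           # rows: compartments, columns: time points
```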

  14. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  15. The Abridgment and Relaxation Time for a Linear Multi-Scale Model Based on Multiple Site Phosphorylation.

    Directory of Open Access Journals (Sweden)

    Shuo Wang

    Full Text Available Random effects in cellular systems are an important topic in systems biology and are often simulated with Gillespie's stochastic simulation algorithm (SSA). Abridgment refers to model reduction that approximates a group of reactions by a smaller group with fewer species and reactions. This paper presents a theoretical analysis, based on comparison of the first exit time, for the abridgment of a linear chain reaction model motivated by systems with multiple phosphorylation sites. The analysis shows that if the relaxation time of the fast subsystem is much smaller than the mean firing time of the slow reactions, the abridgment can be applied with little error. This analysis is further verified with numerical experiments for models of bistable switches and oscillations in which the linear chain system plays a critical role.
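
    A minimal sketch, under assumed rate constants, of the kind of SSA experiment this analysis concerns (it is not the paper's phosphorylation model): Gillespie simulation of a short linear chain with a fast intermediate step, used to estimate a mean first exit time.

```python
# Minimal sketch: Gillespie SSA on a linear chain S0 -> S1 -> S2 with a fast
# intermediate step; the first exit time is the first appearance of S2.
import numpy as np

rng = np.random.default_rng(1)
k_slow, k_fast = 0.1, 10.0          # assumed slow entry and fast conversion rates

def first_exit_time():
    x = np.array([20, 0, 0])        # copy numbers of S0, S1, S2
    t = 0.0
    while x[2] == 0:                # stop when the first S2 molecule appears
        a = np.array([k_slow * x[0], k_fast * x[1]])  # reaction propensities
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)                # time to the next event
        if rng.random() < a[0] / a0:                  # choose which reaction fires
            x += [-1, 1, 0]
        else:
            x += [0, -1, 1]
    return t

times = [first_exit_time() for _ in range(2000)]
print(f"estimated mean first exit time: {np.mean(times):.3f}")
```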

  16. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The methods are applicable to low-level radioactive waste disposal system performance assessment.
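
    The following sketch illustrates the derivative-based idea behind a DUA-style propagation on a toy model (it is not GRESS/ADGEN code; the response function, nominal values, and standard deviations are assumptions): first-order sensitivities carry input variances to an output variance, checked against a small Monte Carlo sample.

```python
# Minimal sketch: first-order (derivative-based) uncertainty propagation
# compared against Monte Carlo sampling for an arbitrary smooth response.
import numpy as np

def model(p):
    k1, k2, q = p
    return q * np.exp(-k1) / (k1 + k2)      # illustrative response function

p0 = np.array([0.3, 0.1, 5.0])               # nominal parameter values
sigma = np.array([0.03, 0.02, 0.5])          # parameter standard deviations

# Finite-difference sensitivities dy/dp_i at the nominal point
eps = 1e-6
grad = np.array([(model(p0 + eps * np.eye(3)[i]) - model(p0)) / eps for i in range(3)])

var_deterministic = np.sum((grad * sigma) ** 2)   # first-order variance estimate

rng = np.random.default_rng(0)
samples = rng.normal(p0, sigma, size=(50_000, 3))
var_monte_carlo = np.var([model(p) for p in samples])

print(f"first-order variance: {var_deterministic:.4f}")
print(f"Monte Carlo variance: {var_monte_carlo:.4f}")
```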

  17. Cross-scale Efficient Tensor Contractions for Coupled Cluster Computations Through Multiple Programming Model Backends

    Energy Technology Data Exchange (ETDEWEB)

    Ibrahim, Khaled Z. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Epifanovsky, Evgeny [Q-Chem, Inc., Pleasanton, CA (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Krylov, Anna I. [Univ. of Southern California, Los Angeles, CA (United States). Dept. of Chemistry

    2016-07-26

    Coupled-cluster methods provide highly accurate models of molecular structure by explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed-memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the best optimized shared-memory implementation. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 & XC40, BlueGene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
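
    A dense NumPy sketch of one representative contraction of this kind (no symmetry blocking, spin adaptation, or distribution as in Libtensor; the dimensions are arbitrary): the particle-particle ladder term R[a,b,i,j] = Σ_{c,d} V[a,b,c,d]·T2[c,d,i,j], written both as an einsum and as the equivalent reshaped matrix-matrix multiply, i.e., the DGEMM kernel referred to above.

```python
# Minimal sketch: a coupled-cluster-style tensor contraction as einsum and as GEMM.
import numpy as np

n_occ, n_virt = 8, 20
rng = np.random.default_rng(0)
V = rng.standard_normal((n_virt, n_virt, n_virt, n_virt))   # placeholder two-electron integrals
T2 = rng.standard_normal((n_virt, n_virt, n_occ, n_occ))    # placeholder T2 amplitudes

# Direct einsum form of the contraction
R_einsum = np.einsum("abcd,cdij->abij", V, T2)

# Equivalent GEMM: flatten (a,b) x (c,d) and (c,d) x (i,j), multiply, reshape back
R_gemm = (V.reshape(n_virt**2, n_virt**2) @ T2.reshape(n_virt**2, n_occ**2)
          ).reshape(n_virt, n_virt, n_occ, n_occ)

print(np.allclose(R_einsum, R_gemm))   # True
```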

  18. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2013-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these problems are data-intensive, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will
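
    A generic sketch of the map/reduce pattern described above (this is not the SciReduce API; the shard loader and grid sizes are placeholders): map tasks return partial sums per yearly shard, and a reduce step combines them into a multi-year mean field.

```python
# Minimal sketch: map/reduce over time-sharded arrays to build a multi-year mean.
from functools import reduce
from multiprocessing import Pool

import numpy as np

def load_shard(year):
    rng = np.random.default_rng(year)
    return rng.random((365, 90, 180))           # placeholder daily global grid for one year

def map_shard(year):
    data = load_shard(year)
    return data.sum(axis=0), data.shape[0]      # partial sum and sample count

def reduce_pair(a, b):
    return a[0] + b[0], a[1] + b[1]             # combine partial sums and counts

if __name__ == "__main__":
    years = range(2003, 2013)
    with Pool() as pool:
        partials = pool.map(map_shard, years)   # one map task per yearly shard
    total, count = reduce(reduce_pair, partials)
    climatology = total / count                 # ten-year mean field
    print(climatology.shape, climatology.mean())
```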

  19. Scaling strength distributions in quasi-brittle materials from micro-to macro-scales: A computational approach to modeling Nature-inspired structural ceramics

    International Nuclear Information System (INIS)

    Genet, Martin; Couegnat, Guillaume; Tomsia, Antoni P.; Ritchie, Robert O.

    2014-01-01

    This paper presents an approach to predict the strength distribution of quasi-brittle materials across multiple length-scales, with emphasis on Nature-inspired ceramic structures. It permits the computation of the failure probability of any structure under any mechanical load, solely based on considerations of the microstructure and its failure properties, by naturally incorporating the statistical and size-dependent aspects of failure. We overcome the intrinsic limitations of single periodic unit-based approaches by computing the successive failures of the material components and associated stress redistributions on arbitrary numbers of periodic units. For large samples, the microscopic cells are replaced by a homogenized continuum with equivalent stochastic and damaged constitutive behavior. After establishing the predictive capabilities of the method, and illustrating its potential relevance to several engineering problems, we employ it in the study of the shape and scaling of strength distributions across differing length-scales for a particular quasi-brittle system. We find that the strength distributions display a Weibull form for samples of size approaching the periodic unit; however, these distributions become closer to normal with further increase in sample size before finally reverting to a Weibull form for macroscopic-sized samples. In terms of scaling, we find that weakest-link scaling applies only to microscopic-scale, and not macroscopic-scale, samples. These findings are discussed in relation to failure patterns computed at different size-scales. (authors)
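
    For context on the size effects discussed here, a minimal sketch of classical weakest-link (Weibull) scaling, independent of the paper's microstructure model (the modulus and unit-volume scale are assumed values): the strength of a sample of N independent links is the minimum of N Weibull draws, which is again Weibull with scale σ0·N^(−1/m).

```python
# Minimal sketch: weakest-link scaling of Weibull strengths with sample size N.
import math
import numpy as np

m, sigma0 = 8.0, 100.0                     # assumed Weibull modulus and unit-volume scale
rng = np.random.default_rng(0)

for N in (1, 10, 100):
    draws = sigma0 * rng.weibull(m, size=(50_000, N))
    strengths = draws.min(axis=1)          # the weakest link controls failure
    predicted_mean = sigma0 * N ** (-1.0 / m) * math.gamma(1.0 + 1.0 / m)
    print(f"N={N:4d}  simulated mean {strengths.mean():7.2f}  "
          f"weakest-link prediction {predicted_mean:7.2f}")
```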

  20. Computational psychotherapy research: scaling up the evaluation of patient-provider interactions.

    Science.gov (United States)

    Imel, Zac E; Steyvers, Mark; Atkins, David C

    2015-03-01

    In psychotherapy, the patient-provider interaction contains the treatment's active ingredients. However, the technology for analyzing the content of this interaction has not fundamentally changed in decades, limiting both the scale and specificity of psychotherapy research. New methods are required to "scale up" to larger evaluation tasks and "drill down" into the raw linguistic data of patient-therapist interactions. In the current article, we demonstrate the utility of statistical text analysis models called topic models for discovering the underlying linguistic structure in psychotherapy. Topic models identify semantic themes (or topics) in a collection of documents (here, transcripts). We used topic models to summarize and visualize 1,553 psychotherapy and drug therapy (i.e., medication management) transcripts. Results showed that topic models identified clinically relevant content, including affective, relational, and intervention-related topics. In addition, topic models learned to identify specific types of therapist statements associated with treatment-related codes (e.g., different treatment approaches, patient-therapist discussions about the therapeutic relationship). Visualizations of semantic similarity across sessions indicate that topic models identify content that discriminates between broad classes of therapy (e.g., cognitive-behavioral therapy vs. psychodynamic therapy). Finally, predictive modeling demonstrated that topic model-derived features can classify therapy type with a high degree of accuracy. Computational psychotherapy research has the potential to scale up the study of psychotherapy to thousands of sessions at a time. We conclude by discussing the implications of computational methods such as topic models for the future of psychotherapy research and practice. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
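
    A minimal sketch of the topic-modeling workflow described above, run on a few invented toy "transcripts" rather than the 1,553-session corpus (scikit-learn's LDA, which differs in detail from the models used in the study):

```python
# Minimal sketch: fitting a small topic model to toy transcript-like documents.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

transcripts = [
    "i feel anxious about work and my sleep has been poor",
    "we discussed the homework and thought records from last week",
    "the medication dose was adjusted and side effects reviewed",
    "my relationship with my mother keeps coming up in dreams",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(transcripts)          # document-term count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)                  # per-document topic mixtures

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = terms[weights.argsort()[::-1][:5]]            # highest-weight terms per topic
    print(f"topic {k}: {', '.join(top)}")
```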