WorldWideScience

Sample records for large-scale parallel nonlinear

  1. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques that use models other than Newton's: a lower-order model, Broyden's method, and a higher-order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian, or that have an inaccurate Jacobian, to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, when the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed. We identify conditions under which Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and compute a step from a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any …
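
    For readers unfamiliar with the secant updates discussed above, the sketch below shows the classical full-matrix, serial Broyden update on a toy system; the report's limited-memory variant stores the update vectors instead of the dense matrix B. Function names and the test problem here are illustrative, not taken from the report.

```python
import numpy as np

def broyden(F, x0, J0, tol=1e-10, max_iter=50):
    """Broyden's 'good' method: a rank-one secant update of an approximate
    Jacobian B, so F is never differentiated after the initial guess."""
    x = np.asarray(x0, dtype=float)
    B = np.asarray(J0, dtype=float)
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)            # quasi-Newton step
        x_new = x + s
        Fx_new = F(x_new)
        y = Fx_new - Fx
        B += np.outer(y - B @ s, s) / (s @ s)  # secant update of B
        x, Fx = x_new, Fx_new
    return x

# Toy system: x0^2 + x1^2 = 2 and x0 = x1, with root (1, 1)
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J0 = np.array([[2.4, 1.6], [1.0, -1.0]])       # Jacobian at the start point
print(broyden(F, [1.2, 0.8], J0))              # -> [1. 1.]
```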

  2. Large scale petroleum reservoir simulation and parallel preconditioning algorithms research

    Institute of Scientific and Technical Information of China (English)

    SUN Jiachang; CAO Jianwen

    2004-01-01

    Solving large-scale linear systems efficiently plays an important role in a petroleum reservoir simulator, and the key is choosing an effective parallel preconditioner. Choosing a good preconditioner goes beyond pure algebra: an integrated preconditioner should account for the physical background, the characteristics of the PDE mathematical model, the nonlinear solution method, the linear solution algorithm, domain decomposition, and parallel computation. We first discuss several parallel preconditioning techniques, and then construct an integrated preconditioner that is based on large-scale distributed parallel processing and oriented to reservoir simulation. Its infrastructure draws on well-known preconditioning constructions such as coarse-grid correction, constraint residual correction, and subspace projection correction. A multi-step scheme integrates a total of eight types of preconditioning components into the final preconditioner. Industrial reservoir data at the million-grid-cell scale were tested on domestic high-performance computers. Numerical statistics and analyses show that this preconditioner achieves satisfactory parallel efficiency and speedup.

  3. Quantized pressure control in large-scale nonlinear hydraulic networks

    NARCIS (Netherlands)

    Persis, Claudio De; Kallesøe, Carsten Skovmose; Jensen, Tom Nørgaard

    2010-01-01

    It was shown previously that semi-global practical pressure regulation at designated points of a large-scale nonlinear hydraulic network is guaranteed by distributed proportional controllers. For a correct implementation of the control laws, each controller, which is located at these designated points, …

  4. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala

    2015-01-01

    Dimensionality reduction refers to a set of mathematical techniques used to reduce the complexity of high-dimensional data while preserving selected properties. Improvements in simulation strategies and experimental data collection methods are producing a deluge of heterogeneous, high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify the key components underlying spectral dimensionality reduction techniques and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate the applicability of our framework, we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
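
    As a concrete, purely serial illustration of the spectral pipeline the abstract refers to (neighborhood graph, graph Laplacian, eigenvector embedding), here is a minimal Laplacian-eigenmaps-style sketch; the paper's contribution is parallelizing exactly these kernels, which this toy version does not attempt.

```python
import numpy as np

def laplacian_eigenmaps(X, n_components=2, k=10):
    """Spectral DR sketch: k-NN graph, graph Laplacian, smallest eigenvectors."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances
    sq = np.sum(X**2, axis=1)
    D = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    # Symmetric k-nearest-neighbor adjacency with binary weights
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]    # skip self at position 0
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(axis=1)) - W          # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    # Skip the trivial constant eigenvector (eigenvalue ~ 0)
    return vecs[:, 1:n_components + 1]

# Example: embed 200 points from a noisy 3-D curve into 2-D
t = np.linspace(0, 4 * np.pi, 200)
X = np.c_[np.cos(t), np.sin(t), t] + 0.01 * np.random.randn(200, 3)
Y = laplacian_eigenmaps(X, n_components=2)
print(Y.shape)  # (200, 2)
```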

  5. Large-scale parallel genome assembler over cloud computing environment.

    Science.gov (United States)

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high-throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications have started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay-as-you-go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model, and understanding the hardware environment that such applications require for good performance, both call for further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating computation and data. GiGA achieves significantly higher scalability, with competitive assembly quality, compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure instead of a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of a traditional HPC cluster.
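
    The de Bruijn construction at the heart of assemblers like GiGA fits in a few lines; the sketch below builds the graph from reads and walks one unambiguous path into a contig. It is a serial toy with none of GiGA's Hadoop/Giraph distribution, and the helper names and toy genome are invented for illustration.

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Nodes are (k-1)-mers; a directed edge for every k-mer in the reads."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

def walk_unambiguous(graph, start):
    """Extend a contig while every node has exactly one successor."""
    contig, node, seen = start, start, {start}
    while len(graph.get(node, ())) == 1:
        node = next(iter(graph[node]))
        if node in seen:                 # guard against cycles
            break
        seen.add(node)
        contig += node[-1]
    return contig

# Overlapping reads from the toy genome "ATGGCGTGCA"
reads = ["ATGGCGT", "GGCGTGC", "CGTGCA"]
graph = de_bruijn(reads, k=4)
print(walk_unambiguous(graph, "ATG"))    # -> ATGGCGTGCA
```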

  6. Imprint of non-linear effects on HI intensity mapping on large scales

    CERN Document Server

    Umeh, Obinna

    2016-01-01

    Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structures of the Universe up to the largest possible scales. This is achieved by using low-angular-resolution radio telescopes to detect the emission line of cosmic neutral hydrogen in the post-reionization Universe. We consider how nonlinear effects associated with the HI bias and redshift-space distortions contribute to the clustering of cosmic neutral hydrogen on large scales. We use general relativistic perturbation theory techniques to derive, for the first time, the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result to show how mode coupling at nonlinear order, due to nonlinear bias parameters and redshift-space distortions, leads to about 10% modulation of the HI power spectrum on large scales.

  7. Imprint of non-linear effects on HI intensity mapping on large scales

    Science.gov (United States)

    Umeh, Obinna

    2017-06-01

    Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structures of the Universe up to the largest possible scales. This is achieved by using low-angular-resolution radio telescopes to detect the emission line of cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive, for the first time, the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of the HI brightness temperature in both real and redshift space. We show how mode coupling at nonlinear order, due to nonlinear bias parameters and redshift-space distortion terms, modulates the power spectrum on large scales. The large-scale modulation may be understood as due to the effective bias parameter and effective shot noise.

  8. Quantum noise in large-scale coherent nonlinear photonic circuits

    CERN Document Server

    Santori, Charles; Beausoleil, Raymond G; Tezak, Nikolas; Hamerly, Ryan; Mabuchi, Hideo

    2014-01-01

    A semiclassical simulation approach is presented for studying quantum noise in large-scale photonic circuits incorporating an ideal Kerr nonlinearity. A netlist-based circuit solver is used to generate matrices defining a set of stochastic differential equations, in which the resonator field variables represent random samplings of the Wigner quasi-probability distributions. Although the semiclassical approach involves making a large-photon-number approximation, tests on one- and two-resonator circuits indicate satisfactory agreement between the semiclassical and full-quantum simulation results in the parameter regime of interest. The semiclassical model is used to simulate random errors in a large-scale circuit that contains 88 resonators and hundreds of components in total, and functions as a 4-bit ripple counter. The error rate as a function of on-state photon number is examined, and it is observed that the quantum fluctuation amplitudes do not increase as signals propagate through the circuit, an important...

  9. Parallel cluster labeling for large-scale Monte Carlo simulations

    CERN Document Server

    Flanigan, M; Flanigan, M; Tamayo, P

    1995-01-01

    We present an optimized version of a cluster labeling algorithm previously introduced by the authors. This algorithm is well suited for large-scale Monte Carlo simulations of spin models using cluster dynamics on parallel computers with large numbers of processors. The algorithm divides physical space into rectangular cells which are assigned to processors and combines a serial local labeling procedure with a relaxation process across nearest-neighbor processors. By controlling overhead and reducing inter-processor communication this method attains good computational speed-up and efficiency. Large systems of up to 65536 × 65536 spins have been simulated at updating speeds of 11 ns/site (90.7 million spin updates/sec) using state-of-the-art supercomputers. In the second part of the article we use the cluster algorithm to study the relaxation of magnetization and energy on large Ising models using Swendsen-Wang dynamics. We found evidence that exponential and power law factors are present in the relaxation …
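
    The serial local-labeling pass the authors combine with inter-processor relaxation is essentially union-find cluster labeling. A minimal single-node sketch follows (illustrative names, no domain decomposition or relaxation across processors).

```python
import numpy as np

def label_clusters(occupied):
    """Hoshen-Kopelman-style labeling of occupied sites on a 2-D lattice
    using union-find; the serial analogue of the per-cell local pass."""
    n, m = occupied.shape
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for i in range(n):
        for j in range(m):
            if not occupied[i, j]:
                continue
            parent[(i, j)] = (i, j)
            if i > 0 and occupied[i - 1, j]:   # merge with upper neighbor
                union((i - 1, j), (i, j))
            if j > 0 and occupied[i, j - 1]:   # merge with left neighbor
                union((i, j - 1), (i, j))

    labels = np.full((n, m), -1, dtype=int)    # -1 marks empty sites
    roots = {}
    for site in parent:
        r = find(site)
        labels[site] = roots.setdefault(r, len(roots))
    return labels

grid = np.random.rand(8, 8) < 0.5   # site percolation at p = 0.5
print(label_clusters(grid))
```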

  10. Bonus algorithm for large scale stochastic nonlinear programming problems

    CERN Document Server

    Diwekar, Urmila

    2015-01-01

    This book presents the details of the BONUS algorithm and its real-world applications in areas such as sensor placement in large-scale drinking water networks, sensor placement in advanced power systems, water management in power systems, and capacity expansion of energy systems. A generalized method for stochastic nonlinear programming, based on a sampling approach for uncertainty analysis and statistical reweighting to obtain probability information, is demonstrated in this book. Stochastic optimization problems are difficult to solve because they couple an optimization loop with an uncertainty loop. There are two fundamental approaches to such problems: decomposition techniques, and methods that identify problem-specific structures and transform the problem into a deterministic nonlinear programming problem. These techniques have significant limitations on either the objective function type or the underlying distributions of the uncertain variables. Moreover, these ...

  11. Nonlinear evolution of large-scale structure in the universe

    Energy Technology Data Exchange (ETDEWEB)

    Frenk, C.S.; White, S.D.M.; Davis, M.

    1983-08-15

    Using N-body simulations we study the nonlinear development of primordial density perturbations in an Einstein-de Sitter universe. We compare the evolution of an initial distribution without small-scale density fluctuations to evolution from a random Poisson distribution. These initial conditions mimic the assumptions of the adiabatic and isothermal theories of galaxy formation. The large-scale structures which form in the two cases are markedly dissimilar. In particular, the correlation function ξ(r) and the visual appearance of our adiabatic (or "pancake") models match better the observed distribution of galaxies. This distribution is characterized by large-scale filamentary structure. Because the pancake models do not evolve in a self-similar fashion, the slope of ξ(r) steepens with time; as a result there is a unique epoch at which these models fit the galaxy observations. We find the ratio of cutoff length to correlation length at this time to be λ_min/r_0 = 5.1; its expected value in a neutrino-dominated universe is 4(Ωh)^{-1} (H_0 = 100h km s^{-1} Mpc^{-1}). At early epochs these models predict a negligible amplitude for ξ(r) and could explain the lack of measurable clustering in the Lyα absorption lines of high-redshift quasars. However, large-scale structure in our models collapses after z = 2. If this collapse precedes galaxy formation as in the usual pancake theory, galaxies formed uncomfortably recently. The extent of this problem may depend on the cosmological model used; the present series of experiments should be extended in the future to include models with Ω < 1.

  12. Operational Optimization of Large-Scale Parallel-Unit SWRO Desalination Plant Using Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    Jian Wang

    2014-01-01

    A large-scale parallel-unit seawater reverse osmosis desalination plant contains many reverse osmosis (RO) units. If the operating conditions change, these RO units no longer work at the optimal design points computed before the plant was built. The operational optimization problem (OOP) of the plant is to find an operating schedule that minimizes the total running cost when such changes occur. In this paper, the OOP is modelled as a mixed-integer nonlinear programming problem. A two-stage differential evolution algorithm is proposed to solve this OOP. Experimental results show that the proposed method is satisfactory in solution quality.
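
    Setting the paper's two-stage, mixed-integer machinery aside, the underlying differential evolution engine is simple to sketch; below is a classic DE/rand/1/bin loop on a continuous test function, with all names and settings illustrative.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, gens=200):
    """Classic DE/rand/1/bin: mutate with a scaled difference of population
    members, crossover, and keep the trial if it is no worse."""
    rng = np.random.default_rng(0)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with one guaranteed mutant coordinate
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                  # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], fit.min()

# Example: minimize the Rosenbrock function on [-2, 2]^2
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
print(differential_evolution(rosen, [(-2, 2), (-2, 2)]))   # near (1, 1)
```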

  13. Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism,

    Science.gov (United States)

    1985-10-07

    (Scanned DTIC report; the abstract text is not machine-readable. Recoverable header information: Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 545 Technology Square, Cambridge; G. Agha et al.)

  14. Nonlinear density fluctuation field theory for large scale structure

    Institute of Scientific and Technical Information of China (English)

    Yang Zhang; Hai-Xing Miao

    2009-01-01

    We develop an effective field theory of density fluctuations for a Newtonian self-gravitating N-body system in quasi-equilibrium and apply it to a homogeneous universe with small density fluctuations. Keeping the density fluctuations up to second order, we obtain the nonlinear field equation of the 2-pt correlation ξ(r), which contains the 3-pt correlation and formal ultraviolet divergences. By the Groth-Peebles hierarchical ansatz and mass renormalization, the equation becomes closed with two new terms beyond the Gaussian approximation, and their coefficients are taken as parameters. The analytic solution is obtained in terms of the hypergeometric functions, which is checked numerically. With one single set of two fixed parameters, the correlation ξ(r) and the corresponding power spectrum P(k) simultaneously match the results from all the major surveys, such as APM, SDSS, 2dFGRS, and REFLEX. The model gives a unifying understanding of several seemingly unrelated features of large-scale structure from a field-theoretical perspective. The theory is worth extending to study the evolution effects in an expanding universe.

  15. Parallel Tensor Compression for Large-Scale Scientific Data.

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, Tamara G. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ballard, Grey [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Austin, Woody Nathan [Univ. of Texas, Austin, TX (United States)

    2015-10-01

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10,000 on real-world data sets with negligible loss in accuracy. To operate on such massive data, we present the first-ever distributed-memory parallel implementation of the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
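
    A Tucker decomposition can be computed serially by the truncated higher-order SVD, which is a reasonable mental model for the parallel algorithm's key kernels (mode-wise SVDs and tensor-times-matrix products). The sketch below is a generic serial illustration, not the paper's distributed implementation.

```python
import numpy as np

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD (one standard way to compute a Tucker
    decomposition): one orthonormal factor per mode plus a small core."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)   # project each mode
    return core, factors

# Example: a 40x40x40 tensor with exact multilinear rank (5, 5, 5)
rng = np.random.default_rng(1)
T = rng.standard_normal((5, 5, 5))
for mode in range(3):
    T = mode_product(T, rng.standard_normal((40, 5)), mode)

core, factors = hosvd(T, (5, 5, 5))            # 64000 values -> 725
recon = core
for mode, U in enumerate(factors):
    recon = mode_product(recon, U, mode)
print(np.linalg.norm(T - recon) / np.linalg.norm(T))   # ~0 (exact rank)
```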

  16. Parallel Index and Query for Large Scale Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in designing a system for processing general scientific datasets. The system needs to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to a massive 50 TB dataset generated by a large-scale accelerator modeling code and demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
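
    The core index-and-query idea behind FastBit-style systems can be illustrated with an equality-encoded bitmap index: bin a variable, keep one bit vector per bin, and answer a range query with bitwise operations. This is a simplification (FastBit's actual encodings and compression differ), and all names below are illustrative.

```python
import numpy as np

def build_bitmap_index(values, bin_edges):
    """One boolean bitmap per value bin (equality encoding)."""
    bins = np.digitize(values, bin_edges)
    return {b: bins == b for b in np.unique(bins)}

def range_query(index, lo_bin, hi_bin):
    """Rows whose bin lies in [lo_bin, hi_bin]: OR the per-bin bitmaps."""
    n = next(iter(index.values())).size
    hits = np.zeros(n, dtype=bool)
    for b, bitmap in index.items():
        if lo_bin <= b <= hi_bin:
            hits |= bitmap
    return np.flatnonzero(hits)

# Example: index particle energies, then select an "interesting" range
energies = np.random.default_rng(2).uniform(0.0, 10.0, size=1_000_000)
index = build_bitmap_index(energies, bin_edges=np.linspace(0, 10, 21))
rows = range_query(index, lo_bin=18, hi_bin=20)
print(len(rows), energies[rows].min())   # all selected energies >= 8.5
```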

  17. Final Report: Migration Mechanisms for Large-scale Parallel Applications

    Energy Technology Data Exchange (ETDEWEB)

    Jason Nieh

    2009-10-30

    Process migration is the ability to transfer a process from one machine to another. It is a useful facility in distributed computing environments, especially as computing devices become more pervasive and Internet access becomes more ubiquitous. The potential benefits of process migration, among others, are fault resilience by migrating processes off of faulty hosts, data access locality by migrating processes closer to the data, better system response time by migrating processes closer to users, dynamic load balancing by migrating processes to less loaded hosts, and improved service availability and administration by migrating processes before host maintenance so that applications can continue to run with minimal downtime. Although process migration provides substantial potential benefits and many approaches have been considered, achieving transparent process migration functionality has been difficult in practice. To address this problem, our work has designed, implemented, and evaluated new and powerful transparent process checkpoint-restart and migration mechanisms for desktop, server, and parallel applications that operate across heterogeneous cluster and mobile computing environments. A key aspect of this work has been to introduce lightweight operating system virtualization to provide processes with private, virtual namespaces that decouple and isolate processes from dependencies on the host operating system instance. This decoupling enables processes to be transparently checkpointed and migrated without modifying, recompiling, or relinking applications or the operating system. Building on this lightweight operating system virtualization approach, we have developed novel technologies that enable (1) coordinated, consistent checkpoint-restart and migration of multiple processes, (2) fast checkpointing of process and file system state to enable restart of multiple parallel execution environments and time travel, (3) process migration across heterogeneous …

  18. An inertia-free filter line-search algorithm for large-scale nonlinear programming

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Nai-Yuan; Zavala, Victor M.

    2016-02-15

    We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent and we implement the approach within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.
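
    The curvature test that replaces inertia detection can be sketched in a few lines: solve the (possibly regularized) linear system, test curvature along the resulting step, and increase the regularization until the test passes. The constants and names below are illustrative, not the paper's.

```python
import numpy as np

def convexified_step(H, g, kappa=1e-8, delta0=1e-4, max_tries=20):
    """Inertia-free idea in miniature: solve H d = -g, test curvature
    along d, and add a multiple of the identity until the test passes."""
    delta = 0.0
    for _ in range(max_tries):
        Hd = H + delta * np.eye(H.shape[0])
        try:
            d = np.linalg.solve(Hd, -g)
        except np.linalg.LinAlgError:       # singular regularized system
            d = None
        # Curvature test: d^T (H + delta I) d >= kappa * d^T d
        if d is not None and d @ (Hd @ d) >= kappa * (d @ d):
            return d, delta
        delta = delta0 if delta == 0.0 else 10.0 * delta
    raise RuntimeError("convexification failed")

# Example: an indefinite Hessian forces several rounds of convexification
H = np.array([[2.0, 0.0],
              [0.0, -1.0]])
g = np.array([1.0, 1.0])
d, delta = convexified_step(H, g)
print(d, delta)   # delta > 0 because H has negative curvature
```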

  19. Development of parallel mathematical subroutine library and large scale numerical simulation

    Energy Technology Data Exchange (ETDEWEB)

    Shimizu, Futoshi [Japan Atomic Energy Research Inst., Tokyo (Japan)

    1998-03-01

    In recent years, parallel computers, namely parallel supercomputers and workstation clusters, have come into use for large-scale numerical simulations in the field of computational science. Because parallel programming is difficult compared to programming serial computers, efficient parallel programs can be developed most easily by incorporating parallelized numerical subroutines. In the Japan Atomic Energy Research Institute (JAERI), a portable mathematical subroutine library using MPI (Message Passing Interface) or PVM (Parallel Virtual Machine) is being developed for distributed-memory parallel computers. In this report, we present an overview of the parallel library and its application to the parallelization of tight-binding molecular dynamics. (author)

  20. Parallel Dynamic Analysis of a Large-Scale Water Conveyance Tunnel under Seismic Excitation Using ALE Finite-Element Method

    Directory of Open Access Journals (Sweden)

    Xiaoqing Wang

    2016-01-01

    Parallel analyses of the dynamic responses of a large-scale water conveyance tunnel under seismic excitation are presented in this paper. A full three-dimensional numerical model considering water-tunnel-soil coupling is established and adopted to investigate the tunnel's dynamic responses. The movement and sloshing of the internal water are simulated using the multi-material Arbitrary Lagrangian Eulerian (ALE) method. Nonlinear fluid-structure interaction (FSI) between the tunnel and the inner water is treated using the penalty method. Nonlinear soil-structure interaction (SSI) between the soil and the tunnel is handled using a surface-to-surface contact algorithm. To overcome computing power limitations and deal with such a large-scale calculation, a parallel algorithm based on modified recursive coordinate bisection (MRCB), which balances the SSI and FSI loads, is proposed and used. The whole simulation is carried out on Dawning 5000A using the proposed MRCB-based parallel algorithm optimized to run on supercomputers. The simulation model and the proposed approaches are validated by comparison with the added mass method. Dynamic responses of the tunnel are analyzed and the parallelism is discussed. In addition, factors affecting the dynamic responses are investigated. Good speedup and parallel efficiency show the scalability of the parallel method, and the analysis results can be used to aid the design of water conveyance tunnels.

  1. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2014-03-04

    Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to the optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy, and hence more parallelism, into the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix-matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
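
    For orientation, plain SUMMA proceeds in panel-broadcast steps that every process turns into a local outer-product update; the serial emulation below shows that structure. The paper's hierarchical variant groups processes and organizes these broadcasts in two levels, which this sketch omits.

```python
import numpy as np

def summa(A, B, p):
    """Serial emulation of SUMMA on a p x p process grid: step k broadcasts
    a column panel of A and a row panel of B, and each 'process' (i, j)
    accumulates a local outer-product contribution to its block of C."""
    n = A.shape[0]
    blk = n // p
    C = np.zeros((n, n))
    for k in range(p):                          # one broadcast step per panel
        Apanel = A[:, k * blk:(k + 1) * blk]    # broadcast along grid rows
        Bpanel = B[k * blk:(k + 1) * blk, :]    # broadcast along grid columns
        for i in range(p):
            for j in range(p):
                C[i*blk:(i+1)*blk, j*blk:(j+1)*blk] += (
                    Apanel[i*blk:(i+1)*blk] @ Bpanel[:, j*blk:(j+1)*blk])
    return C

n, p = 64, 4
A, B = np.random.randn(n, n), np.random.randn(n, n)
print(np.allclose(summa(A, B, p), A @ B))       # True
```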

  2. Advances in Parallelization for Large Scale Oct-Tree Mesh Generation

    Science.gov (United States)

    O'Connell, Matthew; Karman, Steve L.

    2015-01-01

    Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large-scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large-scale off-body hierarchical meshes is presented here. This work enables large-scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown, including a one-billion-cell mesh generated around a Predator Unmanned Aerial Vehicle geometry on 64 processors in under 45 minutes.

  3. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Xiangyun Xiao

    The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle such large-scale problems, they encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. We propose a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining a splitting technique with ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split a whole large-scale GRN into many small-scale modular subnetworks. Through ODE-based optimization of all subnetworks in parallel and their asynchronous communication, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from the Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), the experimentally determined GRN of Escherichia coli, and one published dataset that contains more than 10,000 genes, comparing the proposed approach with several popular algorithms in the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.

  4. Parallel Computational Fluid Dynamics 2007 : Implementations and Experiences on Large Scale and Grid Computing

    CERN Document Server

    2009-01-01

    At the 19th Annual Conference on Parallel Computational Fluid Dynamics, held in Antalya, Turkey, in May 2007, the most recent developments and implementations of large-scale and grid computing were presented. This book, comprising the invited and selected papers of the conference, details those advances, which are of particular interest to CFD and CFD-related communities. It also offers results from applications to various scientific and engineering problems involving flows and flow-related topics. Intended for CFD researchers and graduate students, this book is a state-of-the-art presentation of the relevant methodology and implementation techniques of large-scale computing.

  5. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    Science.gov (United States)

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse- and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium- and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reductions in computation time with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium- and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.

  6. Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2012-01-01

    This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling the virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer, and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. Large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.

  7. Implementation of Shifted Periodic Boundary Conditions in the Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) Software

    Science.gov (United States)

    2015-08-01

    (Scanned US Army Research Laboratory report, August 2015, by N. Scott Weingarten and James P. Larentzos: Implementation of Shifted Periodic Boundary Conditions in the Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) Software. The abstract text is not machine-readable.)

  8. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    CERN Document Server

    Fonseca, Ricardo A; Fiúza, Frederico; Davidson, Asher; Tsung, Frank S; Mori, Warren B; Silva, Luís O

    2013-01-01

    A new generation of laser wakefield accelerators, supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high-quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modeling for further understanding of the underlying physics and identification of optimal regimes, but large-scale modeling of these scenarios is computationally heavy and requires efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared-memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large-scale modeling of LWFA, demonstrating speedups of over one order of magnitude …

  9. GLOBAL CONVERGENCE AND IMPLEMENTATION OF NGTN METHOD FOR SOLVING LARGE-SCALE SPARSE NONLINEAR PROGRAMMING PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Qin Ni

    2001-01-01

    An NGTN method was proposed for solving large-scale sparse nonlinear programming (NLP) problems. This is a hybrid method of a truncated Newton direction and a modified negative gradient direction, which is suited to sparse data structures and possesses a Q-quadratic convergence rate. The global convergence of the new method is proved, the convergence rate is further analysed, and the detailed implementation is discussed in this paper. Some numerical tests for solving truss optimization and large sparse problems are reported. The theoretical and numerical results show that the new method is efficient for solving large-scale sparse NLP problems.
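
    The hybrid idea (a truncated Newton direction computed by conjugate gradients, with a fallback to a negative-gradient direction when negative curvature is encountered) can be sketched as follows; this is a generic illustration, not the paper's NGTN implementation.

```python
import numpy as np

def hybrid_direction(H, g, max_cg=50, tol=1e-8):
    """Truncated-Newton direction via CG on H d = -g, with a fallback to
    the negative gradient when negative curvature truncates the solve."""
    d = np.zeros_like(g)
    r = -g.copy()            # residual of H d = -g at d = 0
    p = r.copy()
    for _ in range(max_cg):
        Hp = H @ p
        curv = p @ Hp
        if curv <= 0:        # negative curvature: abandon the Newton model
            return -g if d @ g >= 0 else d
        alpha = (r @ r) / curv
        d += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d if d @ g < 0 else -g   # guarantee a descent direction

H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, 2.0])
print(hybrid_direction(H, g))      # ~ -H^{-1} g since H is positive definite
```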

  10. A Reduced Basis Framework: Application to large scale non-linear multi-physics problems

    Directory of Open Access Journals (Sweden)

    Daversin C.

    2013-12-01

    In this paper we present applications of the reduced basis method (RBM) to large-scale non-linear multi-physics problems. We first describe the mathematical framework in place, in particular the Empirical Interpolation Method (EIM) used to recover an affine decomposition, and then we propose an implementation using the open-source library Feel++, which provides both the reduced basis and finite element layers. Large-scale numerical examples are shown and are connected to real industrial applications arising from the development of high-field resistive magnets at the Laboratoire National des Champs Magnétiques Intenses.

  11. The Role of Erupting Sigmoid in Triggering a Flare with Parallel and Large-Scale Quasi-Circular Ribbons

    CERN Document Server

    Joshi, Navin Chandra; Sun, Xudong; Wang, Haimin; Magara, Tetsuya; Moon, Y -J

    2015-01-01

    In this paper, we present observations and analysis of an interesting sigmoid formation, eruption, and the associated flare that occurred on 2014 April 18, using multi-wavelength data sets. We discuss the possible role of the sigmoid eruption in triggering the flare, which consists of two different sets of ribbons: parallel ribbons as well as a large-scale quasi-circular ribbon. Several pieces of observational evidence and nonlinear force-free field extrapolation results show the existence of a large-scale fan-spine type magnetic configuration with a sigmoid lying under a section of the fan dome. The event can be explained in the following two phases. During the pre-flare phase, we observed the formation and appearance of the sigmoid via tether-cutting reconnection between the two sets of sheared fields under the fan dome. The second, main flare phase features the eruption of the sigmoid, the subsequent flare with parallel ribbons, and a quasi-circular ribbon. We propose the following multi-stage successive reconnection scenario …

  12. Accelerating Large Scale Image Analyses on Parallel, CPU-GPU Equipped Systems.

    Science.gov (United States)

    Teodoro, George; Kurc, Tahsin M; Pan, Tony; Cooper, Lee A D; Kong, Jun; Widener, Patrick; Saltz, Joel H

    2012-05-01

    The past decade has witnessed a major paradigm shift in high-performance computing with the introduction of accelerators as general-purpose processors. These computing devices provide very high parallel computing power at low cost and power consumption, transforming current high-performance platforms into heterogeneous CPU-GPU equipped systems. Although the theoretical performance achieved by these hybrid systems is impressive, taking practical advantage of this computing power remains a very challenging problem. Most applications are still deployed to either the GPU or the CPU, leaving the other resource under- or un-utilized. In this paper, we propose, implement, and evaluate a performance-aware scheduling technique, along with optimizations, to make efficient collaborative use of CPUs and GPUs on a parallel system. In the context of feature computations in large-scale image analysis applications, our evaluations show that intelligently co-scheduling CPUs and GPUs can significantly improve performance over GPU-only or multi-core CPU-only approaches.

  13. Shared and Distributed Memory Parallel Security Analysis of Large-Scale Source Code and Binary Applications

    Energy Technology Data Exchange (ETDEWEB)

    Quinlan, D; Barany, G; Panas, T

    2007-08-30

    Many forms of security analysis on large scale applications can be substantially automated but the size and complexity can exceed the time and memory available on conventional desktop computers. Most commercial tools are understandably focused on such conventional desktop resources. This paper presents research work on the parallelization of security analysis of both source code and binaries within our Compass tool, which is implemented using the ROSE source-to-source open compiler infrastructure. We have focused on both shared and distributed memory parallelization of the evaluation of rules implemented as checkers for a wide range of secure programming rules, applicable to desktop machines, networks of workstations and dedicated clusters. While Compass as a tool focuses on source code analysis and reports violations of an extensible set of rules, the binary analysis work uses the exact same infrastructure but is less well developed into an equivalent final tool.

  14. Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers

    CERN Document Server

    Chen, Weiliang

    2016-01-01

    Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of simulated models and morphologies have exceeded the capacity of any serial implementation. This has led to the development of parallel solutions that benefit from the performance of modern large-scale supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its usage in real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced-loading simulations with reasonable molecular …
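
    Operator splitting in this setting alternates a diffusion operator with a reaction operator on each mesh element per time step. The 1-D sketch below uses a deterministic mass-action reaction update where a stochastic SSA step would sit in a STEPS-like simulator; all parameters and names are illustrative.

```python
import numpy as np

def split_step(u, v, D, dt, dx, k_on, k_off):
    """One first-order operator-splitting step for u <-> v with species u
    diffusing on a 1-D grid: diffuse first, then react, as two operators."""
    # Diffusion substep: explicit finite difference with reflecting ends
    lap = np.empty_like(u)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
    lap[0], lap[-1] = u[1] - u[0], u[-2] - u[-1]
    u = u + dt * D / dx**2 * lap
    # Reaction substep: deterministic mass-action exchange u <-> v
    flux = dt * (k_on * u - k_off * v)
    return u - flux, v + flux

u = np.zeros(100); u[50] = 1000.0   # initial pulse of species u
v = np.zeros(100)
for _ in range(500):
    u, v = split_step(u, v, D=1.0, dt=0.01, dx=0.5, k_on=0.5, k_off=0.2)
print(u.sum() + v.sum())            # total mass is conserved (~1000)
```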

  15. Solving Large Scale Nonlinear Eigenvalue Problem in Next-Generation Accelerator Design

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Ben-Shan; Bai, Zhaojun; /UC, Davis; Lee, Lie-Quan; Ko, Kwok; /SLAC

    2006-09-28

    A number of numerical methods, including inverse iteration, the method of successive linear problems, and the nonlinear Arnoldi algorithm, are studied in this paper to solve a large-scale nonlinear eigenvalue problem arising from the finite element analysis of resonant frequencies and external Q_e values of a waveguide-loaded cavity in the next-generation accelerator design. The authors present a nonlinear Rayleigh-Ritz iterative projection algorithm (NRRIT for short) and demonstrate that it is the most promising approach for a model-scale cavity design. The NRRIT algorithm is an extension of the nonlinear Arnoldi algorithm due to Voss. Computational challenges of solving such a nonlinear eigenvalue problem for a full-scale cavity design are outlined.
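
    As a minimal illustration of one of the methods studied, here is nonlinear inverse iteration (the classical Newton-based scheme) applied to a small quadratic eigenvalue problem; this is a generic sketch, not the NRRIT algorithm, and convergence from an arbitrary starting guess is only local.

```python
import numpy as np

def nonlinear_inverse_iteration(T, dT, lam0, x0, v, tol=1e-12, max_iter=50):
    """Newton-based inverse iteration for T(lam) x = 0 with the
    normalization v^T x = 1."""
    lam, x = lam0, x0 / (v @ x0)
    for _ in range(max_iter):
        y = np.linalg.solve(T(lam), dT(lam) @ x)   # one linear solve per step
        lam = lam - (v @ x) / (v @ y)              # eigenvalue Newton update
        x = y / (v @ y)                            # renormalized eigenvector
        if np.linalg.norm(T(lam) @ x) < tol:
            break
    return lam, x

# Example: a small quadratic eigenvalue problem T(lam) = K + lam*C + lam^2*M
rng = np.random.default_rng(3)
n = 8
K, C, M = rng.standard_normal((3, n, n))
T  = lambda lam: K + lam * C + lam**2 * M
dT = lambda lam: C + 2 * lam * M
lam, x = nonlinear_inverse_iteration(T, dT, 0.5 + 0j, np.ones(n), np.ones(n))
print(np.linalg.norm(T(lam) @ x))   # small residual if the iteration converged
```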

  16. DGDFT: A massively parallel method for large scale density functional theory calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Wei, E-mail: whu@lbl.gov; Yang, Chao, E-mail: cyang@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Lin, Lin, E-mail: linlin@math.berkeley.edu [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Department of Mathematics, University of California, Berkeley, California 94720 (United States)

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10^{-4} Hartree/atom in terms of the error of energy and 6.2 × 10^{-4} Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500–14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  17. A Nonlinear Small-Gain Theorem for Large-Scale Time Delay Systems

    CERN Document Server

    Tiwari, Shanaz; Jiang, Zhong-Ping

    2009-01-01

    This paper extends the nonlinear ISS small-gain theorem to a large-scale time delay system composed of three or more subsystems. En route to proving this small-gain theorem for systems of differential equations with delays, a small-gain theorem for operators is examined. The result developed for operators allows applications to a wide class of systems, including state space systems with delays.
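
    For orientation, the delay-free small-gain condition being generalized can be stated compactly; the formulas below paraphrase the standard ISS small-gain setting, not the paper's exact statement, which extends such conditions to time delays and to operators.

```latex
% Two ISS subsystems with nonlinear gains \gamma_{12}, \gamma_{21}:
% the feedback interconnection is ISS if the loop gain is a strict contraction.
\[
  \gamma_{12}\circ\gamma_{21}(s) < s \qquad \text{for all } s > 0 .
\]
% For a network of three or more subsystems, the analogous condition is
% imposed along every cycle of the gain graph, e.g.
\[
  \gamma_{13}\circ\gamma_{32}\circ\gamma_{21}(s) < s \qquad \text{for all } s > 0 .
\]
```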

  18. STABILIZATION FOR A CLASS OF LARGE-SCALE STOCHASTIC NONLINEAR SYSTEMS WITH DECENTRALIZED CONTROLLER DESIGN

    Institute of Scientific and Technical Information of China (English)

    Xiaowu MU; Haijun LIU

    2007-01-01

    In this paper, a state-feedback adaptive stabilization scheme for a class of large-scale stochastic nonlinear systems is designed using Lyapunov functions and the backstepping method. The systems contain uncertain terms whose bounds are governed by a set of unknown parameters. The designed controllers make the closed-loop systems asymptotically stable and adaptive to the unknown parameters. As an application, a second-order example is presented to illustrate the approach.

  19. Parallelization of a beam dynamics code and first large scale radio frequency quadrupole simulations

    Directory of Open Access Journals (Sweden)

    J. Xu

    2007-01-01

    The design and operation support of hadron (proton and heavy-ion) linear accelerators require substantial use of beam dynamics simulation tools. The beam dynamics code TRACK was originally developed at Argonne National Laboratory (ANL) to fulfill the special requirements of the rare isotope accelerator (RIA) accelerator systems. From the beginning, the code has been developed to be useful in the three stages of a linear accelerator project, namely the design, commissioning, and operation of the machine. To realize this concept, the code has unique features such as end-to-end simulations from the ion source to the final beam destination and automatic procedures for tuning a multiple-charge-state heavy-ion beam. The TRACK code has become a general beam dynamics code for hadron linacs and has found wide application worldwide. Until recently, the code remained serial except for a simple parallelization used to simulate multiple seeds when studying machine errors. To speed up computation, the TRACK Poisson solver has been parallelized. This paper discusses different parallel models for solving the Poisson equation, with the primary goal of extending the scalability of the code onto 1024 and more processors of the new generation of supercomputers known as BlueGene (BG/L). Domain decomposition techniques have been adapted and incorporated into the parallel version of the TRACK code. To demonstrate the new capabilities of the parallelized TRACK code, the dynamics of a 45 mA proton beam represented by 10^8 particles has been simulated through the 325 MHz radio frequency quadrupole and initial accelerator section of the proposed FNAL proton driver. The results show the benefits and advantages of large-scale parallel computing in beam dynamics simulations.

  20. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah; Carns, Philip; Ross, Robert; Li, Jianping Kelvin; Ma, Kwan-Liu

    2016-11-13

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and improve the simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information to tune the parameters of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a …

  1. On the importance of nonlinear couplings in large-scale neutrino streams

    CERN Document Server

    Dupuy, Hélène

    2015-01-01

    We propose a procedure to evaluate the impact of nonlinear couplings on the evolution of massive neutrino streams in the context of large-scale structure growth. Such streams can be described by general nonlinear conservation equations, derived from a multiple-flow perspective, which generalize the conservation equations of non-relativistic pressureless fluids. The relevance of the nonlinear couplings is quantified with the help of the eikonal approximation applied to the subhorizon limit of this system. It highlights the role played by the relative displacements of different cosmic streams and it specifies, for each flow, the spatial scales at which the growth of structure is affected by nonlinear couplings. We found that, at redshift zero, such couplings can be significant for wavenumbers as small as k = 0.2 h/Mpc for most of the neutrino streams.

  2. Towards a self-consistent halo model for the nonlinear large-scale structure

    CERN Document Server

    Schmidt, Fabian

    2015-01-01

    The halo model is a theoretically and empirically well-motivated framework for predicting the statistics of the nonlinear matter distribution in the Universe. However, current incarnations of the halo model suffer from two major deficiencies: (i) they do not enforce the stress-energy conservation of matter; (ii) they are not guaranteed to recover exact perturbation theory results on large scales. Here, we provide a formulation of the halo model ("EHM") that remedies both drawbacks in a consistent way, while attempting to maintain the predictivity of the approach. In the formulation presented here, mass and momentum conservation are guaranteed, and results of perturbation theory and the effective field theory can in principle be matched to any desired order on large scales. We find that a key ingredient in the halo model power spectrum is the halo stochasticity covariance, which has been studied to a much lesser extent than other ingredients such as mass function, bias, and profiles of halos. As written here …

  3. Computation of Large-Scale Structure Jet Noise Sources With Weak Nonlinear Effects Using Linear Euler

    Science.gov (United States)

    Dahl, Milo D.; Hixon, Ray; Mankbadi, Reda R.

    2003-01-01

    An approximate technique is presented for the prediction of the large-scale turbulent structure sound source in a supersonic jet. A linearized Euler equations code is used to solve for the flow disturbances within and near a jet with a given mean flow. Assuming a normal mode composition for the wave-like disturbances, the linear radial profiles are used in an integration of the Navier-Stokes equations. This results in a set of ordinary differential equations representing the weakly nonlinear self-interactions of the modes along with their interaction with the mean flow. Solutions are then used to correct the amplitude of the disturbances that represent the source of large-scale turbulent structure sound in the jet.

  4. Adaptive fuzzy decentralised control for stochastic nonlinear large-scale systems in pure-feedback form

    Science.gov (United States)

    Tong, Shaocheng; Xu, Yinyin; Li, Yongming

    2015-06-01

    This paper is concerned with the problem of adaptive fuzzy decentralised output-feedback control for a class of uncertain stochastic nonlinear pure-feedback large-scale systems with completely unknown functions and mismatched interconnections, without requiring the states to be available for controller design. With the help of fuzzy logic systems approximating the unknown nonlinear functions, a fuzzy state observer is designed to estimate the unmeasured states. The nonlinear filtered signals are then incorporated into the backstepping recursive design, and an adaptive fuzzy decentralised output-feedback control scheme is developed. It is proved that the filter system converges to a small neighbourhood of the origin for an appropriate choice of the design parameters. Simulation studies illustrate the effectiveness of the proposed approach.

  5. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    Science.gov (United States)

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes due to systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop station; however, because they are invoked many times for a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, a new computer software mpiWrapper has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface has been implemented to exchange information between nodes. Two specialized threads - one for task management and communication, and another for subtask execution - are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper .
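
    The master-worker pattern this record describes is easy to prototype. Below is a minimal, hedged sketch in Python with mpi4py - not the authors' C++ mpiWrapper - in which rank 0 hands out hypothetical command lines to worker ranks and collects exit codes; fault-tolerant resubmission is omitted.

      # Minimal MPI master-worker task farm (illustrative sketch, not the
      # authors' C++ mpiWrapper).  Assumes more subtasks than worker ranks;
      # resubmission on node failure is omitted for brevity.
      import subprocess
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      TAG_TASK, TAG_DONE, TAG_STOP = 1, 2, 3

      if rank == 0:                     # master: dispatch subtasks, gather results
          tasks = [f"./analyze input_{i}.dat" for i in range(100)]  # hypothetical commands
          n_tasks, results = len(tasks), []
          for worker in range(1, size):            # prime every worker with one task
              comm.send(tasks.pop(), dest=worker, tag=TAG_TASK)
          status = MPI.Status()
          while len(results) < n_tasks:            # refill workers as they finish
              results.append(comm.recv(source=MPI.ANY_SOURCE, tag=TAG_DONE, status=status))
              worker = status.Get_source()
              if tasks:
                  comm.send(tasks.pop(), dest=worker, tag=TAG_TASK)
              else:
                  comm.send(None, dest=worker, tag=TAG_STOP)
      else:                             # worker: run each command as a plain Linux process
          status = MPI.Status()
          while True:
              cmd = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
              if status.Get_tag() == TAG_STOP:
                  break
              comm.send((cmd, subprocess.call(cmd, shell=True)), dest=0, tag=TAG_DONE)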

  6. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2013-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these problems are data-intensive, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will

  7. DGDFT: A Massively Parallel Method for Large Scale Density Functional Theory Calculations

    CERN Document Server

    Hu, Wei; Yang, Chao

    2015-01-01

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) [J. Comput. Phys. 2012, 231, 2140] method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field (SCF) iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. It minimizes the number of degrees of freedom required to represent the solution to the Kohn-Sham problem for a desired level of accuracy. In particular, DGDFT can reach the planewave accuracy with far fewer degrees of freedom. By using the pole expansion and selected inversion (PEXSI) technique to compute electron density, energy and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both i...

  8. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    Directory of Open Access Journals (Sweden)

    Lorenzo L. Pesce

    2013-01-01

    Full Text Available Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation was very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  9. Analysis of Lightning Electromagnetic Field on Large-scale Terrain Model using Three-dimensional MW-FDTD Parallel Computation

    Science.gov (United States)

    Oikawa, Takaaki; Sonoda, Jun; Sato, Motoyuki; Honma, Noriyasu; Ikegawa, Yutaka

    The analysis of lightning electromagnetic fields using the FDTD method has been studied in recent years. However, large-scale three-dimensional analyses of real environments have received little attention, because the FDTD method has a huge computational cost for large-scale problems. We have therefore proposed a three-dimensional moving-window FDTD (MW-FDTD) method with parallel computation, which requires far less computation than the conventional FDTD method and the original MW-FDTD method. In this paper, we study the computational performance of the parallel MW-FDTD method and present a large-scale three-dimensional analysis of lightning electromagnetic fields on a real terrain model using our MW-FDTD with parallel computation.
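
    To convey the moving-window idea, here is a hedged one-dimensional sketch in Python: a standard Yee update for E and H in free space, with the computational window shifted to follow a pulse travelling at the speed of light. Grid sizes, the source, and the shift schedule are illustrative assumptions, not the authors' three-dimensional parallel scheme.

      # 1D moving-window FDTD sketch (free space, normalized units, c = 1).
      import numpy as np

      nx, nt = 400, 1200           # window size (cells), number of time steps
      S = 0.5                      # Courant number c*dt/dx
      ez, hy = np.zeros(nx), np.zeros(nx)
      offset = 0                   # window position within the (virtual) global grid

      for n in range(nt):
          hy[:-1] += S * (ez[1:] - ez[:-1])            # H update from curl E
          ez[1:] += S * (hy[1:] - hy[:-1])             # E update from curl H
          ez[20] += np.exp(-((n - 60) / 20.0) ** 2)    # soft Gaussian source
          # After the pulse has launched, slide the window one cell every
          # 1/S steps so it keeps pace with the pulse (speed = S cells/step);
          # trailing fields are discarded, fresh zero cells enter at the front.
          if n > 200 and n % int(1 / S) == 0:
              ez = np.append(ez[1:], 0.0)
              hy = np.append(hy[1:], 0.0)
              offset += 1

      print("window advanced by", offset, "cells")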

  10. Expectation propagation for large scale Bayesian inference of non-linear molecular networks from perturbation data

    Science.gov (United States)

    Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger

    2017-01-01

    Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity and thus make large scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods. PMID:28166542

  11. Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control

    Science.gov (United States)

    Kamyar, Reza

    In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed to use desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This in fact is the reason we seek parallel algorithms for setting-up and solving large SDPs on large cluster- and/or super-computers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to

  12. The missing link: a nonlinear post-Friedmann framework for small and large scales

    CERN Document Server

    Milillo, Irene; Bruni, Marco; Maselli, Andrea

    2015-01-01

    We present a nonlinear post-Friedmann framework for structure formation, generalizing to cosmology the weak-field (post-Minkowskian) approximation, unifying the treatment of small and large scales. We consider a universe filled with a pressureless fluid and a cosmological constant $\\Lambda$, the theory of gravity is Einstein's general relativity and the background is the standard flat $\\Lambda$CDM cosmological model. We expand the metric and the energy-momentum tensor in powers of $1/c$, keeping the matter density and peculiar velocity as exact fundamental variables. We assume the Poisson gauge, including scalar and tensor modes up to $1/c^4$ order and vector modes up to $1/c^5$ terms. Through a redefinition of the scalar potentials as a resummation of the metric contributions at different orders, we obtain a complete set of nonlinear equations, providing a unified framework to study structure formation from small to superhorizon scales, from the nonlinear Newtonian to the linear relativistic regime. We expli...

  13. Large-scale weakly nonlinear perturbations of convective magnetic dynamos in a rotating layer

    CERN Document Server

    Chertovskih, Roman

    2015-01-01

    We present a new mechanism for generation of large-scale magnetic field by thermal convection which does not involve the alpha-effect. We consider weakly nonlinear perturbations of space-periodic steady convective magnetic dynamos in a rotating layer that were identified in our previous work. The perturbations have a spatial scale in the horizontal direction that is much larger than the period of the perturbed convective magnetohydrodynamic state. Following the formalism of the multiscale stability theory, we have derived the system of amplitude equations governing the evolution of the leading terms in expansion of the perturbations in power series in the scale ratio. This asymptotic analysis is more involved than in the cases considered earlier, because the kernel of the operator of linearisation has zero-mean neutral modes whose origin lies in the spatial invariance of the perturbed regime, the operator reduced on the generalised kernel has two Jordan normal form blocks of size two, and simplifying symmetri...

  14. Non-linear shrinkage estimation of large-scale structure covariance

    Science.gov (United States)

    Joachimi, Benjamin

    2017-03-01

    In many astrophysical settings, covariance matrices of large data sets have to be determined empirically from a finite number of mock realizations. The resulting noise degrades inference and precludes it completely if there are fewer realizations than data points. This work applies a recently proposed non-linear shrinkage estimator of covariance to a realistic example from large-scale structure cosmology. After optimizing its performance for the usage in likelihood expressions, the shrinkage estimator yields subdominant bias and variance comparable to that of the standard estimator with a factor of ∼50 less realizations. This is achieved without any prior information on the properties of the data or the structure of the covariance matrix, at a negligible computational cost.
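
    As a hedged illustration of the shrinkage idea (using the simpler linear Ledoit-Wolf estimator available in scikit-learn, not the non-linear estimator studied in the paper), the toy below compares the sample covariance with a shrunk estimate when the number of mock realizations barely exceeds the number of data points:

      # Toy comparison of the sample covariance and a (linear) shrinkage
      # estimator with few mock realizations.  Linear Ledoit-Wolf shrinkage
      # stands in here for the paper's non-linear estimator.
      import numpy as np
      from sklearn.covariance import LedoitWolf

      rng = np.random.default_rng(0)
      p, n_mocks = 100, 120                        # data points vs. mock realizations
      true_cov = np.diag(np.linspace(1.0, 5.0, p))
      mocks = rng.multivariate_normal(np.zeros(p), true_cov, size=n_mocks)

      sample_cov = np.cov(mocks, rowvar=False)     # noisy, nearly singular
      shrunk_cov = LedoitWolf().fit(mocks).covariance_

      for name, c in [("sample", sample_cov), ("shrunk", shrunk_cov)]:
          err = np.linalg.norm(c - true_cov) / np.linalg.norm(true_cov)
          print(f"{name:7s} relative error: {err:.3f}")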

  15. Efficient graph-based dynamic load-balancing for parallel large-scale agent-based traffic simulation

    NARCIS (Netherlands)

    Xu, Y.; Cai, W.; Aydt, H.; Lees, M.; Tolk, A.; Diallo, S.Y.; Ryzhov, I.O.; Yilmaz, L.; Buckley, S.; Miller, J.A.

    2014-01-01

    One of the issues of parallelizing large-scale agent-based traffic simulations is partitioning and load-balancing. Traffic simulations are dynamic applications where the distribution of workload in the spatial domain constantly changes. Dynamic load-balancing at run-time has shown better efficiency

  16. On large-scale nonlinear programming techniques for solving optimal control problems

    Energy Technology Data Exchange (ETDEWEB)

    Faco, J.L.D.

    1994-12-31

    The formulation of decision problems by Optimal Control Theory allows the consideration of their dynamic structure and parameter estimation. This paper deals with techniques for choosing directions in the iterative solution of discrete-time optimal control problems. A unified formulation incorporates nonlinear performance criteria and dynamic equations, time delays, bounded state and control variables, free planning horizon and variable initial state vector. In general, such problems are characterized by a large number of variables, especially when arising from discretization of continuous-time optimal control or calculus of variations problems. In a GRG context the staircase structure of the Jacobian matrix of the dynamic equations is exploited in the choice of basic and superbasic variables and when changes of basis occur along the process. The search directions of the bound-constrained nonlinear programming problem in the reduced space of the superbasic variables are computed by large-scale NLP techniques. A modified Polak-Ribiere conjugate gradient method and a limited-storage quasi-Newton BFGS method are analyzed, and modifications to deal with the bounds on the variables are suggested based on projected gradient devices with specific linesearches. Some practical models are presented for electric generation planning and fishery management, and the application of the code GRECO - Gradient REduit pour la Commande Optimale - is discussed.
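
    As a modern, hedged stand-in for the limited-storage quasi-Newton strategy with bounds described above (not the GRECO code), SciPy's L-BFGS-B can solve a small bound-constrained discrete-time optimal control problem directly; the dynamics and weights below are invented for illustration:

      # Bound-constrained discrete-time optimal control via limited-memory
      # BFGS with bounds (L-BFGS-B).  Toy dynamics and weights; a sketch in
      # the spirit of the limited-storage quasi-Newton approach, not GRECO.
      import numpy as np
      from scipy.optimize import minimize

      T, a, b, x0 = 50, 0.95, 0.1, 5.0        # horizon and hypothetical dynamics

      def cost(u):
          x, J = x0, 0.0
          for t in range(T):                  # roll the linear dynamics forward
              J += x**2 + u[t]**2             # quadratic running cost
              x = a * x + b * u[t]
          return J + x**2                     # terminal-state penalty

      res = minimize(cost, np.zeros(T), method="L-BFGS-B",
                     bounds=[(-1.0, 1.0)] * T)   # control bounds |u_t| <= 1
      print("optimal cost:", res.fun)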

  17. Large-Scale Structure Formation: from the first non-linear objects to massive galaxy clusters

    CERN Document Server

    Planelles, S; Bykov, A M

    2014-01-01

    The large-scale structure of the Universe formed from initially small perturbations in the cosmic density field, leading to galaxy clusters with up to 10^15 Msun at the present day. Here, we review the formation of structures in the Universe, considering the first primordial galaxies and the most massive galaxy clusters as extreme cases of structure formation where fundamental processes such as gravity, turbulence, cooling and feedback are particularly relevant. The first non-linear objects in the Universe formed in dark matter halos with 10^5-10^8 Msun at redshifts 10-30, leading to the first stars and massive black holes. At later stages, larger scales became non-linear, leading to the formation of galaxy clusters, the most massive objects in the Universe. We describe here their formation via gravitational processes, including the self-similar scaling relations, as well as the observed deviations from such self-similarity and the related non-gravitational physics (cooling, stellar feedback, AGN). While on i...

  18. Decentralized Adaptive Neural Output-Feedback DSC for Switched Large-Scale Nonlinear Systems.

    Science.gov (United States)

    Long, Lijun; Zhao, Jun

    2016-03-08

    In this paper, for a class of switched large-scale uncertain nonlinear systems with unknown control coefficients and unmeasurable states, a switched-dynamic-surface-based decentralized adaptive neural output-feedback control approach is developed. The proposed approach extends the classical dynamic surface control (DSC) technique from the nonswitched to the switched setting by designing switched first-order filters, which overcomes the multiple "explosion of complexity" problem. Also, a dual common coordinate transformation of all subsystems is exploited to avoid the individual coordinate transformations for subsystems that are required when applying the backstepping recursive design scheme. Nussbaum-type functions are utilized to handle the unknown control coefficients, and a switched neural network observer is constructed to estimate the unmeasurable states. Combining the average dwell time method with backstepping and the DSC technique, decentralized adaptive neural controllers of subsystems are explicitly designed. It is proved that the proposed approach guarantees semiglobal uniform ultimate boundedness of all the signals in the closed-loop system under a class of switching signals with average dwell time, with the tracking errors converging to a small neighborhood of the origin. A system of two inverted pendulums is provided to demonstrate the effectiveness of the proposed method.

  19. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    Science.gov (United States)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system anew for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply the new method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a
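
    The hedged NumPy toy below illustrates the central trick - project the damped normal equations onto a low-dimensional subspace built from the Jacobian once, then reuse that basis for every damping parameter - and is in no way the authors' Julia/MADS implementation:

      # Sketch: one Levenberg-Marquardt step solved for many damping
      # parameters by recycling a single subspace basis.  Toy problem only.
      import numpy as np

      rng = np.random.default_rng(1)
      m, n, k = 2000, 500, 50                 # observations, parameters, subspace size
      J = rng.standard_normal((m, n)) / np.sqrt(m)   # Jacobian at the current iterate
      r = rng.standard_normal(m)                     # current residual

      # Build an orthonormal basis V for a k-dimensional subspace once ...
      V, _ = np.linalg.qr(J.T @ rng.standard_normal((m, k)))
      JV = J @ V                              # ... and project J onto it once.
      A = JV.T @ JV                           # k-by-k reduced normal matrix
      g = JV.T @ r                            # reduced gradient

      # ... then reuse the same projection for every damping parameter:
      for lam in [1e-2, 1e-1, 1.0, 10.0]:
          y = np.linalg.solve(A + lam * np.eye(k), -g)   # cheap k-by-k solve
          step = V @ y                                   # lift back to parameter space
          print(f"lambda={lam:6.2f}  |step|={np.linalg.norm(step):.4f}")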

  20. Drive of parallel flows by turbulence and large-scale E × B transverse transport in divertor geometry

    Science.gov (United States)

    Galassi, D.; Tamain, P.; Bufferand, H.; Ciraolo, G.; Ghendrih, Ph.; Baudoin, C.; Colin, C.; Fedorczak, N.; Nace, N.; Serre, E.

    2017-03-01

    The poloidal asymmetries of parallel flows in edge plasmas are investigated with the 3D fluid turbulence code TOKAM3X. A diverted COMPASS-like magnetic equilibrium is used for the simulations. The measured and simulated parallel Mach numbers are compared and exhibit good qualitative agreement. Small-scale turbulent transport is observed to dominate near the low-field-side midplane, even though it co-exists with significant large-scale cross-field fluxes. Despite the turbulent nature of the plasma in the divertor region, simulations show the low effectiveness of turbulence for the cross-field transport towards the private flux region. Nevertheless, a complex pattern of fluxes associated with the average field components is found to cross the separatrix in the divertor region. Large-scale and small-scale turbulent E × B transport, along with the...

  1. HPF: a data parallel programming interface for large-scale numerical simulations

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Yoshiki; Suehiro, Kenji; Murai, Hitoshi [NEC Corp., Tokyo (Japan)]

    1998-03-01

    HPF (High Performance Fortran) is a data parallel language designed for programming on distributed memory parallel systems. The first draft of HPF1.0 was defined in 1993 as a de facto standard language. Recently, relatively reliable HPF compilers have become available on several distributed memory parallel systems. Many projects to parallelize real world programs have started mainly in the U.S. and Europe, and the weak and strong points in the current HPF have been made clear. In this paper, major data transfer patterns required to parallelize numerical simulations, such as SHIFT, matrix transposition, reduction, GATHER/SCATTER and irregular communication, and the programming methods to implement them with HPF are described. The problems in the current HPF interface for developing efficient parallel programs and recent activities to deal with them are presented as well. (author)

  2. Performance Measurement and Analysis of Large-Scale Parallel Applications on Leadership Computing Systems

    Directory of Open Access Journals (Sweden)

    Brian J.N. Wylie

    2008-01-01

    Full Text Available Developers of applications with large-scale computing requirements are currently presented with a variety of high-performance systems optimised for message-passing; however, effectively exploiting the available computing resources remains a major challenge. In addition to fundamental application scalability characteristics, application and system peculiarities often only manifest at extreme scales, requiring highly scalable performance measurement and analysis tools that are convenient to incorporate in application development and tuning activities. We present our experiences with a multigrid solver benchmark and state-of-the-art real-world applications for numerical weather prediction and computational fluid dynamics, on three quite different multi-thousand-processor supercomputer systems – Cray XT3/4, MareNostrum & Blue Gene/L – using the newly-developed SCALASCA toolset to quantify and isolate a range of significant performance issues.

  3. A large-scale nonlinear eigensolver for the analysis of dispersive nanostructures

    Science.gov (United States)

    Guo, Hua; Arbenz, Peter; Oswald, Benedikt

    2013-08-01

    We introduce the electromagnetic eigenmodal solver code FemaxxNano for the numerical analysis of nanometer structured optical systems, a scientific field generally known as nanooptics. FemaxxNano solves the electric field vector wave equation and calculates the electromagnetic eigenmodes of nearly arbitrary 3-dimensional resonators, embedded either in free-space, vacuum or a background medium. Here, the study of the interaction between nanometer sized metallic structures and light is at the heart of the physical problem. Since metals in the optical region of the electromagnetic spectrum are highly dispersive and, thus, dissipative, dielectric media, we eventually obtain a nonlinear eigenvalue problem. We discretize the electromagnetic eigenvalue problem with the finite element method (FEM) in 3-dimensional space and on unstructured tetrahedral grids. We introduce a fully iterative scheme to solve the nonlinear problem for complex coefficient matrices that depend on wavelength. We investigate the properties of the algorithm in detail and demonstrate its performance by analyzing a nanometer sized optical dimer structure, a specific type of optical antenna, on distributed-memory parallel computers.

  4. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning.

    Science.gov (United States)

    Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation especially when the size of data is large. Nowadays, big data has gained momentum from both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.

  5. Developing a Massively Parallel Forward Projection Radiography Model for Large-Scale Industrial Applications

    Energy Technology Data Exchange (ETDEWEB)

    Bauerle, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2014-08-01

    This project utilizes Graphics Processing Units (GPUs) to compute radiograph simulations for arbitrary objects. The generation of radiographs, also known as the forward projection imaging model, is computationally intensive and not widely utilized. The goal of this research is to develop a massively parallel algorithm that can compute forward projections for objects with a trillion voxels (3D pixels). To achieve this end, the data are divided into blocks that can each fit into GPU memory. The forward projected image is also divided into segments to allow for future parallelization and to avoid needless computations.

  6. Solving Large-Scale QAP Problems in Parallel with the Search

    DEFF Research Database (Denmark)

    Clausen, Jens; Brüngger, A.; Marzetta, A.

    1998-01-01

    Program libraries are one tool to make the cooperation between specialists from various fields successful: the separation of application-specific knowledge from application-independent tasks ensures portability, maintenance, extensibility, and flexibility. The current paper demonstrates the success in combining problem-specific knowledge for the quadratic assignment problem (QAP) with the raw computing power offered by contemporary parallel hardware by using the library of parallel search algorithms ZRAM. Solutions of previously unsolved large standard test-instances of the QAP are presented.

  7. A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data

    Science.gov (United States)

    Li, Z.; Hodgson, M.; Li, W.

    2016-12-01

    Light detection and ranging (LiDAR) technologies have proven efficient for quickly obtaining very detailed Earth surface data over large spatial extents. Such data are important for scientific discoveries in the Earth and ecological sciences and for natural disaster and environmental applications. However, handling LiDAR data poses grand geoprocessing challenges due to both data intensity and computational intensity. Previous studies achieved notable success in the parallel processing of LiDAR data to address these challenges. However, these studies either relied on high performance computers and specialized hardware (GPUs) or focused mostly on finding customized solutions for specific algorithms. We developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, the framework is able to conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework are evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) is able to handle massive LiDAR data more efficiently than standalone tools; and 2) provides almost linear scalability in terms of either increased workload (data volume) or increased computing nodes with both spatial decomposition strategies. We believe that the proposed framework provides valuable references for developing a collaborative cyberinfrastructure for processing big earth science data in a highly scalable environment.
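
    The tile-based decomposition is easy to sketch. The hedged Python toy below bins a synthetic point cloud into square tiles and processes the tiles in parallel with multiprocessing, standing in for the Hadoop-based framework; the tile size and per-tile statistic are arbitrary choices:

      # Sketch of tile-based spatial decomposition for LiDAR-like point data:
      # bin points into a regular grid of tiles, then process tiles in parallel.
      import numpy as np
      from multiprocessing import Pool

      TILE = 100.0                                   # tile edge length (map units)

      def tile_key(x, y):
          return (int(x // TILE), int(y // TILE))    # tile a point falls in

      def process_tile(item):
          key, pts = item
          z = np.array([p[2] for p in pts])
          return key, float(z.mean())                # e.g. mean elevation per tile

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          points = rng.uniform(0, 1000, size=(100_000, 3))   # toy x, y, z cloud
          tiles = {}
          for x, y, z in points:                     # spatial decomposition
              tiles.setdefault(tile_key(x, y), []).append((x, y, z))
          with Pool() as pool:                       # embarrassingly parallel map
              results = pool.map(process_tile, tiles.items())
          print(len(results), "tiles processed")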

  9. Hierarchical Parallel Matrix Multiplication on Large-Scale Distributed Memory Platforms

    KAUST Repository

    Quintin, Jean-Noel

    2013-10-01

    Matrix multiplication is a very important computation kernel both in its own right as a building block of many scientific applications and as a popular representative for other scientific applications. Cannon's algorithm which dates back to 1969 was the first efficient algorithm for parallel matrix multiplication providing theoretically optimal communication cost. However this algorithm requires a square number of processors. In the mid-1990s, the SUMMA algorithm was introduced. SUMMA overcomes the shortcomings of Cannon's algorithm as it can be used on a nonsquare number of processors as well. Since then the number of processors in HPC platforms has increased by two orders of magnitude, making the contribution of communication in the overall execution time more significant. Therefore, the state of the art parallel matrix multiplication algorithms should be revisited to reduce the communication cost further. This paper introduces a new parallel matrix multiplication algorithm, Hierarchical SUMMA (HSUMMA), which is a redesign of SUMMA. Our algorithm reduces the communication cost of SUMMA by introducing a two-level virtual hierarchy into the two-dimensional arrangement of processors. Experiments on an IBM BlueGene/P demonstrate the reduction of communication cost up to 2.08 times on 2048 cores and up to 5.89 times on 16384 cores. © 2013 IEEE.
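
    To make the algorithmic kernel concrete, the hedged serial sketch below shows the SUMMA idea that HSUMMA reorganizes: C = AB accumulated panel by panel, where each loop iteration corresponds to one broadcast round of A- and B-panels across a 2D processor grid (the hierarchy itself is not reproduced here):

      # Serial sketch of the SUMMA kernel: C = A @ B as a sum of rank-nb
      # panel products, one per "broadcast round" of the parallel algorithm.
      import numpy as np

      n, nb = 512, 64                           # matrix size and panel width
      rng = np.random.default_rng(0)
      A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))

      C = np.zeros((n, n))
      for k in range(0, n, nb):                 # one round per panel of k
          C += A[:, k:k + nb] @ B[k:k + nb, :]  # local rank-nb update

      assert np.allclose(C, A @ B)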

  10. Parallel Processing for Large-scale Fault Tree in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xinyan Wang

    2013-05-01

    Full Text Available Wireless sensor networks (WSNs) involve many technologies, such as sensors, embedded systems, and wireless communication. WSNs differ from traditional networks in size, communication distance, and energy constraints, which calls for new topologies, protocols, quality of service (QoS) schemes, and so on. To solve the problem of self-organizing topology, this paper proposes a novel strategy based on the communication delay between sensors. First, the gateway selects some boundary nodes to connect; the boundary nodes then choose inner nodes, and so on recursively. The result is a net-tree topology with multi-path routing. Analyses of this topology show that the net-tree is strongly self-organizing and extensible. However, such systems are usually very large and complex, so failed nodes are hard to detect. To address this challenge, the paper adopts fault tree analysis, a commonly used method for analyzing the reliability of a network or system. Based on the fault tree analysis, a parallel computing algorithm is presented for these faults in the net-tree. Two models for parallel processing are proposed, with the focus on the parallel processing algorithm based on cut sets, and the speedup ratio is studied. Compared with the serial algorithm, experimental results show that the efficiency is greatly improved.

  11. Leveraging human oversight and intervention in large-scale parallel processing of open-source data

    Science.gov (United States)

    Casini, Enrico; Suri, Niranjan; Bradshaw, Jeffrey M.

    2015-05-01

    The popularity of cloud computing along with the increased availability of cheap storage have led to the necessity of elaboration and transformation of large volumes of open-source data, all in parallel. One way to handle such extensive volumes of information properly is to take advantage of distributed computing frameworks like Map-Reduce. Unfortunately, an entirely automated approach that excludes human intervention is often unpredictable and error prone. Highly accurate data processing and decision-making can be achieved by supporting an automatic process through human collaboration, in a variety of environments such as warfare, cyber security and threat monitoring. Although this mutual participation seems easily exploitable, human-machine collaboration in the field of data analysis presents several challenges. First, due to the asynchronous nature of human intervention, it is necessary to verify that once a correction is made, all the necessary reprocessing is done in chain. Second, it is often needed to minimize the amount of reprocessing in order to optimize the usage of resources due to limited availability. In order to improve on these strict requirements, this paper introduces improvements to an innovative approach for human-machine collaboration in the processing of large amounts of open-source data in parallel.

  12. A Review on Large Scale Graph Processing Using Big Data Based Parallel Programming Models

    Directory of Open Access Journals (Sweden)

    Anuraj Mohan

    2017-02-01

    Full Text Available Processing big graphs has become an increasingly essential activity in various fields like engineering, business intelligence and computer science. Social networks and search engines usually generate large graphs, which demand sophisticated techniques for social network analysis and web structure mining. Recent trends in graph processing tend towards using Big Data platforms for parallel graph analytics. MapReduce has emerged as a Big Data based programming model for the processing of massively large datasets. Apache Giraph, an open source implementation of Google Pregel based on the Bulk Synchronous Parallel (BSP) model, is used for graph analytics in social networks like Facebook. The proposed work investigates the algorithmic effects of the MapReduce and BSP models on graph problems. The triangle counting problem in graphs is considered as a benchmark, and evaluations are made on the basis of computation time on the same cluster, scalability in relation to graph and cluster size, resource utilization and the structure of the graph.
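
    For reference, the triangle-counting benchmark itself is simple to state: for each edge (u, v), count the common neighbours of u and v; every triangle is then seen from each of its three edges. The hedged serial Python sketch below captures the logic that the MapReduce and BSP implementations distribute:

      # Serial triangle counting: sum common neighbours over edges, then
      # divide by 3 because each triangle is counted once per edge.
      edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)}  # toy graph

      adj = {}
      for u, v in edges:                        # build adjacency sets
          adj.setdefault(u, set()).add(v)
          adj.setdefault(v, set()).add(u)

      triangles = sum(len(adj[u] & adj[v]) for u, v in edges) // 3
      print("triangles:", triangles)            # -> 2 for the toy graph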

  13. Large-Scale Eigenvalue Calculations for Stability Analysis of Steady Flows on Massively Parallel Computers

    Energy Technology Data Exchange (ETDEWEB)

    Lehoucq, Richard B.; Salinger, Andrew G.

    1999-08-01

    We present an approach for determining the linear stability of steady states of PDEs on massively parallel computers. Linearizing the transient behavior around a steady state leads to a generalized eigenvalue problem. The eigenvalues with largest real part are calculated using Arnoldi's iteration driven by a novel implementation of the Cayley transformation to recast the problem as an ordinary eigenvalue problem. The Cayley transformation requires the solution of a linear system at each Arnoldi iteration, which must be done iteratively for the algorithm to scale with problem size. A representative model problem of 3D incompressible flow and heat transfer in a rotating disk reactor is used to analyze the effect of algorithmic parameters on the performance of the eigenvalue algorithm. Successful calculations of leading eigenvalues for matrix systems of order up to 4 million were performed, identifying the critical Grashof number for a Hopf bifurcation.
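
    A hedged SciPy sketch of the Cayley-transform idea: eigenvalues lambda of A x = lambda M x map to theta = (lambda - mu)/(lambda - sigma), so with sigma chosen just right of the spectrum and mu well to its left, the rightmost eigenvalues become the dominant ones that Arnoldi's iteration (ARPACK) finds first. The dense LU factorization below stands in for the iterative linear solves required at large scale, and the toy matrices and shifts are invented:

      # Cayley-transformed eigenvalue sketch: T = (A - sigma*M)^-1 (A - mu*M).
      # Eigenvalues with Re(lambda) > (sigma + mu)/2 map outside the unit
      # circle, so the rightmost modes dominate and ARPACK finds them first.
      import numpy as np
      from scipy.linalg import lu_factor, lu_solve
      from scipy.sparse.linalg import LinearOperator, eigs

      rng = np.random.default_rng(0)
      n = 200
      A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))  # toy stable Jacobian
      M = np.eye(n)                                        # toy mass matrix
      sigma, mu = 0.0, -2.0                                # illustrative shifts

      lu = lu_factor(A - sigma * M)                        # factor once, reuse
      T = LinearOperator((n, n), dtype=float,
                         matvec=lambda x: lu_solve(lu, (A - mu * M) @ x))

      theta, _ = eigs(T, k=5, which="LM")                  # dominant transformed modes
      lam = (sigma * theta - mu) / (theta - 1.0)           # invert the Cayley map
      print("rightmost eigenvalues:", lam[np.argsort(-lam.real)])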

  14. An extensible operating system design for large-scale parallel machines.

    Energy Technology Data Exchange (ETDEWEB)

    Riesen, Rolf E.; Ferreira, Kurt Brian

    2009-04-01

    Running untrusted user-level code inside an operating system kernel was studied in the 1990s but has not really caught on. We believe the time has come to resurrect kernel extensions for operating systems that run on highly-parallel clusters and supercomputers. The reason is that the usage model for these machines differs significantly from a desktop machine or a server. In addition, vendors are starting to add features, such as floating-point accelerators, multicore processors, and reconfigurable compute elements. An operating system for such machines must be adaptable to the requirements of specific applications and provide abstractions to access next-generation hardware features, without sacrificing performance or scalability.

  15. Large-Scale Targeted Proteomics Using Internal Standard Triggered-Parallel Reaction Monitoring (IS-PRM).

    Science.gov (United States)

    Gallien, Sebastien; Kim, Sang Yoon; Domon, Bruno

    2015-06-01

    Targeted high-resolution and accurate mass analyses performed on fast sequencing mass spectrometers have opened new avenues for quantitative proteomics. More specifically, parallel reaction monitoring (PRM) implemented on quadrupole-orbitrap instruments exhibits exquisite selectivity to discriminate interferences from analytes. Furthermore, the instrument trapping capability enhances the sensitivity of the measurements. The PRM technique, applied to the analysis of limited peptide sets (typically 50 peptides or less) in a complex matrix, resulted in an improved detection and quantification performance as compared with the reference method of selected reaction monitoring performed on triple quadrupole instruments. However, the implementation of PRM for the analysis of large peptide numbers requires the adjustment of mass spectrometry acquisition parameters, which dramatically affects the quality of the generated data, and thus the overall output of an experiment. A newly designed data acquisition scheme enabled the analysis of moderate-to-large peptide numbers while retaining a high performance level. This new method, called internal standard triggered-parallel reaction monitoring (IS-PRM), relies on added internal standards and the on-the-fly adjustment of acquisition parameters to drive in real-time measurement of endogenous peptides. The acquisition time management was designed to maximize the effective time devoted to measure the analytes in a time-scheduled targeted experiment. The data acquisition scheme alternates between two PRM modes: a fast low-resolution "watch mode" and a "quantitative mode" using optimized parameters ensuring data quality. The IS-PRM method exhibited a highly effective use of the instrument time. Applied to the analysis of large peptide sets (up to 600) in complex samples, the method showed an unprecedented combination of scale and analytical performance, with limits of quantification in the low amol range. The successful analysis of

  16. fast_protein_cluster: parallel and optimized clustering of large-scale protein modeling data.

    Science.gov (United States)

    Hung, Ling-Hong; Samudrala, Ram

    2014-06-15

    fast_protein_cluster is a fast, parallel and memory efficient package used to cluster 60 000 sets of protein models (with up to 550 000 models per set) generated by the Nutritious Rice for the World project. fast_protein_cluster is an optimized and extensible toolkit that supports Root Mean Square Deviation after optimal superposition (RMSD) and Template Modeling score (TM-score) as metrics. RMSD calculations using a laptop CPU are 60× faster than qcprot and 3× faster than current graphics processing unit (GPU) implementations. New GPU code further increases the speed of RMSD and TM-score calculations. fast_protein_cluster provides novel k-means and hierarchical clustering methods that are up to 250× and 2000× faster, respectively, than Clusco, and identify significantly more accurate models than Spicker and Clusco. fast_protein_cluster is written in C++ using OpenMP for multi-threading support. Custom streaming Single Instruction Multiple Data (SIMD) extensions and advanced vector extension intrinsics code accelerate CPU calculations, and OpenCL kernels support AMD and Nvidia GPUs. fast_protein_cluster is available under the M.I.T. license. (http://software.compbio.washington.edu/fast_protein_cluster) © The Author 2014. Published by Oxford University Press.

  17. Efficient numerical methods for the large-scale, parallel solution of elastoplastic contact problems

    KAUST Repository

    Frohne, Jörg

    2015-08-06

    © 2016 John Wiley & Sons, Ltd. Quasi-static elastoplastic contact problems are ubiquitous in many industrial processes and other contexts, and their numerical simulation is consequently of great interest in accurately describing and optimizing production processes. The key component in these simulations is the solution of a single load step of a time iteration. From a mathematical perspective, the problems to be solved in each time step are characterized by the difficulties of variational inequalities for both the plastic behavior and the contact problem. Computationally, they also often lead to very large problems. In this paper, we present and evaluate a complete set of methods that are (1) designed to work well together and (2) allow for the efficient solution of such problems. In particular, we use adaptive finite element meshes with linear and quadratic elements, a Newton linearization of the plasticity, active set methods for the contact problem, and multigrid-preconditioned linear solvers. Through a sequence of numerical experiments, we show the performance of these methods. This includes highly accurate solutions of a three-dimensional benchmark problem and scaling our methods in parallel to 1024 cores and more than a billion unknowns.

  19. Neurite, a finite difference large scale parallel program for the simulation of electrical signal propagation in neurites under mechanical loading.

    Science.gov (United States)

    García-Grajales, Julián A; Rucabado, Gabriel; García-Dopico, Antonio; Peña, José-María; Jérusalem, Antoine

    2015-01-01

    With the growing body of research on traumatic brain injury and spinal cord injury, computational neuroscience has recently focused its modeling efforts on neuronal functional deficits following mechanical loading. However, in most of these efforts, cell damage is generally only characterized by purely mechanistic criteria, functions of quantities such as stress, strain or their corresponding rates. The modeling of functional deficits in neurites as a consequence of macroscopic mechanical insults has been rarely explored. In particular, a quantitative mechanically based model of electrophysiological impairment in neuronal cells, Neurite, has only very recently been proposed. In this paper, we present the implementation details of this model: a finite difference parallel program for simulating electrical signal propagation along neurites under mechanical loading. Following the application of a macroscopic strain at a given strain rate produced by a mechanical insult, Neurite is able to simulate the resulting neuronal electrical signal propagation, and thus the corresponding functional deficits. The simulation of the coupled mechanical and electrophysiological behaviors requires computationally expensive calculations that increase in complexity as the network of the simulated cells grows. The solvers implemented in Neurite (explicit and implicit) were therefore parallelized using graphics processing units in order to reduce the burden of simulation costs in large-scale scenarios. Cable Theory and Hodgkin-Huxley models were implemented to account for the electrophysiological passive and active regions of a neurite, respectively, whereas a coupled mechanical model accounting for the neurite mechanical behavior within its surrounding medium was adopted as a link between electrophysiology and mechanics. This paper provides the details of the parallel implementation of Neurite, along with three different application examples: a long myelinated axon, a segmented
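
    As a hedged glimpse of the electrophysiological core described above, the Python sketch below takes explicit finite-difference steps of the passive cable equation, tau*dV/dt = lam^2*d2V/dx2 - V; the active Hodgkin-Huxley currents and the mechanical coupling that Neurite adds are omitted, and all parameter values are invented:

      # Explicit finite-difference integration of the passive cable equation
      #   tau * dV/dt = lam**2 * d2V/dx2 - V    (toy parameters, not Neurite's)
      import numpy as np

      nx, nt = 200, 5000
      dx, dt = 1e-4, 1e-6            # spatial step (m), time step (s)
      lam, tau = 1e-3, 1e-2          # length constant (m), time constant (s)
      r = lam**2 * dt / (tau * dx**2)
      assert r < 0.5, "explicit scheme stability limit violated"

      V = np.zeros(nx)               # membrane potential (deviation from rest)
      V[0] = 1.0                     # normalized voltage clamp at one end

      for n in range(nt):
          V[1:-1] += r * (V[2:] - 2 * V[1:-1] + V[:-2]) - (dt / tau) * V[1:-1]
          V[0] = 1.0                 # clamped (Dirichlet) end
          V[-1] = V[-2]              # sealed (no-flux) end

      print("potential at the cable midpoint:", V[nx // 2])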

  20. Development and application of a massively parallel KKR Green function method for large scale systems

    Energy Technology Data Exchange (ETDEWEB)

    Thiess, Alexander R.

    2011-12-19

    In this thesis we present the development of the self-consistent, full-potential Korringa-Kohn-Rostoker (KKR) Green function method KKRnano for calculating the electronic properties, magnetic interactions, and total energy including all electrons on the basis of the density functional theory (DFT) on high-end massively parallelized high-performance computers for supercells containing thousands of atoms without sacrifice of accuracy. KKRnano was used for the following two applications. The first application is centered in the field of dilute magnetic semiconductors. In this field a new promising material combination was identified: gadolinium doped gallium nitride which shows ferromagnetic ordering of colossal magnetic moments above room temperature. It quickly turned out that additional extrinsic defects induce the striking properties. However, the question of which kind of extrinsic defects are present in experimental samples is still unresolved. In order to shed light on this open question, we perform extensive studies of the most promising candidates: interstitial nitrogen and oxygen, as well as gallium vacancies. By analyzing the pairwise magnetic coupling among defects it is shown that nitrogen and oxygen interstitials cannot support thermally stable ferromagnetic order. Gallium vacancies, on the other hand, facilitate an important coupling mechanism. The vacancies are found to induce large magnetic moments on all surrounding nitrogen sites, which then couple ferromagnetically both among themselves and with the gadolinium dopants. Based on a statistical evaluation it can be concluded that already small concentrations of gallium vacancies can lead to a distinct long-range ferromagnetic ordering. Beyond this important finding we present further indications, from which we infer that gallium vacancies likely cause the striking ferromagnetic coupling of colossal magnetic moments in GaN:Gd. The second application deals with the phase-change material germanium

  1. Decentralized Adaptive Control of Large-Scale Non-Affine Nonlinear Time-Delay Systems Using Wavelet Neural Networks

    Directory of Open Access Journals (Sweden)

    Elaheh Saeedi

    2014-07-01

    Full Text Available In this paper, a decentralized adaptive controller using wavelet neural networks is developed for a class of large-scale nonlinear systems composed of unknown nonlinear non-affine subsystems with time delay. The interconnections between subsystems are taken to be nonlinear with time delay, which is closer to reality than the case in which delays in the interconnections are neglected. The output weights of the wavelet neural network and the other wavelet parameters are adjusted online. Stability of the closed-loop system is guaranteed using the Lyapunov-Krasovskii method. Besides closed-loop stability, the tracking error is guaranteed to converge to a neighbourhood of zero, and all signals in the closed-loop system are bounded. Finally, the proposed method is simulated and applied to the control of two inverted pendulums connected by a spring; the numerical results show the efficiency of the method suggested in this paper.

  2. Real-time nonlinear MPC and MHE for a large-scale mechatronic application

    DEFF Research Database (Denmark)

    Vukov, Milan; Gros, S.; Horn, G.

    2015-01-01

    Progress in optimization algorithms and in computational hardware has made the deployment of Nonlinear Model Predictive Control (NMPC) and Moving Horizon Estimation (MHE) possible for mechatronic applications. This paper aims to assess the computational performance of NMPC and MHE for rotational start...

  3. A family of derivative-free conjugate gradient methods for large-scale nonlinear systems of equations

    Science.gov (United States)

    Cheng, Wanyou; Xiao, Yunhai; Hu, Qing-Jie

    2009-02-01

    In this paper, we propose a family of derivative-free conjugate gradient methods for large-scale nonlinear systems of equations. They come from two modified conjugate gradient methods [W.Y. Cheng, A two term PRP based descent Method, Numer. Funct. Anal. Optim. 28 (2007) 1217-1230; L. Zhang, W.J. Zhou, D.H. Li, A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence, IMA J. Numer. Anal. 26 (2006) 629-640] recently proposed for unconstrained optimization problems. Under appropriate conditions, the global convergence of the proposed method is established. Preliminary numerical results show that the proposed method is promising.
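
    A hedged sketch of the basic iteration in this family, on an invented toy system F(x) = 0: the residual -F(x) plays the role of the gradient, a PRP-style beta recycles the previous direction, and a crude backtracking line search on ||F||^2 enforces decrease. This is an illustrative variant, not the paper's exact method or line search:

      # Derivative-free PRP-style conjugate gradient iteration for F(x) = 0.
      # -F(x) substitutes for the gradient; no Jacobian is ever formed.
      import numpy as np

      def F(x):                                    # toy nonlinear system
          return np.array([x[0]**2 + x[1] - 3.0,
                           x[0] + x[1]**2 - 5.0])  # solution: (1, 2)

      x = np.array([1.0, 1.0])
      Fx = F(x)
      d = -Fx                                      # first direction
      for k in range(100):
          t = 1.0                                  # backtracking on ||F||^2
          while np.linalg.norm(F(x + t * d))**2 > (1 - 1e-4 * t) * np.linalg.norm(Fx)**2:
              t *= 0.5
              if t < 1e-12:
                  break
          x_new = x + t * d
          F_new = F(x_new)
          beta = F_new @ (F_new - Fx) / (Fx @ Fx)  # PRP-type coefficient
          d = -F_new + beta * d                    # new conjugate direction
          x, Fx = x_new, F_new
          if np.linalg.norm(Fx) < 1e-10:
              break

      print(k, x, np.linalg.norm(Fx))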

  4. Parallel Solvers for Finite-Difference Modeling of Large-Scale, High-Resolution Electromagnetic Problems in MRI

    Directory of Open Access Journals (Sweden)

    Hua Wang

    2008-01-01

    Full Text Available With the movement of magnetic resonance imaging (MRI) technology towards higher field (and therefore frequency) systems, the interaction of the fields generated by the system with patients, healthcare workers, and internally within the system is attracting more attention. Due to the complexity of the interactions, computational modeling plays an essential role in the analysis, design, and development of modern MRI systems. As a result of the large computational scale associated with most of the MRI models, numerical schemes that rely on a single computer processing unit often require a significant amount of memory and long computational times, which makes modeling of these problems quite inefficient. This paper presents dedicated message passing interface (MPI) and OpenMP parallel computing solvers for finite-difference time-domain (FDTD) and quasistatic finite-difference (QSFD) schemes. The FDTD and QSFD methods have been widely used to model/analyze the induction of electric fields/currents in voxel phantoms and MRI system components at high and low frequencies, respectively. The power of the optimized parallel computing architectures is illustrated by distinct, large-scale field calculation problems and shows significant computational advantages over conventional single processing platforms.

  5. A modified parallel tree code for N-body simulation of the Large Scale Structure of the Universe

    CERN Document Server

    Becciani, U

    2000-01-01

    N-body codes to perform simulations of the origin and evolution of the Large Scale Structure of the Universe have improved significantly over the past decade both in terms of the resolution achieved and of reduction of the CPU time. However, state-of-the-art N-body codes hardly allow one to deal with particle numbers larger than a few 10^7, even on the largest parallel systems. In order to allow simulations with larger resolution, we have first re-considered the grouping strategy as described in Barnes (1990) (hereafter B90) and applied it with some modifications to our WDSH-PT (Work and Data SHaring - Parallel Tree) code. In the first part of this paper we will give a short description of the code adopting the Barnes and Hut (1986) algorithm (hereafter BH), and in particular of the memory and work distribution strategy applied to describe the data distribution on a CC-NUMA machine like the CRAY-T3E system. In the second part of the paper we describe the modification to the Barnes grouping strate...

  6. Large-scale parallel configuration interaction. II. Two- and four-component double-group general active space implementation with application to BiH

    DEFF Research Database (Denmark)

    Knecht, Stefan; Jensen, Hans Jørgen Aagaard; Fleig, Timo

    2010-01-01

    We present a parallel implementation of a large-scale relativistic double-group configuration interaction (CI) program. It can be used with a large variety of two- and four-component Hamiltonians. The parallel algorithm is based on a distributed data model in combination with a static load balanci...

  7. An Adaptive Sliding Mode Tracking Controller Using BP Neural Networks for a Class of Large-scale Nonlinear Systems

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A new type of controller, a BP-neural-network-based sliding mode controller, is developed in this paper for a class of large-scale nonlinear systems with unknown bounds of high-order interconnections. Decentralized BP neural networks are used to adaptively learn the uncertainty bounds of the interconnected subsystems in the Lyapunov sense, and the outputs of the decentralized BP neural networks are then used as the parameters of the sliding mode controller to compensate for the effects of subsystem uncertainties. Using this scheme, not only can strong robustness with respect to uncertain dynamics and nonlinearities be obtained, but the output tracking error between the actual output of each subsystem and the corresponding desired reference output can also asymptotically converge to zero. A simulation example is presented to support the validity of the proposed BP-neural-network-based sliding mode controller.
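
    As a rough sketch of the structure described, the functions below combine an equivalent control with a switching term whose gain is an adaptively learned uncertainty bound; the double-integrator plant, sliding-surface slope, and adaptation gain are illustrative assumptions, and a scalar estimate stands in for the BP network output.

```python
import numpy as np

def smc_control(e, e_dot, rho_hat, lam=2.0, eta=0.5):
    """One evaluation of a sliding mode law whose switching gain is an
    adaptively learned uncertainty bound (illustrative sketch; rho_hat
    stands in for the output of the decentralized BP network)."""
    s = e_dot + lam * e                    # sliding surface
    u_eq = -lam * e_dot                    # equivalent control for an assumed double integrator
    u_sw = -(rho_hat + eta) * np.sign(s)   # switching term sized by the learned bound
    return u_eq + u_sw, s

def update_bound(rho_hat, s, gamma=1.0, dt=1e-3):
    # Adaptation of the bound estimate, growing with |s| (Lyapunov-motivated)
    return rho_hat + gamma * abs(s) * dt
```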

  8. Adaptive variable structure control for large-scale time-delayed systems with unknown nonlinear dead-zone

    Institute of Scientific and Technical Information of China (English)

    Shen Qikun; Zhang Tianping

    2007-01-01

    The problem of adaptive fuzzy control for a class of large-scale, time-delayed systems with unknown nonlinear dead-zone is discussed here. Based on the principle of variable structure control, a design scheme of adaptive, decentralized, variable structure control is proposed. The approach removes the conditions that the dead-zone slopes be equal and that its boundaries be symmetric. In addition, it does not require the assumption that all parameters of the nonlinear dead-zone model and the lumped uncertainty are known constants. Adaptive compensation terms for the approximation errors are adopted to minimize the influence of modeling errors and parameter estimation errors. By theoretical analysis, the closed-loop control system is proved to be semiglobally uniformly ultimately bounded, with tracking errors converging to zero. Simulation results demonstrate the effectiveness of the approach.

  9. A note on nonlinear aspects of large-scale atmospheric instabilities

    Energy Technology Data Exchange (ETDEWEB)

    Wiin-Nielsen, A [The Royal Danish Academy of Sciences and Letters, Copenhagen (Denmark)

    2001-04-01

    A barotropic model with momentum transport, a simple baroclinic model with transport of heat only, and a baroclinic model with both momentum and heat transports are used to illustrate the instabilities and the nonlinear longer-term behavior of barotropic, baroclinic, and mixed barotropic-baroclinic processes in models without forcing and frictional dissipation. While the instabilities of such models are well known through analytical studies of the linear perturbation equations, these studies give no information on the long-term behavior, which can be studied only with the nonlinear equations. Numerical integrations of low-order nonlinear equations are therefore used to illustrate the long-term behavior of the models. The main aspects of the observed atmospheric energy processes may be reproduced by the most general low-order model used in this study, which contains meridional transports of sensible heat and momentum by the eddies. To reproduce the atmospheric energy diagram based on observational studies it would be necessary to include heating and frictional dissipation; these two processes are excluded in the present study, in which long-term behavior with rather large time scales is observed. Almost-periodic variations are obtained from initial states with moderate values of the zonal parameters.

  10. Partition-of-unity finite-element method for large scale quantum molecular dynamics on massively parallel computational platforms

    Energy Technology Data Exchange (ETDEWEB)

    Pask, J E; Sukumar, N; Guney, M; Hu, W

    2011-02-28

    Over the course of the past two decades, quantum mechanical calculations have emerged as a key component of modern materials research. However, the solution of the required quantum mechanical equations is a formidable task and this has severely limited the range of materials systems which can be investigated by such accurate, quantum mechanical means. The current state of the art for large-scale quantum simulations is the planewave (PW) method, as implemented in the now-ubiquitous VASP, ABINIT, and QBox codes, among many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, and in which every basis function overlaps every other at every point, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires substantial nonlocal communications in parallel implementations, placing critical limits on scalability. In recent years, real-space methods such as finite differences (FD) and finite elements (FE) have been developed to address these deficiencies by reformulating the required quantum mechanical equations in a strictly local representation. However, while addressing both resolution and parallel-communications problems, such local real-space approaches have been plagued by one key disadvantage relative to planewaves: the excessive degrees of freedom (grid points, basis functions) needed to achieve the required accuracies. And so, despite critical limitations, the PW method remains the standard today. In this work, we show for the first time that this key remaining disadvantage of real-space methods can in fact be overcome: by building known atomic physics into the solution process using modern partition-of-unity (PU) techniques in finite element analysis. Indeed, our results show order-of-magnitude reductions in basis size relative to state-of-the-art planewave based methods. The method developed here is

  11. Nonlinear Model-Based Predictive Control applied to Large Scale Cryogenic Facilities

    CERN Document Server

    Blanco Vinuela, Enrique; de Prada Moraga, Cesar

    2001-01-01

    The thesis addresses the study, analysis, development, and finally the real implementation of an advanced control system for the 1.8 K Cooling Loop of the LHC (Large Hadron Collider) accelerator. The LHC is the next accelerator being built at CERN (European Center for Nuclear Research); it will use superconducting magnets operating below a temperature of 1.9 K along a circumference of 27 kilometers. The temperature of these magnets is a control parameter with strict operating constraints. The first control implementations applied a procedure that included linear identification, modelling and regulation using a linear predictive controller. This largely improved the overall performance of the plant with respect to a classical PID regulator, but the nature of the cryogenic processes pointed out the need for a more adequate technique, such as a nonlinear methodology. This thesis is a first step towards developing a global regulation strategy for the overall control of the LHC cells when they operate simultaneously....

  12. Robust decentralized hybrid adaptive output feedback fuzzy control for a class of large-scale MIMO nonlinear systems and its application to AHS.

    Science.gov (United States)

    Huang, Yi-Shao; Liu, Wei-Ping; Wu, Min; Wang, Zheng-Wu

    2014-09-01

    This paper presents a novel observer-based decentralized hybrid adaptive fuzzy control scheme for a class of large-scale continuous-time multiple-input multiple-output (MIMO) uncertain nonlinear systems whose state variables are unmeasurable. The scheme integrates fuzzy logic systems, state observers, and strictly positive real conditions to deal with three issues in the control of a large-scale MIMO uncertain nonlinear system: algorithm design, controller singularity, and transient response. Then, the design of the hybrid adaptive fuzzy controller is extended to address a general large-scale uncertain nonlinear system. It is shown that the resultant closed-loop large-scale system remains asymptotically stable and the tracking error converges to zero. The advantages of our scheme are demonstrated by simulations. Copyright © 2014. Published by Elsevier Ltd.

  13. A Nonlinear Multiobjective Bilevel Model for Minimum Cost Network Flow Problem in a Large-Scale Construction Project

    Directory of Open Access Journals (Sweden)

    Jiuping Xu

    2012-01-01

    Full Text Available The aim of this study is to deal with a minimum cost network flow problem (MCNFP) in a large-scale construction project using a nonlinear multiobjective bilevel model with birandom variables. The main target of the upper level is to minimize both direct and transportation time costs. The target of the lower level is to minimize transportation costs. After an analysis of the birandom variables, an expectation multiobjective bilevel programming model with chance constraints is formulated to incorporate decision makers' preferences. To handle the identified special conditions, an equivalent crisp model is proposed, and a multiobjective bilevel particle swarm optimization (MOBLPSO) algorithm is developed to solve it. The Shuibuya Hydropower Project is used as a real-world example to verify the proposed approach. Results and analysis are presented to highlight the performance of the MOBLPSO, which is very effective and efficient compared to a genetic algorithm and a simulated annealing algorithm.
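
    The MOBLPSO algorithm itself is specific to the paper; the sketch below shows only the plain single-objective particle swarm core on which such methods build, with standard inertia and acceleration coefficients assumed.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization (illustrative core of MOBLPSO)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
        x = np.clip(x + v, lo, hi)                             # keep particles in bounds
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()                   # global best
    return g, pbest_val.min()

# Example: minimize a shifted quadratic
best_x, best_f = pso_minimize(lambda z: ((z - 1.0)**2).sum(), ([-5, -5], [5, 5]))
```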

  14. Parallel Nonlinear Optimization for Astrodynamic Navigation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — CU Aerospace proposes the development of a new parallel nonlinear program (NLP) solver software package. NLPs allow the solution of complex optimization problems,...

  15. Large-scale parallel configuration interaction. I. Nonrelativistic and scalar-relativistic general active space implementation with application to (Rb-Ba)+

    DEFF Research Database (Denmark)

    Knecht, Stefan; Jensen, Hans Jørgen Aagaard; Fleig, Timo

    2008-01-01

    The implementation is based on the message passing interface and a distributed data model in order to efficiently exploit key features of various modern computer architectures. We exemplify the nearly linear scalability of our parallel code in large-scale multireference configuration interaction (MRCI) calculations, and we...

  16. (Studies of ocean predictability at decade to century time scales using a global ocean general circulation model in a parallel computing environment). [Large Scale Geostrophic Model]

    Energy Technology Data Exchange (ETDEWEB)

    1992-03-10

    The first phase of the proposed work is largely completed on schedule. Scientists at the San Diego Supercomputer Center (SDSC) succeeded in putting a version of the Hamburg isopycnal coordinate ocean model (OPYC) onto the INTEL parallel computer. Due to the slow run speeds of the OPYC on the parallel machine, another ocean model is being used during the first part of phase 2. The model chosen is the Large Scale Geostrophic (LSG) model from the Max Planck Institute.

  17. Nonlinear parallel momentum transport in strong turbulence

    CERN Document Server

    Wang, Lu; Diamond, P H

    2015-01-01

    Most existing theoretical studies of momentum transport focus on calculating the Reynolds stress based on quasilinear theory, without considering the nonlinear momentum flux. However, a recent experiment on TORPEX found that the nonlinear toroidal momentum flux induced by blobs makes a significant contribution as compared to the Reynolds stress [Labit et al., Phys. Plasmas 18, 032308 (2011)]. In this work, the nonlinear parallel momentum flux in strong turbulence is calculated by using the three-dimensional Hasegawa-Mima equation. It is shown that the nonlinear diffusivity is smaller than the quasilinear diffusivity from the Reynolds stress. However, the leading-order nonlinear residual stress can be comparable to the quasilinear residual stress, and so could be important to intrinsic rotation in tokamak edge plasmas. A key difference from the quasilinear residual stress is that parallel fluctuation spectrum asymmetry is not required for the nonlinear residual stress.

  18. Fault diagnosis of nonlinear and large-scale processes using novel modified kernel Fisher discriminant analysis approach

    Science.gov (United States)

    Shi, Huaitao; Liu, Jianchang; Wu, Yuhou; Zhang, Ke; Zhang, Lixiu; Xue, Peng

    2016-04-01

    Timely and accurate fault diagnosis is significant for improving the dependability of industrial processes. In this study, fault diagnosis of nonlinear and large-scale processes by variable-weighted kernel Fisher discriminant analysis (KFDA) based on improved biogeography-based optimisation (IBBO) is proposed, referred to as IBBO-KFDA, where IBBO is used to determine the parameters of variable-weighted KFDA, and variable-weighted KFDA is used to solve the multi-classification overlapping problem. The main contributions of this work, which further improve the performance of KFDA for fault diagnosis, are four-fold. First, a nonlinear fault diagnosis approach with variable-weighted KFDA is developed for maximising the separation between overlapping fault samples. Second, the kernel parameters and the feature selection of variable-weighted KFDA are simultaneously optimised using IBBO. Third, a single fitness function that combines the erroneous diagnosis rate with the feature cost is created and serves as the target function in the optimisation problem, and a novel mixed kernel function is introduced to improve the classification capability in the feature space and the diagnosis accuracy of IBBO-KFDA. Fourth, an IBBO approach is developed to obtain solutions of better quality and faster convergence speed. The proposed IBBO-KFDA method is first used on the Tennessee Eastman process benchmark data sets to validate its feasibility and efficiency. Then, IBBO-KFDA is applied to diagnose faults of an automation gauge control system. Simulation results demonstrate that IBBO-KFDA can obtain better kernel parameters and feature vectors with a lower computing cost, higher diagnosis accuracy and better real-time capacity.
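
    The abstract describes the single fitness function only qualitatively. A plausible weighted form is sketched below; the weights and the normalized cost model are assumptions, not the paper's formula.

```python
def kfda_fitness(err_rate, feature_mask, feature_costs, w_err=0.9, w_cost=0.1):
    """Combine erroneous-diagnosis rate with feature cost (illustrative form).

    err_rate      : misclassification rate of the variable-weighted KFDA model
    feature_mask  : 0/1 list marking the selected features
    feature_costs : per-feature cost, normalized so the total is 1
    """
    cost = sum(c for c, m in zip(feature_costs, feature_mask) if m)
    return w_err * err_rate + w_cost * cost   # lower is better; IBBO minimizes this
```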

  19. Breaking Computational Barriers: Real-time Analysis and Optimization with Large-scale Nonlinear Models via Model Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Carlberg, Kevin Thomas [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Drohmann, Martin [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Tuminaro, Raymond S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Computational Mathematics; Boggs, Paul T. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Optimization and Uncertainty Estimation

    2014-10-01

    Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties--such as energy conservation and symplectic time-evolution maps--are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity--defined as the number of Newton-like iterations performed over the course of the simulation--by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order

  20. Novel probabilistic and distributed algorithms for guidance, control, and nonlinear estimation of large-scale multi-agent systems

    Science.gov (United States)

    Bandyopadhyay, Saptarshi

    Multi-agent systems are widely used for constructing a desired formation shape, exploring an area, surveillance, coverage, and other cooperative tasks. This dissertation introduces novel algorithms in the three main areas of shape formation, distributed estimation, and attitude control of large-scale multi-agent systems. In the first part of this dissertation, we address the problem of shape formation for thousands to millions of agents. Here, we present two novel algorithms for guiding a large-scale swarm of robotic systems into a desired formation shape in a distributed and scalable manner. These probabilistic swarm guidance algorithms adopt an Eulerian framework, where the physical space is partitioned into bins and the swarm's density distribution over each bin is controlled using tunable Markov chains. In the first algorithm - Probabilistic Swarm Guidance using Inhomogeneous Markov Chains (PSG-IMC) - each agent determines its bin transition probabilities using a time-inhomogeneous Markov chain that is constructed in real-time using feedback from the current swarm distribution. This PSG-IMC algorithm minimizes the expected cost of the transitions required to achieve and maintain the desired formation shape, even when agents are added to or removed from the swarm. The algorithm scales well with a large number of agents and complex formation shapes, and can also be adapted for area exploration applications. In the second algorithm - Probabilistic Swarm Guidance using Optimal Transport (PSG-OT) - each agent determines its bin transition probabilities by solving an optimal transport problem, which is recast as a linear program. In the presence of perfect feedback of the current swarm distribution, this algorithm minimizes the given cost function, guarantees faster convergence, reduces the number of transitions for achieving the desired formation, and is robust to disturbances or damages to the formation. We demonstrate the effectiveness of these two proposed swarm
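
    A minimal sketch of the Eulerian bin-transition step described here: the physical space is binned, each agent samples its next bin from a row-stochastic matrix, and the matrix is rebuilt from feedback on the current swarm distribution. The particular construction of M below is an illustrative assumption, not the PSG-IMC design.

```python
import numpy as np

def transition_matrix(current, desired, adjacency, eps=1e-12):
    """Row-stochastic bin-transition matrix steering `current` toward
    `desired` over a bin adjacency graph (illustrative construction)."""
    current = np.asarray(current, float)
    desired = np.asarray(desired, float)
    n = len(desired)
    M = np.zeros((n, n))
    for i in range(n):
        nbrs = np.flatnonzero(adjacency[i])
        deficit = np.maximum(desired[nbrs] - current[nbrs], 0.0)
        if current[i] <= desired[i] or deficit.sum() < eps:
            M[i, i] = 1.0                       # bin not oversupplied: stay put
        else:
            move = min(1.0, (current[i] - desired[i]) / max(current[i], eps))
            M[i, nbrs] = move * deficit / deficit.sum()
            M[i, i] = 1.0 - move                # remaining mass stays
    return M

def agent_step(bin_idx, M, rng):
    # Each agent independently samples its next bin from its current row
    return rng.choice(len(M), p=M[bin_idx])
```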

  1. Large Scale Earth’s Bow Shock with Northern IMF as Simulated by PIC Code in Parallel with MHD Model

    Indian Academy of Sciences (India)

    Suleiman Baraka

    2016-06-01

    In this paper, we propose a 3D kinetic model (particle-in-cell, PIC) for the description of the large scale Earth's bow shock. The proposed version is stable and does not require huge or extensive computer resources. Because PIC simulations work with scaled plasma and field parameters, we also propose to validate our code by comparing its results with the available MHD simulations under the same scaled solar wind (SW) and interplanetary magnetic field (IMF) conditions. We report new results from the two models. In both codes the Earth's bow shock position is found to be $\approx 14.8 R_{\rm E}$ along the Sun–Earth line, and $\approx 29 R_{\rm E}$ on the dusk side. Those findings are consistent with past in situ observations. Both simulations reproduce the theoretical jump conditions at the shock. However, the PIC code density and temperature distributions are inflated and slightly shifted sunward when compared to the MHD results. Kinetic electron motions and reflected ions upstream may cause this sunward shift. Species distributions in the foreshock region are depicted within the transition of the shock (measured to be $\approx 2 c/\omega_{pi}$ for $\Theta_{Bn}=90^{\circ}$ and $M_{\rm MS} = 4.7$) and in the downstream. The size of the foot jump in the magnetic field at the shock is measured to be $1.7 c/\omega_{pi}$. In the foreshock region, the thermal velocity is found to be 213 km s$^{-1}$ at $15 R_{\rm E}$, and 63 km s$^{-1}$ at $12 R_{\rm E}$ (magnetosheath region). Despite the large cell size of the current version of the PIC code, it retains the macrostructure of planetary magnetospheres in a very short computation time, so it can be used for pedagogical test purposes. It is also complementary to MHD in deepening our understanding of the large scale magnetosphere.

  2. Large Scale Earth's Bow Shock with Northern IMF as simulated by PIC code in parallel with MHD model

    CERN Document Server

    Baraka, Suleiman M

    2016-01-01

    In this paper, we propose a 3D kinetic model (particle-in-cell, PIC) for the description of the large scale Earth's bow shock. The proposed version is stable and does not require huge or extensive computer resources. Because PIC simulations work with scaled plasma and field parameters, we also propose to validate our code by comparing its results with the available MHD simulations under the same scaled solar wind (SW) and (IMF) conditions. We report new results from the two models. In both codes the Earth's bow shock position is found to be ~14.8 RE along the Sun-Earth line, and ~29 RE on the dusk side. Those findings are consistent with past in situ observations. Both simulations reproduce the theoretical jump conditions at the shock. However, the PIC code density and temperature distributions are inflated and slightly shifted sunward when compared to the MHD results. Kinetic electron motions and reflected ions upstream may cause this sunward shift. Species distributions in the foreshock region are depicted...

  3. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    Science.gov (United States)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring, and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require increasingly computationally demanding methods for analysis and control design as the network size and the node system/interaction complexity increase. It is therefore a challenging problem to find scalable computational methods for distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with the MATLAB toolbox. The stabilisability of each node dynamic is a sufficient assumption to design a globally stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is to allow a move from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with the network
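
    The paper's stabilisability conditions are posed as LMIs solvable with standard toolboxes. As a minimal illustration of that machinery (here in Python with CVXPY rather than MATLAB), the snippet below checks a Lyapunov-type LMI; the matrix A and the conditions themselves are placeholders, far simpler than the paper's distributed design conditions.

```python
import numpy as np
import cvxpy as cp

# Minimal LMI feasibility problem of the Lyapunov type:
# find P > 0 with A^T P + P A < 0 (an illustrative stand-in for
# the paper's distributed-stabilisability LMIs).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # an assumed stable node matrix
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                      # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]       # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status, P.value)
```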

  4. Parallel data-driven decomposition algorithm for large-scale datasets: with application to transitional boundary layers

    Science.gov (United States)

    Sayadi, Taraneh; Schmid, Peter J.

    2016-10-01

    Many fluid flows of engineering interest, though very complex in appearance, can be approximated by low-order models governed by a few modes, able to capture the dominant behavior (dynamics) of the system. This feature has fueled the development of various methodologies aimed at extracting dominant coherent structures from the flow. Some of the more general techniques are based on data-driven decompositions, most of which rely on performing a singular value decomposition (SVD) on a formulated snapshot (data) matrix. The amount of experimentally or numerically generated data expands as more detailed experimental measurements and increased computational resources become readily available. Consequently, the data matrix to be processed will consist of far more rows than columns, resulting in a so-called tall-and-skinny (TS) matrix. Ultimately, the SVD of such a TS data matrix can no longer be performed on a single processor, and parallel algorithms are necessary. The present study employs the parallel TSQR algorithm of Demmel et al. (SIAM J Sci Comput 34(1):206-239, 2012), which is further used as a basis of the underlying parallel SVD. This algorithm is shown to scale well on machines with a large number of processors and, therefore, allows the decomposition of very large datasets. In addition, the simplicity of its implementation and the minimum required communication makes it suitable for integration in existing numerical solvers and data decomposition techniques. Examples that demonstrate the capabilities of highly parallel data decomposition algorithms include transitional processes in compressible boundary layers without and with induced flow separation.
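
    The TSQR structure is easy to convey: row blocks are QR-factored independently and the stacked R factors are reduced in a second factorization; the SVD then follows from the small R factor. The serial two-level NumPy sketch below captures this shape, whereas the actual algorithm is a parallel, communication-avoiding tree reduction.

```python
import numpy as np

def tsqr(A, n_blocks=4):
    """Two-level TSQR for a tall-and-skinny matrix A (serial sketch).

    Each row block is QR-factored independently; the stacked R factors
    are factored again, and the block Qs are combined."""
    blocks = np.array_split(A, n_blocks, axis=0)
    Qs, Rs = zip(*(np.linalg.qr(B) for B in blocks))     # local QRs (parallelizable)
    Q2, R = np.linalg.qr(np.vstack(Rs))                  # reduce the stacked R factors
    # Combine: each local Q is multiplied by its slice of Q2
    Q2_parts = np.split(Q2, np.cumsum([r.shape[0] for r in Rs])[:-1], axis=0)
    Q = np.vstack([q @ p for q, p in zip(Qs, Q2_parts)])
    return Q, R

# Check on a random tall-and-skinny matrix
A = np.random.default_rng(1).standard_normal((1000, 5))
Q, R = tsqr(A)
assert np.allclose(Q @ R, A)
```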

  5. Transient and stability analysis of large scale rotor-bearing system with strong nonlinear elements by the mode summation-transfer matrix method

    Science.gov (United States)

    Gu, Zhiping

    This paper extends the Riccati transfer matrix method to the transient and stability analysis of large-scale rotor-bearing systems with strong nonlinear elements, and proposes a mode summation-transfer matrix method, in which the field transfer matrix of a distributed-mass uniform shaft segment is obtained with the aid of the idea of mode summation and the Newmark beta formulation, and the Riccati transfer matrix method is adopted to stabilize the boundary value problem of the nonlinear systems. In this investigation, the real nonlinearity of the strong nonlinear elements is considered, not linearized, and the advantages of the Riccati transfer matrix are retained. This method is therefore especially applicable to analyzing the transient response and stability of large-scale rotor-bearing systems with strong nonlinear elements. One example, a single-spool rotating system with strong nonlinear elements, is given. The obtained results show that this method is superior to that of Gu and Chen (1990) in accuracy, stability, and economy.

  6. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hasenkamp, Daren; Sim, Alexander; Wehner, Michael; Wu, Kesheng

    2010-09-30

    Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure that the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup comparable to running the same analysis task using MPI. However, compared to MPI-based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure: as long as a single VM is running, the analysis can make progress, whereas the whole MPI analysis job fails as soon as one MPI node fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.

  7. A parallel fast multipole BEM and its applications to large-scale analysis of 3-D fiber-reinforced composites

    Institute of Scientific and Technical Information of China (English)

    Ting Lei; Zhenhan Yao; Haitao Wang; Pengbo Wang

    2006-01-01

    In this paper, an adaptive boundary element method (BEM) is presented for solving 3-D elasticity problems. The numerical scheme is accelerated by the new version of the fast multipole method (FMM) and parallelized on distributed memory architectures. The resulting solver is applied to the study of representative volume elements (RVE) for short-fiber-reinforced composites with complex inclusion geometry. Numerical examples performed on a 32-processor cluster show that the proposed method is both accurate and efficient, and can solve problems of large size that are challenging to existing state-of-the-art domain methods.

  8. Model Order Reduction of Large-Scale Finite Element Systems in an MPI Parallelized Environment for Usage in Multibody Simulation

    Directory of Open Access Journals (Sweden)

    Volzer Thomas

    2016-12-01

    Full Text Available The use of elastic bodies within multibody simulations has become more and more important in recent years. To include elastic bodies, described as finite element models, in multibody simulations, the dimension of the system of ordinary differential equations must be reduced by projection. For this purpose, the modal reduction method, a component mode synthesis (CMS) based method, and a moment-matching method are used in this work. Due to the ever-increasing size of the non-reduced systems, the calculation of the projection matrix places a large demand on computational resources and cannot be done on usual serial computers with the available memory. In this paper, the model reduction software Morembs++ is presented, using a parallelization concept based on the message passing interface to satisfy the memory needs and reduce the runtime of the model reduction process. Additionally, the behaviour of the Block-Krylov-Schur eigensolver, implemented in the Anasazi package of the Trilinos project, is analysed with regard to the choice of the size of the Krylov basis, the block size, and the number of blocks. Furthermore, an iterative solver is considered within the CMS-based method.

  9. A Scalable O(N) Algorithm for Large-Scale Parallel First-Principles Molecular Dynamics Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Osei-Kuffuor, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fattebert, Jean-Luc [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-01-01

    Traditional algorithms for first-principles molecular dynamics (FPMD) simulations only gain a modest capability increase from current petascale computers, due to their O(N^3) complexity and their heavy use of global communications. To address this issue, we are developing a truly scalable O(N)-complexity FPMD algorithm, based on density functional theory (DFT), which avoids global communications. The computational model uses a general nonorthogonal orbital formulation for the DFT energy functional, which requires knowledge of selected elements of the inverse of the associated overlap matrix. We present a scalable algorithm for approximately computing selected entries of the inverse of the overlap matrix, based on an approximate-inverse technique, by inverting local blocks corresponding to principal submatrices of the global overlap matrix. The new FPMD algorithm exploits sparsity and uses nearest-neighbor communication to provide a computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic orbitals are confined, and a cutoff beyond which the entries of the overlap matrix can be omitted when computing selected entries of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to O(100K) atoms on O(100K) processors, with a wall-clock time of O(1) minute per molecular dynamics time step.
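
    The selected-inverse step can be sketched directly: for each orbital index, invert only the principal submatrix over its neighborhood and keep the corresponding row. The dense NumPy version below is an illustration only; the neighborhood choice stands in for the paper's localization-region and cutoff criteria, and the real code works with distributed sparse data.

```python
import numpy as np

def selected_inverse(S, neighborhoods):
    """Approximate selected entries of inv(S) for a (sparse) overlap matrix S.

    For each index i, invert the principal submatrix of S restricted to
    neighborhoods[i] (an index list containing i) and keep row i of that
    local inverse. Entries outside the neighborhood are treated as zero."""
    Sinv_sel = np.zeros_like(S)
    for i, nbr in enumerate(neighborhoods):
        nbr = np.asarray(nbr)
        local = np.linalg.inv(S[np.ix_(nbr, nbr)])   # small local inversion
        row = local[np.flatnonzero(nbr == i)[0]]     # row of i within the block
        Sinv_sel[i, nbr] = row
    return Sinv_sel
```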

  10. PANET: a GPU-based tool for fast parallel analysis of robustness dynamics and feed-forward/feedback loop structures in large-scale biological networks.

    Science.gov (United States)

    Trinh, Hung-Cuong; Le, Duc-Hau; Kwon, Yung-Keun

    2014-01-01

    It has been a challenge in systems biology to unravel relationships between structural properties and dynamic behaviors of biological networks. A Cytoscape plugin named NetDS was recently proposed to analyze the robustness-related dynamics and feed-forward/feedback loop structures of biological networks. Despite such a useful function, limitations exist on the network size that can be analyzed, due to high computational costs. In addition, the plugin cannot verify whether an intrinsic property can be induced by an observed result, because it has no function to simulate the observation on a large number of random networks. To overcome these limitations, we have developed a novel software tool, PANET. First, the time-consuming parts of NetDS were redesigned to be processed in parallel using the OpenCL library. This approach utilizes the full computing power of multi-core central processing units and graphics processing units. Eventually, this made it possible to investigate a large-scale network such as a human signaling network with 1,609 nodes and 5,063 links. We also developed a new function to perform a batch-mode simulation, in which a large number of random networks are generated and robustness calculations and feed-forward/feedback loop examinations are conducted on them. This helps us to determine whether the findings in real biological networks are valid in arbitrary random networks. We tested our plugin in two case studies based on two large-scale signaling networks and found interesting results regarding relationships between coherently coupled feed-forward/feedback loops and robustness. In addition, we verified whether or not those findings are consistently conserved in random networks through batch-mode simulations. Taken together, our plugin is expected to effectively investigate various relationships between dynamics and structural properties in large-scale networks. Our software tool, user manual and example datasets are freely available at http://panet-csc.sourceforge.net/.

  11. PANET: a GPU-based tool for fast parallel analysis of robustness dynamics and feed-forward/feedback loop structures in large-scale biological networks.

    Directory of Open Access Journals (Sweden)

    Hung-Cuong Trinh

    Full Text Available It has been a challenge in systems biology to unravel relationships between structural properties and dynamic behaviors of biological networks. A Cytoscape plugin named NetDS was recently proposed to analyze the robustness-related dynamics and feed-forward/feedback loop structures of biological networks. Despite such a useful function, limitations exist on the network size that can be analyzed, due to high computational costs. In addition, the plugin cannot verify whether an intrinsic property can be induced by an observed result, because it has no function to simulate the observation on a large number of random networks. To overcome these limitations, we have developed a novel software tool, PANET. First, the time-consuming parts of NetDS were redesigned to be processed in parallel using the OpenCL library. This approach utilizes the full computing power of multi-core central processing units and graphics processing units. Eventually, this made it possible to investigate a large-scale network such as a human signaling network with 1,609 nodes and 5,063 links. We also developed a new function to perform a batch-mode simulation, in which a large number of random networks are generated and robustness calculations and feed-forward/feedback loop examinations are conducted on them. This helps us to determine whether the findings in real biological networks are valid in arbitrary random networks. We tested our plugin in two case studies based on two large-scale signaling networks and found interesting results regarding relationships between coherently coupled feed-forward/feedback loops and robustness. In addition, we verified whether or not those findings are consistently conserved in random networks through batch-mode simulations. Taken together, our plugin is expected to effectively investigate various relationships between dynamics and structural properties in large-scale networks. Our software tool, user manual and example datasets are freely available at http://panet-csc.sourceforge.net/.

  12. Effects of torsional degree of freedom, geometric nonlinearity, and gravity on aeroelastic behavior of large-scale horizontal axis wind turbine blades under varying wind speed conditions

    DEFF Research Database (Denmark)

    Jeong, Min-Soo; Cha, Myung-Chan; Kim, Sang-Woo

    2014-01-01

    Modern horizontal axis wind turbine blades are long, slender, and flexible structures that can undergo considerable deformation, leading to blade failures (e.g., blade-tower collision). For this reason, it is important to estimate blade behaviors accurately when designing large-scale wind turbines. In this study, a numerical analysis considering blade torsional degree of freedom, geometric nonlinearity, and gravity was utilized to examine the effects of these factors on the aeroelastic blade behavior of a large-scale horizontal axis wind turbine. The results predicted that flapwise deflection is mainly affected by the torsional degree of freedom, which causes the blade bending deflections to couple to torsional deformation, thereby varying the aerodynamic loads through changes in the effective angle of attack. Edgewise deflection and torsional deformation are mostly influenced by the periodic...

  13. Large-scale parallel configuration interaction. I. Nonrelativistic and scalar-relativistic general active space implementation with application to (Rb-Ba)+.

    Science.gov (United States)

    Knecht, Stefan; Jensen, Hans Jørgen Aagaard; Fleig, Timo

    2008-01-07

    We present a parallel implementation of a string-driven general active space configuration interaction program for nonrelativistic and scalar-relativistic electronic-structure calculations. The code has been modularly incorporated in the DIRAC quantum chemistry program package. The implementation is based on the message passing interface and a distributed data model in order to efficiently exploit key features of various modern computer architectures. We exemplify the nearly linear scalability of our parallel code in large-scale multireference configuration interaction (MRCI) calculations, and we discuss the parallel speedup with respect to machine-dependent aspects. The largest sample MRCI calculation includes 1.5×10^9 Slater determinants. Using the new code we determine for the first time the full short-range electronic potentials and spectroscopic constants for the ground state and for eight low-lying excited states of the weakly bound molecular system (Rb-Ba)+ with the spin-orbit-free Dirac formalism and using extensive uncontracted basis sets. The time required to compute these electronic states to full convergence for (Rb-Ba)+ in a single-point MRCI calculation correlating 18 electrons and using 16 cores was reduced from more than 10 days to less than 1 day.

  14. ROBUST SLIDING MODE DECENTRALIZED CONTROL FOR A CLASS OF NONLINEAR INTERCONNECTED LARGE-SCALE SYSTEM WITH NEURAL NETWORKS

    Institute of Scientific and Technical Information of China (English)

    CHEN Mou; JIANG Chang-sheng; CHEN Wen-hua

    2004-01-01

    A new decentralized robust control method is discussed for a class of nonlinear interconnected large-scale systems with unknown bounded disturbances and unknown nonlinear function terms. A decentralized control law is proposed which combines the approximation capability of neural networks with sliding mode control. The decentralized controller consists of an equivalent controller and an adaptive sliding mode controller. The sliding mode controller is a robust controller used to reduce the tracking error of the control system. The neural networks are used to approximate the unknown nonlinear functions, and the approximation errors of the neural networks are incorporated into the weight update law to improve the performance of the system. Finally, an example demonstrates the effectiveness of the decentralized control method.

  15. Response attenuation in a large-scale structure subjected to blast excitation utilizing a system of essentially nonlinear vibration absorbers

    Science.gov (United States)

    Wierschem, Nicholas E.; Hubbard, Sean A.; Luo, Jie; Fahnestock, Larry A.; Spencer, Billie F.; McFarland, D. Michael; Quinn, D. Dane; Vakakis, Alexander F.; Bergman, Lawrence A.

    2017-02-01

    Limiting peak stresses and strains in a structure subjected to high-energy, short-duration transient loadings, such as blasts, is a challenging problem, largely due to the well-known insensitivity of the first few cycles of the structural response to damping. Linear isolation, while a potential solution, requires a very low fundamental natural frequency to be effective, resulting in large nearly-rigid body displacement of the structure, while linear vibration absorbers have little or no effect on the early-time response where relative motions, and thus stresses and strains, are at their highest levels. The problem has become increasingly important in recent years with the expectation of blast-resistance as a design requirement in new construction. In this paper, the problem is examined experimentally and computationally in the context of offset-blast loading applied to a custom-built nine story steel frame structure. A fully-passive response mitigation system consisting of six lightweight, essentially nonlinear vibration absorbers (termed nonlinear energy sinks - NESs) is optimized and deployed on the upper two floors of this structure. Two NESs have vibro-impact nonlinearities and the other four possess smooth but essentially nonlinear stiffnesses. Results of the computational and experimental study demonstrate the efficacy of the proposed passive nonlinear mitigation system to rapidly and efficiently attenuate the global structural response, even at early time (i.e., starting at the first response cycle), thus minimizing the peak demand on the structure. This is achieved by nonlinear redistribution of the blast energy within the modal space through low-to-high energy scattering due to the action of the NESs. The experimental results validate the theoretical predictions.

  16. Performance Characteristics of Hybrid MPI/OpenMP Implementations of NAS Parallel Benchmarks SP and BT on Large-Scale Multicore Clusters

    KAUST Repository

    Wu, X.

    2011-07-18

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore clusters provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node, and MPI can be used for communication between nodes. In this paper, we use the Scalar Pentadiagonal (SP) and Block Tridiagonal (BT) benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore clusters, Intrepid (BlueGene/P) at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on Intrepid and Jaguar. We also use performance tools and MPI trace libraries available on these clusters to further investigate the performance characteristics of the hybrid SP and BT. © 2011 The Author. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved.
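
    The benchmarks themselves are Fortran/C codes using MPI plus OpenMP; the Python sketch below (using mpi4py, with threads inside each process) only mirrors the hybrid structure, and the array contents and the reduction are placeholders.

```python
# Structural sketch of the hybrid pattern: MPI between processes (one per
# node), shared-memory threads within a process.
# Run with, e.g.: mpiexec -n 4 python hybrid.py
from concurrent.futures import ThreadPoolExecutor
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Node-local data (placeholder workload)
local = np.arange(rank * 1_000_000, (rank + 1) * 1_000_000, dtype=np.float64)

def partial_sum(chunk):
    return chunk.sum()          # NumPy releases the GIL inside sum()

# Intra-node parallelism: threads over chunks of the node-local array
with ThreadPoolExecutor(max_workers=4) as pool:
    node_sum = sum(pool.map(partial_sum, np.array_split(local, 4)))

# Inter-node parallelism: MPI reduction across processes
total = comm.allreduce(node_sum, op=MPI.SUM)
if rank == 0:
    print("global sum:", total)
```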

  17. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node and MPI can be used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  18. Parallel Solutions for Large-Scale General Sparse Nonlinear Systems of Equations

    Institute of Scientific and Technical Information of China (English)

    Hu Chengyi

    1996-01-01

    In solving application problems, many large-scale nonlinear systems of equations result in sparse Jacobian matrices. Such nonlinear systems are called sparse nonlinear systems. The irregularity of the locations of the nonzero elements of a general sparse matrix makes it very difficult to map sparse matrix computations to multiprocessors for parallel processing in a well-balanced manner. To overcome this difficulty, we define a new storage scheme for general sparse matrices in this paper. With the new storage scheme, we develop parallel algorithms to solve large-scale general sparse systems of equations by interval Newton/generalized bisection methods, which reliably find all numerical solutions within a given domain. In Section 1, we provide an introduction to the addressed problem and the interval Newton methods. In Section 2, some currently used storage schemes for sparse systems are reviewed. In Section 3, new index schemes to store general sparse matrices are reported. In Section 4, we present a parallel algorithm to evaluate a general sparse Jacobian matrix. In Section 5, we present a parallel algorithm to solve the corresponding interval linear system by the all-row preconditioned scheme. Conclusions and future work are discussed in Section 6.

  19. Block and parallel modelling of broad domain nonlinear continuous mapping based on NN

    Institute of Scientific and Technical Information of China (English)

    Yang Guowei; Tu Xuyan; Wang Shoujue

    2006-01-01

    The necessity of block and parallel modeling of nonlinear continuous mappings with neural networks (NN) is first expounded quantitatively. Then, a practical approach for the block and parallel modeling of nonlinear continuous mappings with NN is proposed. Finally, an example is given indicating that the method proposed in this paper can be realized with suitable existing software. The experimental results for the model of the 3-D Mexican straw hat indicate that block and parallel modeling based on NN is more precise and faster in computation than direct modeling, and that it is a concrete example of, and a development of, the large-scale general model established by Tu Xuyan.

  20. NGS-QCbox and Raspberry for Parallel, Automated and Rapid Quality Control Analysis of Large-Scale Next Generation Sequencing (Illumina) Data

    Science.gov (United States)

    Katta, Mohan A. V. S. K.; Khan, Aamir W.; Doddamani, Dadakhalandar; Thudi, Mahendar; Varshney, Rajeev K.

    2015-01-01

    Rapid popularity and adaptation of next generation sequencing (NGS) approaches have generated huge volumes of data. High-throughput platforms like Illumina HiSeq produce terabytes of raw data that require quick processing. Quality control of the data is an important component prior to the downstream analyses. To address these issues, we have developed a quality control pipeline, NGS-QCbox, that scales up to process hundreds or thousands of samples. Raspberry is an in-house tool, developed in C utilizing HTSlib (v1.2.1) (http://htslib.org), for computing read/base-level statistics. It can be used as a stand-alone application and can process both compressed and uncompressed FASTQ-format files. NGS-QCbox integrates Raspberry with other open-source tools for alignment (Bowtie2), SNP calling (SAMtools) and other utilities (bedtools) towards analyzing raw NGS data at higher efficiency and in a high-throughput manner. The pipeline implements batch processing of jobs in parallel using Bpipe (https://github.com/ssadedin/bpipe) and, internally, fine-grained task parallelization utilizing OpenMP. It reports read and base statistics along with genome coverage and variants in a user-friendly format. The pipeline presents a simple menu-driven interface and can be used in either quick or complete mode. In addition, the pipeline in quick mode outperforms other similar existing QC pipelines/tools in speed. The NGS-QCbox pipeline, the Raspberry tool and associated scripts are made available at https://github.com/CEG-ICRISAT/NGS-QCbox and https://github.com/CEG-ICRISAT/Raspberry for rapid quality control analysis of large-scale next generation sequencing (Illumina) data. PMID:26460497

  1. NGS-QCbox and Raspberry for Parallel, Automated and Rapid Quality Control Analysis of Large-Scale Next Generation Sequencing (Illumina) Data.

    Science.gov (United States)

    Katta, Mohan A V S K; Khan, Aamir W; Doddamani, Dadakhalandar; Thudi, Mahendar; Varshney, Rajeev K

    2015-01-01

    Rapid popularity and adaptation of next generation sequencing (NGS) approaches have generated huge volumes of data. High-throughput platforms like Illumina HiSeq produce terabytes of raw data that require quick processing. Quality control of the data is an important component prior to the downstream analyses. To address these issues, we have developed a quality control pipeline, NGS-QCbox, that scales up to process hundreds or thousands of samples. Raspberry is an in-house tool, developed in C utilizing HTSlib (v1.2.1) (http://htslib.org), for computing read/base-level statistics. It can be used as a stand-alone application and can process both compressed and uncompressed FASTQ-format files. NGS-QCbox integrates Raspberry with other open-source tools for alignment (Bowtie2), SNP calling (SAMtools) and other utilities (bedtools) towards analyzing raw NGS data at higher efficiency and in a high-throughput manner. The pipeline implements batch processing of jobs in parallel using Bpipe (https://github.com/ssadedin/bpipe) and, internally, fine-grained task parallelization utilizing OpenMP. It reports read and base statistics along with genome coverage and variants in a user-friendly format. The pipeline presents a simple menu-driven interface and can be used in either quick or complete mode. In addition, the pipeline in quick mode outperforms other similar existing QC pipelines/tools in speed. The NGS-QCbox pipeline, the Raspberry tool and associated scripts are made available at https://github.com/CEG-ICRISAT/NGS-QCbox and https://github.com/CEG-ICRISAT/Raspberry for rapid quality control analysis of large-scale next generation sequencing (Illumina) data.

  2. NGS-QCbox and Raspberry for Parallel, Automated and Rapid Quality Control Analysis of Large-Scale Next Generation Sequencing (Illumina) Data.

    Directory of Open Access Journals (Sweden)

    Mohan A V S K Katta

    Full Text Available Rapid popularity and adaptation of next generation sequencing (NGS) approaches have generated huge volumes of data. High-throughput platforms like Illumina HiSeq produce terabytes of raw data that require quick processing. Quality control of the data is an important component prior to the downstream analyses. To address these issues, we have developed a quality control pipeline, NGS-QCbox, that scales up to process hundreds or thousands of samples. Raspberry is an in-house tool, developed in C utilizing HTSlib (v1.2.1) (http://htslib.org), for computing read/base-level statistics. It can be used as a stand-alone application and can process both compressed and uncompressed FASTQ-format files. NGS-QCbox integrates Raspberry with other open-source tools for alignment (Bowtie2), SNP calling (SAMtools) and other utilities (bedtools) towards analyzing raw NGS data at higher efficiency and in a high-throughput manner. The pipeline implements batch processing of jobs in parallel using Bpipe (https://github.com/ssadedin/bpipe) and, internally, fine-grained task parallelization utilizing OpenMP. It reports read and base statistics along with genome coverage and variants in a user-friendly format. The pipeline presents a simple menu-driven interface and can be used in either quick or complete mode. In addition, the pipeline in quick mode outperforms other similar existing QC pipelines/tools in speed. The NGS-QCbox pipeline, the Raspberry tool and associated scripts are made available at https://github.com/CEG-ICRISAT/NGS-QCbox and https://github.com/CEG-ICRISAT/Raspberry for rapid quality control analysis of large-scale next generation sequencing (Illumina) data.

  3. Optimal Control for Nonlinear Interconnected Large-scale Systems: A Successive Approximation Approach

    Institute of Scientific and Technical Information of China (English)

    Tang Gongyou; Sun Liang

    2005-01-01

    The optimal control problem for nonlinear interconnected large-scale dynamic systems is considered. A successive approximation approach for designing the optimal controller is proposed with respect to quadratic performance indexes. Using this approach, the high-order, coupled, nonlinear two-point boundary value (TPBV) problem is transformed into a sequence of decoupled linear TPBV problems. It is proven that the solution sequence of the TPBV problems converges uniformly to the optimal control for nonlinear interconnected large-scale systems. A suboptimal control law is obtained by truncating the optimal control sequence after a finite number of iterations.
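
    As a rough sketch of the kind of iteration described, assume dynamics $\dot{x} = Ax + Bu + f(x)$ and a quadratic cost with weights $Q$, $R$ (my notation; the paper's exact formulation may differ). One common form of the $k$-th linear TPBV problem is:

```latex
\begin{aligned}
\dot{x}^{(k)} &= A x^{(k)} + B u^{(k)} + f\!\left(x^{(k-1)}\right),
  & x^{(k)}(t_0) &= x_0,\\
-\dot{\lambda}^{(k)} &= Q x^{(k)} + A^{\mathsf T} \lambda^{(k)}
  + \left[\partial f/\partial x\!\left(x^{(k-1)}\right)\right]^{\mathsf T} \lambda^{(k-1)},
  & \lambda^{(k)}(t_f) &= 0,\\
u^{(k)} &= -R^{-1} B^{\mathsf T} \lambda^{(k)} .
\end{aligned}
```

    Each iteration then only requires solving a linear, decoupled TPBV problem, with the nonlinear terms frozen at the previous iterate; under suitable conditions the sequence converges uniformly to the optimal control.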

  4. JASMIN-based Massive Parallel Computing of Large Scale Groundwater Flow

    Institute of Scientific and Technical Information of China (English)

    Cheng Tangpei; Mo Zeyao; Shao Jingli

    2013-01-01

    To overcome the prohibitive computational time and memory requirements of simulating groundwater flow models with fine spatial discretization over long time periods, we present JOGFLOW, an efficient massively parallel program for large-scale groundwater flow simulation. The program re-implements the three-dimensional transient groundwater flow process of MODFLOW on the JASMIN framework, using patch-based core algorithms together with a communication mechanism based on ghost cells added around each patch. The correctness and performance of JOGFLOW are demonstrated by modeling groundwater flow at the Yanming Lake well field in Zhongmou County, Zhengzhou, Henan Province; parallel scalability is tested on a hypothetical conceptual groundwater model with fine spatial discretization. Relative to a 32-core run, parallel efficiency reaches 77.2% on 512 processors and 67.5% on 1024 processors. The numerical experiments demonstrate good performance and scalability, showing that JOGFLOW can effectively use hundreds to thousands of compute cores and supports massively parallel simulation of groundwater flow models with tens of millions of cells.
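
    The ghost-cell communication pattern mentioned above can be illustrated with a minimal serial sketch (two patches as numpy arrays standing in for distributed patches; JASMIN's real API differs, and all names here are my own):

```python
import numpy as np

def make_patch(nx, ny):
    # owned region is [1:-1, 1:-1]; the outer ring holds ghost cells
    return np.zeros((nx + 2, ny + 2))

def exchange_ghost_cells(left, right):
    """Copy boundary columns between two horizontally adjacent patches."""
    left[1:-1, -1] = right[1:-1, 1]    # right patch's first owned column -> left ghost
    right[1:-1, 0] = left[1:-1, -2]    # left patch's last owned column -> right ghost

def jacobi_step(p, source, dx2):
    """One Jacobi relaxation sweep on the owned cells, reading ghosts."""
    new = p.copy()
    new[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] +
                              p[1:-1, :-2] + p[1:-1, 2:] + dx2 * source)
    return new

left, right = make_patch(16, 16), make_patch(16, 16)
src = np.ones((16, 16))
for _ in range(100):
    exchange_ghost_cells(left, right)   # in a parallel code this is MPI traffic
    left = jacobi_step(left, src, dx2=1.0)
    right = jacobi_step(right, src, dx2=1.0)
```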

  5. Index analysis and numerical solution of a large scale nonlinear PDAE system describing the dynamical behaviour of molten carbonate fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Chudej, K.; Petzet, V.; Scherdel, S.; Pesch, H.J. [Univ. Bayreuth, Lehrstuhl fuer Ingenieurmathematik (Germany); Heidebrecht, P. [Univ. Magdeburg, Lehrstuhl fuer Systemverfahrenstechnik (Germany); Schittkowski, K. [Univ. Bayreuth, Fachgruppe Informatik (Germany); Sundmacher, K. [Univ. Magdeburg, Lehrstuhl fuer Systemverfahrenstechnik (Germany); Max-Planck-Inst. fuer Dynamik Komplexer Technischer Systeme, Magdeburg (Germany)

    2005-02-01

    This paper deals with the efficient simulation of the dynamical behaviour of molten carbonate fuel cells (MCFCs). MCFCs allow an efficient and environmentally friendly energy production via electrochemical reactions. Their dynamics can be described by large-scale systems of currently up to 22 nonlinear partial differential algebraic equations (PDAEs). The paper also serves as a basis for later parameter identification and optimal control. The numerical simulations are therefore based on hierarchically embedded systems of PDAEs, first of all in one space dimension. The PDAEs are of mixed parabolic-hyperbolic type and are completed by nonlinear initial and boundary conditions of mixed type. For a series of embedded models in one space dimension, the vertical method of lines (MOL) is used throughout this paper. For the semi-discretization in space, appropriate difference schemes are applied depending on the type of equation. The resulting system of differential algebraic equations (DAEs) in time is then solved by the standard solver RADAU5. In order to justify the numerical procedure, a detailed index analysis of the PDAE systems with respect to time index, spatial index and MOL index is carried out. Because of the nonlinearity of the PDAE system, the existing theory has to be generalized. Moreover, MOL is especially suited for near-optimal real-time control on the basis of a sensitivity analysis of the semi-discretized DAE system, since a theoretically safeguarded sensitivity analysis does not yet exist for PDAE-constrained optimal control problems of the above type. Numerical results complete the paper and show their correspondence with the expected dynamical behaviour of MCFCs. (orig.)
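
    A generic method-of-lines sketch may help fix ideas (illustration only: a simple heat equation with a stiff BDF integrator, not the paper's coupled MCFC model, which is solved with RADAU5):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Semi-discretize u_t = u_xx on [0,1] with u(0)=u(1)=0 (central differences),
# then integrate the resulting stiff ODE system in time.
n = 100
x = np.linspace(0.0, 1.0, n + 2)     # grid including boundary nodes
dx = x[1] - x[0]

def rhs(t, u):
    # pad with Dirichlet boundary values, apply the 3-point Laplacian
    full = np.concatenate(([0.0], u, [0.0]))
    return (full[:-2] - 2.0 * full[1:-1] + full[2:]) / dx**2

u0 = np.sin(np.pi * x[1:-1])          # initial profile
sol = solve_ivp(rhs, (0.0, 0.1), u0, method="BDF", rtol=1e-6, atol=1e-8)
print(sol.y[:, -1].max())             # ~ exp(-pi^2 * 0.1) for this mode
```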

  6. Asynchronous parallel pattern search for nonlinear optimization

    Energy Technology Data Exchange (ETDEWEB)

    P. D. Hough; T. G. Kolda; V. J. Torczon

    2000-01-01

    Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10-50) and by expensive objective function evaluations, such as complex simulations that take minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly-coupled parallel machines, is not well suited to the more heterogeneous, loosely-coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems and some engineering optimization problems.
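
    The asynchronous idea can be sketched as follows (a much simplified stand-in for APPS: no fault tolerance or distributed workers; function names are mine). Poll points along the coordinate directions are evaluated in parallel, and the search moves as soon as any evaluation improves on the incumbent, rather than waiting for the whole batch as a synchronous pattern search would:

```python
import concurrent.futures as cf
import numpy as np

def f(x):                      # stand-in for an expensive simulation
    return float(np.sum((x - 1.0) ** 2))

def apps_like_search(x, step=1.0, tol=1e-3, max_iter=200):
    fx, n = f(x), len(x)
    directions = [s * e for s in (+1.0, -1.0) for e in np.eye(n)]
    with cf.ThreadPoolExecutor(max_workers=4) as pool:
        for _ in range(max_iter):
            if step < tol:
                break
            futures = {pool.submit(f, x + step * d): d for d in directions}
            improved = False
            for fut in cf.as_completed(futures):
                if fut.result() < fx:              # first improvement wins
                    d = futures[fut]
                    x, fx, improved = x + step * d, fut.result(), True
                    break                          # don't wait for the rest
            if not improved:
                step *= 0.5                        # contract on failure
    return x, fx

print(apps_like_search(np.zeros(3)))
```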

  7. Large-scale circuit simulation

    Science.gov (United States)

    Wei, Y. P.

    1982-12-01

    The simulation of VLSI (Very Large Scale Integration) circuits falls beyond the capabilities of conventional circuit simulators like SPICE. On the other hand, conventional logic simulators can only give the results of logic levels 1 and 0, with the attendant loss of detail in the waveforms. The aim of developing large-scale circuit simulation is to bridge the gap between conventional circuit simulation and logic simulation. This research investigates new approaches for fast and relatively accurate time-domain simulation of MOS (metal oxide semiconductor), LSI (large scale integration) and VLSI circuits. New techniques and new algorithms are studied in the following areas: (1) analysis sequencing, (2) nonlinear iteration, (3) a modified Gauss-Seidel method, and (4) latency criteria and a timestep control scheme. The developed methods have been implemented in the simulation program PREMOS, which can be used as a design verification tool for MOS circuits.
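
    The nonlinear-iteration and latency ideas combine naturally; here is an illustrative sketch (not the PREMOS algorithms; the node update functions and thresholds are mine). Nodes whose variables changed less than a threshold in the previous sweep are marked latent and skipped until a coupled neighbor wakes them up:

```python
def gauss_seidel_latent(update_fns, x, neighbors, tol=1e-10, lat=1e-6, sweeps=100):
    n = len(x)
    active = [True] * n
    for _ in range(sweeps):
        max_delta = 0.0
        for i in range(n):
            if not active[i]:
                continue                    # latent node: skip this sweep
            new = update_fns[i](x)          # relax node i with others fixed
            delta = abs(new - x[i])
            x[i], max_delta = new, max(max_delta, delta)
            if delta < lat:
                active[i] = False           # node becomes latent
            else:
                for j in neighbors[i]:      # wake coupled nodes
                    active[j] = True
        if max_delta < tol:
            break
    return x

# Toy 3-node resistive line: each node relaxes toward the mean of its neighbors.
fns = [lambda x: 0.5 * (1.0 + x[1]),
       lambda x: 0.5 * (x[0] + x[2]),
       lambda x: 0.5 * (x[1] + 0.0)]
print(gauss_seidel_latent(fns, [0.0, 0.0, 0.0], {0: [1], 1: [0, 2], 2: [1]}))
```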

  8. Understanding uncertainties in non-linear population trajectories: a Bayesian semi-parametric hierarchical approach to large-scale surveys of coral cover.

    Directory of Open Access Journals (Sweden)

    Julie Vercelloni

    Full Text Available Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better informed decision making.

  10. General difference schemes with intrinsic parallelism for nonlinear parabolic systems

    Institute of Scientific and Technical Information of China (English)

    Zhou Yulin; Yuan Guangwei

    1997-01-01

    The boundary value problem for a nonlinear parabolic system is solved by the finite difference method with intrinsic parallelism. The existence of the discrete vector solution for the general finite difference schemes with intrinsic parallelism is proved by the fixed-point technique in finite-dimensional Euclidean space. The convergence and stability theorems for the discrete vector solutions of the nonlinear difference system with intrinsic parallelism are proved. The limit vector function is precisely the unique generalized solution of the original problem for the parabolic system.

  11. Improving the parallel performance of a domain decomposition preconditioning technique in the Jacobi-Davidson method for large scale eigenvalue problems

    NARCIS (Netherlands)

    M. Genseberger (Menno)

    2008-01-01

    Most computational work in Jacobi-Davidson [9], an iterative method for large scale eigenvalue problems, is due to a so-called correction equation. In [5] a strategy for the approximate solution of the correction equation was proposed. This strategy is based on a domain decomposition preconditioning technique.

  12. A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis

    Science.gov (United States)

    Jokhio, G. A.; Izzuddin, B. A.

    2015-05-01

    This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.

  13. Parallel Evolutionary Modeling for Nonlinear Ordinary Differential Equations

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    We introduce a new parallel evolutionary algorithm for modeling dynamic systems by nonlinear higher-order ordinary differential equations (NHODEs). NHODE models are much more universal than traditional linear models. To accelerate the modeling process, we propose and realize a parallel evolutionary algorithm using distributed CORBA objects on a heterogeneous network. Numerical experiments show that the new algorithm is feasible and efficient.

  14. New 3D parallel GILD electromagnetic modeling and nonlinear inversion using global magnetic integral and local differential equation

    Energy Technology Data Exchange (ETDEWEB)

    Xie, G.; Li, J.; Majer, E.; Zuo, D.

    1998-07-01

    This paper describes a new 3D parallel GILD electromagnetic (EM) modeling and nonlinear inversion algorithm. The algorithm consists of: (a) a new magnetic integral equation, instead of the electric integral equation, to solve the electromagnetic forward modeling and inverse problem; (b) a collocation finite element method for solving the magnetic integral equation and a Galerkin finite element method for the magnetic differential equations; (c) a nonlinear regularizing optimization method to make the inversion stable and of high resolution; and (d) a new parallel 3D modeling and inversion scheme using a global integral and local differential domain decomposition technique (GILD). The new 3D nonlinear electromagnetic inversion has been tested with synthetic data and field data. The authors obtained very good imaging for the synthetic data and reasonable subsurface EM imaging for the field data. The parallel algorithm has high parallel efficiency, over 90%, and can serve as a parallel solver for elliptic, parabolic, and hyperbolic modeling and inversion. The parallel GILD algorithm can be extended to high-resolution, large-scale seismic and hydrology modeling and inversion on massively parallel computers.

  15. Nonlinear waves in the terrestrial quasi-parallel foreshock

    CERN Document Server

    Hnat, B; O'Connell, D; Nakariakov, V M; Rowlands, G

    2016-01-01

    We study the applicability of the derivative nonlinear Schrödinger (DNLS) equation to the evolution of high-frequency nonlinear waves observed in the foreshock region of the terrestrial quasi-parallel bow shock. The use of a pseudo-potential is elucidated and, in particular, the importance of the canonical representation for the correct interpretation of solutions in this formulation is discussed. Numerical solutions of the DNLS equation are then compared directly with the wave forms observed by the Cluster spacecraft. Non-harmonic slow variations are filtered out by applying empirical mode decomposition. We find large-amplitude nonlinear wave trains at frequencies above the proton cyclotron frequency, followed in time by nearly harmonic low-amplitude fluctuations. The approximate phase speed of these nonlinear waves, indicated by the parameters of numerical solutions, is of the order of the local Alfvén speed.
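
    For reference, a commonly quoted form of the DNLS for parallel-propagating, weakly nonlinear Alfvén waves, written for the complex transverse magnetic field (coefficients and normalization vary between authors, so this is not necessarily the exact form used in the paper):

```latex
\frac{\partial b}{\partial t}
  + \alpha\,\frac{\partial}{\partial x}\!\left(\lvert b\rvert^{2} b\right)
  + i\,\mu\,\frac{\partial^{2} b}{\partial x^{2}} = 0,
\qquad b = b_{y} + i\, b_{z},
```

    where the nonlinear coefficient α is set by the Alfvén speed and the plasma beta, and the dispersive coefficient μ by ion-cyclotron dispersion.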

  16. LARGE SCALE GLAZED

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    World-famous architects today challenge the exposure of concrete in their architecture. It is my hope to be able to complement these. I try to develop new aesthetic potentials for concrete and ceramics, at large scales that have not been seen before in the ceramic area. It is expected to result...

  17. Cache-style Parallel Checkpointing for Large-scale Computing Systems

    Institute of Scientific and Technical Information of China (English)

    Liu Yongyan; Liu Yongpeng; Feng Hua; Chi Wanqing

    2011-01-01

    Checkpointing is a typical fault-tolerance technique, but its scalability is limited by the overhead of file access. Exploiting the multi-level file system architecture of large-scale parallel computing systems, cache-style parallel checkpointing transforms globally coordinated checkpointing into local file operations, and schedules checkpoint write-back in an out-of-order pipelined fashion so that write-back opportunities are spread out in time. The write-back overhead is thereby effectively hidden, improving the performance and scalability of parallel checkpoint file access.
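
    The core idea, writing checkpoints to fast local storage and hiding the slow global write-back behind the computation, can be sketched as follows (illustrative only; paths, sleep times and the single flusher thread are my simplifications, not the paper's design):

```python
import queue, shutil, tempfile, threading, time
from pathlib import Path

local_dir = Path(tempfile.mkdtemp(prefix="ckpt_local_"))    # fast "cache" tier
global_dir = Path(tempfile.mkdtemp(prefix="ckpt_global_"))  # slow global tier
flush_q: "queue.Queue[Path | None]" = queue.Queue()

def flusher():
    # background write-back: drains the queue while the application computes
    while True:
        p = flush_q.get()
        if p is None:
            break
        time.sleep(0.1)                      # stand-in for slow global I/O
        shutil.copy(p, global_dir / p.name)  # write-back to global storage

t = threading.Thread(target=flusher, daemon=True)
t.start()

for step in range(5):
    state = f"application state at step {step}\n"
    ckpt = local_dir / f"ckpt_{step}.txt"
    ckpt.write_text(state)                   # fast local checkpoint
    flush_q.put(ckpt)                        # enqueue write-back, keep computing

flush_q.put(None)                            # sentinel: no more checkpoints
t.join()
print(sorted(p.name for p in global_dir.iterdir()))
```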

  18. Parallel-computing Strategy for Large-scale Heat Equation Based on PETSc

    Institute of Scientific and Technical Information of China (English)

    Cheng Tangpei; Wang Qun

    2009-01-01

    A parallel-computing strategy is presented for solving large-scale heat equations. Distributed memory and compressed (sparse) matrix techniques are adopted for storing and operating on very large sparse matrices, and a range of Krylov subspace methods and preconditioners are integrated to solve the resulting large linear systems in parallel. The implementation is written as high-level abstractions based on object-oriented design, which promotes code reuse and flexibility and decouples issues of parallelism from algorithm choices. Performance tests on a Linux cluster show that the code achieves good speedup and computational performance.
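
    The ingredients named here, compressed sparse storage plus a preconditioned Krylov solve, can be sketched with SciPy as a serial stand-in (PETSc itself would typically be driven through petsc4py; grid size, time step and preconditioner choice below are illustrative assumptions):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Implicit Euler step of u_t = u_xx on a 1-D grid: (I - dt*L) u_new = u_old
n, dt, dx = 1000, 1e-4, 1.0 / 1001
main = -2.0 * np.ones(n) / dx**2
off = np.ones(n - 1) / dx**2
L = sp.diags([off, main, off], [-1, 0, 1], format="csr")  # CSR: compressed storage
A = sp.eye(n, format="csr") - dt * L                       # SPD system matrix

u_old = np.sin(np.pi * np.linspace(dx, 1.0 - dx, n))

# ILU-preconditioned conjugate gradient (one of many Krylov/preconditioner pairs)
ilu = spla.spilu(A.tocsc())
M = spla.LinearOperator((n, n), ilu.solve)
u_new, info = spla.cg(A, u_old, M=M)
print(info, float(np.max(u_new)))   # info == 0 means the solve converged
```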

  19. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out... model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors...). Simulation programs are proposed as a control-supporting tool for daily operation and performance prediction of central solar heating plants. Finally, the CSHP technology is put into perspective with respect to alternatives, and a short discussion on the barriers and breakthrough of the technology is given....

  20. Study on Large-Scale Parallel Finite-Difference Time-Domain Method for Electromagnetic Computation

    Institute of Scientific and Technical Information of China (English)

    Jiang Shugang; Zhang Yu; Zhao Xunwang

    2015-01-01

    The performance and applications of the large-scale parallel finite-difference time-domain (FDTD) method are studied on China-made supercomputer platforms. The parallel efficiency is tested on the Magic Cube supercomputer, ranked the 10th fastest supercomputer in the world in November 2008; the Sunway BlueLight MPP, the first large-scale parallel supercomputer with Chinese-made CPUs; and the Tianhe-2 supercomputer, currently ranked first in the world. The algorithm is implemented on the three supercomputers with maximum parallel scales of 10,000, 100,000 and 300,000 CPU cores, respectively, and the parallel efficiency reaches 50% or more at every tested scale, indicating good scalability of the method presented in this paper. Simulations of the radiation of multiple microstrip antenna arrays and the scattering of a large airplane demonstrate that the method delivers accurate and efficient electromagnetic simulation of complicated problems on supercomputers with different architectures.
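
    The kernel that such parallel FDTD codes decompose across processes is the Yee update loop. A minimal serial 1-D sketch, in normalized (dimensionless) field units so both updates share the Courant coefficient, is shown below for illustration only:

```python
import numpy as np

c0 = 299792458.0
nx, nt = 400, 600
dx = 1e-3
dt = 0.5 * dx / c0                 # Courant-stable time step

ez = np.zeros(nx)                  # electric field samples (normalized units)
hy = np.zeros(nx - 1)              # magnetic field samples (normalized units)

s = c0 * dt / dx                   # Courant number (update coefficient)
for n in range(nt):
    hy += s * (ez[1:] - ez[:-1])               # update H from curl E
    ez[1:-1] += s * (hy[1:] - hy[:-1])         # update E from curl H
    ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source

print(float(np.abs(ez).max()))
```

    In a parallel implementation the grid is partitioned among processes and the tangential field values on partition faces are exchanged every time step, which is why the method scales well to very large core counts.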

  1. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  3. Large scale parallel document image processing

    NARCIS (Netherlands)

    van der Zant, Tijn; Schomaker, Lambert; Valentijn, Edwin; Yanikoglu, BA; Berkner, K

    2008-01-01

    Building a system which allows searching a very large database of document images requires professionalization of hardware and software, e-science and web access. In astrophysics there is ample experience dealing with large data sets due to an increasing number of measurement instruments.

  5. ELASTIC: A Large Scale Dynamic Tuning Environment

    Directory of Open Access Journals (Sweden)

    Andrea Martínez

    2014-01-01

    Full Text Available The spectacular growth in the number of cores in current supercomputers poses design challenges for the development of performance analysis and tuning tools. To be effective, such analysis and tuning tools must be scalable and be able to manage the dynamic behaviour of parallel applications. In this work, we present ELASTIC, an environment for dynamic tuning of large-scale parallel applications. To be scalable, the architecture of ELASTIC takes the form of a hierarchical tuning network of nodes that perform a distributed analysis and tuning process. Moreover, the tuning network topology can be configured to adapt itself to the size of the parallel application. To guide the dynamic tuning process, ELASTIC supports a plugin architecture. These plugins, called ELASTIC packages, allow the integration of different tuning strategies into ELASTIC. We also present experimental tests conducted using ELASTIC, showing its effectiveness to improve the performance of large-scale parallel applications.

  6. Parallel Performance Profiling of Large-scale Object-oriented Finite Element Software

    Institute of Scientific and Technical Information of China (English)

    Wang Haibing

    2011-01-01

    By overloading the MPI message-passing functions and calling the logging functions of the Multi-Processing Environment (MPE) library inside the overloaded functions, user-defined parallel performance profiling is implemented for large-scale object-oriented finite element software. A typical impact dynamics problem was simulated in parallel on 16 CPUs with this monitoring in place, and the parallel finite element algorithm used in the simulation was evaluated by analyzing the profiling results.
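
    The interposition idea, wrapping communication calls so each one logs a timed event without touching application code, can be sketched generically (a real implementation overloads the MPI bindings and calls MPE logging; the stand-in function names below are mine):

```python
import functools, time
from collections import defaultdict

event_log = defaultdict(list)

def profiled(fn):
    """Wrap a call so its wall-clock duration is recorded per function name."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            event_log[fn.__name__].append(time.perf_counter() - t0)
    return wrapper

@profiled
def send(dest, payload):        # stand-in for an overloaded MPI_Send
    time.sleep(0.001)

@profiled
def recv(src):                  # stand-in for an overloaded MPI_Recv
    time.sleep(0.002)
    return b"data"

for _ in range(10):
    send(1, b"x"); recv(1)

for name, times in event_log.items():
    print(f"{name}: {len(times)} calls, {sum(times)*1e3:.1f} ms total")
```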

  7. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    Full Text Available A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction; whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly in the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well-suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively-parallel computational resources are likely to be necessary in order to be able to adequately address the complex coupled phenomena to the level of detail that is necessary.

  8. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  9. Large-scale structure of the Universe

    Energy Technology Data Exchange (ETDEWEB)

    Shandarin, S.F.; Doroshkevich, A.G.; Zel'dovich, Ya.B. (Inst. Prikladnoj Matematiki, Moscow, USSR)

    1983-01-01

    A review of the theory of the large-scale structure of the Universe is given, including the formation of clusters and superclusters of galaxies as well as large voids. Particular attention is paid to the theory of a neutrino-dominated Universe - the cosmological model in which neutrinos with a rest mass of several tens of eV dominate the mean density. The evolution of small perturbations is discussed, and estimates of microwave background radiation fluctuations are given for different angular scales. The adiabatic theory of structure formation, known as the 'pancake' scenario, and the subsequent fragmentation of pancakes are described. This scenario is based on an approximate nonlinear theory of gravitational instability. Results of numerical experiments modeling the processes of large-scale structure formation are discussed. (orig.)

  10. Large-scale structure of the universe

    Energy Technology Data Exchange (ETDEWEB)

    Shandarin, S.F.; Doroshkevich, A.G.; Zel'dovich, Y.B.

    1983-01-01

    A survey is given of theories for the origin of large-scale structure in the universe: clusters and superclusters of galaxies, and vast black regions practically devoid of galaxies. Special attention is paid to the theory of a neutrino-dominated universe: a cosmology in which electron neutrinos with a rest mass of a few tens of electron volts would contribute the bulk of the mean density. The evolution of small perturbations is discussed, and estimates are made for the temperature anisotropy of the microwave background radiation on various angular scales. The nonlinear stage in the evolution of smooth irrotational perturbations in a low-pressure medium is described in detail. Numerical experiments simulating large-scale structure formation processes are discussed, as well as their interpretation in the context of catastrophe theory.

  11. Large Scale Dynamos in Stars

    Science.gov (United States)

    Vishniac, Ethan T.

    2015-01-01

    We show that a differentially rotating conducting fluid automatically creates a magnetic helicity flux with components along the rotation axis and in the direction of the local vorticity. This drives a rapid growth in the local density of current helicity, which in turn drives a large scale dynamo. The dynamo growth rate derived from this process is not constant, but depends inversely on the large scale magnetic field strength. This dynamo saturates when buoyant losses of magnetic flux compete with the large scale dynamo, providing a simple prediction for magnetic field strength as a function of Rossby number in stars. Increasing anisotropy in the turbulence produces a decreasing magnetic helicity flux, which explains the flattening of the B/Rossby number relation at low Rossby numbers. We also show that the kinetic helicity is always a subdominant effect. There is no kinematic dynamo in real stars.

  12. Nonlinear evolution of parallel propagating Alfven waves: Vlasov - MHD simulation

    CERN Document Server

    Nariyuki, Y; Kumashiro, T; Hada, T

    2009-01-01

    The nonlinear evolution of circularly polarized Alfvén waves is discussed using the recently developed Vlasov-MHD code, a generalized Landau-fluid model. The numerical results indicate that, as long as the nonlinearity in the system is not too large, the Vlasov-MHD model can validly solve the time evolution of Alfvénic turbulence in both the linear and nonlinear stages. The present Vlasov-MHD model is suitable for discussing solar coronal heating and solar wind acceleration by Alfvén waves propagating from the photosphere.

  13. Very Large Scale Integration (VLSI).

    Science.gov (United States)

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  14. Parallel Computation of the Jacobian Matrix for Nonlinear Equation Solvers Using MATLAB

    Science.gov (United States)

    Rose, Geoffrey K.; Nguyen, Duc T.; Newman, Brett A.

    2017-01-01

    Demonstrating speedup for parallel code on a multicore shared-memory PC can be challenging in MATLAB due to underlying parallel operations that are often opaque to the user. This can limit the potential for improvement of serial code, even for so-called embarrassingly parallel applications. One such application is the computation of the Jacobian matrix inherent to most nonlinear equation solvers. Computation of this matrix represents the primary bottleneck in nonlinear solver speed, such that commercial finite element (FE) and multi-body-dynamic (MBD) codes attempt to minimize these computations. A timing study using MATLAB's Parallel Computing Toolbox was performed for numerical computation of the Jacobian. Several approaches to implementing parallel code were investigated, but only the single-program multiple-data (spmd) method using composite objects provided positive results. Parallel code speedup is demonstrated, but the goal of linear speedup through the addition of processors was not achieved due to PC architecture.
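
    The column-partitioned finite-difference Jacobian that makes this application embarrassingly parallel looks roughly as follows (a Python stand-in for the MATLAB spmd approach; the toy residual function is mine):

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def residual(x):
    # toy nonlinear system F(x) = 0
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**2 - 5.0])

def jac_column(args):
    # forward difference in one coordinate direction -> one Jacobian column
    x, f0, j, h = args
    xj = x.copy()
    xj[j] += h
    return (residual(xj) - f0) / h

def jacobian_parallel(x, h=1e-7):
    f0 = residual(x)
    args = [(x, f0, j, h) for j in range(len(x))]
    with ProcessPoolExecutor() as pool:        # columns computed in parallel
        cols = list(pool.map(jac_column, args))
    return np.column_stack(cols)

if __name__ == "__main__":                     # guard required for process pools
    print(jacobian_parallel(np.array([1.0, 2.0])))
```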

  15. Large-scale instabilities of helical flows

    CERN Document Server

    Cameron, Alexandre; Brachet, Marc-Étienne

    2016-01-01

    Large-scale hydrodynamic instabilities of periodic helical flows are investigated using 3D Floquet numerical computations. A minimal three-mode analytical model that reproduces and explains some of the full Floquet results is derived. The growth rate σ of the most unstable modes (at small scale, low Reynolds number Re and small wavenumber q) is found to scale differently in the presence or absence of the anisotropic kinetic alpha (AKA) effect. When an AKA effect is present, the scaling σ ∝ q Re predicted by AKA-effect theory [U. Frisch, Z. S. She, and P. L. Sulem, Physica D: Nonlinear Phenomena 28, 382 (1987)] is recovered for Re ≪ 1 as expected (with most of the energy of the unstable mode concentrated in the large scales). However, as Re increases, the growth rate is found to saturate and most of the energy is found at small scales. In the absence of an AKA effect, it is found that flows can still have large-scale instabilities, but with a negative eddy-viscosity sca...

  16. Sharing of nonlinear load in parallel-connected three-phase converters

    DEFF Research Database (Denmark)

    Borup, Uffe; Blaabjerg, Frede; Enjeti, Prasad N.

    2001-01-01

    In this paper, a new control method is presented which enables equal sharing of linear and nonlinear loads in three-phase power converters connected in parallel, without communication between the converters. The paper focuses on solving the problem that arises when two converters with harmonic compensation are connected in parallel. Without the new solution, they are normally not able to distinguish between the harmonic currents that flow to the load and the harmonic currents that circulate between the converters. Analysis and experimental results on two 90-kVA 400-Hz converters in parallel are presented. The results show that both linear and nonlinear loads can be shared equally by the proposed concept.

  17. Wave propagation in parallel-plate waveguides filled with nonlinear left-handed material

    Institute of Scientific and Technical Information of China (English)

    Burhan Zamir; Rashid Ali

    2011-01-01

    The field components of the transverse electric mode in parallel-plate waveguides are investigated theoretically. In this analysis two different waveguide structures are discussed: (a) a good/perfect-conducting parallel-plate waveguide filled with nonlinear left-handed material, and (b) a high-temperature-superconducting parallel-plate waveguide filled with nonlinear left-handed material. The dispersion relations of the transverse electric mode are also discussed for these two waveguide structures.

  18. Implementing Large-Scale Parallel Atomistic Simulations in the Investigation of Interfacial Behaviors in Titanium Alloys

    Institute of Scientific and Technical Information of China (English)

    Wang Hao; Xu Dongsheng; Yang Rui

    2013-01-01

    The mechanical behavior of titanium alloys is often influenced significantly by interfaces. Atomistic simulation of interfacial behavior involves very large numbers of atoms and hence requires large-scale parallel computation. A parallel molecular dynamics code for such simulations has been developed in our group and applied to the simulation of various kinds of interfaces in titanium alloys. This paper introduces our recent work on the simulation of interfacial behaviors, taking the coherent twin boundary in the TiAl intermetallic and a special large-angle grain boundary in α-titanium as two examples. The simulated cells reach micrometer size, using tens to hundreds of CPU cores. It is found that, when a TiAl simulation cell is sheared along the pseudo-twin direction, pseudo-twins of the L11 structure nucleate and grow under hydrostatic compression, whereas true twins nucleate and grow under hydrostatic tension, suggesting a new twin-growth mechanism in TiAl. In α-titanium, grain boundaries formed by two grains of particular orientations interact with dislocations, and crack nucleation occurs either at the grain boundary or inside the hard-oriented grain depending on the loading orientation, which may make fatigue fracture behavior dependent on loading orientation. These results help in understanding the plastic deformation behavior of titanium alloys and provide atomic-scale details for higher-scale modeling.

  19. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
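
    For context, the quadratic programming subproblem that an SQP method solves at iterate $x_k$ can be written in standard form as follows (my notation; the algorithm above additionally replaces the full Hessian with a quasi-Newton approximation to the reduced Hessian $Z_k^{\mathsf T} H_k Z_k$, with $Z_k$ a null-space basis of the working-set constraints):

```latex
\min_{p}\;\; \nabla f(x_k)^{\mathsf T} p + \tfrac{1}{2}\, p^{\mathsf T} H_k\, p
\quad\text{subject to}\quad
\nabla c(x_k)^{\mathsf T} p + c(x_k) = 0,\qquad
\nabla d(x_k)^{\mathsf T} p + d(x_k) \ge 0,
```

    where $c$ and $d$ collect the nonlinear equality and inequality constraints and $H_k$ approximates the Hessian of the Lagrangian.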

  20. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. The objectives were to improve the performance and reduce the costs of a large-scale solar heating system. As a result of the project, the benefit/cost ratio can be increased by 40% through dimensioning and optimising the system at the design stage. (orig.)

  1. Testing gravity on Large Scales

    OpenAIRE

    Raccanelli Alvise

    2013-01-01

    We show how it is possible to test general relativity and different models of gravity via Redshift-Space Distortions using forthcoming cosmological galaxy surveys. However, the theoretical models currently used to interpret the data often rely on simplifications that make them not accurate enough for precise measurements. We discuss improvements to the theoretical modeling at very large scales, including wide-angle and general relativistic corrections; we then show that for wide and deep surveys those corrections need to be taken into account if we want to measure the growth of structures at a few percent level, and so perform tests on gravity, without introducing systematic errors. Finally, we report the results of some recent cosmological model tests carried out using those precise models.

  2. Strings and large scale magnetohydrodynamics

    CERN Document Server

    Olesen, P

    1995-01-01

    From computer simulations of magnetohydrodynamics one knows that a turbulent plasma becomes very intermittent, with the magnetic fields concentrated in thin flux tubes. This situation looks very "string-like", so we investigate whether strings could be solutions of the magnetohydrodynamics equations in the limit of infinite conductivity. We find that the induction equation is satisfied, and we discuss the Navier-Stokes equation (without viscosity) with the Lorentz force included. We argue that the string equations (with non-universal maximum velocity) should describe the large scale motion of narrow magnetic flux tubes, because of a large reparametrization (gauge) invariance of the magnetic and electric string fields.

  3. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre-scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R&D.

  4. Kinetic treatment of nonlinear magnetized plasma motions - General geometry and parallel waves

    Science.gov (United States)

    Khabibrakhmanov, I. KH.; Galinskii, V. L.; Verheest, F.

    1992-01-01

    The expansion of kinetic equations in the limit of a strong magnetic field is presented. This gives a natural description of the motions of magnetized plasmas, which are slow compared to the particle gyroperiods and gyroradii. Although the approach is 3D, this very general result is used only to focus on the parallel propagation of nonlinear Alfven waves. The derivative nonlinear Schroedinger-like equation is obtained. Two new terms occur compared to earlier treatments, a nonlinear term proportional to the heat flux along the magnetic field line and a higher-order dispersive term. It is shown that kinetic description avoids the singularities occurring in magnetohydrodynamic or multifluid approaches, which correspond to the degenerate case of sound speeds equal to the Alfven speed, and that parallel heat fluxes cannot be neglected, not even in the case of low parallel plasma beta. A truly stationary soliton solution is derived.

  5. Models of large scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Frenk, C.S. (Physics Dept., Univ. of Durham (UK))

    1991-01-01

    The ingredients required to construct models of the cosmic large scale structure are discussed. Input from particle physics leads to a considerable simplification by offering concrete proposals for the geometry of the universe, the nature of the dark matter and the primordial fluctuations that seed the growth of structure. The remaining ingredient is the physical interaction that governs dynamical evolution. Empirical evidence provided by an analysis of a redshift survey of IRAS galaxies suggests that gravity is the main agent shaping the large-scale structure. In addition, this survey implies large values of the mean cosmic density, Ω ≳ 0.5, and is consistent with a flat geometry if IRAS galaxies are somewhat more clustered than the underlying mass. Together with current limits on the density of baryons from Big Bang nucleosynthesis, this lends support to the idea of a universe dominated by non-baryonic dark matter. Results from cosmological N-body simulations evolved from a variety of initial conditions are reviewed. In particular, neutrino dominated and cold dark matter dominated universes are discussed in detail. Finally, it is shown that apparent periodicities in the redshift distributions in pencil-beam surveys arise frequently from distributions which have no intrinsic periodicity but are clustered on small scales. (orig.)

  6. A Comparative Study on Different Parallel Solvers for Nonlinear Analysis of Complex Structures

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2013-01-01

    Full Text Available The parallelization of the 2D/3D software SAPTIS is discussed for nonlinear analysis of complex structures. A comparative study is made on different parallel solvers. The numerical models are presented, including hydration models, water cooling models, modulus models, creep models, and autogenous deformation models. A finite element simulation is made for the whole process of excavation and pouring of dams using these models. The numerical results show good agreement with the measured ones. To achieve better computing efficiency, four parallel solvers utilizing parallelization techniques are employed: (1) a parallel preconditioned conjugate gradient (PCG) solver based on OpenMP, (2) a parallel preconditioned Krylov subspace solver based on MPI, (3) a parallel sparse equation solver based on OpenMP, and (4) a parallel GPU equation solver. The parallel solvers run either in a shared-memory environment (OpenMP) or in a distributed-memory environment (MPI). A comparative study on these parallel solvers is made, and the results show that the parallelization makes SAPTIS more efficient, powerful, and adaptable.

  7. Parallel algorithm of trigonometric collocation method in nonlinear dynamics of rotors

    Directory of Open Access Journals (Sweden)

    Musil T.

    2007-11-01

    Full Text Available A parallel algorithm of a numerical procedure based on the method of trigonometric collocation is presented for investigating the unbalance response of a rotor supported by journal bearings. After a condensation process, the trigonometric collocation method results in a set of nonlinear algebraic equations which is solved by the Newton-Raphson method. The order of the set is proportional to the number of nonlinear bearing coordinates and the number of terms of the finite Fourier series. The algorithm, realized in the MATLAB parallel computing environment (DCT/DCE), uses message passing for interaction among processes on the nodes of a parallel computer. This technique enables portability of the source code on parallel computers with both distributed and shared memory. Tests, made on a Beowulf cluster and a symmetric multiprocessor, have revealed very good speed-up and scalability of this algorithm.
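
    The collocation step itself (Fourier ansatz, residuals at collocation points, Newton-type solve) can be shown on a single-DOF stand-in for the rotor/bearing system, here a forced Duffing oscillator x'' + c x' + k x + e x^3 = F cos(w t); all parameter values are illustrative and SciPy's fsolve plays the role of the Newton-Raphson solver:

```python
import numpy as np
from scipy.optimize import fsolve

c, k, e, F, w = 0.1, 1.0, 0.5, 0.3, 1.2
H = 5                                  # number of harmonics
m = 2 * H + 1                          # collocation points = unknowns
t = np.linspace(0.0, 2 * np.pi / w, m, endpoint=False)

def series(coeffs, t, deriv=0):
    """Evaluate the truncated Fourier series (or its derivatives)."""
    a0, a, b = coeffs[0], coeffs[1:H+1], coeffs[H+1:]
    x = np.full_like(t, a0 if deriv == 0 else 0.0)
    for n in range(1, H + 1):
        wn = n * w
        cos, sin = np.cos(wn * t), np.sin(wn * t)
        if deriv == 0:
            x += a[n-1] * cos + b[n-1] * sin
        elif deriv == 1:
            x += -a[n-1] * wn * sin + b[n-1] * wn * cos
        else:
            x += -(wn ** 2) * (a[n-1] * cos + b[n-1] * sin)
    return x

def residual(coeffs):
    # equation residual at the collocation points
    x, xd, xdd = (series(coeffs, t, d) for d in range(3))
    return xdd + c * xd + k * x + e * x**3 - F * np.cos(w * t)

coeffs = fsolve(residual, np.zeros(2 * H + 1))
print("fundamental amplitude:", float(np.hypot(coeffs[1], coeffs[H+1])))
```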

  8. Large-Scale Tides in General Relativity

    CERN Document Server

    Ip, Hiu Yan

    2016-01-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the "separate universe" paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation ...

  9. Large-Scale Galaxy Bias

    CERN Document Server

    Desjacques, Vincent; Schmidt, Fabian

    2016-01-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a pedagogical proof of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which includes the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in i...

  10. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg

    2009-01-01

    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro-structured 8 x 8 aperture partition arrays with average aperture diameters of 301 ± 5 μm. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnologically and physiologically relevant membrane peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays...

  12. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abro...

  13. Large Scale Correlation Clustering Optimization

    CERN Document Server

    Bagon, Shai

    2011-01-01

    Clustering is a fundamental task in unsupervised learning. The focus of this paper is the Correlation Clustering functional, which combines positive and negative affinities between the data points. The contribution of this paper is twofold: (i) providing a theoretic analysis of the functional; (ii) new optimization algorithms which can cope with large scale problems (>100K variables) that are infeasible using existing methods. Our theoretic analysis provides a probabilistic generative interpretation for the functional, and justifies its intrinsic "model-selection" capability. Furthermore, we draw an analogy between optimizing this functional and the well known Potts energy minimization. This analogy allows us to suggest several new optimization algorithms, which exploit the intrinsic "model-selection" capability of the functional to automatically recover the underlying number of clusters. We compare our algorithms to existing methods on both synthetic and real data. In addition we suggest two new applications t...

  14. Nonlinearity for a parallel kinematic machine tool and its application to interpolation accuracy analysis

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper is concerned with a kinematic nonlinearity measure for parallel kinematic machine tools (PKM), which is based on differential-geometric curvature. The nonlinearity manifests itself in the curving of the solution locus and in equal-interval joint inputs mapping into unequal-interval outputs of the end-effector. Both the curving and the unevenness can be measured by the BW curvature, so the curvature measures the nonlinearity of the PKM indirectly. The distribution of the BW curvature over a local area and over the whole workspace is then discussed. An example of application to the interpolation accuracy analysis of a PKM is given to illustrate the effectiveness of this approach.
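
    The curvature-based measure can be illustrated with a small numerical sketch (the planar restriction and all names here are our own simplification, not taken from the paper): given a solution locus sampled at equal joint-input intervals, the curvature is estimated by finite differences.

        # Hypothetical sketch: curvature of a sampled planar solution locus
        # p(t), assuming equally spaced joint inputs t.  For a plane curve,
        # kappa = |p' x p''| / |p'|^3.
        import numpy as np

        def curvature(points, dt=1.0):
            """Finite-difference curvature along an (N x 2) curve."""
            d1 = np.gradient(points, dt, axis=0)   # first derivative p'
            d2 = np.gradient(d1, dt, axis=0)       # second derivative p''
            cross = d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]
            speed = np.linalg.norm(d1, axis=1)
            return np.abs(cross) / np.maximum(speed**3, 1e-12)

        # Sanity check: an arc of radius 2 has curvature ~0.5 everywhere.
        t = np.linspace(0.0, np.pi / 2, 200)
        arc = np.column_stack([2 * np.cos(t), 2 * np.sin(t)])
        print(curvature(arc, dt=t[1] - t[0])[100])   # ~0.5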

  15. Topology Optimization of Large Scale Stokes Flow Problems

    DEFF Research Database (Denmark)

    Aage, Niels; Poulsen, Thomas Harpsøe; Gersborg-Hansen, Allan

    2008-01-01

    This note considers topology optimization of large scale 2D and 3D Stokes flow problems using parallel computations. We solve problems with up to 1,125,000 elements in 2D and 128,000 elements in 3D on a shared memory computer consisting of Sun UltraSparc IV CPUs.

  16. Accelerated large-scale multiple sequence alignment

    Directory of Open Access Journals (Sweden)

    Lloyd Scott

    2011-12-01

    Full Text Available Abstract Background Multiple sequence alignment (MSA is a fundamental analysis method used in bioinformatics and many comparative genomic applications. Prior MSA acceleration attempts with reconfigurable computing have only addressed the first stage of progressive alignment and consequently exhibit performance limitations according to Amdahl's Law. This work is the first known to accelerate the third stage of progressive alignment on reconfigurable hardware. Results We reduce subgroups of aligned sequences into discrete profiles before they are pairwise aligned on the accelerator. Using an FPGA accelerator, an overall speedup of up to 150 has been demonstrated on a large data set when compared to a 2.4 GHz Core2 processor. Conclusions Our parallel algorithm and architecture accelerates large-scale MSA with reconfigurable computing and allows researchers to solve the larger problems that confront biologists today. Program source is available from http://dna.cs.byu.edu/msa/.

  17. A study of MLFMA for large-scale scattering problems

    Science.gov (United States)

    Hastriter, Michael Larkin

    This research is centered in computational electromagnetics with a focus on solving large-scale problems accurately in a timely fashion using first-principle physics. Error control of the translation operator in 3-D is shown. A parallel implementation of the multilevel fast multipole algorithm (MLFMA) was studied with respect to parallel efficiency and scaling. The large-scale scattering program (LSSP), based on the ScaleME library, was used to solve ultra-large-scale problems including a 200λ sphere with 20 million unknowns. As these large-scale problems were solved, techniques were developed to accurately estimate the memory requirements. Careful memory management is needed in order to solve these massive problems. The study of MLFMA in large-scale problems revealed significant errors that stemmed from inconsistencies in constants used by different parts of the algorithm. These were fixed to produce the most accurate data possible for large-scale surface scattering problems. Data was calculated on a missile-like target using both high-frequency methods and MLFMA. This data was compared and analyzed to determine possible strategies to increase data acquisition speed and accuracy through hybridization of multiple computation methods.

  18. PARALLEL DYNAMIC ITERATION METHODS FOR SOLVING NONLINEAR TIME-PERIODIC DIFFERENTIAL-ALGEBRAIC EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    Yao-lin Jiang

    2003-01-01

    In this paper we present a convergence condition for parallel dynamic iteration methods applied to a nonlinear system of differential-algebraic equations with a periodic constraint. The convergence criterion is determined by the spectral expression of a linear operator derived from the system partitions. Numerical experiments given here confirm the theoretical work of the paper.
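
    The flavour of such dynamic iteration (waveform relaxation) methods can be conveyed by a minimal sketch; the 2x2 linear test system, the Jacobi-type partitioning and the explicit Euler integrator below are our own illustrative choices, not those of the paper.

        # Jacobi waveform relaxation: each subsystem is integrated over the
        # whole time window using the previous sweep's waveform of the other.
        import numpy as np

        T, n = 1.0, 1000
        dt = T / n

        # dx/dt = -x + 0.5*y,  dy/dt = 0.5*x - y,  x(0)=1, y(0)=0
        x = np.ones(n + 1)      # initial waveform guesses
        y = np.zeros(n + 1)

        for sweep in range(50):               # dynamic iteration sweeps
            x_old, y_old = x.copy(), y.copy()
            for k in range(n):                # subsystems integrate separately
                x[k + 1] = x[k] + dt * (-x[k] + 0.5 * y_old[k])   # uses old y
                y[k + 1] = y[k] + dt * (0.5 * x_old[k] - y[k])    # uses old x
            if max(np.abs(x - x_old).max(), np.abs(y - y_old).max()) < 1e-12:
                break

        print(sweep, x[-1], y[-1])            # sweeps needed, end values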

  19. Three-dimensional nonlinear conjugate gradient parallel inversion with full information of marine magnetotellurics

    Science.gov (United States)

    Zhang, Kun; Yan, Jiayong; Lü, Qingtian; Zhao, Jinhua; Hu, Hao

    2017-04-01

    A new inversion method for marine magnetotellurics is proposed, based on previous studies using the nonlinear conjugate gradient method. A numerical example is used to verify the inversion algorithm and program; the inversion model and its response closely resemble the synthetic model. Several capabilities have been added to the inversion algorithm: a parallel structure, terrain inversion, and static shift correction.

  20. Using Python to Construct a Scalable Parallel Nonlinear Wave Solver

    KAUST Repository

    Mandli, Kyle

    2011-01-01

    Computational scientists seek to provide efficient, easy-to-use tools and frameworks that enable application scientists within a specific discipline to build and/or apply numerical models with up-to-date computing technologies that can be executed on all available computing systems. Although many tools could be useful for groups beyond a specific application, it is often difficult and time consuming to combine existing software, or to adapt it for a more general purpose. Python enables a high-level approach where a general framework can be supplemented with tools written for different fields and in different languages. This is particularly important when a large number of tools are necessary, as is the case for high performance scientific codes. This motivated our development of PetClaw, a scalable distributed-memory solver for time-dependent nonlinear wave propagation, as a case study for how Python can be used as a high-level framework leveraging a multitude of codes, efficient both in the reuse of code and in programmer productivity. We present scaling results for computations on up to four racks of Shaheen, an IBM BlueGene/P supercomputer at King Abdullah University of Science and Technology. One particularly important issue that PetClaw has faced is the overhead associated with dynamic loading, which leads to catastrophic scaling. We solve the issue with the walla library, which supplants high-cost filesystem calls with MPI operations at a low enough level that developers may avoid any changes to their codes.
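
    The general pattern of using Python as a high-level driver for distributed wave propagation can be sketched with mpi4py (this is our own minimal illustration of the idea, not PetClaw's actual API): each rank owns a slice of a 1-D advection grid and exchanges a ghost cell with its neighbour every step.

        # Run with e.g.:  mpiexec -n 4 python advect.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        nloc, c, dt, dx = 100, 1.0, 0.005, 0.01       # local cells, speed, dt, dx
        x = (np.arange(nloc) + rank * nloc) * dx
        q = np.exp(-100 * (x - 0.5) ** 2)             # initial hump

        left, right = (rank - 1) % size, (rank + 1) % size   # periodic ring
        for _ in range(100):
            ghost = np.empty(1)
            # send my last cell to the right, receive my left ghost cell
            comm.Sendrecv(q[-1:], dest=right, recvbuf=ghost, source=left)
            q[1:] -= c * dt / dx * (q[1:] - q[:-1])   # upwind update, interior
            q[0] -= c * dt / dx * (q[0] - ghost[0])   # boundary cell uses ghost

        print(rank, q.sum())                          # mass per rank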

  1. Nonlinearly-constrained optimization using asynchronous parallel generating set search.

    Energy Technology Data Exchange (ETDEWEB)

    Griffin, Joshua D.; Kolda, Tamara Gibson

    2007-05-01

    Many optimization problems in computational science and engineering (CS&E) are characterized by expensive objective and/or constraint function evaluations paired with a lack of derivative information. Direct search methods such as generating set search (GSS) are well understood and efficient for derivative-free optimization of unconstrained and linearly-constrained problems. This paper addresses the more difficult problem of general nonlinear programming where derivatives for objective or constraint functions are unavailable, which is the case for many CS&E applications. We focus on penalty methods that use GSS to solve the linearly-constrained problems, comparing different penalty functions. A classical choice for penalizing constraint violations is $\ell_2^2$, the squared $\ell_2$ norm, which has advantages for derivative-based optimization methods. In our numerical tests, however, we show that exact penalty functions based on the $\ell_1$, $\ell_2$, and $\ell_\infty$ norms converge to good approximate solutions more quickly and thus are attractive alternatives. Unfortunately, exact penalty functions are discontinuous and consequently introduce theoretical problems that degrade the final solution accuracy, so we also consider smoothed variants. Smoothed-exact penalty functions are theoretically attractive because they retain the differentiability of the original problem. Numerically, they are a compromise between exact and $\ell_2^2$, i.e., they converge to a good solution somewhat quickly without sacrificing much solution accuracy. Moreover, the smoothing is parameterized and can potentially be adjusted to balance the two considerations. Since many CS&E optimization problems are characterized by expensive function evaluations, reducing the number of function evaluations is paramount, and the results of this paper show that exact and smoothed-exact penalty functions are well-suited to this task.
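
    The qualitative difference between the quadratic and the exact penalties is easy to reproduce on a toy problem (the one-dimensional example below is our own, not from the report): the $\ell_2^2$ minimizer only approaches the constrained solution as the penalty parameter grows, while the $\ell_1$ penalty recovers it exactly once the parameter exceeds a finite threshold.

        # Toy comparison:  min (x-2)^2  subject to  x <= 1  (solution x* = 1).
        import numpy as np

        def quad_penalty(x, rho):    # l2-squared penalty: smooth, inexact
            return (x - 2) ** 2 + rho * max(0.0, x - 1) ** 2

        def l1_penalty(x, rho):      # exact penalty: nondifferentiable at x = 1
            return (x - 2) ** 2 + rho * max(0.0, x - 1)

        xs = np.linspace(0.0, 2.0, 20001)
        for rho in (1.0, 10.0, 100.0):
            xq = xs[np.argmin([quad_penalty(x, rho) for x in xs])]
            x1 = xs[np.argmin([l1_penalty(x, rho) for x in xs])]
            print(f"rho={rho:6.1f}  quadratic -> {xq:.4f}   l1 -> {x1:.4f}")
        # quadratic: 1.5000, 1.0909, 1.0099 ...   l1: 1.5000, 1.0000, 1.0000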

  2. THE UNCONDITIONAL STABILITY OF PARALLEL DIFFERENCE SCHEMES WITH SECOND ORDER CONVERGENCE FOR NONLINEAR PARABOLIC SYSTEM

    Institute of Scientific and Technical Information of China (English)

    Yuan Guangwei; Sheng Zhiqiang; Hang Xudeng

    2007-01-01

    For solving nonlinear parabolic equations on massively parallel computers, the construction of parallel difference schemes with simple design, high parallelism, unconditional stability and second-order global accuracy in space has long been desired. In the present work, a new kind of general parallel difference scheme for nonlinear parabolic systems is proposed. The general parallel difference schemes include, among others, two new parallel schemes. In one of them, an explicit scheme of Jacobian type is employed to obtain the values on the interface of sub-domains, and the fully implicit scheme is then used within the sub-domains. Here, in the explicit scheme of Jacobian type, the values at the points adjacent to the interface points are taken as a linear combination of the values of the previous two time layers at the adjoining points of the inner interface. For the construction of the other new parallel difference scheme, the main procedure is as follows. First, a linear combination of the values of the previous two time layers at the interface points among the sub-domains is used as the (Dirichlet) boundary condition for solving the sub-domain problems. Then the values in the sub-domains are calculated by the fully implicit scheme. Finally the interface values are computed by the fully implicit scheme; in fact these calculations of the last step are explicit, since the values adjacent to the interface points have been obtained in the previous step. The existence, uniqueness, unconditional stability and second-order accuracy of the discrete vector solutions of the parallel difference schemes are proved. Numerical results are presented to examine the stability, accuracy and parallelism of the schemes.
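
    The following minimal sketch (our own reduction to the 1-D heat equation with a single interface point; all parameter values are illustrative) mimics the second scheme: predict the interface from the two previous time layers, solve each sub-domain implicitly, then correct the interface.

        import numpy as np

        N, steps, nu = 101, 200, 0.25       # grid points, time steps, dt/dx^2
        u_prev = np.sin(np.pi * np.linspace(0, 1, N))   # two previous layers
        u_old = u_prev.copy()
        mid = N // 2                        # the single interface point

        def implicit_block(u_time, lo, hi, left_bc, right_bc):
            """Backward-Euler solve of u_t = u_xx on points lo..hi."""
            m = hi - lo + 1
            A = (np.diag((1 + 2 * nu) * np.ones(m))
                 + np.diag(-nu * np.ones(m - 1), 1)
                 + np.diag(-nu * np.ones(m - 1), -1))
            rhs = u_time[lo:hi + 1].copy()
            rhs[0] += nu * left_bc
            rhs[-1] += nu * right_bc
            return np.linalg.solve(A, rhs)

        for _ in range(steps):
            u_new = np.zeros(N)             # ends stay at the Dirichlet value 0
            # 1) predict the interface from the previous two time layers
            u_new[mid] = 2 * u_old[mid] - u_prev[mid]
            # 2) implicit solves inside each sub-domain (parallelizable)
            u_new[1:mid] = implicit_block(u_old, 1, mid - 1, 0.0, u_new[mid])
            u_new[mid + 1:N - 1] = implicit_block(u_old, mid + 1, N - 2,
                                                  u_new[mid], 0.0)
            # 3) correct the interface implicitly (explicit in practice, since
            #    its neighbours are already known)
            u_new[mid] = (u_old[mid]
                          + nu * (u_new[mid - 1] + u_new[mid + 1])) / (1 + 2 * nu)
            u_prev, u_old = u_old, u_new

        print(u_old.max())                  # decays roughly like exp(-pi^2 t)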

  3. Design and Nonlinear Control of a 2-DOF Flexible Parallel Humanoid Arm Joint Robot

    Directory of Open Access Journals (Sweden)

    Leijie Jiang

    2017-01-01

    Full Text Available The paper focuses on the design and nonlinear control of a humanoid wrist/shoulder joint based on a cable-driven parallel mechanism which can realize roll and pitch movements. Because of the flexible parts in the mechanism, vibration control of the flexible wrist/shoulder joint must be addressed. In this paper, a cable-driven parallel robot platform is developed for the experimental study of the humanoid wrist/shoulder joint, and the dynamic model of the mechanism is formulated by using the coupling theory of a flexible body's large global motion and small flexible deformation. Based on the derived dynamics, anti-vibration control of the joint robot is studied with a nonlinear control method. Finally, simulations and experiments were performed to validate the feasibility of the developed parallel robot prototype and the proposed control scheme.

  4. Large scale cluster computing workshop

    Energy Technology Data Exchange (ETDEWEB)

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors, to be used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes), and by implication to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  5. Large Scale Magnetostrictive Valve Actuator

    Science.gov (United States)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

    Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large-scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware and testing is complete. This paper discusses the potential applications of the technology; gives an overview of the as-built actuator design; describes problems uncovered during development testing; reviews test data and evaluates weaknesses of the design; and discusses areas for improvement for future work. This actuator holds promise as a low-power, high-load, proportionally controlled actuator for valves requiring 440 to 1500 newtons load.

  6. Newton Methods for Large Scale Problems in Machine Learning

    Science.gov (United States)

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  8. Handbook of Large-Scale Random Networks

    CERN Document Server

    Bollobas, Bela; Miklos, Dezso

    2008-01-01

    Covers various aspects of large-scale networks, including mathematical foundations and rigorous results of random graph theory, modeling and computational aspects of large-scale networks, as well as areas in physics, biology, neuroscience, sociology and technical areas

  9. Large-Scale Information Systems

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  10. A nonlinear theory of the parallel firehose and gyrothermal instabilities in a weakly collisional plasma

    CERN Document Server

    Rosin, M S; Rincon, F; Cowley, S C

    2010-01-01

    Plasmas have a natural tendency to develop pressure anisotropies with respect to the local direction of the magnetic field. These anisotropies trigger plasma instabilities at scales just above the ion Larmor radius with growth rates of a fraction of the ion cyclotron frequency - much faster than either the global dynamics or local turbulence. The instabilities can dramatically modify the macroscopic dynamics of the plasma. Nonlinear evolution of these instabilities is expected to drive pressure anisotropies towards marginal stability values, controlled by the plasma beta. This nonlinear evolution is worked out in an ab initio kinetic calculation for the simplest analytically tractable example - the parallel firehose instability in a high-beta plasma. A closed nonlinear equation for the firehose turbulence is derived and solved. In the nonlinear regime, the instability leads to secular (~t) growth of magnetic fluctuations. The fluctuations develop a k^{-3} spectrum, extending from scales somewhat larger than r...

  11. Conundrum of the Large Scale Streaming

    CERN Document Server

    Malm, T M

    1999-01-01

    Within the context of the gravitational instability hypothesis, the etiology of the large-scale peculiar velocity (large-scale streaming motion) of clusters appears increasingly tenuous. Are there any alternative testable models that might account for such large-scale streaming of clusters?

  12. Modeling of fatigue crack induced nonlinear ultrasonics using a highly parallelized explicit local interaction simulation approach

    Science.gov (United States)

    Shen, Yanfeng; Cesnik, Carlos E. S.

    2016-04-01

    This paper presents a parallelized modeling technique for the efficient simulation of nonlinear ultrasonics introduced by wave interaction with fatigue cracks. The elastodynamic wave equations with contact effects are formulated using an explicit Local Interaction Simulation Approach (LISA). The LISA formulation is extended to capture the contact-impact phenomena during wave-damage interaction based on the penalty method. A Coulomb friction model is integrated into the computation procedure to capture stick-slip contact shear motion. The LISA procedure is coded using the Compute Unified Device Architecture (CUDA), which enables highly parallelized supercomputing on powerful graphics cards. Both the explicit contact formulation and the parallel features give LISA superb computational efficiency over the conventional finite element method (FEM). The theoretical formulation based on the penalty method is introduced and a guideline for the proper choice of the contact stiffness is given. The convergence behavior of the solution under various contact stiffness values is examined. A numerical benchmark problem is used to investigate the new LISA formulation and results are compared with a conventional contact finite element solution. Various nonlinear ultrasonic phenomena are successfully captured using this contact LISA formulation, including the generation of nonlinear higher-harmonic responses. Nonlinear mode conversion of guided waves at fatigue cracks is also studied.
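
    The mechanism by which penalty-based contact generates higher harmonics can be reproduced in a deliberately tiny model (a 1-D mass-spring chain of our own devising, not the paper's LISA code): the crack spring acts only in compression, so a harmonic excitation is rectified as the crack opens and closes.

        import numpy as np

        n, k, m, dt = 200, 1.0, 1.0, 0.05
        kc = 5.0 * k                       # penalty (contact) stiffness at crack
        crack = n // 2
        u, v = np.zeros(n), np.zeros(n)
        rec = []                           # "sensor" time series past the crack

        for step in range(8000):
            ext = np.diff(u)               # spring extensions
            fs = k * ext                   # ordinary linear springs ...
            gap = ext[crack]
            # ... but the crack spring resists compression only (contact)
            fs[crack] = kc * gap if gap < 0.0 else 0.0
            f = np.zeros(n)
            f[:-1] += fs
            f[1:] -= fs
            f[0] += 0.1 * np.sin(0.5 * step * dt)        # harmonic excitation
            v += dt * f / m                # symplectic Euler update
            u += dt * v
            rec.append(u[crack + 20])

        spec = np.abs(np.fft.rfft(rec))
        b = int(round(0.5 / (2 * np.pi) * len(rec) * dt))  # drive-frequency bin
        print(spec[b], spec[2 * b], spec[3 * b])   # higher harmonics appear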

  13. Numerical characterization of nonlinear dynamical systems using parallel computing: The role of GPUS approach

    Science.gov (United States)

    Fazanaro, Filipe I.; Soriano, Diogo C.; Suyama, Ricardo; Madrid, Marconi K.; Oliveira, José Raimundo de; Muñoz, Ignacio Bravo; Attux, Romis

    2016-08-01

    The characterization of nonlinear dynamical systems and their attractors in terms of invariant measures, basins of attraction and the structure of their vector fields is usually a task strongly tied to the underlying computational cost. In this work, the practical aspects related to the use of parallel computing - especially the use of Graphics Processing Units (GPUs) and of the Compute Unified Device Architecture (CUDA) - are reviewed and discussed in the context of nonlinear dynamical systems characterization. Such characterization is performed here by obtaining both local and global Lyapunov exponents for the classical forced Duffing oscillator. The local divergence measure was employed in the computation of the Lagrangian Coherent Structures (LCSs), revealing the general organization of the flow according to the obtained separatrices, while the global Lyapunov exponents were used to characterize the attractors obtained under one or more bifurcation parameters. These simulation sets also illustrate the required computation time and the speedup gains provided by different parallel computing strategies, justifying the employment and the relevance of GPUs and CUDA in such extensive numerical approaches. Finally, more than simply providing an overview supported by a representative set of simulations, this work also aims to be a unified introduction to the use of the mentioned parallel computing tools in the context of nonlinear dynamical systems, providing codes and examples to be executed in MATLAB and in the CUDA environment, something that is usually fragmented across different scientific communities and restricted to specialists in parallel computing strategies.
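
    As a CPU-side reference point, the largest Lyapunov exponent of the forced Duffing oscillator can be estimated with the classical two-trajectory renormalization scheme; the sketch below (parameter values and step counts are our own illustrative choices) is exactly the kind of per-trajectory workload that such GPU approaches map onto threads.

        # Benettin-style estimate for  x'' + d x' + b x + a x^3 = g cos(w t).
        import numpy as np

        a, b, d, g, w = 1.0, -1.0, 0.3, 0.5, 1.2

        def rhs(s, t):
            x, v = s
            return np.array([v, -d * v - b * x - a * x**3 + g * np.cos(w * t)])

        def rk4(s, t, h):
            k1 = rhs(s, t); k2 = rhs(s + 0.5 * h * k1, t + 0.5 * h)
            k3 = rhs(s + 0.5 * h * k2, t + 0.5 * h); k4 = rhs(s + h * k3, t + h)
            return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        h, d0, total, nsteps = 0.01, 1e-8, 0.0, 100000
        s = np.array([0.1, 0.0])
        p = s + np.array([d0, 0.0])              # shadow trajectory
        for i in range(nsteps):
            t = i * h
            s, p = rk4(s, t, h), rk4(p, t, h)
            dist = np.linalg.norm(p - s)
            total += np.log(dist / d0)
            p = s + (p - s) * d0 / dist          # renormalize the separation
        print("largest Lyapunov exponent ~", total / (nsteps * h))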

  14. Global Exponential Stability for a Class of Stochastic Nonlinear Interconnected Large-Scale Systems

    Institute of Scientific and Technical Information of China (English)

    施继忠; 张继业

    2012-01-01

    In order to study the effects of random errors caused by vehicle modeling on interconnected large-scale systems such as automated highway vehicle systems, the deterministic theory was extended to the stochastic case. Using M-matrix theory together with this stochastic extension, a suitable vector Lyapunov function is constructed. By analyzing the stability of the corresponding stochastic differential inequalities, a judgment matrix is built from the coefficient matrices of the stochastic large-scale system and the solutions of the Lyapunov matrix equations associated with the large-scale system. This yields a sufficient criterion for global exponential stability of this class of large-scale systems: when the judgment matrix is an M-matrix, the large-scale system is globally exponentially stable. Simulation results show that the proposed algorithm converges rapidly, with the system state reaching stability within 20 s.
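
    The criterion reduces, in practice, to an M-matrix test on the judgment matrix. A minimal sketch of such a test (the 3x3 matrix below is invented for illustration): a Z-matrix, i.e. one with non-positive off-diagonal entries, is a nonsingular M-matrix exactly when all of its leading principal minors are positive.

        import numpy as np

        def is_m_matrix(J):
            J = np.asarray(J, dtype=float)
            off = J - np.diag(np.diag(J))
            if (off > 0).any():              # must be a Z-matrix first
                return False
            return all(np.linalg.det(J[:k, :k]) > 0
                       for k in range(1, len(J) + 1))

        J = np.array([[ 2.0, -0.5, -0.3],
                      [-0.4,  1.5, -0.2],
                      [-0.1, -0.6,  1.0]])
        print(is_m_matrix(J))   # True -> the stability criterion is satisfied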

  15. Comparative efficiencies of three parallel algorithms for nonlinear implicit transient dynamic analysis

    Indian Academy of Sciences (India)

    A Rama Mohan Rao; T V S R Appa Rao; B Dattaguru

    2004-02-01

    The work reported in this paper is motivated by the need to develop portable parallel processing algorithms and codes which can run on a variety of hardware platforms without any modifications. The prime aim of the research work reported here is to test the portability of the parallel algorithms and also to study and understand the comparative efficiencies of three parallel algorithms developed for implicit time integration technique. The standard message passing interface (MPI) is used to develop parallel algorithms for computing nonlinear dynamic response of large structures employing implicit time-marching scheme. The parallel algorithms presented in this paper are developed under the broad framework of non-overlapped domain decomposition technique. Numerical studies indicate that the parallel algorithm devised employing the conventional form of Newmark time integration algorithm is faster than the predictor–corrector form. It is also accurate and highly adaptive to fine grain computations. The group implicit algorithm is found to be extremely superior in performance when compared to the other two parallel algorithms. This algorithm is better suited for large size problems on coarse grain environment as the resulting submeshes will obviously be large and thus permit larger time steps without losing accuracy.
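
    The "conventional form" of the implicit Newmark algorithm that the study found fastest boils down to one effective linear solve per time step. A compact two-degree-of-freedom sketch of that step (small stand-in matrices; the average-acceleration parameters beta = 1/4, gamma = 1/2 are the usual unconditionally stable choice):

        # One Newmark step for  M a + C v + K u = f,  repeated over time.
        import numpy as np

        beta, gamma, dt = 0.25, 0.5, 0.01
        M = np.eye(2); C = 0.1 * np.eye(2)
        K = np.array([[4.0, -1.0], [-1.0, 3.0]])
        u = np.array([1.0, 0.0]); v = np.zeros(2)
        a = np.linalg.solve(M, -C @ v - K @ u)  # consistent initial acceleration

        Keff = M / (beta * dt**2) + gamma / (beta * dt) * C + K
        for _ in range(1000):
            f = np.zeros(2)                     # no external load here
            rhs = (f
                   + M @ (u / (beta * dt**2) + v / (beta * dt)
                          + (0.5 / beta - 1) * a)
                   + C @ (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                          + dt * (gamma / (2 * beta) - 1) * a))
            u_new = np.linalg.solve(Keff, rhs)  # the per-step implicit solve
            a_new = ((u_new - u) / (beta * dt**2)
                     - v / (beta * dt) - (0.5 / beta - 1) * a)
            v = v + dt * ((1 - gamma) * a + gamma * a_new)
            u, a = u_new, a_new
        print(u)                                # damped oscillation decays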

  16. A NEW APPROACH TO THE NONLINEAR STABILITY OF PARALLEL SHEAR FLOWS

    Institute of Scientific and Technical Information of China (English)

    XU Lan-xi; HUANG Yong-nian

    2005-01-01

    Lyapunov's second method was used to study the nonlinear stability of parallel shear flows for stress-free boundaries. By introducing an energy functional, it was shown that the plane Couette and plane Poiseuille flows are conditionally and asymptotically stable for all Reynolds numbers. In particular, for two-dimensional perturbations, the unconditional stability of the basic flows was proved by defining new energy functionals.

  17. Parallel implementation of linear and nonlinear spectral unmixing of remotely sensed hyperspectral images

    Science.gov (United States)

    Plaza, Antonio; Plaza, Javier

    2011-11-01

    Hyperspectral unmixing is a very important task for remotely sensed hyperspectral data exploitation. It addresses the (possibly) mixed nature of pixels collected by instruments for Earth observation, which arises from several phenomena, including limited spatial resolution and the presence of mixing effects at different scales. Spectral unmixing involves the separation of a mixed pixel spectrum into its pure component spectra (called endmembers) and the estimation of the proportion (abundance) of each endmember in the pixel. Two models have been widely used in the literature in order to address the mixture problem in hyperspectral data. The linear model assumes that the endmember substances sit side-by-side within the field of view of the imaging instrument. The nonlinear mixture model, on the other hand, assumes nonlinear interactions between endmember substances. Both techniques can be computationally expensive, in particular for high-dimensional hyperspectral data sets. In this paper, we develop and compare parallel implementations of linear and nonlinear unmixing techniques for remotely sensed hyperspectral data. For the linear model, we adopt a parallel unsupervised processing chain made up of two steps: i) identification of pure spectral materials or endmembers, and ii) estimation of the abundance of each endmember in each pixel of the scene. For the nonlinear model, we adopt a supervised procedure based on the training of a parallel multi-layer perceptron neural network using intelligently selected training samples, also derived in parallel fashion. The compared techniques are experimentally validated using hyperspectral data collected at different altitudes over a so-called Dehesa (semi-arid environment) in Extremadura, Spain, and evaluated in terms of computational performance using high performance computing systems such as commodity Beowulf clusters.
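
    Step ii) of the linear chain, per-pixel abundance estimation given known endmembers, is commonly posed as nonnegative least squares; a minimal per-pixel sketch (random placeholder spectra and our own choice of scipy's NNLS solver, not the paper's parallel implementation) looks as follows.

        import numpy as np
        from scipy.optimize import nnls

        bands, n_end = 50, 3
        rng = np.random.default_rng(0)
        E = rng.random((bands, n_end))        # endmember signatures (columns)

        true_ab = np.array([0.6, 0.3, 0.1])   # ground-truth abundances
        pixel = E @ true_ab + 0.001 * rng.standard_normal(bands)

        ab, _ = nnls(E, pixel)                # nonnegativity-constrained fit
        ab /= ab.sum()                        # enforce sum-to-one
        print(np.round(ab, 3))                # ~ [0.6, 0.3, 0.1]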

  18. Weak-periodic stochastic resonance in a parallel array of static nonlinearities.

    Directory of Open Access Journals (Sweden)

    Yumei Ma

    Full Text Available This paper studies the output-input signal-to-noise ratio (SNR) gain of an uncoupled parallel array of static, yet arbitrary, nonlinear elements for transmitting a weak periodic signal in additive white noise. In the small-signal limit, an explicit expression for the SNR gain is derived. It is used to prove that the SNR gain is always a monotonically increasing function of the array size for any given nonlinearity and noisy environment. It also determines the SNR gain maximized by the locally optimal nonlinearity as the upper bound of the SNR gain achieved by an array of static nonlinear elements. With the locally optimal nonlinearity, it is demonstrated that stochastic resonance cannot occur, i.e. adding internal noise to the array never improves the SNR gain. However, in an array of suboptimal but easily implemented threshold nonlinearities, we show the feasibility of situations where stochastic resonance occurs, and also the possibility of the SNR gain exceeding unity for a wide range of input noise distributions.
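
    The monotonic growth of the SNR gain with array size is simple to observe in a Monte-Carlo sketch of the threshold array (the signal, noise levels and spectral SNR estimate below are our own toy choices):

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.arange(20000) * 0.01
        signal = 0.1 * np.sin(2 * np.pi * 0.5 * t)        # weak periodic input
        x = signal + 0.5 * rng.standard_normal(t.size)    # plus input noise

        def array_output(x, N, sigma_int):
            """N parallel threshold elements with independent internal noise."""
            acc = np.zeros_like(x)
            for _ in range(N):
                acc += np.sign(x + sigma_int * rng.standard_normal(x.size))
            return acc / N

        def snr(y):                            # power at the signal bin vs floor
            Y = np.abs(np.fft.rfft(y - y.mean())) ** 2
            b = int(round(0.5 * t.size * 0.01))            # bin of f0 = 0.5 Hz
            return Y[b] / np.median(Y)

        for N in (1, 4, 16, 64):
            print(N, round(snr(array_output(x, N, 0.3)), 1))
        # The output SNR grows monotonically with the array size N.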

  19. Large-scale tides in general relativity

    Science.gov (United States)

    Ip, Hiu Yan; Schmidt, Fabian

    2017-02-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the "separate universe" paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.

  20. Modified Augmented Lagrange Multiplier Methods for Large-Scale Chemical Process Optimization

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Chemical process optimization can be described as large-scale nonlinear constrained minimization. The modified augmented Lagrange multiplier methods (MALMM) for large-scale nonlinear constrained minimization are studied in this paper. The Lagrange function contains the penalty terms on equality and inequality constraints and the methods can be applied to solve a series of bound constrained sub-problems instead of a series of unconstrained sub-problems. The steps of the methods are examined in full detail. Numerical experiments are made for a variety of problems, from small to very large-scale, which show the stability and effectiveness of the methods in large-scale problems.
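
    The augmented-Lagrangian pattern underlying such methods, a sequence of bound-constrained subproblems with multiplier updates in between, fits in a few lines on a toy problem (the problem, the L-BFGS-B subproblem solver and all parameters below are our own illustrative choices, not the paper's):

        # min x0^2 + x1^2   subject to   x0 + x1 = 1,  x >= 0.
        import numpy as np
        from scipy.optimize import minimize

        def aug_lag(x, lam, rho):
            c = x[0] + x[1] - 1.0              # equality-constraint residual
            return x[0]**2 + x[1]**2 - lam * c + 0.5 * rho * c**2

        x, lam, rho = np.array([0.0, 0.0]), 0.0, 10.0
        for _ in range(20):
            res = minimize(aug_lag, x, args=(lam, rho), method="L-BFGS-B",
                           bounds=[(0.0, None)] * 2)  # bound-constrained solve
            x = res.x
            lam -= rho * (x[0] + x[1] - 1.0)   # first-order multiplier update
        print(np.round(x, 4))                  # -> [0.5, 0.5]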

  1. A relativistic signature in large-scale structure

    Science.gov (United States)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  2. Massively parallel-in-space-time, adaptive finite element framework for non-linear parabolic equations

    CERN Document Server

    Dyja, Robert; van der Zee, Kristoffer G

    2016-01-01

    We present an adaptive methodology for the solution of (linear and) non-linear time dependent problems that is especially tailored for massively parallel computations. The basic concept is to solve for large blocks of space-time unknowns instead of marching sequentially in time. The methodology is a combination of a computationally efficient implementation of a parallel-in-space-time finite element solver coupled with a posteriori space-time error estimates and a parallel mesh generator. This methodology enables, in principle, simultaneous adaptivity in both the space and time (within the block) domains. We explore this basic concept in the context of a variety of time-steppers, including $\Theta$-schemes and Backward Differentiation Formulas. We specifically illustrate this framework with applications involving time dependent linear, quasi-linear and semi-linear diffusion equations. We focus on investigating how the coupled space-time refinement indicators for this class of problems affect spatial adaptivity. Final...

  3. Internet-based parallel programming environment for large-scale computing resource sharing

    Institute of Scientific and Technical Information of China (English)

    袁静珍

    2010-01-01

    We construct IPPE (Internet-based Parallel Programming Environment), a parallel programming environment for sharing computing resources over the Internet. The environment is developed in Java; by exploiting the Java runtime system and a Java parallel communication class library, Java applications with parallel processing capability can be written under IPPE. IPPE is platform independent, easy to use, load balancing, and fault tolerant. Its platform independence and ease of use derive from its Java bytecode technology and object serialization technology; its load balancing derives from an adaptive task-oriented parallel scheduling algorithm together with a subtask-level fault-tolerance strategy. Running two typical parallel benchmark programs demonstrates the efficiency and stability of the IPPE environment.

  4. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous

  5. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  7. Stability and Control of Large-Scale Dynamical Systems A Vector Dissipative Systems Approach

    CERN Document Server

    Haddad, Wassim M

    2011-01-01

    Modern complex large-scale dynamical systems exist in virtually every aspect of science and engineering, and are associated with a wide variety of physical, technological, environmental, and social phenomena, including aerospace, power, communications, and network systems, to name just a few. This book develops a general stability analysis and control design framework for nonlinear large-scale interconnected dynamical systems, and presents the most complete treatment on vector Lyapunov function methods, vector dissipativity theory, and decentralized control architectures. Large-scale dynami

  8. Nonlinear Elastodynamic Behaviour Analysis of High-Speed Spatial Parallel Coordinate Measuring Machines

    Directory of Open Access Journals (Sweden)

    Xiulong Chen

    2012-10-01

    Full Text Available In order to study the elastodynamic behaviour of 4-UPS-UPU (universal joint-prismatic pair-spherical joint / universal joint-prismatic pair-universal joint) high-speed spatial PCMMs (parallel coordinate measuring machines), a nonlinear time-varying dynamics model, which comprehensively considers geometric nonlinearity and the rigid-flexible coupling effect, is derived by using Lagrange equations and finite element methods. Based on the Newmark method, the kinematic output response of the 4-UPS-UPU PCMM is illustrated through numerical simulation. The results of the simulation show that the flexibility of the links has a significant impact on the system dynamics response. This research provides an important theoretical basis for the optimization design and vibration control of 4-UPS-UPU PCMMs.

  9. Nonlinear Memory Capacity of Parallel Time-Delay Reservoir Computers in the Processing of Multidimensional Signals.

    Science.gov (United States)

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2016-07-01

    This letter addresses the reservoir design problem in the context of delay-based reservoir computers for multidimensional input signals, parallel architectures, and real-time multitasking. First, an approximating reservoir model is presented in those frameworks that provides an explicit functional link between the reservoir architecture and its performance in the execution of a specific task. Second, the inference properties of the ridge regression estimator in the multivariate context are used to assess the impact of finite sample training on the decrease of the reservoir capacity. Finally, an empirical study is conducted that shows the adequacy of the theoretical results with the empirical performances exhibited by various reservoir architectures in the execution of several nonlinear tasks with multidimensional inputs. Our results confirm the robustness properties of the parallel reservoir architecture with respect to task misspecification and parameter choice already documented in the literature.
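
    The readout stage these capacity results revolve around is a plain ridge regression from reservoir states to targets. The sketch below uses a generic echo-state recurrence as a stand-in for the paper's time-delay reservoir (network size, scaling and the memory task are our own choices):

        import numpy as np

        rng = np.random.default_rng(2)
        n_res, T = 100, 2000
        W = rng.standard_normal((n_res, n_res))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # echo-state scaling
        w_in = rng.standard_normal(n_res)

        u = rng.uniform(-1, 1, T)                 # input signal
        target = np.roll(u, 3) * u                # a nonlinear memory task
        X = np.zeros((T, n_res))
        x = np.zeros(n_res)
        for k in range(T):
            x = np.tanh(W @ x + w_in * u[k])      # reservoir state update
            X[k] = x

        lam = 1e-6                                # ridge regularization
        w_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ target)
        pred = X @ w_out
        print(np.corrcoef(pred[100:], target[100:])[0, 1])  # readout accuracy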

  10. Efficient high-precision matrix algebra on parallel architectures for nonlinear combinatorial optimization

    KAUST Repository

    Gunnels, John

    2010-06-01

    We provide a first demonstration of the idea that matrix-based algorithms for nonlinear combinatorial optimization problems can be efficiently implemented. Such algorithms were mainly conceived by theoretical computer scientists for proving efficiency. We are able to demonstrate the practicality of our approach by developing an implementation on a massively parallel architecture, and exploiting scalable and efficient parallel implementations of algorithms for ultra high-precision linear algebra. Additionally, we have delineated and implemented the necessary algorithmic and coding changes required in order to address problems several orders of magnitude larger, dealing with the limits of scalability from memory footprint, computational efficiency, reliability, and interconnect perspectives. © Springer and Mathematical Programming Society 2010.
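
    Why ultra high precision matters can be seen even without the paper's parallel kernels: Python's exact rationals solve a notoriously ill-conditioned Hilbert system with zero roundoff, where double precision already loses many digits. This sketch is purely illustrative of the idea, not of the paper's implementation.

        from fractions import Fraction

        n = 8
        A = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
        b = [sum(row) for row in A]          # exact solution is all ones

        # Gaussian elimination carried out over the rationals
        for k in range(n):
            for i in range(k + 1, n):
                mult = A[i][k] / A[k][k]
                for j in range(k, n):
                    A[i][j] -= mult * A[k][j]
                b[i] -= mult * b[k]

        x = [Fraction(0)] * n                # back substitution
        for i in reversed(range(n)):
            s = sum(A[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (b[i] - s) / A[i][i]
        print(x)                             # exactly [1, 1, ..., 1]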

  11. A massively parallel GPU-accelerated model for analysis of fully nonlinear free surface waves

    DEFF Research Database (Denmark)

    Engsig-Karup, Allan Peter; Madsen, Morten G.; Glimberg, Stefan Lemvig

    2011-01-01

    ...high-throughput co-processors to the CPU. We describe and demonstrate how this approach makes it possible to do fast desktop computations for large nonlinear wave problems in numerical wave tanks (NWTs) with close to 50/100 million total grid points in double/single precision with 4 GB global device memory ... space dimensions, and is useful for fast analysis and prediction purposes in coastal and offshore engineering. A dedicated numerical model based on the proposed algorithm is executed in parallel by utilizing an affordable modern special-purpose graphics processing unit (GPU). The model is based on a low...

  12. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continues to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  13. Network robustness under large-scale attacks

    CERN Document Server

    Zhou, Qing; Liu, Ruifang; Cui, Shuguang

    2014-01-01

    Network Robustness under Large-Scale Attacks provides the analysis of network robustness under attacks, with a focus on large-scale correlated physical attacks. The book begins with a thorough overview of the latest research and techniques to analyze the network responses to different types of attacks over various network topologies and connection models. It then introduces a new large-scale physical attack model coined as area attack, under which a new network robustness measure is introduced and applied to study the network responses. With this book, readers will learn the necessary tools to evaluate how a complex network responds to random and possibly correlated attacks.

  14. Decentralized Iterative Learning Controllers for Nonlinear Large-scale Systems to Track Trajectories with Different Magnitudes

    Institute of Scientific and Technical Information of China (English)

    阮小娥; 陈凤敏; 万百五

    2008-01-01

    In hierarchical steady-state optimization for large-scale industrial processes, a feasible technique is to use information from the real system so as to modify the model-based optimum. In this circumstance, a sequence of step-function-type control decisions with distinct magnitudes is computed, by which the real system is stimulated consecutively. In this paper, a set of iterative learning controllers is embedded into the procedure of hierarchical steady-state optimization in decentralized mode for a class of large-scale nonlinear industrial processes. The controller for each subsystem is used to generate a sequence of upgraded control signals so as to take responsibility for the sequential step control decisions with distinct scales. The aim of the learning control design is to consecutively refine the transient performance of the system. By means of the Hausdorff-Young inequality for convolution integrals, the convergence of the updating rule is analyzed in the sense of the Lebesgue-p norm. The influence of the nonlinearity and of the interconnections on convergence is discussed. The validity and effectiveness of the proposed control scheme are demonstrated by simulations.
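
    The core of any such scheme is the trial-to-trial learning update. A deliberately minimal single-subsystem sketch (our own P-type update, first-order nonlinear plant and gain choice, not the paper's decentralized design):

        import numpy as np

        T, dt, gamma = 200, 0.01, 1.0
        ref = np.ones(T)                     # step trajectory to track
        u = np.zeros(T)

        def run_plant(u):
            """First-order nonlinear plant x' = -x - 0.2 x^3 + u,  y = x."""
            y = np.zeros(T); x = 0.0
            for k in range(T):
                x += dt * (-x - 0.2 * x**3 + u[k])
                y[k] = x
            return y

        for trial in range(1000):
            y = run_plant(u)
            e = ref - y
            u = u + gamma * e                # learning update between trials
        print(np.abs(e).max())               # tracking error shrinks over trials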

  15. Running Large-Scale Air Pollution Models on Parallel Computers

    DEFF Research Database (Denmark)

    Georgiev, K.; Zlatev, Z.

    2000-01-01

    Proceedings of the 23rd NATO/CCMS International Technical Meeting on Air Pollution Modeling and Its Application, held 28 September - 2 October 1998, in Varna, Bulgaria.

  16. Debugging and Analysis of Large-Scale Parallel Programs

    Science.gov (United States)

    1989-09-01

    [The available excerpt of this abstract is garbled in the source record; it enumerates formal conditions under which two accesses a1 and a2 to the same memory location m are ordered by the mapping M across trace intervals.] In addition to being independent of a particular protocol, our synchronization tracing technique does not rely on a particular...

  17. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-01-01

    Earthquakes are one of the most destructive natural hazards on our planet Earth. Huge earthquakes striking offshore may cause devastating tsunamis, as evidenced by the 11 March 2011 Japan (moment magnitude Mw9.0) and the 26 December 2004 Sumatra (Mw9.1) earthquakes. Earthquake prediction (in terms of the precise time, place, and magnitude of a coming earthquake) is arguably unfeasible in the foreseeable future. To mitigate seismic hazards from future earthquakes in earthquake-prone areas, such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over the past several decades. In particular, ground motion simulations for past and future (possible) significant earthquakes have been performed to understand factors that affect ground shaking in populated areas, and to provide ground shaking characteristics and synthetic seismograms for emergency preparation and design of earthquake-resistant structures. These simulation results can guide the development of more rational seismic provisions, leading to safer, more efficient, and economical structures in earthquake-prone regions.

  18. Large scale three-dimensional topology optimisation of heat sinks cooled by natural convection

    CERN Document Server

    Alexandersen, Joe; Aage, Niels

    2015-01-01

    This work presents the application of density-based topology optimisation to the design of three-dimensional heat sinks cooled by natural convection. The governing equations are the steady-state incompressible Navier-Stokes equations coupled to the thermal convection-diffusion equation through the Boussinesq approximation. The fully coupled non-linear multiphysics system is solved using stabilised trilinear equal-order finite elements in a parallel framework, allowing for the optimisation of large scale problems with on the order of 40-330 million state degrees of freedom. The flow is assumed to be laminar and several optimised designs are presented for Grashof numbers between $10^3$ and $10^6$. Interestingly, it is observed that the number of branches in the optimised design increases with increasing Grashof number, which is opposite to two-dimensional optimised designs.

  19. Large Scale Metal Additive Techniques Review

    Energy Technology Data Exchange (ETDEWEB)

    Nycz, Andrzej [ORNL; Adediran, Adeola I [ORNL; Noakes, Mark W [ORNL; Love, Lonnie J [ORNL

    2016-01-01

    In recent years additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with its large-scale polymer counterpart. This paper is a review study of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet on the polymer side. In order to follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post-processing, as well as potential applications. This paper focuses on the current state of the art of large-scale metal additive technology with a focus on expanding the geometric limits.

  20. Sensitivity technologies for large scale simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias (Rice University, Houston, TX); Wilcox, Lucas C. (Brown University, Providence, RI); Hill, Judith C. (Carnegie Mellon University, Pittsburgh, PA); Ghattas, Omar (Carnegie Mellon University, Pittsburgh, PA); Berggren, Martin Olof (University of UppSala, Sweden); Akcelik, Volkan (Carnegie Mellon University, Pittsburgh, PA); Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    ...order approximation of the Euler equations and used as a preconditioner. In comparison to other methods, the AD preconditioner showed better convergence behavior. Our ultimate target is to perform shape optimization and hp-adaptivity using adjoint formulations in the Premo compressible fluid flow simulator. A mathematical formulation for mixed-level simulation algorithms has been developed in which different physics interact at potentially different spatial resolutions in a single domain. To minimize the implementation effort, explicit solution methods can be considered; however, implicit methods are preferred if computational efficiency is of high priority. We present the use of a partial elimination nonlinear solver technique to solve these mixed-level problems and show how these formulations are closely coupled to intrusive optimization approaches and sensitivity analyses. Production codes are typically not designed for sensitivity analysis or large-scale optimization. The implementation of our optimization libraries into multiple production simulation codes, each with its own linear algebra interface, becomes an intractable problem. In an attempt to streamline this task, we have developed a standard interface between the numerical algorithm (such as optimization) and the underlying linear algebra. These interfaces (TSFCore and TSFCoreNonlin) have been adopted by the Trilinos framework, and the goal is to promote the use of these interfaces especially with new developments. Finally, an adjoint-based a posteriori error estimator has been developed for discontinuous Galerkin discretization of Poisson's equation. The goal is to investigate other ways to leverage the adjoint calculations, and we show how the convergence of the forward problem can be improved by adapting the grid using adjoint-based error estimates. Error estimation is usually conducted with continuous adjoints, but if discrete adjoints are available it may be possible to reuse the discrete...

  2. Nonlinear and parallel algorithms for finite element discretizations of the incompressible Navier-Stokes equations

    Science.gov (United States)

    Arteaga, Santiago Egido

    1998-12-01

    The steady-state Navier-Stokes equations are of considerable interest because they are used to model numerous common physical phenomena. The applications encountered in practice often involve small viscosities and complicated domain geometries, and they result in challenging problems in spite of the vast attention that has been dedicated to them. In this thesis we examine methods for computing the numerical solution of the primitive variable formulation of the incompressible equations on distributed memory parallel computers. We use the Galerkin method to discretize the differential equations, although most results are stated so that they apply also to stabilized methods. We also reformulate some classical results in a single framework and discuss some issues frequently dismissed in the literature, such as the implementation of pressure space bases and non-homogeneous boundary values. We consider three nonlinear methods: Newton's method, Oseen's (or Picard) iteration, and sequences of Stokes problems. All these iterative nonlinear methods require solving a linear system at every step. Newton's method has quadratic convergence while that of the others is only linear; however, we obtain theoretical bounds showing that Oseen's iteration is more robust, and we confirm this experimentally. In addition, although Oseen's iteration usually requires more iterations than Newton's method, the linear systems it generates tend to be simpler and its overall costs (in CPU time) are lower. The Stokes problems result in linear systems which are easier to solve, but their convergence is much slower, so that they are competitive only for large viscosities. Inexact versions of these methods are studied, and we explain why the best timings are obtained using relatively modest error tolerances in solving the corresponding linear systems. We also present a new damping optimization strategy based on the quadratic nature of the Navier-Stokes equations, which improves the robustness of all the...
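
    The robustness-versus-speed trade-off between Newton and Picard-type iterations is easy to reproduce on a scalar stand-in (our own toy equation, not the thesis's discretized Navier-Stokes): Newton contracts quadratically near the root, while a damped Picard update converges only linearly but tolerates crude iterates.

        # Solve f(x) = x^2 + x - 1 = 0, positive root (sqrt(5)-1)/2 ~ 0.618.
        import numpy as np

        root = (np.sqrt(5) - 1) / 2

        x = 10.0                         # Newton iteration
        for k in range(50):
            x = x - (x**2 + x - 1) / (2 * x + 1)
            if abs(x - root) < 1e-14:
                break
        print("Newton:", k + 1, "iterations")

        # Undamped Picard x <- 1 - x^2 diverges from x = 10; a damped
        # (relaxed) update restores robustness at the price of a linear rate.
        x = 10.0
        for k in range(300):
            x = 0.9 * x + 0.1 * (1 - x**2)
            if abs(x - root) < 1e-10:
                break
        print("damped Picard:", k + 1, "iterations")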

  3. Research on Blade Pitch Nonlinear PID Control for a Large-Scale Wind Turbine under Constant Power

    Institute of Scientific and Technical Information of China (English)

    郑黎明; 林宇; 陈严; 吴捷

    2012-01-01

    A pitch regulation model based on power/pitch sensitivity was established, and a nonlinear PID pitch control method was derived from it. The sensitivity of the power variation at different pitch angles was computed with the Bladed software; using Bladed external controllers, the system outputs of the proposed and the conventional method were compared, analyzed and summarized, which benefits wind turbine control design.
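
    A minimal sketch of the gain-scheduling idea follows: the PID gains are rescaled by the inverse of the power/pitch sensitivity so that the loop gain stays roughly constant across operating points. The sensitivity table, gains and time step are invented placeholders, not values computed with Bladed.

```python
import numpy as np

# Gain-scheduled ("nonlinear") PID pitch controller sketch: gains scale with
# the inverse of the power/pitch sensitivity dP/dbeta at the current pitch.
class NonlinearPitchPID:
    def __init__(self, kp, ki, kd, pitch_grid, dP_dbeta, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.pitch_grid, self.dP_dbeta = pitch_grid, dP_dbeta
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, power_error, pitch):
        # interpolate sensitivity at current pitch; larger |dP/dbeta| -> smaller gains
        s = np.interp(pitch, self.pitch_grid, self.dP_dbeta)
        scale = self.dP_dbeta[0] / s
        self.integral += power_error * self.dt
        deriv = (power_error - self.prev_err) / self.dt
        self.prev_err = power_error
        return scale * (self.kp * power_error + self.ki * self.integral + self.kd * deriv)

# hypothetical sensitivity curve: pitch (deg) vs dP/dbeta (kW/deg)
ctrl = NonlinearPitchPID(kp=0.01, ki=0.002, kd=0.0,
                         pitch_grid=[0, 5, 10, 15, 20],
                         dP_dbeta=[-60, -90, -130, -180, -240], dt=0.05)
print(ctrl.step(power_error=50.0, pitch=8.0))   # pitch demand increment
```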

  4. The new discussion of a neutrino mass and issues in the formation of large-scale structure

    Science.gov (United States)

    Melott, Adrian L.

    1991-01-01

    It is argued that the large-scale structure predicted by cosmological models with neutrino mass (hot dark matter) does not differ drastically from the observed structure. Evidence from the correlation amplitude, nonlinearity and the onset of galaxy formation, large-scale streaming velocities, and the topology of large-scale structure is considered. Hot dark matter models seem to be as accurate predictors of the large-scale structure as are cold dark matter models.

  5. Constraining cosmological ultra-large scale structure using numerical relativity

    CERN Document Server

    Braden, Jonathan; Peiris, Hiranya V; Aguirre, Anthony

    2016-01-01

    Cosmic inflation, a period of accelerated expansion in the early universe, can give rise to large amplitude ultra-large scale inhomogeneities on distance scales comparable to or larger than the observable universe. The cosmic microwave background (CMB) anisotropy on the largest angular scales is sensitive to such inhomogeneities and can be used to constrain the presence of ultra-large scale structure (ULSS). We numerically evolve nonlinear inhomogeneities present at the beginning of inflation in full General Relativity to assess the CMB quadrupole constraint on the amplitude of the initial fluctuations and the size of the observable universe relative to a length scale characterizing the ULSS. To obtain a statistically significant number of simulations, we adopt a toy model in which inhomogeneities are injected along a preferred direction. We compute the likelihood function for the CMB quadrupole including both ULSS and the standard quantum fluctuations produced during inflation. We compute the posterior given...

  6. Imaging HF-induced large-scale irregularities above HAARP

    Science.gov (United States)

    Djuth, Frank T.; Reinisch, Bodo W.; Kitrosser, David F.; Elder, John H.; Snyder, A. Lee; Sales, Gary S.

    2006-02-01

    The University of Massachusetts-Lowell digisonde is used with the HAARP high-frequency (HF) ionospheric modification facility to obtain radio images of artificially produced, large-scale, geomagnetic field-aligned irregularities. F region irregularities generated with the HAARP beam pointed in the vertical and geomagnetic field-aligned directions are examined in a smooth background plasma. It is found that limited large-scale irregularity production takes place with vertical transmissions, whereas there is a dramatic increase in the number of source irregularities with the beam pointed parallel to the geomagnetic field. Strong irregularity production appears to be confined to within ~5° of the geomagnetic zenith and does not fill the volume occupied by the HF beam. A similar effect is observed in optical images of artificial airglow.

  7. In the fast lane: large-scale bacterial genome engineering.

    Science.gov (United States)

    Fehér, Tamás; Burland, Valerie; Pósfai, György

    2012-07-31

    The last few years have witnessed rapid progress in bacterial genome engineering. The long-established, standard ways of DNA synthesis, modification, transfer into living cells, and incorporation into genomes have given way to more effective, large-scale, robust genome modification protocols. Expansion of these engineering capabilities is due to several factors. Key advances include: (i) progress in oligonucleotide synthesis and in vitro and in vivo assembly methods, (ii) optimization of recombineering techniques, (iii) introduction of parallel, large-scale, combinatorial, and automated genome modification procedures, and (iv) rapid identification of the modifications by barcode-based analysis and sequencing. Combination of the brute force of these techniques with sophisticated bioinformatic design and modeling opens up new avenues not only for the analysis of gene functions and cellular network interactions, but also for engineering more effective producer strains. This review presents a summary of recent technological advances in bacterial genome engineering.

  8. Alignments of Dark Matter Halos with Large-scale Tidal Fields: Mass and Redshift Dependence

    Science.gov (United States)

    Chen, Sijie; Wang, Huiyuan; Mo, H. J.; Shi, Jingjing

    2016-07-01

    Large-scale tidal fields estimated directly from the distribution of dark matter halos are used to investigate how halo shapes and spin vectors are aligned with the cosmic web. The major, intermediate, and minor axes of halos are aligned with the corresponding tidal axes, and halo spin axes tend to be parallel with the intermediate axes and perpendicular to the major axes of the tidal field. The strengths of these alignments generally increase with halo mass and redshift, but the dependence is only on the peak height, $\nu \equiv \delta_c/\sigma(M_h, z)$. The scaling relations of the alignment strengths with the value of $\nu$ indicate that the alignment strengths remain roughly constant when the structures within which the halos reside are still in a quasi-linear regime, but decrease as nonlinear evolution becomes more important. We also calculate the alignments in projection so that our results can be compared directly with observations. Finally, we investigate the alignments of tidal tensors on large scales, and use the results to understand alignments of halo pairs separated at various distances. Our results suggest that the coherent structure of the tidal field is the underlying reason for the alignments of halos and galaxies seen in numerical simulations and in observations.
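
    A small sketch of the peak-height scaling quoted above, $\nu \equiv \delta_c/\sigma(M_h, z)$, is given below; the mass-variance fit and growth factor are crude illustrative stand-ins rather than the paper's cosmology.

```python
import numpy as np

# Peak height nu = delta_c / sigma(M_h, z); the sigma(M, z) model below is
# an invented power-law stand-in, not a fit from the paper.
DELTA_C = 1.686                       # spherical-collapse overdensity threshold

def sigma_of_mass(mass, z, sigma8=0.8, m8=6e14, slope=-0.25):
    growth = 1.0 / (1.0 + z)          # toy growth factor, D(z) ~ 1/(1+z)
    return sigma8 * (mass / m8) ** slope * growth

def peak_height(mass, z):
    return DELTA_C / sigma_of_mass(mass, z)

for m, z in [(1e12, 0.0), (1e14, 0.0), (1e12, 1.0)]:
    print(f"M={m:.0e} Msun/h, z={z}: nu = {peak_height(m, z):.2f}")
```

    The point of the scaling is visible even in this toy: raising either the mass or the redshift raises $\nu$, which is why mass and redshift dependence collapse onto a single relation.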

  9. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed in the first workshop on energy management for large scale scientific infrastructures held in Lund, Sweden, on 13-14 October.   Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need for addressing energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  10. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200.000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo...

  11. Large-scale Complex IT Systems

    CERN Document Server

    Sommerville, Ian; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challenges and issues in the development of large-scale complex, software-intensive systems. Central to this is the notion that we cannot separate software from the socio-technical environment in which it is used.

  12. Topological Routing in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas Phillip; Madsen, Ole Brun

    2004-01-01

    A new routing scheme, Topological Routing, for large-scale networks is proposed. It allows for efficient routing without large routing tables as known from traditional routing schemes. It presupposes a certain level of order in the networks, known from Structural QoS. The main issues in applying Topological Routing to large-scale networks are discussed. Hierarchical extensions are presented along with schemes for shortest path routing, fault handling and path restoration. Further research in the area is discussed and perspectives on the prerequisites for practical deployment of Topological Routing...

  13. Large scale topic modeling made practical

    DEFF Research Database (Denmark)

    Wahlgreen, Bjarne Ørum; Hansen, Lars Kai

    2011-01-01

    Topic models are of broad interest. They can be used for query expansion and result structuring in information retrieval and as an important component in services such as recommender systems and user adaptive advertising. In large scale applications both the size of the database (number of documents)... topics at par with a much larger case specific vocabulary...

  14. On the Nonlinear Stability of Plane Parallel Shear Flow in a Coplanar Magnetic Field

    Science.gov (United States)

    Xu, Lanxi; Lan, Wanli

    2016-10-01

    The Lyapunov direct method has been used to study the nonlinear stability of laminar flow between two parallel planes in the presence of a coplanar magnetic field for streamwise perturbations with stress-free boundary planes. Two Lyapunov functions are defined. By means of the first, it is proved that the transverse components of the perturbations decay unconditionally and asymptotically to zero for all Reynolds numbers and magnetic Reynolds numbers. By means of the second, it is shown that the other components of the perturbations decay conditionally and exponentially to zero for all Reynolds numbers and for magnetic Reynolds numbers below $\pi^2/2M$, where $M$ is the maximum of the absolute value of the velocity field of the laminar flow.

  15. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments

    Energy Technology Data Exchange (ETDEWEB)

    Leng, Wei [Chinese Academy of Sciences]; Ju, Lili [University of South Carolina]; Gunzburger, Max [Florida State University]; Price, Stephen [Los Alamos National Laboratory]; Ringler, Todd [Los Alamos National Laboratory]

    2012-01-01

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.

  16. A cooperative strategy for parameter estimation in large scale systems biology models

    Directory of Open Access Journals (Sweden)

    Villaverde Alejandro F

    2012-06-01

    Full Text Available Abstract. Background: Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows one to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results: A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel in different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and allows it to speed up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions: The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here...
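
    The cooperation mechanism can be sketched in a few lines: independent searches advance in parallel and periodically share their best solution. The sketch below runs the "threads" round-robin in one process and replaces eSS with a plain stochastic hill climber, so it only illustrates the information-sharing structure, not the actual CeSS algorithm.

```python
import numpy as np

def rosenbrock(x):
    # standard benchmark objective standing in for a model-calibration misfit
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def cooperative_search(n_threads=4, dim=5, epochs=20, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    xs = rng.uniform(-2, 2, size=(n_threads, dim))
    fs = np.array([rosenbrock(x) for x in xs])
    for _ in range(epochs):
        for i in range(n_threads):                  # each "thread" searches locally
            for _ in range(steps):
                cand = xs[i] + rng.normal(0, 0.1, dim)
                fc = rosenbrock(cand)
                if fc < fs[i]:
                    xs[i], fs[i] = cand, fc
        best = np.argmin(fs)                        # cooperation: share the best point
        xs[:] = xs[best] + rng.normal(0, 0.05, size=xs.shape)
        fs = np.array([rosenbrock(x) for x in xs])
    return xs[np.argmin(fs)], fs.min()

x_best, f_best = cooperative_search()
print(f_best)
```

    In CeSS proper the threads are genuinely parallel processes and each incorporates the shared solutions into its own reference set; the perturbation after sharing in this toy plays the role of keeping the searches diverse.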

  17. Finding zeros of nonlinear functions using the hybrid parallel cell mapping method

    Science.gov (United States)

    Xiong, Fu-Rui; Schütze, Oliver; Ding, Qian; Sun, Jian-Qiao

    2016-05-01

    Analysis of nonlinear dynamical systems including finding equilibrium states and stability boundaries often leads to a problem of finding zeros of vector functions. However, finding all the zeros of a set of vector functions in the domain of interest is quite a challenging task. This paper proposes a zero finding algorithm that combines the cell mapping methods and the subdivision techniques. Both the simple cell mapping (SCM) and generalized cell mapping (GCM) methods are used to identify a covering set of zeros. The subdivision technique is applied to enhance the solution resolution. The parallel implementation of the proposed method is discussed extensively. Several examples are presented to demonstrate the application and effectiveness of the proposed method. We then extend the study of finding zeros to the problem of finding stability boundaries of potential fields. Examples of two and three dimensional potential fields are studied. In addition to the effectiveness in finding the stability boundaries, the proposed method can handle several millions of cells in just a few seconds with the help of parallel computing in graphics processing units (GPUs).
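
    The covering-set-plus-subdivision structure of the method can be sketched as follows for a two-dimensional vector function: cells whose corner samples show a sign change in every component are kept and subdivided, all others are discarded. This is far simpler than the SCM/GCM machinery of the paper (corner sign tests can miss zeros), but it conveys the idea of a shrinking covering set of zeros.

```python
import numpy as np

def may_contain_zero(F, lo, hi):
    # sample the four corners; require a sign change in every component
    corners = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                        [lo[0], hi[1]], [hi[0], hi[1]]])
    vals = np.array([F(c) for c in corners])
    return np.all(vals.min(axis=0) <= 0) and np.all(vals.max(axis=0) >= 0)

def refine(F, lo, hi, depth):
    if depth == 0:
        return [(lo, hi)]                       # surviving cell of the covering set
    mid = 0.5 * (lo + hi)
    out = []
    for dx in (0, 1):
        for dy in (0, 1):                       # subdivide into four child cells
            nlo = np.array([lo[0] if dx == 0 else mid[0], lo[1] if dy == 0 else mid[1]])
            nhi = np.array([mid[0] if dx == 0 else hi[0], mid[1] if dy == 0 else hi[1]])
            if may_contain_zero(F, nlo, nhi):
                out += refine(F, nlo, nhi, depth - 1)
    return out

F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[1] - x[0]])  # circle meets line
cells = refine(F, np.array([-2.0, -2.0]), np.array([2.0, 2.0]), depth=7)
print(len(cells), cells[0])
```

    The surviving cells cluster around the two true zeros at $(\pm 1/\sqrt{2}, \pm 1/\sqrt{2})$; in the paper the per-cell tests are what is farmed out to GPU threads.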

  18. Three-Dimensional Induced Polarization Parallel Inversion Using Nonlinear Conjugate Gradients Method

    Directory of Open Access Journals (Sweden)

    Huan Ma

    2015-01-01

    Full Text Available Four kinds of arrays for induced polarization (IP) methods (surface, borehole-surface, surface-borehole, and borehole-borehole) are widely used in resource exploration. However, due to the presence of large numbers of sources, it takes much time to complete the inversion. In this paper, a new parallel algorithm is described which uses the message passing interface (MPI) and graphics processing units (GPU) to accelerate the 3D inversion of these four methods. The forward finite-difference equation is solved by an ILU0 preconditioner and a conjugate gradient (CG) solver. The inverse problem is solved by nonlinear conjugate gradients (NLCG) iteration, which is used to calculate one forward and two “pseudo-forward” modelings and to update the direction, space, and model in turn. Because each source is independent in the forward and “pseudo-forward” modelings, multiprocess modes are opened by calling the MPI library. The iterative matrix solver within CULA is called in each process. Tables and synthetic data examples illustrate that this parallel inversion algorithm is effective. Furthermore, we demonstrate that the joint inversion of surface and borehole data produces resistivity and chargeability results that are superior to those obtained from inversions of individual surface data.
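
    A generic sketch of the NLCG loop is shown below on a quadratic stand-in for the IP misfit functional; in the paper the gradient would come from the forward and “pseudo-forward” modelings, and the loop would be distributed over independent sources with MPI.

```python
import numpy as np

# NLCG with a Polak-Ribiere+ direction update and crude backtracking.
def nlcg(f, grad, x0, iters=200, tol=1e-8):
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(iters):
        a = 1.0
        while f(x + a * d) > f(x) and a > 1e-12:        # backtracking line search
            a *= 0.5
        x = x + a * d
        g_new = grad(x)
        if np.linalg.norm(g_new) < tol:
            break
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere+ coefficient
        d = -g_new + beta * d
        g = g_new
    return x

A = np.diag(np.linspace(1.0, 10.0, 5))
b = np.ones(5)
f = lambda x: 0.5 * x @ A @ x - b @ x    # invented stand-in for the data misfit
grad = lambda x: A @ x - b
print(nlcg(f, grad, np.zeros(5)))        # approaches the minimizer A^{-1} b
```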

  19. User documentation for KINSOL, a nonlinear solver for sequential and parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, A. G., LLNL

    1998-07-01

    KINSOL is a general purpose nonlinear system solver callable from either C or Fortran programs. It is based on NKSOL [3], but is written in ANSI-standard C rather than Fortran77. Its most notable feature is that it uses Krylov Inexact Newton techniques in the system's approximate solution, thus sharing significant modules previously written within CASC at LLNL to support CVODE [6, 7]/PVODE [9, 5]. It also requires almost no matrix storage for solving the Newton equations as compared to direct methods. The name KINSOL is derived from those techniques: Krylov Inexact Newton SOLver. The package was arranged so that selecting one of two forms of a single module in the compilation process will allow the entire package to be created in either sequential (serial) or parallel form. The parallel version of KINSOL uses MPI (Message-Passing Interface) [8] and an appropriately revised version of the vector module NVECTOR, as mentioned above, to achieve parallelism and portability. KINSOL in parallel form is intended for the SPMD (Single Program Multiple Data) model with distributed memory, in which all vectors are identically distributed across processors. In particular, the vector module NVECTOR is designed to help the user assign a contiguous segment of a given vector to each of the processors for parallel computation. Several primitives were added to NVECTOR as originally written for PVODE to implement KINSOL. KINSOL has been run on a Cray-T3D, an eight-processor DEC ALPHA and a cluster of workstations. It is currently being used in a simulation of tokamak edge plasmas and in groundwater two-phase flow studies at LLNL. The remainder of this paper is organized as follows: Section 2 sets the mathematical notation and summarizes the basic methods; Section 3 summarizes the organization of the KINSOL solver, while Section 4 summarizes its usage; Section 5 describes a preconditioner module; Section 6 describes a set of Fortran/C interfaces; Section 7 describes an example problem; and Section 8...
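
    The Krylov inexact Newton idea behind KINSOL can be sketched in Python rather than through KINSOL's C interface: the Newton system J(u)s = -F(u) is solved by GMRES with a finite-difference, matrix-free Jacobian-vector product, so no Jacobian matrix is ever stored. The toy system below is invented for illustration.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(F, u0, newton_iters=20, tol=1e-8, eps=1e-7):
    u = u0.copy()
    for _ in range(newton_iters):
        Fu = F(u)
        if np.linalg.norm(Fu) < tol:
            break
        # matrix-free J(u) @ v via a forward difference; a production solver
        # would scale eps with the norms of u and v
        Jv = lambda v: (F(u + eps * v) - Fu) / eps
        J = LinearOperator((u.size, u.size), matvec=Jv)
        s, _ = gmres(J, -Fu)             # inexact inner Krylov solve
        u = u + s
    return u

# toy nonlinear system: u_i^3 + (A u)_i = 1
A = 0.1 * np.ones((8, 8)) + np.eye(8)
F = lambda u: u ** 3 + A @ u - 1.0
print(jfnk(F, np.zeros(8)))
```

    Solving the inner system only approximately (the "inexact" part) is what keeps the per-step cost low; globalization strategies and preconditioning, which KINSOL provides, are omitted here.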

  1. Large-scale multimedia modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications.

  2. Evaluating Large-Scale Interactive Radio Programmes

    Science.gov (United States)

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  3. Configuration management in large scale infrastructure development

    NARCIS (Netherlands)

    Rijn, T.P.J. van; Belt, H. van de; Los, R.H.

    2000-01-01

    Large Scale Infrastructure (LSI) development projects such as the construction of roads, railways and other civil engineering (water)works are tendered differently today than a decade ago. Traditional workflow requested quotes from construction companies for construction works where the works to be...

  4. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data dissemination...

  5. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  6. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that continuum...

  7. Ensemble methods for large scale inverse problems

    NARCIS (Netherlands)

    Heemink, A.W.; Umer Altaf, M.; Barbu, A.L.; Verlaan, M.

    2013-01-01

    Variational data assimilation, also sometimes simply called the ‘adjoint method’, is used very often for large scale model calibration problems. Using the available data, the uncertain parameters in the model are identified by minimizing a certain cost function that measures the difference between the...

  8. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    ... which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  9. Inflation, large scale structure and particle physics

    Indian Academy of Sciences (India)

    S F King

    2004-02-01

    We review experimental and theoretical developments in inflation and its application to structure formation, including the curvaton idea. We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which the Higgs scalar field is responsible for large scale structure, and show how such a theory is completely natural in the framework of extra dimensions with an intermediate string scale.

  10. Large Scale Simulations of the Euler Equations on GPU Clusters

    KAUST Repository

    Liebmann, Manfred

    2010-08-01

    The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one billion elements. We investigate communication protocols for the GPU cluster to compensate for the slow Gigabit Ethernet network between the GPU compute nodes and to maintain overall efficiency. A diesel engine intake-port and a nozzle, meshed in different resolutions, give good real world examples for the scalability tests on the GPU cluster. © 2010 IEEE.

  11. Petascale computations for Large-scale Atomic and Molecular collisions

    CERN Document Server

    McLaughlin, Brendan M

    2014-01-01

    Petaflop architectures are currently being utilized efficiently to perform large scale computations in Atomic, Molecular and Optical Collisions. We solve the Schroedinger or Dirac equation for the appropriate collision problem using the R-matrix or R-matrix with pseudo-states approach. We briefly outline the parallel methodology used and implemented for the current suite of Breit-Pauli and DARC codes. Various examples are shown of our theoretical results compared with those obtained from Synchrotron Radiation facilities and from Satellite observations. We also indicate future directions and implementation of the R-matrix codes on emerging GPU architectures.

  12. Reliability assessment for components of large scale photovoltaic systems

    Science.gov (United States)

    Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar

    2014-10-01

    Photovoltaic (PV) systems have significantly shifted from independent power generation systems to large-scale grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is considered under various sequential and parallel fault combinations in order to find all realistic ways in which the top or undesired events can occur. Additionally, it can identify areas on which planned maintenance should focus. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system, but also to optimize the maintenance costs. The latter is achieved by informing the operators about the status of the system components. This approach can be used to ensure secure operation of the system through its flexibility in monitoring system applications. The implementation demonstrates that the proposed method is effective and efficient and can conveniently incorporate more system maintenance plans and diagnostic strategies.
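
    The series/parallel reliability bookkeeping that underlies such a fault tree analysis can be sketched directly: exponential component reliabilities R(t) = exp(-λt) multiply in series, while failure probabilities multiply for redundant (parallel) branches. The failure rates below are invented, not taken from the study.

```python
import numpy as np

def R(rate, t):
    # exponential survival probability of a single component
    return np.exp(-rate * t)

def series(*rs):
    return np.prod(rs)                              # all components must survive

def parallel(*rs):
    return 1.0 - np.prod([1.0 - r for r in rs])     # at least one survives

t = 8760.0                  # one year of operation, in hours
inverter = R(2e-5, t)       # hypothetical failure rates per hour
combiner = R(5e-6, t)
string = R(1e-5, t)
# two redundant PV strings feeding a combiner box and an inverter in series
system = series(parallel(string, string), combiner, inverter)
print(f"one-year system reliability: {system:.4f}")
```

    Evaluating the same expression with one component's reliability perturbed shows which components are critical, which is the basis for the maintenance prioritization the abstract describes.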

  13. Equivalent common path method in large-scale laser comparator

    Science.gov (United States)

    He, Mingzhao; Li, Jianshuang; Miao, Dongjing

    2015-02-01

    The large-scale laser comparator is the main standard device providing accurate, reliable and traceable measurements for high-precision large-scale line and 3D measurement instruments. It is mainly composed of a guide rail, a motion control system, an environmental parameter monitoring system and a displacement measurement system. In the laser comparator, the main error sources are the temperature distribution, the straightness of the guide rail, and the pitch and yaw of the measuring carriage. To minimize the measurement uncertainty, an equivalent common optical path scheme is proposed and implemented. Three laser interferometers are adjusted to be parallel with the guide rail. The displacement in an arbitrary virtual optical path is calculated from the three measured displacements without knowledge of the carriage orientations at the start and end positions. The orientation of the air-floating carriage is calculated from the displacements of the three optical paths and the positions of the three retroreflectors, which are precisely measured by a laser tracker. A fourth laser interferometer is used in the virtual optical path as a reference to verify this compensation method. This paper analyzes the effect of rail straightness on the displacement measurement. The proposed method, through experimental verification, can improve the measurement uncertainty of the large-scale laser comparator.
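
    The displacement reconstruction can be sketched as a plane fit: three parallel axes measure displacements d_i at transverse positions (y_i, z_i), and fitting d = a + b*y + c*z recovers the carriage translation plus its pitch and yaw terms, from which the displacement of any virtual axis follows. Coordinates and readings below are invented for illustration.

```python
import numpy as np

# three retroreflector positions (y, z) in metres, one per interferometer axis
pos = np.array([[0.0, 0.0],
                [0.3, 0.0],
                [0.0, 0.2]])
d = np.array([1.000000, 1.000012, 0.999995])    # measured displacements, metres

M = np.column_stack([np.ones(3), pos])          # rows: [1, y, z]
a, b, c = np.linalg.solve(M, d)                 # exact plane through the 3 readings
print("translation:", a, "pitch/yaw slopes:", b, c)

virtual = np.array([0.15, 0.10])                # transverse position of virtual axis
print("displacement on virtual axis:", a + b * virtual[0] + c * virtual[1])
```

    A fourth interferometer placed at the virtual position, as in the paper, provides an independent reading against which this reconstructed value can be checked.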

  14. Spatial solitons in photonic lattices with large-scale defects

    Institute of Scientific and Technical Information of China (English)

    Yang Xiao-Yu; Zheng Jiang-Bo; Dong Liang-Wei

    2011-01-01

    We address the existence, stability and propagation dynamics of solitons supported by large-scale defects surrounded by the harmonic photonic lattices imprinted in the defocusing saturable nonlinear medium. Several families of soliton solutions, including flat-topped, dipole-like, and multipole-like solitons, can be supported by the defected lattices with different heights of defects. The width of the existence domain of solitons is determined solely by the saturable parameter. The existence domains of various types of solitons can be shifted by the variations of defect size, lattice depth and soliton order. Solitons in the model are stable in a wide parameter window, provided that the propagation constant exceeds a critical value, which is in sharp contrast to the case where soliton trains are supported by periodic lattices imprinted in the defocusing saturable nonlinear medium. We also find stable solitons in the semi-infinite gap which rarely occur in the defocusing media.

  15. Near optimal bispectrum estimators for large-scale structure

    CERN Document Server

    Schmittfull, Marcel; Seljak, Uroš

    2014-01-01

    Clustering of large-scale structure provides significant cosmological information through the power spectrum of density perturbations. Additional information can be gained from higher-order statistics like the bispectrum, especially to break the degeneracy between the linear halo bias $b_1$ and the amplitude of fluctuations $\sigma_8$. We propose new simple, computationally inexpensive bispectrum statistics that are near optimal for specific applications like bias determination. Corresponding to the Legendre decomposition of nonlinear halo bias and gravitational coupling at second order, these statistics are given by the cross-spectra of the density with three quadratic fields: the squared density, a tidal term, and a shift term. For halos and galaxies the first two have associated nonlinear bias terms $b_2$ and $b_{s^2}$, respectively, while the shift term has none in the absence of velocity bias (valid in the $k \rightarrow 0$ limit). Thus the linear bias $b_1$ is best determined by the shift cross-spec...
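
    The mechanics of "cross-spectrum of the density with a quadratic field" can be sketched in one dimension with FFTs, using the squared density as the quadratic field; a real analysis would work in 3D and include the tidal and shift fields as well. The synthetic field below is invented for illustration.

```python
import numpy as np

def cross_spectrum(a, b):
    # cross-power of two periodic fields on a uniform grid
    fa, fb = np.fft.rfft(a), np.fft.rfft(b)
    return (fa * np.conj(fb)).real / len(a)

rng = np.random.default_rng(0)
delta = rng.normal(size=1024)
delta += 0.2 * (delta ** 2 - delta.var())        # inject a quadratic component
delta_sq = delta ** 2 - (delta ** 2).mean()      # mean-subtracted squared field
print(cross_spectrum(delta_sq, delta)[:5])       # low-k cross-spectrum estimates
```

    The appeal of such statistics, as the abstract notes, is that they are plain cross-spectra of precomputed fields, so they cost little more than a standard power spectrum measurement.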

  16. Multivariate Clustering of Large-Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Eliassi-Rad, T; Critchlow, T

    2003-06-13

    Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatio-temporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial dimensions is important since "similar" characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n^2) to O(n x g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building clusters, it is desirable to associate each cluster with its correct spatial region. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.
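
    A bare-bones version of threshold-based cosine clustering reads as follows; the geometric pruning that reduces the cost from O(n^2) to O(n x g(f(u))) and the topology-tree linking step are omitted, and the data are invented for illustration.

```python
import numpy as np

def cosine_cluster(vectors, threshold=0.95):
    # assign each vector to the first cluster representative it matches to
    # within the user-defined cosine threshold, else start a new cluster
    reps, labels = [], []
    for v in vectors:
        v = v / np.linalg.norm(v)
        for k, r in enumerate(reps):
            if v @ r >= threshold:        # cosine similarity against representative
                labels.append(k)
                break
        else:
            reps.append(v)
            labels.append(len(reps) - 1)
    return np.array(labels)

rng = np.random.default_rng(1)
# two synthetic "field variable" populations with opposite orientations
data = np.vstack([rng.normal(m, 0.05, size=(20, 4)) for m in (1.0, -1.0)])
print(cosine_cluster(data, threshold=0.9))
```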

  17. The large-scale structure of vacuum

    CERN Document Server

    Albareti, F D; Maroto, A L

    2014-01-01

    The vacuum state in quantum field theory is known to exhibit a number of fundamental physical features. In this work we explore the possibility that this state could also present a non-trivial space-time structure on large scales. In particular, we will show that by requiring the renormalized vacuum energy-momentum tensor to be conserved and compatible with cosmological observations, the vacuum energy of sufficiently heavy fields behaves at late times as non-relativistic matter rather than as a cosmological constant. In this limit, the vacuum state supports perturbations whose speed of sound is negligible and accordingly allows the growth of structures in the vacuum energy itself. This large-scale structure of vacuum could seed the formation of galaxies and clusters very much in the same way as cold dark matter does.

  18. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    ...the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large-scale networks given the growth of user requirements and the technological limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic both for the feasibility of automatic management and for manpower resource management. In the fourth step the scope is extended to include the present society, with the DDN project as its main focus. Here the general perception of the nature and role in society of large-scale networks as a fundamental infrastructure is analysed. This analysis focuses on the effects of the technical DDN projects and on the perception of network infrastructure as expressed by key decision makers...

  19. Quantum Signature of Cosmological Large Scale Structures

    CERN Document Server

    Capozziello, S; De Siena, S; Illuminati, F; Capozziello, Salvatore; Martino, Salvatore De; Siena, Silvio De; Illuminati, Fabrizio

    1998-01-01

    We demonstrate that to all large scale cosmological structures where gravitation is the only overall relevant interaction assembling the system (e.g. galaxies), there is associated a characteristic unit of action per particle whose order of magnitude coincides with the Planck action constant $h$. This result extends the class of physical systems for which quantum coherence can act on macroscopic scales (as e.g. in superconductivity) and agrees with the absence of screening mechanisms for the gravitational forces, as predicted by some renormalizable quantum field theories of gravity. It also seems to support those lines of thought invoking that large scale structures in the Universe should be connected to quantum primordial perturbations as requested by inflation, that the Newton constant should vary with time and distance and, finally, that gravity should be considered as an effective interaction induced by quantization.

  20. Process Principles for Large-Scale Nanomanufacturing.

    Science.gov (United States)

    Behrens, Sven H; Breedveld, Victor; Mujica, Maritza; Filler, Michael A

    2017-06-07

    Nanomanufacturing-the fabrication of macroscopic products from well-defined nanoscale building blocks-in a truly scalable and versatile manner is still far from our current reality. Here, we describe the barriers to large-scale nanomanufacturing and identify routes to overcome them. We argue for nanomanufacturing systems consisting of an iterative sequence of synthesis/assembly and separation/sorting unit operations, analogous to those used in chemicals manufacturing. In addition to performance and economic considerations, phenomena unique to the nanoscale must guide the design of each unit operation and the overall process flow. We identify and discuss four key nanomanufacturing process design needs: (a) appropriately selected process break points, (b) synthesis techniques appropriate for large-scale manufacturing, (c) new structure- and property-based separations, and (d) advances in stabilization and packaging.

  1. Condition Monitoring of Large-Scale Facilities

    Science.gov (United States)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  2. Wireless Secrecy in Large-Scale Networks

    CERN Document Server

    Pinto, Pedro C; Win, Moe Z

    2011-01-01

    The ability to exchange secret information is critical to many commercial, governmental, and military networks. The intrinsically secure communications graph (iS-graph) is a random graph which describes the connections that can be securely established over a large-scale network, by exploiting the physical properties of the wireless medium. This paper provides an overview of the main properties of this new class of random graphs. We first analyze the local properties of the iS-graph, namely the degree distributions and their dependence on fading, target secrecy rate, and eavesdropper collusion. To mitigate the effect of the eavesdroppers, we propose two techniques that improve secure connectivity. Then, we analyze the global properties of the iS-graph, namely percolation on the infinite plane, and full connectivity on a finite region. These results help clarify how the presence of eavesdroppers can compromise secure communication in a large-scale network.

  3. Measuring Bulk Flows in Large Scale Surveys

    CERN Document Server

    Feldman, H A; Feldman, Hume A.; Watkins, Richard

    1993-01-01

    We follow a formalism presented by Kaiser to calculate the variance of bulk flows in large scale surveys. We apply the formalism to a mock survey of Abell clusters à la Lauer & Postman and find the variance in the expected bulk velocities in a universe with CDM, MDM and IRAS-QDOT power spectra. We calculate the velocity variance as a function of the 1-D velocity dispersion of the clusters and the size of the survey.

  4. Statistical characteristics of Large Scale Structure

    OpenAIRE

    Demianski; Doroshkevich

    2002-01-01

    We investigate the mass functions of different elements of the Large Scale Structure -- walls, pancakes, filaments and clouds -- and the impact of transverse motions -- expansion and/or compression -- on their statistical characteristics. Using the Zel'dovich theory of gravitational instability we show that the mass functions of all structure elements are approximately the same and the mass of all elements is found to be concentrated near the corresponding mean mass. At high redshifts, both t...

  5. Topologies for large scale photovoltaic power plants

    OpenAIRE

    Cabrera Tobar, Ana; Bullich Massagué, Eduard; Aragüés Peñalba, Mònica; Gomis Bellmunt, Oriol

    2016-01-01

    Increasing renewable energy penetration into the grid, together with the reduction in the prices of photovoltaic solar panels during the last decade, has enabled the development of large-scale solar power plants connected to the medium- and high-voltage grid. Photovoltaic generation components, the internal layout and the ac collection grid are being investigated to ensure the best design, operation and control of these power plants. This ...

  6. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  7. Large-Scale Visual Data Analysis

    Science.gov (United States)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high-performance visualization research challenges and opportunities.

  8. Large-scale neuromorphic computing systems

    Science.gov (United States)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  9. Computational Advances in Large-Scale Nonlinear Optimization.

    Science.gov (United States)

    1981-09-01

    ...collected from a variety of sources. The first six problems are listed in Himmelblau [Ref. 26: pp. 395-425] and the original source author/developer for... Problem 1 (Himmelblau 6): This problem [Ref. 303] is an example of determining the chemical composition of a complex mixture under conditions of chemical... (Himmelblau 4A): This is also a chemical equilibrium problem which had been redefined in the Himmelblau study from a problem originally formulated and...

  10. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    Science.gov (United States)

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    2016-10-01

    We study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x-y) averaging, we also demonstrate the presence of large-scale fields when vertical (y-z) averaging is employed instead. By computing space-time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase - a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth, amplify the large-scale fields exponentially before turbulence and high wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode-mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field and we find that a feedback from horizontal low wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.
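
    The two averaging choices discussed above are easy to state concretely: for a field B(x, y, z) on the shearing-box grid, horizontal (x-y) averaging leaves a profile along z, while vertical (y-z) averaging leaves a profile along x. A minimal sketch with a random stand-in field:

```python
import numpy as np

# Random field standing in for one magnetic field component from an MRI run.
rng = np.random.default_rng(42)
B = rng.normal(size=(64, 64, 64))            # indexed as B[x, y, z]

B_xy = B.mean(axis=(0, 1))                   # horizontal average: function of z
B_yz = B.mean(axis=(1, 2))                   # vertical average: function of x
print(B_xy.shape, B_yz.shape)
```

    Tracking either profile over time, as the study does, is what turns a planar average into a diagnostic for exponential large-scale field growth.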

  11. RESTRUCTURING OF THE LARGE-SCALE SPRINKLERS

    Directory of Open Access Journals (Sweden)

    Paweł Kozaczyk

    2016-09-01

    Full Text Available One of the best ways for agriculture to become independent from shortages of precipitation is irrigation. In the seventies and eighties of the last century a number of large-scale sprinkler systems were built in Wielkopolska. At the end of the 1970s, 67 sprinkler systems with a total area of 6400 ha were installed in the Poznan province. The average size of a sprinkler system reached 95 ha. In 1989 there were 98 sprinkler systems, and the area equipped with them was more than 10 130 ha. The study was conducted on 7 large sprinkler systems with areas ranging from 230 to 520 hectares in 1986-1998. After the introduction of the market economy in the early 1990s and the ownership changes in agriculture, the large-scale sprinkler systems have undergone significant or total devastation. Land of the State Farms of the State Agricultural Property Agency was leased or sold, and the new owners used the existing sprinklers to a very small extent. This involved a change in crop structure, a change in demand structure and an increase in operating costs. There has also been a threefold increase in electricity prices. In practice, the operation of large-scale irrigation encountered all kinds of barriers: limitations of system solutions, supply difficulties, and high levels of equipment failure, none of which is conducive to rational use of the available sprinklers. A site inspection showed the current status of the remaining irrigation infrastructure. The adopted scheme for the restructuring of Polish agriculture was not the best solution, causing massive destruction of the assets previously invested in the sprinkler systems.

  12. Large-Scale PV Integration Study

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  13. Conformal Anomaly and Large Scale Gravitational Coupling

    CERN Document Server

    Salehi, H

    2000-01-01

    We present a model in which the breakdown of conformal symmetry of a quantum stress-tensor due to the trace anomaly is related to a cosmological effect in a gravitational model. This is done by characterizing the traceless part of the quantum stress-tensor in terms of the stress-tensor of a conformally invariant classical scalar field. We introduce a conformal frame in which the anomalous trace is identified with a cosmological constant. In this conformal frame we establish the Einstein field equations by connecting the quantum stress-tensor with the large scale distribution of matter in the universe.

  14. Large Scale Quantum Simulations of Nuclear Pasta

    Science.gov (United States)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries collectively referred to as "nuclear pasta" are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 ... pasta configurations. This work is supported in part by DOE Grants DE-FG02-87ER40365 (Indiana University) and DE-SC0008808 (NUCLEI SciDAC Collaboration).

  15. Large scale wind power penetration in Denmark

    DEFF Research Database (Denmark)

    Karnøe, Peter

    2013-01-01

    The Danish electricity generating system prepared to adopt nuclear power in the 1970s, yet has become the world's front runner in wind power with a national plan for 50% wind power penetration by 2020. This paper deploys a sociotechnical perspective to explain the historical transformation of "net... expertise evolves and contributes to the normalization and large-scale penetration of wind power in the electricity generating system. The analysis teaches us how technological paths become locked-in, but also indicates keys for locking them out.

  16. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some... -curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...

  17. What is a large-scale dynamo?

    Science.gov (United States)

    Nigro, G.; Pongkitiwanichakul, P.; Cattaneo, F.; Tobias, S. M.

    2017-01-01

    We consider kinematic dynamo action in a sheared helical flow at moderate to high values of the magnetic Reynolds number (Rm). We find exponentially growing solutions which, for large enough shear, take the form of a coherent part embedded in incoherent fluctuations. We argue that at large Rm large-scale dynamo action should be identified by the presence of structures coherent in time, rather than those at large spatial scales. We further argue that although the growth rate is determined by small-scale processes, the period of the coherent structures is set by mean-field considerations.

  18. Large scale phononic metamaterials for seismic isolation

    Energy Technology Data Exchange (ETDEWEB)

    Aravantinos-Zafiris, N. [Department of Sound and Musical Instruments Technology, Ionian Islands Technological Educational Institute, Stylianou Typaldou ave., Lixouri 28200 (Greece); Sigalas, M. M. [Department of Materials Science, University of Patras, Patras 26504 (Greece)

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large-scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite-difference time-domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials.

  19. Hierarchical Engine for Large Scale Infrastructure Simulation

    Energy Technology Data Exchange (ETDEWEB)

    2017-03-15

    HELICS is a new open-source, cyber-physical-energy co-simulation framework for electric power systems. HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.

  20. Colloquium: Large scale simulations on GPU clusters

    Science.gov (United States)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPU) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve an excellent efficiency for applications in statistical mechanics, particle dynamics and network analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may also be applied to other problems, such as the solution of partial differential equations.

  1. Internationalization Measures in Large Scale Research Projects

    Science.gov (United States)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

    Large scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As the global competition among universities for the recruitment of the brightest brains has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities are challenged with little experience on how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, such undertakings permanently have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. There is a variety of measures suited to support universities in international recruitment, including, e.g., institutional partnerships, research marketing, a welcome culture, support for science mobility and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful measures if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups, and identify interfaces between project management, university administration, researchers and international partners to work together, exchange information and improve processes in order to be able to recruit, support and keep the brightest heads for the project.

  2. Large-scale Globally Propagating Coronal Waves

    Directory of Open Access Journals (Sweden)

    Alexander Warmuth

    2015-09-01

    Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and, by inference, in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous space-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the “classical” interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which “pseudo waves” are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.

  3. Making Predictions using Large Scale Gaussian Processes

    Data.gov (United States)

    National Aeronautics and Space Administration — One of the key problems that arises in many areas is to estimate a potentially nonlinear function $G(x, \theta)$ given input and output samples...
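    Since the record only sketches the problem, here is a minimal dense Gaussian-process regression sketch for such a function-estimation task. The kernel, its hyperparameters and the toy data are assumptions, and a genuinely large-scale setting would need sparse or structured approximations not shown here:

    ```python
    # Plain Gaussian-process regression for a nonlinear function G(x, theta).
    # Minimal dense sketch; large-scale use would need sparse approximations.
    import numpy as np

    def rbf_kernel(a, b, length=0.5, var=1.0):
        """Squared-exponential kernel k(a,b) = var * exp(-|a-b|^2 / (2 length^2))."""
        d2 = (a[:, None] - b[None, :]) ** 2
        return var * np.exp(-0.5 * d2 / length ** 2)

    rng = np.random.default_rng(0)
    x_train = rng.uniform(-3, 3, 40)
    y_train = np.sin(x_train) + 0.1 * rng.standard_normal(40)  # noisy samples of G
    x_test = np.linspace(-3, 3, 200)

    noise = 0.1 ** 2
    K = rbf_kernel(x_train, x_train) + noise * np.eye(40)
    K_star = rbf_kernel(x_test, x_train)

    # Posterior mean and pointwise uncertainty of G at the test inputs.
    alpha = np.linalg.solve(K, y_train)
    mean = K_star @ alpha
    cov = rbf_kernel(x_test, x_test) - K_star @ np.linalg.solve(K, K_star.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    print(mean[:3], std[:3])
    ```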

  4. On the Phenomenology of an Accelerated Large-Scale Universe

    Directory of Open Access Journals (Sweden)

    Martiros Khurshudyan

    2016-10-01

    In this review paper, several new results towards the explanation of the accelerated expansion of the large-scale universe are discussed. Inflation is the early-time accelerated era, and the universe is symmetric in the sense of accelerated expansion. The accelerated expansion of the universe is one of the long-standing problems in modern cosmology, and in physics in general. There are several well defined approaches to solve this problem. One of them is an assumption concerning the existence of dark energy in the recent universe. It is believed that dark energy is responsible for antigravity, while dark matter has gravitational nature and is responsible, in general, for structure formation. A different approach is an appropriate modification of general relativity including, for instance, f(R) and f(T) theories of gravity. On the other hand, attempts to build theories of quantum gravity and assumptions about the existence of extra dimensions, possible variability of the gravitational constant and the speed of light (among others) provide interesting modifications of general relativity applicable to problems of modern cosmology, too. In particular, two groups of cosmological models are discussed here. In the first group the problem of the accelerated expansion of the large-scale universe is discussed involving a new idea, named varying ghost dark energy. The second group contains cosmological models addressed to the same problem involving either new parameterizations of the equation of state parameter of dark energy (like varying polytropic gas), or nonlinear interactions between dark energy and dark matter. Moreover, for cosmological models involving varying ghost dark energy, massless particle creation in an appropriate radiation dominated universe (when the background dynamics is due to general relativity) is demonstrated as well. Exploring the nature of the accelerated expansion of the large-scale universe involving generalized...

  5. A Parallel Multi-Domain Solution Methodology Applied to Nonlinear Thermal Transport Problems in Nuclear Fuel Pins

    CERN Document Server

    Philip, Bobby; Allu, Srikanth; Hamilton, Steven P; Sampath, Rahul S; Clarno, Kevin T; Dilts, Gary A

    2014-01-01

    This paper describes an efficient and nonlinearly consistent parallel solution methodology for solving coupled nonlinear thermal transport problems that occur in nuclear reactor applications over hundreds of individual 3D physical subdomains. Efficiency is obtained by leveraging knowledge of the physical domains, the physics on individual domains, and the couplings between them for preconditioning within a Jacobian-free Newton Krylov method. Details of the computational infrastructure that enabled this work, namely the open-source Advanced Multi-Physics (AMP) package developed by the authors, are described. Details of verification and validation experiments, and of parallel performance analyses in weak and strong scaling studies demonstrating the achieved efficiency of the algorithm, are presented. Furthermore, numerical experiments demonstrate that the preconditioner developed is independent of the number of fuel subdomains in a fuel rod, which is particularly important when simulating different types of fuel rods...

  6. LARGE-SCALE CO2 TRANSPORTATION AND DEEP OCEAN SEQUESTRATION

    Energy Technology Data Exchange (ETDEWEB)

    Hamid Sarv

    1999-03-01

    Technical and economical feasibility of large-scale CO{sub 2} transportation and ocean sequestration at depths of 3000 meters or greater was investigated. Two options were examined for transporting and disposing of the captured CO{sub 2}. In one case, CO{sub 2} was pumped from a land-based collection center through long pipelines laid on the ocean floor. Another case considered oceanic tanker transport of liquid carbon dioxide to an offshore floating structure for vertical injection to the ocean floor. In the latter case, a novel concept based on subsurface towing of a 3000-meter pipe, and attaching it to the offshore structure was considered. Budgetary cost estimates indicate that for distances greater than 400 km, tanker transportation and offshore injection through a 3000-meter vertical pipe provides the best method for delivering liquid CO{sub 2} to deep ocean floor depressions. For shorter distances, CO{sub 2} delivery by parallel-laid, subsea pipelines is more cost-effective. Estimated costs for 500-km transport and storage at a depth of 3000 meters by subsea pipelines and tankers were 1.5 and 1.4 dollars per ton of stored CO{sub 2}, respectively. At these prices, the economics of ocean disposal are highly favorable. Future work should focus on addressing technical issues that are critical to the deployment of a large-scale CO{sub 2} transportation and disposal system. Pipe corrosion, structural design of the transport pipe, and dispersion characteristics of sinking CO{sub 2} effluent plumes have been identified as areas that require further attention. Our planned activities in the next Phase include laboratory-scale corrosion testing, structural analysis of the pipeline, analytical and experimental simulations of CO{sub 2} discharge and dispersion, and the conceptual economic and engineering evaluation of large-scale implementation.

  7. Statistical Modeling of Large-Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatiotemporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called univariate mean modeler) computes the ''true'' (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
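    A minimal sketch of the first of these ideas, the univariate mean modeler: summarize systematic partitions by their means, then answer range queries from the summaries alone. The partition size and the synthetic data are assumptions; AQSim's actual storage layout and error control are not reproduced:

    ```python
    # Sketch of a partition-based "mean modeler": store one unbiased mean per
    # systematic block of a large field and answer range queries from those
    # summaries instead of the raw data. Block size and data are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(300.0, 25.0, size=1_000_000)   # stand-in simulation variable

    block = 1000                                     # systematic partition size
    means = data.reshape(-1, block).mean(axis=1)     # one unbiased mean per block
    counts = np.full(means.size, block)

    def range_mean(lo, hi):
        """Approximate mean of data[lo:hi] using only block-level summaries."""
        b_lo, b_hi = lo // block, (hi + block - 1) // block
        return float(np.average(means[b_lo:b_hi], weights=counts[b_lo:b_hi]))

    lo, hi = 123_456, 789_012
    print("model:", range_mean(lo, hi))
    print("exact:", data[lo:hi].mean())
    ```

    The query is answered from a few hundred block summaries instead of a million raw values, which is the storage/query trade-off the chapter describes.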

  8. Simulating the Large-Scale Structure of HI Intensity Maps

    CERN Document Server

    Seehars, Sebastian; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2015-01-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide field simulations of HI intensity maps based on N-body simulations, the halo model, and a phenomenological prescription for assigning HI mass to halos. The simulations span a redshift range of 0.35 < z < 0.9 in redshift bins of width $\\Delta z \\approx 0.05$ and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects on the angular clustering of HI. We apply and compare several estimators for the angular power spectrum and its covariance. We verify that they agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.

  9. Simulating the large-scale structure of HI intensity maps

    Science.gov (United States)

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2016-03-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048^3 particles (particle mass 1.6 × 10^11 Msolar/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10^8 Msolar/h), we assign HI to those halos according to a phenomenological halo-to-HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
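    As a toy version of the estimator comparison described above, the following flat-sky sketch estimates an angular power spectrum by binning the squared FFT of a map in multipole. The map here is white noise (so the recovered spectrum should come out flat), and masking, beams and curved-sky effects are deliberately ignored:

    ```python
    # Naive flat-sky angular power spectrum estimator: FFT the map and bin
    # |FFT|^2 in multipole ell. Simplified sketch; map contents, units and the
    # absence of mask/beam handling are assumptions of this toy.
    import numpy as np

    n, size_deg = 256, 10.0                       # map pixels and side length
    pix_rad = np.deg2rad(size_deg) / n
    rng = np.random.default_rng(2)
    hi_map = rng.standard_normal((n, n))          # stand-in intensity map

    fmap = np.fft.fft2(hi_map)
    power2d = (np.abs(fmap) ** 2) * pix_rad ** 2 / (n * n)   # flat-sky C(ell)

    kfreq = np.fft.fftfreq(n, d=pix_rad) * 2.0 * np.pi       # angular wavenumbers
    ell = np.sqrt(kfreq[:, None] ** 2 + kfreq[None, :] ** 2)

    bins = np.linspace(ell[ell > 0].min(), ell.max(), 20)
    which = np.digitize(ell.ravel(), bins)
    cl = [float(power2d.ravel()[which == i].mean())
          for i in range(1, len(bins)) if np.any(which == i)]
    print(cl)   # roughly flat for white noise, as expected
    ```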

  10. Large-scale stabilization control of input-constrained quadrotor

    Directory of Open Access Journals (Sweden)

    Jun Jiang

    2016-10-01

    The quadrotor has been the most popular aircraft in the last decade due to its excellent dynamics and continues to attract ever-increasing research interest. Delivering a quadrotor from a large fixed-wing aircraft is a promising application of quadrotors. In such an application, the quadrotor needs to switch from a highly unstable status, featured as large initial states, to a safe and stable flight status. This is the so-called large-scale stability control problem. In such an extreme scenario, the quadrotor is at risk of actuator saturation. This can cause the controller to update incorrectly and lead the quadrotor to spiral and crash. In this article, to safely control the quadrotor in such scenarios, the control input constraint is analyzed. The key states of a quadrotor dynamic model are selected, and a two-dimensional dynamic model is extracted based on a symmetrical body configuration. A generalized point-wise min-norm nonlinear control method is proposed based on the Lyapunov function, and large-scale stability control is hence achieved. An enhanced point-wise, min-norm control is further provided to improve the attitude control performance, with altitude performance degenerating slightly. Simulation results showed that the proposed control methods can stabilize the input-constrained quadrotor and the enhanced method can improve the performance of the quadrotor in critical states.

  11. Large-scale direct shear testing of geocell reinforced soil

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Tests on the shear properties of geocell reinforced soils were carried out using large-scale direct shear equipment with shear box dimensions of 500 mm×500 mm×400 mm (length×width×height). Three types of specimens, silty gravel soil, geocell reinforced silty gravel soil and geocell reinforced cement stabilizing silty gravel soil, were used to investigate the shear stress-displacement behavior, the shear strength and the strengthening mechanism of geocell reinforced soils. Comparisons of the large-scale shear test with the triaxial compression test for the same type of soil were conducted to evaluate the influence of the testing method on the shear strength as well. The test results show that the unreinforced soil and the geocell reinforced soil exhibit similar nonlinear features in the behavior of shear stress and displacement. The geocell reinforced cement stabilizing soil has a quasi-elastic characteristic in the case of normal stress coming up to 1.0 GPa. The tests with the geocell reinforcement result in an increase of 244% in cohesion, and the tests with the geocell and the cement stabilization result in an increase of 10 times in cohesion compared with the unreinforced soil. The friction angle does not change markedly. The geocell reinforcement thus contributes a large amount of cohesion to the shear strength of soils.
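    The cohesion and friction angle figures quoted above are the intercept and slope of a Mohr-Coulomb fit, τ = c + σ_n tan φ, to peak shear strengths measured at several normal stresses. A sketch with hypothetical numbers, not the paper's data:

    ```python
    # Mohr-Coulomb fit tau = c + sigma_n * tan(phi) to direct shear results.
    # The measurements below are hypothetical, for illustration only.
    import numpy as np

    sigma_n = np.array([50.0, 100.0, 200.0, 400.0])   # normal stress (kPa), assumed
    tau_peak = np.array([55.0, 95.0, 170.0, 330.0])   # peak shear strength (kPa)

    slope, intercept = np.polyfit(sigma_n, tau_peak, 1)
    cohesion = intercept                              # c (kPa)
    friction_angle = np.degrees(np.arctan(slope))     # phi (degrees)
    print(f"c = {cohesion:.1f} kPa, phi = {friction_angle:.1f} deg")
    ```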

  12. Parallel programming characteristics of a DSP-based parallel system

    Institute of Scientific and Technical Information of China (English)

    GAO Shu; GUO Qing-ping

    2006-01-01

    This paper first introduces the structure and working principle of the DSP-based parallel system, the parallel accelerating board and the SHARC DSP chip. It then investigates the system's programming characteristics, especially the mode of communication, discusses how to design parallel algorithms, and presents a domain-decomposition-based complete multi-grid parallel algorithm with virtual boundary forecast (VBF) for solving large-scale, complicated heat transfer problems. Finally, the Mandelbrot set and a non-linear heat transfer equation of a ceramic/metal composite material are taken as examples to illustrate the implementation of the proposed algorithm. The results show that the solutions are highly efficient and achieve linear speedup.
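    The virtual boundary forecast scheme itself is not spelled out in the abstract, so the sketch below shows only the underlying domain-decomposition pattern for an explicit 1D heat equation: two subdomains advance independently and exchange overlap values each step (sequentially here, standing in for DSP-to-DSP messages). VBF would additionally predict those boundary values to reduce synchronization; that prediction step is not reproduced:

    ```python
    # Domain-decomposition sketch for an explicit 1D heat equation: two
    # subdomains exchange boundary ("virtual boundary") values every step.
    # Sequential stand-in for the message passing between DSP processors.
    import numpy as np

    nx, nt, alpha = 100, 20000, 0.25      # alpha = k*dt/dx^2 <= 0.5 for stability
    u = np.zeros(nx)
    u[0], u[-1] = 100.0, 0.0              # fixed boundary temperatures

    half = nx // 2
    left, right = u[:half + 1].copy(), u[half - 1:].copy()   # one-cell overlap

    for _ in range(nt):
        # On real hardware these two updates run on different processors and
        # the overlap cells travel as messages.
        left[1:-1] += alpha * (left[2:] - 2 * left[1:-1] + left[:-2])
        right[1:-1] += alpha * (right[2:] - 2 * right[1:-1] + right[:-2])
        left[-1], right[0] = right[1], left[-2]              # ghost-value exchange

    u = np.concatenate([left[:-1], right[1:]])
    print(u[::10])   # relaxes toward the linear steady-state profile
    ```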

  13. Large-scale structure non-Gaussianities with modal methods

    Science.gov (United States)

    Schmittfull, Marcel

    2016-10-01

    Relying on a separable modal expansion of the bispectrum, the implementation of a fast estimator for the full bispectrum of a 3d particle distribution is presented. The computational cost of accurate bispectrum estimation is negligible relative to simulation evolution, so the bispectrum can be used as a standard diagnostic whenever the power spectrum is evaluated. As an application, the time evolution of gravitational and primordial dark matter bispectra was measured in a large suite of N-body simulations. The bispectrum shape changes characteristically when the cosmic web becomes dominated by filaments and halos, therefore providing a quantitative probe of 3d structure formation. Our measured bispectra are determined by ~ 50 coefficients, which can be used as fitting formulae in the nonlinear regime and for non-Gaussian initial conditions. We also compare the measured bispectra with predictions from the Effective Field Theory of Large Scale Structures (EFTofLSS).
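    The separable modal expansion underlying the estimator can be written schematically as follows (reconstructed from the description above; the concrete basis functions are defined in the paper):

    ```latex
    B(k_1,k_2,k_3) \;\approx\; \sum_{n=0}^{N-1} \beta_n\, Q_n(k_1,k_2,k_3),
    \qquad
    Q_n(k_1,k_2,k_3) \;=\; q_r(k_1)\, q_s(k_2)\, q_t(k_3) \;+\; \mathrm{perms},
    ```

    where the roughly 50 measured coefficients β_n are what serve as fitting formulae, and the separable products q_r q_s q_t are what make the estimator FFT-friendly and hence cheap relative to the simulation itself.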

  14. Statistics of Caustics in Large-Scale Structure Formation

    Science.gov (United States)

    Feldbrugge, Job L.; Hidding, Johan; van de Weygaert, Rien

    2016-10-01

    The cosmic web is a complex spatial pattern of walls, filaments, cluster nodes and underdense void regions. It emerged through gravitational amplification from the Gaussian primordial density field. Here we infer analytical expressions for the spatial statistics of caustics in the evolving large-scale mass distribution. In our analysis, following the quasi-linear Zel'dovich formalism and confined to the 1D and 2D situation, we compute number density and correlation properties of caustics in cosmic density fields that evolve from Gaussian primordial conditions. The analysis can be straightforwardly extended to the 3D situation. Moreover, we are currently extending the approach to the non-linear regime of structure formation by including higher order Lagrangian approximations and Lagrangian effective field theory.

  15. Statistics of Caustics in Large-Scale Structure Formation

    CERN Document Server

    Feldbrugge, Job; van de Weygaert, Rien

    2014-01-01

    The cosmic web is a complex spatial pattern of walls, filaments, cluster nodes and underdense void regions. It emerged through gravitational amplification from the Gaussian primordial density field. Here we infer analytical expressions for the spatial statistics of caustics in the evolving large-scale mass distribution. In our analysis, following the quasi-linear Zeldovich formalism and confined to the 1D and 2D situation, we compute number density and correlation properties of caustics in cosmic density fields that evolve from Gaussian primordial conditions. The analysis can be straightforwardly extended to the 3D situation. Moreover, we are currently extending the approach to the non-linear regime of structure formation by including higher order Lagrangian approximations and Lagrangian effective field theory.

  16. Modeling The Large Scale Bias of Neutral Hydrogen

    CERN Document Server

    Marin, Felipe; Seo, Hee-Jong; Vallinotto, Alberto

    2009-01-01

    We present analytical estimates of the large scale bias of neutral Hydrogen (HI) based on the Halo Occupation Distribution formalism. We use a simple, non-parametric model which monotonically relates the total mass of a halo with its HI mass at zero redshift; for earlier times we assume limiting models for the HI density parameter evolution, consistent with the data presently available, as well as two main scenarios for the evolution of our HI mass-halo mass relation. We find that both the linear and the first non-linear bias terms exhibit a remarkable evolution with redshift, regardless of the specific limiting model assumed for the HI evolution. These analytical predictions are then shown to be consistent with measurements performed on the Millennium Simulation. Additionally, we show that this strong bias evolution does not appreciably affect the measurement of the HI power spectrum.

  17. Large scale instabilities in two-dimensional magnetohydrodynamics

    Science.gov (United States)

    Boffetta; Celani; Prandi

    2000-04-01

    The stability of a sheared magnetic field is analyzed in two-dimensional magnetohydrodynamics with resistive and viscous dissipation. Using a multiple-scale analysis, it is shown that at large enough Reynolds numbers the basic state, describing a motionless fluid and a layered magnetic field, becomes unstable with respect to large scale perturbations. Exact expressions for the eddy-viscosity and eddy-resistivity are derived in the vicinity of the critical point where the instability sets in. In this marginally unstable case the nonlinear phase of perturbation growth obeys a Cahn-Hilliard-like dynamics characterized by coalescence of magnetic islands, leading to a final new equilibrium state. High resolution numerical simulations confirm quantitatively the predictions of the multiscale analysis.

  18. The dynamics of large-scale arrays of coupled resonators

    Science.gov (United States)

    Borra, Chaitanya; Pyles, Conor S.; Wetherton, Blake A.; Quinn, D. Dane; Rhoads, Jeffrey F.

    2017-03-01

    This work describes an analytical framework suitable for the analysis of large-scale arrays of coupled resonators, including those which feature amplitude and phase dynamics, inherent element-level parameter variation, nonlinearity, and/or noise. In particular, this analysis allows for the consideration of coupled systems in which the number of individual resonators is large, extending as far as the continuum limit corresponding to an infinite number of resonators. Moreover, this framework permits analytical predictions for the amplitude and phase dynamics of such systems. The utility of this analytical methodology is explored through the analysis of a system of N non-identical resonators with global coupling, including both reactive and dissipative components, physically motivated by an electromagnetically-transduced microresonator array. In addition to the amplitude and phase dynamics, the behavior of the system as the number of resonators varies is investigated and the convergence of the discrete system to the infinite-N limit is characterized.

  19. Large-Scale Astrophysical Visualization on Smartphones

    Science.gov (United States)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  20. Clumps in large scale relativistic jets

    CERN Document Server

    Tavecchio, F; Celotti, A

    2003-01-01

    The relatively intense X-ray emission from large scale (tens to hundreds of kpc) jets discovered with Chandra likely implies that jets (at least in powerful quasars) are still relativistic at those distances from the active nucleus. In this case the emission is due to Compton scattering off seed photons provided by the Cosmic Microwave Background, which on one hand permits magnetic fields close to equipartition with the emitting particles, and on the other hand minimizes the requirements on the total power carried by the jet. The emission comes from compact (kpc scale) knots, and here we investigate what we can predict about the possible emission between the bright knots. This is motivated by the fact that bulk relativistic motion makes Compton scattering off the CMB photons efficient even when electrons are cold or mildly relativistic in the comoving frame. This implies relatively long cooling times, dominated by adiabatic losses. Therefore the relativistically moving plasma can emit, by Compton sc...

  1. Large-scale parametric survival analysis.

    Science.gov (United States)

    Mittal, Sushil; Madigan, David; Cheng, Jerry Q; Burd, Randall S

    2013-10-15

    Survival analysis has been a topic of active statistical research in the past few decades, with applications spread across several areas. Traditional applications usually consider data with only a small number of predictors and a few hundred or thousand observations. Recent advances in data acquisition techniques and computation power have led to considerable interest in analyzing very-high-dimensional data, where the number of predictor variables and the number of observations range between 10^4 and 10^6. In this paper, we present a tool for performing large-scale regularized parametric survival analysis using a variant of the cyclic coordinate descent method. Through our experiments on two real data sets, we show that application of regularized models to high-dimensional data avoids overfitting and can provide improved predictive performance and calibration over corresponding low-dimensional models.
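    A minimal sketch of cyclic coordinate descent for one such regularized parametric model, an L2-penalized exponential survival likelihood with right censoring. This is a simplified stand-in for the paper's tool; the data, penalty and model family are assumptions:

    ```python
    # Cyclic coordinate descent with coordinate-wise Newton steps for an
    # L2-regularized exponential survival model. Hazard is exp(x @ beta);
    # t are observed times, delta are event indicators (0 = censored).
    import numpy as np

    rng = np.random.default_rng(3)
    n, p, lam = 500, 20, 1.0
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p); beta_true[:3] = [1.0, -0.5, 0.5]
    t = rng.exponential(1.0 / np.exp(X @ beta_true))
    cap = np.quantile(t, 0.8)
    delta = (t < cap).astype(float)          # ~20% right-censored at the cap
    t = np.minimum(t, cap)

    beta = np.zeros(p)
    eta = X @ beta                           # linear predictor, kept in sync
    for sweep in range(100):
        for j in range(p):
            mu = t * np.exp(eta)             # expected event counts
            g = -X[:, j] @ (delta - mu) + 2 * lam * beta[j]   # d(-loglik+pen)/db_j
            h = (X[:, j] ** 2) @ mu + 2 * lam                 # positive curvature
            step = -g / h                    # coordinate-wise Newton step
            beta[j] += step
            eta += X[:, j] * step
    print(np.round(beta[:5], 2))             # shrunk estimates of beta_true[:5]
    ```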

  2. Curvature constraints from Large Scale Structure

    CERN Document Server

    Di Dio, Enea; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien

    2016-01-01

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter $\\Omega_K$ with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle and redshift dependent power spectra, which are especially well suited for model independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on the spatial curvature parameter estimation. We show that constraints on the curvature para...

  3. Large-scale simulations of reionization

    Energy Technology Data Exchange (ETDEWEB)

    Kohler, Katharina; /JILA, Boulder /Fermilab; Gnedin, Nickolay Y.; /Fermilab; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280h{sup -1} Mpc with 10h{sup -1} Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-{alpha} forest.

  4. Grid sensitivity capability for large scale structures

    Science.gov (United States)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  5. Large scale water lens for solar concentration.

    Science.gov (United States)

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials, normal tap water and hyper-elastic linear low density polyethylene foil. Exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with a raytracing software based on the lens shape. We have achieved a good match of experimental and theoretical data by considering wavelength dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation.
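    The volume-to-focal-length coupling reported here follows from thin-lens optics; for a roughly plano-convex water lens (an approximation, since the foil deforms elastically rather than spherically):

    ```latex
    \frac{1}{f} \;=\; (n_w - 1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right)
    \;\xrightarrow{\;R_2 \to \infty\;}\;
    f \;\approx\; \frac{R_1}{n_w - 1} \;\approx\; 3\,R_1
    \qquad (n_w \approx 1.33),
    ```

    so adding water changes the foil curvature R_1 and thereby shifts the focal spot, which is the dependence the authors extract from images and a finite element model of the lens shape.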

  6. Constructing sites on a large scale

    DEFF Research Database (Denmark)

    Braae, Ellen Marie; Tietjen, Anne

    2011-01-01

    for setting the design brief in a large scale urban landscape in Norway, the Jaeren region around the city of Stavanger. In this paper, we first outline the methodological challenges and then present and discuss the proposed method based on our teaching experiences. On this basis, we discuss aspects...... within the development of our urban landscapes. At the same time, urban and landscape designers are confronted with new methodological problems. Within a strategic transformation perspective, the formulation of the design problem or brief becomes an integrated part of the design process. This paper...... discusses new design (education) methods based on a relational concept of urban sites and design processes. Within this logic site survey is not simply a pre-design activity nor is it a question of comprehensive analysis. Site survey is an integrated part of the design process. By means of active site...

  7. Supporting large-scale computational science

    Energy Technology Data Exchange (ETDEWEB)

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data; and (4) in some cases, performance is a moot issue, in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  8. Introducing Large-Scale Innovation in Schools

    Science.gov (United States)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms, especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme, which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  9. Near optimal bispectrum estimators for large-scale structure

    Science.gov (United States)

    Schmittfull, Marcel; Baldauf, Tobias; Seljak, Uroš

    2015-02-01

    Clustering of large-scale structure provides significant cosmological information through the power spectrum of density perturbations. Additional information can be gained from higher-order statistics like the bispectrum, especially to break the degeneracy between the linear halo bias b1 and the amplitude of fluctuations σ8. We propose new simple, computationally inexpensive bispectrum statistics that are near optimal for specific applications like bias determination. Corresponding to the Legendre decomposition of nonlinear halo bias and gravitational coupling at second order, these statistics are given by the cross-spectra of the density with three quadratic fields: the squared density, a tidal term, and a shift term. For halos and galaxies the first two have associated nonlinear bias terms b2 and bs2, respectively, while the shift term has none in the absence of velocity bias (valid in the k → 0 limit). Thus the linear bias b1 is best determined by the shift cross-spectrum, while the squared density and tidal cross-spectra mostly tighten constraints on b2 and bs2 once b1 is known. Since the form of the cross-spectra is derived from optimal maximum-likelihood estimation, they contain the full bispectrum information on bias parameters. Perturbative analytical predictions for their expectation values and covariances agree with simulations on large scales, k ≲ 0.09 h/Mpc at z = 0.55 with Gaussian R = 20 h^-1 Mpc smoothing, for matter-matter-matter and matter-matter-halo combinations. For halo-halo-halo cross-spectra the model also needs to include corrections to the Poisson stochasticity.
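    The three quadratic fields can be written schematically as follows (reconstructed from the description above, so the normalization conventions are assumptions):

    ```latex
    \delta^2(\mathbf{x}), \qquad
    s^2(\mathbf{x}) = s_{ij}(\mathbf{x})\, s_{ij}(\mathbf{x})
    \;\;\text{with}\;\;
    s_{ij} = \left(\frac{\partial_i \partial_j}{\nabla^2}
             - \frac{\delta^{K}_{ij}}{3}\right)\delta, \qquad
    \boldsymbol{\Psi}\cdot\nabla\delta
    \;\;\text{with}\;\;
    \boldsymbol{\Psi} = -\frac{\nabla}{\nabla^2}\,\delta,
    ```

    and each is cross-correlated with the density δ itself, giving the squared-density, tidal and shift cross-spectra discussed in the abstract.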

  10. A theoretical model for the evolution of two-dimensional large-scale coherent structures in a mixing layer

    Institute of Scientific and Technical Information of China (English)

    周恒; 马良

    1995-01-01

    By a proper combination of the modified weakly nonlinear theory of hydrodynamic stability and the energy method, the spatial evolution of the large-scale coherent structures in a mixing layer has been calculated. The results are satisfactory.

  11. Large scale simulations of the great 1906 San Francisco earthquake

    Science.gov (United States)

    Nilsson, S.; Petersson, A.; Rodgers, A.; Sjogreen, B.; McCandless, K.

    2006-12-01

    As part of a multi-institutional simulation effort, we present large scale computations of the ground motion during the great 1906 San Francisco earthquake using a new finite difference code called WPP. The material data base for northern California provided by USGS together with the rupture model by Song et al. is demonstrated to lead to a reasonable match with historical data. In our simulations, the computational domain covered 550 km by 250 km of northern California down to 40 km depth, so a 125 m grid size corresponds to about 2.2 Billion grid points. To accommodate these large grids, the simulations were run on 512-1024 processors on one of the supercomputers at Lawrence Livermore National Lab. A wavelet compression algorithm enabled storage of time-dependent volumetric data. Nevertheless, the first 45 seconds of the earthquake still generated 1.2 TByte of disk space and the 3-D post processing was done in parallel.

  12. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. (Stanford Univ., CA (United States). Dept. of Operations Research Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft)

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
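    A minimal sample-average sketch of a two-stage stochastic LP with recourse (a newsvendor-style problem), solved as one deterministic equivalent. The record's actual machinery combines decomposition with importance sampling, whereas this toy uses plain Monte Carlo and a direct solve:

    ```python
    # Sample-average approximation of a two-stage stochastic LP with recourse.
    # First stage: order quantity q; second stage: sales y_s per demand scenario.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(4)
    S = 200                                   # sampled demand scenarios
    demand = rng.lognormal(mean=3.0, sigma=0.4, size=S)
    cost, price = 2.0, 5.0                    # first-stage cost, recourse revenue

    # minimize  cost*q - (price/S) * sum_s y_s
    # s.t.      y_s <= q,  y_s <= demand_s,  q, y_s >= 0
    c = np.concatenate([[cost], -np.full(S, price / S)])
    A_ub = np.zeros((2 * S, S + 1))
    b_ub = np.zeros(2 * S)
    A_ub[:S, 0] = -1.0
    A_ub[:S, 1:] = np.eye(S)                  # y_s - q <= 0
    A_ub[S:, 1:] = np.eye(S)
    b_ub[S:] = demand                         # y_s <= demand_s

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (S + 1))
    print("order quantity:", res.x[0], "expected profit:", -res.fun)
    ```

    The optimal order settles near the demand quantile where the marginal cost/price ratio balances, which is the kind of hedging a deterministic (mean-demand) model misses.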

  13. Nonlinear coil sensitivity estimation for parallel magnetic resonance imaging using data-adaptive steering kernel regression method.

    Science.gov (United States)

    Fang, Sheng; Guo, Hua

    2013-01-01

    The parallel magnetic resonance imaging (parallel imaging) technique reduces the MR data acquisition time by using multiple receiver coils. Coil sensitivity estimation is critical for the performance of parallel imaging reconstruction. Currently, most coil sensitivity estimation methods are based on linear interpolation techniques. Such methods may result in Gibbs ringing artifacts or resolution loss when the resolution of the coil sensitivity data is limited. To solve this problem, we propose a nonlinear coil sensitivity estimation method based on steering kernel regression, which performs a local gradient guided interpolation of the coil sensitivity. The in vivo experimental results demonstrate that this method can effectively suppress Gibbs ringing artifacts in the coil sensitivity and reduce both the noise and the residual aliasing artifact level in SENSE reconstruction.
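    Steering kernel regression orients anisotropic kernels along the local gradient covariance; the 1D sketch below keeps only the simplest ingredient of that idea, a bandwidth that shrinks where a pilot gradient estimate is large, so sharp sensitivity transitions are not smeared. All parameters are illustrative:

    ```python
    # Simplified data-adaptive kernel regression in the spirit of steering
    # kernel regression: bandwidth shrinks near strong gradients. The real
    # method uses anisotropic, gradient-covariance-steered kernels in 2D;
    # this isotropic 1D stand-in only adapts the kernel scale.
    import numpy as np

    rng = np.random.default_rng(5)
    x = np.linspace(0, 1, 200)
    y = np.where(x < 0.5, 0.2, 1.0) + 0.05 * rng.standard_normal(200)  # edge + noise

    grad = np.gradient(y, x)                    # pilot gradient estimate
    h = 0.05 / (1.0 + 2.0 * np.abs(grad) / np.abs(grad).mean())  # adaptive bandwidth

    def smooth(x_eval):
        """Nadaraya-Watson estimate with per-sample adaptive bandwidths."""
        w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h[None, :]) ** 2)
        return (w @ y) / w.sum(axis=1)

    y_hat = smooth(x)
    print(y_hat[95:105].round(2))   # the step stays sharper than a fixed bandwidth
    ```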

  14. Large scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    CERN Document Server

    Bhat, Pallavi; Blackman, Eric G

    2016-01-01

    We study the dynamo generation (exponential growth) of large scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal ($x$-$y$) averaging, we demonstrate the presence of large scale fields when either horizontal or vertical ($y$-$z$) averaging is employed. By computing planar averaged fields and power spectra, we find large scale dynamo action in the early MRI growth phase---a previously unidentified feature. Fast growing horizontal low wavenumber modes and fiducial vertical modes over a narrow range of wave numbers amplify these planar averaged fields in the MRI growth phase, before turbulence sets in. The large scale field growth requires linear fluctuations but not nonlinear turbulence (as defined by mode-mode coupling) and grows as a direct global mode of the MRI. Only by vertical averaging can it be shown that the growth of horizontal low wavenumber MRI modes directly feeds back to the initial vertica...

  15. Uncertainty Quantification for Large-Scale Ice Sheet Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [Univ. of Texas, Austin, TX (United States)

    2016-02-05

    This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.
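    The Newton-Krylov idea at the heart of such forward solvers can be tried at toy scale with SciPy, which builds Jacobian-vector products from finite differences of the residual rather than an assembled Jacobian. The model problem below is an assumption, unrelated to the actual Stokes system:

    ```python
    # Jacobian-free Newton-Krylov on a small nonlinear diffusion-type system:
    # F(u) = -u'' + u^3 - 1 = 0 on a 1D grid with zero Dirichlet boundaries.
    import numpy as np
    from scipy.optimize import newton_krylov

    n = 50

    def residual(u):
        r = np.empty_like(u)
        r[1:-1] = -(u[2:] - 2 * u[1:-1] + u[:-2]) * n ** 2 + u[1:-1] ** 3 - 1.0
        r[0], r[-1] = u[0], u[-1]        # enforce the boundary conditions
        return r

    u0 = np.zeros(n)
    u = newton_krylov(residual, u0, method="lgmres", f_tol=1e-8)
    print(u.max())
    ```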

  16. Large scale stochastic spatio-temporal modelling with PCRaster

    Science.gov (United States)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.

  17. Nonlinearity for a parallel kinematic machine tool and its application to interpolation accuracy analysis

    Institute of Scientific and Technical Information of China (English)

    2002-01-01


  18. High speed and large scale scientific computing

    CERN Document Server

    Gentzsch, W; Joubert, GR

    2010-01-01

    Over the years parallel technologies have completely transformed mainstream computing. This book deals with the issues related to the area of cloud computing and discusses developments in grids, applications and information processing, as well as e-science. It is suitable for computer scientists, IT engineers and IT managers.

  19. ON THE EVOLUTION OF LARGE SCALE STRUCTURES IN THREE-DIMENSIONAL MIXING LAYERS

    Institute of Scientific and Technical Information of China (English)

    罗纪生; H.E, Fiedler

    2001-01-01

    In this paper, several mathematical models for the large scale structures in some special kinds of mixing layers, which might be practically useful for enhancing the mixing, are proposed. First, the linear growth rate of the large scale structures in the mixing layers is calculated. Then, using the much improved weakly non-linear theory combined with the energy method, the non-linear evolution of large scale structures in two special mixing layer configurations is calculated. One of the mixing layers has equal magnitudes of the upstream velocity vectors, while the angles between the velocity vectors and the trailing edge are π/2 − and π/2 +, respectively. The other mixing layer is generated by a splitter-plate with a 45-degree-sweep trailing edge.

  20. Cold flows and large scale tides

    Science.gov (United States)

    van de Weygaert, R.; Hoffman, Y.

    1999-01-01

    Within the context of the general cosmological setting it has remained puzzling that the local Universe is a relatively cold environment, in the sense that small-scale peculiar velocities are relatively small. Indeed, this has long figured as an important argument for the Universe having a low Ω, or, if the Universe were to have a high Ω, for the existence of a substantial bias between the galaxy and the matter distribution. Here we investigate the dynamical impact of neighbouring matter concentrations on the local small-scale characteristics of cosmic flows. While regions where huge nearby matter clumps represent a dominating component in the local dynamics and kinematics may experience a faster collapse because of the corresponding tidal influence, the latter will also slow down or even prevent a thorough mixing and virialization of the collapsing region. By means of N-body simulations starting from constrained realizations of regions of modest density surrounded by more pronounced massive structures, we have explored the extent to which the large scale tidal fields may indeed suppress the `heating' of the small-scale cosmic velocities. Among other things, we quantify the resulting cosmic flows through the cosmic Mach number. This allows us to draw conclusions about the validity of estimates of global cosmological parameters from local cosmic phenomena and the necessity to take into account the structure and distribution of mass in the local Universe.
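    For reference, the cosmic Mach number used here is the standard ratio of bulk flow to small-scale velocity dispersion on a smoothing scale R (stated from the usual definition, not quoted from the record):

    ```latex
    \mathcal{M}(R) \;=\; \frac{\left|\mathbf{v}_{\mathrm{bulk}}(R)\right|}{\sigma_v(R)},
    ```

    so a "cold" local flow means coherent bulk motion with little small-scale dispersion, i.e. a large Mach number.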

  1. Large scale mechanical metamaterials as seismic shields

    Science.gov (United States)

    Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.

    2016-08-01

    Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including for the first time numerical analysis of both surface and guided waves, soil dissipation effects, and full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided, exploring different metamaterial configurations, combining phononic crystals and locally resonant structures and different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite size systems to assess the seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for the protection of civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.

  2. Large-scale autostereoscopic outdoor display

    Science.gov (United States)

    Reitterer, Jörg; Fidler, Franz; Saint Julien-Wallsee, Ferdinand; Schmid, Gerhard; Gartner, Wolfgang; Leeb, Walter; Schmid, Ulrich

    2013-03-01

    State-of-the-art autostereoscopic displays are often limited in size, effective brightness, number of 3D viewing zones, and maximum 3D viewing distances, all of which are mandatory requirements for large-scale outdoor displays. Conventional autostereoscopic indoor concepts like lenticular lenses or parallax barriers cannot simply be adapted for these screens due to the inherent loss of effective resolution and brightness, which would reduce both image quality and sunlight readability. We have developed a modular autostereoscopic multi-view laser display concept with sunlight readable effective brightness, theoretically up to several thousand 3D viewing zones, and maximum 3D viewing distances of up to 60 meters. For proof-of-concept purposes a prototype display with two pixels was realized. Due to various manufacturing tolerances each individual pixel has slightly different optical properties, and hence the 3D image quality of the display has to be calculated stochastically. In this paper we present the corresponding stochastic model, we evaluate the simulation and measurement results of the prototype display, and we calculate the achievable autostereoscopic image quality to be expected for our concept.

  3. Management of large-scale multimedia conferencing

    Science.gov (United States)

    Cidon, Israel; Nachum, Youval

    1998-12-01

    The goal of this work is to explore management strategies and algorithms for large-scale multimedia conferencing over a communication network. Since the use of multimedia conferencing is still limited, the management of such systems has not yet been studied in depth. A well organized and human friendly multimedia conference management system should utilize its limited resources efficiently and fairly, as well as take into account the requirements of the conference participants. The ability of the management to enforce fair policies and to quickly take into account the participants' preferences may even lead to a conference environment that is more pleasant and more effective than a similar face-to-face meeting. We suggest several principles for defining and solving resource sharing problems in this context. The conference resources addressed in this paper are bandwidth (conference network capacity), time (participants' scheduling) and the limitations of audio and visual equipment. The participants' requirements for these resources are defined and translated into Quality of Service requirements and fairness criteria.

  4. Large-scale wind turbine structures

    Science.gov (United States)

    Spera, David A.

    1988-01-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, the modeling of structural dynamic response, and the design of innovative structural responses. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  5. Large scale probabilistic available bandwidth estimation

    CERN Document Server

    Thouin, Frederic; Rabbat, Michael

    2010-01-01

    The common utilization-based definition of available bandwidth and many of the existing tools to estimate it suffer from several important weaknesses: i) most tools report a point estimate of average available bandwidth over a measurement interval and do not provide a confidence interval; ii) the commonly adopted models used to relate the available bandwidth metric to the measured data are invalid in almost all practical scenarios; iii) existing tools do not scale well and are not suited to the task of multi-path estimation in large-scale networks; iv) almost all tools use ad-hoc techniques to address measurement noise; and v) tools do not provide enough flexibility in terms of accuracy, overhead, latency and reliability to adapt to the requirements of various applications. In this paper we propose a new definition for available bandwidth and a novel framework that addresses these issues. We define probabilistic available bandwidth (PAB) as the largest input rate at which we can send a traffic flow along a pa...

  6. Gravitational redshifts from large-scale structure

    CERN Document Server

    Croft, Rupert A C

    2013-01-01

    The recent measurement of the gravitational redshifts of galaxies in galaxy clusters by Wojtak et al. has opened a new observational window on dark matter and modified gravity. By stacking clusters this determination effectively used the line of sight distortion of the cross-correlation function of massive galaxies and lower mass galaxies to estimate the gravitational redshift profile of clusters out to 4 Mpc/h. Here we use a halo model of clustering to predict the distortion due to gravitational redshifts of the cross-correlation function on scales from 1 - 100 Mpc/h. We compare our predictions to simulations and use the simulations to make mock catalogues relevant to current and future galaxy redshift surveys. Without formulating an optimal estimator, we find that the full BOSS survey should be able to detect gravitational redshifts from large-scale structure at the ~4 sigma level. Upcoming redshift surveys will greatly increase the number of galaxies useable in such studies and the BigBOSS and Euclid exper...

  7. Food appropriation through large scale land acquisitions

    Science.gov (United States)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of a lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security for the local populations.

  8. Large-scale clustering of cosmic voids

    Science.gov (United States)

    Chan, Kwan Chuen; Hamaus, Nico; Desjacques, Vincent

    2014-11-01

    We study the clustering of voids using N-body simulations and simple theoretical models. The excursion-set formalism describes fairly well the abundance of voids identified with the watershed algorithm, although the void formation threshold required is quite different from the spherical collapse value. The void cross bias bc is measured and its large-scale value is found to be consistent with the peak-background split results. A simple fitting formula for bc is found. We model the void auto-power spectrum taking into account the void biasing and exclusion effect. A good fit to the simulation data is obtained for voids with radii ≳30 Mpc h-1, especially when the void biasing model is extended to 1-loop order. However, the best-fit bias parameters do not agree well with the peak-background split results. Being able to fit the void auto-power spectrum is particularly important, not only because it is the direct observable in galaxy surveys, but also because our method enables us to treat the bias parameters as nuisance parameters, which are sensitive to the techniques used to identify voids.

  9. Large scale digital atlases in neuroscience

    Science.gov (United States)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and, increasingly, its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining these data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and, in addition to atlases of the human brain, includes high-quality atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software, to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project, a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large-scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  10. Decentralized robust stabilization of discrete-time fuzzy large-scale systems with parametric uncertainties: a LMI method

    Institute of Scientific and Technical Information of China (English)

    Zhang Yougang; Xu Bugong

    2006-01-01

    The decentralized robust stabilization problem for discrete-time fuzzy large-scale systems with parametric uncertainties is considered. The uncertain fuzzy large-scale system consists of N interconnected T-S fuzzy subsystems, and the parametric uncertainties are unknown but norm-bounded. Based on Lyapunov stability theory and the decentralized control theory of large-scale systems, a design scheme of decentralized parallel distributed compensation (DPDC) fuzzy controllers that ensures the asymptotic stability of the whole fuzzy large-scale system is proposed. The existence conditions for these controllers take the form of linear matrix inequalities (LMIs). Finally, a numerical simulation example is given to show the utility of the proposed method.
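
    To give a flavor of such existence conditions, a standard discrete-time stabilization LMI in textbook form is sketched below (illustrative only; the paper's actual inequalities additionally handle the interconnections and the norm-bounded uncertainties): find $X \succ 0$ and matrices $M_i$ such that

    $$\begin{pmatrix} X & (A_i X - B_i M_i)^{\mathsf{T}} \\ A_i X - B_i M_i & X \end{pmatrix} \succ 0, \qquad i = 1, \dots, r,$$

    where $(A_i, B_i)$ are the local linear models of the fuzzy rules; the state-feedback gains are then recovered as $K_i = M_i X^{-1}$.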

  11. Kinetic Alfvén waves generation by large-scale phase-mixing

    CERN Document Server

    Vasconez, C L; Valentini, F; Servidio, S; Matthaeus, W H; Malara, F

    2015-01-01

    One view of solar-wind turbulence is that the observed highly anisotropic fluctuations at spatial scales near the proton inertial length $d_p$ may be considered as kinetic Alfvén waves (KAWs). In the present paper, we show how phase-mixing of large-scale parallel-propagating Alfvén waves is an efficient mechanism for the production of KAWs at wavelengths close to $d_p$ and at large propagation angles with respect to the magnetic field. Magnetohydrodynamic (MHD), Hall-magnetohydrodynamic (HMHD), and hybrid Vlasov-Maxwell (HVM) simulations modeling the propagation of Alfvén waves in inhomogeneous plasmas are performed. In the linear regime, the role of dispersive effects is singled out by comparing MHD and HMHD results. Fluctuations produced by phase-mixing are identified as KAWs through a comparison of the polarization of magnetic fluctuations and the wave group velocity with analytical linear predictions. In the nonlinear regime, comparison of HMHD and HVM simulations allows us to point out the role of kinetic effe...

  12. Application of "parallel" moiré deflectometry and the single beam Z-scan technique in the measurement of the nonlinear refractive index.

    Science.gov (United States)

    Rasouli, Saifollah; Ghasemi, H; Tavassoly, M T; Khalesifard, H R

    2011-06-01

    In this paper, the application of "parallel" moiré deflectometry to measuring the nonlinear refractive index of materials is reported. In "parallel" moiré deflectometry the grating vectors are parallel, and the resulting moiré fringes are also parallel to the grating lines. Compared with "rotational" moiré deflectometry, in which the angle of rotation of the moiré fringes is difficult to determine, and with the Z-scan technique, which is sensitive to power fluctuations, "parallel" moiré deflectometry is more reliable: it allows one to measure the radius of curvature of the light beam by measuring the moiré fringe spacing. The nonlinear refractive index of the sample, including the sign of the change, is obtained from the moiré fringe spacing curve. The method is applied to measuring the nonlinear refractive index of ferrofluids.

  13. Optimal management of large scale aquifers under uncertainty

    Science.gov (United States)

    Ghorbanidehno, H.; Kokkinaki, A.; Kitanidis, P. K.; Darve, E. F.

    2016-12-01

    Water resources systems, and especially groundwater reservoirs, are a valuable resource that is often being endangered by contamination and over-exploitation. Optimal control techniques can be applied for groundwater management to ensure the long-term sustainability of this vulnerable resource. Linear Quadratic Gaussian (LQG) control is an optimal control method that combines a Kalman filter for real time estimation with a linear quadratic regulator for dynamic optimization. The LQG controller can be used to determine the optimal controls (e.g. pumping schedule) upon receiving feedback about the system from incomplete noisy measurements. However, applying LQG control for systems of large dimension is computationally expensive. This work presents the Spectral Linear Quadratic Gaussian (SpecLQG) control, a new fast LQG controller that can be used for large scale problems. SpecLQG control combines the Spectral Kalman filter, which is a fast Kalman filter algorithm, with an efficient low rank LQR, and provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification and optimal control for linear and weakly non-linear systems. The computational cost of SpecLQG controller scales linearly with the number of unknowns, a great improvement compared to the quadratic cost of basic LQG. We demonstrate the accuracy and computational efficiency of SpecLQG control using two applications: first, a linear validation case for pumping schedule management in a small homogeneous confined aquifer; and second, a larger scale nonlinear case with unknown heterogeneities in aquifer properties and boundary conditions.
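
    For readers unfamiliar with the two LQG building blocks, the following minimal Python sketch shows the textbook baseline being combined (all matrices are hypothetical placeholders; SpecLQG replaces these dense operations with spectral, low-rank variants to obtain linear scaling):

        import numpy as np

        # Textbook LQG baseline with assumed system matrices A, B, H and
        # noise covariances W (process) and V (measurement). This is the
        # dense computation that SpecLQG accelerates.

        def lqr_gain(A, B, Q, R, iters=500):
            # Fixed-point iteration on the discrete-time Riccati equation.
            P = Q.copy()
            for _ in range(iters):
                K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
                P = Q + A.T @ P @ (A - B @ K)
            return K

        def kalman_step(x, P, u, z, A, B, H, W, V):
            # One predict/update cycle of the Kalman filter.
            x_pred = A @ x + B @ u
            P_pred = A @ P @ A.T + W
            S = H @ P_pred @ H.T + V              # innovation covariance
            G = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
            x_new = x_pred + G @ (z - H @ x_pred)
            P_new = (np.eye(len(x)) - G @ H) @ P_pred
            return x_new, P_new

        # The controller then applies u = -K @ x_hat at every time step.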

  14. A parallel multi-domain solution methodology applied to nonlinear thermal transport problems in nuclear fuel pins

    Science.gov (United States)

    Philip, Bobby; Berrill, Mark A.; Allu, Srikanth; Hamilton, Steven P.; Sampath, Rahul S.; Clarno, Kevin T.; Dilts, Gary A.

    2015-04-01

    This paper describes an efficient and nonlinearly consistent parallel solution methodology for solving coupled nonlinear thermal transport problems that occur in nuclear reactor applications over hundreds of individual 3D physical subdomains. Efficiency is obtained by leveraging knowledge of the physical domains, the physics on individual domains, and the couplings between them for preconditioning within a Jacobian-free Newton-Krylov method. Details of the computational infrastructure that enabled this work, namely the open-source Advanced Multi-Physics (AMP) package developed by the authors, are described. Details of verification and validation experiments, and parallel performance analyses in weak and strong scaling studies demonstrating the achieved efficiency of the algorithm, are presented. Furthermore, numerical experiments demonstrate that the preconditioner developed is independent of the number of fuel subdomains in a fuel rod, which is particularly important when simulating different types of fuel rods. Finally, we demonstrate the power of the coupling methodology by considering problems with couplings between surface and volume physics and the coupling of nonlinear thermal transport in fuel rods to an external radiation transport code.
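
    The core trick of a Jacobian-free Newton-Krylov solver can be sketched in a few lines of Python (a simplified illustration, without the physics-based preconditioning that gives AMP its efficiency): the Krylov method only needs Jacobian-vector products, which are approximated by finite differences of the residual.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def jfnk_step(F, u):
            # One Newton step in which J(u) @ v is approximated by a finite
            # difference of the nonlinear residual F, so no Jacobian matrix
            # is ever formed.
            r = F(u)
            eps = np.sqrt(np.finfo(float).eps) * max(np.linalg.norm(u), 1.0)

            def jv(v):
                return (F(u + eps * v) - F(u)) / eps

            J = LinearOperator((u.size, u.size), matvec=jv)
            du, _ = gmres(J, -r)   # Krylov solve for the Newton correction
            return u + du

        # Small example: solve u0^2 + u1 = 3, u0 + u1^2 = 5.
        F = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])
        u = np.array([1.0, 1.0])
        for _ in range(20):
            u = jfnk_step(F, u)   # converges to (1, 2)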

  15. Developing Large-Scale Bayesian Networks by Composition

    Data.gov (United States)

    National Aeronautics and Space Administration — In this paper, we investigate the use of Bayesian networks to construct large-scale diagnostic systems. In particular, we consider the development of large-scale...

  16. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large scale dimensional metrology. Enables practitioners to study distributed large scale dimensional metrology independently. Includes specific examples of the development of new system prototypes.

  17. CROW for large scale macromolecular simulations.

    Science.gov (United States)

    Hodoscek, Milan; Borstnik, Urban; Janezic, Dusanka

    2002-01-01

    CROW (Columns and Rows Of Workstations - http://www.sicmm.org/crow/) is a parallel computer cluster based on the Beowulf (http://www.beowulf.org/) idea, modified to support a larger number of processors. It is built on a point-to-point network architecture that requires no network switching equipment. The cost is therefore lower, and there is no degradation in network performance even for a larger number of processors.

  18. Large Scale Flame Spread Environmental Characterization Testing

    Science.gov (United States)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoglu, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure the pressure rise in the chamber as the sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was imposed to produce an array of tests from a fixed set of constraints, and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases, mainly because of the larger amount of heat generation and, to a much smaller extent, due to the increase in the gaseous number of moles. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to the surroundings during the longer, slower downward burns. One heat loss mechanism involved mounting a heat exchanger directly above the burning sample, in the path of the plume, to act as a heat sink and more efficiently dissipate the heat from the combustion event. This proved an effective means of chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation

  19. Synchronization of coupled large-scale Boolean networks

    Energy Technology Data Exchange (ETDEWEB)

    Li, Fangfei, E-mail: li-fangfei@163.com [Department of Mathematics, East China University of Science and Technology, No. 130, Meilong Road, Shanghai, Shanghai 200237 (China)

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.

  20. Synchronization of coupled large-scale Boolean networks

    Science.gov (United States)

    Li, Fangfei

    2014-03-01

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
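
    As a toy illustration of what complete synchronization means in this setting (the update rules below are invented for the example and are not taken from the paper), one can brute-force all joint initial states of a small drive-response pair and check that the response state eventually tracks the drive:

        from itertools import product

        def step(x, y):
            # Drive network x (2 nodes) and response network y, whose update
            # is driven entirely by x -- a deliberately trivial coupling.
            x1, x2 = x
            x_new = (x2, x1 ^ x2)
            y_new = (x2, x1 ^ x2)
            return x_new, y_new

        def synchronizes(transient=8, horizon=16):
            # Complete synchronization: y(t) == x(t) for all t beyond a
            # transient, from every joint initial state.
            for x0 in product((0, 1), repeat=2):
                for y0 in product((0, 1), repeat=2):
                    x, y = x0, y0
                    for t in range(horizon):
                        x, y = step(x, y)
                        if t >= transient and x != y:
                            return False
            return True

        print(synchronizes())   # True for this toy pair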

  1. Sheltering in buildings from large-scale outdoor releases

    Energy Technology Data Exchange (ETDEWEB)

    Chan, W.R.; Price, P.N.; Gadgil, A.J.

    2004-06-01

    Intentional or accidental large-scale airborne toxic releases (e.g. terrorist attacks or industrial accidents) can cause severe harm to nearby communities. Under these circumstances, taking shelter in buildings can be an effective emergency response strategy. Some examples where shelter-in-place was successful at preventing injuries and casualties have been documented [1, 2]. As public education and preparedness are vital to ensure the success of an emergency response, many agencies have prepared documents advising the public on what to do during and after sheltering [3, 4, 5]. In this document, we focus on the role buildings play in providing protection to occupants. The conclusions of this article are: (1) Under most circumstances, shelter-in-place is an effective response against large-scale outdoor releases. This is particularly true for releases of short duration (a few hours or less) and chemicals that exhibit non-linear dose-response characteristics. (2) The building envelope not only restricts the outdoor-indoor air exchange, but can also filter some biological or even chemical agents. Once indoors, the toxic materials can deposit or sorb onto indoor surfaces. All these processes contribute to the effectiveness of shelter-in-place. (3) Tightening of the building envelope and improved filtration can enhance the protection offered by buildings. The mechanical ventilation systems present in most commercial buildings, however, should be turned off and dampers closed when sheltering from an outdoor release. (4) After the passing of the outdoor plume, some residuals will remain indoors. It is therefore important to terminate shelter-in-place in a timely manner to minimize exposure to the toxic materials.

  2. Large-scale quantum photonic circuits in silicon

    Science.gov (United States)

    Harris, Nicholas C.; Bunandar, Darius; Pant, Mihir; Steinbrecher, Greg R.; Mower, Jacob; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Englund, Dirk

    2016-08-01

    Quantum information science offers inherently more powerful methods for communication, computation, and precision measurement that take advantage of quantum superposition and entanglement. In recent years, theoretical and experimental advances in quantum computing and simulation with photons have spurred great interest in developing large photonic entangled states that challenge today's classical computers. As experiments have increased in complexity, there has been an increasing need to transition bulk optics experiments to integrated photonics platforms to control more spatial modes with higher fidelity and phase stability. The silicon-on-insulator (SOI) nanophotonics platform offers new possibilities for quantum optics, including the integration of bright, nonclassical light sources, based on the large third-order nonlinearity (χ(3)) of silicon, alongside quantum state manipulation circuits with thousands of optical elements, all on a single phase-stable chip. How large do these photonic systems need to be? Recent theoretical work on Boson Sampling suggests that even the problem of sampling from ~30 identical photons, having passed through an interferometer of hundreds of modes, becomes challenging for classical computers. While experiments of this size are still challenging, the SOI platform has the required component density to enable low-loss and programmable interferometers for manipulating hundreds of spatial modes. Here, we discuss the SOI nanophotonics platform for quantum photonic circuits with hundreds-to-thousands of optical elements and the associated challenges. We compare SOI to competing technologies in terms of requirements for quantum optical systems. We review recent results on large-scale quantum state evolution circuits and strategies for realizing high-fidelity heralded gates with imperfect, practical systems. Next, we review recent results on silicon photonics-based photon-pair sources and device architectures, and we discuss a path towards

  3. On soft limits of large-scale structure correlation functions

    Energy Technology Data Exchange (ETDEWEB)

    Sagunski, Laura

    2016-08-15

    Large-scale structure surveys have the potential to become the leading probe for precision cosmology in the next decade. To extract valuable information on the cosmological evolution of the Universe from the observational data, it is of major importance to derive accurate theoretical predictions for the statistical large-scale structure observables, such as the power spectrum and the bispectrum of (dark) matter density perturbations. Hence, one of the greatest challenges of modern cosmology is to theoretically understand the non-linear dynamics of large-scale structure formation in the Universe from first principles. While analytic approaches to describe the large-scale structure formation are usually based on the framework of non-relativistic cosmological perturbation theory, we pursue another road in this thesis and develop methods to derive generic, non-perturbative statements about large-scale structure correlation functions. We study unequal- and equal-time correlation functions of density and velocity perturbations in the limit where one of their wavenumbers becomes small, that is, in the soft limit. In the soft limit, it is possible to link (N+1)-point and N-point correlation functions to non-perturbative 'consistency conditions'. These provide in turn a powerful tool to test fundamental aspects of the underlying theory at hand. In this work, we first rederive the (resummed) consistency conditions at unequal times by using the so-called eikonal approximation. The main appeal of the unequal-time consistency conditions is that they are solely based on symmetry arguments and thus are universal. Proceeding from this, we direct our attention to consistency conditions at equal times, which, on the other hand, depend on the interplay between soft and hard modes. We explore the existence and validity of equal-time consistency conditions within and beyond perturbation theory. For this purpose, we investigate the predictions for the soft limit of the

  4. Large scale structure from viscous dark matter

    Science.gov (United States)

    Blas, Diego; Floerchinger, Stefan; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim

    2015-11-01

    Cosmological perturbations of sufficiently long wavelength admit a fluid dynamic description. We consider modes with wavevectors below a scale km for which the dynamics is only mildly non-linear. The leading effect of modes above that scale can be accounted for by effective non-equilibrium viscosity and pressure terms. For mildly non-linear scales, these mainly arise from momentum transport within the ideal and cold but inhomogeneous fluid, while momentum transport due to more microscopic degrees of freedom is suppressed. As a consequence, concrete expressions with no free parameters, except the matching scale km, can be derived from matching evolution equations to standard cosmological perturbation theory. Two-loop calculations of the matter power spectrum in the viscous theory lead to excellent agreement with N-body simulations up to scales k=0.2 h/Mpc. The convergence properties in the ultraviolet are better than for standard perturbation theory and the results are robust with respect to variations of the matching scale.

  5. Large scale structure from viscous dark matter

    CERN Document Server

    Blas, Diego; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim

    2015-01-01

    Cosmological perturbations of sufficiently long wavelength admit a fluid dynamic description. We consider modes with wavevectors below a scale $k_m$ for which the dynamics is only mildly non-linear. The leading effect of modes above that scale can be accounted for by effective non-equilibrium viscosity and pressure terms. For mildly non-linear scales, these mainly arise from momentum transport within the ideal and cold but inhomogeneous fluid, while momentum transport due to more microscopic degrees of freedom is suppressed. As a consequence, concrete expressions with no free parameters, except the matching scale $k_m$, can be derived from matching evolution equations to standard cosmological perturbation theory. Two-loop calculations of the matter power spectrum in the viscous theory lead to excellent agreement with $N$-body simulations up to scales $k=0.2 \\, h/$Mpc. The convergence properties in the ultraviolet are better than for standard perturbation theory and the results are robust with respect to varia...

  6. High Fidelity Simulations of Large-Scale Wireless Networks

    Energy Technology Data Exchange (ETDEWEB)

    Onunkwo, Uzoma [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Benz, Zachary [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    The worldwide proliferation of wirelessly connected devices continues to accelerate. There are tens of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively long turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES approaches, which fail to scale (e.g., the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute the computation workload while mitigating the communication overhead associated with synchronization. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia's simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia's current highly-regarded capabilities in large-scale emulation have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.

  7. Optical encryption of parallel quadrature phase shift keying signals based on nondegenerate four-wave mixing in highly nonlinear fiber

    Science.gov (United States)

    Cui, Yue; Zhang, Min; Zhan, Yueying; Wang, Danshi; Huang, Shanguo

    2016-08-01

    A scheme for the optical parallel encryption/decryption of quadrature phase shift keying (QPSK) signals is proposed, in which three QPSK signals at 10 Gb/s are encrypted and decrypted simultaneously in the optical domain through nondegenerate four-wave mixing in a highly nonlinear fiber. The results of theoretical analysis and simulations show that the scheme can protect the parallel signals against high-speed wiretapping, and that the receiver sensitivities of the encrypted and decrypted signals are -25.9 and -23.8 dBm, respectively, at the forward error correction threshold. The results are useful for designing high-speed encryption/decryption of advanced modulated signals and thus enhancing the physical-layer security of optical networks.

  8. Large scale dynamics of protoplanetary discs

    Science.gov (United States)

    Béthune, William

    2017-08-01

    Planets form in the gaseous and dusty disks orbiting young stars. These protoplanetary disks are dispersed in a few million years, being accreted onto the central star or evaporated into the interstellar medium. To explain the observed accretion rates, it is commonly assumed that matter is transported through the disk by turbulence, although the mechanism sustaining turbulence is uncertain. On the other hand, irradiation by the central star could heat up the disk surface and trigger a photoevaporative wind, but thermal effects cannot account for the observed acceleration and collimation of the wind into a narrow jet perpendicular to the disk plane. Both issues can be solved if the disk is sensitive to magnetic fields. Weak fields lead to the magnetorotational instability, whose outcome is a state of sustained turbulence. Strong fields can slow down the disk, causing it to accrete while launching a collimated wind. However, the coupling between the disk and the magnetic field is mediated by electric charges, each of which is outnumbered by several billion neutral molecules. The imperfect coupling between the magnetic field and the neutral gas is described in terms of "non-ideal" effects, introducing new dynamical behaviors. This thesis is devoted to the transport processes happening inside weakly ionized and weakly magnetized accretion disks; the role of microphysical effects on the large-scale dynamics of the disk is of primary importance. As a first step, I exclude the wind and examine the impact of non-ideal effects on the turbulent properties near the disk midplane. I show that the flow can spontaneously organize itself if the ionization fraction is low enough; in this case, accretion is halted and the disk exhibits axisymmetric structures, with possible consequences for planetary formation. As a second step, I study the launching of disk winds via a global model of a stratified disk embedded in a warm atmosphere. This model is the first to compute non-ideal effects from

  9. Large-Scale Spacecraft Fire Safety Tests

    Science.gov (United States)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  10. Decentralized Nonlinear Controller Based SiC Parallel DC-DC Converter Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal is aimed at demonstrating the feasibility of a Decentralized Control based SiC Parallel DC-DC Converter Unit (DDCU) with targeted application for...

  11. Large-scale GW software development

    Science.gov (United States)

    Kim, Minjung; Mandal, Subhasish; Mikida, Eric; Jindal, Prateek; Bohm, Eric; Jain, Nikhil; Kale, Laxmikant; Martyna, Glenn; Ismail-Beigi, Sohrab

    Electronic excitations are important in understanding and designing many functional materials. In terms of ab initio methods, the GW and Bethe-Salpeter equation (GW-BSE) methods beyond DFT have proved successful in describing excited states in many materials. However, the heavy computational loads and large memory requirements have hindered their routine applicability by the materials physics community. We summarize some of our collaborative efforts to develop a new software framework designed for GW calculations on massively parallel supercomputers. Our GW code is interfaced with the plane-wave pseudopotential ab initio molecular dynamics software ``OpenAtom'' which is based on the Charm++ parallel library. The computation of the electronic polarizability is one of the most expensive parts of any GW calculation. We describe our strategy that uses a real-space representation to avoid the large number of fast Fourier transforms (FFTs) common to most GW methods. We also describe an eigendecomposition of the plasmon modes from the resulting dielectric matrix that enhances efficiency. This work is supported by NSF through Grant ACI-1339804.

  12. Python for large-scale electrophysiology

    Directory of Open Access Journals (Sweden)

    Martin A Spacek

    2009-01-01

    Full Text Available Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analyzing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation (dimstim); one for electrophysiological waveform visualization and spike sorting (spyke); and one for spike train and stimulus analysis (neuropy). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience.

  13. Python for large-scale electrophysiology.

    Science.gov (United States)

    Spacek, Martin; Blanche, Tim; Swindale, Nicholas

    2008-01-01

    Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation ("dimstim"); one for electrophysiological waveform visualization and spike sorting ("spyke"); and one for spike train and stimulus analysis ("neuropy"). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience.

  14. A generic library for large scale solution of PDEs on modern heterogeneous architectures

    DEFF Research Database (Denmark)

    Glimberg, Stefan Lemvig; Engsig-Karup, Allan Peter

    2012-01-01

    Adapting to new programming models for modern multi- and many-core architectures requires code rewriting and changes to algorithms and data structures in order to achieve good efficiency and scalability. We present a generic library for solving large-scale partial differential equations (PDEs)... As a prototype for large-scale PDE solvers, we present the assembly of a tool for the simulation of three-dimensional fully nonlinear water waves. Measurements show scalable performance results - of the same order as a dedicated non-library version of the wave tool. Introducing a domain decomposition...

  15. Large-scale assembly of colloidal particles

    Science.gov (United States)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes the invention of large-area and low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  16. Null space imaging: nonlinear magnetic encoding fields designed complementary to receiver coil sensitivities for improved acceleration in parallel imaging.

    Science.gov (United States)

    Tam, Leo K; Stockmann, Jason P; Galiana, Gigi; Constable, R Todd

    2012-10-01

    To increase image acquisition efficiency, we develop alternative gradient encoding strategies designed to provide spatial encoding complementary to the spatial encoding provided by the multiple receiver coil elements in parallel image acquisitions. Intuitively, complementary encoding is achieved when the magnetic field encoding gradients are designed to encode spatial information where receiver spatial encoding is ambiguous, for example, along sensitivity isocontours. Specifically, the method generates a basis set for the null space of the coil sensitivities with the singular value decomposition and calculates encoding fields from the null space vectors. A set of nonlinear gradients is used as projection imaging readout magnetic fields, replacing the conventional linear readout field and phase encoding. Multiple encoding fields are used as projections to capture the null space information, hence the term null space imaging. The method is compared to conventional Cartesian SENSitivity Encoding as evaluated by mean squared error and robustness to noise. Strategies for developments in the area of nonlinear encoding schemes are discussed. The null space imaging approach yields a parallel imaging method that provides high acceleration factors with a limited number of receiver coil array elements through increased time efficiency in spatial encoding.
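
    A minimal numerical sketch of the central step, computing a null-space basis of the coil sensitivities with the SVD (random stand-in sensitivities; the paper goes on to design nonlinear encoding fields from such vectors):

        import numpy as np

        rng = np.random.default_rng(0)
        n_coils, n_voxels = 8, 256   # hypothetical array and image sizes
        C = (rng.standard_normal((n_coils, n_voxels))
             + 1j * rng.standard_normal((n_coils, n_voxels)))

        # Rows of Vh beyond the coil rank span the null space of C, i.e.
        # spatial patterns the receiver array cannot distinguish on its own.
        U, s, Vh = np.linalg.svd(C, full_matrices=True)
        rank = int(np.sum(s > s.max() * 1e-10))
        null_basis = Vh[rank:]

        # Each null-space vector suggests an encoding field that adds
        # information exactly where the coil sensitivities are ambiguous.
        print(null_basis.shape)   # (n_voxels - rank, n_voxels)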

  17. A Large Scale Virtual Gas Sensor Array

    Science.gov (United States)

    Ziyatdinov, Andrey; Fernández-Diaz, Eduard; Chaudry, A.; Marco, Santiago; Persaud, Krishna; Perera, Alexandre

    2011-09-01

    This paper depicts a virtual sensor array that allows the user to generate gas sensor synthetic data while controlling a wide variety of the characteristics of the sensor array response: arbitrary number of sensors, support for multi-component gas mixtures and full control of the noise in the system such as sensor drift or sensor aging. The artificial sensor array response is inspired on the response of 17 polymeric sensors for three analytes during 7 month. The main trends in the synthetic gas sensor array, such as sensitivity, diversity, drift and sensor noise, are user controlled. Sensor sensitivity is modeled by an optionally linear or nonlinear method (spline based). The toolbox on data generation is implemented in open source R language for statistical computing and can be freely accessed as an educational resource or benchmarking reference. The software package permits the design of scenarios with a very large number of sensors (over 10000 sensels), which are employed in the test and benchmarking of neuromorphic models in the Bio-ICT European project NEUROCHEM.

  18. Thermal power generation projects ``Large Scale Solar Heating`` (EU Thermie projects)

    Energy Technology Data Exchange (ETDEWEB)

    Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)

    1998-12-31

    The aim of this project is the preparation of the ``Large Scale Solar Heating`` programme for a Europe-wide development of the subject technology. The demonstration programme that grew out of it was judged favourably by the experts but was not immediately (1996) accepted for financial subsidies. In November 1997 the EU commission provided 1.5 million ECU, which allowed the realisation of an updated project proposal. By mid-1997 a smaller project had already been approved, which had been requested under the lead of Chalmers Industriteknik (CIT) in Sweden and mainly serves the transfer of technology.

  19. Multitree Algorithms for Large-Scale Astrostatistics

    Science.gov (United States)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

    ...this number every week, resulting in billions of objects. At such scales, even linear-time analysis operations present challenges, particularly since statistical analyses are inherently interactive processes, requiring that computations complete within some reasonable human attention span. The quadratic (or worse) runtimes of straightforward implementations become quickly unbearable. Examples of applications. These analysis subroutines occur ubiquitously in astrostatistical work. We list just a few examples. The need to cross-match objects across different catalogs has led to various algorithms, which at some point perform an AllNN computation. 2-point and higher-order spatial correlations form the basis of spatial statistics, and are utilized in astronomy to compare the spatial structures of two datasets, such as an observed sample and a theoretical sample, for example, forming the basis for two-sample hypothesis testing. Friends-of-friends clustering is often used to identify halos in data from astrophysical simulations. Minimum spanning tree properties have also been proposed as statistics of large-scale structure. Comparison of the distributions of different kinds of objects requires accurate density estimation, for which KDE is the overall statistical method of choice. The prediction of redshifts from optical data requires accurate regression, for which kernel regression is a powerful method. The identification of objects of various types in astronomy, such as stars versus galaxies, requires accurate classification, for which KDA is a powerful method. Overview. In this chapter, we will briefly sketch the main ideas behind recent fast algorithms which achieve, for example, linear runtimes for pairwise-distance problems, or similarly dramatic reductions in computational growth. In some cases, the runtime orders for these algorithms are mathematically provable statements, while in others we have only conjectures backed by experimental observations for the time being
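
    As a concrete instance of the AllNN primitive mentioned above, the following single-tree cross-match on synthetic coordinates conveys the idea; the multitree algorithms of this chapter generalize it to dual-tree traversals over both catalogs at once:

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(1)
        catalog_a = rng.uniform(0, 100, size=(100_000, 2))  # e.g. observed objects
        catalog_b = rng.uniform(0, 100, size=(100_000, 2))  # e.g. reference catalog

        # Nearest neighbor in B for every object in A, via a k-d tree.
        tree = cKDTree(catalog_b)
        dist, idx = tree.query(catalog_a, k=1)

        # Keep only matches within a (hypothetical) tolerance radius.
        matched = dist < 0.05
        print(matched.sum(), "cross-matched objects")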

  20. Nonlinear Modeling of Azimuth Error for 2D Car Navigation Using Parallel Cascade Identification Augmented with Kalman Filtering

    Directory of Open Access Journals (Sweden)

    Umar Iqbal

    2010-01-01

    Full Text Available Present land vehicle navigation relies mostly on the Global Positioning System (GPS), which may be interrupted or degraded in urban areas. In order to obtain continuous positioning services in all environments, GPS can be integrated with inertial sensors and the vehicle odometer using Kalman filtering (KF). For car navigation, low-cost positioning solutions based on MEMS-based inertial sensors are utilized. To further reduce the cost, a reduced inertial sensor system (RISS) consisting of only one gyroscope and a speed measurement (obtained from the car odometer) is integrated with GPS. The MEMS-based gyroscope measurement deteriorates over time due to different errors like bias drift. These errors may lead to large azimuth errors, and mitigating them requires robust modeling of both linear and nonlinear effects. Therefore, this paper presents a solution based on a Parallel Cascade Identification (PCI) module that models the azimuth errors and is augmented to the KF. The proposed augmented KF-PCI method can handle both linear and nonlinear system errors, as the linear parts of the errors are modeled inside the KF and the nonlinear and residual parts of the azimuth errors are modeled by PCI. The performance of this method is examined using road test experiments in a land vehicle.
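
    A toy sketch of a single linear-dynamic/static-nonlinear cascade, the building block that PCI sums in parallel to model the residual errors (the fitting choices below are illustrative, not the paper's implementation):

        import numpy as np

        def fit_cascade(u, r, lag=10, degree=3):
            # Crude correlation-based FIR estimate (linear dynamic stage),
            # followed by a polynomial fit (static nonlinear stage), for a
            # residual error signal r given an input signal u.
            h = np.array([np.dot(r[lag:], u[lag - k:len(u) - k])
                          for k in range(lag)])
            h /= np.dot(u, u)
            x = np.convolve(u, h)[:len(u)]
            coeffs = np.polyfit(x, r, degree)
            return h, coeffs

        def cascade_output(u, h, coeffs):
            x = np.convolve(u, h)[:len(u)]
            return np.polyval(coeffs, x)

        # PCI would subtract this cascade's output from r and fit further
        # cascades to the remaining residual, summing their outputs.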

  1. Linear and nonlinear excitations in two stacks of parallel arrays of long Josephson junctions

    DEFF Research Database (Denmark)

    Carapella, G.; Costabile, Giovanni; Latempa, R.

    2000-01-01

    We investigate a structure consisting of two parallel arrays of long Josephson junctions sharing a common electrode that allows inductive coupling between the arrays. A model for this structure is derived starting from the description of its continuous limit. The excitation of linear cavity modes...

  2. Large-scale bias of dark matter halos

    CERN Document Server

    Valageas, Patrick

    2010-01-01

    We build a simple analytical model for the bias of dark matter halos that applies to objects defined by an arbitrary density threshold, $200\\leq\\delta\\leq 1600$, and that provides accurate predictions from low-mass to high-mass halos. We point out that it is possible to build simple and efficient models, with no free parameter for the halo bias, by using integral constraints that govern the behavior of low-mass and typical halos, whereas the properties of rare massive halos are derived through explicit asymptotic approaches. We also describe how to take into account the impact of halo motions on their bias, using their linear displacement field. We obtain a good agreement with numerical simulations for the halo mass functions and large-scale bias at redshifts $0\\leq z \\leq 2.5$, for halos defined by nonlinear density threshold $200\\leq\\delta\\leq 1600$. We also evaluate the impact on the halo bias of two common approximations, i) neglecting halo motions, and ii) linearizing the halo two-point correlation.

  3. Impact of large scale flows on turbulent transport

    Science.gov (United States)

    Sarazin, Y.; Grandgirard, V.; Dif-Pradalier, G.; Fleurence, E.; Garbet, X.; Ghendrih, Ph; Bertrand, P.; Besse, N.; Crouseilles, N.; Sonnendrücker, E.; Latu, G.; Violard, E.

    2006-12-01

    The impact of large scale flows on turbulent transport in magnetized plasmas is explored by means of various kinetic models. Zonal flows are found to lead to a non-linear upshift of turbulent transport in a 3D kinetic model for interchange turbulence. Such a transition is absent from fluid simulations, performed with the same numerical tool, which also predict a much larger transport. The discrepancy cannot be explained by zonal flows only, despite their being overdamped in fluids. Indeed, some difference remains, although reduced, when they are artificially suppressed. Zonal flows are also reported to trigger transport barriers in a 4D drift-kinetic model for slab ion temperature gradient (ITG) turbulence. The density gradient acts as a source drive for zonal flows, while their curvature back stabilizes the turbulence. Finally, 5D simulations of toroidal ITG modes with the global and full-f GYSELA code require the equilibrium density function to depend on the motion invariants only. If not, the generated strong mean flows can completely quench turbulent transport.

  4. Impact of large scale flows on turbulent transport

    Energy Technology Data Exchange (ETDEWEB)

    Sarazin, Y [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Grandgirard, V [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Dif-Pradalier, G [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Fleurence, E [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Garbet, X [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Ghendrih, Ph [Association Euratom-CEA, CEA/DSM/DRFC centre de Cadarache, 13108 St-Paul-Lez-Durance (France); Bertrand, P [LPMIA-Universite Henri Poincare Nancy I, Boulevard des Aiguillettes BP239, 54506 Vandoeuvre-les-Nancy (France); Besse, N [LPMIA-Universite Henri Poincare Nancy I, Boulevard des Aiguillettes BP239, 54506 Vandoeuvre-les-Nancy (France); Crouseilles, N [IRMA, UMR 7501 CNRS/Universite Louis Pasteur, 7 rue Rene Descartes, 67084 Strasbourg (France); Sonnendruecker, E [IRMA, UMR 7501 CNRS/Universite Louis Pasteur, 7 rue Rene Descartes, 67084 Strasbourg (France); Latu, G [LSIIT, UMR 7005 CNRS/Universite Louis Pasteur, Bd Sebastien Brant BP10413, 67412 Illkirch (France); Violard, E [LSIIT, UMR 7005 CNRS/Universite Louis Pasteur, Bd Sebastien Brant BP10413, 67412 Illkirch (France)

    2006-12-15

    The impact of large scale flows on turbulent transport in magnetized plasmas is explored by means of various kinetic models. Zonal flows are found to lead to a non-linear upshift of turbulent transport in a 3D kinetic model for interchange turbulence. Such a transition is absent from fluid simulations, performed with the same numerical tool, which also predict a much larger transport. The discrepancy cannot be explained by zonal flows only, despite their being overdamped in fluids. Indeed, some difference remains, although reduced, when they are artificially suppressed. Zonal flows are also reported to trigger transport barriers in a 4D drift-kinetic model for slab ion temperature gradient (ITG) turbulence. The density gradient acts as a source drive for zonal flows, while their curvature back stabilizes the turbulence. Finally, 5D simulations of toroidal ITG modes with the global and full-f GYSELA code require the equilibrium density function to depend on the motion invariants only. If not, the generated strong mean flows can completely quench turbulent transport.

  5. The Cross Correlation between the Gravitational Potential and the Large Scale Matter Distribution

    CERN Document Server

    Madsen, S; Gottlöber, S; Müller, V; Madsen, Soeren; Doroshkevich, Andrei G.; Gottloeber, Stefan; Müller, Volker

    1997-01-01

    The large-scale gravitational potential distribution and its influence on large-scale matter clustering is considered on the basis of six simulations. It is found that the mean separation between zero levels of the potential along random straight lines coincides with the theoretical expectations, but with large scatter. A strong link between the initial potential and the structure evolution is shown. It is found that the under-dense and over-dense regions correlate with regions of positive and negative gravitational potential, respectively, at large redshifts. The over-dense regions arise due to a slow matter flow into the negative potential regions, where more pronounced non-linear structures appear. Such regions are related to the formation of the huge super-large scale structures seen in the galaxy distribution.

  6. Large Scale Implementations for Twitter Sentiment Classification

    Directory of Open Access Journals (Sweden)

    Andreas Kanavos

    2017-03-01

    Sentiment Analysis on Twitter data is indeed a challenging problem due to the nature, diversity and volume of the data. People tend to express their feelings freely, which makes Twitter an ideal source for accumulating a vast amount of opinions towards a wide spectrum of topics. This amount of information offers huge potential and can be harnessed to gauge the sentiment tendency towards these topics. However, since no one can invest an infinite amount of time to read through these tweets, an automated decision-making approach is necessary. Nevertheless, most existing solutions are limited to centralized environments, so they can only process at most a few thousand tweets. Such a sample is not representative of the sentiment polarity towards a topic, given the massive number of tweets published daily. In this work, we develop two systems: the first in MapReduce and the second in the Apache Spark framework for programming with Big Data. The algorithm exploits all hashtags and emoticons inside a tweet as sentiment labels, and proceeds to a classification of diverse sentiment types in a parallel and distributed manner. Moreover, the sentiment analysis tool is based on Machine Learning methodologies alongside Natural Language Processing techniques and utilizes Apache Spark's Machine Learning library, MLlib. In order to address the nature of Big Data, we introduce some pre-processing steps for achieving better results in Sentiment Analysis, as well as Bloom filters to compact the storage size of intermediate data and boost the performance of our algorithm. Finally, the proposed system was trained and validated with real data crawled from Twitter and, through an extensive experimental evaluation, we show that our solution is efficient, robust and scalable, confirming the quality of our sentiment identification.
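
    To make the pipeline described above concrete, the following minimal PySpark sketch derives weak sentiment labels from emoticons and hashtags and trains a distributed MLlib classifier. The two-tweet dataset, the weak_label rule and all parameter values are illustrative assumptions, not details taken from the paper.

        # Illustrative sketch: emoticons/hashtags as weak sentiment labels,
        # then a distributed MLlib pipeline. Data and parameters are made up.
        from pyspark.sql import SparkSession
        from pyspark.sql.functions import udf
        from pyspark.sql.types import DoubleType
        from pyspark.ml import Pipeline
        from pyspark.ml.feature import Tokenizer, HashingTF
        from pyspark.ml.classification import LogisticRegression

        spark = SparkSession.builder.appName("tweet-sentiment-sketch").getOrCreate()

        raw = spark.createDataFrame(
            [("what a great game today :) #happy",),
             ("awful traffic again :( #angry",)],
            ["text"])

        def weak_label(text):
            # hashtags and emoticons inside the tweet serve as noisy labels
            return 1.0 if (":)" in text or "#happy" in text) else 0.0

        tweets = raw.withColumn("label", udf(weak_label, DoubleType())("text"))

        pipeline = Pipeline(stages=[
            Tokenizer(inputCol="text", outputCol="words"),
            HashingTF(inputCol="words", outputCol="features", numFeatures=1 << 18),
            LogisticRegression(maxIter=10)])

        model = pipeline.fit(tweets)  # fitting runs distributed across the cluster
        model.transform(tweets).select("text", "prediction").show(truncate=False)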

  7. Numerical Simulation of Multi-phase Flow in Porous Media on Parallel Computers

    CERN Document Server

    Liu, Hui; Chen, Zhangxin; Luo, Jia; Deng, Hui; He, Yanfeng

    2016-01-01

    This paper is concerned with developing parallel computational methods for two-phase flow on distributed parallel computers; techniques for linear solvers and nonlinear methods are studied, and the standard and inexact Newton methods are investigated. A multi-stage preconditioner for two-phase flow is proposed and advanced matrix processing strategies are implemented. Numerical experiments show that these computational methods are scalable and efficient, and are capable of simulating large-scale problems with tens of millions of grid blocks using thousands of CPU cores on parallel computers. The nonlinear techniques, preconditioner and matrix processing strategies can also be applied to three-phase black oil, compositional and thermal models.
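
    The inexact Newton idea mentioned above can be illustrated with SciPy's Jacobian-free Newton-Krylov solver on a toy nonlinear system; the multi-stage reservoir preconditioner is beyond the scope of a sketch like this, and the 1D test problem below is an assumption chosen for brevity.

        # Jacobian-free (inexact) Newton-Krylov solve of a small nonlinear
        # system: Jacobian-vector products are approximated by finite
        # differences and each Newton step is solved inexactly by a Krylov method.
        import numpy as np
        from scipy.optimize import newton_krylov

        n = 100
        h = 1.0 / (n + 1)

        def residual(u):
            # Nonlinear 1D reaction-diffusion: u'' = exp(u), u(0) = u(1) = 0
            u_pad = np.concatenate(([0.0], u, [0.0]))
            return (u_pad[:-2] - 2 * u_pad[1:-1] + u_pad[2:]) / h**2 - np.exp(u)

        u = newton_krylov(residual, np.zeros(n), method="lgmres")
        print("max |residual| =", np.abs(residual(u)).max())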

  8. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    survival and recruitment estimates from the French CES scheme to assess the relative contributions of survival and recruitment to overall population changes. He develops a novel approach to modelling survival rates from such multi–site data by using within–year recaptures to provide a covariate of between–year recapture rates. This provided parsimonious models of variation in recapture probabilities between sites and years. The approach provides promising results for the four species investigated and can potentially be extended to similar data from other CES/MAPS schemes. The final paper by Blandine Doligez, David Thomson and Arie van Noordwijk (Doligez et al., 2004) illustrates how large-scale studies of population dynamics can be important for evaluating the effects of conservation measures. Their study is concerned with the reintroduction of White Stork populations to the Netherlands, where a re–introduction programme started in 1969 had resulted in a breeding population of 396 pairs by 2000. They demonstrate the need to consider a wide range of models in order to account for potential age, time, cohort and “trap–happiness” effects. As the data are based on resightings, such trap–happiness must reflect some form of heterogeneity in resighting probabilities. Perhaps surprisingly, the provision of supplementary food did not influence survival, but it may have had an indirect effect via the alteration of migratory behaviour. Spatially explicit modelling of data gathered at many sites inevitably results in starting models with very large numbers of parameters. The problem is often complicated further by having relatively sparse data at each site, even where the total amount of data gathered is very large. Both Julliard (2004) and Doligez et al. (2004) give explicit examples of problems caused by needing to handle very large numbers of parameters and show how they overcame them for their particular data sets. Such problems involve both the choice of appropriate

  9. A Decentralized Multivariable Robust Adaptive Voltage and Speed Regulator for Large-Scale Power Systems

    Science.gov (United States)

    Okou, Francis A.; Akhrif, Ouassima; Dessaint, Louis A.; Bouchard, Derrick

    2013-05-01

    This paper introduces a decentralized multivariable robust adaptive voltage and frequency regulator to ensure the stability of large-scale interconnected generators. Interconnection parameters (i.e. load, line and transformer parameters) are assumed to be unknown. The proposed design approach requires the reformulation of conventional power system models into a multivariable model with generator terminal voltages as state variables, and excitation and turbine valve inputs as control signals. This model, while suitable for the application of modern control methods, introduces problems with regard to current design techniques for large-scale systems. Interconnection terms, which are treated as perturbations, do not meet the common matching condition assumption. A new adaptive method for a certain class of large-scale systems is therefore introduced that does not require the matching condition. The proposed controller consists of nonlinear inputs that cancel some nonlinearities of the model. Auxiliary controls with linear and nonlinear components are used to stabilize the system. They compensate for unknown parameters of the model by updating both the nonlinear component gains and the excitation parameters. The adaptation algorithms involve the sigma-modification approach for the auxiliary control gains, and the projection approach for the excitation parameters to prevent estimation drift. The computation of the matrix gain of the controller's linear component requires the resolution of an algebraic Riccati equation and helps to solve the perturbation-mismatching problem. A realistic power system is used to assess the proposed controller's performance. The results show that both stability and transient performance are considerably improved following a severe contingency.
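
    As a hedged illustration of the Riccati-based gain computation mentioned in the abstract, the sketch below solves a continuous algebraic Riccati equation for a toy two-state system with SciPy; the matrices are illustrative and unrelated to the power-system model of the paper.

        # Solve a continuous algebraic Riccati equation (CARE) and form the
        # linear feedback gain. Toy matrices, not the paper's system.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # toy state matrix
        B = np.array([[0.0], [1.0]])               # toy input matrix
        Q = np.eye(2)                              # state weight
        R = np.array([[1.0]])                      # input weight

        P = solve_continuous_are(A, B, Q, R)       # A'P + PA - P B R^-1 B' P + Q = 0
        K = np.linalg.solve(R, B.T @ P)            # feedback gain, u = -K x
        print("gain K =", K)
        print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))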

  10. Large scale photovoltaic field trials. Second technical report: monitoring phase

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2007-09-15

    This report provides an update on the Large-Scale Building Integrated Photovoltaic Field Trials (LS-BIPV FT) programme commissioned by the Department of Trade and Industry (now the Department for Business, Enterprise and Regulatory Reform; BERR). It provides detailed profiles of the 12 projects making up this programme, which is part of the UK programme on photovoltaics and has run in parallel with the Domestic Field Trial. These field trials aim to record the experience and use the lessons learnt to raise awareness of, and confidence in, the technology and to increase UK capabilities. The projects involved: the visitor centre at the Gaia Energy Centre in Cornwall; a community church hall in London; council offices in West Oxfordshire; a sports science centre at Gloucester University; the visitor centre at Cotswold Water Park; the headquarters of the Insolvency Service; a Welsh Development Agency building; an athletics centre in Birmingham; a research facility at the University of East Anglia; a primary school in Belfast; and Barnstaple civic centre in Devon. The report describes the aims of the field trials, monitoring issues, performance, observations and trends, lessons learnt and the results of occupancy surveys.

  11. PetClaw: A scalable parallel nonlinear wave propagation solver for Python

    KAUST Repository

    Alghamdi, Amal

    2011-01-01

    We present PetClaw, a scalable distributed-memory solver for time-dependent nonlinear wave propagation. PetClaw unifies two well-known scientific computing packages, Clawpack and PETSc, using Python interfaces into both. We rely on Clawpack to provide the infrastructure and kernels for time-dependent nonlinear wave propagation. Similarly, we rely on PETSc to manage distributed data arrays and the communication between them. We describe both the implementation and performance of PetClaw as well as our challenges and accomplishments in scaling a Python-based code to tens of thousands of cores on the BlueGene/P architecture. The capabilities of PetClaw are demonstrated through application to a novel problem involving elastic waves in a heterogeneous medium. Very finely resolved simulations are used to demonstrate the suppression of shock formation in this system.
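
    A sketch of the serial PyClaw interface that PetClaw parallelizes may help; assuming a Clawpack installation, the script below sets up a simple 1D acoustics problem, and under the PetClaw design described above essentially the same script runs in parallel with the PETSc-backed module swapped in. The problem setup is illustrative, not the elastic-wave application of the paper.

        # Minimal PyClaw run (1D acoustics); PetClaw distributes the same
        # setup via PETSc. Grid size and pulse are illustrative.
        import numpy as np
        from clawpack import pyclaw, riemann

        solver = pyclaw.ClawSolver1D(riemann.acoustics_1D)   # Fortran kernel
        solver.bc_lower[0] = solver.bc_upper[0] = pyclaw.BC.periodic

        x = pyclaw.Dimension(0.0, 1.0, 200, name='x')
        domain = pyclaw.Domain(x)
        state = pyclaw.State(domain, solver.num_eqn)
        rho, bulk = 1.0, 1.0
        state.problem_data['rho'] = rho
        state.problem_data['bulk'] = bulk
        state.problem_data['zz'] = np.sqrt(rho * bulk)   # impedance
        state.problem_data['cc'] = np.sqrt(bulk / rho)   # sound speed

        xc = state.grid.x.centers
        state.q[0, :] = np.exp(-100 * (xc - 0.5) ** 2)   # pressure pulse
        state.q[1, :] = 0.0                              # velocity

        claw = pyclaw.Controller()
        claw.solution = pyclaw.Solution(state, domain)
        claw.solver = solver
        claw.tfinal = 1.0
        claw.run()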

  12. BFAST: an alignment tool for large scale genome resequencing.

    Directory of Open Access Journals (Sweden)

    Nils Homer

    BACKGROUND: The new generation of massively parallel DNA sequencers, combined with the challenge of whole human genome resequencing, results in the need for rapid and accurate alignment of billions of short DNA sequence reads to a large reference genome. Speed is obviously of great importance, but equally important is maintaining alignment accuracy of short reads, in the 25-100 base range, in the presence of errors and true biological variation. METHODOLOGY: We introduce a new algorithm specifically optimized for this task, as well as a freely available implementation, BFAST, which can align data produced by any of the current sequencing platforms, allows for user-customizable levels of speed and accuracy, supports paired end data, and provides for efficient parallel and multi-threaded computation on a computer cluster. The new method is based on creating flexible, efficient whole genome indexes to rapidly map reads to candidate alignment locations, with arbitrary multiple independent indexes allowed to achieve robustness against read errors and sequence variants. The final local alignment uses a Smith-Waterman method, with gaps to support the detection of small indels. CONCLUSIONS: We compare BFAST to a selection of large-scale alignment tools -- BLAT, MAQ, SHRiMP, and SOAP -- in terms of both speed and accuracy, using simulated and real-world datasets. We show BFAST can achieve substantially greater sensitivity of alignment in the context of errors and true variants, especially insertions and deletions, and minimize false mappings, while maintaining adequate speed compared to other current methods. We show BFAST can align the amount of data needed to fully resequence a human genome, one billion reads, with high sensitivity and accuracy, on a modest computer cluster in less than 24 hours. BFAST is available at (http://bfast.sourceforge.net.
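
    The final scoring step named above is classical Smith-Waterman local alignment; a minimal scoring-only sketch (illustrative match/mismatch/gap penalties, no traceback) follows.

        # Smith-Waterman local alignment score with linear gap penalties.
        # Scoring parameters are illustrative, not BFAST's defaults.
        def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
            rows, cols = len(a) + 1, len(b) + 1
            H = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                    # local alignment: scores are floored at zero
                    H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
                    best = max(best, H[i][j])
            return best

        print(smith_waterman("ACACACTA", "AGCACACA"))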

  13. Development of large-scale structure in the Universe

    CERN Document Server

    Ostriker, J P

    1991-01-01

    This volume grew out of the 1988 Fermi lectures given by Professor Ostriker, and is concerned with cosmological models that take into account the large scale structure of the universe. He starts with homogeneous isotropic models of the universe and then, by considering perturbations, he leads us to modern cosmological theories of the large scale, such as superconducting strings. This will be an excellent companion for all those interested in the cosmology and the large scale nature of the universe.

  14. Non-parametric co-clustering of large scale sparse bipartite networks on the GPU

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Mørup, Morten; Hansen, Lars Kai

    2011-01-01

    Co-clustering is a problem of both theoretical and practical importance, e.g., in market basket analysis, collaborative filtering, and web scale text processing. We state the co-clustering problem in terms of non-parametric generative models which can address the issue of estimating the number of row and column clusters from a hypothesis space of an infinite number of clusters. To reach large scale applications of co-clustering we exploit that parameter inference for co-clustering is well suited for parallel computing. We develop a generic GPU framework for efficient inference on large scale … real-life large scale collaborative filtering data and web scale text corpora, demonstrating that latent mesoscale structures extracted by the co-clustering problem as formulated by the Infinite Relational Model (IRM) are consistent across consecutive runs with different initializations and also relevant

  15. Design of Large-Scale Sensory Data Processing System Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Bing Tang

    2012-04-01

    In large-scale Wireless Sensor Networks (WSNs), where sensor nodes have limited computing power and storage capacity, there is an urgent demand for high-performance sensory data processing. This study investigates the interconnection of wireless sensor networks with cloud-based storage and computing infrastructure. It proposes distributed databases for storing sensory data and the MapReduce programming model for large-scale parallel processing of sensory data. In our prototype of a large-scale sensory data processing system, the Hadoop Distributed File System (HDFS) and HBase are used for sensory data storage, and Hadoop MapReduce is used as the execution framework for data processing applications. The design and implementation of this system are described in detail. A simulated environmental temperature surveillance application is used to verify the feasibility and soundness of the system, and shows that it significantly improves the data processing capability of WSNs.
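
    A minimal sketch of the MapReduce pattern for the temperature-surveillance example may be useful; the record layout (sensor_id,timestamp,temperature) and the per-sensor maximum are illustrative assumptions. In a real Hadoop Streaming job, the mapper and reducer would be separate scripts reading stdin, wired in through the streaming jar's -mapper and -reducer options, with Hadoop performing the sort/shuffle between them.

        # Self-contained simulation of the MapReduce flow for sensor readings.
        records = [
            "s1,2012-04-01T10:00,21.5",
            "s2,2012-04-01T10:00,19.0",
            "s1,2012-04-01T10:05,23.1",
        ]

        def mapper(lines):
            for line in lines:                   # record: sensor_id,time,temp
                sensor_id, _, temp = line.split(",")
                yield sensor_id, float(temp)

        def reducer(sorted_pairs):
            # assumes input sorted by key, as Hadoop guarantees after shuffle
            current, peak = None, None
            for key, value in sorted_pairs:
                if key != current:
                    if current is not None:
                        yield current, peak
                    current, peak = key, value
                peak = max(peak, value)
            if current is not None:
                yield current, peak

        shuffled = sorted(mapper(records))       # stand-in for Hadoop's shuffle
        print(dict(reducer(shuffled)))           # {'s1': 23.1, 's2': 19.0}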

  16. Solving large nonlinear generalized eigenvalue problems from Density Functional Theory calculations in parallel

    DEFF Research Database (Denmark)

    Bendtsen, Claus; Nielsen, Ole Holm; Hansen, Lars Bruno

    2001-01-01

    propose a new algorithm of this kind which is well suited for the SCF method, since the accuracy of the eigensolution is gradually improved along with the outer SCF iterations. Best efficiency is obtained for small-block-size iterations, and the algorithm is highly memory efficient. The implementation works well on both serial and parallel computers, and good scalability of the algorithm is obtained.
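
    The flavor of such block iterative eigensolvers can be sketched with SciPy's LOBPCG on a synthetic generalized symmetric eigenproblem; this is a generic stand-in, not the authors' algorithm, and the matrices below are illustrative.

        # Block iterative solve of A x = lambda B x for the lowest eigenpairs,
        # refining a small block of trial vectors as an SCF loop might reuse them.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import lobpcg

        n = 400
        A = sp.diags([np.arange(1, n + 1, dtype=float)], [0]).tocsr()  # "Hamiltonian"
        A = A + sp.random(n, n, density=0.01, random_state=0)
        A = (A + A.T) / 2                                   # keep it symmetric
        B = sp.identity(n, format="csr")                    # overlap matrix

        rng = np.random.default_rng(0)
        X = rng.normal(size=(n, 5))                         # block of 5 trial vectors
        vals, vecs = lobpcg(A, X, B=B, largest=False, tol=1e-6, maxiter=200)
        print("lowest eigenvalues:", np.sort(vals))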

  17. A Combined MPI-CUDA Parallel Solution of Linear and Nonlinear Poisson-Boltzmann Equation

    Science.gov (United States)

    Colmenares, José; Galizia, Antonella; Ortiz, Jesús; Clematis, Andrea; Rocchia, Walter

    2014-01-01

    The Poisson-Boltzmann equation models the electrostatic potential generated by fixed charges on a polarizable solute immersed in an ionic solution. This approach is often used in computational structural biology to estimate the electrostatic energetic component of the assembly of molecular biological systems. In the last decades, the amount of data concerning proteins and other biological macromolecules has remarkably increased. To fruitfully exploit these data, a huge computational power is needed as well as software tools capable of exploiting it. It is therefore necessary to move towards high performance computing and to develop proper parallel implementations of already existing and of novel algorithms. Nowadays, workstations can provide an amazing computational power: up to 10 TFLOPS on a single machine equipped with multiple CPUs and accelerators such as Intel Xeon Phi or GPU devices. The actual obstacle to the full exploitation of modern heterogeneous resources is efficient parallel coding and porting of software on such architectures. In this paper, we propose the implementation of a full Poisson-Boltzmann solver based on a finite-difference scheme using different and combined parallel schemes and in particular a mixed MPI-CUDA implementation. Results show great speedups when using the two schemes, achieving an 18.9x speedup using three GPUs. PMID:25013789

  18. A combined MPI-CUDA parallel solution of linear and nonlinear Poisson-Boltzmann equation.

    Science.gov (United States)

    Colmenares, José; Galizia, Antonella; Ortiz, Jesús; Clematis, Andrea; Rocchia, Walter

    2014-01-01

    The Poisson-Boltzmann equation models the electrostatic potential generated by fixed charges on a polarizable solute immersed in an ionic solution. This approach is often used in computational structural biology to estimate the electrostatic energetic component of the assembly of molecular biological systems. In the last decades, the amount of data concerning proteins and other biological macromolecules has remarkably increased. To fruitfully exploit these data, a huge computational power is needed as well as software tools capable of exploiting it. It is therefore necessary to move towards high performance computing and to develop proper parallel implementations of already existing and of novel algorithms. Nowadays, workstations can provide an amazing computational power: up to 10 TFLOPS on a single machine equipped with multiple CPUs and accelerators such as Intel Xeon Phi or GPU devices. The actual obstacle to the full exploitation of modern heterogeneous resources is efficient parallel coding and porting of software on such architectures. In this paper, we propose the implementation of a full Poisson-Boltzmann solver based on a finite-difference scheme using different and combined parallel schemes and in particular a mixed MPI-CUDA implementation. Results show great speedups when using the two schemes, achieving an 18.9x speedup using three GPUs.

  19. A Combined MPI-CUDA Parallel Solution of Linear and Nonlinear Poisson-Boltzmann Equation

    Directory of Open Access Journals (Sweden)

    José Colmenares

    2014-01-01

    Full Text Available The Poisson-Boltzmann equation models the electrostatic potential generated by fixed charges on a polarizable solute immersed in an ionic solution. This approach is often used in computational structural biology to estimate the electrostatic energetic component of the assembly of molecular biological systems. In the last decades, the amount of data concerning proteins and other biological macromolecules has remarkably increased. To fruitfully exploit these data, a huge computational power is needed as well as software tools capable of exploiting it. It is therefore necessary to move towards high performance computing and to develop proper parallel implementations of already existing and of novel algorithms. Nowadays, workstations can provide an amazing computational power: up to 10 TFLOPS on a single machine equipped with multiple CPUs and accelerators such as Intel Xeon Phi or GPU devices. The actual obstacle to the full exploitation of modern heterogeneous resources is efficient parallel coding and porting of software on such architectures. In this paper, we propose the implementation of a full Poisson-Boltzmann solver based on a finite-difference scheme using different and combined parallel schemes and in particular a mixed MPI-CUDA implementation. Results show great speedups when using the two schemes, achieving an 18.9x speedup using three GPUs.
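
    A serial sketch of the finite-difference scheme the abstracts above parallelize with MPI and CUDA: Jacobi relaxation of the linearized Poisson-Boltzmann equation, laplacian(phi) = kappa^2 phi - rho/eps, on a uniform 2D grid. Grid size, units and the point-charge source are illustrative assumptions.

        # Jacobi relaxation of the linearized Poisson-Boltzmann equation.
        import numpy as np

        n, h, kappa, eps = 128, 0.1, 1.0, 1.0
        rho = np.zeros((n, n))
        rho[n // 2, n // 2] = 1.0 / h**2          # point charge at the center
        phi = np.zeros((n, n))

        for _ in range(2000):
            # 5-point stencil; (4 + (kappa*h)^2) on the diagonal from screening
            interior = (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                        phi[1:-1, :-2] + phi[1:-1, 2:] +
                        h**2 * rho[1:-1, 1:-1] / eps)
            phi[1:-1, 1:-1] = interior / (4.0 + (kappa * h) ** 2)

        print("potential at charge:", phi[n // 2, n // 2])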

  20. Large Scale, High Resolution, Mantle Dynamics Modeling

    Science.gov (United States)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

    spherical models. We also applied the above-mentioned method to a high resolution (~1 km) 2D mantle convection model with temperature, pressure and phase dependent rheology including several phase transitions. We focus on a model of a subducting lithospheric slab which is subject to strong folding at the bottom of the mantle's D" region, which includes the postperovskite phase boundary. For a detailed description of this model we refer to the poster [Mantle convection models of the D" region, U17]. [Saad, 2003] Saad, Y. (2003). Iterative Methods for Sparse Linear Systems. [Sala, 2006] Sala, M. (2006). An Object-Oriented Framework for the Development of Scalable Parallel Multilevel Preconditioners. ACM Transactions on Mathematical Software, 32(3). [Patankar, 1980] Patankar, S. V. (1980). Numerical Heat Transfer and Fluid Flow, Hemisphere, Washington.

  1. On the self-sustained nature of large-scale motions in turbulent Couette flow

    CERN Document Server

    Rawat, Subhandu; Hwang, Yongyun; Rincon, François

    2015-01-01

    Large-scale motions in wall-bounded turbulent flows are frequently interpreted as resulting from an aggregation process of smaller-scale structures. Here, we explore the alternative possibility that such large-scale motions are themselves self-sustained and do not draw their energy from smaller-scale turbulent motions activated in buffer layers. To this end, it is first shown that large-scale motions in turbulent Couette flow at Re=2150 self-sustain even when active processes at smaller scales are artificially quenched by increasing the Smagorinsky constant Cs in large eddy simulations. These results are in agreement with earlier results on pressure-driven turbulent channels. We further investigate the nature of the large-scale coherent motions by computing upper- and lower-branch nonlinear steady solutions of the filtered (LES) equations with a Newton-Krylov solver, and find that they are connected by a saddle-node bifurcation at large values of Cs. Upper branch solutions for the filtered large scale motions a...

  2. Alignment between galaxies and large-scale structure

    Institute of Scientific and Technical Information of China (English)

    A. Faltenbacher; Cheng Li; Simon D. M. White; Yi-Peng Jing; Shu-De Mao; Jie Wang

    2009-01-01

    Based on the Sloan Digital Sky Survey DR6 (SDSS) and the Millennium Simulation (MS), we investigate the alignment between galaxies and large-scale structure. For this purpose, we develop two new statistical tools, namely the alignment correlation function and the cos(2θ)-statistic. The former is a two-dimensional extension of the traditional two-point correlation function and the latter is related to the ellipticity correlation function used for cosmic shear measurements. Both are based on the cross correlation between a sample of galaxies with orientations and a reference sample which represents the large-scale structure. We apply the new statistics to the SDSS galaxy catalog. The alignment correlation function reveals an overabundance of reference galaxies along the major axes of red, luminous (L ≳ L*) galaxies out to projected separations of 60 h⁻¹ Mpc. The signal increases with central galaxy luminosity. No alignment signal is detected for blue galaxies. The cos(2θ)-statistic yields very similar results. Starting from a MS semi-analytic galaxy catalog, we assign an orientation to each red, luminous and central galaxy, based on that of the central region of the host halo (with size similar to that of the stellar galaxy). As an alternative, we use the orientation of the host halo itself. We find a mean projected misalignment between a halo and its central region of ~25°. The misalignment decreases slightly with increasing luminosity of the central galaxy. Using the orientations and luminosities of the semi-analytic galaxies, we repeat our alignment analysis on mock surveys of the MS. Agreement with the SDSS results is good if the central orientations are used. Predictions using the halo orientations as proxies for central galaxy orientations overestimate the observed alignment by more than a factor of 2. Finally, the large volume of the MS allows us to generate a two-dimensional map of the alignment correlation function, which shows the reference galaxy

  3. Remarks on nonlinear relation among phases and frequencies in modulational instabilities of parallel propagating Alfven waves

    CERN Document Server

    Nariyuki, Y; Nariyuki, Yasuhiro; Hada, Tohru

    2006-01-01

    Nonlinear relations among frequencies and phases in the modulational instability of circularly polarized Alfven waves are discussed, within the context of a one-dimensional, dissipationless, unforced fluid system. We show that the generation of phase coherence is a natural consequence of the modulational instability of Alfven waves. Furthermore, we quantitatively evaluate the intensity of wave-wave interaction by using bi-coherence, and also by computing the energy flow among wave modes, and demonstrate that the energy flow is directly related to the generation of phase coherence.

  4. A Topology Visualization Early Warning Distribution Algorithm for Large-Scale Network Security Incidents

    Directory of Open Access Journals (Sweden)

    Hui He

    2013-01-01

    It is of great significance to research the early warning system for large-scale network security incidents. It can improve the network system’s emergency response capabilities, alleviate the damage of cyber attacks, and strengthen the system’s counterattack ability. A comprehensive early warning system, which combines active measurement and anomaly detection, is presented in this paper. The key visualization algorithm and technology of the system are mainly discussed. The large-scale network system’s plane visualization is realized based on the divide-and-conquer approach. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, the subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks’ topologies are combined into a single topology by an automatic distribution algorithm based on force analysis. As the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.
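
    The divide-and-conquer layout can be sketched with off-the-shelf tools; below, NetworkX's Kernighan-Lin bisection stands in for the MLkP/CR partitioner and a force-directed spring layout stands in for the force-analysis distribution algorithm, so the code illustrates the structure of the method rather than the paper's own algorithms.

        # Divide-and-conquer topology layout: partition, lay out each part,
        # then shift the parts into a combined picture.
        import networkx as nx
        from networkx.algorithms.community import kernighan_lin_bisection

        G = nx.random_internet_as_graph(200)       # stand-in measured topology
        left, right = kernighan_lin_bisection(G)   # divide step

        layout = {}
        for offset, part in enumerate((left, right)):
            sub = G.subgraph(part)
            pos = nx.spring_layout(sub, seed=1)    # conquer: force-directed layout
            for node, (x, y) in pos.items():
                layout[node] = (x + 2.5 * offset, y)  # combine: shift each block

        print(len(layout), "nodes placed")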

  5. A topology visualization early warning distribution algorithm for large-scale network security incidents.

    Science.gov (United States)

    He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

    It is of great significance to research the early warning system for large-scale network security incidents. It can improve the network system's emergency response capabilities, alleviate the damage of cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system, which combines active measurement and anomaly detection, is presented in this paper. The key visualization algorithm and technology of the system are mainly discussed. The large-scale network system's plane visualization is realized based on the divide-and-conquer approach. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, the subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into a single topology by an automatic distribution algorithm based on force analysis. As the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.

  6. Towards a Gravity Dual for the Large Scale Structure of the Universe

    DEFF Research Database (Denmark)

    Kehagias, A.; Riotto, Antonio; Sloth, M. S.

    2016-01-01

    The dynamics of the large-scale structure of the universe enjoys at all scales, even in the highly non-linear regime, a Lifshitz symmetry during the matter-dominated period. In this paper we propose a general class of six-dimensional spacetimes which could be a gravity dual to the four-dimensional large-scale structure of the universe. In this set-up, the Lifshitz symmetry manifests itself as an isometry in the bulk and our universe is a four-dimensional brane moving in such a six-dimensional bulk. After finding the correspondence between the bulk and the brane dynamical Lifshitz exponents, we find the intriguing result that the preferred value of the dynamical Lifshitz exponent of our observed universe, at both linear and non-linear scales, corresponds to a fixed point of the RGE flow of the dynamical Lifshitz exponent in the dual system where the symmetry is enhanced to the Schrodinger group containing...

  7. Identical parallel machine scheduling with nonlinear deterioration and multiple rate modifying activities

    Directory of Open Access Journals (Sweden)

    Ömer Öztürkoğlu

    2017-07-01

    This study focuses on identical parallel machine scheduling of jobs with deteriorating processing times and rate-modifying activities (RMAs). We consider nonlinearly increasing processing times of jobs based on their position assignment. Rate-modifying activities are also considered, to recover the increase in processing times of jobs due to deterioration. We propose heuristic algorithms that rely on ant colony optimization and simulated annealing to solve the problem with multiple RMAs in a reasonable amount of time. Finally, we show that the ant colony optimization algorithm generates near-optimal solutions, superior to those of the simulated annealing algorithm.
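
    A hedged sketch of the simulated-annealing half of the approach follows; the positional deterioration law p_j * k^a, the greedy machine assignment and all parameters are illustrative assumptions, and rate-modifying activities are omitted for brevity.

        # Simulated annealing over job orderings for identical parallel machines
        # with position-dependent (nonlinearly deteriorating) processing times.
        import math, random

        random.seed(0)
        jobs = [random.uniform(1, 10) for _ in range(20)]   # base processing times
        machines, a = 3, 0.3                                # a: deterioration exponent

        def makespan(order):
            # Greedy assignment to the earliest-available machine; the k-th job
            # on a machine is inflated by k**a (nonlinear deterioration).
            loads, positions = [0.0] * machines, [0] * machines
            for j in order:
                m = loads.index(min(loads))
                positions[m] += 1
                loads[m] += jobs[j] * positions[m] ** a
            return max(loads)

        cur = list(range(len(jobs)))
        best, best_val, T = cur[:], makespan(cur), 5.0
        while T > 1e-3:
            i, k = random.sample(range(len(cur)), 2)
            cand = cur[:]
            cand[i], cand[k] = cand[k], cand[i]              # swap move
            delta = makespan(cand) - makespan(cur)
            if delta < 0 or random.random() < math.exp(-delta / T):
                cur = cand
                if makespan(cur) < best_val:
                    best, best_val = cur[:], makespan(cur)
            T *= 0.995                                       # geometric cooling
        print("best makespan:", round(best_val, 3))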

  8. Robust nonlinear PID-like fuzzy logic control of a planar parallel (2PRP-PPR) manipulator.

    Science.gov (United States)

    Londhe, P S; Singh, Yogesh; Santhakumar, M; Patre, B M; Waghmare, L M

    2016-07-01

    In this paper, a robust nonlinear proportional-integral-derivative (PID)-like fuzzy control scheme is presented and applied to complex trajectory tracking control of a 2PRP-PPR (P-prismatic, R-revolute) planar parallel manipulator (motion platform) with three degrees of freedom (DOF) in the presence of parameter uncertainties and external disturbances. The proposed control law consists of two main parts: the first part uses a feed-forward term to enhance the control activity and an estimated perturbation term to compensate for unknown effects, namely external disturbances and unmodeled dynamics; the second part uses a PID-like fuzzy logic controller as the feedback portion to enhance the overall closed-loop stability of the system. Experimental results are presented to show the effectiveness of the proposed control scheme.
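
    The two-part structure of the control law (feed-forward plus stabilizing feedback) can be sketched in a few lines; the sketch below replaces the fuzzy gain logic with fixed PID gains and runs a toy first-order plant, so the gains and plant model are assumptions, not the paper's design.

        # Feed-forward term plus PID-like feedback on a toy plant y' = -y + u.
        kp, ki, kd, dt = 4.0, 1.5, 0.2, 0.01
        integral, prev_err, y = 0.0, 0.0, 0.0

        for step in range(500):
            setpoint = 1.0
            err = setpoint - y
            integral += err * dt
            deriv = (err - prev_err) / dt
            prev_err = err
            u_ff = setpoint                                  # feed-forward term
            u_fb = kp * err + ki * integral + kd * deriv     # PID-like feedback
            y += dt * (-y + u_ff + u_fb)                     # Euler step of the plant
        print("tracking error after 5 s:", abs(1.0 - y))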

  9. Safeguards instruments for Large-Scale Reprocessing Plants

    Energy Technology Data Exchange (ETDEWEB)

    Hakkila, E.A. [Los Alamos National Lab., NM (United States); Case, R.S.; Sonnier, C. [Sandia National Labs., Albuquerque, NM (United States)

    1993-06-01

    Between 1987 and 1992 a multi-national forum known as LASCAR (Large Scale Reprocessing Plant Safeguards) met to assist the IAEA in development of effective and efficient safeguards for large-scale reprocessing plants. The US provided considerable input for safeguards approaches and instrumentation. This paper reviews and updates instrumentation of importance in measuring plutonium and uranium in these facilities.

  10. Electrodialysis system for large-scale enantiomer separation

    NARCIS (Netherlands)

    Ent, van der E.M.; Thielen, T.P.H.; Cohen Stuart, M.A.; Padt, van der A.; Keurentjes, J.T.F.

    2001-01-01

    In contrast to analytical methods, the range of technologies currently applied for large-scale enantiomer separations is not very extensive. Therefore, a new system has been developed for large-scale enantiomer separations that can be regarded as the scale-up of a capillary electrophoresis system.

  11. Electrodialysis system for large-scale enantiomer separation

    NARCIS (Netherlands)

    Ent, van der E.M.; Thielen, T.P.H.; Cohen Stuart, M.A.; Padt, van der A.; Keurentjes, J.T.F.

    2001-01-01

    In contrast to analytical methods, the range of technologies currently applied for large-scale enantiomer separations is not very extensive. Therefore, a new system has been developed for large-scale enantiomer separations that can be regarded as the scale-up of a capillary electrophoresis system.

  12. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

    In a future power systems with additional wind power capacity there will be an increased need for large scale power management as well as reliable balancing and reserve capabilities. Different technologies for large scale electricity storage provide solutions to the different challenges arising w...

  13. Large-scale simulations of layered double hydroxide nanocomposite materials

    Science.gov (United States)

    Thyveetil, Mary-Ann

    Layered double hydroxides (LDHs) have the ability to intercalate a multitude of anionic species. Atomistic simulation techniques such as molecular dynamics have provided considerable insight into the behaviour of these materials. We review these techniques and recent algorithmic advances which considerably improve the performance of MD applications. In particular, we discuss how the advent of high performance computing and computational grids has allowed us to explore large scale models with considerable ease. Our simulations have been heavily reliant on computational resources on the UK's NGS (National Grid Service), the US TeraGrid and the Distributed European Infrastructure for Supercomputing Applications (DEISA). In order to utilise computational grids we rely on grid middleware to launch, computationally steer and visualise our simulations. We have integrated the RealityGrid steering library into the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), which has enabled us to perform remote computational steering and visualisation of molecular dynamics simulations on grid infrastructures. We also use the Application Hosting Environment (AHE) in order to launch simulations on remote supercomputing resources, and we show that data transfer rates between local clusters and supercomputing resources can be considerably enhanced by using optically switched networks. We perform large scale molecular dynamics simulations of Mg2Al-LDHs intercalated with either chloride ions or a mixture of DNA and chloride ions. The systems exhibit undulatory modes, which are suppressed in smaller scale simulations, caused by the collective thermal motion of atoms in the LDH layers. Thermal undulations provide elastic properties of the system including the bending modulus, Young's moduli and Poisson's ratios. To explore the interaction between LDHs and DNA, we use molecular dynamics techniques to perform simulations of double stranded, linear and plasmid DNA up

  14. Kinetic Alfvén Wave Generation by Large-scale Phase Mixing

    Science.gov (United States)

    Vásconez, C. L.; Pucci, F.; Valentini, F.; Servidio, S.; Matthaeus, W. H.; Malara, F.

    2015-12-01

    One view of solar wind turbulence is that the observed highly anisotropic fluctuations at spatial scales near the proton inertial length dp may be considered as kinetic Alfvén waves (KAWs). In the present paper, we show how phase mixing of large-scale parallel-propagating Alfvén waves is an efficient mechanism for the production of KAWs at wavelengths close to dp and at a large propagation angle with respect to the magnetic field. Magnetohydrodynamic (MHD), Hall magnetohydrodynamic (HMHD), and hybrid Vlasov-Maxwell (HVM) simulations modeling the propagation of Alfvén waves in inhomogeneous plasmas are performed. In the linear regime, the role of dispersive effects is singled out by comparing MHD and HMHD results. Fluctuations produced by phase mixing are identified as KAWs through a comparison of the polarization of magnetic fluctuations and wave-group velocity with analytical linear predictions. In the nonlinear regime, a comparison of HMHD and HVM simulations allows us to point out the role of kinetic effects in shaping the proton-distribution function. We observe the generation of temperature anisotropy with respect to the local magnetic field and the production of field-aligned beams. The regions where the proton-distribution function departs strongly from thermal equilibrium are located inside the shear layers, where the KAWs are excited, suggesting that the distortions of the proton distribution are driven by a resonant interaction of protons with KAW fluctuations. Our results are relevant in configurations where magnetic-field inhomogeneities are present, as, for example, in the solar corona, where the presence of Alfvén waves has been ascertained.

  15. KINETIC ALFVÉN WAVE GENERATION BY LARGE-SCALE PHASE MIXING

    Energy Technology Data Exchange (ETDEWEB)

    Vásconez, C. L.; Pucci, F.; Valentini, F.; Servidio, S.; Malara, F. [Dipartimento di Fisica, Università della Calabria, I-87036, Rende (CS) (Italy); Matthaeus, W. H. [Department of Physics and Astronomy, University of Delaware, DE 19716 (United States)

    2015-12-10

    One view of solar wind turbulence is that the observed highly anisotropic fluctuations at spatial scales near the proton inertial length dp may be considered as kinetic Alfvén waves (KAWs). In the present paper, we show how phase mixing of large-scale parallel-propagating Alfvén waves is an efficient mechanism for the production of KAWs at wavelengths close to dp and at a large propagation angle with respect to the magnetic field. Magnetohydrodynamic (MHD), Hall magnetohydrodynamic (HMHD), and hybrid Vlasov–Maxwell (HVM) simulations modeling the propagation of Alfvén waves in inhomogeneous plasmas are performed. In the linear regime, the role of dispersive effects is singled out by comparing MHD and HMHD results. Fluctuations produced by phase mixing are identified as KAWs through a comparison of the polarization of magnetic fluctuations and wave-group velocity with analytical linear predictions. In the nonlinear regime, a comparison of HMHD and HVM simulations allows us to point out the role of kinetic effects in shaping the proton-distribution function. We observe the generation of temperature anisotropy with respect to the local magnetic field and the production of field-aligned beams. The regions where the proton-distribution function departs strongly from thermal equilibrium are located inside the shear layers, where the KAWs are excited, suggesting that the distortions of the proton distribution are driven by a resonant interaction of protons with KAW fluctuations. Our results are relevant in configurations where magnetic-field inhomogeneities are present, as, for example, in the solar corona, where the presence of Alfvén waves has been ascertained.

  16. Sheared stably stratified turbulence and large-scale waves in a lid driven cavity

    CERN Document Server

    Cohen, N; Elperin, T; Kleeorin, N; Rogachevskii, I

    2014-01-01

    We experimentally investigated stably stratified turbulent flows in a lid-driven cavity with a non-zero vertical mean temperature gradient, in order to identify the parameters governing the mean and turbulent flows and to understand their effects on momentum and heat transfer. We found that the mean velocity patterns (e.g., the form and the sizes of the large-scale circulations) depend strongly on the degree of temperature stratification. In the case of strong stable stratification, the strong turbulence region is located in the vicinity of the main large-scale circulation. We detected large-scale nonlinear oscillations in the case of strong stable stratification, which can be interpreted as nonlinear internal gravity waves. The ratio of the main energy-containing frequencies of these waves in the velocity and temperature fields in the nonlinear stage is about 2. The amplitude of the waves increases in the region of weak turbulence (near the bottom wall of the cavity), whereby the vertical mean temperat...

  17. A nonlinear analytical model for the squeeze-film dynamics of parallel plates subjected to axial flow

    Energy Technology Data Exchange (ETDEWEB)

    Piteau, Ph. [CEA Saclay, DEN, DM2S, SEMT, DYN, CEA, Lab Etud Dynam, F-91191 Gif Sur Yvette (France); Antunes, J. [ITN, ADL, P-2686 Sacavem Codex (Portugal)

    2010-07-01

    In this paper, we develop a theoretical model to predict the nonlinear fluid-structure interaction forces and the dynamics of parallel vibrating plates subjected to an axial gap flow. The gap is assumed small, when compared to the plate dimensions, the plate width being much larger than the length, so that the simplifying assumptions of 1D bulk-flow models are adequate. We thus develop a simplified theoretical squeeze-film formulation, which includes both the distributed and singular dissipative flow terms. This model is suitable for performing effective time-domain numerical simulations of vibrating systems which are coupled by the nonlinear unsteady flow forces, for instance the vibro-impact dynamics of plates with fluid gap interfaces. A linearized version of the flow model is also presented and discussed, which is appropriate for studying the complex modes and linear stability of flow/structure coupled systems as a function of the average axial gap velocity. Two applications of our formulation are presented: (1) first we study how an axial flow modifies the rigid-body motion of immersed plates falling under gravity; (2) then we compute the dynamical behavior of an immersed oscillating plate as a function of the axial gap flow velocity. Linear stability plots of oscillating plates are shown, as a function of the average fluid gap and of the axial flow velocity, for various scenarios of the loss terms. These results highlight the conditions leading to either the divergence or flutter instabilities. Numerical simulations of the nonlinear flow/structure dynamical responses are also presented, for both stable and unstable regimes. This work is of interest to a large body of real-life problems, for instance the dynamics of nuclear spent fuel racks immersed in a pool when subjected to seismic excitations, or the self-excited vibro-impact motions of valve-like components under axial flows. (authors)

  18. Forced convection heat transfer of Couette-Poiseuille flow of nonlinear viscoelastic fluids between parallel plates

    Energy Technology Data Exchange (ETDEWEB)

    Hashemabadi, S.H. [Iran Univ. of Science and Technology, Dept. of Chemical Engineering, Tehran (Iran); Etemad, S.Gh. [Isfahan Univ. of Technology, Dept. of Chemical Engineering, Isfahan (Iran); Thibault, J. [Ottawa Univ., Dept. of Chemical Engineering, Ottawa, ON (Canada)

    2004-08-01

    Heat transfer to viscoelastic fluids is frequently encountered in various industrial processes. In this investigation an analytical solution was obtained to predict the fully developed, steady and laminar heat transfer of viscoelastic fluids between parallel plates. One of the plates was stationary and was subjected to a constant heat flux. The other plate moved with constant velocity and was insulated. The simplified Phan-Thien-Tanner (SPTT) model, believed to be a more realistic model for viscoelastic fluids, was used to represent the rheological behavior of the fluid. The energy equation was solved for a wide range of Brinkman number, dimensionless viscoelastic group, and dimensionless pressure drop. Results highlight the strong effects of these parameters on the heat transfer rate. (Author)

  19. The multilevel fast multipole algorithm (MLFMA) for solving large-scale computational electromagnetics problems

    CERN Document Server

    Ergul, Ozgur

    2014-01-01

    The Multilevel Fast Multipole Algorithm (MLFMA) for Solving Large-Scale Computational Electromagnetic Problems provides a detailed and instructional overview of implementing MLFMA. The book: presents a comprehensive treatment of the MLFMA algorithm, including basic linear algebra concepts, recent developments on parallel computation, and a number of application examples; covers solutions of electromagnetic problems involving dielectric objects and perfectly conducting objects; discusses applications including scattering from airborne targets, scattering from red

  20. Large-scale numerical simulation of rotationally constrained convection

    Science.gov (United States)

    Sprague, Michael; Julien, Keith; Knobloch, Edgar; Werne, Joseph; Weiss, Jeffrey

    2007-11-01

    Using direct numerical simulation (DNS), we investigate solutions of an asymptotically reduced system of nonlinear PDEs for rotationally constrained convection. The reduced equations filter fast inertial waves and relax the need to resolve Ekman boundary layers, allowing exploration of a parameter range inaccessible with DNS of the full Boussinesq equations. The equations are applicable to ocean deep convection, which is characterized by small Rossby number and large Rayleigh number. Previous numerical studies of the reduced equations examined upright convection, where the gravity vector was anti-parallel to the rotation vector. In addition to the columnar and geostrophic-turbulence regimes, simulations revealed a third regime where Taylor columns were shielded by sleeves of opposite-signed vorticity. Here we extend our numerical simulations to examine both upright and tilted convection at high Rayleigh numbers.

  1. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  2. Large-scale electrohydrodynamic organic nanowire printing, lithography, and electronics

    Science.gov (United States)

    Lee, Tae-Woo

    2014-03-01

    Despite the many merits of organic nanowires (NWs), a reliable process for the controllable, large-scale assembly of highly-aligned NW parallel arrays based on "individual control (IC)" of NWs has yet to be developed: inorganic NWs are mainly grown vertically on substrates and have therefore been transferred to target substrates by non-individually controlled (non-IC) methods, such as contact-printing technologies with unidirectional massive alignment, or the random dispersion method with disordered alignment. Controlled alignment and patterning of individual semiconducting NWs at desired positions over a large area is a major requirement for practical electronic device applications. Large-area, high-speed printing of highly-aligned individual NWs that allows control of the exact numbers of wires, their dimensions and their orientations, and its use in high-speed large-area nanolithography, is a significant challenge for practical applications. Here we use a high-speed electrohydrodynamic organic nanowire printer to print large-area organic semiconducting nanowire arrays directly on device substrates in an accurately individually-controlled manner; this method also enables sophisticated large-area nanowire lithography for nano-electronics. We achieve an unprecedented maximum field-effect mobility of up to 9.7 cm2 V-1 s-1 with extremely low contact resistance (<5.53 Ω·cm), even in nano-channel transistors based on single-stranded semiconducting NWs. We also demonstrate complementary inverter circuit arrays consisting of well-aligned p-type and n-type organic semiconducting NWs. Extremely fast nanolithography using printed semiconducting nanowire arrays provides a very simple, reliable method of fabricating large-area, flexible nano-electronics.

  3. Distribution probability of large-scale landslides in central Nepal

    Science.gov (United States)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) their role as sources of small-scale failures, and 3) reactivation. Only a few scientific publications have been published concerning large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation for the large-scale landslide distribution is also derived. The equation is validated by applying it to another area: there, the area under the receiver operating characteristic curve of the landslide distribution probability is 0.699, and the distribution probability explains more than 65% of the existing landslides. The regression equation can therefore be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
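
    The statistical step described above amounts to fitting a logistic regression and scoring it by the area under the ROC curve; a hedged scikit-learn sketch follows, with synthetic predictors standing in for the geomorphological and geological variables of the study.

        # Logistic regression of landslide presence on synthetic predictors,
        # scored by ROC AUC on a held-out split (stand-in for the "new area").
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 3))             # e.g. slope, relief, wetness
        logit = 1.2 * X[:, 0] + 0.8 * X[:, 1] - 0.5
        y = rng.random(1000) < 1 / (1 + np.exp(-logit))

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = LogisticRegression().fit(X_tr, y_tr)
        p = model.predict_proba(X_te)[:, 1]        # distribution probability
        print("AUC on held-out area:", round(roc_auc_score(y_te, p), 3))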

  4. A Scalar Field Dark Matter Model and Its Role in the Large-Scale Structure Formation in the Universe

    Directory of Open Access Journals (Sweden)

    Mario A. Rodríguez-Meza

    2012-01-01

    We present a model of dark matter based on scalar-tensor theory of gravity. With this scalar field dark matter model we study the non-linear evolution of the large-scale structures in the universe. The equations that govern the evolution of the scale factor of the universe are derived together with the appropriate Newtonian equations to follow the nonlinear evolution of the structures. Results are given in terms of the power spectrum that gives quantitative information on the large-scale structure formation. The initial conditions we have used are consistent with the so-called concordance ΛCDM model.
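
    The power spectrum that the abstract reports can be estimated from a gridded density field with an FFT and spherical binning; the sketch below uses a Gaussian random field as a stand-in for simulation output, and the normalization convention P(k) = V |delta_k|^2 / N^2 is an assumption of this sketch.

        # Spherically averaged power spectrum P(k) of a periodic density field.
        import numpy as np

        n, box = 64, 100.0                         # grid cells per side, box size
        rng = np.random.default_rng(1)
        delta = rng.normal(size=(n, n, n))         # stand-in overdensity field

        dk = np.fft.fftn(delta)
        Pk_grid = np.abs(dk) ** 2 * box**3 / n**6  # P(k) = V |delta_k|^2 / N^2

        k = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
        kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
        kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()

        bins = np.linspace(kmag[kmag > 0].min(), kmag.max() / 2, 20)
        which = np.digitize(kmag, bins)
        Pk = [Pk_grid.ravel()[which == i].mean() for i in range(1, len(bins))]
        print("P(k) estimated in", len(Pk), "k-bins")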

  5. Organised convection embedded in a large-scale flow

    Science.gov (United States)

    Naumann, Ann Kristin; Stevens, Bjorn; Hohenegger, Cathy

    2017-04-01

    In idealised simulations of radiative convective equilibrium, convection aggregates spontaneously from randomly distributed convective cells into organized mesoscale convection despite homogeneous boundary conditions. Although these simulations apply very idealised setups, the process of self-aggregation is thought to be relevant for the development of tropical convective systems. One feature that idealised simulations usually neglect is the occurrence of a large-scale background flow. In the tropics, organised convection is embedded in a large-scale circulation system, which advects convection in the along-wind direction and alters near-surface convergence in the convective areas. A large-scale flow also modifies the surface fluxes, which are expected to be enhanced upwind of the convective area if a large-scale flow is applied. Convective clusters that are embedded in a large-scale flow therefore experience an asymmetric component of the surface fluxes, which influences the development and the pathway of a convective cluster. In this study, we use numerical simulations with explicit convection and add a large-scale flow to the established setup of radiative convective equilibrium. We then analyse how aggregated convection evolves when exposed to wind forcing. The simulations suggest that convective line structures are more prevalent if a large-scale flow is present and that convective clusters move considerably slower than advection by the large-scale flow would suggest. We also study the asymmetric component of convective aggregation due to enhanced surface fluxes, and discuss the pathway and speed of convective clusters as a function of the large-scale wind speed.

  6. Probabilistic cartography of the large-scale structure

    CERN Document Server

    Leclercq, Florent; Lavaux, Guilhem; Wandelt, Benjamin

    2015-01-01

    The BORG algorithm is an inference engine that derives the initial conditions given a cosmological model and galaxy survey data, and produces physical reconstructions of the underlying large-scale structure by assimilating the data into the model. We present the application of BORG to real galaxy catalogs and describe the primordial and late-time large-scale structure in the considered volumes. We then show how these results can be used for building various probabilistic maps of the large-scale structure, with rigorous propagation of uncertainties. In particular, we study dynamic cosmic web elements and secondary effects in the cosmic microwave background.

  7. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  8. Big data Mining Using Very-Large-Scale Data Processing Platforms

    Directory of Open Access Journals (Sweden)

    Ms. K. Deepthi

    2016-02-01

    Full Text Available Big Data consists of large-volume, complex, growing data sets with multiple, heterogeneous sources. With the tremendous development of networking, data storage, and data collection capacity, Big Data is now rapidly expanding in all science and engineering domains, including the physical, biological and biomedical sciences. The MapReduce programming model provides the parallel processing capability needed to analyze such large-scale data. MapReduce allows easy development of scalable parallel applications that process big data on large clusters of commodity machines. Google's MapReduce and its open-source equivalent Hadoop are powerful tools for building such applications.
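
    As a minimal illustration of the MapReduce model described above (a single-process simulation of the map, shuffle and reduce phases, not Hadoop itself), a word count might look as follows:

      # Minimal single-process simulation of the MapReduce programming model.
      # In a real deployment (e.g. Hadoop), map and reduce tasks run in
      # parallel across a cluster; this only shows the programming model.
      from collections import defaultdict

      def map_phase(record):
          # Emit (key, value) pairs: one (word, 1) per word.
          for word in record.split():
              yield word.lower(), 1

      def shuffle(pairs):
          # Group values by key, as the framework does between map and reduce.
          groups = defaultdict(list)
          for key, value in pairs:
              groups[key].append(value)
          return groups

      def reduce_phase(key, values):
          # Aggregate all values for one key.
          return key, sum(values)

      records = ["big data mining", "mining big data sets"]
      pairs = [kv for rec in records for kv in map_phase(rec)]
      result = dict(reduce_phase(k, vs) for k, vs in shuffle(pairs).items())
      print(result)  # {'big': 2, 'data': 2, 'mining': 2, 'sets': 1}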

  9. PetClaw: Parallelization and Performance Optimization of a Python-Based Nonlinear Wave Propagation Solver Using PETSc

    KAUST Repository

    Alghamdi, Amal Mohammed

    2012-04-01

    Clawpack, a conservation laws package implemented in Fortran, and its Python-based version, PyClaw, are existing tools providing nonlinear wave propagation solvers that use state-of-the-art finite volume methods. Simulations using those tools can have extensive computational requirements to provide accurate results. Therefore, a number of tools, such as BearClaw and MPIClaw, have been developed based on Clawpack to achieve significant speedup by exploiting parallel architectures. However, none of them has been shown to scale on a large number of cores. Furthermore, these tools, implemented in Fortran, achieve parallelization by inserting parallelization logic and MPI standard routines throughout the serial code in a non-modular manner. Our contribution in this thesis research is three-fold. First, we demonstrate an advantageous use case of Python in implementing easy-to-use, modular, extensible, and scalable scientific software tools by developing an implementation of a parallelization framework, PetClaw, for PyClaw using the well-known Portable, Extensible Toolkit for Scientific Computation, PETSc, through its Python wrapper petsc4py. Second, we demonstrate the possibility of achieving acceptable Python code performance relative to Fortran after introducing a number of serial optimizations to the Python code, including integrating Clawpack Fortran kernels into PyClaw for the low-level, computationally intensive parts of the code. As a result of those optimizations, the Python overhead in PetClaw for a shallow water application is only 12 percent compared to the corresponding Fortran Clawpack application. Third, we provide a demonstration of PetClaw scalability on up to the entirety of Shaheen, a 16-rack IBM Blue Gene/P supercomputer comprising 65,536 cores and located at King Abdullah University of Science and Technology (KAUST). The PetClaw solver achieved above 0.98 weak scaling efficiency for an Euler application on the whole machine excluding the
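
    As a flavor of the petsc4py layer that PetClaw builds on (a minimal sketch, not PetClaw's own code; grid sizes and the number of fields are invented), the following creates a distributed structured grid and a global solution vector, and runs under e.g. "mpiexec -n 4 python demo.py":

      # Minimal petsc4py usage sketch: a distributed 2-D structured grid
      # (DMDA) and a global vector, the kind of building blocks a
      # PetClaw-style solver uses to distribute its state across MPI ranks.
      import sys
      import petsc4py
      petsc4py.init(sys.argv)
      from petsc4py import PETSc

      da = PETSc.DMDA().create([64, 64], dof=3, stencil_width=2)  # 3 fields, 2-cell halo
      q = da.createGlobalVec()   # solution storage, partitioned across ranks
      q.set(1.0)                 # initialize every component
      PETSc.Sys.Print("global size:", q.getSize())  # prints once, on rank 0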

  10. Accelerating large-scale protein structure alignments with graphics processing units

    Directory of Open Access Journals (Sweden)

    Pang Bin

    2012-02-01

    Full Text Available Abstract Background Large-scale protein structure alignment, an indispensable tool in structural bioinformatics, poses a tremendous challenge to computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign can take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues of protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massively parallel computing power of GPUs.
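
    Since the GPU kernels themselves are not reproduced here, the data-parallel idea can be illustrated with a hedged CPU sketch (multiprocessing in place of CUDA, RMSD as a stand-in for a TM-align-style score, and all data synthetic):

      # CPU-parallel analogue (not ppsAlign itself): score one query structure
      # against a database of structures in parallel. The GPU version above
      # applies the same data-parallel idea at much finer granularity.
      import numpy as np
      from multiprocessing import Pool

      def rmsd(pair):
          # Root-mean-square deviation of two pre-aligned N x 3 coordinate
          # arrays (a stand-in for a real structure similarity score).
          a, b = pair
          return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          query = rng.normal(size=(150, 3))           # hypothetical query
          database = [rng.normal(size=(150, 3)) for _ in range(1000)]
          with Pool() as pool:                        # one task per comparison
              scores = pool.map(rmsd, [(query, s) for s in database])
          print(min(scores))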

  11. The theory of large-scale ocean circulation

    National Research Council Canada - National Science Library

    Samelson, R. M

    2011-01-01

    "This is a concise but comprehensive introduction to the basic elements of the theory of large-scale ocean circulation for advanced students and researchers"-- "Mounting evidence that human activities...

  12. Learning networks for sustainable, large-scale improvement.

    Science.gov (United States)

    McCannon, C Joseph; Perla, Rocco J

    2009-05-01

    Large-scale improvement efforts known as improvement networks offer structured opportunities for exchange of information and insights into the adaptation of clinical protocols to a variety of settings.

  13. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Document Server

    CERN. Geneva

    2015-01-01

    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  14. An Evaluation Framework for Large-Scale Network Structures

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas Phillip; Madsen, Ole Brun

    2004-01-01

    An evaluation framework for large-scale network structures is presented, which facilitates evaluations and comparisons of different physical network structures. A number of quantitative and qualitative parameters are presented, and their importance to networks is discussed. Choosing a network...

  15. Modified gravity and large scale flows, a review

    Science.gov (United States)

    Mould, Jeremy

    2017-02-01

    Large scale flows have been a challenging feature of cosmography ever since galaxy scaling relations came on the scene 40 years ago. The next generation of surveys will offer a serious test of the standard cosmology.

  16. Some perspective on the Large Scale Scientific Computation Research

    Institute of Scientific and Technical Information of China (English)

    DU Qiang

    2004-01-01

    The "Large Scale Scientific Computation (LSSC) Research" project is one of the State Major Basic Research projects funded by the Chinese Ministry of Science and Technology in the field of information science and technology.

  17. Some perspective on the Large Scale Scientific Computation Research

    Institute of Scientific and Technical Information of China (English)

    DU; Qiang

    2004-01-01

    The "Large Scale Scientific Computation (LSSC) Research" project is one of the State Major Basic Research projects funded by the Chinese Ministry of Science and Technology in the field of information science and technology.

  18. PetroChina to Expand Dushanzi Refinery on Large Scale

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A large-scale expansion project for PetroChina Dushanzi Petrochemical Company has been given the green light, a move which will make it one of the largest refineries and petrochemical complexes in the country.

  19. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  20. Disentangling the dynamic core: a research program for a neurodynamics at the large-scale.

    Science.gov (United States)

    Le Van Quyen, Michel

    2003-01-01

    My purpose in this paper is to sketch a research direction based on Francisco Varela's pioneering work in neurodynamics (see also Rudrauf et al. 2003, in this issue). Very early on he argued that the internal coherence of every mental-cognitive state lies in the global self-organization of brain activities at the large scale, constituting a fundamental pole of integration called here a "dynamic core". Recent neuroimaging evidence appears to broadly support this hypothesis and suggests that a global brain dynamics emerges at the large-scale level from the cooperative interactions among widely distributed neuronal populations. Despite a growing body of evidence supporting this view, our understanding of these large-scale brain processes remains hampered by the lack of a theoretical language for expressing these complex behaviors in dynamical terms. In this paper, I propose a rough cartography of a comprehensive approach that offers a conceptual and mathematical framework to analyze spatio-temporal large-scale brain phenomena. I emphasize how these nonlinear methods can be applied, what properties might be inferred from neuronal signals, and where one might productively proceed in the future. This paper is dedicated, with respect and affection, to the memory of Francisco Varela.

  1. Imaging radar observations and nonlocal theory of large-scale plasma waves in the equatorial electrojet

    Directory of Open Access Journals (Sweden)

    D. L. Hysell

    Full Text Available Large-scale (l ~ 1 km) waves in the daytime and night-time equatorial electrojet are studied using coherent scatter radar data from Jicamarca. Images of plasma irregularities within the main beam of the radar are formed using interferometry with multiple baselines. These images are analyzed according to nonlocal gradient drift instability theory and are also compared to nonlinear computer simulations carried out recently by Ronchi et al. (1991) and Hu and Bhattacharjee (1999). In the daytime, the large-scale waves assume a non-steady dynamical equilibrium state characterized by the straining and destruction of the waves by shear and diffusion followed by spontaneous regeneration as predicted by Ronchi et al. (1991). At night, when steep plasma density gradients emerge, slowly propagating large-scale vertically extended waves predominate. Eikonal analysis suggests that these waves are trapped (absolutely unstable) or are nearly trapped (convectively unstable) and are able to tunnel between altitude regions which are locally unstable. Intermediate-scale waves are mainly transient (convectively stable) but can become absolutely unstable in narrow altitude bands determined by the background density profile. These characteristics are mainly consistent with the simulations presented by Hu and Bhattacharjee (1999). A new class of large-scale primary waves is found to occur along bands that sweep westward and downward from high altitudes through the E-region at twilight.

    Key words. Ionosphere (equatorial ionosphere; ionospheric irregularities; plasma waves and instabilities

  2. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    Full Text Available To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, based on an analysis of the characteristics and defects of the genetic algorithm and the support vector machine. In a cloud computing environment, the SVM parameters are first optimized by a parallel genetic algorithm, and the optimized parallel SVM model is then used to predict traffic flow. Using traffic flow data from the Haizhu District of Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and a parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
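
    A serial toy version of the GA-SVM loop might look as follows (a hedged sketch with synthetic data, not the paper's cloud implementation; in the paper it is these fitness evaluations that get parallelized):

      # Tiny genetic algorithm searching SVR hyperparameters (C, gamma).
      # Data and GA settings are invented for illustration only.
      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      # Hypothetical stand-in for traffic data: predict flow from lagged flows.
      X = rng.uniform(0, 1, size=(200, 4))
      y = X @ np.array([0.5, 1.0, -0.3, 0.2]) + 0.05 * rng.normal(size=200)

      def fitness(genome):
          c, g = np.exp(genome)  # genome stores log C and log gamma
          return cross_val_score(SVR(C=c, gamma=g), X, y, cv=3).mean()

      pop = rng.uniform(-3, 3, size=(20, 2))          # initial population
      for generation in range(10):
          scores = np.array([fitness(ind) for ind in pop])
          parents = pop[np.argsort(scores)[-10:]]     # keep the fittest half
          children = parents + 0.3 * rng.normal(size=parents.shape)  # mutate
          pop = np.vstack([parents, children])
      best = pop[np.argmax([fitness(ind) for ind in pop])]
      print("C=%.3f gamma=%.3f" % tuple(np.exp(best)))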

  3. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    small and large scale model tests show no clear evidence of scale effects for overtopping above a threshold value. In the large-scale model no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping, scale effects...... are presented as the small-scale model underpredicts the overtopping discharge....

  4. Efficient algorithms for collaborative decision making for large scale settings

    DEFF Research Database (Denmark)

    Assent, Ira

    2011-01-01

    Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses...... to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems....

  5. Balancing modern Power System with large scale of wind power

    OpenAIRE

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact on the control of power system balance and thus deviations in the power system frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the s...

  6. Vector dissipativity theory for large-scale impulsive dynamical systems

    Directory of Open Access Journals (Sweden)

    Haddad Wassim M.

    2004-01-01

    Full Text Available Modern complex large-scale impulsive systems involve multiple modes of operation placing stringent demands on controller analysis of increasing complexity. In analyzing these large-scale systems, it is often desirable to treat the overall impulsive system as a collection of interconnected impulsive subsystems. Solution properties of the large-scale impulsive system are then deduced from the solution properties of the individual impulsive subsystems and the nature of the impulsive system interconnections. In this paper, we develop vector dissipativity theory for large-scale impulsive dynamical systems. Specifically, using vector storage functions and vector hybrid supply rates, dissipativity properties of the composite large-scale impulsive systems are shown to be determined from the dissipativity properties of the impulsive subsystems and their interconnections. Furthermore, extended Kalman-Yakubovich-Popov conditions, in terms of the impulsive subsystem dynamics and interconnection constraints, characterizing vector dissipativeness via vector system storage functions, are derived. Finally, these results are used to develop feedback interconnection stability results for large-scale impulsive dynamical systems using vector Lyapunov functions.
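
    For orientation, the componentwise dissipation inequality underlying this framework can be written schematically as follows (our notation, not the paper's; the inequality ≤≤ is interpreted componentwise, W is an essentially nonnegative matrix, and S_c, S_d are the continuous-time and resetting vector supply rates):

      \[
      V_s(x(T)) \;\le\le\; e^{W(T-t_0)}\, V_s(x(t_0))
        + \int_{t_0}^{T} e^{W(T-t)}\, S_c(u(t), y(t))\, dt
        + \sum_{t_0 \le t_k < T} e^{W(T-t_k)}\, S_d(u_d(t_k), y_d(t_k)),
      \]

    with V_s the vector storage function; the extended Kalman-Yakubovich-Popov conditions mentioned above characterize when such a V_s exists.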

  7. Parallel High Order Accuracy Methods Applied to Non-Linear Hyperbolic Equations and to Problems in Materials Sciences

    Energy Technology Data Exchange (ETDEWEB)

    Jan Hesthaven

    2012-02-06

    Final report for DOE Contract DE-FG02-98ER25346 entitled Parallel High Order Accuracy Methods Applied to Non-Linear Hyperbolic Equations and to Problems in Materials Sciences. Principal Investigator Jan S. Hesthaven Division of Applied Mathematics Brown University, Box F Providence, RI 02912 Jan.Hesthaven@Brown.edu February 6, 2012 Note: This grant was originally awarded to Professor David Gottlieb and the majority of the work envisioned reflects his original ideas. However, when Prof Gottlieb passed away in December 2008, Professor Hesthaven took over as PI to ensure proper mentoring of students and postdoctoral researchers already involved in the project. This unusual circumstance has naturally impacted the project and its timeline. However, as the report reflects, the planned work has been accomplished and some activities beyond the original scope have been pursued with success. Project overview and main results The effort in this project focuses on the development of high order accurate computational methods for the solution of hyperbolic equations with application to problems with strong shocks. While the methods are general, emphasis is on applications to gas dynamics with strong shocks.

  8. Simulations of Baryon Acoustic Oscillations I: Growth of Large-Scale Density Fluctuations

    CERN Document Server

    Takahashi, Ryuichi; Matsubara, Takahiko; Sugiyama, Naoshi; Kayo, Issha; Nishimichi, Takahiro; Shirata, Akihito; Taruya, Atsushi; Saito, Shun; Yahata, Kazuhiro; Suto, Yasushi

    2008-01-01

    We critically examine how well the evolution of large-scale density perturbations is followed in cosmological $N$-body simulations. We first run a large volume simulation and perform a mode-by-mode analysis in three-dimensional Fourier space. We show that the growth of large-scale fluctuations significantly deviates from linear theory predictions. The deviations are caused by nonlinear coupling with a small number of modes at the largest scales owing to the finiteness of the simulation volume. We then develop an analytic model based on second-order perturbation theory to quantify the effect. Our model accurately reproduces the simulation results. For a single realization, the second-order effect appears typically as "zig-zag" patterns around the linear-theory prediction, which imprint artificial "oscillations" that lie on the real baryon-acoustic oscillations. Although an ensemble average of a number of realizations approaches the linear theory prediction, the dispersions of the realizations remain large e...
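
    As background for the mode-by-mode comparison: in linear theory every Fourier mode grows by the same factor, so the power spectrum obeys (standard notation, not specific to this paper)

      \[
      P(k, z) = \left[ \frac{D_+(z)}{D_+(z_i)} \right]^2 P(k, z_i),
      \]

    where D_+ is the linear growth factor and z_i the initial redshift; the deviations reported above are departures from exactly this scaling.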

  9. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  10. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five-year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.

  11. Large-Scale Structure in Brane-Induced Gravity I. Perturbation Theory

    CERN Document Server

    Scoccimarro, Roman

    2009-01-01

    We study the growth of subhorizon perturbations in brane-induced gravity using perturbation theory. We solve for the linear evolution of perturbations taking advantage of the symmetry under gauge transformations along the extra-dimension to decouple the bulk equations in the quasistatic approximation, which we argue may be a better approximation at large scales than thought before. We then study the nonlinearities in the bulk and brane equations, concentrating on the workings of the Vainshtein mechanism by which the theory becomes general relativity (GR) at small scales. We show that at the level of the power spectrum, to a good approximation, the effect of nonlinearities in the modified gravity sector may be absorbed into a renormalization of the gravitational constant. Since weak lensing is entirely unaffected by the extra nonlinear physics in these theories, the modified gravity can be described in this approximation by a single function, an effective gravitational constant that depends on space and time. ...
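
    The single-function description mentioned at the end can be summarized, in our schematic notation (a quasistatic, subhorizon parameterization, not the paper's exact equations), by a modified Poisson equation

      \[
      k^2 \Phi(\mathbf{k}, a) = -4\pi\, G_{\rm eff}(k, a)\, a^2\, \bar{\rho}(a)\, \delta(\mathbf{k}, a),
      \]

    with G_eff approaching Newton's constant on small scales where the Vainshtein mechanism restores general relativity.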

  12. Large-scale-vortex dynamos in planar rotating convection

    CERN Document Server

    Guervilly, Céline; Jones, Chris A

    2016-01-01

    Several recent studies have demonstrated how large-scale vortices may arise spontaneously in rotating planar convection. Here we examine the dynamo properties of such flows in rotating Boussinesq convection. For moderate values of the magnetic Reynolds number ($100 \lesssim Rm \lesssim 550$, with $Rm$ based on the box depth and the convective velocity), a large-scale (i.e. system-size) magnetic field is generated. The amplitude of the magnetic energy oscillates in time, out of phase with the oscillating amplitude of the large-scale vortex. The dynamo mechanism relies on those components of the flow that have length scales lying between that of the large-scale vortex and the typical convective cell size; smaller-scale flows are not required. The large-scale vortex plays a crucial role in the magnetic induction despite being essentially two-dimensional. For larger magnetic Reynolds numbers, the dynamo is small scale, with a magnetic energy spectrum that peaks at the scale of the convective cells. In this case, ...

  13. Large-Scale Cosmic-Ray Anisotropy as a Probe of Interstellar Turbulence

    CERN Document Server

    Giacinti, Gwenael

    2016-01-01

    We calculate the large-scale cosmic-ray (CR) anisotropies predicted for a range of Goldreich-Sridhar (GS) and isotropic models of interstellar turbulence, and compare them with IceTop data. In general, the predicted CR anisotropy is not a pure dipole; the cold spots reported at 400 TeV and 2 PeV are consistent with a GS model that contains a smooth deficit of parallel-propagating waves and a broad resonance function, although some other possibilities cannot, as yet, be ruled out. In particular, isotropic fast magnetosonic wave turbulence can match the observations at high energy, but cannot accommodate an energy-dependence in the shape of the CR anisotropy. Our findings suggest that improved data on the large-scale CR anisotropy could provide a valuable probe of the properties - notably the power-spectrum - of the local interstellar turbulence.

  14. Seismic safety in conducting large-scale blasts

    Science.gov (United States)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises a drilling and blasting method is used to prepare hard rock for excavation. As mining operations approach settlements, the negative effects of large-scale blasts increase. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out expert assessments for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-to-digital converter recording to a laptop. The registration results of surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth's surface vibrations are determined. The safety evaluation of the seismic effect was carried out according to the permissible value of vibration velocity. For cases exceeding permissible values, recommendations were developed to reduce the level of seismic impact.

  15. Acoustic Studies of the Large Scale Ocean Circulation

    Science.gov (United States)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  16. Human pescadillo induces large-scale chromatin unfolding

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hao; FANG Yan; HUANG Cuifen; YANG Xiao; YE Qinong

    2005-01-01

    The human pescadillo gene encodes a protein with a BRCT domain. Pescadillo plays an important role in DNA synthesis, cell proliferation and transformation. Since BRCT domains have been shown to induce large-scale chromatin unfolding, we tested the role of Pescadillo in the regulation of large-scale chromatin unfolding. To this end, we isolated the coding region of Pescadillo from human mammary MCF10A cells. Compared with the reported sequence, the isolated Pescadillo contains an in-frame deletion from amino acid 580 to 582. Targeting Pescadillo to an amplified, lac operator-containing chromosome region in the mammalian genome results in large-scale chromatin decondensation. This unfolding activity maps to the BRCT domain of Pescadillo. These data provide a new clue to understanding the vital role of Pescadillo.

  17. Transport of Large Scale Poloidal Flux in Black Hole Accretion

    CERN Document Server

    Beckwith, Kris; Krolik, Julian H

    2009-01-01

    We perform a global, three-dimensional GRMHD simulation of an accretion torus embedded in a large scale vertical magnetic field orbiting a Schwarzschild black hole. This simulation investigates how a large scale vertical field evolves within a turbulent accretion disk and whether global magnetic field configurations suitable for launching jets and winds can develop. We identify a "coronal mechanism" of magnetic flux motion, which dominates the global flux evolution. In this coronal mechanism, magnetic stresses driven by orbital shear create large-scale half-loops of magnetic field that stretch radially inward and then reconnect, leading to discontinuous jumps in the location of magnetic flux. This mechanism is supplemented by a smaller amount of flux advection in the accretion flow proper. Because the black hole in this case does not rotate, the magnetic flux on the horizon determines the mean magnetic field strength in the funnel around the disk axis; this field strength is regulated by a combination of th...

  18. Large Scale Anomalies of the Cosmic Microwave Background with Planck

    DEFF Research Database (Denmark)

    Frejsel, Anne Mette

    This thesis focuses on the large scale anomalies of the Cosmic Microwave Background (CMB) and their possible origins. The investigations consist of two main parts. The first part is on statistical tests of the CMB, and the consistency of both maps and power spectrum. We find that the Planck data...... is very consistent, while the WMAP 9 year release appears more contaminated by non-CMB residuals than the 7 year release. The second part is concerned with the anomalies of the CMB from two approaches. One is based on an extended inflationary model as the origin of one specific large scale anomaly, namely....... Here we find evidence that the Planck CMB maps contain residual radiation in the loop areas, which can be linked to some of the large scale CMB anomalies: the point-parity asymmetry, the alignment of quadrupole and octupole and the dipole modulation....

  19. Large Scale Magnetohydrodynamic Dynamos from Cylindrical Differentially Rotating Flows

    CERN Document Server

    Ebrahimi, F

    2015-01-01

    For cylindrical differentially rotating plasmas threaded with a uniform vertical magnetic field, we study large-scale magnetic field generation from finite amplitude perturbations using analytic theory and direct numerical simulations. Analytically, we impose helical fluctuations, a seed field, and a background flow and use quasi-linear theory for a single mode. The predicted large-scale field growth agrees with numerical simulations in which the magnetorotational instability (MRI) arises naturally. The vertically and azimuthally averaged toroidal field is generated by a fluctuation-induced EMF that depends on differential rotation. Given fluctuations, the method also predicts large-scale field growth for MRI-stable rotation profiles and flows with no rotation but shear.
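
    The averaged description referred to above is that of mean-field electrodynamics: splitting fields into means and fluctuations and averaging the induction equation gives (standard form; the paper's quasi-linear closure supplies the EMF)

      \[
      \frac{\partial \overline{\mathbf{B}}}{\partial t}
        = \nabla \times \left( \overline{\mathbf{U}} \times \overline{\mathbf{B}}
        + \boldsymbol{\mathcal{E}} \right) + \eta \nabla^2 \overline{\mathbf{B}},
      \qquad
      \boldsymbol{\mathcal{E}} = \overline{\tilde{\mathbf{u}} \times \tilde{\mathbf{b}}},
      \]

    so the fluctuation-induced EMF appears as a source for the mean (large-scale) field even when the mean flow alone would not sustain one.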

  20. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact on the control of power system balance and thus deviations in the power system...... to be analysed with improved analytical tools and techniques. This paper proposes techniques for the active power balance control in future power systems with the large scale wind power integration, where power balancing model provides the hour-ahead dispatch plan with reduced planning horizon and the real time...... frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the secure and stable grid operation. To ensure the stable power system operation, the evolving power system has...

  1. Large Scale Anomalies of the Cosmic Microwave Background with Planck

    DEFF Research Database (Denmark)

    Frejsel, Anne Mette

    This thesis focuses on the large scale anomalies of the Cosmic Microwave Background (CMB) and their possible origins. The investigations consist of two main parts. The first part is on statistical tests of the CMB, and the consistency of both maps and power spectrum. We find that the Planck data...... is very consistent, while the WMAP 9 year release appears more contaminated by non-CMB residuals than the 7 year release. The second part is concerned with the anomalies of the CMB from two approaches. One is based on an extended inflationary model as the origin of one specific large scale anomaly, namely....... Here we find evidence that the Planck CMB maps contain residual radiation in the loop areas, which can be linked to some of the large scale CMB anomalies: the point-parity asymmetry, the alignment of quadrupole and octupole and the dipole modulation....

  2. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  3. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)

    Morteza Hajizadeh-Oghaz; Reza Shoja Razavi; Mohammadreza Loghman Estarki

    2014-08-01

    Yttria-stabilized zirconia nanopowders were synthesized on a relatively large scale using the Pechini method. In the present paper, nearly spherical yttria-stabilized zirconia nanopowders with tetragonal structure were synthesized by the Pechini process from zirconium oxynitrate hexahydrate, yttrium nitrate, citric acid and ethylene glycol. The phase and structural analyses were accomplished by X-ray diffraction; morphological analysis was carried out by field emission scanning electron microscopy and transmission electron microscopy. The results revealed nearly spherical yttria-stabilized zirconia powder with tetragonal crystal structure and a chemical purity of 99.1% as determined by inductively coupled plasma optical emission spectroscopy.

  4. Practical Large Scale Syntheses of New Drug Candidates

    Institute of Scientific and Technical Information of China (English)

    Hui-Yin; Li

    2001-01-01

    This presentation will focus on practical large-scale syntheses of lead compounds and drug candidates from three major therapeutic areas at the DuPont Pharmaceuticals Research Laboratory: 1) DMP777, a selective, non-toxic, orally active human elastase inhibitor; 2) DMP754, a potent glycoprotein IIb/IIIa antagonist; 3) R-warfarin, the pure enantiomeric form of warfarin. The key technology used for preparing these drug candidates is asymmetric hydrogenation under very mild reaction conditions, which produced very high quality final products at large scale (>99% de, >99 A% and >99 wt%). Some practical and GMP aspects of process development will also be discussed.

  5. Statistical equilibria of large scales in dissipative hydrodynamic turbulence

    CERN Document Server

    Dallas, Vassilios; Alexakis, Alexandros

    2015-01-01

    We present a numerical study of the statistical properties of three-dimensional dissipative turbulent flows at scales larger than the forcing scale. Our results indicate that the large scale flow can be described to a large degree by the truncated Euler equations with the predictions of the zero flux solutions given by absolute equilibrium theory, both for helical and non-helical flows. Thus, the functional shape of the large scale spectra can be predicted provided that scales sufficiently larger than the forcing length scale but also sufficiently smaller than the box size are examined. Deviations from the predictions of absolute equilibrium are discussed.

  6. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

    Full Text Available This paper investigates fatigue damage of the top flange of a large-scale wind turbine generator. It establishes a finite element model of the top flange connection system with the finite element analysis software MSC.Marc/Mentat and analyzes its fatigue strain; it simulates the flange fatigue loading conditions with the Bladed software and acquires the flange fatigue load spectrum with the rain-flow counting method; finally, it performs the fatigue analysis of the top flange with the fatigue analysis software MSC.Fatigue and the Palmgren-Miner linear cumulative damage theory. The analysis provides a new approach to flange fatigue analysis of large-scale wind turbine generators and possesses practical engineering value.
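
    The damage-accumulation step at the end can be illustrated with a short sketch (the Basquin-form S-N curve and all constants are invented for illustration, not the paper's values):

      # Palmgren-Miner linear damage accumulation over a rain-flow cycle count.
      def miner_damage(cycles, A=1e12, m=3.0):
          """cycles: list of (stress_range, count) pairs from rain-flow counting.
          S-N curve in Basquin form: N = A * S**-m (illustrative constants)."""
          damage = 0.0
          for stress_range, count in cycles:
              n_allowed = A * stress_range ** (-m)  # cycles to failure at this range
              damage += count / n_allowed           # Miner's rule: sum n_i / N_i
          return damage                             # failure predicted when >= 1.0

      # Hypothetical load spectrum binned by stress range (MPa, cycle count):
      spectrum = [(40.0, 2.0e6), (80.0, 2.0e5), (160.0, 5.0e3)]
      print("cumulative damage D = %.3f" % miner_damage(spectrum))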

  7. Large-Scale Inverse Problems and Quantification of Uncertainty

    CERN Document Server

    Biegler, Lorenz; Ghattas, Omar

    2010-01-01

    Large-scale inverse problems and associated uncertainty quantification has become an important area of research, central to a wide range of science and engineering applications. Written by leading experts in the field, Large-scale Inverse Problems and Quantification of Uncertainty focuses on the computational methods used to analyze and simulate inverse problems. The text provides PhD students, researchers, advanced undergraduate students, and engineering practitioners with the perspectives of researchers in areas of inverse problems and data assimilation, ranging from statistics and large-sca

  8. [Issues of large scale tissue culture of medicinal plant].

    Science.gov (United States)

    Lv, Dong-Mei; Yuan, Yuan; Zhan, Zhi-Lai

    2014-09-01

    In order to increase the yield and quality of medicinal plants and enhance the competitive power of the medicinal plant industry in our country, this paper analyzes the status, problems and countermeasures of large-scale tissue culture of medicinal plants. Although biotechnology is one of the most efficient and promising means of producing medicinal plants, it still has problems such as the stability of the material, the safety of transgenic medicinal plants and the optimization of culture conditions. Establishing a perfect evaluation system according to the characteristics of the medicinal plant is the key measure to assure the sustainable development of large-scale tissue culture of medicinal plants.

  9. Generation Expansion Planning Considering Integrating Large-scale Wind Generation

    DEFF Research Database (Denmark)

    Zhang, Chunyu; Ding, Yi; Østergaard, Jacob

    2013-01-01

    Generation expansion planning (GEP) is the problem of finding the optimal strategy to plan the construction of new generation while satisfying technical and economical constraints. In the deregulated and competitive environment, large-scale integration of wind generation (WG) in power systems has... necessitated the inclusion of more innovative and sophisticated approaches in power system investment planning. A bi-level generation expansion planning approach considering large-scale wind generation is proposed in this paper. The first phase is the investment decision, while the second phase is production...

  10. The fractal octahedron network of the large scale structure

    CERN Document Server

    Battaner, E

    1998-01-01

    In a previous article, we have proposed that the large scale structure network generated by large scale magnetic fields could consist of a network of octahedra only contacting at their vertices. Assuming such a network could arise at different scales producing a fractal geometry, we study here its properties, and in particular how a sub-octahedron network can be inserted within an octahedron of the large network. We deduce that the scale of the fractal structure would range from $\approx$100 Mpc, i.e. the scale of the deepest surveys, down to about 10 Mpc, as other smaller scale magnetic fields were probably destroyed in the radiation dominated Universe.

  11. Large-scale streaming motions and microwave background anisotropies

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Gonzalez, E.; Sanz, J.L. (Cantabria Universidad, Santander (Spain))

    1989-12-01

    The minimal microwave background anisotropies are calculated on each angular scale implied by the existence of large-scale streaming motions. These minimal anisotropies, due to the Sachs-Wolfe effect, are obtained for different experiments, and give quite different results from those found in previous work. They are not in conflict with present theories of galaxy formation. Upper limits are imposed on the scale at which large-scale streaming motions can occur by extrapolating results from present double-beam-switching experiments. 17 refs.

  12. Distributed chaos tuned to large scale coherent motions in turbulence

    CERN Document Server

    Bershadskii, A

    2016-01-01

    It is shown, using direct numerical simulations and laboratory experiment data, that distributed chaos is often tuned to large scale coherent motions in anisotropic inhomogeneous turbulence. The examples considered are: a fully developed turbulent boundary layer (range of coherence: $14 < y^{+} < 80$), turbulent thermal convection (in a horizontal cylinder), and Couette-Taylor flow. Two ways of tuning are described: one via the fundamental frequency (wavenumber) and another via a subharmonic (period doubling). For the second way the large scale coherent motions are a natural component of distributed chaos. In all considered cases spontaneous breaking of space translational symmetry is accompanied by reflexional symmetry breaking.

  13. Large-scale liquid scintillation detectors for solar neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Benziger, Jay B.; Calaprice, Frank P. [Princeton University Princeton, Princeton, NJ (United States)

    2016-04-15

    Large-scale liquid scintillation detectors are capable of providing spectral yields of the low energy solar neutrinos. These detectors require > 100 tons of liquid scintillator with high optical and radiopurity. In this paper requirements for low-energy neutrino detection by liquid scintillation are specified and the procedures to achieve low backgrounds in large-scale liquid scintillation detectors for solar neutrinos are reviewed. The designs, operations and achievements of Borexino, KamLAND and SNO+ in measuring the low-energy solar neutrino fluxes are reviewed. (orig.)

  14. Optimal Dispatching of Large-scale Water Supply System

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper deals with the use of optimal control techniques in large-scale water distribution networks. According to the network characteristics and the actual state of the water supply system in China, an implicit model, which may be solved by utilizing the hierarchical optimization method, is established. In particular, based on analyses of a water supply system containing variable-speed pumps, a software tool has been developed successfully. The application of this model to the city of Shenyang (China) is compared to the experience-based strategy. The results of this study show that the developed model is a very promising optimization method for controlling large-scale water supply systems.

  15. Reliability Evaluation considering Structures of a Large Scale Wind Farm

    DEFF Research Database (Denmark)

    Shin, Je-Seok; Cha, Seung-Tae; Wu, Qiuwei

    2012-01-01

    Wind energy is one of the most widely used renewable energy resources. Wind power has been connected to the grid as large scale wind farms which are made up of dozens of wind turbines, and the scale of wind farms has increased recently. Due to the intermittent and variable wind source, reliability evaluation of wind farms is necessarily required. Also, because a large scale offshore wind farm has a long repair time and a high repair cost as well as a high investment cost, it is essential to take into account the economic aspect. One of the methods to efficiently build and to operate a wind farm is to construct...

  16. Practical Large Scale Syntheses of New Drug Candidates

    Institute of Scientific and Technical Information of China (English)

    Hui-Yin Li

    2001-01-01

    This presentation will focus on practical large-scale syntheses of lead compounds and drug candidates from three major therapeutic areas at the DuPont Pharmaceuticals Research Laboratory: 1) DMP777, a selective, non-toxic, orally active human elastase inhibitor; 2) DMP754, a potent glycoprotein IIb/IIIa antagonist; 3) R-warfarin, the pure enantiomeric form of warfarin. The key technology used for preparing these drug candidates is asymmetric hydrogenation under very mild reaction conditions, which produced very high quality final products at large scale (>99% de, >99 A% and >99 wt%). Some practical and GMP aspects of process development will also be discussed.

  17. Fast paths in large-scale dynamic road networks

    CERN Document Server

    Nannicini, Giacomo; Barbier, Gilles; Krob, Daniel; Liberti, Leo

    2007-01-01

    Efficiently computing fast paths in large scale dynamic road networks (where dynamic traffic information is known over a part of the network) is a practical problem faced by several traffic information service providers who wish to offer a realistic fast path computation to GPS terminal enabled vehicles. The heuristic solution method we propose is based on a highway hierarchy-based shortest path algorithm for static large-scale networks; we maintain a static highway hierarchy and perform each query on the dynamically evaluated network.
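
    As a point of reference for the static ingredient (plain Dijkstra, not the highway-hierarchy algorithm itself; the toy graph is invented), a dynamic re-query simply reruns the search after the edge weights change:

      # Plain Dijkstra shortest paths; the hierarchy described above is what
      # makes repeated queries fast at road-network scale.
      import heapq

      def dijkstra(graph, source):
          # graph: {node: [(neighbor, travel_time), ...]}
          dist = {source: 0.0}
          heap = [(0.0, source)]
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist.get(u, float("inf")):
                  continue  # stale queue entry
              for v, w in graph.get(u, []):
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v] = nd
                      heapq.heappush(heap, (nd, v))
          return dist

      toy = {"a": [("b", 4.0), ("c", 1.0)], "c": [("b", 2.0)], "b": []}
      print(dijkstra(toy, "a"))  # {'a': 0.0, 'c': 1.0, 'b': 3.0}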

  18. Pump schedules optimisation with pressure aspects in complex large-scale water distribution systems

    Directory of Open Access Journals (Sweden)

    P. Skworcow

    2014-06-01

    Full Text Available This paper considers optimisation of pump and valve schedules in complex large-scale water distribution networks (WDN), taking into account pressure aspects such as minimum service pressure and pressure-dependent leakage. An optimisation model is automatically generated in the GAMS language from a hydraulic model in the EPANET format and from additional files describing operational constraints, electricity tariffs and pump station configurations. The paper describes in detail how each hydraulic component is modelled. To reduce the size of the optimisation problem the full hydraulic model is simplified using a module reduction algorithm, while retaining the nonlinear characteristics of the model. Subsequently, the nonlinear programming solver CONOPT is used to solve the optimisation model, which is in the form of Nonlinear Programming with Discontinuous Derivatives (DNLP). The results produced by CONOPT are processed further by heuristic algorithms to generate an integer solution. The proposed approach was tested on a large-scale WDN model provided in the EPANET format. The considered WDN included complex structures and interactions between pump stations. Solving of several scenarios considering different horizons, time steps, operational constraints, demand levels and topological changes demonstrated the ability of the approach to automatically generate and solve optimisation problems for a variety of requirements.
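
    A toy continuous relaxation of this kind of model can be sketched in Python with Pyomo (our illustration, not the paper's GAMS model: one pump, one tank, invented tariff and demand numbers, and an LP solver standing in for CONOPT):

      # Toy pump-scheduling relaxation: minimise energy cost over 24 hourly
      # steps subject to a tank mass balance and level (pressure-proxy) bounds.
      # Requires pyomo and a locally installed solver (glpk here).
      import pyomo.environ as pyo

      T = range(24)
      tariff = [0.05 if t < 7 else 0.15 for t in T]        # EUR/kWh, night vs day
      demand = [30.0 if 7 <= t < 22 else 10.0 for t in T]  # m3/h

      m = pyo.ConcreteModel()
      m.q = pyo.Var(T, bounds=(0.0, 60.0))        # pumped flow, m3/h
      m.level = pyo.Var(T, bounds=(50.0, 200.0))  # tank level, m3

      def balance(m, t):
          prev = 100.0 if t == 0 else m.level[t - 1]  # initial tank volume
          return m.level[t] == prev + m.q[t] - demand[t]
      m.balance = pyo.Constraint(T, rule=balance)

      # Energy cost: assume a fixed specific energy of 0.4 kWh per m3 pumped.
      m.cost = pyo.Objective(expr=sum(tariff[t] * 0.4 * m.q[t] for t in T))

      pyo.SolverFactory("glpk").solve(m)   # LP here; CONOPT handles the NLP case
      print([round(pyo.value(m.q[t]), 1) for t in T])

    Heuristic rounding of the resulting flows to discrete pump on/off states would then play the role of the integer post-processing described above.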

  19. Pump schedules optimisation with pressure aspects in complex large-scale water distribution systems

    Directory of Open Access Journals (Sweden)

    P. Skworcow

    2014-02-01

    Full Text Available This paper considers optimisation of pump and valve schedules in complex large-scale water distribution networks (WDN), taking into account pressure aspects such as minimum service pressure and pressure-dependent leakage. An optimisation model is automatically generated in the GAMS language from a hydraulic model in the EPANET format and from additional files describing operational constraints, electricity tariffs and pump station configurations. The paper describes in detail how each hydraulic component is modelled. To reduce the size of the optimisation problem the full hydraulic model is simplified using a module reduction algorithm, while retaining the nonlinear characteristics of the model. Subsequently, the nonlinear programming solver CONOPT is used to solve the optimisation model, which is in the form of Nonlinear Programming with Discontinuous Derivatives (DNLP). The results produced by CONOPT are processed further by heuristic algorithms to generate an integer solution. The proposed approach was tested on a large-scale WDN model provided in the EPANET format. The considered WDN included complex structures and interactions between pump stations. Solving of several scenarios considering different horizons, time steps, operational constraints, demand levels and topological changes demonstrated the ability of the approach to automatically generate and solve optimisation problems for a variety of requirements.

  20. The Effective Field Theory of Large Scale Structures at two loops

    Energy Technology Data Exchange (ETDEWEB)

    Carrasco, John Joseph M.; Foreman, Simon; Green, Daniel; Senatore, Leonardo, E-mail: jjmc@stanford.edu, E-mail: sfore@stanford.edu, E-mail: drgreen@stanford.edu, E-mail: senatore@stanford.edu [Stanford Institute for Theoretical Physics and Department of Physics, Stanford University, Stanford, CA 94306 (United States)

    2014-07-01

    Large scale structure surveys promise to be the next leading probe of cosmological information. It is therefore crucial to reliably predict their observables. The Effective Field Theory of Large Scale Structures (EFTofLSS) provides a manifestly convergent perturbation theory for the weakly non-linear regime of dark matter, where correlation functions are computed in an expansion of the wavenumber k of a mode over the wavenumber associated with the non-linear scale k_NL. Since most of the information is contained at high wavenumbers, it is necessary to compute higher order corrections to correlation functions. After the one-loop correction to the matter power spectrum, we estimate that the next leading one is the two-loop contribution, which we compute here. At this order in k/k_NL, there is only one counterterm in the EFTofLSS that must be included, though this term contributes both at tree-level and in several one-loop diagrams. We also discuss correlation functions involving the velocity and momentum fields. We find that the EFTofLSS prediction at two loops matches to percent accuracy the non-linear matter power spectrum at redshift zero up to k ~ 0.6 h Mpc^-1, requiring just one unknown coefficient that needs to be fit to observations. Given that Standard Perturbation Theory stops converging at redshift zero at k ~ 0.1 h Mpc^-1, our results demonstrate the possibility of accessing a factor of order 200 more dark matter quasi-linear modes than naively expected. If the remaining observational challenges to accessing these modes can be addressed with similar success, our results show that there is tremendous potential for large scale structure surveys to explore the primordial universe.
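
    Schematically (up to normalization conventions, which vary between papers), the expansion evaluated here is

      \[
      P(k) \simeq P_{11}(k) + P_{\text{1-loop}}(k) + P_{\text{2-loop}}(k)
        - 2\, c_s^2 \frac{k^2}{k_{\rm NL}^2}\, P_{11}(k) + \dots,
      \]

    where P_11 is the linear power spectrum and the c_s^2 term is the single counterterm referred to above, the one coefficient fit to observations.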

  1. Large-Scale Flow and Spiral Core Instability in Rayleigh-Benard Convection

    CERN Document Server

    Aranson, I S; Steinberg, V; Tsimring, L S; Aranson, Igor; Assenheimer, Michel; Steinberg, Victor; Tsimring, Lev S.

    1996-01-01

    The spiral core instability, observed in large aspect ratio Rayleigh-Benard convection, is studied numerically in the framework of the Swift-Hohenberg equation coupled to a large-scale flow. It is shown that the instability leads to non-trivial core dynamics and is driven by the self-generated vorticity. Moreover, the recently reported transition from spirals to hexagons near the core is shown to occur only in the presence of a non-variational nonlinearity, and is triggered by the spiral core instability. Qualitative agreement between the simulations and the experiments is demonstrated.
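
    For reference, the model class named above is, in our schematic notation (the paper's precise coupling and non-variational terms differ),

      \[
      \partial_t \psi + (\mathbf{U} \cdot \nabla)\psi
        = \varepsilon\,\psi - (\nabla^2 + 1)^2 \psi - \psi^3 + \dots,
      \]

    where ψ is the order parameter describing the convection pattern and U is the self-generated large-scale mean flow, driven by the vorticity associated with distortions of the pattern.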

  2. GroFi: Large-scale fiber placement research facility

    Directory of Open Access Journals (Sweden)

    Christian Krombholz

    2016-03-01

    and processes for large-scale composite components. Due to the use of coordinated and simultaneously working layup units, a high flexibility of the research platform is achieved. This allows the investigation of new materials, technologies and processes on both small coupons and large components such as wing covers or fuselage skins.

  3. Flexibility in design of large-scale methanol plants

    Institute of Scientific and Technical Information of China (English)

    Esben Lauge Sφrensen; Helge Holm-Larsen; Haldor Topsφe A/S

    2006-01-01

    This paper presents a cost effective design for large-scale methanol production. It is demonstrated how recent technological progress can be utilised to design a methanol plant, which is inexpensive and easy to operate, while at the same time very robust towards variations in feedstock composition and product specifications.

  4. Large-scale search for dark-matter axions

    Energy Technology Data Exchange (ETDEWEB)

    Hagmann, C.A., LLNL; Kinion, D.; Stoeffl, W.; Van Bibber, K.; Daw, E.J. [Massachusetts Inst. of Tech., Cambridge, MA (United States); McBride, J. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Peng, H. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Rosenberg, L.J. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Xin, H. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Laveigne, J. [Florida Univ., Gainesville, FL (United States); Sikivie, P. [Florida Univ., Gainesville, FL (United States); Sullivan, N.S. [Florida Univ., Gainesville, FL (United States); Tanner, D.B. [Florida Univ., Gainesville, FL (United States); Moltz, D.M. [Lawrence Berkeley Lab., CA (United States); Powell, J. [Lawrence Berkeley Lab., CA (United States); Clarke, J. [Lawrence Berkeley Lab., CA (United States); Nezrick, F.A. [Fermi National Accelerator Lab., Batavia, IL (United States); Turner, M.S. [Fermi National Accelerator Lab., Batavia, IL (United States); Golubev, N.A. [Russian Academy of Sciences, Moscow (Russia); Kravchuk, L.V. [Russian Academy of Sciences, Moscow (Russia)

    1998-01-01

    Early results from a large-scale search for dark matter axions are presented. In this experiment, axions constituting our dark-matter halo may be resonantly converted to monochromatic microwave photons in a high-Q microwave cavity permeated by a strong magnetic field. Sensitivity at the level of one important axion model (KSVZ) has been demonstrated.

  5. Temporal Variation of Large Scale Flows in the Solar Interior

    Indian Academy of Sciences (India)

    Sarbani Basu; H. M. Antia

    2000-09-01

    We attempt to detect short-term temporal variations in the rotation rate and other large-scale velocity fields in the outer part of the solar convection zone using the ring diagram technique applied to Michelson Doppler Imager (MDI) data. The measured velocity field shows variations of about 10 m/s on the scale of a few days.

  6. Large-scale Homogenization of Bulk Materials in Mammoth Silos

    NARCIS (Netherlands)

    Schott, D.L.

    2004-01-01

    This doctoral thesis concerns the large-scale homogenization of bulk materials in mammoth silos. The objective of this research was to determine the best stacking and reclaiming method for homogenization in mammoth silos. For this purpose a simulation program was developed to estimate the homogenization ...

  7. Main Achievements of Cotton Large-scale Transformation System

    Institute of Scientific and Technical Information of China (English)

    LI Fu-guang; LIU Chuan-liang; WU Zhi-xia; ZHANG Chao-jun; ZHANG Xue-yan

    2008-01-01

    A system of large-scale cotton transformation methods was established based on innovations in cotton transformation. It produces 8000 transgenic cotton plants per year by efficiently combining Agrobacterium tumefaciens-mediated, pollen-tube pathway and biolistic methods. More than 1000 transgenic lines are selected from the transgenic plants using molecular-assisted and conventional breeding methods.

  8. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    locally. We propose a method based on large-scale hypothesis testing, with a consistent procedure for selecting an appropriate threshold for the given data. By estimating the background distribution, we characterize the segment of interest as a set of outliers with a certain probability based on the estimated...
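
    A minimal sketch of the outlier-detection idea (illustrative only, not the authors' implementation): estimate a background distribution robustly, convert pixel values to p-values, and select the threshold with a false-discovery-rate rule suited to large-scale testing. The image and FDR level below are hypothetical.

      import numpy as np
      from scipy import stats

      def segment_by_outliers(image, alpha=0.05):
          """Flag pixels that are outliers under an estimated background model."""
          x = image.ravel()
          # Robust background estimate (assumes approximately Gaussian background).
          mu = np.median(x)
          sigma = 1.4826 * np.median(np.abs(x - mu))
          # One-sided p-value per pixel: how extreme is each value vs. background?
          pvals = stats.norm.sf(x, loc=mu, scale=sigma)
          # Benjamini-Hochberg step-up: a consistent, data-driven threshold.
          m = x.size
          order = np.argsort(pvals)
          passed = pvals[order] <= alpha * np.arange(1, m + 1) / m
          k = passed.nonzero()[0].max() + 1 if passed.any() else 0
          mask = np.zeros(m, dtype=bool)
          mask[order[:k]] = True
          return mask.reshape(image.shape)

      # Hypothetical usage: a bright square segment on a noisy background.
      rng = np.random.default_rng(0)
      img = rng.normal(0.0, 1.0, (64, 64))
      img[20:30, 20:30] += 4.0
      print(segment_by_outliers(img).sum(), "pixels flagged")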

  9. Large Scale Survey Data in Career Development Research

    Science.gov (United States)

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  10. Large-scale coastal impact induced by a catastrophic storm

    DEFF Research Database (Denmark)

    Fruergaard, Mikkel; Andersen, Thorbjørn Joest; Johannessen, Peter N

    breaching. Our results demonstrate that violent, millennial-scale storms can trigger significant large-scale and long-term changes on barrier coasts, and that coastal changes assumed to take place over centuries or even millennia may occur in association with a single extreme storm event...

  11. Regeneration and propagation of reed grass for large-scale ...

    African Journals Online (AJOL)

    전서범

    2012-01-26

  12. Dual Decomposition for Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Vandenberghe, Lieven

    2013-01-01

    Dual decomposition is applied to power balancing of flexible thermal storage units. The centralized large-scale problem is decomposed into smaller subproblems and solved locally by each unit in the Smart Grid. Convergence is achieved by coordinating the units' consumption through a negotiation...
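
    A minimal sketch of the price-coordination loop in dual decomposition (an illustration under simplifying assumptions, not the paper's model): a coordinator updates a price by subgradient ascent on the coupling constraint, and each unit solves its own small subproblem given that price. The unit costs, bounds, and balancing target below are hypothetical.

      import numpy as np

      # Each unit i minimizes c_i*(u_i - pref_i)^2 + lam*u_i over [lo, hi].
      prefs = np.array([2.0, 1.0, 3.0])    # preferred consumption per unit
      costs = np.array([1.0, 2.0, 0.5])    # local cost weights
      lo, hi = 0.0, 4.0                    # per-unit consumption bounds
      target = 4.0                         # total consumption to balance

      def local_step(lam):
          # Closed-form minimizer of the local subproblem at price lam.
          return np.clip(prefs - lam / (2.0 * costs), lo, hi)

      lam, step = 0.0, 0.2
      for _ in range(200):
          u = local_step(lam)              # units respond in parallel
          imbalance = u.sum() - target     # coordinator sees only the total
          lam += step * imbalance          # subgradient ascent on the dual
      print("price:", round(lam, 3), "allocations:", u.round(3))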

  13. Large-Scale Assessment and English Language Learners with Disabilities

    Science.gov (United States)

    Liu, Kristin K.; Ward, Jenna M.; Thurlow, Martha L.; Christensen, Laurene L.

    2017-01-01

    This article highlights a set of principles and guidelines, developed by a diverse group of specialists in the field, for appropriately including English language learners (ELLs) with disabilities in large-scale assessments. ELLs with disabilities make up roughly 9% of the rapidly increasing ELL population nationwide. In spite of the small overall…

  14. Large scale radial stability density of Hill's equation

    NARCIS (Netherlands)

    Broer, Henk; Levi, Mark; Simo, Carles

    2013-01-01

    This paper deals with large scale aspects of Hill's equation ẍ + (a + b p(t)) x = 0, where p is periodic with a fixed period. In particular, the interest is in the asymptotic radial density of the stability domain in the (a, b)-plane. It turns out that this density changes discontinuously in a certain...
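
    A minimal numerical sketch (not the paper's method) of how stability of Hill's equation is typically decided: integrate two fundamental solutions over one period T to form the monodromy matrix M and apply the Floquet criterion |tr M| ≤ 2; scanning the (a, b)-plane then estimates the stability density. The forcing p(t) = cos t and the scanned patch are assumed choices.

      import numpy as np
      from scipy.integrate import solve_ivp

      def is_stable(a, b, T=2 * np.pi, p=np.cos):
          """Stability of x'' + (a + b*p(t)) x = 0 via the monodromy matrix."""
          def rhs(t, y):
              return [y[1], -(a + b * p(t)) * y[0]]
          cols = []
          for y0 in ([1.0, 0.0], [0.0, 1.0]):   # two fundamental solutions
              sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-9, atol=1e-12)
              cols.append(sol.y[:, -1])
          M = np.column_stack(cols)             # monodromy matrix over one period
          return abs(np.trace(M)) <= 2.0        # Floquet stability criterion

      # Coarse scan of a small, hypothetical patch of the (a, b)-plane.
      aa = np.linspace(0.0, 2.0, 40)
      bb = np.linspace(0.0, 1.0, 20)
      stable = [[is_stable(a, b) for a in aa] for b in bb]
      print("stable fraction on this patch:", np.mean(stable))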

  15. Water Implications of Large-Scale Land Acquisitions in Ghana

    Directory of Open Access Journals (Sweden)

    Timothy Olalekan Williams

    2012-06-01

    The paper offers recommendations which can help the government to achieve its stated objective of developing a "policy framework and guidelines for large-scale land acquisitions by both local and foreign investors for biofuels that will protect the interests of investors and the welfare of Ghanaian farmers and landowners".

  16. Evaluating Large-scale National Public Management Reforms

    DEFF Research Database (Denmark)

    Breidahl, Karen Nielsen; Gjelstrup, Gunnar; Hansen, Morten Balle

    This article explores differences and similarities between two evaluations of large-scale administrative reforms which were carried out in the 2000s: the evaluation of the Norwegian NAV reform (EVANAV) and the evaluation of the Danish Local Government Reform (LGR). We provide a comparative analysis...

  17. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena in...

  18. Large-Scale Machine Learning for Classification and Search

    Science.gov (United States)

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  19. Cost Overruns in Large-scale Transportation Infrastructure Projects

    DEFF Research Database (Denmark)

    Cantarelli, Chantal C; Flyvbjerg, Bent; Molin, Eric J. E

    2010-01-01

    Managing large-scale transportation infrastructure projects is difficult due to frequent misinformation about the costs which results in large cost overruns that often threaten the overall project viability. This paper investigates the explanations for cost overruns that are given in the literature...

  1. Ultra-Large-Scale Systems: Scale Changes Everything

    Science.gov (United States)

    2008-03-06

    [Slide excerpts from a March 2008 presentation by Linda Northrop on ultra-large-scale systems; recoverable topics: statistical mechanics and complexity ("networks are everywhere", recurring "scale-free" structure in internet and yeast protein networks), design, design representation and analysis, assimilation, and determining and managing requirements.]

  2. The Role of Plausible Values in Large-Scale Surveys

    Science.gov (United States)

    Wu, Margaret

    2005-01-01

    In large-scale assessment programs such as NAEP, TIMSS and PISA, students' achievement data sets provided for secondary analysts contain so-called "plausible values." Plausible values are multiple imputations of the unobservable latent achievement for each student. This article shows how plausible values are used to: (1) address…
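
    A minimal sketch of how secondary analysts typically combine plausible values (Rubin's multiple-imputation rules; the data below are hypothetical): compute the statistic on each set of plausible values, then pool the point estimates and combine within- and between-imputation variance.

      import numpy as np

      def pool_plausible_values(estimates, variances):
          """Rubin's rules: pool per-PV estimates and their sampling variances."""
          estimates = np.asarray(estimates)
          variances = np.asarray(variances)
          m = len(estimates)
          qbar = estimates.mean()                 # pooled point estimate
          within = variances.mean()               # average sampling variance
          between = estimates.var(ddof=1)         # variance across imputations
          total = within + (1.0 + 1.0 / m) * between
          return qbar, np.sqrt(total)

      # Hypothetical: a mean achievement estimated from 5 plausible values.
      est = [502.1, 498.7, 503.4, 500.2, 499.9]
      var = [4.0, 4.2, 3.9, 4.1, 4.0]             # squared standard errors
      mean, se = pool_plausible_values(est, var)
      print(f"pooled mean = {mean:.1f}, standard error = {se:.2f}")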

  3. Large-scale data analysis using the Wigner function

    Science.gov (United States)

    Earnshaw, R. A.; Lei, C.; Li, J.; Mugassabi, S.; Vourdas, A.

    2012-04-01

    Large-scale data are analysed using the Wigner function. It is shown that the 'frequency variable' provides important information, which is lost with other techniques. The method is applied to 'sentiment analysis' in data from social networks and also to financial data.

  4. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  5. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Background: Large-scale molecular evolutionary analyses of protein coding sequences require a number of preparatory inter-related steps, from finding gene families, to generating alignments and phylogenetic trees, to assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods: We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results: We have benchmarked VESPA and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets. Discussion: Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  6. High-Throughput, Large-Scale SNP Genotyping: Bioinformatics Considerations

    OpenAIRE

    Margetic, Nino

    2004-01-01

    In order to provide a high-throughput, large-scale genotyping facility at the national level we have developed a set of inter-dependent information systems. A combination of commercial, publicly-available and in-house developed tools links a series of data repositories based both on flat files and relational databases providing an almost complete semi-automated pipeline.

  7. Chain Analysis for large-scale Communication systems

    NARCIS (Netherlands)

    Grijpink, Jan

    2010-01-01

    The chain concept is introduced to explain how large-scale information infrastructures so often fail and sometimes even backfire. Next, the assessment framework of the doctrine of Chain-computerisation and its chain analysis procedure are outlined. In this procedure chain description precedes assessment...

  8. Participatory Design of Large-Scale Information Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    In this article we discuss how to engage in large-scale information systems development by applying a participatory design (PD) approach that acknowledges the unique situated work practices conducted by the domain experts of modern organizations. We reconstruct the iterative prototyping approach...

  9. How large-scale subsidence affects stratocumulus transitions (discussion paper)

    NARCIS (Netherlands)

    Van der Dussen, J.J.; De Roode, S.R.; Siebesma, A.P.

    2015-01-01

    Some climate modeling results suggest that the Hadley circulation might weaken in a future climate, causing a subsequent reduction in the large-scale subsidence velocity in the subtropics. In this study we analyze the cloud liquid water path (LWP) budget from large-eddy simulation (LES) results of...

  10. Large-Scale Innovation and Change in UK Higher Education

    Science.gov (United States)

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  11. Measurement, Sampling, and Equating Errors in Large-Scale Assessments

    Science.gov (United States)

    Wu, Margaret

    2010-01-01

    In large-scale assessments, such as state-wide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement. There are always sources of inaccuracies in each of the steps. It is of interest to identify the source and magnitude of…

  12. Primordial non-Gaussianity from the large scale structure

    CERN Document Server

    Desjacques, Vincent

    2010-01-01

    Primordial non-Gaussianity is a potentially powerful discriminant of the physical mechanisms that generated the cosmological fluctuations observed today. Any detection of non-Gaussianity would have profound implications for our understanding of cosmic structure formation. In this paper, we review past and current efforts in the search for primordial non-Gaussianity in the large scale structure of the Universe.

  13. Planck intermediate results XLII. Large-scale Galactic magnetic fields

    DEFF Research Database (Denmark)

    Adam, R.; Ade, P. A. R.; Alves, M. I. R.

    2016-01-01

    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured...

  14. Electric vehicles and large-scale integration of wind power

    DEFF Research Database (Denmark)

    Liu, Wen; Hu, Weihao; Lund, Henrik

    2013-01-01

    was 6.5% in 2009 and which plans to develop large-scale wind power. The results show that electric vehicles (EVs) have the ability to balance the electricity demand and supply and to further wind power integration. In the best case, the energy system with EVs can increase wind power...

  15. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) CAD drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  16. New Visions for Large Scale Networks: Research and Applications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This paper documents the findings of the March 12-14, 2001 Workshop on New Visions for Large-Scale Networks: Research and Applications. The workshop's objectives were...

  17. Large-Scale Multi-Resolution Representations for Accurate Interactive Image and Volume Operations

    KAUST Repository

    Sicat, Ronell B.

    2015-11-25

    The resolutions of acquired image and volume data are ever increasing. However, the resolutions of commodity display devices remain limited. This leads to an increasing gap between data and display resolutions. To bridge this gap, the standard approach is to employ output-sensitive operations on multi-resolution data representations. Output-sensitive operations facilitate interactive applications since their required computations are proportional only to the size of the data that is visible, i.e., the output, and not the full size of the input. Multi-resolution representations, such as image mipmaps and volume octrees, are crucial in providing these operations direct access to any subset of the data at any resolution corresponding to the output. Despite its widespread use, this standard approach has some shortcomings in three important application areas, namely non-linear image operations, multi-resolution volume rendering, and large-scale image exploration. This dissertation presents new multi-resolution representations for large-scale images and volumes that address these shortcomings. Standard multi-resolution representations require low-pass pre-filtering for anti-aliasing. However, linear pre-filters do not commute with non-linear operations. This becomes problematic when applying non-linear operations directly to any coarse resolution levels in standard representations. In particular, this leads to inaccurate output when applying non-linear image operations, e.g., color mapping and detail-aware filters, to multi-resolution images. Similarly, in multi-resolution volume rendering, this leads to inconsistency artifacts which manifest as erroneous differences in rendering outputs across resolution levels. To address these issues, we introduce the sparse pdf maps and sparse pdf volumes representations for large-scale images and volumes, respectively. These representations sparsely encode continuous probability density functions (pdfs) of multi-resolution pixel
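
    The non-commutation the abstract points to is easy to demonstrate: downsampling (a linear box pre-filter, as in a mipmap level) and a non-linear operation give different results depending on their order. A minimal sketch, with squaring standing in for an arbitrary non-linear image operation:

      import numpy as np

      rng = np.random.default_rng(1)
      img = rng.random((4, 4))

      def downsample2x(a):
          """Linear pre-filter: 2x2 box average, as in a standard mipmap level."""
          return 0.25 * (a[0::2, 0::2] + a[1::2, 0::2]
                         + a[0::2, 1::2] + a[1::2, 1::2])

      f = np.square                            # stand-in non-linear operation

      coarse_then_op = f(downsample2x(img))    # operate on precomputed coarse level
      op_then_coarse = downsample2x(f(img))    # operate at full res, then reduce

      # E[x]^2 != E[x^2]: the two orders disagree; this is the inconsistency
      # across resolution levels that the proposed representations address.
      print(np.max(np.abs(coarse_then_op - op_then_coarse)))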