WorldWideScience

Sample records for model parallel phase

  1. TWO PHASE FLOW SPLIT MODEL FOR PARALLEL CHANNELS

    African Journals Online (AJOL)

    Ifeanyichukwu Onwuka

    A model has been developed for the determination of two-phase flow distributions between multiple parallel channels which ... transients, up to ten parallel flow paths, simple and complicated geometries, including the boilers of fossil steam generators and ... The above model and numerical technique were programmed in ...

  2. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.
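
    PPM's own constructs are not reproduced in this record, but the two-level split it targets (coarse-grained parallelism across cluster nodes, fine-grained parallelism across the cores of each node) can be illustrated with a small, hypothetical Python sketch; the chunking scheme and function names below are illustrative assumptions, not part of PPM or its runtime.

```python
# Illustrative two-level (coarse/fine grained) parallel map; not PPM itself.
# Processes stand in for cluster nodes, threads for the cores within a node.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def fine_grained_task(x):
    # per-core work item (hypothetical): a trivial arithmetic kernel
    return x * x

def node_work(chunk):
    # coarse-grained work assigned to one "node": fan out over local "cores"
    with ThreadPoolExecutor(max_workers=4) as cores:
        return sum(cores.map(fine_grained_task, chunk))

if __name__ == "__main__":
    data = list(range(1_000))
    chunks = [data[i::4] for i in range(4)]          # one chunk per "node"
    with ProcessPoolExecutor(max_workers=4) as nodes:
        partial_sums = list(nodes.map(node_work, chunks))
    print(sum(partial_sums))                         # sum of squares of 0..999
```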

  3. Two Phase Flow Split Model for Parallel Channels | Iloeje | Nigerian ...

    African Journals Online (AJOL)

    A model has been developed for the determination of two phase flow distributions between multiple parallel channels which communicate between a common upper and a common lower plenum. It utilizes the requirement of equal plenum to plenum pressure drops through the channels, continuity equations at the lower ...
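
    As a rough illustration of the flow-split idea in this record (a common plenum-to-plenum pressure drop plus continuity at the plena), the sketch below solves a deliberately simplified version of the problem in which each channel obeys a quadratic pressure-drop law; the coefficients and total flow are made-up numbers, and the actual two-phase pressure-drop correlations of the model are not reproduced.

```python
# Simplified flow-split solver: find the common plenum-to-plenum pressure
# drop dp such that the channel flows m_i = sqrt(dp / K_i) sum to m_total.
# The quadratic dp-flow law and the K values are illustrative assumptions only.
from math import sqrt

K = [2.0, 3.5, 5.0, 8.0]     # hypothetical channel resistance coefficients
m_total = 10.0               # hypothetical total mass flow entering the plenum

def total_flow(dp):
    return sum(sqrt(dp / k) for k in K)

lo, hi = 0.0, 1e6            # bracket the common pressure drop, then bisect
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if total_flow(mid) > m_total:
        hi = mid             # too much flow -> the common pressure drop is lower
    else:
        lo = mid
dp = 0.5 * (lo + hi)
flows = [sqrt(dp / k) for k in K]
print(dp, flows, sum(flows)) # flows satisfy continuity and share one dp
```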

  4. A Parallel Computational Model for Multichannel Phase Unwrapping Problem

    Science.gov (United States)

    Imperatore, Pasquale; Pepe, Antonio; Lanari, Riccardo

    2015-05-01

    In this paper, a parallel model for the solution of the computationally intensive multichannel phase unwrapping (MCh-PhU) problem is proposed. Firstly, the Extended Minimum Cost Flow (EMCF) algorithm for solving the MCh-PhU problem is revised within the rigorous mathematical framework of discrete calculus; this permits its topological structure to be captured in terms of meaningful discrete differential operators. Secondly, emphasis is placed on those methodological and practical aspects which lead to a parallel reformulation of the EMCF algorithm. Thus, a novel dual-level parallel computational model, in which the parallelism is hierarchically implemented at two different (i.e., process and thread) levels, is presented. The validity of our approach has been demonstrated through a series of experiments that have revealed a significant speedup. Therefore, the attained high-performance prototype is suitable for the solution of large-scale phase unwrapping problems in reasonable time frames, with a significant impact on the systematic exploitation of the existing, and rapidly growing, large archives of SAR data.

  5. Parallel two-phase-flow-induced vibrations in fuel pin model

    International Nuclear Information System (INIS)

    Hara, Fumio; Yamashita, Tadashi

    1978-01-01

    This paper reports the experimental results of vibrations of a fuel pin model (herein meaning the essential form of a fuel pin from the standpoint of vibration) in a parallel air-and-water two-phase flow. The essential part of the experimental apparatus consisted of a flat elastic strip made of stainless steel, both ends of which were firmly supported in a circular channel conveying the two-phase fluid. Vibrational strain of the fuel pin model, pressure fluctuation of the two-phase flow and two-phase-flow void signals were measured. Statistical measures such as power spectral density, variance and correlation function were calculated. The authors obtained (1) the relation between variance of vibrational strain and two-phase-flow velocity, (2) the relation between variance of vibrational strain and two-phase-flow pressure fluctuation, (3) frequency characteristics of variance of vibrational strain against the dominant frequency of the two-phase-flow pressure fluctuation, and (4) frequency characteristics of variance of vibrational strain against the dominant frequency of two-phase-flow void signals. The authors conclude that there exist two kinds of excitation mechanisms in vibrations of a fuel pin model inserted in a parallel air-and-water two-phase flow; namely, (1) parametric excitation, which occurs when the fundamental natural frequency of the fuel pin model is related to the dominant travelling frequency of water slugs in the two-phase flow by the ratio 1/2, 1/1, 3/2 and so on; and (2) vibrational resonance, which occurs when the fundamental frequency coincides with the dominant frequency of the two-phase-flow pressure fluctuation. (auth.)
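
    The statistical measures named above (variance, correlation function, power spectral density) can be computed from a sampled strain signal along the lines of the sketch below; the synthetic signal and sampling rate are placeholders, not the experimental data of the study.

```python
# Basic signal statistics of the kind cited in the abstract, on synthetic data.
import numpy as np

fs = 1000.0                                   # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)  # fake strain

x0 = x - x.mean()
variance = x0.var()

# biased autocorrelation estimate, normalised so that R(0) = 1
acf = np.correlate(x0, x0, mode="full")[x0.size - 1:] / (variance * x0.size)

# one-sided periodogram as a simple PSD estimate
X = np.fft.rfft(x0)
psd = (np.abs(X) ** 2) / (fs * x0.size)
freqs = np.fft.rfftfreq(x0.size, 1 / fs)

print(variance, acf[:3], freqs[np.argmax(psd)])   # dominant frequency ~12 Hz
```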

  6. Analytical modeling of two-phase flow instability in parallel boiling channels

    International Nuclear Information System (INIS)

    Ming, X.; Xuejun, C.; Mingyuan, Z.

    1990-01-01

    Research on two-phase flow instabilities is of great importance for the power and nuclear industries. Parallel-channel boiling systems are most commonly used, for instance, in steam generators and boilers. Thus, studying the stability of these systems is very useful, especially for safety considerations. This paper is concerned with the analytical modeling of density-wave instability in parallel vertical boiling channels with or without cross-connections. A mathematical model is developed to analyze the system stability in the frequency domain by means of multivariable control system theory. Based on the drift-flux model, this analysis accounts for subcooled boiling, arbitrary heat flux distribution, turbulent mixing and arbitrary flow paths for cross-connection, thermodynamic nonequilibrium in different flow regions, etc. The drift-flux conservation equations, together with other constitutive relations including those for the cross-connections, are integrated in subsections, then perturbed, linearized and Laplace-transformed around the system's steady-state operating parameters. Finally, the multivariable nodal equations are obtained and cast into matrix form, from which the characteristic equations for evaluating the system's stability are deduced. The coupling effects between channels, and between the channels and the external loop, can also be considered.
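
    The frequency-domain stability test described above has, in generic and hedged form, the structure sketched below (placeholder symbols, not the paper's notation).

```latex
% Generic shape of a linearized frequency-domain stability analysis
% (placeholder notation, not the paper's): perturb, Laplace-transform, assemble
% the nodal equations, and examine the characteristic determinant.
\[
  \delta\dot{\mathbf{x}}(t) = \mathbf{A}\,\delta\mathbf{x}(t)
  \;\;\longrightarrow\;\;
  \mathbf{G}(s)\,\delta\mathbf{X}(s) = \mathbf{0}
  \;\;\Longrightarrow\;\;
  \det\mathbf{G}(s) = 0 .
\]
% The parallel-channel system is linearly stable when every root s_k of the
% characteristic equation satisfies Re(s_k) < 0; density-wave oscillations set
% in when a complex-conjugate pair of roots crosses the imaginary axis.
```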

  7. Modeling, analysis, and design of stationary reference frame droop controlled parallel three-phase voltage source inverters

    DEFF Research Database (Denmark)

    Vasquez, Juan Carlos; Guerrero, Josep M.; Savaghebi, Mehdi

    2013-01-01

    Power electronics based MicroGrids consist of a number of voltage source inverters (VSIs) operating in parallel. In this paper, the modeling, control design, and stability analysis of parallel connected three-phase VSIs are derived. The proposed voltage and current inner control loops and the mathematical models of the VSIs are based on the stationary reference frame. A hierarchical control for the paralleled VSI system is developed: the primary control includes the droop method and the virtual impedance loops, in order to share active and reactive power, and the secondary control restores the frequency and amplitude deviations produced by the primary control. Also, a synchronization algorithm is presented in order to connect the MicroGrid to the grid. Experimental results are provided to validate the performance and robustness of the parallel VSI system control.
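
    For reference, the conventional P-omega / Q-E droop on which such a primary control is built is usually written as follows (a standard textbook form, not necessarily the exact control law of this paper):

```latex
% Conventional P-omega / Q-E droop used as the primary control of parallel VSIs
% (standard textbook form; m_p, n_q and the set-points are design parameters):
\[
  \omega = \omega^{*} - m_{p}\,(P - P^{*}), \qquad
  E      = E^{*}      - n_{q}\,(Q - Q^{*}).
\]
% The secondary control adds slow correction terms to omega* and E* so that the
% steady-state frequency and amplitude deviations introduced by the droop are
% removed.
```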

  8. Reduced-Order Structure-Preserving Model for Parallel-Connected Three-Phase Grid-Tied Inverters: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Brian B [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Purba, Victor [University of Minnesota; Jafarpour, Saber [University of California, Santa Barbara; Bullo, Francesco [University of California, Santa Barbara; Dhople, Sairaj [University of Minnesota

    2017-08-31

    Given that next-generation infrastructures will contain large numbers of grid-connected inverters and these interfaces will be satisfying a growing fraction of system load, it is imperative to analyze the impacts of power electronics on such systems. However, since each inverter model has a relatively large number of dynamic states, it would be impractical to execute complex system models where the full dynamics of each inverter are retained. To address this challenge, we derive a reduced-order structure-preserving model for parallel-connected grid-tied three-phase inverters. Here, each inverter in the system is assumed to have a full-bridge topology, LCL filter at the point of common coupling, and the control architecture for each inverter includes a current controller, a power controller, and a phase-locked loop for grid synchronization. We outline a structure-preserving reduced-order inverter model for the setting where the parallel inverters are each designed such that the filter components and controller gains scale linearly with the power rating. By structure preserving, we mean that the reduced-order three-phase inverter model is also composed of an LCL filter, a power controller, current controller, and PLL. That is, we show that the system of parallel inverters can be modeled exactly as one aggregated inverter unit and this equivalent model has the same number of dynamical states as an individual inverter in the paralleled system. Numerical simulations validate the reduced-order models.
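
    A minimal sketch of the aggregation idea, under the stated assumption that the N paralleled inverters are identical up to their power rating: the N LCL branches lump into a single LCL filter whose series elements are divided by N and whose shunt capacitance is multiplied by N, with the controller gains rescaled accordingly. Parameter names and values below are hypothetical, and only the circuit-level lumping is shown.

```python
# Aggregate N identical parallel LCL-filtered inverters into one equivalent
# unit (illustrative parameter lumping only; names/values are hypothetical).
def aggregate_lcl(n, L_f, R_f, C_f, L_g, R_g, s_rating):
    """Lump n identical LCL branches into one: series elements divide by n,
    shunt capacitors add, and the power rating scales by n."""
    return {
        "L_f": L_f / n, "R_f": R_f / n,     # inverter-side inductor branch
        "C_f": C_f * n,                     # filter capacitors in parallel
        "L_g": L_g / n, "R_g": R_g / n,     # grid-side inductor branch
        "S":   s_rating * n,                # aggregate power rating
    }

single = dict(L_f=1.5e-3, R_f=0.05, C_f=25e-6, L_g=0.5e-3, R_g=0.02, s_rating=10e3)
print(aggregate_lcl(100, **single))         # one 1-MVA equivalent inverter
```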

  9. Parallel imaging with phase scrambling.

    Science.gov (United States)

    Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel

    2015-04-01

    Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x² - y² shim and the data were reconstructed using adapted versions of the image space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.
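
    The "Fourier transform as a convolution" step can be written out in one dimension as the generic chirp identity below (not the paper's full k-space treatment):

```latex
% 1-D version of the "Fourier transform as a convolution" step: the transform of
% the object rho(x) multiplied by a quadratic (chirp) phase of curvature alpha.
\[
  F(k) = \int \rho(x)\, e^{i\alpha x^{2}} e^{-ikx}\, dx
       = e^{-ik^{2}/(4\alpha)} \int \rho(x)\, e^{i\alpha (x - k/(2\alpha))^{2}}\, dx
       = e^{-ik^{2}/(4\alpha)}\,(\rho * c)\!\left(\frac{k}{2\alpha}\right),
  \qquad c(x) = e^{i\alpha x^{2}} .
\]
% Windowing (truncating) the chirp kernel c limits the spatial region that
% contributes to each k-space sample, which is what allows the alias-reduced
% low-resolution reconstruction used to estimate the coil sensitivities.
```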

  10. Modeling, analysis, and design of stationary reference frame droop controlled parallel three-phase voltage source inverters

    DEFF Research Database (Denmark)

    Vasquez, Juan Carlos; Guerrero, Josep M.; Savaghebi, Mehdi

    2011-01-01

    Power electronics based microgrids consist of a number of voltage source inverters (VSIs) operating in parallel. In this paper, the modeling, control design, and stability analysis of three-phase VSIs are derived. The proposed voltage and current inner control loops and the mathematical models of the VSIs were based on the stationary reference frame. A hierarchical control for the paralleled VSI system was developed based on three levels. The primary control includes the droop method and the virtual impedance loops, in order to share active and reactive power. The secondary control restores ...

  11. Reduced-Order Structure-Preserving Model for Parallel-Connected Three-Phase Grid-Tied Inverters

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Brian B [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Purba, Victor [University of Minnesota; Jafarpour, Saber [University of California Santa-Barbara; Bullo, Francesco [University of California Santa-Barbara; Dhople, Sairaj V. [University of Minnesota

    2017-08-21

    Next-generation power networks will contain large numbers of grid-connected inverters satisfying a significant fraction of system load. Since each inverter model has a relatively large number of dynamic states, it is impractical to analyze complex system models where the full dynamics of each inverter are retained. To address this challenge, we derive a reduced-order structure-preserving model for parallel-connected grid-tied three-phase inverters. Here, each inverter in the system is assumed to have a full-bridge topology, LCL filter at the point of common coupling, and the control architecture for each inverter includes a current controller, a power controller, and a phase-locked loop for grid synchronization. We outline a structure-preserving reduced-order inverter model with lumped parameters for the setting where the parallel inverters are each designed such that the filter components and controller gains scale linearly with the power rating. By structure preserving, we mean that the reduced-order three-phase inverter model is also composed of an LCL filter, a power controller, current controller, and PLL. We show that the system of parallel inverters can be modeled exactly as one aggregated inverter unit and this equivalent model has the same number of dynamical states as any individual inverter in the system. Numerical simulations validate the reduced-order model.

  12. Small-Signal Modeling, Analysis and Testing of Parallel Three-Phase-Inverters with A Novel Autonomous Current Sharing Controller

    DEFF Research Database (Denmark)

    Guan, Yajuan; Quintero, Juan Carlos Vasquez; Guerrero, Josep M.

    2015-01-01

    A novel, simple and effective autonomous current-sharing controller for parallel three-phase inverters is employed in this paper. The novel controller gives the system a fast response and high precision in contrast to the conventional droop control, as it does not require calculating any active or reactive power; instead it uses a virtual impedance loop and an SFR phase-locked loop. The small-signal model of the system was developed for the autonomous operation of an inverter-based microgrid with the proposed controller. The developed model shows a large stability margin and fast transient response of the system. This model can help identify the origin of each of the modes and possible feedback signals for the design of controllers to improve the system stability. Experimental results from two parallel 2.2 kVA inverters verify the effectiveness of the novel control approach.

  13. Transfer function modeling of parallel connected two three-phase induction motor implementation using LabView platform

    DEFF Research Database (Denmark)

    Gunabalan, R.; Sanjeevikumar, P.; Blaabjerg, Frede

    2015-01-01

    This paper presents the transfer function modeling and stability analysis of two induction motors of the same ratings and parameters connected in parallel. The induction motors are controlled by a single inverter and the entire drive system is modeled using transfer functions in LabView. Further, the software is used to perform the stability analysis of the parallel-connected induction motor drive under unbalanced load conditions. It is very simple compared with the methods discussed so far to study the performance of the drive under unbalanced load conditions. Control design and simulation toolkits are used to model the drive system and to study its stability. Simulation is done for various operating conditions and the stability investigation is performed for different load conditions and differences in stator and rotor resistances between the two motors.

  14. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  15. A model for dealing with parallel processes in supervision

    OpenAIRE

    Lilja Cajvert

    2011-01-01

    Supervision in social work is essential for successful outcomes when working with clients. In social work, unconscious difficulties may arise and similar difficulties may occur in supervision as parallel processes. In this article, the development of a practice-based model of supervision to deal with parallel processes in supervision is described. The model has six phases. In the first phase, the focus is on the supervisor’s inner world, his/her own reflections and observations. ...

  16. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  17. Improving image quality of parallel phase-shifting digital holography

    International Nuclear Information System (INIS)

    Awatsuji, Yasuhiro; Tahara, Tatsuki; Kaneko, Atsushi; Koyama, Takamasa; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu

    2008-01-01

    The authors propose parallel two-step phase-shifting digital holography to improve the image quality of parallel phase-shifting digital holography. The proposed technique can double the effective number of hologram pixels in comparison with the conventional parallel four-step technique. The increase in the number of pixels makes it possible to improve the image quality of the reconstructed image in parallel phase-shifting digital holography. A numerical simulation and a preliminary experiment on the proposed technique were conducted and the effectiveness of the technique was confirmed. The proposed technique is more practical than the conventional parallel phase-shifting digital holography, because the composition of the digital holographic system based on the proposed technique is simpler.

  18. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements High Performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.

  19. Research on Parallel Three Phase PWM Converters base on RTDS

    Science.gov (United States)

    Xia, Yan; Zou, Jianxiao; Li, Kai; Liu, Jingbo; Tian, Jun

    2018-01-01

    Parallel operation of converters can increase the capacity of a system, but it may lead to a potential zero-sequence circulating current, so controlling the circulating current is an important goal in the design of parallel inverters. In this paper, the Real Time Digital Simulator (RTDS) is used to model the parallel converter system in real time and to study circulating-current suppression. The equivalent model of two parallel converters and the zero-sequence circulating current (ZSCC) were established and analyzed, and a strategy using variable zero-vector control was proposed to suppress the circulating current. For two parallel modular converters, a hardware-in-the-loop (HIL) study based on RTDS and a practical experiment were implemented; the results prove that the proposed control strategy is feasible and effective.

  20. Parallel Algorithms for Model Checking

    NARCIS (Netherlands)

    van de Pol, Jaco; Mousavi, Mohammad Reza; Sgall, Jiri

    2017-01-01

    Model checking is an automated verification procedure, which checks that a model of a system satisfies certain properties. These properties are typically expressed in some temporal logic, like LTL and CTL. Algorithms for LTL model checking (linear time logic) are based on automata theory and graph

  1. GPGPU Parallel SPIN Model Checker

    Data.gov (United States)

    National Aeronautics and Space Administration — Model Checking is a powerful technique used to verify that a system does not violate its intended behavior. While this is very useful in proving the robustness of a...

  2. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.

  3. Parallel models of associative memory

    CERN Document Server

    Hinton, Geoffrey E

    2014-01-01

    This update of the 1981 classic on neural networks includes new commentaries by the authors that show how the original ideas are related to subsequent developments. As researchers continue to uncover ways of applying the complex information processing abilities of neural networks, they give these models an exciting future which may well involve revolutionary developments in understanding the brain and the mind -- developments that may allow researchers to build adaptive intelligent machines. The original chapters show where the ideas came from and the new commentaries show where they are going

  4. Iteration schemes for parallelizing models of superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Gray, P.A. [Michigan State Univ., East Lansing, MI (United States)

    1996-12-31

    The time dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-Tc superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.

  5. A parallel computational model for GATE simulations.

    Science.gov (United States)

    Rannou, F R; Vega-Acevedo, N; El Bitar, Z

    2013-12-01

    GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. Parallel phase-shifting digital holography using spectral estimation technique.

    Science.gov (United States)

    Xia, Peng; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Matoba, Osamu

    2014-09-20

    We propose parallel phase-shifting digital holography using a spectral estimation technique, which enables the instantaneous acquisition of spectral information and three-dimensional (3D) information of a moving object. In this technique, an interference fringe image that contains six holograms with two phase shifts for three laser lines, such as red, green, and blue, is recorded by a space-division multiplexing method with single-shot exposure. The 3D monochrome images of these three laser lines are numerically reconstructed by a computer and used to estimate the spectral reflectance distribution of the object using a spectral estimation technique. Preliminary experiments demonstrate the validity of the proposed technique.

  7. A Scalable Prescriptive Parallel Debugging Model

    DEFF Research Database (Denmark)

    Jensen, Nicklas Bo; Quarfot Nielsen, Niklas; Lee, Gregory L.

    2015-01-01

    Debugging is a critical step in the development of any parallel program. However, the traditional interactive debugging model, where users manually step through code and inspect their application, does not scale well even for current supercomputers due to its centralized nature. While lightweight ...

  8. Effects of Parallel Channel Interactions on Two-Phase Flow Split in ...

    African Journals Online (AJOL)

    The tests would aid the development of a realistic transient computer model for tracking the distribution of two-phase flows into the multiple parallel channels of a Nuclear Reactor, during Loss of Coolant Accidents (LOCA), and were performed at the General Electric Nuclear Energy Division Laboratory, California. The test ...

  9. Sucrose and KF quenching system for solution phase parallel synthesis.

    Science.gov (United States)

    Chavan, Sunil; Watpade, Rahul; Toche, Raghunath

    2016-01-01

    KF and sucrose (table sugar) were exploited as a quenching system in solution-phase parallel synthesis. Excess electrophiles were covalently trapped by the hydroxyl functionality of sucrose and, owing to the polar nature of the sucrose derivatives, were solubilized in water. Potassium fluoride was used to convert various excess electrophilic reagents, such as acid chlorides, sulfonyl chlorides and isocyanates, to the corresponding fluorides, which are less susceptible to hydrolysis; sucrose subsequently traps these fluorides and dissolves them in water, thus removing them from the reaction mixture. Various excess electrophilic reagents such as acid chlorides, sulfonyl chlorides, and isocyanates were quenched successfully to give pure products in excellent yields.

  10. Multitasking TORT under UNICOS: Parallel performance models and measurements

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were then compared with measurements from applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.
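
    The specific overhead model derived for TORT is not reproduced in this record; a parallel performance model of this general kind typically takes the hedged form below.

```latex
% Generic parallel-overhead performance model (illustrative form only, not the
% specific TORT model): T_1 is the single-processor time, T_ovh collects
% communication, synchronization and load-imbalance costs.
\[
  T(P) \approx \frac{T_{1}}{P} + T_{\mathrm{ovh}}(P), \qquad
  O(P) \equiv P\,T(P) - T_{1} = P\,T_{\mathrm{ovh}}(P), \qquad
  S(P) = \frac{T_{1}}{T(P)} .
\]
% Fitting measured run times to a model of this shape is what allows the largest
% contributors to the parallel overhead to be identified and targeted.
```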

  11. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Barnett, D.A.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were then compared with measurements from applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  12. Parallelization of the Coupled Earthquake Model

    Science.gov (United States)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, predicting tsunamis over the Internet had never been done before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  13. Unified Singularity Modeling and Reconfiguration of 3rTPS Metamorphic Parallel Mechanisms with Parallel Constraint Screws

    Directory of Open Access Journals (Sweden)

    Yufeng Zhuang

    2015-01-01

    This paper presents a unified singularity modeling and reconfiguration analysis of variable topologies of a class of metamorphic parallel mechanisms with parallel constraint screws. The new parallel mechanisms consist of three reconfigurable rTPS limbs that have two working phases stemming from the reconfigurable Hooke (rT) joint. While one phase has full mobility, the other supplies a constraint force to the platform. Based on these, the platform constraint screw systems show that the new metamorphic parallel mechanisms have four topologies by altering the limb phases with mobility change among 1R2T (one rotation with two translations), 2R2T, and 3R2T and mobility 6. Geometric conditions of the mechanism design are investigated with some special topologies illustrated considering the limb arrangement. Following this and the actuation scheme analysis, a unified Jacobian matrix is formed using screw theory to include the change between geometric constraints and actuation constraints in the topology reconfiguration. Various singular configurations are identified by analyzing screw dependency in the Jacobian matrix. The work in this paper provides a basis for singularity-free workspace analysis and optimal design of the class of metamorphic parallel mechanisms with parallel constraint screws, which show simple geometric constraints with potentially simple kinematics and dynamics properties.

  14. Fast phase processing in off-axis holography by CUDA including parallel phase unwrapping.

    Science.gov (United States)

    Backoach, Ohad; Kariv, Saar; Girshovitz, Pinhas; Shaked, Natan T

    2016-02-22

    We present a parallel processing implementation for rapid extraction of quantitative phase maps from off-axis holograms on the Graphics Processing Unit (GPU) of the computer using Compute Unified Device Architecture (CUDA) programming. To obtain an efficient implementation, we parallelized both the wrapped phase map extraction algorithm and the two-dimensional phase unwrapping algorithm. In contrast to previous implementations, we utilized an unweighted least-squares phase unwrapping algorithm that better suits parallelism. We compared the proposed algorithm's run times on the CPU and the GPU of the computer for various sizes of off-axis holograms. Using the GPU implementation, we extracted the unwrapped phase maps from the recorded off-axis holograms at 35 frames per second (fps) for 4-megapixel holograms, and at 129 fps for 1-megapixel holograms, which, to the best of our knowledge, are the fastest processing frame rates obtained so far. We then used common-path off-axis interferometric imaging to quantitatively capture the phase maps of a micro-organism with rapid flagellum movements.
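
    A serial NumPy/SciPy sketch of the unweighted least-squares unwrapper referred to above (a Ghiglia-Romero-style DCT Poisson solve) is shown below for orientation; it is not the paper's CUDA implementation, and the test surface is synthetic.

```python
# Serial sketch of unweighted least-squares phase unwrapping via a DCT Poisson
# solve. Illustrative only; not the paper's GPU/CUDA code.
import numpy as np
from scipy.fft import dctn, idctn

def wrap(a):
    """Wrap angles into [-pi, pi)."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def unwrap_ls(psi):
    M, N = psi.shape
    dx = np.zeros_like(psi)
    dy = np.zeros_like(psi)
    dx[:, :-1] = wrap(np.diff(psi, axis=1))     # wrapped phase gradients
    dy[:-1, :] = wrap(np.diff(psi, axis=0))
    # divergence of the wrapped gradient; the rolled-in column/row is zero by
    # construction, which implements the Neumann (mirror) boundary condition
    rho = (dx - np.roll(dx, 1, axis=1)) + (dy - np.roll(dy, 1, axis=0))
    i = np.arange(M)[:, None]
    j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    phi_hat = dctn(rho, norm="ortho")
    phi_hat[denom != 0] /= denom[denom != 0]    # (0,0) mode: arbitrary offset
    return idctn(phi_hat, norm="ortho")

if __name__ == "__main__":
    y, x = np.mgrid[0:256, 0:256] / 256.0
    true_phase = 30.0 * (x - 0.5) ** 2 + 18.0 * y         # smooth test surface
    est = unwrap_ls(wrap(true_phase))
    est += (true_phase - est).mean()                      # fix the free offset
    print(np.abs(est - true_phase).max())                 # small residual
```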

  15. Efficient Parallel Algorithms for Landscape Evolution Modelling

    Science.gov (United States)

    Moresi, L. N.; Mather, B.; Beucher, R.

    2017-12-01

    Landscape erosion and the deposition of sediments by river systems are strongly controlled by topography, rainfall patterns, and the susceptibility of the basement to the action of running water. It is well understood that each of these processes depends on the others, for example: topography results from active tectonic processes; deformation, metamorphosis and exhumation alter the competence of the basement; rainfall patterns depend on topography; uplift and subsidence in response to tectonic stress can be amplified by erosion and sediment deposition. We typically gain understanding of such coupled systems through forward models which capture the essential interactions of the various components and attempt to parameterise those parts of the individual systems that are unresolvable at the scale of the interaction. Here we address the problem of predicting erosion and deposition rates at a continental scale with a resolution of tens to hundreds of metres in a dynamic, Lagrangian framework. This is a typical requirement for a code that interfaces with a mantle / lithosphere dynamics model and demands an efficient, unstructured, parallel implementation. We address this through a very general algorithm that treats all parts of the landscape evolution equations in sparse-matrix form, including those for stream-flow accumulation, dam-filling and catchment determination. This gives us considerable flexibility in developing unstructured, parallel code, and in creating a modular package that can be configured by users to work at different temporal and spatial scales, but also has potential advantages in treating the non-linear parts of the problem in a general manner.
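
    As a toy illustration of treating stream-flow accumulation in sparse-matrix form, the sketch below computes upstream accumulation on a hypothetical six-node drainage network by solving one sparse linear system instead of walking the network node by node; it is not the authors' code.

```python
# Toy "flow accumulation as a sparse linear solve": a_i = rain_i + sum of a_j
# over all upstream neighbours j, i.e. (I - D) a = rain, with D the sparse
# downstream-routing matrix. Hypothetical 6-node catchment.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

receiver = np.array([1, 2, 5, 2, 5, -1])   # receiver[i] = downstream node (-1: outlet)
rain = np.ones(receiver.size)              # unit local runoff at every node

rows = [r for r in receiver if r >= 0]                    # receiving node
cols = [i for i, r in enumerate(receiver) if r >= 0]      # contributing node
D = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(receiver.size,) * 2)

A = (sp.identity(receiver.size, format="csr") - D).tocsc()
acc = spsolve(A, rain)
print(acc)          # the outlet (node 5) accumulates all 6 units of runoff
```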

  16. Reusable Component Model Development Approach for Parallel and Distributed Simulation

    Science.gov (United States)

    Zhu, Feng; Yao, Yiping; Chen, Huilong; Yao, Feng

    2014-01-01

    Model reuse is a key issue to be resolved in parallel and distributed simulation at present. However, component models built by different domain experts usually have diversiform interfaces, couple tightly, and bind closely with simulation platforms. As a result, they are difficult to reuse across different simulation platforms and applications. To address the problem, this paper first proposes a reusable component model framework. Based on this framework, our reusable model development approach is then elaborated, which contains two phases: (1) domain experts create simulation computational modules observing three principles to achieve their independence; (2) the model developer encapsulates these simulation computational modules with six standard service interfaces to improve their reusability. The case study of a radar model indicates that a model developed using our approach has good reusability and is easy to use in different simulation platforms and applications. PMID:24729751

  17. A Parallel, High-Fidelity Radar Model

    Science.gov (United States)

    Horsley, M.; Fasenfest, B.

    2010-09-01

    Accurate modeling of Space Surveillance sensors is necessary for a variety of applications. Accurate models can be used to perform trade studies on sensor designs, locations, and scheduling. In addition, they can be used to predict system-level performance of the Space Surveillance Network in response to a collision or satellite break-up event. A high fidelity physics-based radar simulator has been developed for Space Surveillance applications. This simulator is designed in a modular fashion, where each module describes a particular physical process or radar function (radio wave propagation & scattering, waveform generation, noise sources, etc.) involved in simulating the radar and its environment. For each of these modules, multiple versions are available in order to meet the end users' needs and requirements. For instance, the radar simulator supports different atmospheric models in order to facilitate different methods of simulating refraction of the radar beam. The radar model also has the capability to use highly accurate radar cross sections generated by the method of moments, accelerated by the fast multipole method. To accelerate this computationally expensive model, it is parallelized using MPI. As a testing framework for the radar model, it is incorporated into the Testbed Environment for Space Situational Awareness (TESSA). TESSA is based on a flexible, scalable architecture, designed to exploit high-performance computing resources and allow physics-based simulation of the SSA enterprise. In addition to the radar models, TESSA includes hydrodynamic models of satellite intercept and debris generation, orbital propagation algorithms, optical brightness calculations, optical system models, object detection algorithms, orbit determination algorithms, simulation analysis and visualization tools. Within this framework, observations and tracks generated by the new radar model are compared to results from a phenomenological radar model. In particular, the new model will be

  18. Phase distribution of nitrogen-water two-phase flow in parallel micro channels

    Science.gov (United States)

    Zhou, Mi; Wang, Shuangfeng; Zhou, You

    2017-04-01

    The present work experimentally investigated the phase-splitting characteristics of gas-liquid two-phase flow passing through a horizontally oriented micro-channel device with three parallel micro-channels. The hydraulic diameters of the header and the branch channels were 0.6 and 0.4 mm, respectively. Five different liquids, including de-ionized water and sodium dodecyl sulfate (SDS) solutions of different concentrations, were employed. Unlike water, the surface tension of the SDS solutions applied in this work decreased with increasing mass concentration. Through a series of visual experiments, it was found that the added SDS surfactant clearly facilitated the two-phase flow through the parallel micro-channels, while SDS solution at low concentration led to an inevitable blockage of some outlet branches. Experimental results revealed that the two-phase distribution characteristics depended strongly on the inlet flow pattern and the number of outlet branches. To be specific, at the inlet of slug flow, a large amount of gas preferred flowing into the middle branch channel while the first branch was filled with liquid. However, when the inlet flow pattern was shifted to annular flow, all of the gas passed through the second and the last branches, with a small proportion of liquid flowing into the first channel. By comparison with experimental results obtained from a microchannel device with five parallel micro-T channels, uneven distribution of the two phases is markedly noticeable in the present work.

  19. A Parallel Lattice Boltzmann Model of a Carotid Artery

    Science.gov (United States)

    Boyd, J.; Ryan, S. J.; Buick, J. M.

    2008-11-01

    A parallel implementation of the lattice Boltzmann model is considered for a three dimensional model of the carotid artery. The computational method and its parallel implementation are described. The performance of the parallel implementation on a Beowulf cluster is presented, as are preliminary hemodynamic results.

  20. Accurate modeling of parallel scientific computations

    Science.gov (United States)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.

  1. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
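
    The report's parallel-E parallel-M implementation is not shown in this record; the sketch below illustrates the general idea on a much simpler model (a two-component 1-D Gaussian mixture), with the E step split across worker processes that each return partial sufficient statistics and an M step computed from their sums. All names and the toy data are illustrative.

```python
# Parallel E step via partial sufficient statistics; illustrative toy GMM only.
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def e_step_chunk(x, params):
    """Partial sufficient statistics for one data chunk of a 2-component 1-D GMM."""
    w, mu, sig = params
    pdf = np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)                     # responsibilities
    return r.sum(axis=0), r.T @ x, r.T @ (x ** 2)         # Nk, sum(x), sum(x^2)

def em_parallel(x, n_iter=50, n_workers=4):
    rng = np.random.default_rng(0)
    params = (np.array([0.5, 0.5]), rng.choice(x, 2), np.array([x.std()] * 2))
    chunks = np.array_split(x, n_workers)
    with ProcessPoolExecutor(n_workers) as pool:
        for _ in range(n_iter):
            stats = pool.map(partial(e_step_chunk, params=params), chunks)
            Nk, Sx, Sxx = (sum(s) for s in zip(*stats))   # reduce partial stats
            mu = Sx / Nk                                  # M step from the sums
            sig = np.sqrt(np.maximum(Sxx / Nk - mu ** 2, 1e-12))
            params = (Nk / Nk.sum(), mu, sig)
    return params

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2, 1, 5000), rng.normal(3, 0.5, 5000)])
    print(em_parallel(data))
```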

  2. Parallel Computing for Terrestrial Ecosystem Carbon Modeling

    International Nuclear Information System (INIS)

    Wang, Dali; Post, Wilfred M.; Ricciuto, Daniel M.; Berry, Michael

    2011-01-01

    Terrestrial ecosystems are a primary component of research on global environmental change. Observational and modeling research on terrestrial ecosystems at the global scale, however, has lagged behind its counterparts for oceanic and atmospheric systems, largely because of the unique challenges associated with the tremendous diversity and complexity of terrestrial ecosystems. There are 8 major types of terrestrial ecosystem: tropical rain forest, savannas, deserts, temperate grassland, deciduous forest, coniferous forest, tundra, and chaparral. The carbon cycle is an important mechanism in the coupling of terrestrial ecosystems with climate through biological fluxes of CO2. The influence of terrestrial ecosystems on atmospheric CO2 can be modeled via several means at different timescales. Important processes include plant dynamics, change in land use, as well as ecosystem biogeography. Over the past several decades, many terrestrial ecosystem models (see the 'Model developments' section) have been developed to understand the interactions between terrestrial carbon storage and CO2 concentration in the atmosphere, as well as the consequences of these interactions. Early TECMs generally adapted simple box-flow exchange models, in which photosynthetic CO2 uptake and respiratory CO2 release are simulated in an empirical manner with a small number of vegetation and soil carbon pools. Demands on the kinds and amount of information required from global TECMs have grown. Recently, along with the rapid development of parallel computing, spatially explicit TECMs with detailed process-based representations of carbon dynamics have become attractive, because those models can readily incorporate a variety of additional ecosystem processes (such as dispersal, establishment, growth, mortality, etc.) and environmental factors (such as landscape position, pest populations, disturbances, resource manipulations, etc.), and provide information to frame policy options for climate change.
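
    A minimal sketch of the kind of box-flow carbon model mentioned above, with two pools (vegetation and soil), a constant carbon input and first-order turnover, is given below; all parameter values are hypothetical and purely illustrative.

```python
# Hypothetical two-pool (vegetation, soil) carbon box model with constant input
# and first-order turnover; illustrative only, not any specific TECM.
import numpy as np

def run_box_model(years=500, dt=0.1,
                  npp=6.0,        # net carbon input to vegetation, kg C m^-2 yr^-1
                  k_veg=0.1,      # vegetation turnover rate, yr^-1
                  k_soil=0.02):   # soil (heterotrophic) respiration rate, yr^-1
    veg, soil = 1.0, 10.0
    t = np.arange(0.0, years, dt)
    out = np.empty((t.size, 2))
    for n in range(t.size):
        litter = k_veg * veg                # vegetation -> soil flux
        resp = k_soil * soil                # soil -> atmosphere flux
        veg += dt * (npp - litter)
        soil += dt * (litter - resp)
        out[n] = veg, soil
    return t, out

t, pools = run_box_model()
# analytic steady state: veg* = npp/k_veg = 60, soil* = npp/k_soil = 300
print("final pools (kg C m^-2):", pools[-1])
```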

  3. Exploitation of Parallelism in Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Baer, F.; Tribbia, J.J.; Williamson, D.L.

    1999-03-01

    The US Department of Energy (DOE), through its CHAMMP initiative, hopes to develop the capability to make meaningful regional climate forecasts on time scales exceeding a decade, such capability to be based on numerical prediction type models. We propose research to contribute to each of the specific items enumerated in the CHAMMP announcement (Notice 91-3); i.e., to consider theoretical limits to prediction of climate and climate change on appropriate time scales, to develop new mathematical techniques to utilize massively parallel processors (MPP), to actually utilize MPPs as a research tool, and to develop improved representations of some processes essential to climate prediction. In particular, our goals are to: (1) Reconfigure the prediction equations such that the time iteration process can be compressed by use of MPP architecture, and to develop appropriate algorithms. (2) Develop local subgrid scale models which can provide time and space dependent parameterization for a state-of-the-art climate model to minimize the scale resolution necessary for a climate model, and to utilize MPP capability to simultaneously integrate those subgrid models and their statistics. (3) Capitalize on the MPP architecture to study the inherent ensemble nature of the climate problem. By careful choice of initial states, many realizations of the climate system can be determined concurrently and more realistic assessments of the climate prediction can be made in a realistic time frame. To explore these initiatives, we will exploit all available computing technology, and in particular MPP machines. We anticipate that significant improvements in modeling of climate on the decadal and longer time scales for regional space scales will result from our efforts.

  4. NonLinear Parallel OPtimization Tool, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The technological advancement proposed is a novel large-scale Noninear Parallel OPtimization Tool (NLPAROPT). This software package will eliminate the computational...

  5. Parallel Nonlinear Optimization for Astrodynamic Navigation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — CU Aerospace proposes the development of a new parallel nonlinear program (NLP) solver software package. NLPs allow the solution of complex optimization problems,...

  6. Visual Interfaces for Parallel Simulations (VIPS), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Configuring the 3D geometry and physics of large scale parallel physics simulations is increasingly complex. Given the investment in time and effort to run these...

  7. Large change in voltage at phase reversal improves biphasic defibrillation thresholds. Parallel-series mode switching.

    Science.gov (United States)

    Yamanouchi, Y; Mowrey, K A; Nadzam, G R; Hills, D G; Kroll, M W; Brewer, J E; Donohoo, A M; Wilkoff, B L; Tchou, P J

    1996-10-01

    Multiple factors contribute to an improved defibrillation threshold of biphasic shocks. The leading-edge voltage of the second phase may be an important factor in reducing the defibrillation threshold. We tested two experimental biphasic waveforms with large voltage changes at phase reversal. The phase 2 leading-edge voltage was twice the phase 1 trailing-edge voltage. This large voltage change was achieved by switching two capacitors from parallel to series mode at phase reversal. Two capacitors were tested (60/15 microfarads [microF] and 90/22.5 microF) and compared with two control biphasic waveforms for which the phase 1 trailing-edge voltage equaled the phase 2 leading-edge voltage. The control waveforms were incorporated into clinical (135/135 microF) or investigational devices (90/90 microF). Defibrillation threshold parameters were evaluated in eight anesthetized pigs by use of a nonthoracotomy transvenous lead to a can electrode system. The stored energy at the defibrillation threshold (in joules) was 8.2 +/- 1.5 for 60/15 microF (P ...). Large voltage changes at phase reversal caused by parallel-series mode switching appeared to improve the ventricular defibrillation threshold in a pig model compared with a currently available biphasic waveform. The 60/15-microF capacitor performed as well as the 90/22.5-microF capacitor in the experimental waveform. Thus, smaller capacitors may allow reduction in device size without sacrificing defibrillation threshold energy requirements.
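
    The 60/15 and 90/22.5 microfarad labels are consistent with two equal capacitors being switched from parallel to series: the bank capacitance drops by a factor of four while the voltages held by the two capacitors add, doubling the phase 2 leading edge. A small illustrative calculation follows (the 300 V trailing-edge value is an assumed number, not taken from the study).

```python
# Parallel-to-series mode switching of two equal capacitors (illustrative only).
C = 30e-6                        # each capacitor: 30 uF  (use 45e-6 for 90/22.5)
C_phase1 = 2 * C                 # parallel bank during phase 1  -> 60 uF
C_phase2 = C / 2                 # series bank during phase 2    -> 15 uF
V1_trailing = 300.0              # assumed phase-1 trailing-edge voltage, volts
V2_leading = 2 * V1_trailing     # series connection stacks the two capacitor voltages
print(C_phase1 * 1e6, C_phase2 * 1e6, V2_leading)   # 60.0 15.0 600.0
```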

  8. A model for dealing with parallel processes in supervision

    Directory of Open Access Journals (Sweden)

    Lilja Cajvert

    2011-03-01

    Supervision in social work is essential for successful outcomes when working with clients. In social work, unconscious difficulties may arise and similar difficulties may occur in supervision as parallel processes. In this article, the development of a practice-based model of supervision to deal with parallel processes in supervision is described. The model has six phases. In the first phase, the focus is on the supervisor’s inner world, his/her own reflections and observations. In the second phase, the supervision situation is “frozen”, and the supervisees are invited to join the supervisor in taking a meta-perspective on the current situation of supervision. The focus in the third phase is on the inner world of all the group members as well as the visualization and identification of reflections and feelings that arose during the supervision process. Phase four focuses on the supervisee who presented a case, and in phase five the focus shifts to the common understanding and theorization of the supervision process as well as the definition and identification of possible parallel processes. In the final phase, the supervisee, with the assistance of the supervisor and other members of the group, develops a solution and determines how to proceed with the client in treatment. This article uses phenomenological concepts to provide a theoretical framework for the supervision model. Phenomenological reduction is an important approach for examining, externalizing and visualizing the inner worlds of the supervisor and supervisees. A model for dealing with parallel processes in supervision: to be successful in helping clients, supervision is crucial in social work. During the work with clients, implicit difficulties can arise, and similar difficulties sometimes also emerge during supervision; these are called parallel processes. This article describes a practice-based model for dealing with such parallel ...

  9. Parallel power electronics filters in three-phase four-wire systems principle, control and design

    CERN Document Server

    Wong, Man-Chung; Lam, Chi-Seng

    2016-01-01

    This book describes parallel power electronic filters for 3-phase 4-wire systems, focusing on the control, design and system operation. It presents the basics of power-electronics techniques applied in power systems as well as the advanced techniques in controlling, implementing and designing parallel power electronics converters. Power-quality compensation is achieved using active filters and hybrid filters, and the circuit models, control principles and operational practice problems have been verified by principle study, simulation and experimental results. The state-of-the-art research findings were mainly developed by a team at the University of Macau. Offering background information and related novel techniques, this book is a valuable resource for electrical engineers and researchers wanting to work on energy saving using power-quality compensators or renewable energy power electronics systems.

  10. Graph Partitioning Models for Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, B.; Kolda, T.G.

    1999-03-02

    Calculations can naturally be described as graphs in which vertices represent computation and edges reflect data dependencies. By partitioning the vertices of a graph, the calculation can be divided among processors of a parallel computer. However, the standard methodology for graph partitioning minimizes the wrong metric and lacks expressibility. We survey several recently proposed alternatives and discuss their relative merits.
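
    The "wrong metric" point can be made concrete with a tiny example: the classical objective counts cut edges, whereas the data actually communicated is better captured by how many distinct remote parts each boundary vertex must send its value to. The graph and partition below are made up for illustration.

```python
# Edge cut versus total communication volume for a given graph partition.
from collections import defaultdict

def partition_metrics(edges, part):
    """Return (edge_cut, comm_volume) for an undirected edge list and a
    vertex -> part assignment."""
    edge_cut = sum(1 for u, v in edges if part[u] != part[v])
    remote = defaultdict(set)              # vertex -> set of other parts it touches
    for u, v in edges:
        if part[u] != part[v]:
            remote[u].add(part[v])
            remote[v].add(part[u])
    comm_volume = sum(len(parts) for parts in remote.values())
    return edge_cut, comm_volume

# hypothetical 6-vertex graph split over two processors
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 3), (1, 4)]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(partition_metrics(edges, part))      # (3, 5): the two metrics differ
```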

  11. Modeling of liquid phases

    CERN Document Server

    Soustelle, Michel

    2015-01-01

    This book is part of a set of books which offers advanced students successive characterization tool phases, the study of all types of phase (liquid, gas and solid, pure or multi-component), process engineering, chemical and electrochemical equilibria, and the properties of surfaces and phases of small sizes. Macroscopic and microscopic models are in turn covered with a constant correlation between the two scales. Particular attention has been given to the rigor of mathematical developments. This second volume in the set is devoted to the study of liquid phases.

  12. Performance Tuning and Evaluation of a Parallel Community Climate Model

    Energy Technology Data Exchange (ETDEWEB)

    Drake, J.B.; Worley, P.H.; Hammond, S.

    1999-11-13

    The Parallel Community Climate Model (PCCM) is a message-passing parallelization of version 2.1 of the Community Climate Model (CCM) developed by researchers at Argonne and Oak Ridge National Laboratories and at the National Center for Atmospheric Research in the early to mid 1990s. In preparation for use in the Department of Energy's Parallel Climate Model (PCM), PCCM has recently been updated with new physics routines from version 3.2 of the CCM, improvements to the parallel implementation, and ports to the SGI/Cray Research T3E and Origin 2000. We describe our experience in porting and tuning PCCM on these new platforms, evaluating the performance of different parallel algorithm options and comparing performance between the T3E and Origin 2000.

  13. Inductively Modeling Parallel, Normal, and Frictional Forces

    Science.gov (United States)

    Wyrembeck, Edward P.

    2005-02-01

    This year, instead of resolving the weight mg of an object resting on an incline into force components parallel and perpendicular to the surface of the incline, I asked my students to actually measure these forces at various angles of inclination and graph the data. I wanted my students to inductively discover mg sin θ and mg cos θ, and to use these graphs to confront the passive nature of the static frictional force. I believe the graphs themselves are very powerful conceptual tools that are often never discovered and used by students who only learn to use equations at specific angles to solve specific quantitative problems.
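
    As a worked version of the measurement described above (the mass, the angle range and g = 9.81 m/s^2 are arbitrary illustrative choices), the expected readings simply trace out mg sin θ and mg cos θ:

        import numpy as np

        m, g = 0.50, 9.81                     # kg and m/s^2, illustrative values only
        angles_deg = np.arange(0, 95, 5)      # inclination angles in degrees
        theta = np.radians(angles_deg)

        f_parallel = m * g * np.sin(theta)    # weight component along the incline
        f_normal = m * g * np.cos(theta)      # weight component perpendicular to it

        for ang, fp, fn in zip(angles_deg, f_parallel, f_normal):
            print(f"{ang:3d} deg   parallel {fp:5.2f} N   normal {fn:5.2f} N")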

  14. Requirements and Problems in Parallel Model Development at DWD

    Directory of Open Access Journals (Sweden)

    Ulrich Schättler

    2000-01-01

    Full Text Available Nearly 30 years after introducing the first computer model for weather forecasting, the Deutscher Wetterdienst (DWD) is developing the 4th generation of its numerical weather prediction (NWP) system. It consists of a global grid point model (GME) based on a triangular grid and a non-hydrostatic Lokal Modell (LM). The operational demand for running this new system is immense and can only be met by parallel computers. From the experience gained in developing earlier NWP models, several new problems had to be taken into account during the design phase of the system. Most important were portability (including efficiency of the programs on several computer architectures) and ease of code maintainability. Also the organization and administration of the work done by developers from different teams and institutions is more complex than it used to be. This paper describes the models and gives some performance results. The modular approach used for the design of the LM is explained and the effects on the development are discussed.

  15. Positive phase error from parallel conductance in tetrapolar bio-impedance measurements and its compensation

    Directory of Open Access Journals (Sweden)

    Ivan M Roitt

    2010-01-01

    Full Text Available Bioimpedance measurements are of great use and can provide considerable insight into biological processes. However, there are a number of possible sources of measurement error that must be considered. The most dominant source of error is found in bipolar measurements where electrode polarisation effects are superimposed on the true impedance of the sample. Even with the tetrapolar approach that is commonly used to circumvent this issue, other errors can persist. Here we characterise the positive phase and rise in impedance magnitude with frequency that can result from the presence of any parallel conductive pathways in the measurement set-up. It is shown that fitting experimental data to an equivalent electrical circuit model allows for accurate determination of the true sample impedance as validated through finite element modelling (FEM) of the measurement chamber. Finally, the model is used to extract dispersion information from cell cultures to characterise their growth.

  16. Parallelized Radiative Transport and Phase Space Distributions in Heavy Ion Collisions

    Science.gov (United States)

    Damodaran, Mridula

    Numerical solutions of the Boltzmann transport equation (BTE) present a framework for modeling non-equilibrium dynamics in heavy ion collisions. However, the computational power required to solve the seven-dimensional integro-differential equation reaches impractical levels for realistic, high-statistics simulations involving radiative 2 to 3 and 3 to 2 scattering processes with sequential (single-processor) algorithms. This thesis presents a new parallelized MPC/Grid code that was developed to enable such simulations. The code was tested extensively for correctness, and speedups of up to about 30x were seen relative to single-processor execution. The parallelized code was then used in a study that required high-statistic simulations, to address the ambiguity in the conversion from a fluid dynamical description to a particle description of a system. Such conversion is necessary in all comparisons of hydrodynamic simulation results to experimental data. Four existing fluid-to-particle conversion models for shear viscous fluids were assessed based on their ability to reconstruct, using hydrodynamic variables alone, the full transport phase space density for a massless one-component gas undergoing 2 to 2 scatterings in a 0+1D boost-invariant Bjorken scenario. Besides establishing the regions of validity of the four models, novel improvements are proposed that greatly increase the reconstruction accuracy of these models (by about 10x relative to the most commonly used model). Analytical simplifications of the BTE in the near-free-streaming regime are also presented, in order to gain insight into the functional form of phase space densities in the presence of interactions. These will enable the construction of yet more accurate, theoretically well-founded fluid-to-particle conversion models in the future.

  17. Parallel community climate model: Description and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Drake, J.B.; Flanery, R.E.; Semeraro, B.D.; Worley, P.H. [and others]

    1996-07-15

    This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.

  18. Parallelization of the NASA Goddard Cumulus Ensemble Model for Massively Parallel Computing

    Directory of Open Access Journals (Sweden)

    Hann-Ming Henry Juang

    2007-01-01

    Full Text Available Massively parallel computing, using a message passing interface (MPI), has been implemented into a three-dimensional version of the Goddard Cumulus Ensemble (GCE) model. The implementation uses the domain-resemble concept to design a code structure for both the whole domain and sub-domains after decomposition. Instead of inserting a group of MPI-related statements into the model routine, these statements are packed into a single routine. In other words, the model code issues only a single call statement in each place, so there is minimal impact on the original code. Therefore, the model is easily modified and/or managed by the model developers and/or users, who may have little knowledge of massively parallel computing.

  19. Static Stiffness Modeling of Parallel Kinematics Machine Tool Joints

    OpenAIRE

    O. K. Akmaev; B. A. Enikeev; A. I. Nigmatullin

    2015-01-01

    The possible variants of an original parallel kinematics machine-tool structure are explored in this article. A new Hooke's universal joint design based on needle roller bearings with the ability of a preload setting is proposed. The bearing stiffness modeling is carried out using a variety of methods. The elastic deformation modeling of a Hooke’s joint and a spherical rolling joint has been developed to assess the possibility of using these joints in machine tools with parallel kinematics.

  20. The research of parallel-coupled linear-phase superconducting filter

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Tianliang; Zhou, Liguo; Yang, Kai, E-mail: kyang@uestc.edu.cn; Luo, Chao; Jiang, Mingyan; Dang, Wei; Ren, Xiangyang

    2015-12-15

    Highlights: • A parallel-connected linear phase filter can be achieved when the group delays of the sub-networks compensate each other. • We give the coupling and routing diagrams of four linear phase filters with self-synthesized coupling matrices, and verify the correctness of the theoretical data and the feasibility of the circuit design. • There are a variety of topological coupling and routing diagrams for a filter of the same order. • We give a reasonable arrangement of design steps for high-order parallel-coupled linear phase filters. - Abstract: This paper presents research on the mechanism of a linear phase filter constructed with parallel-connected sub-networks, considering that the linear phase characteristic of a filter can be achieved when the group delays of the sub-networks compensate each other. The paper also gives several coupling and routing diagrams of linear phase filters with different parallel-connected networks, and then the coupling matrices of three 8th-order filters and one 10th-order filter are synthesized. One of the coupling matrices is utilized to design an 8th-order parallel-connected-network high temperature superconducting (HTS) linear phase filter with two pairs of transmission zeros, so as to verify the correctness of the theoretical data and the feasibility of the circuit design for the proposed 8th-order and higher-order parallel-connected-network linear phase filters. The HTS linear phase filter is designed on a YBCO/LaAlO3/YBCO superconducting substrate; at 77 K, the measured center frequency is 2000 MHz with a bandwidth of 30 MHz, the insertion loss is less than 0.3 dB and the reflection is better than −12.5 dB in the passband. The group delay is less than ±5 ns over 60% of the passband, which shows that the filter has a good linear phase characteristic.

  1. Efficient Parallel Execution of Event-Driven Electromagnetic Hybrid Models

    Energy Technology Data Exchange (ETDEWEB)

    Perumalla, Kalyan S [ORNL]; Karimabadi, Dr. Homa [SciberQuest Inc.]; Fujimoto, Richard [ORNL]

    2007-01-01

    New discrete-event formulations of physics simulation models are emerging that can outperform traditional time-stepped models, especially in simulations containing multiple timescales. Detailed simulation of the Earth's magnetosphere, for example, requires execution of sub-models that operate at timescales that differ by orders of magnitude. In contrast to time-stepped simulation which requires tightly coupled updates to almost the entire system state at regular time intervals, the new discrete event simulation (DES) approaches help evolve the states of sub-models on relatively independent timescales. However, in contrast to relative ease of parallelization of time-stepped codes, the parallelization of DES-based models raises challenges with respect to their scalability and performance. One of the key challenges is to improve the computation granularity to offset synchronization and communication overheads within and across processors. Our previous work on parallelization was limited in scalability and runtime performance due to such challenges. Here we report on optimizations we performed on DES-based plasma simulation models to improve parallel execution performance. The mapping of the model to simulation processes is optimized via aggregation techniques, and the parallel runtime engine is optimized for communication and memory efficiency. The net result is the capability to simulate hybrid particle-in-cell (PIC) models with over 2 billion ion particles using 512 processors on supercomputing platforms.

  2. Generalized Analytical Program of Thyristor Phase Control Circuit with Series and Parallel Resonance Load

    OpenAIRE

    Nakanishi, Sen-ichiro; Ishida, Hideaki; Himei, Toyoji

    1981-01-01

    A systematic analytical method is required for the ac phase control circuit by means of an inverse parallel thyristor pair which has a series and parallel L-C resonant load, because the phase control action causes abnormal and interesting phenomena, such as an extreme increase of voltage and current, a unique increase and decrease of the contained higher harmonics, and a wide variation of power factor, etc. In this paper, the program for the analysis of the thyristor phase control circuit with...

  3. Towards a streaming model for nested data parallelism

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2013-01-01

    NESL's flattening execution strategy comes at the price of potentially prohibitive space usage in the common case of computations with an excess of available parallelism, such as dense-matrix multiplication. We present a simple nested data-parallel functional language and associated cost semantics that retains NESL's intuitive work-depth model for time complexity, but also allows highly parallel computations to be expressed in a space-efficient way, in the sense that memory usage on a single (or a few) processors is of the same order as for a sequential formulation of the algorithm, and in general scales smoothly ... processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level. The language definition and implementation are still very much work in progress.

  4. Modeling and Control of Primary Parallel Isolated Boost Converter

    DEFF Research Database (Denmark)

    Mira Albert, Maria del Carmen; Hernandez Botella, Juan Carlos; Sen, Gökhan

    2012-01-01

    In this paper state space modeling and closed loop controlled operation have been presented for primary parallel isolated boost converter (PPIBC) topology as a battery charging unit. Parasitic resistances have been included to have an accurate dynamic model. The accuracy of the model has been...

  5. Phase-conjugate interferometer to estimate refractive index and thickness of transparent plane parallel plates

    Energy Technology Data Exchange (ETDEWEB)

    Pastrana-Sanchez, R.; Rodriguez-Zurita, G.; Vazquez-Castillo, J. F. [Benemerita Universidad Autonoma de Puebla, Puebla (Mexico)]

    2001-04-01

    A technique to estimate the refractive index and thickness of homogeneous plane parallel dielectric plates is proposed using a phase-conjugate interferometer, in which counting of interference fringes is employed. The light beam impinges on a tilted plate before it enters a phase-conjugate interferometer, and a count of the fringes passing through a given reference at the observing plane gives the phase changes as a function of tilting angle. The obtained data is fitted to a mathematical model, which leads to the determination of both refractive index and thickness simultaneously. In this letter, experimental data from two interferometers are also discussed for comparison. One uses an externally pumped phase-conjugate mirror based on a BSO photorefractive crystal, and the other uses conventional mirrors. Results show that the phase sensitivity of the phase-conjugate interferometer is not simply twice the corresponding sensitivity of the conventional version.

  6. Petascale Hierarchical Modeling VIA Parallel Execution

    Energy Technology Data Exchange (ETDEWEB)

    Gelman, Andrew [Principal Investigator]

    2014-04-14

    The research allows more effective model building. By allowing researchers to fit complex models to large datasets in a scalable manner, our algorithms and software enable more effective scientific research. In the new area of “big data,” it is often necessary to fit “big models” to adjust for systematic differences between sample and population. For this task, scalable and efficient model-fitting tools are needed, and these have been achieved with our new Hamiltonian Monte Carlo algorithm, the no-U-turn sampler, and our new C++ program, Stan. In layman’s terms, our research enables researchers to create improved mathematical models for large and complex systems.

  7. Theoretical investigations on two-phase flow instability in parallel channels under axial non-uniform heating

    International Nuclear Information System (INIS)

    Lu, Xiaodong; Wu, Yingwei; Zhou, Linglan; Tian, Wenxi; Su, Guanghui; Qiu, Suizheng; Zhang, Hong

    2014-01-01

    Highlights: • We developed a model based on the homogeneous flow model to analyze two-phase flow instability in parallel channels. • The influence of axial non-uniform heating on the system stability has been investigated. • Influences of various factors on system instability under cosine heat flux have been studied. • The system under top-peaked heat flux is the most stable system. - Abstract: Two-phase flow instability in parallel channels heated by an axially non-uniform heat flux has been theoretically studied in this paper. The system control equations of the parallel channels were established based on the homogeneous flow model in the two-phase region. A semi-implicit finite-difference scheme and a staggered-mesh method were used to discretize the equations, and the difference equations were solved by the chasing method. Cosine, bottom-peaked and top-peaked heat fluxes were used to study the influence of non-uniform heating on the two-phase flow instability of the parallel channels system. The marginal stability boundaries (MSB) of the parallel channels and three-dimensional instability spaces (or instability reefs) under different heat flux conditions have been obtained. Compared with axially uniform heating, axially non-uniform heating will affect the system stability. Cosine and bottom-peaked heat fluxes can destabilize the system in the high inlet subcooling region, while the opposite effect can be found in the low inlet subcooling region. However, a top-peaked heat flux can enhance the system stability in the whole region. In addition, for the cosine heat flux, increasing the system pressure or the inlet resistance coefficient can strengthen the system stability, and increasing the heating power will destabilize the system. The influence of the inlet subcooling number on the system stability is multi-valued under the cosine heat flux.
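
    The "chasing method" mentioned above is the standard elimination scheme for tridiagonal systems (also known as the Thomas algorithm); a minimal serial sketch of it, not taken from the paper's code, is:

        import numpy as np

        def chasing_solve(a, b, c, d):
            """Solve a tridiagonal system with sub-diagonal a, diagonal b,
            super-diagonal c and right-hand side d (a[0] and c[-1] are unused)."""
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                      # forward "chasing" sweep
                denom = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / denom
                dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):             # backward substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        # quick check on a small diagonally dominant system
        n = 6
        a, c = -np.ones(n), -np.ones(n)
        b = 3.0 * np.ones(n)
        d = np.arange(1.0, n + 1.0)
        x = chasing_solve(a, b, c, d)
        T = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
        print(np.allclose(T @ x, d))                   # True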

  8. Parallel Computing Characteristics of Two-Phase Thermal-Hydraulics code, CUPID

    International Nuclear Information System (INIS)

    Lee, Jae Ryong; Yoon, Han Young

    2013-01-01

    The parallelized CUPID code has proved able to perform multi-dimensional thermal-hydraulic analyses, as validated against various conceptual problems and experimental data. In this paper, the characteristics of the parallelized CUPID code were investigated. Both single- and two-phase simulations are taken into account. Since the scalability of a parallel simulation is known to be better for fine mesh systems, two types of mesh systems are considered. In addition, the dependency on the preconditioner of the matrix solver was also compared. The scalability for single-phase flow is better than that for two-phase flow due to the smaller number of iterations needed to solve the pressure matrix. The parallel performance of the CUPID code was investigated in terms of scalability. The code was parallelized with a domain decomposition method, and the MPI library was adopted to communicate the information at the interface cells. As the number of mesh cells increases, the scalability improves. For a given mesh, the single-phase flow simulation with a diagonal preconditioner shows the best speedup. However, for the two-phase flow simulation, the ILU preconditioner is recommended since it reduces the overall simulation time.

  9. Sequential and Parallel Attack Tree Modelling

    NARCIS (Netherlands)

    Arnold, Florian; Guck, Dennis; Kumar, Rajesh; Stoelinga, Mariëlle Ida Antoinette; Koornneef, Floor; van Gulijk, Coen

    The intricacy of socio-technical systems requires a careful planning and utilisation of security resources to ensure uninterrupted, secure and reliable services. Even though many studies have been conducted to understand and model the behaviour of a potential attacker, the detection of crucial

  10. Phase Field Modeling Using PetIGA

    KAUST Repository

    Vignal, Philippe

    2013-06-01

    Phase field modeling has become a widely used framework in the computational material science community. Its ability to model different problems by defining appropriate phase field parameters and relating them to a free energy functional makes it highly versatile. Thermodynamically consistent partial differential equations can then be generated by assuming dissipative dynamics, and setting up the problem as one of minimizing this free energy. The equations are nonetheless challenging to solve, and having a highly efficient and parallel framework to solve them is necessary. In this work, a brief review on phase field models is given, followed by a short analysis of the Phase Field Crystal Model solved with Isogeometric Analysis using PetIGA. We end with an introduction to a new modeling concept, where free energy functions are built with a periodic equilibrium structure in mind.
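
    For readers unfamiliar with the framework sketched above, the generic structure (written here in standard textbook form, not the thesis's specific Phase Field Crystal functional) is a free energy of the type

        F[\phi] \;=\; \int_\Omega \Big( f(\phi) + \tfrac{\kappa}{2}\,\lvert \nabla\phi \rvert^{2} \Big)\, \mathrm{d}V ,

    from which dissipative dynamics follow as gradient flows, e.g. non-conserved (Allen-Cahn) and conserved (Cahn-Hilliard) evolution:

        \partial_t \phi \;=\; -M\,\frac{\delta F}{\delta \phi} \;=\; M\big(\kappa\,\nabla^{2}\phi - f'(\phi)\big),
        \qquad
        \partial_t \phi \;=\; \nabla \cdot \Big( M\,\nabla \frac{\delta F}{\delta \phi} \Big).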

  11. An Integrated Inductor For Parallel Interleaved Three-Phase Voltage Source Converters

    DEFF Research Database (Denmark)

    Gohil, Ghanshyamsinh Vijaysinh; Bede, Lorand; Teodorescu, Remus

    2016-01-01

    Three phase Voltage Source Converters (VSCs) are often connected in parallel to realize a high-current output converter system. The harmonic quality of the resultant switched output voltage can be improved by interleaving the carrier signals of these parallel-connected VSCs. As a result, the line ... of the state-of-the-art filtering solution. The performance of the integrated inductor is also verified by experimental measurements.

  12. Sharing of nonlinear load in parallel-connected three-phase converters

    DEFF Research Database (Denmark)

    Borup, Uffe; Blaabjerg, Frede; Enjeti, Prasad N.

    2001-01-01

    In this paper, a new control method is presented which enables equal sharing of linear and nonlinear loads in three-phase power converters connected in parallel, without communication between the converters. The paper focuses on solving the problem that arises when two converters with harmonic compensation are connected in parallel. Without the new solution, they are normally not able to distinguish the harmonic currents that flow to the load and harmonic currents that circulate between the converters. Analysis and experimental results on two 90-kVA 400-Hz converters in parallel are presented. The results show that both linear and nonlinear loads can be shared equally by the proposed concept.

  13. Independent slab-phase modulation combined with parallel imaging in bilateral breast MRI.

    Science.gov (United States)

    Han, Misung; Beatty, Philip J; Daniel, Bruce L; Hargreaves, Brian A

    2009-11-01

    Independent slab-phase modulation allows three-dimensional imaging of multiple volumes without encoding the space between volumes, thus reducing scan time. Parallel imaging further accelerates data acquisition by exploiting coil sensitivity differences between volumes. This work compared bilateral breast image quality from self-calibrated parallel imaging reconstruction methods such as modified sensitivity encoding, generalized autocalibrating partially parallel acquisitions and autocalibrated reconstruction for Cartesian sampling (ARC) for data with and without slab-phase modulation. A study showed an improvement of image quality by incorporating slab-phase modulation. Geometry factors measured from phantom images were more homogenous and lower on average when slab-phase modulation was used for both mSENSE and GRAPPA reconstructions. The resulting improved signal-to-noise ratio (SNR) was validated for in vivo images as well using ARC instead of GRAPPA, illustrating average SNR efficiency increases in mSENSE by 5% and ARC by 8% based on region of interest analysis. Furthermore, aliasing artifacts from mSENSE reconstruction were reduced when slab-phase modulation was used. Overall, slab-phase modulation with parallel imaging improved image quality and efficiency for 3D bilateral breast imaging. (c) 2009 Wiley-Liss, Inc.

  14. Study of error modeling in kinematic calibration of parallel manipulators

    Directory of Open Access Journals (Sweden)

    Liping Wang

    2016-10-01

    Full Text Available Error modeling is the foundation of a kinematic calibration which is a main approach to assure the accuracy of parallel manipulators. This article investigates the influence of error model on the kinematic calibration of parallel manipulators. Based on the coupling analysis between error parameters, an identifiability index for evaluating the error model is proposed. Taking a 3PRS parallel manipulator as an example, three error models with different values of identifiability index are given. With the same parameter identification, measurement, and compensation method, the computer simulations and prototype experiments of the kinematic calibration with each error model are performed. The simulation and experiment results show that the kinematic calibration using the error model with a bigger value of identifiability index can lead to a better accuracy of the manipulator. Then, an approach of error modeling is proposed to obtain a bigger value of identifiability index. The study of this article is useful for error modeling in kinematic calibration of other parallel manipulators.

  15. A new parallelization algorithm of ocean model with explicit scheme

    Science.gov (United States)

    Fu, X. D.

    2017-08-01

    This paper focuses on the parallelization of an ocean model with an explicit scheme, which is one of the most commonly used schemes in the discretization of the governing equations of ocean models. The characteristic of an explicit scheme is that the calculation is simple and that the value at a given grid point depends only on values from the previous time step, which means that one does not need to solve sparse linear equations when solving the governing equations of the ocean model. Exploiting these characteristics, this paper designs a parallel algorithm, named halo cells update, that requires only tiny modifications of the original ocean model and little change of its space and time steps; the model is parallelized by adding a transmission module between sub-domains. The approach is demonstrated by parallelizing the GRGO (Global Reduced Gravity Ocean) model with halo update. The results demonstrate that higher speedups can be achieved for different problem sizes.
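
    A minimal sketch of the halo-cells-update pattern for a 1D explicit scheme is given below; it assumes the mpi4py package and is an illustration of the idea, not the GRGO code itself:

        import numpy as np
        from mpi4py import MPI                  # assumes mpi4py is installed

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # each rank owns an interior block plus one halo cell on either side,
        # arranged as a periodic ring of sub-domains
        n_local = 16
        u = np.full(n_local + 2, float(rank))
        left, right = (rank - 1) % size, (rank + 1) % size

        def update_halos(field):
            """Transmission module between sub-domains: fill the halo cells with
            the neighbouring sub-domains' edge values."""
            field[0] = comm.sendrecv(field[-2], dest=right, source=left)
            field[-1] = comm.sendrecv(field[1], dest=left, source=right)
            return field

        for step in range(100):
            u = update_halos(u)
            # explicit update: every interior value depends only on the previous step
            u[1:-1] = u[1:-1] + 0.25 * (u[:-2] - 2.0 * u[1:-1] + u[2:])

        print(rank, u[1], u[-2])

    Run with, say, "mpiexec -n 4 python halo_demo.py" (the file name is arbitrary); the only parallel machinery the time-stepping loop sees is the single update_halos call per step, which mirrors the "tiny modification" claim above.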

  16. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    Science.gov (United States)

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
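
    The tutorial's own examples are in MATLAB and R; the same embarrassingly parallel pattern in Python (with a toy risk model invented for this sketch) simply farms independent replications out to a pool of worker processes:

        import multiprocessing as mp
        import random

        def one_run(args):
            """One independent replication: estimate the probability that a
            standard-normal 'loss' exceeds a threshold."""
            seed, n_draws, threshold = args
            rng = random.Random(seed)
            hits = sum(rng.gauss(0.0, 1.0) > threshold for _ in range(n_draws))
            return hits / n_draws

        if __name__ == "__main__":
            tasks = [(seed, 50_000, 2.0) for seed in range(40)]  # 40 independent runs
            with mp.Pool(processes=4) as pool:                   # 4 worker processes
                estimates = pool.map(one_run, tasks)             # no communication between tasks
            print("mean exceedance probability:", sum(estimates) / len(estimates))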

  17. A one-dimensional heat transfer model for parallel-plate thermoacoustic heat exchangers.

    Science.gov (United States)

    de Jong, J A; Wijnant, Y H; de Boer, A

    2014-03-01

    A one-dimensional (1D) laminar oscillating flow heat transfer model is derived and applied to parallel-plate thermoacoustic heat exchangers. The model can be used to estimate the heat transfer from the solid wall to the acoustic medium, which is required for the heat input/output of thermoacoustic systems. The model is implementable in existing (quasi-)1D thermoacoustic codes, such as DeltaEC. Examples of generated results show good agreement with literature results. The model allows for arbitrary wave phasing; however, it is shown that the wave phasing does not significantly influence the heat transfer.

  18. Parallelization of the model-based iterative reconstruction algorithm DIRA

    International Nuclear Information System (INIS)

    Oertenberg, A.; Sandborg, M.; Alm Carlsson, G.; Malusek, A.; Magnusson, M.

    2016-01-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelization of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelization of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelized using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelization of the code with the OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelization with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. (authors)
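
    Amdahl's law is not discussed in the abstract, but it puts the reported OpenMP result in context: a speedup of 15 on 16 cores is only possible if almost none of the run time remains serial. A back-of-the-envelope check (illustrative only):

        # Amdahl's law: speedup on N cores with serial fraction s is S = 1/(s + (1-s)/N).
        # Solving for s from the numbers reported above:
        S, N = 15.0, 16
        s = (N / S - 1.0) / (N - 1.0)
        print(f"implied serial fraction: {s:.3%}")   # roughly 0.4%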

  19. Modelling and parallel calculation of a kinetic boundary layer

    International Nuclear Information System (INIS)

    Perlat, Jean Philippe

    1998-01-01

    This research thesis aims at addressing reliability and cost issues in the numerical simulation of flows in the transition regime. The first step has been to reduce the calculation cost and memory space of the Monte Carlo method, which is known to provide performance and reliability for rarefied regimes. Vector and parallel computers allow this objective to be reached. Here, a MIMD (multiple instructions, multiple data) machine has been used which implements parallel calculation at different levels of parallelization. Parallelization procedures have been adapted, and the results showed that parallelization by decomposition of the calculation domain was far more efficient. Due to reliability issues related to the statistical nature of Monte Carlo methods, a new deterministic model was necessary to simulate gas molecules in the transition regime. New models and hyperbolic systems have therefore been studied. One is chosen which allows the thermodynamic values (density, average velocity, temperature, deformation tensor, heat flow) present in the Navier-Stokes equations to be determined, and the evolution equations of these thermodynamic values are described for the mono-atomic case. Their numerical resolution is reported. A kinetic scheme is developed which complies with the structure of all these systems and which naturally expresses the boundary conditions. The validation of the obtained 14-moment model is performed on shock problems and on Couette flows. [fr]

  20. Parallelization of elliptic solver for solving 1D Boussinesq model

    Science.gov (United States)

    Tarwidi, D.; Adytia, D.

    2018-03-01

    In this paper, a parallel implementation of an elliptic solver for the 1D Boussinesq model is presented. The numerical solution of the Boussinesq model is obtained by applying a staggered-grid scheme to the continuity, momentum, and elliptic equations of the model. The tridiagonal system emerging from the numerical scheme of the elliptic equation is solved by the cyclic reduction algorithm. The parallel implementation of cyclic reduction is executed on multicore processors with shared-memory architectures using OpenMP. To measure the performance of the parallel program, the number of grid points is varied from 2^8 to 2^14. Two numerical test cases, i.e. the propagation of a solitary wave and of a standing wave, are proposed to evaluate the parallel program. The numerical results are verified against the analytical solutions of the solitary and standing waves. The best speedups of the solitary and standing wave test cases are about 2.07 with 2^14 grid points and 1.86 with 2^13 grid points, respectively, when executed using 8 threads. Moreover, the best efficiencies of the parallel program are 76.2% and 73.5% for the solitary and standing wave test cases, respectively.
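
    A serial reference implementation of cyclic reduction for a tridiagonal system is sketched below (an illustration of the algorithm, not the paper's OpenMP code); the point to notice is that the updates within each reduction level are mutually independent, which is exactly the loop-level parallelism a threaded version exploits:

        import numpy as np

        def cyclic_reduction(a, b, c, d):
            """Solve a tridiagonal system with n = 2**q - 1 unknowns; a, b, c are the
            sub-, main- and super-diagonals and d the right-hand side (length n).
            The index sets used at each level are disjoint from their neighbours,
            so each level's updates could run in parallel."""
            n = len(b)
            q = int(round(np.log2(n + 1)))
            assert 2**q - 1 == n, "this sketch assumes n = 2**q - 1"

            # pad both ends so index arithmetic never leaves the array
            # (X[0] and X[n+1] act as zero boundary values)
            A = np.zeros(n + 2); B = np.ones(n + 2); C = np.zeros(n + 2)
            D = np.zeros(n + 2); X = np.zeros(n + 2)
            A[1:n+1], B[1:n+1], C[1:n+1], D[1:n+1] = a, b, c, d

            # forward reduction: eliminate every second unknown, level by level
            for level in range(q - 1):
                h = 2**level                          # distance to the neighbours
                i = np.arange(2*h, n + 1, 2*h)        # equations kept at this level
                alpha = A[i] / B[i - h]
                gamma = C[i] / B[i + h]
                A[i] = -alpha * A[i - h]
                C[i] = -gamma * C[i + h]
                B[i] = B[i] - alpha * C[i - h] - gamma * A[i + h]
                D[i] = D[i] - alpha * D[i - h] - gamma * D[i + h]

            # solve the single remaining equation, then substitute back
            m = 2**(q - 1)
            X[m] = D[m] / B[m]
            for level in range(q - 2, -1, -1):
                h = 2**level
                i = np.arange(h, n + 1, 2*h)          # odd multiples of 2**level
                X[i] = (D[i] - A[i] * X[i - h] - C[i] * X[i + h]) / B[i]
            return X[1:n+1]

        # quick check against a dense solve
        n = 2**10 - 1
        rng = np.random.default_rng(0)
        a = -np.ones(n); a[0] = 0.0
        c = -np.ones(n); c[-1] = 0.0
        b = 4.0 * np.ones(n)
        d = rng.standard_normal(n)
        x = cyclic_reduction(a, b, c, d)
        T = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
        print(np.allclose(T @ x, d))                  # True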

  1. A hybrid parallel framework for the cellular Potts model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Yi [Los Alamos National Laboratory]; He, Kejing [SOUTH CHINA UNIV]; Dong, Shoubin [SOUTH CHINA UNIV]

    2009-01-01

    The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximate, and cannot be used for large-scale, complex 3D simulations. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed across clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on shared-memory SMP systems using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving and SMP systems are more and more common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for the large-scale simulation (~10^8 sites) of the complex collective behavior of numerous cells (~10^6).

  2. Static Stiffness Modeling of Parallel Kinematics Machine Tool Joints

    Directory of Open Access Journals (Sweden)

    O. K. Akmaev

    2015-09-01

    Full Text Available The possible variants of an original parallel kinematics machine-tool structure are explored in this article. A new Hooke's universal joint design based on needle roller bearings with the ability of a preload setting is proposed. The bearing stiffness modeling is carried out using a variety of methods. The elastic deformation modeling of a Hooke’s joint and a spherical rolling joint has been developed to assess the possibility of using these joints in machine tools with parallel kinematics.

  3. Badlands: A parallel basin and landscape dynamics model

    Directory of Open Access Journals (Sweden)

    T. Salles

    2016-01-01

    Full Text Available Over more than three decades, a number of numerical landscape evolution models (LEMs have been developed to study the combined effects of climate, sea-level, tectonics and sediments on Earth surface dynamics. Most of them are written in efficient programming languages, but often cannot be used on parallel architectures. Here, I present a LEM which ports a common core of accepted physical principles governing landscape evolution into a distributed memory parallel environment. Badlands (acronym for BAsin anD LANdscape DynamicS is an open-source, flexible, TIN-based landscape evolution model, built to simulate topography development at various space and time scales.

  4. Error Modeling and Design Optimization of Parallel Manipulators

    DEFF Research Database (Denmark)

    Wu, Guanglei

    challenges due to their highly nonlinear behaviors; thus, the parameter and performance analysis, especially of the accuracy and stiffness, is particularly important. Toward the requirements of robotic technology such as light weight, compactness, high accuracy and low energy consumption, utilizing optimization ... theory and the virtual spring approach, a general kinetostatic model of the spherical parallel manipulators is developed and validated with a Finite Element approach. This model is applied to the stiffness analysis of a special spherical parallel manipulator with unlimited rolling motion, and the obtained stiffness...

  5. Performance modeling of parallel algorithms for solving neutron diffusion problems

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1995-01-01

    Neutron diffusion calculations are the most common computational methods used in the design, analysis, and operation of nuclear reactors and related activities. Here, mathematical performance models are developed for the parallel algorithm used to solve the neutron diffusion equation on message passing and shared memory multiprocessors represented by the Intel iPSC/860 and the Sequent Balance 8000, respectively. The performance models are validated through several test problems, and these models are used to estimate the performance of each of the two considered architectures in situations typical of practical applications, such as fine meshes and a large number of participating processors. While message passing computers are capable of producing speedup, the parallel efficiency deteriorates rapidly as the number of processors increases. Furthermore, the speedup fails to improve appreciably for massively parallel computers so that only small- to medium-sized message passing multiprocessors offer a reasonable platform for this algorithm. In contrast, the performance model for the shared memory architecture predicts very high efficiency over a wide range of number of processors reasonable for this architecture. Furthermore, the model efficiency of the Sequent remains superior to that of the hypercube if its model parameters are adjusted to make its processors as fast as those of the iPSC/860. It is concluded that shared memory computers are better suited for this parallel algorithm than message passing computers
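
    The abstract does not reproduce the performance models themselves; as an illustration of the general form such models take for message-passing solvers (and only of the form, not of the specific models developed in this work), one can write

        T_P \;=\; \frac{W\, t_{\mathrm{flop}}}{P} \;+\; n_{\mathrm{msg}} \left( \alpha + \beta\, m \right),
        \qquad
        S_P \;=\; \frac{T_1}{T_P}, \qquad E_P \;=\; \frac{S_P}{P},

    where W is the work per iteration, P the number of processors, n_msg the number of messages per iteration, alpha the per-message latency, beta the inverse bandwidth and m the message size; on a shared-memory machine the communication term is replaced by a memory-access and contention term, which is one reason the two architectures can scale so differently.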

  6. Stratified steady and unsteady two-phase flows between two parallel plates

    International Nuclear Information System (INIS)

    Sim, Woo Gun

    2006-01-01

    To understand fluid dynamic forces acting on a structure subjected to two-phase flow, it is essential to get detailed information about the characteristics of two-phase flow. Stratified steady and unsteady two-phase flows between two parallel plates have been studied to investigate the general characteristics of the flow related to flow-induced vibration. Based on the spectral collocation method, a numerical approach has been developed for the unsteady two-phase flow. The method is validated by comparing numerical result to analytical one given for a simple harmonic two-phase flow. The flow parameters for the steady two-phase flow, such as void fraction and two-phase frictional multiplier, are evaluated. The dynamic characteristics of the unsteady two-phase flow, including the void fraction effect on the complex unsteady pressure, are illustrated

  7. Comparison of phase-constrained parallel MRI approaches: Analogies and differences.

    Science.gov (United States)

    Blaimer, Martin; Heim, Marius; Neumann, Daniel; Jakob, Peter M; Kannengiesser, Stephan; Breuer, Felix A

    2016-03-01

    Phase-constrained parallel MRI approaches have the potential for significantly improving the image quality of accelerated MRI scans. The purpose of this study was to investigate the properties of two different phase-constrained parallel MRI formulations, namely the standard phase-constrained approach and the virtual conjugate coil (VCC) concept utilizing conjugate k-space symmetry. Both formulations were combined with image-domain algorithms (SENSE) and a mathematical analysis was performed. Furthermore, the VCC concept was combined with k-space algorithms (GRAPPA and ESPIRiT) for image reconstruction. In vivo experiments were conducted to illustrate analogies and differences between the individual methods. Furthermore, a simple method of improving the signal-to-noise ratio by modifying the sampling scheme was implemented. For SENSE, the VCC concept was mathematically equivalent to the standard phase-constrained formulation and therefore yielded identical results. In conjunction with k-space algorithms, the VCC concept provided more robust results when only a limited amount of calibration data were available. Additionally, VCC-GRAPPA reconstructed images provided spatial phase information with full resolution. Although both phase-constrained parallel MRI formulations are very similar conceptually, there exist important differences between image-domain and k-space domain reconstructions regarding the calibration robustness and the availability of high-resolution phase information. © 2015 Wiley Periodicals, Inc.
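
    The abstract does not restate the VCC construction; in the usual virtual-coil notation (a sketch of the standard formulation, not this paper's exact derivation), a coil with sensitivity c_j(r) measuring an object rho(r) e^{i phi(r)} with real magnitude rho yields k-space data s_j(k), and the corresponding virtual coil is

        \tilde{s}_j(\mathbf{k}) \;=\; s_j^{*}(-\mathbf{k}),
        \qquad
        \tilde{c}_j(\mathbf{r}) \;=\; c_j^{*}(\mathbf{r})\, e^{-2 i \varphi(\mathbf{r})},

    so conjugate k-space symmetry enters the reconstruction simply as a doubled set of coils, which is why the concept can be combined with SENSE-, GRAPPA- or ESPIRiT-type algorithms without changing their structure.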

  8. The Extended Parallel Process Model: Illuminating the Gaps in Research

    Science.gov (United States)

    Popova, Lucy

    2012-01-01

    This article examines constructs, propositions, and assumptions of the extended parallel process model (EPPM). Review of the EPPM literature reveals that its theoretical concepts are thoroughly developed, but the theory lacks consistency in operational definitions of some of its constructs. Out of the 12 propositions of the EPPM, a few have not…

  9. A Probabilistic Approach to Symbolic Performance Modeling of Parallel Systems

    NARCIS (Netherlands)

    Gautama, H.

    2004-01-01

    Performance modeling plays a significant role in predicting the effects of a particular design choice or in diagnosing the cause for some observed performance behavior. Especially for complex systems such as parallel computer, typically, an intended performance cannot be achieved without recourse to

  10. mpdcm: A toolbox for massively parallel dynamic causal modeling.

    Science.gov (United States)

    Aponte, Eduardo A; Raman, Sudhir; Sengupta, Biswa; Penny, Will D; Stephan, Klaas E; Heinzle, Jakob

    2016-01-15

    Dynamic causal modeling (DCM) for fMRI is an established method for Bayesian system identification and inference on effective brain connectivity. DCM relies on a biophysical model that links hidden neuronal activity to measurable BOLD signals. Currently, biophysical simulations from DCM constitute a serious computational hindrance. Here, we present Massively Parallel Dynamic Causal Modeling (mpdcm), a toolbox designed to address this bottleneck. mpdcm delegates the generation of simulations from DCM's biophysical model to graphical processing units (GPUs). Simulations are generated in parallel by implementing a low-storage explicit Runge-Kutta scheme on a GPU architecture. mpdcm is publicly available under the GPLv3 license. We found that mpdcm efficiently generates large numbers of simulations without compromising their accuracy. As applications of mpdcm, we suggest two computationally expensive sampling algorithms: thermodynamic integration and parallel tempering. mpdcm is up to two orders of magnitude more efficient than the standard implementation in the software package SPM. Parallel tempering improves the mixing properties of the traditional Metropolis-Hastings algorithm at low computational cost given efficient, parallel simulations of a model. Future applications of DCM will likely require increasingly large computational resources, for example, when the likelihood landscape of a model is multimodal, or when implementing sampling methods for multi-subject analysis. Due to the wide availability of GPUs, algorithmic advances can be readily available in the absence of access to large computer grids, or when there is a lack of expertise to implement algorithms in such grids. Copyright © 2015 Elsevier B.V. All rights reserved.
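
    For reference, the two sampling schemes suggested above have compact standard forms (generic textbook statements, not mpdcm's specific implementation). With power posteriors p_beta(theta | y) proportional to p(y | theta)^beta p(theta), thermodynamic integration estimates the log model evidence as

        \log p(y) \;=\; \int_{0}^{1} \mathbb{E}_{\theta \sim p_\beta}\big[ \log p(y \mid \theta) \big]\, \mathrm{d}\beta ,

    and parallel tempering proposes to swap the states theta_i, theta_j of two chains running at temperatures beta_i, beta_j, accepting the swap with probability

        \min\Big\{ 1,\; \exp\big[ (\beta_i - \beta_j)\,\big( \log p(y \mid \theta_j) - \log p(y \mid \theta_i) \big) \big] \Big\} .

    Every evaluation of log p(y | theta) requires integrating the DCM biophysical model, which is precisely the step mpdcm delegates to the GPU.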

  11. Space-bandwidth extension in parallel phase-shifting digital holography using a four-channel polarization-imaging camera.

    Science.gov (United States)

    Tahara, Tatsuki; Ito, Yasunori; Xia, Peng; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu

    2013-07-15

    We propose a method for extending the space bandwidth (SBW) available for recording an object wave in parallel phase-shifting digital holography using a four-channel polarization-imaging camera. A linear spatial carrier of the reference wave is introduced to an optical setup of parallel four-step phase-shifting interferometry using a commercially available polarization-imaging camera that has four polarization-detection channels. Then a hologram required for parallel two-step phase shifting, which is a technique capable of recording the widest SBW in parallel phase shifting, can be obtained. The effectiveness of the proposed method was numerically and experimentally verified.

  12. Reconstruction of electron beam distribution in phase space by using parallel maximum entropy method

    International Nuclear Information System (INIS)

    Hajima, R.; Hirotsu, T.; Kondo, S.

    1997-01-01

    Reconstruction of electron beam distribution in six-dimensional phase space by tomographic approach is presented. Maximum entropy method (MENT) is applied to the reconstruction and compared with filtered back-projection. Finally, MENT is adapted to parallel computing environment with PVM. (orig.)

  13. Modeling and optimization of parallel and distributed embedded systems

    CERN Document Server

    Munir, Arslan; Ranka, Sanjay

    2016-01-01

    This book introduces the state-of-the-art in research in parallel and distributed embedded systems, which have been enabled by developments in silicon technology, micro-electro-mechanical systems (MEMS), wireless communications, computer networking, and digital electronics. These systems have diverse applications in domains including military and defense, medical, automotive, and unmanned autonomous vehicles. The emphasis of the book is on the modeling and optimization of emerging parallel and distributed embedded systems in relation to the three key design metrics of performance, power and dependability.

  14. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)]; Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]; Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)]; Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]; Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)]

    2016-05-01

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences with it with respect to a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.

  15. Potts-model grain growth simulations: Parallel algorithms and applications

    Energy Technology Data Exchange (ETDEWEB)

    Wright, S.A.; Plimpton, S.J.; Swiler, T.P. [and others]

    1997-08-01

    Microstructural morphology and grain boundary properties often control the service properties of engineered materials. This report uses the Potts-model to simulate the development of microstructures in realistic materials. Three areas of microstructural morphology simulations were studied. They include the development of massively parallel algorithms for Potts-model grain growth simulations, modeling of mass transport via diffusion in these simulated microstructures, and the development of a gradient-dependent Hamiltonian to simulate columnar grain growth. Potts grain growth models for massively parallel supercomputers were developed for the conventional Potts-model in both two and three dimensions. Simulations using these parallel codes showed self-similar grain growth and no finite size effects for previously unapproachable large scale problems. In addition, new enhancements to the conventional Metropolis algorithm used in the Potts-model were developed to accelerate the calculations. These techniques enable both the sequential and parallel algorithms to run faster and use essentially an infinite number of grain orientation values to avoid non-physical grain coalescence events. Mass transport phenomena in polycrystalline materials were studied in two dimensions using numerical diffusion techniques on microstructures generated using the Potts-model. The results of the mass transport modeling showed excellent quantitative agreement with one-dimensional diffusion problems; however, the results also suggest that transient multi-dimensional diffusion effects cannot be parameterized as the product of the grain boundary diffusion coefficient and the grain boundary width. Instead, both properties are required. Gradient-dependent grain growth mechanisms were included in the Potts-model by adding an extra term to the Hamiltonian. Under normal grain growth, the primary driving term is the curvature of the grain boundary, which is included in the standard Potts-model Hamiltonian.
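
    A minimal serial sketch of the conventional Potts-model grain-growth update that such parallel codes accelerate is shown below (lattice size, number of orientations and temperature are arbitrary illustrative choices; this is not the report's code):

        import numpy as np

        rng = np.random.default_rng(0)
        L, Q, kT = 64, 48, 0.0                # lattice size, orientations, temperature
        spins = rng.integers(1, Q + 1, size=(L, L))

        def unlike_neighbours(s, i, j, q):
            """Energy contribution of site (i, j) with orientation q: the number of
            nearest neighbours (periodic boundaries) holding a different orientation."""
            nbrs = (s[(i - 1) % L, j], s[(i + 1) % L, j],
                    s[i, (j - 1) % L], s[i, (j + 1) % L])
            return sum(1 for n in nbrs if n != q)

        def monte_carlo_step(s):
            """One Monte Carlo step = L*L Metropolis re-orientation attempts."""
            for _ in range(L * L):
                i, j = rng.integers(L), rng.integers(L)
                new = int(rng.integers(1, Q + 1))
                dE = unlike_neighbours(s, i, j, new) - unlike_neighbours(s, i, j, s[i, j])
                if dE <= 0 or (kT > 0 and rng.random() < np.exp(-dE / kT)):
                    s[i, j] = new
            return s

        for step in range(20):
            spins = monte_carlo_step(spins)
        print("distinct orientations left:", np.unique(spins).size)

    A parallel version distributes the lattice over processors and must restrict simultaneous attempts so that neighbouring sites are never updated at the same time, which is where the algorithmic work described above comes in.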

  16. Suppressing correlations in massively parallel simulations of lattice models

    Science.gov (United States)

    Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle

    2017-11-01

    For lattice Monte Carlo simulations parallelization is crucial to make studies of large systems and long simulation time feasible, while sequential simulations remain the gold-standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2 + 1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlation in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30 × over a parallel CPU implementation on a single socket and at least 180 × with respect to the sequential reference.

  17. Fast robot kinematics modeling by using a parallel simulator (PSIM)

    International Nuclear Information System (INIS)

    El-Gazzar, H.M.; Ayad, N.M.A.

    2002-01-01

    High-speed computers are strongly needed not only for solving scientific and engineering problems, but also for numerous industrial applications. Such applications include computer-aided design, oil exploration, weather prediction, space applications and the safety of nuclear reactors. The rapid development of VLSI technology makes it possible to implement time-consuming algorithms in real-time situations. Parallel processing approaches can now be used to reduce the processing time for models of very rich mathematical structure, such as the kinematic modeling of robot manipulators. This system is used to construct and evaluate the performance and cost-effectiveness of several proposed methods for solving the Jacobian algorithm. Parallelism is introduced into the algorithms by using different task allocations and by dividing the whole job into subtasks. A detailed analysis is performed and results are obtained for the case of a six-DOF (degree of freedom) robot arm (the Stanford Arm). Execution-time comparisons between von Neumann (uniprocessor) and parallel processor architectures, obtained using the parallel simulator package (PSIM), are presented. The results are strongly in favour of the parallel techniques, with improvements of at least fifty percent. Further studies are needed to determine the most convenient and optimum number of processors.

  18. Term Structure Models with Parallel and Proportional Shifts

    DEFF Research Database (Denmark)

    Armerin, Frederik; Björk, Tomas; Astrup Jensen, Bjarne

    We investigate the possibility of an arbitrage-free model for the term structure of interest rates where the yield curve only changes through a parallel shift. We consider HJM-type forward rate models driven by a multidimensional Wiener process as well as by a general marked point process. Within this general framework we show that there does indeed exist a large variety of nontrivial parallel shift term structure models, and we also describe these in detail. We also show that there exists no nontrivial flat term structure model. The same analysis is repeated for the similar case, where the yield curve only changes through proportional shifts. Key words: bond market, term structure of interest rates, flat term structures.

  19. Dynamic model of a 3-DOF redundantly actuated parallel manipulator

    Directory of Open Access Journals (Sweden)

    Tiemin Li

    2016-09-01

    Full Text Available We investigate the dynamic model of a 3-degree-of-freedom (DOF) redundantly actuated parallel manipulator by taking the flexible deformation of the limbs into account. The dynamic model is derived using the Newton–Euler formulation. Since the number of equations derived from the force and moment equilibrium of the parallel manipulator components is less than the number of unknown variables, the flexible deformation of the limbs is treated as an inequality constraint to find the solution of the dynamic model. The errors of the moving platform caused by the flexible deformation of the limbs are discussed, and a control strategy is given. To validate the model, the dynamic model is integrated with the control system and compared with the traditional method to minimize the normal driving forces.

  20. Parallelization of a hydrological model using the message passing interface

    Science.gov (United States)

    Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji

    2013-01-01

    With the increasing knowledge about natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex, with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can multiply the running time and thus further hinder rapid modeling and analysis. Using the widely applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology—Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a workstation. Our study indicates that model execution time can be reduced by 42%–70% (or a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme (between the master and slave processes). Although the computation time decreases with an increasing number of processes (from two to five), this enhancement diminishes because of the accompanying increase in message passing between the master and all slave processes. Our case study demonstrates that P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of project size). Overall, P-SWAT can help reduce the computation time substantially for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
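
    The master–worker split described above can be sketched with mpi4py as follows; the subbasin count, the 10% master share and `run_subbasin` are hypothetical stand-ins, not P-SWAT code (run with at least two MPI processes, e.g. `mpiexec -n 4 python pswat_sketch.py`).

```python
from mpi4py import MPI  # hypothetical sketch; not the P-SWAT source code

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
assert size >= 2, "run with at least two MPI processes"

n_subbasins = 200
master_share = 0.10                       # tunable fraction kept by rank 0

n_master = int(round(master_share * n_subbasins))
n_worker = n_subbasins - n_master
# rank 0 (master) takes n_master subbasins, the rest are split evenly
counts = [n_master] + [n_worker // (size - 1) + (i < n_worker % (size - 1))
                       for i in range(size - 1)]
start = sum(counts[:rank])
my_subbasins = range(start, start + counts[rank])

def run_subbasin(i):                      # placeholder for the real simulation
    return {"id": i, "outflow": float(i)}

local = [run_subbasin(i) for i in my_subbasins]
results = comm.gather(local, root=0)      # master collects and routes the flows
if rank == 0:
    flat = [r for part in results for r in part]
    print(f"collected {len(flat)} subbasin results on {size} processes")
```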

  1. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight into the performance of the three computers when used to solve large-scale scientific problems that involve several types of numerical computations. The computers considered in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216.

  2. A Programming Model for Massive Data Parallelism with Data Dependencies

    International Nuclear Information System (INIS)

    Cui, Xiaohui; Mueller, Frank; Potok, Thomas E.; Zhang, Yongpeng

    2009-01-01

    Accelerator processors can often be more cost- and energy-effective for a wide range of data-parallel computing problems than general-purpose processors. For graphics processing units (GPUs), this is particularly the case when program development is aided by environments such as NVIDIA's Compute Unified Device Architecture (CUDA), which dramatically reduces the gap between domain-specific architectures and general-purpose programming. Nonetheless, general-purpose GPU (GPGPU) programming remains subject to several restrictions. Most significantly, the separation of host (CPU) and accelerator (GPU) address spaces requires explicit management of GPU memory resources, especially for massive data parallelism that far exceeds the memory capacity of GPUs. One solution to this problem is to transfer data between the GPU and host memories frequently. In this work, we investigate another approach: we run massively data-parallel applications on GPU clusters. We further propose a programming model for massive data parallelism with data dependencies for this scenario. Experience from micro-benchmarks and real-world applications shows that our model provides not only ease of programming but also significant performance gains.

  3. Prestack Parallel Modeling of Dispersive and Attenuative Medium

    Directory of Open Access Journals (Sweden)

    How-Wei Chen

    2006-01-01

    This study presents an efficient parallelized staggered-grid pseudospectral method for 2-D viscoacoustic seismic waveform modeling that runs on a high-performance multi-processor computer and an in-house developed PC cluster. Parallel simulation permits several processors to be used for solving a single large problem with a high computation-to-communication ratio. Thus, parallelizing the serial scheme effectively reduces the computation time. Computational results indicate a reasonably consistent parallel performance when using different FFTs in the pseudospectral computations. Meanwhile, a virtually perfect linear speedup can be achieved in a distributed-memory multi-processor environment. The effectiveness of the proposed algorithm is demonstrated using synthetic examples by simulating multiple shot gathers consistent with field coordinates. For dispersive and attenuating media, the propagating wavefield exhibits observable differences in waveform, amplitude and travel times. The resulting effects on seismic signals, such as decreased amplitude due to intrinsic Q and temporal shifts due to physical dispersion, can be analyzed quantitatively. Anelastic effects become more visible owing to cumulative propagation effects. A field-data application is presented by simulating OBS wide-angle marine seismic data for a deep crustal structure study. The fine details of deep crustal velocity and attenuation structures in the survey area can be resolved by comparing simulated waveforms with observed seismograms recorded at various distances. Parallel performance is analyzed through speedup and efficiency for a variety of computing platforms. Effective parallel implementation requires numerous independent CPU-intensive sub-jobs together with low-latency, high-bandwidth inter-processor communication.

  4. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities, such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform, and the implementation of the algorithm on the target platform.

  5. Mechatronic Model Based Computed Torque Control of a Parallel Manipulator

    Directory of Open Access Journals (Sweden)

    Zhiyong Yang

    2008-03-01

    Thanks to their high speed and accuracy, parallel manipulators have wide application in industry, but many difficulties remain in the actual control process because of their time-varying and coupled dynamics. Unfortunately, present-day commercial controllers cannot provide satisfactory performance because they offer single-axis linear control only. Therefore, for a novel 2-DOF (degree of freedom) parallel manipulator called Diamond 600, a motor-mechanism coupling dynamic model based control scheme employing the computed torque control algorithm is presented in this paper. First, the integrated dynamic coupling model is deduced according to the equivalent torques between the mechanical structure and the PM (permanent magnet) servomotor. Second, the computed torque controller is described in detail for the proposed model. Finally, a series of numerical simulations and experiments are carried out to test the effectiveness of the system, and the results verify its favourable tracking ability and robustness.

  6. Mechatronic Model Based Computed Torque Control of a Parallel Manipulator

    Directory of Open Access Journals (Sweden)

    Zhiyong Yang

    2008-11-01

    Thanks to their high speed and accuracy, parallel manipulators have wide application in industry, but many difficulties remain in the actual control process because of their time-varying and coupled dynamics. Unfortunately, present-day commercial controllers cannot provide satisfactory performance because they offer single-axis linear control only. Therefore, for a novel 2-DOF (degree of freedom) parallel manipulator called Diamond 600, a motor-mechanism coupling dynamic model based control scheme employing the computed torque control algorithm is presented in this paper. First, the integrated dynamic coupling model is deduced according to the equivalent torques between the mechanical structure and the PM (permanent magnet) servomotor. Second, the computed torque controller is described in detail for the proposed model. Finally, a series of numerical simulations and experiments are carried out to test the effectiveness of the system, and the results verify its favourable tracking ability and robustness.

  7. Modeling and PDC fuzzy control of planar parallel robot

    Directory of Open Access Journals (Sweden)

    Benyamine Allouche

    2017-02-01

    Many works in the literature have studied the kinematic and dynamic issues of parallel robots, but it remains difficult to extend the vast range of control strategies to parallel mechanisms because of the complexity of model-based control. This complexity is mainly caused by the presence of multiple closed kinematic chains, so that the system is naturally described by a set of differential–algebraic equations. The aim of this work is to control a two-degree-of-freedom parallel manipulator. A mechanical model based on differential–algebraic equations is given. The goal is to use the structural characteristics of the mechanical system to reduce the complexity of the nonlinear model. Therefore, trajectory tracking control is achieved using a Takagi–Sugeno fuzzy model derived from the differential–algebraic equation form and its linear matrix inequality constraint formulation. Simulation results show that the proposed approach based on differential–algebraic equations and Takagi–Sugeno fuzzy modeling leads to better robustness against structural uncertainties.

  8. Two-phase flow models

    International Nuclear Information System (INIS)

    Delaje, Dzh.

    1984-01-01

    General hypotheses used to simplify the equations describing two-phase flows are considered. Two-component and one-component models of two-phase flow are presented, as well as the Zuber–Findlay model for the actual volumetric steam content and the Wallis model describing the individual phase flow rates. It is concluded that the two-component model based on time-averaged quantities is applicable to solving three-dimensional problems of unsteady two-phase flow, whereas the two-component model based on space-averaged quantities allows only one-dimensional problems of unsteady two-phase flow to be solved.

  9. A simple hyperbolic model for communication in parallel processing environments

    Science.gov (United States)

    Stoica, Ion; Sultan, Florin; Keyes, David

    1994-01-01

    We introduce a model for communication costs in parallel processing environments, called the 'hyperbolic model,' which generalizes two-parameter dedicated-link models in an analytically simple way. Dedicated interprocessor links parameterized by a latency and a transfer rate that are independent of load are assumed by many existing communication models; such models are unrealistic for workstation networks. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes that initiate the sending and receiving of the information and in which internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. The direction of graph edges specifies the flow of the information carried through messages. Each CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. The parameters are evaluated in the limits of very large and very small messages. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining an approximation for the service time that is exact in both the large and small limits. The model is validated on a dedicated Ethernet network of workstations by experiments with communication subprograms arising in scientific applications, for which a tight fit of the model predictions with actual measurements of the communication and synchronization time between end processes is demonstrated. The model is then used to evaluate the performance of two simple parallel scientific applications from partial differential equations: domain decomposition and time-parallel multigrid. In an appropriate limit, we also show the compatibility of the hyperbolic model with the recently proposed LogP model.

  10. Distributed parallel computing in stochastic modeling of groundwater systems.

    Science.gov (United States)

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution time of 500 realizations is reduced to about 3% of that of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling.

  11. Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.

    2017-03-13

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, the RBM is a deep learning algorithm used in this project, whose runtime performance is improved through an efficient parallel implementation with the OpenACC tool and the best possible optimizations, in order to harness the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing in which directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. The RBM is a useful neural network building block for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of RBM are available using different models such as OpenMP and CUDA, but this project is the first attempt to apply the OpenACC model to RBM.
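
    For orientation, a minimal CPU-only numpy sketch of one contrastive-divergence (CD-1) update for a binary RBM is shown below; it illustrates the algorithm the project accelerates, not the project's OpenACC code, and all array shapes and the learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.05):
    """One contrastive-divergence (CD-1) step for a binary RBM.

    v0: (batch, n_visible) data batch; W: (n_visible, n_hidden) weights;
    b, c: visible and hidden biases.  The dense matrix products below are
    exactly the loops a directive model such as OpenACC would offload."""
    ph0 = sigmoid(v0 @ W + c)                        # positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float) # sample hidden units
    pv1 = sigmoid(h0 @ W.T + b)                      # reconstruct visibles
    ph1 = sigmoid(pv1 @ W + c)                       # negative phase
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)   # approximate gradient
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```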

  12. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  13. Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    John Mellor-Crummey

    2008-02-29

    Rice University's achievements as part of the Center for Programming Models for Scalable Parallel Computing include: (1) design and implementation of cafc, the first multi-platform CAF compiler for distributed and shared-memory machines, (2) performance studies of the efficiency of programs written using the CAF and UPC programming models, (3) a novel technique to analyze explicitly-parallel SPMD programs that facilitates optimization, (4) design, implementation, and evaluation of new language features for CAF, including communication topologies, multi-version variables, and distributed multithreading to simplify development of high-performance codes in CAF, and (5) a synchronization strength reduction transformation for automatically replacing barrier-based synchronization with more efficient point-to-point synchronization. The prototype Co-array Fortran compiler cafc developed in this project is available as open source software from http://www.hipersoft.rice.edu/caf.

  14. Exploitation of parallelism in climate models. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Baer, Ferdinand; Tribbia, Joseph J.; Williamson, David L.

    2001-02-05

    This final report includes details on the research accomplished by the grant entitled 'Exploitation of Parallelism in Climate Models' to the University of Maryland. The purpose of the grant was to shed light on (a) how to reconfigure the atmospheric prediction equations such that the time iteration process could be compressed by use of MPP architecture; (b) how to develop local subgrid scale models which can provide time and space dependent parameterization for a state-of-the-art climate model to minimize the scale resolution necessary for a climate model, and to utilize MPP capability to simultaneously integrate those subgrid models and their statistics; and (c) how to capitalize on the MPP architecture to study the inherent ensemble nature of the climate problem. In the process of addressing these issues, we created parallel algorithms with spectral accuracy; we developed a process for concurrent climate simulations; we established suitable model reconstructions to speed up computation; we identified and tested optimum realization statistics; we undertook a number of parameterization studies to better understand model physics; and we studied the impact of subgrid scale motions and their parameterization in atmospheric models.

  15. Construction of a digital elevation model: methods and parallelization

    International Nuclear Information System (INIS)

    Mazzoni, Christophe

    1995-01-01

    The aim of this work is to reduce the computation time needed to produce Digital Elevation Models (DEM) by using a parallel machine. It was carried out in collaboration between the French 'Institut Geographique National' (IGN) and the Laboratoire d'Electronique de Technologie et d'Instrumentation (LETI) of the French Atomic Energy Commission (CEA). The IGN has developed a system which produces DEMs used to make topographic maps. The kernel of this system is the correlator, a software component which automatically matches pairs of homologous points in a stereo pair of photographs. Nevertheless, the correlator is expensive in computing time. In order to reduce the computation time while producing DEMs with the same accuracy as the current system, we have parallelized the IGN's correlator on the OPENVISION system. This hardware solution uses the SIMD (Single Instruction, Multiple Data) parallel machine SYMPATI-2, developed by the LETI, which works on parallel architectures and image processing. Our analysis of the implementation demonstrated the difficulty of efficiently coupling the scalar and parallel structures, so we propose solutions to reinforce this coupling. To accelerate the processing further, we evaluate SYMPHONIE, a SIMD computer and successor of SYMPATI-2. We also developed a multi-agent approach for which a MIMD (Multiple Instruction, Multiple Data) architecture is suitable. Finally, we describe a multi-SIMD architecture that reconciles our two approaches. This architecture can efficiently handle multi-level image processing; it is flexible thanks to its modularity, and its communication network provides the reliability required by sensitive systems. (author) [fr]

  16. HPC parallel programming model for gyrokinetic MHD simulation

    International Nuclear Information System (INIS)

    Naitou, Hiroshi; Yamada, Yusuke; Tokuda, Shinji; Ishii, Yasutomo; Yagi, Masatoshi

    2011-01-01

    The 3-dimensional gyrokinetic PIC (particle-in-cell) code for MHD simulation, Gpic-MHD, was installed on SR16000 (“Plasma Simulator”), which is a scalar cluster system consisting of 8,192 logical cores. The Gpic-MHD code advances particle and field quantities in time. In order to distribute calculations over a large number of logical cores, the total simulation domain in cylindrical geometry was broken up into N_DD-r × N_DD-z (number of radial decompositions times number of axial decompositions) small domains, each containing approximately the same number of particles. The axial direction was decomposed uniformly, while the radial direction was decomposed non-uniformly. N_RP replicas (copies) of each decomposed domain were used (“particle decomposition”). A hybrid parallelization model of multi-threads and multi-processes was employed: threads were parallelized by auto-parallelization, and N_DD-r × N_DD-z × N_RP processes were parallelized by MPI (message-passing interface). The parallelization performance of Gpic-MHD was investigated for a medium-size system of N_r × N_θ × N_z = 1025 × 128 × 128 mesh points with 4.196 or 8.192 billion particles. The highest speed for a fixed number of logical cores was obtained with two threads, the maximum value of N_DD-z, and the optimum combination of N_DD-r and N_RP. The observed optimum speeds demonstrated good scaling up to 8,192 logical cores. (author)
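
    As a minimal illustration of such a three-way decomposition (not the Gpic-MHD source), the sketch below maps an MPI rank onto its radial-domain, axial-domain and replica indices; the decomposition counts are hypothetical, and the per-process threads would sit on top of this layout.

```python
# Illustrative layout calculation for an N_DD_R x N_DD_Z x N_RP hybrid
# decomposition; numbers are hypothetical (e.g. 2 threads per MPI process
# would be added on top of these ranks).
N_DD_R, N_DD_Z, N_RP = 16, 128, 4        # hypothetical decomposition counts
n_processes = N_DD_R * N_DD_Z * N_RP     # here: 8192 MPI processes

def domains_of(rank):
    """Return (radial domain, axial domain, replica) owned by this rank."""
    replica, rest = divmod(rank, N_DD_R * N_DD_Z)
    r_dom, z_dom = divmod(rest, N_DD_Z)
    return r_dom, z_dom, replica

# every (radial, axial, replica) combination is covered exactly once
assert sorted(domains_of(r) for r in range(n_processes)) == \
       sorted((i, j, k) for k in range(N_RP)
              for i in range(N_DD_R) for j in range(N_DD_Z))
```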

  17. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU.

    Science.gov (United States)

    Xia, Yong; Wang, Kuanquan; Zhang, Henggui

    2015-01-01

    Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge on traditional CPU-based computing resources, which either cannot meet the demands of whole-heart computation or are not easily available due to their expense. GPUs as a parallel computing environment therefore provide an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model is split into two components: one is the single-cell model (ordinary differential equations) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations.
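
    The ODE/PDE decoupling described above can be sketched as one operator-splitting time step; the sketch below is a hedged 2-D toy with periodic boundaries and a user-supplied `cell_rhs`, not the paper's 3-D atrial code.

```python
import numpy as np

def monodomain_step(V, state, dt, dx, D, cell_rhs):
    """One operator-splitting time step of a (here 2-D, periodic) monodomain
    model -- an illustrative sketch of the decoupling, not the paper's code.

    Step 1 (ODE part): every cell advances its ionic model independently;
    this is the embarrassingly parallel per-cell kernel on the GPU.
    Step 2 (PDE part): explicit finite-difference diffusion of the membrane
    potential V couples neighbouring cells."""
    dV_ion, state = cell_rhs(V, state)          # per-cell ionic currents
    V = V + dt * dV_ion
    lap = (np.roll(V, 1, 0) + np.roll(V, -1, 0) +
           np.roll(V, 1, 1) + np.roll(V, -1, 1) - 4.0 * V) / dx ** 2
    return V + dt * D * lap, state              # explicit diffusion update
```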

  18. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU

    Directory of Open Access Journals (Sweden)

    Yong Xia

    2015-01-01

    Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge on traditional CPU-based computing resources, which either cannot meet the demands of whole-heart computation or are not easily available due to their expense. GPUs as a parallel computing environment therefore provide an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model is split into two components: one is the single-cell model (ordinary differential equations) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations.

  19. A Hybrid Parallel Execution Model for Logic Based Requirement Specifications (Invited Paper

    Directory of Open Access Journals (Sweden)

    Jeffrey J. P. Tsai

    1999-05-01

    It is well known that undiscovered errors in a requirements specification are extremely expensive to fix when discovered in the software maintenance phase. Errors in the requirements phase can be reduced through validation and verification of the requirements specification. Many logic-based requirements specification languages have been developed to achieve these goals. However, the execution and reasoning of a logic-based requirements specification can be very slow. An effective way to improve their performance is to execute and reason about the logic-based requirements specification in parallel. In this paper, we present a hybrid model to facilitate the parallel execution of a logic-based requirements specification language. A data dependency analysis technique is first applied to the logic-based specification to find all the mode combinations that exist within each specification clause. This mode information is used to support a novel hybrid parallel execution model, which combines both top-down and bottom-up evaluation strategies. This new execution model can find the failure in the deepest node of the search tree at an early stage of the evaluation, thus reducing the total number of nodes searched in the tree, the total number of processes that need to be generated, and the total number of communication channels needed in the search process. A simulator has been implemented to analyze the execution behavior of the new model. Experiments show significant improvement based on several criteria.

  20. Reversed phase parallel artificial membrane permeation assay for log P measurement

    Directory of Open Access Journals (Sweden)

    Zihao Song

    2016-03-01

    A reversed phase parallel artificial membrane permeation assay (RP-PAMPA) was newly developed for log P measurement. An oil/water/oil sandwich was constructed using a conventional PAMPA instrument, and 1% agarose was used to improve the physical stability of the water phase. A linear correlation between log P and the apparent permeability was observed in the −0.24 < log P < 2.85 region (R² = 0.98). RP-PAMPA was also applied to pKa measurement.
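
    A calibration based on such a linear correlation could be sketched as below; every number, including the permeability values, is invented for illustration and does not come from the assay described above.

```python
import numpy as np

# Made-up calibration sketch: given RP-PAMPA permeabilities measured for
# standards of known log P, fit a linear relation and use it to estimate
# log P for a new compound.  All numbers below are invented.
log_p_standards = np.array([-0.24, 0.65, 1.40, 2.10, 2.85])
permeability = np.array([0.8, 2.1, 3.9, 5.6, 7.4])   # apparent permeability

slope, intercept = np.polyfit(permeability, log_p_standards, 1)
r2 = np.corrcoef(permeability, log_p_standards)[0, 1] ** 2

new_permeability = 4.5
print(f"estimated logP = {slope * new_permeability + intercept:.2f} "
      f"(calibration R^2 = {r2:.3f})")
```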

  1. Numerical modeling of parallel-plate based AMR

    DEFF Research Database (Denmark)

    In this work we present an improved 2-dimensional numerical model of a parallel-plate based AMR. The model includes heat transfer in fluid and magnetocaloric domains respectively. The domains are coupled via inner thermal boundaries. The MCE is modeled either as an instantaneous change between high...... comparison with experiment. This is used as a firm basis for predicting and optimizing performance of a large variety of regenerator configurations in order to study and learn the trends, tendencies and even absolute values of temperature span and cooling powers for the optimal (and buildable) designs...... in the direction not resolved through a realistic description of the thermal resistance between localized points in the bed and the ambient. The results show that the additions to the model place numerical modeling of AMR very close to the corresponding experimental results. Thus, the model is verified by direct...

  2. Modeling and Analysis of a 2-DOF Spherical Parallel Manipulator

    Directory of Open Access Journals (Sweden)

    Xuechao Duan

    2016-09-01

    The kinematics of a two rotational degrees-of-freedom (DOF) spherical parallel manipulator (SPM) is developed based on the coordinate transformation approach and the cosine rule of a trihedral angle. The angular displacement, angular velocity, and angular acceleration between the actuators and end-effector are thus determined. Moreover, the dynamic model of the 2-DOF SPM is established by using the virtual work principle and the first-order influence coefficient matrix of the manipulator. Eventually, a typical motion plan and simulations are carried out, and the actuating torque needed for these motions is worked out by employing the derived inverse dynamic equations. In addition, an analysis of the mechanical characteristics of the parallel manipulator is made. This study lays a solid base for the control of the 2-DOF SPM, and also provides the possibility of using this kind of spherical manipulator as a 2-DOF orientation, angular velocity, or even torque sensor.

  3. Modeling and Analysis of a 2-DOF Spherical Parallel Manipulator.

    Science.gov (United States)

    Duan, Xuechao; Yang, Yongzhi; Cheng, Bi

    2016-09-13

    The kinematics of a two rotational degrees-of-freedom (DOF) spherical parallel manipulator (SPM) is developed based on the coordinate transformation approach and the cosine rule of a trihedral angle. The angular displacement, angular velocity, and angular acceleration between the actuators and end-effector are thus determined. Moreover, the dynamic model of the 2-DOF SPM is established by using the virtual work principle and the first-order influence coefficient matrix of the manipulator. Eventually, a typical motion plan and simulations are carried out, and the actuating torque needed for these motions are worked out by employing the derived inverse dynamic equations. In addition, an analysis of the mechanical characteristics of the parallel manipulator is made. This study lays a solid base for the control of the 2-DOF SPM, and also provides the possibility of using this kind of spherical manipulator as a 2-DOF orientation, angular velocity, or even torque sensor.

  4. A simple image-reject mixer based on two parallel phase modulators

    Science.gov (United States)

    Hu, Dapeng; Zhao, Shanghong; Zhu, Zihang; Li, Xuan; Qu, Kun; Lin, Tao; Zhang, Kun

    2018-02-01

    A simple photonic microwave image-reject mixer (IRM) using two parallel phase modulators is proposed. First, a photonic microwave mixer with phase-shifting ability is realized using two parallel phase modulators (PMs), an optical bandpass filter, three polarization controllers, three polarization beam splitters and two balanced photodetectors. At the output of the mixer, two frequency-downconverted signals with a tunable phase difference can be obtained. By adjusting the phase difference to 90° and using an electrical 90° hybrid, the unwanted components can be eliminated and image rejection is realized. The key advantage of the proposed scheme is the use of PMs, which avoids the DC bias drifting problem and makes the system simple and stable. A simulation is performed to verify the proposed scheme: a relative −90° or 90° phase shift can be obtained between the two output ports of the photonic microwave mixer, and at the output of the IRM an image rejection ratio of 60 dB is obtained.
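
    The image-reject principle itself (two quadrature down-conversion paths followed by a 90° hybrid) can be demonstrated with a purely electrical toy model; the sketch below is such a numerical illustration with arbitrary frequencies, not a model of the photonic link described above.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Toy electrical illustration of image rejection: the desired band and the
# image band both land at the same IF, but the 90-degree hybrid cancels the
# image.  All frequencies and filter settings are arbitrary.
fs = 1_000_000
t = np.arange(200_000) / fs
f_lo, f_if = 100_000, 30_000
b, a = butter(5, 2 * f_if / (fs / 2))            # low-pass keeping the IF band

def irm_output(f_in):
    """Down-convert a tone at f_in with quadrature LOs and combine in a hybrid."""
    x = np.cos(2 * np.pi * f_in * t)
    i_if = filtfilt(b, a, x * np.cos(2 * np.pi * f_lo * t))
    q_if = filtfilt(b, a, x * np.sin(2 * np.pi * f_lo * t))
    return i_if + np.imag(hilbert(q_if))         # 90-degree shift, then sum

desired = irm_output(f_lo + f_if)                # wanted RF band -> passes
image = irm_output(f_lo - f_if)                  # image band -> cancelled
print("image rejection ~ %.1f dB" %
      (20 * np.log10(np.std(desired) / np.std(image))))
```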

  5. A model of breakdown in parallel-plate detectors

    International Nuclear Information System (INIS)

    Fonte, P.

    1996-01-01

    Parallel-plate avalanche chambers (PPACs) have many desirable properties, such as fast response and a large sensitive area. However, the maximum gain is limited by a form of violent breakdown that restricts the usefulness of this detector, despite its other evident qualities. The exact nature of this phenomenon is not yet sufficiently clear to sustain possible improvements. A previous experimental study is complemented in the present work by a quantitative model of the breakdown phenomenon in PPACs, based on the streamer theory. The model reproduces well the peculiar behavior of the external current observed in PPACs and resistive-plate chambers. Other breakdown properties measured in PPACs are also well reproduced.

  6. cellGPU: Massively parallel simulations of dynamic vertex models

    Science.gov (United States)

    Sussman, Daniel M.

    2017-10-01

    Vertex models represent confluent tissue by polygonal or polyhedral tilings of space, with the individual cells interacting via force laws that depend on both the geometry of the cells and the topology of the tessellation. This dependence on the connectivity of the cellular network introduces several complications to performing molecular-dynamics-like simulations of vertex models, and in particular makes parallelizing the simulations difficult. cellGPU addresses this difficulty and lays the foundation for massively parallelized, GPU-based simulations of these models. This article discusses its implementation for a pair of two-dimensional models, and compares the typical performance that can be expected between running cellGPU entirely on the CPU versus its performance when running on a range of commercial and server-grade graphics cards. By implementing the calculation of topological changes and forces on cells in a highly parallelizable fashion, cellGPU enables researchers to simulate time- and length-scales previously inaccessible via existing single-threaded CPU implementations. Program Files doi:http://dx.doi.org/10.17632/6j2cj29t3r.1 Licensing provisions: MIT Programming language: CUDA/C++ Nature of problem: Simulations of off-lattice "vertex models" of cells, in which the interaction forces depend on both the geometry and the topology of the cellular aggregate. Solution method: Highly parallelized GPU-accelerated dynamical simulations in which the force calculations and the topological features can be handled on either the CPU or GPU. Additional comments: The code is hosted at https://gitlab.com/dmsussman/cellGPU, with documentation additionally maintained at http://dmsussman.gitlab.io/cellGPUdocumentation

  7. Efficient Parallel Statistical Model Checking of Biochemical Networks

    Directory of Open Access Journals (Sweden)

    Paolo Ballarini

    2009-12-01

    We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand which rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the proposed methodology uses a stochastic simulation algorithm for generating execution samples; however, three key aspects improve the efficiency. First, the sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability that P holds is based on an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel over an HPC architecture.
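
    A minimal sketch of the sampling/estimation loop is given below: samples are drawn in parallel, the standard Wilson score interval (not the paper's specific variant) is recomputed after each batch, and sampling stops once the interval is tight; `sample_property`, the batch size and the stopping width are all illustrative assumptions.

```python
import math
import random
from concurrent.futures import ProcessPoolExecutor

def sample_property(seed):
    """Placeholder for 'simulate one trajectory and check the property on the
    fly'; a Bernoulli draw with unknown probability 0.3 stands in for it."""
    return random.Random(seed).random() < 0.3    # independent stream per sample

def wilson_interval(k, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion k/n."""
    p = k / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

if __name__ == "__main__":
    k = n = 0
    with ProcessPoolExecutor() as pool:          # samples generated in parallel
        while True:
            hits = list(pool.map(sample_property, range(n, n + 1000)))
            k, n = k + sum(hits), n + len(hits)
            lo, hi = wilson_interval(k, n)
            if hi - lo < 0.02:                   # stop once the estimate is tight
                break
    print(f"P(property) in [{lo:.3f}, {hi:.3f}] after {n} samples")
```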

  8. The JCSG MR pipeline: optimized alignments, multiple models and parallel searches

    International Nuclear Information System (INIS)

    Schwarzenbacher, Robert; Godzik, Adam; Jaroszewski, Lukasz

    2008-01-01

    The practical limits of molecular replacement can be extended by using several specifically designed protein models based on fold-recognition methods and by exhaustive searches performed in a parallelized pipeline. Updated results from the JCSG MR pipeline, which to date has solved 33 molecular-replacement structures with less than 35% sequence identity to the closest homologue of known structure, are presented. The success rate of molecular replacement (MR) falls considerably when search models share less than 35% sequence identity with their templates, but can be improved significantly by using fold-recognition methods combined with exhaustive MR searches. Models based on alignments calculated with fold-recognition algorithms are more accurate than models based on conventional alignment methods such as FASTA or BLAST, which are still widely used for MR. In addition, by designing MR pipelines that integrate phasing and automated refinement and allow parallel processing of such calculations, one can effectively increase the success rate of MR. Here, updated results from the JCSG MR pipeline are presented, which to date has solved 33 MR structures with less than 35% sequence identity to the closest homologue of known structure. By using difficult MR problems as examples, it is demonstrated that successful MR phasing is possible even in cases where the similarity between the model and the template can only be detected with fold-recognition algorithms. In the first step, several search models are built based on all homologues found in the PDB by fold-recognition algorithms. The models resulting from this process are used in parallel MR searches with different combinations of input parameters of the MR phasing algorithm. The putative solutions are subjected to rigid-body and restrained crystallographic refinement and ranked based on the final values of free R factor, figure of merit and deviations from ideal geometry. Finally, crystal packing and electron-density maps are checked to identify the correct solution.

  9. Parallel algorithms for interactive manipulation of digital terrain models

    Science.gov (United States)

    Davis, E. W.; Mcallister, D. F.; Nagaraj, V.

    1988-01-01

    Interactive three-dimensional graphics applications, such as terrain data representation and manipulation, require extensive arithmetic processing. Massively parallel machines are attractive for this application since they offer high computational rates, and grid-connected architectures provide a natural mapping for grid-based terrain models. Presented here are algorithms for data movement on the Massively Parallel Processor (MPP) in support of pan and zoom functions over large data grids. This is an extension of earlier work that demonstrated real-time performance of graphics functions on grids that were equal in size to the physical dimensions of the MPP. When the dimensions of a data grid exceed the processing array size, data are packed in the array memory. Windows of the total data grid are interactively selected for processing. Movement of packed data is needed to distribute items across the array for efficient parallel processing. Execution time for data movement was found to exceed that for the arithmetic aspects of the graphics functions. Performance figures are given for routines written in MPP Pascal.

  10. Phase behaviour and correlations of parallel hard squares: from highly confined to bulk systems

    International Nuclear Information System (INIS)

    González-Pinto, Miguel; Martínez-Ratón, Yuri; Varga, Szabolcs; Gurin, Peter; Velasco, Enrique

    2016-01-01

    We study a fluid of two-dimensional parallel hard squares in bulk and under confinement in channels, with the aim of evaluating the performance of fundamental-measure theory (FMT). To this purpose, we first analyse the phase behaviour of the bulk system using FMT and Percus–Yevick (PY) theory, and compare the results with molecular dynamics and Monte Carlo simulations. In a second step, we study the confined system and check the results against those obtained from the transfer matrix method and from our own Monte Carlo simulations. Squares are confined to channels with parallel walls at angles of 0° or 45° relative to the diagonals of the parallel hard squares, respectively, which allows for an assessment of the effect of the external-potential symmetry on the fluid structural properties. In general FMT overestimates bulk correlations, predicting the existence of a columnar phase (absent in simulations) prior to crystallization. The equation of state predicted by FMT compares well with simulations, although the PY approach with the virial route is better in some range of packing fractions. The FMT is highly accurate for the structure and correlations of the confined fluid due to the dimensional crossover property fulfilled by the theory. Both density profiles and equations of state of the confined system are accurately predicted by the theory. The highly non-uniform pair correlations inside the channel are also very well described by FMT. (paper)

  11. Computer model of a reverberant and parallel circuit coupling

    Science.gov (United States)

    Kalil, Camila de Andrade; de Castro, Maria Clícia Stelling; Cortez, Célia Martins

    2017-11-01

    The objective of the present study was to deepen knowledge about the functioning of neural circuits by implementing a signal-transmission model, based on graph theory, of a small network of neurons composed of interconnected reverberant and parallel circuits, in order to investigate the processing of signals in each circuit and the effects on the network output. For this, a program was developed in the C language and simulations were run using neurophysiological data obtained from the literature.

  12. Methods to model-check parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O. S.; McCune, W.; Lusk, E.

    2003-01-01

    We report on an effort to develop methodologies for formal verification of parts of the Multi-Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of communicating processes. While the individual components of the collection execute simple algorithms, their interaction leads to unexpected errors that are difficult to uncover by conventional means. Two verification approaches are discussed here: the standard model checking approach using the software model checker SPIN and the nonstandard use of a general-purpose first-order resolution-style theorem prover OTTER to conduct the traditional state space exploration. We compare modeling methodology and analyze performance and scalability of the two methods with respect to verification of MPD

  13. The influence of ocean conditions on two-phase flow instability in a parallel multi-channel system

    International Nuclear Information System (INIS)

    Yun Guo; Qiu, S.Z.; Su, G.H.; Jia, D.N.

    2008-01-01

    In this paper, the two-phase flow instability between multiple parallel channels (FIBM) operating under ocean conditions is studied theoretically. The physical and mathematical model of the multi-channel system, which is based on Lee and Pan [Lee, J.D., Pan, C., 1999. Dynamics of multiple parallel boiling channel systems with forced flows. Nuclear Engineering and Design 192, 31-44], is extended to analyze the influence of ocean conditions. The influence of ocean conditions on the FIBM is analyzed, particularly with respect to the periodic total mass flow rate and rolling motion. Furthermore, the instability oscillation trajectories of the multi-channel system are obtained on the phase plane of the inlet velocity and boiling boundary. Some of the trajectories show chaotic characteristics. The instability zone of a nine-channel system under rolling motion is then obtained.

  14. Comparison of 3-D Synthetic Aperture Phased-Array Ultrasound Imaging and Parallel Beamforming

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer; Jensen, Jørgen Arendt

    2014-01-01

    This paper demonstrates that synthetic aperture imaging (SAI) can be used to achieve real-time 3-D ultrasound phased-array imaging. It investigates whether SAI increases the image quality compared with the parallel beamforming (PB) technique for real-time 3-D imaging. Data are obtained using both simulations and measurements with an ultrasound research scanner and a commercially available 3.5-MHz 1024-element 2-D transducer array. To limit the probe cable thickness, 256 active elements are used in transmit and receive for both techniques. The two imaging techniques were designed for cardiac imaging, which...

  15. Mass-conserving subglacial hydrology in the Parallel Ice Sheet Model

    Science.gov (United States)

    Bueler, E.; Van Pelt, W.

    2014-07-01

    We describe and test a distributed subglacial hydrology model which combines a pressurized, plastic till with a system of water-filled, linked cavities which open through sliding-generated cavitation and close through ice creep. The addition of this sub-model to the Parallel Ice Sheet Model accomplishes three specific goals: (1) conservation of the mass of two-phase (solid/liquid) water in the ice sheet, (2) simulation of spatially- and temporally-variable basal shear stress from physical mechanisms based on a minimal number of free parameters, and (3) convergence under two-horizontal-dimensional grid refinement of the subglacial water amount and pressure. The model is a common generalization of at least four others: (i) the undrained plastic bed model of Tulaczyk et al. (2000b), (ii) a standard "routing" model used for identifying locations of subglacial lakes, (iii) the lumped englacial/subglacial model of Bartholomaus et al. (2011), and (iv) the elliptic-pressure-equation model of Schoof et al. (2012). We use englacial porosity as a regularization, and we preserve physical bounds on the pressure. In steady state the model generates a local functional relationship between water amount and pressure. We construct an exact solution of the coupled, steady equations which is used for verification of our explicit time-stepping, parallel numerical implementation. We demonstrate the model at scale by five year simulations of the entire Greenland ice sheet at 2 km horizontal resolution, with one million nodes in the hydrology grid.
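
    The sliding-generated opening and creep closure of the linked cavities can be sketched for a single grid cell as below; the coefficients, forcing and time stepping are invented for illustration, and the paper should be consulted for PISM's actual equations, parameter values and the mass-conserving transport that couples cells.

```python
# Hedged sketch of a cavitation-opening / creep-closure law of the kind used
# in linked-cavity subglacial hydrology models.  All numbers are invented.
W_R = 1.0          # bed roughness scale [m]
C_OPEN = 0.5       # cavitation opening coefficient [1/m]
C_CLOSE = 1.0e-24  # creep closure coefficient, of the order of ice softness [1/(Pa^3 s)]

def dW_dt(W, sliding_speed, effective_pressure):
    """Cavities open through sliding over bed bumps and close by ice creep."""
    opening = C_OPEN * sliding_speed * max(W_R - W, 0.0)
    closing = C_CLOSE * effective_pressure ** 3 * W
    return opening - closing

# crude forward-Euler integration of one grid cell towards steady state
W, dt = 0.0, 3600.0                                    # start dry, 1-hour steps
for _ in range(24 * 365):                              # one model year
    W += dt * dW_dt(W, sliding_speed=100.0 / 3.15e7,   # 100 m/yr in m/s
                    effective_pressure=1.0e5)          # 1 bar
print(f"cavity water layer after one year: {W:.3f} m")
```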

  16. The JCSG MR pipeline: optimized alignments, multiple models and parallel searches.

    Science.gov (United States)

    Schwarzenbacher, Robert; Godzik, Adam; Jaroszewski, Lukasz

    2008-01-01

    The success rate of molecular replacement (MR) falls considerably when search models share less than 35% sequence identity with their templates, but can be improved significantly by using fold-recognition methods combined with exhaustive MR searches. Models based on alignments calculated with fold-recognition algorithms are more accurate than models based on conventional alignment methods such as FASTA or BLAST, which are still widely used for MR. In addition, by designing MR pipelines that integrate phasing and automated refinement and allow parallel processing of such calculations, one can effectively increase the success rate of MR. Here, updated results from the JCSG MR pipeline are presented, which to date has solved 33 MR structures with less than 35% sequence identity to the closest homologue of known structure. By using difficult MR problems as examples, it is demonstrated that successful MR phasing is possible even in cases where the similarity between the model and the template can only be detected with fold-recognition algorithms. In the first step, several search models are built based on all homologues found in the PDB by fold-recognition algorithms. The models resulting from this process are used in parallel MR searches with different combinations of input parameters of the MR phasing algorithm. The putative solutions are subjected to rigid-body and restrained crystallographic refinement and ranked based on the final values of free R factor, figure of merit and deviations from ideal geometry. Finally, crystal packing and electron-density maps are checked to identify the correct solution. If this procedure does not yield a solution with interpretable electron-density maps, then even more alternative models are prepared. The structurally variable regions of a protein family are identified based on alignments of sequences and known structures from that family and appropriate trimmings of the models are proposed. All combinations of these trimmings are then used in subsequent parallel MR searches.

  17. Phase space simulation of collisionless stellar systems on the massively parallel processor

    International Nuclear Information System (INIS)

    White, R.L.

    1987-01-01

    A numerical technique for solving the collisionless Boltzmann equation describing the time evolution of a self-gravitating fluid in phase space was implemented on the Massively Parallel Processor (MPP). The code performs calculations for a two-dimensional phase-space grid (with one space and one velocity dimension). Some results from these calculations are presented. The execution speed of the code is comparable to the speed of a single processor of a Cray X-MP. Advantages and disadvantages of the MPP architecture for this type of problem are discussed. The nearest-neighbor connectivity of the MPP array does not pose a significant obstacle. Future MPP-like machines should have much more local memory and easier access to staging memory and disks in order to be effective for this type of problem.

  18. Parallel programming practical aspects, models and current limitations

    CERN Document Server

    Tarkov, Mikhail S

    2014-01-01

    Parallel programming is designed for the use of parallel computer systems to solve time-consuming problems that cannot be solved on a sequential computer in a reasonable time. These problems can be divided into two classes: (1) processing large data arrays (including processing images and signals in real time), and (2) simulation of complex physical processes and chemical reactions. For each of these classes, prospective methods are designed for solving problems. For data processing, one of the most promising technologies is the use of artificial neural networks. The particle-in-cell method and cellular automata are very useful for simulation. Problems of scalability of parallel algorithms and of the transfer of existing parallel programs to future parallel computers are very acute now. An important task is to optimize the use of the equipment (including the CPU cache) of parallel computers. Along with parallelizing information processing, it is essential to ensure the processing reliability by the relevant organization ...

  19. Parallel tools GUI framework-DOE SBIR phase I final technical report

    Energy Technology Data Exchange (ETDEWEB)

    Galarowicz, James [Argo Navis Technologies LLC., Annapolis, MD (United States)

    2013-12-05

    Many parallel performance, profiling, and debugging tools require a graphical way of displaying the very large datasets typically gathered from high performance computing (HPC) applications. Most tool projects create their graphical user interfaces (GUI) from scratch, often spending project resources on simply redeveloping commonly used infrastructure. Our goal was to create a multiplatform GUI framework, based on Nokia/Digia's popular Qt libraries, which would specifically address the needs of these parallel tools. The Parallel Tools GUI Framework (PTGF) uses a plugin architecture facilitating rapid GUI development and reduced development costs for new and existing tool projects by allowing the reuse of many common GUI elements, called “widgets.” Widgets created include 2D data visualizations, a source code viewer with syntax highlighting, and integrated help and welcome screens. Application programming interface (API) design was focused on minimizing the time to getting a functional tool working. Having a standard, unified, and user-friendly interface which operates on multiple platforms will benefit HPC application developers by reducing training time and allowing users to move between tools rapidly during a single session. However, Argo Navis Technologies LLC will not be submitting a DOE SBIR Phase II proposal and commercialization plan for the PTGF project. Our preliminary estimates of gross income over the next several years were based upon initial customer interest and income generated by similar projects. Unfortunately, as we further assessed the market during Phase I, we came to realize that there was not enough demand to warrant such a large investment. While we do find that the project is worth our continued investment of time and money, we do not think it worthy of the DOE's investment at this time. We are grateful that the DOE has afforded us the opportunity to make this assessment and come to this conclusion.

  20. Stochastic modelling of two-phase flows including phase change

    International Nuclear Information System (INIS)

    Hurisse, O.; Minier, J.P.

    2011-01-01

    Stochastic modelling has already been developed and applied for single-phase flows and incompressible two-phase flows. In this article, we propose an extension of this modelling approach to two-phase flows including phase change (e.g. for steam-water flows). Two aspects are emphasised: a stochastic model accounting for phase transition and a modelling constraint which arises from volume conservation. To illustrate the whole approach, some remarks are eventually proposed for two-fluid models. (authors)

  1. A Model of Parallel Kinematics for Machine Calibration

    DEFF Research Database (Denmark)

    Pedersen, David Bue; Bæk Nielsen, Morten; Kløve Christensen, Simon

    2016-01-01

    This research identifies that the rapid lift and repositioning capabilities of delta robots can reduce defects on extruded 3D printed parts when compared to traditional Cartesian motion systems. This is largely due to the fact that repositioning is so rapid that the extruded strand is instantly broken... Parallel kinematics have been adopted by more than 25 manufacturers of high-end desktop 3D printers [Wohlers Report (2015), p. 118] as well as by research projects such as the WASP project [WASP (2015)], a 12 meter tall linear delta robot for Additive Manufacture of large-scale components... the operator with a strong tool for easing this task. The kinematics and calibration of delta robots, in particular, are less researched than those of traditional Cartesian robots, for which tried-and-true methods for calibration are well known. A forward and reverse virtual model of a delta robot has been...

  2. Parallel imaging enhanced MR colonography using a phantom model.

    LENUS (Irish Health Repository)

    Morrin, Martina M

    2008-09-01

    To compare various Array Spatial and Sensitivity Encoding Technique (ASSET)-enhanced T2W SSFSE (single shot fast spin echo) and T1-weighted (T1W) 3D SPGR (spoiled gradient recalled echo) sequences for polyp detection and image quality at MR colonography (MRC) in a phantom model. Limitations of MRC using standard 3D SPGR T1W imaging include the long breath-hold required to cover the entire colon within one acquisition and the relatively low spatial resolution due to the long acquisition time. Parallel imaging using ASSET-enhanced T2W SSFSE and 3D T1W SPGR imaging results in much shorter imaging times, which allows for increased spatial resolution.

  3. Multiphysics & Parallel Kinematics Modeling of a 3DOF MEMS Mirror

    Directory of Open Access Journals (Sweden)

    Mamat N.

    2015-01-01

    This paper presents the modeling of a 3DoF electrothermally actuated micro-electro-mechanical (MEMS) mirror used to achieve scanning for optical coherence tomography (OCT) imaging. Since the device is integrated into an OCT endoscopic probe, it is desired that the optical scanner have a small footprint for minimum invasiveness, a large and flat optical aperture for a large scanning range, and low driving voltage and low power consumption for safety reasons. With a footprint of 2mm×2mm, the MEMS scanner, also called a Tip-Tilt-Piston micro-mirror, can perform two rotations around the x and y-axes and a vertical translation along the z-axis. This work develops a complete model and experimental characterization. The modeling is divided into two parts: multiphysics characterization of the actuators and parallel kinematics studies of the overall system. With proper experimental procedures, we are able to validate the model via the Visual Servoing Platform (ViSP). The results give a detailed overview of the performance of the mirror platform while varying the applied voltage at a stable working frequency. The paper also presents a discussion on the MEMS control system based on several scanning trajectories.

  4. Steady state flow analysis of two-phase natural circulation in multiple parallel channel loop

    Energy Technology Data Exchange (ETDEWEB)

    Bhusare, V.H. [Homi Bhabha National Institute, Anushaktinagar, Mumbai 400094 (India); Bagul, R.K. [Reactor Engineering Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400085 (India); Joshi, J.B., E-mail: jbjoshi@gmail.com [Homi Bhabha National Institute, Anushaktinagar, Mumbai 400094 (India); Department of Chemical Engineering, Institute of Chemical Technology, Matunga, Mumbai 400019 (India); Nayak, A.K. [Homi Bhabha National Institute, Anushaktinagar, Mumbai 400094 (India); Reactor Engineering Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400085 (India); Kannan, Umasankari [Homi Bhabha National Institute, Anushaktinagar, Mumbai 400094 (India); Reactor Physics Design Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400085 (India); Pilkhwal, D.S. [Reactor Engineering Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400085 (India); Vijayan, P.K. [Homi Bhabha National Institute, Anushaktinagar, Mumbai 400094 (India); Reactor Engineering Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400085 (India)

    2016-08-15

    Highlights: • Liquid circulation velocity increases with increasing superficial gas velocity. • Total two-phase pressure drop decreases with increasing superficial gas velocity. • Channels with larger driving force have maximum circulation velocities. • Good agreement between experimental and model predictions. - Abstract: In this work, a steady state flow analysis has been carried out experimentally in order to estimate the liquid circulation velocities and two-phase pressure drop in an air–water multichannel circulating loop. Experiments were performed in a 15-channel circulating loop. Single-phase and two-phase pressure drops in the channels have been measured experimentally and compared with the theoretical model of Joshi et al. (1990). Experimental measurements show good agreement with the model.

  5. A time-variant analysis of the 1/f^(2) phase noise in CMOS parallel LC-Tank quadrature oscillators

    DEFF Research Database (Denmark)

    Andreani, Pietro

    2006-01-01

    This paper presents a study of 1/f² phase noise in quadrature oscillators built by connecting two differential LC-tank oscillators in a parallel fashion. The analysis clearly demonstrates the necessity of adopting a time-variant theory of phase noise, where a more simplistic, time-invariant approach fails to explain numerical simulation results even at the qualitative level. Two topologies of 5-GHz parallel quadrature oscillators are considered, and compact but nevertheless highly general, closed-form formulas are derived for the phase noise caused by the losses in the LC...

  6. Known-plaintext attack on the double phase encoding and its implementation with parallel hardware

    Science.gov (United States)

    Wei, Hengzheng; Peng, Xiang; Liu, Haitao; Feng, Songlin; Gao, Bruce Z.

    2008-03-01

    A known-plaintext attack on the double random phase encoding (DRPE) scheme, implemented with parallel hardware, is presented. DRPE is one of the most representative optical cryptosystems, developed in the mid-1990s, and has spawned quite a few variants since then. Although the DRPE encryption system is strong against a brute-force attack, its inherent architecture leaves a weakness due to its linearity. Recently the real security strength of this opto-cryptosystem has been doubted and analyzed from the cryptanalysis point of view. In this presentation, we demonstrate that optical cryptosystems based on the DRPE architecture are vulnerable to a known-plaintext attack. With this attack the two encryption keys in the DRPE can be recovered with the help of a phase retrieval technique. In our approach, we adopt the hybrid input-output (HIO) algorithm to recover the random phase key in the object domain and then infer the key in the frequency domain. A single plaintext-ciphertext pair is sufficient to create the vulnerability, and the attack does not require a particular choice of plaintext. The phase retrieval technique based on HIO is an iterative process of Fourier transforms, so it fits well with a hardware implementation on a digital signal processor (DSP). We make use of a high-performance DSP to accomplish the known-plaintext attack. Compared with the software implementation, the hardware implementation is much faster. The performance of this DSP-based cryptanalysis system is also evaluated.
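
    The hybrid input-output (HIO) phase retrieval mentioned in this record follows the general Fienup iteration. Below is a minimal, illustrative NumPy sketch of a generic HIO loop, assuming a known Fourier-domain amplitude and an object-domain support constraint; it is not the authors' DSP key-recovery code, and the array names, beta value, and constraints are assumptions for illustration.

        import numpy as np

        def hio(magnitude, support, n_iter=200, beta=0.9, rng=None):
            """Generic Fienup hybrid input-output (HIO) iteration.

            magnitude : measured Fourier-domain amplitude (|F|)
            support   : boolean mask of the object-domain support constraint
            Returns an object-domain estimate whose Fourier amplitude matches `magnitude`.
            """
            rng = np.random.default_rng() if rng is None else rng
            # start from a random object-domain guess
            g = rng.random(magnitude.shape)
            for _ in range(n_iter):
                G = np.fft.fft2(g)
                # impose the measured amplitude, keep the current phase estimate
                G = magnitude * np.exp(1j * np.angle(G))
                g_prime = np.real(np.fft.ifft2(G))
                # HIO update: keep g' where the constraints hold, push back elsewhere
                violate = (~support) | (g_prime < 0)
                g = np.where(violate, g - beta * g_prime, g_prime)
            return g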

  7. Massively Parallel Haplotyping on Microscopic Beads for the High-Throughput Phase Analysis of Single Molecules

    Science.gov (United States)

    Tiemann-Boege, Irene

    2012-01-01

    In spite of the many advances in haplotyping methods, it is still very difficult to characterize rare haplotypes in tissues and different environmental samples or to accurately assess the haplotype diversity in large mixtures. This would require a haplotyping method capable of analyzing the phase of single molecules with an unprecedented throughput. Here we describe such a haplotyping method, capable of analyzing in parallel hundreds of thousands of single molecules in one experiment. In this method, multiple PCR reactions amplify different polymorphic regions of a single DNA molecule on a magnetic bead compartmentalized in an emulsion drop. The allelic states of the amplified polymorphisms are identified with fluorescently labeled probes that are then decoded from images taken of the arrayed beads by a microscope. This method can evaluate the phase of up to 3 polymorphisms separated by up to 5 kilobases in hundreds of thousands of single molecules. We tested the sensitivity of the method by measuring the number of mutant haplotypes synthesized by four different commercially available enzymes: Phusion, Platinum Taq, Titanium Taq, and Phire. The digital nature of the method makes it highly sensitive, detecting haplotype ratios of less than 1:10,000. We also accurately quantified chimera formation during the exponential phase of PCR by different DNA polymerases. PMID:22558329

  8. Communication Improvement for the LU NAS Parallel Benchmark: A Model for Efficient Parallel Relaxation Schemes

    Science.gov (United States)

    Yarrow, Maurice; VanderWijngaart, Rob; Kutler, Paul (Technical Monitor)

    1997-01-01

    The first release of the MPI version of the LU NAS Parallel Benchmark (NPB2.0) performed poorly compared to its companion NPB2.0 codes. The later LU release (NPB2.1 & 2.2) runs up to two and a half times faster, thanks to a revised point access scheme and related communications scheme. The new scheme sends substantially fewer messages, is cache-friendly, and has a better load balance. We detail the observations and modifications that resulted in this efficiency improvement, and show that the poor behavior of the original code resulted from deriving a message passing scheme from an algorithm originally devised for a vector architecture.

  9. Parallel Genetic Algorithms for calibrating Cellular Automata models: Application to lava flows

    International Nuclear Information System (INIS)

    D'Ambrosio, D.; Spataro, W.; Di Gregorio, S.; Calabria Univ., Cosenza; Crisci, G.M.; Rongo, R.; Calabria Univ., Cosenza

    2005-01-01

    Cellular Automata are highly nonlinear dynamical systems which are suitable for simulating natural phenomena whose behaviour may be specified in terms of local interactions. The Cellular Automata model SCIARA, developed for the simulation of lava flows, demonstrated to be able to reproduce the behaviour of Etnean events. However, in order to apply the model for the prediction of future scenarios, a thorough calibration phase is required. This work presents the application of Genetic Algorithms, general-purpose search algorithms inspired by natural selection and genetics, for the parameter optimisation of the model SCIARA. Difficulties due to the elevated computational time suggested the adoption of a Master-Slave Parallel Genetic Algorithm for the calibration of the model with respect to the 2001 Mt. Etna eruption. Results demonstrated the usefulness of the approach, both in terms of computing time and quality of the performed simulations.
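
    In a master-slave parallel GA of the kind described in this record, the expensive fitness evaluations (full CA simulations) are farmed out to worker processes while the master performs selection, crossover, and mutation. The following is a minimal sketch under those assumptions using Python's multiprocessing; the fitness function, population size, and genetic operators are placeholders for illustration, not the SCIARA calibration code.

        import random
        from multiprocessing import Pool

        def fitness(params):
            # placeholder: a real calibration would run one CA simulation with
            # `params` and score it against the observed lava-flow map
            return -sum((p - 0.5) ** 2 for p in params)

        def evolve(pop_size=32, n_params=6, generations=50, workers=8):
            pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
            with Pool(workers) as pool:                 # slaves evaluate fitness in parallel
                for _ in range(generations):
                    scores = pool.map(fitness, pop)     # master farms out the evaluations
                    ranked = [p for _, p in sorted(zip(scores, pop), reverse=True)]
                    parents = ranked[: pop_size // 2]
                    # simple crossover + mutation performed by the master
                    children = []
                    while len(children) < pop_size - len(parents):
                        a, b = random.sample(parents, 2)
                        cut = random.randrange(1, n_params)
                        child = a[:cut] + b[cut:]
                        if random.random() < 0.2:
                            child[random.randrange(n_params)] = random.random()
                        children.append(child)
                    pop = parents + children
            return max(pop, key=fitness)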

  10. Design and Control of Parallel Three Phase Voltage Source Inverters in Low Voltage AC Microgrid

    Directory of Open Access Journals (Sweden)

    El Hassane Margoum

    2017-01-01

    Design and hierarchical control of three-phase parallel Voltage Source Inverters (VSIs) are developed in this paper. The control scheme is based on the synchronous reference frame and consists of primary and secondary control levels. The primary control consists of the droop control and the virtual output impedance loops. This control level is designed to share the active and reactive power correctly between the connected VSIs in order to avoid undesired circulating current and overload of the connected VSIs. The secondary control is designed to remove the magnitude and frequency deviations caused by the primary control. The control structure is validated through dynamic simulations. The obtained results demonstrate the effectiveness of the control structure.
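
    The primary control level referred to here typically relies on the conventional P-f / Q-V droop characteristics. A minimal sketch of such droop equations is given below; the droop slopes and nominal setpoints are illustrative assumptions, not values from the paper.

        def droop_setpoints(P, Q, f_nom=50.0, V_nom=311.0, m_p=1e-4, n_q=1e-3):
            """Conventional P-f / Q-V droop used in the primary control layer.

            P, Q      : measured (filtered) active and reactive power of the inverter
            m_p, n_q  : droop slopes (illustrative values only)
            Returns the frequency and voltage-amplitude references for the inner loops.
            """
            f_ref = f_nom - m_p * P        # active power lowers the frequency reference
            V_ref = V_nom - n_q * Q        # reactive power lowers the voltage amplitude
            return f_ref, V_ref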

  11. Parallel optical sorting of biological cells using the generalized phase contrast method

    DEFF Research Database (Denmark)

    Rindorf, Lars; Bu, Minqiang; Glückstad, Jesper

    2014-01-01

    Optical forces are used to fixate biological cells with optical tweezers where numerous biological parameters and phenomena can be studied. Optical beams carry a small momentum which generates a weak optical force, but on a cellular level this force is strong enough to allow for manipulation... of biological cells in microfluidic systems exclusively using light. We demonstrate an optical cell sorter that uses simultaneous manipulation by multiple laser beams using the Generalized Phase Contrast method (GPC). The basic principle in an optical sorter is that the radiation force of the optical beam can... push the biological cell from one microfluidic sheath flow to another. By incorporating a spatial light modulator the manipulation can be made parallel with multiple laser beams. We claim advantages over the serial optical sorters with only a single laser beam that have been demonstrated by others...

  12. Resin Capsules: Permeable Containers for Parallel/Combinatorial Solid-Phase Organic Synthesis

    OpenAIRE

    Bouillon, Isabelle; Soural, Miroslav; Krchňák, Viktor

    2008-01-01

    A resin capsule is a permeable container for resin beads designed for multiple/combinatorial solid-phase organic synthesis. Resin capsules consist of a high density polyethylene ring sealed with PEEK mesh on both sides. The cylindrical shape of resin capsules enabled space-saving packing into plastic column-like reaction vessels commonly used for solid-phase organic synthesis. Resin capsules have been evaluated for their use in combinatorial synthesis, and a set of model compounds with excell...

  13. Simultaneous determination of acetylsalicylic acid, paracetamol and caffeine using solid-phase molecular fluorescence and parallel factor analysis.

    Science.gov (United States)

    Alves, Julio Cesar L; Poppi, Ronei J

    2009-05-29

    This paper describes the determination of acetylsalicylic acid (ASA), paracetamol and caffeine in pharmaceutical formulations using solid-phase molecular fluorescence and second-order multivariate calibration. This methodology is applicable even in the presence of unknown interferences and with spectral overlap of the components in the mixture. Parallel factor analysis (PARAFAC) was used for model development, whose effectiveness was demonstrated by analysis of variance (ANOVA). Errors below 10% were obtained for all compounds using an external validation set. Benefits not offered by the reference methods, such as low cost, no need for sample preparation, simple and fast analysis using a fluorescence spectrometer, and no generation of waste, make this method very attractive, allowing for the simultaneous determination of these compounds with good reproducibility and accuracy.

  14. SCORPIO: A Scalable Two-Phase Parallel I/O Library With Application To A Large Scale Subsurface Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; Sripathi, Vamsi [Intel Corporation; Mills, Richard T [ORNL; Hammond, Glenn [Pacific Northwest National Laboratory (PNNL); Mahinthakumar, Kumar [North Carolina State University

    2013-01-01

    Inefficient parallel I/O is known to be a major bottleneck among scientific applications employed on supercomputers as the number of processor cores grows into the thousands. Our prior experience indicated that parallel I/O libraries such as HDF5 that rely on MPI-IO do not scale well beyond 10K processor cores, especially on parallel file systems (like Lustre) with a single point of resource contention. Our previous optimization efforts for a massively parallel multi-phase and multi-component subsurface simulator (PFLOTRAN) led to a two-phase I/O approach at the application level where a set of designated processes participate in the I/O process by splitting the I/O operation into a communication phase and a disk I/O phase. The designated I/O processes are created by splitting the MPI global communicator into multiple sub-communicators. The root process in each sub-communicator is responsible for performing the I/O operations for the entire group and then distributing the data to the rest of the group. This approach resulted in over 25X speedup in HDF I/O read performance and 3X speedup in write performance for PFLOTRAN at over 100K processor cores on the ORNL Jaguar supercomputer. This research describes the design and development of a general purpose parallel I/O library, SCORPIO (SCalable block-ORiented Parallel I/O), that incorporates our optimized two-phase I/O approach. The library provides a simplified higher-level abstraction to the user, sitting atop existing parallel I/O libraries (such as HDF5), and implements optimized I/O access patterns that can scale to larger numbers of processors. Performance results with standard benchmark problems and PFLOTRAN indicate that our library is able to maintain the same speedups as before with the added flexibility of being applicable to a wider range of I/O intensive applications.
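
    The two-phase I/O pattern described in this record can be illustrated with a short mpi4py sketch: the global communicator is split into sub-communicators, the root of each sub-communicator performs the disk I/O for its group, and the data are then distributed within the group. This is an assumption-laden illustration of the general pattern, not the SCORPIO API; the file layout, group count, and helper name are hypothetical.

        from mpi4py import MPI
        import numpy as np

        def two_phase_read(path, n_io_groups, count):
            """Illustrative two-phase read: one designated process per group touches
            the file, then scatters the data to the rest of its sub-communicator."""
            world = MPI.COMM_WORLD
            rank = world.Get_rank()

            # Phase 0: split the global communicator into I/O groups
            color = rank % n_io_groups
            sub = world.Split(color=color, key=rank)
            group_size = sub.Get_size()

            data = None
            if sub.Get_rank() == 0:
                # Phase 1 (disk I/O): only the group root reads its slice of the file
                offset = color * group_size * count
                data = np.fromfile(path, dtype=np.float64,
                                   count=group_size * count, offset=offset * 8)
                data = data.reshape(group_size, count)

            # Phase 2 (communication): distribute the slice within the group
            local = np.empty(count, dtype=np.float64)
            sub.Scatter(data, local, root=0)
            return local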

  15. An improved design of virtual output impedance loop for droop-controlled parallel three-phase Voltage Source Inverters

    DEFF Research Database (Denmark)

    Wang, Xiongfei; Blaabjerg, Frede; Chen, Zhe

    2012-01-01

    The virtual output impedance loop is known as an effective way to enhance the load sharing stability and quality of droop-controlled parallel inverters. This paper proposes an improved design of the virtual output impedance loop for parallel three-phase voltage source inverters. In the approach... -sequence virtual resistance even in the case of feeding a balanced three-phase load. Furthermore, to adapt to the variety of unbalanced loads, a dynamically-tuned negative-sequence resistance loop is designed, such that a good compromise between the quality of inverter output voltage and the performance of load sharing can be obtained. Finally, laboratory test results of two parallel three-phase voltage source inverters are shown to confirm the validity of the proposed method.

  16. A bio-mathematical model for parallel organs and its use in ranking radiation treatment plans.

    Science.gov (United States)

    Wang, Li; Li, Wenhui; Bai, Han; Chang, Li; Qin, Jiyong; Hou, Yu

    2012-12-01

    To develop a new bio-mathematical model, named the LQ-based parallel-organ model, which can overcome the limitation of interpreting simple dose-volume information so as to rank the radiotoxicity of parallel organs in the same patient. A parallel organ consists of Function Subunits (FSUs), with each FSU being equal and representative in functional status. Based on the Linear-Quadratic model (LQ model), we derived a bio-mathematical model to calculate the surviving cell number for the radiation dose response. We then compared the surviving cell numbers to rank treatment plans for the same patient. Ninety 3D plans from forty-five randomly selected lung cancer patients were generated using the ELEKTA Precise 2.12 treatment planning system. The LQ-based parallel-organ model was tested against the widely used Lyman-Kutcher-Burman (LKB) model. There was no distinct statistical difference in plan ranking between the LQ-based parallel-organ model and the LKB model (P = 0.475). Ranking plans by V(x), Mean Lung Dose (MLD) and the LQ-based parallel-organ model shows that there was no distinct statistical difference between V(5), V(10), V(20), MLD and the LQ-based parallel-organ model, respectively (all Ps > 0.05). The proposed LQ-based parallel-organ model was found to be efficient and reliable for ranking treatment plans for the same patient.
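
    For reference, the standard linear-quadratic (LQ) cell-survival expression underlying such a model, together with one illustrative way of aggregating surviving cells over the FSUs of a parallel organ, can be written as below; the exact aggregation used in the paper may differ.

        % Standard LQ survival fraction for a dose D delivered to one FSU:
        S(D) = e^{-(\alpha D + \beta D^{2})}
        % Illustrative surviving-cell count over N FSUs, FSU i holding n_i cells
        % and receiving dose D_i from the plan's dose distribution:
        N_{\mathrm{surv}} = \sum_{i=1}^{N} n_i \, e^{-(\alpha D_i + \beta D_i^{2})}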

  17. High-Performance Control of Paralleled Three-Phase Inverters for Residential Microgrid Architectures Based on Online Uninterruptable Power Systems

    DEFF Research Database (Denmark)

    Zhang, Chi; Guerrero, Josep M.; Vasquez, Juan Carlos

    2015-01-01

    In this paper, a control strategy for the parallel operation of three-phase inverters forming an online uninterruptible power system (UPS) is presented. The UPS system consists of a cluster of paralleled inverters with LC filters directly connected to an AC critical bus and an AC/DC forming a DC... synchronization with an external real/fictitious utility, and critical bus voltage restoration. Constant transient and steady-state frequency, active, reactive and harmonic power sharing, and global phase-locked loop resynchronization capability are achieved. Detailed system topology and control architecture...

  18. Proposal of numerical model for current distribution analysis in high temperature superconducting parallel conductor

    Energy Technology Data Exchange (ETDEWEB)

    Watabe, Akira; Fukui, Satoshi; Sato, Takao; Yamaguchi, Mitsugi

    2004-10-01

    A numerical model to calculate the current density distribution in a parallel conductor assembled from multiple high-temperature superconducting tapes was proposed. Numerical calculations of the current distribution in a parallel conductor of three high-temperature superconducting tapes were performed using the developed model. The numerical results showed that the current density distribution in the parallel conductor was affected by the tape arrangement in the conductor.

  19. Phase transition in tensor models

    Energy Technology Data Exchange (ETDEWEB)

    Delepouve, Thibault [Laboratoire de Physique Théorique, CNRS UMR 8627, Université Paris Sud,91405 Orsay Cedex (France); Centre de Physique Théorique, CNRS UMR 7644, École Polytechnique,91128 Palaiseau Cedex (France); Gurau, Razvan [Centre de Physique Théorique, CNRS UMR 7644, École Polytechnique,91128 Palaiseau Cedex (France); Perimeter Institute for Theoretical Physics,31 Caroline St. N, N2L 2Y5, Waterloo, ON (Canada)

    2015-06-25

    Generalizing matrix models, tensor models generate dynamical triangulations in any dimension and support a 1/N expansion. Using the intermediate field representation we explicitly rewrite a quartic tensor model as a field theory for a fluctuation field around a vacuum state corresponding to the resummation of the entire leading order in 1/N (a resummation of the melonic family). We then prove that the critical regime in which the continuum limit in the sense of dynamical triangulations is reached is precisely a phase transition in the field theory sense for the fluctuation field.

  20. Stiffness Model of a 3-DOF Parallel Manipulator with Two Additional Legs

    Directory of Open Access Journals (Sweden)

    Guang Yu

    2014-10-01

    This paper investigates the stiffness modelling of a 3-DOF parallel manipulator with two additional legs. The stiffness model in six directions of the 3-DOF parallel manipulator with two additional legs is derived by performing condensation of DOFs for the joint connection and treatment of the fixed-end connections. Moreover, this modelling method is used to derive the stiffness model of the manipulator with zero or one additional leg. Two performance indices are given to compare the stiffness of the parallel manipulator with two additional legs with that of the manipulators with zero or one additional leg. The method can not only be used to derive the stiffness model of a redundant parallel manipulator, but also to model the stiffness of non-redundant parallel manipulators.

  1. Sliding-mode control of a six-phase series/parallel connected two induction motors drive.

    Science.gov (United States)

    Abjadi, Navid R

    2014-11-01

    In this paper, a parallel configuration is proposed for two quasi six-phase induction motors (QIMs) to feed them from a single six-phase voltage source inverter (VSI). A direct torque control (DTC) based on input-output feedback linearization (IOFL) combined with sliding mode (SM) control is used for each QIM in the stationary reference frame. In addition, an adaptive scheme is employed to solve the motor resistance mismatch problem. The effectiveness and capability of the proposed method are shown by practical results obtained for two QIMs in series/parallel connections supplied from a single VSI. The decoupling control of the QIMs and the feasibility of their torque and flux control are investigated. Moreover, a complete comparison between series and parallel connections of two QIMs is given. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Parallel Algorithm for Solving TOV Equations for Sequence of Cold and Dense Nuclear Matter Models

    Science.gov (United States)

    Ayriyan, Alexander; Buša, Ján; Grigorian, Hovik; Poghosyan, Gevorg

    2018-04-01

    We introduce a parallel algorithm for the simulation of neutron star configurations for a set of equation of state (EoS) models. The performance of the parallel algorithm has been investigated for a test set of EoS models on two computational systems. The algorithm scales well with MPI on modern CPUs, and this investigation also allowed us to compare two different types of computational nodes.

  3. Calibrationless Parallel Magnetic Resonance Imaging: A Joint Sparsity Model

    Directory of Open Access Journals (Sweden)

    Angshul Majumdar

    2013-12-01

    State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity map for SENSE and SMASH, and interpolation weights for GRAPPA and SPIRiT. Thus all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we have proposed a parallel MRI technique that does not require any calibration but yields reconstruction results that are on par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method requires solving non-convex analysis and synthesis prior joint-sparsity problems. This work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: an eight-channel brain and an eight-channel Shepp-Logan phantom. Two sampling methods were used: Variable Density Random sampling and non-Cartesian Radial sampling. An acceleration factor of 4 was used for the brain data and a factor of 6 for the phantom. The reconstruction results were quantitatively evaluated based on the Normalised Mean Squared Error between the reconstructed image and the original. The qualitative evaluation was based on the actual reconstructed images. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods (CS SENSE and l1SPIRiT) and two calibration-free techniques (Distributed CS and SAKE). Our method yields better reconstruction results than all of them.
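
    The Normalised Mean Squared Error used for the quantitative evaluation is commonly defined as below; the exact normalisation used by the authors may differ slightly.

        % NMSE between a reconstruction \hat{x} and the original image x:
        \mathrm{NMSE}(\hat{x}, x) = \frac{\lVert \hat{x} - x \rVert_2^{2}}{\lVert x \rVert_2^{2}}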

  4. Two-phase flow steam generator simulations on parallel computers using domain decomposition method

    International Nuclear Information System (INIS)

    Belliard, M.

    2003-01-01

    Within the framework of the Domain Decomposition Method (DDM), we present industrial steady state two-phase flow simulations of PWR Steam Generators (SG) using iteration-by-sub-domain methods: standard and Adaptive Dirichlet/Neumann methods (ADN). The averaged mixture balance equations are solved by a Fractional-Step algorithm, jointly with the Crank-Nicholson scheme and the Finite Element Method. The algorithm works with overlapping or non-overlapping sub-domains and with conforming or nonconforming meshing. Computations are run on PC networks or on massively parallel mainframe computers. A CEA code-linker and the PVM package are used (master-slave context). SG mock-up simulations, involving up to 32 sub-domains, highlight the efficiency (speed-up, scalability) and the robustness of the chosen approach. With the DDM, the computational problem size is easily increased to about 1,000,000 cells and the CPU time is significantly reduced. The difficulties related to industrial use are also discussed. (author)

  5. A design procedure for the phase-controlled parallel-loaded resonant inverter

    Science.gov (United States)

    King, Roger J.

    1989-01-01

    High-frequency-link power conversion and distribution based on a resonant inverter (RI) has been recently proposed. The design of several topologies is reviewed, and a simple approximate design procedure is developed for the phase-controlled parallel-loaded RI. This design procedure seeks to ensure the benefits of resonant conversion and is verified by data from a laboratory 2.5 kVA, 20-kHz converter. A simple phasor analysis is introduced as a useful approximation for design purposes. The load is considered to be a linear impedance (or an ac current sink). The design procedure is verified using a 2.5-kVA 20-kHz RI. Also obtained are predictable worst-case ratings for each component of the resonant tank circuit and the inverter switches. For a given load VA requirement, below-resonance operation is found to result in a significantly lower tank VA requirement. Under transient conditions such as load short-circuit, a reversal of the expected commutation sequence is possible.
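
    The phasor (fundamental-frequency) approximation mentioned in this record treats the load as a linear impedance in parallel with the resonant capacitor of the parallel-loaded tank. The short sketch below computes the resulting tank gain at a few frequencies; the component values are illustrative assumptions, not the 2.5-kVA design values from the paper.

        import cmath
        import math

        def parallel_loaded_gain(f, L, C, R):
            """Fundamental-frequency phasor gain V_load / V_inverter of a
            parallel-loaded resonant tank: series inductor L feeding a capacitor C
            that is in parallel with the (linearised) load resistance R."""
            w = 2 * math.pi * f
            Zc = 1 / (1j * w * C)
            Zp = (R * Zc) / (R + Zc)          # load in parallel with the tank capacitor
            return Zp / (1j * w * L + Zp)

        # Illustrative numbers only (not from the paper): a roughly 20 kHz tank
        L, C, R = 100e-6, 0.633e-6, 10.0
        f0 = 1 / (2 * math.pi * math.sqrt(L * C))   # resonant frequency, ~20 kHz
        for f in (0.8 * f0, f0, 1.2 * f0):
            g = parallel_loaded_gain(f, L, C, R)
            print(f"f = {f/1e3:6.1f} kHz  |gain| = {abs(g):5.2f}  "
                  f"phase = {math.degrees(cmath.phase(g)):6.1f} deg")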

  6. Cross-Circulating Current Suppression Method for Parallel Three-Phase Two-Level Inverters

    DEFF Research Database (Denmark)

    Wei, Baoze; Guerrero, Josep M.; Guo, Xiaoqiang

    2015-01-01

    The parallel architecture is very popular for power inverters to increase the power level. This paper presents a method for the parallel operation of inverters in an ac-distributed system, to suppress the cross-circulating current based on virtual impedance without a current-sharing bus and communication bus. Simulation and experimental results verify the effectiveness of the control method.

  7. Unified dataflow model for the analysis of data and pipeline parallelism, and buffer sizing

    OpenAIRE

    Hausmans, J.P.H.M.; Geuns, S.J.; Wiggers, M.H.; Bekooij, Marco Jan Gerrit

    2014-01-01

    Real-time stream processing applications such as software defined radios are usually executed concurrently on multiprocessor systems. Exploiting coarse-grained data parallelism by duplicating tasks is often required, besides pipeline parallelism, to meet the temporal constraints of the applications. However, no unified model and analysis method exists that can be used to determine the required amount of data and pipeline parallelism, and buffer sizes simultaneously. This paper presents an ana...

  8. Parallel implementation of high-speed, phase diverse atmospheric turbulence compensation method on a neural network-based architecture

    Science.gov (United States)

    Arrasmith, William W.; Sullivan, Sean F.

    2008-04-01

    Phase diversity imaging methods work well in removing atmospheric turbulence and some system effects from predominantly near-field imaging systems. However, phase diversity approaches can be computationally intensive and slow. We present a recently adapted, high-speed phase diversity method using a conventional, software-based neural network paradigm. This phase-diversity method has the advantage of eliminating many time-consuming, computationally heavy calculations and directly estimates the optical transfer function from the entrance pupil phases or phase differences. Additionally, this method is more accurate than conventional Zernike-based phase diversity approaches and lends itself to implementation on parallel software or hardware architectures. We use computer simulation to demonstrate how this high-speed, phase diverse imaging method can be implemented on a parallel, high-speed, neural network-based architecture, specifically the Cellular Neural Network (CNN). The CNN architecture was chosen as a representative, neural network-based processing environment because 1) the CNN can be implemented in 2-D or 3-D processing schemes, 2) it can be implemented in hardware or software, 3) recent 2-D implementations of CNN technology have shown a 3 orders of magnitude superiority in speed, area, or power over equivalent digital representations, and 4) a complete development environment exists. We also provide a short discussion on processing speed.

  9. The Potsdam Parallel Ice Sheet Model (PISM-PIK – Part 1: Model description

    Directory of Open Access Journals (Sweden)

    R. Winkelmann

    2011-09-01

    We present the Potsdam Parallel Ice Sheet Model (PISM-PIK), developed at the Potsdam Institute for Climate Impact Research to be used for simulations of large-scale ice sheet-shelf systems. It is derived from the Parallel Ice Sheet Model (Bueler and Brown, 2009). Velocities are calculated by superposition of two shallow stress balance approximations within the entire ice covered region: the shallow ice approximation (SIA) is dominant in grounded regions and accounts for shear deformation parallel to the geoid. The plug-flow type shallow shelf approximation (SSA) dominates the velocity field in ice shelf regions and serves as a basal sliding velocity in grounded regions. Ice streams can be identified diagnostically as regions with a significant contribution of membrane stresses to the local momentum balance. All lateral boundaries in PISM-PIK are free to evolve, including the grounding line and ice fronts. Ice shelf margins in particular are modeled using Neumann boundary conditions for the SSA equations, reflecting a hydrostatic stress imbalance along the vertical calving face. The ice front position is modeled using a subgrid-scale representation of calving front motion (Albrecht et al., 2011) and a physically-motivated calving law based on horizontal spreading rates. The model is tested in experiments from the Marine Ice Sheet Model Intercomparison Project (MISMIP). A dynamic equilibrium simulation of Antarctica under present-day conditions is presented in Martin et al. (2011).

  10. Optical path difference measurements with a two-step parallel phase shifting interferometer based on a modified Michelson configuration

    Science.gov (United States)

    Toto-Arellano, Noel Ivan; Serrano-Garcia, David I.; Rodriguez-Zurita, Gustavo

    2017-09-01

    We report an optical implementation of a parallel phase-shifting quasi-common path interferometer using two modified Michelson interferometers to generate two interferograms. By using a displaceable polarizer array placed on the image plane, we can obtain four phase-shifted interferograms in two captures. The system operates as a quasi-common path interferometer generating four beams, which are brought to interference through alignment procedures on the mirrors of the Michelson configurations. The optical phase data are retrieved using the well-known four-step algorithm. To present the capabilities of the system, experimental results obtained from transparent structures are presented.
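
    The well-known four-step algorithm referenced here recovers the wrapped phase from four interferograms shifted by quarter-period steps. A minimal NumPy sketch is given below, assuming ideal shifts of 0, π/2, π, and 3π/2; the calibration details of the polarizer-array implementation are not modelled.

        import numpy as np

        def four_step_phase(I1, I2, I3, I4):
            """Standard four-step phase-shifting algorithm.

            I1..I4 : interferograms recorded with phase shifts 0, pi/2, pi, 3*pi/2.
            Returns the wrapped optical phase in (-pi, pi].
            """
            return np.arctan2(I4 - I2, I1 - I3)

        def unwrap_2d(phase):
            # simple separable unwrapping along each axis (adequate for smooth phases)
            return np.unwrap(np.unwrap(phase, axis=0), axis=1)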

  11. Efficient Out of Core Sorting Algorithms for the Parallel Disks Model.

    Science.gov (United States)

    Kundeti, Vamsi; Rajasekaran, Sanguthevar

    2011-11-01

    In this paper we present efficient algorithms for sorting on the Parallel Disks Model (PDM). Numerous asymptotically optimal algorithms have been proposed in the literature. However, many of these merge-based algorithms have large underlying constants in the time bounds, because they suffer from the lack of read parallelism on the PDM. The irregular consumption of the runs during the merge affects the read parallelism and contributes to the increased sorting time. In this paper we first introduce a novel idea called dirty sequence accumulation that improves the read parallelism. Secondly, we show analytically that this idea can reduce the number of parallel I/Os required to sort the input close to the lower bound of [Formula: see text]. We experimentally verify our dirty sequence idea with the standard R-Way merge and show that our idea can reduce the number of parallel I/Os required to sort on the PDM significantly.

  12. A parallelized three-dimensional cellular automaton model for grain growth during additive manufacturing

    Science.gov (United States)

    Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.

    2018-01-01

    In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.
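
    The parallel efficiency figures quoted in this record follow the usual definitions of speedup and efficiency; a brief reminder, with the interpretation of the 80% figure, is given below.

        % Speedup and parallel efficiency on p processes, with T(p) the wall-clock time:
        S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p} = \frac{T(1)}{p\,T(p)}
        % e.g. E(8) > 0.8 means the 8-process run is more than 6.4 times faster than the serial run.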

  13. A simple and efficient parallel FFT algorithm using the BSP model

    NARCIS (Netherlands)

    Bisseling, R.H.; Inda, M.A.

    2000-01-01

    In this paper we present a new parallel radix FFT algorithm based on the BSP model. Our parallel algorithm uses the group-cyclic distribution family, which makes it simple to understand and easy to implement. We show how to reduce the communication cost of the algorithm by a factor of three in the case...
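
    Communication-cost arguments of this kind are usually stated in terms of the standard BSP superstep cost, recalled below; the gap g and latency l are machine-dependent parameters, and the algorithm's exact cost expressions are in the paper itself.

        % Cost of one BSP superstep with local work w, an h-relation of communication,
        % per-word gap g and synchronisation latency l:
        T_{\mathrm{superstep}} = w + h\,g + l
        % The total cost of a BSP algorithm is the sum of its superstep costs.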

  14. Queueing-theoretic solution methods for models of parallel and distributed systems

    NARCIS (Netherlands)

    O.J. Boxma (Onno); G.M. Koole (Ger); Z. Liu

    1994-01-01

    This paper aims to give an overview of solution methods for the performance analysis of parallel and distributed systems. After a brief review of some important general solution methods, we discuss key models of parallel and distributed systems, and optimization issues, from the...

  15. A Parallel Adaptive Simulation Tool for Two Phase Steady State Reacting Flows in Industrial Boilers and Furnaces; FINAL

    International Nuclear Information System (INIS)

    Michael J. Bockelie

    2002-01-01

    This DOE SBIR Phase II final report summarizes research that has been performed to develop a parallel adaptive tool for modeling steady, two-phase turbulent reacting flow. The target applications for the new tool are full-scale, fossil-fuel fired boilers and furnaces such as those used in the electric utility industry, chemical process industry and mineral/metal process industry. The types of analyses to be performed on these systems are engineering calculations to evaluate the impact on overall furnace performance due to operational, process or equipment changes. To develop a Computational Fluid Dynamics (CFD) model of an industrial scale furnace requires a carefully designed grid that will capture all of the large and small scale features of the flowfield. Industrial systems are quite large, usually measured in tens of feet, but contain numerous burners, air injection ports, flames and localized behavior with dimensions that are measured in inches or fractions of inches. To create an accurate computational model of such systems requires capturing length scales within the flow field that span several orders of magnitude. In addition, to create an industrially useful model, the grid cannot contain too many grid points - the model must be able to execute on an inexpensive desktop PC in a matter of days. An adaptive mesh provides a convenient means to create a grid that can capture both fine flow field detail within a very large domain with a "reasonable" number of grid points. However, the use of an adaptive mesh requires the development of a new flow solver. To create the new simulation tool, we have combined existing reacting CFD modeling software with new software based on emerging block structured Adaptive Mesh Refinement (AMR) technologies developed at Lawrence Berkeley National Laboratory (LBNL). Specifically, we combined: physical models, modeling expertise, and software from existing combustion simulation codes used by Reaction Engineering International...

  16. Parallel-Batch Scheduling with Two Models of Deterioration to Minimize the Makespan

    Directory of Open Access Journals (Sweden)

    Cuixia Miao

    2014-01-01

    We consider the bounded parallel-batch scheduling problem with two models of deterioration, in which the processing time of a job under the first model is p_j = a_j + αt and under the second model is p_j = a + α_j t. The objective is to minimize the makespan. We present O(n log n)-time algorithms for the single-machine problems, respectively. We also propose fully polynomial-time approximation schemes to solve the identical-parallel-machine problem and the uniform-parallel-machine problem, respectively.

  17. A Parallel Geometry and Mesh Infrastructure for Explicit Phase Tracking in Multiphase Problems

    Science.gov (United States)

    Yang, Fan; Chandra, Anirban; Zhang, Yu; Shams, Ehsan; Tendulkar, Saurabh; Nastasia, Rocco; Oberai, Assad; Shephard, Mark; Sahni, Onkar

    2017-11-01

    Numerical simulations with explicit phase/interface tracking in a multiphase medium impact many applications. One such example is a combusting solid involving phase change. In these problems explicit tracking is crucial to accurately model and capture the interface physics, for example, discontinuous fields at the interface such as density or normal velocity. A necessary capability in an explicit approach is the evolution of the geometry and mesh during the simulation. In this talk, we will present an explicit approach that employs a combination of mesh motion and mesh modification on distributed/partitioned meshes. At the interface, a Lagrangian frame is employed on a discrete geometric description, while an arbitrary Lagrangian-Eulerian (ALE) frame is used elsewhere with arbitrary mesh motion. Mesh motion is based on the linear elasticity analogy that is applied until mesh deformation leads to undesirable cells, at which point local mesh modification is used to adapt the mesh. In addition, at the interface the structure and normal resolution of the highly anisotropic layered elements is adaptively maintained. We will demonstrate our approach for problems with large interface motions. Topological changes in the geometry (of any phase) will be considered in the future. This work is supported by the U.S. Army Grants W911NF1410301 and W911NF16C0117.

  18. Decentralized Nonlinear Controller Based SiC Parallel DC-DC Converter, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal is aimed at demonstrating the feasibility of a Decentralized Control based SiC Parallel DC-DC Converter Unit (DDCU) with targeted application for...

  19. Building Blocks for the Rapid Development of Parallel Simulations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Scientists need to be able to quickly develop and run parallel simulations without paying the high price of writing low-level message passing codes using compiled...

  20. Domain Specific Language for Geant4 Parallelization for Space-based Applications, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — A major limiting factor in HPC growth is the requirement to parallelize codes to leverage emerging architectures, especially as single core performance has plateaued...

  1. Comparison of Three Different Parallel Computation Methods for a Two-Dimensional Dam-Break Model

    Directory of Open Access Journals (Sweden)

    Shanghong Zhang

    2017-01-01

    Three parallel methods (OpenMP, MPI, and OpenACC) are evaluated for the computation of a two-dimensional dam-break model using the explicit finite volume method. A dam-break event in the Pangtoupao flood storage area in China is selected as a case study to demonstrate the key technologies for implementing parallel computation. The subsequent acceleration of the methods is also evaluated. The simulation results show that the OpenMP and MPI parallel methods achieve speedup factors of 9.8× and 5.1×, respectively, on a 32-core computer, whereas the OpenACC parallel method achieves a speedup factor of 20.7× on an NVIDIA Tesla K20c graphics card. The results show that if the memory required by the dam-break simulation does not exceed the memory capacity of a single computer, the OpenMP parallel method is a good choice. Moreover, if GPU acceleration is used, the acceleration of the OpenACC parallel method is the best. Finally, the MPI parallel method is suitable for a model that requires little data exchange and large-scale calculation. This study compares the efficiency and methodology of accelerating algorithms for a dam-break model and can also be used as a reference for selecting the best acceleration method for a similar hydrodynamic model.

  2. Mathematical Model of Thyristor Inverter Including a Series-parallel Resonant Circuit

    Directory of Open Access Journals (Sweden)

    Miroslaw Luft

    2008-01-01

    The article presents a mathematical model of a thyristor inverter including a series-parallel resonant circuit with the aid of the state variable method. Maple procedures are used to compute current and voltage waveforms in the inverter.

  3. Mathematical model of thyristor inverter including a series-parallel resonant circuit

    OpenAIRE

    Luft, M.; Szychta, E.

    2008-01-01

    The article presents a mathematical model of a thyristor inverter including a series-parallel resonant circuit with the aid of the state variable method. Maple procedures are used to compute current and voltage waveforms in the inverter.

  4. Mathematical Model of Thyristor Inverter Including a Series-parallel Resonant Circuit

    OpenAIRE

    Miroslaw Luft; Elzbieta Szychta

    2008-01-01

    The article presents a mathematical model of a thyristor inverter including a series-parallel resonant circuit with the aid of the state variable method. Maple procedures are used to compute current and voltage waveforms in the inverter.

  5. Modelling distribution of evaporating CO2 in parallel minichannels

    DEFF Research Database (Denmark)

    Brix, Wiebke; Kærn, Martin Ryhl; Elmegaard, Brian

    2010-01-01

    ...-known empirical correlations for calculating frictional pressure drop and heat transfer coefficients. An investigation of different correlations for boiling two-phase flow shows that the choice of correlation is insignificant regarding the overall results. It is shown that non-uniform airflow leads...

  6. Parallel Development of Products and New Business Models

    DEFF Research Database (Denmark)

    Lund, Morten; Hansen, Poul H. Kyvsgård

    2014-01-01

    The perception of product development and the practical execution of product development in professional organizations have undergone dramatic changes in recent years. Many of these changes relate to the introduction of broader and more cross-disciplinary views that involve new organizational functions... and innovation management, the 4th generation models are increasingly including the concepts of business models and business model innovation.

  7. Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2012-01-01

    This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling a virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. Large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.

  8. Parallel direct solver for finite element modeling of manufacturing processes

    DEFF Research Database (Denmark)

    Nielsen, Chris Valentin; Martins, P.A.F.

    2017-01-01

    The central processing unit (CPU) time is of paramount importance in finite element modeling of manufacturing processes. Because the most significant part of the CPU time is consumed in solving the main system of equations resulting from finite element assemblies, different approaches have been developed to optimize solutions and reduce the overall computational costs of large finite element models.

  9. Using parallel computing in modeling and optimization of mineral ...

    African Journals Online (AJOL)

    ... (UPIT), or a maximum weight closure problem. There are several methods for solving this problem. We provide a new approach for solving the ultimate pit limit problem using a precedence model. A block model of an open pit can be easily represented as an ...

  10. Application of Parallel Algorithms in an Air Pollution Model

    DEFF Research Database (Denmark)

    Georgiev, K.; Zlatev, Z.

    1999-01-01

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  11. Parallel performance of TORT on the CRAY J90: Model and measurement

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1997-10-01

    A limitation on the parallel performance of TORT on the CRAY J90 is the amount of extra work introduced by the multitasking algorithm itself. The extra work beyond that of the serial version of the code, called overhead, arises from the synchronization of the parallel tasks and the accumulation of results by the master task. The goal of recent updates to TORT was to reduce the time consumed by these activities. To help understand which components of the multitasking algorithm contribute significantly to the overhead, a parallel performance model was constructed and compared to measurements of actual timings of the code
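
    One illustrative way to express the kind of performance model described here is to split the parallel run time into useful work and the multitasking overhead terms (synchronization and accumulation of results); the decomposition below is a hedged sketch of that idea, not the exact model terms from the report.

        % Illustrative decomposition of run time on p tasks into serial work,
        % parallelizable work, and multitasking overhead (synchronization + accumulation):
        T(p) = T_{\mathrm{serial}} + \frac{T_{\mathrm{parallel}}}{p} + T_{\mathrm{sync}}(p) + T_{\mathrm{accum}}(p)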

  12. Analysis and Modeling of Circulating Current in Two Parallel-Connected Inverters

    DEFF Research Database (Denmark)

    Maheshwari, Ram Krishan; Gohil, Ghanshyamsinh Vijaysinh; Bede, Lorand

    2015-01-01

    Parallel-connected inverters are gaining attention for high power applications because of the limited power handling capability of the power modules. Moreover, the parallel-connected inverters may have low total harmonic distortion of the ac current if they are operated with the interleaved pulse... this model, the circulating current between two parallel-connected inverters is analysed in this study. The peak and root mean square (rms) values of the normalised circulating current are calculated for different PWM methods, which makes this analysis a valuable tool to design a filter for the circulating...

  13. F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming

    Science.gov (United States)

    DiNucci, David C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Parallel programming is still being based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called Soviets which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).

  14. Modeling Parallelization and Flexibility Improvements in Skill Acquisition: From Dual Tasks to Complex Dynamic Skills

    Science.gov (United States)

    Taatgen, Niels

    2005-01-01

    Emerging parallel processing and increased flexibility during the acquisition of cognitive skills form a combination that is hard to reconcile with rule-based models that often produce brittle behavior. Rule-based models can exhibit these properties by adhering to 2 principles: that the model gradually learns task-specific rules from instructions…

  15. Solving the dynamic equations of a 3-PRS Parallel Manipulator for efficient model-based designs

    Directory of Open Access Journals (Sweden)

    M. Díaz-Rodríguez

    2016-01-01

    The introduction of parallel manipulator systems for different application areas has influenced many researchers to develop techniques for obtaining accurate and computationally efficient inverse dynamic models. Several subject areas make use of these models, such as optimal design, parameter identification, model-based control and even actuation redundancy approaches. In this context, by revisiting some of the current computationally efficient solutions for obtaining the inverse dynamic model of parallel manipulators, this paper compares three different methods for inverse dynamic modelling of a general, lower-mobility, 3-PRS parallel manipulator. The first method obtains the inverse dynamic model by describing the manipulator as three open kinematic chains. Then, vector-loop closure constraints are introduced for obtaining the relationship between the dynamics of the open kinematic chains (such as a serial robot) and the closed chains (such as a parallel robot). The second method exploits certain characteristics of parallel manipulators such that the platform and the links are considered as independent subsystems. The proposed third method is similar to the second method but it uses a different Jacobian matrix formulation in order to reduce computational complexity. Analysis of these numerical formulations will provide fundamental software support for efficient model-based designs. In addition, the computational cost reduction presented in this paper can also be an effective guideline for optimal design of this type of manipulator and for real-time embedded control.

  16. Parallelized Genetic Identification of the Thermal-Electrochemical Model for Lithium-Ion Battery

    Directory of Open Access Journals (Sweden)

    Liqiang Zhang

    2013-01-01

    The parameters of a well-predicting model can be used as health characteristics of a Lithium-ion battery. This article reports a parallelized parameter identification of the thermal-electrochemical model, which significantly reduces the time consumption of parameter identification. Since the P2D model has the most predictability, it is chosen for further research and expanded into the thermal-electrochemical model by coupling the thermal effect and temperature-dependent parameters. A Genetic Algorithm (GA) is then used for parameter identification, but it takes too much time because of the long simulation time of the model. For this reason, a computer cluster is built from surplus computing resources in our laboratory based on the Parallel Computing Toolbox and Distributed Computing Server in MATLAB. The performance of two parallelized methods, namely Single Program Multiple Data (SPMD) and the parallel FOR loop (PARFOR), is investigated, and then the parallelized GA identification is proposed. With this method, model simulations run in parallel and the parameter identification can be sped up by more than a dozen times, and the identification result is better than that from the serial GA. This conclusion is validated by model parameter identification of a real LiFePO4 battery.

  17. Rapid parallelization of the drift-diffusion model for semiconductor devices

    OpenAIRE

    Gazzaniga, Giovanna; Lanucara, Piero; Pietra, Paola; Rovida, Sergio; Sacchi, Gianni

    2002-01-01

    The expensive reengineering of sequential software and the difficulty of parallel programming are two of the many technical and economic obstacles to the wide use of HPC. We investigate the possibility of rapidly improving the performance of a numerical serial code modelling semiconductor devices, exploiting the parallel features of shared memory architectures. OpenMP seems to be a good choice in order to guarantee portability, which is one of the big issues in parall...

  18. A horn shaped air-cooled 300 MHz applicator for use in single field or parallel opposed phased system (POPAS)

    International Nuclear Information System (INIS)

    Bicher, H.I.; Moore, D.W.

    1985-01-01

    A 20cm x 23cm standard design 300 MHz horn waveguide may be used to heat tumors at moderate depth. The heating patterns have been observed using two such applicators excited in phase in parallel opposition about phantoms and living subjects. Both depth and homogeneity of heating profiles were greatly enhanced using the parallel opposed system by the convergence of the waveforms. Experiments performed in pigs and human volunteers show the feasibility of heating several anatomical regions. The pelvis, neck, axilla, limbs and single lung regions provide good candidates for this treatment method. The homogeneity of and treatment accessibility of in vivo fields and counterplay of high and low blood flow regions in normal and tumorous tissue are currently under study

  19. Modeling the Fracture of Ice Sheets on Parallel Computers

    Energy Technology Data Exchange (ETDEWEB)

    Waisman, Haim [Columbia Univ., New York, NY (United States); Tuminaro, Ray [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-10-10

    The objective of this project was to investigate the complex fracture of ice and understand its role within larger ice sheet simulations and global climate change. This objective was achieved by developing novel physics-based models for ice, novel numerical tools to enable the modeling of the physics, and by collaboration with ice community experts. At the present time, ice fracture is not explicitly considered within ice sheet models, due in part to the large computational costs associated with accurate modeling of this complex phenomenon. However, fracture not only plays an extremely important role in regional behavior but also influences ice dynamics over much larger zones in ways that are currently not well understood. To this end, our research findings through this project offer a significant advancement to the field and close a large gap in knowledge in understanding and modeling the fracture of ice sheets in the polar regions. Thus, we believe that our objective has been achieved and our research accomplishments are significant. This is corroborated by a set of published papers, posters and presentations at technical conferences in the field. In particular, significant progress has been made in the mechanics of ice, the fracture of ice sheets and ice shelves in polar regions, and sophisticated numerical methods that enable the solution of the physics in an efficient way.

  20. Animal Models of Cystic Fibrosis Pathology: Phenotypic Parallels and Divergences

    Directory of Open Access Journals (Sweden)

    Gillian M. Lavelle

    2016-01-01

    Full Text Available Cystic fibrosis (CF) is caused by mutations in the cystic fibrosis transmembrane conductance regulator (CFTR) gene. The resultant characteristic ion transport defect leads to decreased mucociliary clearance, bacterial colonisation, and chronic neutrophil-dominated inflammation. Much knowledge surrounding the pathophysiology of the disease has been gained through the generation of animal models, despite inherent limitations in each. The failure of certain mouse models to recapitulate the phenotypic manifestations of human disease has initiated the generation of larger animals in which to study CF, including the pig and the ferret. This review will summarise the basic phenotypes of three animal models and describe the contributions of such animal studies to our current understanding of CF.

  1. Parallel direct solver for finite element modeling of manufacturing processes

    DEFF Research Database (Denmark)

    Nielsen, Chris Valentin; Martins, P.A.F.

    2017-01-01

    The central processing unit (CPU) time is of paramount importance in finite element modeling of manufacturing processes. Because the most significant part of the CPU time is consumed in solving the main system of equations resulting from finite element assemblies, different approaches have been d...

  2. A Parallel and Distributed Surrogate Model Implementation for Computational Steering

    KAUST Repository

    Butnaru, Daniel

    2012-06-01

    Understanding the influence of multiple parameters in a complex simulation setting is a difficult task. In the ideal case, the scientist can freely steer such a simulation and is immediately presented with the results for a certain configuration of the input parameters. Such an exploration process is however not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute for the time-consuming simulation. The surrogate model we propose is based on the sparse grid technique, and we identify the main computational tasks associated with its evaluation and its extension. We further show how distributed data management combined with the specific use of accelerators allows us to approximate and deliver simulation results to a high-resolution visualization system in real-time. This significantly enhances the steering workflow and facilitates the interactive exploration of large datasets. © 2012 IEEE.

  3. Design and Implementation of a Parallel Multivariate Ensemble Kalman Filter for the Poseidon Ocean General Circulation Model

    Science.gov (United States)

    Keppenne, Christian L.; Rienecker, Michele M.; Koblinsky, Chester (Technical Monitor)

    2001-01-01

    A multivariate ensemble Kalman filter (MvEnKF) implemented on a massively parallel computer architecture has been developed for the Poseidon ocean circulation model and tested with a Pacific Basin model configuration. There are about two million prognostic state-vector variables. Parallelism for the data assimilation step is achieved by regionalization of the background-error covariances that are calculated from the phase-space distribution of the ensemble. Each processing element (PE) collects elements of a matrix measurement functional from nearby PEs. To avoid the introduction of spurious long-range covariances associated with finite ensemble sizes, the background-error covariances are given compact support by means of a Hadamard (element-by-element) product with a three-dimensional canonical correlation function. The methodology and the MvEnKF configuration are discussed. It is shown that the regionalization of the background covariances has a negligible impact on the quality of the analyses. The parallel algorithm is very efficient for large numbers of observations but does not scale well beyond 100 PEs at the current model resolution. On a platform with distributed memory, memory rather than speed is the limiting factor.
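
    For readers unfamiliar with the compact-support step mentioned above, the following toy Python sketch (not the Poseidon/MvEnKF code) shows how an ensemble-estimated covariance matrix can be tapered by a Hadamard product with a distance-based correlation function; the Gaussian taper and the 1-D grid are illustrative assumptions standing in for the three-dimensional canonical correlation function of the paper.

      # Toy sketch: localizing ensemble background-error covariances by a Hadamard
      # (element-wise) product with a distance-based, rapidly decaying taper.
      import numpy as np

      rng = np.random.default_rng(1)
      n_state, n_members = 50, 10

      # Synthetic ensemble on a 1-D grid (stand-in for the model state ensemble).
      grid = np.arange(n_state, dtype=float)
      ensemble = rng.standard_normal((n_state, n_members))

      # Raw sample covariance from the (small) ensemble: noisy long-range terms.
      anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
      P_raw = anomalies @ anomalies.T / (n_members - 1)

      # Distance-based taper (illustrative Gaussian; the paper uses a canonical
      # correlation function with true compact support).
      length_scale = 5.0
      dist = np.abs(grid[:, None] - grid[None, :])
      taper = np.exp(-0.5 * (dist / length_scale) ** 2)

      P_localized = P_raw * taper  # Hadamard product suppresses spurious covariances

      print("max |cov| at distance > 20 before:", np.abs(P_raw[dist > 20]).max())
      print("max |cov| at distance > 20 after: ", np.abs(P_localized[dist > 20]).max())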

  4. Parallel computer processing and modeling: applications for the ICU

    Science.gov (United States)

    Baxter, Grant; Pranger, L. Alex; Draghic, Nicole; Sims, Nathaniel M.; Wiesmann, William P.

    2003-07-01

    Current patient monitoring procedures in hospital intensive care units (ICUs) generate vast quantities of medical data, much of which is considered extemporaneous and not evaluated. Although sophisticated monitors to analyze individual types of patient data are routinely used in the hospital setting, this equipment lacks high order signal analysis tools for detecting long-term trends and correlations between different signals within a patient data set. Without the ability to continuously analyze disjoint sets of patient data, it is difficult to detect slow-forming complications. As a result, the early onset of conditions such as pneumonia or sepsis may not be apparent until the advanced stages. We report here on the development of a distributed software architecture test bed and software medical models to analyze both asynchronous and continuous patient data in real time. Hardware and software has been developed to support a multi-node distributed computer cluster capable of amassing data from multiple patient monitors and projecting near and long-term outcomes based upon the application of physiologic models to the incoming patient data stream. One computer acts as a central coordinating node; additional computers accommodate processing needs. A simple, non-clinical model for sepsis detection was implemented on the system for demonstration purposes. This work shows exceptional promise as a highly effective means to rapidly predict and thereby mitigate the effect of nosocomial infections.

  5. A massively parallel GPU-accelerated model for analysis of fully nonlinear free surface waves

    DEFF Research Database (Denmark)

    Engsig-Karup, Allan Peter; Madsen, Morten G.; Glimberg, Stefan Lemvig

    2011-01-01

    We implement and evaluate a massively parallel and scalable algorithm based on a multigrid preconditioned Defect Correction method for the simulation of fully nonlinear free surface flows. The simulations are based on a potential model that describes wave propagation over uneven bottoms in three...... space dimensions and is useful for fast analysis and prediction purposes in coastal and offshore engineering. A dedicated numerical model based on the proposed algorithm is executed in parallel by utilizing affordable modern special purpose graphics processing unit (GPU). The model is based on a low......-storage flexible-order accurate finite difference method that is known to be efficient and scalable on a CPU core (single thread). To achieve parallel performance of the relatively complex numerical model, we investigate a new trend in high-performance computing where many-core GPUs are utilized as high...
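
    As a minimal, generic illustration of the preconditioned defect correction idea named in the record above (and nothing like the GPU multigrid solver actually used), the Python sketch below repeatedly corrects an approximate solution of A x = b using a cheap Jacobi-type preconditioner:

      # Minimal defect-correction loop: x <- x + M^{-1} (b - A x),
      # with a simple Jacobi (diagonal) preconditioner standing in for the
      # multigrid preconditioner used in the paper.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 100
      A = (np.diag(4.0 * np.ones(n))
           + np.diag(-1.0 * np.ones(n - 1), 1)
           + np.diag(-1.0 * np.ones(n - 1), -1))   # diagonally dominant test system
      b = rng.standard_normal(n)

      M_inv = 1.0 / np.diag(A)                     # Jacobi preconditioner
      x = np.zeros(n)
      for k in range(200):
          defect = b - A @ x                       # residual (the "defect")
          x = x + M_inv * defect                   # correction step
          if np.linalg.norm(defect) < 1e-10:
              break

      print("iterations:", k + 1, "final residual:", np.linalg.norm(b - A @ x))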

  6. Image reconstruction method for electrical capacitance tomography based on the combined series and parallel normalization model

    International Nuclear Information System (INIS)

    Dong, Xiangyuan; Guo, Shuqing

    2008-01-01

    In this paper, a novel image reconstruction method for electrical capacitance tomography (ECT) based on the combined series and parallel model is presented. A regularization technique is used to obtain a stabilized solution of the inverse problem. Also, the adaptive coefficient of the combined model is deduced by numerical optimization. Simulation results indicate that it can produce higher quality images when compared to the algorithm based on the parallel or series models for the cases tested in this paper. It provides a new algorithm for ECT application
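
    To make the series/parallel normalization terminology concrete, here is a small numeric Python sketch, not the authors' reconstruction code, of the parallel and series normalized capacitances commonly used in ECT and of a convex combination of the two; the fixed mixing coefficient used below is a hypothetical stand-in for the adaptive coefficient obtained by numerical optimization in the paper.

      # Illustrative sketch of normalized capacitance under the parallel and series
      # models commonly used in ECT, plus a convex combination of the two.
      def normalize_parallel(c, c_low, c_high):
          return (c - c_low) / (c_high - c_low)

      def normalize_series(c, c_low, c_high):
          return (1.0 / c - 1.0 / c_low) / (1.0 / c_high - 1.0 / c_low)

      def normalize_combined(c, c_low, c_high, alpha=0.5):
          # alpha = 1 recovers the parallel model, alpha = 0 the series model.
          return (alpha * normalize_parallel(c, c_low, c_high)
                  + (1.0 - alpha) * normalize_series(c, c_low, c_high))

      # Example: a measured capacitance between the empty-pipe and full-pipe values.
      c_low, c_high, c_meas = 1.0, 2.0, 1.4
      print("parallel:", round(normalize_parallel(c_meas, c_low, c_high), 4))
      print("series:  ", round(normalize_series(c_meas, c_low, c_high), 4))
      print("combined:", round(normalize_combined(c_meas, c_low, c_high, 0.5), 4))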

  7. effects of parallel channel interactions on two-phase flow split in ...

    African Journals Online (AJOL)

    Dr Obe

    1982-09-01

    Sep 1, 1982 ... varied so as to simulate different flow phenomena which might occur during a loss ... Nomenclature: QCV - quick closing valves; 2ϕ - two-phase flow; 1ϕ - single-phase flow; α - void fraction; X - flow quality; UP - upper plenum; LP - lower plenum; W - flow rate (kg/hr) ... evident that gradual introduction of vapour into the ...

  8. Power Factor Correction Capacitors for Multiple Parallel Three-Phase ASD Systems

    DEFF Research Database (Denmark)

    Yang, Yongheng; Blaabjerg, Frede

    2017-01-01

    Today’s three-phase Adjustable Speed Drive (ASD) systems still employ Diode Rectifiers (DRs) and Silicon-Controlled Rectifiers (SCRs) as the front-end converters due to structural and control simplicity, small volume, low cost, and high reliability. However, the uncontrollable DRs and phase-contr...

  9. Two-dimensional parallel array technology as a new approach to automated combinatorial solid-phase organic synthesis

    Science.gov (United States)

    Brennan; Biddison; Frauendorf; Schwarcz; Keen; Ecker; Davis; Tinder; Swayze

    1998-01-01

    An automated, 96-well parallel array synthesizer for solid-phase organic synthesis has been designed and constructed. The instrument employs a unique reagent array delivery format, in which each reagent utilized has a dedicated plumbing system. An inert atmosphere is maintained during all phases of a synthesis, and temperature can be controlled via a thermal transfer plate which holds the injection molded reaction block. The reaction plate assembly slides in the X-axis direction, while eight nozzle blocks holding the reagent lines slide in the Y-axis direction, allowing for the extremely rapid delivery of any of 64 reagents to 96 wells. In addition, there are six banks of fixed nozzle blocks, which deliver the same reagent or solvent to eight wells at once, for a total of 72 possible reagents. The instrument is controlled by software which allows the straightforward programming of the synthesis of a larger number of compounds. This is accomplished by supplying a general synthetic procedure in the form of a command file, which calls upon certain reagents to be added to specific wells via lookup in a sequence file. The bottle position, flow rate, and concentration of each reagent is stored in a separate reagent table file. To demonstrate the utility of the parallel array synthesizer, a small combinatorial library of hydroxamic acids was prepared in high throughput mode for biological screening. Approximately 1300 compounds were prepared on a 10 μmole scale (3-5 mg) in a few weeks. The resulting crude compounds were generally >80% pure, and were utilized directly for high throughput screening in antibacterial assays. Several active wells were found, and the activity was verified by solution-phase synthesis of analytically pure material, indicating that the system described herein is an efficient means for the parallel synthesis of compounds for lead discovery. Copyright 1998 John Wiley & Sons, Inc.

  10. MCBooster: a library for fast Monte Carlo generation of phase-space decays on massively parallel platforms.

    Science.gov (United States)

    Alves Júnior, A. A.; Sokoloff, M. D.

    2017-10-01

    MCBooster is a header-only, C++11-compliant library that provides routines to generate and perform calculations on large samples of phase space Monte Carlo events. To achieve superior performance, MCBooster is capable of performing most of its calculations in parallel using CUDA- and OpenMP-enabled devices. MCBooster is built on top of the Thrust library and runs on Linux systems. This contribution summarizes the main features of MCBooster. A basic description of the user interface and some examples of applications are provided, along with measurements of performance in a variety of environments.

  11. Dynamic Modeling of Phase Crossings in Two-Phase Flow

    DEFF Research Database (Denmark)

    Madsen, Søren; Veje, Christian; Willatzen, Morten

    2012-01-01

    Two-phase flow and heat transfer, such as boiling and condensing flows, are complicated physical phenomena that generally prohibit an exact solution and even pose severe challenges for numerical approaches. If numerical solution time is also an issue the challenge increases even further. We present...... here a numerical implementation and novel study of a fully distributed dynamic one-dimensional model of two-phase flow in a tube, including pressure drop, heat transfer, and variations in tube cross-section. The model is based on a homogeneous formulation of the governing equations, discretized...... of the variables and are usually very slow to evaluate. To overcome these challenges, we use an interpolation scheme with local refinement. The simulations show that the method handles crossing of the saturation lines for both liquid to two-phase and two-phase to gas regions. Furthermore, a novel result obtained...

  12. A Tool for Performance Modeling of Parallel Programs

    Directory of Open Access Journals (Sweden)

    J.A. González

    2003-01-01

    Full Text Available Current performance prediction analytical models try to characterize the performance behavior of actual machines through a small set of parameters. In practice, substantial deviations are observed. These differences are due to factors such as memory hierarchies or network latency. A natural approach is to associate a different proportionality constant with each basic block, and analogously, to associate different latencies and bandwidths with each "communication block". Unfortunately, using this approach implies that the parameters must be evaluated for each algorithm. This is a heavy task, involving experiment design, timing, statistics, pattern recognition and multi-parameter fitting algorithms. Software support is required. We present a compiler that takes as source a C program annotated with complexity formulas and produces as output an instrumented code. The trace files obtained from the execution of the resulting code are analyzed with an interactive interpreter, giving us, among other information, the values of those parameters.
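
    The kind of multi-parameter fitting the tool automates can be pictured with a tiny example: below, hypothetical timings for one "communication block" are fitted with a latency-plus-bandwidth model t ≈ α + β·size by least squares. This is only a Python sketch of the idea, not the tool itself.

      # Toy least-squares fit of a per-block communication model t = alpha + beta*size,
      # i.e. a distinct latency/bandwidth pair for one "communication block".
      # The timing data below are made up for illustration.
      import numpy as np

      sizes = np.array([1e3, 1e4, 1e5, 1e6, 1e7])                      # bytes
      times = np.array([0.11e-3, 0.19e-3, 1.1e-3, 10.2e-3, 99.0e-3])   # seconds

      A = np.column_stack([np.ones_like(sizes), sizes])
      (alpha, beta), *_ = np.linalg.lstsq(A, times, rcond=None)

      print(f"latency alpha ~ {alpha:.2e} s")
      print(f"1/bandwidth beta ~ {beta:.2e} s/byte "
            f"(bandwidth ~ {1.0 / beta / 1e6:.1f} MB/s)")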

  13. A one-dimensional heat transfer model for parallel-plate thermoacoustic heat exchangers

    NARCIS (Netherlands)

    de Jong, Anne; Wijnant, Ysbrand H.; de Boer, Andries

    2014-01-01

    A one-dimensional (1D) laminar oscillating flow heat transfer model is derived and applied to parallel-plate thermoacoustic heat exchangers. The model can be used to estimate the heat transfer from the solid wall to the acoustic medium, which is required for the heat input/output of thermoacoustic

  14. A new model for reliability optimization of series-parallel systems with non-homogeneous components

    International Nuclear Information System (INIS)

    Feizabadi, Mohammad; Jahromi, Abdolhamid Eshraghniaye

    2017-01-01

    In discussions related to reliability optimization using redundancy allocation, one of the structures that has attracted the attention of many researchers is the series-parallel structure. In models previously presented for reliability optimization of series-parallel systems, there is a restricting assumption that all components of a subsystem must be homogeneous. This constraint limits system designers in selecting components and prevents achieving higher levels of reliability. In this paper, a new model is proposed for reliability optimization of series-parallel systems, which makes possible the use of non-homogeneous components in each subsystem. As a result of this flexibility, the process of supplying system components will be easier. To solve the proposed model, since the redundancy allocation problem (RAP) belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed. The computational results of the designed GA are indicative of the high performance of the proposed model in increasing system reliability and decreasing costs. - Highlights: • In this paper, a new model is proposed for reliability optimization of series-parallel systems. • In previous models, there is a restricting assumption that all components of a subsystem must be homogeneous. • The presented model allows the subsystems’ components to be non-homogeneous where required. • The computational results demonstrate the high performance of the proposed model in improving reliability and reducing costs.
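
    A minimal sketch of the system-reliability calculation that underlies such redundancy allocation models is given below (the GA search itself is omitted, and the component reliabilities are hypothetical): each parallel subsystem may mix components with different reliabilities, and the subsystems are connected in series.

      # Reliability of a series-parallel system in which each parallel subsystem may
      # contain non-homogeneous (different-reliability) redundant components.
      from math import prod

      def subsystem_reliability(component_reliabilities):
          # Parallel redundancy: the subsystem fails only if every component fails.
          return 1.0 - prod(1.0 - r for r in component_reliabilities)

      def system_reliability(subsystems):
          # Series connection of subsystems: all subsystems must work.
          return prod(subsystem_reliability(s) for s in subsystems)

      design = [
          [0.90, 0.85],        # subsystem 1: two different component types
          [0.95],              # subsystem 2: single component
          [0.80, 0.80, 0.70],  # subsystem 3: mixed redundancy
      ]
      print("system reliability:", round(system_reliability(design), 6))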

  15. Error Modelling and Experimental Validation for a Planar 3-PPR Parallel Manipulator

    DEFF Research Database (Denmark)

    Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl

    2011-01-01

    In this paper, the positioning error of a 3-PPR planar parallel manipulator is studied with an error model and experimental validation. First, the displacement and workspace are analyzed. An error model considering both configuration errors and joint clearance errors is established. Using this mo...

  16. Simplified phase noise model for negative-resistance oscillators and a comparison with feedback oscillator models.

    Science.gov (United States)

    Everard, Jeremy; Xu, Min; Bale, Simon

    2012-03-01

    This paper describes a greatly simplified model for the prediction of phase noise in oscillators which use a negative resistance as the active element. It is based on a simple circuit consisting of the parallel addition of a noise current, a negative admittance/resistance, and a parallel (Q-limited) resonant circuit. The transfer function is calculated as a forward trans-resistance (VOUT/IIN) and then converted to power. The effect of limiting is incorporated by assuming that the phase noise element of the noise floor is kT/2, i.e., -177 dBm/Hz at room temperature. The result is the same as that of more complex analyses, but it enables a simple, clear insight into the operation of oscillators. The phase noise for a given power in the resonator appears to be lower than in feedback oscillators. The reasons for this are explained. Simulation and experimental results are included.
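
    To visualize the Q-limited parallel resonator at the core of this model, the Python sketch below evaluates the trans-impedance Z(ω) = VOUT/IIN of a parallel RLC tank at a few offsets from resonance; it is only a numerical illustration of that single building block, with made-up element values, not the full phase-noise derivation.

      # Trans-impedance of a parallel RLC resonator, Z = 1/(1/R + jwC + 1/(jwL)),
      # evaluated at several offsets from resonance. Element values are illustrative.
      import numpy as np

      R, L, C = 1e3, 100e-9, 10e-12             # ohms, henries, farads (hypothetical)
      f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))   # resonant frequency
      Q = R * np.sqrt(C / L)                    # loaded Q of the parallel tank

      for offset in [0.0, 1e4, 1e5, 1e6]:       # Hz away from the carrier
          w = 2 * np.pi * (f0 + offset)
          Z = 1.0 / (1.0 / R + 1j * w * C + 1.0 / (1j * w * L))
          print(f"offset {offset:8.0f} Hz  |Z| = {abs(Z):8.1f} ohm")

      print(f"f0 = {f0 / 1e6:.1f} MHz, Q = {Q:.1f}")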

  17. Experimental Behavior Evaluation of Series and Parallel Connected Constant Phase Elements

    KAUST Repository

    Tsirimokou, Georgia

    2017-01-28

    Fractional-order capacitors are the core building blocks for implementing fractional-order circuits. Because they are not commercially available, they can be approximated through appropriately configured passive or active integer-order element topologies. Such a topology, constructed using Operational Transconductance Amplifiers (OTAs) and capacitors, has been implemented in monolithic form through the AMS 0.35 μm CMOS process, and the fabricated chips are employed here for the experimental evaluation of the behavior of networks constructed from fractional-order capacitors connected in series or in parallel.
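
    As a quick numeric companion to the series and parallel connections studied above, the sketch below combines two constant phase elements of the same fractional order α through their impedances Z(jω) = 1/(C·(jω)^α); the element values are hypothetical, and CPEs of different orders would not reduce to a single equivalent pseudo-capacitance this simply.

      # Series and parallel combination of two constant phase elements (CPEs)
      # of the same fractional order alpha, via Z(jw) = 1 / (C * (jw)**alpha).
      import numpy as np

      def z_cpe(c, alpha, w):
          return 1.0 / (c * (1j * w) ** alpha)

      alpha = 0.8
      c1, c2 = 1e-6, 2.2e-6          # pseudo-capacitances (illustrative values)
      w = 2 * np.pi * 100.0          # evaluate at 100 Hz

      z1, z2 = z_cpe(c1, alpha, w), z_cpe(c2, alpha, w)
      z_series = z1 + z2                          # impedances add in series
      z_parallel = 1.0 / (1.0 / z1 + 1.0 / z2)    # admittances add in parallel

      # For equal orders these match single CPEs with the usual capacitor rules:
      print(abs(z_series), abs(z_cpe(c1 * c2 / (c1 + c2), alpha, w)))
      print(abs(z_parallel), abs(z_cpe(c1 + c2, alpha, w)))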

  18. Parallelization of the TRIGRS model for rainfall-induced landslides using the message passing interface

    Science.gov (United States)

    Alvioli, M.; Baum, R.L.

    2016-01-01

    We describe a parallel implementation of TRIGRS, the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Model for the timing and distribution of rainfall-induced shallow landslides. We have parallelized the four time-demanding execution modes of TRIGRS, namely both the saturated and unsaturated model with finite and infinite soil depth options, within the Message Passing Interface framework. In addition to new features of the code, we outline details of the parallel implementation and show the performance gain with respect to the serial code. Results are obtained both on commercial hardware and on a high-performance multi-node machine, showing the different limits of applicability of the new code. We also discuss the implications for the application of the model on large-scale areas and as a tool for real-time landslide hazard monitoring.
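
    The general pattern of such an MPI parallelization (partitioning the grid cells among ranks, computing locally, and gathering results on a root rank) can be sketched as below with mpi4py; this is a generic illustration rather than TRIGRS code, and the per-cell computation is a trivial placeholder.

      # Generic MPI domain-decomposition pattern (run with: mpiexec -n 4 python this.py).
      # Each rank processes its share of grid cells; results are gathered on rank 0.
      # The per-cell "stability" value is a made-up stand-in for the real model.
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_cells = 1_000_000
      cells = np.array_split(np.arange(n_cells), size)[rank]   # this rank's cells

      local_result = 1.0 + np.sin(cells * 1e-3)                # placeholder computation

      gathered = comm.gather(local_result, root=0)
      if rank == 0:
          result = np.concatenate(gathered)
          print("cells processed:", result.size, "min value:", result.min())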

  19. Asynchronous Parallel Distributed Genetic Algorithm by Layered Server-Client Model

    Science.gov (United States)

    Kojima, Kazunori; Ishigame, Masaaki; Makino, Shozo

    Most popular research on Parallel GAs is implemented as follows: the population is divided into subpopulations, each subpopulation executes a GA independently, and some individuals are migrated at fixed intervals or with a fixed probability. On the other hand, Grid Computing has attracted attention, and research implementing a Parallel GA using a Master-Worker model on Grid Computing has been reported. However, on problems with a huge search space, a Parallel GA using the Master-Worker model needs many workers to obtain better solution quality, and if there are many workers, the traffic loads the master. In this paper, we propose an Asynchronous Parallel Distributed GA using a Layered Server-Client model. This model is based on the Elite Migration on the Server-Client model we proposed before. In this model, an Elite Server manages some Subpopulation Clients, and a Master Server manages some Elite Servers. With this structure, the number of Subpopulation Clients that an Elite Server manages can be reduced, and the traffic on an Elite Server can also be reduced. To evaluate our proposed model, we apply it to some problems. The results confirm that the fitness is as good as that of current methods while the traffic is lower, and that the migration time can be reduced, especially on large search space problems.

  20. Error modelling and experimental validation of a planar 3-PPR parallel manipulator with joint clearances

    DEFF Research Database (Denmark)

    Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl

    2012-01-01

    This paper deals with the error modelling and analysis of a 3-PPR planar parallel manipulator with joint clearances. The kinematics and the Cartesian workspace of the manipulator are analyzed. An error model is established with considerations of both configuration errors and joint clearances. Using...... this model, the upper bounds and distributions of the pose errors for this manipulator are established. The results are compared with experimental measurements and show the effectiveness of the error prediction model....

  1. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    Directory of Open Access Journals (Sweden)

    Jinwei Wang

    2014-01-01

    Full Text Available The active appearance model (AAM is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA on the Nvidia’s GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  2. New physics beyond the standard model of particle physics and parallel universes

    International Nuclear Information System (INIS)

    Plaga, R.

    2006-01-01

    It is shown that if-and only if-'parallel universes' exist, an electroweak vacuum that is expected to have decayed since the big bang with a high probability might exist. It would neither necessarily render our existence unlikely nor could it be observed. In this special case the observation of certain combinations of Higgs-boson and top-quark masses-for which the standard model predicts such a decay-cannot be interpreted as evidence for new physics at low energy scales. The question of whether parallel universes exist is of interest to our understanding of the standard model of particle physics

  3. Interaction Admittance Based Modeling of Multi-Paralleled Grid-Connected Inverter with LCL-Filter

    DEFF Research Database (Denmark)

    Lu, Minghui; Blaabjerg, Frede; Wang, Xiongfei

    2016-01-01

    This paper investigates the mutual interaction and stability issues of multi-parallel LCL-filtered inverters. The stability and power quality of multiple grid-tied inverters are gaining more and more research attention as the penetration of renewables increases. In this paper, interactions...... and coupling effects among the multi-paralleled inverters and power grid are explicitly revealed. An Interaction Admittance concept is introduced to express and model the interaction through the physical admittances of the network. Compared to the existing modeling methods, the proposed analysis provides...

  4. Parallel shooting methods for finding steady state solutions to engine simulation models

    DEFF Research Database (Denmark)

    Andersen, Stig Kildegård; Thomsen, Per Grove; Carlsen, Henrik

    2007-01-01

    Parallel single- and multiple shooting methods were tested for finding periodic steady state solutions to a Stirling engine model. The model was used to illustrate features of the methods and possibilities for optimisations. Performance was measured using simulation of an experimental data set...... as test case. A parallel speedup factor of 23 on 33 processors was achieved with multiple shooting. But fast transients at the beginnings of sub intervals caused significant overhead for the multiple shooting methods and limited the best speedup to 3.8 relative to the fastest sequential method: Single...

  5. Dynamic modeling and simulation of a two-stage series-parallel vibration isolation system

    Directory of Open Access Journals (Sweden)

    Rong Guo

    2016-07-01

    Full Text Available A two-stage series-parallel vibration isolation system is already widely used in various industrial fields. However, when the researchers analyze the vibration characteristics of a mechanical system, the system is usually regarded as a single-stage one composed of two substructures. The dynamic modeling of a two-stage series-parallel vibration isolation system using frequency response function–based substructuring method has not been studied. Therefore, this article presents the source-path-receiver model and the substructure property identification model of such a system. These two models make up the transfer path model of the system. And the model is programmed by MATLAB. To verify the proposed transfer path model, a finite element model simulating a vehicle system, which is a typical two-stage series-parallel vibration isolation system, is developed. The substructure frequency response functions and system level frequency response functions can be obtained by MSC Patran/Nastran and LMS Virtual.lab based on the finite element model. Next, the system level frequency response functions are substituted into the transfer path model to predict the substructural frequency response functions and the system response of the coupled structure can then be further calculated. By comparing the predicted results and exact value, the model proves to be correct. Finally, the random noise is introduced into several relevant system level frequency response functions for error sensitivity analysis. The system level frequency response functions that are most sensitive to the random error are found. Since a two-stage series-parallel system has not been well studied, the proposed transfer path model improves the dynamic theory of the multi-stage vibration isolation system. Moreover, the validation process of the model here actually provides an example for acoustic and vibration transfer path analysis based on the proposed model. And it is worth noting that the

  6. Stage-by-Stage and Parallel Flow Path Compressor Modeling for a Variable Cycle Engine

    Science.gov (United States)

    Kopasakis, George; Connolly, Joseph W.; Cheng, Larry

    2015-01-01

    This paper covers the development of stage-by-stage and parallel flow path compressor modeling approaches for a Variable Cycle Engine. The stage-by-stage compressor modeling approach is an extension of a technique for lumped volume dynamics and performance characteristic modeling. It was developed to improve the accuracy of axial compressor dynamics over lumped volume dynamics modeling. The stage-by-stage compressor model presented here is formulated into a parallel flow path model that includes both axial and rotational dynamics. This is done to enable the study of compressor and propulsion system dynamic performance under flow distortion conditions. The approaches utilized here are generic and should be applicable for the modeling of any axial flow compressor design.

  7. PARAMO: a PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records.

    Science.gov (United States)

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng

    2014-04-01

    Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 h in parallel, compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines
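
    The scheduling idea described above can be pictured with a toy Python scheduler (not the PARAMO/Map-Reduce implementation): tasks form a dependency graph, and any task whose prerequisites are already finished is submitted to a pool so that independent tasks run concurrently. The pipeline stage names below are hypothetical labels only.

      # Toy dependency-graph scheduler: run tasks whose prerequisites are done,
      # in parallel, until the whole (hypothetical) modeling pipeline completes.
      from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

      # Hypothetical predictive-modeling pipeline: task -> set of prerequisite tasks.
      deps = {
          "cohort": set(),
          "features": {"cohort"},
          "cv_split": {"features"},
          "feat_select": {"cv_split"},
          "classify": {"feat_select"},
      }

      def run(task):
          print("running", task)
          return task

      done, futures = set(), {}
      with ThreadPoolExecutor(max_workers=4) as pool:
          while len(done) < len(deps):
              # Submit every task whose prerequisites are already finished.
              for task, prereqs in deps.items():
                  if task not in done and task not in futures and prereqs <= done:
                      futures[task] = pool.submit(run, task)
              finished, _ = wait(set(futures.values()), return_when=FIRST_COMPLETED)
              for task, fut in list(futures.items()):
                  if fut in finished:
                      done.add(task)
                      del futures[task]
      print("completed:", len(done), "tasks")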

  8. Improvements in image quality with pseudo-parallel imaging in the phase-scrambling fourier transform technique

    International Nuclear Information System (INIS)

    Ito, Satoshi; Kawawa, Yasuhiro; Yamada, Yoshifumi

    2010-01-01

    The signal obtained in the phase-scrambling Fourier transform (PSFT) imaging technique can be transformed to the signal described by the Fresnel transform of the objects, in which the amplitude of the PSFT presents some kind of blurred image of the objects. Therefore, the signal can be considered to exist in the object domain as well as the Fourier domain of the object. This notable feature makes it possible to assign weights to the reconstructed images by applying a weighting function to the PSFT signal after data acquisition, and as a result, pseudo-parallel image reconstruction using these aliased image data with different weights on the images is feasible. In this study, the improvements in image quality with such pseudo-parallel imaging were examined and demonstrated. The weighting function of the PSFT signal that provides a given weight on the image is estimated using the obtained image data and is iteratively updated after sensitivity encoding (SENSE)-based image reconstruction. Simulation studies showed that reconstruction errors were dramatically reduced and that the spatial resolution was also improved in almost all image spaces. The proposed method was applied to signals synthesized from MR image data with phase variations to verify its effectiveness. It was found that the image quality was improved and that images almost entirely free of aliasing artifacts could be obtained. (author)

  9. Parallel and perpendicular lamellar phases in copolymer-nanoparticle multilayer structures

    Energy Technology Data Exchange (ETDEWEB)

    Lauter-Pasyuk, V.; Lauter, H.; Gordeev, G.; Mueller-Buschbaum, P.; Toperverg, B.P.; Petry, W.; Jernenkov, M.; Petrenko, A.; Aksenov, V

    2004-07-15

    Recent results in developing novel nanocomposite multilayer structures are presented. We used symmetric polystyrene-block-polymethylmethacrylate (deuterated) P(S-b-MMAd) lamellar thin films as a self-assembling matrix for the lamellar arrangement of Fe3O4 nanoparticles. Pure copolymer films showed an unusual structure, with the lamellae oriented perpendicular to the surface in the part of the film towards the free surface. This is a new phenomenon, because up to now this orientation was obtained only on specially prepared substrates. After the incorporation of nanoparticles into the copolymer matrix, the system switched to a lamellar structure parallel to the surface. Further increasing the nanoparticle concentration led to a more perfect lamellar structure, which shows that the limit for a high concentration of nanoparticles, important for nanotechnology, has not yet been reached.

  10. PARALLEL ADAPTIVE MULTILEVEL SAMPLING ALGORITHMS FOR THE BAYESIAN ANALYSIS OF MATHEMATICAL MODELS

    KAUST Repository

    Prudencio, Ernesto

    2012-01-01

    In recent years, Bayesian model updating techniques based on measured data have been applied to many engineering and applied science problems. At the same time, parallel computational platforms are becoming increasingly more powerful and are being used more frequently by the engineering and scientific communities. Bayesian techniques usually require the evaluation of multi-dimensional integrals related to the posterior probability density function (PDF) of uncertain model parameters. The fact that such integrals cannot be computed analytically motivates the research of stochastic simulation methods for sampling posterior PDFs. One such algorithm is the adaptive multilevel stochastic simulation algorithm (AMSSA). In this paper we discuss the parallelization of AMSSA, formulating the necessary load balancing step as a binary integer programming problem. We present a variety of results showing the effectiveness of load balancing on the overall performance of AMSSA in a parallel computational environment.

  11. Parallelization Experience with Four Canonical Econometric Models Using ParMitISEM

    Directory of Open Access Journals (Sweden)

    Nalan Baştürk

    2016-03-01

    Full Text Available This paper presents the parallel computing implementation of the MitISEM algorithm, labeled Parallel MitISEM. The basic MitISEM algorithm provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of Student-t densities, where only a kernel of the target density is required. The approximation can be used as a candidate density in Importance Sampling or Metropolis Hastings methods for Bayesian inference on model parameters and probabilities. We present and discuss four canonical econometric models using a Graphics Processing Unit and a multi-core Central Processing Unit version of the MitISEM algorithm. The results show that the parallelization of the MitISEM algorithm on Graphics Processing Units and multi-core Central Processing Units is straightforward and fast to program using MATLAB. Moreover the speed performance of the Graphics Processing Unit version is much higher than the Central Processing Unit one.

  12. Optimal parallel algorithms for problems modeled by a family of intervals

    Science.gov (United States)

    Olariu, Stephan; Schwing, James L.; Zhang, Jingyuan

    1992-01-01

    A family of intervals on the real line provides a natural model for a vast number of scheduling and VLSI problems. Recently, a number of parallel algorithms to solve a variety of practical problems on such a family of intervals have been proposed in the literature. Computational tools are developed, and it is shown how they can be used for the purpose of devising cost-optimal parallel algorithms for a number of interval-related problems including finding a largest subset of pairwise nonoverlapping intervals, a minimum dominating subset of intervals, along with algorithms to compute the shortest path between a pair of intervals and, based on the shortest path, a parallel algorithm to find the center of the family of intervals. More precisely, with an arbitrary family of n intervals as input, all algorithms run in O(log n) time using O(n) processors in the EREW-PRAM model of computation.
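
    As a point of reference for one of the interval problems mentioned (the largest subset of pairwise non-overlapping intervals), here is the standard sequential greedy solution in Python; the O(log n)-time EREW-PRAM algorithms of the paper parallelize this kind of computation, which this sketch does not attempt.

      # Sequential greedy reference for "largest set of pairwise non-overlapping
      # intervals": sort by right endpoint, repeatedly take the first compatible one.
      def max_nonoverlapping(intervals):
          chosen, last_end = [], float("-inf")
          for start, end in sorted(intervals, key=lambda iv: iv[1]):
              if start >= last_end:      # does not overlap the last chosen interval
                  chosen.append((start, end))
                  last_end = end
          return chosen

      family = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
      print(max_nonoverlapping(family))  # [(1, 4), (5, 7), (8, 11)]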

  13. Analysis of clinical complication data for radiation hepatitis using a parallel architecture model

    International Nuclear Information System (INIS)

    Jackson, A.; Haken, R.K. ten; Robertson, J.M.; Kessler, M.L.; Kutcher, G.J.; Lawrence, T.S.

    1995-01-01

    Purpose: The detailed knowledge of dose volume distributions available from the three-dimensional (3D) conformal radiation treatment of tumors in the liver (reported elsewhere) offers new opportunities to quantify the effect of volume on the probability of producing radiation hepatitis. We aim to test a new parallel architecture model of normal tissue complication probability (NTCP) with these data. Methods and Materials: Complication data and dose volume histograms from a total of 93 patients with normal liver function, treated on a prospective protocol with 3D conformal radiation therapy and intraarterial hepatic fluorodeoxyuridine, were analyzed with a new parallel architecture model. Patient treatment fell into six categories differing in doses delivered and volumes irradiated. By modeling the radiosensitivity of liver subunits, we are able to use dose volume histograms to calculate the fraction of the liver damaged in each patient. A complication results if this fraction exceeds the patient's functional reserve. To determine the patient distribution of functional reserves and the subunit radiosensitivity, the maximum likelihood method was used to fit the observed complication data. Results: The parallel model fit the complication data well, although uncertainties in the functional reserve distribution and subunit radiosensitivity are highly correlated. Conclusion: The observed radiation hepatitis complications show a threshold effect that can be described well with a parallel architecture model. However, additional independent studies are required to better determine the parameters defining the functional reserve distribution and subunit radiosensitivity.
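
    The logic of the parallel-architecture NTCP model can be sketched numerically as below; this is a highly simplified Python illustration with made-up numbers, not the fitted model of the study. A dose-volume histogram is converted into a damaged-liver fraction through a subunit dose-response curve, and a complication is predicted when that fraction exceeds the patient's functional reserve.

      # Simplified parallel-architecture NTCP sketch: fraction of organ subunits
      # damaged, computed from a differential DVH and a logistic subunit
      # dose-response. All numbers here are illustrative only.
      import numpy as np

      def subunit_damage_probability(dose, d50=35.0, k=0.3):
          # Hypothetical logistic dose-response for a single liver subunit.
          return 1.0 / (1.0 + np.exp(-k * (dose - d50)))

      def damaged_fraction(dvh_doses, dvh_volume_fractions):
          # Sum over DVH bins of (volume fraction) x (probability subunit is damaged).
          return float(np.sum(dvh_volume_fractions *
                              subunit_damage_probability(dvh_doses)))

      doses = np.array([5.0, 20.0, 40.0, 60.0])    # Gy, bin centres
      vols = np.array([0.40, 0.30, 0.20, 0.10])    # fraction of organ per bin

      f = damaged_fraction(doses, vols)
      functional_reserve = 0.45                    # hypothetical patient reserve
      print(f"damaged fraction = {f:.3f}",
            "-> complication" if f > functional_reserve else "-> no complication")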

  14. effects of parallel channel interactions on two-phase flow split in ...

    African Journals Online (AJOL)

    Dr Obe

    1982-09-01

    Sep 1, 1982 ... system pressures varied from near atmospheric to a little over 1.7 bar. ... history dependent. They depended also on the relative channel orifice restrictions, the state of two-phase mixture in each channel at the start of flow, the manner of initiation of the .... evident that gradual introduction of vapour into the ...

  15. a Predator-Prey Model Based on the Fully Parallel Cellular Automata

    Science.gov (United States)

    He, Mingfeng; Ruan, Hongbo; Yu, Changliang

    We presented a predator-prey lattice model containing moveable wolves and sheep, which are characterized by Penna double bit strings. Sexual reproduction and child-care strategies are considered. To implement this model in an efficient way, we build a fully parallel Cellular Automata based on a new definition of the neighborhood. We show the roles played by the initial densities of the populations, the mutation rate and the linear size of the lattice in the evolution of this model.
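
    The phrase "fully parallel Cellular Automata" refers to updating every lattice site simultaneously from the previous generation; the small Python sketch below shows that synchronous-update pattern on a toy two-state lattice. It does not attempt to reproduce the wolves-and-sheep dynamics or the Penna bit-string genetics of the model.

      # Synchronous ("fully parallel") cellular automaton update: every cell's new
      # state is computed from the previous generation, then the whole lattice is
      # replaced at once. Toy majority rule on a periodic 2-D lattice.
      import numpy as np

      rng = np.random.default_rng(3)
      lattice = rng.integers(0, 2, size=(64, 64))

      def step(grid):
          # Count the four von Neumann neighbours with periodic boundary conditions.
          neighbours = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
                        np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
          # All cells switch simultaneously: adopt the neighbourhood majority state.
          return (neighbours >= 2).astype(grid.dtype)

      for _ in range(10):
          lattice = step(lattice)
      print("occupied fraction after 10 steps:", lattice.mean())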

  16. A predator-prey model based on fully parallel cellular automata

    OpenAIRE

    He, Mingfeng; Ruan, Hongbo; Yu, Changliang

    2003-01-01

    We presented a predator-prey lattice model containing moveable wolves and sheep, which are characterized by Penna double bit strings. Sexual reproduction and child-care strategies are considered. To implement this model in an efficient way, we build a fully parallel Cellular Automata based on a new definition of the neighborhood. We show the roles played by the initial densities of the populations, the mutation rate and the linear size of the lattice in the evolution of this model.

  17. Precise Modeling Based on Dynamic Phasors for Droop-Controlled Parallel-Connected Inverters

    DEFF Research Database (Denmark)

    Wang, L.; Guo, X.Q.; Gu, H.R.

    2012-01-01

    This paper deals with the precise modeling of droop controlled parallel inverters. This is very attractive since that is a common structure that can be found in a stand-alone droopcontrolled MicroGrid. The conventional small-signal dynamic is not able to predict instabilities of the system, so...

  18. Cocaine Use and Delinquent Behavior among High-Risk Youths: A Growth Model of Parallel Processes

    Science.gov (United States)

    Dembo, Richard; Sullivan, Christopher

    2009-01-01

    We report the results of a parallel-process, latent growth model analysis examining the relationships between cocaine use and delinquent behavior among youths. The study examined a sample of 278 justice-involved juveniles completing at least one of three follow-up interviews as part of a National Institute on Drug Abuse-funded study. The results…

  19. Toward a model framework of generalized parallel componential processing of multi-symbol numbers.

    Science.gov (United States)

    Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph

    2015-05-01

    In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining and investigating a sign-decade compatibility effect for the comparison of positive and negative numbers, which extends the unit-decade compatibility effect in 2-digit number processing. Then, we evaluated whether the model is capable of accounting for previous findings in negative number processing. In a magnitude comparison task, in which participants had to single out the larger of 2 integers, we observed a reliable sign-decade compatibility effect with prolonged reaction times for incompatible (e.g., -97 vs. +53; in which the number with the larger decade digit has the smaller, i.e., negative polarity sign) as compared with sign-decade compatible number pairs (e.g., -53 vs. +97). Moreover, an analysis of participants' eye fixation behavior corroborated our model of parallel componential processing of multi-symbol numbers. These results are discussed in light of concurrent theoretical notions about negative number processing. On the basis of the present results, we propose a generalized integrated model framework of parallel componential multi-symbol processing. (c) 2015 APA, all rights reserved).

  20. Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2015-01-01

    Full Text Available The aim of this study is to present an approach to the introduction to pipeline and parallel computing, using a model of a multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. Modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is an essential topic to be included in the curriculum. At the same time, the topic is among the most motivating tasks due to its comprehensive multidisciplinary and technical requirements. To enhance the educational process, the paper proposes a novel model-centered framework and develops the relevant learning objects. It allows implementing an educational platform for a constructivist learning process, thus enabling learners’ experimentation with the provided programming models, developing learners’ competences in modern scientific research and computational thinking, and capturing the relevant technical knowledge. It also provides an integral platform that allows a simultaneous and comparative introduction to pipelining and parallel computing. The programming language C for developing programming models, and the message passing interface (MPI) and OpenMP parallelization tools, have been chosen for the implementation.
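
    A minimal software pipeline of the kind such a course introduces can be expressed in a few lines; the course itself uses C with MPI and OpenMP, so the Python sketch below only conveys the queueing idea: two stages run as separate processes connected by a queue, so stage 2 can work on item i while stage 1 is already producing item i+1.

      # Two-stage software pipeline with processes connected by a queue.
      # Stage executions overlap in time, which is the essence of pipelining.
      from multiprocessing import Process, Queue

      def stage1(out_q, n_items):
          for i in range(n_items):
              out_q.put(i * i)          # "produce" an item
          out_q.put(None)               # sentinel: no more work

      def stage2(in_q):
          total = 0
          while (item := in_q.get()) is not None:
              total += item             # "consume" the item
          print("stage 2 total:", total)

      if __name__ == "__main__":
          q = Queue()
          p1 = Process(target=stage1, args=(q, 100))
          p2 = Process(target=stage2, args=(q,))
          p1.start(); p2.start()
          p1.join(); p2.join()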

  1. CSDFa: a model for exploiting the trade-off between data and pipeline parallelism

    NARCIS (Netherlands)

    Koek, Peter; Geuns, S.J.; Hausmans, J.P.H.M.; Corporaal, Henk; Bekooij, Marco Jan Gerrit

    2016-01-01

    Real-time stream processing applications, such as SDR applications, are often executed concurrently on multiprocessor systems. A unified data flow model and analysis method have been proposed that can be used to simultaneously determine the amount of pipeline and coarse-grained data parallelism

  2. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    Energy Technology Data Exchange (ETDEWEB)

    Amadio, G.; et al.

    2017-11-22

    An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting in parallel particles in complex geometries exploiting instruction level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics model effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.

  3. Parallel processing and non-uniform grids in global air quality modeling

    NARCIS (Netherlands)

    Berkvens, P.J.F.; Bochev, Mikhail A.

    2002-01-01

    A large-scale global air quality model, running efficiently on a single vector processor, is enhanced to make more realistic and more long-term simulations feasible. Two strategies are combined: non-uniform grids and parallel processing. The communication through the hierarchy of non-uniform grids

  4. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    Science.gov (United States)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.

    2017-10-01

    An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting in parallel particles in complex geometries exploiting instruction level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics model effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.

  5. Interdisciplinary Science through the Parallel Curriculum Model: Lessons from the Sea

    Science.gov (United States)

    Hathcock, Stephanie J.

    2018-01-01

    The Parallel Curriculum Model (PCM) lends itself to considering curriculum development from different angles. It begins with a solid Core Curriculum and can then be extended through the Curriculum of Connections, Practice, and Identity. This article showcases a way of thinking about the creation of a PCM unit by providing examples from an…

  6. Modeling of parallel-plate regenerators with non-uniform plate distributions

    DEFF Research Database (Denmark)

    Jensen, Jesper Buch; Engelbrecht, Kurt; Bahl, Christian Robert Haffenden

    2010-01-01

    A two-dimensional finite element model describing the performance of parallel-plate regenerators with arbitrary channel width distributions has been developed in order to investigate the effect of non-uniform plate spacing on the performance of regenerators. Results for a series of hypothetical...

  7. Sustainability Attitudes and Behavioral Motivations of College Students: Testing the Extended Parallel Process Model

    Science.gov (United States)

    Perrault, Evan K.; Clark, Scott K.

    2018-01-01

    Purpose: A planet that can no longer sustain life is a frightening thought--and one that is often present in mass media messages. Therefore, this study aims to test the components of a classic fear appeal theory, the extended parallel process model (EPPM) and to determine how well its constructs predict sustainability behavioral intentions. This…

  8. Investigation of Mediational Processes Using Parallel Process Latent Growth Curve Modeling

    Science.gov (United States)

    Cheong, JeeWon; MacKinnon, David P.; Khoo, Siek Toon

    2010-01-01

    This study investigated a method to evaluate mediational processes using latent growth curve modeling. The mediator and the outcome measured across multiple time points were viewed as 2 separate parallel processes. The mediational process was defined as the independent variable influencing the growth of the mediator, which, in turn, affected the growth of the outcome. To illustrate modeling procedures, empirical data from a longitudinal drug prevention program, Adolescents Training and Learning to Avoid Steroids, were used. The program effects on the growth of the mediator and the growth of the outcome were examined first in a 2-group structural equation model. The mediational process was then modeled and tested in a parallel process latent growth curve model by relating the prevention program condition, the growth rate factor of the mediator, and the growth rate factor of the outcome. PMID:20157639

  9. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as, for example, cyber-physical systems which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.

  10. A Scheduling-Based Framework for Efficient Massively Parallel Execution, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Modeling and simulation on high-end computing systems has grown increasingly complex in recent years as both models and computer systems continue to advance. The...

  11. Parallelization of a Quantum-Classic Hybrid Model For Nanoscale Semiconductor Devices

    Directory of Open Access Journals (Sweden)

    Oscar Salas

    2011-07-01

    Full Text Available The expensive reengineering of sequential software and the difficulty of parallel programming are two of the many technical and economic obstacles to the wide use of HPC. We investigate the possibility of rapidly improving the performance of a serial numerical code for the simulation of the transport of charged carriers in a Double-Gate MOSFET. We introduce the Drift-Diffusion-Schrödinger-Poisson (DDSP) model and we study a rapid parallelization strategy of the numerical procedure on shared-memory architectures.

  12. Fast parallel algorithm for three-dimensional distance-driven model in iterative computed tomography reconstruction

    International Nuclear Information System (INIS)

    Chen Jian-Lin; Li Lei; Wang Lin-Yuan; Cai Ai-Long; Xi Xiao-Qi; Zhang Han-Ming; Li Jian-Xin; Yan Bin

    2015-01-01

    The projection matrix model is used to describe the physical relationship between the reconstructed object and the projection. Such a model has a strong influence on projection and backprojection, two vital operations in iterative computed tomographic reconstruction. The distance-driven model (DDM) is a state-of-the-art technology that simulates forward and back projections. This model has a low computational complexity and a relatively high spatial resolution; however, it includes only a few methods in a parallel operation with a matched model scheme. This study introduces a fast and parallelizable algorithm to improve the traditional DDM for computing the parallel projection and backprojection operations. Our proposed model has been implemented on a GPU (graphics processing unit) platform and has achieved satisfactory computational efficiency with no approximation. The runtime for the projection and backprojection operations with our model is approximately 4.5 s and 10.5 s per loop, respectively, with an image size of 256×256×256 and 360 projections with a size of 512×512. We compare several general algorithms that have been proposed for maximizing GPU efficiency by using the unmatched projection/backprojection models in a parallel computation. The imaging resolution is not sacrificed and remains accurate during computed tomographic reconstruction. (paper)

  13. Partial Overhaul and Initial Parallel Optimization of KINETICS, a Coupled Dynamics and Chemistry Atmosphere Model

    Science.gov (United States)

    Nguyen, Howard; Willacy, Karen; Allen, Mark

    2012-01-01

    KINETICS is a coupled dynamics and chemistry atmosphere model that is data intensive and computationally demanding. The potential performance gain from using a supercomputer motivates the adaptation from a serial version to a parallelized one. Although the initial parallelization had been done, bottlenecks caused by an abundance of communication calls between processors led to an unfavorable drop in performance. Before starting on the parallel optimization process, a partial overhaul was required because a large emphasis was placed on streamlining the code for user convenience and revising the program to accommodate the new supercomputers at Caltech and JPL. After the first round of optimizations, the partial runtime was reduced by a factor of 23; however, performance gains are dependent on the size of the data, the number of processors requested, and the computer used.

  14. Parallel processing optimization strategy based on MapReduce model in cloud storage environment

    Science.gov (United States)

    Cui, Jianming; Liu, Jiayi; Li, Qiuyan

    2017-05-01

    Currently, a large number of documents in cloud storage are processed by packaging them only after all the packets have been received. In this stored procedure from the local transmitter to the server, packing and unpacking consume a lot of time, and the transmission efficiency is low as well. A new parallel processing algorithm is proposed to optimize the transmission mode. Following the MapReduce working model, MPI technology is used to execute the Mapper and Reducer mechanisms in parallel. Simulation experiments on the Hadoop cloud computing platform show that this algorithm can not only accelerate the file transfer rate, but also shorten the waiting time of the Reducer mechanism. It breaks through the traditional sequential transmission constraints and reduces the storage coupling to improve the transmission efficiency.
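    To make the Mapper/Reducer split concrete, here is a minimal sketch in Python using the standard multiprocessing module in place of MPI; the word-count task and the three-chunk input are illustrative assumptions, not the record's actual file-transfer workload.

```python
# Minimal map/reduce sketch: parallel mappers, single reducer.
# Illustrative only; the record's method uses MPI rather than multiprocessing.
from multiprocessing import Pool
from collections import Counter

def mapper(chunk):
    # Map step: count words in one chunk of text.
    return Counter(chunk.split())

def reducer(partials):
    # Reduce step: merge the partial counts.
    total = Counter()
    for p in partials:
        total.update(p)
    return total

if __name__ == "__main__":
    chunks = ["a b a", "b c", "a c c"]             # stand-in for incoming packets
    with Pool(processes=3) as pool:
        partial_counts = pool.map(mapper, chunks)  # mappers run in parallel
    print(reducer(partial_counts))                 # Counter({'a': 3, 'c': 3, 'b': 2})
```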

  15. MIST: An Open Source Environmental Modelling Programming Language Incorporating Easy to Use Data Parallelism.

    Science.gov (United States)

    Bellerby, Tim

    2014-05-01

    Model Integration System (MIST) is an open-source environmental modelling programming language that directly incorporates data parallelism. The language is designed to enable straightforward programming structures, such as nested loops and conditional statements, to be directly translated into sequences of whole-array (or more generally whole-data-structure) operations. MIST thus enables the programmer to use well-understood constructs, directly relating to the mathematical structure of the model, without having to explicitly vectorize code or worry about details of parallelization. A range of common modelling operations are supported by dedicated language structures operating on cell neighbourhoods rather than individual cells (e.g. the 3x3 local neighbourhood needed to implement an averaging image filter can be accessed from within a simple loop traversing all image pixels). This facility hides details of inter-process communication behind more mathematically relevant descriptions of model dynamics. The MIST automatic vectorization/parallelization process serves both to distribute work among available nodes and, separately, to control storage requirements for intermediate expressions, enabling operations on very large domains for which memory availability may be an issue. MIST is designed to facilitate efficient interpreter-based implementations. A prototype open-source interpreter is available, coded in standard FORTRAN 95, with tools to rapidly integrate existing FORTRAN 77 or 95 code libraries. The language is formally specified and thus not limited to a FORTRAN implementation or to an interpreter-based approach. A MIST-to-FORTRAN compiler is under development and volunteers are sought to create an ANSI-C implementation. Parallel processing is currently implemented using OpenMP. However, the parallelization code is fully modularised and could be replaced with implementations using other libraries. GPU implementation is potentially possible.
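    As a loose illustration of the loop-to-whole-array translation MIST performs, the following Python/NumPy sketch (my own assumption of the idea, not MIST syntax) contrasts an explicit per-pixel loop with an equivalent whole-array 3x3 neighbourhood average.

```python
import numpy as np

def mean3x3_loop(img):
    # Per-pixel loop over interior cells: the style the modeller writes.
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = img[i - 1:i + 2, j - 1:j + 2].mean()
    return out

def mean3x3_whole_array(img):
    # Equivalent whole-array formulation: what a data-parallel translation produces.
    out = img.copy()
    acc = np.zeros_like(img[1:-1, 1:-1])
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            acc += img[1 + di:img.shape[0] - 1 + di, 1 + dj:img.shape[1] - 1 + dj]
    out[1:-1, 1:-1] = acc / 9.0
    return out

img = np.random.rand(64, 64)
assert np.allclose(mean3x3_loop(img), mean3x3_whole_array(img))
```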

  16. Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian

    2017-02-01

    The THOR neutral particle transport code enables simulation of complex geometries for various problems from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V requiring computational efficiency. This has motivated various improvements including angular parallelization, outer iteration acceleration, and development of peripheral tools. For guiding future improvements to the code’s efficiency, better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL’s Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former’s accuracy is bounded by the variability of communication on Falcon while the latter has an error on the order of 1%.
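    The record does not give the functional form of the THOR performance model. Purely to illustrate what a "parallel portion plus communication" model can look like, here is a toy sketch in Python; all parameter values (work per cell, latency, bandwidth, halo size) are invented, not taken from the paper.

```python
# Toy parallel performance model: compute time scaling as 1/P plus a
# latency/bandwidth communication term. All parameter values are made up.
def predicted_runtime(P, work_per_cell=2e-6, n_cells=1_000_000,
                      latency=5e-6, bandwidth=1e9, bytes_per_halo_cell=80):
    compute = work_per_cell * n_cells / P                 # parallel portion
    halo_cells = 6 * (n_cells / P) ** (2.0 / 3.0)         # surface of a cubic subdomain
    comm = latency + bytes_per_halo_cell * halo_cells / bandwidth
    return compute + comm

for P in (1, 8, 64, 512):
    print(P, round(predicted_runtime(P), 4))
```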

  17. Phases and phase transitions in the algebraic microscopic shell model

    Directory of Open Access Journals (Sweden)

    Georgieva A. I.

    2016-01-01

    Full Text Available We explore the dynamical symmetries of the shell model number-conserving algebra, which define three types of pairing and quadrupole phases, with the aim of obtaining the prevailing phase or phase transition for real nuclear systems in a single shell. This is achieved by establishing a correspondence between each of the pairing bases and Elliott's SU(3) basis that describes collective rotation of nuclear systems. This allows for a complete classification of the basis states of different numbers of particles in all the limiting cases. The probability distribution of the SU(3) basis states within their corresponding pairing states is also obtained. The relative strengths of the dynamically symmetric quadrupole-quadrupole interaction with respect to the isoscalar, isovector and total pairing interactions define a control parameter, which estimates the importance of each term of the Hamiltonian in the correct reproduction of the experimental data for the considered nuclei.

  18. Parallel Solution-Phase Synthesis and General Biological Activity of a Uridine Antibiotic Analog Library

    OpenAIRE

    Moukha-chafiq, Omar; Reynolds, Robert C.

    2014-01-01

    A small library of ninety-four uridine antibiotic analogs was synthesized, under the Pilot Scale Library (PSL) Program of the NIH Roadmap initiative, from amine 2 and carboxylic acids 33 and 77 in solution-phase fashion. Diverse aldehyde, sulfonyl chloride, and carboxylic acid reactant sets were condensed to 2, leading, after acid-mediated hydrolysis, to the targeted compounds 3–32 in good yields and high purity. Similarly, treatment of 33 with diverse amines and sulfonamides gave 34–75. The c...

  19. Evaluation of alias-less reconstruction by pseudo-parallel imaging in a phase-scrambling Fourier transform technique

    International Nuclear Information System (INIS)

    Ito, Satoshi; Kawawa, Yasuhiro; Yamada, Yoshifumi

    2010-01-01

    We propose an image reconstruction technique in which parallel image reconstruction is performed based on the sensitivity encoding (SENSE) algorithm using only a single set of signals. The signal obtained in the phase-scrambling Fourier transform (PSFT) imaging technique can be transformed to the signal described by the Fresnel transform of the objects, which is known as the diffracted wave-front equation of the object in acoustics or optics. Since the Fresnel transform is a convolution integral on the object space, the space where the PSFT signal exists can be considered as both in the Fourier domain and in the object domain. This notable feature indicates that weighting functions corresponding to the sensitivity of radiofrequency (RF) coils can be approximately given in the PSFT signal space. Therefore, we can obtain two folded images from a single set of signals with different weighting functions, and image reconstruction based on the SENSE parallel imaging algorithm is possible using a series of folded images. Simulation and experimental studies showed that almost alias-free images can be synthesized using a single signal that does not satisfy the sampling theorem. (author)
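    To make the SENSE-style unfolding step concrete, here is a toy NumPy sketch for an acceleration factor of 2 with synthetic coil sensitivities; the PSFT/Fresnel-domain construction of the weighting functions described in the record is not reproduced, and all data here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_coils, R, n_pix = 4, 2, 128                     # coils, acceleration factor, FOV rows

# Synthetic object and smooth coil sensitivity profiles (illustrative only).
obj = rng.random(n_pix)
sens = np.array([np.cos(np.linspace(0, np.pi, n_pix) + c) + 1.5 for c in range(n_coils)])

# R = 2 aliasing: pixel r folds together with pixel r + n_pix//R in each coil image.
half = n_pix // R
folded = sens[:, :half] * obj[:half] + sens[:, half:] * obj[half:]

# SENSE unfold: per folded pixel, solve a small (n_coils x R) least-squares system.
recon = np.zeros(n_pix)
for r in range(half):
    S = sens[:, [r, r + half]]                    # sensitivities of the two overlapping pixels
    x, *_ = np.linalg.lstsq(S, folded[:, r], rcond=None)
    recon[r], recon[r + half] = x
print("max reconstruction error:", np.abs(recon - obj).max())
```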

  20. Interaction and aggregated modeling of multiple paralleled inverters with LCL filter

    DEFF Research Database (Denmark)

    Lu, Minghui; Wang, Xiongfei; Loh, Poh Chiang

    2015-01-01

    This paper discusses the dynamic interaction of multi-paralleled inverters within a weak grid. Interactive current and common current models are proposed to explain the interaction among these inverters, which are studied with both open-loop and closed-loop analysis. An aggregated model is proposed to describe the totality of the multi-inverters. Additionally, system stability is explicitly studied and classified as interactively and commonly stable. The study is validated by simulations and experiments.

  1. [Method of Entirely Parallel Differential Evolution for Model Adaptation in Systems Biology].

    Science.gov (United States)

    Kozlov, K N; Samsonov, A M; Samsonova, M G

    2015-01-01

    We developed a method of entirely parallel differential evolution for identification of unknown parameters of mathematical models by minimization of the objective function that describes the discrepancy of the model solution and the experimental data. The method is implemented in the free and open source software available on the Internet. The method demonstrated a good performance comparable to the top three methods from CEC-2014 and was successfully applied to several biological problems.
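    For orientation, the sketch below shows the classic DE/rand/1/bin differential evolution scheme in Python/NumPy on a toy objective; the "entirely parallel" variant of the record differs in how the population is updated and distributed, and is not reproduced here.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Classic DE/rand/1/bin; the paper's entirely parallel variant differs."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([objective(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)     # mutation
            cross = rng.random(dim) < CR                  # binomial crossover mask
            cross[rng.integers(dim)] = True               # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f = objective(trial)
            if f < cost[i]:                               # greedy selection
                pop[i], cost[i] = trial, f
    return pop[cost.argmin()], cost.min()

best_x, best_f = differential_evolution(lambda x: np.sum(x**2), [(-5, 5)] * 4)
print(best_x, best_f)
```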

  2. Cirrus Parcel Model Comparison Phase 2

    Science.gov (United States)

    Lin, Ruei-Fong; Starr, David OC.; DeMott, Paul J.; Cotton, Richard; Jensen, Eric; Kaercher, Bernd; Liu, Xiaohong

    2002-01-01

    The Cirrus Parcel Model Comparison (CPMC) project, a project of the GEWEX Cloud System Study Working Group on cirrus clouds (GCSS WG2), is an international effort to advance our knowledge of numerical simulations of cirrus cloud initiation. This project was done in two phases. In Phase 1 of CPMC, parcel models in which the aerosol and ice crystal size distributions are explicitly resolved were used to identify the critical components determining the predicted cloud microphysical properties: the formulation of the homogeneous freezing of aqueous solution droplets, especially the gradient of the nucleation rate with respect to solution concentration; aerosol growth modeling; and the mass accommodation coefficient of water vapor on the ice surface (the deposition coefficient). In Phase 1, all simulations were conducted using a given background aerosol distribution. To complete the comparison study, participant model responses to a range of background aerosol distributions are investigated in Phase 2.

  3. Phase transition in the hadron gas model

    International Nuclear Information System (INIS)

    Gorenstein, M.I.; Petrov, V.K.; Zinov'ev, G.M.

    1981-01-01

    A class of statistical models of hadron gas allowing an analytical solution is considered. A mechanism for a possible phase transition in such a system is found and conditions for its occurrence are determined.

  4. PHASE CHAOS IN THE DISCRETE KURAMOTO MODEL

    DEFF Research Database (Denmark)

    Maistrenko, V.; Vasylenko, A.; Maistrenko, Y.

    2010-01-01

    The paper describes the appearance of a novel, high-dimensional chaotic regime, called phase chaos, in a time-discrete Kuramoto model of globally coupled phase oscillators. This type of chaos is observed at small and intermediate values of the coupling strength. It arises from the nonlinear interaction among the oscillators, while the individual oscillators behave periodically when left uncoupled. For the four-dimensional time-discrete Kuramoto model, we outline the region of phase chaos in the parameter plane and determine the regions where phase chaos coexists with different periodic attractors. We also study the subcritical frequency-splitting bifurcation at the onset of desynchronization and demonstrate that the transition to phase chaos takes place via a torus destruction process.
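    A minimal Python/NumPy iteration of a time-discrete, globally coupled Kuramoto map is sketched below for orientation; the exact map, parameter ranges, and phase convention used in the record may differ, and the natural frequencies here are invented.

```python
import numpy as np

def discrete_kuramoto(theta, omega, K, steps):
    """Iterate a time-discrete Kuramoto map with global (all-to-all) coupling."""
    N = theta.size
    history = [theta.copy()]
    for _ in range(steps):
        coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta = (theta + omega + coupling) % (2 * np.pi)
        history.append(theta.copy())
    return np.array(history)

rng = np.random.default_rng(1)
theta0 = rng.uniform(0, 2 * np.pi, size=4)      # four oscillators, as in the record
omega = np.array([0.1, 0.12, 0.14, 0.16])       # natural frequencies (illustrative)
traj = discrete_kuramoto(theta0, omega, K=0.5, steps=1000)
print(traj.shape)                                # (1001, 4)
```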

  5. Search for inhomogeneous phases in fermionic models

    Science.gov (United States)

    Braun, Jens; Finkbeiner, Stefan; Karbstein, Felix; Roscher, Dietrich

    2015-06-01

    We revisit the Gross-Neveu model with N fermion flavors in 1+1 dimensions and compute its phase diagram at finite temperature and chemical potential in the large-N limit. To this end, we double the number of fermion degrees of freedom in a specific way which allows us to detect inhomogeneous phases in an efficient manner. We show analytically that this "fermion doubling trick" predicts correctly the position of the boundary between the chirally symmetric phase and the phase with broken chiral symmetry. Most importantly, we find that the emergence of an inhomogeneous ground state is predicted correctly. We critically analyze our approach based on this trick and discuss its applicability to other theories, such as fermionic models in higher dimensions, where it may be used to guide the search for inhomogeneous phases.

  6. Securing image information using double random phase encoding and parallel compressive sensing with updated sampling processes

    Science.gov (United States)

    Hu, Guiqiang; Xiao, Di; Wang, Yong; Xiang, Tao; Zhou, Qing

    2017-11-01

    Recently, a new kind of image encryption approach using compressive sensing (CS) and double random phase encoding has received much attention due to the advantages such as compressibility and robustness. However, this approach is found to be vulnerable to chosen plaintext attack (CPA) if the CS measurement matrix is re-used. Therefore, designing an efficient measurement matrix updating mechanism that ensures resistance to CPA is of practical significance. In this paper, we provide a novel solution to update the CS measurement matrix by altering the secret sparse basis with the help of counter mode operation. Particularly, the secret sparse basis is implemented by a reality-preserving fractional cosine transform matrix. Compared with the conventional CS-based cryptosystem that totally generates all the random entries of measurement matrix, our scheme owns efficiency superiority while guaranteeing resistance to CPA. Experimental and analysis results show that the proposed scheme has a good security performance and has robustness against noise and occlusion.

  7. Parallel Computation of Flow in Heterogeneous Media Modelled by Mixed Finite Elements

    Science.gov (United States)

    Cliffe, K. A.; Graham, I. G.; Scheichl, R.; Stals, L.

    2000-11-01

    In this paper we describe a fast parallel method for solving highly ill-conditioned saddle-point systems arising from mixed finite element simulations of stochastic partial differential equations (PDEs) modelling flow in heterogeneous media. Each realisation of these stochastic PDEs requires the solution of the linear first-order velocity-pressure system comprising Darcy's law coupled with an incompressibility constraint. The chief difficulty is that the permeability may be highly variable, especially when the statistical model has a large variance and a small correlation length. For reasonable accuracy, the discretisation has to be extremely fine. We solve these problems by first reducing the saddle-point formulation to a symmetric positive definite (SPD) problem using a suitable basis for the space of divergence-free velocities. The reduced problem is solved using parallel conjugate gradients preconditioned with an algebraically determined additive Schwarz domain decomposition preconditioner. The result is a solver which exhibits a good degree of robustness with respect to the mesh size as well as to the variance and to physically relevant values of the correlation length of the underlying permeability field. Numerical experiments exhibit almost optimal levels of parallel efficiency. The domain decomposition solver (DOUG, http://www.maths.bath.ac.uk/~parsoft) used here not only is applicable to this problem but can be used to solve general unstructured finite element systems on a wide range of parallel architectures.
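    The record's solver uses parallel conjugate gradients with an algebraically determined additive Schwarz preconditioner (via DOUG). As a much simpler stand-in, the sketch below runs SciPy's CG with a Jacobi (diagonal) preconditioner on a 2D Laplacian used as a proxy SPD system; none of this reproduces the actual mixed finite element discretisation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# Simple SPD test matrix: 2D Laplacian (a stand-in for the reduced velocity system).
n = 50
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()
b = np.ones(A.shape[0])

# Jacobi preconditioner: apply M^{-1} x = x / diag(A).  The paper uses an
# algebraically determined additive Schwarz preconditioner instead.
d = A.diagonal()
M = LinearOperator(A.shape, matvec=lambda x: x / d)

x, info = cg(A, b, M=M, maxiter=1000)
print("converged" if info == 0 else "not converged", np.linalg.norm(A @ x - b))
```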

  8. A parallel domain decomposition algorithm for coastal ocean circulation models based on integer linear programming

    Science.gov (United States)

    Jordi, Antoni; Georgas, Nickitas; Blumberg, Alan

    2017-05-01

    This paper presents a new parallel domain decomposition algorithm based on integer linear programming (ILP), a mathematical optimization method. To minimize the computation time of coastal ocean circulation models, the ILP decomposition algorithm divides the global domain into local domains with balanced work load according to the number of processors and avoids computations over as many land grid cells as possible. In addition, it maintains the use of logically rectangular local domains and achieves exactly the same results as traditional domain decomposition algorithms (such as Cartesian decomposition). However, the ILP decomposition algorithm may not converge to an exact solution for relatively large domains. To overcome this problem, we developed two ILP decomposition formulations. The first one (complete formulation) has no additional restriction, although it is impractical for large global domains. The second one (feasible) imposes local domains with the same dimensions and looks for the feasibility of such a decomposition, which allows much larger global domains. The parallel performance of both ILP formulations is compared to a base Cartesian decomposition by simulating two cases with the newly created parallel version of the Stevens Institute of Technology's Estuarine and Coastal Ocean Model (sECOM). Simulations with the ILP formulations always run faster than the ones with the base decomposition, and the complete formulation is better than the feasible one when it is applicable. In addition, parallel efficiency with the ILP decomposition may be greater than one.
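    To illustrate the load-balancing objective (balance wet cells, skip land), here is a deliberately simple greedy column-strip partition in Python/NumPy; it is not the paper's ILP formulation, and the land/sea mask is randomly generated for the example.

```python
import numpy as np

def balanced_strips(wet_mask, n_procs):
    """Greedy column-strip partition balancing wet (ocean) cells across processors.
    A much simpler stand-in for the record's ILP formulation."""
    cum = np.cumsum(wet_mask.sum(axis=0))
    total, n_cols = cum[-1], wet_mask.shape[1]
    bounds, start = [], 0
    for k in range(n_procs):
        target = total * (k + 1) / n_procs
        end = int(np.searchsorted(cum, target, side="left")) + 1
        end = min(max(end, start + 1), n_cols)   # at least one column, stay in range
        bounds.append((start, end))
        start = end
    bounds[-1] = (bounds[-1][0], n_cols)         # last strip takes any leftover columns
    return bounds

rng = np.random.default_rng(0)
mask = rng.random((100, 120)) > 0.3              # True = wet cell, False = land
for k, (c0, c1) in enumerate(balanced_strips(mask, 4)):
    print(f"rank {k}: columns {c0}:{c1}, wet cells {int(mask[:, c0:c1].sum())}")
```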

  9. Seismic waves modeling with the Fourier pseudo-spectral method on massively parallel machines.

    Science.gov (United States)

    Klin, Peter

    2015-04-01

    The Fourier pseudo-spectral method (FPSM) is an approach for the 3D numerical modeling of the wave propagation, which is based on the discretization of the spatial domain in a structured grid and relies on global spatial differential operators for the solution of the wave equation. This last peculiarity is advantageous from the accuracy point of view but poses difficulties for an efficient implementation of the method to be run on parallel computers with distributed memory architecture. The 1D spatial domain decomposition approach has been so far commonly adopted in the parallel implementations of the FPSM, but it implies an intensive data exchange among all the processors involved in the computation, which can degrade the performance because of communication latencies. Moreover, the scalability of the 1D domain decomposition is limited, since the number of processors can not exceed the number of grid points along the directions in which the domain is partitioned. This limitation inhibits an efficient exploitation of the computational environments with a very large number of processors. In order to overcome the limitations of the 1D domain decomposition we implemented a parallel version of the FPSM based on a 2D domain decomposition, which allows to achieve a higher degree of parallelism and scalability on massively parallel machines with several thousands of processing elements. The parallel programming is essentially achieved using the MPI protocol but OpenMP parts are also included in order to exploit the single processor multi - threading capabilities, when available. The developed tool is aimed at the numerical simulation of the seismic waves propagation and in particular is intended for earthquake ground motion research. We show the scalability tests performed up to 16k processing elements on the IBM Blue Gene/Q computer at CINECA (Italy), as well as the application to the simulation of the earthquake ground motion in the alluvial plain of the Po river (Italy).
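    A minimal mpi4py sketch of setting up the 2D Cartesian process grid that such a 2D domain decomposition relies on is given below; the FPSM-specific parts (global FFTs, slab/pencil data exchanges, OpenMP threading) are omitted.

```python
# Run with e.g.: mpirun -n 4 python cart2d.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Factor the processes into a 2D grid (the 2D domain decomposition of the record).
dims = MPI.Compute_dims(size, [0, 0])
cart = comm.Create_cart(dims, periods=[False, False], reorder=True)
coords = cart.Get_coords(cart.Get_rank())

# Neighbours for halo/data exchanges along each decomposed direction.
x_src, x_dst = cart.Shift(direction=0, disp=1)
y_src, y_dst = cart.Shift(direction=1, disp=1)
print(f"rank {rank}: grid coords {coords}, x-neighbours ({x_src},{x_dst}), "
      f"y-neighbours ({y_src},{y_dst})")
```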

  10. Lamb wave propagation modelling and simulation using parallel processing architecture and graphical cards

    International Nuclear Information System (INIS)

    Paćko, P; Bielak, T; Staszewski, W J; Uhl, T; Spencer, A B; Worden, K

    2012-01-01

    This paper demonstrates new parallel computation technology and an implementation for Lamb wave propagation modelling in complex structures. A graphical processing unit (GPU) and the compute unified device architecture (CUDA), available in low-cost graphical cards in standard PCs, are used for Lamb wave propagation numerical simulations. The local interaction simulation approach (LISA) wave propagation algorithm has been implemented as an example. Other algorithms suitable for parallel discretization can also be used in practice. The method is illustrated using examples related to damage detection. The results demonstrate good accuracy and effective computational performance for very large models. The wave propagation modelling presented in the paper can be used in many practical applications of science and engineering. (paper)

  11. Dynamic Modelling and Trajectory Tracking of Parallel Manipulator with Flexible Link

    Directory of Open Access Journals (Sweden)

    Chen Zhengsheng

    2013-09-01

    Full Text Available This paper mainly focuses on dynamic modelling and real-time control for a parallel manipulator with a flexible link. The Lagrange principle and the assumed modes method (AMM) substructure technique are used to formulate the dynamic model of a two-degrees-of-freedom (DOF) parallel manipulator with flexible links. Then, the singular perturbation technique (SPT) is used to decompose the nonlinear dynamic system into slow time-scale and fast time-scale subsystems. Furthermore, the SPT is employed to transform the differential algebraic equations (DAEs) for the kinematic constraints into explicit ordinary differential equations (ODEs), which makes real-time control possible. In addition, a novel composite control scheme is presented; computed torque control is applied to the slow subsystem and the H∞ technique to the fast subsystem, taking account of model uncertainty and outside disturbance. The simulation results show that the composite control can effectively achieve fast and accurate tracking control.

  12. Parallel decomposition and adaptive differencing issues in the whole core modeling of the OSURR

    International Nuclear Information System (INIS)

    Kennedy, Ryanne; Aldemir, Tunc; Sjoden, Glenn

    2008-01-01

    The Ohio State University Research Reactor (OSURR) is an integral part of the work and studies of the Nuclear Engineering community at the university. The Innovations in Nuclear Infrastructure and Education program was established with the objectives of encouraging new and innovative uses for university research reactors. With the goals of this program and those of the OSU NE Graduate Program in mind, a full core model of the OSURR was assembled using the PENTRAN parallel SN code. Good agreement was achieved between the deterministic and Monte Carlo results. As a part of the model construction process, several parametric analyses that influenced parallel execution were performed to improve the calculation time and accuracy of the model results. (authors)

  13. Solid-Phase Parallel Synthesis of Functionalised Medium-to-Large Cyclic Peptidomimetics through Three-Component Coupling Driven by Aziridine Aldehyde Dimers.

    Science.gov (United States)

    Treder, Adam P; Hickey, Jennifer L; Tremblay, Marie-Claude J; Zaretsky, Serge; Scully, Conor C G; Mancuso, John; Doucet, Annie; Yudin, Andrei K; Marsault, Eric

    2015-06-15

    The first solid-phase parallel synthesis of macrocyclic peptides using three-component coupling driven by aziridine aldehyde dimers is described. The method supports the synthesis of 9- to 18-membered aziridine-containing macrocycles, which are then functionalized by nucleophilic opening of the aziridine ring. This constitutes a robust approach for the rapid parallel synthesis of macrocyclic peptides. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Dynamic modelling of a 3-CPU parallel robot via screw theory

    Directory of Open Access Journals (Sweden)

    L. Carbonari

    2013-04-01

    Full Text Available The article describes the dynamic modelling of I.Ca.Ro., a novel Cartesian parallel robot recently designed and prototyped by the robotics research group of the Polytechnic University of Marche. By means of screw theory and the virtual work principle, a computationally efficient model has been built, with the final aim of realising advanced model-based controllers. Then a dynamic analysis has been performed in order to point out possible model simplifications that could lead to a more efficient run-time implementation.

  15. Modeling and Control of the Redundant Parallel Adjustment Mechanism on a Deployable Antenna Panel

    Directory of Open Access Journals (Sweden)

    Lili Tian

    2016-10-01

    Full Text Available With the aim of developing multiple input and multiple output (MIMO) coupling systems with a redundant parallel adjustment mechanism on the deployable antenna panel, a structural control integrated design methodology is proposed in this paper. Firstly, the modal information from the finite element model of the structure of the antenna panel is extracted, and then the mathematical model is established with the Hamilton principle. Secondly, the discrete Linear Quadratic Regulator (LQR) controller is added to the model in order to control the actuators and adjust the shape of the panel. Finally, the engineering practicality of the modeling and control method based on finite element analysis simulation is verified.
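    For reference, a minimal discrete LQR gain computation with SciPy is sketched below on a hypothetical two-state (double-integrator) actuator model; the paper's structural model, which comes from FEM modal data, is not reproduced, and the weights Q and R are invented.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical discrete-time double-integrator actuator model (dt = 0.01 s);
# the record's model is built from FEM modal information instead.
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([100.0, 1.0])          # penalise shape (displacement) and velocity errors
R = np.array([[0.1]])              # penalise actuator effort

# Discrete LQR: solve the Riccati equation, then form the state-feedback gain.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Closed-loop response from an initial shape error.
x = np.array([1.0, 0.0])
for _ in range(500):
    u = -K @ x                     # control input
    x = A @ x + B @ u
print("final state (should be near zero):", x)
```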

  16. Enhancing data parallel applications with task parallelism

    OpenAIRE

    Fernández, Jacqueline; Guerrero, Roberto A.; Piccoli, María Fabiana; Printista, Alicia Marcela; Villalobos, M.

    2001-01-01

    Most parallel applications contain data parallelism, and almost all discussion of its solutions has been limited to the simplest and least expressive form: flat data parallelism. Several generalizations of the flat data parallel model have been proposed because a large number of those applications need a combination of task and data parallelism to represent their natural computation structure and to achieve good performance in their results. Their aim is to allow the capability of combining the easi...

  17. Three-dimensional parallel edge-based finite element modeling of electromagnetic data with field redatuming

    DEFF Research Database (Denmark)

    Cai, Hongzhu; Čuma, Martin; Zhdanov, Michael

    2015-01-01

    This paper presents a parallelized version of the edge-based finite element method with a novel post-processing approach for numerical modeling of an electromagnetic field in complex media. The method uses an unstructured tetrahedral mesh which can reduce the number of degrees of freedom significantly. The linear system of finite element equations is solved using parallel direct solvers which are robust for ill-conditioned systems and efficient for multiple-source electromagnetic (EM) modeling. We also introduce a novel approach to compute the scalar components of the electric field from the tangential components along each edge based on field redatuming. The method can produce a more accurate result as compared to the conventional approach. We have applied the developed algorithm to compute the EM response for a typical 3D anisotropic geoelectrical model of an off-shore HC reservoir with complex...

  18. Polarimetry of transiting planets: Differences between plane-parallel and spherical host star atmosphere models

    Science.gov (United States)

    Kostogryz, N. M.; Yakobchuk, T. M.; Berdyugina, S. V.; Milic, I.

    2017-05-01

    Context. To properly interpret photometric and polarimetric observations of exoplanetary transits, accurate calculations of center-to-limb variations of intensity and linear polarization of the host star are needed. These variations, in turn, depend on the choice of geometry of stellar atmosphere. Aims: We want to understand the dependence of the flux and the polarization curves during a transit on the choice of the applied approximation for the stellar atmosphere: spherical and plane-parallel. We examine whether simpler plane-parallel models of stellar atmospheres are good enough to interpret the flux and the polarization light curves during planetary transits, or whether more complicated spherical models should be used. Methods: Linear polarization during a transit appears because a planet eclipses a stellar disk and thus breaks left-right symmetry. We calculate the flux and the polarization variations during a transit with given center-to-limb variations of intensity and polarization. Results: We calculate the flux and the polarization variations during transit for a sample of 405 extrasolar systems. Most of them show higher transit polarization for the spherical stellar atmosphere. Our calculations reveal a group of exoplanetary systems that demonstrates lower maximum polarization during the transits with spherical model atmospheres of host stars with effective temperatures of Teff = 4400-5400 K and surface gravity of log g = 4.45-4.65 than that obtained with plane-parallel atmospheres. Moreover, we have found two trends of the transit polarization. The first trend is a decrease in the polarization calculated with spherical model atmosphere of host stars with effective temperatures Teff = 3500-5100 K, and the second shows an increase in the polarization for host stars with Teff = 5100-7000 K. These trends can be explained by the relative variation of temperature and pressure dependences in the plane-parallel and spherical model atmospheres. Conclusions: For

  19. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
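    The communication pattern (independent walkers sample with the current weights, their histograms are merged, the weights are updated) can be sketched in a serial Python toy as below; the walker dynamics, the weight-update rule, the GPU implementation, and the update schedule of the record are all simplified assumptions here, not the authors' algorithm.

```python
import numpy as np

def update_weights(ln_w, merged_hist):
    """Toy entropic-sampling style update: W -> W - ln H on visited bins."""
    visited = merged_hist > 0
    ln_w = ln_w.copy()
    ln_w[visited] -= np.log(merged_hist[visited])
    return ln_w - ln_w.max()                 # remove the arbitrary additive constant

def walker(ln_w, n_bins, steps, rng):
    """One walker doing Metropolis moves on a discrete 'energy' axis with weights ln_w."""
    hist = np.zeros(n_bins)
    e = rng.integers(n_bins)
    for _ in range(steps):
        e_new = (e + rng.choice([-1, 1])) % n_bins
        if np.log(rng.random()) < ln_w[e_new] - ln_w[e]:
            e = e_new
        hist[e] += 1
    return hist

n_bins, n_walkers = 32, 8
ln_w = np.zeros(n_bins)
rng = np.random.default_rng(0)
for iteration in range(10):
    # In the parallel scheme the walkers run concurrently; here they run in a loop.
    hists = [walker(ln_w, n_bins, 2000, rng) for _ in range(n_walkers)]
    ln_w = update_weights(ln_w, np.sum(hists, axis=0))
print("spread of final ln-weights:", np.std(ln_w))
```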

  20. Mass-conserving subglacial hydrology in the Parallel Ice Sheet Model version 0.6

    Science.gov (United States)

    Bueler, E.; van Pelt, W.

    2015-06-01

    We describe and test a two-horizontal-dimension subglacial hydrology model which combines till with a distributed system of water-filled, linked cavities which open through sliding and close through ice creep. The addition of this sub-model to the Parallel Ice Sheet Model (PISM) accomplishes three specific goals: (a) conservation of the mass of water, (b) simulation of spatially and temporally variable basal shear stress from physical mechanisms based on a minimal number of free parameters, and (c) convergence under grid refinement. The model is a common generalization of four others: (i) the undrained plastic bed model of Tulaczyk et al. (2000b), (ii) a standard "routing" model used for identifying locations of subglacial lakes, (iii) the lumped englacial-subglacial model of Bartholomaus et al. (2011), and (iv) the elliptic-pressure-equation model of Schoof et al. (2012). We preserve physical bounds on the pressure. In steady state a functional relationship between water amount and pressure emerges. We construct an exact solution of the coupled, steady equations and use it for verification of our explicit time stepping, parallel numerical implementation. We demonstrate the model at scale by 5 year simulations of the entire Greenland ice sheet at 2 km horizontal resolution, with one million nodes in the hydrology grid.

  1. Highly accelerated cardiac cine parallel MRI using low-rank matrix completion and partial separability model

    Science.gov (United States)

    Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie

    2016-05-01

    This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and the partial separability (PS) model. In data acquisition, k-space data are moderately randomly undersampled at the central k-space navigator locations, but highly undersampled at the outer k-space for each temporal frame. In reconstruction, the navigator data are reconstructed from undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the partially separable model is used to obtain partial k-t data. Then the parallel imaging method is used to recover the entire dynamic image series from highly undersampled data. The proposed method has been shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, when the conventional PS method fails.
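    The partial-separability idea itself can be shown in a small NumPy toy: estimate a temporal basis from (densely sampled) navigator rows via a truncated SVD, then fit spatial coefficients by least squares. This ignores the structured low-rank completion and parallel-imaging steps of the record, and the data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_frames, L = 500, 60, 4               # voxels, time frames, model order

# Synthetic dynamic data that is exactly rank L (partially separable).
U_true = rng.standard_normal((n_vox, L))
V_true = rng.standard_normal((L, n_frames))
X = U_true @ V_true                           # Casorati matrix: space x time

# Step 1: temporal basis from densely sampled navigator rows via truncated SVD.
navigators = X[:20, :]                        # a few fully sampled k-space locations
_, _, Vt = np.linalg.svd(navigators, full_matrices=False)
V_hat = Vt[:L, :]                             # estimated temporal subspace

# Step 2: fit spatial coefficients by least squares (here on fully known rows;
# in practice this step works from undersampled data).
U_hat = X @ np.linalg.pinv(V_hat)

print("relative error:", np.linalg.norm(U_hat @ V_hat - X) / np.linalg.norm(X))
```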

  2. Preliminary Phase Field Computational Model Development

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yulan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hu, Shenyang Y. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Ke [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suter, Jonathan D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McCloy, John S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Johnson, Bradley R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ramuhalli, Pradeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-12-15

    This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using the monocrystalline Fe (i.e., ferrite) film as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution to the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of large enough systems that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof-of-concept to investigate major loop effects of single versus polycrystalline bulk iron and effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in

  3. 3-D Parallel Simulation Model of Continuous Beam-Electron Cloud Interactions

    CERN Document Server

    Ghalam, Ali F; Decyk, Viktor K; Huang Cheng Kun; Katsouleas, Thomas C; Mori, Warren; Rumolo, Giovanni; Zimmermann, Frank

    2005-01-01

    A 3D Particle-In-Cell model for continuous modeling of beam and electron cloud interaction in a circular accelerator is presented. A simple model of the lattice structure, mainly the quadrupole and dipole magnets and chromaticity, has been added to a plasma PIC code, QuickPIC, used extensively to model the plasma wakefield acceleration concept. The code utilizes parallel processing techniques with domain decomposition in both longitudinal and transverse domains to overcome the massive computational cost of continuously modeling the beam-cloud interaction. Through parallel modeling, we have been able to simulate long-term beam propagation in the presence of electron cloud in many existing and future circular machines around the world. The exact dipole lattice structure has been added to the code and the simulation results for CERN-SPS and LHC with the new lattice structure have been studied. Also, the simulation results are compared to the results from two-macroparticle modeling of the strong head-tail instability. ...

  4. Modeling of fatigue crack induced nonlinear ultrasonics using a highly parallelized explicit local interaction simulation approach

    Science.gov (United States)

    Shen, Yanfeng; Cesnik, Carlos E. S.

    2016-04-01

    This paper presents a parallelized modeling technique for the efficient simulation of nonlinear ultrasonics introduced by the wave interaction with fatigue cracks. The elastodynamic wave equations with contact effects are formulated using an explicit Local Interaction Simulation Approach (LISA). The LISA formulation is extended to capture the contact-impact phenomena during the wave-damage interaction based on the penalty method. A Coulomb friction model is integrated into the computation procedure to capture the stick-slip contact shear motion. The LISA procedure is coded using the Compute Unified Device Architecture (CUDA), which enables highly parallelized supercomputing on powerful graphics cards. Both the explicit contact formulation and the parallel feature facilitate LISA's superb computational efficiency over the conventional finite element method (FEM). The theoretical formulation based on the penalty method is introduced and a guideline for the proper choice of the contact stiffness is given. The convergence behavior of the solution under various contact stiffness values is examined. A numerical benchmark problem is used to investigate the new LISA formulation and results are compared with a conventional contact finite element solution. Various nonlinear ultrasonic phenomena are successfully captured using this contact LISA formulation, including the generation of nonlinear higher-harmonic responses. Nonlinear mode conversion of guided waves at fatigue cracks is also studied.
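    The penalty idea for contact can be shown in a one-dimensional toy: when two "crack faces" interpenetrate, a stiff penalty spring pushes them apart in an explicit time-stepping loop. The sketch below is my own simplification; it does not reproduce the LISA stencil, the Coulomb friction model, or the CUDA implementation, and the parameter values are invented.

```python
import numpy as np

# 1D toy of penalty-method contact: two unit masses approach each other; when
# the crack faces interpenetrate, a penalty spring pushes them apart.
m, k_penalty, dt, steps = 1.0, 1.0e4, 1.0e-4, 20000
x = np.array([0.0, 0.01])          # face positions, initial gap of 0.01
v = np.array([0.5, 0.0])           # left face moves towards the right one

for _ in range(steps):
    penetration = x[0] - x[1]       # > 0 only when the faces interpenetrate
    f = k_penalty * penetration if penetration > 0 else 0.0
    a = np.array([-f, +f]) / m      # penalty force acts in opposite directions
    v = v + a * dt                  # explicit (semi-implicit Euler) update
    x = x + v * dt

# Equal masses: after the elastic contact the velocities have (approximately) swapped.
print("velocities:", v)
```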

  5. An Ant Optimization Model for Unrelated Parallel Machine Scheduling with Energy Consumption and Total Tardiness

    Directory of Open Access Journals (Sweden)

    Peng Liang

    2015-01-01

    Full Text Available This research considers an unrelated parallel machine scheduling problem with energy consumption and total tardiness. This problem is compounded by two challenges: differences in energy consumption among unrelated parallel machines, and the interaction between job assignments and machine state operations. To begin with, we establish a mathematical model for this problem. Then an ant colony optimization algorithm based on the ATC heuristic rule (ATC-ACO) is presented. Furthermore, optimal parameters of the proposed algorithm are determined via Taguchi methods using generated test data. Finally, comparative experiments indicate that the proposed ATC-ACO algorithm has better performance in minimizing energy consumption as well as total tardiness, and that the modified ATC heuristic rule is more effective at reducing energy consumption.
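    For context, the classic Apparent Tardiness Cost (ATC) dispatching index that such ATC-based heuristics build on is sketched below in plain Python; the ACO machinery, the energy terms, and the authors' modified rule are not reproduced, and the job data are invented.

```python
import math

def atc_index(job, t, k, p_bar):
    """Apparent Tardiness Cost priority of a job at time t (classic ATC rule)."""
    w, p, d = job["weight"], job["proc_time"], job["due"]
    slack = max(d - p - t, 0.0)
    return (w / p) * math.exp(-slack / (k * p_bar))

jobs = [
    {"weight": 1.0, "proc_time": 4.0, "due": 10.0},
    {"weight": 2.0, "proc_time": 3.0, "due": 6.0},
    {"weight": 1.5, "proc_time": 5.0, "due": 20.0},
]
p_bar = sum(j["proc_time"] for j in jobs) / len(jobs)   # average processing time
t, k = 0.0, 2.0                                         # current time, look-ahead parameter

# Dispatch the job with the highest ATC index next (single-machine illustration).
best = max(jobs, key=lambda j: atc_index(j, t, k, p_bar))
print("schedule next:", best)
```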

  6. Animated computer graphics models of space and earth sciences data generated via the massively parallel processor

    Science.gov (United States)

    Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David

    1987-01-01

    The capability to rapidly produce visual representations of large, complex, multi-dimensional space and earth sciences data sets was developed via the implementation of computer graphics modeling techniques on the Massively Parallel Processor (MPP), employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS), a data-independent environment for computer graphics data display, to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.

  7. A Phase Separation Model for Transcriptional Control.

    Science.gov (United States)

    Hnisz, Denes; Shrinivas, Krishna; Young, Richard A; Chakraborty, Arup K; Sharp, Phillip A

    2017-03-23

    Phase-separated multi-molecular assemblies provide a general regulatory mechanism to compartmentalize biochemical reactions within cells. We propose that a phase separation model explains established and recently described features of transcriptional control. These features include the formation of super-enhancers, the sensitivity of super-enhancers to perturbation, the transcriptional bursting patterns of enhancers, and the ability of an enhancer to produce simultaneous activation at multiple genes. This model provides a conceptual framework to further explore principles of gene control in mammals. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach

    Science.gov (United States)

    Mak, Victor W. K.

    1986-01-01

    Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.

  9. Measuring effectiveness of a university by a parallel network DEA model

    Science.gov (United States)

    Kashim, Rosmaini; Kasim, Maznah Mat; Rahman, Rosshairy Abd

    2017-11-01

    Universities contribute significantly to the development of human capital and the socio-economic improvement of a country. Because of that, Malaysian universities have carried out various initiatives to improve their performance. Most studies have used the Data Envelopment Analysis (DEA) model to measure efficiency rather than effectiveness, even though the measurement of effectiveness is important for understanding how effective a university is in achieving its ultimate goals. A university system has two major functions, namely teaching and research, and every function has different resources based on its emphasis. Therefore, a university is actually structured as a parallel production system, with its overall effectiveness being the aggregated effectiveness of teaching and research. Hence, this paper proposes a parallel network DEA model to measure the effectiveness of a university. This model takes the internal operations of both the teaching and research functions into account in computing the effectiveness of a university system. In the literature, the number of graduates and the number of programs offered are defined as the outputs, and the employed graduates and the number of programs accredited by professional bodies are considered as the outcomes for measuring teaching effectiveness. The amount of grants is regarded as the output of research, while publications of different quality are considered as the outcomes of research. A system is considered effective only if all functions are effective. This model has been tested using a hypothetical set of data consisting of 14 faculties at a public university in Malaysia. The results show that none of the faculties is relatively effective in overall performance. Three faculties are effective in teaching and two faculties are effective in research. The potential applications of the parallel network DEA model allow the top management of a university to identify weaknesses in any functions in their universities and take rational steps for improvement.

  10. Shear-induced parallel-to-perpendicular orientation transition in the amphiphilic lamellar phase: a nonequilibrium molecular-dynamics simulation study.

    Science.gov (United States)

    Guo, Hongxia

    2006-02-07

    The present work is devoted to a study of the shear-induced parallel-to-perpendicular orientation transition in the lamellar system by large-scale nonequilibrium molecular-dynamics (NEMD) simulation. An effective generic model, an A2B2 tetramer, is used for the amphiphiles. The NEMD simulation produces unambiguous evidence that undulation instability along the vorticity direction sets in well above a critical shear rate and grows in magnitude as the shear rate is further increased. At a certain high shear rate, the coherent undulation instability grows so large that defects are nucleated and the global lamellar monodomain breaks into several aligned lamellar domains. Subsequently, layers in these domains rotate into the perpendicular orientation with the rotation of chains towards the y direction, merge into a global perpendicular-aligned lamellar monodomain, and organize into a perfect well-aligned perpendicular lamellar phase by the migration and annihilation of edge dislocations and disclinations. The macroscopic observable viscosity as a function of time or shear rate is correlated with the structural response, such as the mesoscopic domain morphology and the microscopic chain conformation. The onset of undulation instability concurs with the start of shear-thinning behavior. During the orientation transformation at the high shear rate, complex time-dependent thixotropic behavior is observed. The smaller viscosity in the perpendicular lamellar phase gives an energetic reason for the shear-induced orientation transition.

  11. A Fault-Tolerant Parallel Structure of Single-Phase Full-Bridge Rectifiers for a Wound-Field Doubly Salient Generator

    DEFF Research Database (Denmark)

    Chen, Zhihui; Chen, Ran; Chen, Zhe

    2013-01-01

    The fault-tolerance design is widely adopted for high-reliability applications. In this paper, a parallel structure of single-phase full-bridge rectifiers (FBRs) (PS-SPFBR) is proposed for a wound-field doubly salient generator. The analysis shows the potential fault-tolerance capability of the PS...

  12. GaAs mixed signal multi-function X-band MMIC with 7 bit phase and amplitude control and integrated serial to parallel converter

    NARCIS (Netherlands)

    Boer, A. de; Mouthaan, K.

    2000-01-01

    The design and measured performance of a GaAs multi-function X-band MMIC for space-based synthetic aperture radar (SAR) applications with 7-bit phase and amplitude control and integrated serial to parallel converter (including level conversion) is presented. The main application for the

  13. Cache-aware data structure model for parallelism and dynamic load balancing

    International Nuclear Information System (INIS)

    Sridi, Marwa

    2016-01-01

    This PhD thesis is dedicated to the implementation of innovative parallel methods in the framework of fast transient fluid-structure dynamics. It improves existing methods within the EUROPLEXUS software in order to optimize the shared-memory parallel strategy, complementary to the original distributed-memory approach, with both brought together into a global hybrid strategy for clusters of multi-core nodes. Starting from a sound analysis of the state of the art concerning data-structuring techniques correlated to the hierarchical memory organization of current multi-processor architectures, the proposed work introduces an approach suitable for explicit time integration (i.e. with no linear system to solve at each step). A data structure of type 'structure of arrays' is retained for the global data storage, providing flexibility and efficiency for common operations on kinematic fields (displacement, velocity and acceleration). By contrast, in the particular case of elementary operations (generic internal-force computations, as well as flux computations between cell faces for fluid models), which are particularly time consuming but localized in the program, a temporary data structure of type 'array of structures' is used instead, to force an efficient filling of the cache memory and increase the performance of the resolution, for both serial and shared-memory parallel processing. Switching from the global structure to the temporary one is based on a cell-grouping strategy, following classic cache-blocking principles but specifically handling the neighboring data necessary for the efficient treatment of ALE fluxes for cells on the group boundaries. The proposed approach is extensively tested, from the point of view of both computation time and cache misses, weighing the gains obtained within the elementary operations against the potential overhead generated by the data-structure switch. Obtained results are very

  14. On Affine Fusion and the Phase Model

    Directory of Open Access Journals (Sweden)

    Mark A. Walton

    2012-11-01

    Full Text Available A brief review is given of the integrable realization of affine fusion discovered recently by Korff and Stroppel. They showed that the affine fusion of the su(n) Wess-Zumino-Novikov-Witten (WZNW) conformal field theories appears in a simple integrable system known as the phase model. The Yang-Baxter equation leads to the construction of commuting operators as Schur polynomials, with noncommuting hopping operators as arguments. The algebraic Bethe ansatz diagonalizes them, revealing a connection to the modular S matrix and fusion of the su(n) WZNW model. The noncommutative Schur polynomials play roles similar to those of the primary field operators in the corresponding WZNW model. In particular, their 3-point functions are the su(n) fusion multiplicities. We show here how the new phase model realization of affine fusion makes obvious the existence of threshold levels, and how it accommodates higher-genus fusion.

  15. Random matrix models for phase diagrams

    International Nuclear Information System (INIS)

    Vanderheyden, B; Jackson, A D

    2011-01-01

    We describe a random matrix approach that can provide generic and readily soluble mean-field descriptions of the phase diagram for a variety of systems ranging from quantum chromodynamics to high-Tc materials. Instead of working from specific models, phase diagrams are constructed by averaging over the ensemble of theories that possesses the relevant symmetries of the problem. Although approximate in nature, this approach has a number of advantages. First, it can be useful in distinguishing generic features from model-dependent details. Second, it can help in understanding the 'minimal' number of symmetry constraints required to reproduce specific phase structures. Third, the robustness of predictions can be checked with respect to variations in the detailed description of the interactions. Finally, near critical points, random matrix models bear strong similarities to Ginsburg-Landau theories with the advantage of additional constraints inherited from the symmetries of the underlying interaction. These constraints can be helpful in ruling out certain topologies in the phase diagram. In this Key Issues Review, we illustrate the basic structure of random matrix models, discuss their strengths and weaknesses, and consider the kinds of system to which they can be applied.

  16. Getting To Exascale: Applying Novel Parallel Programming Models To Lab Applications For The Next Generation Of Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Dube, Evi [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shereda, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nau, Lee [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Harris, Lance [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2010-09-27

    As supercomputing moves toward exascale, node architectures will change significantly. CPU core counts on nodes will increase by an order of magnitude or more. Heterogeneous architectures will become more commonplace, with GPUs or FPGAs providing additional computational power. Novel programming models may make better use of on-node parallelism in these new architectures than do current models. In this paper we examine several of these novel models – UPC, CUDA, and OpenCL – to determine their suitability to LLNL scientific application codes. Our study consisted of several phases: we conducted interviews with code teams and selected two codes to port; we learned how to program in the new models and ported the codes; we debugged and tuned the ported applications; we measured results and documented our findings. We conclude that UPC is a challenge for porting code, Berkeley UPC is not very robust, and UPC is not suitable as a general alternative to OpenMP for a number of reasons. CUDA is well supported and robust but is a proprietary NVIDIA standard, while OpenCL is an open standard. Both are well suited to a specific set of application problems that can be run on GPUs, but some problems are not suited to GPUs. Further study of the landscape of novel models is recommended.

  17. Error modeling and tolerance design of a parallel manipulator with full-circle rotation

    Directory of Open Access Journals (Sweden)

    Yanbing Ni

    2016-05-01

    Full Text Available A method for improving the accuracy of a parallel manipulator with full-circle rotation is systematically investigated in this work via kinematic analysis, error modeling, sensitivity analysis, and tolerance allocation. First, a kinematic analysis of the mechanism is made using the space vector chain method. Using the results as a basis, an error model is formulated considering the main error sources. Position and orientation error-mapping models are established by mathematical transformation of the parallelogram structure characteristics. Second, a sensitivity analysis is performed on the geometric error sources. A global sensitivity evaluation index is proposed to evaluate the contribution of the geometric errors to the accuracy of the end-effector. The analysis results provide a theoretical basis for the allocation of tolerances to the parts of the mechanical design. Finally, based on the results of the sensitivity analysis, the design of the tolerances can be solved as a nonlinearly constrained optimization problem. A genetic algorithm is applied to carry out the allocation of the manufacturing tolerances of the parts. Accordingly, the tolerance ranges for nine kinds of geometrical error sources are obtained. The achievements made in this work can also be applied to other similar parallel mechanisms with full-circle rotation to improve error modeling and design accuracy.
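
    To make the final allocation step concrete, the sketch below treats tolerance allocation as the record describes it, a constrained optimization solved with a genetic algorithm: minimize a manufacturing-cost surrogate subject to an end-effector accuracy budget built from sensitivity coefficients. The sensitivity values, cost coefficients, bounds, and GA settings are illustrative assumptions, not the paper's data:

        import numpy as np

        rng = np.random.default_rng(1)

        # Illustrative data (not from the paper): sensitivity of the end-effector
        # error to each of nine geometric error sources, and a cost coefficient each.
        SENS = np.array([0.9, 0.7, 0.5, 1.2, 0.3, 0.8, 0.6, 0.4, 1.0])
        COST = np.array([4.0, 3.0, 2.5, 5.0, 1.5, 3.5, 2.0, 1.8, 4.5])
        BUDGET = 0.05            # allowed end-effector error (mm), assumed
        BOUNDS = (0.001, 0.05)   # allowed tolerance range per source (mm), assumed

        def fitness(tols):
            cost = np.sum(COST / tols, axis=1)                  # looser tolerance = cheaper
            err = SENS @ tols.T                                 # linear worst-case error estimate
            penalty = 1e4 * np.maximum(0.0, err - BUDGET) ** 2  # budget violation penalised
            return cost + penalty

        def genetic_allocation(pop_size=80, gens=200, mut=0.1):
            lo, hi = BOUNDS
            pop = rng.uniform(lo, hi, size=(pop_size, SENS.size))
            for _ in range(gens):
                order = np.argsort(fitness(pop))
                parents = pop[order[: pop_size // 2]]           # truncation selection
                a = parents[rng.integers(len(parents), size=pop_size)]
                b = parents[rng.integers(len(parents), size=pop_size)]
                mask = rng.random(a.shape) < 0.5                # uniform crossover
                children = np.where(mask, a, b)
                # Gaussian mutation on ~20% of genes, clipped back to the feasible box
                children += rng.normal(0.0, mut * (hi - lo), size=children.shape) * (rng.random(children.shape) < 0.2)
                pop = np.clip(children, lo, hi)
                pop[0] = parents[0]                             # elitism
            return pop[np.argmin(fitness(pop))]

        if __name__ == "__main__":
            best = genetic_allocation()
            print("allocated tolerances (mm):", np.round(best, 4))
            print("predicted error (mm):     ", round(float(SENS @ best), 4))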

  18. Improved Path Loss Simulation Incorporating Three-Dimensional Terrain Model Using Parallel Coprocessors

    Directory of Open Access Journals (Sweden)

    Zhang Bin Loo

    2017-01-01

    Full Text Available Current network simulators abstract out wireless propagation models due to the high computation requirements for realistic modeling. As such, there is still a large gap between the results obtained from simulators and real world scenario. In this paper, we present a framework for improved path loss simulation built on top of an existing network simulation software, NS-3. Different from the conventional disk model, the proposed simulation also considers the diffraction loss computed using Epstein and Peterson’s model through the use of actual terrain elevation data to give an accurate estimate of path loss between a transmitter and a receiver. The drawback of high computation requirements is relaxed by offloading the computationally intensive components onto an inexpensive off-the-shelf parallel coprocessor, which is a NVIDIA GPU. Experiments are performed using actual terrain elevation data provided from United States Geological Survey. As compared to the conventional CPU architecture, the experimental result shows that a speedup of 20x to 42x is achieved by exploiting the parallel processing of GPU to compute the path loss between two nodes using terrain elevation data. The result shows that the path losses between two nodes are greatly affected by the terrain profile between these two nodes. Besides this, the result also suggests that the common strategy to place the transmitter in the highest position may not always work.
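
    A minimal sketch of the diffraction term described above: single knife-edge losses (an ITU-R P.526-style approximation) accumulated edge by edge in the Epstein-Peterson manner over a terrain profile, added to free-space path loss. The terrain profile, antenna heights, and frequency are synthetic, every interior profile point is treated as a potential edge, and plain NumPy stands in for the GPU offload reported in the record:

        import numpy as np

        def knife_edge_loss_db(v):
            """Approximate single knife-edge diffraction loss as a function of the
            Fresnel-Kirchhoff parameter v (ITU-R P.526-style fit)."""
            v = np.asarray(v, dtype=float)
            loss = 6.9 + 20.0 * np.log10(np.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)
            return np.where(v > -0.78, loss, 0.0)

        def epstein_peterson_loss_db(dist_m, height_m, tx_h, rx_h, freq_hz):
            """Free-space loss plus single-edge losses summed over successive obstacles."""
            lam = 3e8 / freq_hz
            h = height_m.copy()
            h[0] += tx_h        # antennas sit above the first/last terrain points
            h[-1] += rx_h
            total = 0.0
            for i in range(1, len(h) - 1):
                d1 = dist_m[i] - dist_m[i - 1]
                d2 = dist_m[i + 1] - dist_m[i]
                # obstacle height above the straight line joining its neighbours
                line = h[i - 1] + (h[i + 1] - h[i - 1]) * d1 / (d1 + d2)
                clearance = h[i] - line
                v = clearance * np.sqrt(2.0 / lam * (1.0 / d1 + 1.0 / d2))
                total += float(knife_edge_loss_db(v))
            fspl = 20.0 * np.log10(4.0 * np.pi * (dist_m[-1] - dist_m[0]) / lam)
            return fspl + total

        if __name__ == "__main__":
            # synthetic 5-point terrain profile (distance in m, elevation in m)
            d = np.array([0.0, 2500.0, 5000.0, 7500.0, 10000.0])
            z = np.array([120.0, 160.0, 140.0, 180.0, 110.0])
            print("path loss ~", round(epstein_peterson_loss_db(d, z, 30.0, 2.0, 900e6), 1), "dB")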

  19. A generic simulation cell method for developing extensible, efficient and readable parallel computational models

    Science.gov (United States)

    Honkonen, I.

    2015-03-01

    I present a method for developing extensible and modular computational models without sacrificing serial or parallel performance or source code readability. By using a generic simulation cell method I show that it is possible to combine several distinct computational models to run in the same computational grid without requiring modification of existing code. This is an advantage for the development and testing of, e.g., geoscientific software as each submodel can be developed and tested independently and subsequently used without modification in a more complex coupled program. An implementation of the generic simulation cell method presented here, generic simulation cell class (gensimcell), also includes support for parallel programming by allowing model developers to select which simulation variables of, e.g., a domain-decomposed model to transfer between processes via a Message Passing Interface (MPI) library. This allows the communication strategy of a program to be formalized by explicitly stating which variables must be transferred between processes for the correct functionality of each submodel and the entire program. The generic simulation cell class requires a C++ compiler that supports a version of the language standardized in 2011 (C++11). The code is available at https://github.com/nasailja/gensimcell for everyone to use, study, modify and redistribute; those who do are kindly requested to acknowledge and cite this work.
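
    A minimal Python sketch of the core idea, assuming nothing about the actual gensimcell API (which is C++/MPI): a cell aggregates variables owned by different submodels, and only the variables each submodel has flagged for transfer are packed for exchange between processes:

        from dataclasses import dataclass, field

        @dataclass
        class Variable:
            name: str
            value: float = 0.0
            transfer: bool = False   # should this variable be sent to neighbour processes?

        @dataclass
        class Cell:
            """A grid cell that aggregates variables from several submodels."""
            variables: dict = field(default_factory=dict)

            def add(self, var: Variable):
                self.variables[var.name] = var

            def pack_for_transfer(self):
                """Collect only the variables flagged for inter-process transfer."""
                return {n: v.value for n, v in self.variables.items() if v.transfer}

            def unpack(self, payload: dict):
                for name, value in payload.items():
                    self.variables[name].value = value

        if __name__ == "__main__":
            cell = Cell()
            cell.add(Variable("density", 1.2, transfer=True))        # advection submodel
            cell.add(Variable("magnetic_flux", 0.4, transfer=True))  # field-solver submodel
            cell.add(Variable("scratch_buffer", 0.0, transfer=False))
            print("payload sent between processes:", cell.pack_for_transfer())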

  20. Parallel goal-oriented adaptive finite element modeling for 3D electromagnetic exploration

    Science.gov (United States)

    Zhang, Y.; Key, K.; Ovall, J.; Holst, M.

    2014-12-01

    We present a parallel goal-oriented adaptive finite element method for accurate and efficient electromagnetic (EM) modeling of complex 3D structures. An unstructured tetrahedral mesh allows this approach to accommodate arbitrarily complex 3D conductivity variations and a priori known boundaries. The total electric field is approximated by the lowest order linear curl-conforming shape functions and the discretized finite element equations are solved by a sparse LU factorization. Accuracy of the finite element solution is achieved through adaptive mesh refinement that is performed iteratively until the solution converges to the desired accuracy tolerance. Refinement is guided by a goal-oriented error estimator that uses a dual-weighted residual method to optimize the mesh for accurate EM responses at the locations of the EM receivers. As a result, the mesh refinement is highly efficient since it only targets the elements where the inaccuracy of the solution corrupts the response at the possibly distant locations of the EM receivers. We compare the accuracy and efficiency of two approaches for estimating the primary residual error required at the core of this method: one uses local element and inter-element residuals and the other relies on solving a global residual system using a hierarchical basis. For computational efficiency our method follows the Bank-Holst algorithm for parallelization, where solutions are computed in subdomains of the original model. To resolve the load-balancing problem, this approach applies a spectral bisection method to divide the entire model into subdomains that have approximately equal error and the same number of receivers. The finite element solutions are then computed in parallel with each subdomain carrying out goal-oriented adaptive mesh refinement independently. We validate the newly developed algorithm by comparison with controlled-source EM solutions for 1D layered models and with 2D results from our earlier 2D goal oriented

  1. Algorithms for a parallel implementation of Hidden Markov Models with a small state space

    DEFF Research Database (Denmark)

    Nielsen, Jesper; Sand, Andreas

    2011-01-01

    Two of the most important algorithms for Hidden Markov Models are the forward and the Viterbi algorithms. We show how formulating these using linear algebra naturally lends itself to parallelization. Although the obtained algorithms are slow for Hidden Markov Models with large state spaces......, they require very little communication between processors, and are fast in practice on models with a small state space. We have tested our implementation against two other implementations on artificial data and observe a speed-up of roughly a factor of 5 for the forward algorithm and more than 6...... for the Viterbi algorithm. We also tested our algorithm in the Coalescent Hidden Markov Model framework, where it gave a significant speed-up....
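
    To make the linear-algebra formulation concrete, here is a minimal NumPy sketch of the scaled forward algorithm written as one matrix-vector product plus an elementwise emission scaling per time step, which is the structure the record says lends itself to parallelization; the toy model parameters are made up:

        import numpy as np

        def forward_loglik(pi, A, B, obs):
            """log P(obs) for an HMM via scaled matrix-vector products.

            pi : (K,) initial distribution; A : (K,K) transitions, A[i,j] = P(j|i);
            B  : (K,M) emissions, B[i,o] = P(o|i); obs : list of observation indices.
            """
            alpha = pi * B[:, obs[0]]
            c = alpha.sum()
            alpha /= c
            loglik = np.log(c)
            for o in obs[1:]:
                alpha = (A.T @ alpha) * B[:, o]   # one time step = matvec + elementwise scaling
                c = alpha.sum()                   # rescale to avoid underflow
                alpha /= c
                loglik += np.log(c)
            return loglik

        if __name__ == "__main__":
            # toy 2-state, 2-symbol model (illustrative numbers)
            pi = np.array([0.6, 0.4])
            A = np.array([[0.7, 0.3], [0.2, 0.8]])
            B = np.array([[0.9, 0.1], [0.3, 0.7]])
            print("log-likelihood:", forward_loglik(pi, A, B, [0, 1, 1, 0, 1]))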

  2. A scalable approach to modeling groundwater flow on massively parallel computers

    International Nuclear Information System (INIS)

    Ashby, S.F.; Falgout, R.D.; Tompson, A.F.B.

    1995-12-01

    We describe a fully scalable approach to the simulation of groundwater flow on a hierarchy of computing platforms, ranging from workstations to massively parallel computers. Specifically, we advocate the use of scalable conceptual models in which the subsurface model is defined independently of the computational grid on which the simulation takes place. We also describe a scalable multigrid algorithm for computing the groundwater flow velocities. We are thus able to leverage both the engineer's time spent developing the conceptual model and the computing resources used in the numerical simulation. We have successfully employed this approach at the LLNL site, where we have run simulations ranging in size from just a few thousand spatial zones (on workstations) to more than eight million spatial zones (on the CRAY T3D), all using the same conceptual model

  3. New 3D parallel GILD electromagnetic modeling and nonlinear inversion using global magnetic integral and local differential equation

    Energy Technology Data Exchange (ETDEWEB)

    Xie, G.; Li, J.; Majer, E.; Zuo, D.

    1998-07-01

    This paper describes a new 3D parallel GILD electromagnetic (EM) modeling and nonlinear inversion algorithm. The algorithm consists of: (a) a new magnetic integral equation instead of the electric integral equation to solve the electromagnetic forward modeling and inverse problem; (b) a collocation finite element method for solving the magnetic integral and a Galerkin finite element method for the magnetic differential equations; (c) a nonlinear regularizing optimization method to make the inversion stable and of high resolution; and (d) a new parallel 3D modeling and inversion using a global integral and local differential domain decomposition technique (GILD). The new 3D nonlinear electromagnetic inversion has been tested with synthetic data and field data. The authors obtained very good imaging for the synthetic data and reasonable subsurface EM imaging for the field data. The parallel algorithm has a high parallel efficiency of over 90% and can serve as a parallel solver for elliptic, parabolic, and hyperbolic modeling and inversion. The parallel GILD algorithm can be extended to develop high-resolution, large-scale seismic and hydrology modeling and inversion on massively parallel computers.

  4. Modeling of Phase Equilibria Containing Associating Fluids

    DEFF Research Database (Denmark)

    Derawi, Samer; Kontogeorgis, Georgios

    glycol + heptane, methylcyclohexane, hexane, propylene glycol + heptane, diethylene glycol + heptane, triethylene glycol + heptane, and tetraethylene glycol + heptane. The data obtained were correlated with the NRTL model and two different versions of the UNIQUAC equation. The NRTL model and one...... in terms of an activity coefficient model or an equation of state. Our target in this thesis is to review and develop such models capable of describing qualitatively as well as quantitatively phase equilibria in multicomponent multiphase systems containing non-polar, polar, and associating compounds...... coefficient) calculations has been carried out. UNIFAC is an activity coefficient model while AFC is a model specifically developed for Pow calculations. Five different versions of UNIFAC and the AFC correlation model have been compared with each other and with experimental data. The range of applicability...
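
    As a sketch of the correlation step, the snippet below evaluates binary NRTL activity coefficients for a strongly non-ideal glycol(1) + alkane(2) pair; the interaction parameters and the non-randomness factor are placeholders, not the fitted values of the thesis:

        import numpy as np

        def nrtl_binary(x1, tau12, tau21, alpha=0.3):
            """Activity coefficients (gamma1, gamma2) from the binary NRTL model."""
            x2 = 1.0 - x1
            G12 = np.exp(-alpha * tau12)
            G21 = np.exp(-alpha * tau21)
            ln_g1 = x2 ** 2 * (tau21 * (G21 / (x1 + x2 * G21)) ** 2
                               + tau12 * G12 / (x2 + x1 * G12) ** 2)
            ln_g2 = x1 ** 2 * (tau12 * (G12 / (x2 + x1 * G12)) ** 2
                               + tau21 * G21 / (x1 + x2 * G21) ** 2)
            return np.exp(ln_g1), np.exp(ln_g2)

        if __name__ == "__main__":
            # placeholder parameters for an illustrative glycol(1) + heptane(2) pair
            for x1 in (0.1, 0.5, 0.9):
                g1, g2 = nrtl_binary(x1, tau12=2.5, tau21=1.8)
                print(f"x1 = {x1:.1f}  gamma1 = {g1:.2f}  gamma2 = {g2:.2f}")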

  5. When fast logic meets slow belief: Evidence for a parallel-processing model of belief bias.

    Science.gov (United States)

    Trippas, Dries; Thompson, Valerie A; Handley, Simon J

    2017-05-01

    Two experiments pitted the default-interventionist account of belief bias against a parallel-processing model. According to the former, belief bias occurs because a fast, belief-based evaluation of the conclusion pre-empts a working-memory demanding logical analysis. In contrast, according to the latter both belief-based and logic-based responding occur in parallel. Participants were given deductive reasoning problems of variable complexity and instructed to decide whether the conclusion was valid on half the trials or to decide whether the conclusion was believable on the other half. When belief and logic conflict, the default-interventionist view predicts that it should take less time to respond on the basis of belief than logic, and that the believability of a conclusion should interfere with judgments of validity, but not the reverse. The parallel-processing view predicts that beliefs should interfere with logic judgments only if the processing required to evaluate the logical structure exceeds that required to evaluate the knowledge necessary to make a belief-based judgment, and vice versa otherwise. Consistent with this latter view, for the simplest reasoning problems (modus ponens), judgments of belief resulted in lower accuracy than judgments of validity, and believability interfered more with judgments of validity than the converse. For problems of moderate complexity (modus tollens and single-model syllogisms), the interference was symmetrical, in that validity interfered with belief judgments to the same degree that believability interfered with validity judgments. For the most complex (three-term multiple-model syllogisms), conclusion believability interfered more with judgments of validity than vice versa, in spite of the significant interference from conclusion validity on judgments of belief.

  6. On the Control of Automatic Processes: A Parallel Distributed Processing Model of the Stroop Effect

    Science.gov (United States)

    1988-06-16

    [This record contains only OCR residue from the report front matter and reference list. Recoverable information: Technical Report AIP-40, "On the Control of Automatic Processes: A Parallel Distributed Processing Model of the Stroop Effect", with reference fragments on the Stroop phenomenon and its use in the study of perceptual, cognitive, and response processes (Memory and Cognition, 1973) and on attention and automaticity in Stroop and priming tasks (Logan, Cognitive Psychology, 1980).]

  7. Investigation of the charging characteristics of micrometer sized droplets based on parallel plate capacitor model.

    Science.gov (United States)

    Zhang, Yanzhen; Liu, Yonghong; Wang, Xiaolong; Shen, Yang; Ji, Renjie; Cai, Baoping

    2013-02-05

    The charging characteristics of micrometer-sized aqueous droplets have attracted increasing attention due to the development of microfluidics technology, since the electrophoretic motion of a charged droplet can be used as a droplet actuation method. This work proposed a novel method of investigating the charging characteristics of micrometer-sized aqueous droplets based on a parallel plate capacitor model. With this method, the effects of the electric field strength, electrolyte concentration, and ion species on the charging characteristics of the aqueous droplets were investigated. Experimental results showed that the charging characteristics of micrometer-sized droplets can be investigated by this method.

  8. LMFAO! Humor as a Response to Fear: Decomposing Fear Control within the Extended Parallel Process Model

    Science.gov (United States)

    Abril, Eulàlia P.; Szczypka, Glen; Emery, Sherry L.

    2017-01-01

    This study seeks to analyze fear control responses to the 2012 Tips from Former Smokers campaign using the Extended Parallel Process Model (EPPM). The goal is to examine the occurrence of ancillary fear control responses, like humor. In order to explore individuals’ responses in an organic setting, we use Twitter data—tweets—collected via the Firehose. Content analysis of relevant fear control tweets (N = 14,281) validated the existence of boomerang responses within the EPPM: denial, defensive avoidance, and reactance. More importantly, results showed that humor tweets were not only a significant occurrence but constituted the majority of fear control responses. PMID:29527092

  9. Optimizing ion channel models using a parallel genetic algorithm on graphical processors.

    Science.gov (United States)

    Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon

    2012-01-01

    We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm for work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80 node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.
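
    A sketch of the kind of parallelization described: the cost of an entire genetic-algorithm population of channel-kinetics parameter sets is evaluated at once by integrating a gating ODE in vectorized form, so the time loop stays serial while the population dimension is data-parallel (NumPy here stands in for the CUDA kernels). The two-parameter gating model, voltage step, and protocol are illustrative assumptions:

        import numpy as np

        def simulate_population(params, v_step=-20.0, dt=0.01, t_end=20.0):
            """Integrate dm/dt = (m_inf(V) - m) / tau for every parameter set at once.

            params : (P, 2) array; column 0 = half-activation voltage, column 1 = tau (ms).
            Returns the gating-variable traces for the whole population, shape (T, P).
            """
            v_half, tau = params[:, 0], params[:, 1]
            m_inf = 1.0 / (1.0 + np.exp(-(v_step - v_half) / 5.0))  # Boltzmann activation
            steps = int(t_end / dt)
            m = np.zeros_like(v_half)
            trace = np.empty((steps, len(v_half)))
            for k in range(steps):                                  # time loop is serial,
                m += dt * (m_inf - m) / tau                         # population dim is parallel
                trace[k] = m
            return trace

        def population_cost(params, target):
            trace = simulate_population(params)
            return np.mean((trace - target[:, None]) ** 2, axis=0)  # one cost per individual

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            true = np.array([[-35.0, 2.0]])                         # "measured" channel
            target = simulate_population(true)[:, 0]
            pop = np.column_stack([rng.uniform(-60, -10, 256), rng.uniform(0.5, 8.0, 256)])
            costs = population_cost(pop, target)
            print("best candidate:", pop[np.argmin(costs)], "cost:", costs.min())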

  10. Modeling and Control of Adjustable Articulated Parallel Compliant Actuation Arrangements in Articulated Robots

    Directory of Open Access Journals (Sweden)

    Wesley Roozing

    2018-02-01

    Full Text Available Considerable advances in robotic actuation technology have been made in recent years. Particularly the use of compliance has increased, both as series elastic elements as well as in parallel to the main actuation drives. This work focuses on the model formulation and control of compliant actuation structures including multiple branches and multiarticulation, and significantly contributes by proposing an elegant modular formulation that describes the energy exchange between the compliant elements and articulated multibody robot dynamics using the concept of power flows, and a single matrix that describes the entire actuation topology. Using this formulation, a novel gradient descent based control law is derived for torque control of compliant actuation structures with adjustable pretension, with proven convexity for arbitrary actuation topologies. Extensions toward handling unidirectionality of elastic elements and joint motion compensation are also presented. A simulation study is performed on a 3-DoF leg model, where series-elastic main drives are augmented by parallel elastic tendons with adjustable pretension. Two actuation topologies are considered, one of which includes a biarticulated tendon. The data demonstrate the effectiveness of the proposed modeling and control methods. Furthermore, it is shown that the biarticulated topology provides significant benefits over the monoarticulated arrangement.

  11. Analysis and Modeling of Parallel Photovoltaic Systems under Partial Shading Conditions

    Science.gov (United States)

    Buddala, Santhoshi Snigdha

    Since the industrial revolution, fossil fuels such as petroleum, coal, oil, and natural gas, along with other non-renewable energy sources, have been used as the primary energy source. The consumption of fossil fuels releases various harmful gases into the atmosphere as byproducts, which are hazardous and tend to deplete the protective layers of the atmosphere and disturb the overall environmental balance. Fossil fuels are also finite energy resources, and their rapid depletion has prompted the need to investigate alternative sources of energy, called renewable energy. One such promising source of renewable energy is solar/photovoltaic energy. This work focuses on investigating a new solar array architecture with solar cells connected in a parallel configuration. Retaining the structural simplicity of the parallel architecture, a theoretical small-signal model of the solar cell is proposed and used to analyze the variations in the module parameters when subjected to partial shading conditions. Simulations were run in SPICE to validate the model implemented in Matlab. The voltage limitations of the proposed architecture are addressed by adopting a simple dc-dc boost converter, and the performance of the architecture is evaluated in terms of efficiency by comparing it with traditional architectures. SPICE simulations are used to compare the architectures and identify the best one in terms of power conversion efficiency under partial shading conditions.
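
    A minimal sketch of why the parallel architecture behaves differently under partial shading, assuming an ideal single-diode cell model with illustrative parameters: all cells share one terminal voltage and their currents simply add, so shaded cells reduce the summed current rather than limiting a series string:

        import numpy as np

        Q, K, T = 1.602e-19, 1.381e-23, 298.15
        VT = K * T / Q                     # thermal voltage, ~25.7 mV

        def cell_current(v, irradiance, isc=3.0, i0=1e-9, n=1.0):
            """Ideal single-diode cell: photocurrent scales with irradiance (fraction of 1 sun)."""
            iph = isc * irradiance
            return iph - i0 * (np.exp(v / (n * VT)) - 1.0)

        def parallel_array_power(v, irradiances):
            """Cells in parallel share the same voltage; their currents add."""
            i_total = sum(cell_current(v, g) for g in irradiances)
            return v * i_total

        if __name__ == "__main__":
            shading = [1.0, 1.0, 0.4, 0.4, 0.1]            # partially shaded 5-cell array
            voltages = np.linspace(0.0, 0.6, 200)
            power = np.array([parallel_array_power(v, shading) for v in voltages])
            k = int(np.argmax(power))
            print(f"max power ~ {power[k]:.2f} W at {voltages[k]:.3f} V")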

  12. SiGN-SSM: open source parallel software for estimating gene networks with state space models.

    Science.gov (United States)

    Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru

    2011-04-15

    SiGN-SSM is an open-source gene network estimation software able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), that is a statistical dynamic model suitable for analyzing short time and/or replicated time series gene expression profiles. SiGN-SSM implements a novel parameter constraint effective to stabilize the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in a practical time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting the differentially regulated genes from time series expression profiles. SiGN-SSM is distributed under GNU Affero General Public Licence (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. The pre-compiled binaries for some architectures are available in addition to the source code. The pre-installed binaries are also available on the Human Genome Center supercomputer system. The online manual and the supplementary information of SiGN-SSM is available on our web site. tamada@ims.u-tokyo.ac.jp.

  13. Linkage of PRA models. Phase 1, Results

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.; Knudsen, J.K.; Kelly, D.L.

    1995-12-01

    The goal of the Phase I work of the "Linkage of PRA Models" project was to postulate methods of providing guidance for US Nuclear Regulatory Commission (NRC) personnel on the selection and usage of probabilistic risk assessment (PRA) models that are best suited to the analysis they are performing. In particular, methods and associated features are provided for (a) the selection of an appropriate PRA model for a particular analysis, (b) complementary evaluation tools for the analysis, and (c) a PRA model cross-referencing method. As part of this work, three areas related to linking analyses to PRA models were investigated: (a) the PRA models that are currently available, (b) the various types of analyses that are performed within the NRC, and (c) the difficulty in trying to provide a "generic" classification scheme to group plants based upon a particular plant attribute.

  14. Parameters Design for a Parallel Hybrid Electric Bus Using Regenerative Brake Model

    Directory of Open Access Journals (Sweden)

    Zilin Ma

    2014-01-01

    Full Text Available A design methodology that uses a regenerative brake model is introduced to determine the major system parameters of a parallel hybrid electric bus drive train. The hybrid system parameters mainly include the power rating of the internal combustion engine (ICE), the gear ratios of the transmission, the power rating and maximum torque of the motor, and the power and capacity of the battery. The regenerative brake model is built into the vehicle model to estimate the regenerative energy under real road conditions. The design target is to ensure that the vehicle meets the specified performance, such as speed and acceleration, and at the same time operates the ICE within an expected speed range. Several pairs of parameters are selected from the result analysis, and the fuel saving result in the road test shows that a 25% reduction in fuel consumption is achieved.

  15. High-performance phase-field modeling

    KAUST Repository

    Vignal, Philippe

    2015-04-27

    Many processes in engineering and sciences involve the evolution of interfaces. Among the mathematical frameworks developed to model these types of problems, the phase-field method has emerged as a possible solution. Phase-fields nonetheless lead to complex nonlinear, high-order partial differential equations, whose solution poses mathematical and computational challenges. Guaranteeing some of the physical properties of the equations has led to the development of efficient algorithms and discretizations capable of recovering said properties by construction [2, 5]. This work builds on these ideas, and proposes novel discretization strategies that guarantee numerical energy dissipation for both conserved and non-conserved phase-field models. The temporal discretization is based on a novel method which relies on Taylor series and ensures strong energy stability. It is second-order accurate, and can also be rendered linear to speed up the solution process [4]. The spatial discretization relies on Isogeometric Analysis, a finite element method that possesses the k-refinement technology and enables the generation of high-order, high-continuity basis functions. These basis functions are well suited to handle the high-order operators present in phase-field models. Two-dimensional and three-dimensional results of the Allen-Cahn, Cahn-Hilliard, Swift-Hohenberg and phase-field crystal equation will be presented, which corroborate the theoretical findings, and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.
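
    For orientation, the sketch below shows one of the simplest phase-field time discretizations, a 1-D semi-implicit Allen-Cahn step (Laplacian implicit via FFT, nonlinearity explicit) with the discrete free energy printed so its decay can be checked; it is not the Taylor-series scheme or the isogeometric discretization of the abstract, and energy decay here relies on the small time step rather than being guaranteed by construction:

        import numpy as np

        def allen_cahn_1d(n=256, eps=0.05, dt=1e-3, steps=200):
            """Semi-implicit Allen-Cahn u_t = eps^2 u_xx - (u^3 - u) on a periodic 1-D grid."""
            x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
            dx = x[1] - x[0]
            u = 0.1 * np.cos(3 * x) + 0.05 * np.random.default_rng(0).standard_normal(n)
            k = np.fft.fftfreq(n, d=dx) * 2.0 * np.pi       # spectral wavenumbers
            denom = 1.0 + dt * eps ** 2 * k ** 2            # implicit diffusion factor
            energies = []
            for _ in range(steps):
                rhs = np.fft.fft(u - dt * (u ** 3 - u))     # explicit nonlinear term
                u = np.real(np.fft.ifft(rhs / denom))
                grad = (np.roll(u, -1) - u) / dx
                energy = np.sum(0.5 * eps ** 2 * grad ** 2 + 0.25 * (u ** 2 - 1.0) ** 2) * dx
                energies.append(energy)
            return u, energies

        if __name__ == "__main__":
            _, energies = allen_cahn_1d()
            print("free energy after step 1:   ", round(float(energies[0]), 4))
            print("free energy after last step:", round(float(energies[-1]), 4))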

  16. A parallel process growth model of avoidant personality disorder symptoms and personality traits.

    Science.gov (United States)

    Wright, Aidan G C; Pincus, Aaron L; Lenzenweger, Mark F

    2013-07-01

    Avoidant personality disorder (AVPD), like other personality disorders, has historically been construed as a highly stable disorder. However, results from a number of longitudinal studies have found that the symptoms of AVPD demonstrate marked change over time. Little is known about which other psychological systems are related to this change. Although cross-sectional research suggests a strong relationship between AVPD and personality traits, no work has examined the relationship of their change trajectories. The current study sought to establish the longitudinal relationship between AVPD and basic personality traits using parallel process growth curve modeling. Parallel process growth curve modeling was applied to the trajectories of AVPD and basic personality traits from the Longitudinal Study of Personality Disorders (Lenzenweger, M. F., 2006, The longitudinal study of personality disorders: History, design considerations, and initial findings. Journal of Personality Disorders, 20, 645-670. doi:10.1521/pedi.2006.20.6.645), a naturalistic, prospective, multiwave, longitudinal study of personality disorder, temperament, and normal personality. The focus of these analyses is on the relationship between the rates of change in both AVPD symptoms and basic personality traits. AVPD symptom trajectories demonstrated significant negative relationships with the trajectories of interpersonal dominance and affiliation, and a significant positive relationship to rates of change in neuroticism. These results provide some of the first compelling evidence that trajectories of change in PD symptoms and personality traits are linked. These results have important implications for the ways in which temporal stability is conceptualized in AVPD specifically, and PD in general.

  17. A Parallel Process Growth Model of Avoidant Personality Disorder Symptoms and Personality Traits

    Science.gov (United States)

    Wright, Aidan G. C.; Pincus, Aaron L.; Lenzenweger, Mark F.

    2012-01-01

    Background Avoidant personality disorder (AVPD), like other personality disorders, has historically been construed as a highly stable disorder. However, results from a number of longitudinal studies have found that the symptoms of AVPD demonstrate marked change over time. Little is known about which other psychological systems are related to this change. Although cross-sectional research suggests a strong relationship between AVPD and personality traits, no work has examined the relationship of their change trajectories. The current study sought to establish the longitudinal relationship between AVPD and basic personality traits using parallel process growth curve modeling. Methods Parallel process growth curve modeling was applied to the trajectories of AVPD and basic personality traits from the Longitudinal Study of Personality Disorders (Lenzenweger, 2006), a naturalistic, prospective, multiwave, longitudinal study of personality disorder, temperament, and normal personality. The focus of these analyses is on the relationship between the rates of change in both AVPD symptoms and basic personality traits. Results AVPD symptom trajectories demonstrated significant negative relationships with the trajectories of interpersonal dominance and affiliation, and a significant positive relationship to rates of change in neuroticism. Conclusions These results provide some of the first compelling evidence that trajectories of change in PD symptoms and personality traits are linked. These results have important implications for the ways in which temporal stability is conceptualized in AVPD specifically, and PD in general. PMID:22506627

  18. Parallel Execution of Functional Mock-up Units in Buildings Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ozmen, Ozgur [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); New, Joshua Ryan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-06-30

    A Functional Mock-up Interface (FMI) defines a standardized interface to be used in computer simulations to develop complex cyber-physical systems. FMI implementation by a software modeling tool enables the creation of a simulation model that can be interconnected, or the creation of a software library called a Functional Mock-up Unit (FMU). This report describes an FMU wrapper implementation that imports FMUs into a C++ environment and uses an Euler solver that executes FMUs in parallel using Open Multi-Processing (OpenMP). The purpose of this report is to elucidate the runtime performance of the solver when a multi-component system is imported as a single FMU (for the whole system) or as multiple FMUs (for different groups of components as sub-systems). This performance comparison is conducted using two test cases: (1) a simple, multi-tank problem; and (2) a more realistic use case based on the Modelica Buildings Library. In both test cases, the performance gains are promising when each FMU consists of a large number of states and state events that are wrapped in a single FMU. Load balancing is demonstrated to be a critical factor in speeding up parallel execution of multiple FMUs.

  19. Pangolin v1.0, a conservative 2-D advection model towards large-scale parallel calculation

    Directory of Open Access Journals (Sweden)

    A. Praga

    2015-02-01

    Full Text Available To exploit the possibilities of parallel computers, we designed a large-scale bidimensional atmospheric advection model named Pangolin. As the basis for a future chemistry-transport model, a finite-volume approach for advection was chosen to ensure mass preservation and to ease parallelization. To overcome the pole restriction on time steps for a regular latitude–longitude grid, Pangolin uses a quasi-area-preserving reduced latitude–longitude grid. The features of the regular grid are exploited to reduce the memory footprint and enable effective parallel performances. In addition, a custom domain decomposition algorithm is presented. To assess the validity of the advection scheme, its results are compared with state-of-the-art models on algebraic test cases. Finally, parallel performances are shown in terms of strong scaling and confirm the efficient scalability up to a few hundred cores.
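
    A minimal sketch of the conservative finite-volume building block, a 1-D upwind advection step with periodic boundaries, showing why total tracer mass is preserved to round-off; the reduced latitude-longitude grid, the domain decomposition, and all other Pangolin specifics are not modeled here:

        import numpy as np

        def upwind_advection(q, u, dx, dt, steps):
            """Conservative 1-D finite-volume advection for a constant velocity u > 0."""
            for _ in range(steps):
                flux = u * q                            # upwind flux at cell interfaces
                q = q - dt / dx * (flux - np.roll(flux, 1))
            return q

        if __name__ == "__main__":
            n = 200
            x = np.linspace(0.0, 1.0, n, endpoint=False)
            q = np.exp(-200.0 * (x - 0.3) ** 2)          # initial tracer blob
            dx = 1.0 / n
            u, dt = 1.0, 0.4 * dx                        # CFL number 0.4
            q_final = upwind_advection(q.copy(), u, dx, dt, steps=500)
            print("initial mass:", q.sum() * dx)
            print("final mass:  ", q_final.sum() * dx)   # equal to round-off: conservative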

  20. Analysis of Parallel Algorithms on SMP Node and Cluster of Workstations Using Parallel Programming Models with New Tile-based Method for Large Biological Datasets

    Science.gov (United States)

    Shrimankar, D. D.; Sathe, S. R.

    2016-01-01

    Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today’s supercomputers often consist of clusters of SMP nodes. Programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. OpenMP programs, however, cannot scale beyond a single SMP node, whereas programs written in MPI can span multiple SMP nodes, at the cost of internode communication. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that communication overhead is incurred even in OpenMP loop execution and increases with the number of participating cores. We also demonstrate a communication model to approximate the overhead from communication in OpenMP loops. Our results are striking and hold across a large variety of input data files. We have developed our own load balancing and cache optimization techniques for the message passing model. Our experimental results show that these techniques give optimum performance of our parallel algorithm for various input parameters, such as sequence size and tile size, on a wide variety of multicore architectures. PMID:27932868
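
    A serial sketch of the tile idea for pairwise alignment: the Needleman-Wunsch score matrix is filled tile by tile, and tiles on the same anti-diagonal are mutually independent, which is where an OpenMP or MPI implementation would run them concurrently. The scoring values and tile size are illustrative, and this version only shows the dependency structure, not the paper's load-balancing or cache optimizations:

        import numpy as np

        MATCH, MISMATCH, GAP = 2, -1, -2

        def nw_fill_tiled(a, b, tile=64):
            """Fill the Needleman-Wunsch DP matrix tile by tile.

            Tiles sharing an anti-diagonal have no mutual dependency, so a parallel
            implementation could assign them to different threads or ranks.
            """
            n, m = len(a), len(b)
            H = np.zeros((n + 1, m + 1), dtype=np.int32)
            H[:, 0] = GAP * np.arange(n + 1)
            H[0, :] = GAP * np.arange(m + 1)
            for ti in range(1, n + 1, tile):
                for tj in range(1, m + 1, tile):
                    for i in range(ti, min(ti + tile, n + 1)):
                        for j in range(tj, min(tj + tile, m + 1)):
                            s = MATCH if a[i - 1] == b[j - 1] else MISMATCH
                            H[i, j] = max(H[i - 1, j - 1] + s,
                                          H[i - 1, j] + GAP,
                                          H[i, j - 1] + GAP)
            return H

        if __name__ == "__main__":
            H = nw_fill_tiled("GATTACA", "GCATGCU", tile=4)
            print("optimal global alignment score:", H[-1, -1])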

  1. Analysis of Parallel Algorithms on SMP Node and Cluster of Workstations Using Parallel Programming Models with New Tile-based Method for Large Biological Datasets.

    Science.gov (United States)

    Shrimankar, D D; Sathe, S R

    2016-01-01

    Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today's supercomputers often consist of clusters of SMP nodes. Programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. OpenMP programs, however, cannot scale beyond a single SMP node, whereas programs written in MPI can span multiple SMP nodes, at the cost of internode communication. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that communication overhead is incurred even in OpenMP loop execution and increases with the number of participating cores. We also demonstrate a communication model to approximate the overhead from communication in OpenMP loops. Our results are striking and hold across a large variety of input data files. We have developed our own load balancing and cache optimization techniques for the message passing model. Our experimental results show that these techniques give optimum performance of our parallel algorithm for various input parameters, such as sequence size and tile size, on a wide variety of multicore architectures.

  2. Development of a new dynamic turbulent model, applications to two-dimensional and plane parallel flows

    International Nuclear Information System (INIS)

    Laval, Jean Philippe

    1999-01-01

    We developed a turbulent model based on an asymptotic expansion of the Navier-Stokes equations under the hypothesis of non-local interactions at small scales. This model provides expressions for the turbulent Reynolds sub-grid stresses via estimates of the sub-grid velocities, rather than velocity correlations as is usually done. The model involves the coupling of two dynamical equations: one for the resolved scales of motion, which depends upon the Reynolds stresses generated by the sub-grid motions, and one for the sub-grid scales of motion, which can be used to compute the sub-grid Reynolds stresses. The non-locality of interaction at sub-grid scales allows their evolution to be modeled with a linear inhomogeneous equation in which the forcing occurs via the energy cascade from resolved to sub-grid scales. This model was solved using a decomposition of sub-grid scales on Gabor modes and implemented numerically in 2D with periodic boundary conditions. A particle-in-cell (PIC) method was used to compute the sub-grid scales. The results were compared with results of direct simulations for several typical flows. The model was also applied to plane parallel flows. An analytical study of the equations yields a description of mean velocity profiles in agreement with experimental results and theoretical results based on the symmetries of the Navier-Stokes equation. Possible applications and improvements of the model are discussed in the conclusion. (author) [fr

  3. Influence of heterogeneity on rock strength and stiffness using discrete element method and parallel bond model

    Directory of Open Access Journals (Sweden)

    Spyridon Liakas

    2017-08-01

    Full Text Available The particulate discrete element method (DEM) can be employed to capture the response of rock, provided that appropriate bonding models are used to cement the particles to each other. Simulations of laboratory tests are important to establish the extent to which those models can capture realistic rock behaviors. Hitherto the focus in such comparison studies has either been on homogeneous specimens or on the use of two-dimensional (2D) models. In situ rock formations are often heterogeneous, thus exploring the ability of this type of model to capture heterogeneous material behavior is important to facilitate its use in design analysis. In situ stress states are basically three-dimensional (3D), and therefore it is important to develop 3D models for this purpose. This paper revisits an earlier experimental study on heterogeneous specimens, in which the relative proportions of weaker material (siltstone) and stronger, harder material (sandstone) were varied in a controlled manner. Using a 3D DEM model with the parallel bond model, virtual heterogeneous specimens were created. The overall responses in terms of variations in strength and stiffness with different percentages of weaker material (siltstone) were shown to agree with the experimental observations. There was also a good qualitative agreement in the failure patterns observed in the experiments and the simulations, suggesting that the DEM data enabled analysis of the initiation of localizations and micro fractures in the specimens.

  4. Control of automatic processes: A parallel distributed-processing model of the stroop effect. Technical report

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J.D.; Dunbar, K.; McClelland, J.L.

    1988-06-16

    A growing body of evidence suggests that traditional views of automaticity are in need of revision. For example, automaticity has often been treated as an all-or-none phenomenon, and traditional theories have held that automatic processes are independent of attention. Yet recent empirical data suggest that automatic processes are continuous, and furthermore are subject to attentional control. In this paper we present a model of attention which addresses these issues. Using a parallel distributed processing framework we propose that the attributes of automaticity depend upon the strength of a process and that strength increases with training. Using the Stroop effect as an example, we show how automatic processes are continuous and emerge gradually with practice. Specifically, we present a computational model of the Stroop task which simulates the time course of processing as well as the effects of learning.

  5. Thermodynamics of the one-dimensional parallel Kawasaki model: Exact solution and mean-field approximations

    Science.gov (United States)

    Pazzona, Federico G.; Demontis, Pierfranco; Suffritti, Giuseppe B.

    2014-08-01

    The adsorption isotherm for the recently proposed parallel Kawasaki (PK) lattice-gas model [Phys. Rev. E 88, 062144 (2013), 10.1103/PhysRevE.88.062144] is calculated exactly in one dimension. To do so, a third-order difference equation for the grand-canonical partition function is derived and solved analytically. In the present version of the PK model, the attraction and repulsion effects between two neighboring particles and between a particle and a neighboring empty site are ruled, respectively, by the dimensionless parameters ϕ and θ. We discuss the inflections induced in the isotherms by situations of high repulsion, the role played by finite lattice sizes in the emergence of substeps, and the adequacy of the two most widely used mean-field approximations in lattice gases, namely, the Bragg-Williams and the Bethe-Peierls approximations.
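
    For readers unfamiliar with the exact 1-D machinery, the sketch below computes an adsorption isotherm for the ordinary nearest-neighbour lattice gas from the largest eigenvalue of a 2 x 2 transfer matrix; it is not the parallel Kawasaki model itself (whose parameters ϕ and θ enter differently), but it illustrates how a coverage-versus-chemical-potential curve follows from a small matrix problem:

        import numpy as np

        def coverage(beta_mu, beta_J):
            """Equilibrium coverage of a 1-D nearest-neighbour lattice gas.

            Transfer matrix T[n, n'] = exp(beta*J*n*n' + beta*mu*(n + n')/2);
            coverage = d ln(lambda_max) / d(beta*mu) in the thermodynamic limit.
            """
            def log_lambda_max(bm):
                t = np.array([[1.0, np.exp(bm / 2.0)],
                              [np.exp(bm / 2.0), np.exp(beta_J + bm)]])
                return np.log(np.max(np.linalg.eigvalsh(t)))
            h = 1e-5
            return (log_lambda_max(beta_mu + h) - log_lambda_max(beta_mu - h)) / (2.0 * h)

        if __name__ == "__main__":
            for bm in (-4.0, -2.0, 0.0, 2.0, 4.0):
                print(f"beta*mu = {bm:+.1f}  coverage (attractive J>0) = {coverage(bm, 1.0):.3f}"
                      f"   coverage (repulsive J<0) = {coverage(bm, -1.0):.3f}")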

  6. Counseling techniques to address male communication characteristics: an application of the extended parallel process model.

    Science.gov (United States)

    Patel, Puja S; Barnett, Candace W

    2011-08-01

    Evidence shows that the male ideology has a significant impact on men's health status. Men who adhere to the traditional masculine ideology may find messages regarding healthcare to be threatening. Pharmacists can use the Extended Parallel Process (EPP) Model to counsel men in a manner that reduces their feelings of fear and danger regarding their health while controlling feelings of vulnerability and susceptibility. When counseling men using the EPP Model, pharmacists are encouraged to use universal statements and open-ended questions to create patient awareness of the disease state and foster discussion. Furthermore, since men engage in limited nonverbal communication, pharmacists need to be direct and ask for feedback to gauge the patient's understanding of the counseling.

  7. A self-calibrating robot based upon a virtual machine model of parallel kinematics

    DEFF Research Database (Denmark)

    Pedersen, David Bue; Eiríksson, Eyþór Rúnar; Hansen, Hans Nørgaard

    2016-01-01

    A delta-type parallel kinematics system for Additive Manufacturing has been created, which through a probing system can recognise its geometrical deviations from nominal and compensate for these in the driving inverse kinematic model of the machine. Novelty is that this model is derived from a virtual machine of the kinematics system, built on principles from geometrical metrology. Relevant mathematically non-trivial deviations to the ideal machine are identified and decomposed into elemental deviations. From these deviations, a routine is added to a physical machine tool, which allows it to recognise its own geometry by probing the vertical offset from tool point to the machine table, at positions in the horizontal plane. After automatic calibration the positioning error of the machine tool was reduced from an initial error after its assembly of ±170 µm to a calibrated error of ±3 µm...

  8. RCS estimation of linear and planar dipole phased arrays approximate model

    CERN Document Server

    Singh, Hema; Jha, Rakesh Mohan

    2016-01-01

    In this book, the RCS of a parallel-fed linear and planar dipole array is derived using an approximate method. The signal propagation within the phased array system determines the radar cross section (RCS) of the phased array. The reflection and transmission coefficients for a signal at different levels of the phased-in scattering array system depend on the impedance mismatch and the design parameters. Moreover, the mutual coupling effect between the antenna elements is an important factor. A phased array system comprises radiating elements followed by phase shifters, couplers, and terminating load impedance. These components lead to respective impedances towards the incoming signal that travels through them before reaching the receive port of the array system. In this book, the RCS is approximated in terms of the array factor, neglecting the phase terms. The mutual coupling effect is taken into account. The dependence of the RCS pattern on the design parameters is analyzed. The approximate model is established as a...
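
    A minimal sketch of the array-factor quantity that the approximation is built on: the complex array factor of a uniformly fed N-element linear array, whose normalized magnitude gives the relative pattern from which the RCS estimate is formed. Element patterns, mutual coupling, and feed-network reflections are ignored here, and the element count, spacing, and scan angle are illustrative:

        import numpy as np

        def array_factor(theta, n_elem=16, spacing_wl=0.5, scan_deg=0.0):
            """Array factor of an N-element uniformly fed linear array.

            theta: observation angle from broadside (rad); spacing in wavelengths.
            """
            k_d = 2.0 * np.pi * spacing_wl
            phase = k_d * (np.sin(theta) - np.sin(np.radians(scan_deg)))
            n = np.arange(n_elem)
            return np.sum(np.exp(1j * np.outer(phase, n)), axis=1)

        if __name__ == "__main__":
            theta = np.radians(np.linspace(-90.0, 90.0, 721))
            af = np.abs(array_factor(theta))
            pattern_db = 20.0 * np.log10(af / af.max() + 1e-12)
            for deg in (0.0, 10.0, 30.0):
                idx = int(np.argmin(np.abs(np.degrees(theta) - deg)))
                print(f"relative pattern level at {deg:4.1f} deg: {pattern_db[idx]:7.2f} dB")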

  9. Problem gambling symptomatology and alcohol misuse among adolescents: A parallel-process latent growth curve model.

    Science.gov (United States)

    Mutti-Packer, Seema; Hodgins, David C; El-Guebaly, Nady; Casey, David M; Currie, Shawn R; Williams, Robert J; Smith, Garry J; Schopflocher, Don P

    2017-06-01

    The objective of the current study was to examine the possible temporal associations between alcohol misuse and problem gambling symptomatology from adolescence through to young adulthood. Parallel-process latent growth curve modeling was used to examine the trajectories of alcohol misuse and symptoms of problem gambling over time. Data were from a sample of adolescents recruited for the Leisure, Lifestyle, and Lifecycle Project in Alberta, Canada (n = 436), which included 4 assessments over 5 years. There was an average decline in problem gambling symptoms followed by an accelerating upward trend as the sample reached the legal age to gamble. There was significant variation in the rate of change in problem gambling symptoms over time; not all respondents followed the same trajectory. There was an average increase in alcohol misuse over time, with significant variability in baseline levels of use and the rate of change over time. The unconditional parallel process model indicated that higher baseline levels of alcohol misuse were associated with higher baseline levels of problem gambling symptoms. In addition, higher baseline levels of alcohol misuse were associated with steeper declines in problem gambling symptoms over time. However, these between-process correlations did not retain significance when covariates were added to the model, indicating that one behavior was not a risk factor for the other. The lack of mutual influence in the problem gambling symptomatology and alcohol misuse processes suggest that there are common risk factors underlying these two behaviors, supporting the notion of a syndrome model of addiction. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Network-based Parallel Retrieval Onboard Computing Environment for Sensor Systems Deployed on NASA Unmanned Aircraft Systems, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Remote Sensing Solutions proposes to develop the Network-based Parallel Retrieval Onboard Computing Environment for Sensor Systems (nPROCESS) for deployment on...

  11. Mathematical modeling of phase interaction taking place in materials processing

    International Nuclear Information System (INIS)

    Zinigrad, M.

    2002-01-01

    The quality of metallic products depends on their composition and structure, which are determined by various physico-chemical and technological factors. One of the most important and complicated problems in modern industry is to obtain materials with the required composition, structure, and properties. For example, deep refining is a difficult task by itself, but the problem of obtaining a material with a required specific level of refining is much more complicated. Solving this problem empirically would take a lot of time and expense, and the result would be far from the optimal solution. The most effective way to solve such problems is to carry out research in two parallel directions: (1) comprehensive analysis of the thermodynamics, kinetics, and mechanisms of the processes taking place at the solid-liquid-gaseous phase interface, and construction of a clear, well-founded physico-chemical model of these processes that takes their interaction into account; and (2) development of mathematical models of the specific technologies, which allow technological processes to be optimized and the required product properties to be obtained by choosing the optimal composition of the raw materials. We apply both approaches, and have developed unique methods of mathematical modeling of phase interaction at high temperatures. These methods allow us to build models that take into account the thermodynamic characteristics of the processes; the influence of the initial composition and temperature on the equilibrium state of the reactions; the kinetics of homogeneous and heterogeneous processes; and the influence of temperature, composition, gas flow velocities, and hydrodynamic and thermal factors on the rates of the chemical and diffusion processes. The models can be applied to the optimization of various metallurgical processes in the manufacturing of steels and non-ferrous alloys, as well as in materials refining, alloying with special additives

  12. Modeling and Grid impedance Variation Analysis of Parallel Connected Grid Connected Inverter based on Impedance Based Harmonic Analysis

    DEFF Research Database (Denmark)

    Kwon, JunBum; Wang, Xiongfei; Bak, Claus Leth

    2014-01-01

    This paper addresses the harmonic compensation error problem that arises with parallel connected inverters under the same grid interface conditions by means of impedance-based analysis and modeling. Unlike the single grid-connected inverter case, it is found that multiple parallel connected inverters and grid...... impedance can influence each other if they each have a harmonic compensation function. The analysis method proposed in this paper is based on the relationship between the overall output impedance and input impedance of the parallel connected inverters, where a controller gain design method, which can...

  13. Element-by-element parallel spectral-element methods for 3-D teleseismic wave modeling

    KAUST Repository

    Liu, Shaolin

    2017-09-28

    The development of an efficient algorithm for teleseismic wave field modeling is valuable for calculating the gradients of the misfit function (termed misfit gradients) or Fréchet derivatives when the teleseismic waveform is used for adjoint tomography. Here, we introduce an element-by-element parallel spectral-element method (EBE-SEM) for the efficient modeling of teleseismic wave field propagation in a reduced geology model. Under the plane-wave assumption, the frequency-wavenumber (FK) technique is implemented to compute the boundary wave field used to construct the boundary condition of the teleseismic wave incidence. To reduce the memory required for the storage of the boundary wave field for the incidence boundary condition, a strategy is introduced to efficiently store the boundary wave field on the model boundary. The perfectly matched layers absorbing boundary condition (PML ABC) is formulated using the EBE-SEM to absorb the scattered wave field from the model interior. The misfit gradient can easily be constructed in each time step during the calculation of the adjoint wave field. Three synthetic examples demonstrate the validity of the EBE-SEM for use in teleseismic wave field modeling and the misfit gradient calculation.

  14. Cpl6: The New Extensible, High-Performance Parallel Coupler forthe Community Climate System Model

    Energy Technology Data Exchange (ETDEWEB)

    Craig, Anthony P.; Jacob, Robert L.; Kauffman, Brain; Bettge,Tom; Larson, Jay; Ong, Everest; Ding, Chris; He, Yun

    2005-03-24

    Coupled climate models are large, multiphysics applications designed to simulate the Earth's climate and predict the response of the climate to any changes in the forcing or boundary conditions. The Community Climate System Model (CCSM) is a widely used state-of-the-art climate model that has released several versions to the climate community over the past ten years. Like many climate models, CCSM employs a coupler, a functional unit that coordinates the exchange of data between parts of the climate system such as the atmosphere and ocean. This paper describes the new coupler, cpl6, contained in the latest version of CCSM, CCSM3. Cpl6 introduces distributed-memory parallelism to the coupler, a class library for important coupler functions, and a standardized interface for component models. Cpl6 is implemented entirely in Fortran90 and uses the Model Coupling Toolkit as the base for most of its classes. Cpl6 gives improved performance over previous versions and scales well on multiple platforms.

  15. Parallel Simulation of Population Balance Model-Based Particulate Processes Using Multicore CPUs and GPUs

    Directory of Open Access Journals (Sweden)

    Anuj V. Prakash

    2013-01-01

    Full Text Available Computer-aided modeling and simulation are a crucial step in developing, integrating, and optimizing unit operations and subsequently the entire processes in the chemical/pharmaceutical industry. This study details two methods of reducing the computational time to solve complex process models, namely for the population balance model, which, given the source terms, can be very computationally intensive. Population balance models are also widely used to describe the time evolutions and distributions of many particulate processes, and their efficient and quick simulation would be very beneficial. The first method illustrates utilization of MATLAB's Parallel Computing Toolbox (PCT) and the second method makes use of another toolbox, JACKET, to speed up computations on the CPU and GPU, respectively. Results indicate significant reduction in computational time for the same accuracy using multicore CPUs. Many-core platforms such as GPUs are also promising towards computational time reduction for larger problems despite the limitations of lower clock speed and device memory. This lends credence to the use of high-fidelity models (in place of reduced-order models) for control and optimization of particulate processes.

  16. 3D multiphysics modeling of superconducting cavities with a massively parallel simulation suite

    International Nuclear Information System (INIS)

    Kononenko, Oleksiy; Adolphsen, Chris; Li, Zenghai; Ng, Cho-Kuen; Rivetta, Claudio

    2017-01-01

    Radiofrequency cavities based on superconducting technology are widely used in particle accelerators for various applications. The cavities usually have high quality factors and hence narrow bandwidths, so the field stability is sensitive to detuning from the Lorentz force and external loads, including vibrations and helium pressure variations. If not properly controlled, the detuning can result in a serious performance degradation of a superconducting accelerator, so an understanding of the underlying detuning mechanisms can be very helpful. Recent advances in the simulation suite ace3p have enabled realistic multiphysics characterization of such complex accelerator systems on supercomputers. In this paper, we present the new capabilities in ace3p for large-scale 3D multiphysics modeling of superconducting cavities, in particular, a parallel eigensolver for determining mechanical resonances, a parallel harmonic response solver to calculate the response of a cavity to external vibrations, and a numerical procedure to decompose mechanical loads, such as from the Lorentz force or piezoactuators, into the corresponding mechanical modes. These capabilities have been used to do an extensive rf-mechanical analysis of dressed TESLA-type superconducting cavities. Furthermore, the simulation results and their implications for the operational stability of the Linac Coherent Light Source-II are discussed.

  17. Coupled Model of channels in parallel and neutron kinetics in two dimensions

    International Nuclear Information System (INIS)

    Cecenas F, M.; Campos G, R.M.; Valle G, E. del

    2004-01-01

    In this work an arrangement of thermohydraulic channels representing the four quadrants of a BWR reactor core is presented. The channels are coupled to a two-dimensional neutronics model that generates the radial power profile of the reactor. Although the neutronics model is two-dimensional, it is supplemented with additional axial information by considering the axial power profiles of each thermohydraulic channel. The steady state is obtained by imposing, as a boundary condition, the same pressure drop across all channels. This condition is satisfied by iterating on the coolant flow in each channel until the pressure drops in all channels are equal. This steady state is then perturbed by modifying the effective cross-section values corresponding to an assembly. The parallel calculation of the neutronics and thermohydraulics is carried out with PVM (Parallel Virtual Machine) by means of a master-slave scheme on a local network of computers. (Author)
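    The flow-split iteration described above (a common pressure drop imposed across all parallel channels, enforced by iterating on the coolant flow of each channel) can be sketched as follows. The quadratic loss law and the loss coefficients are hypothetical stand-ins, not the correlations used by the authors.

```python
import numpy as np

def split_flow(k, w_total, tol=1e-10, max_iter=200):
    """Iterate channel flows until every channel sees the same pressure drop.

    k       : hypothetical channel loss coefficients (dp = k * w**2)
    w_total : total coolant flow to be distributed among the channels
    """
    w = np.full(len(k), w_total / len(k))       # start from an equal split
    for _ in range(max_iter):
        dp = k * w**2                           # per-channel pressure drop
        dp_target = dp.mean()                   # common drop the channels must reach
        if (dp.max() - dp.min()) / dp_target < tol:
            break
        w = w * np.sqrt(dp_target / dp)         # push each flow toward the target drop
        w *= w_total / w.sum()                  # conserve the total flow
    return w, k * w**2

flows, drops = split_flow(np.array([1.0, 1.3, 0.8, 1.1]), w_total=4.0)
print(flows, drops)
```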

  18. Parallel Factor-Based Model for Two-Dimensional Direction Estimation

    Directory of Open Access Journals (Sweden)

    Nizar Tayem

    2017-01-01

    Full Text Available Two-dimensional (2D) Direction-of-Arrival (DOA) estimation for elevation and azimuth angles assuming noncoherent, mixture of coherent and noncoherent, and coherent sources using extended three parallel uniform linear arrays (ULAs) is proposed. Most of the existing schemes have drawbacks in estimating 2D DOA for multiple narrowband incident sources as follows: use of a large number of snapshots, estimation failure problem for elevation and azimuth angles in the range of typical mobile communication, and estimation of coherent sources. Moreover, the DOA estimation for multiple sources requires complex pair-matching methods. The algorithm proposed in this paper is based on a first-order data matrix to overcome these problems. The main contributions of the proposed method are as follows: (1) it avoids the estimation failure problem using a new antenna configuration and estimates elevation and azimuth angles for coherent sources; (2) it reduces the estimation complexity by constructing Toeplitz data matrices, which are based on a single or few snapshots; (3) it derives a parallel factor (PARAFAC) model to avoid pair-matching problems between multiple sources. Simulation results demonstrate the effectiveness of the proposed algorithm.

  19. The Medial Temporal Lobe – Conduit of Parallel Connectivity: A model for Attention, Memory, and Perception.

    Directory of Open Access Journals (Sweden)

    Brian B. Mozaffari

    2014-11-01

    Full Text Available Based on the notion that the brain is equipped with a hierarchical organization, which embodies environmental contingencies across many time scales, this paper suggests that the medial temporal lobe (MTL) – located deep in the hierarchy – serves as a bridge connecting supra- to infra-MTL levels. Bridging the upper and lower regions of the hierarchy provides a parallel architecture that optimizes information flow between upper and lower regions to aid attention, encoding, and processing of quick, complex visual phenomena. Bypassing intermediate hierarchy levels, information conveyed through the MTL ‘bridge’ allows upper levels to make educated predictions about the prevailing context and accordingly select lower representations to increase the efficiency of predictive coding throughout the hierarchy. This selection or activation/deactivation is associated with endogenous attention. In the event that these ‘bridge’ predictions are inaccurate, this architecture enables the rapid encoding of novel contingencies. A review of hierarchical models in relation to memory is provided along with a new theory, Medial-temporal-lobe Conduit for Parallel Connectivity (MCPC). In this scheme, consolidation is considered as a secondary process, occurring after an MTL-bridged connection, which eventually allows upper and lower levels to access each other directly. With repeated reactivations, as contingencies become consolidated, less MTL activity is predicted. Finally, MTL bridging may aid the processing of transient but structured perceptual events, by allowing communication between upper and lower levels without calling on intermediate levels of representation.

  20. 3D multiphysics modeling of superconducting cavities with a massively parallel simulation suite

    Science.gov (United States)

    Kononenko, Oleksiy; Adolphsen, Chris; Li, Zenghai; Ng, Cho-Kuen; Rivetta, Claudio

    2017-10-01

    Radiofrequency cavities based on superconducting technology are widely used in particle accelerators for various applications. The cavities usually have high quality factors and hence narrow bandwidths, so the field stability is sensitive to detuning from the Lorentz force and external loads, including vibrations and helium pressure variations. If not properly controlled, the detuning can result in a serious performance degradation of a superconducting accelerator, so an understanding of the underlying detuning mechanisms can be very helpful. Recent advances in the simulation suite ace3p have enabled realistic multiphysics characterization of such complex accelerator systems on supercomputers. In this paper, we present the new capabilities in ace3p for large-scale 3D multiphysics modeling of superconducting cavities, in particular, a parallel eigensolver for determining mechanical resonances, a parallel harmonic response solver to calculate the response of a cavity to external vibrations, and a numerical procedure to decompose mechanical loads, such as from the Lorentz force or piezoactuators, into the corresponding mechanical modes. These capabilities have been used to do an extensive rf-mechanical analysis of dressed TESLA-type superconducting cavities. The simulation results and their implications for the operational stability of the Linac Coherent Light Source-II are discussed.

  1. 3D multiphysics modeling of superconducting cavities with a massively parallel simulation suite

    Directory of Open Access Journals (Sweden)

    Oleksiy Kononenko

    2017-10-01

    Full Text Available Radiofrequency cavities based on superconducting technology are widely used in particle accelerators for various applications. The cavities usually have high quality factors and hence narrow bandwidths, so the field stability is sensitive to detuning from the Lorentz force and external loads, including vibrations and helium pressure variations. If not properly controlled, the detuning can result in a serious performance degradation of a superconducting accelerator, so an understanding of the underlying detuning mechanisms can be very helpful. Recent advances in the simulation suite ace3p have enabled realistic multiphysics characterization of such complex accelerator systems on supercomputers. In this paper, we present the new capabilities in ace3p for large-scale 3D multiphysics modeling of superconducting cavities, in particular, a parallel eigensolver for determining mechanical resonances, a parallel harmonic response solver to calculate the response of a cavity to external vibrations, and a numerical procedure to decompose mechanical loads, such as from the Lorentz force or piezoactuators, into the corresponding mechanical modes. These capabilities have been used to do an extensive rf-mechanical analysis of dressed TESLA-type superconducting cavities. The simulation results and their implications for the operational stability of the Linac Coherent Light Source-II are discussed.

  2. Advanced parallel computing for the coupled PCR-GLOBWB-MODFLOW model

    Science.gov (United States)

    Verkaik, Jarno; Schmitz, Oliver; Sutanudjaja, Edwin

    2017-04-01

    PCR-GLOBWB (https://github.com/UU-Hydro/PCR-GLOBWB_model) is a large-scale hydrological model intended for global to regional studies and developed at the Department of Physical Geography, Utrecht University (Netherlands). The latest version of the model can simulate terrestrial hydrological and water resource fluxes and storages with a typical spatial resolution of 5 arc-minutes (less than 10 km) at the global extent. One of the recent features in the model development is the inclusion of a global 2-layer MODFLOW model simulating groundwater lateral flow. This advanced feature enables us to simulate and assess the groundwater head dynamics at the global extent, including in regions with declining groundwater heads. Unfortunately, the current coupled PCR-GLOBWB-MODFLOW requires long run times, mainly attributed to the current inefficient parallel computing and coupling algorithm. In this work, we aim to improve it by setting up a favorable river-basin partitioning that reduces I/O communication and optimizes load balance between PCR-GLOBWB and MODFLOW. We also aim to replace the MODFLOW-2000 in the current coupled model with MODFLOW-USG. This will allow us to use the new Parallel Krylov Solver (PKS) that can run with Message Passing Interface (MPI) and can be easily combined with Open Multi-Processing (OpenMP). The latest scaling test carried out on the Cartesius Dutch National supercomputer shows that the usage of MODFLOW-USG and the new PKS solver can result in significant MODFLOW calculation speedups (up to 45). The encouraging result of this work opens a possibility for running the model with a more detailed setup and at higher resolution. As MODFLOW-USG supports both structured and unstructured grids, this includes an opportunity to have a next generation of the PCR-GLOBWB-MODFLOW model that has flexibility in grid design for its groundwater flow simulation (e.g. grid design can be used to focus along rivers and around wells, to discretize individual

  3. Modelling Detailed-Chemistry Effects on Turbulent Diffusion Flames using a Parallel Solution-Adaptive Scheme

    Science.gov (United States)

    Jha, Pradeep Kumar

    Capturing the effects of detailed chemistry on turbulent combustion processes is a central challenge faced by the numerical combustion community. However, the inherent complexity and non-linear nature of both turbulence and chemistry require that combustion models rely heavily on engineering approximations to remain computationally tractable. This thesis proposes a computationally efficient algorithm for modelling detailed-chemistry effects in turbulent diffusion flames and numerically predicting the associated flame properties. The cornerstone of this combustion modelling tool is the use of a parallel Adaptive Mesh Refinement (AMR) scheme with the recently proposed Flame Prolongation of Intrinsic low-dimensional manifold (FPI) tabulated-chemistry approach for modelling complex chemistry. The effect of turbulence on the mean chemistry is incorporated using a Presumed Conditional Moment (PCM) approach based on a beta-probability density function (PDF). The two-equation k-ω turbulence model is used for modelling the effects of the unresolved turbulence on the mean flow field. The finite-rate chemistry of methane-air combustion is represented here using the GRI-Mech 3.0 scheme. This detailed mechanism is used to build the FPI tables. A state-of-the-art numerical scheme based on a parallel block-based solution-adaptive algorithm has been developed to solve the Favre-averaged Navier-Stokes (FANS) and other governing partial-differential equations using a second-order accurate, fully-coupled finite-volume formulation on body-fitted, multi-block, quadrilateral/hexahedral meshes for two-dimensional and three-dimensional flow geometries, respectively. A standard fourth-order Runge-Kutta time-marching scheme is used for time-accurate temporal discretization. Numerical predictions of three different diffusion flame configurations are considered in the present work: a laminar counter-flow flame; a laminar co-flow diffusion flame; and a Sydney bluff-body turbulent reacting flow.

  4. Solution-phase parallel synthesis of aryloxyimino amides via a novel multicomponent reaction among aromatic (Z)-chlorooximes, isocyanides, and electron-deficient phenols.

    Science.gov (United States)

    Mercalli, Valentina; Giustiniano, Mariateresa; Del Grosso, Erika; Varese, Monica; Cassese, Hilde; Massarotti, Alberto; Novellino, Ettore; Tron, Gian Cesare

    2014-11-10

    A library of 41 aryloxyimino amides was prepared via solution-phase parallel synthesis by extending the multicomponent reaction of (Z)-chlorooximes and isocyanides to the use of electron-deficient phenols. The resulting aryloxyimino amide derivatives can be used as intermediates for the synthesis of benzo[d]isoxazole-3-carboxamides, dramatically reducing the number of synthetic steps required by other methods reported in the literature.

  5. Stem thrust prediction model for W-K-M double wedge parallel expanding gate valves

    International Nuclear Information System (INIS)

    Eldiwany, B.; Alvarez, P.D.; Wolfe, K.

    1996-01-01

    An analytical model for determining the required valve stem thrust during opening and closing strokes of W-K-M parallel expanding gate valves was developed as part of the EPRI Motor-Operated Valve Performance Prediction Methodology (EPRI MOV PPM) Program. The model was validated against measured stem thrust data obtained from in-situ testing of three W-K-M valves. Model predictions show favorable, bounding agreement with the measured data for valves with Stellite 6 hardfacing on the disks and seat rings for water flow in the preferred flow direction (gate downstream). The maximum required thrust to open and to close the valve (excluding wedging and unwedging forces) occurs at a slightly open position and not at the fully closed position. In the nonpreferred flow direction, the model shows that premature wedging can occur during ΔP closure strokes even when the coefficients of friction at different sliding surfaces are within the typical range. This paper summarizes the model description and comparison against test data

  6. A parallel process growth mixture model of conduct problems and substance use with risky sexual behavior.

    Science.gov (United States)

    Wu, Johnny; Witkiewitz, Katie; McMahon, Robert J; Dodge, Kenneth A

    2010-10-01

    Conduct problems, substance use, and risky sexual behavior have been shown to coexist among adolescents, which may lead to significant health problems. The current study was designed to examine relations among these problem behaviors in a community sample of children at high risk for conduct disorder. A latent growth model of childhood conduct problems showed a decreasing trend from grades K to 5. During adolescence, four concurrent conduct problem and substance use trajectory classes were identified (high conduct problems and high substance use, increasing conduct problems and increasing substance use, minimal conduct problems and increasing substance use, and minimal conduct problems and minimal substance use) using a parallel process growth mixture model. Across all substances (tobacco, binge drinking, and marijuana use), higher levels of childhood conduct problems during kindergarten predicted a greater probability of classification into more problematic adolescent trajectory classes relative to less problematic classes. For tobacco and binge drinking models, increases in childhood conduct problems over time also predicted a greater probability of classification into more problematic classes. For all models, individuals classified into more problematic classes showed higher proportions of early sexual intercourse, infrequent condom use, receiving money for sexual services, and ever contracting an STD. Specifically, tobacco use and binge drinking during early adolescence predicted higher levels of sexual risk taking into late adolescence. Results highlight the importance of studying the conjoint relations among conduct problems, substance use, and risky sexual behavior in a unified model. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  7. Stem thrust prediction model for W-K-M double wedge parallel expanding gate valves

    Energy Technology Data Exchange (ETDEWEB)

    Eldiwany, B.; Alvarez, P.D. [Kalsi Engineering Inc., Sugar Land, TX (United States); Wolfe, K. [Electric Power Research Institute, Palo Alto, CA (United States)

    1996-12-01

    An analytical model for determining the required valve stem thrust during opening and closing strokes of W-K-M parallel expanding gate valves was developed as part of the EPRI Motor-Operated Valve Performance Prediction Methodology (EPRI MOV PPM) Program. The model was validated against measured stem thrust data obtained from in-situ testing of three W-K-M valves. Model predictions show favorable, bounding agreement with the measured data for valves with Stellite 6 hardfacing on the disks and seat rings for water flow in the preferred flow direction (gate downstream). The maximum required thrust to open and to close the valve (excluding wedging and unwedging forces) occurs at a slightly open position and not at the fully closed position. In the nonpreferred flow direction, the model shows that premature wedging can occur during ΔP closure strokes even when the coefficients of friction at different sliding surfaces are within the typical range. This paper summarizes the model description and comparison against test data.

  8. Abstract machine based execution model for computer architecture design and efficient implementation of logic programs in parallel

    Energy Technology Data Exchange (ETDEWEB)

    Hermenegildo, M.V.

    1986-01-01

    The term Logic Programming refers to a variety of computer languages and execution models based on the traditional concept of Symbolic Logic. The expressive power of these languages offers promise to be of great assistance in facing the programming challenges of present and future symbolic processing applications in artificial intelligence, knowledge-based systems, and many other areas of computing. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an Abstract Machine level, suitable for direct implementation on existing parallel systems or for the design of special purpose parallel architectures. Few assumptions are made at the source language level and, therefore, the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues, etc. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set.

  9. A Parallelized Pumpless Artificial Placenta System Significantly Prolonged Survival Time in a Preterm Lamb Model.

    Science.gov (United States)

    Miura, Yuichiro; Matsuda, Tadashi; Usuda, Haruo; Watanabe, Shimpei; Kitanishi, Ryuta; Saito, Masatoshi; Hanita, Takushi; Kobayashi, Yoshiyasu

    2016-05-01

    An artificial placenta (AP) is an arterio-venous extracorporeal life support system that is connected to the fetal circulation via the umbilical vasculature. Previously, we published an article describing a pumpless AP system with a small priming volume. We subsequently developed a parallelized system, hypothesizing that the reduced circuit resistance conveyed by this modification would enable healthy fetal survival time to be prolonged. We conducted experiments using a premature lamb model to test this hypothesis. As a result, the fetal survival period was significantly prolonged (60.4 ± 3.8 vs. 18.2 ± 3.2 h), enabling lamb fetuses to survive for a significantly longer period when compared with previous studies. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals Inc.

  10. Bentonite electrical conductivity: a model based on series–parallel transport

    KAUST Repository

    Lima, Ana T.

    2010-01-30

    Bentonite has significant applications nowadays, among them as landfill liners, in the concrete industry as a repairing material, and as drilling mud in oil well construction. The application of an electric field in such settings is under wide discussion and is the subject of many studies. However, to understand the behaviour of such an expansive and plastic material under the influence of an electric field, a perception of its electrical properties is essential. This work serves to compare existing data on such electrical behaviour with new laboratory results. Electrical conductivity is a pertinent parameter since it indicates how prone a material is to conduct electricity. In the current study, the total conductivity of a compacted porous medium was established to be dependent upon the density of the bentonite plug. Therefore, surface conductivity was addressed and a series-parallel transport model was used to quantify/predict the total conductivity of the system. © The Author(s) 2010.

  11. A new parallel DNA algorithm to solve the task scheduling problem based on inspired computational model.

    Science.gov (United States)

    Wang, Zhaocai; Ji, Zuwen; Wang, Xiaoming; Wu, Tunhua; Huang, Wei

    2017-12-01

    As a promising approach to solving computationally intractable problems, the method based on DNA computing is an emerging research area spanning mathematics, computer science and molecular biology. The task scheduling problem, a well-known NP-complete problem, assigns n jobs to m individuals and seeks the minimum execution time of the last finished individual. In this paper, we use a biologically inspired computational model and describe a new parallel algorithm to solve the task scheduling problem by basic DNA molecular operations. In turn, we design flexible-length DNA strands to represent elements of the allocation matrix, apply appropriate biological experimental operations, and obtain solutions of the task scheduling problem in the proper length range with less than O(n^2) time complexity. Copyright © 2017. Published by Elsevier B.V.
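    The DNA procedure itself cannot be reproduced in silico, but the scheduling objective that the strands encode (assign n jobs to m individuals and minimize the completion time of the last individual to finish, i.e. the makespan) can be stated compactly. The brute-force sketch below is only a reference definition of that objective, not the authors' algorithm.

```python
from itertools import product

def makespan(assignment, times, m):
    """Completion time of the last individual, given a job -> individual assignment."""
    loads = [0.0] * m
    for job, worker in enumerate(assignment):
        loads[worker] += times[job]
    return max(loads)

def best_schedule(times, m):
    """Exhaustive search over all m**n assignments (reference definition only)."""
    n = len(times)
    return min(product(range(m), repeat=n), key=lambda a: makespan(a, times, m))

jobs = [3.0, 2.0, 7.0, 4.0, 1.0]
schedule = best_schedule(jobs, m=2)
print(schedule, makespan(schedule, jobs, 2))
```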

  12. Thoracic impedance change equation deduced on the basis of parallel impedance model and Ohm's law.

    Science.gov (United States)

    Qiu-Jin, Xiao; Zhen, Wang; Ming-Xing, Kuang; Ping, Wen; Pei, Liu; Jian-Feng, Ji

    2012-02-01

    The aim of the present study is to investigate an impedance change equation suited to measurement with the impedance cardiograph (ICG). Based on a parallel impedance model and Ohm's law, an impedance change equation that differs from Nyboer's equation is deduced. It is verified with impedance cardiography experiments in 100 healthy adults. This equation shows that the thoracic impedance change (ΔZ) is directly proportional to the volume change (ΔV) of the blood vessel and to the ratio of the basic impedance to the body height (Z_0/H), while it is inversely proportional to the square of the chest circumference (C_t^2). These relations are supported by the experimental results from the ICG measurements. The equation proposed in the present paper is consistent with the actual conditions of ICG measurement.
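    Written out, the proportionalities stated in the abstract combine into a single expression of the following form; the dimensionless constant k is a placeholder introduced here for illustration, not a value given by the authors.

```latex
\Delta Z \;=\; k \,\frac{Z_{0}}{H}\,\frac{\Delta V}{C_{t}^{2}}
```

Here ΔZ is the thoracic impedance change, ΔV the blood vessel volume change, Z_0 the basic impedance, H the body height, and C_t the chest circumference.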

  13. Design of language models at various phases of Tamil speech ...

    African Journals Online (AJOL)

    This paper describes the use of language models in various phases of Tamil speech recognition system for improving its performance. In this work, the language models are applied at various levels of speech recognition such as segmentation phase, recognition phase and the syllable and word level error correction phase.

  14. Use of the extended parallel processing model to evaluate culturally relevant kernicterus messages.

    Science.gov (United States)

    Russell, Jessica C; Smith, Sandi; Novales, Wilma; Massi Lindsey, Lisa L; Hanson, Joseph

    2013-01-01

    Kernicterus is a serious but easily preventable disease in newborns that is not well-known even by some health care professionals. This study evaluated a parent guide and poster on kernicterus awareness and prevention generated by the Centers for Disease Control and Prevention. The extended parallel processing model was used as a framework for creating the interview protocol and analyzing the results. In-depth interviews were conducted with four parents and six health care personnel of different ethnicities to evaluate the materials. Content for the parent guide and poster was held constant, but photos were varied according to the ethnicity of the baby (white, African American, or Hispanic) and the language in which the interviews were conducted (English and Spanish). The parent guide was evaluated positively, but reactions to the poster were varied. The consensus was that the poster drew more attention than the pocket guide but lacked sufficient information about what jaundice is or how to treat it, while the pocket guide provided information, especially with regard to efficacy. The extended parallel processing model claims that when efficacy is equal to or higher than perceived threat, respondents should engage in recommended responses, which was the general finding from these interviews. Recommendations for improvements of the materials are presented. The focus on different ethnicities in the materials was perceived as unnecessary and potentially counter-productive. Both parents and health care professionals mentioned the lack of information regarding treatment. Providing information on the length and effectiveness of treatment for jaundice and kernicterus might increase efficacy in averting the threat in both conditions. Copyright © 2013 National Association of Pediatric Nurse Practitioners. Published by Mosby, Inc. All rights reserved.

  15. Dynamic modeling and experiment of a new type of parallel servo press considering gravity counterbalance

    Science.gov (United States)

    He, Jun; Gao, Feng; Bai, Yongjun; Wu, Shengfu

    2013-11-01

    The large-capacity servo press is traditionally realized by means of redundant actuation; however, this introduces an over-constraint problem and interference among actuators, which increase the control difficulty and the product cost. A new type of press mechanism with parallel topology is presented to develop a mechanical servo press with high stamping capacity. A dynamic model considering gravity counterbalance is proposed based on the virtual work principle, and the effect of the counterbalance cylinder on the dynamic performance of the servo press is then studied. It is found that the motor torque required to operate the press is much lower when the ratio of the counterbalance force to the weight of the ram is in the vicinity of 1.0. The stamping force of the real press prototype can reach up to 25 MN at a position 13 mm away from the bottom dead center. A typical deep-drawing process with a 1200 mm stroke at 8 strokes per minute is prescribed by means of a fifth-order polynomial. Under this process condition, the driving torques are calculated based on the above dynamic model and a torque-measuring test is also carried out on the prototype. It is shown that the trend of the calculated torque curve is consistent with the measured result and that the average error is less than 15%. The parallel mechanism is introduced into the development of a large-capacity servo press to avoid the over-constraint and interference of traditional redundant actuation, and its dynamic characteristics with gravity counterbalance are presented.

  16. Modelling, Simulation and Testing of a Reconfigurable Cable-Based Parallel Manipulator as Motion Aiding System

    Directory of Open Access Journals (Sweden)

    Gianni Castelli

    2010-01-01

    Full Text Available This paper presents results on the modelling, simulation and experimental testing of a cable-based parallel manipulator to be used as an aiding or guiding system for people with motion disabilities. There is a high level of motivation for people with a motion disability or the elderly to perform basic daily-living activities independently. Therefore, it is of great interest to design and implement safe and reliable motion assisting and guiding devices that are able to help end-users. In general, a robot for a medical application should be able to interact with a patient under safe conditions, i.e. it must not damage people or surroundings; it must be designed to guarantee high accuracy and low acceleration during operation. Furthermore, it should not be too bulky and it should exert limited wrenches during close interaction with people. It can be advisable to have a portable system which can be easily brought into and assembled in a hospital or a domestic environment. Cable-based robotic structures can fulfil those requirements because of their main characteristics that make them light and intrinsically safe. In this paper, a reconfigurable four-cable-based parallel manipulator has been proposed as a motion assisting and guiding device to help people to accomplish a number of tasks, such as aiding or guiding the motion of the upper and lower limbs or the whole body. Modelling and simulation are presented in the ADAMS environment. Moreover, experimental tests are reported as based on an available laboratory prototype.

  17. A Validated Set of MIDAS V5 Task Network Model Scenarios to Evaluate Nextgen Closely Spaced Parallel Operations Concepts

    Science.gov (United States)

    Gore, Brian Francis; Hooey, Becky Lee; Haan, Nancy; Socash, Connie; Mahlstedt, Eric; Foyle, David C.

    2013-01-01

    The Closely Spaced Parallel Operations (CSPO) scenario is a complex human performance model scenario that tested alternate operator roles and responsibilities in a series of off-nominal operations on approach and landing (see Gore, Hooey, Mahlstedt, Foyle, 2013). The model links together the procedures, equipment, crewstation, and external environment to produce predictions of operator performance in response to Next Generation system designs, like those expected in the National Airspace's NextGen concepts. The task analysis contained in the present report comes from the task analysis window in the MIDAS software. These tasks link definitions and states for equipment components and environmental features as well as operational contexts. The current task analysis culminated in 3300 tasks that included over 1000 Subject Matter Expert (SME)-vetted, re-usable procedural sets for three critical phases of flight: the Descent, Approach, and Land procedural sets (see Gore et al., 2011 for a description of the development of the tasks included in the model; Gore, Hooey, Mahlstedt, Foyle, 2013 for a description of the model and its results; Hooey, Gore, Mahlstedt, Foyle, 2013 for a description of the guidelines that were generated from the model's results; and Gore, Hooey, Foyle, 2012 for a description of the model's implementation and its settings). The rollout, after-landing checks, taxi-to-gate and arrive-at-gate networks illustrated in Figure 1 were not used in the approach and divert scenarios exercised. The other networks in Figure 1 set up appropriate context settings for the flight deck. The current report presents the model's task decomposition from the top, highest level and decomposes it to finer-grained levels. The first task completed by the model is to set all of the initial settings for the scenario runs included in the model (network 75 in Figure 1). This initialization process also resets the CAD graphic files contained within MIDAS, as well as the embedded

  18. Using Hadoop MapReduce for Parallel Genetic Algorithms: A Comparison of the Global, Grid and Island Models.

    Science.gov (United States)

    Ferrucci, Filomena; Salza, Pasquale; Sarro, Federica

    2017-06-29

    The need to improve the scalability of Genetic Algorithms (GAs) has motivated the research on Parallel Genetic Algorithms (PGAs), and different technologies and approaches have been used. Hadoop MapReduce represents one of the most mature technologies to develop parallel algorithms. Based on the fact that parallel algorithms introduce communication overhead, the aim of the present work is to understand if, and possibly when, the parallel GAs solutions using Hadoop MapReduce show better performance than sequential versions in terms of execution time. Moreover, we are interested in understanding which PGA model can be most effective among the global, grid, and island models. We empirically assessed the performance of these three parallel models with respect to a sequential GA on a software engineering problem, evaluating the execution time and the achieved speedup. We also analysed the behaviour of the parallel models in relation to the overhead produced by the use of Hadoop MapReduce and the GAs' computational effort, which gives a more machine-independent measure of these algorithms. We exploited three problem instances to differentiate the computation load and three cluster configurations based on 2, 4, and 8 parallel nodes. Moreover, we estimated the costs of the execution of the experimentation on a potential cloud infrastructure, based on the pricing of the major commercial cloud providers. The empirical study revealed that the use of PGA based on the island model outperforms the other parallel models and the sequential GA for all the considered instances and clusters. Using 2, 4, and 8 nodes, the island model achieves an average speedup over the three datasets of 1.8, 3.4, and 7.0 times, respectively. Hadoop MapReduce has a set of different constraints that need to be considered during the design and the implementation of parallel algorithms. The overhead of data store (i.e., HDFS) accesses, communication, and latency requires solutions that reduce data store
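    The island model referred to above can be pictured independently of Hadoop: several subpopulations evolve in parallel and periodically exchange their best individuals. The sketch below uses Python's multiprocessing as a stand-in for the MapReduce machinery and a toy one-max fitness function; it illustrates the island topology only, not the authors' implementation.

```python
import random
from multiprocessing import Pool

GENES, POP, GENS, MIGRATE_EVERY = 32, 30, 40, 10

def fitness(ind):                                    # toy "one-max" objective
    return sum(ind)

def evolve(island, generations):
    """Run a few generations of tournament selection plus mutation on one island."""
    random.seed()                                    # reseed inside the worker process
    for _ in range(generations):
        new = []
        for _ in range(len(island)):
            a, b = random.sample(island, 2)          # binary tournament
            child = list(max(a, b, key=fitness))
            child[random.randrange(GENES)] ^= 1      # bit-flip mutation
            new.append(child)
        island = new
    return island

def island_model(n_islands=4):
    islands = [[[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
               for _ in range(n_islands)]
    with Pool(n_islands) as pool:
        for _ in range(GENS // MIGRATE_EVERY):
            # "map" step: islands evolve independently, in parallel
            islands = pool.starmap(evolve, [(isl, MIGRATE_EVERY) for isl in islands])
            # migration step: each island receives its neighbour's best individual
            best = [max(isl, key=fitness) for isl in islands]
            for i, isl in enumerate(islands):
                isl[random.randrange(POP)] = best[(i - 1) % n_islands]
    return max((ind for isl in islands for ind in isl), key=fitness)

if __name__ == "__main__":
    print(fitness(island_model()))
```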

  19. Mathematical model of a parallel plate ammonia electrolyzer for combined wastewater remediation and hydrogen production.

    Science.gov (United States)

    Estejab, Ali; Daramola, Damilola A; Botte, Gerardine G

    2015-06-15

    A mathematical model was developed for the simulation of a parallel plate ammonia electrolyzer to convert ammonia in wastewater to nitrogen and hydrogen under basic conditions. The model consists of fundamental transport equations, the ammonia oxidation kinetics at the anode, and the hydrogen evolution kinetics at the cathode of the electrochemical reactor. The model shows both qualitative and quantitative agreement with experimental measurements at ammonia concentrations found within wastewater (200-1200 mg L⁻¹). The optimum electrolyzer performance is dependent on both the applied voltage and the inlet concentrations. Maximum conversion rates of ammonia to nitrogen of 0.569 and 0.766 mg L⁻¹ min⁻¹ are achieved at low (0.01 M NH4Cl and 0.1 M KOH) and high (0.07 M NH4Cl and 0.15 M KOH) inlet concentrations, respectively. At high and low concentrations, an initial increase in the cell voltage will cause an increase in the system response - current density generated and ammonia converted. These system responses will approach a peak value before they start to decrease due to surface blockage and/or depletion of solvated species at the electrode surface. Furthermore, the model predicts that by increasing the reactant and electrolyte concentrations at a certain voltage, the peak current density will plateau, showing an asymptotic response. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Modeling cardiovascular hemodynamics using the lattice Boltzmann method on massively parallel supercomputers

    Science.gov (United States)

    Randles, Amanda Elizabeth

    the modeling of fluids in vessels with smaller diameters and a method for introducing the deformational forces exerted on the arterial flows from the movement of the heart by borrowing concepts from cosmodynamics are presented. These additional forces have a great impact on the endothelial shear stress. Third, the fluid model is extended to not only recover Navier-Stokes hydrodynamics, but also a wider range of Knudsen numbers, which is especially important in micro- and nano-scale flows. The tradeoffs of many optimization methods, such as the use of deep halo-level ghost cells that, alongside hybrid programming models, reduce the impact of such higher-order models and enable efficient modeling of extreme regimes of computational fluid dynamics, are discussed. Fourth, the extension of these models to other research questions like clogging in microfluidic devices and determining the severity of coarctation of the aorta is presented. Through this work, a validation of these methods by taking real patient data and the measured pressure value before the narrowing of the aorta and predicting the pressure drop across the coarctation is shown. Comparison with the measured pressure drop in vivo highlights the accuracy and potential impact of such patient-specific simulations. Finally, a method to enable the simulation of longer trajectories in time by discretizing both spatially and temporally is presented. In this method, a serial coarse iterator is used to initialize data at discrete time steps for a fine model that runs in parallel. This coarse solver is based on a larger time step and typically a coarser discretization in space. Iterative refinement enables the compute-intensive fine iterator to be modeled with temporal parallelization. The algorithm consists of a series of predictor-corrector iterations completing when the results have converged within a certain tolerance. Combined, these developments allow large fluid models to be simulated for longer time durations
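    The temporal parallelization described at the end of the abstract follows a predictor-corrector pattern of the parareal type: a cheap coarse propagator produces provisional states at the time-slice boundaries, the expensive fine propagator then refines every slice (the part that can run in parallel), and a coarse sweep corrects the boundary states until the iteration converges. A minimal sketch with toy exponential-decay integrators standing in for the coarse and fine solvers, written with serial logic and only the parallelizable loop marked as such, is:

```python
import numpy as np

def coarse(u, dt):                 # cheap propagator: one explicit Euler step of du/dt = -u
    return u + dt * (-u)

def fine(u, dt, substeps=100):     # expensive propagator: many small steps of the same ODE
    h = dt / substeps
    for _ in range(substeps):
        u = u + h * (-u)
    return u

def parareal(u0, t_end, slices=8, iters=5, tol=1e-12):
    dt = t_end / slices
    U = np.empty(slices + 1)
    U[0] = u0
    for n in range(slices):                       # serial coarse prediction
        U[n + 1] = coarse(U[n], dt)
    for _ in range(iters):
        F = np.array([fine(U[n], dt) for n in range(slices)])   # parallelizable in n
        U_new = np.empty_like(U)
        U_new[0] = u0
        for n in range(slices):                   # serial correction sweep
            U_new[n + 1] = coarse(U_new[n], dt) + F[n] - coarse(U[n], dt)
        converged = np.max(np.abs(U_new - U)) < tol
        U = U_new
        if converged:
            break
    return U

print(parareal(u0=1.0, t_end=2.0))
```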

  1. Determining workspace parameters for a new type of 6RSS parallel manipulator based on structural and geometric models

    Directory of Open Access Journals (Sweden)

    Milica Lucian

    2017-01-01

    Full Text Available Workspace geometric modelling of a new type of 6RSS parallel manipulator is described below. First, the research undertaken in this area by other authors is highlighted, and then a definition of this type of mechanism is provided. The structural model of the 6RSS manipulator is briefly described. The inverse geometric model and translation subspace methods are used in order to determine the dimensions that define the workspace volume of the parallel manipulator. The reachable workspace is defined as a subset of the whole workspace in relation to the positions achieved by the characteristic point.

  2. Modeling of arylamide helix mimetics in the p53 peptide binding site of hDM2 suggests parallel and anti-parallel conformations are both stable.

    Directory of Open Access Journals (Sweden)

    Jonathan C Fuller

    Full Text Available The design of novel α-helix mimetic inhibitors of protein-protein interactions is of interest to pharmaceuticals and chemical genetics researchers as these inhibitors provide a chemical scaffold presenting side chains in the same geometry as an α-helix. This conformational arrangement allows the design of high affinity inhibitors mimicking known peptide sequences binding specific protein substrates. We show that GAFF and AutoDock potentials do not properly capture the conformational preferences of α-helix mimetics based on arylamide oligomers and identify alternate parameters matching solution NMR data and suitable for molecular dynamics simulation of arylamide compounds. Results from both docking and molecular dynamics simulations are consistent with the arylamides binding in the p53 peptide binding pocket. Simulations of arylamides in the p53 binding pocket of hDM2 are consistent with binding, exhibiting similar structural dynamics in the pocket as simulations of known hDM2 binders Nutlin-2 and a benzodiazepinedione compound. Arylamide conformations converge towards the same region of the binding pocket on the 20 ns time scale, and most, though not all dihedrals in the binding pocket are well sampled on this timescale. We show that there are two putative classes of binding modes for arylamide compounds supported equally by the modeling evidence. In the first, the arylamide compound lies parallel to the observed p53 helix. In the second class, not previously identified or proposed, the arylamide compound lies anti-parallel to the p53 helix.

  3. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms for a variety of architectures, such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where an algorithm can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the

  4. PCTO-SIM: Multiple-point geostatistical modeling using parallel conditional texture optimization

    Science.gov (United States)

    Pourfard, Mohammadreza; Abdollahifard, Mohammad J.; Faez, Karim; Motamedi, Sayed Ahmad; Hosseinian, Tahmineh

    2017-05-01

    Multiple-point Geostatistics is a well-known general statistical framework by which complex geological phenomena have been modeled efficiently. Pixel-based and patch-based are its two major categories of methods. In this paper, the optimization-based category is used, which has a dual concept in texture synthesis known as texture optimization. Our extended version of texture optimization uses the energy concept to model geological phenomena. While honoring the hard data points, the minimization of our proposed cost function forces simulation grid pixels to be as similar as possible to the training images. Our algorithm has a self-enrichment capability and creates a richer training database from a sparser one by mixing the information of all surrounding patches of the simulation nodes. Therefore, it preserves pattern continuity in both continuous and categorical variables very well. Each of its realizations also shows a fuzzy result similar to the expected result of multiple realizations of other statistical models. While the main core of most previous Multiple-point Geostatistics methods is sequential, the parallel main core of our algorithm enables it to use the GPU efficiently to reduce the CPU time. A new validation method for MPS has also been proposed in this paper.

  5. Parallel-Distributed Model Deformation in the Fingertips for Stable Grasping and Object Manipulation

    Directory of Open Access Journals (Sweden)

    R. García-Rodríguez

    2012-01-01

    Full Text Available The study of the human grip has inspired robotics over the past decades and has resulted in performance improvements of robotic hands. However, current robotic hands do not have enough dexterity to execute complex tasks. Recognizing this fact, soft fingertips with a hemispherical shape and their deformation models have received renewed attention from roboticists. A high-friction contact to prevent slipping and the rolling contribution between the object and the fingers are some characteristics of soft fingertips which are useful for improving grasping stability. In this paper, the parallel distributed deformation model is used to present the dynamical model of soft-tip fingers with n degrees of freedom. Based on the joint angular positions of the fingers, a control scheme that fuses stable grasping and object manipulation into a single control signal is proposed. The force-closure conditions are defined to guarantee stable grasping, and the boundedness of the closed-loop signals is proved. Furthermore, the convergence of the contact force to its desired value is guaranteed without any information about the radius of the fingertip. Simulation results are provided to visualize the stable grasping and the object manipulation, avoiding the gravity effect.

  6. Performance modeling and analysis of parallel Gaussian elimination on multi-core computers

    Directory of Open Access Journals (Sweden)

    Fadi N. Sibai

    2014-01-01

    Full Text Available Gaussian elimination is used in many applications and in particular in the solution of systems of linear equations. This paper presents mathematical performance models and analysis of four parallel Gaussian elimination methods (precisely, the Original method and the new Meet in the Middle (MiM) algorithms and their variants with SIMD vectorization) on multi-core systems. Analytical performance models of the four methods are formulated and presented, followed by evaluations of these models with modern multi-core systems' operation latencies. Our results reveal that the four methods generally exhibit good performance scaling with increasing matrix size and number of cores. SIMD vectorization only makes a large difference in performance for low numbers of cores. For a large matrix size (n ⩾ 16 K), the performance difference between the MiM and Original methods falls from 16× with four cores to 4× with 16 K cores. The efficiencies of all four methods are low with 1 K cores or more, stressing a major problem of multi-core systems where the network-on-chip and memory latencies are too high in relation to basic arithmetic operations. Thus Gaussian elimination can greatly benefit from the resources of multi-core systems, but higher performance gains can be achieved if multi-core systems can be designed with lower memory operation, synchronization, and interconnect communication latencies, requirements of utmost importance and challenge in the exascale computing age.
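    An analytical performance model of this kind is simply a closed-form estimate of run time as a function of problem size and core count, which can then be evaluated for different operation latencies. The sketch below uses a generic "arithmetic shared by p cores plus a per-column synchronization cost" form with made-up latency constants; it only illustrates how such models are evaluated and is not the authors' model of the Original or MiM methods.

```python
def elimination_time(n, p, t_flop=1e-9, t_sync=1e-5):
    """Toy analytical model of parallel Gaussian elimination:
    O(n^3) arithmetic divided among p cores, plus an assumed
    synchronization/communication cost paid once per eliminated column."""
    compute = (2.0 / 3.0) * n**3 * t_flop / p
    communicate = n * t_sync * p**0.5      # assumed cost growing with core count
    return compute + communicate

def speedup(n, p, **kw):
    return elimination_time(n, 1, **kw) / elimination_time(n, p, **kw)

for cores in (4, 64, 1024, 16384):
    print(cores, round(speedup(16384, cores), 1))
```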

  7. PFLOTRAN User Manual: A Massively Parallel Reactive Flow and Transport Model for Describing Surface and Subsurface Processes

    Energy Technology Data Exchange (ETDEWEB)

    Lichtner, Peter C. [OFM Research, Redmond, WA (United States); Hammond, Glenn E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lu, Chuan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Karra, Satish [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bisht, Gautam [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Andre, Benjamin [National Center for Atmospheric Research, Boulder, CO (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mills, Richard [Intel Corporation, Portland, OR (United States); Univ. of Tennessee, Knoxville, TN (United States); Kumar, Jitendra [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-20

    PFLOTRAN solves a system of generally nonlinear partial differential equations describing multi-phase, multicomponent and multiscale reactive flow and transport in porous materials. The code is designed to run on massively parallel computing architectures as well as workstations and laptops (e.g. Hammond et al., 2011). Parallelization is achieved through domain decomposition using the PETSc (Portable Extensible Toolkit for Scientific Computation) libraries for the parallelization framework (Balay et al., 1997). PFLOTRAN has been developed from the ground up for parallel scalability and has been run on up to 2^18 processor cores with problem sizes up to 2 billion degrees of freedom. Written in object-oriented Fortran 90, the code requires the latest compilers compatible with Fortran 2003. At the time of this writing this requires gcc 4.7.x, Intel 12.1.x and PGI compilers. As a requirement of running problems with a large number of degrees of freedom, PFLOTRAN allows reading input data that is too large to fit into the memory allotted to a single processor core. The current limitation to the problem size PFLOTRAN can handle is the limitation of the HDF5 file format used for parallel IO to 32-bit integers. Noting that 2^32 = 4,294,967,296, this gives an estimate of the maximum problem size that can currently be run with PFLOTRAN. Hopefully this limitation will be remedied in the near future.

  8. SF-FDTD analysis of a predictive physical model for parallel aligned liquid crystal devices

    Science.gov (United States)

    Márquez, Andrés.; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Alvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto

    2017-08-01

    Recently we demonstrated a novel and simplified model enabling calculation of the voltage-dependent retardance provided by parallel aligned liquid crystal on silicon (PA-LCoS) devices for a very wide range of incidence angles and any wavelength in the visible. To our knowledge it represents the most simplified approach still showing predictive capability. Deeper insight into the physics behind the simplified model is necessary to understand whether the parameters in the model are physically meaningful. Since the PA-LCoS is a black box for which we do not have information about the physical parameters of the device, we cannot perform this kind of analysis using the experimental retardance measurements. In this work we develop realistic simulations for the non-linear tilt of the liquid crystal director across the thickness of the liquid crystal layer in PA devices. We consider these profiles to have a sine-like shape, which is a good approximation for typical ranges of applied voltage in commercial PA-LCoS microdisplays. For these simulations we develop a rigorous method based on the split-field finite difference time domain (SF-FDTD) technique which provides realistic retardance values. These values are used as the experimental measurements to which the simplified model is fitted. From this analysis we learn that the simplified model is very robust, providing unambiguous solutions when fitting its parameters. We also learn that two of the parameters in the model are physically meaningful, providing a useful reverse-engineering approach, with predictive capability, to probe into internal characteristics of the PA-LCoS device.
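    For reference, one common way to write a sine-like director tilt profile across the cell and the retardance it produces is shown below. This parameterization is an assumption made here for illustration; the exact profile and notation used in the paper are not reproduced.

```latex
\theta(z,V) = \theta_{\max}(V)\,\sin\!\left(\frac{\pi z}{d}\right), \qquad
\Gamma(V) = \frac{2\pi}{\lambda}\int_{0}^{d}\left[\,n_{\mathrm{eff}}\big(\theta(z,V)\big) - n_{o}\,\right]\,\mathrm{d}z
```

where d is the liquid crystal layer thickness, θ_max(V) the voltage-dependent maximum tilt, n_o the ordinary index, and n_eff(θ) the tilt-dependent effective extraordinary index.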

  9. A New LCL-Filter With In-Series Parallel Resonant Circuit for Single-Phase Grid-Tied Inverter

    DEFF Research Database (Denmark)

    Wu, Weimin; Sun, Yunjie; Lin, Zhe

    2014-01-01

    are investigated for the conventional LCL-filter-based system. Based on this, a modified LCL-filter topology using an extra parallel LrCr resonant circuit is proposed to reduce the total inductance value, without increasing the capacitive reactive power. The validity is verified through the experiments on a 500-W...

  10. Three-dimensional electromagnetic modeling and inversion on massively parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Newman, G.A.; Alumbaugh, D.L. [Sandia National Labs., Albuquerque, NM (United States). Geophysics Dept.

    1996-03-01

    This report has demonstrated techniques that can be used to construct solutions to the 3-D electromagnetic inverse problem using full wave equation modeling. To this point great progress has been made in developing an inverse solution using the method of conjugate gradients, which employs a 3-D finite difference solver to construct model sensitivities and predicted data. The forward modeling code has been developed to incorporate absorbing boundary conditions for high frequency solutions (radar), as well as complex electrical properties, including electrical conductivity, dielectric permittivity and magnetic permeability. In addition, both forward and inverse codes have been ported to a massively parallel computer architecture which allows for more realistic solutions than can be achieved with serial machines. While the inversion code has been demonstrated on field data collected at the Richmond field site, techniques for appraising the quality of the reconstructions still need to be developed. Here it is suggested that, rather than employing direct matrix inversion to construct the model covariance matrix, which would be impossible because of the size of the problem, one can linearize about the 3-D model achieved in the inverse and use Monte-Carlo simulations to construct it. Using these appraisal and construction tools, it is now necessary to demonstrate 3-D inversion for a variety of EM data sets that span the frequency range from induction sounding to radar: below 100 kHz to 100 MHz. Appraised 3-D images of the earth's electrical properties can provide researchers opportunities to infer the flow paths, flow rates and perhaps the chemistry of fluids in geologic media. It also offers a means to study the frequency-dependent behavior of the properties in situ. This is of significant relevance to the Department of Energy, being paramount to the characterization and monitoring of environmental waste sites and to oil and gas exploration.

  11. Implementation of a Message Passing Interface into a Cloud-Resolving Model for Massively Parallel Computing

    Science.gov (United States)

    Juang, Hann-Ming Henry; Tao, Wei-Kuo; Zeng, Xi-Ping; Shie, Chung-Lin; Simpson, Joanne; Lang, Steve

    2004-01-01

    The capability for massively parallel programming (MPP) using a message passing interface (MPI) has been implemented into a three-dimensional version of the Goddard Cumulus Ensemble (GCE) model. The design of the MPP with MPI maintains a similar code structure for the whole domain and for the portions after decomposition. Hence the model follows the same integration for single and multiple tasks (CPUs). It also requires minimal changes to the original code, so it is easily modified and/or managed by the model developers and users who have little knowledge of MPP. The entire model domain can be sliced into a one- or two-dimensional decomposition with a halo region, which is overlaid on the partial domains. The halo region ensures that no data need to be fetched across tasks during the computational stage, but it must be updated before the next computational stage through data exchange via MPI. For reproducibility, transposing data among tasks is required for the spectral transform (Fast Fourier Transform, FFT), which is used in the anelastic version of the model for solving the pressure equation. The performance of the MPI-implemented codes (i.e., the compressible and anelastic versions) was tested on three different computing platforms. The major results are: 1) both versions have speedups of about 99% up to 256 tasks but not for 512 tasks; 2) the anelastic version has better speedup and efficiency because it requires more computations than the compressible version; 3) equal or approximately equal numbers of slices in the x- and y-directions provide the fastest integration due to fewer data exchanges; and 4) one-dimensional slices in the x-direction result in the slowest integration due to the need for more memory relocation for computation.
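    The halo update described above (compute only on local data, then exchange boundary strips with neighbouring tasks before the next stage) is the standard pattern sketched below with mpi4py for a one-dimensional decomposition and a one-cell halo. The slab size and the toy averaging stencil are illustrative only and are not taken from the GCE model.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size   # periodic neighbours

nx_local = 16
u = np.zeros(nx_local + 2)          # local slab plus one halo cell on each side
u[1:-1] = rank                      # fill the interior with rank-dependent data

for step in range(10):
    # halo exchange: send my edge cells, receive the neighbours' edge cells
    comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # computational stage uses only local data; the halo already holds remote values
    u[1:-1] = 0.5 * (u[:-2] + u[2:])

print(rank, u[1:-1].mean())
```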

  12. Quantitative modelling of the closure of meso-scale parallel currents in the nightside ionosphere

    Directory of Open Access Journals (Sweden)

    A. Marchaudon

    2004-01-01

    Full Text Available On 12 January 2000, during a northward IMF period, two successive conjunctions occurred between the CUTLASS SuperDARN radar pair and the two satellites Ørsted and FAST. This situation is used to describe and model the electrodynamics of a nightside meso-scale arc associated with a convection shear. Three field-aligned current sheets, one upward and two downward on both sides, are observed. Based on the measurements of the parallel currents and either the conductance or the electric field profile, a model of the ionospheric current closure is developed along each satellite orbit. This model is one-dimensional in a first attempt, and a two-dimensional model is tested for the Ørsted case. These models allow one to quantify the balance between electric field gradients and ionospheric conductance gradients in the closure of the field-aligned currents. These radar and satellite data are also combined with images from Polar-UVI, allowing for a description of the time evolution of the arc between the two satellite passes. The arc is very dynamic, in spite of quiet solar wind conditions. Periodic enhancements of the convection and of the electron precipitation associated with the arc are observed, probably associated with quasi-periodic injections of particles due to reconnection in the magnetotail. Also, a northward shift and a reorganisation of the precipitation pattern are observed, together with a southward shift of the convection shear. Key words. Ionosphere (auroral ionosphere; electric fields and currents; particle precipitation) – Magnetospheric physics (magnetosphere-ionosphere interactions)

  13. A parallel model for SQL astronomical databases based on solid state storage. Application to the Gaia Archive PostgreSQL database

    Science.gov (United States)

    González-Núñez, J.; Gutiérrez-Sánchez, R.; Salgado, J.; Segovia, J. C.; Merín, B.; Aguado-Agelet, F.

    2017-07-01

    Query planning and optimisation algorithms in most popular relational databases were developed at a time when hard disk drives were the only storage technology available. The advent of devices with higher parallel random-access capacity, such as solid state disks, opens up the way for intra-machine parallel computing over large datasets. We describe a two phase parallel model for the implementation of heavy analytical processes in single instance PostgreSQL astronomical databases. This model is particularised to two frequent astronomical problems, density maps and crossmatch computation with Quad Tree Cube (Q3C) indexes. They are implemented as part of the relational database infrastructure for the Gaia Archive and performance is assessed. An improvement by a factor of 28.40 over sequential execution is observed in the reference implementation for a histogram computation. Speedup ratios of 3.7 and 4.0 are attained for the reference positional crossmatches considered. We observe large performance enhancements over sequential execution for both CPU and disk access intensive computations, suggesting these methods might be useful with the growing data volumes in Astronomy.
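
    The two-phase "partition in parallel, then merge" idea described above can be sketched as follows for a density-map style aggregation. The connection string and the table and column names (gaia_source, ra, decl) are hypothetical placeholders, and this is not the Gaia Archive implementation; it only assumes psycopg2 and the Python standard library.

```python
# Illustrative sketch of the two-phase "partition, then aggregate" model described
# above, run against a single PostgreSQL instance. The DSN and the table/column
# names (gaia_source, ra, decl) are hypothetical; this is not the Gaia Archive code.
from multiprocessing import Pool
import psycopg2

DSN = "dbname=archive user=reader"          # assumed connection string
N_WORKERS = 8

def partial_density(dec_range):
    """Phase 1: each worker bins the sources of one declination strip."""
    lo, hi = dec_range
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT floor(ra)::int, count(*) FROM gaia_source "
            "WHERE decl >= %s AND decl < %s GROUP BY 1",
            (lo, hi),
        )
        return cur.fetchall()

if __name__ == "__main__":
    step = 180.0 / N_WORKERS
    strips = [(-90.0 + i * step, -90.0 + (i + 1) * step) for i in range(N_WORKERS)]
    with Pool(N_WORKERS) as pool:
        partials = pool.map(partial_density, strips)   # parallel phase
    density = {}                                       # phase 2: sequential merge
    for rows in partials:
        for ra_bin, count in rows:
            density[ra_bin] = density.get(ra_bin, 0) + count
    print(sorted(density.items())[:5])
```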

  14. Moving in Parallel Toward a Modern Modeling Epistemology: Bayes Factors and Frequentist Modeling Methods.

    Science.gov (United States)

    Rodgers, Joseph Lee

    2016-01-01

    The Bayesian-frequentist debate typically portrays these statistical perspectives as opposing views. However, both Bayesian and frequentist statisticians have expanded their epistemological basis away from a singular focus on the null hypothesis, to a broader perspective involving the development and comparison of competing statistical/mathematical models. For frequentists, statistical developments such as structural equation modeling and multilevel modeling have facilitated this transition. For Bayesians, the Bayes factor has facilitated this transition. The Bayes factor is treated in articles within this issue of Multivariate Behavioral Research. The current presentation provides brief commentary on those articles and more extended discussion of the transition toward a modern modeling epistemology. In certain respects, Bayesians and frequentists share common goals.

  15. A Parallel Disintegrated Model for Uncertainty Analysis in Estimating Electrical Power Outage Areas

    Science.gov (United States)

    Omitaomu, O. A.

    2008-05-01

    extreme events may lead to model uncertainty, parameter uncertainty, and/or decision uncertainty. The type and source of uncertainty can dictate the methods for characterizing the uncertainty and its impact on effective disaster management strategies. Several techniques including sensitivity analysis, fuzzy set theory, and Bayes' Theorem have been used for quantifying specific sources of uncertainty in various studies. However, these studies focus on individual areas of uncertainty and extreme weather. In this paper, we present some preliminary results in developing a parallel disintegrated model for uncertainty analysis with application to estimating electric power outage areas. The proposed model is disintegrated in the sense that each element of the impact assessment framework is assessed separately, and parallel in that, for each source of uncertainty, a number of equivalent estimating models are implemented and evaluated. The objectives of the model include identifying the sources of uncertainty to be included in the assessment model and determining the trade-offs in reducing the uncertainty due to major sources. The model would also be useful for uncertainty analysis of extreme weather impact assessments for other critical infrastructures.

  16. Triple arterial phase MR imaging with gadoxetic acid using a combination of contrast enhanced time robust angiography, keyhole, and viewsharing techniques and two-dimensional parallel imaging in comparison with conventional single arterial phase

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jeong Hee; Lee, Jeong Min; Han, Joon Koo [Dept. of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of); Yu, Mi Hye [Dept. of Radiology, Konkuk University Medical Center, Seoul (Korea, Republic of); Kim, Eun Ju [Philips Healthcare Korea, Seoul (Korea, Republic of)

    2016-07-15

    To determine whether triple arterial phase acquisition via a combination of Contrast Enhanced Time Robust Angiography, keyhole, temporal viewsharing and parallel imaging can improve arterial phase acquisition with higher spatial resolution than single arterial phase gadoxetic-acid enhanced magnetic resonance imaging (MRI). Informed consent was waived for this retrospective study by our Institutional Review Board. In 752 consecutive patients who underwent gadoxetic acid-enhanced liver MRI, either single (n = 587) or triple (n = 165) arterial phase acquisitions were obtained in a single breath-hold under MR fluoroscopy guidance. Arterial phase timing was assessed, and the degree of motion was rated on a four-point scale. The percentage of patients achieving the late arterial phase without significant motion was compared between the two methods using the χ2 test. The late arterial phase was captured at least once in 96.4% (159/165) of the triple arterial phase group and in 84.2% (494/587) of the single arterial phase group (p < 0.001). Significant motion artifacts (score ≤ 2) were observed in 13.3% (22/165), 1.2% (2/165), and 4.8% (8/165) of the 1st, 2nd, and 3rd scans of triple arterial phase acquisitions, respectively, and in 6.0% (35/587) of single phase acquisitions. Thus, the late arterial phase without significant motion artifacts was captured in 96.4% (159/165) of the triple arterial phase group and in 79.9% (469/587) of the single arterial phase group (p < 0.001). Triple arterial phase imaging may reliably provide adequate arterial phase imaging for gadoxetic acid-enhanced liver MRI.

  17. Triple Arterial Phase MR Imaging with Gadoxetic Acid Using a Combination of Contrast Enhanced Time Robust Angiography, Keyhole, and Viewsharing Techniques and Two-Dimensional Parallel Imaging in Comparison with Conventional Single Arterial Phase

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jeong Hee [Department of Radiology, Seoul National University Hospital, Seoul 03080 (Korea, Republic of); Department of Radiology, Seoul National University College of Medicine, Seoul 03087 (Korea, Republic of); Lee, Jeong Min [Department of Radiology, Seoul National University Hospital, Seoul 03080 (Korea, Republic of); Department of Radiology, Seoul National University College of Medicine, Seoul 03087 (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03087 (Korea, Republic of); Yu, Mi Hye [Department of Radiology, Konkuk University Medical Center, Seoul 05030 (Korea, Republic of); Kim, Eun Ju [Philips Healthcare Korea, Seoul 04342 (Korea, Republic of); Han, Joon Koo [Department of Radiology, Seoul National University Hospital, Seoul 03080 (Korea, Republic of); Department of Radiology, Seoul National University College of Medicine, Seoul 03087 (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03087 (Korea, Republic of)

    2016-11-01

    To determine whether triple arterial phase acquisition via a combination of Contrast Enhanced Time Robust Angiography, keyhole, temporal viewsharing and parallel imaging can improve arterial phase acquisition with higher spatial resolution than single arterial phase gadoxetic-acid enhanced magnetic resonance imaging (MRI). Informed consent was waived for this retrospective study by our Institutional Review Board. In 752 consecutive patients who underwent gadoxetic acid-enhanced liver MRI, either single (n = 587) or triple (n = 165) arterial phase acquisitions were obtained in a single breath-hold under MR fluoroscopy guidance. Arterial phase timing was assessed, and the degree of motion was rated on a four-point scale. The percentage of patients achieving the late arterial phase without significant motion was compared between the two methods using the χ² test. The late arterial phase was captured at least once in 96.4% (159/165) of the triple arterial phase group and in 84.2% (494/587) of the single arterial phase group (p < 0.001). Significant motion artifacts (score ≤ 2) were observed in 13.3% (22/165), 1.2% (2/165), and 4.8% (8/165) of the 1st, 2nd, and 3rd scans of triple arterial phase acquisitions, respectively, and in 6.0% (35/587) of single phase acquisitions. Thus, the late arterial phase without significant motion artifacts was captured in 96.4% (159/165) of the triple arterial phase group and in 79.9% (469/587) of the single arterial phase group (p < 0.001). Triple arterial phase imaging may reliably provide adequate arterial phase imaging for gadoxetic acid-enhanced liver MRI.

  18. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  19. Parallelization experience with four canonical econometric models using ParMitISEM

    NARCIS (Netherlands)

    Baştürk, N.; Grassi, S.; Hoogerheide, L.; van Dijk, H.K.

    2016-01-01

    This paper presents the parallel computing implementation of the MitISEM algorithm, labeled Parallel MitISEM. The basic MitISEM algorithm, introduced by Hoogerheide et al. (2012), provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of

  20. Parallelization Experience with Four Canonical Econometric Models Using ParMitISEM

    NARCIS (Netherlands)

    N. Basturk (Nalan); S. Grassi (Stefano); L.F. Hoogerheide (Lennart); H.K. van Dijk (Herman)

    2016-01-01

    textabstractThis paper presents the parallel computing implementation of the MitISEM algorithm, labeled Parallel MitISEM. The basic MitISEM algorithm, introduced by Hoogerheide, Opschoor and Van Dijk (2012), provides an automatic and flexible method to approximate a non-elliptical target density

  1. Parallelization experience with four canonical econometric models using ParMitISEM

    NARCIS (Netherlands)

    Bastürk, Nalan; Grassi, S.; Hoogerheide, L.; van Dijk, Herman K.

    2016-01-01

    This paper presents the parallel computing implementation of the MitISEM algorithm, labeled Parallel MitISEM. The basic MitISEM algorithm provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of Student-t densities, where only a kernel of

  2. Some approaches for modeling and analysis of a parallel mechanism with stewart platform architecture

    Energy Technology Data Exchange (ETDEWEB)

    V. De Sapio

    1998-05-01

    Parallel mechanisms represent a family of devices based on a closed kinematic architecture. This is in contrast to serial mechanisms, which consist of a chain-like series of joints and links in an open kinematic architecture. The closed architecture of parallel mechanisms offers certain benefits as well as disadvantages.
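
    For the Stewart platform specifically, the closed architecture makes the inverse kinematics algebraic: given the platform pose, each leg length follows directly from vector geometry. The sketch below illustrates this with placeholder attachment coordinates; it is a generic illustration, not the modeling approach of the report.

```python
# Minimal inverse-kinematics sketch for a Stewart platform: given the pose of the
# moving platform, each actuator length follows directly from vector geometry.
# The base/platform attachment coordinates below are illustrative placeholders.
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# attachment points (3 x 6): base anchors b_i and platform anchors p_i
angles = np.arange(6) * np.pi / 3
base = np.vstack([2.0 * np.cos(angles), 2.0 * np.sin(angles), np.zeros(6)])
plat = np.vstack([1.0 * np.cos(angles + 0.3), 1.0 * np.sin(angles + 0.3), np.zeros(6)])

def leg_lengths(t, R):
    """Leg vectors l_i = R p_i + t - b_i; the actuator lengths are their norms."""
    legs = R @ plat + t.reshape(3, 1) - base
    return np.linalg.norm(legs, axis=0)

# example pose: platform raised 1.5 units and rotated 10 degrees about z
print(leg_lengths(np.array([0.0, 0.0, 1.5]), rot_z(np.deg2rad(10.0))))
```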

  3. Unified dataflow model for the analysis of data and pipeline parallelism, and buffer sizing

    NARCIS (Netherlands)

    Hausmans, J.P.H.M.; Geuns, S.J.; Wiggers, M.H.; Bekooij, Marco Jan Gerrit

    2014-01-01

    Real-time stream processing applications such as software defined radios are usually executed concurrently on multiprocessor systems. Exploiting coarse-grained data parallelism by duplicating tasks is often required, besides pipeline parallelism, to meet the temporal constraints of the applications.

  4. The Modeling and Harmonic Coupling Analysis of Multiple-Parallel Connected Inverter Using Harmonic State Space (HSS)

    DEFF Research Database (Denmark)

    Kwon, Jun Bum; Wang, Xiongfei; Bak, Claus Leth

    2015-01-01

    change compared to the conventional operation. In this paper, a Harmonic State Space modeling method, which is based on the Linear Time varying theory, is used to analyze different operating points of the parallel connected converters. The analyzed results show that the HSS modeling approach explicitly...... be difficult in terms of complex multi-parallel connected systems, especially in the case of renewable energy, where possibilities for intermittent operation due to the weather conditions exist. Hence, it can bring many different operating points to the power converter, and the impedance characteristics can...

  5. MIP Models and Hybrid Algorithms for Simultaneous Job Splitting and Scheduling on Unrelated Parallel Machines

    Science.gov (United States)

    Ozmutlu, H. Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that allow the results of the local search to be adapted back into the genetic algorithm with a minimum of relocation operations on the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three newly developed MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms. PMID:24977204
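
    The random-key chromosome idea mentioned above can be illustrated with a generic sketch: each gene is a real number whose integer part (after scaling) selects a machine and whose fractional part orders the jobs. The decoding below is a simplified illustration with random processing times, not the GAspLA algorithm itself.

```python
# Sketch of the random-key idea used by such genetic algorithms: a chromosome of
# real numbers in [0, 1) is decoded into a machine assignment and a job order.
# This is a generic illustration, not the GAspLA algorithm of the paper.
import numpy as np

rng = np.random.default_rng(0)
n_jobs, n_machines = 6, 3
proc = rng.uniform(1.0, 9.0, size=(n_jobs, n_machines))  # unrelated machine times

def decode(chromosome):
    """Integer part of key*n_machines -> machine; fractional part -> job priority."""
    machines = (chromosome * n_machines).astype(int)
    priority = chromosome * n_machines - machines
    order = np.argsort(priority)                 # lower fractional key scheduled first
    loads = np.zeros(n_machines)
    for job in order:
        loads[machines[job]] += proc[job, machines[job]]
    return loads.max()                           # makespan of the decoded schedule

population = rng.random((20, n_jobs))            # 20 random-key chromosomes
best = min(decode(c) for c in population)
print("best decoded makespan:", round(best, 2))
```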

  6. Presheath/sheath model with secondary electron emission from two parallel walls

    International Nuclear Information System (INIS)

    Ahedo, E.

    2002-01-01

    A macroscopic model of the interaction of a plasma with two parallel, electron-emitting walls is presented. Zero Debye-length and total thermalization of the secondary electron emission (SEE) are assumed. The SEE is treated as a free beam within each thin, collisionless sheath, but as part of a single electron population within the presheath. Plasma models with three and two species result in the sheath and the presheath, respectively. The ion flow at the presheath/sheath transition is sonic, and the sound speed there determines the relation between the temperature of the confined electron populations in sheath and presheath. For the general case of a plasma flowing axially between two annular walls, the complete dimensionless solution depends on five parameters. Potential drops in the presheath can be larger than in the sheaths, mainly when charge-saturation is reached in the sheath or for a large effective ion friction in the presheath. The losses of plasma current to the walls are determined totally by the presheath problem, whereas the sheath problem and wall material determine the energy lost per impacting particle. Energy losses change drastically from zero SEE to an SEE yield of about 100% when the charge-saturated regime is reached.

  7. Oblique mid ocean ridge subduction modelling with the parallel fast multipole boundary element method

    Science.gov (United States)

    Quevedo, L.; Hansra, B.; Morra, G.; Butterworth, N.; Müller, R. D.

    2013-04-01

    Geodynamic models describe the thermo-mechanical evolution of rheologically intricate structures spanning different length scales, yet many of their most relevant dynamic features can be studied in terms of low Reynolds number multiphase creep flow of isoviscous and isopycnic structures. We use the BEM-Earth code to study the interaction of the lithosphere and mantle within the solid earth system in this approximation. BEM-Earth overcomes the limitations of traditional FD/FEM for this problem by considering only the dynamics of Boundary Integral Elements at fluid interfaces, and employing a parallel multipole solver accelerated with a hashed octree. As an application example, we self-consistently model the processes controlling the subduction of an oblique mid-ocean ridge in a global 3D spherical setting in a variety of cases, and find a critical angle characterising the transition between an extensional strain regime related to tectonic plate necking and a compressive regime related to Earth curvature effects.

  8. A repeated measures model for analysis of continuous outcomes in sequential parallel comparison design studies.

    Science.gov (United States)

    Doros, Gheorghe; Pencina, Michael; Rybin, Denis; Meisner, Allison; Fava, Maurizio

    2013-07-20

    Previous authors have proposed the sequential parallel comparison design (SPCD) to address the issue of high placebo response rates in clinical trials. The original use of SPCD focused on binary outcomes, but it has since been extended to continuous outcomes that arise more naturally in many fields, including psychiatry. Analytic methods proposed to date for analysis of SPCD trial continuous data include methods based on seemingly unrelated regression and ordinary least squares. Here, we propose a repeated measures linear model that uses all outcome data collected in the trial and accounts for data that are missing at random. An appropriate contrast formulated after the model has been fit can be used to test the primary hypothesis of no difference in treatment effects between study arms. Our extensive simulations show that when compared with the other methods, our approach preserves the type I error even for small sample sizes and offers adequate power and the smallest mean squared error under a wide variety of assumptions. We recommend consideration of our approach for analysis of data coming from SPCD trials. Copyright © 2013 John Wiley & Sons, Ltd.
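
    A hedged sketch of the general idea follows: fit one repeated-measures linear model to the outcomes from both SPCD phases and test a weighted contrast of the phase-specific treatment effects. The simulated data, the variable names, the mixed-model specification and the weight w are illustrative assumptions, not the authors' exact model or contrast.

```python
# Hedged sketch of the idea: fit one repeated-measures linear model to both SPCD
# phases and test a single weighted contrast of the phase-1 and phase-2 treatment
# effects. Data, variable names and the weight w are illustrative, not the paper's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120
subj = np.repeat(np.arange(n), 2)
phase = np.tile([1, 2], n)
treat = rng.integers(0, 2, size=2 * n)                 # 0 = placebo, 1 = drug
y = (10 - 1.5 * treat + rng.normal(0, 2, size=2 * n)
     + np.repeat(rng.normal(0, 1, n), 2))              # subject effect + noise
df = pd.DataFrame({"y": y, "subj": subj, "phase": phase, "treat": treat})
df["t1"] = ((df.phase == 1) & (df.treat == 1)).astype(float)   # phase-1 drug effect
df["t2"] = ((df.phase == 2) & (df.treat == 1)).astype(float)   # phase-2 drug effect

model = smf.mixedlm("y ~ C(phase) + t1 + t2", df, groups=df["subj"]).fit()
w = 0.6                                                # assumed SPCD weighting
names = list(model.params.index)
cov = np.asarray(model.cov_params())
i1, i2 = names.index("t1"), names.index("t2")
theta = w * model.params["t1"] + (1 - w) * model.params["t2"]   # pooled effect
var = (w**2 * cov[i1, i1] + (1 - w)**2 * cov[i2, i2]
       + 2 * w * (1 - w) * cov[i1, i2])
print("pooled effect:", theta, "z:", theta / np.sqrt(var))
```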

  9. Multi-objective optimization algorithms for mixed model assembly line balancing problem with parallel workstations

    Directory of Open Access Journals (Sweden)

    Masoud Rabbani

    2016-12-01

    Full Text Available This paper deals with the mixed model assembly line (MMAL) balancing problem of type-I. In MMALs, several highly similar products are made on the same assembly line. As a result, it is possible to assemble several types of products simultaneously without any additional setup times. The problem has some particular features, such as parallel workstations and precedence constraints in dynamic periods, in which each period also affects the next period. The research intends to reduce the number of workstations and maximize the workload smoothness between workstations. Dynamic periods are used to determine all variables in different periods to achieve efficient solutions. A non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO) are used to solve the problem. The proposed model is validated with GAMS software for small-sized problems, and the performance of the foregoing algorithms is compared based on several comparison metrics. The NSGA-II outperforms MOPSO with respect to some of the comparison metrics used in this paper, but in other metrics MOPSO is better than NSGA-II. Finally, conclusions and directions for future research are provided.
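
    Both algorithms rank candidate line balances by Pareto dominance over the two objectives (number of workstations and a workload-imbalance measure). The sketch below shows the non-dominated filtering step in its simplest form, assuming both objectives are minimized; the candidate values are illustrative.

```python
# Minimal sketch of Pareto non-dominated filtering, the core comparison used by
# NSGA-II and MOPSO when both objectives (e.g. number of workstations and a
# workload-imbalance measure) are to be minimized. Values below are illustrative.
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

candidates = [(5, 0.30), (6, 0.18), (5, 0.25), (7, 0.10), (6, 0.22)]
print(pareto_front(candidates))   # -> [(6, 0.18), (5, 0.25), (7, 0.10)]
```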

  10. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    Science.gov (United States)

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that allow the results of the local search to be adapted back into the genetic algorithm with a minimum of relocation operations on the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three newly developed MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms.

  11. The inaccuracy of conventional one-dimensional parallel thermal resistance circuit model for two-dimensional composite walls

    International Nuclear Information System (INIS)

    Wong, K.-L.; Hsien, T.-L.; Hsiao, M.-C.; Chen, W.-L.; Lin, K.-C.

    2008-01-01

    This investigation shows that two-dimensional steady state heat transfer problems of composite walls should not be solved by the conventional one-dimensional parallel thermal resistance circuits (PTRC) model, because the interface temperatures are not unique. Thus the PTRC model cannot be used like its conventionally recognized analogy, the parallel electrical resistance circuits (PERC) model, which has unique node voltages. Two typical composite wall examples, solved by CFD software, are used to demonstrate the incorrectness. The numerical results are compared with those obtained by the PTRC model, and very large differences are observed between the results. This proves that the application of the conventional heat transfer PTRC model to two-dimensional composite walls, introduced in most heat transfer textbooks, is totally incorrect. An alternative one-dimensional separately series thermal resistance circuit (SSTRC) model is proposed and applied to the two-dimensional composite walls with isothermal boundaries, and results with acceptable accuracy can be obtained by the new model.
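
    The difference between the two one-dimensional idealizations can be made concrete with a small numeric sketch: combining full-depth lanes in parallel (the PTRC-style circuit) versus combining layer-wise parallel groups in series. The geometry and conductivities below are illustrative, and the series variant shown is a generic isothermal-plane idealization rather than the authors' exact SSTRC formulation.

```python
# Generic numeric illustration of two 1-D idealizations for a two-by-two composite
# wall: full-depth lanes combined in parallel (PTRC-like) versus layer-wise parallel
# groups combined in series. Geometry and conductivities are illustrative, and this
# is not the authors' exact SSTRC formulation.
L1, L2 = 0.05, 0.10          # layer thicknesses in the heat-flow direction [m]
H = 0.5                      # height of each lane [m], unit depth
kA, kB, kC, kD = 1.0, 0.2, 0.5, 2.0   # conductivities of the four blocks [W/mK]

def r(k, L, area):           # conduction resistance of one block, R = L / (k A)
    return L / (k * area)

# Lanes in parallel: each lane is A->B (top) or C->D (bottom) in series.
R_lane_top = r(kA, L1, H) + r(kB, L2, H)
R_lane_bot = r(kC, L1, H) + r(kD, L2, H)
R_parallel_lanes = 1.0 / (1.0 / R_lane_top + 1.0 / R_lane_bot)

# Layers in series: each layer is the parallel combination of its two blocks.
R_layer1 = 1.0 / (1.0 / r(kA, L1, H) + 1.0 / r(kC, L1, H))
R_layer2 = 1.0 / (1.0 / r(kB, L2, H) + 1.0 / r(kD, L2, H))
R_series_layers = R_layer1 + R_layer2

print(R_parallel_lanes, R_series_layers)   # the two 1-D estimates differ; the true
                                           # 2-D result lies between these bounds
```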

  12. Incorrectness of conventional one-dimensional parallel thermal resistance circuit model for two-dimensional circular composite pipes

    International Nuclear Information System (INIS)

    Wong, K.-L.; Hsien, T.-L.; Chen, W.-L.; Yu, S.-J.

    2008-01-01

    This study proves that two-dimensional steady state heat transfer problems of composite circular pipes cannot be appropriately solved by the conventional one-dimensional parallel thermal resistance circuits (PTRC) model, because its interface temperatures are not unique. Thus, the PTRC model is definitely different from its conventionally recognized analogy, the parallel electrical resistance circuits (PERC) model, which has unique node voltages. Two typical composite circular pipe examples are solved by CFD software, and the numerical results are compared with those obtained by the PTRC model. This shows that the PTRC model generates large errors. Thus, this conventional model, introduced in most heat transfer textbooks, cannot be applied to two-dimensional composite circular pipes. In contrast, an alternative one-dimensional separately series thermal resistance circuit (SSTRC) model is proposed and applied to a two-dimensional composite circular pipe with isothermal boundaries, and acceptable results are returned.

  13. The design of multi-core DSP parallel model based on message passing and multi-level pipeline

    Science.gov (United States)

    Niu, Jingyu; Hu, Jian; He, Wenjing; Meng, Fanrong; Li, Chuanrong

    2017-10-01

    Currently, the design of embedded signal processing systems is often based on a specific application, but this approach is not conducive to the rapid development of signal processing technology. In this paper, a parallel processing model architecture based on a multi-core DSP platform is designed; it is mainly suitable for complex algorithms composed of different modules. This model combines the ideas of multi-level pipeline parallelism and message passing and draws on the advantages of the mainstream multi-core DSP models (the Master-Slave model and the Data Flow model), so that it achieves better performance. This paper uses a three-dimensional image generation algorithm to validate the efficiency of the proposed model by comparing it with the Master-Slave and Data Flow models.

  14. Models for assessing the relative phase velocity in a two-phase flow. Status report

    International Nuclear Information System (INIS)

    Schaffrath, A.; Ringel, H.

    2000-06-01

    Knowledge of the slip or drift flux in two-phase flow is necessary for several technical processes (e.g. two-phase pressure losses, heat and mass transfer in steam generators and condensers, dwell period in chemical reactors, moderation effectiveness of two-phase coolant in BWRs). In the following, the most important models for two-phase flow with different phase velocities (e.g. slip or drift models, the analogy between pressure loss and steam quality, ε - ε models, and models for the calculation of the void distribution in quiescent fluids) are classified, described and worked up for a further comparison with our own experimental data. (orig.)
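
    A worked example of the drift-flux family mentioned above, in the Zuber-Findlay form alpha = j_g / (C0 j + V_gj), is sketched below. The fluid properties, distribution parameter C0 and drift velocity V_gj are illustrative placeholders, not recommended values.

```python
# Worked example of a drift-flux relation of the Zuber-Findlay form mentioned
# above: alpha = j_g / (C0 * j + V_gj). Fluid properties, C0 and V_gj below are
# illustrative placeholders, not recommended values.
rho_g, rho_f = 20.0, 750.0      # gas / liquid densities [kg/m^3]
G = 1000.0                      # mass flux [kg/m^2 s]
x = 0.1                         # flow quality
C0, V_gj = 1.13, 0.25           # distribution parameter and drift velocity [m/s]

j_g = x * G / rho_g             # superficial gas velocity
j_f = (1.0 - x) * G / rho_f     # superficial liquid velocity
j = j_g + j_f                   # total volumetric flux

alpha = j_g / (C0 * j + V_gj)   # void fraction from the drift-flux model
slip = (j_g / alpha) / (j_f / (1.0 - alpha))   # resulting slip ratio u_g / u_f
print(round(alpha, 3), round(slip, 2))
```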

  15. Kinetostatic modeling and analysis of an Exechon parallel kinematic machine (PKM) module

    Science.gov (United States)

    Zhao, Yanqin; Jin, Yan; Zhang, Jun

    2016-01-01

    As a newly invented parallel kinematic machine (PKM), Exechon has found its potential application in machining and assembling industries due to high rigidity and high dynamics. To guarantee the overall performance, the loading conditions and deflections of the key components must be revealed to provide basic mechanical data for component design. For this purpose, a kinetostatic model is proposed with the substructure synthesis technique. The Exechon is divided into a platform subsystem, a fixed base subsystem and three limb subsystems according to its structure. By modeling the limb assemblage as a spatial beam constrained by two sets of lumped virtual springs representing the compliances of the revolute, universal and spherical joints, the equilibrium equations of the limb subsystems are derived with the finite element method (FEM). The equilibrium equations of the platform are derived with Newton's second law. By introducing deformation compatibility conditions between the platform and the limbs, the governing equilibrium equations of the system are derived to formulate an analytical expression for the system's deflections. The platform's elastic displacements and joint reactions caused by gravity are investigated and show a strong position dependency and axisymmetry due to the machine's kinematic and structural features. The proposed kinetostatic model is a trade-off between the accuracy of the FEM and the concision of analytical methods, and thus can predict the kinetostatics throughout the workspace in a quick and succinct manner. The proposed modeling methodology and kinetostatic analysis can be further expanded to other PKMs with necessary modifications, providing useful information for kinematic calibration as well as component strength calculations.

  16. Model-based phase-shifting interferometer

    Science.gov (United States)

    Liu, Dong; Zhang, Lei; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian

    2015-10-01

    A model-based phase-shifting interferometer (MPI) is developed, in which a novel calculation technique is proposed instead of the traditional complicated system structure, to achieve versatile, high precision and quantitative surface tests. In the MPI, a partial null lens (PNL) is employed to implement the non-null test. With some alternative PNLs, similar to the transmission spheres in Zygo interferometers, the MPI provides a flexible test for general spherical and aspherical surfaces. Based on modern computer modeling techniques, a reverse iterative optimizing construction (ROR) method is employed for the retrace error correction of the non-null test, as well as figure error reconstruction. A self-compiled ray-tracing program is set up for accurate system modeling and reverse ray tracing. The surface figure error can then be easily extracted from the wavefront data in the form of Zernike polynomials by the ROR method. Experiments on spherical and aspherical tests are presented to validate the flexibility and accuracy. The test results are compared with those of a Zygo interferometer (null tests), which demonstrates the high accuracy of the MPI. With such accuracy and flexibility, the MPI would possess large potential in modern optical shop testing.
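
    The figure-error extraction step mentioned above, expressing the wavefront in terms of Zernike polynomials, amounts to a linear least-squares fit. The sketch below fits a few low-order Zernike-style terms to a synthetic wavefront; the normalization convention and the data are placeholders, not the MPI's actual processing chain.

```python
# Illustrative least-squares fit of a few low-order Zernike-style terms (piston,
# tilts, defocus, primary astigmatism) to sampled wavefront data, sketching the
# figure-error extraction step mentioned above. The normalization convention and
# the synthetic wavefront are placeholders, not the MPI's actual processing.
import numpy as np

n = 64
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
mask = rho <= 1.0

basis = np.stack([np.ones_like(rho),            # piston
                  rho * np.cos(theta),          # x tilt
                  rho * np.sin(theta),          # y tilt
                  2 * rho**2 - 1,               # defocus
                  rho**2 * np.cos(2 * theta),   # astigmatism 0/90
                  rho**2 * np.sin(2 * theta)],  # astigmatism 45
                 axis=-1)[mask]                 # (n_pixels, 6) design matrix

rng = np.random.default_rng(0)
true_coeffs = np.array([0.0, 0.05, -0.02, 0.30, 0.10, 0.00])
wavefront = basis @ true_coeffs + rng.normal(0, 0.01, basis.shape[0])

coeffs, *_ = np.linalg.lstsq(basis, wavefront, rcond=None)
print(np.round(coeffs, 3))   # recovered Zernike-style coefficients
```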

  17. Speed Sensorless vector control of parallel-connected three-phase two-motor single-inverter drive system

    OpenAIRE

    Gunabalan, Ramachandiran; Sanjeevikumar, Padmanaban; Blaabjerg, Frede; Wheeler, Patrick; Ojo, Joseph Olorunfemi; Ertas, Ahmet H.

    2016-01-01

    This paper presents the characteristic behavior of direct vector control of two induction motors with sensorless speed feedback having the same rating parameters, paralleled combination, and supplied from a single current-controlled pulse-width-modulated voltage-source inverter drive. Natural observer design technique is known for its simple construction, which estimates the speed and rotor fluxes. Load torque is estimated by load torque adaptation and the average rotor flux was maintained co...

  18. PALM: a paralleled and integrated framework for phylogenetic inference with automatic likelihood model selectors.

    Directory of Open Access Journals (Sweden)

    Shu-Hwa Chen

    Full Text Available BACKGROUND: Selecting an appropriate substitution model and deriving a tree topology for a given sequence set are essential in phylogenetic analysis. However, such time consuming, computationally intensive tasks rely on knowledge of substitution model theories and related expertise to run through all possible combinations of several separate programs. To ensure a thorough and efficient analysis and avert tedious manipulations of various programs, this work presents an intuitive framework, the phylogenetic reconstruction with automatic likelihood model selectors (PALM, with convincing, updated algorithms and a best-fit model selection mechanism for seamless phylogenetic analysis. METHODOLOGY: As an integrated framework of ClustalW, PhyML, MODELTEST, ProtTest, and several in-house programs, PALM evaluates the fitness of 56 substitution models for nucleotide sequences and 112 substitution models for protein sequences with scores in various criteria. The input for PALM can be either sequences in FASTA format or a sequence alignment file in PHYLIP format. To accelerate the computing of maximum likelihood and bootstrapping, this work integrates MPICH2/PhyML, PalmMonitor and Palm job controller across several machines with multiple processors and adopts the task parallelism approach. Moreover, an intuitive and interactive web component, PalmTree, is developed for displaying and operating the output tree with options of tree rooting, branches swapping, viewing the branch length values, and viewing bootstrapping score, as well as removing nodes to restart analysis iteratively. SIGNIFICANCE: The workflow of PALM is straightforward and coherent. Via a succinct, user-friendly interface, researchers unfamiliar with phylogenetic analysis can easily use this server to submit sequences, retrieve the output, and re-submit a job based on a previous result if some sequences are to be deleted or added for phylogenetic reconstruction. PALM results in an inference of

  19. PALM: a paralleled and integrated framework for phylogenetic inference with automatic likelihood model selectors.

    Science.gov (United States)

    Chen, Shu-Hwa; Su, Sheng-Yao; Lo, Chen-Zen; Chen, Kuei-Hsien; Huang, Teng-Jay; Kuo, Bo-Han; Lin, Chung-Yen

    2009-12-07

    Selecting an appropriate substitution model and deriving a tree topology for a given sequence set are essential in phylogenetic analysis. However, such time consuming, computationally intensive tasks rely on knowledge of substitution model theories and related expertise to run through all possible combinations of several separate programs. To ensure a thorough and efficient analysis and avert tedious manipulations of various programs, this work presents an intuitive framework, the phylogenetic reconstruction with automatic likelihood model selectors (PALM), with convincing, updated algorithms and a best-fit model selection mechanism for seamless phylogenetic analysis. As an integrated framework of ClustalW, PhyML, MODELTEST, ProtTest, and several in-house programs, PALM evaluates the fitness of 56 substitution models for nucleotide sequences and 112 substitution models for protein sequences with scores in various criteria. The input for PALM can be either sequences in FASTA format or a sequence alignment file in PHYLIP format. To accelerate the computing of maximum likelihood and bootstrapping, this work integrates MPICH2/PhyML, PalmMonitor and Palm job controller across several machines with multiple processors and adopts the task parallelism approach. Moreover, an intuitive and interactive web component, PalmTree, is developed for displaying and operating the output tree with options of tree rooting, branches swapping, viewing the branch length values, and viewing bootstrapping score, as well as removing nodes to restart analysis iteratively. The workflow of PALM is straightforward and coherent. Via a succinct, user-friendly interface, researchers unfamiliar with phylogenetic analysis can easily use this server to submit sequences, retrieve the output, and re-submit a job based on a previous result if some sequences are to be deleted or added for phylogenetic reconstruction. PALM results in an inference of phylogenetic relationship not only by vanquishing the

  20. Linear and stable photonic radio frequency phase shifter based on a dual-parallel Mach-Zehnder modulator using a two-drive scheme.

    Science.gov (United States)

    Shen, Jianguo; Wu, Guiling; Zou, Weiwen; Chen, Ruihao; Chen, Jianping

    2013-12-01

    We theoretically and experimentally demonstrate a linear and stable photonic RF phase shifter based on a dual-parallel Mach-Zehnder modulator (DPMZM) using a two-drive scheme. To avoid the effect of the residual optical carrier and overcome the lowest frequency limit from the optical filter, a local microwave signal and a signal up-converted from the under-phase-shifted RF signal are applied to the two RF inputs of the DPMZM, respectively. A phase-shifted RF signal is generated by beating the two first-order upper sidebands located in the passband of the optical filter. A continuous and linear phase shift of more than 360° and power variation of less than ±0.15  dB at 1 GHz are achieved by simply tuning the bias voltage of the modulator. A phase tuning bandwidth of more than 17 MHz and phase drift of less than 0.5° within 2000 s are also observed.

  1. Comparison of least squares and exponential sine sweep methods for Parallel Hammerstein Models estimation

    Science.gov (United States)

    Rebillat, Marc; Schoukens, Maarten

    2018-05-01

    Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are at the same time easy to interpret as well as to estimate. One way to estimate PHM relies on the fact that the estimation problem is linear in the parameters and thus that classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some of the recently developed regularized impulse response estimation techniques. Another means of estimating PHM consists in using parametric or non-parametric exponential sine sweep (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method but that it is also less accurate. Furthermore, the LS method needs parameters to be set in advance whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS method proposed here.
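
    The "linear in the parameters" property exploited by the LS approach can be made concrete: for a PHM whose branches are powers of the input followed by FIR filters, the output is a linear combination of delayed input powers, so ordinary least squares applies. The orders and the simulated system below are illustrative, and the regularized variant introduced in the article is not reproduced.

```python
# Sketch of why PHM estimation is linear in the parameters: with polynomial branch
# nonlinearities x^p followed by FIR filters, the output is a linear combination of
# delayed powers of the input, so ordinary least squares applies. Branch/filter
# orders and the simulated system are illustrative; the article's regularized LS
# variant is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
N, P, M = 2000, 3, 8                     # samples, polynomial order, FIR length
u = rng.normal(size=N)

def phm_output(u, h):                    # h has shape (P, M): one FIR per branch
    y = np.zeros_like(u)
    for p in range(P):
        y += np.convolve(u**(p + 1), h[p])[:len(u)]
    return y

h_true = rng.normal(scale=0.5, size=(P, M))
y = phm_output(u, h_true) + 0.01 * rng.normal(size=N)

# Build the regressor: delayed copies of u, u^2, ..., u^P
cols = []
for p in range(P):
    v = u**(p + 1)
    for k in range(M):
        cols.append(np.concatenate([np.zeros(k), v[:N - k]]))
Phi = np.column_stack(cols)              # (N, P*M) linear-in-parameters regressor

h_est, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(np.max(np.abs(h_est.reshape(P, M) - h_true)))   # small estimation error
```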

  2. A parallel Discrete Element Method to model collisions between non-convex particles

    Directory of Open Access Journals (Sweden)

    Rakotonirina Andriarimina Daniel

    2017-01-01

    Full Text Available In many dry granular and suspension flow configurations, particles can be highly non-spherical. It is now well established in the literature that particle shape affects the flow dynamics or the microstructure of the particle assembly in assorted ways, e.g. the compacity of a packed bed or heap, dilation under shear, resistance to shear, momentum transfer between translational and angular motions, and the ability to form arches and block the flow. In this talk, we suggest an accurate and efficient way to model collisions between particles of (almost) arbitrary shape. For that purpose, we develop a Discrete Element Method (DEM) combined with a soft particle contact model. The collision detection algorithm handles contacts between bodies of various shape and size. For non-convex bodies, our strategy is based on decomposing a non-convex body into a set of convex ones. Therefore, our novel method can be called the “glued-convex method” (in the sense of clumping convex bodies together), as an extension of the popular “glued-spheres” method, and is implemented in our own granular dynamics code Grains3D. Since the whole problem is solved explicitly, our fully MPI-parallelized code Grains3D exhibits a very high scalability when dynamic load balancing is not required. In particular, simulations on up to a few thousand cores in configurations involving up to a few tens of millions of particles can readily be performed. We apply our enhanced numerical model to (i) the collapse of a granular column made of convex particles and (ii) the microstructure of a heap of non-convex particles in a cylindrical reactor.

  3. Phase Equilibrium Modeling for Shale Production Simulation

    DEFF Research Database (Denmark)

    Sandoval Lemus, Diego Rolando

    calculation tools for phase equilibrium in porous media with capillary pressure and adsorption effects. Analysis using these tools have shown that capillary pressure and adsorption have non-negligible effects on phase equilibrium in shale. As general tools, they can be used to calculate phase equilibrium...... in other porous media as well. The compositional simulator with added capillary pressure effects on phase equilibrium can be used for evaluating the effects in dynamic and more complex scenarios....

  4. Numerical modelling of series-parallel cooling systems in power plant

    Science.gov (United States)

    Regucki, Paweł; Lewkowicz, Marek; Kucięba, Małgorzata

    2017-11-01

    The paper presents a mathematical model allowing one to study series-parallel hydraulic systems like, e.g., the cooling system of a power boiler's auxiliary devices or a closed cooling system including condensers and cooling towers. The analytical approach is based on a set of non-linear algebraic equations solved using numerical techniques. As a result of the iterative process, a set of volumetric flow rates of water through all the branches of the investigated hydraulic system is obtained. The calculations indicate the influence of changes in the pipeline's geometrical parameters on the total cooling water flow rate in the analysed installation. Such an approach makes it possible to analyse different variants of the modernization of the studied systems, as well as allowing their critical elements to be identified. Based on these results, an investor can choose the optimal variant of the reconstruction of the installation from the economic point of view. As examples of such a calculation, two hydraulic installations are described. One is a boiler auxiliary cooling installation including two screw ash coolers. The other is a closed cooling system consisting of cooling towers and condensers.
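
    The flow-split computation described above reduces to a small set of non-linear algebraic equations: parallel branches must see the same pressure drop while the branch flows sum to the total. The sketch below solves a two-branch version with illustrative resistance coefficients; scipy.optimize.fsolve stands in for the iterative scheme of the paper.

```python
# Small sketch of the flow-split idea: two parallel cooling branches fed by one
# header must see the same pressure drop, dp_i = r_i * Q_i**2, while the branch
# flows sum to the total. Branch resistances and total flow are illustrative and
# scipy.optimize.fsolve stands in for the iterative scheme of the paper.
import numpy as np
from scipy.optimize import fsolve

r = np.array([4.0e4, 9.0e4])       # branch resistance coefficients [Pa s^2/m^6]
Q_total = 0.05                     # total volumetric flow rate [m^3/s]

def residuals(z):
    q1, q2, dp = z
    return [r[0] * q1**2 - dp,     # pressure drop over branch 1
            r[1] * q2**2 - dp,     # pressure drop over branch 2 (equal by parallelism)
            q1 + q2 - Q_total]     # continuity at the header

q1, q2, dp = fsolve(residuals, x0=[Q_total / 2, Q_total / 2, 1.0e3])
print(round(q1, 4), round(q2, 4), round(dp, 1))
```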

  5. Modeling and Control of a Parallel Waste Heat Recovery System for Euro-VI Heavy-Duty Diesel Engines

    NARCIS (Netherlands)

    Feru, E.; Willems, F.P.T.; Jager, B. de; Steinbuch, M.

    2014-01-01

    This paper presents the modeling and control of a waste heat recovery system for a Euro-VI heavy-duty truck engine. The considered waste heat recovery system consists of two parallel evaporators with expander and pumps mechanically coupled to the engine crankshaft. Compared to previous work, the

  6. Efficient multi-objective calibration of a computationally intensive hydrologic model with parallel computing software in Python

    Science.gov (United States)

    With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...

  7. Comparative Study of Message Passing and Shared Memory Parallel Programming Models in Neural Network Training

    Energy Technology Data Exchange (ETDEWEB)

    Vitela, J.; Gordillo, J.; Cortina, L; Hanebutte, U.

    1999-12-14

    A comparative performance study of a coarse-grained parallel neural network training code is presented, implemented in both OpenMP and MPI, standards for shared memory and message passing parallel programming environments, respectively. In addition, these versions of the parallel training code are compared to an implementation utilizing SHMEM, the native SGI/Cray environment for shared memory programming. The multiprocessor platform used is an SGI/Cray Origin 2000 with up to 32 processors. It is shown that in this study the native Cray environment outperforms MPI for the entire range of processors used, while OpenMP shows better performance than the other two environments when using more than 19 processors. In this study, the efficiency is always greater than 60% regardless of the parallel programming environment used as well as of the number of processors.

  8. A Pilot Study to Compare Programming Effort for Two Parallel Programming Models (PREPRINT)

    National Research Council Canada - National Science Library

    Hochstein, Lorin; Basili, Victor R; Vishkin, Uzi; Gilbert, John

    2007-01-01

    CONTEXT: Writing software for the current generation of parallel systems requires significant programmer effort, and the community is seeking alternatives that reduce effort while still achieving good performance. OBJECTIVE...

  9. Analyzing Tropical Waves Using the Parallel Ensemble Empirical Model Decomposition Method: Preliminary Results from Hurricane Sandy

    Science.gov (United States)

    Shen, Bo-Wen; Cheung, Samson; Li, Jui-Lin F.; Wu, Yu-ling

    2013-01-01

    In this study, we discuss the performance of the parallel ensemble empirical mode decomposition (EMD) in the analysis of tropical waves that are associated with tropical cyclone (TC) formation. To efficiently analyze high-resolution, global, multiple-dimensional data sets, we first implement multilevel parallelism into the ensemble EMD (EEMD) and obtain a parallel speedup of 720 using 200 eight-core processors. We then apply the parallel EEMD (PEEMD) to extract the intrinsic mode functions (IMFs) from preselected data sets that represent (1) idealized tropical waves and (2) large-scale environmental flows associated with Hurricane Sandy (2012). Results indicate that the PEEMD is efficient and effective in revealing the major wave characteristics of the data, such as wavelengths and periods, by sifting out the dominant (wave) components. This approach has a potential for hurricane climate study by examining the statistical relationship between tropical waves and TC formation.

  10. Exploring Parallel Algorithms for Volumetric Mass-Spring-Damper Models in CUDA

    DEFF Research Database (Denmark)

    Rasmusson, Allan; Mosegaard, Jesper; Sørensen, Thomas Sangild

    2008-01-01

    Since the advent of programmable graphics processors (GPUs) their computational powers have been utilized for general purpose computation. Initially by “exploiting” graphics APIs and recently through dedicated parallel computation frameworks such as the Compute Unified Device Architecture (CUDA...

  11. Mathematical models and simulations of phase noise in phase-locked loops

    Directory of Open Access Journals (Sweden)

    Sethapong Limkumnerd

    2007-07-01

    Full Text Available Phase noise in Phase-Locked Loops (PLLs) is a key parameter for communication systems, as it contributes to the bit error rate and causes synchronization problems. Accurate predictions of phase noise through mathematical models are consequently desirable for practical designs of PLLs. Although many phase noise models derived from noise sources in electronic devices such as oscillators and multipliers have been proposed, no phase noise models that include noise from loop filters have specifically been investigated. This paper therefore investigates the role of loop filters in phase noise contribution. The major scope of this paper is a detailed analysis and simulation of phase noise models resulting from all components, i.e. a voltage-controlled oscillator, a multiplier and a filter. Two particular second-order passive and active low-pass filters are compared. The results show that simulations of phase noise without an inclusion of filter noise may not be accurate, because the filter noise, particularly that of the active filter, contributes significantly to the total phase noise. Moreover, the passive filter does not dominate the phase noise at low offset frequencies while the active filter does. Therefore, the passive filter is the more suitable filter for PLL circuits at low offset frequencies. The phase noise models presented in this paper are relatively simple and can be used for accurate phase noise prediction in PLL designs.
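
    The role of the loop filter can be illustrated by evaluating the linearized closed-loop noise transfer functions of a simple second-order PLL on a frequency grid, as sketched below. The gains and filter time constants are illustrative placeholders, not the models or component values analyzed in the paper.

```python
# Illustrative evaluation of linearized PLL noise transfer functions: VCO phase
# noise is shaped by 1/(1+G), reference noise by G/(1+G), and a noise voltage added
# after the loop filter by (Kv/s)/(1+G), where G is the open-loop gain. Gains and
# filter time constants are illustrative placeholders.
import numpy as np

Kd = 0.5                    # phase detector gain [V/rad]
Kv = 2 * np.pi * 1e6        # VCO gain [rad/s/V]
tau1, tau2 = 1e-3, 1e-4     # passive lag filter time constants

f = np.logspace(2, 7, 6)                # offset frequencies [Hz]
s = 1j * 2 * np.pi * f
F = (1 + s * tau2) / (1 + s * tau1)     # loop filter response
G = Kd * F * Kv / s                     # open-loop gain

H_ref = G / (1 + G)                     # reference noise -> output (low-pass)
H_vco = 1 / (1 + G)                     # VCO noise -> output (high-pass)
H_filt = (Kv / s) / (1 + G)             # filter output noise -> output phase

for fi, a, b, c in zip(f, np.abs(H_ref), np.abs(H_vco), np.abs(H_filt)):
    print(f"{fi:9.0f} Hz  |H_ref|={a:6.3f}  |H_vco|={b:6.3f}  |H_filt|={c:8.1f}")
```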

  12. Prediction of Adequate Prenatal Care Utilization Based on the Extended Parallel Process Model.

    Science.gov (United States)

    Hajian, Sepideh; Imani, Fatemeh; Riazi, Hedyeh; Salmani, Fatemeh

    2017-10-01

    Pregnancy complications are one of the major public health concerns. One of the main causes of preventable complications is the absence or inadequate provision of prenatal care. The present study was conducted to investigate whether the Extended Parallel Process Model's constructs can predict the utilization of prenatal care services. This longitudinal prospective study was conducted on 192 pregnant women selected through multi-stage sampling of health facilities in Qeshm, Hormozgan province, from April to June 2015. Participants were followed up from the first half of pregnancy until childbirth to assess adequate or inadequate/non-utilization of prenatal care services. Data were collected using the structured Risk Behavior Diagnosis Scale. The analysis of the data was carried out in SPSS-22 using one-way ANOVA, linear regression and logistic regression analysis. The level of significance was set at 0.05. In total, 178 pregnant women with a mean age of 25.31±5.42 years completed the study. Perceived self-efficacy (OR=25.23) was a predictor of adequate utilization of prenatal care. Husband's occupation in the labor market (OR=0.43; P=0.02), unwanted pregnancy (OR=0.352), and caring for minors or the elderly at home (OR=0.35; P=0.045) were associated with lower odds of receiving prenatal care. The model showed that when the perceived efficacy of the prenatal care services overcame the perceived threat, the likelihood of prenatal care usage increased. This study identified some modifiable factors associated with prenatal care usage by women, providing key targets for appropriate clinical interventions.

  13. Optimized parallel convolutions for non-linear fluid models of tokamak ηi turbulence

    International Nuclear Information System (INIS)

    Milovich, J.L.; Tomaschke, G.; Kerbel, G.D.

    1993-01-01

    Non-linear computational fluid models of plasma turbulence based on spectral methods typically spend a large fraction of the total computing time evaluating convolutions. Usually these convolutions arise from an explicit or semi-implicit treatment of the convective non-linearities in the problem. Often the principal convective velocity is perpendicular to magnetic field lines, allowing a reduction of the convolution to two dimensions in an appropriate geometry, but beyond this, different models vary widely in the particulars of which mode amplitudes are selectively evolved to get the most efficient representation of the turbulence. As the number of modes in the problem, N, increases, the amount of computation required for this part of the evolution algorithm then scales as N²/timestep for a direct or analytic method and N ln N/timestep for a pseudospectral method. The constants of proportionality depend on the particulars of mode selection and determine the problem size for which the methods perform equally. For large enough N, the pseudospectral method performance is always superior, though some problems do not require correspondingly high resolution. Further, the Courant condition for numerical stability requires that the timestep size must decrease proportionately as N increases, thus accentuating the need to have fast methods for larger N problems. The authors have developed a package for the Cray system which performs these convolutions for a rather arbitrary mode selection scheme using either method. The package is highly optimized using a combination of macro and microtasking techniques, as well as vectorization and in some cases assembly coded routines. Parts of the package have also been developed and optimized for the CM200 and CM5 systems. Performance comparisons with respect to problem size, parallelization, selection schemes and architecture are presented.
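
    The N² versus N log N trade-off discussed above can be illustrated in one dimension: a direct sum over mode pairs versus a pseudospectral product with 2/3-rule dealiasing. The sketch below is a generic periodic example, not the optimized Cray/CM package.

```python
# 1-D periodic illustration of the two convolution strategies compared above: a
# direct O(N^2) sum over mode pairs versus an O(N log N) pseudospectral product
# with 2/3-rule dealiasing. This is a generic example, not the Cray/CM package.
import numpy as np

N = 64
k = np.fft.fftfreq(N, d=1.0 / N).astype(int)         # integer wavenumbers
rng = np.random.default_rng(0)
a_hat = rng.normal(size=N) + 1j * rng.normal(size=N)
b_hat = rng.normal(size=N) + 1j * rng.normal(size=N)
cutoff = N // 3
a_hat[np.abs(k) > cutoff] = 0.0                       # keep only retained modes
b_hat[np.abs(k) > cutoff] = 0.0

# Direct method: c_k = sum_p a_p * b_{k-p}, an O(N^2) sum over mode pairs
c_direct = np.zeros(N, dtype=complex)
for i, ki in enumerate(k):
    for j, kj in enumerate(k):
        kk = ki + kj
        if abs(kk) <= cutoff:
            c_direct[np.where(k == kk)[0][0]] += a_hat[i] * b_hat[j]

# Pseudospectral method: transform, multiply pointwise, transform back
a = np.fft.ifft(a_hat) * N                            # unnormalized inverse transform
b = np.fft.ifft(b_hat) * N
c_pseudo = np.fft.fft(a * b) / N
c_pseudo[np.abs(k) > cutoff] = 0.0                    # 2/3-rule dealiasing

print(np.max(np.abs(c_direct - c_pseudo)))            # agreement to round-off
```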

  14. An approach to computing discrete adjoints for MPI-parallelized models applied to Ice Sheet System Model 4.11

    Directory of Open Access Journals (Sweden)

    E. Larour

    2016-11-01

    Full Text Available Within the framework of sea-level rise projections, there is a strong need for hindcast validation of the evolution of polar ice sheets in a way that tightly matches observational records (from radar, gravity, and altimetry observations mainly). However, the computational requirements for making hindcast reconstructions possible are severe and rely mainly on the evaluation of the adjoint state of transient ice-flow models. Here, we look at the computation of adjoints in the context of the NASA/JPL/UCI Ice Sheet System Model (ISSM), written in C++ and designed for parallel execution with MPI. We present the adaptations required in the way the software is designed and written, but also generic adaptations in the tools facilitating the adjoint computations. We concentrate on the use of operator overloading coupled with the AdjoinableMPI library to achieve the adjoint computation of the ISSM. We present a comprehensive approach to (1) carry out type changing through the ISSM, hence facilitating operator overloading, (2) bind to external solvers such as MUMPS and GSL-LU, and (3) handle MPI-based parallelism to scale the capability. We demonstrate the success of the approach by computing sensitivities of hindcast metrics such as the misfit to observed records of surface altimetry on the northeastern Greenland Ice Stream, or the misfit to observed records of surface velocities on Upernavik Glacier, central West Greenland. We also provide metrics for the scalability of the approach, and the expected performance. This approach has the potential to enable a new generation of hindcast-validated projections that make full use of the wealth of datasets currently being collected, or already collected, in Greenland and Antarctica.
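
    The ISSM work obtains reverse-mode (adjoint) derivatives through operator overloading combined with the AdjoinableMPI library. As a much simpler illustration of what "type changing to facilitate operator overloading" means, the sketch below propagates derivatives with forward-mode dual numbers; this only shows the overloading mechanism, not the adjoint technique or the MPI handling used in the paper.

```python
# The "type changing to facilitate operator overloading" idea can be illustrated
# with a minimal forward-mode dual-number class: arithmetic operators are
# overloaded so derivative information is propagated alongside values. This only
# illustrates the operator-overloading mechanism; the ISSM work itself computes
# reverse-mode (adjoint) derivatives with MPI-aware tooling, which is not shown.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def model(x):
    """Any code written against the overloaded arithmetic is differentiated unchanged."""
    return 3.0 * x * x + 2.0 * x + 1.0

x = Dual(2.0, 1.0)            # seed dx/dx = 1
y = model(x)
print(y.value, y.deriv)       # 17.0 and d/dx (3x^2 + 2x + 1) at x = 2, i.e. 14.0
```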

  15. An Efficient Algorithm for EM Scattering from Anatomically Realistic Human Head Model Using Parallel CG-FFT Method

    Directory of Open Access Journals (Sweden)

    Lei Zhao

    2014-01-01

    Full Text Available An efficient algorithm is proposed to analyze the electromagnetic scattering problem from a high resolution head model in pixel data format. The algorithm is based on parallel techniques and the conjugate gradient (CG) method combined with the fast Fourier transform (FFT). Using the parallel CG-FFT method, the proposed algorithm is very efficient and can solve electrically very large problems which cannot be solved using the conventional CG-FFT method on a personal computer. The accuracy of the proposed algorithm is verified by comparing numerical results with analytical Mie-series solutions for dielectric spheres. Numerical experiments have demonstrated that the proposed method has good parallel efficiency.
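
    The CG-FFT combination rests on the fact that, for a convolution-type operator, the matrix-vector product inside each conjugate gradient iteration can be evaluated with FFTs in O(N log N) instead of O(N²). The one-dimensional sketch below uses a symmetric positive-definite circulant toy operator, not the electromagnetic scattering operator of the paper.

```python
# Minimal 1-D sketch of the CG-FFT idea: for a convolution-type (circulant)
# operator, the matrix-vector product inside the conjugate gradient iteration is
# evaluated with FFTs in O(N log N) instead of O(N^2). The operator below is a
# symmetric positive-definite toy stencil, not the EM scattering operator.
import numpy as np

N = 256
kernel = np.zeros(N)
kernel[0], kernel[1], kernel[-1] = 4.0, -1.0, -1.0   # circulant Laplacian-like stencil
K_hat = np.fft.fft(kernel)                           # eigenvalues: 4 - 2 cos(2 pi k / N) > 0

def matvec(x):
    """Circulant matrix-vector product via FFT."""
    return np.real(np.fft.ifft(K_hat * np.fft.fft(x)))

def cg(b, tol=1e-10, max_iter=200):
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

b = np.random.default_rng(0).normal(size=N)
x = cg(b)
print(np.linalg.norm(matvec(x) - b))                 # small final residual
```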

  16. Development of whole core thermal-hydraulic analysis program ACT. 4. Simplified fuel assembly model and parallelization by MPI

    International Nuclear Information System (INIS)

    Ohshima, Hiroyuki

    2001-10-01

    A whole core thermal-hydraulic analysis program, ACT, is being developed for the purpose of evaluating detailed in-core thermal hydraulic phenomena of fast reactors, including the effect of the flow between wrapper-tube walls (inter-wrapper flow), under various reactor operation conditions. As appropriate boundary conditions in addition to a detailed modeling of the core are essential for accurate simulations of in-core thermal hydraulics, ACT consists of not only fuel assembly and inter-wrapper flow analysis modules but also a heat transport system analysis module that gives the response of the plant dynamics to the core model. This report describes the incorporation of a simplified model into the fuel assembly analysis module and program parallelization by a message passing method toward large-scale simulations. ACT has a fuel assembly analysis module which can simulate a whole fuel pin bundle in each fuel assembly of the core; however, it may take much CPU time for a large-scale core simulation. Therefore, a simplified fuel assembly model that is thermal-hydraulically equivalent to the detailed one has been incorporated in order to save simulation time and resources. This simplified model is applied to several parts of the fuel assemblies in a core where detailed simulation results are not required. With regard to the program parallelization, the calculation load and the data flow of ACT were analyzed, and the optimum parallelization was implemented, including improvements to the numerical simulation algorithm of ACT. The Message Passing Interface (MPI) is applied to data communication between processes and to synchronization in parallel calculations. Parallelized ACT was verified through a comparison simulation with the original one. In addition to the above work, input manuals of the core analysis module and the heat transport system analysis module have been prepared. (author)

  17. COUPLING STATE-OF-THE-SCIENCE SUBSURFACE SIMULATION WITH ADVANCED USER INTERFACE AND PARALLEL VISUALIZATION: SBIR Phase I Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Hardeman, B.; Swenson, D.; Finsterle, S.; Zhou, Q.

    2008-04-30

    This is a Phase I report on a project to significantly enhance existing subsurface simulation software using leadership-class computing resources, allowing researchers to solve problems with greater speed and accuracy. Subsurface computer simulation is used for monitoring the behavior of contaminants around nuclear waste disposal and storage areas, groundwater flow, environmental remediation, carbon sequestration, methane hydrate production, and geothermal energy reservoir analysis. The Phase I project was a collaborative effort between Thunderhead Engineering (project lead and developers of a commercial pre- and post-processor for the TOUGH2 simulator) and Lawrence Berkeley National Laboratory (developers of the TOUGH2 simulator for subsurface flow). The Phase I project successfully identified the technical approaches to be implemented in Phase II.

  18. COUPLING STATE-OF-THE-SCIENCE SUBSURFACE SIMULATION WITH ADVANCED USER INTERFACE AND PARALLEL VISUALIZATION: SBIR Phase I Final Report

    International Nuclear Information System (INIS)

    Hardeman, B.; Swenson, D.; Finsterle, S.; Zhou, Q.

    2008-01-01

    This is a Phase I report on a project to significantly enhance existing subsurface simulation software using leadership-class computing resources, allowing researchers to solve problems with greater speed and accuracy. Subsurface computer simulation is used for monitoring the behavior of contaminants around nuclear waste disposal and storage areas, groundwater flow, environmental remediation, carbon sequestration, methane hydrate production, and geothermal energy reservoir analysis. The Phase I project was a collaborative effort between Thunderhead Engineering (project lead and developers of a commercial pre- and post-processor for the TOUGH2 simulator) and Lawrence Berkeley National Laboratory (developers of the TOUGH2 simulator for subsurface flow). The Phase I project successfully identified the technical approaches to be implemented in Phase II.

  19. A Three-dimensional Topological Model of Ternary Phase Diagram

    International Nuclear Information System (INIS)

    Mu, Yingxue; Bao, Hong

    2017-01-01

    In order to visualize the complex internal structure of a ternary phase diagram, this paper realizes a three-dimensional topological model of the ternary phase diagram using a purpose-designed data structure and an improved algorithm, under the guidance of relevant theories of computer graphics. The purpose of the model is mainly to analyze the relationships between the phase regions of a ternary phase diagram. The model can not only produce an isothermal section at any temperature but also extract a particular phase region in which users are interested. (paper)

  20. COMPARISON OF VIRTUAL FIELDS METHOD, PARALLEL NETWORK MATERIAL MODEL AND FINITE ELEMENT UPDATING FOR MATERIAL PARAMETER DETERMINATION

    Directory of Open Access Journals (Sweden)

    Florian Dirisamer

    2016-12-01

    Full Text Available Extracting material parameters from test specimens is very intensive in terms of cost and time, especially for viscoelastic material models, where the parameters depend on time (frequency), temperature and environmental conditions. Therefore, three different methods for extracting these parameters were tested: firstly, digital image correlation combined with the virtual fields method; secondly, a parallel network material model; and thirdly, finite element updating. These three methods are presented and their results are compared in terms of accuracy and experimental effort.

  1. Implementation science: a role for parallel dual processing models of reasoning?

    Science.gov (United States)

    Sladek, Ruth M; Phillips, Paddy A; Bond, Malcolm J

    2006-01-01

    Background A better theoretical base for understanding professional behaviour change is needed to support evidence-based changes in medical practice. Traditionally strategies to encourage changes in clinical practices have been guided empirically, without explicit consideration of underlying theoretical rationales for such strategies. This paper considers a theoretical framework for reasoning from within psychology for identifying individual differences in cognitive processing between doctors that could moderate the decision to incorporate new evidence into their clinical decision-making. Discussion Parallel dual processing models of reasoning posit two cognitive modes of information processing that are in constant operation as humans reason. One mode has been described as experiential, fast and heuristic; the other as rational, conscious and rule based. Within such models, the uptake of new research evidence can be represented by the latter mode; it is reflective, explicit and intentional. On the other hand, well practiced clinical judgments can be positioned in the experiential mode, being automatic, reflexive and swift. Research suggests that individual differences between people in both cognitive capacity (e.g., intelligence) and cognitive processing (e.g., thinking styles) influence how both reasoning modes interact. This being so, it is proposed that these same differences between doctors may moderate the uptake of new research evidence. Such dispositional characteristics have largely been ignored in research investigating effective strategies in implementing research evidence. Whilst medical decision-making occurs in a complex social environment with multiple influences and decision makers, it remains true that an individual doctor's judgment still retains a key position in terms of diagnostic and treatment decisions for individual patients. This paper argues therefore, that individual differences between doctors in terms of reasoning are important

  2. Implementation science: a role for parallel dual processing models of reasoning?

    Directory of Open Access Journals (Sweden)

    Phillips Paddy A

    2006-05-01

    Full Text Available Abstract Background A better theoretical base for understanding professional behaviour change is needed to support evidence-based changes in medical practice. Traditionally strategies to encourage changes in clinical practices have been guided empirically, without explicit consideration of underlying theoretical rationales for such strategies. This paper considers a theoretical framework for reasoning from within psychology for identifying individual differences in cognitive processing between doctors that could moderate the decision to incorporate new evidence into their clinical decision-making. Discussion Parallel dual processing models of reasoning posit two cognitive modes of information processing that are in constant operation as humans reason. One mode has been described as experiential, fast and heuristic; the other as rational, conscious and rule based. Within such models, the uptake of new research evidence can be represented by the latter mode; it is reflective, explicit and intentional. On the other hand, well practiced clinical judgments can be positioned in the experiential mode, being automatic, reflexive and swift. Research suggests that individual differences between people in both cognitive capacity (e.g., intelligence) and cognitive processing (e.g., thinking styles) influence how both reasoning modes interact. This being so, it is proposed that these same differences between doctors may moderate the uptake of new research evidence. Such dispositional characteristics have largely been ignored in research investigating effective strategies in implementing research evidence. Whilst medical decision-making occurs in a complex social environment with multiple influences and decision makers, it remains true that an individual doctor's judgment still retains a key position in terms of diagnostic and treatment decisions for individual patients. This paper argues therefore, that individual differences between doctors in terms of

  3. Modelling radiative transfer through ponded first-year Arctic sea ice with a plane-parallel model

    Science.gov (United States)

    Taskjelle, Torbjørn; Hudson, Stephen R.; Granskog, Mats A.; Hamre, Børge

    2017-09-01

    Under-ice irradiance measurements were done on ponded first-year pack ice along three transects during the ICE12 expedition north of Svalbard. Bulk transmittances (400-900 nm) were found to be on average 0.15-0.20 under bare ice, and 0.39-0.46 under ponded ice. Radiative transfer modelling was done with a plane-parallel model. While simulated transmittances deviate significantly from measured transmittances close to the edge of ponds, spatially averaged bulk transmittances agree well. That is, transect-average bulk transmittances, calculated using typical simulated transmittances for ponded and bare ice weighted by the fractional coverage of the two surface types, are in good agreement with the measured values. Radiative heating rates calculated from model output indicate that about 20 % of the incident solar energy is absorbed in bare ice, and 50 % in ponded ice (35 % in the pond itself, 15 % in the underlying ice). This large difference is due to the highly scattering surface scattering layer (SSL) increasing the albedo of the bare ice.
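
    The transect-average bulk transmittance mentioned above is an area-weighted mean of the typical ponded and bare-ice transmittances, T_avg = f_pond * T_pond + (1 - f_pond) * T_bare. A tiny illustrative calculation, using the mid-range transmittances quoted above and an assumed (not measured) pond fraction:

```cpp
// Area-weighted transect-average transmittance:
//   T_avg = f_pond * T_pond + (1 - f_pond) * T_bare.
// The pond fraction below is an illustrative value, not from the study.
#include <cstdio>

int main() {
    double tBare = 0.175;   // mid-range of 0.15-0.20 (bare ice)
    double tPond = 0.425;   // mid-range of 0.39-0.46 (ponded ice)
    double fPond = 0.30;    // assumed melt-pond fraction along the transect

    double tAvg = fPond * tPond + (1.0 - fPond) * tBare;
    std::printf("transect-average transmittance = %.3f\n", tAvg);
    return 0;
}
```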

  4. Modelling radiative transfer through ponded first-year Arctic sea ice with a plane-parallel model

    Directory of Open Access Journals (Sweden)

    T. Taskjelle

    2017-09-01

    Full Text Available Under-ice irradiance measurements were done on ponded first-year pack ice along three transects during the ICE12 expedition north of Svalbard. Bulk transmittances (400–900 nm) were found to be on average 0.15–0.20 under bare ice, and 0.39–0.46 under ponded ice. Radiative transfer modelling was done with a plane-parallel model. While simulated transmittances deviate significantly from measured transmittances close to the edge of ponds, spatially averaged bulk transmittances agree well. That is, transect-average bulk transmittances, calculated using typical simulated transmittances for ponded and bare ice weighted by the fractional coverage of the two surface types, are in good agreement with the measured values. Radiative heating rates calculated from model output indicate that about 20 % of the incident solar energy is absorbed in bare ice, and 50 % in ponded ice (35 % in the pond itself, 15 % in the underlying ice). This large difference is due to the highly scattering surface scattering layer (SSL) increasing the albedo of the bare ice.

  5. Implementing a Four-Phase Curriculum Review Model

    Science.gov (United States)

    LaCursia, Nancy

    2010-01-01

    This article describes how to implement the four-phase curriculum review model, a simplified process for renovating a high school health and physical education curriculum. Although this model was used at a large suburban high school, it could be adapted for use by smaller schools or other disciplines. The four phases of this model are: (1) needs…

  6. Regularity of solutions of a phase field model

    KAUST Repository

    Amler, Thomas

    2013-01-01

    Phase field models are widely used for modelling phase transition processes such as solidification, freezing or CO2 sequestration. In this paper, a phase field model proposed by G. Caginalp is considered. The existence and uniqueness of solutions are proved in the case of nonsmooth initial data. Continuity of solutions with respect to time is established. In particular, it is shown that the governing initial boundary value problem can be considered as a dynamical system. © 2013 International Press.

  7. Speed Sensorless vector control of parallel-connected three-phase two-motor single-inverter drive system

    DEFF Research Database (Denmark)

    Gunabalan, Ramachandiran; Sanjeevikumar, Padmanaban; Blaabjerg, Frede

    2016-01-01

    This paper presents the characteristic behavior of direct vector control of two induction motors with sensorless speed feedback having the same rating parameters, in paralleled combination, and supplied from a single current-controlled pulse-width-modulated voltage-source inverter drive. The natural observer design technique is known for its simple construction, which estimates the speed and rotor fluxes. Load torque is estimated by load torque adaptation, and the average rotor flux is maintained constant by rotor flux feedback control. The technique’s convergence rate is very fast and is robust to noise and parameter uncertainty. The gain matrix is absent in the natural observer. The rotor speed is estimated from the load torque, stator current, and rotor flux. Under symmetrical load conditions, the difference in speed between the two induction motors is reduced by considering the motor parameters...

  8. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable number of standardization proposals and technical specifications being developed. Those efforts, however, have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...
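
    One of the higher-level abstractions referred to above is the C++17 parallel algorithms library, in which an execution policy requests parallel execution of an otherwise unchanged standard algorithm. A minimal sketch follows; it assumes a standard library with parallel-algorithm support (e.g. libstdc++ built against TBB).

```cpp
// C++17 parallel algorithms: the execution policy expresses the parallelism,
// while the algorithm call itself stays unchanged.
#include <algorithm>
#include <cstdio>
#include <execution>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> v(1'000'000);
    std::iota(v.begin(), v.end(), 0.0);

    // Parallel transform: square every element.
    std::transform(std::execution::par, v.begin(), v.end(), v.begin(),
                   [](double x) { return x * x; });

    // Parallel reduction.
    double sum = std::reduce(std::execution::par, v.begin(), v.end(), 0.0);
    std::printf("sum of squares = %g\n", sum);
    return 0;
}
```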

  9. Parallel implementation of a Lagrangian-based model on an adaptive mesh in C++: Application to sea-ice

    Science.gov (United States)

    Samaké, Abdoulaye; Rampal, Pierre; Bouillon, Sylvain; Ólason, Einar

    2017-12-01

    We present a parallel implementation framework for a new dynamic/thermodynamic sea-ice model, called neXtSIM, based on the Elasto-Brittle rheology and using an adaptive mesh. The spatial discretisation of the model is done using the finite-element method. The temporal discretisation is semi-implicit and the advection is achieved using either a pure Lagrangian scheme or an Arbitrary Lagrangian Eulerian scheme (ALE). The parallel implementation presented here focuses on the distributed-memory approach using the message-passing library MPI. The efficiency and the scalability of the parallel algorithms are illustrated by numerical experiments performed using up to 500 processor cores of a cluster computing system. The performance obtained by the proposed parallel implementation of the neXtSIM code is shown to be sufficient for simulations for state-of-the-art sea-ice forecasting and geophysical process studies over geographical domains of several million square kilometres, such as the Arctic region.
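
    Scalability results such as those reported above are commonly summarized as strong-scaling speedup and parallel efficiency relative to a baseline run, S(p) = T(p0)/T(p) and E(p) = S(p) * p0 / p. The sketch below computes these metrics for a set of hypothetical timings; the numbers are illustrative and are not the neXtSIM measurements.

```cpp
// Strong-scaling speedup and parallel efficiency relative to a baseline core count.
// Timings below are hypothetical, for illustration only.
#include <cstdio>
#include <vector>

int main() {
    struct Run { int cores; double seconds; };
    std::vector<Run> runs = {{10, 1000.0}, {50, 230.0}, {100, 130.0}, {500, 40.0}};

    const Run base = runs.front();
    for (const Run& r : runs) {
        double speedup    = base.seconds / r.seconds;                    // S(p) = T(p0)/T(p)
        double efficiency = speedup * base.cores / static_cast<double>(r.cores);
        std::printf("%4d cores: speedup %6.2f, efficiency %5.1f %%\n",
                    r.cores, speedup, 100.0 * efficiency);
    }
    return 0;
}
```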

  10. Final Report: Simulation Tools for Parallel Microwave Particle in Cell Modeling

    International Nuclear Information System (INIS)

    Stoltz, Peter H.

    2008-01-01

    Transport of high-power rf fields and the subsequent deposition of rf power into plasma is an important component of developing tokamak fusion energy. Two limitations on rf heating are: (i) breakdown of the metallic structures used to deliver rf power to the plasma, and (ii) a detailed understanding of how rf power couples into a plasma. Computer simulation is a main tool for helping solve both of these problems, but one of the premier tools, VORPAL, is traditionally too difficult to use for non-experts. During this Phase II project, we developed the VorpalView user interface tool. This tool allows Department of Energy researchers a fully graphical interface for analyzing VORPAL output to more easily model rf power delivery and deposition in plasmas.

  11. High performance statistical computing with parallel R: applications to biology and climate modelling

    International Nuclear Information System (INIS)

    Samatova, Nagiza F; Branstetter, Marcia; Ganguly, Auroop R; Hettich, Robert; Khan, Shiraj; Kora, Guruprasad; Li, Jiangtian; Ma, Xiaosong; Pan, Chongle; Shoshani, Arie; Yoginath, Srikanth

    2006-01-01

    Ultrascale computing and high-throughput experimental technologies have enabled the production of scientific data about complex natural phenomena. With this opportunity comes a new problem: the massive quantities of data so produced. Answers to fundamental questions about the nature of those phenomena remain largely hidden in the produced data. The goal of this work is to provide a scalable high performance statistical data analysis framework to help scientists perform interactive analyses of these raw data to extract knowledge. Towards this goal we have been developing an open source parallel statistical analysis package, called Parallel R, that lets scientists employ a wide range of statistical analysis routines on high performance shared and distributed memory architectures without having to deal with the intricacies of parallelizing these routines.

  12. Steady-state and time-dependent modelling of parallel transport in the scrape-off layer

    DEFF Research Database (Denmark)

    Havlickova, E.; Fundamenski, W.; Naulin, Volker

    2011-01-01

    The one-dimensional fluid code SOLF1D has been used for modelling of plasma transport in the scrape-off layer (SOL) along magnetic field lines, both in steady state and under transient conditions that arise due to plasma turbulence. The presented work summarizes results of SOLF1D, with attention given to transient parallel transport, which reveals two distinct time scales due to the transport mechanisms of convection and diffusion. The temperature calculated in SOLF1D is compared with the approximative model used in the turbulence code ESEL, both for steady-state and turbulent SOL. Dynamics of the parallel transport are investigated for a simple transient event simulating the propagation of particles and energy to the targets from a blob. Time-dependent modelling combined with the effect of ballooning shows propagation of particles along the magnetic field line with Mach number up to M ≈ 1...

  13. Modeling and design of reacting systems with phase transfer catalysis

    DEFF Research Database (Denmark)

    Piccolo, Chiara; Hodges, George; Piccione, Patrick M.

    2011-01-01

    Issues related to the design of biphasic (liquid) catalytic reaction operations are discussed. A chemical system involving the reaction of an organic-phase soluble reactant (A) with an aqueous-phase soluble reactant (B) in the presence of a phase transfer catalyst (PTC) is modeled and, based on it, ...

  14. Evaluation of a subject-specific female gymnast model and simulation of an uneven parallel bar swing.

    Science.gov (United States)

    Sheets, Alison L; Hubbard, Mont

    2008-11-14

    A gymnast model and forward dynamics simulation of a dismount preparation swing on the uneven parallel bars were evaluated by comparing experimental and predicted joint positions throughout the maneuver. The bar model was a linearly elastic spring with a frictional bar/hand interface, and the gymnast model consisted of torso/head, arm and two leg segments. The hips were frictionless balls and sockets, and shoulder movement was planar with passive compliant structures approximated by a parallel spring and damper. Subject-specific body segment moments of inertia and shoulder compliance were estimated. Muscles crossing the shoulder and hip were represented as torque generators, and experiments quantified maximum instantaneous torques as functions of joint angle and angular velocity. Maximum torques were scaled by joint torque activations as functions of time to produce realistic motions. The downhill simplex method optimized activations and simulation initial conditions to minimize the difference between experimental and predicted bar-center, shoulder, hip, and ankle positions. Comparing experimental and simulated performances allowed evaluation of bar, shoulder compliance, joint torque, and gymnast models. Errors in all except the gymnast model are random, zero mean, and uncorrelated, verifying that all essential system features are represented. Although the swing simulation using the gymnast model matched experimental joint positions with a 2.15 cm root-mean-squared error, errors are correlated. Correlated errors indicate that the gymnast model is not complex enough to exactly reproduce the experimental motion. Possible model improvements including a nonlinear shoulder model with active translational control and a two-segment torso would not have been identified if the objective function did not evaluate the entire system configuration throughout the motion. The model and parameters presented in this study can be effectively used to understand and improve an uneven parallel bar swing.

  15. A counting process model of survival of parallel load-sharing system

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr; Linka, A.

    2001-01-01

    Vol. 37, No. 1 (2001), pp. 47-60 ISSN 0023-5954 R&D Projects: GA ČR GA402/98/0472; GA MŠk VS97084 Institutional research plan: AV0Z1075907 Keywords: reliability * mathematical statistics * parallel system Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.316, year: 2001

  16. Sparse Probabilistic Parallel Factor Analysis for the Modeling of PET and Task-fMRI Data

    DEFF Research Database (Denmark)

    Beliveau, Vincent; Papoutsakis, Georgios; Hinrich, Jesper Løve

    2017-01-01

    Modern datasets are often multiway in nature and can contain patterns common to a mode of the data (e.g. space, time, and subjects). Multiway decomposition methods such as parallel factor analysis (PARAFAC) take into account the intrinsic structure of the data, and sparse versions of these methods improve...

  17. Use of massively parallel computing to improve modelling accuracy within the nuclear sector

    Directory of Open Access Journals (Sweden)

    L M Evans

    2016-06-01

    This work presents recent advancements in three techniques: uncertainty quantification (UQ); cellular automata finite element (CAFE); and image-based finite element methods (IBFEM). Case studies are presented demonstrating their suitability for use in nuclear engineering, made possible by advancements in parallel computing hardware that is projected to be available to industry within the next decade at a cost of the order of $100k.

  18. Parallel Process and Isomorphism: A Model for Decision Making in the Supervisory Triad

    Science.gov (United States)

    Koltz, Rebecca L.; Odegard, Melissa A.; Feit, Stephen S.; Provost, Kent; Smith, Travis

    2012-01-01

    Parallel process and isomorphism are two supervisory concepts that are often discussed independently but rarely discussed in connection with each other. These two concepts, philosophically, have different historical roots, as well as different implications for interventions with regard to the supervisory triad. The authors examine the difference…

  19. Model-independent partial wave analysis using a massively-parallel fitting framework

    Science.gov (United States)

    Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.

    2017-10-01

    The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D+ → h+h+h−. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m²(h+h−) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.
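
    The computational core of such fits is a data-parallel reduction: each event's probability density is evaluated independently and the negative log-likelihood is summed. The sketch below expresses that pattern with standard C++17 parallel algorithms and a toy Gaussian PDF; it deliberately does not use GooFit's or Thrust's actual APIs, and all names are placeholders.

```cpp
// Data-parallel negative log-likelihood evaluation, the pattern GPU fitting
// frameworks parallelize. Toy Gaussian PDF; not the GooFit API.
#include <cmath>
#include <cstdio>
#include <execution>
#include <numeric>
#include <random>
#include <vector>

static double gaussPdf(double x, double mu, double sigma) {
    const double twoPi = 6.283185307179586;
    double z = (x - mu) / sigma;
    return std::exp(-0.5 * z * z) / (sigma * std::sqrt(twoPi));
}

// NLL(mu, sigma) = -sum_i log p(x_i | mu, sigma), evaluated in parallel per event.
double negLogLikelihood(const std::vector<double>& events, double mu, double sigma) {
    return std::transform_reduce(
        std::execution::par, events.begin(), events.end(), 0.0, std::plus<>(),
        [mu, sigma](double x) { return -std::log(gaussPdf(x, mu, sigma)); });
}

int main() {
    std::mt19937 rng(1);
    std::normal_distribution<double> dist(1.2, 0.4);
    std::vector<double> events(1'000'000);
    for (double& e : events) e = dist(rng);

    std::printf("NLL at (1.2, 0.4): %.2f\n", negLogLikelihood(events, 1.2, 0.4));
    std::printf("NLL at (1.0, 0.4): %.2f\n", negLogLikelihood(events, 1.0, 0.4));
    return 0;
}
```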

  20. Regularized lattice Boltzmann model for immiscible two-phase flows with power-law rheology

    Science.gov (United States)

    Ba, Yan; Wang, Ningning; Liu, Haihu; Li, Qiang; He, Guoqiang

    2018-03-01

    In this work, a regularized lattice Boltzmann color-gradient model is developed for the simulation of immiscible two-phase flows with power-law rheology. This model is as simple as the Bhatnagar-Gross-Krook (BGK) color-gradient model except that an additional regularization step is introduced prior to the collision step. In the regularization step, the pseudo-inverse method is adopted as an alternative solution for the nonequilibrium part of the total distribution function, and it can be easily extended to other discrete velocity models no matter whether a forcing term is considered or not. The obtained expressions for the nonequilibrium part are merely related to macroscopic variables and velocity gradients that can be evaluated locally. Several numerical examples, including the single-phase and two-phase layered power-law fluid flows between two parallel plates, and the droplet deformation and breakup in a simple shear flow, are conducted to test the capability and accuracy of the proposed color-gradient model. Results show that the present model is more stable and accurate than the BGK color-gradient model for power-law fluids with a wide range of power-law indices. Compared to its multiple-relaxation-time counterpart, the present model can increase the computing efficiency by around 15%, while keeping the same accuracy and stability. Also, the present model is found to be capable of reasonably predicting the critical capillary number of droplet breakup.
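
    For reference, a power-law (Ostwald-de Waele) fluid has an apparent viscosity ν = k·|γ̇|^(n−1), and in BGK-type lattice Boltzmann schemes the local relaxation time is typically obtained from that viscosity as τ = ν/c_s² + 0.5 in lattice units (c_s² = 1/3). A small hedged sketch of that local update follows, with illustrative parameter values rather than those used in the paper.

```cpp
// Local power-law viscosity and BGK relaxation-time update (lattice units).
//   nu = k * |shear_rate|^(n-1);  tau = nu / cs2 + 0.5, with cs2 = 1/3.
// Parameter values are illustrative only.
#include <cmath>
#include <cstdio>

int main() {
    const double k   = 0.01;        // consistency coefficient (lattice units)
    const double cs2 = 1.0 / 3.0;   // lattice sound speed squared

    const double shearRates[] = {0.001, 0.01, 0.1};
    const double indices[]    = {0.5, 1.0, 1.5};   // shear-thinning, Newtonian, shear-thickening

    for (double n : indices) {
        for (double gammaDot : shearRates) {
            double nu  = k * std::pow(gammaDot, n - 1.0);   // apparent kinematic viscosity
            double tau = nu / cs2 + 0.5;                    // local BGK relaxation time
            std::printf("n = %.1f, |gamma_dot| = %.3f -> nu = %.5f, tau = %.4f\n",
                        n, gammaDot, nu, tau);
        }
    }
    return 0;
}
```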

  1. Model of Dynamic Pricing for Two Parallels Flights with Multiple Fare Classes Based on Passenger Choice Behavior

    Directory of Open Access Journals (Sweden)

    Ahmad Rusdiansyah

    2010-01-01

    Full Text Available Airline revenue management (ARM) is one of the emerging topics in transportation logistics. This paper discusses a problem in ARM, namely dynamic pricing for two parallel flights owned by the same airline. We extend the existing joint pricing model for parallel flights under passenger choice behavior found in the literature, generalizing it to consider multiple full-fare classes instead of only a single full-fare class. Consequently, the seat allocation for each fare class has to be defined beforehand. We combine the joint pricing model with the nested Expected Marginal Seat Revenue (EMSR) model. To solve this hybrid model, we have developed a dynamic programming-based algorithm. We have also conducted numerical experiments to show the behavior of our model. The experimental results show that the expected revenue of both flights is significantly influenced by the proportion of time-flexible passengers and the number of seats allocated to each full-fare class. As a managerial insight, our model shows that there is a close relationship between demand management, represented by the price of each fare class, and the total expected revenue considering passenger choice behavior.
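
    As background for the nested EMSR component mentioned above, a common EMSR-style rule (assuming normally distributed demand) protects seats for a higher fare class against a lower one up to the level y at which the expected marginal revenue equals the lower fare, i.e. f_high · P(D_high > y) = f_low. The sketch below solves this by bisection; the fares and demand forecast are made up, and this is not the authors' exact nested formulation.

```cpp
// EMSR-style protection level for a high-fare class against a lower fare:
// choose y so that f_high * P(D_high > y) = f_low, with D_high ~ Normal(mu, sigma).
// Illustrative assumption; not the paper's exact nested-EMSR formulation.
#include <cmath>
#include <cstdio>

static double normalTailProb(double y, double mu, double sigma) {
    // P(D > y) for D ~ Normal(mu, sigma), via the complementary error function.
    return 0.5 * std::erfc((y - mu) / (sigma * std::sqrt(2.0)));
}

// Solve f_high * P(D_high > y) = f_low for y by bisection.
double protectionLevel(double fHigh, double fLow, double mu, double sigma) {
    double lo = mu - 10.0 * sigma, hi = mu + 10.0 * sigma;
    for (int i = 0; i < 100; ++i) {
        double mid = 0.5 * (lo + hi);
        if (fHigh * normalTailProb(mid, mu, sigma) > fLow) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}

int main() {
    double fHigh = 500.0, fLow = 300.0;   // fares (made-up values)
    double mu = 40.0, sigma = 12.0;       // high-fare demand forecast (made-up values)

    double y = protectionLevel(fHigh, fLow, mu, sigma);
    std::printf("protect %.1f seats for the higher fare class\n", y);
    return 0;
}
```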

  2. Error modeling and sensitivity analysis of a parallel robot with SCARA(selective compliance assembly robot arm) motions

    Science.gov (United States)

    Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua

    2014-07-01

    Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely used in high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures they contain to a single link. As such an error model fails to reflect the error characteristics of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on it is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including the joints of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in a statistical sense, a sensitivity analysis is carried out. Accordingly, atlases are depicted to express each geometric error's influence on the pose errors of the moving platform. From these atlases, the geometric errors that have a greater impact on the accuracy of the moving platform are identified, and the sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also determined. By taking into account error factors that are generally neglected in existing modeling methods, the proposed method thoroughly discloses the process of error transmission and enhances the efficacy of accuracy design and calibration.

  3. Investigation of the applicability of a functional programming model to fault-tolerant parallel processing for knowledge-based systems

    Science.gov (United States)

    Harper, Richard

    1989-01-01

    In a fault-tolerant parallel computer, a functional programming model can facilitate distributed checkpointing, error recovery, load balancing, and graceful degradation. Such a model has been implemented on the Draper Fault-Tolerant Parallel Processor (FTPP). When used in conjunction with the FTPP's fault detection and masking capabilities, this implementation results in a graceful degradation of system performance after faults. Three graceful degradation algorithms have been implemented and are presented. A user interface has been implemented which requires minimal cognitive overhead by the application programmer, masking such complexities as the system's redundancy, distributed nature, variable complement of processing resources, load balancing, fault occurrence and recovery. This user interface is described and its use demonstrated. The applicability of the functional programming style to the Activation Framework, a paradigm for intelligent systems, is then briefly described.

  4. The phase field technique for modeling multiphase materials

    Science.gov (United States)

    Singer-Loginova, I.; Singer, H. M.

    2008-10-01

    This paper reviews methods and applications of the phase field technique, one of the fastest growing areas in computational materials science. The phase field method is used as a theory and computational tool for predictions of the evolution of arbitrarily shaped morphologies and complex microstructures in materials. In this method, the interface between two phases (e.g. solid and liquid) is treated as a region of finite width having a gradual variation of different physical quantities, i.e. it is a diffuse interface model. An auxiliary variable, the phase field or order parameter φ(x), is introduced, which distinguishes one phase from the other. Interfaces are identified by the variation of the phase field. We begin with presenting the physical background of the phase field method and give a detailed thermodynamical derivation of the phase field equations. We demonstrate how equilibrium and non-equilibrium physical phenomena at the phase interface are incorporated into the phase field methods. Then we address in detail dendritic and directional solidification of pure and multicomponent alloys, effects of natural convection and forced flow, grain growth, nucleation, solid-solid phase transformation and highlight other applications of the phase field methods. In particular, we review the novel phase field crystal model, which combines atomistic length scales with diffusive time scales. We also discuss aspects of quantitative phase field modeling such as thin interface asymptotic analysis and coupling to thermodynamic databases. The phase field methods result in a set of partial differential equations, whose solutions require time-consuming large-scale computations and often limit the applicability of the method. Subsequently, we review numerical approaches to solve the phase field equations and present a finite difference discretization of the anisotropic Laplacian operator.

  5. Dynamic and Oscillatory Motions of Cable-Driven Parallel Robots Based on a Nonlinear Cable Tension Model

    OpenAIRE

    Baklouti , Sana; Courteille , Eric; Caro , Stéphane; DKHIL , Mohamed

    2017-01-01

    International audience; In this paper, the dynamic modeling of cable-driven parallel robots (CDPRs) is addressed, where each cable length is subject to variations during operation. The focus is on an original formulation of cable tension, which reveals a softening behavior when strains become large. The dynamic modulus of cable elasticity is experimentally identified through Dynamic Mechanical Analysis (DMA). Numerical investigations carried out on suspended CDPRs of different sizes show the...

  6. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  7. Phase-space dynamics of Bianchi IX cosmological models

    International Nuclear Information System (INIS)

    Soares, I.D.

    1985-01-01

    The complex phase-space dynamical behaviour of a class of Bianchi IX cosmological models is discussed, such as the chaotic gravitational collapse due to Poincaré's homoclinic phenomena, and the n-furcation of periodic orbits and tori in the phase space of the models. Poincaré maps which show this behaviour are constructed numerically and applications are discussed. (Author) [pt

  8. Model for pairing phase transition in atomic nuclei

    International Nuclear Information System (INIS)

    Schiller, A.; Guttormsen, M.; Hjorth-Jensen, M.; Rekstad, J.; Siem, S.

    2002-01-01

    A model is developed which allows the investigation and classification of the pairing phase transition in atomic nuclei. The regions of the parameter space are discussed for which a pairing phase transition can be observed. The model parameters include number of particles, attenuation of pairing correlations with increasing seniority, single-particle level spacing, and pairing gap parameter

  9. Integrating Task and Data Parallelism

    OpenAIRE

    Massingill, Berna

    1993-01-01

    Many models of concurrency and concurrent programming have been proposed; most can be categorized as either task-parallel (based on functional decomposition) or data-parallel (based on data decomposition). Task-parallel models are most effective for expressing irregular computations; data-parallel models are most effective for expressing regular computations. Some computations, however, exhibit both regular and irregular aspects. For such computations, a better programming model is one that i...

  10. Model Predictive Control of Three Phase Inverter for PV Systems

    OpenAIRE

    Irtaza M. Syed; Kaamran Raahemifar

    2015-01-01

    This paper presents a model predictive control (MPC) of a utility interactive three phase inverter (TPI) for a photovoltaic (PV) system at commercial level. The proposed model uses phase locked loop (PLL) to synchronize the TPI with the power electric grid (PEG) and performs MPC control in a dq reference frame. TPI model consists of a boost converter (BC), maximum power point tracking (MPPT) control, and a three-leg voltage source inverter (VSI). The operational model of ...

  11. Measurement model and calibration experiment of over-constrained parallel six-dimensional force sensor based on stiffness characteristics analysis

    International Nuclear Information System (INIS)

    Niu, Zhi; Zhao, Yanzhi; Zhao, Tieshi; Cao, Yachao; Liu, Menghua

    2017-01-01

    An over-constrained, parallel six-dimensional force sensor has various advantages, including its ability to bear heavy loads and provide redundant force measurement information. These advantages render the sensor valuable in important applications in the field of aerospace (space docking tests, etc). The stiffness of each component in the over-constrained structure has a considerable influence on the internal force distribution of the structure. Thus, the measurement model changes when the measurement branches of the sensor are under tensile or compressive force. This study establishes a general measurement model for an over-constrained parallel six-dimensional force sensor considering the different branch tensions and compression stiffness values. Numerical calculations and analyses are performed using practical examples. Based on the parallel mechanism, an over-constrained, orthogonal structure is proposed for a six-dimensional force sensor. Hence, a prototype is designed and developed, and a calibration experiment is conducted. The measurement accuracy of the sensor is improved based on the measurement model under different branch tensions and compression stiffness values. Moreover, the largest class I error is reduced from 5.81 to 2.23% full scale (FS), and the largest class II error is reduced from 3.425 to 1.871% FS. (paper)

  12. Anisotropy in wavelet-based phase field models

    KAUST Repository

    Korzec, Maciek

    2016-04-01

    When describing the anisotropic evolution of microstructures in solids using phase-field models, the anisotropy of the crystalline phases is usually introduced into the interfacial energy by directional dependencies of the gradient energy coefficients. We consider an alternative approach based on a wavelet analogue of the Laplace operator that is intrinsically anisotropic and linear. The paper focuses on the classical coupled temperature/Ginzburg-Landau type phase-field model for dendritic growth. For the model based on the wavelet analogue, existence, uniqueness and continuous dependence on initial data are proved for weak solutions. Numerical studies of the wavelet based phase-field model show dendritic growth similar to the results obtained for classical phase-field models.

  13. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  14. Optimal dynamic remapping of data parallel computations

    Science.gov (United States)

    Nicol, David M.; Reynolds, Paul F., Jr.

    1990-01-01

    A large class of data parallel computations is characterized by a sequence of phases, with phase changes occurring unpredictably. Dynamic remapping of the workload to processors may be required to maintain good performance. The problem considered, for which the utility of remapping and the future behavior of the workload are uncertain, arises when phases exhibit stable execution requirements during a given phase, but requirements change radically between phases. For these situations, a workload assignment generated for one phase may hinder performance during the next phase. This problem is treated formally for a probabilistic model of computation with at most two phases. The authors address the fundamental problem of balancing the expected remapping performance gain against the delay cost, and they derive the optimal remapping decision policy. The promise of the approach is shown by application to multiprocessor implementations of an adaptive gridding fluid dynamics program and to a battlefield simulation program.

  15. Phase Chaos and Multistability in the Discrete Kuramoto Model

    DEFF Research Database (Denmark)

    Maistrenko, V. L.; Vasylenko, A. A.; Maistrenko, Y. L.

    2008-01-01

    The paper describes the appearance of a novel high-dimensional chaotic regime, called phase chaos, in the discrete Kuramoto model of globally coupled phase oscillators. This type of chaos is observed at small and intermediate values of the coupling strength. It is caused by the nonlinear interaction of the oscillators, while the individual oscillators behave periodically when left uncoupled. For the four-dimensional discrete Kuramoto model, we outline the region of phase chaos in the parameter plane, and distinguish the region where the phase chaos coexists with other periodic attractors...
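
    For orientation, the discrete Kuramoto model iterates the map θ_i(t+1) = θ_i(t) + ω_i + (K/N)·Σ_j sin(θ_j − θ_i), and synchronization is usually monitored through the order parameter r = |Σ_j e^{iθ_j}|/N. A short hedged sketch of that iteration follows; the frequencies and coupling strength are arbitrary choices, not the parameters studied in the paper.

```cpp
// Discrete Kuramoto map for N globally coupled phase oscillators, with the
// Kuramoto order parameter r as a synchronization measure. Parameters are illustrative.
#include <cmath>
#include <complex>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int N = 4;                 // four-dimensional discrete Kuramoto model
    const double K = 0.7;            // coupling strength (illustrative)
    std::mt19937 rng(0);
    std::uniform_real_distribution<double> uni(0.0, 2.0 * 3.141592653589793);

    std::vector<double> theta(N), omega = {0.10, 0.15, 0.20, 0.25};  // natural frequencies
    for (double& t : theta) t = uni(rng);

    for (int step = 0; step < 1000; ++step) {
        std::vector<double> next(N);
        for (int i = 0; i < N; ++i) {
            double coupling = 0.0;
            for (int j = 0; j < N; ++j) coupling += std::sin(theta[j] - theta[i]);
            next[i] = theta[i] + omega[i] + (K / N) * coupling;
        }
        theta = next;
    }

    std::complex<double> z(0.0, 0.0);
    for (double t : theta) z += std::exp(std::complex<double>(0.0, t));
    std::printf("order parameter r = %.3f\n", std::abs(z) / N);
    return 0;
}
```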

  16. Modelling of phase transformations in substitutional alloys

    Czech Academy of Sciences Publication Activity Database

    Svoboda, Jiří; Vala, J.; Gamsjäger, E.; Fischer, F. D.

    Vols. 237-240 (2005), pp. 647-652, ISSN 1012-0386. [DIMAT 2004 /6./, Krakow, 18.07.2004-23.07.2004] R&D Projects: GA AV ČR(CZ) 1QS200410502 Institutional research plan: CEZ:AV0Z20410507 Keywords: phase transformations Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 0.483, year: 2005

  17. Contribution to the optimal design of an hybrid parallel power-train: choice of a battery model; Contribution a la conception optimale d'une motorisation hybride parallele. Choix d'un modele d'accumulateur

    Energy Technology Data Exchange (ETDEWEB)

    Kuhn, E.

    2004-09-15

    This work deals with the dynamical and energetic modeling of a 42 V NiMH battery, the model of which is taken into account in a control law for a hybrid electric vehicle. Using an inventory of the electrochemical phenomena, an equivalent electrical scheme has been established. In this model, diffusion phenomena were represented using non-integer (fractional) derivatives. This tool leads to a very good approximation of diffusion phenomena; nevertheless, such a purely mathematical approach does not allow the energy losses inside the battery to be represented. Consequently, a second model, made of a series of electric circuits, has been proposed to represent energy transfers. This second model has been used in the determination of a control law which guarantees autonomous management of the electrical energy embedded in a parallel hybrid electric vehicle and prevents deep discharge of the battery. (author)

  18. On Modeling Large-Scale Multi-Agent Systems with Parallel, Sequential and Genuinely Asynchronous Cellular Automata

    International Nuclear Information System (INIS)

    Tosic, P.T.

    2011-01-01

    We study certain types of Cellular Automata (CA) viewed as an abstraction of large-scale Multi-Agent Systems (MAS). We argue that the classical CA model needs to be modified in several important respects, in order to become a relevant and sufficiently general model for the large-scale MAS, and so that thus generalized model can capture many important MAS properties at the level of agent ensembles and their long-term collective behavior patterns. We specifically focus on the issue of inter-agent communication in CA, and propose sequential cellular automata (SCA) as the first step, and genuinely Asynchronous Cellular Automata (ACA) as the ultimate deterministic CA-based abstract models for large-scale MAS made of simple reactive agents. We first formulate deterministic and nondeterministic versions of sequential CA, and then summarize some interesting configuration space properties (i.e., possible behaviors) of a restricted class of sequential CA. In particular, we compare and contrast those properties of sequential CA with the corresponding properties of the classical (that is, parallel and perfectly synchronous) CA with the same restricted class of update rules. We analytically demonstrate failure of the studied sequential CA models to simulate all possible behaviors of perfectly synchronous parallel CA, even for a very restricted class of non-linear totalistic node update rules. The lesson learned is that the interleaving semantics of concurrency, when applied to sequential CA, is not refined enough to adequately capture the perfect synchrony of parallel CA updates. Last but not least, we outline what would be an appropriate CA-like abstraction for large-scale distributed computing insofar as the inter-agent communication model is concerned, and in that context we propose genuinely asynchronous CA. (author)

  19. Employment, Production and Consumption model: Patterns of phase transitions

    Science.gov (United States)

    Lavička, H.; Lin, L.; Novotný, J.

    2010-04-01

    We have simulated the Employment, Production and Consumption (EPC) model using Monte Carlo. The EPC model is an agent-based model that mimics very basic rules of an industrial economy. From the perspective of physics, the interactions in the EPC model are multi-agent interactions in which the relations among agents follow the key laws for the circulation of capital and money. Monte Carlo simulations of the stochastic model reveal a phase transition in the model economy. The two phases are the phase with full unemployment and the phase with nearly full employment. The economy switches between these two states suddenly as a reaction to a slight variation in an exogenous parameter; thus the system exhibits strongly non-linear behavior in response to changes in the exogenous parameters.

  20. Rigid-flexible coupling dynamic modeling and investigation of a redundantly actuated parallel manipulator with multiple actuation modes

    Science.gov (United States)

    Liang, Dong; Song, Yimin; Sun, Tao; Jin, Xueying

    2017-09-01

    A systematic dynamic modeling methodology is presented to develop the rigid-flexible coupling dynamic model (RFDM) of an emerging flexible parallel manipulator with multiple actuation modes. By virtue of assumed mode method, the general dynamic model of an arbitrary flexible body with any number of lumped parameters is derived in an explicit closed form, which possesses the modular characteristic. Then the completely dynamic model of system is formulated based on the flexible multi-body dynamics (FMD) theory and the augmented Lagrangian multipliers method. An approach of combining the Udwadia-Kalaba formulation with the hybrid TR-BDF2 numerical algorithm is proposed to address the nonlinear RFDM. Two simulation cases are performed to investigate the dynamic performance of the manipulator with different actuation modes. The results indicate that the redundant actuation modes can effectively attenuate vibration and guarantee higher dynamic performance compared to the traditional non-redundant actuation modes. Finally, a virtual prototype model is developed to demonstrate the validity of the presented RFDM. The systematic methodology proposed in this study can be conveniently extended for the dynamic modeling and controller design of other planar flexible parallel manipulators, especially the emerging ones with multiple actuation modes.

  1. Parallelization and automatic data distribution for nuclear reactor simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, L.M. [Liebrock-Hicks Research, Calumet, MI (United States)

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  2. Parallelization and automatic data distribution for nuclear reactor simulations

    International Nuclear Information System (INIS)

    Liebrock, L.M.

    1997-01-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed

  3. Condition-based maintenance effectiveness for series–parallel power generation system—A combined Markovian simulation model

    International Nuclear Information System (INIS)

    Azadeh, A.; Asadzadeh, S.M.; Salehi, N.; Firoozi, M.

    2015-01-01

    Condition-based maintenance (CBM) is an increasingly applicable policy in the competitive marketplace as a means of improving equipment reliability and efficiency. Not only does maintenance have a close relationship with safety, but its costs also make it an even more attractive issue for researchers. This study proposes a model to evaluate the effectiveness of the CBM policy compared to two other maintenance policies: Corrective Maintenance (CM) and Preventive Maintenance (PM). Maintenance policies are compared through two system performance indicators: reliability and cost. To estimate the reliability and costs of the system, the proposed Markovian discrete-event simulation model is developed under each of these policies. The applicability and usefulness of the proposed Markovian simulation model are illustrated for a series–parallel power generation system. The simulated characteristics of the CBM system include its prognostic efficiency in estimating the remaining useful life of the equipment. Results show that with efficient prognostics, the CBM policy is an effective strategy compared to other maintenance strategies. - Highlights: • A model is developed to evaluate the effectiveness of the CBM policy. • Maintenance policies are compared through reliability and cost. • A Markovian simulation model is developed. • A series–parallel power generation system is considered. • CBM is an effective strategy compared to others

  4. Characteristics of the chiral phase transition in nonlocal quark models

    International Nuclear Information System (INIS)

    Gomez Dumm, D. Gomez; Scoccola, N.N.

    2005-01-01

    The characteristics of the chiral phase transition are analyzed within the framework of chiral quark models with nonlocal interactions in the mean-field approximation. In the chiral limit, we develop a semianalytic framework that allows us to explicitly determine the phase transition curve, the position of the critical points, some relevant critical exponents, etc. For the case of finite current quark masses, we show the behavior of various thermodynamical and chiral response functions across the phase transition

  5. A model for phase noise generation in amplifiers.

    Science.gov (United States)

    Tomlin, T D; Fynn, K; Cantoni, A

    2001-11-01

    In this paper, a model is presented for predicting the phase modulation (PM) and amplitude modulation (AM) noise in bipolar junction transistor (BJT) amplifiers. The model correctly predicts the dependence of phase noise on the signal frequency (at a particular carrier offset frequency), explains the noise shaping of the phase noise about the signal frequency, and shows the functional dependence on the transistor parameters and the circuit parameters. Experimental studies on common emitter (CE) amplifiers have been used to validate the PM noise model at carrier frequencies between 10 and 100 MHz.

  6. Parallel communicating grammar systems with context-free components are Turing complete for any communication model

    Directory of Open Access Journals (Sweden)

    Wilkin Mary Sarah Ruth

    2016-12-01

    Full Text Available Parallel Communicating Grammar Systems (PCGS) were introduced as a language-theoretic treatment of concurrent systems. A PCGS extends the concept of a grammar to a structure that consists of several grammars working in parallel, communicating with each other, and so contributing to the generation of strings. PCGS are usually more powerful than a single grammar of the same type; PCGS with context-free components (CF-PCGS) in particular were shown to be Turing complete. However, this result only holds when a specific type of communication (which we call broadcast communication, as opposed to one-step communication) is used. We expand the original construction that showed Turing completeness so that broadcast communication is eliminated at the expense of introducing a significant number of additional, helper component grammars. We thus show that CF-PCGS with one-step communication are also Turing complete. We introduce in the process several techniques that may be usable in other constructions and may be capable of removing broadcast communication in general.

  7. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  8. HYTEST Phase I Facility Commissioning and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Lee P. Shunn; Richard D. Boardman; Shane J. Cherry; Craig G. Rieger

    2009-09-01

    The purpose of this document is to report the first-year accomplishments of two coordinated Laboratory Directed Research and Development (LDRD) projects that utilize a hybrid energy testing laboratory coupling various reactors to investigate system reactance behavior. This work is the first phase of a series of hybrid energy research and testing stations (referred to hereafter as HYTEST facilities) that are planned for construction and operation at the Idaho National Laboratory (INL). A HYTEST Phase I facility was set up and commissioned in Bay 9 of the Bonneville County Technology Center (BCTC). The purpose of this facility is to utilize the hydrogen and oxygen produced by the High Temperature Steam Electrolysis test reactors operating in Bay 9 to support the investigation of kinetic phenomena and the transient response of integrated reactor components. The facility provides a convenient scale for conducting scoping tests of new reaction concepts, materials performance, new instruments, and real-time data collection and manipulation for advanced process control. An enclosed reactor module was assembled and connected to a new ventilation system equipped with a variable-speed exhaust blower to mitigate hazardous gas exposures, as well as contact with hot surfaces. The module was equipped with a hydrogen gas pump and receiver tank to supply high-quality hydrogen to chemical reactors located in the hood.

  9. Modeling the Distinct Phases of Skill Acquisition

    Science.gov (United States)

    Tenison, Caitlin; Anderson, John R.

    2016-01-01

    A focus of early mathematics education is to build fluency through practice. Several models of skill acquisition have sought to explain the increase in fluency that results from practice by modeling both the learning mechanisms driving this speedup and the changes in cognitive processes involved in executing the skill (such as transitioning from…

  10. Two-phase flow experimental studies in micro-models

    NARCIS (Netherlands)

    Karadimitriou, N.K.

    2013-01-01

    The aim of this research project was to put more physics into theories of two-phase flow. The significance of including interfacial area as a separate variable in two-phase flow and transport models was investigated. In order to investigate experimentally the significance of the inclusion of

  11. The Parallelized Large-Eddy Simulation Model (PALM) version 4.0 for atmospheric and oceanic flows: model formulation, recent developments, and future perspectives

    Directory of Open Access Journals (Sweden)

    B. Maronga

    2015-08-01

    In this paper we present the current version of the Parallelized Large-Eddy Simulation Model (PALM), whose core has been developed at the Institute of Meteorology and Climatology at Leibniz Universität Hannover (Germany). PALM is a Fortran 95-based code with some Fortran 2003 extensions and has been applied for the simulation of a variety of atmospheric and oceanic boundary layers for more than 15 years. PALM is optimized for use on massively parallel computer architectures and was recently ported to general-purpose graphics processing units. In the present paper we give a detailed description of the current version of the model and its features, such as an embedded Lagrangian cloud model and the possibility to use Cartesian topography. Moreover, we discuss recent model developments and future perspectives for LES applications.

  12. The Parallelized Large-Eddy Simulation Model (PALM) version 4.0 for atmospheric and oceanic flows: model formulation, recent developments, and future perspectives

    Science.gov (United States)

    Maronga, B.; Gryschka, M.; Heinze, R.; Hoffmann, F.; Kanani-Sühring, F.; Keck, M.; Ketelsen, K.; Letzel, M. O.; Sühring, M.; Raasch, S.

    2015-08-01

    In this paper we present the current version of the Parallelized Large-Eddy Simulation Model (PALM) whose core has been developed at the Institute of Meteorology and Climatology at Leibniz Universität Hannover (Germany). PALM is a Fortran 95-based code with some Fortran 2003 extensions and has been applied for the simulation of a variety of atmospheric and oceanic boundary layers for more than 15 years. PALM is optimized for use on massively parallel computer architectures and was recently ported to general-purpose graphics processing units. In the present paper we give a detailed description of the current version of the model and its features, such as an embedded Lagrangian cloud model and the possibility to use Cartesian topography. Moreover, we discuss recent model developments and future perspectives for LES applications.

  13. Inflammation-driven bone formation in a mouse model of ankylosing spondylitis: sequential not parallel processes.

    Science.gov (United States)

    Tseng, Hsu-Wen; Pitt, Miranda E; Glant, Tibor T; McRae, Allan F; Kenna, Tony J; Brown, Matthew A; Pettit, Allison R; Thomas, Gethin P

    2016-01-29

    Ankylosing spondylitis (AS) is an immune-mediated arthritis particularly targeting the spine and pelvis and is characterised by inflammation, osteoproliferation and frequently ankylosis. Current treatments that predominately target inflammatory pathways have disappointing efficacy in slowing disease progression. Thus, a better understanding of the causal association and pathological progression from inflammation to bone formation, particularly whether inflammation directly initiates osteoproliferation, is required. The proteoglycan-induced spondylitis (PGISp) mouse model of AS was used to histopathologically map the progressive axial disease events, assess molecular changes during disease progression and define disease progression using unbiased clustering of semi-quantitative histology. PGISp mice were followed over a 24-week time course. Spinal disease was assessed using a novel semi-quantitative histological scoring system that independently evaluated the breadth of pathological features associated with PGISp axial disease, including inflammation, joint destruction and excessive tissue formation (osteoproliferation). Matrix components were identified using immunohistochemistry. Disease initiated with inflammation at the periphery of the intervertebral disc (IVD) adjacent to the longitudinal ligament, reminiscent of enthesitis, and was associated with upregulated tumor necrosis factor and metalloproteinases. After a lag phase, established inflammation was temporospatially associated with destruction of IVDs, cartilage and bone. At later time points, advanced disease was characterised by substantially reduced inflammation, excessive tissue formation and ectopic chondrocyte expansion. These distinct features differentiated affected mice into early, intermediate and advanced disease stages. Excessive tissue formation was observed in vertebral joints only if the IVD was destroyed as a consequence of the early inflammation. Ectopic excessive tissue was predominantly

  14. Modeling, realization and evaluation of a parallel architecture for the data acquisition in multidetectors

    International Nuclear Information System (INIS)

    Guirande, Ph.; Aleonard, M-M.; Dien, Q-T.; Pedroza, J-L.

    1997-01-01

    The efficiency increase in 4π arrays (EUROGAM, EUROBALL, DIAMANT) is achieved by increasing the granularity, and hence the event counting rate in the acquisition system. Consequently, the architecture of the readout systems, the encoding and the software must evolve. To meet this requirement we have implemented a parallel architecture to check the quality of the events. The first application of this architecture was to provide an improved data acquisition system for the DIAMANT multidetector. The DIAMANT data acquisition system is based on a set of VME cards that must manage event readout, storage on magnetic media and histogram construction. The set consists of processors distributed over a network, a workstation to control the experiment and a display system for spectra and arrays. In such an architecture the VME bus quickly becomes a performance bottleneck, not only for data transfer but also for the coordination of the different processors. The parallel architecture used relieves the VME bus. It is based on three C40 DSPs (Digital Signal Processors) hosted on a commercial (LSI) VME board, with an external bus used to read the raw data from an interface card (ROCVI) connected to the 32-bit ECL bus that reads the real-time VME-based encoders. The tests performed revealed blocking after data exchanges between the processors when two communication lines were used; analysing this problem showed that tasks must be reassigned dynamically to avoid the blocking. Intrinsic evaluation (i.e. without transfer on the VME bus) was carried out for two parallel topologies (processor farm and tree). Simulation software was used to generate event packets. The rates obtained are essentially equivalent (6 MB/s) regardless of topology. The farm topology was chosen because it is simple to implement. The load evaluation reduced the rate in 'simplex' communication mode to 5.3 MB/s and
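
    The farm topology mentioned above is essentially a master-worker pattern: one dispatcher feeds identical workers that each check event quality. The sketch below imitates this with Python processes; the event format, the quality check and the worker count are hypothetical stand-ins for the DSP-based system described in the record.

```python
import multiprocessing as mp
import random

def check_event(event):
    """Worker task: validate one raw event packet (placeholder quality check)."""
    header, payload = event
    ok = header == 0xCAFE and len(payload) > 0
    return ok, sum(payload)

def make_events(n):
    """Generate synthetic event packets, standing in for the interface-card readout."""
    rng = random.Random(0)
    return [
        (0xCAFE if rng.random() > 0.01 else 0xBAD,       # ~1% corrupted headers
         [rng.randrange(4096) for _ in range(16)])
        for _ in range(n)
    ]

if __name__ == "__main__":
    events = make_events(10_000)
    # Farm topology: one dispatcher feeds a pool of identical workers.
    with mp.Pool(processes=3) as farm:                   # three workers, like the three DSPs
        results = farm.map(check_event, events, chunksize=64)
    good = sum(1 for ok, _ in results if ok)
    print(f"accepted {good}/{len(events)} events")
```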

  15. Generalized Reduced Order Model Generation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to develop a generalized reduced order model generation method. This method will allow for creation of reduced order aeroservoelastic state...

  16. Marshal: Maintaining Evolving Models, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — SIFT proposes to design and develop the Marshal system, a mixed-initiative tool for maintaining task models over the course of evolving missions. Marshal-enabled...

  17. Marshal: Maintaining Evolving Models, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — SIFT proposes to design and develop the Marshal system, a mixed-initiative tool for maintaining task models over the course of evolving missions. SIFT will...

  18. Base Flow Model Validation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is the systematic "building-block" validation of CFD/turbulence models employing a GUI driven CFD code (RPFM) and existing as well as new data sets to...

  19. Advanced Spacecraft Thermal Modeling, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — For spacecraft developers who spend millions to billions of dollars per unit and require 3 to 7 years to deploy, the LoadPath reduced-order (RO) modeling thermal...

  20. Base Flow Model Validation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The program focuses on turbulence modeling enhancements for predicting high-speed rocket base flows. A key component of the effort is the collection of high-fidelity...

  1. Carbon footprint estimator, phase II : volume I - GASCAP model.

    Science.gov (United States)

    2014-03-01

    The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG : emissions associated with the construction and maintenance of transportation projects. This phase : of development included techniques for estimating emiss...

  2. Characterization and Computational Modeling of Minor Phases in Alloy LSHR

    Science.gov (United States)

    Jou, Herng-Jeng; Olson, Gregory; Gabb, Timothy; Garg, Anita; Miller, Derek

    2012-01-01

    The minor phases of the powder metallurgy disk superalloy LSHR were studied. Samples were consistently heat treated at three different temperatures for long times to approach equilibrium. Additional heat treatments were also performed for shorter times, to assess minor-phase kinetics in non-equilibrium conditions. Minor phases including MC carbides, M23C6 carbides, M3B2 borides, and sigma were identified. Their average sizes and total area fractions were determined. CALPHAD thermodynamics databases and PrecipiCalc(TM), a computational precipitation modeling tool, were employed with Ni-base thermodynamics and diffusion databases to model and simulate the phase microstructural evolution observed in the experiments, with the objective of identifying model limitations and directions for model enhancement.

  3. Study of Monte Carlo Simulation Method for Methane Phase Diagram Prediction using Two Different Potential Models

    KAUST Repository

    Kadoura, Ahmad

    2011-06-06

    Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical temperature. A molecular simulation approach, specifically Monte Carlo simulation, was employed to create these isotherms in both the canonical and Gibbs ensembles. Simulations in the canonical ensemble with each model were conducted to estimate pressures over a range of temperatures above the methane critical temperature. Results were collected and compared to experimental data available in the literature; both models showed good agreement with the experimental data. In parallel, simulations below the critical temperature were run in the Gibbs ensemble using the L-J model only; comparison with experimental data showed a good fit with small deviations. The work was further developed with statistical studies to better understand and interpret the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be, hence further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps to avoid the problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
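
    A minimal sketch of the canonical-ensemble Metropolis Monte Carlo procedure referred to above is given below, using the Lennard-Jones potential in reduced units. The particle number, density, temperature and move size are arbitrary illustrative choices, and the full energy is recomputed at every trial move for clarity rather than efficiency.

```python
import numpy as np

# Minimal NVT Metropolis Monte Carlo for a Lennard-Jones fluid, in reduced units.
rng = np.random.default_rng(0)
N, rho, T = 64, 0.30, 1.50            # illustrative reduced density and temperature
L = (N / rho) ** (1.0 / 3.0)          # cubic box length

def total_energy(pos):
    """Pairwise LJ energy with minimum-image periodic boundary conditions."""
    e = 0.0
    for i in range(N - 1):
        d = pos[i + 1:] - pos[i]
        d -= L * np.round(d / L)
        r2 = np.sum(d * d, axis=1)
        inv6 = 1.0 / r2 ** 3
        e += np.sum(4.0 * (inv6 * inv6 - inv6))
    return e

# Start from a simple cubic lattice to avoid overlapping particles.
n_side = int(np.ceil(N ** (1 / 3)))
grid = np.arange(n_side) * (L / n_side)
pos = np.array([(x, y, z) for x in grid for y in grid for z in grid])[:N]

energy, accepted, nsteps = total_energy(pos), 0, 5_000
for _ in range(nsteps):
    i = rng.integers(N)
    trial = pos.copy()
    trial[i] = (trial[i] + (rng.random(3) - 0.5) * 0.3) % L
    de = total_energy(trial) - energy      # full recompute: simple, not efficient
    if de < 0.0 or rng.random() < np.exp(-de / T):   # Metropolis criterion
        pos, energy, accepted = trial, energy + de, accepted + 1

print(f"acceptance ratio: {accepted / nsteps:.2f}, energy per particle: {energy / N:.3f}")
```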

  4. SEISMIC SIMULATIONS USING PARALLEL COMPUTING AND THREE-DIMENSIONAL EARTH MODELS TO IMPROVE NUCLEAR EXPLOSION PHENOMENOLOGY AND MONITORING

    Energy Technology Data Exchange (ETDEWEB)

    Rodgers, A; Matzel, E; Pasyanos, M; Petersson, A; Sjogreen, B; Bono, C; Vorobiev, O; Antoun, T; Walter, W; Myers, S; Lomov, I

    2008-07-07

    The development of accurate numerical methods to simulate wave propagation in three-dimensional (3D) earth models and advances in computational power offer exciting possibilities for modeling the motions excited by underground nuclear explosions. This presentation will describe recent work to use new numerical techniques and parallel computing to model earthquakes and underground explosions to improve understanding of the wave excitation at the source and path-propagation effects. Firstly, we are using the spectral element method (SEM, SPECFEM3D code of Komatitsch and Tromp, 2002) to model earthquakes and explosions at regional distances using available 3D models. SPECFEM3D simulates anelastic wave propagation in fully 3D earth models in spherical geometry with the ability to account for free surface topography, anisotropy, ellipticity, rotation and gravity. Results show in many cases that 3D models are able to reproduce features of the observed seismograms that arise from path-propagation effects (e.g. enhanced surface wave dispersion, refraction, amplitude variations from focusing and defocusing, tangential component energy from isotropic sources). We are currently investigating the ability of different 3D models to predict path-specific seismograms as a function of frequency. A number of models developed using a variety of methodologies are available for testing. These include the WENA/Unified model of Eurasia (e.g. Pasyanos et al 2004), the global CUB 2.0 model (Shapiro and Ritzwoller, 2002), the partitioned waveform model for the Mediterranean (van der Lee et al., 2007) and stochastic models of the Yellow Sea Korean Peninsula region (Pasyanos et al., 2006). Secondly, we are extending our Cartesian anelastic finite difference code (WPP of Nilsson et al., 2007) to model the effects of free-surface topography. WPP models anelastic wave propagation in fully 3D earth models using mesh refinement to increase computational speed and improve memory efficiency. Thirdly
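
    As a toy illustration of the finite-difference wave propagation mentioned above (far simpler than SPECFEM3D or WPP), the sketch below advances a 1D acoustic wavefield with a second-order explicit scheme over a two-layer velocity model. Grid spacing, time step, velocities and the source wavelet are hypothetical.

```python
import numpy as np

# Minimal 1D acoustic finite-difference propagation (second order in space and time).
nx, dx, dt, nt = 600, 10.0, 1.0e-3, 1500
c = np.full(nx, 3000.0)           # background velocity, m/s
c[300:] = 4500.0                  # a single interface to produce a reflection

p_prev = np.zeros(nx)             # pressure at t - dt
p_curr = np.zeros(nx)             # pressure at t
src_i = 100                       # source grid index

for it in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = (p_curr[2:] - 2.0 * p_curr[1:-1] + p_curr[:-2]) / dx**2
    p_next = 2.0 * p_curr - p_prev + (c * dt) ** 2 * lap
    # Ricker-like source wavelet injected at one grid point (25 Hz, 50 ms delay)
    t = it * dt - 0.05
    p_next[src_i] += (1.0 - 2.0 * (np.pi * 25.0 * t) ** 2) * np.exp(-(np.pi * 25.0 * t) ** 2)
    p_prev, p_curr = p_curr, p_next

print("final |p| at receiver index 500:", abs(p_curr[500]))
```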

  5. Electromagnetic sunscreen model: implementation and comparison between several methods: step-film model, differential method, Mie scattering, and scattering by a set of parallel cylinders.

    Science.gov (United States)

    Lécureux, Marie; Enoch, Stefan; Deumié, Carole; Tayeb, Gérard

    2014-10-01

    Sunscreens protect from UV radiation, a carcinogen also responsible for sunburns and age-associated dryness. In order to anticipate the transmission of light through UV protection containing scattering particles, we implement electromagnetic models, using numerical methods for solving Maxwell's equations. After validating our models, we compare several calculation methods: the differential method, scattering by a set of parallel cylinders, and Mie scattering. The field of application and benefits of each method are studied, and examples using the appropriate method are described.
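
    For the Mie-scattering option mentioned above, a compact sketch of the standard Bohren and Huffman recurrences for a single homogeneous sphere is shown below; it returns the extinction and scattering efficiencies. The particle radius, wavelength and complex refractive index are illustrative values, not those used in the paper.

```python
import numpy as np

def mie_efficiencies(m, x):
    """Return (Qext, Qsca) for a sphere of relative refractive index m and size parameter x."""
    nmax = int(np.round(x + 4.0 * x ** (1.0 / 3.0) + 2.0))
    mx = m * x
    # Logarithmic derivative D_n(mx) by downward recurrence
    D = np.zeros(nmax + 16, dtype=complex)
    for n in range(nmax + 15, 0, -1):
        D[n - 1] = n / mx - 1.0 / (D[n] + n / mx)
    # Riccati-Bessel functions by upward recurrence
    psi_m1, psi0 = np.cos(x), np.sin(x)          # psi_{-1}, psi_0
    chi_m1, chi0 = -np.sin(x), np.cos(x)         # chi_{-1}, chi_0
    qext = qsca = 0.0
    for n in range(1, nmax + 1):
        psi = (2 * n - 1) / x * psi0 - psi_m1
        chi = (2 * n - 1) / x * chi0 - chi_m1
        xi, xi0 = psi - 1j * chi, psi0 - 1j * chi0
        an = ((D[n] / m + n / x) * psi - psi0) / ((D[n] / m + n / x) * xi - xi0)
        bn = ((D[n] * m + n / x) * psi - psi0) / ((D[n] * m + n / x) * xi - xi0)
        qext += (2 * n + 1) * (an + bn).real
        qsca += (2 * n + 1) * (abs(an) ** 2 + abs(bn) ** 2)
        psi_m1, psi0, chi_m1, chi0 = psi0, psi, chi0, chi
    return 2.0 / x ** 2 * qext, 2.0 / x ** 2 * qsca

# Hypothetical UVB case: 100 nm radius particle, 310 nm wavelength, TiO2-like index.
radius_nm, wavelength_nm, m = 100.0, 310.0, 2.6 + 0.01j
x = 2.0 * np.pi * radius_nm / wavelength_nm
print("Qext, Qsca =", mie_efficiencies(m, x))
```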

  6. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest… an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

  7. A multi-phase flow model for electrospinning process

    Directory of Open Access Journals (Sweden)

    Xu Lan

    2013-01-01

    An electrospinning process is a multi-phase and multi-physics process in which flow, electric and magnetic fields are coupled. This paper deals with establishing a multi-phase model for numerical study and explains how to prepare nanofibers and nanoporous materials. The model provides a powerful tool for controlling electrospinning parameters such as voltage, flow rate, and others.

  8. Parallel CFD simulation of flow in a 3D model of vibrating human vocal folds

    Czech Academy of Sciences Publication Activity Database

    Šidlof, Petr; Horáček, Jaromír; Řidký, V.

    2013-01-01

    Roč. 80, č. 1 (2013), s. 290-300 ISSN 0045-7930 R&D Projects: GA ČR(CZ) GAP101/11/0207 Institutional research plan: CEZ:AV0Z20760514 Keywords: numerical simulation * vocal folds * glottal airflow * finite volume method * parallel CFD Subject RIV: BI - Acoustics Impact factor: 1.532, year: 2013 http://www.sciencedirect.com/science?_ob=ArticleListURL&_method=list&_ArticleListID=-268060849&_sort=r&_st=13&view=c&_acct=C000034318&_version=1&_urlVersion=0&_userid=640952&md5=7c5b5539857ee9a02af5e690585b3126&searchtype=a

  9. Phase field modeling of flexoelectricity in solid dielectrics

    Science.gov (United States)

    Chen, H. T.; Zhang, S. D.; Soh, A. K.; Yin, W. Y.

    2015-07-01

    A phase field model is developed to study the flexoelectricity in nanoscale solid dielectrics, which exhibit both structural and elastic inhomogeneity. The model is established for an elastic homogeneous system by taking into consideration all the important non-local interactions, such as electrostatic, elastic, polarization gradient, as well as flexoelectric terms. The model is then extended to simulate a two-phase system with strong elastic inhomogeneity. Both the microscopic domain structures and the macroscopic effective piezoelectricity are thoroughly studied using the proposed model. The results obtained show that the largest flexoelectric induced polarization exists at the interface between the matrix and the inclusion. The effective piezoelectricity is greatly influenced by the inclusion size, volume fraction, elastic stiffness, and the applied stress. The established model in the present study can provide a fundamental framework for computational study of flexoelectricity in nanoscale solid dielectrics, since various boundary conditions can be easily incorporated into the phase field model.

  10. A stochastic phase-field model determined from molecular dynamics

    KAUST Repository

    von Schwerin, Erik

    2010-03-17

    The dynamics of dendritic growth of a crystal in an undercooled melt is determined by macroscopic diffusion-convection of heat and by capillary forces acting on the nanometer scale of the solid-liquid interface width. Its modelling is useful for instance in processing techniques based on casting. The phase-field method is widely used to study the evolution of such microstructural phase transformations on a continuum level; it couples the energy equation to a phenomenological Allen-Cahn/Ginzburg-Landau equation modelling the dynamics of an order parameter determining the solid and liquid phases, including also stochastic fluctuations to obtain the qualitatively correct result of dendritic side branching. This work presents a method to determine stochastic phase-field models from atomistic formulations by coarse-graining molecular dynamics. It has three steps: (1) a precise quantitative atomistic definition of the phase-field variable, based on the local potential energy; (2) derivation of its coarse-grained dynamics model, from microscopic Smoluchowski molecular dynamics (that is, Brownian or overdamped Langevin dynamics); and (3) numerical computation of the coarse-grained model functions. The coarse-grained model approximates Gibbs ensemble averages of the atomistic phase-field, by choosing coarse-grained drift and diffusion functions that minimize the approximation error of observables in this ensemble average. © EDP Sciences, SMAI, 2010.
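
    The Allen-Cahn/Ginzburg-Landau dynamics with stochastic fluctuations mentioned above can be illustrated with a very small 1D explicit time stepper, sketched below. The double-well free energy, mobility, interface coefficient and noise amplitude are generic textbook choices, not the coarse-grained functions computed in the paper.

```python
import numpy as np

# Minimal 1D stochastic Allen-Cahn / Ginzburg-Landau time stepper (explicit Euler).
rng = np.random.default_rng(0)
nx, dx, dt, nsteps = 256, 0.5, 0.01, 5000
gamma, eps, noise = 1.0, 1.0, 0.05      # mobility, gradient coefficient, noise amplitude

phi = -np.ones(nx)                      # start in the "liquid" phase (phi = -1)
phi[nx // 2 - 8 : nx // 2 + 8] = 1.0    # small solid seed (phi = +1)

for _ in range(nsteps):
    lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2   # periodic Laplacian
    dF = phi**3 - phi - eps * lap       # variational derivative of a double-well free energy
    phi += -gamma * dF * dt + noise * np.sqrt(dt) * rng.standard_normal(nx)

print("solid fraction:", float(np.mean(phi > 0.0)))
```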

  11. On the chiral phase transition in the linear sigma model

    International Nuclear Information System (INIS)

    Tran Huu Phat; Nguyen Tuan Anh; Le Viet Hoa

    2003-01-01

    The Cornwall-Jackiw-Tomboulis (CJT) effective action for composite operators at finite temperature is used to investigate the chiral phase transition within the framework of the linear sigma model as the low-energy effective model of quantum chromodynamics (QCD). A new renormalization prescription for the CJT effective action in the Hartree-Fock (HF) approximation is proposed. A numerical study, which incorporates both thermal and quantum effects, shows that in this approximation the phase transition is of first order. However, when the contribution of higher-loop diagrams is taken into account, the order of the phase transition is unchanged. (author)

  12. Kaleidoscope of exotic quantum phases in a frustrated XY model.

    Science.gov (United States)

    Varney, Christopher N; Sun, Kai; Galitski, Victor; Rigol, Marcos

    2011-08-12

    The existence of quantum spin liquids was first conjectured by Pomeranchuk some 70 years ago, who argued that frustration in simple antiferromagnetic theories could result in a Fermi-liquid-like state for spinon excitations. Here we show that a simple quantum spin model on a honeycomb lattice hosts the long sought for Bose metal with a clearly identifiable Bose surface. The complete phase diagram of the model is determined via exact diagonalization and is shown to include four distinct phases separated by three quantum phase transitions.
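
    As a schematic of the exact-diagonalization approach mentioned above (on a much simpler geometry than the frustrated honeycomb model of the paper), the sketch below builds the spin-1/2 XY Hamiltonian for a short periodic chain from Pauli operators and extracts the ground-state energy. Chain length and coupling are arbitrary.

```python
import numpy as np

# Minimal exact diagonalization of a spin-1/2 XY chain with periodic boundaries.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain via Kronecker products."""
    mats = [I2] * n
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n, J = 8, 1.0                            # illustrative chain length and coupling
dim = 2 ** n
H = np.zeros((dim, dim), dtype=complex)
for i in range(n):                       # periodic boundary conditions
    j = (i + 1) % n
    H += J * (site_op(sx, i, n) @ site_op(sx, j, n) + site_op(sy, i, n) @ site_op(sy, j, n))

evals = np.linalg.eigvalsh(H)
print("ground-state energy per site:", evals[0] / n)
```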

  13. Dynamic Model and Vibration Characteristics of Planar 3-RRR Parallel Manipulator with Flexible Intermediate Links considering Exact Boundary Conditions

    Directory of Open Access Journals (Sweden)

    Lianchao Sheng

    2017-01-01

    Due to the complexity of the dynamic model of a planar 3-RRR flexible parallel manipulator (FPM), it is often difficult to implement an active vibration control algorithm based on the system dynamic model. To establish a simple and efficient dynamic model of the planar 3-RRR FPM, in order to study its dynamic characteristics and to build a controller conveniently, the modal function is first determined with the pinned-free boundary condition, considering the effect of rigid-flexible coupling and the moment of inertia at the end of the flexible intermediate link. Then, considering the main vibration modes of the system, a high-efficiency coupled dynamic model is established while guaranteeing the control accuracy of the model. According to the model, the modal characteristics of the flexible intermediate link are analyzed and compared with modal test results. The results show that the model can effectively reflect the main vibration modes of the planar 3-RRR FPM; in addition, the model can be used to analyze the effects of inertial and coupling forces on the dynamics and on the drive torque of the drive motor. Because this model has fewer dynamic parameters, it is convenient for implementing the controller.
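
    For the pinned-free boundary condition mentioned above, the natural frequencies of a uniform Euler-Bernoulli beam follow from the characteristic equation tan(βL) = tanh(βL). The sketch below finds the first few roots numerically and converts them to frequencies; the link's material and cross-section properties are hypothetical, not those of the manipulator studied.

```python
import numpy as np
from scipy.optimize import brentq

# Natural frequencies of a uniform pinned-free Euler-Bernoulli beam.
# Beam properties below are hypothetical (aluminium-like flexible link).
E, I, rho, A, L = 70e9, 2.0e-11, 2700.0, 6.0e-5, 0.25

f = lambda z: np.tan(z) - np.tanh(z)     # characteristic equation tan(bL) = tanh(bL)
roots = []
for k in range(1, 4):                    # one nonzero root per interval (k*pi, k*pi + pi/2)
    roots.append(brentq(f, k * np.pi + 1e-6, k * np.pi + np.pi / 2 - 1e-6))

for k, bL in enumerate(roots, start=1):
    omega = (bL / L) ** 2 * np.sqrt(E * I / (rho * A))   # rad/s
    print(f"mode {k}: beta*L = {bL:.4f}, f = {omega / (2 * np.pi):.1f} Hz")
```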

  14. Mathematical modeling and the two-phase constitutive equations

    International Nuclear Information System (INIS)

    Boure, J.A.

    1975-01-01

    The problems raised by the mathematical modeling of two-phase flows are summarized. The models include several kinds of equations, which cannot be discussed independently, such as the balance equations and the constitutive equations. A review of the various two-phase one-dimensional models proposed to date, and of the constitutive equations they imply, is made. These models are either mixture models or two-fluid models. Because of their potential, the two-fluid models are discussed in more detail. To avoid contradictions, the form of the constitutive equations involved in two-fluid models must be sufficiently general. A special form of the two-fluid models, which has particular advantages, is proposed. It involves three mixture balance equations, three balance equations for slip and thermal non-equilibrium, and the necessary constitutive equations. [fr]

  15. Detailed numerical modeling of a linear parallel-plate Active Magnetic Regenerator

    DEFF Research Database (Denmark)

    Nielsen, Kaspar Kirstein; Bahl, Christian Robert Haffenden; Smith, Anders

    2009-01-01

    A numerical model simulating Active Magnetic Regeneration (AMR) is presented and compared to a selection of experiments. The model is an extension and re-implementation of a previous two-dimensional model. The new model is extended to 2.5D, meaning that parasitic thermal losses are included in th...

  16. Cupola modeling research: Phase 2 (Year one), Final report

    Energy Technology Data Exchange (ETDEWEB)

    1991-11-20

    The objective was to develop a mathematical model of the cupola furnace (cast iron production) for use in on-line and off-line process control and optimization. In Phase I, the general structure of the heat transfer, fluid flow, and chemical models was laid out, providing reasonable descriptions of cupola behavior with a one-dimensional representation. Work was also initiated on a two-dimensional model. Phase II was focused on perfecting the one-dimensional model. The contributions include those from MIT, Michigan University, and GM.

  17. Phase Separation of Superconducting Phases in the Penson-Kolb-Hubbard Model

    Science.gov (United States)

    Jerzy Kapcia, Konrad; Czart, Wojciech Robert; Ptok, Andrzej

    2016-04-01

    In this paper, we determine the phase diagrams (for T = 0 as well as T > 0) of the Penson-Kolb-Hubbard model on the two-dimensional square lattice within Hartree-Fock mean-field theory, focusing on the investigation of superconducting phases and on the possibility of the occurrence of phase separation. We find that phase separation, which is a state of coexistence of two different superconducting phases (with s- and η-wave symmetries), occurs in definite ranges of the electron concentration. In addition, increasing temperature can change the symmetry of the superconducting order parameter (from η-wave into s-wave). The system considered also exhibits interesting multicritical behaviour, including bicritical points. The relevance of the results to experiments on real materials is also discussed.

  18. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  19. Mathematical modeling of disperse two-phase flows

    CERN Document Server

    Morel, Christophe

    2015-01-01

    This book develops the theoretical foundations of disperse two-phase flows, which are characterized by the existence of bubbles, droplets or solid particles finely dispersed in a carrier fluid, which can be a liquid or a gas. Chapters clarify many difficult subjects, including modeling of the interfacial area concentration. Basic knowledge of the subjects treated in this book is essential to practitioners of Computational Fluid Dynamics for two-phase flows in a variety of industrial and environmental settings. The author provides a complete derivation of the basic equations, followed by more advanced subjects like turbulence equations for the two phases (continuous and disperse) and multi-size particulate flow modeling. As well as theoretical material, readers will discover chapters concerned with closure relations and numerical issues. Many physical models are presented, covering key subjects including heat and mass transfers between phases, interfacial forces and fluid particles coalescence and breakup, a...

  20. Mathematical model of two-phase flow in accelerator channel

    Directory of Open Access Journals (Sweden)

    О.Ф. Нікулін

    2010-01-01

    The problem of two-phase flow composed of an energy-carrier phase (Newtonian liquid) and a solid fine-dispersed phase (particles) in a counter-jet mill accelerator channel is considered. The mathematical model is based on the assumption that the phases interact with each other as independent substances by means of aerodynamic forces under adiabatic flow conditions. The mathematical model is presented as a system of differential equations of order 11. The equations are derived from basic physical principles for cross-section-averaged quantities. The model can be used to estimate kinematic and thermodynamic flow characteristics, to solve parameter optimization problems and to determine transfer functions arising in counter-jet mill accelerator channel design.
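
    One building block of such a model is the aerodynamic (drag) coupling between the gas and the particles. The sketch below integrates the motion of a single spherical particle accelerated by a prescribed gas velocity profile; the drag law, particle properties and gas profile are illustrative assumptions, not the record's 11-equation system.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single spherical particle accelerated by a gas stream through aerodynamic drag.
rho_g, rho_p, d_p, Cd = 1.2, 2500.0, 50e-6, 0.44     # gas/particle densities, size, drag coeff.
u_gas = lambda x: 300.0 * np.exp(-x / 0.5)           # prescribed gas velocity along the channel

def rhs(t, y):
    x, v = y
    rel = u_gas(x) - v
    # Drag acceleration for a sphere: 3/4 * Cd * rho_g / (rho_p * d_p) * |u - v| * (u - v)
    a = 0.75 * Cd * rho_g / (rho_p * d_p) * abs(rel) * rel
    return [v, a]

sol = solve_ivp(rhs, (0.0, 0.01), [0.0, 0.0], max_step=1e-5)
print(f"particle velocity after 10 ms: {sol.y[1, -1]:.1f} m/s "
      f"at x = {sol.y[0, -1] * 1000:.1f} mm")
```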