Process modeling and bottleneck mining in online peer-review systems.
Premchaiswadi, Wichian; Porouhan, Parham
2015-01-01
This paper is divided into three main parts. In the first part of the study, we captured, collected and formatted an event log describing the handling of reviews for the proceedings of an international conference in Thailand. In the second part, we used several process mining techniques to discover process models and social, organizational, and hierarchical structures from the proceedings' event log. In the third part, we detected the deviations and bottlenecks of the peer review process by comparing the observed events (i.e., the authentic dataset) with a pre-defined model (i.e., the master map). Finally, we investigated the performance information as well as the total waiting time in order to improve the effectiveness and efficiency of the online submission and peer review system for prospective conferences and seminars. Consequently, the main goals of the study were as follows: (1) to convert the collected event log into the appropriate format supported by process mining analysis tools, (2) to discover process models and to construct social networks based on the collected event log, and (3) to find deviations, discrepancies and bottlenecks between the collected event log and the master pre-defined model. The results showed that although each paper was initially sent to three different reviewers, it was not always possible to make a decision after the first round of reviewing; therefore, additional reviewers were invited. In total, the accepted and rejected manuscripts were reviewed by an average of 3.9 and 3.2 expert reviewers, respectively. Moreover, obvious violations of the rules and regulations relating to careless or inappropriate peer review of a manuscript (committed by the editorial board and other staff) were identified. Nine blocks of activity in the authentic dataset were not completely compatible with the activities defined in the master model. Also, five of the activity traces were not correctly enabled, and seven activities were missed within the
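The conformance-checking step this abstract describes (replaying observed traces against a master model) can be sketched in a few lines. The mini-model and event log below are hypothetical, not the conference data from the paper; a trace activity counts as a deviation when its predecessor does not enable it.

```python
# Minimal conformance-checking sketch: replay event-log traces against a
# "master" model given as an allowed-successor relation. Activities that are
# not enabled by their predecessor are flagged as deviations.
# The model and log below are made up for illustration.

MASTER = {
    "submit": {"invite_reviewer"},
    "invite_reviewer": {"review", "invite_reviewer"},
    "review": {"decide", "invite_reviewer"},
    "decide": {"accept", "reject"},
}

def deviations(trace):
    """Return (index, activity) pairs that the master model does not enable."""
    bad = []
    for i in range(1, len(trace)):
        prev, curr = trace[i - 1], trace[i]
        if curr not in MASTER.get(prev, set()):
            bad.append((i, curr))
    return bad

log = [
    ["submit", "invite_reviewer", "review", "decide", "accept"],
    ["submit", "review", "decide", "reject"],   # "review" without invitation
]
for trace in log:
    print(trace, "->", deviations(trace))
```

Real process-mining tools do this token replay against a Petri net rather than a successor relation, but the idea of marking activities that were "not correctly enabled" is the same.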
Development and bottlenecks of renewable electricity generation in China: a critical review.
Hu, Yuanan; Cheng, Hefa
2013-04-02
This review provides an overview on the development and status of electricity generation from renewable energy sources, namely hydropower, wind power, solar power, biomass energy, and geothermal energy, and discusses the technology, policy, and finance bottlenecks limiting growth of the renewable energy industry in China. Renewable energy, dominated by hydropower, currently accounts for more than 25% of the total electricity generation capacity. China is the world's largest generator of both hydropower and wind power, and also the largest manufacturer and exporter of photovoltaic cells. Electricity production from solar and biomass energy is at the early stages of development in China, while geothermal power generation has received little attention recently. The spatial mismatch in renewable energy supply and electricity demand requires construction of long-distance transmission networks, while the intermittence of renewable energy poses significant technical problems for feeding the generated electricity into the power grid. Besides greater investment in research and technology development, effective policies and financial measures should also be developed and improved to better support the healthy and sustained growth of renewable electricity generation. Meanwhile, attention should be paid to the potential impacts on the local environment from renewable energy development, despite the wider benefits for climate change.
Energy Technology Data Exchange (ETDEWEB)
NONE
2010-09-15
Many of the vulnerabilities to Energy Access, Energy Security, and Environmental Sustainability result from impediments to reaching a global demand-supply balance, as well as local balances, for various energy sources and carriers. Vulnerabilities arise for multiple reasons: regional imbalances of energy production and consumption, the bulky character of the majority of energy fuels, and the virtual necessity of electricity being consumed as it is produced, among others. To detect and prioritize the respective 'bottlenecks' across energy carriers, they have to be measured. In this report, production, consumption, exports, and imports were measured across all major energy carriers for seven key regions of the world for three time frames: 2008, 2020, and 2050. Imbalances between production and consumption form bottlenecks in each region.
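The report's core bottleneck measure, the gap between production and consumption per region and carrier, amounts to a simple aggregation. The regions, carriers, and figures below are made up for the sketch, not taken from the report.

```python
# Hypothetical illustration of the production-consumption imbalance measure:
# negative gaps mark import-dependent (bottleneck-prone) region/carrier pairs.
# All figures are invented for this example.

production = {("EU", "oil"): 2.0, ("EU", "coal"): 3.0, ("ME", "oil"): 9.0}
consumption = {("EU", "oil"): 7.0, ("EU", "coal"): 2.5, ("ME", "oil"): 2.0}

def imbalances(prod, cons):
    """Production minus consumption for every (region, carrier) pair."""
    keys = set(prod) | set(cons)
    return {k: prod.get(k, 0.0) - cons.get(k, 0.0) for k in keys}

for (region, carrier), gap in sorted(imbalances(production, consumption).items()):
    tag = "deficit" if gap < 0 else "surplus"
    print(f"{region}/{carrier}: {gap:+.1f} ({tag})")
```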
Directory of Open Access Journals (Sweden)
Sugiarto Sugiarto
2015-08-01
Full Text Available The term capacity is used to quantify the ability of transport facilities to carry traffic. The capacity of a road is an essential ingredient in the planning, design, and operation of roadways. It is desirable for a traffic analyst to be able to predict the times and places where congestion will occur and the volumes to be expected. Most urbanized areas have been experiencing traffic congestion problems, particularly on urban arterial systems. High traffic demand and a limited supply of roadways are the main factors producing traffic congestion. However, there are other sources of local and temporal congestion, such as uncontrolled access points, median openings and on-street parking activities, which cause a reduction of roadway capacity during peak operations. Such locations can reduce travel speed and road capacity and are known as hidden bottlenecks: bottlenecks that occur without any change in the geometry of the segments. The Indonesian Highway Capacity Manual (IHCM, 1997) is used to assess urban arterial systems to the current day. IHCM provides a static method for examining capacity and does not systematically take account of bottleneck activities. However, bottleneck activities interrupt smooth traffic flow along arterial streets, which in turn stimulates related problems such as excessive air pollution, additional energy consumption and driver frustration due to traffic jams. These conditions can happen simultaneously and are mostly repetitive and predictable in the same peak-hour demands. Therefore, this paper carefully summarizes the existing methodologies, considering the required data, the data processing involved and the expected output of each proposed analysis. We further note that a dynamic approach could be more appropriate for analyzing temporal congestion segments (median openings, on-street parking, etc.). The method of oblique cumulative plots seems to be more applicable in terms of
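The oblique cumulative plot method mentioned at the end of the abstract can be illustrated with a short sketch: subtracting a reference flow q0·t from the cumulative vehicle count N(t) makes a discharge-rate (capacity) drop stand out as a change of slope. The counts below are synthetic.

```python
# Sketch of the oblique cumulative count technique: N(t) - q0*t for
# per-interval vehicle counts. A flat segment means flow equals q0; a
# downward-sloping segment reveals a capacity drop (bottleneck activation).

def oblique_counts(counts, q0, dt=1.0):
    """Return N(t) - q0*t evaluated at the end of each counting interval."""
    total, out = 0.0, []
    for i, c in enumerate(counts, start=1):
        total += c
        out.append(total - q0 * i * dt)
    return out

flow = [20, 20, 20, 12, 12, 12]      # discharge drops after interval 3
print(oblique_counts(flow, q0=20))   # [0.0, 0.0, 0.0, -8.0, -16.0, -24.0]
```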
Differential equations problem solver
Arterburn, David R
2012-01-01
REA's Problem Solvers is a series of useful, practical, and informative study guides. Each title in the series is a complete step-by-step solution guide. The Differential Equations Problem Solver enables students to solve difficult problems by showing them step-by-step solutions to Differential Equations problems. The Problem Solvers cover material ranging from the elementary to the advanced and make excellent review books and textbook companions. They're perfect for undergraduate and graduate studies. The Differential Equations Problem Solver is the perfect resource for any class, any exam, and
Traffic behavior at freeway bottlenecks.
2014-09-01
This study examines traffic behavior in the vicinity of a freeway bottleneck, revisiting commonly held assumptions and uncovering systematic biases that likely have distorted empirical studies of bottleneck formation, capacity drop, and the funda...
Asynchronous Parallelization of a CFD Solver
Abdi, Daniel S.; Bitsuamlak, Girma T.
2015-01-01
The article of record as published may be found at http://dx.doi.org/10.1155/2015/295393 A Navier-Stokes equations solver is parallelized to run on a cluster of computers using the domain decomposition method. Two approaches to communication and computation are investigated, namely, synchronous and asynchronous methods. Asynchronous communication between subdomains is not commonly used in CFD codes; however, it has a potential to alleviate scaling bottlenecks incurred due to process...
Advances and bottlenecks in microbial hydrogen production.
Stephen, Alan J; Archer, Sophie A; Orozco, Rafael L; Macaskie, Lynne E
2017-09-01
Biological production of hydrogen is poised to become a significant player in the future energy mix. This review highlights recent advances and bottlenecks in various approaches to biohydrogen processes, often in concert with management of organic wastes or waste CO2. Some key bottlenecks are highlighted in terms of the overall energy balance of the process, along with the need for economic and environmental life cycle analyses that also consider socio-economic and geographical issues. © 2017 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.
Electric circuits problem solver
REA, Editors of
2012-01-01
Each Problem Solver is an insightful and essential study and solution guide chock-full of clear, concise problem-solving gems. All your questions can be found in one convenient source from one of the most trusted names in reference solution guides. More useful, more practical, and more informative, these study aids are the best review books and textbook companions available. Nothing remotely as comprehensive or as helpful exists in their subject anywhere. Perfect for undergraduate and graduate studies. Here in this highly useful reference is the finest overview of electric circuits currently av
Advanced calculus problem solver
REA, Editors of
2012-01-01
Each Problem Solver is an insightful and essential study and solution guide chock-full of clear, concise problem-solving gems. All your questions can be found in one convenient source from one of the most trusted names in reference solution guides. More useful, more practical, and more informative, these study aids are the best review books and textbook companions available. Nothing remotely as comprehensive or as helpful exists in their subject anywhere. Perfect for undergraduate and graduate studies. Here in this highly useful reference is the finest overview of advanced calculus currently av
The Army's Bandwidth Bottleneck
2003-08-01
representation requires a minimum of eight bits of information per pixel. The cinematic illusion of movement requires about 32 frames per second. ...Information Theory, vol. 46, no. 2 (March 2000), pp. 388-404. 3. The development and adoption of new methods, including so-called dynamic protocols, for... Delaney, “Independent Review of Technology Maturity Assessment for Future Combat Systems Increment 1” (March 3, 2003). The study was commissioned by the
Advanced Algebraic Multigrid Solvers for Subsurface Flow Simulation
Chen, Meng-Huo
2015-09-13
In this research we are particularly interested in extending the robustness of multigrid solvers to complex systems related to subsurface reservoir applications for flow problems in porous media. In many cases, the step of solving for the pressure field in subsurface flow simulation becomes a bottleneck for the performance of the simulator. For solving the large sparse linear systems arising from MPFA discretization, we choose multigrid methods as the linear solver. The possible difficulties and issues will be addressed and the corresponding remedies studied. With multigrid methods as the linear solver, the simulator can be parallelized (although not trivially) and high-resolution simulation becomes feasible, which is the ultimate goal we desire to achieve.
Mitigating SDN controller performance bottlenecks
DEFF Research Database (Denmark)
Caba, Cosmin Marius; Soler, José
2015-01-01
The centralization of the control plane decision logic in Software Defined Networking (SDN) has raised concerns regarding the performance of the SDN Controller (SDNC) when the network scales up. A number of solutions have been proposed in the literature to address these concerns. This paper ... proposes a new approach for addressing the performance bottlenecks that arise from limited computational resources at the SDNC. The proposed approach is based on optimally configuring the operating parameters of the components residing inside the SDNC (network control functions such as monitoring, routing...
Brouwer-Janse, M.D.
1991-01-01
Most formal problem-solving studies use verbal protocol and observational data of problem solvers working on a task. In user-centred product-design projects, observational studies of users are frequently used too. In the latter case, however, systematic control of conditions, in-depth analysis and
Parallel linear solvers for simulations of reactor thermal hydraulics
International Nuclear Information System (INIS)
Yan, Y.; Antal, S.P.; Edge, B.; Keyes, D.E.; Shaver, D.; Bolotnov, I.A.; Podowski, M.Z.
2011-01-01
The state-of-the-art multiphase fluid dynamics code, NPHASE-CMFD, performs multiphase flow simulations in complex domains using implicit nonlinear treatment of the governing equations and in parallel, which is a very challenging environment for the linear solver. The present work illustrates how the Portable, Extensible Toolkit for Scientific Computation (PETSc) and the scalable Algebraic Multigrid (AMG) preconditioner from Hypre can be utilized to construct robust and scalable linear solvers for the Newton correction equation obtained from the discretized system of governing conservation equations in NPHASE-CMFD. The overall long-term objective of this work is to extend the NPHASE-CMFD code into a fully-scalable solver of multiphase flow and heat transfer problems, applicable to both steady-state and stiff time-dependent phenomena in complete fuel assemblies of nuclear reactors and, eventually, the entire reactor core (such as the Virtual Reactor concept envisioned by CASL). This campaign appropriately begins with the linear algebraic equation solver, which is traditionally a bottleneck to scalability in PDE-based codes. The computational complexity of the solver is usually superlinear in problem size, whereas the rest of the code, the “physics” portion, usually has its complexity linear in the problem size. (author)
High performance simplex solver
Huangfu, Qi
2013-01-01
The dual simplex method is frequently the most efficient technique for solving linear programming (LP) problems. This thesis describes an efficient implementation of the sequential dual simplex method and the design and development of two parallel dual simplex solvers. In serial, many advanced techniques for the (dual) simplex method are implemented, including sparse LU factorization, hyper-sparse linear system solution technique, efficient approaches to updating LU factors and...
Memory transfer optimization for a lattice Boltzmann solver on Kepler architecture nVidia GPUs
Mawson, Mark J.; Revell, Alistair J.
2014-10-01
The Lattice Boltzmann method (LBM) for solving fluid flow is naturally well suited to an efficient implementation for massively parallel computing, due to the prevalence of local operations in the algorithm. This paper presents and analyses the performance of a 3D lattice Boltzmann solver, optimized for third generation nVidia GPU hardware, also known as 'Kepler'. We provide a review of previous optimization strategies and analyse data read/write times for different memory types. In LBM, the time propagation step (known as streaming) involves shifting data to adjacent locations and is central to parallel performance; here we examine three approaches which make use of different hardware options. Two of these make use of 'performance enhancing' features of the GPU: shared memory and the new shuffle instruction found in Kepler based GPUs. These are compared to a standard transfer of data which relies instead on optimized storage to increase coalesced access. It is shown that the simpler approach is the most efficient; since the need for large numbers of registers per thread in LBM limits the block size, the efficiency of these special features is reduced. Detailed results are obtained for a D3Q19 LBM solver, which is benchmarked on nVidia K5000M and K20C GPUs. In the latter case the use of a read-only data cache is explored, and peak performance of over 1036 Million Lattice Updates Per Second (MLUPS) is achieved. The appearance of a periodic bottleneck in the solver performance is also reported, believed to be hardware related; spikes in iteration time occur with a frequency of around 11 Hz for both GPUs, independent of the size of the problem.
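The streaming step described above is, at heart, a neighbour shift of distribution values. A toy 1D periodic version (not the paper's D3Q19 GPU kernel) shows the data movement pattern whose memory traffic the paper optimizes:

```python
# Toy 1D illustration of the LBM streaming step: each distribution f_i is
# shifted one node along its velocity direction, with periodic wrap-around.
# A real D3Q19 kernel moves 19 such populations per node in 3D; only the
# shift logic is shown here.

def stream(f_plus, f_minus):
    """Periodic streaming: f_plus moves right, f_minus moves left."""
    return ([f_plus[-1]] + f_plus[:-1],
            f_minus[1:] + [f_minus[0]])

fp, fm = [1, 2, 3, 4], [5, 6, 7, 8]
fp, fm = stream(fp, fm)
print(fp, fm)   # [4, 1, 2, 3] [6, 7, 8, 5]
```

On a GPU this shift is exactly the operation that determines whether neighbouring threads read coalesced memory, which is why the paper compares shared-memory, shuffle, and plain-transfer implementations of it.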
Cloud Technology May Widen Genomic Bottleneck - TCGA
Computational biologist Dr. Ilya Shmulevich suggests that renting cloud computing power might widen the bottleneck for analyzing genomic data. Learn more about his experience with the Cloud in this TCGA in Action Case Study.
Anticipation Behavior Upstream of a Bottleneck
Duives, D.C.; Daamen, W.; Hoogendoorn, S.P.
2014-01-01
Whether pedestrian movements do or do not follow similar patterns as vehicular traffic while experiencing congestion is not entirely understood. Using data gathered during bottleneck experiments under laboratory conditions, the phenomenon of anticipation before entering congestion is studied. This
Chemical Mechanism Solvers in Air Quality Models
Directory of Open Access Journals (Sweden)
John C. Linford
2011-09-01
Full Text Available The solution of chemical kinetics is one of the most computationally intensive tasks in atmospheric chemical transport simulations. Due to the stiff nature of the system, implicit time stepping algorithms which repeatedly solve linear systems of equations are necessary. This paper reviews the issues and challenges associated with the construction of efficient chemical solvers, discusses several families of algorithms, presents strategies for increasing computational efficiency, and gives insight into implementing chemical solvers on accelerated computer architectures.
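A minimal example of the implicit time stepping the review discusses: backward Euler for a stiff linear decay dy/dt = -k*y, where each step requires solving a (here scalar) linear system. The rate constant and step size are illustrative; at this step size an explicit Euler update would be unstable.

```python
# Backward Euler for the stiff decay dy/dt = -k*y. Each step solves the
# (scalar) linear system (1 + k*dt) * y_new = y_old; in a real chemical
# mechanism this is a coupled sparse system, which is why solver efficiency
# dominates the cost. Parameters are illustrative.

def backward_euler_decay(y0, k, dt, steps):
    y = y0
    for _ in range(steps):
        y = y / (1.0 + k * dt)   # the per-step linear solve
    return y

# With k*dt = 10, explicit Euler would oscillate and blow up (|1 - k*dt| > 1);
# backward Euler decays monotonically toward zero.
print(backward_euler_decay(1.0, k=1000.0, dt=0.01, steps=5))   # ~6.2e-06
```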
Sherlock Holmes, Master Problem Solver.
Ballew, Hunter
1994-01-01
Shows the connections between Sherlock Holmes's investigative methods and mathematical problem solving, including observations, characteristics of the problem solver, importance of data, questioning the obvious, learning from experience, learning from errors, and indirect proof. (MKR)
A Survey of Solver-Related Geometry and Meshing Issues
Masters, James; Daniel, Derick; Gudenkauf, Jared; Hine, David; Sideroff, Chris
2016-01-01
There is a concern in the computational fluid dynamics community that mesh generation is a significant bottleneck in the CFD workflow. This is one of several papers that will help set the stage for a moderated panel discussion addressing this issue. Although certain general "rules of thumb" and a priori mesh metrics can be used to ensure that some base level of mesh quality is achieved, inadequate consideration is often given to the type of solver or particular flow regime on which the mesh will be utilized. This paper explores how an analyst may want to think differently about a mesh based on considerations such as if a flow is compressible vs. incompressible or hypersonic vs. subsonic or if the solver is node-centered vs. cell-centered. This paper is a high-level investigation intended to provide general insight into how considering the nature of the solver or flow when performing mesh generation has the potential to increase the accuracy and/or robustness of the solution and drive the mesh generation process to a state where it is no longer a hindrance to the analysis process.
The Plasmodium bottleneck: malaria parasite losses in the mosquito vector
Smith, Ryan C; Vega-Rodríguez, Joel; Jacobs-Lorena, Marcelo
2014-01-01
Nearly one million people are killed every year by the malaria parasite Plasmodium. Although the disease-causing forms of the parasite exist only in the human blood, mosquitoes of the genus Anopheles are the obligate vector for transmission. Here, we review the parasite life cycle in the vector and highlight the human and mosquito contributions that limit malaria parasite development in the mosquito host. We address parasite killing in its mosquito host and bottlenecks in parasite numbers that might guide intervention strategies to prevent transmission. PMID:25185005
The Plasmodium bottleneck: malaria parasite losses in the mosquito vector
Directory of Open Access Journals (Sweden)
Ryan C Smith
2014-08-01
Full Text Available Nearly one million people are killed every year by the malaria parasite Plasmodium. Although the disease-causing forms of the parasite exist only in the human blood, mosquitoes of the genus Anopheles are the obligate vector for transmission. Here, we review the parasite life cycle in the vector and highlight the human and mosquito contributions that limit malaria parasite development in the mosquito host. We address parasite killing in its mosquito host and bottlenecks in parasite numbers that might guide intervention strategies to prevent transmission.
Modern solvers for Helmholtz problems
Tang, Jok; Vuik, Kees
2017-01-01
This edited volume offers a state of the art overview of fast and robust solvers for the Helmholtz equation. The book consists of three parts: new developments and analysis in Helmholtz solvers, practical methods and implementations of Helmholtz solvers, and industrial applications. The Helmholtz equation appears in a wide range of science and engineering disciplines in which wave propagation is modeled. Examples are: seismic inversion, ultrasound medical imaging, sonar detection of submarines, waves in harbours and many more. The partial differential equation looks simple but is hard to solve. In order to approximate the solution of the problem, numerical methods are needed. First a discretization is done. Various methods can be used: (high order) Finite Difference Method, Finite Element Method, Discontinuous Galerkin Method and Boundary Element Method. The resulting linear system is large, where the size of the problem increases with increasing frequency. Due to higher frequencies the seismic images need to b...
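As a concrete miniature of the discretize-then-solve pipeline the blurb describes, here is a 1D Helmholtz problem u'' + k²u = f with homogeneous Dirichlet ends, discretized with second-order finite differences and solved directly with the Thomas algorithm. This is only a sketch; the book's subject is the much harder high-frequency, multi-dimensional case, where such direct tridiagonal tricks no longer apply.

```python
# 1D Helmholtz sketch: second-order finite differences give a tridiagonal
# system with off-diagonal a = 1/h^2 and diagonal b = k^2 - 2/h^2, solved
# here with the Thomas algorithm. Grid size and k are illustrative (k away
# from a resonance of the interval).

def solve_helmholtz_1d(f, k, h):
    """Solve u'' + k^2 u = f on a uniform grid with u = 0 at both ends."""
    n = len(f)
    a = 1.0 / h**2               # sub- and super-diagonal
    b = k**2 - 2.0 / h**2        # diagonal
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = a / b, f[0] / b
    for i in range(1, n):        # forward elimination
        m = b - a * cp[i - 1]
        cp[i] = a / m
        dp[i] = (f[i] - a * dp[i - 1]) / m
    u = [0.0] * n                # back substitution
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

n, k, h = 9, 1.0, 0.1
u = solve_helmholtz_1d([1.0] * n, k, h)
# quick check: residual of the tridiagonal system should be near machine epsilon
a, b = 1.0 / h**2, k**2 - 2.0 / h**2
res = max(abs((u[i - 1] if i else 0.0) * a + b * u[i]
              + (u[i + 1] if i < n - 1 else 0.0) * a - 1.0) for i in range(n))
print(res)
```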
Metaheuristics progress as real problem solvers
Nonobe, Koji; Yagiura, Mutsunori
2005-01-01
Metaheuristics: Progress as Real Problem Solvers is a peer-reviewed volume of eighteen current, cutting-edge papers by leading researchers in the field. Included are an invited paper by F. Glover and G. Kochenberger, which discusses the concept of Metaheuristic agent processes, and a tutorial paper by M.G.C. Resende and C.C. Ribeiro discussing GRASP with path-relinking. Other papers discuss problem-solving approaches to timetabling, automated planograms, elevators, space allocation, shift design, cutting stock, flexible shop scheduling, colorectal cancer and cartography. A final group of methodology papers clarify various aspects of Metaheuristics from the computational view point.
Self-correcting Multigrid Solver
International Nuclear Information System (INIS)
Lewandowski, Jerome L.V.
2004-01-01
A new multigrid algorithm based on the method of self-correction for the solution of elliptic problems is described. The method exploits information contained in the residual to dynamically modify the source term (right-hand side) of the elliptic problem. It is shown that the self-correcting solver is more efficient at damping the short wavelength modes of the algebraic error than its standard equivalent. When used in conjunction with a multigrid method, the resulting solver displays an improved convergence rate with no additional computational work
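The property the abstract appeals to, that a smoother damps short-wavelength algebraic error quickly, can be demonstrated with weighted Jacobi on the 1D Poisson operator. This illustrates the general multigrid background only, not the paper's self-correction scheme.

```python
# Weighted Jacobi on -u_{i-1} + 2u_i - u_{i+1} = b_i with u = 0 at the ends.
# Starting from the highest-frequency error mode, a few sweeps reduce the
# error dramatically, which is what makes such methods good multigrid
# smoothers. Grid size and omega are illustrative (omega = 2/3 is classical).

def weighted_jacobi_sweep(x, b, omega=2.0 / 3.0):
    n = len(x)
    new = x[:]
    for i in range(n):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < n - 1 else 0.0
        new[i] = (1 - omega) * x[i] + omega * (b[i] + left + right) / 2.0
    return new

n = 31
err = [(-1.0) ** i for i in range(n)]   # highest-frequency error mode
b = [0.0] * n                            # exact solution is zero
for _ in range(5):
    err = weighted_jacobi_sweep(err, b)
print(max(abs(e) for e in err))          # well below the initial amplitude 1.0
```

The complementary fact, that the same sweep barely touches long-wavelength error, is why the residual transferred to coarser grids (or, in this paper, fed back into the source term) carries the remaining work.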
Kangaroo mother care: a multi-country analysis of health system bottlenecks and potential solutions.
Vesel, Linda; Bergh, Anne-Marie; Kerber, Kate J; Valsangkar, Bina; Mazia, Goldy; Moxon, Sarah G; Blencowe, Hannah; Darmstadt, Gary L; de Graft Johnson, Joseph; Dickson, Kim E; Ruiz Peláez, Juan; von Xylander, Severin; Lawn, Joy E
2015-01-01
Preterm birth is now the leading cause of under-five child deaths worldwide with one million direct deaths plus approximately another million where preterm is a risk factor for neonatal deaths due to other causes. There is strong evidence that kangaroo mother care (KMC) reduces mortality among babies with birth weight Asia as part of the Every Newborn Action Plan process. Country workshops involved technical experts to complete the survey tool, which is designed to synthesise and grade health system "bottlenecks": factors that hinder the scale-up of maternal-newborn intervention packages. We used quantitative and qualitative methods to analyse the bottleneck data, combined with literature review, to present priority bottlenecks and actions relevant to different health system building blocks for KMC. Marked differences were found in the perceived severity of health system bottlenecks between Asian and African countries, with the former reporting more significant or very major bottlenecks for KMC with respect to all the health system building blocks. Community ownership and health financing bottlenecks were significant or very major bottlenecks for KMC in both low and high mortality contexts, particularly in South Asia. Significant bottlenecks were also reported for the leadership and governance and health workforce building blocks. There are at least a dozen countries worldwide with national KMC programmes, and we identify three pathways to scale: (1) champion-led; (2) project-initiated; and (3) health systems designed. The combination of all three pathways may lead to more rapid scale-up. KMC has the potential to save lives, and change the face of facility-based newborn care, whilst empowering women to care for their preterm newborns.
Reliability of genetic bottleneck tests for detecting recent population declines
Peery, M. Zachariah; Kirby, Rebecca; Reid, Brendan N.; Stoelting, Ricka; Doucet-Beer, Elena; Robinson, Stacie; Vasquez-Carrillo, Catalina; Pauli, Jonathan N.; Palsboll, Per J.
The identification of population bottlenecks is critical in conservation because populations that have experienced significant reductions in abundance are subject to a variety of genetic and demographic processes that can hasten extinction. Genetic bottleneck tests constitute an appealing and
Creativity for Problem Solvers
DEFF Research Database (Denmark)
Vidal, Rene Victor Valqui
2009-01-01
This paper presents some modern and interdisciplinary concepts about creativity and creative processes specially related to problem solving. Central publications related to the theme are briefly reviewed. Creative tools and approaches suitable to support problem solving are also presented. Finally, ... the paper outlines the author’s experiences using creative tools and approaches to: facilitation of problem solving processes, strategy development in organisations, design of optimisation systems for large scale and complex logistic systems, and creative design of software optimisation for complex non...
Bottlenecks to coral recovery in the Seychelles
Chong-Seng, K. M.; Graham, N. A. J.; Pratchett, M. S.
2014-06-01
Processes that affect recovery of coral assemblages require investigation because coral reefs are experiencing a diverse array of more frequent disturbances. Potential bottlenecks to coral recovery include limited larval supply, low rates of settlement, and high mortality of new recruits or juvenile corals. We investigated spatial variation in local abundance of scleractinian corals in the Seychelles at three distinct life history stages (recruits, juveniles, and adults) on reefs with differing benthic conditions. Following widespread coral loss due to the 1998 bleaching event, some reefs are recovering (i.e., relatively high scleractinian coral cover: `coral-dominated'), some reefs have low cover of living macrobenthos and unconsolidated rubble substrates (`rubble-dominated'), and some reefs have high cover of macroalgae (`macroalgal-dominated'). Rates of coral recruitment to artificial settlement tiles were similar across all reef conditions, suggesting that larval supply does not explain differential coral recovery across the three reef types. However, acroporid recruits were absent on macroalgal-dominated reefs (0.0 ± 0.0 recruits tile⁻¹) in comparison to coral-dominated reefs (5.2 ± 1.6 recruits tile⁻¹). Juvenile coral colony density was significantly lower on macroalgal-dominated reefs (2.4 ± 1.1 colonies m⁻²), compared to coral-dominated reefs (16.8 ± 2.4 m⁻²) and rubble-dominated reefs (33.1 ± 7.3 m⁻²), suggesting that macroalgal-dominated reefs have either a bottleneck to successful settlement on the natural substrates or a high post-settlement mortality bottleneck. Rubble-dominated reefs had very low cover of adult corals (10.0 ± 1.7 %) compared to coral-dominated reefs (33.4 ± 3.6 %) despite no statistical difference in their juvenile coral densities. A bottleneck caused by low juvenile colony survivorship on unconsolidated rubble-dominated reefs is possible, or alternatively, recruitment to rubble-dominated reefs has only recently begun. This
Iterative solvers in forming process simulations
van den Boogaard, Antonius H.; Rietman, Bert; Huetink, Han
1998-01-01
The use of iterative solvers in implicit forming process simulations is studied. The time and memory requirements are compared with direct solvers and assessed in relation with the rest of the Newton-Raphson iteration process. It is shown that conjugate gradient-like solvers with a proper
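A bare-bones conjugate gradient solver of the kind compared against direct solvers above, written for a small dense symmetric positive definite system. Production FE codes use sparse storage and preconditioning; this only shows the iteration itself.

```python
# Unpreconditioned conjugate gradient for A x = b with A symmetric positive
# definite. Dense lists are used for clarity; real forming-simulation codes
# work on large sparse matrices, where CG's low memory footprint is the draw.

def cg(A, b, tol=1e-10, max_iter=200):
    n = len(b)
    x = [0.0] * n
    r = b[:]                        # residual b - A*x with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]        # symmetric positive definite
b = [1.0, 2.0]
print(cg(A, b))                      # ~[0.0909, 0.6364]
```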
A theory of traffic congestion at moving bottlenecks
Energy Technology Data Exchange (ETDEWEB)
Kerner, Boris S [Daimler AG, GR/PTF, HPC: G021, 71059 Sindelfingen (Germany); Klenov, Sergey L, E-mail: boris.kerner@daimler.co [Department of Physics, Moscow Institute of Physics and Technology, 141700 Dolgoprudny, Moscow Region (Russian Federation)
2010-10-22
The physics of traffic congestion occurring at a moving bottleneck on a multi-lane road is revealed based on the numerical analyses of vehicular traffic with a discrete stochastic traffic flow model in the framework of three-phase traffic theory. We find that there is a critical speed of a moving bottleneck at which traffic breakdown, i.e. a first-order phase transition from free flow to synchronized flow, occurs spontaneously at the moving bottleneck, if the flow rate upstream of the bottleneck is great enough. The greater the flow rate, the higher the critical speed of the moving bottleneck. A diagram of congested traffic patterns at the moving bottleneck is found, which shows regions in the flow-rate-moving-bottleneck-speed plane in which congested patterns emerge spontaneously or can be induced through large enough disturbances in an initial free flow. A comparison of features of traffic breakdown and resulting congested patterns at the moving bottleneck with known ones at an on-ramp (and other motionless) bottleneck is made. Nonlinear features of complex interactions and transformations of congested traffic patterns occurring at on- and off-ramp bottlenecks due to the existence of the moving bottleneck are found. The physics of the phenomenon of traffic congestion due to 'elephant racing' on a multi-lane road is revealed.
A theory of traffic congestion at moving bottlenecks
International Nuclear Information System (INIS)
Kerner, Boris S; Klenov, Sergey L
2010-01-01
The physics of traffic congestion occurring at a moving bottleneck on a multi-lane road is revealed based on the numerical analyses of vehicular traffic with a discrete stochastic traffic flow model in the framework of three-phase traffic theory. We find that there is a critical speed of a moving bottleneck at which traffic breakdown, i.e. a first-order phase transition from free flow to synchronized flow, occurs spontaneously at the moving bottleneck, if the flow rate upstream of the bottleneck is great enough. The greater the flow rate, the higher the critical speed of the moving bottleneck. A diagram of congested traffic patterns at the moving bottleneck is found, which shows regions in the flow-rate-moving-bottleneck-speed plane in which congested patterns emerge spontaneously or can be induced through large enough disturbances in an initial free flow. A comparison of features of traffic breakdown and resulting congested patterns at the moving bottleneck with known ones at an on-ramp (and other motionless) bottleneck is made. Nonlinear features of complex interactions and transformations of congested traffic patterns occurring at on- and off-ramp bottlenecks due to the existence of the moving bottleneck are found. The physics of the phenomenon of traffic congestion due to 'elephant racing' on a multi-lane road is revealed.
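The kind of discrete traffic model both records describe can be caricatured with a deterministic Nagel-Schreckenberg-style cellular automaton containing one speed-capped vehicle as the moving bottleneck. This toy (not the authors' three-phase stochastic model) shows a platoon condensing behind the slow vehicle until every vehicle travels at its speed.

```python
# Deterministic cellular-automaton sketch of a moving bottleneck: vehicles on
# a ring road accelerate toward their speed limit but never exceed the gap to
# the vehicle ahead. One vehicle is capped at speed 1 and acts as the moving
# bottleneck; the others queue behind it. All parameters are illustrative.

def step(pos, vel, vmax, road_len):
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    new_vel = vel[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]
        gap = (pos[ahead] - pos[i] - 1) % road_len   # empty cells ahead
        new_vel[i] = min(vel[i] + 1, vmax[i], gap)
    new_pos = [(pos[i] + new_vel[i]) % road_len for i in range(n)]
    return new_pos, new_vel

road_len = 50
pos = list(range(0, 40, 4))          # 10 vehicles
vel = [0] * 10
vmax = [5] * 10
vmax[0] = 1                          # the moving bottleneck

for _ in range(300):
    pos, vel = step(pos, vel, vmax, road_len)
print(sorted(vel))                   # every vehicle locked to speed 1
```

The randomization term of the full model (and hence spontaneous breakdown at a critical bottleneck speed) is deliberately omitted here; only the congested-pattern formation behind the slow vehicle survives in this deterministic reduction.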
A generalized gyrokinetic Poisson solver
International Nuclear Information System (INIS)
Lin, Z.; Lee, W.W.
1995-03-01
A generalized gyrokinetic Poisson solver has been developed, which employs local operations in the configuration space to compute the polarization density response. The new technique is based on the actual physical process of gyrophase-averaging. It is useful for nonlocal simulations using general geometry equilibrium. Since it utilizes local operations rather than the global ones such as FFT, the new method is most amenable to massively parallel algorithms
Situation awareness of active distribution network: roadmap, technologies, and bottlenecks
DEFF Research Database (Denmark)
Lin, Jin; Wan, Can; Song, Yonghua
2016-01-01
With the rapid development of local generation and demand response, the active distribution network (ADN), which aggregates and manages miscellaneous distributed resources, has moved from theory to practice. Secure and optimal operations now require an advanced situation awareness (SA) system so ... in the project of developing an SA system as the basic component of a practical active distribution management system (ADMS) deployed in Beijing, China, is presented. This paper reviews the ADN’s development roadmap by illustrating the changes that are made in elements, topology, structure, and control scheme. ... Taking into consideration these hardware changes, a systematic framework is proposed for the main components and the functional hierarchy of an SA system for the ADN. The SA system’s implementation bottlenecks are also presented, including, but not limited to, issues in big data platform, distribution...
Parallel Computation of the Jacobian Matrix for Nonlinear Equation Solvers Using MATLAB
Rose, Geoffrey K.; Nguyen, Duc T.; Newman, Brett A.
2017-01-01
Demonstrating speedup for parallel code on a multicore shared memory PC can be challenging in MATLAB due to underlying parallel operations that are often opaque to the user. This can limit potential for improvement of serial code even for the so-called embarrassingly parallel applications. One such application is the computation of the Jacobian matrix inherent to most nonlinear equation solvers. Computation of this matrix represents the primary bottleneck in nonlinear solver speed such that commercial finite element (FE) and multi-body-dynamic (MBD) codes attempt to minimize computations. A timing study using MATLAB's Parallel Computing Toolbox was performed for numerical computation of the Jacobian. Several approaches for implementing parallel code were investigated while only the single program multiple data (spmd) method using composite objects provided positive results. Parallel code speedup is demonstrated but the goal of linear speedup through the addition of processors was not achieved due to PC architecture.
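The column-wise independence that makes Jacobian evaluation "embarrassingly parallel", as described above, can be illustrated with a minimal forward-difference sketch (in Python rather than MATLAB, and not the study's code; the example function is hypothetical):

```python
# Sketch: numerical Jacobian by forward differences. Each column depends
# only on one perturbed input, so the columns can be computed in parallel
# (e.g. MATLAB's spmd, or Python's multiprocessing) -- the structure the
# abstract's timing study exploits.

def numerical_jacobian(f, x, h=1e-6):
    """Approximate J[i][j] = df_i/dx_j at x by forward differences."""
    fx = f(x)
    n, m = len(fx), len(x)
    J = [[0.0] * m for _ in range(n)]
    for j in range(m):            # each column is independent
        xp = list(x)
        xp[j] += h
        fxp = f(xp)
        for i in range(n):
            J[i][j] = (fxp[i] - fx[i]) / h
    return J

# Example: f(x, y) = (x*y, x + y) has exact Jacobian [[y, x], [1, 1]]
J = numerical_jacobian(lambda v: [v[0] * v[1], v[0] + v[1]], [2.0, 3.0])
```
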
A Family of High-Performance Solvers for Linear Model Predictive Control
DEFF Research Database (Denmark)
Frison, Gianluca; Sokoler, Leo Emil; Jørgensen, John Bagterp
2014-01-01
In Model Predictive Control (MPC), an optimization problem has to be solved at each sampling time, and this has traditionally limited the use of MPC to systems with slow dynamics. In this paper, we propose an efficient solution strategy for the unconstrained sub-problems that give the search-direction in Interior-Point (IP) methods for MPC, and that usually are the computational bottleneck. This strategy combines a Riccati-like solver with the use of high-performance computing techniques: in particular, in this paper we explore the performance boost given by the use of single precision computation...
Fast Multipole-Based Elliptic PDE Solver and Preconditioner
Ibeid, Huda
2016-12-07
Exascale systems are predicted to have approximately one billion cores, assuming Gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programming model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. Fast multipole methods (FMM) were originally developed for accelerating N-body problems for particle-based methods in astrophysics and molecular dynamics. FMM is more than an N-body solver, however. Recent efforts to view the FMM as an elliptic PDE solver have opened the possibility to use it as a preconditioner for even a broader range of applications. In this thesis, we (i) discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on inter-node communication, and develop a performance model that considers the communication patterns of the FMM for spatially quasi-uniform distributions, (ii) employ this performance model to guide performance and scaling improvement of FMM for all-atom molecular dynamics simulations of uniformly distributed particles, and (iii) demonstrate that, beyond its traditional use as a solver in problems for which explicit free-space kernel representations are available, the FMM has applicability as a preconditioner in finite domain elliptic boundary value problems, by equipping it with boundary integral capability for satisfying conditions at finite boundaries and by wrapping it in a Krylov method for extensibility to more general operators. Compared with multilevel methods, FMM is capable of comparable algebraic convergence rates down to the truncation error of the discretized PDE, and it has superior multicore and distributed memory scalability properties on commodity
Budget process bottlenecks for immunization financing in the Democratic Republic of the Congo (DRC).
Le Gargasson, Jean-Bernard; Mibulumukini, Benoît; Gessner, Bradford D; Colombini, Anaïs
2014-02-19
In the Democratic Republic of the Congo (DRC), the availability of domestic resources for the immunization program is limited and relies mostly on external donor support. DRC has introduced a series of reforms to move the country toward performance-based management and program budgets. The objectives of the study were to: (i) describe the budget process norm, (ii) analyze the budget process in practice and associated bottlenecks at each of its phases, and (iii) collect suggestions made by the actors involved to improve the situation. Quantitative and qualitative data were collected through a review of published and gray literature, and individual interviews. Bottlenecks in the budget process and disbursement of funds for immunization are one of the causes of limited domestic resources for the program. Critical bottlenecks include: excessive use of off-budget procedures; limited human resources and capacity; lack of motivation; interference from ministries with the standard budget process; dependency on the development partners' disbursement schedules; and lack of budget implementation tracking. Results show that the health sector's mobilization rate was 59% in 2011. For the credit line specific to immunization program activities, the mobilization rate for the national Expanded Program for Immunization (EPI) was 26% in 2011 and 43% for vaccines (2010). The main bottleneck for the EPI budget line (2011) and vaccine budget line (2011) occurs at the authorization phase. Budget process bottlenecks identified in the analysis lead to a low mobilization rate for the immunization program. The bottlenecks identified show that a poor flow of funds causes an insufficient percentage of already allocated resources to reach various health system levels. Copyright © 2014 Elsevier Ltd. All rights reserved.
Parallel time domain solvers for electrically large transient scattering problems
Liu, Yang
2014-09-26
Marching on in time (MOT)-based integral equation solvers represent an increasingly appealing avenue for analyzing transient electromagnetic interactions with large and complex structures. MOT integral equation solvers for analyzing electromagnetic scattering from perfect electrically conducting objects are obtained by enforcing electric field boundary conditions and implicitly time advance electric surface current densities by iteratively solving sparse systems of equations at all time steps. Contrary to finite difference and element competitors, these solvers apply to nonlinear and multi-scale structures comprising geometrically intricate and deep sub-wavelength features residing atop electrically large platforms. Moreover, they are high-order accurate, stable in the low- and high-frequency limits, and applicable to conducting and penetrable structures represented by highly irregular meshes. This presentation reviews some recent advances in the parallel implementations of time domain integral equation solvers, specifically those that leverage multilevel plane-wave time-domain algorithm (PWTD) on modern manycore computer architectures including graphics processing units (GPUs) and distributed memory supercomputers. The GPU-based implementation achieves at least one order of magnitude speedups compared to serial implementations while the distributed parallel implementation are highly scalable to thousands of compute-nodes. A distributed parallel PWTD kernel has been adopted to solve time domain surface/volume integral equations (TDSIE/TDVIE) for analyzing transient scattering from large and complex-shaped perfectly electrically conducting (PEC)/dielectric objects involving ten million/tens of millions of spatial unknowns.
Deep Complementary Bottleneck Features for Visual Speech Recognition
Petridis, Stavros; Pantic, Maja
Deep bottleneck features (DBNFs) have been used successfully in the past for acoustic speech recognition from audio. However, research on extracting DBNFs for visual speech recognition is very limited. In this work, we present an approach to extract deep bottleneck visual features based on deep
Test set for initial value problem solvers
W.M. Lioen (Walter); J.J.B. de Swart (Jacques)
1998-01-01
The CWI test set for IVP solvers presents a collection of Initial Value Problems to test solvers for implicit differential equations. This test set can both decrease the effort for the code developer to test his software in a reliable way, and cross the bridge between the application
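A minimal sketch of the kind of solver such a test set exercises: an implicit (backward) Euler method, with the implicit equation at each step solved by fixed-point iteration. This is an illustration under simplifying assumptions, not code from the CWI test set:

```python
# Backward Euler for y' = f(y): each step solves the implicit equation
# z = y + h*f(z), here by fixed-point iteration (a Newton solve would be
# used for genuinely stiff problems).

def backward_euler(f, y0, t_end, n_steps):
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        z = y                      # solve z = y + h*f(z) by fixed point
        for _ in range(50):
            z = y + h * f(z)
        y = z
    return y

# Linear test problem y' = -5 y, y(0) = 1; exact solution y(1) = e^-5
y1 = backward_euler(lambda y: -5.0 * y, 1.0, 1.0, 200)
```
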
Bottleneck in secretion of α-amylase in Bacillus subtilis.
Yan, Shaomin; Wu, Guang
2017-07-19
Amylase plays an important role in biotechnology industries, and the Gram-positive bacterium Bacillus subtilis is a major host for producing heterologous α-amylases. However, secretion stress limits the high yield of α-amylase in B. subtilis although huge efforts have been made to address this secretion bottleneck. In this question-oriented review, every effort is made to answer the following questions, which look simple but are long-standing, through a review of the literature: (1) Does α-amylase need a specific and dedicated chaperone? (2) What signal sequence does CsaA recognize? (3) Does CsaA require ATP for its operation? (4) Is an unfolded α-amylase less soluble than a folded one? (5) Does α-amylase aggregate before transport through the Sec secretion system? (6) Is α-amylase sufficiently stable to prevent itself from misfolding? (7) Does α-amylase need more disulfide bonds to be stabilized? (8) Which secretion system does PrsA pass through? (9) Is PrsA ATP-dependent? (10) Is PrsA reused after folding of α-amylase? (11) What is the fate of PrsA? (12) Is trigger factor (TF) ATP-dependent? The literature review suggests not only that most of these questions are still open, but also that it is necessary to calculate the ATP budget in order to better understand how B. subtilis uses its energy for production and secretion.
Phonon bottleneck identification in disordered nanoporous materials
Romano, Giuseppe; Grossman, Jeffrey C.
2017-09-01
Nanoporous materials are a promising platform for thermoelectrics in that they offer high thermal conductivity tunability while preserving good electrical properties, a crucial requirement for high-efficiency thermal energy conversion. Understanding the impact of the pore arrangement on thermal transport is pivotal to engineering realistic materials, where pore disorder is unavoidable. Although there has been considerable progress in modeling thermal size effects in nanostructures, it has remained a challenge to screen such materials over a large phase space due to the slow simulation time required for accurate results. We use density functional theory in connection with the Boltzmann transport equation to perform calculations of thermal conductivity in disordered porous materials. By leveraging graph theory and regression analysis, we identify the set of pores representing the phonon bottleneck and obtain a descriptor for thermal transport, based on the sum of the pore-pore distances between such pores. This approach provides a simple tool to estimate phonon suppression in realistic porous materials for thermoelectric applications and enhances our understanding of heat transport in disordered materials.
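The descriptor described in the abstract, a sum of pore-pore distances over the identified bottleneck pores, can be sketched as follows (a toy Python illustration with made-up 2D pore coordinates, not the authors' implementation):

```python
import math

def bottleneck_descriptor(pores):
    """Sum of Euclidean distances over all pairs of bottleneck pores.
    pores: list of (x, y) coordinates of the pores identified as the
    phonon bottleneck."""
    total = 0.0
    for i in range(len(pores)):
        for j in range(i + 1, len(pores)):
            (x1, y1), (x2, y2) = pores[i], pores[j]
            total += math.hypot(x2 - x1, y2 - y1)
    return total

# Three pores at the corners of a unit right triangle:
# pairwise distances 1, 1 and sqrt(2)
d = bottleneck_descriptor([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
```
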
A theory of traffic congestion at heavy bottlenecks
Energy Technology Data Exchange (ETDEWEB)
Kerner, Boris S [Daimler AG, GR/ETI, HPC: G021, 71059 Sindelfingen (Germany)
2008-05-30
Spatiotemporal features and physics of vehicular traffic congestion occurring due to heavy highway bottlenecks caused for example by bad weather conditions or accidents are found based on simulations in the framework of three-phase traffic theory. A model of a heavy bottleneck is presented. Under a continuous non-limited increase in bottleneck strength, i.e., when the average flow rate within a congested pattern allowed by the heavy bottleneck decreases continuously up to zero, the evolution of the traffic phases in congested traffic, synchronized flow and wide moving jams, is studied. It is found that at a small enough flow rate within the congested pattern, the pattern exhibits a non-regular structure: a pinch region of synchronized flow within the pattern disappears and appears randomly over time; wide moving jams upstream of the pinch region exhibit a complex non-regular dynamics in which the jams appear and disappear randomly. At greater bottleneck strengths, wide moving jams merge onto a mega-wide moving jam (mega-jam) within which low-speed patterns with a complex non-regular spatiotemporal dynamics occur. We show that when the bottleneck strength is great enough, only the mega-jam survives and synchronized flow remains only within its downstream front separating free flow and congested traffic. Theoretical results presented can explain why no sequence of wide moving jams can often be distinguished in non-homogeneous traffic congestion measured at very heavy bottlenecks caused by bad weather conditions or accidents.
A theory of traffic congestion at heavy bottlenecks
International Nuclear Information System (INIS)
Kerner, Boris S
2008-01-01
Spatiotemporal features and physics of vehicular traffic congestion occurring due to heavy highway bottlenecks caused for example by bad weather conditions or accidents are found based on simulations in the framework of three-phase traffic theory. A model of a heavy bottleneck is presented. Under a continuous non-limited increase in bottleneck strength, i.e., when the average flow rate within a congested pattern allowed by the heavy bottleneck decreases continuously up to zero, the evolution of the traffic phases in congested traffic, synchronized flow and wide moving jams, is studied. It is found that at a small enough flow rate within the congested pattern, the pattern exhibits a non-regular structure: a pinch region of synchronized flow within the pattern disappears and appears randomly over time; wide moving jams upstream of the pinch region exhibit a complex non-regular dynamics in which the jams appear and disappear randomly. At greater bottleneck strengths, wide moving jams merge onto a mega-wide moving jam (mega-jam) within which low-speed patterns with a complex non-regular spatiotemporal dynamics occur. We show that when the bottleneck strength is great enough, only the mega-jam survives and synchronized flow remains only within its downstream front separating free flow and congested traffic. Theoretical results presented can explain why no sequence of wide moving jams can often be distinguished in non-homogeneous traffic congestion measured at very heavy bottlenecks caused by bad weather conditions or accidents
Deep bottleneck features for spoken language identification.
Directory of Open Access Journals (Sweden)
Bing Jiang
A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF based i-vector representation for each speech utterance. Results on NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system proposed.
ALPS - A LINEAR PROGRAM SOLVER
Viterna, L. A.
1994-01-01
Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve for integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
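As a toy illustration of the binary-program case mentioned above — explicit enumeration over {0,1}^n, the brute-force baseline that implicit enumeration prunes — consider the following sketch (not ALPS code, which is written in APL2; the instance is hypothetical):

```python
from itertools import product

def solve_binary_lp(c, A, b):
    """Maximize c.x subject to A x <= b, with x in {0,1}^n,
    by enumerating all 2^n candidate solutions."""
    n = len(c)
    best_x, best_val = None, float("-inf")
    for x in product((0, 1), repeat=n):
        feasible = all(sum(a[j] * x[j] for j in range(n)) <= bi
                       for a, bi in zip(A, b))
        if feasible:
            val = sum(c[j] * x[j] for j in range(n))
            if val > best_val:
                best_x, best_val = x, val
    return best_x, best_val

# Knapsack-style example: max 3x0 + 4x1 + 2x2  s.t.  2x0 + 3x1 + x2 <= 4
x_opt, v_opt = solve_binary_lp([3, 4, 2], [[2, 3, 1]], [4])
```

Implicit enumeration reaches the same optimum while fathoming whole subtrees of the 0-1 tree via bounds, which is what makes it practical beyond toy sizes.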
Low and Expensive Bandwidth Remains Key Bottleneck for ...
African Journals Online (AJOL)
PROF. O. E. OSUAGWU
2013-06-01
satellites in the orbit, aggressive deployment of broadband wireless technology, ... Key Words: ISPs, Quality of Service, Internet Diffusion, energy supply, bottlenecks, ... then the transmission time for an IP packet of size LL3 ...
Bottleneck management in the German and European electricity supply
International Nuclear Information System (INIS)
Koenig, Carsten
2013-01-01
This publication describes how bottlenecks in the German and European electricity supply pose a danger to the realization of the European internal market in electricity, the transition to electricity production from renewable resources and to the safeguarding of grid availability and security of supply. Bottlenecks at cross-border interconnectors between member states of the European Union are hampering cross-border trade in electricity, posing an impediment to EU-wide competition among electricity production and electricity trading companies. Grid bottlenecks at cross-border interconnectors isolate national markets from one another, with the result that it is not always possible in the European Union to have the most competitive power plant produce electricity. This amounts to a loss of welfare compared with what it would be in the case of an electricity supply without bottlenecks. Furthermore, bottlenecks make it impossible for green electricity that would be eligible for promotion for reasons of climate and environmental protection to be transmitted unimpeded from the most suitable site to the consumer regions. Thus the transmission of electricity produced from wind power in Northern Germany to the industrial centres in Southern Germany is impeded by bottlenecks along the north-south lines of the national transmission network. Today some of the German electricity supply networks already have to be operated near the limits of their capacity, especially during high wind episodes. This poses a growing danger to network availability and security of supply. Since the installation, expansion and conversion of electricity supply networks in Germany and other member states of the European Union is no longer progressing at the required speed, growing importance attaches to the management of bottlenecks. The goal of bottleneck management is to resolve conflicts over network use such as can occur in overload situations with as little discrimination and as little
ELSI: A unified software interface for Kohn-Sham electronic structure solvers
Yu, Victor Wen-zhe; Corsetti, Fabiano; García, Alberto; Huhn, William P.; Jacquelin, Mathias; Jia, Weile; Lange, Björn; Lin, Lin; Lu, Jianfeng; Mi, Wenhui; Seifitokaldani, Ali; Vázquez-Mayagoitia, Álvaro; Yang, Chao; Yang, Haizhao; Blum, Volker
2018-01-01
Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. Comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.
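As a minimal illustration of the kind of eigensolver building block such interfaces wrap (this is not the ELSI API, just a pure-Python power iteration for the dominant eigenpair of a small symmetric matrix):

```python
def power_iteration(A, iters=200):
    """Dominant eigenvalue/eigenvector of a small square matrix A
    (list of lists) by repeated multiplication and normalization."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)   # infinity-norm normalization
        v = [x / norm for x in w]
        lam = norm
    return lam, v

# [[2, 1], [1, 2]] has eigenvalues 3 and 1; the dominant eigenvector
# is proportional to (1, 1)
lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])
```

Production codes such as ELPA instead use dense tridiagonalization and divide-and-conquer, but the interface problem ELSI addresses — many solvers, one matrix in, eigenpairs out — is already visible at this scale.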
Using SPARK as a Solver for Modelica
Energy Technology Data Exchange (ETDEWEB)
Wetter, Michael; Wetter, Michael; Haves, Philip; Moshier, Michael A.; Sowell, Edward F.
2008-06-30
Modelica is an object-oriented acausal modeling language that is well positioned to become a de-facto standard for expressing models of complex physical systems. To simulate a model expressed in Modelica, it needs to be translated into executable code. For generating run-time efficient code, such a translation needs to employ algebraic formula manipulations. As the SPARK solver has been shown to be competitive for generating such code but currently cannot be used with the Modelica language, we report in this paper how SPARK's symbolic and numerical algorithms can be implemented in OpenModelica, an open-source implementation of a Modelica modeling and simulation environment. We also report benchmark results that show that for our air flow network simulation benchmark, the SPARK solver is competitive with Dymola, which is believed to provide the best solver for Modelica.
New iterative solvers for the NAG Libraries
Energy Technology Data Exchange (ETDEWEB)
Salvini, S.; Shaw, G. [Numerical Algorithms Group Ltd., Oxford (United Kingdom)
1996-12-31
The purpose of this paper is to introduce the work which has been carried out at NAG Ltd to update the iterative solvers for sparse systems of linear equations, both symmetric and unsymmetric, in the NAG Fortran 77 Library. Our current plans to extend this work and include it in our other numerical libraries in our range are also briefly mentioned. We have added to the Library the new Chapter F11, entirely dedicated to sparse linear algebra. At Mark 17, the F11 Chapter includes sparse iterative solvers, preconditioners, utilities and black-box routines for sparse symmetric (both positive-definite and indefinite) linear systems. Mark 18 will add solvers, preconditioners, utilities and black-boxes for sparse unsymmetric systems: the development of these has already been completed.
Modeling Microbunching from Shot Noise Using Vlasov Solvers
International Nuclear Information System (INIS)
Venturini, Marco; Venturini, Marco; Zholents, Alexander
2008-01-01
Unlike macroparticle simulations, which are sensitive to unphysical statistical fluctuations when the number of macroparticles is smaller than the bunch population, direct methods for solving the Vlasov equation are free from sampling noise and are ideally suited for studying microbunching instabilities evolving from shot noise. We review a 2D (longitudinal dynamics) Vlasov solver we have recently developed to study the microbunching instability in the beam delivery systems for x-ray FELs and present an application to FERMI@Elettra. We discuss, in particular, the impact of the spreader design on microbunching
On the Inefficiency of Equilibria in Linear Bottleneck Congestion Games
de Keijzer, Bart; Schäfer, Guido; Telelis, Orestis A.
We study the inefficiency of equilibrium outcomes in bottleneck congestion games. These games model situations in which strategic players compete for a limited number of facilities. Each player allocates his weight to a (feasible) subset of the facilities with the goal to minimize the maximum (weight-dependent) latency that he experiences on any of these facilities. We derive upper and (asymptotically) matching lower bounds on the (strong) price of anarchy of linear bottleneck congestion games for a natural load balancing social cost objective (i.e., minimize the maximum latency of a facility). We restrict our studies to linear latency functions. Linear bottleneck congestion games still constitute a rich class of games and generalize, for example, load balancing games with identical or uniformly related machines with or without restricted assignments.
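The load-balancing social cost objective described above — minimize the maximum facility latency under linear latencies — can be sketched with a hypothetical toy instance (an illustration, not the paper's formal model):

```python
def bottleneck_social_cost(a, assignment, weights):
    """Bottleneck social cost of a pure strategy profile.
    a[f]: slope of the linear latency l_f(w) = a[f] * w of facility f;
    assignment[p]: the single facility chosen by player p;
    weights[p]: player p's weight."""
    load = {}
    for p, f in enumerate(assignment):
        load[f] = load.get(f, 0.0) + weights[p]
    return max(a[f] * w for f, w in load.items())

# Two facilities with slopes 1 and 2, three unit-weight players:
# a balanced profile versus one overloading the slower facility
cost_balanced = bottleneck_social_cost([1.0, 2.0], [0, 0, 1], [1, 1, 1])
cost_skewed = bottleneck_social_cost([1.0, 2.0], [0, 1, 1], [1, 1, 1])
```

The price-of-anarchy bounds in the paper compare the worst such cost at equilibrium against the optimum of this same objective.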
Cafesat: A modern sat solver for scala
Blanc Régis
2013-01-01
We present CafeSat, a SAT solver written in the Scala programming language. CafeSat is a modern solver based on DPLL, featuring many state-of-the-art techniques and heuristics. It uses two watched literals for Boolean constraint propagation, conflict-driven learning along with clause deletion, a restarting strategy, and the VSIDS heuristic for choosing the branching literal. CafeSat is both sound and complete. In order to achieve reasonable performance, low-level and hand-tuned data structures a...
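A toy DPLL skeleton in the spirit of the abstract — unit propagation plus chronological backtracking; CafeSat's actual engine layers watched literals, conflict-driven learning, restarts and VSIDS on top of such a skeleton (Python here for illustration, not CafeSat's Scala code):

```python
def dpll(clauses, assignment=None):
    """clauses: list of lists of nonzero ints (DIMACS-style literals,
    -v meaning 'not v'). Returns a satisfying {var: bool} dict, or None
    if the formula is unsatisfiable."""
    assignment = dict(assignment or {})
    while True:                                 # unit propagation
        unit = None
        for clause in clauses:
            if any((l > 0) == assignment.get(abs(l)) for l in clause
                   if abs(l) in assignment):
                continue                        # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return None                     # conflict: clause falsified
            if len(unassigned) == 1:
                unit = unassigned[0]
                break
        if unit is None:
            break
        assignment[abs(unit)] = unit > 0
    free = {abs(l) for c in clauses for l in c} - set(assignment)
    if not free:
        return assignment
    v = min(free)
    for value in (True, False):                 # branch and backtrack
        result = dpll(clauses, {**assignment, v: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
model = dpll([[1, 2], [-1, 2], [-2, 3]])
```
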
Benchmarking optimization solvers for structural topology optimization
DEFF Research Database (Denmark)
Rojas Labanda, Susana; Stolpe, Mathias
2015-01-01
solvers in IPOPT and FMINCON, and the sequential quadratic programming method in SNOPT, are benchmarked on the library using performance profiles. Whenever possible the methods are applied to both the nested and the Simultaneous Analysis and Design (SAND) formulations of the problem. The performance...
On a construction of fast direct solvers
Czech Academy of Sciences Publication Activity Database
Práger, Milan
2003-01-01
Roč. 48, č. 3 (2003), s. 225-236. ISSN 0862-7940. Institutional research plan: CEZ:AV0Z1019905. Keywords: Poisson equation * boundary value problem * fast direct solver. Subject RIV: BA - General Mathematics
DEFF Research Database (Denmark)
Bjørner, Nikolaj; Dung, Phan Anh; Fleckenstein, Lars
2015-01-01
vZ is a part of the SMT solver Z3. It allows users to pose and solve optimization problems modulo theories. Many SMT applications use models to provide satisfying assignments, and a growing number of these build on top of Z3 to get optimal assignments with respect to objective functions. vZ provi...
Database architecture optimized for the new bottleneck: Memory access
P.A. Boncz (Peter); S. Manegold (Stefan); M.L. Kersten (Martin)
1999-01-01
In the past decade, advances in speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the
Optimizing Database Architecture for the New Bottleneck: Memory Access
S. Manegold (Stefan); P.A. Boncz (Peter); M.L. Kersten (Martin)
2000-01-01
In the past decade, advances in speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the
Bottlenecks reduction using superconductors in high voltage transmission lines
Directory of Open Access Journals (Sweden)
Daloub Labib
2016-01-01
Full Text Available Energy flow bottlenecks in high voltage transmission lines, known as congestions, are one of the challenges facing power utilities in fast-developing countries. Bottlenecks occur in selected power lines when transmission systems are operated at or beyond their transfer limits. In these cases, congestion results in the prevention of new power supply contracts, infeasibility of existing contracts, price spikes and market power abuse. "Superconductor technology" in electric power transmission cables has been used as a solution to the problem of bottlenecks in energy transmission in high voltage underground cables and overhead lines. The increase in demand on power generation and transmission, driven by fast development and concentrated as intensive usage of the transmission network at certain points, in turn leads to frequent congestion in getting the required power across to where it is needed. In this paper, a bottleneck in a high voltage double overhead transmission line with Aluminum Conductor Steel Reinforced was modeled using conductor parameters, and the conductor was replaced by a Gap-Type Superconductor to assess the benefit of upgrading to a higher temperature superconductor and obtaining higher current carrying capacity. This proved to reduce the high loading of traditional aluminum conductors and to allow more power transfer over the line using the superconductor within the same existing right-of-way, steel towers, insulators and fittings, thus reducing the upgrade cost of building new lines.
On the inefficiency of equilibria in linear bottleneck congestion games
B. de Keijzer (Bart); G. Schäfer (Guido); O. Telelis (Orestis); S. Kontogiannis (Spyros); E. Koutsoupias (Elias); P.G. Spirakis (Paul)
2010-01-01
We study the inefficiency of equilibrium outcomes in bottleneck congestion games. These games model situations in which strategic players compete for a limited number of facilities. Each player allocates his weight to a (feasible) subset of the facilities with the goal to minimize the
Give or take? Rewards versus charges for a congested bottleneck
Rouwendal, J.; Verhoef, E.T.; Knockaert, J.
2012-01-01
This paper analyzes the possibilities to relieve traffic congestion using subsidies instead of Pigouvian taxes, as well as revenue-neutral combinations of rewards and taxes ('feebates'). The model considers a Vickrey-ADL model of bottleneck congestion with endogenous scheduling. With inelastic
Low and Expensive Bandwidth Remains Key Bottleneck for ...
African Journals Online (AJOL)
These bottlenecks have dwarfed the expectations of the citizens to fully participate in the new world economic order galvanized by e-commerce and world trade. It is estimated that M.I.T in Boston USA has bandwidth allocation that surpasses all the bandwidth allocated to Nigeria put together. Low bandwidth has been found ...
Widening the Knowledge Acquisition Bottleneck for Constraint-Based Tutors
Suraweera, Pramuditha; Mitrovic, Antonija; Martin, Brent
2010-01-01
Intelligent Tutoring Systems (ITS) are effective tools for education. However, developing them is a labour-intensive and time-consuming process. A major share of the effort is devoted to acquiring the domain knowledge that underlies the system's intelligence. The goal of this research is to reduce this knowledge acquisition bottleneck and better…
Congestion in a city with a central bottleneck
DEFF Research Database (Denmark)
Fosgerau, Mogens; Palma, André de
2010-01-01
We consider dynamic congestion in an urban setting where trip origins are spatially distributed. All travelers must pass through a downtown bottleneck in order to reach their destination in the CBD. Each traveler chooses departure time to maximize general concave scheduling utility. At equilibriu...
Congestion in a city with a central bottleneck
DEFF Research Database (Denmark)
Fosgerau, Mogens; Palma, André de
2012-01-01
We consider dynamic congestion in an urban setting where trip origins are spatially distributed. All travelers must pass through a downtown bottleneck in order to reach their destination in the CBD. Each traveler chooses departure time to maximize general concave scheduling utility. We find that,...
The Case for a Gaian Bottleneck: The Biology of Habitability.
Chopra, Aditya; Lineweaver, Charles H
2016-01-01
The prerequisites and ingredients for life seem to be abundantly available in the Universe. However, the Universe does not seem to be teeming with life. The most common explanation for this is a low probability for the emergence of life (an emergence bottleneck), notionally due to the intricacies of the molecular recipe. Here, we present an alternative Gaian bottleneck explanation: If life emerges on a planet, it only rarely evolves quickly enough to regulate greenhouse gases and albedo, thereby maintaining surface temperatures compatible with liquid water and habitability. Such a Gaian bottleneck suggests that (i) extinction is the cosmic default for most life that has ever emerged on the surfaces of wet rocky planets in the Universe and (ii) rocky planets need to be inhabited to remain habitable. In the Gaian bottleneck model, the maintenance of planetary habitability is a property more associated with an unusually rapid evolution of biological regulation of surface volatiles than with the luminosity and distance to the host star.
Genetic diversity and bottleneck studies in the Marwari horse breed
Indian Academy of Sciences (India)
Genetic diversity within the Marwari breed of horses was evaluated using 26 different microsatellite pairs with 48 DNA samples from unrelated horses. This molecular characterisation was undertaken to evaluate the problem of genetic bottlenecks also, if any, in this breed. The estimated mean (± s.e.) allelic diversity was 5.9 ...
Genetic diversity and bottleneck studies in the Marwari horse breed
Indian Academy of Sciences (India)
Unknown
[Gupta A. K., Chauhan M., Tandon S. N. and Sonia 2005 Genetic diversity and bottleneck studies in the Marwari horse breed. J. Genet. 84, 295–301]
Bottlenecks in the diagnosis of hypochondriasis.
Schmidt, A J
1994-01-01
This review deals with diagnostic problems in DSM-III-R hypochondriasis. A first category of problems is directly connected with the definition of hypochondriasis. The following topics are discussed: the distinction between hypochondriasis and hypochondriacal attitude, the personality aspects of hypochondriasis, and the role of medical findings in the diagnosis. This is followed by a discussion of problems as to the distinction between hypochondriasis and related disorders. This concerns the status of hypochondriasis as a primary or secondary disorder in depression and the relationship with anxiety disorders (especially panic disorder and obsessive-compulsive disorder [OCD]) and the somatization disorder. The DSM-III-R classification of hypochondriasis as a somatoform disorder is disputed. A third category of problems lies in the measurement of hypochondriasis. The scope and quality of the most frequently used questionnaires for measuring hypochondriasis are poor. In research, on the basis of a single questionnaire and without due consideration of medical findings, the diagnosis of hypochondriasis is applied too soon. Finally, it is briefly indicated that the lack of diagnostic clarity affects the way in which the patient is approached in clinical practice.
Extending the Finite Domain Solver of GNU Prolog
Bloemen, Vincent; Diaz, Daniel; van der Bijl, Machiel; Abreu, Salvador; Ströder, Thomas; Swift, Terrance
This paper describes three significant extensions for the Finite Domain solver of GNU Prolog. First, the solver now supports negative integers. Second, the solver detects and prevents integer overflows from occurring. Third, the internal representation of sparse domains has been redesigned to
Directory of Open Access Journals (Sweden)
Rupani Mihir
2015-01-01
Full Text Available Introduction: A Bottleneck Analysis and Strategic Planning exercise was carried out in 6 High Priority Districts (HPDs) under the Call-to-Action for RMNCH+A strategy. Rationale: In spite of continued efforts, India is still lagging behind in its MDG goals. Objectives: To identify gaps in childhood diarrhea management and propose strategic options for the same. Materials and Methods: The bottleneck analysis exercise was carried out based on the Tanahashi model, desk review and focused group discussions between district officials, front-line workers and UNICEF officials. These bottlenecks pertained to the availability, accessibility and utilization of services and the quality of services being provided by the health department. Elaborating the Tanahashi model for the 6 HPDs, 94% of the front-line workers (FLWs) had stock of Zinc-ORS; 88% of FLWs were trained in diarrhea management; 98% of villages had at least one FLW trained in diarrhea management; health care seeking for diarrhea cases was 17%; 5.1% of diarrhea cases received Zinc-ORS from a health worker; and 2.4% of caretakers prepared Zinc-ORS in safe drinking water. Results: The major bottlenecks identified for childhood diarrhea management in the 6 High Priority Districts were poor demand generation, unsafe drinking water, poor access to improved sanitation facilities and lack of equitable distribution of Zinc-ORS down to the front-line worker level. The main strategic options suggested for relieving these bottlenecks were: Zinc-ORS roll-out in scale-up districts; development of an IEC/BCC plan for childhood diarrhea management at state/district level; use of the Drug Logistics Information Management System (DLIMS) software for supply chain management of Zinc-ORS; strengthening of chlorination activity at household level; monitoring implementation of the Nirmal Bharat Abhiyaan (NBA) for constructing improved sanitation facilities at household level; and development of an IEC/BCC plan for hygiene promotion and usage of sanitary latrines.
GPU accelerated flow solver for direct numerical simulation of turbulent flows
Energy Technology Data Exchange (ETDEWEB)
Salvadore, Francesco [CASPUR – via dei Tizii 6/b, 00185 Rome (Italy); Bernardini, Matteo, E-mail: matteo.bernardini@uniroma1.it [Department of Mechanical and Aerospace Engineering, University of Rome ‘La Sapienza’ – via Eudossiana 18, 00184 Rome (Italy); Botti, Michela [CASPUR – via dei Tizii 6/b, 00185 Rome (Italy)
2013-02-15
Graphical processing units (GPUs), characterized by significant computing performance, are nowadays very appealing for the solution of computationally demanding tasks in a wide variety of scientific applications. However, to run on GPUs, existing codes need to be ported and optimized, a procedure which is not yet standardized and may require non trivial efforts, even to high-performance computing specialists. In the present paper we accurately describe the porting to CUDA (Compute Unified Device Architecture) of a finite-difference compressible Navier–Stokes solver, suitable for direct numerical simulation (DNS) of turbulent flows. Porting and validation processes are illustrated in detail, with emphasis on computational strategies and techniques that can be applied to overcome typical bottlenecks arising from the porting of common computational fluid dynamics solvers. We demonstrate that a careful optimization work is crucial to get the highest performance from GPU accelerators. The results show that the overall speedup of one NVIDIA Tesla S2070 GPU is approximately 22 compared with one AMD Opteron 2352 Barcelona chip and 11 compared with one Intel Xeon X5650 Westmere core. The potential of GPU devices in the simulation of unsteady three-dimensional turbulent flows is proved by performing a DNS of a spatially evolving compressible mixing layer.
Energy Technology Data Exchange (ETDEWEB)
Druinsky, A; Ghysels, P; Li, XS; Marques, O; Williams, S; Barker, A; Kalchev, D; Vassilevski, P
2016-04-02
In this paper, we study the performance of a two-level algebraic-multigrid algorithm, with a focus on the impact of the coarse-grid solver on performance. We consider two algorithms for solving the coarse-space systems: the preconditioned conjugate gradient method and a new robust HSS-embedded low-rank sparse-factorization algorithm. Our test data comes from the SPE Comparative Solution Project for oil-reservoir simulations. We contrast the performance of our code on one 12-core socket of a Cray XC30 machine with performance on a 60-core Intel Xeon Phi coprocessor. To obtain top performance, we optimized the code to take full advantage of fine-grained parallelism and made it thread-friendly for high thread count. We also developed a bounds-and-bottlenecks performance model of the solver which we used to guide us through the optimization effort, and also carried out performance tuning in the solver’s large parameter space. Finally, as a result, significant speedups were obtained on both machines.
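The coarse-space systems above are solved with preconditioned conjugate gradients. The bare (unpreconditioned) CG iteration underlying that choice can be sketched in a few lines of pure Python; this is a didactic sketch with dense list-of-lists matrices, unrelated to the authors' optimized Cray XC30 / Xeon Phi code.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A
    (given as a list of row lists) by the conjugate gradient method."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                        # residual r = b - A x, with x = 0
    p = list(r)                        # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:         # converged: residual is tiny
            break
        # New direction is A-conjugate to the previous ones.
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x
```

In exact arithmetic CG terminates in at most n iterations; a preconditioner, as used by the authors, reduces the iteration count on ill-conditioned systems.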
Selectivity of fish ladders: a bottleneck in Neotropical fish movement
Directory of Open Access Journals (Sweden)
Carlos Sérgio Agostinho
their proportions in the downriver stretch: fish samples in the ladder were clearly dominated by a few species, including some that do not need to be translocated. Thus, selectivity constitutes an important bottleneck to initiatives for translocating fish aimed at conserving their stocks or biodiversity. It is urgent to review the decision-making process for the construction of fish passages and to evaluate the functioning of those already operating.
Fostering Creative Problem Solvers in Higher Education
DEFF Research Database (Denmark)
Zhou, Chunfang
2016-01-01
Recent studies have emphasized issues of social emergence based on thinking of societies as complex systems. The complexity of professional practice has been recognized as the root of challenges for higher education. To foster creative problem solvers is a key response of higher education in order to meet such challenges. This chapter aims to illustrate how to understand: 1) complexity as the nature of professional practice; 2) creative problem solving as the core skill in professional practice; 3) creativity as interplay between persons and their environment; 4) higher education as the context of fostering creative problem solvers; and 5) some innovative strategies such as Problem-Based Learning (PBL) and building a learning environment by Information Communication Technology (ICT) as potential strategies of creativity development. Accordingly, this chapter contributes to bridge the complexity...
Mathematical programming solver based on local search
Gardi, Frédéric; Darlay, Julien; Estellon, Bertrand; Megel, Romain
2014-01-01
This book covers local search for combinatorial optimization and its extension to mixed-variable optimization. Although not yet understood from the theoretical point of view, local search is the paradigm of choice for tackling large-scale real-life optimization problems. Today's end-users demand interactivity with decision support systems. For optimization software, this means obtaining good-quality solutions quickly. Fast iterative improvement methods, like local search, are suited to satisfying such needs. Here the authors show local search in a new light, in particular presenting a new kind of mathematical programming solver, namely LocalSolver, based on neighborhood search. First, an iconoclast methodology is presented to design and engineer local search algorithms. The authors' concern about industrializing local search approaches is of particular interest for practitioners. This methodology is applied to solve two industrial problems with high economic stakes. Software based on local search induces ex...
Aleph Field Solver Challenge Problem Results Summary
Energy Technology Data Exchange (ETDEWEB)
Hooper, Russell [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Moore, Stan Gerald [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
Aleph models continuum electrostatic and steady and transient thermal fields using a finite-element method. Much work has gone into expanding the core solver capability to support enriched modeling consisting of multiple interacting fields, special boundary conditions and two-way interfacial coupling with particles modeled using Aleph's complementary particle-in-cell capability. This report provides quantitative evidence for correct implementation of Aleph's field solver via order-of-convergence assessments on a collection of problems of increasing complexity. It is intended to provide Aleph with a pedigree and to establish a basis for confidence in results for more challenging problems important to Sandia's mission that Aleph was specifically designed to address.
Evolving effective incremental SAT solvers with GP
Bader, Mohamed; Poli, R.
2008-01-01
Hyper-Heuristics can simply be defined as heuristics for choosing other heuristics: a way of combining existing heuristics to generate new ones. Here, a Hyper-Heuristic framework is used for evolving effective incremental (Inc*) solvers for SAT. We test the evolved heuristics (IncHH) against other known local search heuristics on a variety of benchmark SAT problems.
Four disruptive strategies for removing drug discovery bottlenecks.
Ekins, Sean; Waller, Chris L; Bradley, Mary P; Clark, Alex M; Williams, Antony J
2013-03-01
Drug discovery is shifting focus from industry to outside partners and, in the process, creating new bottlenecks. Technologies like high throughput screening (HTS) have moved to a larger number of academic and institutional laboratories in the USA, with little coordination or consideration of the outputs and creating a translational gap. Although there have been collaborative public-private partnerships in Europe to share pharmaceutical data, the USA has seemingly lagged behind and this may hold it back. Sharing precompetitive data and models may accelerate discovery across the board, while finding the best collaborators, mining social media and mobile approaches to open drug discovery should be evaluated in our efforts to remove drug discovery bottlenecks. We describe four strategies to rectify the current unsustainable situation. Copyright © 2012 Elsevier Ltd. All rights reserved.
Linguistics, cognitive psychology, and the Now-or-Never bottleneck.
Endress, Ansgar D; Katzir, Roni
2016-01-01
Christiansen & Chater (C&C)'s key premise is that "if linguistic information is not processed rapidly, that information is lost for good" (sect. 1, para. 1). From this "Now-or-Never bottleneck" (NNB), C&C derive "wide-reaching and fundamental implications for language processing, acquisition and change as well as for the structure of language itself" (sect. 2, para. 10). We question both the premise and the consequentiality of its purported implications.
Practical solutions for bottlenecks in ecosystem services mapping
Palomo,Ignacio; Willemen,Louise; Drakou,Evangelia; Burkhard,Benjamin; Crossman,Neville; Bellamy,Chloe; Burkhard,Kremena; Campagne,C. Sylvie; Dangol,Anuja; Franke,Jonas; Kulczyk,Sylwia; Le Clec'h,Solen; Malak,Dania; Muñoz,Lorena; Narusevicius,Vytautas
2018-01-01
Background: Ecosystem services (ES) mapping is becoming mainstream in many sustainability assessments, but its impact on real-world decision-making is still limited. Robustness, end-user relevance and transparency have been identified as key attributes needed for effective ES mapping. However, these requirements are not always met due to multiple challenges, referred to here as bottlenecks, that scientists, practitioners, policy makers and users from other public and private sectors encounter a...
Natural language processing and the Now-or-Never bottleneck.
Gómez-Rodríguez, Carlos
2016-01-01
Researchers, motivated by the need to improve the efficiency of natural language processing tools to handle web-scale data, have recently arrived at models that remarkably match the expected features of human language processing under the Now-or-Never bottleneck framework. This provides additional support for said framework and highlights the research potential in the interaction between applied computational linguistics and cognitive science.
An inbreeding model of associative overdominance during a population bottleneck.
Bierne, N; Tsitrone, A; David, P
2000-08-01
Associative overdominance, the fitness difference between heterozygotes and homozygotes at a neutral locus, is classically described using two categories of models: linkage disequilibrium in small populations or identity disequilibrium in infinite, partially selfing populations. In both cases, only equilibrium situations have been considered. In the present study, associative overdominance is related to the distribution of individual inbreeding levels (i.e., genomic autozygosity). Our model integrates the effects of physical linkage and variation in inbreeding history among individual pedigrees. Hence, linkage and identity disequilibrium, traditionally presented as alternatives, are summarized within a single framework. This allows studying nonequilibrium situations in which both occur simultaneously. The model is applied to the case of an infinite population undergoing a sustained population bottleneck. The effects of bottleneck size, mating system, marker gene diversity, deleterious genomic mutation parameters, and physical linkage are evaluated. Bottlenecks transiently generate much larger associative overdominance than observed in equilibrium finite populations and represent a plausible explanation of empirical results obtained, for instance, in marine species. Moreover, the main origin of associative overdominance is random variation in individual inbreeding whereas physical linkage has little effect.
The nocturnal bottleneck and the evolution of activity patterns in mammals
Gerkema, Menno P.; Davies, Wayne I. L.; Foster, Russell G.; Menaker, Michael; Hut, Roelof A.
2013-01-01
In 1942, Walls described the concept of a ‘nocturnal bottleneck’ in placental mammals, where these species could survive only by avoiding daytime activity during times in which dinosaurs were the dominant taxon. Walls based this concept of a longer episode of nocturnality in early eutherian mammals by comparing the visual systems of reptiles, birds and all three extant taxa of the mammalian lineage, namely the monotremes, marsupials (now included in the metatherians) and placentals (included in the eutherians). This review describes the status of what has become known as the nocturnal bottleneck hypothesis, giving an overview of the chronobiological patterns of activity. We review the ecological plausibility that the activity patterns of (early) eutherian mammals were restricted to the night, based on arguments relating to endothermia, energy balance, foraging and predation, taking into account recent palaeontological information. We also assess genes, relating to light detection (visual and non-visual systems) and the photolyase DNA protection system that were lost in the eutherian mammalian lineage. Our conclusion presently is that arguments in favour of the nocturnal bottleneck hypothesis in eutherians prevail. PMID:23825205
The Openpipeflow Navier–Stokes solver
Directory of Open Access Journals (Sweden)
Ashley P. Willis
2017-01-01
Full Text Available Pipelines are used in a huge range of industrial processes involving fluids, and the ability to accurately predict properties of the flow through a pipe is of fundamental engineering importance. Armed with parallel MPI, Arnoldi and Newton–Krylov solvers, the Openpipeflow code can be used in a range of settings, from large-scale simulation of highly turbulent flow to the detailed analysis of nonlinear invariant solutions (equilibria and periodic orbits) and their influence on the dynamics of the flow.
New multigrid solver advances in TOPS
International Nuclear Information System (INIS)
Falgout, R D; Brannick, J; Brezina, M; Manteuffel, T; McCormick, S
2005-01-01
In this paper, we highlight new multigrid solver advances in the Terascale Optimal PDE Simulations (TOPS) project in the Scientific Discovery Through Advanced Computing (SciDAC) program. We discuss two new algebraic multigrid (AMG) developments in TOPS: the adaptive smoothed aggregation method (αSA) and a coarse-grid selection algorithm based on compatible relaxation (CR). The αSA method is showing promising results in initial studies for Quantum Chromodynamics (QCD) applications. The CR method has the potential to greatly improve the applicability of AMG
A finite element field solver for dipole modes
International Nuclear Information System (INIS)
Nelson, E.M.
1992-08-01
A finite element field solver for dipole modes in axisymmetric structures has been written. The second-order elements used in this formulation yield accurate mode frequencies with no spurious modes. Quasi-periodic boundaries are included to allow travelling waves in periodic structures. The solver is useful in applications requiring precise frequency calculations such as detuned accelerator structures for linear colliders. Comparisons are made with measurements and with the popular but less accurate field solver URMEL
A finite element field solver for dipole modes
International Nuclear Information System (INIS)
Nelson, E.M.
1992-01-01
A finite element field solver for dipole modes in axisymmetric structures has been written. The second-order elements used in this formulation yield accurate mode frequencies with no spurious modes. Quasi-periodic boundaries are included to allow travelling waves in periodic structures. The solver is useful in applications requiring precise frequency calculations such as detuned accelerator structures for linear colliders. Comparisons are made with measurements and with the popular but less accurate field solver URMEL. (author). 7 refs., 4 figs
GPU TECHNOLOGIES EMBODIED IN PARALLEL SOLVERS OF LINEAR ALGEBRAIC EQUATION SYSTEMS
Directory of Open Access Journals (Sweden)
Sidorov Alexander Vladimirovich
2012-10-01
Full Text Available The author reviews existing shareware solvers that run on graphics processing units (GPUs). The purpose of this review is to explore the opportunities and limitations of these parallel solvers as applied to the linear algebra problems that arise at the Research and Educational Centre of Computer Modeling at MSUCE and at the Research and Engineering Centre STADYO. The author has explored new applications of the GPU in the PETSc suite and compared them with the results generated without the GPU. The research is performed with the CUSP library, developed to solve problems of linear algebra on the GPU. The author has also reviewed the new MAGMA project, which is an analogue of LAPACK for the GPU.
PCX, Interior-Point Linear Programming Solver
International Nuclear Information System (INIS)
Czyzyk, J.
2004-01-01
1 - Description of program or function: PCX solves linear programming problems using the Mehrotra predictor-corrector interior-point algorithm. PCX can be called as a subroutine or used in stand-alone mode, with data supplied from an MPS file. The software incorporates modules that can be used separately from the linear programming solver, including a pre-solve routine and data structure definitions. 2 - Methods: The Mehrotra predictor-corrector method is a primal-dual interior-point method for linear programming. The starting point is determined from a modified least squares heuristic. Linear systems of equations are solved at each interior-point iteration via a sparse Cholesky algorithm native to the code. A pre-solver is incorporated in the code to eliminate inefficiencies in the user's formulation of the problem. 3 - Restrictions on the complexity of the problem: There are no size limitations built into the program. The size of problem solved is limited by RAM and swap space on the user's computer.
Directory of Open Access Journals (Sweden)
Sánchez Álvarez , I.
1998-01-01
Full Text Available The relevance of optimization problems in the business world has led to the introduction of increasingly sophisticated optimization tools in recent versions of the widely used spreadsheet packages. These utilities, commonly known as «solvers», are an alternative to specialized optimization programs when the problems are not large-scale, and offer the advantage of ease of use and of communication with the end user. Frontline Systems Inc. is the company that develops the Excel «solver», and versions also exist for Lotus and Quattro Pro with slight differences in use. Technical information on the different versions of this utility, and on various operational aspects of the program, some of which are discussed in this work, can be found at its website (www.frontsys.com).
A sparse-grid isogeometric solver
Beck, Joakim
2018-02-28
Isogeometric Analysis (IGA) typically adopts tensor-product splines and NURBS as a basis for the approximation of the solution of PDEs. In this work, we investigate to which extent IGA solvers can benefit from the so-called sparse-grids construction in its combination technique form, which was first introduced in the early 90s in the context of the approximation of high-dimensional PDEs. The tests that we report show that, in accordance with the literature, a sparse-grid construction can indeed be useful if the solution of the PDE at hand is sufficiently smooth. Sparse grids can also be useful in the case of non-smooth solutions when some a-priori knowledge on the location of the singularities of the solution can be exploited to devise suitable non-equispaced meshes. Finally, we remark that sparse grids can be seen as a simple way to parallelize pre-existing serial IGA solvers in a straightforward fashion, which can be beneficial in many practical situations.
A sparse version of IGA solvers
Beck, Joakim; Sangalli, Giancarlo; Tamellini, Lorenzo
2017-07-30
Isogeometric Analysis (IGA) typically adopts tensor-product splines and NURBS as a basis for the approximation of the solution of PDEs. In this work, we investigate to which extent IGA solvers can benefit from the so-called sparse-grids construction in its combination technique form, which was first introduced in the early 90s in the context of the approximation of high-dimensional PDEs. The tests that we report show that, in accordance with the literature, a sparse-grid construction can indeed be useful if the solution of the PDE at hand is sufficiently smooth. Sparse grids can also be useful in the case of non-smooth solutions when some a priori knowledge on the location of the singularities of the solution can be exploited to devise suitable non-equispaced meshes. Finally, we remark that sparse grids can be seen as a simple way to parallelize pre-existing serial IGA solvers in a straightforward fashion, which can be beneficial in many practical situations.
Decoupled systems on trial: Eliminating bottlenecks to improve aquaponic processes.
Monsees, Hendrik; Kloas, Werner; Wuertz, Sven
2017-01-01
In classical aquaponics (coupled aquaponic systems, 1-loop systems) the production of fish in recirculating aquaculture systems (RAS) and plants in hydroponics are combined in a single loop, entailing systemic compromises on the optimal production parameters (e.g. pH). Recently presented decoupled aquaponics (2-loop systems) have been awarded for eliminating major bottlenecks. In a pilot study, production in an innovative decoupled aquaponic system was compared with a coupled system and, as a control, a conventional RAS, assessing growth parameters of fish (FCR, SGR) and plants over an experimental period of 5 months. Soluble nutrients (NO3--N, NO2--N, NH4+-N, PO43-, K+, Ca2+, Mg2+, SO42-, Cl2- and Fe2+), elemental composition of plants, fish and sludge (N, P, K, Ca, Mg, Na, C), abiotic factors (temperature, pH, oxygen, and conductivity), fertilizer and water consumption were determined. Fruit yield was 36% higher in decoupled aquaponics and pH and fertilizer management was more effective, whereas fish production was comparable in both systems. The results of this pilot study clearly illustrate the main advantages of decoupled, two-loop aquaponics and demonstrate how bottlenecks commonly encountered in coupled aquaponics can be managed to promote application in aquaculture.
A bottleneck model of set-specific capture.
Directory of Open Access Journals (Sweden)
Katherine Sledge Moore
Full Text Available Set-specific contingent attentional capture is a particularly strong form of capture that occurs when multiple attentional sets guide visual search (e.g., "search for green letters" and "search for orange letters"). In this type of capture, a potential target that matches one attentional set (e.g. a green stimulus) impairs the ability to identify a temporally proximal target that matches another attentional set (e.g. an orange stimulus). In the present study, we investigated whether set-specific capture stems from a bottleneck in working memory or from a depletion of limited resources that are distributed across multiple attentional sets. In each trial, participants searched a rapid serial visual presentation (RSVP) stream for up to three target letters (T1-T3) that could appear in any of three target colors (orange, green, or lavender). The most revealing findings came from trials in which T1 and T2 matched different attentional sets and were both identified. In these trials, T3 accuracy was lower when it did not match T1's set than when it did match, but only when participants failed to identify T2. These findings support a bottleneck model of set-specific capture in which a limited-capacity mechanism in working memory enhances only one attentional set at a time, rather than a resource model in which processing capacity is simultaneously distributed across multiple attentional sets.
A Novel Interactive MINLP Solver for CAPE Applications
DEFF Research Database (Denmark)
Henriksen, Jens Peter; Støy, S.; Russel, Boris Mariboe
2000-01-01
This paper presents an interactive MINLP solver that is particularly suitable for solution of process synthesis, design and analysis problems. The interactive MINLP solver is based on decomposition-based MINLP algorithms, where an NLP sub-problem is solved in the inner loop and a MILP master pr...
Experiences with linear solvers for oil reservoir simulation problems
Energy Technology Data Exchange (ETDEWEB)
Joubert, W.; Janardhan, R. [Los Alamos National Lab., NM (United States); Biswas, D.; Carey, G.
1996-12-31
This talk will focus on practical experiences with iterative linear solver algorithms used in conjunction with Amoco Production Company's Falcon oil reservoir simulation code. The goal of this study is to determine the best linear solver algorithms for these types of problems. The results of numerical experiments will be presented.
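The record does not disclose Falcon's solver internals. As a generic, hedged illustration of the kind of Krylov iteration such reservoir-simulation comparisons cover, here is a plain conjugate gradient applied to a 1D Laplacian stand-in for a pressure system (matrix, size, and tolerance are all illustrative choices, not values from the study):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Plain conjugate gradient for a symmetric positive-definite system Ax = b."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate direction update
        rs = rs_new
    return x

# 1D Laplacian (tridiagonal, SPD) as a toy stand-in for a reservoir pressure matrix
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))   # residual norm
```

Production codes would use a sparse matrix format and a preconditioner; the dense matrix here only keeps the sketch short.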
Directory of Open Access Journals (Sweden)
R. Lenort
2013-07-01
Full Text Available Heavy machinery industry is characterized by a number of specific features that cause significant variations in the processing time of products in the individual workplaces and frequent occurrence of floating bottlenecks, which change their positions. Depending on the product range being processed, a given workplace is the bottleneck only for some period of time. When the bottleneck moves to another workplace, it leads to unnecessary loss of capacity of the floating bottleneck. To maximize utilization, it is necessary to protect those bottlenecks by creating special buffers. The objective of this article is to design a methodology for the determination and control of buffers that protect the floating bottlenecks from operating capacity losses caused by transfer of the constraint to another workplace. These buffers are referred to as "power buffers". The designed methodology has been verified in the process of machining forged pieces.
Fungal Beta-Glucosidases: A Bottleneck in Industrial Use of Lignocellulosic Materials
Directory of Open Access Journals (Sweden)
Peter S. Lübeck
2013-09-01
Full Text Available Profitable biomass conversion processes are highly dependent on the use of efficient enzymes for lignocellulose degradation. Among the cellulose-degrading enzymes, beta-glucosidases are essential for efficient hydrolysis of cellulosic biomass as they relieve the inhibition of the cellobiohydrolases and endoglucanases by reducing cellobiose accumulation. In this review, we discuss the important role beta-glucosidases play in complex biomass hydrolysis and how they create a bottleneck in industrial use of lignocellulosic materials. An efficient beta-glucosidase facilitates hydrolysis at specified process conditions, and key points to consider in this respect are hydrolysis rate, inhibitors, and stability. Product inhibition impairing yields, thermal inactivation of enzymes, and the high cost of enzyme production are the main obstacles to commercial cellulose hydrolysis. This sets the stage for the search for better alternatives to the currently available enzyme preparations, either by improving known beta-glucosidases or by screening for new ones.
Directory of Open Access Journals (Sweden)
Milosz Ciznicki
2015-01-01
Full Text Available The recent advent of novel multi- and many-core architectures forces application programmers to deal with hardware-specific implementation details and to be familiar with software optimisation techniques to benefit from new high-performance computing machines. Extra care must be taken for communication-intensive algorithms, which may be a bottleneck for the forthcoming era of exascale computing. This paper aims to present a high-level stencil framework implemented for the EULerian or LAGrangian model (EULAG) that efficiently utilises multi- and many-core architectures. Only an efficient usage of both many-core processors (CPUs) and graphics processing units (GPUs) with the flexible data decomposition method can lead to the maximum performance that scales the communication-intensive Generalized Conjugate Residual (GCR) elliptic solver with preconditioner.
Parallel sparse direct solver for integrated circuit simulation
Chen, Xiaoming; Yang, Huazhong
2017-01-01
This book describes algorithmic methods and parallelization techniques to design a parallel sparse direct solver which is specifically targeted at integrated circuit simulation problems. The authors describe a complete flow and detailed parallel algorithms of the sparse direct solver. They also show how to improve the performance by simple but effective numerical techniques. The sparse direct solver techniques described can be applied to any SPICE-like integrated circuit simulator and have been proven to deliver high performance in actual circuit simulation. Readers will benefit from the state-of-the-art parallel integrated circuit simulation techniques described in this book, especially the latest parallel sparse matrix solution techniques. · Introduces complicated algorithms of sparse linear solvers, using concise principles and simple examples, without complex theory or lengthy derivations; · Describes a parallel sparse direct solver that can be adopted to accelerate any SPICE-like integrated circuit simulato...
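The book's own solver is not reproduced here, but what a sparse direct (LU-factorize, then solve) workflow looks like in practice can be sketched with SciPy's SuperLU interface; the tridiagonal test matrix is only a loose stand-in for a circuit nodal-analysis matrix:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Small sparse system loosely resembling a nodal-analysis matrix (illustrative)
n = 1000
A = sp.diags([-1.0, 3.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

lu = splu(A)        # sparse LU factorization (SuperLU, fill-reducing ordering)
x = lu.solve(b)     # fast triangular solves; factor can be reused for many b's
print(np.linalg.norm(A @ x - b))
```

Reusing the factor `lu` across many right-hand sides mirrors how SPICE-like simulators amortize factorization cost over Newton iterations and time steps.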
High order Poisson Solver for unbounded flows
DEFF Research Database (Denmark)
Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe
2015-01-01
This paper presents a high order method for solving the unbounded Poisson equation on a regular mesh using a Green’s function solution. The high order convergence was achieved by formulating mollified integration kernels, that were derived from a filter regularisation of the solution field....... The method was implemented on a rectangular domain using fast Fourier transforms (FFT) to increase computational efficiency. The Poisson solver was extended to directly solve the derivatives of the solution. This is achieved either by including the differential operator in the integration kernel...... the equations of fluid mechanics as an example, but can be used in many physical problems to solve the Poisson equation on a rectangular unbounded domain. For the two-dimensional case we propose an infinitely smooth test function which allows for arbitrary high order convergence. Using Gaussian smoothing...
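The paper above treats the unbounded problem with mollified Green's-function kernels; as a much simpler, hedged sketch of the FFT-based Poisson solving it builds on (periodic 1D domain here, not the paper's unbounded formulation), one can divide by the squared wavenumber in Fourier space:

```python
import numpy as np

def poisson_fft_periodic(f, L=2 * np.pi):
    """Solve -u'' = f on a periodic 1D domain via FFT (zero-mean solution)."""
    n = f.size
    k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # wavenumbers
    fh = np.fft.fft(f)
    uh = np.zeros_like(fh)
    nonzero = k != 0
    uh[nonzero] = fh[nonzero] / k[nonzero] ** 2  # spectral inverse Laplacian
    return np.fft.ifft(uh).real

n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u_exact = np.sin(3 * x)
f = 9 * np.sin(3 * x)          # since -(sin 3x)'' = 9 sin 3x
u = poisson_fft_periodic(f)
print(np.max(np.abs(u - u_exact)))
```

Differentiating the solution spectrally (multiplying by ik instead of dividing by k²) corresponds to the paper's remark about solving directly for derivatives of the solution.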
Optimising a parallel conjugate gradient solver
Energy Technology Data Exchange (ETDEWEB)
Field, M.R. [O'Reilly Institute, Dublin (Ireland)]
1996-12-31
This work arises from the introduction of a parallel iterative solver to a large structural analysis finite element code. The code is called FEX and it was developed at Hitachi's Mechanical Engineering Laboratory. The FEX package can deal with a large range of structural analysis problems using a large number of finite element techniques. FEX can solve either stress or thermal analysis problems of a range of different types from plane stress to a full three-dimensional model. These problems can consist of a number of different materials which can be modelled by a range of material models. The structure being modelled can have the load applied at either a point or a surface, or by a pressure, a centrifugal force or just gravity. Alternatively a thermal load can be applied with a given initial temperature. The displacement of the structure can be constrained by having a fixed boundary or by prescribing the displacement at a boundary.
Finegold, M.; Mass, R.
1985-01-01
Good problem solvers and poor problem solvers in advanced physics (N=8) were significantly different in their ability in translating, planning, and physical reasoning, as well as in problem solving time; no differences in reliance on algebraic solutions and checking problems were noted. Implications for physics teaching are discussed. (DH)
Client-server computer architecture saves costs and eliminates bottlenecks
International Nuclear Information System (INIS)
Darukhanavala, P.P.; Davidson, M.C.; Tyler, T.N.; Blaskovich, F.T.; Smith, C.
1992-01-01
This paper reports that a workstation-based, client-server architecture saved costs and eliminated bottlenecks that BP Exploration (Alaska) Inc. experienced with mainframe computer systems. In 1991, BP embarked on an ambitious project to change technical computing for its Prudhoe Bay, Endicott, and Kuparuk operations on Alaska's North Slope. This project promised substantial rewards, but also involved considerable risk. The project plan called for reservoir simulations (which historically had run on a Cray Research Inc. X-MP supercomputer in the company's Houston data center) to be run on small computer workstations. Additionally, large Prudhoe Bay, Endicott, and Kuparuk production and reservoir engineering databases and related applications also would be moved to workstations, replacing a Digital Equipment Corp. VAX cluster in Anchorage.
Mask manufacturing improvement through capability definition and bottleneck line management
Strott, Al
1994-02-01
In 1989, Intel's internal mask operation limited itself to research and development activities and re-inspection and pellicle application of externally manufactured masks. Recognizing the rising capital cost of mask manufacturing at the leading edge, Intel's Mask Operation management decided to offset some of these costs by manufacturing more masks internally. This was the beginning of the challenge they set: to manufacture at least 50% of Intel's mask volume internally, at world-class performance levels. The first step in responding to this challenge was the completion of a comprehensive operation capability analysis. A series of bottleneck improvements by focus teams resulted in an average cycle time improvement to less than five days on all product and less than two days on critical products.
Strategic behavior and social outcomes in a bottleneck queue
DEFF Research Database (Denmark)
Breinbjerg, J.; Sebald, Alexander; Østerdal, L. P.
2016-01-01
We theoretically and experimentally study the differential incentive effects of three well-known queue disciplines in a strategic environment in which a bottleneck facility opens and impatient players decide when to arrive. For a class of three-player games, we derive equilibrium arrivals under the first-in-first-out (FIFO), last-in-first-out (LIFO), and service-in-random-order (SIRO) queue disciplines and compare these predictions to outcomes from a laboratory experiment. In line with our theoretical predictions, we find that people arrive with greater dispersion when participating under the LIFO discipline, whereas they tend to arrive immediately under FIFO and SIRO. As a consequence, shorter waiting times are obtained under LIFO as compared to FIFO and SIRO. However, while our theoretical predictions admit higher welfare under LIFO, this is not recovered experimentally as the queue disciplines...
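The FIFO/LIFO contrast in this abstract can be made concrete with a toy single-server simulation (my own illustrative sketch, not the authors' game model). Note that with arrival times held fixed, any work-conserving discipline yields the same total wait; the paper's welfare effect comes from players strategically adapting their arrivals to the discipline.

```python
def waiting_times(arrivals, discipline, service=1.0):
    """Single-server queue: return each player's wait (service start - arrival).
    discipline: 'FIFO' serves the earliest waiting arrival, 'LIFO' the latest."""
    pending = sorted(range(len(arrivals)), key=lambda i: arrivals[i])
    waits = [0.0] * len(arrivals)
    queue, t = [], 0.0
    while pending or queue:
        # admit everyone who has arrived by time t
        while pending and arrivals[pending[0]] <= t:
            queue.append(pending.pop(0))
        if not queue:                     # idle until the next arrival
            t = arrivals[pending[0]]
            continue
        i = queue.pop(0) if discipline == "FIFO" else queue.pop()
        waits[i] = t - arrivals[i]
        t += service
    return waits

arrivals = [0.0, 0.2, 0.4]   # three players arriving almost together
print(waiting_times(arrivals, "FIFO"))
print(waiting_times(arrivals, "LIFO"))
```

Under LIFO the late arrival jumps the queue, which is precisely what discourages everyone from arriving immediately and disperses equilibrium arrivals.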
Reducing Concurrency Bottlenecks in Parallel I/O Workloads
Energy Technology Data Exchange (ETDEWEB)
Manzanares, Adam C. [Los Alamos National Laboratory; Bent, John M. [Los Alamos National Laboratory; Wingate, Meghan [Los Alamos National Laboratory
2011-01-01
To enable high performance parallel checkpointing we introduced the Parallel Log Structured File System (PLFS). PLFS is middleware interposed on the file system stack to transform concurrent writing of one application file into many non-concurrently written component files. The promising effectiveness of PLFS makes it important to examine its performance for workloads other than checkpoint capture, notably the different ways that state snapshots may be later read, to make the case for using PLFS in the Exascale I/O stack. Reading a PLFS file involved reading each of its component files. In this paper we identify performance limitations on broader workloads in an early version of PLFS, specifically the need to build and distribute an index for the overall file, and the pressure on the underlying parallel file system's metadata server, and show how PLFS's decomposed components architecture can be exploited to alleviate bottlenecks in the underlying parallel file system.
Sjödin, Per; E Sjöstrand, Agnès; Jakobsson, Mattias; Blum, Michael G B
2012-07-01
Based on the accumulation of genetic, climatic, and fossil evidence, a central theory in paleoanthropology stipulates that a demographic bottleneck coincided with the origin of our species Homo sapiens. This theory proposes that anatomically modern humans--which were only present in Africa at the time--experienced a drastic bottleneck during the penultimate glacial age (130-190 kya) when a cold and dry climate prevailed. Two scenarios have been proposed to describe the bottleneck, which involve either a fragmentation of the range occupied by humans or the survival of one small group of humans. Here, we analyze DNA sequence data from 61 nuclear loci sequenced in three African populations using Approximate Bayesian Computation and numerical simulations. In contrast to the bottleneck theory, we show that a simple model without any bottleneck during the penultimate ice age has the greatest statistical support compared with bottleneck models. Although the proposed bottleneck is ancient, occurring at least 130 kya, we can discard the possibility that it did not leave detectable footprints in the DNA sequence data, except if the bottleneck involved less than a 3-fold reduction in population size. Finally, we confirm that a simple model without a bottleneck is able to reproduce the main features of the observed patterns of genetic variation. We conclude that models of Pleistocene refugium for modern human origins now require substantial revision.
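The 61-locus analysis above cannot be reproduced here, but the Approximate Bayesian Computation machinery it relies on can be sketched in its simplest rejection form on a toy model (all numbers and the summary statistic are my own illustrative assumptions):

```python
import random

def abc_rejection(observed, simulate, prior_sample, eps, n_draws=20000):
    """Rejection ABC: keep parameter draws whose simulated summary statistic
    lands within eps of the observed summary."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(simulate(theta) - observed) < eps:
            accepted.append(theta)
    return accepted

random.seed(1)
# Toy model: the summary statistic is a noisy copy of the parameter itself
observed = 2.0
posterior = abc_rejection(
    observed,
    simulate=lambda th: th + random.gauss(0.0, 0.1),
    prior_sample=lambda: random.uniform(0.0, 4.0),
    eps=0.05,
)
print(len(posterior), sum(posterior) / len(posterior))
```

In the actual study the "simulate" step would be a coalescent simulation under a demographic model (bottleneck vs. no bottleneck) and model choice compares acceptance rates across models.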
Cor, Ken; Alves, Cecilia; Gierl, Mark J.
2008-01-01
This review describes and evaluates a software add-in created by Frontline Systems, Inc., that can be used with Microsoft Excel 2007 to solve large, complex test assembly problems. The combination of Microsoft Excel 2007 with the Frontline Systems Premium Solver Platform is significant because Microsoft Excel is the most commonly used spreadsheet…
Comparison of open-source linear programming solvers.
Energy Technology Data Exchange (ETDEWEB)
Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.; Jones, Katherine A.; Martin, Nathaniel; Detry, Richard Joseph
2013-10-01
When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
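As a hedged, minimal example of calling an open-source LP solver of the kind this survey compares (here SciPy's bundled HiGHS backend rather than CLP or GLPK, whose APIs are not shown in the record):

```python
from scipy.optimize import linprog

# maximize x + 2y  subject to  x + y <= 4,  x <= 2,  x >= 0, y >= 0
# linprog minimizes, so we negate the objective coefficients
res = linprog(c=[-1, -2],
              A_ub=[[1, 1], [1, 0]],
              b_ub=[4, 2],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal point and maximized objective
```

The optimum puts all weight on y (y = 4, objective 8), so the x <= 2 constraint is slack; checking which constraints bind is a quick sanity test when comparing solvers on the same model.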
Learning Domain-Specific Heuristics for Answer Set Solvers
Balduccini, Marcello
2010-01-01
In spite of the recent improvements in the performance of Answer Set Programming (ASP) solvers, when the search space is sufficiently large, it is still possible for the search algorithm to mistakenly focus on areas of the search space that contain no solutions or very few. When that happens, performance degrades substantially, even to the point that the solver may need to be terminated before returning an answer. This prospect is a concern when one is considering using such a solver in an in...
A non-conforming 3D spherical harmonic transport solver
Energy Technology Data Exchange (ETDEWEB)
Van Criekingen, S. [Commissariat a l' Energie Atomique CEA-Saclay, DEN/DM2S/SERMA/LENR Bat 470, 91191 Gif-sur-Yvette, Cedex (France)
2006-07-01
A new 3D transport solver for the time-independent Boltzmann transport equation has been developed. This solver is based on the second-order even-parity form of the transport equation. The angular discretization is performed through the expansion of the angular neutron flux in spherical harmonics (PN method). The novelty of this solver is the use of non-conforming finite elements for the spatial discretization. Such elements lead to a discontinuous flux approximation. This interface continuity requirement relaxation property is shared with mixed-dual formulations such as the ones based on Raviart-Thomas finite elements. Encouraging numerical results are presented. (authors)
How Strategic Is the Central Bottleneck: Can It Be Overcome by Trying Harder?
Ruthruff, Eric; Johnston, James C.; Remington, Roger W.
2009-01-01
Recent dual-task studies suggest that a bottleneck prevents central mental operations from working on more than one task at a time, especially at relatively low practice levels. It remains highly controversial, however, whether this bottleneck is structural (inherent to human cognitive architecture) or merely a strategic choice. If the strategic…
Anatomy of a bottleneck: diagnosing factors limiting population growth in the Puerto Rican Parrot
S.R. Beissinger; Jr Wunderle; J.M. Meyers; B.E. Saether; S. Engen
2008-01-01
The relative importance of genetic, demographic, environmental, and catastrophic processes that maintain population bottlenecks has received little consideration. We evaluate the role of these factors in maintaining the Puerto Rican Parrot (Amazona vittata) in a prolonged bottleneck from 1973 through 2000 despite intensive conservation efforts. We first conduct a risk...
Refined isogeometric analysis for a preconditioned conjugate gradient solver
Garcia, Daniel; Pardo, D.; Dalcin, Lisandro; Calo, Victor M.
2018-01-01
Starting from a highly continuous Isogeometric Analysis (IGA) discretization, refined Isogeometric Analysis (rIGA) introduces C0 hyperplanes that act as separators for the direct LU factorization solver. As a result, the total computational cost
Two-dimensional time dependent Riemann solvers for neutron transport
International Nuclear Information System (INIS)
Brunner, Thomas A.; Holloway, James Paul
2005-01-01
A two-dimensional Riemann solver is developed for the spherical harmonics approximation to the time dependent neutron transport equation. The eigenstructure of the resulting equations is explored, giving insight into both the spherical harmonics approximation and the Riemann solver. The classic Roe-type Riemann solver used here was developed for one-dimensional problems, but can be used in multidimensional problems by treating each face of a two-dimensional computation cell in a locally one-dimensional way. Several test problems are used to explore the capabilities of both the Riemann solver and the spherical harmonics approximation. The numerical solution for a simple line source problem is compared to the analytic solution to both the P1 equation and the full transport solution. A lattice problem is used to test the method on a more challenging problem.
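The "locally one-dimensional" Roe-type flux idea in this abstract reduces, for a scalar linear equation, to plain upwinding. The sketch below (my own scalar stand-in, not the paper's PN system) advects a pulse once around a periodic domain with the Roe flux for u_t + a u_x = 0:

```python
import numpy as np

def upwind_step(u, a, dt, dx):
    """One step of a Roe-type (upwind) scheme for u_t + a u_x = 0, periodic.
    For the linear problem the Roe flux is a*uL if a > 0, else a*uR."""
    flux = a * (u if a > 0 else np.roll(u, -1))        # F at interface i+1/2
    return u - dt / dx * (flux - np.roll(flux, 1))     # conservative update

n, a = 200, 1.0
dx = 1.0 / n
dt = 0.5 * dx / abs(a)                  # CFL number 0.5
x = np.arange(n) * dx
u = np.exp(-100 * (x - 0.5) ** 2)       # smooth pulse
u0 = u.copy()
for _ in range(int(round(1.0 / (a * dt)))):   # advect once around the domain
    u = upwind_step(u, a, dt, dx)
print(np.max(np.abs(u - u0)))
```

The scheme is conservative and monotone (no new extrema), at the price of numerical diffusion; for the PN system the same construction is applied characteristic-by-characteristic on each cell face.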
Resolving Neighbourhood Relations in a Parallel Fluid Dynamic Solver
Frisch, Jerome; Mundani, Ralf-Peter; Rank, Ernst
2012-01-01
solver with a special aspect on the hierarchical data structure, unique cell and grid identification, and the neighbourhood relations in-between grids on different processes. A special server concept keeps track of every grid over all processes while
Advanced Algebraic Multigrid Solvers for Subsurface Flow Simulation
Chen, Meng-Huo; Sun, Shuyu; Salama, Amgad
2015-01-01
and issues will be addressed and the corresponding remedies will be studied. As the multigrid methods are used as the linear solver, the simulator can be parallelized (although not trivial) and the high-resolution simulation become feasible, the ultimately
Parallel iterative solvers and preconditioners using approximate hierarchical methods
Energy Technology Data Exchange (ETDEWEB)
Grama, A.; Kumar, V.; Sameh, A. [Univ. of Minnesota, Minneapolis, MN (United States)
1996-12-31
In this paper, we report results of the performance, convergence, and accuracy of a parallel GMRES solver for Boundary Element Methods. The solver uses a hierarchical approximate matrix-vector product based on a hybrid Barnes-Hut / Fast Multipole Method. We study the impact of various accuracy parameters on the convergence and show that with minimal loss in accuracy, our solver yields significant speedups. We demonstrate the excellent parallel efficiency and scalability of our solver. The combined speedups from approximation and parallelism represent an improvement of several orders of magnitude in solution time. We also develop fast and parallelizable preconditioners for this problem. We report on the performance of an inner-outer scheme and a preconditioner based on truncated Green's function. Experimental results on a 256 processor Cray T3D are presented.
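The key software pattern in this record, a Krylov solver driven by an approximate matrix-vector product rather than an explicit matrix, is easy to sketch with SciPy's `LinearOperator` (the matvec below is a trivial stand-in for a Barnes-Hut/FMM product, which is not implemented here):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 200
diag = np.linspace(1.0, 2.0, n)

def matvec(v):
    # Stand-in for an approximate hierarchical matrix-vector product:
    # GMRES only ever sees this function, never an assembled matrix.
    return diag * v + 0.01 * np.roll(v, 1)

A = LinearOperator((n, n), matvec=matvec)
b = np.ones(n)
x, info = gmres(A, b)      # info == 0 signals convergence
print(info, np.linalg.norm(matvec(x) - b))
```

This is exactly why approximation errors in the fast matvec interact with Krylov convergence, the trade-off the paper quantifies: GMRES converges to the operator it is given, not to the exact dense one.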
Bottlenecks of motion processing during a visual glance: the leaky flask model.
Öğmen, Haluk; Ekiz, Onur; Huynh, Duong; Bedell, Harold E; Tripathy, Srimant P
2013-01-01
Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay and we used the time-constant of the exponential-decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck for the stimulus-encoding stage is dominated by the selection compared to the filtering function of attention. We also found that the filtering function of attention is operating mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing.
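The demarcation step in this abstract, fitting an exponential decay to accuracy as a function of cue delay and reading off the time constant, can be sketched on synthetic data (the decay rate, amplitude, and absence of an asymptote below are my own illustrative assumptions, not the study's fitted model):

```python
import numpy as np

# Synthetic partial-report accuracy decaying with cue delay
tau_true = 0.3                          # assumed time constant, seconds
delays = np.linspace(0.0, 1.5, 16)      # cue delays
accuracy = 0.6 * np.exp(-delays / tau_true)

# Log-linear least squares recovers the time constant:
# log(accuracy) = log(a) - delay / tau, so tau = -1 / slope
slope, intercept = np.polyfit(delays, np.log(accuracy), 1)
tau_est = -1.0 / slope
print(tau_est)
```

With real data one would add an asymptote term for the VSTM floor and fit by nonlinear least squares, since the log-linear trick above only works for a pure exponential.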
Overcoming bottlenecks in the membrane protein structural biology pipeline.
Hardy, David; Bill, Roslyn M; Jawhari, Anass; Rothnie, Alice J
2016-06-15
Membrane proteins account for a third of the eukaryotic proteome, but are greatly under-represented in the Protein Data Bank. Unfortunately, recent technological advances in X-ray crystallography and EM cannot compensate for the poor solubility and stability of membrane protein samples. A limitation of conventional detergent-based methods is that detergent molecules destabilize membrane proteins, leading to their aggregation. The use of orthologues, mutants and fusion tags has helped improve protein stability, but at the expense of not working with the sequence of interest. Novel detergents such as glucose neopentyl glycol (GNG), maltose neopentyl glycol (MNG) and calixarene-based detergents can improve protein stability without compromising their solubilizing properties. Styrene maleic acid lipid particles (SMALPs) focus on retaining the native lipid bilayer of a membrane protein during purification and biophysical analysis. Overcoming bottlenecks in the membrane protein structural biology pipeline, primarily by maintaining protein stability, will facilitate the elucidation of many more membrane protein structures in the near future. © 2016 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.
HIV/AIDS: global trends, global funds and delivery bottlenecks
Directory of Open Access Journals (Sweden)
Hadingham Jacqui
2005-08-01
Globalisation affects all facets of human life, including health and well being. The HIV/AIDS epidemic has highlighted the global nature of human health and welfare, and globalisation has given rise to a trend toward finding common solutions to global health challenges. Numerous international funds have been set up in recent times to address global health challenges such as HIV. However, despite increasingly large amounts of funding for health initiatives being made available to poorer regions of the world, HIV infection rates and prevalence continue to increase worldwide. As a result, the AIDS epidemic is expanding and intensifying globally. Worst affected are undoubtedly the poorer regions of the world, as combinations of poverty, disease, famine, political and economic instability and weak health infrastructure exacerbate the severe and far-reaching impacts of the epidemic. One of the major reasons for the apparent ineffectiveness of global interventions is historical weaknesses in the health systems of underdeveloped countries, which contribute to bottlenecks in the distribution and utilisation of funds. Strengthening these health systems, although a vital component in addressing the global epidemic, must however be accompanied by mitigation of other determinants as well. These are intrinsically complex and include social and environmental factors, sexual behaviour, issues of human rights and biological factors, all of which contribute to HIV transmission, progression and mortality. An equally important factor is ensuring an equitable balance between prevention and treatment programmes in order to holistically address the challenges presented by the epidemic.
clubber: removing the bioinformatics bottleneck in big data analyses
Miller, Maximilian; Zhu, Chengsheng; Bromberg, Yana
2018-01-01
With the advent of modern day high-throughput technologies, the bottleneck in biological discovery has shifted from the cost of doing experiments to that of analyzing results. clubber is our automated cluster-load balancing system developed for optimizing these “big data” analyses. Its plug-and-play framework encourages re-use of existing solutions for bioinformatics problems. clubber’s goals are to reduce computation times and to facilitate use of cluster computing. The first goal is achieved by automating the balance of parallel submissions across available high performance computing (HPC) resources. Notably, the latter can be added on demand, including cloud-based resources, and/or featuring heterogeneous environments. The second goal of making HPCs user-friendly is facilitated by an interactive web interface and a RESTful API, allowing for job monitoring and result retrieval. We used clubber to speed up our pipeline for annotating molecular functionality of metagenomes. Here, we analyzed the Deepwater Horizon oil-spill study data to quantitatively show that the beach sands have not yet entirely recovered. Further, our analysis of the CAMI-challenge data revealed that microbiome taxonomic shifts do not necessarily correlate with functional shifts. These examples (21 metagenomes processed in 172 min) clearly illustrate the importance of clubber in the everyday computational biology environment. PMID:28609295
Narrower bottlenecks could be more efficient for concentrating choanoflagellates
Sparacino, J.; Miño, G.; Koehl, M. A. R.; King, N.; Stocker, R.; Banchio, A. J.; Marconi, V. I.
2015-11-01
In evolutionary biology, choanoflagellates are broadly investigated as the closest living relatives of the animal ancestors. Under diverse environmental cues, the choanoflagellate Salpingoeca rosetta can differentiate into two types of solitary swimming cells: slow and fast microswimmers. Here we present a first phenomenological 2D model for choanoflagellate dynamics confined in a flat device divided by a wall of asymmetric microconstrictions. The model allows us to optimize the geometry of the microchannels for directing and concentrating cell populations under strict control. We solve our set of dynamical equations using Langevin dynamics. Experimental parameters for the motility of the slow and fast cells were measured and used for our numerical estimates of the directed transport efficiency, so the model has no adjustable parameters. We find remarkable differences in the rectification results for slow and fast choanoflagellates, which suggest a strategy for developing a suitable microfluidic sorting device. For a given population velocity, narrower bottlenecks, of a size similar to the cell dimension, prove more efficient at concentrating populations. Experiments and simulations are in good agreement.
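The Langevin dynamics behind the cell model can be illustrated with the standard overdamped active Brownian particle update; the speeds, rotational diffusion coefficient, and time step below are made-up stand-ins for the measured motility parameters:

```python
import math, random

def simulate_abp(v, d_r, dt, steps, seed=1):
    # Overdamped active Brownian particle in 2D: constant swim speed v,
    # rotational diffusion coefficient d_r. A common minimal model for
    # microswimmers; parameters here are illustrative only.
    rng = random.Random(seed)
    x = y = 0.0
    theta = 0.0
    for _ in range(steps):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += math.sqrt(2.0 * d_r * dt) * rng.gauss(0.0, 1.0)
    return x, y

# A "fast" and a "slow" swimmer cover very different distances in the same
# time, which is what makes geometric sorting by bottlenecks possible.
fast = simulate_abp(v=100.0, d_r=0.5, dt=0.01, steps=1000)
slow = simulate_abp(v=10.0, d_r=0.5, dt=0.01, steps=1000)
print(math.hypot(*fast) > math.hypot(*slow))
```

With the same noise seed the orientation path is identical, so the displacement scales directly with swim speed; in the full model the asymmetric constrictions then rectify this difference.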
Analysis of registered CDM projects: potential removal of evidenced bottlenecks
Energy Technology Data Exchange (ETDEWEB)
Agosto, D.; Bombard, P.; Gostinelli, F.
2007-07-01
The Clean Development Mechanism (CDM) has developed, during its first period of implementation, a distinctive set of patterns. The authors concentrate on CDM analysis in order to highlight potential remedies for, or reasons behind, given bottlenecks. To establish an extensive SWOT analysis for CDMs, all 356 projects registered at the UNFCCC as of November 2006 were examined, together with the roughly 1,000 PDDs submitted to the UNFCCC but not yet registered. The CDM projects were studied by clustering projects according to relevant characteristics, from both a technical and an economic point of view. The chosen indicators are meant to identify: the more convenient/more diffused energy systems for a CDM; reasons for the geographical distribution of different types of projects; and potentials for future exploitation of less-used technologies in CDM. Conclusions are drawn and appropriate tables and graphs presented: (1) the Baseline Emission Factor, combined with economic patterns, is the pivotal factor that characterizes both the choice of host country and of technology; (2) some technologies can appropriately exploit the CDM scheme, whilst other technologies are constrained by it; (3) there are still some important weak points: the grouping of non-Annex I countries; the crediting period; and the criteria for the evaluation of sustainable development. (auth)
clubber: removing the bioinformatics bottleneck in big data analyses.
Miller, Maximilian; Zhu, Chengsheng; Bromberg, Yana
2017-06-13
With the advent of modern day high-throughput technologies, the bottleneck in biological discovery has shifted from the cost of doing experiments to that of analyzing results. clubber is our automated cluster-load balancing system developed for optimizing these "big data" analyses. Its plug-and-play framework encourages re-use of existing solutions for bioinformatics problems. clubber's goals are to reduce computation times and to facilitate use of cluster computing. The first goal is achieved by automating the balance of parallel submissions across available high performance computing (HPC) resources. Notably, the latter can be added on demand, including cloud-based resources, and/or featuring heterogeneous environments. The second goal of making HPCs user-friendly is facilitated by an interactive web interface and a RESTful API, allowing for job monitoring and result retrieval. We used clubber to speed up our pipeline for annotating molecular functionality of metagenomes. Here, we analyzed the Deepwater Horizon oil-spill study data to quantitatively show that the beach sands have not yet entirely recovered. Further, our analysis of the CAMI-challenge data revealed that microbiome taxonomic shifts do not necessarily correlate with functional shifts. These examples (21 metagenomes processed in 172 min) clearly illustrate the importance of clubber in the everyday computational biology environment.
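The core load-balancing idea (sending each submission to the least-loaded available HPC or cloud resource) can be sketched generically. This is an illustration of the concept only, not clubber's actual API; the resource and job names are invented:

```python
def pick_resource(loads, pending_jobs):
    # Greedy load balancing: assign each pending job to the currently
    # least-loaded resource, then account for its cost. A generic sketch
    # of what a cluster load-balancer automates.
    assignment = {}
    for job, cost in pending_jobs:
        target = min(loads, key=loads.get)  # least-loaded resource
        loads[target] += cost
        assignment[job] = target
    return assignment

# Invented resources and jobs; loads are in arbitrary work units.
loads = {"local-hpc": 3.0, "cloud-a": 1.0, "cloud-b": 2.0}
jobs = [("metagenome-01", 2.0), ("metagenome-02", 2.0), ("metagenome-03", 2.0)]
plan = pick_resource(loads, jobs)
print(plan)
```

Each job lands on a different resource here because every assignment raises that resource's load above one of the others, which is the balancing behavior the abstract describes.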
clubber: removing the bioinformatics bottleneck in big data analyses
Directory of Open Access Journals (Sweden)
Miller Maximilian
2017-06-01
With the advent of modern day high-throughput technologies, the bottleneck in biological discovery has shifted from the cost of doing experiments to that of analyzing results. clubber is our automated cluster-load balancing system developed for optimizing these “big data” analyses. Its plug-and-play framework encourages re-use of existing solutions for bioinformatics problems. clubber’s goals are to reduce computation times and to facilitate use of cluster computing. The first goal is achieved by automating the balance of parallel submissions across available high performance computing (HPC) resources. Notably, the latter can be added on demand, including cloud-based resources, and/or featuring heterogeneous environments. The second goal of making HPCs user-friendly is facilitated by an interactive web interface and a RESTful API, allowing for job monitoring and result retrieval. We used clubber to speed up our pipeline for annotating molecular functionality of metagenomes. Here, we analyzed the Deepwater Horizon oil-spill study data to quantitatively show that the beach sands have not yet entirely recovered. Further, our analysis of the CAMI-challenge data revealed that microbiome taxonomic shifts do not necessarily correlate with functional shifts. These examples (21 metagenomes processed in 172 min) clearly illustrate the importance of clubber in the everyday computational biology environment.
Parsing a cognitive task: a characterization of the mind's bottleneck.
Directory of Open Access Journals (Sweden)
Mariano Sigman
2005-02-01
Parsing a mental operation into components, characterizing the parallel or serial nature of this flow, and understanding what each process ultimately contributes to response time are fundamental questions in cognitive neuroscience. Here we show how a simple theoretical model leads to an extended set of predictions concerning the distribution of response time and its alteration by simultaneous performance of another task. The model provides a synthesis of psychological refractory period and random-walk models of response time. It merely assumes that a task consists of three consecutive stages (perception, decision based on noisy integration of evidence, and response) and that the perceptual and motor stages can operate simultaneously with stages of another task, while the central decision process constitutes a bottleneck. We designed a number-comparison task that provided a thorough test of the model by allowing independent variations in number notation, numerical distance, response complexity, and temporal asynchrony relative to an interfering probe task of tone discrimination. The results revealed a parsing of the comparison task in which each variable affects only one stage. Numerical distance affects the integration process, which is the only step that cannot proceed in parallel and has a major contribution to response time variability. The other stages, mapping the numeral to an internal quantity and executing the motor response, can be carried out in parallel with another task. Changing the duration of these processes has no significant effect on the variance.
Eye shape and the nocturnal bottleneck of mammals.
Hall, Margaret I; Kamilar, Jason M; Kirk, E Christopher
2012-12-22
Most vertebrate groups exhibit eye shapes that vary predictably with activity pattern. Nocturnal vertebrates typically have large corneas relative to eye size as an adaptation for increased visual sensitivity. Conversely, diurnal vertebrates generally demonstrate smaller corneas relative to eye size as an adaptation for increased visual acuity. By contrast, several studies have concluded that many mammals exhibit typical nocturnal eye shapes, regardless of activity pattern. However, a recent study has argued that new statistical methods allow eye shape to accurately predict activity patterns of mammals, including cathemeral species (animals that are equally likely to be awake and active at any time of day or night). Here, we conduct a detailed analysis of eye shape and activity pattern in mammals, using a broad comparative sample of 266 species. We find that the eye shapes of cathemeral mammals completely overlap with nocturnal and diurnal species. Additionally, most diurnal and cathemeral mammals have eye shapes that are most similar to those of nocturnal birds and lizards. The only mammalian clade that diverges from this pattern is anthropoids, which have convergently evolved eye shapes similar to those of diurnal birds and lizards. Our results provide additional evidence for a nocturnal 'bottleneck' in the early evolution of crown mammals.
Clogging transition of many-particle systems flowing through bottlenecks
Zuriguel, Iker; Parisi, Daniel Ricardo; Hidalgo, Raúl Cruz; Lozano, Celia; Janda, Alvaro; Gago, Paula Alejandra; Peralta, Juan Pablo; Ferrer, Luis Miguel; Pugnaloni, Luis Ariel; Clément, Eric; Maza, Diego; Pagonabarraga, Ignacio; Garcimartín, Angel
2014-12-01
When a large set of discrete bodies passes through a bottleneck, the flow may become intermittent due to the development of clogs that obstruct the constriction. Clogging is observed, for instance, in colloidal suspensions, granular materials and crowd swarming, where consequences may be dramatic. Despite its ubiquity, a general framework embracing research in such a wide variety of scenarios is still lacking. We show that in systems of very different nature and scale (including sheep herds, pedestrian crowds, assemblies of grains, and colloids) the probability distribution of time lapses between the passages of consecutive bodies exhibits a power-law tail with an exponent that depends on the system condition. Consequently, we identify the transition to clogging in terms of the divergence of the average time lapse. Such a unified description allows us to put forward a qualitative clogging state diagram whose most conspicuous feature is a length scale related to the finite size of the orifice. This approach helps to understand paradoxical phenomena, such as the faster-is-slower effect predicted for pedestrians evacuating a room, and might become a starting point for researchers working in a wide variety of situations where clogging represents a hindrance.
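The clogging criterion (the average time lapse diverges once the power-law tail exponent is small enough) can be illustrated with the closed-form truncated mean of a power-law distribution; the exponents below are illustrative, not values from the paper:

```python
def truncated_mean(alpha, t_max):
    # Mean of a power-law pdf p(t) = (alpha - 1) * t**(-alpha) for t >= 1,
    # truncated at t_max (closed form, valid for alpha != 2).
    return (alpha - 1.0) / (alpha - 2.0) * (1.0 - t_max ** (2.0 - alpha))

# Exponent above 2: the mean time lapse converges as observation time grows.
# Exponent at or below 2: the mean keeps growing, signalling clogging.
free = [truncated_mean(3.0, t) for t in (1e2, 1e4, 1e6)]
clogged = [truncated_mean(1.5, t) for t in (1e2, 1e4, 1e6)]
print(free)
print(clogged)
```

With alpha = 3 the truncated mean saturates near 2 regardless of the cutoff, whereas with alpha = 1.5 it grows roughly as the square root of the observation window, which is the divergence the transition criterion detects.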
A Python interface to Diffpack-based classes and solvers
Munthe-Kaas, Heidi Vikki
2013-01-01
Python is a programming language that has gained a lot of popularity during the last 15 years, and as a very easy-to-learn and flexible scripting language it is very well suited for computational science, both in mathematics and in physics. Diffpack is a PDE library written in C++, made for easier implementation of both smaller PDE solvers and larger libraries of simulators. It contains large class hierarchies for different solvers, grids, arrays, parallel computing and almost everything…
International Nuclear Information System (INIS)
Anton, Luis; MartI, Jose M; Ibanez, Jose M; Aloy, Miguel A.; Mimica, Petar; Miralles, Juan A.
2010-01-01
We obtain renormalized sets of right and left eigenvectors of the flux vector Jacobians of the relativistic MHD equations, which are regular and span a complete basis in any physical state including degenerate ones. The renormalization procedure relies on the characterization of the degeneracy types in terms of the normal and tangential components of the magnetic field to the wave front in the fluid rest frame. Proper expressions of the renormalized eigenvectors in conserved variables are obtained through the corresponding matrix transformations. Our work completes previous analyses that present different sets of right eigenvectors for non-degenerate and degenerate states, and can be seen as a relativistic generalization of earlier work performed in classical MHD. Based on the full wave decomposition (FWD) provided by the renormalized set of eigenvectors in conserved variables, we have also developed a linearized (Roe-type) Riemann solver. Extensive testing against one- and two-dimensional standard numerical problems allows us to conclude that our solver is very robust. When compared with a family of simpler solvers that avoid the knowledge of the full characteristic structure of the equations in the computation of the numerical fluxes, our solver turns out to be less diffusive than HLL and HLLC, and comparable in accuracy to the HLLD solver. The amount of operations needed by the FWD solver makes it less efficient computationally than those of the HLL family in one-dimensional problems. However, its relative efficiency increases in multidimensional simulations.
Comparing direct and iterative equation solvers in a large structural analysis software system
Poole, E. L.
1991-01-01
Two direct Choleski equation solvers and two iterative preconditioned conjugate gradient (PCG) equation solvers used in a large structural analysis software system are described. The two direct solvers are implementations of the Choleski method for variable-band matrix storage and sparse matrix storage. The two iterative PCG solvers include the Jacobi conjugate gradient method and an incomplete Choleski conjugate gradient method. The performance of the direct and iterative solvers is compared by solving several representative structural analysis problems. Some key factors affecting the performance of the iterative solvers relative to the direct solvers are identified.
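One of the iterative methods mentioned, Jacobi-preconditioned conjugate gradients, can be sketched on a small symmetric positive-definite system; the matrix below is an arbitrary example, not one of the structural problems from the paper:

```python
def jacobi_pcg(a, b, tol=1e-10, max_iter=200):
    # Conjugate gradients with a Jacobi (diagonal) preconditioner for a
    # symmetric positive-definite dense matrix a (lists of lists).
    n = len(b)
    x = [0.0] * n
    matvec = lambda m, v: [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
    r = b[:]                                  # residual for x = 0
    z = [r[i] / a[i][i] for i in range(n)]    # apply M^-1 = diag(a)^-1
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        ap = matvec(a, p)
        alpha = rz / sum(p[i] * ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * ap[i] for i in range(n)]
        if sum(v * v for v in r) ** 0.5 < tol:
            break
        z = [r[i] / a[i][i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# Small SPD test system (illustrative).
a = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = jacobi_pcg(a, b)
residual = max(abs(b[i] - sum(a[i][j] * x[j] for j in range(3))) for i in range(3))
print(residual < 1e-8)
```

The diagonal preconditioner is the cheapest of the options compared in the paper; the incomplete-Choleski variant replaces the diagonal solve with a sparse triangular solve but follows the same iteration.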
A robust multilevel simultaneous eigenvalue solver
Costiner, Sorin; Taasan, Shlomo
1993-01-01
Multilevel (ML) algorithms for eigenvalue problems are often faced with several types of difficulties such as: the mixing of approximated eigenvectors by the solution process, the approximation of incomplete clusters of eigenvectors, the poor representation of solutions on coarse levels, and the existence of close or equal eigenvalues. Algorithms that do not appropriately treat these difficulties usually fail, or their performance degrades when facing them. These issues motivated the development of a robust adaptive ML algorithm which treats these difficulties, for the calculation of a few eigenvectors and their corresponding eigenvalues. The main techniques used in the new algorithm include: the adaptive completion and separation of the relevant clusters on different levels, the simultaneous treatment of solutions within each cluster, and robustness tests which monitor the algorithm's efficiency and convergence. The eigenvectors' separation efficiency is based on a new ML projection technique generalizing the Rayleigh-Ritz projection, combined with a backrotation technique. These separation techniques, when combined with an FMG formulation, in many cases lead to algorithms of O(qN) complexity, for q eigenvectors of size N on the finest level. Previously developed ML algorithms are less focused on the mentioned difficulties. Moreover, algorithms which employ fine-level separation techniques are of O(q²N) complexity and usually do not overcome all these difficulties. Computational examples are presented where Schrödinger-type eigenvalue problems in 2-D and 3-D, having equal and closely clustered eigenvalues, are solved with the efficiency of the Poisson multigrid solver. A second-order approximation is obtained in O(qN) work, where the total computational work is equivalent to only a few fine-level relaxations per eigenvector.
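The Rayleigh-Ritz-style separation at the heart of the algorithm builds on classical orthogonal (subspace) iteration, sketched below; the multilevel machinery, cluster completion, and backrotations of the paper are not reproduced, and the test matrix is illustrative:

```python
def orthogonal_iteration(a, q, iters=100):
    # Orthogonal (subspace) iteration: repeatedly apply the matrix to a
    # block of vectors and re-orthonormalize with Gram-Schmidt; the
    # Rayleigh quotients of the converged block approximate the dominant
    # eigenvalues.
    n = len(a)
    matvec = lambda v: [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
    for _ in range(iters):
        w = [matvec(v) for v in q]
        q = []
        for v in w:
            for u in q:  # orthogonalize against already accepted vectors
                c = sum(x * y for x, y in zip(u, v))
                v = [x - c * y for x, y in zip(v, u)]
            norm = sum(x * x for x in v) ** 0.5
            q.append([x / norm for x in v])
    rayleigh = []
    for v in q:
        av = matvec(v)
        rayleigh.append(sum(x * y for x, y in zip(v, av)))
    return q, rayleigh

# Symmetric test matrix with eigenvalues 3, 2 and 1 (illustrative).
a = [[2.0, 1.0, 0.0], [1.0, 2.0, 0.0], [0.0, 0.0, 2.0]]
q0 = [[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]]
vecs, evals = orthogonal_iteration(a, q0)
print(sorted(round(e, 6) for e in evals))
```

For close or equal eigenvalues this plain iteration separates vectors slowly or not at all, which is exactly the failure mode the paper's adaptive cluster treatment addresses.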
Integrated Variable Speed Limits Control and Ramp Metering for Bottleneck Regions on Freeway
Directory of Open Access Journals (Sweden)
Ming-hui Ma
2015-01-01
To enhance the efficiency of the existing freeway system and thereby mitigate traffic congestion and related problems in the freeway mainline lane-drop bottleneck region, an advanced bottleneck control strategy is essential. This paper proposes a method that integrates variable speed limits and ramp metering for freeway bottleneck region control to relieve congestion in the bottleneck region. To this end, based on analyses of the spatial-temporal patterns of traffic flow, a macroscopic traffic flow model is extended to describe the traffic flow operating characteristics by considering the impacts of variable speed limits in the mainstream bottleneck region. In addition, to balance the priority of vehicles on the mainline and on-ramp, increase capacity, and reduce travel delay in the bottleneck region, an improved control model, as well as an advanced control strategy that integrates variable speed limits and ramp metering, is developed. The proposed method is tested in a simulation of a real freeway infrastructure calibrated with real traffic data. The results demonstrate that the proposed method can substantially improve the traffic flow efficiency of the mainline and on-ramp and enhance the quality of traffic flow at the investigated freeway mainline bottleneck.
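A classic local ramp-metering feedback law, ALINEA, illustrates the metering half of such a controller; the gain, rate bounds, and occupancy values below are illustrative, and the paper's integrated VSL-plus-metering controller is more elaborate:

```python
def alinea(rate_prev, occupancy, occ_target, gain=70.0, r_min=200.0, r_max=1800.0):
    # ALINEA-style local ramp metering: adjust the metering rate (veh/h)
    # in proportion to the gap between measured downstream occupancy and
    # its target, clipped to the ramp's feasible range.
    rate = rate_prev + gain * (occ_target - occupancy)
    return max(r_min, min(r_max, rate))

rate = 900.0
for occ in [0.30, 0.28, 0.25, 0.22, 0.20]:   # measured occupancies per interval
    rate = alinea(rate, occ, occ_target=0.20)
print(round(rate, 1))
```

While occupancy exceeds the target, the metering rate steadily falls, throttling the on-ramp inflow; once occupancy reaches the target the rate stabilizes.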
Damage to white matter bottlenecks contributes to language impairments after left hemispheric stroke
Directory of Open Access Journals (Sweden)
Joseph C. Griffis
2017-01-01
Damage to the white matter underlying the left posterior temporal lobe leads to deficits in multiple language functions. The posterior temporal white matter may correspond to a bottleneck where both dorsal and ventral language pathways are vulnerable to simultaneous damage. Damage to a second putative white matter bottleneck in the left deep prefrontal white matter involving projections associated with ventral language pathways and thalamo-cortical projections has recently been proposed as a source of semantic deficits after stroke. Here, we first used white matter atlases to identify the previously described white matter bottlenecks in the posterior temporal and deep prefrontal white matter. We then assessed the effects of damage to each region on measures of verbal fluency, picture naming, and auditory semantic decision-making in 43 chronic left hemispheric stroke patients. Damage to the posterior temporal bottleneck predicted deficits on all tasks, while damage to the anterior bottleneck only significantly predicted deficits in verbal fluency. Importantly, the effects of damage to the bottleneck regions were not attributable to lesion volume, lesion loads on the tracts traversing the bottlenecks, or damage to nearby cortical language areas. Multivariate lesion-symptom mapping revealed additional lesion predictors of deficits. Post-hoc fiber tracking of the peak white matter lesion predictors using a publicly available tractography atlas revealed evidence consistent with the results of the bottleneck analyses. Together, our results provide support for the proposal that spatially specific white matter damage affecting bottleneck regions, particularly in the posterior temporal lobe, contributes to chronic language deficits after left hemispheric stroke. This may reflect the simultaneous disruption of signaling in dorsal and ventral language processing streams.
Bottlenecks and Waiting Points in Nucleosynthesis in X-ray bursts and Novae
Smith, Michael S.; Sunayama, Tomomi; Hix, W. Raphael; Lingerfelt, Eric J.; Nesaraja, Caroline D.
2010-08-01
To better understand the energy generation and element synthesis occurring in novae and X-ray bursts, we give quantitative definitions to the concepts of "bottlenecks" and "waiting points" in the thermonuclear reaction flow. We use these criteria to search for bottlenecks and waiting points in post-processing element synthesis explosion simulations. We have incorporated these into the Computational Infrastructure for Nuclear Astrophysics, a suite of nuclear astrophysics codes available online at nucastrodata.org, so that anyone may perform custom searches for bottlenecks and waiting points.
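A toy version of the waiting-point idea: in a steady-state reaction chain, material accumulates wherever the destruction rate is slow. The criterion below is a simple illustrative proxy, not the quantitative definitions introduced in the paper:

```python
def waiting_points(rates, threshold=0.1):
    # Toy chain A0 -> A1 -> ... with destruction rates `rates`. In steady
    # state with unit flow, the abundance of species i is 1 / rates[i], so
    # species with slow destruction rates accumulate material: a simple
    # proxy for "waiting points" in the flow.
    abundances = [1.0 / r for r in rates]
    total = sum(abundances)
    return [i for i, a in enumerate(abundances) if a / total > threshold]

# A slow rate at index 2 makes that species a waiting point, and the link
# out of it a bottleneck of the flow (rates are arbitrary toy values).
rates = [10.0, 8.0, 0.05, 12.0, 9.0]
print(waiting_points(rates))
```

The species holding most of the steady-state abundance marks where the flow stalls, which is the qualitative picture the quantitative criteria formalize.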
Refined isogeometric analysis for a preconditioned conjugate gradient solver
Garcia, Daniel
2018-02-12
Starting from a highly continuous Isogeometric Analysis (IGA) discretization, refined Isogeometric Analysis (rIGA) introduces C0 hyperplanes that act as separators for the direct LU factorization solver. As a result, the total computational cost required to solve the corresponding system of equations using a direct LU factorization solver dramatically reduces (by up to a factor of 55; Garcia et al., 2017). At the same time, rIGA enriches the IGA spaces, thus improving the best approximation error. In this work, we extend the complexity analysis of rIGA to the case of iterative solvers. We build an iterative solver as follows: we first construct the Schur complements using a direct solver over small subdomains (macro-elements). We then assemble those Schur complements into a global skeleton system. Subsequently, we solve this system iteratively using Conjugate Gradients (CG) with an incomplete LU (ILU) preconditioner. For a 2D Poisson model problem with a structured mesh and a uniform polynomial degree of approximation, rIGA achieves moderate savings with respect to IGA in terms of the number of Floating Point Operations (FLOPs) and computational time (in seconds) required to solve the resulting system of linear equations. For instance, for a mesh with four million elements and polynomial degree p=3, the iterative solver is approximately 2.6 times faster (in time) when applied to the rIGA system than to the IGA one. These savings occur because the skeleton rIGA system contains fewer non-zero entries than the IGA one. The opposite situation occurs for 3D problems, and as a result, 3D rIGA discretizations provide no gains with respect to their IGA counterparts when considering iterative solvers.
BCYCLIC: A parallel block tridiagonal matrix cyclic solver
Hirshman, S. P.; Perumalla, K. S.; Lynch, V. E.; Sanchez, R.
2010-09-01
A block tridiagonal matrix is factored with minimal fill-in using a cyclic reduction algorithm that is easily parallelized. Storage of the factored blocks allows the application of the inverse to multiple right-hand sides which may not be known at factorization time. Scalability with the number of block rows is achieved with cyclic reduction, while scalability with the block size is achieved using multithreaded routines (OpenMP, GotoBLAS) for block matrix manipulation. This dual scalability is a noteworthy feature of this new solver, as well as its ability to efficiently handle arbitrary (non-powers-of-2) block row and processor numbers. Comparison with a state-of-the-art parallel sparse solver is presented. It is expected that this new solver will allow many physical applications to optimally use the parallel resources on current supercomputers. Example usage of the solver in magneto-hydrodynamic (MHD), three-dimensional equilibrium solvers for high-temperature fusion plasmas is cited.
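Scalar cyclic reduction, which BCYCLIC generalizes to blocks, can be sketched in a few lines: eliminate the odd-indexed unknowns, solve the half-size even-index system recursively, then back-substitute. The test system is a small 1D Poisson-like problem with a known solution:

```python
def cyclic_reduction(a, b, c, d):
    # Solve a tridiagonal system (sub-diagonal a, diagonal b, super-
    # diagonal c, right-hand side d, all length n with a[0] = c[-1] = 0).
    n = len(b)
    if n == 1:
        return [d[0] / b[0]]
    # Combine each even equation with its odd neighbors to eliminate the
    # odd unknowns, yielding a tridiagonal system on the even indices.
    ra, rb, rc, rd = [], [], [], []
    for i in range(0, n, 2):
        ai, bi, ci, di = a[i], b[i], c[i], d[i]
        if i > 0:
            f = -ai / b[i - 1]
            bi += f * c[i - 1]
            di += f * d[i - 1]
            ai = f * a[i - 1]
        if i < n - 1:
            g = -ci / b[i + 1]
            bi += g * a[i + 1]
            di += g * d[i + 1]
            ci = g * c[i + 1]
        ra.append(ai); rb.append(bi); rc.append(ci); rd.append(di)
    even = cyclic_reduction(ra, rb, rc, rd)
    # Back-substitute the odd-indexed unknowns.
    x = [0.0] * n
    for k, i in enumerate(range(0, n, 2)):
        x[i] = even[k]
    for i in range(1, n, 2):
        right = c[i] * x[i + 1] if i < n - 1 else 0.0
        x[i] = (d[i] - a[i] * x[i - 1] - right) / b[i]
    return x

# 1D Poisson-like system with known solution x = [1, 2, 3, 4, 5].
n = 5
a = [0.0] + [-1.0] * (n - 1)
b = [2.0] * n
c = [-1.0] * (n - 1) + [0.0]
x_true = [1.0, 2.0, 3.0, 4.0, 5.0]
d = [b[i] * x_true[i] + (a[i] * x_true[i - 1] if i else 0.0)
     + (c[i] * x_true[i + 1] if i < n - 1 else 0.0) for i in range(n)]
x = cyclic_reduction(a, b, c, d)
print(all(abs(u - v) < 1e-9 for u, v in zip(x, x_true)))
```

In the block version the scalar divisions become block inversions and the multiplications become block matrix products, which is where the multithreaded BLAS routines provide the second axis of scalability.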
MINOS: A simplified Pn solver for core calculation
International Nuclear Information System (INIS)
Baudron, A.M.; Lautard, J.J.
2007-01-01
This paper describes a new generation of the neutronic core solver MINOS resulting from developments done in the DESCARTES project. For performance reasons, the numerical method of the existing MINOS solver in the SAPHYR system has been reused in the new system. It is based on the mixed-dual finite element approximation of the simplified transport equation. We have extended the previous method to the treatment of unstructured geometries composed of quadrilaterals, allowing us to treat geometries where fuel pins are exactly represented. For Cartesian geometries, the solver takes into account assembly discontinuity coefficients in the simplified Pn context. The solver has been rewritten in the C++ programming language using an object-oriented design. Its general architecture was reconsidered in order to improve its capacity for evolution and its maintainability. Moreover, the performance of the previous version has been improved, mainly regarding the matrix construction time; this significantly improves the performance of the solver in the context of industrial applications requiring thermal-hydraulic feedback and depletion calculations. (authors)
Managing Innovation Probabilities: Core-driven vs. Bottleneck-removing Innovations
Tsutomu Harada
2015-01-01
This paper provides a simplified framework of focusing devices that generate different patterns of innovation, i.e., core-driven and bottleneck-removing innovations, and discusses the managerial implications. We show that core-driven innovation should be undertaken when technology components are independent (independent technology system), while bottleneck-removing innovation should be pursued when they are interdependent (interdependent technology system). Different types of focusing device ...
Bottleneck analysis at district level to illustrate gaps within the district health system in Uganda
Kiwanuka Henriksson, Dorcus; Fredriksson, Mio; Waiswa, Peter; Selling, Katarina; Swartling Peterson, Stefan
2017-01-01
Background: Poor quality of care and access to effective and affordable interventions have been attributed to constraints and bottlenecks within and outside the health system. However, there is limited understanding of health system barriers to utilization and delivery of appropriate, high-impact, and cost-effective interventions at the point of service delivery in districts and sub-districts in low-income countries. In this study we illustrate the use of the bottleneck analysis approach, which could be used to identify bottlenecks in service delivery within the district health system. Methods: A modified Tanahashi model with six determinants for effective coverage was used to determine bottlenecks in service provision for maternal and newborn care. The following interventions provided during antenatal care were used as tracer interventions: use of iron and folic acid, intermittent presumptive treatment for malaria, HIV counseling and testing, and syphilis testing. Data from cross-sectional household and health facility surveys in Mayuge and Namayingo districts in Uganda were used in this study. Results: Effective coverage and human resource gaps were identified as the biggest bottlenecks in both districts, with coverage ranging from 0% to 66% for effective coverage and from 46% to 58% for availability of health facility staff. Our findings revealed a similar pattern in bottlenecks in both districts for particular interventions although the districts are functionally independent. Conclusion: The modified Tanahashi model is an analysis tool that can be used to identify bottlenecks to effective coverage within the district health system, for instance, the effective coverage for maternal and newborn care interventions. However, the analysis is highly dependent on the availability of data to populate all six determinants and could benefit from further validation analysis for the causes of bottlenecks identified. PMID:28581379
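The Tanahashi-style cascade analysis can be sketched as finding the largest drop between consecutive coverage determinants; the determinant names follow the model, but the numbers below are invented for illustration, not the study's survey data:

```python
def biggest_bottleneck(determinants):
    # Tanahashi-style coverage cascade: each determinant's coverage should
    # not exceed the previous one's, and the largest drop between
    # consecutive determinants marks the biggest bottleneck.
    names = list(determinants)
    values = list(determinants.values())
    drops = {names[i]: values[i - 1] - values[i] for i in range(1, len(values))}
    return max(drops, key=drops.get)

# Illustrative coverage fractions for a tracer intervention.
cascade = {
    "commodities": 0.95,
    "human resources": 0.52,   # availability of facility staff
    "accessibility": 0.50,
    "initial utilization": 0.45,
    "continuous utilization": 0.40,
    "effective coverage": 0.10,
}
print(biggest_bottleneck(cascade))
```

Here the steepest fall in the cascade is at the human-resources determinant, mirroring the kind of staffing gap the study reports; in practice each determinant would be populated from household and facility survey data.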
Integrating Problem Solvers from Analogous Markets in New Product Ideation
DEFF Research Database (Denmark)
Franke, Nikolaus; Poetz, Marion; Schreier, Martin
2014-01-01
Who provides better inputs to new product ideation tasks, problem solvers with expertise in the area for which new products are to be developed or problem solvers from “analogous” markets that are distant but share an analogous problem or need? Conventional wisdom appears to suggest that target market expertise is indispensable, which is why most managers searching for new ideas tend to stay within their own market context even when they do search outside their firms' boundaries. However, in a unique symmetric experiment that isolates the effect of market origin, we find evidence for the opposite: Although solutions provided by problem solvers from analogous markets show lower potential for immediate use, they demonstrate substantially higher levels of novelty. Also, compared to established novelty drivers, this effect appears highly relevant from a managerial perspective: we find...
An efficient spectral crystal plasticity solver for GPU architectures
Malahe, Michael
2018-03-01
We present a spectral crystal plasticity (CP) solver for graphics processing unit (GPU) architectures that achieves a tenfold increase in efficiency over prior GPU solvers. The approach makes use of a database containing a spectral decomposition of CP simulations performed using a conventional iterative solver over a parameter space of crystal orientations and applied velocity gradients. The key improvements in efficiency come from reducing global memory transactions, exposing more instruction-level parallelism, reducing integer instructions and performing fast range reductions on trigonometric arguments. The scheme also makes more efficient use of memory than prior work, allowing for larger problems to be solved on a single GPU. We illustrate these improvements with a simulation of 390 million crystal grains on a consumer-grade GPU, which executes at a rate of 2.72 s per strain step.
Neubauer, Thomas A; Harzhauser, Mathias; Georgopoulou, Elisavet; Wrozyna, Claudia
2014-11-15
For more than a hundred years the thermal spring-fed Lake Pețea near Oradea, Romania, was studied for its highly endemic subfossil and recent fauna and flora. One point of focus was the species lineage of the melanopsid gastropod Microcolpia parreyssii, which exhibited a tremendous diversity of shapes during the earlier Holocene. As a consequence, many new species, subspecies, and variety names have been introduced over time in an attempt to categorize this overwhelming variability. In contrast to the varied subfossil assemblage, only a single phenotype is present today. We critically review the apparent "speciation event" implied by the taxonomy, based on the presently available information and new data from morphometric analyses of shell outlines and oxygen and carbon isotope data. This synthesis shows that one turning point in morphological evolution coincides with high accumulation of peaty deposits during a short time interval of at most a few thousand years. The formation of a small, highly eutrophic swamp with increased input of organic matter marginalized the melanopsids and reduced population size. The presented data make natural selection unlikely as the dominating force and rather indicate genetic drift following a bottleneck effect induced by the environmental changes. This claim contrasts with the "obvious trend" and shows that great morphological variability has to be carefully and objectively evaluated in order to allow sound interpretations of the underlying mechanisms.
Metabolic Engineering of Yeast to Produce Fatty Acid-derived Biofuels: Bottlenecks and Solutions
Directory of Open Access Journals (Sweden)
Jiayuan Sheng
2015-06-01
Fatty acid-derived biofuels can be a better solution than bioethanol for replacing petroleum fuel, since they have similar energy content and combustion properties to current transportation fuels. The environmentally friendly microbial fermentation process has been used to synthesize advanced biofuels from renewable feedstock. Due to their robustness as well as their high tolerance to fermentation inhibitors and phage contamination, yeast strains such as Saccharomyces cerevisiae and Yarrowia lipolytica have attracted tremendous attention in recent studies on the production of fatty acid-derived biofuels, including fatty acids, fatty acid ethyl esters, fatty alcohols, and fatty alkanes. However, native yeast strains cannot produce fatty acids and fatty acid-derived biofuels in large quantities. To this end, in this review we summarize recent publications on metabolic engineering of yeast strains to improve the production of fatty acid-derived biofuels, identify the bottlenecks that limit the productivity of biofuels, and categorize the appropriate approaches to overcome these obstacles.
Prabhu, Ashish A; Boro, Bibari; Bharali, Biju; Chakraborty, Shuchishloka; Dasu, V Venkata
2018-03-28
Process development involving systems metabolic engineering and bioprocess engineering has become one of the major thrusts in the development of therapeutic proteins and enzymes. Pichia pastoris has emerged as a prominent host for the production of therapeutic proteins and enzymes. Despite high protein titers, various cellular- and process-level bottlenecks hinder the expression of recombinant proteins in P. pastoris. In the present review, we summarize recent developments in the expression of foreign proteins in P. pastoris. Further, we discuss various cellular engineering strategies, including codon optimization, pathway engineering, signal peptide processing, the development of protease-deficient strains, and glyco-engineered strains for high-yield secretion of recombinant proteins. Bioprocess development of recombinant proteins in large-scale bioreactors, including medium optimization, optimal feeding strategies, and co-substrate feeding in fed-batch as well as continuous cultivation, is described. Recent advances in systems and synthetic biology studies, including metabolic flux analysis for understanding the phenotypic characteristics of recombinant Pichia and genome editing with the CRISPR-Cas system, are also summarized.
On CafeSat: A Modern SAT Solver for Scala
Blanc, Régis William
2013-01-01
We present CafeSat, a SAT solver written in the Scala programming language. CafeSat is a modern solver based on DPLL and featuring many state-of-the-art techniques and heuristics. It uses two-watched literals for Boolean constraint propagation, conflict-driven learning along with clause deletion, a restarting strategy, and the VSIDS heuristic for choosing the branching literal. CafeSat is both sound and complete. In order to achieve reasonable performance, low-level and hand-tuned data ...
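The abstract names the core DPLL machinery: unit propagation, branching, and backtracking. A minimal pure-Python sketch of that search loop, without CafeSat's two-watched-literal scheme, clause learning, or VSIDS (the naive branching rule below is a deliberate simplification, and the signed-integer clause encoding is an assumption, not CafeSat's API), might look like this:

```python
def unit_propagate(clauses, assignment):
    """Assign literals forced by unit clauses; None signals a conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                     # clause already satisfied
            free = [l for l in clause if abs(l) not in assignment]
            if not free:
                return None                  # every literal false: conflict
            if len(free) == 1:
                assignment[abs(free[0])] = free[0] > 0
                changed = True
    return assignment

def dpll(clauses, assignment=None):
    """Plain DPLL: unit propagation plus chronological backtracking."""
    assignment = unit_propagate(clauses, dict(assignment or {}))
    if assignment is None:
        return None
    unassigned = {abs(l) for c in clauses for l in c} - set(assignment)
    if not unassigned:
        return assignment                    # full model found
    var = min(unassigned)                    # naive branching (no VSIDS here)
    for value in (True, False):
        model = dpll(clauses, {**assignment, var: value})
        if model is not None:
            return model
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3), literals as signed ints
cnf = [[1, 2], [-1, 3], [-2, -3]]
model = dpll(cnf)
```

The two-watched-literal optimization the paper mentions replaces the full clause scan in `unit_propagate` with lazy per-clause bookkeeping, which is where most of a modern solver's speed comes from.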
MINARET: Towards a time-dependent neutron transport parallel solver
International Nuclear Information System (INIS)
Baudron, A.M.; Lautard, J.J.; Maday, Y.; Mula, O.
2013-01-01
We present the newly developed time-dependent 3D multigroup discrete ordinates neutron transport solver that has recently been implemented in the MINARET code. The solver is the support for a study of computing acceleration techniques that involve parallel architectures. In this work, we focus on the parallelization of two of the variables involved in our equation: the angular directions and the time. The latter variable has been parallelized by a (time) domain decomposition method called the parareal in time algorithm. (authors)
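The parareal in time decomposition the authors apply alternates a cheap coarse propagator with an accurate fine one, and the fine solves over the time slices are mutually independent. A generic sketch on a scalar decay ODE (explicit Euler standing in for both propagators, which is an illustrative choice and not MINARET's transport discretization) could read:

```python
import math

def euler(f, y, t, dt, substeps):
    """Explicit Euler over `substeps` sub-steps of a slice of width dt."""
    h = dt / substeps
    for _ in range(substeps):
        y += h * f(t, y)
        t += h
    return y

def parareal(f, y0, t0, t1, slices=10, iters=5, fine_substeps=200):
    """Parareal in time: a cheap coarse propagator (1 Euler step) predicts,
    an accurate fine propagator (many Euler steps) corrects; the fine
    solves over the slices are independent and could run in parallel."""
    dt = (t1 - t0) / slices
    ts = [t0 + k * dt for k in range(slices + 1)]
    y = [y0]
    for k in range(slices):                      # initial coarse sweep
        y.append(euler(f, y[k], ts[k], dt, 1))
    for _ in range(iters):
        fine = [euler(f, y[k], ts[k], dt, fine_substeps)
                for k in range(slices)]          # parallelizable loop
        y_new = [y0]
        for k in range(slices):
            correction = fine[k] - euler(f, y[k], ts[k], dt, 1)
            y_new.append(euler(f, y_new[k], ts[k], dt, 1) + correction)
        y = y_new
    return y

# decay ODE y' = -y on [0, 1]; exact solution exp(-t)
sol = parareal(lambda t, y: -y, 1.0, 0.0, 1.0)
```

After `k` iterations the first `k` slices agree exactly with the serial fine solution, which is why a few iterations usually suffice.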
LAPACKrc: Fast linear algebra kernels/solvers for FPGA accelerators
International Nuclear Information System (INIS)
Gonzalez, Juan; Nunez, Rafael C
2009-01-01
We present LAPACKrc, a family of FPGA-based linear algebra solvers able to achieve more than 100x speedup per commodity processor on certain problems. LAPACKrc subsumes some of the LAPACK and ScaLAPACK functionalities, and it also incorporates sparse direct and iterative matrix solvers. Current LAPACKrc prototypes demonstrate between 40x and 150x speedup compared against top-of-the-line hardware/software systems. A technology roadmap is in place to validate the current performance of LAPACKrc in HPC applications, and to increase the computational throughput by factors of hundreds within the next few years.
Fast Laplace solver approach to pore-scale permeability
Arns, C. H.; Adler, P. M.
2018-02-01
We introduce a powerful and easily implemented method to calculate the permeability of porous media at the pore scale using an approximation based on the Poiseuille equation to calculate permeability to fluid flow with a Laplace solver. The method consists of calculating the Euclidean distance map of the fluid phase to assign local conductivities and lends itself naturally to the treatment of multiscale problems. We compare with analytical solutions as well as experimental measurements and lattice Boltzmann calculations of permeability for Fontainebleau sandstone. The solver is significantly more stable than the lattice Boltzmann approach, uses less memory, and is significantly faster. Permeabilities are in excellent agreement over a wide range of porosities.
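The recipe described here, distance map to local conductivity to Laplace solve, can be sketched on a tiny 2-D pore mask. This is an illustrative toy, not the authors' solver: city-block distance stands in for the Euclidean distance map, the quadratic conductivity law and the Gauss-Seidel relaxation are assumptions, and the returned flux is only a relative permeability index:

```python
def permeability_sketch(mask, iters=2000):
    """Relative permeability index for a 2-D pore mask (True = pore).
    Each pore cell gets a local conductivity from its distance to the
    nearest solid cell (city-block distance, a stand-in for the
    Euclidean distance map), then a Laplace problem for pressure is
    relaxed with a unit drop across the x direction."""
    ny, nx = len(mask), len(mask[0])

    def dist(i, j):  # brute-force nearest-solid distance, fine for tiny grids
        return min(abs(i - a) + abs(j - b)
                   for a in range(ny) for b in range(nx) if not mask[a][b])

    g = [[dist(i, j) ** 2 if mask[i][j] else 0.0 for j in range(nx)]
         for i in range(ny)]
    # pressure field, fixed at p=1 on the inlet column, p=0 on the outlet
    p = [[1.0 - j / (nx - 1) for j in range(nx)] for i in range(ny)]
    for _ in range(iters):  # Gauss-Seidel relaxation of div(g grad p) = 0
        for i in range(ny):
            for j in range(1, nx - 1):
                if not mask[i][j]:
                    continue
                num = den = 0.0
                for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if 0 <= a < ny and 0 <= b < nx and mask[a][b]:
                        w = 2 * g[i][j] * g[a][b] / (g[i][j] + g[a][b])
                        num += w * p[a][b]
                        den += w
                if den:
                    p[i][j] = num / den
    # total flux through the inlet face ~ permeability index
    q = 0.0
    for i in range(ny):
        if mask[i][0] and mask[i][1]:
            w = 2 * g[i][0] * g[i][1] / (g[i][0] + g[i][1])
            q += w * (p[i][0] - p[i][1])
    return q

solid, pore = False, True
wide = [[solid] * 8] + [[pore] * 8 for _ in range(4)] + [[solid] * 8]
narrow = [[solid] * 8] + [[pore] * 8 for _ in range(2)] + [[solid] * 8]
q_wide, q_narrow = permeability_sketch(wide), permeability_sketch(narrow)
```

A wider channel gets larger distance-map values, hence larger local conductivities and a larger flux, mirroring the cubic-law behaviour the Poiseuille approximation encodes.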
Parallel sparse direct solvers for Poisson's equation in streamer discharges
M. Nool (Margreet); M. Genseberger (Menno); U. M. Ebert (Ute)
2017-01-01
The aim of this paper is to examine whether a hybrid approach of parallel computing, a combination of the message passing model (MPI) with the threads model (OpenMP), can deliver good performance in streamer discharge simulations. Since one of the bottlenecks of almost all streamer
Continuous-time quantum Monte Carlo impurity solvers
Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias
2011-04-01
Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital single site) dynamical mean-field problems with arbitrary densities of states. Program summary: Program title: dmft. Catalogue identifier: AEIL_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: ALPS LIBRARY LICENSE version 1.1. No. of lines in distributed program, including test data, etc.: 899 806. No. of bytes in distributed program, including test data, etc.: 32 153 916. Distribution format: tar.gz. Programming language: C++. Operating system: the ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher) and Intel C++ Compiler (icc version 7.0 and higher); MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0); IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers; Compaq Tru64 UNIX with Compaq C++ Compiler (cxx); SGI IRIX with MIPSpro C++ Compiler (CC); HP-UX with HP C++ Compiler (aCC); Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher). RAM: 10 MB-1 GB. Classification: 7.3. External routines: ALPS [1], BLAS/LAPACK, HDF5. Nature of problem: (see [2]) quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. They are basic to nanoscience as
Directory of Open Access Journals (Sweden)
Andre J Szameitat
2016-03-01
Human information processing suffers from severe limitations in parallel processing. In particular, when required to respond to two stimuli in rapid succession, processing bottlenecks may appear at central and peripheral stages of task processing. Importantly, it has been suggested that executive functions are needed to resolve the interference arising at such bottlenecks. The aims of the present study were to test whether central attentional limitations (i.e., a bottleneck at the decisional response selection stage) as well as peripheral limitations (i.e., a bottleneck at response initiation) both demand executive functions located in the lateral prefrontal cortex. For this, we re-analysed two previous studies, in which a total of 33 participants performed a dual-task according to the paradigm of the psychological refractory period (PRP) during fMRI. In one study (N=17), the PRP task consisted of two two-choice response tasks known to suffer from a central bottleneck (CB group). In the other study (N=16), the PRP task consisted of two simple-response tasks known to suffer from a peripheral bottleneck (PB group). Both groups showed considerable dual-task costs in the form of slowing of the second response in the dual-task (PRP effect). Imaging results are based on the subtraction of both single-tasks from the dual-task within each group. In the CB group, the bilateral middle frontal gyri and inferior frontal gyri were activated. Higher activation in these areas was associated with lower dual-task costs. In the PB group, the right middle frontal and inferior frontal gyrus were activated. Here, higher activation was associated with higher dual-task costs. In conclusion, we suggest that central and peripheral bottlenecks both demand executive functions located in lateral prefrontal cortices. Differences between the CB and PB groups with respect to the exact prefrontal areas activated and the correlational patterns suggest that the executive functions resolving
Problem Solvers: Solutions--Playing Basketball
Smith, Jeffrey
2014-01-01
In this article, fourth grade Upper Allen Elementary School (Mechanicsburg, Pennsylvania) teacher Jeffrey Smith describes his exploration of the Playing Basketball activity. Herein he describes how he found the problem to be an effective way to review concepts associated with the measurement of elapsed time with his students. Additionally, it…
A General Symbolic PDE Solver Generator: Explicit Schemes
Directory of Open Access Journals (Sweden)
K. Sheshadri
2003-01-01
A symbolic solver generator to deal with a system of partial differential equations (PDEs) in functions of an arbitrary number of variables is presented; it can also handle arbitrary domains (geometries) of the independent variables. Given a system of PDEs, the solver generates a set of explicit finite-difference methods to any specified order, and a Fourier stability criterion for each method. For a method that is stable, an iteration function is generated symbolically using the PDE and its initial and boundary conditions. This iteration function is dynamically generated for every PDE problem, and its evaluation provides a solution to the PDE problem. A C++/Fortran 90 code for the iteration function is generated using the MathCode system, which results in a performance gain of the order of a thousand over Mathematica, the language that has been used to code the solver generator. Examples of stability criteria are presented that agree with known criteria; examples that demonstrate the generality of the solver and the speed enhancement of the generated C++ and Fortran 90 codes are also presented.
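For the simplest input such a generator handles, the 1-D heat equation u_t = α u_xx, the generated iteration function is the forward-time centred-space update and the accompanying Fourier criterion is r = α·Δt/Δx² ≤ 1/2. A hand-written Python version of that pair (an illustration of the concept, not output of the authors' Mathematica tool) looks like:

```python
def heat_explicit(u, alpha, dx, dt, steps):
    """Forward-time centred-space update for u_t = alpha * u_xx with fixed
    boundary values, guarded by the Fourier (von Neumann) stability bound."""
    r = alpha * dt / dx ** 2
    if r > 0.5:
        raise ValueError("unstable scheme: alpha*dt/dx**2 > 1/2")
    for _ in range(steps):
        u = [u[0]] + [u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

spike = [0.0] * 5 + [1.0] + [0.0] * 5        # unit heat pulse mid-domain
out = heat_explicit(spike, alpha=1.0, dx=1.0, dt=0.25, steps=3)
```

The update conserves the total heat while the pulse stays away from the fixed boundaries, which is a quick sanity check on any generated scheme.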
Numerical solver for compressible two-fluid flow
J. Naber (Jorick)
2005-01-01
This report treats the development of a numerical solver for the simulation of flows of two non-mixing fluids described by the two-dimensional Euler equations. A level-set equation in conservative form describes the interface. After each time step the deformed level-set function is
Using a satisfiability solver to identify deterministic finite state automata
Heule, M.J.H.; Verwer, S.
2009-01-01
We present an exact algorithm for identification of deterministic finite automata (DFA) which is based on satisfiability (SAT) solvers. Despite the size of the low level SAT representation, our approach seems to be competitive with alternative techniques. Our contributions are threefold: First, we
Fast Multipole-Based Elliptic PDE Solver and Preconditioner
Ibeid, Huda
2016-01-01
extrapolated scalability. Fast multipole methods (FMM) were originally developed for accelerating N-body problems for particle-based methods in astrophysics and molecular dynamics. FMM is more than an N-body solver, however. Recent efforts to view the FMM
Implementation and testing of a multivariate inverse radiation transport solver
International Nuclear Information System (INIS)
Mattingly, John; Mitchell, Dean J.
2012-01-01
Detection, identification, and characterization of special nuclear materials (SNM) all face the same basic challenge: to varying degrees, each must infer the presence, composition, and configuration of the SNM by analyzing a set of measured radiation signatures. Solutions to this problem implement inverse radiation transport methods. Given a set of measured radiation signatures, inverse radiation transport estimates properties of the source terms and transport media that are consistent with those signatures. This paper describes one implementation of a multivariate inverse radiation transport solver. The solver simultaneously analyzes gamma spectrometry and neutron multiplicity measurements to fit a one-dimensional radiation transport model with variable layer thicknesses using nonlinear regression. The solver's essential components are described, and its performance is illustrated by application to benchmark experiments conducted with plutonium metal. - Highlights: ► Inverse problems, specifically applied to identifying and characterizing radiation sources . ► Radiation transport. ► Analysis of gamma spectroscopy and neutron multiplicity counting measurements. ► Experimental testing of the inverse solver against measurements of plutonium.
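The core idea above, adjusting model parameters by nonlinear regression until predicted signatures match measured ones, can be illustrated on a one-parameter toy problem. The exponential attenuation forward model and the names `I0`, `mu`, and the single unknown thickness `t` are illustrative assumptions, not the paper's multivariate transport model:

```python
import math

def fit_thickness(measured, mu, I0, t0=1.0, iters=25):
    """Gauss-Newton fit (Newton here, since there is one residual and one
    parameter) of a layer thickness t in the toy forward model
    I(t) = I0 * exp(-mu * t) to a measured count rate."""
    t = t0
    for _ in range(iters):
        model = I0 * math.exp(-mu * t)
        residual = model - measured
        jacobian = -mu * model           # dI/dt of the forward model
        t -= residual / jacobian         # Gauss-Newton step
    return t

# synthetic "measurement" from a known 2.5-unit layer, then recover it
t_hat = fit_thickness(1000.0 * math.exp(-0.5 * 2.5), mu=0.5, I0=1000.0)
```

The real solver does the same loop with a full transport model, several layer thicknesses, and joint gamma/neutron residuals instead of a single scalar.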
Hypersonic simulations using open-source CFD and DSMC solvers
Casseau, V.; Scanlon, T. J.; John, B.; Emerson, D. R.; Brown, R. E.
2016-11-01
Hypersonic hybrid hydrodynamic-molecular gas flow solvers are required to satisfy the two essential requirements of any high-speed reacting code, these being physical accuracy and computational efficiency. The James Weir Fluids Laboratory at the University of Strathclyde is currently developing an open-source hybrid code which will eventually reconcile the direct simulation Monte-Carlo method, making use of the OpenFOAM application called dsmcFoam, and the newly coded open-source two-temperature computational fluid dynamics solver named hy2Foam. In conjunction with employing the CVDV chemistry-vibration model in hy2Foam, novel use is made of the QK rates in a CFD solver. In this paper, further testing is performed, in particular with the CFD solver, to ensure its efficacy before considering more advanced test cases. The hy2Foam and dsmcFoam codes have been shown to compare reasonably well, thus providing a useful basis for other codes to compare against.
Implementing parallel elliptic solver on a Beowulf cluster
Directory of Open Access Journals (Sweden)
Marcin Paprzycki
1999-12-01
In a recent paper [Zara] a parallel direct solver for the linear systems arising from elliptic partial differential equations has been proposed. The aim of this note is to present an initial evaluation of the performance characteristics of this algorithm on a Beowulf-type cluster. In this context the performance of PVM- and MPI-based implementations is compared.
Implementation of Generalized Adjoint Equation Solver for DeCART
International Nuclear Information System (INIS)
Han, Tae Young; Cho, Jin Young; Lee, Hyun Chul; Noh, Jae Man
2013-01-01
In this paper, the generalized adjoint solver based on the generalized perturbation theory is implemented in DeCART and verification calculations were carried out. As a result, the adjoint flux for the general response coincides with the reference solution, and it is expected that the solver can produce the parameters for sensitivity and uncertainty analysis. Recently, MUSAD (Modules of Uncertainty and Sensitivity Analysis for DeCART) was developed for the uncertainty analysis of the PMR200 core, and the fundamental adjoint solver was implemented in DeCART. However, the application of the code was limited to the uncertainty in the multiplication factor, k_eff, because it was based on the classical perturbation theory. For uncertainty analysis of a general response such as the power density, it is necessary to develop an analysis module based on the generalized perturbation theory, which needs the generalized adjoint solutions from DeCART. In this paper, the generalized adjoint solver is implemented in DeCART and the calculation results are compared with the results by TSUNAMI of SCALE 6.1.
SolveDB: Integrating Optimization Problem Solvers Into SQL Databases
DEFF Research Database (Denmark)
Siksnys, Laurynas; Pedersen, Torben Bach
2016-01-01
for optimization problems, (2) an extensible infrastructure for integrating different solvers, and (3) query optimization techniques to achieve the best execution performance and/or result quality. Extensive experiments with the PostgreSQL-based implementation show that SolveDB is a versatile tool offering much...
A Parallel Algebraic Multigrid Solver on Graphics Processing Units
Haase, Gundolf; Liebmann, Manfred; Douglas, Craig C.; Plank, Gernot
2010-01-01
-vector multiplication scheme underlying the PCG-AMG algorithm is presented for the many-core GPU architecture. A performance comparison of the parallel solver shows that a single Nvidia Tesla C1060 GPU board delivers the performance of a sixteen node Infiniband cluster
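The sparse matrix-vector product at the heart of a PCG-AMG cycle is, in compressed sparse row (CSR) storage, one independent reduction per output row, which is precisely the parallelism a many-core GPU exploits. A scalar reference version (CSR is a standard format assumption here, not necessarily the paper's exact layout):

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x for a sparse matrix in CSR form. Each output row is an
    independent dot product, the unit of work a GPU assigns per thread."""
    return [sum(values[k] * x[col_idx[k]]
                for k in range(row_ptr[i], row_ptr[i + 1]))
            for i in range(len(row_ptr) - 1)]

# A = [[2, 0, 1],
#      [0, 3, 0],
#      [4, 0, 5]] in CSR form, applied to the all-ones vector
y = spmv_csr([2.0, 1.0, 3.0, 4.0, 5.0], [0, 2, 1, 0, 2], [0, 2, 3, 5],
             [1.0, 1.0, 1.0])
```

GPU implementations differ mainly in how rows (or row fragments) are mapped to threads and warps to keep memory accesses coalesced.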
Analysis of transient plasmonic interactions using an MOT-PMCHWT integral equation solver
Uysal, Ismail Enes; Ulku, Huseyin Arda; Bagci, Hakan
2014-01-01
that discretize only on the interfaces. Additionally, IE solvers implicitly enforce the radiation condition and consequently do not need (approximate) absorbing boundary conditions. Despite these advantages, IE solvers, especially in time domain, have not been
Parallel Solver for H(div) Problems Using Hybridization and AMG
Energy Technology Data Exchange (ETDEWEB)
Lee, Chak S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Vassilevski, Panayot S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-01-15
In this paper, a scalable parallel solver is proposed for H(div) problems discretized by arbitrary order finite elements on general unstructured meshes. The solver is based on hybridization and algebraic multigrid (AMG). Unlike some previously studied H(div) solvers, the hybridization solver does not require discrete curl and gradient operators as additional input from the user. Instead, only some element information is needed in the construction of the solver. The hybridization results in a H1-equivalent symmetric positive definite system, which is then rescaled and solved by AMG solvers designed for H1 problems. Weak and strong scaling of the method are examined through several numerical tests. Our numerical results show that the proposed solver provides a promising alternative to ADS, a state-of-the-art solver [12], for H(div) problems. In fact, it outperforms ADS for higher order elements.
Dickson, Kim E; Kinney, Mary V; Moxon, Sarah G; Ashton, Joanne; Zaka, Nabila; Simen-Kapeu, Aline; Sharma, Gaurav; Kerber, Kate J; Daelmans, Bernadette; Gülmezoglu, A; Mathai, Matthews; Nyange, Christabel; Baye, Martina; Lawn, Joy E
2015-01-01
The Every Newborn Action Plan (ENAP) and Ending Preventable Maternal Mortality targets cannot be achieved without high quality, equitable coverage of interventions at and around the time of birth. This paper provides an overview of the methodology and findings of a nine paper series of in-depth analyses which focus on the specific challenges to scaling up high-impact interventions and improving quality of care for mothers and newborns around the time of birth, including babies born small and sick. The bottleneck analysis tool was applied in 12 countries in Africa and Asia as part of the ENAP process. Country workshops engaged technical experts to complete a tool designed to synthesise "bottlenecks" hindering the scale up of maternal-newborn intervention packages across seven health system building blocks. We used quantitative and qualitative methods and literature review to analyse the data and present priority actions relevant to different health system building blocks for skilled birth attendance, emergency obstetric care, antenatal corticosteroids (ACS), basic newborn care, kangaroo mother care (KMC), treatment of neonatal infections and inpatient care of small and sick newborns. The 12 countries included in our analysis account for the majority of global maternal (48%) and newborn (58%) deaths and stillbirths (57%). Our findings confirm previously published results that the interventions with the most perceived bottlenecks are facility-based where rapid emergency care is needed, notably inpatient care of small and sick newborns, ACS, treatment of neonatal infections and KMC. Health systems building blocks with the highest rated bottlenecks varied for different interventions. Attention needs to be paid to the context specific bottlenecks for each intervention to scale up quality care. Crosscutting findings on health information gaps inform two final papers on a roadmap for improvement of coverage data for newborns and indicate the need for leadership for
A High Performance QDWH-SVD Solver using Hardware Accelerators
Sukkari, Dalal E.
2015-04-08
This paper describes a new high performance implementation of the QR-based Dynamically Weighted Halley Singular Value Decomposition (QDWH-SVD) solver on multicore architecture enhanced with multiple GPUs. The standard QDWH-SVD algorithm was introduced by Nakatsukasa and Higham (SIAM SISC, 2013) and combines three successive computational stages: (1) the polar decomposition calculation of the original matrix using the QDWH algorithm, (2) the symmetric eigendecomposition of the resulting polar factor to obtain the singular values and the right singular vectors and (3) the matrix-matrix multiplication to get the associated left singular vectors. A comprehensive test suite highlights the numerical robustness of the QDWH-SVD solver. Although it performs up to two times more flops when computing all singular vectors compared to the standard SVD solver algorithm, our new high performance implementation on single GPU results in up to 3.8x improvements for asymptotic matrix sizes, compared to the equivalent routines from existing state-of-the-art open-source and commercial libraries. However, when only singular values are needed, QDWH-SVD is penalized by performing up to 14 times more flops. The singular value only implementation of QDWH-SVD on single GPU can still run up to 18% faster than the best existing equivalent routines. Integrating mixed precision techniques in the solver can additionally provide up to 40% improvement at the price of losing few digits of accuracy, compared to the full double precision floating point arithmetic. We further leverage the single GPU QDWH-SVD implementation by introducing the first multi-GPU SVD solver to study the scalability of the QDWH-SVD framework.
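The three-stage pipeline described above can be traced on a 2x2 matrix in a few lines. This sketch substitutes the simpler Newton polar iteration for the QDWH iteration of stage (1) and a closed-form Jacobi rotation for the symmetric eigensolver of stage (2); it illustrates the structure of QDWH-SVD, not the paper's GPU implementation:

```python
import math

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(P):
    return [[P[j][i] for j in range(2)] for i in range(2)]

def polar_newton(A, iters=30):
    """Polar factor Up of A via the Newton iteration X <- (X + inv(X).T)/2,
    a simple stand-in for the QDWH iteration of stage (1)."""
    X = [row[:] for row in A]
    for _ in range(iters):
        a, b, c, d = X[0][0], X[0][1], X[1][0], X[1][1]
        det = a * d - b * c
        inv_t = [[d / det, -c / det], [-b / det, a / det]]  # inv(X).T
        X = [[(X[i][j] + inv_t[i][j]) / 2 for j in range(2)]
             for i in range(2)]
    return X

def svd_via_polar(A):
    """The three QDWH-SVD stages on a 2x2 matrix: (1) polar decomposition
    A = Up*H, (2) symmetric eigendecomposition H = V*S*V^T via one Jacobi
    rotation, (3) one matrix product U = Up*V."""
    Up = polar_newton(A)
    H = matmul(transpose(Up), A)                 # symmetric positive factor
    theta = 0.5 * math.atan2(2 * H[0][1], H[0][0] - H[1][1])
    c, s = math.cos(theta), math.sin(theta)
    V = [[c, -s], [s, c]]
    S = matmul(matmul(transpose(V), H), V)       # diagonal up to round-off
    U = matmul(Up, V)
    return U, [S[0][0], S[1][1]], V

U, sigma, V = svd_via_polar([[2.0, 1.0], [0.0, 3.0]])
```

Stages (2) and (3) reduce to dense symmetric eigensolvers and GEMM at scale, which is why the method maps so well onto GPUs despite its higher flop count.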
Multiscale Universal Interface: A concurrent framework for coupling heterogeneous solvers
Energy Technology Data Exchange (ETDEWEB)
Tang, Yu-Hang, E-mail: yuhang_tang@brown.edu [Division of Applied Mathematics, Brown University, Providence, RI (United States); Kudo, Shuhei, E-mail: shuhei-kudo@outlook.jp [Graduate School of System Informatics, Kobe University, 1-1 Rokkodai-cho, Nada-ku, Kobe, 657-8501 (Japan); Bian, Xin, E-mail: xin_bian@brown.edu [Division of Applied Mathematics, Brown University, Providence, RI (United States); Li, Zhen, E-mail: zhen_li@brown.edu [Division of Applied Mathematics, Brown University, Providence, RI (United States); Karniadakis, George Em, E-mail: george_karniadakis@brown.edu [Division of Applied Mathematics, Brown University, Providence, RI (United States); Collaboratory on Mathematics for Mesoscopic Modeling of Materials, Pacific Northwest National Laboratory, Richland, WA 99354 (United States)
2015-09-15
Concurrently coupled numerical simulations using heterogeneous solvers are powerful tools for modeling multiscale phenomena. However, major modifications to existing codes are often required to enable such simulations, posing significant difficulties in practice. In this paper we present a C++ library, i.e. the Multiscale Universal Interface (MUI), which is capable of facilitating the coupling effort for a wide range of multiscale simulations. The library adopts a header-only form with minimal external dependency and hence can be easily dropped into existing codes. A data sampler concept is introduced, combined with a hybrid dynamic/static typing mechanism, to create an easily customizable framework for solver-independent data interpretation. The library integrates MPI MPMD support and an asynchronous communication protocol to handle inter-solver information exchange irrespective of the solvers' own MPI awareness. Template metaprogramming is heavily employed to simultaneously improve runtime performance and code flexibility. We validated the library by solving three different multiscale problems, which also serve to demonstrate the flexibility of the framework in handling heterogeneous models and solvers. In the first example, a Couette flow was simulated using two concurrently coupled Smoothed Particle Hydrodynamics (SPH) simulations of different spatial resolutions. In the second example, we coupled the deterministic SPH method with the stochastic Dissipative Particle Dynamics (DPD) method to study the effect of surface grafting on the hydrodynamic properties of the surface. In the third example, we consider conjugate heat transfer between a solid domain and a fluid domain by coupling the particle-based energy-conserving DPD (eDPD) method with the Finite Element Method (FEM).
Decision Engines for Software Analysis Using Satisfiability Modulo Theories Solvers
Bjorner, Nikolaj
2010-01-01
The area of software analysis, testing and verification is now undergoing a revolution thanks to the use of automated and scalable support for logical methods. A well-recognized premise is that at the core of software analysis engines is invariably a component using logical formulas for describing states and transformations between system states. The process of using this information for discovering and checking program properties (including such important properties as safety and security) amounts to automatic theorem proving. In particular, theorem provers that directly support common software constructs offer a compelling basis. Such provers are commonly called satisfiability modulo theories (SMT) solvers. Z3 is a state-of-the-art SMT solver. It is developed at Microsoft Research. It can be used to check the satisfiability of logical formulas over one or more theories such as arithmetic, bit-vectors, lists, records and arrays. The talk describes some of the technology behind modern SMT solvers, including the solver Z3. Z3 is currently mainly targeted at solving problems that arise in software analysis and verification. It has been applied in various contexts, such as systems for dynamic symbolic simulation (Pex, SAGE, Vigilante), for program verification and extended static checking (Spec#/Boogie, VCC, HAVOC), for software model checking (Yogi, SLAM), model-based design (FORMULA), security protocol code (F7), and program run-time analysis and invariant generation (VS3). We will describe how it integrates support for a variety of theories that arise naturally in the context of the applications. There are several new promising avenues and the talk will touch on some of these and the challenges related to SMT solvers.
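The satisfiability question an SMT solver answers, "is there an assignment satisfying these theory constraints?", can be mimicked by brute force over tiny finite domains. The sketch below is a deliberately naive toy, not Z3 or its API; a real SMT solver answers the same question symbolically over unbounded theories (arithmetic, bit-vectors, arrays) without enumeration:

```python
from itertools import product

def toy_smt(variables, constraints, domain=range(-4, 5)):
    """Enumerate models over tiny integer domains: a brute-force toy
    version of the satisfiability check an SMT solver performs
    symbolically. Returns a satisfying model dict, or None (unsat)."""
    for values in product(domain, repeat=len(variables)):
        model = dict(zip(variables, values))
        if all(c(model) for c in constraints):
            return model
    return None

# "x + y == 3 and x > y": the kind of arithmetic query handed to an
# SMT solver, here answered by enumeration
m = toy_smt(["x", "y"], [lambda m: m["x"] + m["y"] == 3,
                         lambda m: m["x"] > m["y"]])
```

The gap between this toy and Z3 (decision procedures per theory, theory combination, conflict learning) is exactly the technology the talk surveys.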
Migration of vectorized iterative solvers to distributed memory architectures
Energy Technology Data Exchange (ETDEWEB)
Pommerell, C. [AT&T Bell Labs., Murray Hill, NJ (United States)]; Ruehl, R. [CSCS-ETH, Manno (Switzerland)]
1994-12-31
Both necessity and opportunity motivate the use of high-performance computers for iterative linear solvers. Necessity results from the size of the problems being solved; smaller problems are often better handled by direct methods. Opportunity arises from the formulation of the iterative methods in terms of simple linear algebra operations, even if this "natural" parallelism is not easy to exploit in irregularly structured sparse matrices and with good preconditioners. As a result, high-performance implementations of iterative solvers have attracted a lot of interest in recent years. Most efforts are geared to vectorize or parallelize the dominating operation (structured or unstructured sparse matrix-vector multiplication), or to increase locality and parallelism by reformulating the algorithm (reducing global synchronization in inner products or local data exchange in preconditioners). Target architectures for iterative solvers currently include mostly vector supercomputers and architectures with one or few optimized (e.g., super-scalar and/or super-pipelined RISC) processors and hierarchical memory systems. More recently, parallel computers with physically distributed memory and a better price/performance ratio have been offered by vendors as a very interesting alternative to vector supercomputers. However, programming comfort on such distributed memory parallel processors (DMPPs) still lags behind. Here the authors are concerned with iterative solvers and their changing computing environment. In particular, they are considering migration from traditional vector supercomputers to DMPPs. Application requirements force one to use flexible and portable libraries. They want to extend the portability of iterative solvers rather than reimplementing everything for each new machine, or even for each new architecture.
Bottlenecks and Hubs in Inferred Networks Are Important for Virulence in Salmonella typhimurium
Energy Technology Data Exchange (ETDEWEB)
McDermott, Jason E.; Taylor, Ronald C.; Yoon, Hyunjin; Heffron, Fred
2009-02-01
Recent advances in experimental methods have provided sufficient data to consider systems as large networks of interconnected components. High-throughput determination of protein-protein interaction networks has led to the observation that topological bottlenecks, that is proteins defined by high centrality in the network, are enriched in proteins with systems-level phenotypes such as essentiality. Global transcriptional profiling by microarray analysis has been used extensively to characterize systems, for example, cellular response to environmental conditions and genetic mutations. These transcriptomic datasets have been used to infer regulatory and functional relationship networks based on co-regulation. We use the context likelihood of relatedness (CLR) method to infer networks from two datasets gathered from the pathogen Salmonella typhimurium; one under a range of environmental culture conditions and the other from deletions of 15 regulators found to be essential in virulence. Bottleneck nodes were identified from these inferred networks and we show that these nodes are significantly more likely to be essential for virulence than their non-bottleneck counterparts. A network generated using Pearson correlation did not display this behavior. Overall this study demonstrates that topology of networks inferred from global transcriptional profiles provides information about the systems-level roles of bottleneck genes. Analysis of the differences between the two CLR-derived networks suggests that the bottleneck nodes are either mediators of transitions between system states or sentinels that reflect the dynamics of these transitions.
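The notion of a topological bottleneck (a node of high centrality) can be made concrete with betweenness centrality. Below is a stdlib-only sketch using Brandes' algorithm on a hypothetical toy graph: two triangles joined by a bridge, whose junction nodes are the bottlenecks. The paper computed centrality on CLR-inferred regulatory networks; this generic code only illustrates the concept.

```python
from collections import deque

def betweenness(graph):
    """Unweighted betweenness centrality via Brandes' algorithm.
    `graph` maps node -> list of neighbours (symmetric for undirected).
    Counts over ordered source/target pairs, which is fine for ranking."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        stack, queue = [], deque([s])
        pred = {v: [] for v in graph}
        sigma = {v: 0 for v in graph}; sigma[s] = 1
        dist = {v: -1 for v in graph}; dist[s] = 0
        while queue:                       # BFS counting shortest paths
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in graph}
        while stack:                       # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Two triangles bridged through 'c'-'d': those two nodes are the bottlenecks.
g = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b', 'd'],
     'd': ['c', 'e', 'f'], 'e': ['d', 'f'], 'f': ['d', 'e']}
bc = betweenness(g)
```

Ranking genes by such a score, then testing whether high-centrality genes are enriched for virulence phenotypes, is the systems-level analysis the abstract describes.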
DEFF Research Database (Denmark)
Andersen, Michael; Abel, Sarah Maria Niebe; Erleben, Kenny
2017-01-01
We address the task of computing solutions for a separating fluid-solid wall boundary condition model. We present an embarrassingly parallel, easy to implement, fluid LCP solver. We are able to use greater domain sizes than previous works have shown, due to our new solver. The solver exploits matr...
Sobel Leonard, Ashley; Weissman, Daniel B; Greenbaum, Benjamin; Ghedin, Elodie; Koelle, Katia
2017-07-15
The bottleneck governing infectious disease transmission describes the size of the pathogen population transferred from the donor to the recipient host. Accurate quantification of the bottleneck size is particularly important for rapidly evolving pathogens such as influenza virus, as narrow bottlenecks reduce the amount of transferred viral genetic diversity and, thus, may decrease the rate of viral adaptation. Previous studies have estimated bottleneck sizes governing viral transmission by using statistical analyses of variants identified in pathogen sequencing data. These analyses, however, did not account for variant calling thresholds and stochastic viral replication dynamics within recipient hosts. Because these factors can skew bottleneck size estimates, we introduce a new method for inferring bottleneck sizes that accounts for these factors. Through the use of a simulated data set, we first show that our method, based on beta-binomial sampling, accurately recovers transmission bottleneck sizes, whereas other methods fail to do so. We then apply our method to a data set of influenza A virus (IAV) infections for which viral deep-sequencing data from transmission pairs are available. We find that the IAV transmission bottleneck size estimates in this study are highly variable across transmission pairs, while the mean bottleneck size of 196 virions is consistent with a previous estimate for this data set. Furthermore, regression analysis shows a positive association between estimated bottleneck size and donor infection severity, as measured by temperature. These results support findings from experimental transmission studies showing that bottleneck sizes across transmission events can be variable and influenced in part by epidemiological factors. IMPORTANCE The transmission bottleneck size describes the size of the pathogen population transferred from the donor to the recipient host and may affect the rate of pathogen adaptation within host populations. Recent
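The likelihood structure behind such bottleneck estimates can be sketched in a few lines. The toy below keeps only the sampling skeleton: founders drawn Binomial(N, donor frequency), recipient reads drawn Binomial(n, founder frequency). The published method replaces the read layer with a beta-binomial to absorb stochastic replication and variant-calling thresholds, so this is a simplified illustrative assumption, and the data pairs are made up.

```python
import math

def log_lik_bottleneck(N, pairs):
    """Log-likelihood of bottleneck size N over (p, k, n) transmission
    pairs: donor variant frequency p, k of n recipient reads carrying
    the variant. Simplified binomial-binomial sketch of the idea, not
    the paper's beta-binomial method."""
    total = 0.0
    for p, k, n in pairs:
        lik = 0.0
        for j in range(N + 1):   # j of N founding virions carry the variant
            pj = math.comb(N, j) * p ** j * (1.0 - p) ** (N - j)
            q = j / N
            lik += pj * math.comb(n, k) * q ** k * (1.0 - q) ** (n - k)
        total += math.log(max(lik, 1e-300))   # floor guards log(0)
    return total

# Hypothetical pairs (donor frequency, variant reads, total reads):
pairs = [(0.4, 35, 100), (0.1, 0, 100), (0.25, 30, 100)]
best = max(range(2, 60), key=lambda N: log_lik_bottleneck(N, pairs))
```

Maximizing this likelihood per transmission pair (rather than pooling) is what allows the per-pair variability in bottleneck size that the abstract reports.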
The effect of an extreme and prolonged population bottleneck on patterns of deleterious variation
DEFF Research Database (Denmark)
Pedersen, Casper-Emil Tingskov; Lohmueller, Kirk E.; Grarup, Niels
2017-01-01
The genetic consequences of population bottlenecks on patterns of deleterious genetic variation in human populations are of tremendous interest. Based on exome sequencing of 18 Greenlandic Inuit we show that the Inuit have undergone a severe ∼20,000-year-long bottleneck. This has led to a markedly...... more extreme distribution of allele frequencies than seen for any other human population tested to date, making the Inuit the perfect population for investigating the effect of a bottleneck on patterns of deleterious variation. When comparing proxies for genetic load that assume an additive effect...... of deleterious alleles, the Inuit show, at most, a slight increase in load compared to European, East Asian, and African populations. Specifically, we observe
A recent bottleneck of Y chromosome diversity coincides with a global change in culture
Karmin, Monika; Saag, Lauri; Vicente, Mário; Sayres, Melissa A. Wilson; Järve, Mari; Talas, Ulvi Gerst; Rootsi, Siiri; Ilumäe, Anne-Mai; Mägi, Reedik; Mitt, Mario; Pagani, Luca; Puurand, Tarmo; Faltyskova, Zuzana; Clemente, Florian; Cardona, Alexia; Metspalu, Ene; Sahakyan, Hovhannes; Yunusbayev, Bayazit; Hudjashov, Georgi; DeGiorgio, Michael; Loogväli, Eva-Liis; Eichstaedt, Christina; Eelmets, Mikk; Chaubey, Gyaneshwer; Tambets, Kristiina; Litvinov, Sergei; Mormina, Maru; Xue, Yali; Ayub, Qasim; Zoraqi, Grigor; Korneliussen, Thorfinn Sand; Akhatova, Farida; Lachance, Joseph; Tishkoff, Sarah; Momynaliev, Kuvat; Ricaut, François-Xavier; Kusuma, Pradiptajati; Razafindrazaka, Harilanto; Pierron, Denis; Cox, Murray P.; Sultana, Gazi Nurun Nahar; Willerslev, Rane; Muller, Craig; Westaway, Michael; Lambert, David; Skaro, Vedrana; Kovačević, Lejla; Turdikulova, Shahlo; Dalimova, Dilbar; Khusainova, Rita; Trofimova, Natalya; Akhmetova, Vita; Khidiyatova, Irina; Lichman, Daria V.; Isakova, Jainagul; Pocheshkhova, Elvira; Sabitov, Zhaxylyk; Barashkov, Nikolay A.; Nymadawa, Pagbajabyn; Mihailov, Evelin; Seng, Joseph Wee Tien; Evseeva, Irina; Migliano, Andrea Bamberg; Abdullah, Syafiq; Andriadze, George; Primorac, Dragan; Atramentova, Lubov; Utevska, Olga; Yepiskoposyan, Levon; Marjanović, Damir; Kushniarevich, Alena; Behar, Doron M.; Gilissen, Christian; Vissers, Lisenka; Veltman, Joris A.; Balanovska, Elena; Derenko, Miroslava; Malyarchuk, Boris; Metspalu, Andres; Fedorova, Sardana; Eriksson, Anders; Manica, Andrea; Mendez, Fernando L.; Karafet, Tatiana M.; Veeramah, Krishna R.; Bradman, Neil; Hammer, Michael F.; Osipova, Ludmila P.; Balanovsky, Oleg; Khusnutdinova, Elza K.; Johnsen, Knut; Remm, Maido; Thomas, Mark G.; Tyler-Smith, Chris; Underhill, Peter A.; Willerslev, Eske; Nielsen, Rasmus; Metspalu, Mait; Villems, Richard; Kivisild, Toomas
2015-01-01
It is commonly thought that human genetic diversity in non-African populations was shaped primarily by an out-of-Africa dispersal 50–100 thousand yr ago (kya). Here, we present a study of 456 geographically diverse high-coverage Y chromosome sequences, including 299 newly reported samples. Applying ancient DNA calibration, we date the Y-chromosomal most recent common ancestor (MRCA) in Africa at 254 (95% CI 192–307) kya and detect a cluster of major non-African founder haplogroups in a narrow time interval at 47–52 kya, consistent with a rapid initial colonization model of Eurasia and Oceania after the out-of-Africa bottleneck. In contrast to demographic reconstructions based on mtDNA, we infer a second strong bottleneck in Y-chromosome lineages dating to the last 10 ky. We hypothesize that this bottleneck is caused by cultural changes affecting variance of reproductive success among males.
Approximate Riemann solver for the two-fluid plasma model
International Nuclear Information System (INIS)
Shumlak, U.; Loverich, J.
2003-01-01
An algorithm is presented for the simulation of plasma dynamics using the two-fluid plasma model. The two-fluid plasma model is more general than the magnetohydrodynamic (MHD) model often used for plasma dynamic simulations. The two-fluid equations are derived in divergence form and an approximate Riemann solver is developed to compute the fluxes of the electron and ion fluids at the computational cell interfaces and an upwind characteristic-based solver to compute the electromagnetic fields. The source terms that couple the fluids and fields are treated implicitly to relax the stiffness. The algorithm is validated with the coplanar Riemann problem, Langmuir plasma oscillations, and the electromagnetic shock problem that has been simulated with the MHD plasma model. A numerical dispersion relation is also presented that demonstrates agreement with analytical plasma waves
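The role an approximate Riemann solver plays at cell interfaces can be shown with a much simpler relative: the Rusanov (local Lax-Friedrichs) flux for single-fluid 1-D Euler equations. This is a hedged stand-in, far cruder than the two-fluid solver and characteristic-based field solver of the paper; all states and the ratio of specific heats below are illustrative.

```python
import math

GAMMA = 5.0 / 3.0  # illustrative ratio of specific heats

def euler_flux(U):
    """Physical flux of the 1-D Euler equations, U = (rho, rho*u, E)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return (mom, mom * u + p, (E + p) * u)

def rusanov_flux(UL, UR):
    """Rusanov (local Lax-Friedrichs) approximate interface flux:
    average of the two physical fluxes plus dissipation scaled by the
    fastest local wave speed |u| + c."""
    def max_speed(U):
        rho, mom, E = U
        u = mom / rho
        p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
        return abs(u) + math.sqrt(GAMMA * p / rho)
    FL, FR = euler_flux(UL), euler_flux(UR)
    s = max(max_speed(UL), max_speed(UR))
    return tuple(0.5 * (fl + fr) - 0.5 * s * (ur - ul)
                 for fl, fr, ul, ur in zip(FL, FR, UL, UR))

# Sod-like interface states (rho, rho*u, E): dense/hot left, light right.
F_interface = rusanov_flux((1.0, 0.0, 2.5), (0.125, 0.0, 0.25))
```

A finite-volume update then differences these interface fluxes per cell; the two-fluid algorithm of the abstract does the same for electron and ion fluids separately, with implicit source coupling.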
Benchmarking ICRF Full-wave Solvers for ITER
International Nuclear Information System (INIS)
Budny, R.V.; Berry, L.; Bilato, R.; Bonoli, P.; Brambilla, M.; Dumont, R.J.; Fukuyama, A.; Harvey, R.; Jaeger, E.F.; Indireshkumar, K.; Lerche, E.; McCune, D.; Phillips, C.K.; Vdovin, V.; Wright, J.
2011-01-01
Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high performance baseline (5.3 T, 15 MA) DT H-mode. The others are for half-field, half-current plasmas of interest for the pre-activation phase with bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by six full-wave solver groups to simulate the ICRF electromagnetic fields and heating, and by three of these groups to simulate the current-drive. Approximate agreement is achieved for the predicted heating power for the DT and He4 cases. Factor of two disagreements are found for the cases with second harmonic He3 heating in bulk H cases. Approximate agreement is achieved simulating the ICRF current drive.
Minaret, a deterministic neutron transport solver for nuclear core calculations
International Nuclear Information System (INIS)
Moller, J-Y.; Lautard, J-J.
2011-01-01
We present here MINARET, a deterministic transport solver for nuclear core calculations that solves the steady-state Boltzmann equation. The code follows the multi-group formalism to discretize the energy variable. It uses the discrete ordinates method for the angular variable and a DGFEM for the spatial discretization of the Boltzmann equation. The mesh is unstructured in 2D and semi-unstructured in 3D (cylindrical). Curved triangles can be used to fit the exact geometry. For the curved elements, two different sets of basis functions can be used. The transport solver is accelerated with a DSA method. Diffusion and SPN calculations are made possible by skipping the transport sweep in the source iteration. The transport calculations are parallelized with respect to the angular directions. Numerical results are presented for simple geometries and for the C5G7 benchmark, the JHR reactor and the ESFR (in 2D and 3D). Straight and curved finite element results are compared. (author)
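The sweep-plus-source-iteration structure mentioned above can be sketched in miniature: a 1-D slab, two discrete directions (S2), step-upwind sweeps, and an outer loop updating the isotropic scattering source. All parameters are illustrative; MINARET itself is a multigroup DGFEM code on unstructured meshes with DSA acceleration, none of which appears here.

```python
import math

def source_iteration(nx=50, length=10.0, sigma_t=1.0, sigma_s=0.5, q=1.0,
                     tol=1e-8, max_iters=500):
    """S2 source iteration for 1-D slab transport: sweep both directions
    with implicit (step) upwinding, then refresh the scattering source.
    Vacuum boundaries, flat isotropic fixed source. A structural sketch
    of transport sweeps, not MINARET code."""
    dx = length / nx
    mus = (1.0 / math.sqrt(3.0), -1.0 / math.sqrt(3.0))  # S2 directions
    phi = [0.0] * nx                                     # scalar flux
    for it in range(max_iters):
        src = [0.5 * (sigma_s * p + q) for p in phi]     # per-direction source
        phi_new = [0.0] * nx
        for mu in mus:
            cells = range(nx) if mu > 0 else range(nx - 1, -1, -1)
            psi_in = 0.0                                 # vacuum inflow
            for i in cells:
                # step-upwind balance: |mu|(psi - psi_in)/dx + sigma_t*psi = src
                psi = (src[i] + abs(mu) * psi_in / dx) / (sigma_t + abs(mu) / dx)
                phi_new[i] += psi                        # quadrature weight 1
                psi_in = psi
        change = max(abs(x - y) for x, y in zip(phi_new, phi))
        phi = phi_new
        if change < tol:
            break
    return phi, it

phi, iters = source_iteration()
```

Deep in the slab the flux approaches the infinite-medium value q/(sigma_t − sigma_s) = 2; DSA, as used by MINARET, exists precisely to accelerate this outer loop when sigma_s/sigma_t is close to 1.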
Comparison of Einstein-Boltzmann solvers for testing general relativity
Bellini, E.; Barreira, A.; Frusciante, N.; Hu, B.; Peirone, S.; Raveri, M.; Zumalacárregui, M.; Avilez-Lopez, A.; Ballardini, M.; Battye, R. A.; Bolliet, B.; Calabrese, E.; Dirian, Y.; Ferreira, P. G.; Finelli, F.; Huang, Z.; Ivanov, M. M.; Lesgourgues, J.; Li, B.; Lima, N. A.; Pace, F.; Paoletti, D.; Sawicki, I.; Silvestri, A.; Skordis, C.; Umiltà, C.; Vernizzi, F.
2018-01-01
We compare Einstein-Boltzmann solvers that include modifications to general relativity and find that, for a wide range of models and parameters, they agree to a high level of precision. We look at three general purpose codes that primarily model general scalar-tensor theories, three codes that model Jordan-Brans-Dicke (JBD) gravity, a code that models f (R ) gravity, a code that models covariant Galileons, a code that models Hořava-Lifschitz gravity, and two codes that model nonlocal models of gravity. Comparing predictions of the angular power spectrum of the cosmic microwave background and the power spectrum of dark matter for a suite of different models, we find agreement at the subpercent level. This means that this suite of Einstein-Boltzmann solvers is now sufficiently accurate for precision constraints on cosmological and gravitational parameters.
An alternative solver for the nodal expansion method equations - 106
International Nuclear Information System (INIS)
Carvalho da Silva, F.; Carlos Marques Alvim, A.; Senra Martinez, A.
2010-01-01
An automated procedure for nuclear reactor core design is accomplished by using a quick and accurate 3D nodal code, aiming at solving the diffusion equation, which describes the spatial neutron distribution in the reactor. This paper deals with an alternative solver for the nodal expansion method (NEM), with only two inner iterations (mesh sweeps) per outer iteration, thus having the potential to reduce the time required to calculate the power distribution in nuclear reactors, but with accuracy similar to that of conventional NEM. The proposed solver was implemented into a computational system which, besides solving the diffusion equation, also solves the burnup equations governing the gradual changes in material compositions of the core due to fuel depletion. Results confirm the effectiveness of the method for practical purposes. (authors)
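The "few mesh sweeps per outer iteration" idea can be demonstrated on a much simpler problem: a 1-D one-group finite-difference diffusion eigenproblem where each outer (power) iteration performs exactly two Gauss-Seidel sweeps. This is a structural sketch only, under assumed cross-sections; the paper's solver is a 3-D nodal expansion method, not this finite-difference toy.

```python
def two_sweep_eigensolve(nx=50, length=10.0, D=1.0, sig_a=0.5, nu_sigf=1.0,
                         inner_sweeps=2, outers=5000, tol=1e-12):
    """Power iteration for -D*phi'' + sig_a*phi = (1/k)*nu_sigf*phi with
    zero-flux boundaries, doing only `inner_sweeps` Gauss-Seidel mesh
    sweeps per outer iteration. Illustrative parameters throughout."""
    dx = length / nx
    a = D / dx ** 2                      # neighbour coupling
    diag = 2.0 * a + sig_a               # zero-flux (ghost = 0) boundaries
    phi = [1.0] * nx
    k = 1.0
    for _ in range(outers):
        s_old = sum(phi)
        fiss = [nu_sigf * p / k for p in phi]
        for _ in range(inner_sweeps):    # the two inner mesh sweeps
            for i in range(nx):
                left = phi[i - 1] if i > 0 else 0.0
                right = phi[i + 1] if i < nx - 1 else 0.0
                phi[i] = (fiss[i] + a * (left + right)) / diag
        k_new = k * sum(phi) / s_old     # update multiplication factor
        if abs(k_new - k) < tol:
            k = k_new
            break
        k = k_new
    return k, phi

k_eff, flux = two_sweep_eigensolve()
```

The fixed point of this loop is the exact discrete eigenpair even though the inner systems are never fully solved, which is the trade-off the abstract exploits to cut run time.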
A Nonlinear Modal Aeroelastic Solver for FUN3D
Goldman, Benjamin D.; Bartels, Robert E.; Biedron, Robert T.; Scott, Robert C.
2016-01-01
A nonlinear structural solver has been implemented internally within the NASA FUN3D computational fluid dynamics code, allowing for some new aeroelastic capabilities. Using a modal representation of the structure, a set of differential or differential-algebraic equations are derived for general thin structures with geometric nonlinearities. ODEPACK and LAPACK routines are linked with FUN3D, and the nonlinear equations are solved at each CFD time step. The existing predictor-corrector method is retained, whereby the structural solution is updated after mesh deformation. The nonlinear solver is validated using a test case for a flexible aeroshell at transonic, supersonic, and hypersonic flow conditions. Agreement with linear theory is seen for the static aeroelastic solutions at relatively low dynamic pressures, but structural nonlinearities limit deformation amplitudes at high dynamic pressures. No flutter was found at any of the tested trajectory points, though LCO may be possible in the transonic regime.
Parallel Auxiliary Space AMG Solver for $H(div)$ Problems
Energy Technology Data Exchange (ETDEWEB)
Kolev, Tzanio V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Vassilevski, Panayot S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2012-12-18
We present a family of scalable preconditioners for matrices arising in the discretization of $H(div)$ problems using the lowest order Raviart-Thomas finite elements. Our approach belongs to the class of "auxiliary space"-based methods and requires only the finite element stiffness matrix plus some minimal additional discretization information about the topology and orientation of mesh entities. Also, we provide a detailed algebraic description of the theory, parallel implementation, and different variants of this parallel auxiliary space divergence solver (ADS) and discuss its relations to the Hiptmair-Xu (HX) auxiliary space decomposition of $H(div)$ [SIAM J. Numer. Anal., 45 (2007), pp. 2483-2509] and to the auxiliary space Maxwell solver AMS [J. Comput. Math., 27 (2009), pp. 604-623]. Finally, an extensive set of numerical experiments demonstrates the robustness and scalability of our implementation on large-scale $H(div)$ problems with large jumps in the material coefficients.
Nonlinear Multigrid solver exploiting AMGe Coarse Spaces with Approximation Properties
DEFF Research Database (Denmark)
Christensen, Max la Cour; Villa, Umberto; Engsig-Karup, Allan Peter
The paper introduces a nonlinear multigrid solver for mixed finite element discretizations based on the Full Approximation Scheme (FAS) and element-based Algebraic Multigrid (AMGe). The main motivation to use FAS for unstructured problems is the guaranteed approximation property of the AMGe coarse...... properties of the coarse spaces. With coarse spaces with approximation properties, our FAS approach on unstructured meshes has the ability to be as powerful/successful as FAS on geometrically refined meshes. For comparison, Newton’s method and Picard iterations with an inner state-of-the-art linear solver...... are compared to FAS on a nonlinear saddle point problem with applications to porous media flow. It is demonstrated that FAS is faster than Newton’s method and Picard iterations for the experiments considered here. Due to the guaranteed approximation properties of our AMGe, the coarse spaces are very accurate...
CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. II. GRAY RADIATION HYDRODYNAMICS
International Nuclear Information System (INIS)
Zhang, W.; Almgren, A.; Bell, J.; Howell, L.; Burrows, A.
2011-01-01
We describe the development of a flux-limited gray radiation solver for the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. The gray radiation solver is based on a mixed-frame formulation of radiation hydrodynamics. In our approach, the system is split into two parts, one part that couples the radiation and fluid in a hyperbolic subsystem, and another parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem is solved explicitly with a high-order Godunov scheme, whereas the parabolic part is solved implicitly with a first-order backward Euler method.
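The hyperbolic/parabolic splitting described above can be sketched on scalar 1-D advection-diffusion: an explicit upwind step for the hyperbolic part followed by an implicit backward-Euler step for the parabolic part. Everything here (names, grid, coefficients) is an illustrative assumption; CASTRO's actual scheme is a high-order Godunov method coupled to implicit radiation diffusion on adaptive meshes.

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system in O(n) (Thomas algorithm)."""
    n = len(rhs)
    c, d = sup[:], rhs[:]
    c[0] /= diag[0]; d[0] /= diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * c[i - 1]
        c[i] = sup[i] / m
        d[i] = (d[i] - sub[i] * d[i - 1]) / m
    x = d[:]
    for i in range(n - 2, -1, -1):
        x[i] -= c[i] * x[i + 1]
    return x

def split_step(u, dt, dx, vel, kappa):
    """One operator-split step for u_t + vel*u_x = kappa*u_xx with zero
    Dirichlet boundaries: explicit upwind advection (the 'hyperbolic
    part'), then implicit backward-Euler diffusion (the 'parabolic
    part'). Assumes vel > 0 and vel*dt/dx <= 1."""
    n = len(u)
    cfl = vel * dt / dx
    star = [u[i] - cfl * (u[i] - (u[i - 1] if i else 0.0)) for i in range(n)]
    r = kappa * dt / dx ** 2
    return thomas([-r] * n, [1.0 + 2.0 * r] * n, [-r] * n, star)

u = [0.0] * 40
u[18:22] = [1.0, 1.0, 1.0, 1.0]        # square bump initial condition
for _ in range(20):
    u = split_step(u, dt=0.01, dx=0.05, vel=1.0, kappa=0.05)
```

The explicit stage respects a CFL limit while the implicit stage is unconditionally stable, mirroring the explicit-Godunov/implicit-Euler division of labor in the abstract.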
Preston, L. A.
2017-12-01
Marine hydrokinetic (MHK) devices offer a clean, renewable alternative energy source for the future. Responsible utilization of MHK devices, however, requires that the effects of acoustic noise produced by these devices on marine life and marine-related human activities be well understood. Paracousti is a 3-D full waveform acoustic modeling suite that can accurately propagate MHK noise signals in the complex bathymetry found in the near-shore to open ocean environment and considers real properties of the seabed, water column, and air-surface interface. However, this is a deterministic simulation that assumes the environment and source are exactly known. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected noise levels within the marine environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. One method is to use Monte Carlo (MC) techniques where simulation results from a large number of deterministic solutions are aggregated to provide statistical properties of the output signal. However, MC methods can be computationally prohibitive since they can require tens of thousands or more simulations to build up an accurate representation of those statistical properties. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a small fraction of the computational cost of MC. We are developing a SPDE solver for the 3-D acoustic wave propagation problem called Paracousti-UQ to help regulators and operators assess the statistical properties of environmental noise produced by MHK devices. In this presentation, we present the SPDE method and compare statistical distributions of simulated acoustic signals in simple models to MC simulations to show the accuracy and efficiency of the SPDE method. Sandia National Laboratories
Approximate Riemann solvers and flux vector splitting schemes for two-phase flow
International Nuclear Information System (INIS)
Toumi, I.; Kumbaro, A.; Paillere, H.
1999-01-01
These course notes, presented at the 30th Von Karman Institute Lecture Series in Computational Fluid Dynamics, give a detailed and thorough review of upwind differencing methods for two-phase flow models. After recalling some fundamental aspects of two-phase flow modelling, from mixture models to two-fluid models, the mathematical properties of the general 6-equation model are analysed by examining the eigenstructure of the system, and deriving conditions under which the model can be made hyperbolic. The following chapters are devoted to extensions of state-of-the-art upwind differencing schemes such as Roe's Approximate Riemann Solver or the Characteristic Flux Splitting method to two-phase flow. Non-trivial steps in the construction of such solvers include the linearization, the treatment of non-conservative terms and the construction of a Roe-type matrix on which the numerical dissipation of the schemes is based. Extension of the 1-D models to multi-dimensions in an unstructured finite volume formulation is also described. Finally, numerical results for a variety of test-cases are shown to illustrate the accuracy and robustness of the methods. (authors)
Matlab Geochemistry: An open source geochemistry solver based on MRST
McNeece, C. J.; Raynaud, X.; Nilsen, H.; Hesse, M. A.
2017-12-01
The study of geological systems often requires the solution of complex geochemical relations. To address this need we present an open source geochemical solver based on the Matlab Reservoir Simulation Toolbox (MRST) developed by SINTEF. The implementation supports non-isothermal multicomponent aqueous complexation, surface complexation, ion exchange, and dissolution/precipitation reactions. The suite of tools available in MRST allows for rapid model development, in particular the incorporation of geochemical calculations into transport simulations of multiple phases, complex domain geometry and geomechanics. Different numerical schemes and additional physics can be easily incorporated into the existing tools through the object-oriented framework employed by MRST. The solver leverages the automatic differentiation tools available in MRST to solve arbitrarily complex geochemical systems with any choice of species or element concentration as input. Four mathematical approaches enable the solver to be quite robust: 1) the choice of chemical elements as the basis components makes all entries in the composition matrix positive thus preserving convexity, 2) a log variable transformation is used which transfers the nonlinearity to the convex composition matrix, 3) a priori bounds on variables are calculated from the structure of the problem, constraining Newton's path, and 4) an initial guess is calculated implicitly by sequentially adding model complexity. As a benchmark we compare the model to experimental and semi-analytic solutions of the coupled salinity-acidity transport system. Together with the reservoir simulation capabilities of MRST the solver offers a promising tool for geochemical simulations in reservoir domains for applications in a diversity of fields from enhanced oil recovery to radionuclide storage.
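Two of the robustness devices above, the log-variable transformation and the bounded Newton path, can be illustrated on a one-unknown speciation problem. The sketch below solves the charge balance of a weak acid by Newton iteration on x = ln[H+], with the step clipped to a fixed trust interval; the function name, the acid example, and the step cap are illustrative assumptions, and none of this is MRST code.

```python
import math

def ph_weak_acid(C, Ka, Kw=1e-14, iters=60):
    """Newton iteration in x = ln[H+] for the charge balance of a weak
    acid HA at total concentration C. The log variable keeps [H+] > 0 at
    every iterate, and the clipped step mimics 'a priori bounds ...
    constraining Newton's path'. A toy, not the MRST-based solver."""
    def f(x):
        h = math.exp(x)
        # charge balance: [H+] - [OH-] - [A-], with [A-] = Ka*C/(Ka + [H+])
        return h - Kw / h - Ka * C / (Ka + h)
    x = math.log(1e-7)                          # start from neutral water
    for _ in range(iters):
        eps = 1e-6
        dfdx = (f(x + eps) - f(x - eps)) / (2.0 * eps)   # numeric derivative
        step = f(x) / dfdx
        x -= max(-2.0, min(2.0, step))          # bounded Newton step
    return -x / math.log(10.0)                  # pH

# 0.1 M acetic acid (Ka ~ 1.8e-5): expect pH near 2.9
ph = ph_weak_acid(0.1, 1.8e-5)
```

Without the step cap, the first Newton step from neutral water overshoots by many orders of magnitude in concentration; in log variables with a bounded path the iteration walks steadily to the root, which is exactly the robustness argument the abstract makes.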
Boltzmann Solver with Adaptive Mesh in Velocity Space
International Nuclear Information System (INIS)
Kolobov, Vladimir I.; Arslanbekov, Robert R.; Frolova, Anna A.
2011-01-01
We describe the implementation of a direct Boltzmann solver with Adaptive Mesh in Velocity Space (AMVS) using a quad/octree data structure. The benefits of the AMVS technique are demonstrated for the charged particle transport in weakly ionized plasmas where the collision integral is linear. We also describe the implementation of AMVS for the nonlinear Boltzmann collision integral. Test computations demonstrate both advantages and deficiencies of the current method for calculations of narrow-kernel distributions.
Resolving Neighbourhood Relations in a Parallel Fluid Dynamic Solver
Frisch, Jerome
2012-06-01
Computational Fluid Dynamics simulations require an enormous computational effort if a physically reasonable accuracy should be reached. Therefore, a parallel implementation is inevitable. This paper describes the basics of our implemented fluid solver with a special aspect on the hierarchical data structure, unique cell and grid identification, and the neighbourhood relations in-between grids on different processes. A special server concept keeps track of every grid over all processes while minimising data transfer between the nodes. © 2012 IEEE.
Menu-Driven Solver Of Linear-Programming Problems
Viterna, L. A.; Ferencz, D.
1992-01-01
Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) computer program is a full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
A contribution to the great Riemann solver debate
Quirk, James J.
1992-01-01
The aims of this paper are threefold: to increase the level of awareness within the shock capturing community to the fact that many Godunov-type methods contain subtle flaws that can cause spurious solutions to be computed; to identify one mechanism that might thwart attempts to produce very high resolution simulations; and to proffer a simple strategy for overcoming the specific failings of individual Riemann solvers.
Applications of 3-D Maxwell solvers to accelerator design
International Nuclear Information System (INIS)
Chou, W.
1990-01-01
This paper gives a brief discussion on various applications of 3-D Maxwell solvers to accelerator design. The work is based on our experience gained during the design of the storage ring of the 7-GeV Advanced Photon Source (APS). It shows that 3-D codes are not replaceable in many cases, and that a lot of work remains to be done in order to establish a solid base for 3-D simulations
Scalable parallel prefix solvers for discrete ordinates transport
International Nuclear Information System (INIS)
Pautz, S.; Pandya, T.; Adams, M.
2009-01-01
The well-known 'sweep' algorithm for inverting the streaming-plus-collision term in first-order deterministic radiation transport calculations has some desirable numerical properties. However, it suffers from parallel scaling issues caused by a lack of concurrency. The maximum degree of concurrency, and thus the maximum parallelism, grows more slowly than the problem size for sweeps-based solvers. We investigate a new class of parallel algorithms that involves recasting the streaming-plus-collision problem in prefix form and solving via cyclic reduction. This method, although computationally more expensive at low levels of parallelism than the sweep algorithm, offers better theoretical scalability properties. Previous work has demonstrated this approach for one-dimensional calculations; we show how to extend it to multidimensional calculations. Notably, for multiple dimensions it appears that this approach is limited to long-characteristics discretizations; other discretizations cannot be cast in prefix form. We implement two variants of the algorithm within the radlib/SCEPTRE transport code library at Sandia National Laboratories and show results on two different massively parallel systems. Both the 'forward' and 'symmetric' solvers behave similarly, scaling well to larger degrees of parallelism than sweeps-based solvers. We do observe some issues at the highest levels of parallelism (relative to the system size) and discuss possible causes. We conclude that this approach shows good potential for future parallel systems, but the parallel scalability will depend heavily on the architecture of the communication networks of these systems. (authors)
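The prefix recasting is easy to illustrate in 1-D: an upwind-discretized streaming-plus-collision sweep along one direction is the linear recurrence psi_i = a_i*psi_{i-1} + b_i, and the affine maps (a_i, b_i) compose associatively, so any parallel scan (recursive doubling, cyclic reduction) can evaluate all prefixes at once. The sketch below runs the recursive-doubling rounds sequentially; the coefficient values are made up, and this is not the radlib/SCEPTRE implementation.

```python
def affine_compose(f, g):
    """Compose affine maps represented as (a, b) ~ x -> a*x + b, with g
    applied first: (f . g)(x) = fa*(ga*x + gb) + fb."""
    (fa, fb), (ga, gb) = f, g
    return (fa * ga, fa * gb + fb)

def prefix_scan(maps, op):
    """Inclusive scan by recursive doubling (Hillis-Steele). Each round's
    updates are independent across i, so they could run concurrently;
    here a plain loop stands in for the parallel hardware."""
    res = list(maps)
    shift = 1
    while shift < len(res):
        res = [op(res[i], res[i - shift]) if i >= shift else res[i]
               for i in range(len(res))]
        shift *= 2
    return res

# Per-cell attenuation a_i and source contribution b_i (made-up values):
a = [0.8, 0.9, 0.7, 0.85, 0.95, 0.6, 0.75, 0.9]
b = [0.1, 0.05, 0.2, 0.0, 0.3, 0.1, 0.05, 0.2]
scanned = prefix_scan(list(zip(a, b)), affine_compose)
psi0 = 1.0                                  # boundary angular flux
psi = [sa * psi0 + sb for sa, sb in scanned]
```

Applying each scanned prefix map to the boundary flux reproduces the sequential sweep exactly, which is why the prefix form trades extra arithmetic for concurrency.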
An immersed interface vortex particle-mesh solver
Marichal, Yves; Chatelain, Philippe; Winckelmans, Gregoire
2014-11-01
An immersed interface-enabled vortex particle-mesh (VPM) solver is presented for the simulation of 2-D incompressible viscous flows, in the framework of external aerodynamics. Considering the simulation of free vortical flows, such as wakes and jets, vortex particle-mesh methods already provide a valuable alternative to standard CFD methods, thanks to the interesting numerical properties arising from their Lagrangian nature. Yet, accounting for solid bodies remains challenging, despite the extensive research efforts that have been made for several decades. The present immersed interface approach aims at improving the consistency and the accuracy of one very common technique (based on Lighthill's model) for the enforcement of the no-slip condition at the wall in vortex methods. Targeting a sharp treatment of the wall calls for substantial modifications at all computational levels of the VPM solver. More specifically, the solution of the underlying Poisson equation, the computation of the diffusion term and the particle-mesh interpolation are adapted accordingly and the spatial accuracy is assessed. The immersed interface VPM solver is subsequently validated on the simulation of some challenging impulsively started flows, such as the flow past a cylinder and that past an airfoil. Research Fellow (PhD student) of the F.R.S.-FNRS of Belgium.
Newton-Krylov-BDDC solvers for nonlinear cardiac mechanics
Pavarino, L.F.; Scacchi, S.; Zampini, Stefano
2015-01-01
The aim of this work is to design and study a Balancing Domain Decomposition by Constraints (BDDC) solver for the nonlinear elasticity system modeling the mechanical deformation of cardiac tissue. The contraction–relaxation process in the myocardium is induced by the generation and spread of the bioelectrical excitation throughout the tissue and it is mathematically described by the coupling of cardiac electro-mechanical models consisting of systems of partial and ordinary differential equations. In this study, the discretization of the electro-mechanical models is performed by Q1 finite elements in space and semi-implicit finite difference schemes in time, leading to the solution of a large-scale linear system for the bioelectrical potentials and a nonlinear system for the mechanical deformation at each time step of the simulation. The parallel mechanical solver proposed in this paper consists in solving the nonlinear system with a Newton-Krylov-BDDC method, based on the parallel solution of local mechanical problems and a coarse problem for the so-called primal unknowns. Three-dimensional parallel numerical tests on different machines show that the proposed parallel solver is scalable in the number of subdomains, quasi-optimal in the ratio of subdomain to mesh sizes, and robust with respect to tissue anisotropy.
Direct solvers performance on h-adapted grids
Paszynski, Maciej; Pardo, David; Calo, Victor M.
2015-01-01
We analyse the performance of direct solvers when applied to a system of linear equations arising from an h-adapted, C0 finite element space. Theoretical estimates are derived for typical h-refinement patterns arising as a result of a point, edge, or face singularity as well as boundary layers. They are based on the elimination trees constructed specifically for the considered grids. Theoretical estimates are compared with experiments performed with MUMPS using the nested-dissection algorithm for construction of the elimination tree from the METIS library. The numerical experiments provide the same performance for the cases where our trees are identical with those constructed by the nested-dissection algorithm, and worse performance for some cases where our trees are different. We also present numerical experiments for the cases with mixed singularities, where how to construct optimal elimination trees is unknown. In all analysed cases, the use of h-adaptive grids significantly reduces the cost of the direct solver algorithm per unknown as compared to uniform grids. The theoretical estimates predict and the experimental data confirm that the computational complexity is linear for various refinement patterns. In most cases, the cost of the direct solver per unknown is lower when employing anisotropic refinements as opposed to isotropic ones.
NONLINEAR MULTIGRID SOLVER EXPLOITING AMGe COARSE SPACES WITH APPROXIMATION PROPERTIES
Energy Technology Data Exchange (ETDEWEB)
Christensen, Max La Cour [Technical Univ. of Denmark, Lyngby (Denmark); Villa, Umberto E. [Univ. of Texas, Austin, TX (United States); Engsig-Karup, Allan P. [Technical Univ. of Denmark, Lyngby (Denmark); Vassilevski, Panayot S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-01-22
The paper introduces a nonlinear multigrid solver for mixed finite element discretizations based on the Full Approximation Scheme (FAS) and element-based Algebraic Multigrid (AMGe). The main motivation to use FAS for unstructured problems is the guaranteed approximation property of the AMGe coarse spaces that were developed recently at Lawrence Livermore National Laboratory. These give the ability to derive stable and accurate coarse nonlinear discretization problems. The previous attempts (including ones with the original AMGe method, [5, 11]) were less successful due to the lack of such good approximation properties of the coarse spaces. With coarse spaces with approximation properties, our FAS approach on unstructured meshes should be as powerful/successful as FAS on geometrically refined meshes. For comparison, Newton's method and Picard iterations with an inner state-of-the-art linear solver are compared to FAS on a nonlinear saddle point problem with applications to porous media flow. It is demonstrated that FAS is faster than Newton's method and Picard iterations for the experiments considered here. Due to the guaranteed approximation properties of our AMGe, the coarse spaces are very accurate, providing a solver with the potential for mesh-independent convergence on general unstructured meshes.
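To make the FAS idea concrete, here is a hedged two-grid sketch on a 1-D model problem -u'' + u^2 = f with geometric coarsening; the paper's actual contribution, AMGe coarse spaces on unstructured meshes, is not reproduced here. The distinctive FAS feature is the coarse equation N_c(u_c) = N_c(R u) + R r, which transfers the full approximation rather than only a residual correction:

```python
# Hedged two-grid Full Approximation Scheme (FAS) on a 1-D toy problem
# -u'' + u^2 = f with zero Dirichlet boundaries (geometric coarsening,
# not the paper's AMGe coarse spaces).

def apply_N(u, h):
    """Discrete nonlinear operator (-u'' + u^2) at interior points."""
    n = len(u)
    nb = lambda i: u[i] if 0 <= i < n else 0.0
    return [(2 * u[i] - nb(i - 1) - nb(i + 1)) / h ** 2 + u[i] ** 2
            for i in range(n)]

def smooth(u, f, h, sweeps):
    """Nonlinear Gauss-Seidel: one scalar Newton step per point equation."""
    n = len(u)
    for _ in range(sweeps):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            res = (2 * u[i] - left - right) / h ** 2 + u[i] ** 2 - f[i]
            u[i] -= res / (2 / h ** 2 + 2 * u[i])
    return u

def restrict(v):
    """Full weighting onto the coarse points (odd fine interior indices)."""
    return [0.25 * v[2 * j] + 0.5 * v[2 * j + 1] + 0.25 * v[2 * j + 2]
            for j in range((len(v) - 1) // 2)]

def prolong(vc, n):
    """Linear interpolation back to the n fine interior points."""
    v = [0.0] * n
    for j, val in enumerate(vc):
        v[2 * j + 1] = val
    for i in range(0, n, 2):
        left = v[i - 1] if i > 0 else 0.0
        right = v[i + 1] if i < n - 1 else 0.0
        v[i] = 0.5 * (left + right)
    return v

def fas_cycle(u, f, h):
    u = smooth(u, f, h, 3)                          # pre-smooth
    r = [fi - ni for fi, ni in zip(f, apply_N(u, h))]
    uc = restrict(u)
    # FAS coarse equation: N_c(u_c) = N_c(R u) + R r
    fc = [a + b for a, b in zip(apply_N(uc, 2 * h), restrict(r))]
    uc_new = smooth(uc[:], fc, 2 * h, 100)          # approximate coarse solve
    corr = prolong([a - b for a, b in zip(uc_new, uc)], len(u))
    u = [ui + ci for ui, ci in zip(u, corr)]
    return smooth(u, f, h, 3)                       # post-smooth

n, h = 31, 1.0 / 32                                 # interior points of [0, 1]
xs = [(i + 1) * h for i in range(n)]
exact = [x * (1 - x) for x in xs]                   # manufactured solution
f = [2.0 + e ** 2 for e in exact]                   # f = -u'' + u^2
u = [0.0] * n
for _ in range(20):
    u = fas_cycle(u, f, h)
assert max(abs(a - b) for a, b in zip(u, exact)) < 1e-6
```

The manufactured solution u = x(1 - x) is quadratic, so the second difference is exact at the nodes and the discrete solution coincides with it; the assertion checks that the FAS cycles actually converge to that solution.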
IGA-ADS: Isogeometric analysis FEM using ADS solver
Łoś, Marcin M.; Woźniak, Maciej; Paszyński, Maciej; Lenharth, Andrew; Hassaan, Muhammad Amber; Pingali, Keshav
2017-08-01
In this paper we present a fast explicit solver for the solution of non-stationary problems using L2 projections with the isogeometric finite element method. The solver has been implemented within the GALOIS framework. It enables parallel multi-core simulations of different time-dependent problems in 1D, 2D, or 3D. We have prepared the solver framework in a way that enables direct implementation of the selected PDE and corresponding boundary conditions. In this paper we describe the installation, the implementation of three exemplary PDEs, and the execution of the simulations on multi-core Linux cluster nodes. We consider three case studies: heat transfer, linear elasticity, and non-linear flow in heterogeneous media. The presented package generates output suitable for interfacing with Gnuplot and ParaView visualization software. The exemplary simulations show near-perfect scalability on the Gilbert shared-memory node with four Intel® Xeon® CPU E7-4860 processors, each possessing 10 physical cores (for a total of 40 cores).
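The "alternating directions" ingredient of such solvers can be sketched independently of isogeometric analysis: when the Gram (mass) matrix factors as a Kronecker product Mx ⊗ My of 1-D matrices, the L2-projection system is solved with two passes of 1-D solves instead of one large 2-D solve. The toy below uses small dense matrices and a plain Gauss-Jordan helper (the real solver exploits banded 1-D B-spline mass matrices, which is where the linear cost comes from):

```python
# Hedged sketch of an alternating-direction solve: invert (Mx kron My)
# using only 1-D solves, never forming the 2-D matrix.

def gauss_solve(A, rhs_cols):
    """Gauss-Jordan with partial pivoting; rhs_cols is a list of RHS vectors.
    Returns the list of solution vectors."""
    n = len(A)
    M = [A[i][:] + [col[i] for col in rhs_cols] for i in range(n)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(n):
            if r != k:
                fac = M[r][k] / M[k][k]
                M[r] = [a - fac * b for a, b in zip(M[r], M[k])]
    return [[M[i][n + j] / M[i][i] for i in range(n)]
            for j in range(len(rhs_cols))]

def kron_solve(Mx, My, b):
    """Solve (Mx kron My) vec(X) = b (vec = column stacking) via 1-D solves.
    Uses the identity (Mx kron My) vec(X) = vec(My X Mx^T)."""
    n, m = len(Mx), len(My)
    b_cols = [b[j * m:(j + 1) * m] for j in range(n)]   # columns of the m x n B
    w_cols = gauss_solve(My, b_cols)                    # pass 1: W = My^-1 B
    w_rows = [[w_cols[j][i] for j in range(n)] for i in range(m)]
    x_rows = gauss_solve(Mx, w_rows)                    # pass 2: X^T = Mx^-1 W^T
    return [x_rows[i][j] for j in range(n) for i in range(m)]   # vec(X)

Mx = [[2.0, 1.0], [1.0, 2.0]]
My = [[3.0, 1.0], [1.0, 2.0]]
y_true = [1.0, 2.0, 3.0, 4.0]
# build the full Kronecker matrix only to verify the matrix-free solve
K = [[Mx[j][l] * My[i][k] for l in range(2) for k in range(2)]
     for j in range(2) for i in range(2)]
b = [sum(kij * yj for kij, yj in zip(row, y_true)) for row in K]
y = kron_solve(Mx, My, b)
assert max(abs(a - c) for a, c in zip(y, y_true)) < 1e-12
```

With banded 1-D factors each pass costs O(N) per time step, which is the property that makes the explicit scheme fast.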
NITSOL: A Newton iterative solver for nonlinear systems
Energy Technology Data Exchange (ETDEWEB)
Pernice, M. [Univ. of Utah, Salt Lake City, UT (United States); Walker, H.F. [Utah State Univ., Logan, UT (United States)
1996-12-31
Newton iterative methods, also known as truncated Newton methods, are implementations of Newton's method in which the linear systems that characterize Newton steps are solved approximately using iterative linear algebra methods. Here, we outline a well-developed Newton iterative algorithm together with a Fortran implementation called NITSOL. The basic algorithm is an inexact Newton method globalized by backtracking, in which each initial trial step is determined by applying an iterative linear solver until an inexact Newton criterion is satisfied. In the implementation, the user can specify inexact Newton criteria in several ways and select an iterative linear solver from among several popular "transpose-free" Krylov subspace methods. Jacobian-vector products used by the Krylov solver can be either evaluated analytically with a user-supplied routine or approximated using finite differences of function values. A flexible interface permits a wide variety of preconditioning strategies and allows the user to define a preconditioner and optionally update it periodically. We give details of these and other features and demonstrate the performance of the implementation on a representative set of test problems.
Bottlenecks to Clinical Translation of Direct Brain-Computer Interfaces
Directory of Open Access Journals (Sweden)
Mijail Demian Serruya
2014-12-01
Despite several decades of research into novel brain-implantable devices to treat a range of diseases, only two (cochlear implants for sensorineural hearing loss and deep brain stimulation for movement disorders) have yielded any appreciable clinical benefit. Obstacles to translation include technical factors (e.g., signal loss due to gliosis or micromotion), lack of awareness of the current clinical options for patients that the new therapy must outperform, difficulty traversing between the federal and corporate funding needed to support clinical trials, and insufficient management expertise. This commentary reviews these obstacles preventing the translation of promising new neurotechnologies into clinical application and suggests some principles that interdisciplinary teams in academia and industry could adopt to enhance their chances of success.
Directory of Open Access Journals (Sweden)
X. Huang
2016-11-01
In the Community Earth System Model (CESM), the ocean model is computationally expensive for high-resolution grids and is often the least scalable component for high-resolution production experiments. The major bottleneck is that the barotropic solver scales poorly at high core counts. We design a new barotropic solver to accelerate high-resolution ocean simulation. The novel solver adopts a Chebyshev-type iterative method to reduce the global communication cost, in conjunction with an effective block preconditioner to further reduce the iterations. The algorithm and its computational complexity are theoretically analyzed and compared with other existing methods. We confirm the significant reduction of the global communication time, with a competitive convergence rate, using a series of idealized tests. Numerical experiments using the CESM 0.1° global ocean model show that the proposed approach results in a factor of 1.7 speed-up over the original method with no loss of accuracy, achieving 10.5 simulated years per wall-clock day on 16,875 cores.
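The appeal of a Chebyshev-type iteration in this setting is that, given eigenvalue bounds for the (preconditioned) operator, its recurrence needs no inner products, so the per-iteration global reductions that dominate CG at high core counts disappear. A minimal serial sketch (toy 2x2 SPD system with assumed bounds lmin < lmax; the paper's solver adds a block preconditioner on top of this idea):

```python
# Hedged sketch of the classical Chebyshev iteration for SPD systems.
# Unlike CG, no dot products appear anywhere in the loop, so a parallel
# version needs no global allreduce per iteration.

def chebyshev(A, b, lmin, lmax, iters=30):
    n = len(b)
    matvec = lambda v: [sum(a * x for a, x in zip(row, v)) for row in A]
    theta = 0.5 * (lmax + lmin)           # center of the spectrum interval
    delta = 0.5 * (lmax - lmin)           # half-width (assumes lmin < lmax)
    sigma = theta / delta
    rho = 1.0 / sigma
    x = [0.0] * n
    r = b[:]                              # r = b - A*0
    d = [ri / theta for ri in r]
    for _ in range(iters):
        x = [xi + di for xi, di in zip(x, d)]
        Ad = matvec(d)
        r = [ri - adi for ri, adi in zip(r, Ad)]
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = [rho_new * rho * di + 2.0 * rho_new / delta * ri
             for di, ri in zip(d, r)]     # note: no dot products anywhere
        rho = rho_new
    return x

A = [[2.0, 0.0], [0.0, 4.0]]
x = chebyshev(A, [2.0, 4.0], lmin=2.0, lmax=4.0)   # exact solution is (1, 1)
assert all(abs(xi - 1.0) < 1e-8 for xi in x)
```

The price is that usable spectral bounds must be known in advance; with poor bounds the method converges slowly or diverges, which is one reason a good preconditioner matters.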
Saccadic Eye Movements Impose a Natural Bottleneck on Visual Short-Term Memory
Ohl, Sven; Rolfs, Martin
2017-01-01
Visual short-term memory (VSTM) is a crucial repository of information when events unfold rapidly before our eyes, yet it maintains only a fraction of the sensory information encoded by the visual system. Here, we tested the hypothesis that saccadic eye movements provide a natural bottleneck for the transition of fragile content in sensory memory…
Self-organized phenomena of pedestrian counterflow through a wide bottleneck in a channel
Dong, Li-Yun; Lan, Dong-Kai; Li, Xiang
2016-09-01
The pedestrian counterflow through a bottleneck in a channel shows a variety of flow patterns due to self-organization. In order to reveal the underlying mechanism, a cellular automaton model was proposed by incorporating the floor field, which reflects the global information of the studied area, and the view field, which reflects local interactions with others. The presented model can well reproduce typical collective behaviors, such as lane formation. Numerical simulations were performed in the case of a wide bottleneck, and typical flow patterns in different density ranges were identified as rarefied flow, laminar flow, interrupted bidirectional flow, oscillatory flow, intermittent flow, and choked flow. The effects of several parameters, such as the size of the view field and the width of the opening, on the bottleneck flow are also analyzed in detail. The view field plays a vital role in reproducing self-organized phenomena of pedestrians. Numerical results showed that the presented model can capture key characteristics of bottleneck flows. Project supported by the National Basic Research Program of China (Grant No. 2012CB725404) and the National Natural Science Foundation of China (Grant Nos. 11172164 and 11572184).
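One standard ingredient of such cellular automaton models, the static floor field, can be sketched as a breadth-first distance-to-target map over the walkable cells; pedestrians then move greedily down this field, which funnels them through the opening. This toy (a hypothetical grid; the paper's model additionally includes the view field and dynamic interactions, which are not reproduced) computes the field for a channel with a one-cell bottleneck:

```python
# Hedged sketch of a static floor field: BFS distance to a target cell
# over walkable cells, with walls blocking propagation.
from collections import deque

def floor_field(grid, exit_cell):
    """grid: 0 = walkable, 1 = wall. Returns BFS distance-to-exit per cell
    (None for walls and unreachable cells)."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    q = deque([exit_cell])
    dist[exit_cell[0]][exit_cell[1]] = 0
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

# channel with a wall (row of 1s) pierced by a one-cell bottleneck
grid = [
    [0, 0, 0, 0, 0],
    [1, 1, 0, 1, 1],   # bottleneck at column 2
    [0, 0, 0, 0, 0],
]
d = floor_field(grid, exit_cell=(2, 2))
assert d[1][2] == 1           # the opening sits one step from the target cell
assert d[0][0] == 4           # paths from the far side must funnel through it
```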
Two retrievals from a single cue: A bottleneck persists across episodic and semantic memory.
Orscheschek, Franziska; Strobach, Tilo; Schubert, Torsten; Rickard, Timothy
2018-05-01
There is evidence in the literature that two retrievals from long-term memory cannot occur in parallel. To date, however, that work has explored only the case of two retrievals from newly acquired episodic memory. These studies demonstrated a retrieval bottleneck even after dual-retrieval practice. That retrieval bottleneck may be a global property of long-term memory retrieval, or it may apply only to the case of two retrievals from episodic memory. In the current experiments, we explored whether that apparent dual-retrieval bottleneck applies to the case of one retrieval from episodic memory and one retrieval from highly overlearned semantic memory. Across three experiments, subjects learned to retrieve a left or right keypress response from a set of 14 unique word cues (e.g., black-right keypress). In addition, they learned a verbal response which involved retrieving the antonym of the presented cue (e.g., black-"white"). In the dual-retrieval condition, subjects had to retrieve both the keypress response and the antonym word. The results suggest that the retrieval bottleneck is superordinate to specific long-term memory systems and holds across different memory components. In addition, the results support the assumption of a cue-level response chunking account of learned retrieval parallelism.
Memory bottlenecks and memory contention in multi-core Monte Carlo transport codes
International Nuclear Information System (INIS)
Tramm, J.R.; Siegel, A.R.
2013-01-01
The simulation of whole nuclear cores through the use of Monte Carlo codes requires an impracticably long time-to-solution. We have extracted a kernel that executes only the most computationally expensive steps of the Monte Carlo particle transport algorithm - the calculation of macroscopic cross sections - in an effort to expose bottlenecks within multi-core, shared memory architectures. (authors)
Sahli, H.; Bruschini, C.; Kempen, L. van; Schleijpen, H.M.A.; Breejen, E. den
2008-01-01
The EC DELVE Support Action project has analyzed the bottlenecks in the transfer of Humanitarian Demining (HD) technology from technology development to use in the field, and has drawn some lessons learned, based on an assessment of the European Humanitarian Demining Research and Technology
Bottlenecks and opportunities for quality improvement in fresh pineapple supply chains in Benin
Fassinou Hotegni, V.N.; Lommen, W.J.M.; Vorst, van der J.G.A.J.; Agbossou, E.K.; Struik, P.C.
2014-01-01
This study mapped and diagnosed the fresh pineapple supply chains in Benin to identify bottlenecks in pineapple quality improvement for different markets. A research framework was defined that comprised all relevant aspects to be researched. After 54 semi-structured interviews with key informants,
Airlines' strategic interactions and airport pricing in a dynamic bottleneck model of congestion
Silva Montalva, H.E.; Verhoef, E.T.; van den Berg, V.A.C.
2014-01-01
This paper analyzes efficient pricing at a congested airport dominated by a single firm. Unlike much of the previous literature, we combine a dynamic bottleneck model of congestion and a vertical structure model that explicitly considers the role of airlines and passengers. We show that a
About the Role of the Bottleneck/Cork Interface on Oxygen Transfer.
Lagorce-Tachon, Aurélie; Karbowiak, Thomas; Paulin, Christian; Simon, Jean-Marc; Gougeon, Régis D; Bellat, Jean-Pierre
2016-09-07
The transfer of oxygen through a corked bottleneck was investigated using a manometric technique. First, the effect of cork compression on oxygen transfer was evaluated without considering the glass/cork interface. No significant effect of cork compression (at 23% strain, corresponding to the compression level of cork in a bottleneck for still wines) was noticeable on the effective diffusion coefficient of oxygen. The mean value of the effective diffusion coefficient is equal to 10⁻⁸ m² s⁻¹, with a statistical distribution ranging from 10⁻¹⁰ to 10⁻⁷ m² s⁻¹, which is of the same order of magnitude as for the non-compressed cork. Then, oxygen transfer through cork compressed in a glass bottleneck was determined to assess the effect of the glass/cork interface. In the particular case of a gradient-imposed diffusion of oxygen through our model corked bottleneck system (dry cork without surface treatment; 200 and ∼0 hPa of oxygen on both sides of the sample), the mean effective diffusion coefficient is 5 × 10⁻⁷ m² s⁻¹, thus revealing the possible importance of the role of the glass/stopper interface in the oxygen transfer.
Directory of Open Access Journals (Sweden)
Jonci N Wolff
In most species mitochondrial DNA (mtDNA) is inherited maternally in an apparently clonal fashion, although how this is achieved remains uncertain. Population genetic studies show not only that individuals can harbor more than one type of mtDNA (heteroplasmy) but that heteroplasmy is common and widespread across a diversity of taxa. Females harboring a mixture of mtDNAs may transmit varying proportions of each mtDNA type (haplotype) to their offspring. However, mtDNA variants are also observed to segregate rapidly between generations despite the high mtDNA copy number in the oocyte, which suggests a genetic bottleneck acts during mtDNA transmission. Understanding the size and timing of this bottleneck is important for interpreting population genetic relationships and for predicting the inheritance of mtDNA-based disease, but despite its importance the underlying mechanisms remain unclear. Empirical studies, restricted to mice, have shown that the mtDNA bottleneck could act either at embryogenesis, oogenesis, or both. To investigate whether the size and timing of the mitochondrial bottleneck is conserved between distant vertebrates, we measured the genetic variance in mtDNA heteroplasmy at three developmental stages (female, ova, and fry) in chinook salmon and applied a new mathematical model to estimate the number of segregating units (Ne) of the mitochondrial bottleneck between each stage. Using these data we estimate values for mtDNA Ne of 88.3 for oogenesis, and 80.3 for embryogenesis. Our results confirm the presence of a mitochondrial bottleneck in fish, and show that segregation of mtDNA variation is effectively complete by the end of oogenesis. Considering the extensive differences in reproductive physiology between fish and mammals, our results suggest the mechanism underlying the mtDNA bottleneck is conserved in these distant vertebrates both in terms of its magnitude and timing. This finding may lead to improvements in our understanding of
Java Based Symbolic Circuit Solver For Electrical Engineering Curriculum
Directory of Open Access Journals (Sweden)
Ruba Akram Amarin
2012-11-01
The interactive technical electronic book, TechEBook, currently under development at the University of Central Florida (UCF), introduces a paradigm shift by replacing the traditional electrical engineering course with topic-driven modules that provide a useful tool for engineers and scientists. The TechEBook combines the two worlds of classical circuit books and interactive operating platforms such as iPads, laptops, and desktops. The TechEBook provides an interactive applets screen that holds many modules, each of which has a specific application in the self-learning process. This paper describes one of the interactive techniques in the TechEBook known as the Symbolic Circuit Solver (SymCirc). SymCirc implements a versatile symbolic solver for linear circuits with switches. The solver works by accepting as input parameters a netlist and the element for which the user wants the voltage across or current through. It then produces either the plot or the time-domain expression of the output. Frequency-domain plots or symbolic transfer functions are also produced. The solver gets its input from a Web-based GUI circuit drawer developed at UCF. Typical simulation tools that electrical engineers encounter are numerical in nature; that is, when presented with an input circuit they iteratively solve the circuit across a set of small time steps. The result is represented as a data set of output versus time, which can be plotted for further inspection. Such results do not help users understand the ultimate nature of circuits as linear time-invariant systems with a finite-dimensional basis in the solution space. SymCirc provides all simulation results as time-domain expressions composed of basic functions that exclusively include exponentials, sines, cosines, and/or t raised to any power. This paper explains the motivation behind SymCirc, the graphical user interface front end, and how the solver actually works. The paper also presents some examples and
Energy Technology Data Exchange (ETDEWEB)
Fisher, A. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bailey, D. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kaiser, T. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Eder, D. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gunney, B. T. N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Masters, N. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Koniges, A. E. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Anderson, R. W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-02-01
Here, we present a novel method for the solution of the diffusion equation on a composite AMR mesh. This approach is suitable for adding diffusion-based physics modules to hydrocodes that support ALE and AMR capabilities. To illustrate, we proffer our implementations of diffusion-based radiation transport and heat conduction in a hydrocode called ALE-AMR. Numerical experiments conducted with the diffusion solver and associated physics packages yield second-order convergence in the L2 norm.
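As a much-reduced illustration of the reported convergence behavior, the sketch below solves the 1-D heat equation on a uniform grid (not ALE-AMR's composite AMR mesh) with an explicit scheme and checks that halving the mesh spacing cuts the L2 error by roughly a factor of four, i.e., second-order convergence:

```python
# Hedged convergence check for an explicit diffusion solve u_t = u_xx on
# [0, 1] with u(x, 0) = sin(pi x) and zero Dirichlet ends; exact solution
# is exp(-pi^2 t) sin(pi x).
import math

def solve_heat(n, t_end):
    h = 1.0 / n
    dt = 0.25 * h * h                     # well inside the FTCS stability limit
    steps = int(round(t_end / dt))
    u = [math.sin(math.pi * i * h) for i in range(n + 1)]
    for _ in range(steps):
        u = [0.0] + [u[i] + dt / h ** 2 * (u[i - 1] - 2 * u[i] + u[i + 1])
                     for i in range(1, n)] + [0.0]
    return u, h, steps * dt

def l2_error(n, t_end=0.1):
    u, h, t = solve_heat(n, t_end)
    exact = [math.exp(-math.pi ** 2 * t) * math.sin(math.pi * i * h)
             for i in range(len(u))]
    return math.sqrt(h * sum((a - b) ** 2 for a, b in zip(u, exact)))

# halving h should cut the L2 error by about 4x (second-order convergence);
# dt scales with h^2, so the temporal error refines at the same rate
e1, e2 = l2_error(20), l2_error(40)
assert 3.0 < e1 / e2 < 5.0
```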
International Nuclear Information System (INIS)
Gibson, L.L.; Schatz, G.C.; Ratner, M.A.; Davis, M.J.
1987-01-01
We compare quantum and classical mechanics for a collinear model of OCS at an energy (20 000 cm⁻¹) where Davis [J. Chem. Phys. 83, 1016 (1985)] had previously found that phase space bottlenecks associated with golden mean tori inhibit classical flow between different chaotic regions in phase space. Accurate quantum eigenfunctions for this two-mode system are found by diagonalizing a large basis of complex Gaussian functions, and these are then used to study the evolution of wave packets which have 20 000 cm⁻¹ average energies. By examining phase space (Husimi) distributions associated with the wave functions, we conclude that these golden mean tori do indeed act as bottlenecks which constrain the wave packets to evolve within one (or a combination of) regions. The golden mean tori do not completely determine the boundaries between regions, however. Bottlenecks associated with resonance trapping and with separatrix formation are also involved. The analysis of the Husimi distributions also indicates that each exact eigenstate is nearly always associated with just one region, and because of this, superpositions of eigenstates that are localized within a region remain localized in that region at all times. This last result differs from the classical picture at this energy, where flow across the bottlenecks occurs with a 2–4 ps lifetime. Since the classical phase space area through which flux must pass to cross the bottlenecks is small compared to h for OCS, the observed difference between quantum and classical dynamics is not surprising. Examination of the time development of normal mode energies indicates little or no energy flow quantum mechanically for wave packet initial conditions
Directory of Open Access Journals (Sweden)
Sen Parag
2015-01-01
Environmentally conscious manufacturing has become a global concern for iron and steel manufacturers seeking to prevent global warming and climate change while making money. The iron and steel sector is considered one of the most polluting sectors in the world. It is also one of the most energy-intensive industries. During pig iron manufacturing, there is a number of steps that affect the environment by emitting different pollutants. While some step(s) may be considered critical in damaging the environment among all the steps, some pollutant(s) may be considered critical in affecting the environment among all the pollutants. This paper proposes the environmental bottleneck to consider the critical step and critical pollutant simultaneously. Unless the environmental bottleneck is improved, the environmental performance of the entire manufacturing process may not improve significantly even if other processes (i.e., other than the environmental bottleneck) are taken care of. Thus, the environmental bottleneck must be addressed properly by the manufacturing organization to enable environmentally conscious manufacturing. Hence, a method should be developed to identify the environmental bottleneck. The current research work uses Bayesian Networks (BN) to identify the environmental bottleneck. The contribution of the paper is to identify the environmental bottleneck for an Indian pig iron manufacturing organization. Results suggest that carbon monoxide (CO) emission from the blast furnace is the environmental bottleneck for the current pig iron manufacturing organization. Hence, proper precautions should be taken to control the CO emission from the blast furnace.
Bristol, Rachel M; Tucker, Rachel; Dawson, Deborah A; Horsburgh, Gavin; Prys-Jones, Robert P; Frantz, Alain C; Krupa, Andy; Shah, Nirmal J; Burke, Terry; Groombridge, Jim J
2013-09-01
Re-introduction is an important tool for recovering endangered species; however, the magnitude of genetic consequences for re-introduced populations remains largely unknown, in particular the relative impacts of historical population bottlenecks compared to those induced by conservation management. We characterize 14 microsatellite loci developed for the Seychelles paradise flycatcher and use them to quantify temporal and spatial measures of genetic variation across a 134-year time frame encompassing a historical bottleneck that reduced the species to ~28 individuals in the 1960s, through the initial stages of recovery and across a second contemporary conservation-introduction-induced bottleneck. We then evaluate the relative impacts of the two bottlenecks, and finally apply our findings to inform broader re-introduction strategy. We find a temporal trend of significant decrease in standard measures of genetic diversity across the historical bottleneck, but only a nonsignificant downward trend in number of alleles across the contemporary bottleneck. However, accounting for the different timescales of the two bottlenecks (~40 historical generations versus introduction. In some cases, the loss of genetic diversity per generation can, initially at least, be greater across re-introduction-induced bottlenecks. © 2013 John Wiley & Sons Ltd.
Telescopic Hybrid Fast Solver for 3D Elliptic Problems with Point Singularities
Paszyńska, Anna; Jopek, Konrad; Banaś, Krzysztof; Paszyński, Maciej; Gurgul, Piotr; Lenharth, Andrew; Nguyen, Donald; Pingali, Keshav; Dalcin, Lisandro; Calo, Victor M.
2015-01-01
This paper describes a telescopic solver for two-dimensional h-adaptive grids with point singularities. The input for the telescopic solver is an h-refined two-dimensional computational mesh with rectangular finite elements. The candidates for point singularities are first localized over the mesh by using a greedy algorithm. Having the candidates for point singularities, we execute a direct solver that performs multiple refinements towards the selected point singularities and applies a parallel direct solver algorithm with logarithmic cost with respect to the refinement level. The direct solvers executed over each candidate for a point singularity return local Schur complement matrices that can be merged together and submitted to an iterative solver. In this paper we utilize a parallel multi-thread GALOIS solver as the direct solver. We use Incomplete LU Preconditioned Conjugate Gradients (ILUPCG) as the iterative solver. We also show that elimination of the point singularities from the refined mesh significantly reduces the number of iterations to be performed by the ILUPCG iterative solver.
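The iterative stage can be illustrated with a generic preconditioned conjugate gradient loop; a Jacobi preconditioner stands in here for the incomplete LU factorization the paper uses, purely to keep the sketch short. The only step that differs from plain CG is the preconditioner application z = M⁻¹r:

```python
# Hedged sketch of preconditioned CG on a small SPD system (Jacobi
# preconditioner substituted for ILU for brevity).

def pcg(A, b, tol=1e-12, maxit=100):
    matvec = lambda v: [sum(a * x for a, x in zip(row, v)) for row in A]
    precond = lambda v: [vi / A[i][i] for i, vi in enumerate(v)]  # Jacobi M^-1
    x = [0.0] * len(b)
    r = b[:]
    z = precond(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for it in range(maxit):
        Ap = matvec(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) < tol ** 2:
            return x, it + 1
        z = precond(r)
        rz, rz_old = sum(ri * zi for ri, zi in zip(r, z)), rz
        p = [zi + (rz / rz_old) * pi for zi, pi in zip(z, p)]
    return x, maxit

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x, iters = pcg(A, b)
resid = [bi - sum(a * xi for a, xi in zip(row, x)) for row, bi in zip(A, b)]
assert max(abs(ri) for ri in resid) < 1e-9
```

A stronger preconditioner (such as the ILU factorization, or the Schur-complement information coming from the direct sub-solves) plays the same role as `precond` here: it reduces the iteration count at the cost of a more expensive application.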
Improving the energy efficiency of sparse linear system solvers on multicore and manycore systems.
Anzt, H; Quintana-Ortí, E S
2014-06-28
While most recent breakthroughs in scientific research rely on complex simulations carried out in large-scale supercomputers, the power draft and energy spent for this purpose is increasingly becoming a limiting factor to this trend. In this paper, we provide an overview of the current status in energy-efficient scientific computing by reviewing different technologies used to monitor power draft as well as power- and energy-saving mechanisms available in commodity hardware. For the particular domain of sparse linear algebra, we analyse the energy efficiency of a broad collection of hardware architectures and investigate how algorithmic and implementation modifications can improve the energy performance of sparse linear system solvers, without negatively impacting their performance. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Simplified Eigen-structure decomposition solver for the simulation of two-phase flow systems
International Nuclear Information System (INIS)
Kumbaro, Anela
2012-01-01
This paper discusses the development of a new solver for a system of first-order non-linear differential equations that model the dynamics of compressible two-phase flow. The solver presents a lower-complexity alternative to Roe-type solvers because it makes use of only partial Eigen-structure information while maintaining accuracy: the outcome is hence a good complexity-tractability trade-off that is relevant in a large number of situations in the scope of two-phase flow numerical simulation. A number of numerical and physical benchmarks are presented to assess the solver. Comparison between the computational results from the simplified Eigen-structure decomposition solver and the conventional Roe-type solver gives insight into the issues of accuracy, robustness and efficiency. (authors)
Using Solver Interfaced Virtual Reality in PEACER Design Process
International Nuclear Information System (INIS)
Lee, Hyong Won; Nam, Won Chang; Jeong, Seung Ho; Hwang, Il Soon; Shin, Jong Gye; Kim, Chang Hyo
2006-01-01
The recent research progress in the area of plant design and simulation has highlighted the importance of integrating design and analysis models in a unified environment. For currently developed advanced reactors, whether for power production or research, this effort has embraced impressive state-of-the-art information and automation technology. The PEACER (Proliferation-resistant, Environment friendly, Accident-tolerant, Continual and Economical Reactor) is a conceptual fast reactor system cooled by lead-bismuth eutectic (LBE) for nuclear waste transmutation. To establish an integrated design process that couples design, analysis, and post-processing technology while minimizing the repetitive and costly manual interactions required for design changes, a solver interfaced virtual reality simulation system (SIVR) has been developed for a nuclear transmutation energy system such as PEACER. The SIVR was developed using the Virtual Reality Modeling Language (VRML) in order to interface a commercial 3D CAD tool with various engineering solvers and to implement virtual reality presentation of results in a neutral format. In this paper, we show that the SIVR approach is viable and effective in the life-cycle management of complex nuclear energy systems, including design, construction and operation. For instance, HELIOS is a down-scaled model of the PEACER prototype built to demonstrate operability and safety, and to serve as a preliminary test of PEACER PLM (Product Life-cycle Management) with SIVR (Solver Interfaced Virtual Reality) concepts. Most components were designed in CATIA, a 3D CAD tool. During construction, the CATIA 3D drawings were effective for handling and arranging the loop configuration, especially when the design changed. Above all, the transparency this system gives operators and inspectors into the design and operational status of an energy complex can help ensure accident
Application of Nearly Linear Solvers to Electric Power System Computation
Grant, Lisa L.
To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have the situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, solving these large linear systems within a reasonable time becomes increasingly computationally complex. This project expands on current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power-system-specific methods with nearly-linear run times. The work explores a new theoretical method based on ideas from graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low-stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion of how to further improve the method's speed and accuracy is included.
Computational aeroelasticity using a pressure-based solver
Kamakoti, Ramji
A computational methodology for performing fluid-structure interaction computations for three-dimensional elastic wing geometries is presented. The flow solver used is based on an unsteady Reynolds-Averaged Navier-Stokes (RANS) model. A well-validated k-ε turbulence model with wall function treatment for the near-wall region was used to perform turbulent flow calculations. Relative merits of alternative flow solvers were investigated. The predictor-corrector-based Pressure Implicit Splitting of Operators (PISO) algorithm was found to be computationally economical for unsteady flow computations. The wing structure was modeled using Bernoulli-Euler beam theory. A fully implicit time-marching scheme (using the Newmark integration method) was used to integrate the equations of motion for the structure. Bilinear interpolation and linear extrapolation techniques were used to transfer necessary information between the fluid and structure solvers. Geometry deformation was accounted for by using a moving boundary module. The moving grid capability was based on a master/slave concept and transfinite interpolation techniques. Since computations were performed on a moving mesh system, the geometric conservation law must be preserved. This is achieved by appropriately evaluating the Jacobian values associated with each cell. Accurate computation of contravariant velocities for unsteady flows using the momentum interpolation method on collocated, curvilinear grids was also addressed. Flutter computations were performed for the AGARD 445.6 wing at subsonic, transonic and supersonic Mach numbers. Unsteady computations were performed at various dynamic pressures to predict the flutter boundary. Results showed favorable agreement with experiment and previous numerical results. The computational methodology exhibited the capability to predict both qualitative and quantitative features of aeroelasticity.
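The structural side of the coupling can be sketched for a single degree of freedom: the Newmark average-acceleration scheme named in the abstract, applied to m·u″ + c·u′ + k·u = f(t). This is a minimal stand-in for the beam solver, and all numbers are illustrative.

```python
import math

def newmark_sdof(m, c, k, f, u0, v0, dt, steps, beta=0.25, gamma=0.5):
    """Implicit Newmark-beta integration of m*u'' + c*u' + k*u = f(t)
    for one degree of freedom. beta=1/4, gamma=1/2 is the
    average-acceleration variant, unconditionally stable."""
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m       # consistent initial acceleration
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt * dt)
    for n in range(steps):
        t1 = (n + 1) * dt
        rhs = (f(t1)
               + m * (u / (beta * dt * dt) + v / (beta * dt)
                      + (0.5 / beta - 1.0) * a)
               + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                      + dt * (0.5 * gamma / beta - 1.0) * a))
        u_new = rhs / k_eff
        a_new = ((u_new - u) / (beta * dt * dt) - v / (beta * dt)
                 - (0.5 / beta - 1.0) * a)
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
    return u, v

# Free vibration of an undamped oscillator with natural period 1:
# after one full period the displacement should return to ~1.0.
u_end, v_end = newmark_sdof(m=1.0, c=0.0, k=(2 * math.pi) ** 2,
                            f=lambda t: 0.0, u0=1.0, v0=0.0,
                            dt=1e-3, steps=1000)
```

The gamma=1/2 choice introduces no algorithmic damping, which is why the amplitude is preserved over the period in this check.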
Nonlinear multigrid solvers exploiting AMGe coarse spaces with approximation properties
DEFF Research Database (Denmark)
Christensen, Max la Cour; Vassilevski, Panayot S.; Villa, Umberto
2017-01-01
discretizations on general unstructured grids for a large class of nonlinear partial differential equations, including saddle point problems. The approximation properties of the coarse spaces ensure that our FAS approach for general unstructured meshes leads to optimal mesh-independent convergence rates similar ... to those achieved by geometric FAS on a nested hierarchy of refined meshes. In the numerical results, Newton's method and Picard iterations with state-of-the-art inner linear solvers are compared to our FAS algorithm for the solution of a nonlinear saddle point problem arising from porous media flow ...
Parallel implementations of 2D explicit Euler solvers
International Nuclear Information System (INIS)
Giraud, L.; Manzini, G.
1996-01-01
In this work we present a subdomain partitioning strategy applied to an explicit high-resolution Euler solver. We describe the design of a portable parallel multi-domain code suitable for parallel environments. We present several implementations on a representative range of MIMD computers, including shared memory multiprocessors, distributed virtual shared memory computers, and networks of workstations. Computational results are given to illustrate the efficiency, the scalability, and the limitations of the different approaches. We also discuss the effect of the communication protocol on the optimal domain partitioning strategy for distributed memory computers.
Algorithms for parallel flow solvers on message passing architectures
Vanderwijngaart, Rob F.
1995-01-01
The purpose of this project has been to identify and test suitable technologies for implementation of fluid flow solvers -- possibly coupled with structures and heat equation solvers -- on MIMD parallel computers. In the course of this investigation much attention has been paid to efficient domain decomposition strategies for ADI-type algorithms. Multi-partitioning derives its efficiency from the assignment of several blocks of grid points to each processor in the parallel computer. A coarse-grain parallelism is obtained, and a near-perfect load balance results. In uni-partitioning every processor receives responsibility for exactly one block of grid points instead of several. This necessitates fine-grain pipelined program execution in order to obtain a reasonable load balance. Although fine-grain parallelism is less desirable on many systems, especially high-latency networks of workstations, uni-partition methods are still in wide use in production codes for flow problems. Consequently, it remains important to achieve good efficiency with this technique, even though it has essentially been superseded by multi-partitioning for parallel ADI-type algorithms. Another reason for the concentration on improving the performance of pipeline methods is their applicability in other types of flow solver kernels with stronger implied data dependence. Analytical expressions can be derived for the size of the dynamic load imbalance incurred in traditional pipelines. From these, the optimal first-processor retardation leading to the shortest total completion time for the pipeline process can be determined. Theoretical predictions of pipeline performance with and without optimization match experimental observations on the iPSC/860 very well. Analysis of pipeline performance also highlights the effect of careless grid partitioning in flow solvers that employ pipeline algorithms. If grid blocks at boundaries are not at least as large in the wall-normal direction as those
A Parallel Algebraic Multigrid Solver on Graphics Processing Units
Haase, Gundolf
2010-01-01
The paper presents a multi-GPU implementation of the preconditioned conjugate gradient algorithm with an algebraic multigrid preconditioner (PCG-AMG) for an elliptic model problem on a 3D unstructured grid. An efficient parallel sparse matrix-vector multiplication scheme underlying the PCG-AMG algorithm is presented for the many-core GPU architecture. A performance comparison of the parallel solver shows that a single Nvidia Tesla C1060 GPU board delivers the performance of a sixteen-node Infiniband cluster, and a multi-GPU configuration with eight GPUs is about 100 times faster than a typical server CPU core. © 2010 Springer-Verlag.
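The sparse matrix-vector product at the heart of PCG-AMG can be sketched in its CSR (compressed sparse row) form; this row-wise loop is the generic kernel that GPU and multicore implementations distribute across threads, not the paper's actual CUDA code.

```python
def csr_spmv(indptr, indices, data, x):
    """y = A @ x for a matrix in CSR form: row i owns the slice
    indptr[i]:indptr[i+1] of the column-index and value arrays.
    Each row is independent, which is what makes the kernel easy
    to parallelise one row (or warp) per thread."""
    y = []
    for i in range(len(indptr) - 1):
        s = 0.0
        for j in range(indptr[i], indptr[i + 1]):
            s += data[j] * x[indices[j]]
        y.append(s)
    return y

# 3x3 example matrix: [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
indptr  = [0, 2, 5, 7]
indices = [0, 1, 0, 1, 2, 1, 2]
data    = [2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0]
y = csr_spmv(indptr, indices, data, [1.0, 1.0, 1.0])
# y == [1.0, 0.0, 1.0]
```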
Portfolio selection model with Solver
Directory of Open Access Journals (Sweden)
P. Fogués Zornoza
2012-04-01
In this paper, we present an example of linear optimization in the context of degrees in Economics or Business Administration and Management. We show techniques that enable students to dig into and investigate real problems that have been modelled on the Excel platform. The model shown here was developed by a student; it minimizes the absolute deviations from the average expected return of a portfolio of securities, using the Solver tool included in this software.
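The student's model, minimising the mean absolute deviation from the expected portfolio return, is a linear program; a minimal sketch using `scipy.optimize.linprog` in place of Excel's Solver follows. The return data are hypothetical and the no-short-selling constraint is an added assumption.

```python
from scipy.optimize import linprog

# Hypothetical periodic returns for two securities (rows = periods);
# in the Excel model this data would sit in a worksheet range.
returns = [[0.02, 0.010],
           [-0.01, 0.012],
           [0.03, 0.008],
           [0.00, 0.010]]
T, n = len(returns), len(returns[0])
means = [sum(r[i] for r in returns) / T for i in range(n)]

# Variables: n weights w_i, then T deviation bounds d_t.
# Objective: minimise the mean absolute deviation (1/T) * sum_t d_t.
c = [0.0] * n + [1.0 / T] * T

A_ub, b_ub = [], []
for t in range(T):
    dev = [returns[t][i] - means[i] for i in range(n)]
    ind = [0.0] * T
    ind[t] = -1.0
    A_ub.append(dev + ind)                  #  sum_i w_i*dev_i - d_t <= 0
    A_ub.append([-v for v in dev] + ind)    # -sum_i w_i*dev_i - d_t <= 0
    b_ub += [0.0, 0.0]

A_eq = [[1.0] * n + [0.0] * T]              # weights sum to one
# linprog's default bounds (0, None) give w >= 0 (no short selling).
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
weights = res.x[:n]
```

The two inequality rows per period encode |portfolio deviation| <= d_t, which is the standard linearisation of an absolute-value objective.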
Use of Tabu Search in a Solver to Map Complex Networks onto Emulab Testbeds
National Research Council Canada - National Science Library
MacDonald, Jason E
2007-01-01
The University of Utah's solver for the testbed mapping problem uses a simulated annealing metaheuristic algorithm to map a researcher's experimental network topology onto available testbed resources...
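A minimal sketch of the simulated-annealing loop mentioned above, applied to a toy two-host placement problem standing in for testbed mapping; the cost function, job weights, and cooling schedule are all illustrative assumptions, not Utah's assign solver.

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated-annealing loop of the kind used by mapping
    solvers: worse moves are accepted with probability exp(-delta/T)
    so the search can escape local minima while T cools geometrically."""
    rng = random.Random(seed)
    best = cur = state
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        delta = cost(cand) - cost(cur)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur = cand
            if cost(cur) < cost(best):
                best = cur
        t *= cooling
    return best

# Toy placement problem: assign 4 jobs to 2 hosts, minimising the
# load imbalance between hosts (job weights are hypothetical).
job_weights = [5, 3, 2, 4]

def cost(assign):
    load0 = sum(w for w, a in zip(job_weights, assign) if a == 0)
    return abs(load0 - (sum(job_weights) - load0))

def neighbor(assign, rng):
    i = rng.randrange(len(assign))
    flipped = list(assign)
    flipped[i] ^= 1                 # move one job to the other host
    return tuple(flipped)

best = simulated_annealing(cost, neighbor, (0, 0, 0, 0))
```

A tabu-search variant, as in the title, would instead keep a short memory of recent moves and forbid reversing them, trading the stochastic acceptance rule for deterministic diversification.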
On the implicit density based OpenFOAM solver for turbulent compressible flows
Fürst, Jiří
The contribution deals with the development of a coupled implicit density-based solver for compressible flows in the framework of the open source package OpenFOAM. Although the standard distribution of OpenFOAM contains several ready-made segregated solvers for compressible flows, the performance of those solvers is rather weak in the case of transonic flows. Therefore we extend the work of Shen [15] and develop an implicit semi-coupled solver. The main flow field variables are updated using the lower-upper symmetric Gauss-Seidel method (LU-SGS), whereas the turbulence model variables are updated using the implicit Euler method.
Impact of memory bottleneck on the performance of graphics processing units
Son, Dong Oh; Choi, Hong Jun; Kim, Jong Myon; Kim, Cheol Hong
2015-12-01
Recent graphics processing units (GPUs) can process general-purpose applications as well as graphics applications with the help of various user-friendly application programming interfaces (APIs) supported by GPU vendors. Unfortunately, utilizing the hardware resources in the GPU efficiently is a challenging problem, since the GPU architecture is totally different from the traditional CPU architecture. To address this problem, many studies have focused on techniques for improving system performance using GPUs. In this work, we analyze GPU performance while varying GPU parameters such as the number of cores and clock frequency. According to our simulations, GPU performance can be improved by 125.8% and 16.2% on average as the number of cores and clock frequency increase, respectively. However, the performance saturates when memory bottlenecks occur owing to the volume of data requests to the memory. The performance of GPUs can be improved further as the memory bottleneck is reduced by changing GPU parameters dynamically.
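The saturation effect reported above is conventionally phrased with the roofline model: attainable throughput is the minimum of the compute peak and memory bandwidth times arithmetic intensity. This is a standard analytical device, not the paper's simulator; the figures below are hypothetical.

```python
def attainable_gflops(peak_gflops, bandwidth_gbps, flops_per_byte):
    """Roofline estimate: a kernel is limited either by the compute
    peak or by how fast memory can feed it. Once memory-bound, adding
    cores or clock speed (raising the peak) stops helping -- the
    saturation behaviour the study observes."""
    return min(peak_gflops, bandwidth_gbps * flops_per_byte)

# Hypothetical GPU: 1000 GFLOP/s peak, 200 GB/s memory bandwidth.
low_intensity  = attainable_gflops(1000.0, 200.0, 0.5)  # memory-bound: 100
high_intensity = attainable_gflops(1000.0, 200.0, 8.0)  # compute-bound: 1000
```

At 0.5 FLOP/byte the kernel is pinned at 100 GFLOP/s regardless of core count, which is why dynamically rebalancing GPU parameters only pays off once the memory bottleneck is relieved.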
Chater, Nick; Christiansen, Morten H
2016-01-01
If human language must be squeezed through a narrow cognitive bottleneck, what are the implications for language processing, acquisition, change, and structure? In our target article, we suggested that the implications are far-reaching and form the basis of an integrated account of many apparently unconnected aspects of language and language processing, as well as suggesting revision of many existing theoretical accounts. With some exceptions, commentators were generally supportive both of the existence of the bottleneck and its potential implications. Many commentators suggested additional theoretical and linguistic nuances and extensions, links with prior work, and relevant computational and neuroscientific considerations; some argued for related but distinct viewpoints; a few, though, felt traditional perspectives were being abandoned too readily. Our response attempts to build on the many suggestions raised by the commentators and to engage constructively with challenges to our approach.
Understanding of empty container movement: A study on a bottleneck at an off-dock depot
Zain, Rosmaizura Mohd; Rahman, Mohd Nizam Ab; Nopiah, Zulkifli Mohd; Saibani, Nizaroyani
2014-09-01
Ports not only function as connections between marine and land transportation but also serve as core business areas. In a port terminal, available space is limited, but the influx of containers is growing. The off-dock depot is one of the key supply chain players that holds empty containers in inventory. Therefore, this paper aims to identify the main factors behind the bottlenecks, or congestion, that hinder the rapid movement of empty containers from the off-dock depot to customers. Thirty interviews were conducted with individuals who are key players in the container supply chain. The data were analyzed using Atlas.ti software and the analytic hierarchy process to rank the priority factors of the bottlenecks. Findings show that several pertinent factors act as barriers to the key players in day-to-day container movement operations. In future studies, strategies to overcome fragmentation in the container supply chain and logistics must be determined.
Moncla, Louise H; Zhong, Gongxun; Nelson, Chase W; Dinis, Jorge M; Mutschler, James; Hughes, Austin L; Watanabe, Tokiko; Kawaoka, Yoshihiro; Friedrich, Thomas C
2016-02-10
Avian influenza virus reassortants resembling the 1918 human pandemic virus can become transmissible among mammals by acquiring mutations in hemagglutinin (HA) and polymerase. Using the ferret model, we trace the evolutionary pathway by which an avian-like virus evolves the capacity for mammalian replication and airborne transmission. During initial infection, within-host HA diversity increased drastically. Then, airborne transmission fixed two polymerase mutations that do not confer a detectable replication advantage. In later transmissions, selection fixed advantageous HA1 variants. Transmission initially involved a "loose" bottleneck, which became strongly selective after additional HA mutations emerged. The stringency and evolutionary forces governing between-host bottlenecks may therefore change throughout host adaptation. Mutations occurred in multiple combinations in transmitted viruses, suggesting that mammalian transmissibility can evolve through multiple genetic pathways despite phenotypic constraints. Our data provide a glimpse into avian influenza virus adaptation in mammals, with broad implications for surveillance on potentially zoonotic viruses. Copyright © 2016 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Giovanna Aronne
2017-07-01
Long-term survival of a species relies on the maintenance of genetic variability and on natural selection through successful reproduction and generation turnover. Although basic to monitoring the conservation status of a plant species, life-history data are rarely available even for threatened species, owing to the gap between the large amount of information required and the limited time and economic resources available to gather it. Here, a focus on bottlenecks in the life cycle of rare and endangered plant species is proposed as an approach to address the challenges of feasible conservation actions. Basic considerations for this approach are: (a) all biological and ecological studies on plant species can be scientifically important, but not all of them are equally relevant to conservation planning and management requirements; (b) under a changing environment, long-term survival of a species relies on generation turnover; (c) for conservation purposes, priority should be given to studies focusing on bottlenecks in the succession of generations, because these bottlenecks prevent or slow down natural selection. The proposed procedure, named Systematic Hazard Analysis of Rare-endangered Plants (SHARP), consists of a preliminary survey of the information already available on the species and two main components. The first component is the identification of bottlenecks in the life cycle by means of field surveys. The second is the diagnosis of the causes of each bottleneck by appropriate experimental methods. The target is to provide researchers, managers and practitioners with substantiated indications for sustainable conservation measures.
Takayama, Yuki
2014-01-01
Since the seminal work of Henderson (1981), a number of studies examined the effect of staggered work hours by analyzing models of work start time choice that consider the trade-off between negative congestion externalities and positive production externalities. However, these studies described traffic congestion using flow congestion models. This study develops a model of work start time choice with bottleneck congestion and discloses the intrinsic properties of the model. To this end, this ...
Ring-like size segregation in vibrated cylinder with a bottleneck
International Nuclear Information System (INIS)
Kong Xiangzhao; Hu Maobin; Wu Qingsong; Wu Yonghong
2005-01-01
In this Letter, a ring-like segregation pattern of bi-dispersed granular material in a vibrated bottleneck-cylinder is presented. The driving frequency can greatly affect the strength and structure of the convection roll and the segregation pattern. The position and height of the ring (a cluster of big beads) can be adjusted by altering the vibration frequency. A heuristic theory is developed to interpret the dependence of the ring's position on the driving frequency.
Managing bottlenecks in manual automobile assembly systems using discrete event simulation
Directory of Open Access Journals (Sweden)
Dewa, M.
2013-08-01
Batch model lines are quite handy when the demand for each product is moderate. However, they are characterised by high work-in-progress inventories, lost production time when changing over models, and reduced flexibility when altering production rates as product demand changes. Mixed model lines, on the other hand, can offer reduced work-in-progress inventory and increased flexibility. The object of this paper is to illustrate that a manual automobile assembly system can be optimised by managing bottlenecks: ensuring high workstation utilisation, reducing queue lengths before stations, and reducing station downtime. A case study from the automobile industry is used for data collection. A model is developed using simulation software; the model is then verified and validated before a detailed bottleneck analysis is conducted. An operational strategy is then proposed for optimal bottleneck management. Although the paper focuses on improving automobile assembly systems in batch mode, the methodology can also be applied to single-model manual and automated production lines.
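A deterministic flow-line sketch of the bottleneck analysis described above: per-station utilisation over the makespan flags the bottleneck station, in front of which queues build. The commercial discrete-event model in the study is stochastic and far more detailed; the stations and service times here are hypothetical.

```python
def simulate_line(service_times, jobs):
    """Deterministic flow-line pass: each job visits the stations in
    order, and a station starts a job as soon as both the job and the
    station are free. Returns per-station utilisation over the
    makespan; the highest-utilisation station is the bottleneck."""
    n = len(service_times)
    station_free = [0.0] * n          # time at which each station goes idle
    busy = [0.0] * n                  # cumulative processing time
    for _ in range(jobs):
        t = 0.0                       # all jobs are available at time zero
        for s in range(n):
            start = max(t, station_free[s])
            t = start + service_times[s]
            busy[s] += service_times[s]
            station_free[s] = t
    makespan = max(station_free)
    return [b / makespan for b in busy]

# Hypothetical three-station assembly segment (minutes per job):
# station 1 at 8 min/job is the slowest and should dominate utilisation.
util = simulate_line([5.0, 8.0, 4.0], jobs=20)
```

Balancing the line (e.g. splitting station 1's work) would flatten the utilisation profile, which is the essence of the operational strategy the paper proposes.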
Supply chain bottlenecks in the South African construction industry: Qualitative insights
Directory of Open Access Journals (Sweden)
Poobalan Pillay
2017-07-01
Background: The construction industry in South Africa has a lot of potential but its performance is still restricted by numerous internal and external challenges. Unless these challenges are identified and understood better, further growth of this industry is likely to be hindered, which has negative economic implications for the South African economy. Objectives: This study investigated supply chain bottlenecks faced by the construction industry in South Africa. It also discussed solutions for addressing the identified bottlenecks in order to facilitate the continued development of supply chain management in the construction industry. Method: The study used a qualitative approach in which in-depth interviews were held with purposively selected senior managers drawn from the construction industry in South Africa. Content analysis using ATLAS.ti software was employed to identify the themes from the collected data. Findings: The findings of the study showed that supply chain management in the construction industry in South Africa is constrained by five major bottlenecks: skills and qualifications, procurement practices and systems, supply chain integration, supply chain relationships and the structure of the construction industry. Recommendations for addressing each of these five challenges were put forward. Conclusion: The study concludes that both awareness and application of supply chain management in the construction industry in South Africa remains inhibited, which creates opportunities for further improvements in this area to realise the full potential of the industry.
Onyiah, Pamela; Adamu, Al-Mukhtar Y; Afolabi, Rotimi F; Ajumobi, Olufemi; Ughasoro, Maduka D; Odeyinka, Oluwaseun; Nguku, Patrick; Ajayi, IkeOluwapo O
2018-05-04
We conducted a study to determine stakeholders' perspectives on the bottlenecks, concerns and needs relevant to malaria operational research (MOR) agenda setting in Nigeria. Eighty-five (37.9%) stakeholders identified lack of positive behavioural change as the major bottleneck to MOR across the malaria thematic areas, comprising malaria prevention 58.8% (50), case management 34.8% (39), and advocacy communication and social mobilisation 4.7% (4), while procurement and supply chain management (PSM) and programme management experts had the fewest responses, at 1.2% (1) each. Other bottlenecks were inadequate capacity to implement (13.8%, n = 31), inadequate funds (11.6%, n = 26), poor supply management (9.4%, n = 21), administrative bureaucracy (5.8%, n = 13), inadequacy of experts (1.3%, n = 3) and poor policy implementation (4.9%, n = 11). Of the 31 stakeholders who cited lack of capacity to execute malaria operational research, 17 (54.8%), 10 (32.3%), 3 (9.7%) and 1 (3.2%) were experts in case management, malaria prevention, surveillance, monitoring and evaluation, and PSM, respectively. Improvement in community enlightenment and awareness strategies, and active involvement of health care workers in the public and private sectors, were identified as solutions to the lack of positive behavioural change.
A recent bottleneck of Y chromosome diversity coincides with a global change in culture.
Karmin, Monika; Saag, Lauri; Vicente, Mário; Wilson Sayres, Melissa A; Järve, Mari; Talas, Ulvi Gerst; Rootsi, Siiri; Ilumäe, Anne-Mai; Mägi, Reedik; Mitt, Mario; Pagani, Luca; Puurand, Tarmo; Faltyskova, Zuzana; Clemente, Florian; Cardona, Alexia; Metspalu, Ene; Sahakyan, Hovhannes; Yunusbayev, Bayazit; Hudjashov, Georgi; DeGiorgio, Michael; Loogväli, Eva-Liis; Eichstaedt, Christina; Eelmets, Mikk; Chaubey, Gyaneshwer; Tambets, Kristiina; Litvinov, Sergei; Mormina, Maru; Xue, Yali; Ayub, Qasim; Zoraqi, Grigor; Korneliussen, Thorfinn Sand; Akhatova, Farida; Lachance, Joseph; Tishkoff, Sarah; Momynaliev, Kuvat; Ricaut, François-Xavier; Kusuma, Pradiptajati; Razafindrazaka, Harilanto; Pierron, Denis; Cox, Murray P; Sultana, Gazi Nurun Nahar; Willerslev, Rane; Muller, Craig; Westaway, Michael; Lambert, David; Skaro, Vedrana; Kovačevic, Lejla; Turdikulova, Shahlo; Dalimova, Dilbar; Khusainova, Rita; Trofimova, Natalya; Akhmetova, Vita; Khidiyatova, Irina; Lichman, Daria V; Isakova, Jainagul; Pocheshkhova, Elvira; Sabitov, Zhaxylyk; Barashkov, Nikolay A; Nymadawa, Pagbajabyn; Mihailov, Evelin; Seng, Joseph Wee Tien; Evseeva, Irina; Migliano, Andrea Bamberg; Abdullah, Syafiq; Andriadze, George; Primorac, Dragan; Atramentova, Lubov; Utevska, Olga; Yepiskoposyan, Levon; Marjanovic, Damir; Kushniarevich, Alena; Behar, Doron M; Gilissen, Christian; Vissers, Lisenka; Veltman, Joris A; Balanovska, Elena; Derenko, Miroslava; Malyarchuk, Boris; Metspalu, Andres; Fedorova, Sardana; Eriksson, Anders; Manica, Andrea; Mendez, Fernando L; Karafet, Tatiana M; Veeramah, Krishna R; Bradman, Neil; Hammer, Michael F; Osipova, Ludmila P; Balanovsky, Oleg; Khusnutdinova, Elza K; Johnsen, Knut; Remm, Maido; Thomas, Mark G; Tyler-Smith, Chris; Underhill, Peter A; Willerslev, Eske; Nielsen, Rasmus; Metspalu, Mait; Villems, Richard; Kivisild, Toomas
2015-04-01
It is commonly thought that human genetic diversity in non-African populations was shaped primarily by an out-of-Africa dispersal 50-100 thousand yr ago (kya). Here, we present a study of 456 geographically diverse high-coverage Y chromosome sequences, including 299 newly reported samples. Applying ancient DNA calibration, we date the Y-chromosomal most recent common ancestor (MRCA) in Africa at 254 (95% CI 192-307) kya and detect a cluster of major non-African founder haplogroups in a narrow time interval at 47-52 kya, consistent with a rapid initial colonization model of Eurasia and Oceania after the out-of-Africa bottleneck. In contrast to demographic reconstructions based on mtDNA, we infer a second strong bottleneck in Y-chromosome lineages dating to the last 10 ky. We hypothesize that this bottleneck is caused by cultural changes affecting variance of reproductive success among males. © 2015 Karmin et al.; Published by Cold Spring Harbor Laboratory Press.
Zhang, Weihua; Collins, Andrew; Gibson, Jane; Tapper, William J; Hunt, Sarah; Deloukas, Panos; Bentley, David R; Morton, Newton E
2004-12-28
Genetic maps in linkage disequilibrium (LD) units play the same role for association mapping as maps in centimorgans provide at much lower resolution for linkage mapping. Association mapping of genes determining disease susceptibility and other phenotypes is based on the theory of LD, here applied to relations with three phenomena. To test the theory, markers at high density along a 10-Mb continuous segment of chromosome 20q were studied in African-American, Asian, and Caucasian samples. Population structure, whether created by pooling samples from divergent populations or by the mating pattern in a mixed population, is accurately bioassayed from genotype frequencies. The effective bottleneck time for Eurasians is substantially less than for migration out of Africa, reflecting later bottlenecks. The classical dependence of allele frequency on mutation age does not hold for the generally shorter time span of inbreeding and LD. Limitation of the classical theory to mutation age justifies the assumption of constant time in a LD map, except for alleles that were rare at the effective bottleneck time or have arisen since. This assumption is derived from the Malecot model and verified in all samples. Tested measures of relative efficiency, support intervals, and localization error determine the operating characteristics of LD maps that are applicable to every sexually reproducing species, with implications for association mapping, high-resolution linkage maps, evolutionary inference, and identification of recombinogenic sequences.
Development of a Cartesian grid based CFD solver (CARBS)
International Nuclear Information System (INIS)
Vaidya, A.M.; Maheshwari, N.K.; Vijayan, P.K.
2013-12-01
Formulation for a 3D transient incompressible CFD solver is developed. The solution of variable-property, laminar/turbulent, steady/unsteady, single/multi-species, incompressible flow with heat transfer in complex geometry will be obtained. The formulation can handle a flow system in which any number of arbitrarily shaped solid and fluid regions are present. The solver is based on the use of Cartesian grids, and a method is proposed to handle complex shaped objects and boundaries on Cartesian grids. Implementation of multiple materials, different types of boundary conditions, and thermophysical properties is also considered. The proposed method is validated by solving two test cases: the first is lid-driven flow in an inclined cavity, and the second is flow over a cylinder. The first test case involves steady internal flow subject to WALL boundaries; the second involves unsteady external flow subject to INLET, OUTLET and FREE-SLIP boundary types. Both test cases involve non-orthogonal geometry. Under this wide range of conditions, the Cartesian grid based code gave results that matched well with benchmark data, and convergence characteristics were excellent. In all cases, the mass residue converged to 1E-8. Based on this, development of a 3D general purpose code based on the proposed approach can be taken up. (author)
Riemann solvers and undercompressive shocks of convex FPU chains
International Nuclear Information System (INIS)
Herrmann, Michael; Rademacher, Jens D M
2010-01-01
We consider FPU-type atomic chains with general convex potentials. The naive continuum limit in the hyperbolic space–time scaling is the p-system of mass and momentum conservation. We systematically compare Riemann solutions to the p-system with numerical solutions to discrete Riemann problems in FPU chains, and argue that the latter can be described by modified p-system Riemann solvers. We allow the flux to have a turning point, and observe a third type of elementary wave (conservative shocks) in the atomistic simulations. These waves are heteroclinic travelling waves and correspond to non-classical, undercompressive shocks of the p-system. We analyse such shocks for fluxes with one or more turning points. Depending on the convexity properties of the flux we propose FPU-Riemann solvers. Our numerical simulations confirm that Lax shocks are replaced by so-called dispersive shocks. For convex–concave flux we provide numerical evidence that convex FPU chains follow the p-system in generating conservative shocks that are supersonic. For concave–convex flux, however, the conservative shocks of the p-system are subsonic and do not appear in FPU-Riemann solutions.
CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. III. MULTIGROUP RADIATION HYDRODYNAMICS
International Nuclear Information System (INIS)
Zhang, W.; Almgren, A.; Bell, J.; Howell, L.; Burrows, A.; Dolence, J.
2013-01-01
We present a formulation for multigroup radiation hydrodynamics that is correct to order O(v/c) using the comoving-frame approach and the flux-limited diffusion approximation. We describe a numerical algorithm for solving the system, implemented in the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. In our multigroup radiation solver, the system is split into three parts: one part that couples the radiation and fluid in a hyperbolic subsystem, another part that advects the radiation in frequency space, and a parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem and the frequency space advection are solved explicitly with high-order Godunov schemes, whereas the parabolic part is solved implicitly with a first-order backward Euler method. Our multigroup radiation solver works for both neutrino and photon radiation.
Domain decomposition solvers for nonlinear multiharmonic finite element equations
Copeland, D. M.
2010-01-01
In many practical applications, for instance in computational electromagnetics, the excitation is time-harmonic. Switching from the time domain to the frequency domain allows us to replace the expensive time-integration procedure by the solution of a simple elliptic equation for the amplitude. This is true for linear problems, but not for nonlinear problems. However, due to the periodicity of the solution, we can expand the solution in a Fourier series. Truncating this Fourier series and approximating the Fourier coefficients by finite elements, we arrive at a large-scale coupled nonlinear system for determining the finite element approximation to the Fourier coefficients. The construction of fast solvers for such systems is crucial for the efficiency of this multiharmonic approach. In this paper we look at nonlinear, time-harmonic potential problems as simple model problems. We construct and analyze almost optimal solvers for the Jacobi systems arising from the Newton linearization of the large-scale coupled nonlinear system that has to be solved instead of performing the expensive time-integration procedure. © 2010 de Gruyter.
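The multiharmonic idea, representing the periodic solution by a truncated Fourier series, can be sketched numerically. The snippet below is an illustrative stand-in, not the paper's finite element method: it samples a hypothetical periodic response containing a fundamental and a third harmonic, extracts the Fourier coefficients with an FFT, and shows that truncating after a few harmonics reproduces the signal.

```python
import numpy as np

# Hypothetical periodic response with a fundamental and a third harmonic,
# as a nonlinearity under time-harmonic excitation might produce.
N = 64
t = np.arange(N) * (2 * np.pi / N)
u = np.sin(t) + 0.25 * np.sin(3 * t)

c = np.fft.rfft(u) / N          # complex Fourier coefficients
amps = 2 * np.abs(c)            # one-sided harmonic amplitudes
# Truncating the series after the third harmonic keeps the whole signal here:
u_trunc = amps[1] * np.sin(t) + amps[3] * np.sin(3 * t)
```

For a genuinely nonlinear problem the coefficients would be coupled and determined by a Newton iteration, which is where the fast solvers discussed in the abstract come in.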
Anisotropic resonator analysis using the Fourier-Bessel mode solver
Gauthier, Robert C.
2018-03-01
A numerical mode solver is developed for optical structures that conform to cylindrical symmetry, using Faraday's and Ampere's laws as starting expressions, for the case where electric or magnetic anisotropy is present. The technique builds on the existing Fourier-Bessel mode solver, which allows resonator states to be computed by exploiting the symmetry properties of the resonator and states to reduce the matrix system. The introduction of anisotropy into the theoretical framework facilitates the inclusion of PML borders, permitting the computation of open-ended structures and a better estimation of the resonator state quality factor. Matrix populating expressions are provided that can accommodate any material anisotropy with arbitrary orientation in the computation domain. Several examples of electrically anisotropic computations are provided for rotationally symmetric structures such as standard optical fibers, axial Bragg-ring fibers and bottle resonators. The anisotropy present in the materials introduces off-diagonal matrix elements in the permittivity tensor when expressed in cylindrical coordinates. The effects of the anisotropy on computed states are presented and discussed.
Application of alternating decision trees in selecting sparse linear solvers
Bhowmick, Sanjukta; Eijkhout, Victor; Freund, Yoav; Fuentes, Erika; Keyes, David E.
2010-01-01
The solution of sparse linear systems, a fundamental and resource-intensive task in scientific computing, can be approached through multiple algorithms. Using an algorithm well adapted to the characteristics of the task can significantly enhance performance, such as reducing the time required for the operation, without compromising the quality of the result. However, the best solution method can vary even across linear systems generated in the course of the same PDE-based simulation, thereby making solver selection a very challenging problem. In this paper, we use a machine learning technique, Alternating Decision Trees (ADT), to select efficient solvers based on the properties of sparse linear systems and runtime-dependent features, such as the stages of simulation. We demonstrate the effectiveness of this method through empirical results over linear systems drawn from computational fluid dynamics and magnetohydrodynamics applications. The results also demonstrate that using ADT can resolve the problem of over-fitting, which occurs when a limited amount of data is available. © 2010 Springer Science+Business Media LLC.
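The idea of learning feature-threshold tests to select a solver can be sketched with a toy example. The code below trains a single decision stump, the basic ingredient from which alternating decision trees are built, on made-up (symmetry, diagonal-dominance) features and solver labels; it illustrates the selection problem only, and is neither the ADT algorithm nor the paper's feature set.

```python
# Hypothetical training data: (symmetric? 0/1, diagonal dominance ratio) -> solver
samples = [
    ((1, 1.9), "CG"), ((1, 1.5), "CG"), ((1, 0.9), "GMRES"),
    ((0, 1.8), "GMRES"), ((0, 0.7), "GMRES"), ((1, 2.3), "CG"),
]

def train_stump(data):
    """Pick the single (feature, threshold, labels) test with fewest errors."""
    best = None
    for f in range(2):
        for thresh in sorted({x[f] for x, _ in data}):
            for lo, hi in (("CG", "GMRES"), ("GMRES", "CG")):
                errs = sum((lo if x[f] <= thresh else hi) != y for x, y in data)
                if best is None or errs < best[0]:
                    best = (errs, f, thresh, lo, hi)
    return best[1:]

f, thresh, lo, hi = train_stump(samples)

def predict(x):
    # Route a new linear system to a solver via the learned threshold test
    return lo if x[f] <= thresh else hi
```

An ADT combines many such weighted tests and sums their votes; the stump here learns that symmetry is the most discriminating single feature in this toy data.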
Parallel CFD Algorithms for Aerodynamical Flow Solvers on Unstructured Meshes. Parts 1 and 2
Barth, Timothy J.; Kwak, Dochan (Technical Monitor)
1995-01-01
The Advisory Group for Aerospace Research and Development (AGARD) has requested my participation in the lecture series entitled Parallel Computing in Computational Fluid Dynamics to be held at the von Karman Institute in Brussels, Belgium on May 15-19, 1995. In addition, a request has been made from the US Coordinator for AGARD at the Pentagon for NASA Ames to hold a repetition of the lecture series on October 16-20, 1995. I have been asked to be a local coordinator for the Ames event. All AGARD lecture series events have attendance limited to NATO allied countries. A brief of the lecture series is provided in the attached enclosure. Specifically, I have been asked to give two lectures of approximately 75 minutes each on the subject of parallel solution techniques for the fluid flow equations on unstructured meshes. The title of my lectures is "Parallel CFD Algorithms for Aerodynamical Flow Solvers on Unstructured Meshes" (Parts I-II). The contents of these lectures will be largely review in nature and will draw upon previously published work in this area. Topics of my lectures will include: (1) Mesh partitioning algorithms. Recursive techniques based on coordinate bisection, Cuthill-McKee level structures, and spectral bisection. (2) Newton's method for large scale CFD problems. Size and complexity estimates for Newton's method, modifications for ensuring global convergence. (3) Techniques for constructing the Jacobian matrix. Analytic and numerical techniques for Jacobian matrix-vector products, constructing the transposed matrix, extensions to optimization and homotopy theories. (4) Iterative solution algorithms. Practical experience with GMRES and BICG-STAB matrix solvers. (5) Parallel matrix preconditioning. Incomplete Lower-Upper (ILU) factorization, domain-decomposed ILU, approximate Schur complement strategies.
High-Performance Small-Scale Solvers for Moving Horizon Estimation
DEFF Research Database (Denmark)
Frison, Gianluca; Vukov, Milan; Poulsen, Niels Kjølstad
2015-01-01
implementation techniques focusing on small-scale problems. The proposed MHE solver is implemented using custom linear algebra routines and is compared against implementations using BLAS libraries. Additionally, the MHE solver is interfaced to a code generation tool for nonlinear model predictive control (NMPC...
T2CG1, a package of preconditioned conjugate gradient solvers for TOUGH2
International Nuclear Information System (INIS)
Moridis, G.; Pruess, K.; Antunez, E.
1994-03-01
Most of the computational work in the numerical simulation of fluid and heat flows in permeable media arises in the solution of large systems of linear equations. The simplest technique for solving such equations is by direct methods. However, because of large storage requirements and accumulation of roundoff errors, the application of direct solution techniques is limited, depending on matrix bandwidth, to systems of a few hundred to at most a few thousand simultaneous equations. T2CG1, a package of preconditioned conjugate gradient solvers, has been added to TOUGH2 to complement its direct solver and significantly increase the size of problems tractable on PCs. T2CG1 includes three different solvers: a Bi-Conjugate Gradient (BCG) solver, a Bi-Conjugate Gradient Squared (BCGS) solver, and a Generalized Minimum Residual (GMRES) solver. Results from six test problems with up to 30,000 equations show that (1) T2CG1 is significantly (and invariably) faster and requires far less memory than the MA28 direct solver, (2) it makes possible the solution of very large three-dimensional problems on PCs, and (3) the BCGS solver is the fastest of the three in the tested problems. Sample problems are presented related to heat and fluid flow at Yucca Mountain and WIPP, environmental remediation by the Thermal Enhanced Vapor Extraction System, and geothermal resources.
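The preconditioned conjugate gradient approach that such packages provide can be illustrated with a minimal sketch. The code below is a generic Jacobi-preconditioned CG iteration applied to a small symmetric positive definite test matrix; it is not the T2CG1 implementation, and the test system is made up for illustration.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-12, max_iter=200):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner."""
    x = np.zeros_like(b)
    m_inv = 1.0 / np.diag(A)          # apply the preconditioner by scaling
    r = b - A @ x
    z = m_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small symmetric positive definite stand-in for a flow-simulation matrix
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi_pcg(A, b)
```

Unlike a direct factorization, the iteration only needs matrix-vector products and a vector of diagonal entries, which is why such solvers scale to problems far beyond the reach of banded direct methods.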
Identification of severe wind conditions using a Reynolds averaged Navier-Stokes solver
DEFF Research Database (Denmark)
Sørensen, Niels N.; Bechmann, Andreas; Johansen, Jeppe
2007-01-01
The present paper describes the application of a Navier-Stokes solver to predict the presence of severe flow conditions in complex terrain, capturing conditions that may be critical to the siting of wind turbines in the terrain. First it is documented that the flow solver is capable of predicting...
Scalable Newton-Krylov solver for very large power flow problems
Idema, R.; Lahaye, D.J.P.; Vuik, C.; Van der Sluis, L.
2010-01-01
The power flow problem is generally solved by the Newton-Raphson method with a sparse direct solver for the linear system of equations in each iteration. While this works fine for small power flow problems, we will show that for very large problems the direct solver is very slow and we present
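The Newton-Raphson structure described here, one linear solve per nonlinear iteration, can be sketched on a toy system. The two equations below are a hypothetical stand-in for power-flow mismatch equations, and a dense direct solve plays the role of the sparse linear solver used per iteration.

```python
import numpy as np

# Hypothetical 2x2 nonlinear system standing in for power-flow mismatches:
#   f1(x, y) = x^2 + y^2 - 4 = 0,   f2(x, y) = x*y - 1 = 0
def f(v):
    x, y = v
    return np.array([x ** 2 + y ** 2 - 4.0, x * y - 1.0])

def jac(v):
    x, y = v
    return np.array([[2 * x, 2 * y], [y, x]])

v = np.array([2.0, 0.5])                    # initial guess
for _ in range(20):
    dv = np.linalg.solve(jac(v), -f(v))     # one linear solve per Newton step
    v += dv
    if np.linalg.norm(f(v)) < 1e-12:
        break
```

For very large networks the cost is dominated by this inner linear solve, which motivates replacing the direct solver with a preconditioned Krylov method as the abstract suggests.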
Investigation on the Use of a Multiphase Eulerian CFD solver to simulate breaking waves
DEFF Research Database (Denmark)
Tomaselli, Pietro D.; Christensen, Erik Damgaard
2015-01-01
investigation on a CFD model capable of handling this problem. The model is based on a solver, available in the open-source CFD toolkit OpenFOAM, which combines the Eulerian multi-fluid approach for dispersed flows with a numerical interface sharpening method. The solver, enhanced with additional formulations...
The SX Solver: A New Computer Program for Analyzing Solvent-Extraction Equilibria
International Nuclear Information System (INIS)
McNamara, B.K.; Rapko, B.M.; Lumetta, G.J.
1999-01-01
A new computer program, the SX Solver, has been developed to analyze solvent-extraction equilibria. The program operates out of Microsoft Excel and uses the built-in "Solver" function to minimize the sum of the squares of the residuals between measured and calculated distribution coefficients. The extraction of nitric acid by tributylphosphate has been modeled to illustrate the program's use.
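The fitting principle described here, minimising the sum of squared residuals between measured and calculated distribution coefficients, can be sketched outside of Excel. The data and the power-law extraction model below are hypothetical illustration values, not from the paper; a linearised least-squares fit recovers the assumed constants.

```python
import numpy as np

# Hypothetical distribution-coefficient data following D = K * [TBP]^n
K_true, n_true = 2.5, 2.0
tbp = np.array([0.1, 0.2, 0.4, 0.8])
D_measured = K_true * tbp ** n_true

# Linearise: log D = log K + n log[TBP]. Fitting the line by least squares
# minimises the sum of squared residuals, as a spreadsheet Solver would do
# for the nonlinear form.
n_fit, logK_fit = np.polyfit(np.log(tbp), np.log(D_measured), 1)
K_fit = float(np.exp(logK_fit))
```

With noisy data or models that cannot be linearised, one would minimise the residual sum directly with a numerical optimiser, which is the role the Excel Solver plays in the SX Solver.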
The SX Solver: A Computer Program for Analyzing Solvent-Extraction Equilibria: Version 3.0
International Nuclear Information System (INIS)
Lumetta, Gregg J.
2001-01-01
A new computer program, the SX Solver, has been developed to analyze solvent-extraction equilibria. The program operates out of Microsoft Excel and uses the built-in Solver function to minimize the sum of the squares of the residuals between measured and calculated distribution coefficients. The extraction of nitric acid by tributyl phosphate has been modeled to illustrate the program's use.
Development of axisymmetric lattice Boltzmann flux solver for complex multiphase flows
Wang, Yan; Shu, Chang; Yang, Li-Ming; Yuan, Hai-Zhuan
2018-05-01
This paper presents an axisymmetric lattice Boltzmann flux solver (LBFS) for simulating axisymmetric multiphase flows. In the solver, the two-dimensional (2D) multiphase LBFS is applied to reconstruct macroscopic fluxes excluding axisymmetric effects. Source terms accounting for axisymmetric effects are introduced directly into the governing equations. As compared to conventional axisymmetric multiphase lattice Boltzmann (LB) method, the present solver has the kinetic feature for flux evaluation and avoids complex derivations of external forcing terms. In addition, the present solver also saves considerable computational efforts in comparison with three-dimensional (3D) computations. The capability of the proposed solver in simulating complex multiphase flows is demonstrated by studying single bubble rising in a circular tube. The obtained results compare well with the published data.
Experimental validation of GADRAS's coupled neutron-photon inverse radiation transport solver
International Nuclear Information System (INIS)
Mattingly, John K.; Mitchell, Dean James; Harding, Lee T.
2010-01-01
Sandia National Laboratories has developed an inverse radiation transport solver that applies nonlinear regression to coupled neutron-photon deterministic transport models. The inverse solver uses nonlinear regression to fit a radiation transport model to gamma spectrometry and neutron multiplicity counting measurements. The subject of this paper is the experimental validation of that solver. This paper describes a series of experiments conducted with a 4.5 kg sphere of α-phase, weapons-grade plutonium. The source was measured bare and reflected by high-density polyethylene (HDPE) spherical shells with total thicknesses between 1.27 and 15.24 cm. Neutron and photon emissions from the source were measured using three instruments: a gross neutron counter, a portable neutron multiplicity counter, and a high-resolution gamma spectrometer. These measurements were used as input to the inverse radiation transport solver to evaluate the solver's ability to correctly infer the configuration of the source from its measured radiation signatures.
CVODE, Ordinary Differential Equation Solver for Stiff and Non-Stiff Problems
International Nuclear Information System (INIS)
Cohen, Scott D.; Hindmarsh, Alan C.
2001-01-01
1 - Description of program or function: CVODE is a package written in ANSI standard C for solving initial value problems for ordinary differential equations. It solves both stiff and nonstiff systems. In the stiff case, it includes a variety of options for treating the Jacobian of the system, including dense and band matrix solvers and a preconditioned Krylov (iterative) solver. 2 - Method of solution: Integration is by Adams or BDF (Backward Differentiation Formula) methods, at user option. Corrector iteration is by functional iteration or Newton iteration. For the solution of linear systems within Newton iteration, users can select a dense solver, a band solver, a diagonal approximation, or a preconditioned Generalized Minimal Residual (GMRES) solver. In the dense and band cases, the user can supply a Jacobian approximation or let CVODE generate it internally. In the GMRES case, the preconditioner is user-supplied.
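The reason a package like this offers implicit BDF methods for stiff systems can be shown with the simplest member of the family, backward Euler, on the scalar test equation. This is an illustration of stiff-versus-explicit behavior only, not CVODE's algorithm.

```python
# Scalar stiff test problem y' = lam * y, lam = -1000 (exact solution decays).
lam, h, steps = -1000.0, 0.01, 100      # h*|lam| = 10: far outside explicit stability
y_be = y_fe = 1.0
for _ in range(steps):
    y_be = y_be / (1.0 - h * lam)       # backward Euler (implicit BDF1): stable
    y_fe = y_fe * (1.0 + h * lam)       # forward Euler (explicit): diverges here
```

The implicit update stays bounded at any step size, at the cost of solving an (in general nonlinear) equation per step, which is exactly where the Newton iteration and the dense/band/GMRES linear solvers in the abstract enter.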
Minos: a SPN solver for core calculation in the DESCARTES system
International Nuclear Information System (INIS)
Baudron, A.M.; Lautard, J.J.
2005-01-01
This paper describes a new development of a neutronic core solver done in the context of a new generation neutronic reactor computational system, named DESCARTES. For performance reasons, the numerical method of the existing MINOS solver in the SAPHYR system has been reused in the new system. It is based on the mixed dual finite element approximation of the simplified transport equation. The solver takes into account assembly discontinuity coefficients (ADF) in the simplified transport equation (SPN) context. The solver has been rewritten in the C++ programming language using an object oriented design. Its general architecture was reconsidered in order to improve its capability of evolution and its maintainability. Moreover, the performance of the old version has been improved, mainly regarding the matrix construction time; this significantly improves the performance of the solver in the context of industrial applications requiring thermal hydraulic feedback and depletion calculations. (authors)
Fast Multipole-Based Preconditioner for Sparse Iterative Solvers
Ibeid, Huda; Yokota, Rio; Keyes, David E.
2014-01-01
Among optimal hierarchical algorithms for the computational solution of elliptic problems, the Fast Multipole Method (FMM) stands out for its adaptability to emerging architectures, having high arithmetic intensity, tunable accuracy, and relaxed global synchronization requirements. We demonstrate that, beyond its traditional use as a solver in problems for which explicit free-space kernel representations are available, the FMM has applicability as a preconditioner in finite domain elliptic boundary value problems, by equipping it with boundary integral capability for finite boundaries and by wrapping it in a Krylov method for extensibility to more general operators. Compared with multilevel methods, it is capable of comparable algebraic convergence rates down to the truncation error of the discretized PDE, and it has superior multicore and distributed memory scalability properties on commodity architecture supercomputers.
Workload Characterization of CFD Applications Using Partial Differential Equation Solvers
Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Workload characterization is used for modeling and evaluation of computing systems at different levels of detail. We present workload characterization for a class of Computational Fluid Dynamics (CFD) applications that solve Partial Differential Equations (PDEs). This workload characterization focuses on three high performance computing platforms: SGI Origin2000, IBM SP-2, and a cluster of Intel Pentium Pro based PCs. We execute extensive measurement-based experiments on these platforms to gather statistics of system resource usage, which result in workload characterization. Our workload characterization approach yields a coarse-grain resource utilization behavior that is being applied for performance modeling and evaluation of distributed high performance metacomputing systems. In addition, this study enhances our understanding of interactions between PDE solver workloads and high performance computing platforms and is useful for tuning these applications.
POSSOL, 2-D Poisson Equation Solver for Nonuniform Grid
International Nuclear Information System (INIS)
Orvis, W.J.
1988-01-01
1 - Description of program or function: POSSOL is a two-dimensional Poisson equation solver for problems with arbitrary non-uniform gridding in Cartesian coordinates. It is an adaptation of the uniform grid PWSCRT routine developed by Swarztrauber and Sweet at the National Center for Atmospheric Research (NCAR). 2 - Method of solution: POSSOL will solve the Helmholtz equation on an arbitrary, non-uniform grid on a rectangular domain allowing only one type of boundary condition on any one side. It can also be used to handle more than one type of boundary condition on a side by means of a capacitance matrix technique. There are three types of boundary conditions that can be applied: fixed, derivative, or periodic.
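The kind of non-uniform grid discretisation such a solver relies on can be sketched in one dimension. The snippet below assembles the standard 3-point Laplacian on an arbitrarily spaced grid with fixed (Dirichlet) boundaries and solves a Poisson problem whose exact solution is quadratic, for which this stencil happens to be exact; it is an illustration only, not POSSOL's 2-D capacitance matrix machinery.

```python
import numpy as np

# Solve -u'' = 2 on (0, 1) with u(0) = u(1) = 0; exact solution u = x(1 - x).
rng = np.random.default_rng(0)
x = np.sort(np.concatenate([[0.0, 1.0], rng.uniform(0.0, 1.0, 7)]))
n = len(x)
A = np.zeros((n, n))
rhs = np.full(n, 2.0)
A[0, 0] = A[-1, -1] = 1.0
rhs[0] = rhs[-1] = 0.0                      # fixed (Dirichlet) boundary rows
for i in range(1, n - 1):
    hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
    # 3-point Laplacian on a non-uniform grid (exact for quadratic u)
    A[i, i - 1] = -2.0 / (hl * (hl + hr))
    A[i, i] = 2.0 / (hl * hr)
    A[i, i + 1] = -2.0 / (hr * (hl + hr))
u = np.linalg.solve(A, rhs)
```

In 2-D the same stencil applied along each coordinate yields the five-point system that POSSOL solves with fast direct methods.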
Extending the QUDA Library with the eigCG Solver
Energy Technology Data Exchange (ETDEWEB)
Strelchenko, Alexei [Fermilab; Stathopoulos, Andreas [William-Mary Coll.
2014-12-12
While the incremental eigCG algorithm [1] is included in many LQCD software packages, its realization on GPU micro-architectures was still missing. In this session we report our experience with the eigCG implementation in the QUDA library. In particular, we will focus on how to employ the mixed precision technique to accelerate solutions of large sparse linear systems with multiple right-hand sides on GPUs. Although the application of mixed precision techniques is a well-known optimization approach for linear solvers, its utilization for the eigenvector computation within eigCG requires special consideration. We will discuss implementation aspects of the mixed precision deflation and illustrate its numerical behavior on the example of the Wilson twisted mass fermion matrix inversions.
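The mixed precision technique referred to here can be sketched as classic iterative refinement: solve in single precision, then correct with residuals accumulated in double precision. This generic snippet illustrates only the precision split, on a made-up dense test matrix; it is not the eigCG or QUDA implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

# Mixed precision iterative refinement: inner solves in float32,
# residuals accumulated in float64.
A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
for _ in range(3):
    r = b - A @ x                                  # double-precision residual
    x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
```

The cheap single-precision arithmetic does the bulk of the work while the double-precision correction restores accuracy, which is the same trade that makes the approach attractive on GPUs, where single precision throughput is much higher.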
Domain Decomposition Solvers for Frequency-Domain Finite Element Equations
Copeland, Dylan; Kolmbauer, Michael; Langer, Ulrich
2010-01-01
The paper is devoted to fast iterative solvers for frequency-domain finite element equations approximating linear and nonlinear parabolic initial boundary value problems with time-harmonic excitations. Switching from the time domain to the frequency domain allows us to replace the expensive time-integration procedure by the solution of a simple linear elliptic system for the amplitudes belonging to the sine- and to the cosine-excitation or a large nonlinear elliptic system for the Fourier coefficients in the linear and nonlinear case, respectively. The fast solution of the corresponding linear and nonlinear system of finite element equations is crucial for the competitiveness of this method. © 2011 Springer-Verlag Berlin Heidelberg.
Diffusion of Zonal Variables Using Node-Centered Diffusion Solver
Energy Technology Data Exchange (ETDEWEB)
Yang, T B
2007-08-06
Tom Kaiser [1] has done some preliminary work to use the node-centered diffusion solver (originally developed by T. Palmer [2]) in Kull for diffusion of zonal variables such as electron temperature. To avoid numerical diffusion, Tom used a scheme developed by Shestakov et al. [3] and found that their scheme could, in the vicinity of steep gradients, decouple nearest-neighbor zonal sub-meshes, leading to 'alternating-zone' (red-black mode) errors. Tom extended their scheme to couple the sub-meshes with appropriately chosen artificial diffusion and thereby solved the 'alternating-zone' problem. Because the choice of the artificial diffusion coefficient can be very delicate, it is desirable to use a scheme that does not require the artificial diffusion but is still able to avoid both numerical diffusion and the 'alternating-zone' problem. In this document we present such a scheme.
A high order solver for the unbounded Poisson equation
DEFF Research Database (Denmark)
Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe
2013-01-01
A high order converging Poisson solver is presented, based on the Green's function solution to Poisson's equation subject to free-space boundary conditions. The high order convergence is achieved by formulating regularised integration kernels, analogous to a smoothing of the solution field. The method is extended to directly solve the derivatives of the solution to Poisson's equation. In this way differential operators such as the divergence or curl of the solution field can be solved to the same high order convergence without additional computational effort. The method is applied and validated, however not restricted, to the equations of fluid mechanics, and can be used in many applications to solve Poisson's equation on a rectangular unbounded domain.
A General Symbolic PDE Solver Generator: Beyond Explicit Schemes
Directory of Open Access Journals (Sweden)
K. Sheshadri
2003-01-01
This paper presents an extension of our Mathematica- and MathCode-based symbolic-numeric framework for solving a variety of partial differential equation (PDE) problems. The main features of our earlier work, which implemented explicit finite-difference schemes, include the ability to handle (1) an arbitrary number of dependent variables, (2) arbitrary dimensionality, and (3) arbitrary geometry, as well as (4) the development of finite-difference schemes to any desired order of approximation. In the present paper, extensions of this framework to implicit schemes and the method of lines are discussed. While C++ code is generated, using the MathCode system, for the implicit method, Modelica code is generated for the method of lines. The latter provides preliminary PDE support for the Modelica language. Examples illustrating the various aspects of the solver generator are presented.
GPU accelerated FDTD solver and its application in MRI.
Chi, J; Liu, F; Jin, J; Mason, D G; Crozier, S
2010-01-01
The finite difference time domain (FDTD) method is a popular technique for computational electromagnetics (CEM). The large computational power often required, however, has been a limiting factor for its applications. In this paper, we present a graphics processing unit (GPU)-based parallel FDTD solver and its successful application to the investigation of a novel B1 shimming scheme for high-field magnetic resonance imaging (MRI). The optimized shimming scheme exhibits considerably improved transmit B1 profiles. The GPU implementation dramatically shortened the runtime of FDTD simulation of the electromagnetic field compared with its CPU counterpart. The acceleration in runtime has made such investigation possible, and will pave the way for other studies of large-scale computational electromagnetic problems in modern MRI which were previously impractical.
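The FDTD update at the heart of such a solver is simple to state in one dimension. The snippet below runs a normalised 1-D Yee leapfrog update with a soft Gaussian source; the grid size, step count and source parameters are arbitrary illustration values, and a real MRI simulation would be 3-D with material coefficients and run on the GPU.

```python
import numpy as np

# Normalised 1-D Yee scheme with Courant number 1 (dt = dx, unit coefficients).
n_cells, n_steps = 200, 80
ez = np.zeros(n_cells)          # E field on integer grid points
hy = np.zeros(n_cells - 1)      # H field on half grid points
for t in range(n_steps):
    hy += ez[1:] - ez[:-1]                                # H update from curl of E
    ez[1:-1] += hy[1:] - hy[:-1]                          # E update from curl of H
    ez[n_cells // 2] += np.exp(-((t - 30) / 10.0) ** 2)   # soft Gaussian source
```

Every array update is embarrassingly parallel across grid points, which is precisely why FDTD maps so well onto GPU hardware.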
Visualising magnetic fields numerical equation solvers in action
Beeteson, John Stuart
2001-01-01
Visualizing Magnetic Fields: Numerical Equation Solvers in Action provides a complete description of the theory behind a new technique, a detailed discussion of the ways of solving the equations (including a software visualization of the solution algorithms), the application software itself, and the full source code. Most importantly, there is a succinct, easy-to-follow description of each procedure in the code. The physicist Michael Faraday said that the study of magnetic lines of force was greatly influential in leading him to formulate many of those concepts that are now so fundamental to our modern world, proving to him their "great utility as well as fertility." Michael Faraday could only visualize these lines in his mind's eye and, even with modern computers to help us, it has been very expensive and time-consuming to plot lines of force in magnetic fields.
Directory of Open Access Journals (Sweden)
Ruth McNerney
2017-03-01
Full Text Available Whole genome sequencing (WGS can provide a comprehensive analysis of Mycobacterium tuberculosis mutations that cause resistance to anti-tuberculosis drugs. With the deployment of bench-top sequencers and rapid analytical software, WGS is poised to become a useful tool to guide treatment. However, direct sequencing from clinical specimens to provide a full drug resistance profile remains a serious challenge. This article reviews current practices for extracting M. tuberculosis DNA and possible solutions for sampling sputum. Techniques under consideration include enzymatic digestion, physical disruption, chemical degradation, detergent solubilization, solvent extraction, ligand-coated magnetic beads, silica columns, and oligonucleotide pull-down baits. Selective amplification of genomic bacterial DNA in sputum prior to WGS may provide a solution, and differential lysis to reduce the levels of contaminating human DNA is also being explored. To remove this bottleneck and accelerate access to WGS for patients with suspected drug-resistant tuberculosis, it is suggested that a coordinated and collaborative approach be taken to more rapidly optimize, compare, and validate methodologies for sequencing from patient samples.
Incompressible SPH (ISPH) with fast Poisson solver on a GPU
Chow, Alex D.; Rogers, Benedict D.; Lind, Steven J.; Stansby, Peter K.
2018-05-01
This paper presents a fast incompressible SPH (ISPH) solver implemented to run entirely on a graphics processing unit (GPU) capable of simulating several millions of particles in three dimensions on a single GPU. The ISPH algorithm is implemented by converting the highly optimised open-source weakly-compressible SPH (WCSPH) code DualSPHysics to run ISPH on the GPU, combining it with the open-source linear algebra library ViennaCL for fast solutions of the pressure Poisson equation (PPE). Several challenges are addressed with this research: constructing a PPE matrix every timestep on the GPU for moving particles, optimising the limited GPU memory, and exploiting fast matrix solvers. The ISPH pressure projection algorithm is implemented as four separate stages, each with a particle sweep, including an algorithm for the population of the PPE matrix suitable for the GPU, and mixed precision storage methods. An accurate and robust ISPH boundary condition ideal for parallel processing is also established by adapting an existing WCSPH boundary condition for ISPH. A variety of validation cases are presented: an impulsively started plate, incompressible flow around a moving square in a box, and dambreaks (2-D and 3-D) which demonstrate the accuracy, flexibility, and speed of the methodology. Fragmentation of the free surface is shown to influence the performance of matrix preconditioners and therefore the PPE matrix solution time. The Jacobi preconditioner demonstrates robustness and reliability in the presence of fragmented flows. For a dambreak simulation, the GPU achieves speed-ups of 10-18 times over single-threaded and 1.1-4.5 times over 16-threaded CPU run times.
Domain decomposed preconditioners with Krylov subspace methods as subdomain solvers
Energy Technology Data Exchange (ETDEWEB)
Pernice, M. [Univ. of Utah, Salt Lake City, UT (United States)
1994-12-31
Domain decomposed preconditioners for nonsymmetric partial differential equations typically require the solution of problems on the subdomains. Most implementations employ exact solvers to obtain these solutions. Consequently work and storage requirements for the subdomain problems grow rapidly with the size of the subdomain problems. Subdomain solves constitute the single largest computational cost of a domain decomposed preconditioner, and improving the efficiency of this phase of the computation will have a significant impact on the performance of the overall method. The small local memory available on the nodes of most message-passing multicomputers motivates consideration of the use of an iterative method for solving subdomain problems. For large-scale systems of equations that are derived from three-dimensional problems, memory considerations alone may dictate the need for using iterative methods for the subdomain problems. In addition to reduced storage requirements, use of an iterative solver on the subdomains allows flexibility in specifying the accuracy of the subdomain solutions. Substantial savings in solution time are possible if the quality of the domain decomposed preconditioner is not degraded too much by relaxing the accuracy of the subdomain solutions. While some work in this direction has been conducted for symmetric problems, similar studies for nonsymmetric problems appear not to have been pursued. This work represents a first step in this direction, and explores the effectiveness of performing subdomain solves using several transpose-free Krylov subspace methods, GMRES, transpose-free QMR, CGS, and a smoothed version of CGS. Depending on the difficulty of the subdomain problem and the convergence tolerance used, a reduction in solution time is possible in addition to the reduced memory requirements. The domain decomposed preconditioner is a Schur complement method in which the interface operators are approximated using interface probing.
Ju, Feng; Lee, Hyo Kyung; Yu, Xinhua; Faris, Nicholas R; Rugless, Fedoria; Jiang, Shan; Li, Jingshan; Osarogiagbon, Raymond U
2017-12-01
The process of lung cancer care from initial lesion detection to treatment is complex, involving multiple steps, each introducing the potential for substantial delays. Identifying the steps with the greatest delays enables a focused effort to improve the timeliness of care delivery, without sacrificing quality. We retrospectively reviewed clinical events from initial detection, through histologic diagnosis, radiologic and invasive staging, and medical clearance, to surgery for all patients who had an attempted resection of a suspected lung cancer in a community healthcare system. We used a computer process modeling approach to evaluate delays in care delivery, in order to identify potential 'bottlenecks' in waiting time, the reduction of which could produce greater care efficiency. We also conducted 'what-if' analyses to predict the relative impact of simulated changes in the care delivery process to determine the most efficient pathways to surgery. The waiting time between radiologic lesion detection and diagnostic biopsy, and the waiting time from radiologic staging to surgery, were the two most critical bottlenecks impeding efficient care delivery (more than 3 times larger than the other waiting times). Additionally, instituting surgical consultation prior to cardiac consultation for medical clearance, and decreasing the waiting time between CT scans and diagnostic biopsies, were potentially the most impactful measures to reduce care delays before surgery. Rigorous computer simulation modeling, using clinical data, can provide useful information to identify areas for improving the efficiency of care delivery by process engineering, for patients who receive surgery for lung cancer.
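The 'what-if' logic of such a study can be sketched as a toy Monte-Carlo model: sample stage waits, then halve each stage's wait in turn and rank the resulting savings. Stage names and mean waits below are invented for illustration, not taken from the study's clinical data.

```python
import random
random.seed(42)

# Hypothetical care-pathway stages and mean waits in days (invented values)
STAGES = {
    "detection_to_biopsy": 25.0,
    "biopsy_to_staging": 10.0,
    "staging_to_clearance": 8.0,
    "clearance_to_surgery": 18.0,
}

def simulate_total_wait(mean_waits, n_patients=10_000):
    """Monte-Carlo mean time-to-surgery with exponential stage waits."""
    totals = [
        sum(random.expovariate(1.0 / m) for m in mean_waits.values())
        for _ in range(n_patients)
    ]
    return sum(totals) / len(totals)

baseline = simulate_total_wait(STAGES)

# 'What-if' analysis: halve each stage's mean wait in turn, rank the impact
impact = {}
for stage in STAGES:
    modified = dict(STAGES)
    modified[stage] *= 0.5
    impact[stage] = baseline - simulate_total_wait(modified)

bottleneck = max(impact, key=impact.get)
print(bottleneck)  # the stage whose reduction saves the most time overall
```

With exponential waits the biggest saving simply tracks the largest mean, but the same harness extends directly to resource contention and queueing effects, which is where simulation earns its keep over arithmetic.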
Mitochondrial DNA sequence characteristics modulate the size of the genetic bottleneck.
Wilson, Ian J; Carling, Phillipa J; Alston, Charlotte L; Floros, Vasileios I; Pyle, Angela; Hudson, Gavin; Sallevelt, Suzanne C E H; Lamperti, Costanza; Carelli, Valerio; Bindoff, Laurence A; Samuels, David C; Wonnapinij, Passorn; Zeviani, Massimo; Taylor, Robert W; Smeets, Hubert J M; Horvath, Rita; Chinnery, Patrick F
2016-03-01
With a combined carrier frequency of 1:200, heteroplasmic mitochondrial DNA (mtDNA) mutations cause human disease in ∼1:5000 of the population. Rapid shifts in the level of heteroplasmy seen within a single generation contribute to the wide range in the severity of clinical phenotypes seen in families transmitting mtDNA disease, consistent with a genetic bottleneck during transmission. Although preliminary evidence from human pedigrees points towards a random drift process underlying the shifting heteroplasmy, some reports describe differences in segregation pattern between different mtDNA mutations. However, based on limited observations and with no direct comparisons, it is not clear whether these observations simply reflect pedigree ascertainment and publication bias. To address this issue, we studied 577 mother-child pairs transmitting the m.11778G>A, m.3460G>A, m.8344A>G, m.8993T>G/C and m.3243A>G mtDNA mutations. Our analysis controlled for inter-assay differences, inter-laboratory variation and ascertainment bias. We found no evidence of selection during transmission but show that different mtDNA mutations segregate at different rates in human pedigrees. m.8993T>G/C segregated significantly faster than m.11778G>A, m.8344A>G and m.3243A>G, consistent with a tighter mtDNA genetic bottleneck in m.8993T>G/C pedigrees. Our observations support the existence of different genetic bottlenecks primarily determined by the underlying mtDNA mutation, explaining the different inheritance patterns observed in human pedigrees transmitting pathogenic mtDNA mutations. © The Author 2016. Published by Oxford University Press.
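The 'tightness' of a transmission bottleneck can be illustrated with a toy binomial sampling model: the fewer segregating mtDNA units, the larger the generation-to-generation variance in heteroplasmy. The unit counts and selection-free setting below are purely illustrative, not parameter estimates from the study.

```python
import random
random.seed(1)

def transmit(p_mother, n_units, n_offspring=5000):
    """Offspring heteroplasmy after a single binomial bottleneck of
    n_units segregating mtDNA units (a crude Wright-Fisher-style sketch)."""
    return [sum(random.random() < p_mother for _ in range(n_units)) / n_units
            for _ in range(n_offspring)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

p0 = 0.3           # maternal heteroplasmy level
norm_var = {}
for n_units in (10, 100):   # tight vs loose bottleneck
    offspring = transmit(p0, n_units)
    # normalised heteroplasmy variance; expectation is roughly 1/n_units
    norm_var[n_units] = variance(offspring) / (p0 * (1 - p0))
print(norm_var)
```

The normalised variance is the quantity such pedigree studies estimate: faster segregation (as reported for m.8993T>G/C) corresponds to a larger normalised variance and hence a smaller effective number of segregating units.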
High and distinct range-edge genetic diversity despite local bottlenecks.
Directory of Open Access Journals (Sweden)
Jorge Assis
The genetic consequences of living on the edge of distributional ranges have been the subject of a largely unresolved debate. Populations occurring along persistent low latitude ranges (rear-edge) are expected to retain high and unique genetic diversity. In contrast, currently less favourable environmental conditions limiting population size at such range-edges may have caused genetic erosion that prevails over past historical effects, with potential consequences on reducing future adaptive capacity. The present study provides an empirical test of whether population declines towards a peripheral range might be reflected in decreasing diversity and increasing population isolation and differentiation. We compare population genetic differentiation and diversity with trends in abundance along a latitudinal gradient towards the peripheral distribution range of Saccorhiza polyschides, a large brown seaweed that is the main structural species of kelp forests in SW Europe. Signatures of recent bottleneck events were also evaluated to determine whether the recently recorded distributional shifts had a negative influence on effective population size. Our findings show decreasing population density and increasing spatial fragmentation and local extinctions towards the southern edge. Genetic data revealed two well supported groups with a central contact zone. As predicted, higher differentiation and signs of bottlenecks were found at the southern edge region. However, a decrease in genetic diversity associated with this pattern was not verified. Surprisingly, genetic diversity increased towards the edge despite bottlenecks and much lower densities, suggesting that extinctions and recolonizations have not strongly reduced diversity or that diversity might have been even higher there in the past, a process of shifting genetic baselines.
Laboratory colonisation and genetic bottlenecks in the tsetse fly Glossina pallidipes.
Directory of Open Access Journals (Sweden)
Marc Ciosi
2014-02-01
The IAEA colony is the only one available for mass rearing of Glossina pallidipes, a vector of human and animal African trypanosomiasis in eastern Africa. This colony is the source for Sterile Insect Technique (SIT) programs in East Africa. The source population of this colony is unclear and its genetic diversity has not previously been evaluated and compared to field populations. We examined the genetic variation within and between the IAEA colony and its potential source populations in north Zimbabwe and the Kenya/Uganda border at 9 microsatellite loci to retrace the demographic history of the IAEA colony. We performed classical population genetics analyses and also combined historical and genetic data in a quantitative analysis using Approximate Bayesian Computation (ABC). There is no evidence of introgression from the north Zimbabwean population into the IAEA colony. Moreover, the ABC analyses revealed that the foundation and establishment of the colony was associated with a genetic bottleneck that has resulted in a loss of 35.7% of alleles and 54% of expected heterozygosity compared to its source population. Also, we show that tsetse control carried out in the 1990s likely reduced the effective population size of the Kenya/Uganda border population. All the analyses indicate that the area of origin of the IAEA colony is the Kenya/Uganda border and that a genetic bottleneck was associated with the foundation and establishment of the colony. Genetic diversity associated with traits that are important for SIT may potentially have been lost during this genetic bottleneck, which could lead to a suboptimal competitiveness of the colony males in the field. The genetic diversity of the colony is lower than that of field populations, and so studies using colony flies should be interpreted with caution when drawing general conclusions about G. pallidipes biology.
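A loss of expected heterozygosity of this magnitude is what classical drift theory predicts for a small founding population held at low effective size. A sketch of the standard formula, with an effective size and generation count that are invented here, chosen only to show a loss of the same order as the 54% reported:

```python
def expected_heterozygosity(h0, n_e, generations):
    """Neutral drift expectation: H_t = H_0 * (1 - 1/(2*N_e)) ** t."""
    return h0 * (1.0 - 1.0 / (2.0 * n_e)) ** generations

# Invented numbers: a founding bottleneck of N_e = 20 held for 30 generations
h0 = 0.60
h30 = expected_heterozygosity(h0, n_e=20, generations=30)
loss = 1.0 - h30 / h0
print(round(loss, 3))  # ≈ 0.53, i.e. roughly half the heterozygosity lost
```

Inverting the same formula against an observed loss is one simple way to bound the effective size of a colony's founding bottleneck.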
Rupani, Mihir Prafulbhai; Gaonkar, Narayan T; Bhatt, Gneyaa S
2016-10-01
In spite of continued efforts, India is still lagging behind in achieving its Millennium Development Goal (MDG) targets. The objectives of this study were to identify stakeholders who have a role to play in childhood diarrhea management, to identify gaps in childhood diarrhea management, and to propose strategic options for relieving these gaps. A bottleneck analysis exercise was carried out based on the Tanahashi model in six High Priority Districts (HPDs) of Gujarat between July and November 2013. The major bottlenecks identified for childhood diarrhea management were poor demand generation, unsafe drinking water, poor access to improved sanitation facilities, and lack of equitable distribution and replenishment mechanisms for Oral Rehydration Solution (ORS) packets and Zinc tablets down to the front-line worker level. The main strategic options suggested for relieving these bottlenecks were: Zinc-ORS roll-out in scale-up districts; development of an Information Education Communication/Behaviour Change Communication (IEC/BCC) plan for childhood diarrhea management at state/district level; use of Drug Logistics Information Management System (DLIMS) software for supply chain management of Zinc-ORS; strengthening of chlorination activity at household level; monitoring implementation of Nirmal Bharat Abhiyaan (NBA) for constructing improved sanitation facilities at household level; and development of an IEC/BCC plan for hygiene promotion and usage of sanitary latrines. Use of Zinc tablets needs to be intensified through an effective scale-up. Adequate demand generation activity is needed. There is a need to address safe drinking water and improved sanitation measures at household level. Multi-sectoral engagement and ownership of the Zinc-ORS program are the need of the hour. Copyright © 2016 Elsevier Ltd. All rights reserved.
Supply Chain Management in The Brazilian Automobile Industry: Bottlenecks for Steadier Growth
Directory of Open Access Journals (Sweden)
W. F. Sorte Junior
2011-06-01
Taking the Lean Production System as the reference model, this paper analyses the supply chain management approach and the relationship between private and public sectors in the Brazilian automobile industry. Through a case study conducted from October 2006 to October 2008 in a privately owned automaker, two bottlenecks in this Brazilian industrial sector are identified: (1) emphasis on coordination rather than integration in supply chain management; and (2) insufficient channels of communication between private and public sectors, resulting in inefficient policies to nurture automakers with low production volume.
2016-05-03
[Extraction fragment from a paper by František Grézl and Martin Karafiát on multilingual stacked bottleneck (SBN) neural networks: a per-language data table (hours, language-model sentences and words, dictionary size, number of tied states for the HA, LA and ZU sets) and partial sentences noting that, with a (large) hidden layer between the bottleneck and output layers, the monolingual SBN hierarchy with the desired DNN topology is obtained in both cases, and that lower WER was achieved by the first variant, which uses the tied states.]
Steiner tree heuristic in the Euclidean d-space using bottleneck distances
DEFF Research Database (Denmark)
Lorenzen, Stephan Sloth; Winter, Pawel
2016-01-01
Some of the most efficient heuristics for the Euclidean Steiner minimal tree problem in the d-dimensional space, d ≥ 2, use Delaunay tessellations and minimum spanning trees to determine small subsets of geometrically close terminals. Their low-cost Steiner trees are determined and concatenated in a greedy fashion to obtain a low-cost tree spanning all terminals. The weakness of this approach is that obtained solutions are topologically related to minimum spanning trees. To avoid this and to obtain even better solutions, bottleneck distances are utilized to determine good subsets of terminals…
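The bottleneck distance used by such heuristics — the minimax edge weight over all paths between two terminals — can be read directly off the minimum spanning tree, since the MST path between two nodes minimizes the maximum edge weight. A small plain-Python sketch on an illustrative point set:

```python
import math

def mst_edges(points):
    """Prim's algorithm on the complete Euclidean graph; returns tree edges."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    in_tree = {0}
    edges = []
    best = {i: (dist(0, i), 0) for i in range(1, n)}
    while len(in_tree) < n:
        j = min(best, key=lambda k: best[k][0])
        d, parent = best.pop(j)
        in_tree.add(j)
        edges.append((parent, j, d))
        for k in best:
            dk = dist(j, k)
            if dk < best[k][0]:
                best[k] = (dk, j)
    return edges

def bottleneck_distance(edges, n, s, t):
    """Max edge weight on the unique MST path s..t, which equals the
    minimax 'bottleneck distance' over all s-t paths."""
    adj = {i: [] for i in range(n)}
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    stack = [(s, -1, 0.0)]
    while stack:
        node, parent, mx = stack.pop()
        if node == t:
            return mx
        for nxt, w in adj[node]:
            if nxt != parent:
                stack.append((nxt, node, max(mx, w)))

points = [(0, 0), (1, 0), (2, 0), (10, 0)]   # one widely separated terminal
edges = mst_edges(points)
print(bottleneck_distance(edges, len(points), 0, 3))  # the long bridge edge
```

Precomputing all-pairs bottleneck distances on the MST is what lets the heuristic cheaply judge whether a candidate subset of terminals is worth a Steiner topology of its own.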
Curvature and bottlenecks control molecular transport in inverse bicontinuous cubic phases
Assenza, Salvatore; Mezzenga, Raffaele
2018-02-01
We perform a simulation study of the diffusion of small solutes in the confined domains imposed by inverse bicontinuous cubic phases for the primitive, diamond, and gyroid symmetries common to many lipid/water mesophase systems employed in experiments. For large diffusing domains, the long-time diffusion coefficient shows universal features when the size of the confining domain is renormalized by the Gaussian curvature of the triply periodic minimal surface. When bottlenecks are widely present, they become the most relevant factor for transport, regardless of the connectivity of the cubic phase.
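The qualitative effect of bottlenecks on transport can be reproduced with a toy 1-D random walk in which periodically spaced bonds are crossed only with some probability; the mean squared displacement then falls below the free-diffusion value. The geometry and probabilities here are invented and bear no quantitative relation to the cubic-phase simulations.

```python
import random
random.seed(7)

def mean_squared_displacement(n_walkers, n_steps, pass_prob):
    """1-D lattice random walk where the bond below every 10th site is a
    'bottleneck' crossed only with probability pass_prob (toy model)."""
    total = 0
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            nxt = x + random.choice((-1, 1))
            crossing = max(x, nxt) % 10 == 0   # bottleneck bonds (9,10), (-1,0), ...
            if not crossing or random.random() < pass_prob:
                x = nxt                        # failed crossings leave x unchanged
        total += x * x
    return total / n_walkers

free = mean_squared_displacement(2000, 300, 1.0)    # MSD ~ n_steps
narrow = mean_squared_displacement(2000, 300, 0.2)  # bottlenecks slow transport
print(free, narrow)
```

In the barrier picture the long-time diffusion coefficient follows the harmonic mean of bond conductances, so a single slow bond per period already dominates transport, which mirrors the abstract's conclusion that widely present bottlenecks become the most relevant factor regardless of connectivity.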
Li, Ruipeng; Ward, Jeremy W.; Smilgies, Detlef Matthias; Payne, Marcia M.; Anthony, John Edward; Jurchescu, Oana D.; Amassian, Aram
2012-01-01
X-ray microbeam scattering is used to map the microstructure of the organic semiconductor along the channel length of solution-processed bottom-contact OFET devices. Contact-induced nucleation is known to influence the crystallization behavior within the channel. We find that microstructural inhomogeneities in the center of the channel act as a bottleneck to charge transport. This problem can be overcome by controlling crystallization of the preferable texture, thus favoring more efficient charge transport throughout the channel. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Hybrid Bridge-Based Memetic Algorithms for Finding Bottlenecks in Complex Networks
DEFF Research Database (Denmark)
Chalupa, David; Hawick, Ken; Walker, James A
2018-01-01
We propose a memetic approach to find bottlenecks in complex networks based on searching for a graph partitioning with minimum conductance. Finding the optimum of this problem, also known in statistical mechanics as the Cheeger constant, is one of the most interesting NP-hard network optimisation… as results for samples of social networks and protein–protein interaction networks. These indicate that both well-informed initial population generation and the use of a crossover seem beneficial in solving the problem in large-scale…
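Conductance itself is cheap to evaluate for a given partition; the hard part the memetic algorithm addresses is the search over partitions. A minimal evaluator (the graph and cut below are invented for illustration):

```python
def conductance(adj, part):
    """Conductance of a cut: cut edges / min(vol(S), vol(V\\S)), where the
    volume of a side is the sum of its vertex degrees. Cut edges are
    counted once, from the S side."""
    part = set(part)
    cut = 0
    vol_s = 0
    vol_rest = 0
    for u, neighbours in adj.items():
        for v in neighbours:
            if u in part:
                vol_s += 1
                if v not in part:
                    cut += 1
            else:
                vol_rest += 1
    return cut / min(vol_s, vol_rest)

# Two triangles joined by a single bridge edge: the bridge is the bottleneck
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
print(conductance(adj, {0, 1, 2}))  # 1 cut edge over volume 7
```

A memetic search would evolve candidate `part` sets (with crossover and local moves) to minimise exactly this objective; the bridge-separated partition above is the global optimum for this toy graph.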
Implementation of density-based solver for all speeds in the framework of OpenFOAM
Shen, Chun; Sun, Fengxian; Xia, Xinlin
2014-10-01
In the framework of the open-source CFD code OpenFOAM, a density-based solver for flow fields at all speeds is developed. In this solver the preconditioned all-speeds AUSM+(P) scheme is adopted, and a dual-time scheme is implemented for unsteady simulations. Parallel computation can be used to accelerate the solution process. Different interface reconstruction algorithms are implemented, and their accuracy with respect to convection is compared. Three benchmark tests of lid-driven cavity flow, flow crossing over a bump, and flow over a forward-facing step are presented to show the accuracy of the AUSM+(P) solver for low-speed incompressible flow, transonic flow, and supersonic/hypersonic flow. Firstly, for the lid-driven cavity flow, the computational results obtained by different interface reconstruction algorithms are compared. The one-dimensional reconstruction scheme adopted in this solver possesses high accuracy, and the solver developed in this paper effectively captures the features of low-speed incompressible flow. Then, via the test cases of flow crossing over a bump and over a forward-facing step, the ability to capture the characteristics of transonic and supersonic/hypersonic flows is confirmed. The forward-facing step proves to be the most challenging for the preconditioned solvers with and without the dual-time scheme. Nonetheless, the solvers described in this paper reproduce the main features of this flow, including the evolution of the initial transient.
Acceleration of FDTD mode solver by high-performance computing techniques.
Han, Lin; Xi, Yanping; Huang, Wei-Ping
2010-06-21
A two-dimensional (2D) compact finite-difference time-domain (FDTD) mode solver is developed based on wave equation formalism in combination with the matrix pencil method (MPM). The method is validated for calculation of both real guided and complex leaky modes of typical optical waveguides against the benchmark finite-difference (FD) eigenmode solver. By taking advantage of the inherent parallel nature of the FDTD algorithm, the mode solver is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). It is demonstrated that the high-performance computing technique leads to significant acceleration of the FDTD mode solver, with more than 30 times improvement in computational efficiency in comparison with the conventional FDTD mode solver running on the CPU of a standard desktop computer. The computational efficiency of the accelerated FDTD method is of the same order of magnitude as that of the standard finite-difference eigenmode solver, and yet it requires much less memory (e.g., less than 10%). Therefore, the new method may serve as an efficient, accurate and robust tool for mode calculation of optical waveguides even when conventional eigenvalue mode solvers are no longer applicable due to memory limitations.
The impact of improved sparse linear solvers on industrial engineering applications
Energy Technology Data Exchange (ETDEWEB)
Heroux, M. [Cray Research, Inc., Eagan, MN (United States); Baddourah, M.; Poole, E.L.; Yang, Chao Wu
1996-12-31
There are usually many factors that ultimately determine the quality of computer simulation for engineering applications. Some of the most important are the quality of the analytical model and approximation scheme, the accuracy of the input data and the capability of the computing resources. However, in many engineering applications the characteristics of the sparse linear solver are the key factors in determining how complex a problem a given application code can solve. Therefore, the advent of a dramatically improved solver often brings with it dramatic improvements in our ability to do accurate and cost effective computer simulations. In this presentation we discuss the current status of sparse iterative and direct solvers in several key industrial CFD and structures codes, and show the impact that recent advances in linear solvers have made on both our ability to perform challenging simulations and the cost of those simulations. We also present some of the current challenges we have and the constraints we face in trying to improve these solvers. Finally, we discuss future requirements for sparse linear solvers on high performance architectures and try to indicate the opportunities that exist if we can develop even more improvements in linear solver capabilities.
International Nuclear Information System (INIS)
Jia, Jingfei; Kim, Hyun K.; Hielscher, Andreas H.
2015-01-01
It is well known that the radiative transfer equation (RTE) provides more accurate tomographic results than its diffusion approximation (DA). However, RTE-based tomographic reconstruction codes have limited applicability in practice due to their high computational cost. In this article, we propose a new efficient method for solving the RTE forward problem with multiple light sources in an all-at-once manner instead of solving it for each source separately. To this end, we introduce here a novel linear solver called the block biconjugate gradient stabilized method (block BiCGStab) that makes full use of the shared information between different right hand sides to accelerate solution convergence. Two parallelized block BiCGStab methods are proposed for additional acceleration when the number of available threads is limited. We evaluate the performance of this algorithm with numerical simulation studies involving the Delta–Eddington approximation to the scattering phase function. The results show that the single-threaded block RTE solver proposed here reduces computation time by a factor of 1.5–3 as compared to the traditional sequential solution method, and the parallel block solver by a factor of 1.5 as compared to the traditional parallel sequential method. This block linear solver is, moreover, independent of the discretization schemes and preconditioners used; thus further acceleration and higher accuracy can be expected when combined with other existing discretization schemes or preconditioners. - Highlights: • We solve the multiple-right-hand-side problem in DOT with a block BiCGStab method. • We examine the CPU times of the block solver and the traditional sequential solver. • The block solver is faster than the sequential solver by a factor of 1.5–3.0. • Multi-threaded block solvers give additional speedup when the number of threads is limited.
A parallel direct solver for the self-adaptive hp Finite Element Method
Paszyński, Maciej R.
2010-03-01
In this paper we present a new parallel multi-frontal direct solver, dedicated for the hp Finite Element Method (hp-FEM). The self-adaptive hp-FEM generates in a fully automatic mode, a sequence of hp-meshes delivering exponential convergence of the error with respect to the number of degrees of freedom (d.o.f.) as well as the CPU time, by performing a sequence of hp refinements starting from an arbitrary initial mesh. The solver constructs an initial elimination tree for an arbitrary initial mesh, and expands the elimination tree each time the mesh is refined. This allows us to keep track of the order of elimination for the solver. The solver also minimizes the memory usage, by de-allocating partial LU factorizations computed during the elimination stage of the solver, and recomputes them for the backward substitution stage, by utilizing only about 10% of the computational time necessary for the original computations. The solver has been tested on 3D Direct Current (DC) borehole resistivity measurement simulations problems. We measure the execution time and memory usage of the solver over a large regular mesh with 1.5 million degrees of freedom as well as on the highly non-regular mesh, generated by the self-adaptive hp-FEM, with finite elements of various sizes and polynomial orders of approximation varying from p = 1 to p = 9. From the presented experiments it follows that the parallel solver scales well up to the maximum number of utilized processors. The limit for the solver scalability is the maximum sequential part of the algorithm: the computations of the partial LU factorizations over the longest path, coming from the root of the elimination tree down to the deepest leaf. © 2009 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Joode, Jeroen de; Werven, Michiel van
2005-01-01
This paper analyses the potential bottlenecks that might emerge in the North-western European electricity supply system as a result of a number of (autonomous) long-term developments. The main long-term developments we identify are 1) a continuing increase in the demand for electricity, 2) a gradual shift from conventional electricity generation towards unconventional (green) generation, 3) a gradual shift from centralized generation towards decentralized generation and 4) a shift from national self-sufficient electricity supply systems towards a pan-European electricity system. Although it has been recognized that these developments might cause certain problems in one or more elements of the electricity supply chain, a coherent and comprehensive framework for the identification of these problems is lacking. More specifically, governments and regulators seem to focus on certain parts of the electricity supply system separately, whereas certain interdependencies in the system have received relatively little attention. This paper presents such a framework and identifies some potential bottlenecks that receive relatively little attention from policy makers. These are 1) the increasing penetration of distributed generation, 2) an increasingly important role for demand response and 3) the lack of locational signals in the electricity supply system. The potential role of governments and markets in these issues is briefly explored. (Author)
Dynamics of the central bottleneck: dual-task and task uncertainty.
Directory of Open Access Journals (Sweden)
Mariano Sigman
2006-07-01
Why is the human brain fundamentally limited when attempting to execute two tasks at the same time or in close succession? Two classical paradigms, psychological refractory period (PRP) and task switching, have independently approached this issue, making significant advances in our understanding of the architecture of cognition. Yet, there is an apparent contradiction between the conclusions derived from these two paradigms. The PRP paradigm, on the one hand, suggests that the simultaneous execution of two tasks is limited solely by a passive structural bottleneck in which the tasks are executed on a first-come, first-served basis. The task-switching paradigm, on the other hand, argues that switching back and forth between task configurations must be actively controlled by a central executive system (the system controlling voluntary, planned, and flexible action). Here we have explicitly designed an experiment mixing the essential ingredients of both paradigms: task uncertainty and task simultaneity. In addition to a central bottleneck, we obtain evidence for active processes of task setting (planning of the appropriate sequence of actions) and task disengaging (suppression of the plan set for the first task in order to proceed with the next one). Our results clarify the chronometric relations between these central components of dual-task processing, and in particular whether they operate serially or in parallel. On this basis, we propose a hierarchical model of cognitive architecture that provides a synthesis of task-switching and PRP paradigms.
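The serial-bottleneck account of the PRP effect can be captured with a few lines of deterministic arithmetic: task 2's central stage must wait until task 1's central stage finishes, so RT2 falls with a slope of roughly -1 as stimulus onset asynchrony (SOA) grows, then flattens. The stage durations below are arbitrary illustrative values, not estimates from the paper.

```python
def prp_rt2(soa, perceptual=0.1, central1=0.3, central2=0.3, motor=0.1):
    """Response time to task 2 under a serial central bottleneck: task 2's
    central stage starts only after task 1's central stage has finished."""
    t1_central_end = perceptual + central1      # task 1 occupies the bottleneck
    t2_perceptual_end = soa + perceptual        # task 2 stimulus arrives at SOA
    t2_central_start = max(t2_perceptual_end, t1_central_end)
    return t2_central_start + central2 + motor - soa  # RT from task 2 onset

for soa in (0.0, 0.1, 0.2, 0.3, 0.5):
    print(soa, round(prp_rt2(soa), 3))  # falls with SOA, then flattens
```

At long SOA the bottleneck is free when task 2 arrives, so RT2 settles at the single-task baseline (perceptual + central2 + motor); the knee of the curve is the classic diagnostic of a passive first-come, first-served bottleneck.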
Modelling of lane-changing behaviour integrating with merging effect before a city road bottleneck
Lv, Wei; Song, Wei-guo; Fang, Zhi-ming; Ma, Jian
2013-10-01
Merging behaviour is a compulsive action in a discretionary lane-changing traffic system, especially in a system with a bottleneck. This paper aims to investigate generic lane-changing behaviour considering the merging effect before a city road bottleneck. We therefore first distinguish merging behaviour from other generic lane-changing behaviour. Combining discretionary lane-changing and compulsive merging, we developed an integrative traffic model, in which a method to calculate the lane-changing probability and the merging probability was proposed. A simulation scenario derived from real life was conducted to validate the proposed programming algorithm. Finally, a discussion of the simulation findings shows that the influence of merging can spread, and that merging behaviour increases the probability of local traffic jamming in the affected area of the adjacent lane. The distribution of the merging distance provides fundamental insights for actual traffic management. The clearance-time results imply that the position of the incident point has a significant effect on the clearing time, and that it is important to ensure the end (exit) of the road is unimpeded during traffic evacuation.
Note: Inhibiting bottleneck corrosion in electrical calcium tests for ultra-barrier measurements
Energy Technology Data Exchange (ETDEWEB)
Nehm, F., E-mail: frederik.nehm@iapp.de; Müller-Meskamp, L.; Klumbies, H.; Leo, K. [Institut für Angewandte Photophysik, Technische Universität Dresden, George-Bähr-Straße 1, 01069 Dresden (Germany)
2015-12-15
A major failure mechanism is identified in electrical calcium corrosion tests for quality assessment of high-end application moisture barriers. Accelerated calcium corrosion is found at the calcium/electrode junction, leading to an electrical bottleneck. This causes test failure not related to overall calcium loss. The likely cause is a difference in electrochemical potential between the aluminum electrodes and the calcium sensor, resulting in a corrosion element. As a solution, a thin, full-area copper layer is introduced below the calcium, shifting the corrosion element to the calcium/copper junction and inhibiting bottleneck degradation. Using the copper layer improves the sensitivity of the water vapor transmission rate (WVTR) measurement by over one order of magnitude. Thin-film encapsulated samples with 20 nm atomic-layer-deposited alumina barriers thereby exhibit WVTRs of 6 × 10⁻⁵ g(H₂O)/m²/day at 38 °C and 90% relative humidity.
Flagella-Driven Flows Circumvent Diffusive Bottlenecks that Inhibit Metabolite Exchange
Short, Martin; Solari, Cristian; Ganguly, Sujoy; Kessler, John; Goldstein, Raymond; Powers, Thomas
2006-03-01
The evolution of single cells to large and multicellular organisms requires matching the organisms' needs to the rate of exchange of metabolites with the environment. This logistic problem can be a severe constraint on development. For organisms with a body plan that approximates a spherical shell, such as colonies of the volvocine green algae, the required current of metabolites grows quadratically with colony radius whereas the rate at which diffusion can exchange metabolites grows only linearly with radius. Hence, there is a bottleneck radius beyond which the diffusive current cannot keep up with metabolic demands. Using Volvox carteri as a model organism, we examine experimentally and theoretically the role that advection of fluid by surface-mounted flagella plays in enhancing nutrient uptake. We show that fluid flow driven by the coordinated beating of flagella produces a convective boundary layer in the concentration of a diffusing solute which in turn renders the metabolite exchange rate quadratic in the colony radius. This enhanced transport circumvents the diffusive bottleneck, allowing increase in size and thus evolutionary transitions to multicellularity in the Volvocales.
Robust large-scale parallel nonlinear solvers for simulations.
Energy Technology Data Exchange (ETDEWEB)
Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)
2005-11-01
This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using models other than Newton's: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and compute a step from a local quadratic model rather than a linear one. The advantage of Bouaricha's method is that it can use any
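The secant-update idea behind Broyden's method described above can be illustrated with a minimal sketch. This is not the Sandia implementation; the test function (a circle/parabola intersection) and the finite-difference initialization are invented for demonstration:

```python
import numpy as np

def broyden(f, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 without re-evaluating the Jacobian.

    A finite-difference Jacobian is built once; afterwards the
    approximation B is corrected by rank-one secant updates, the
    idea behind Broyden's 'good' method.
    """
    x = np.asarray(x0, dtype=float)
    n = len(x)
    fx = f(x)
    B = np.empty((n, n))
    h = 1e-7
    for j in range(n):                      # one-time FD Jacobian
        e = np.zeros(n)
        e[j] = h
        B[:, j] = (f(x + e) - fx) / h
    for _ in range(max_iter):
        s = np.linalg.solve(B, -fx)         # quasi-Newton step
        x = x + s
        fx_new = f(x)
        if np.linalg.norm(fx_new) < tol:
            break
        # secant update: after it, B @ s equals fx_new - fx
        B += np.outer(fx_new - fx - B @ s, s) / (s @ s)
        fx = fx_new
    return x

# Example: intersection of the circle x^2 + y^2 = 2 with y = x^2
def f(v):
    x, y = v
    return np.array([x**2 + y**2 - 2.0, x**2 - y])

root = broyden(f, [1.5, 0.5])
```

Because only `f` is evaluated after the first step, the same scheme applies when an analytic Jacobian is unavailable or expensive, which is the situation the report targets.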
DEFF Research Database (Denmark)
Pang, Kar Mun; Ivarsson, Anders; Haider, Sajjad
2013-01-01
In the current work, a local time stepping (LTS) solver for the modeling of combustion, radiative heat transfer and soot formation is developed and validated. This is achieved using an open source computational fluid dynamics code, OpenFOAM. Akin to the solver provided in default assembly i...... library in the edcSimpleFoam solver which was introduced during the 6th OpenFOAM workshop is modified and coupled with the current solver. One of the main amendments made is the integration of soot radiation submodel since this is significant in rich flames where soot particles are formed. The new solver...
Verification of continuum drift kinetic equation solvers in NIMROD
Energy Technology Data Exchange (ETDEWEB)
Held, E. D.; Ji, J.-Y. [Utah State University, Logan, Utah 84322-4415 (United States); Kruger, S. E. [Tech-X Corporation, Boulder, Colorado 80303 (United States); Belli, E. A. [General Atomics, San Diego, California 92186-5608 (United States); Lyons, B. C. [Program in Plasma Physics, Princeton University, Princeton, New Jersey 08543-0451 (United States)
2015-03-15
Verification of continuum solutions to the electron and ion drift kinetic equations (DKEs) in NIMROD [C. R. Sovinec et al., J. Comp. Phys. 195, 355 (2004)] is demonstrated through comparison with several neoclassical transport codes, most notably NEO [E. A. Belli and J. Candy, Plasma Phys. Controlled Fusion 54, 015015 (2012)]. The DKE solutions use NIMROD's spatial representation, 2D finite-elements in the poloidal plane and a 1D Fourier expansion in toroidal angle. For 2D velocity space, a novel 1D expansion in finite elements is applied for the pitch angle dependence and a collocation grid is used for the normalized speed coordinate. The full, linearized Coulomb collision operator is kept and shown to be important for obtaining quantitative results. Bootstrap currents, parallel ion flows, and radial particle and heat fluxes show quantitative agreement between NIMROD and NEO for a variety of tokamak equilibria. In addition, velocity space distribution function contours for ions and electrons show nearly identical detailed structure and agree quantitatively. A Θ-centered, implicit time discretization and a block-preconditioned, iterative linear algebra solver provide efficient electron and ion DKE solutions that ultimately will be used to obtain closures for NIMROD's evolving fluid model.
Shared memory parallelism for 3D cartesian discrete ordinates solver
International Nuclear Information System (INIS)
Moustafa, S.; Dutka-Malen, I.; Plagne, L.; Poncot, A.; Ramet, P.
2013-01-01
This paper describes the design and the performance of DOMINO, a 3D Cartesian SN solver that implements two nested levels of parallelism (multi-core + SIMD - Single Instruction on Multiple Data) on shared memory computation nodes. DOMINO is written in C++, a multi-paradigm programming language that enables the use of powerful and generic parallel programming tools such as Intel TBB and Eigen. These two libraries allow us to combine multi-thread parallelism with vector operations in an efficient and yet portable way. As a result, DOMINO can exploit the full power of modern multi-core processors and is able to tackle very large simulations, that usually require large HPC clusters, using a single computing node. For example, DOMINO solves a 3D full core PWR eigenvalue problem involving 26 energy groups, 288 angular directions (S16), 46×10⁶ spatial cells and 1×10¹² DoFs within 11 hours on a single 32-core SMP node. This represents a sustained performance of 235 GFlops and 40.74% of the SMP node peak performance for the DOMINO sweep implementation. The very high Flops/Watt ratio of DOMINO makes it a very interesting building block for a future many-nodes nuclear simulation tool. (authors)
Parallelization of elliptic solver for solving 1D Boussinesq model
Tarwidi, D.; Adytia, D.
2018-03-01
In this paper, a parallel implementation of an elliptic solver in solving the 1D Boussinesq model is presented. Numerical solution of the Boussinesq model is obtained by implementing a staggered grid scheme for the continuity, momentum, and elliptic equations of the model. The tridiagonal system emerging from the numerical scheme of the elliptic equation is solved by the cyclic reduction algorithm. The parallel implementation of cyclic reduction is executed on multicore processors with shared memory architectures using OpenMP. To measure the performance of the parallel program, the number of grid points is varied from 2⁸ to 2¹⁴. Two test cases of numerical experiment, i.e. propagation of a solitary wave and of a standing wave, are proposed to evaluate the parallel program. The numerical results are verified with analytical solutions of the solitary and standing waves. The best speedup for the solitary and standing wave test cases is about 2.07 with 2¹⁴ grid points and 1.86 with 2¹³ grid points, respectively, both executed using 8 threads. Moreover, the best efficiency of the parallel program is 76.2% and 73.5% for the solitary and standing wave test cases, respectively.
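The cyclic reduction algorithm used above can be sketched as follows. This is a generic serial illustration (the paper's OpenMP implementation is not reproduced), but the property it parallelises is visible: the inner-loop updates within each level are mutually independent:

```python
import numpy as np

def cyclic_reduction(sub, diag, sup, rhs):
    """Solve a tridiagonal system by cyclic reduction.

    sub/diag/sup hold the three diagonals (sub[0] and sup[-1] are
    unused), rhs the right-hand side; the size must be 2**k - 1.
    Each reduction level folds every second equation into its two
    neighbours, halving the number of unknowns; the updates within
    a level are independent of each other, which is what a threaded
    implementation can exploit.
    """
    a, b, c, d = (np.array(v, dtype=float) for v in (sub, diag, sup, rhs))
    n = len(b)
    k = round(np.log2(n + 1))
    assert 2**k - 1 == n, "size must be 2**k - 1"
    a[0] = c[-1] = 0.0
    stride = 1
    for _ in range(k - 1):                       # forward reduction
        for i in range(2 * stride - 1, n, 2 * stride):
            lo, hi = i - stride, i + stride
            alpha, beta = -a[i] / b[lo], -c[i] / b[hi]
            b[i] += alpha * c[lo] + beta * a[hi]
            d[i] += alpha * d[lo] + beta * d[hi]
            a[i], c[i] = alpha * a[lo], beta * c[hi]
        stride *= 2
    x = np.zeros(n)
    while stride >= 1:                           # back substitution
        for i in range(stride - 1, n, 2 * stride):
            lo, hi = i - stride, i + stride
            xlo = x[lo] if lo >= 0 else 0.0
            xhi = x[hi] if hi < n else 0.0
            x[i] = (d[i] - a[i] * xlo - c[i] * xhi) / b[i]
        stride //= 2
    return x

# a small diagonally dominant test system (invented data)
rng = np.random.default_rng(0)
sub, sup, rhs = rng.random(15), rng.random(15), rng.random(15)
diag = 4.0 + rng.random(15)
x = cyclic_reduction(sub, diag, sup, rhs)
```

Unlike the strictly sequential Thomas algorithm, each of the log₂(n+1) levels here is a bulk update, which is why the paper can distribute it over OpenMP threads.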
Development and acceleration of unstructured mesh-based cfd solver
Emelyanov, V.; Karpenko, A.; Volkov, K.
2017-06-01
The study was undertaken as part of a larger effort to establish a common computational fluid dynamics (CFD) code for simulation of internal and external flows and involves some basic validation studies. The governing equations are solved with a finite volume code on unstructured meshes. The computational procedure involves reconstruction of the solution in each control volume and extrapolation of the unknowns to find the flow variables on the faces of the control volume, solution of the Riemann problem for each face of the control volume, and evolution of the time step. The nonlinear CFD solver works in an explicit time-marching fashion, based on a three-step Runge-Kutta stepping procedure. Convergence to a steady state is accelerated by the use of geometric technique and by the application of Jacobi preconditioning for high-speed flows, with a separate low Mach number preconditioning method for use with low-speed flows. The CFD code is implemented on graphics processing units (GPUs). Speedup of solution on GPUs with respect to solution on central processing units (CPU) is compared with the use of different meshes and different methods of distribution of input data into blocks. The results obtained provide promising perspective for designing a GPU-based software framework for applications in CFD.
Advanced features of the fault tree solver FTREX
International Nuclear Information System (INIS)
Jung, Woo Sik; Han, Sang Hoon; Ha, Jae Joo
2005-01-01
This paper presents advanced features of a fault tree solver FTREX (Fault Tree Reliability Evaluation eXpert). Fault tree analysis is one of the most commonly used methods for the safety analysis of industrial systems especially for the probabilistic safety analysis (PSA) of nuclear power plants. Fault trees are solved by the classical Boolean algebra, conventional Binary Decision Diagram (BDD) algorithm, coherent BDD algorithm, and Bayesian networks. FTREX could optionally solve fault trees by the conventional BDD algorithm or the coherent BDD algorithm and could convert the fault trees into the form of the Bayesian networks. The algorithm based on the classical Boolean algebra solves a fault tree and generates MCSs. The conventional BDD algorithm generates a BDD structure of the top event and calculates the exact top event probability. The BDD structure is a factorized form of the prime implicants. The MCSs of the top event could be extracted by reducing the prime implicants in the BDD structure. The coherent BDD algorithm is developed to overcome the shortcomings of the conventional BDD algorithm such as the huge memory requirements and a long run time.
Domain decomposition methods for core calculations using the MINOS solver
International Nuclear Information System (INIS)
Guerin, P.; Baudron, A. M.; Lautard, J. J.
2007-01-01
Cell by cell homogenized transport calculations of an entire nuclear reactor core are currently too expensive for industrial applications, even if a simplified transport (SPn) approximation is used. In order to take advantage of parallel computers, we propose here two domain decomposition methods using the mixed dual finite element solver MINOS. The first one is a modal synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem on each sub-domain are taken as basis functions for the resolution of the global problem on the whole domain. The second one is an iterative method based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, we solve the problem on each sub-domain with the interface conditions given by the solutions on the neighbouring sub-domains estimated at the previous iteration. For these two methods, we give numerical results which demonstrate their accuracy and their efficiency for the diffusion model on realistic 2D and 3D cores. (authors)
A generalized Poisson solver for first-principles device simulations
Energy Technology Data Exchange (ETDEWEB)
Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost, E-mail: joost.vandevondele@mat.ethz.ch [Nanoscale Simulations, ETH Zürich, 8093 Zürich (Switzerland); Brück, Sascha; Luisier, Mathieu [Integrated Systems Laboratory, ETH Zürich, 8092 Zürich (Switzerland)
2016-01-28
Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.
Parallelizable approximate solvers for recursions arising in preconditioning
Energy Technology Data Exchange (ETDEWEB)
Shapira, Y. [Israel Inst. of Technology, Haifa (Israel)
1996-12-31
For the recursions used in the Modified Incomplete LU (MILU) preconditioner, namely, the incomplete decomposition, forward elimination and back substitution processes, a parallelizable approximate solver is presented. The present analysis shows that the solutions of the recursions depend only weakly on their initial conditions and may be interpreted to indicate that the inexact solution is close, in some sense, to the exact one. The method is based on a domain decomposition approach, suitable for parallel implementations with message passing architectures. It requires a fixed number of communication steps per preconditioned iteration, independently of the number of subdomains or the size of the problem. The overlapping subdomains are either cubes (suitable for mesh-connected arrays of processors) or constructed by the data-flow rule of the recursions (suitable for line-connected arrays with possibly SIMD or vector processors). Numerical examples show that, in both cases, the overhead in the number of iterations required for convergence of the preconditioned iteration is small relative to the speed-up gained.
Liu, Yang; Bagci, Hakan; Michielssen, Eric
2013-01-01
numbers of temporal and spatial basis functions discretizing the current [Shanker et al., IEEE Trans. Antennas Propag., 51, 628-641, 2003]. In the past, serial versions of these solvers have been successfully applied to the analysis of scattering from
Hybrid direct and iterative solvers for h refined grids with singularities
Paszyński, Maciej R.
2015-04-27
This paper describes a hybrid direct and iterative solver for two- and three-dimensional h adaptive grids with point singularities. The point singularities are eliminated by using a sequential linear computational cost O(N) solver on CPU [1]. The remaining Schur complements are submitted to an incomplete LU preconditioned conjugate gradient (ILUPCG) iterative solver. The approach is compared to the standard algorithm performing static condensation over the entire mesh and executing the ILUPCG algorithm on top of it. The hybrid solver is applied to two- or three-dimensional grids automatically h refined towards point or edge singularities. The automatic refinement is based on the relative error estimations between the coarse and fine mesh solutions [2], and the optimal refinements are selected using the projection based interpolation. The computational mesh is partitioned into sub-meshes with local point and edge singularities separated. This is done by using the following greedy algorithm.
Advanced field-solver techniques for RC extraction of integrated circuits
Yu, Wenjian
2014-01-01
Resistance and capacitance (RC) extraction is an essential step in modeling the interconnection wires and substrate coupling effect in nanometer-technology integrated circuits (IC). The field-solver techniques for RC extraction guarantee the accuracy of modeling, and are becoming increasingly important in meeting the demand for accurate modeling and simulation of VLSI designs. Advanced Field-Solver Techniques for RC Extraction of Integrated Circuits presents a systematic introduction to, and treatment of, the key field-solver methods for RC extraction of VLSI interconnects and substrate coupling in mixed-signal ICs. Various field-solver techniques are explained in detail, with real-world examples to illustrate the advantages and disadvantages of each algorithm. This book will benefit graduate students and researchers in the field of electrical and computer engineering, as well as engineers working in the IC design and design automation industries. Dr. Wenjian Yu is an Associate Professor at the Department of ...
FATCOP: A Fault Tolerant Condor-PVM Mixed Integer Program Solver
National Research Council Canada - National Science Library
Chen, Qun
1999-01-01
We describe FATCOP, a new parallel mixed integer program solver written in PVM. The implementation uses the Condor resource management system to provide a virtual machine composed of otherwise idle computers...
An Investigation of the Performance of the Colored Gauss-Seidel Solver on CPU and GPU
International Nuclear Information System (INIS)
Yoon, Jong Seon; Choi, Hyoung Gwon; Jeon, Byoung Jin
2017-01-01
The performance of the colored Gauss–Seidel solver on CPU and GPU was investigated for the two- and three-dimensional heat conduction problems by using different mesh sizes. The heat conduction equation was discretized by the finite difference method and finite element method. The CPU yielded good performance for small problems but deteriorated when the total memory required for computing was larger than the cache memory for large problems. In contrast, the GPU performed better as the mesh size increased because of the latency hiding technique. Further, GPU computation with the colored Gauss–Seidel solver was approximately 7 times faster than that with a single CPU. Furthermore, on the GPU the colored Gauss–Seidel solver was found to be approximately twice as fast as the Jacobi solver.
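The coloring idea behind this solver can be sketched with a red-black ordering on a uniform 2D grid. This is a generic illustration (not the authors' code, and the boundary values are invented): points of one colour have neighbours only of the other colour, so each half-sweep is one bulk, order-independent update:

```python
import numpy as np

def redblack_gauss_seidel(T, sweeps=2000, tol=1e-8):
    """Colored (red-black) Gauss-Seidel for the steady 2D heat
    equation, with Dirichlet values stored on the boundary of T.

    Each half-sweep updates every point of one colour at once;
    that independence is what lets a GPU assign one thread per
    point without race conditions.
    """
    i, j = np.meshgrid(*(np.arange(m) for m in T.shape), indexing="ij")
    red = (i + j) % 2 == 0
    for _ in range(sweeps):
        old = T.copy()
        for colour in (red, ~red):
            mask = colour[1:-1, 1:-1]
            avg = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1]
                          + T[1:-1, :-2] + T[1:-1, 2:])
            T[1:-1, 1:-1][mask] = avg[mask]
        if np.max(np.abs(T - old)) < tol:
            break
    return T

# unit temperature on the top edge, zero elsewhere (invented case)
T = np.zeros((33, 33))
T[0, :] = 1.0
T = redblack_gauss_seidel(T)
```

The second half-sweep reuses values just written in the first, so the method retains Gauss-Seidel's convergence character while exposing the bulk parallelism the plain lexicographic ordering lacks.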
Graph Grammar-Based Multi-Frontal Parallel Direct Solver for Two-Dimensional Isogeometric Analysis
Kuźnik, Krzysztof; Paszyński, Maciej; Calo, Victor M.
2012-01-01
at parent nodes and eliminates rows corresponding to fully assembled degrees of freedom. Finally, there are graph grammar productions responsible for root problem solution and recursive backward substitutions. Expressing the solver algorithm by graph grammar
GPU-Accelerated Sparse Matrix Solvers for Large-Scale Simulations, Phase II
National Aeronautics and Space Administration — At the heart of scientific computing and numerical analysis are linear algebra solvers. In scientific computing, the focus is on the partial differential equations...
Tests of a 3D Self Magnetic Field Solver in the Finite Element Gun Code MICHELLE
Nelson, Eric M
2005-01-01
We have recently implemented a prototype 3D self magnetic field solver in the finite-element gun code MICHELLE. The new solver computes the magnetic vector potential on unstructured grids. The solver employs edge basis functions in the curl-curl formulation of the finite-element method. A novel current accumulation algorithm takes advantage of the unstructured grid particle tracker to produce a compatible source vector, for which the singular matrix equation is easily solved by the conjugate gradient method. We will present some test cases demonstrating the capabilities of the prototype 3D self magnetic field solver. One test case is self magnetic field in a square drift tube. Another is a relativistic axisymmetric beam freely expanding in a round pipe.
A distributed-memory hierarchical solver for general sparse linear systems
Energy Technology Data Exchange (ETDEWEB)
Chen, Chao [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering; Pouransari, Hadi [Stanford Univ., CA (United States). Dept. of Mechanical Engineering; Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Boman, Erik G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Darve, Eric [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering and Dept. of Mechanical Engineering
2017-12-20
We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
Energy Technology Data Exchange (ETDEWEB)
Park, Sun Ho [Korea Maritime and Ocean University, Busan (Korea, Republic of); Rhee, Shin Hyung [Seoul National University, Seoul (Korea, Republic of)
2015-08-15
Incompressible flow solvers are generally used for numerical analysis of cavitating flows, but they are limited in handling compressibility effects on the vapor phase. To study compressibility effects on the vapor phase and the cavity interface, pressure-based incompressible and isothermal compressible flow solvers based on a cell-centered finite volume method were developed using the OpenFOAM libraries. To validate the solvers, cavitating flow around a hemispherical head-form body was simulated and validated against the experimental data. The cavity shedding behavior, length of the re-entrant jet, drag history, and the Strouhal number were compared between the two solvers. The results confirmed that computations of the cavitating flow including compressibility effects improved the reproduction of cavitation dynamics.
Wang, XiaoLiang; Li, JiaChun
2017-12-01
A new solver based on the high-resolution scheme with novel treatments of source terms and interface capture for the Savage-Hutter model is developed to simulate granular avalanche flows. The capability to simulate flow spread and deposit processes is verified through indoor experiments of a two-dimensional granular avalanche. Parameter studies show that reduction in bed friction enhances runout efficiency, and that lower earth pressure restraints enlarge the deposit spread. The April 9, 2000, Yigong avalanche in Tibet, China, is simulated as a case study by this new solver. The predicted results, including evolution process, deposit spread, and hazard impacts, generally agree with site observations. It is concluded that the new solver for the Savage-Hutter equation provides a comprehensive software platform for granular avalanche simulation at both experimental and field scales. In particular, the solver can be a valuable tool for providing necessary information for hazard forecasts, disaster mitigation, and countermeasure decisions in mountainous areas.
User's Manual for PCSMS (Parallel Complex Sparse Matrix Solver). Version 1.
Reddy, C. J.
2000-01-01
PCSMS (Parallel Complex Sparse Matrix Solver) is a computer code written to make use of existing real sparse direct solvers to solve complex, sparse matrix linear equations. PCSMS converts complex matrices into real matrices and uses real sparse direct matrix solvers to factor and solve the real matrices. The solution vector is reconverted to complex numbers. Though this utility is written for Silicon Graphics (SGI) real sparse matrix solution routines, it is general in nature and can be easily modified to work with any real sparse matrix solver. The User's Manual is written to acquaint the user with the installation and operation of the code. Driver routines are given to help users integrate PCSMS routines into their own codes.
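The complex-to-real conversion described above has a standard block form. The following sketch (a dense stand-in, not PCSMS itself) shows the equivalence that lets a real solver handle a complex system:

```python
import numpy as np

def solve_complex_via_real(A, b):
    """Solve the complex system A z = b using only a real solver.

    Writing A = X + iY and z = u + iv, the n x n complex system is
    equivalent to the 2n x 2n real block system
        [ X  -Y ] [u]   [Re b]
        [ Y   X ] [v] = [Im b]
    which is the kind of conversion performed before handing the
    matrix to a real sparse direct solver.
    """
    X, Y = A.real, A.imag
    M = np.block([[X, -Y], [Y, X]])
    rhs = np.concatenate([b.real, b.imag])
    uv = np.linalg.solve(M, rhs)   # dense stand-in for the real solver
    n = A.shape[0]
    return uv[:n] + 1j * uv[n:]    # reconvert to complex numbers

# small random test system (invented data)
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
b = rng.standard_normal(5) + 1j * rng.standard_normal(5)
z = solve_complex_via_real(A, b)
```

The cost is a real system of twice the dimension; the payoff, as the manual notes, is reuse of mature real sparse factorization routines.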
Computational cost of isogeometric multi-frontal solvers on parallel distributed memory machines
Woźniak, Maciej; Paszyński, Maciej R.; Pardo, D.; Dalcin, Lisandro; Calo, Victor M.
2015-01-01
This paper derives theoretical estimates of the computational cost for an isogeometric multi-frontal direct solver executed on parallel distributed memory machines. We show theoretically that for the C^(p-1) global continuity of the isogeometric solution
Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; Carlson, Thomas J.
2014-11-01
Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.
Thermal Loss of High-Q Antennas in Time Domain vs. Frequency Domain Solver
DEFF Research Database (Denmark)
Bahramzy, Pevand; Pedersen, Gert Frølund
2014-01-01
High-Q structures pose great challenges to their loss simulations in Time Domain Solvers (TDS). Therefore, in this work the thermal loss of high-Q antennas is calculated both in TDS and Frequency Domain Solver (FDS), which are then compared with each other and with the actual measurements....... The thermal loss calculation in FDS is shown to be more accurate for high-Q antennas....
Motivation, Challenge, and Opportunity of Successful Solvers on an Innovation Platform
DEFF Research Database (Denmark)
Hossain, Mokter
2017-01-01
. The main motivational factors of successful solvers engaged in problem solving are money, learning, fun, sense of achievement, passion, and networking. Major challenges solvers face include unclear or insufficient problem description, lack of option for communication, language barrier, time zone...... other experts, the ability to work in a diverse environment, options of work after retirement and from distant locations, and a new source of income....
Time-Contrastive Learning Based DNN Bottleneck Features for Text-Dependent Speaker Verification
DEFF Research Database (Denmark)
Sarkar, Achintya Kumar; Tan, Zheng-Hua
2017-01-01
In this paper, we present a time-contrastive learning (TCL) based bottleneck (BN) feature extraction method for speech signals with an application to text-dependent (TD) speaker verification (SV). It is well-known that speech signals exhibit quasi-stationary behavior in and only in a short interval......, and the TCL method aims to exploit this temporal structure. More specifically, it trains deep neural networks (DNNs) to discriminate temporal events obtained by uniformly segmenting speech signals, in contrast to existing DNN based BN feature extraction methods that train DNNs using labeled data...... to discriminate speakers or pass-phrases or phones or a combination of them. In the context of speaker verification, speech data of fixed pass-phrases are used for TCL-BN training, while the pass-phrases used for TCL-BN training are excluded from being used for SV, so that the learned features can be considered...
Environment construction and bottleneck breakthrough in the improvement of wisdom exhibition
Zhang, Jiankang
2017-08-01
Wisdom exhibition is an inexorable trend in China's convention and exhibition industry. Information technology must be utilized by the exhibition industry to achieve intelligent application and wisdom management, breaking the limitations of time and space, which raises the quality of exhibition service and the level of operation to an entirely new standard. Accordingly, the exhibition industry should optimize the mobile Internet, a fundamental technology platform, during the advance of wisdom exhibition and perfect the combination of three components: wisdom connection of information, wisdom exhibition environment, and wisdom application of technology. Besides, the industry should realize the wisdom of the external environment, including the wisdom of the exhibition city, exhibition venues, exhibition resource deals, etc., and break through the bottlenecks in the construction of the wisdom exhibition industry, which include the construction of a big data center, the development of mobile Internet application platforms, the promotion of information construction, and the innovative design of application scenarios.
Head-of-tide bottleneck of particulate material transport from watersheds to estuaries
Ensign, Scott H.; Noe, Gregory; Hupp, Cliff R.; Skalak, Katherine
2015-01-01
We measured rates of sediment, C, N, and P accumulation at four floodplain sites spanning the nontidal through oligohaline Choptank and Pocomoke Rivers, Maryland, USA. Ceramic tiles were used to collect sediment for a year and sediment cores were collected to derive decadal sedimentation rates using 137Cs. The results showed highest rates of short- and long-term sediment, C, N, and P accumulation occurred in tidal freshwater forests at the head of tide on the Choptank and the oligohaline marsh of the Pocomoke River, and lowest rates occurred in the downstream tidal freshwater forests in both rivers. Presumably, watershed material was mostly trapped at the head of tide, and estuarine material was trapped in oligohaline marshes. This hydrologic transport bottleneck at the head of tide stores most available watershed sediment, C, N, and P creating a sediment shadow in lower tidal freshwater forests potentially limiting their resilience to sea level rise.
Maternal age effect and severe germ-line bottleneck in the inheritance of human mitochondrial DNA
DEFF Research Database (Denmark)
Rebolledo-Jaramillo, Boris; Su, Marcia Shu-Wei; Stoler, Nicholas
2014-01-01
The manifestation of mitochondrial DNA (mtDNA) diseases depends on the frequency of heteroplasmy (the presence of several alleles in an individual), yet its transmission across generations cannot be readily predicted owing to a lack of data on the size of the mtDNA bottleneck during oogenesis......, an order of magnitude higher than for nuclear DNA. Notably, we found a positive association between the number of heteroplasmies in a child and maternal age at fertilization, likely attributable to oocyte aging. This study also took advantage of droplet digital PCR (ddPCR) to validate heteroplasmies...... and confirm a de novo mutation. Our results can be used to predict the transmission of disease-causing mtDNA variants and illuminate evolutionary dynamics of the mitochondrial genome....
Planque, Mélanie; Arnould, Thierry; Renard, Patricia; Delahaut, Philippe; Dieu, Marc; Gillard, Nathalie
2017-07-01
Food laboratories have developed methods for testing allergens in foods. The efficiency of qualitative and quantitative methods is of prime importance in protecting allergic populations. Unfortunately, food laboratories encounter barriers to developing efficient methods. Bottlenecks include the lack of regulatory thresholds, delays in the emergence of reference materials and guidelines, and the need to detect processed allergens. In this study, ultra-HPLC coupled to tandem MS was used to illustrate difficulties encountered in determining method performances. We measured the major influences of both processing and matrix effects on the detection of egg, milk, soy, and peanut allergens in foodstuffs. The main goals of this work were to identify difficulties that food laboratories still encounter in detecting and quantifying allergens and to sensitize researchers to them.
Dynamics-Based Stranded-Crowd Model for Evacuation in Building Bottlenecks
Directory of Open Access Journals (Sweden)
Lidi Huang
2013-01-01
In high-density public buildings, evacuation is difficult. In this paper, we therefore propose a novel quantitative evacuation model to ensure people's safety and reduce the risk of crowding. We analyze the mechanism of arch-like clogging phenomena during evacuation and the influencing factors in emergency situations at bottleneck passages; we then design a model based on crowd dynamics and apply it to a stadium example. The example is used to compare evacuation results for crowd density with different egress widths in stranded zones. The results show that the proposed model can identify safe and dangerous egress widths in performance-based design and can help evacuation routes to be selected and optimized.
Topographic Steering of Enhanced Ice Flow at the Bottleneck Between East and West Antarctica
DEFF Research Database (Denmark)
Winter, Kate; Ross, Neil; Ferraccioli, Fausto
2018-01-01
Hypothesized drawdown of the East Antarctic Ice Sheet through the “bottleneck” zone between East and West Antarctica would have significant impacts for a large proportion of the Antarctic Ice Sheet. Earth observation satellite orbits and a sparseness of radio echo sounding data have restricted...... investigations of basal boundary controls on ice flow in this region until now. New airborne radio echo sounding surveys reveal complex topography of high relief beneath the southernmost Weddell/Ross ice divide, with three subglacial troughs connecting interior Antarctica to the Foundation and Patuxent Ice...... Streams and Siple Coast ice streams. These troughs route enhanced ice flow through the interior of Antarctica but limit potential drawdown of the East Antarctic Ice Sheet through the bottleneck zone. In a thinning or retreating scenario, these topographically controlled corridors of enhanced flow could...
A Real-time Breakdown Prediction Method for Urban Expressway On-ramp Bottlenecks
Ye, Yingjun; Qin, Guoyang; Sun, Jian; Liu, Qiyuan
2018-01-01
Breakdown occurrence on expressways is considered to be related to various factors. To investigate the association between breakdowns and these factors, a Bayesian network (BN) model is adopted in this paper. Based on breakdown events identified at 10 urban expressway on-ramps in Shanghai, China, 23 parameters before breakdowns are extracted, including dynamic environment conditions aggregated over 5-minute intervals and static geometry features. Data from different time periods are used to predict breakdowns. Results indicate that models using data from 5-10 min prior to breakdown give the best predictions, with prediction accuracies higher than 73%. Moreover, a single unified model for all bottlenecks is also built and shows reasonably good prediction performance, with a breakdown classification accuracy of about 75% at best. Additionally, to simplify the model's parameter input, a random forests (RF) model is adopted to identify the key variables. Modeling with the 7 selected parameters, the refined BN model can predict breakdowns with adequate accuracy.
From Bottleneck to Breakthrough: Urbanization and the Future of Biodiversity Conservation.
Sanderson, Eric W; Walston, Joseph; Robinson, John G
2018-06-01
For the first time in the Anthropocene, the global demographic and economic trends that have resulted in unprecedented destruction of the environment are now creating the necessary conditions for a possible renaissance of nature. Drawing reasonable inferences from current patterns, we can predict that 100 years from now, the Earth could be inhabited by between 6 and 8 billion people, with very few remaining in extreme poverty, most living in towns and cities, and nearly all participating in a technologically driven, interconnected market economy. Building on the scholarship of others in demography, economics, sociology, and conservation biology, here, we articulate a theory of social-environmental change that describes the simultaneous and interacting effects of urban lifestyles on fertility, poverty alleviation, and ideation. By recognizing the shifting dynamics of these macrodrivers, conservation practice has the potential to transform itself from a discipline managing declines ("bottleneck") to a transformative movement of recovery ("breakthrough").
Cahoon, Edgar B; Shockey, Jay M; Dietrich, Charles R; Gidda, Satinder K; Mullen, Robert T; Dyer, John M
2007-06-01
Oilseeds provide a unique platform for the production of high-value fatty acids that can replace non-sustainable petroleum and oceanic sources of specialty chemicals and aquaculture feed. However, recent efforts to engineer the seeds of crop and model plant species to produce new types of fatty acids, including hydroxy and conjugated fatty acids for industrial uses and long-chain omega-3 polyunsaturated fatty acids for farmed fish feed, have met with only modest success. The collective results from these studies point to metabolic 'bottlenecks' in the engineered plant seeds that substantially limit the efficient or selective flux of unusual fatty acids between different substrate pools and ultimately into storage triacylglycerol. Evidence is emerging that diacylglycerol acyltransferase 2, which catalyzes the final step in triacylglycerol assembly, is an important contributor to the synthesis of unusual fatty acid-containing oils, and is likely to be a key target for future oilseed metabolic engineering efforts.
Development of RBDGG Solver and Its Application to System Reliability Analysis
International Nuclear Information System (INIS)
Kim, Man Cheol
2010-01-01
For the purpose of making system reliability analysis easier and more intuitive, the RBDGG (Reliability Block Diagram with General Gates) methodology was introduced as an extension of the conventional reliability block diagram. The advantage of the RBDGG methodology is that the structure of an RBDGG model is very similar to the actual structure of the analyzed system, so modeling a system for reliability and unavailability analysis becomes very intuitive and easy. The main idea behind the development of the RBDGG methodology is similar to that behind the RGGG (Reliability Graph with General Gates) methodology, which is an extension of the conventional reliability graph. The newly proposed methodology is now implemented in a software tool, RBDGG Solver, developed as a Win32 console application. RBDGG Solver receives information on the failure modes and failure probabilities of each component in the system, along with the connection structure and connection logics among the components. Based on the received information, RBDGG Solver automatically generates a system reliability analysis model and then provides the analysis results. In this paper, the application of RBDGG Solver to the reliability analysis of an example system and verification of the calculation results are provided to demonstrate how RBDGG Solver is used for system reliability analysis
Influence of an SN solver in a fine-mesh neutronics/thermal-hydraulics framework
International Nuclear Information System (INIS)
Jareteg, Klas; Vinai, Paolo; Demaziere, Christophe; Sasic, Srdjan
2015-01-01
In this paper a study on the influence of a neutron discrete ordinates (S_N) solver within a fine-mesh neutronic/thermal-hydraulic methodology is presented. The methodology consists of coupling a neutronic solver with a single-phase fluid solver, and it is aimed at computing the two fields on a three-dimensional (3D) sub-pin level. The cross-sections needed for the neutron transport equations are pre-generated using a Monte Carlo approach. The coupling is resolved in an iterative manner with full convergence of both fields. A conservative transfer of the full 3D information is achieved, allowing for a proper coupling between the neutronic and the thermal-hydraulic meshes on the finest calculated scales. The discrete ordinates solver is benchmarked against a Monte Carlo reference solution for a two-dimensional (2D) system. The results confirm the need for a high number of ordinates, with S_16 for 16 energy groups giving satisfactory accuracy in k_eff and the scalar flux profile. The coupled framework is used to compare the S_N implementation with a solver based on the neutron diffusion approximation for a full 3D system of a quarter of a symmetric 7x7 array in an infinite lattice setup. In this case, the impact of the discrete ordinates solver proves to be significant for the coupled system, as demonstrated in the calculations of the temperature distributions. (author)
Accelerated Cyclic Reduction: A Distributed-Memory Fast Solver for Structured Linear Systems
Chávez, Gustavo; Turkiyyah, George; Zampini, Stefano; Ltaief, Hatem; Keyes, David E.
2017-12-15
We present Accelerated Cyclic Reduction (ACR), a distributed-memory fast solver for rank-compressible block tridiagonal linear systems arising from the discretization of elliptic operators, developed here for three dimensions. Algorithmic synergies between Cyclic Reduction and hierarchical matrix arithmetic operations result in a solver that has O(kN log N(log N + k^2)) arithmetic complexity and O(kN log N) memory footprint, where N is the number of degrees of freedom and k is the rank of a block in the hierarchical approximation, and which exhibits substantial concurrency. We provide a baseline for performance and applicability by comparing with the multifrontal method with and without hierarchical semi-separable matrices, with algebraic multigrid and with the classic cyclic reduction method. Over a set of large-scale elliptic systems with features of nonsymmetry and indefiniteness, the robustness of the direct solvers extends beyond that of the multigrid solver, and relative to the multifrontal approach ACR has lower or comparable execution time and size of the factors, with substantially lower numerical ranks. ACR exhibits good strong and weak scaling in a distributed context and, as with any direct solver, is advantageous for problems that require the solution of multiple right-hand sides. Numerical experiments show that the rank k patterns are of O(1) for the Poisson equation and of O(n) for the indefinite Helmholtz equation. The solver is ideal in situations where low-accuracy solutions are sufficient, or otherwise as a preconditioner within an iterative method.
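Setting the hierarchical-matrix machinery aside, the recursive skeleton of cyclic reduction is easy to exhibit on a scalar tridiagonal system: each level eliminates the odd-indexed unknowns, halving the problem, which is where the log N factor in the complexity comes from. A sketch under that simplification (scalar entries rather than the rank-compressible blocks ACR operates on; not the paper's implementation):

```python
import numpy as np

def cr_solve(a, b, c, d):
    """Cyclic reduction for a tridiagonal system.
    a, b, c: sub-, main-, super-diagonals (length n, with a[0] = c[-1] = 0)."""
    n = len(b)
    if n == 1:
        return d / b
    # pad with identity rows so boundary indices need no special cases
    A = np.concatenate(([0.0], a, [0.0]))
    B = np.concatenate(([1.0], b, [1.0]))
    C = np.concatenate(([0.0], c, [0.0]))
    D = np.concatenate(([0.0], d, [0.0]))
    ev = np.arange(1, n + 1, 2)          # padded positions of x0, x2, ...
    al = A[ev] / B[ev - 1]
    ga = C[ev] / B[ev + 1]
    # combine each even equation with its two neighbors to eliminate odds
    a2 = -al * A[ev - 1]
    b2 = B[ev] - al * C[ev - 1] - ga * A[ev + 1]
    c2 = -ga * C[ev + 1]
    d2 = D[ev] - al * D[ev - 1] - ga * D[ev + 1]
    x_even = cr_solve(a2, b2, c2, d2)    # recurse on the half-size system
    # back-substitute the odd-indexed unknowns
    x = np.zeros(n + 2)
    x[ev] = x_even
    od = np.arange(2, n + 1, 2)          # padded positions of x1, x3, ...
    x[od] = (D[od] - A[od] * x[od - 1] - C[od] * x[od + 1]) / B[od]
    return x[1:n + 1]
```

In ACR the scalar divisions above become hierarchical-matrix inversions and the products become compressed block multiplications, each costing O(k^2 m log m) rather than O(1), which yields the stated complexity.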
Fixation times in differentiation and evolution in the presence of bottlenecks, deserts, and oases.
Chou, Tom; Wang, Yu
2015-05-07
Cellular differentiation and evolution are stochastic processes that can involve multiple types (or states) of particles moving on a complex, high-dimensional state-space or "fitness" landscape. Cells of each specific type can thus be quantified by their population at a corresponding node within a network of states. Their dynamics across the state-space network involve genotypic or phenotypic transitions that can occur upon cell division, such as during symmetric or asymmetric cell differentiation, or upon spontaneous mutation. Here, we use general multi-type branching processes to study first passage time statistics for a single cell to appear in a specific state. Our approach readily allows for nonexponentially distributed waiting times between transitions, reflecting, e.g., the cell cycle. For simplicity, we restrict most of our detailed analysis to exponentially distributed waiting times (Poisson processes). We present results for a sequential evolutionary process in which L successive transitions propel a population from a "wild-type" state to a given "terminally differentiated," "resistant," or "cancerous" state. Analytic and numerical results are also found for first passage times across an evolutionary chain containing a node with increased death or proliferation rate, representing a desert/bottleneck or an oasis. Processes involving cell proliferation are shown to be "nonlinear" (even though mean-field equations for the expected particle numbers are linear), resulting in first passage time statistics that depend on the position of the bottleneck or oasis. Our results highlight the sensitivity of stochastic measures to cell division fate and quantify the limitations of using certain approximations (such as the fixed-population and mean-field assumptions) in evaluating fixation times.
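In the simplest case the abstract describes, a single lineage undergoing L successive Poisson transitions with no proliferation or death, the first passage time is a sum of independent exponentials, so its mean is the sum of the inverse rates and a slow "bottleneck" state dominates. A toy Monte Carlo illustrating this limiting case (the rate values are illustrative, not from the paper):

```python
import random

def first_passage_time(rates, rng):
    """Traversal time of a chain of states, with exponentially
    distributed (Poisson-process) waiting time at each state."""
    return sum(rng.expovariate(k) for k in rates)

# L = 5 transitions; the third state is a bottleneck with a 10x slower rate
rates = [1.0, 1.0, 0.1, 1.0, 1.0]
rng = random.Random(42)
mean_fpt = sum(first_passage_time(rates, rng) for _ in range(20000)) / 20000
# the analytic mean is sum(1/k) = 14.0 for this chain
```

Once proliferation is added, the first passage time of the earliest successful lineage is no longer a simple sum, and, as the abstract notes, its statistics depend on where along the chain the bottleneck sits.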
Yamaguchi, Motonori; Logan, Gordon D; Li, Vanessa
2013-08-01
Does response selection select words or letters in skilled typewriting? Typing performance involves hierarchically organized control processes: an outer loop that controls word level processing, and an inner loop that controls letter (or keystroke) level processing. The present study addressed whether response selection occurs in the outer loop or the inner loop by using the psychological refractory period (PRP) paradigm in which Task1 required typing single words and Task2 required vocal responses to tones. The number of letters (string length) in the words was manipulated to discriminate selection of words from selection of keystrokes. In Experiment 1, the PRP effect depended on string length of words in Task1, suggesting that response selection occurs in the inner loop. To assess contributions of the outer loop, the influence of string length was examined in a lexical-decision task that also involves word encoding and lexical access (Experiment 2), or to-be-typed words were preexposed so outer-loop processing could finish before typing started (Experiment 3). Response time for Task2 (RT2) did not depend on string length with lexical decision, and RT2 still depended on string length with typing preexposed strings. These results support the inner-loop locus of the PRP effect. In Experiment 4, typing was performed as Task2, and the effect of string length on typing RT interacted with stimulus onset asynchrony superadditively, implying that another bottleneck also exists in the outer loop. We conclude that there are at least two bottleneck processes in skilled typewriting.
Juvenile bottlenecks and salinity shape grey mullet assemblages in Mediterranean estuaries
Cardona, Luis; Hereu, Bernat; Torras, Xavier
2008-05-01
Previous research has suggested that competitive bottlenecks may exist for the Mediterranean grey mullets (Osteichthyes, Mugilidae) at the fry stage with the exotic Cyprinus carpio (Osteichthyes, Cyprinidae) playing a central role. As a consequence, the structure of grey mullet assemblages at later stages is thought to reflect previous competition as well as differences in osmoregulatory skills. This paper tests that hypothesis by examining four predictions about the relative abundance of five grey mullet species in 42 Western Mediterranean estuary sites from three areas (Aiguamolls de l'Empordà, Ebro Delta and Minorca) differing in the salinity level and occurrence of C. carpio. Field data confirmed the predictions as: (1) Liza aurata and Mugil cephalus were scarce everywhere and never dominated the assemblage; (2) Liza saliens dominated the assemblage where the salinity level was higher than 13; (3) Liza ramado always dominated the assemblage where the salinity level was lower than 13 and C. carpio was present; and (4) Chelon labrosus dominated the assemblage only where the salinity level was lower than 13 and C. carpio was absent. The catch per unit effort of C. labrosus of any size was smaller in the presence of C. carpio than where it had not been introduced, which is in agreement with the juvenile competitive bottleneck hypothesis. Discriminant analysis confirmed that the assemblage structure was linked to the salinity level and the occurrence of C. carpio for both early juveniles and late juveniles as well as adults. The data reported here reveal that the structure of grey mullet assemblages inhabiting Mediterranean estuaries is determined by salinity and competitive interactions at the fry stage.
Moghadam, Saeed Montazeri; Seyyedsalehi, Seyyed Ali
2018-05-31
Nonlinear components extracted from deep structures of bottleneck neural networks exhibit a great ability to express the input space in a low-dimensional manifold. Sharing and combining the components boost the capability of the neural networks to synthesize and interpolate new and imaginary data. This synthesis is possibly a simple model of imagination in the human brain, where the components are expressed in a nonlinear low-dimensional manifold. The current paper introduces a novel Dynamic Deep Bottleneck Neural Network to analyze and extract three main features of videos regarding the expression of emotions on the face. These main features are identity, emotion, and expression intensity, which lie in three different sub-manifolds of one nonlinear general manifold. The proposed model, enjoying the advantages of recurrent networks, was used to analyze the sequence and dynamics of information in videos. Notably, this model also has the potential to synthesize new videos showing variations of one specific emotion on the face of unknown subjects. Experiments on the discrimination and recognition ability of the extracted components showed that the proposed model achieves an average accuracy of 97.77% in the recognition of six prominent emotions (Fear, Surprise, Sadness, Anger, Disgust, and Happiness) and 78.17% accuracy in the recognition of intensity. The produced videos revealed variations from neutral to the apex of an emotion on the face of the unfamiliar test subject, with an average similarity of 0.8 to the reference videos on the SSIM scale.
Deploy production sliding mesh capability with linear solver benchmarking.
Energy Technology Data Exchange (ETDEWEB)
Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thomas, Stephen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Barone, Matthew F. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williams, Alan B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ananthan, Shreyas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knaus, Robert C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Overfelt, James [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sprague, Mike [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rood, Jon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2018-02-01
overall simulation time when using the full Tpetra solver stack and nearly 35% when using a mixed Tpetra- Hypre-based solver stack. The report also highlights the project achievement of surpassing the 1 billion element mesh scale for a production V27 hybrid mesh. A detailed timing breakdown is presented that again suggests work to be done in the setup events associated with the linear system. In order to mitigate these initialization costs, several application paths have been explored, all of which are designed to reduce the frequency of matrix reinitialization. Methods such as removing Jacobian entries on the dynamic matrix columns (in concert with increased inner equation iterations), and lagging of Jacobian entries have reduced setup times at the cost of numerical stability. Artificially increasing, or bloating, the matrix stencil to ensure that full Jacobians are included is developed with results suggesting that this methodology is useful in decreasing reinitialization events without loss of matrix contributions. With the above foundational advances in computational capability, the project is well positioned to begin scientific inquiry on a variety of wind-farm physics such as turbine/turbine wake interactions.
A parallel solver for huge dense linear systems
Badia, J. M.; Movilla, J. L.; Climente, J. I.; Castillo, M.; Marqués, M.; Mayo, R.; Quintana-Ortí, E. S.; Planelles, J.
2011-11-01
HDSS (Huge Dense Linear System Solver) is a Fortran Application Programming Interface (API) to facilitate the parallel solution of very large dense systems to scientists and engineers. The API makes use of parallelism to yield an efficient solution of the systems on a wide range of parallel platforms, from clusters of processors to massively parallel multiprocessors. It exploits out-of-core strategies to leverage the secondary memory in order to solve huge linear systems of order O(100,000). The API is based on the parallel linear algebra library PLAPACK, and on its Out-Of-Core (OOC) extension POOCLAPACK. Both PLAPACK and POOCLAPACK use the Message Passing Interface (MPI) as the communication layer and BLAS to perform the local matrix operations. The API provides a friendly interface to the users, hiding almost all the technical aspects related to the parallel execution of the code and the use of the secondary memory to solve the systems. In particular, the API can automatically select the best way to store and solve the systems, depending on the dimension of the system, the number of processes and the main memory of the platform. Experimental results on several parallel platforms report high performance, reaching more than 1 TFLOP with 64 cores to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors.
New version program summary
Program title: Huge Dense System Solver (HDSS)
Catalogue identifier: AEHU_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 87 062
No. of bytes in distributed program, including test data, etc.: 1 069 110
Distribution format: tar.gz
Programming language: Fortran90, C
Computer: Parallel architectures: multiprocessors, computer clusters
Operating system
A RADIATION TRANSFER SOLVER FOR ATHENA USING SHORT CHARACTERISTICS
International Nuclear Information System (INIS)
Davis, Shane W.; Stone, James M.; Jiang Yanfei
2012-01-01
We describe the implementation of a module for the Athena magnetohydrodynamics (MHD) code that solves the time-independent, multi-frequency radiative transfer (RT) equation on multidimensional Cartesian simulation domains, including scattering and non-local-thermodynamic-equilibrium (non-LTE) effects. The module is based on well known and well tested algorithms developed for modeling stellar atmospheres, including the method of short characteristics to solve the RT equation, accelerated Lambda iteration to handle scattering and non-LTE effects, and parallelization via domain decomposition. The module serves several purposes: it can be used to generate spectra and images, to compute a variable Eddington tensor (VET) for full radiation MHD simulations, and to calculate the heating and cooling source terms in the MHD equations in flows where radiation pressure is small compared with gas pressure. For the latter case, the module is combined with the standard MHD integrators using operator splitting: we describe this approach in detail, including a new constraint on the time step for stability due to radiation diffusion modes. Implementation of the VET method for radiation pressure dominated flows is described in a companion paper. We present results from a suite of test problems for both the RT solver itself and for dynamical problems that include radiative heating and cooling. These tests demonstrate that the radiative transfer solution is accurate and confirm that the operator split method is stable, convergent, and efficient for problems of interest. We demonstrate there is no need to adopt ad hoc assumptions of questionable accuracy to solve RT problems in concert with MHD: the computational cost for our general-purpose module for simple (e.g., LTE gray) problems can be comparable to or less than a single time step of Athena's MHD integrators, and only a few times more expensive than that for more general (non-LTE) problems.
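In its simplest one-dimensional form, the short-characteristics method is a cell-by-cell sweep of the formal solution along a ray: holding the source function S constant over a cell of optical depth Δτ gives I_out = I_in e^{-Δτ} + S(1 − e^{-Δτ}). A sketch of that sweep, assuming the constant-source simplification (the Athena module works on multidimensional grids and interpolates S along each characteristic; this 1D version is only illustrative):

```python
import math

def sweep_ray(S, dtau, I_in=0.0):
    """Formal solution of the RT equation along one ray, cell by cell,
    with the source function held constant over each cell."""
    I = I_in
    for s, dt in zip(S, dtau):
        att = math.exp(-dt)             # attenuation of the incoming intensity
        I = I * att + s * (1.0 - att)   # plus emission integrated over the cell
    return I

# optically thick, constant S = 1: the intensity saturates toward S
print(sweep_ray([1.0] * 50, [1.0] * 50))
```

Because each cell needs only the upwind intensity, the sweep visits cells in order of increasing optical depth, which is what makes domain-decomposed parallelization of the method nontrivial.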
Using Python to Construct a Scalable Parallel Nonlinear Wave Solver
Mandli, Kyle
2011-01-01
Computational scientists seek to provide efficient, easy-to-use tools and frameworks that enable application scientists within a specific discipline to build and/or apply numerical models with up-to-date computing technologies that can be executed on all available computing systems. Although many tools could be useful for groups beyond a specific application, it is often difficult and time consuming to combine existing software, or to adapt it for a more general purpose. Python enables a high-level approach where a general framework can be supplemented with tools written for different fields and in different languages. This is particularly important when a large number of tools are necessary, as is the case for high performance scientific codes. This motivated our development of PetClaw, a scalable distributed-memory solver for time-dependent nonlinear wave propagation, as a case study for how Python can be used as a high-level framework leveraging a multitude of codes, efficient both in the reuse of code and in programmer productivity. We present scaling results for computations on up to four racks of Shaheen, an IBM BlueGene/P supercomputer at King Abdullah University of Science and Technology. One particularly important issue that PetClaw has faced is the overhead associated with dynamic loading leading to catastrophic scaling. We use the walla library to solve the issue, which does so by supplanting high-cost filesystem calls with MPI operations at a low enough level that developers may avoid any changes to their codes.
Control of error and convergence in ODE solvers
International Nuclear Information System (INIS)
Gustafsson, K.
1992-03-01
Feedback is a general principle that can be used in many different contexts. In this thesis it is applied to numerical integration of ordinary differential equations. An advanced integration method includes parameters and variables that should be adjusted during the execution. In addition, the integration method should be able to automatically handle situations such as: initialization, restart after failures, etc. In this thesis we regard the algorithms for parameter adjustment and supervision as a controller. The controller measures different variables that tell the current status of the integration, and based on this information it decides how to continue. The design of the controller is vital in order to accurately and efficiently solve a large class of ordinary differential equations. The application of feedback control may appear far-fetched, but numerical integration methods are in fact dynamical systems. This is often overlooked in traditional numerical analysis. We derive dynamic models that describe the behavior of the integration method as well as the standard control algorithms in use today. Using these models it is possible to analyze properties of current algorithms, and also explain some generally observed misbehaviors. Further, we use the acquired insight to derive new and improved control algorithms, both for explicit and implicit Runge-Kutta methods. In the explicit case, the new controller gives good overall performance. In particular it overcomes the problem with oscillating stepsize sequences that is often experienced when the stepsize is restricted by numerical stability. The controller for implicit methods is designed so that it tracks changes in the differential equation better than current algorithms. In addition, it includes a new strategy for the equation solver, which allows the stepsize to vary more freely. This leads to smoother error control without excessive operations on the iteration matrix. (87 refs.) (au)
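The PI-style step-size controller analyzed in the thesis can be sketched as follows. The gains kI = 0.3 and kP = 0.4 (per unit of error order) are commonly quoted values; the exact constants, limiter bounds, and safety factor here are illustrative choices, not the thesis's tuned values:

```python
def pi_step_controller(h, err, err_prev, tol, order,
                       k_i=0.3, k_p=0.4, safety=0.9):
    """Gustafsson-style PI step-size controller for an embedded
    Runge-Kutta pair: integral action on the current error estimate,
    proportional action on the error trend, to avoid the oscillating
    stepsize sequences of the classical I-only controller."""
    p = order + 1                                  # local error ~ h**p
    factor = safety * (tol / err) ** (k_i / p) \
                    * (err_prev / err) ** (k_p / p)
    factor = min(5.0, max(0.2, factor))            # limit step-size changes
    return h * factor
```

With err above tol the step shrinks; with err well below tol and a decreasing trend it grows, and the limiter keeps changes within a factor of 5 either way.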
A multi-solver quasi-Newton method for the partitioned simulation of fluid-structure interaction
International Nuclear Information System (INIS)
Degroote, J; Annerel, S; Vierendeels, J
2010-01-01
In partitioned fluid-structure interaction simulations, the flow equations and the structural equations are solved separately. Consequently, the stresses and displacements on both sides of the fluid-structure interface are not automatically in equilibrium. Coupling techniques like Aitken relaxation and the Interface Block Quasi-Newton method with approximate Jacobians from Least-Squares models (IBQN-LS) enforce this equilibrium, even with black-box solvers. However, all existing coupling techniques use only one flow solver and one structural solver. To benefit from the large number of multi-core processors in modern clusters, a new Multi-Solver Interface Block Quasi-Newton (MS-IBQN-LS) algorithm has been developed. This algorithm uses more than one flow solver and structural solver, each running in parallel on a number of cores. One-dimensional and three-dimensional numerical experiments demonstrate that the run time of a simulation decreases as the number of solvers increases, albeit at a slower pace. Hence, the presented multi-solver algorithm accelerates fluid-structure interaction calculations by increasing the number of solvers, especially when the run time does not decrease further if more cores are used per solver.
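Aitken relaxation, the simplest of the coupling techniques mentioned above, can be sketched for black-box solvers as a relaxed fixed-point iteration on the interface state. The function names and the toy linear maps in the usage below are stand-ins; real solvers exchange interface stresses and displacements:

```python
import numpy as np

def aitken_coupled_solve(fluid, structure, x0, omega0=0.5,
                         tol=1e-10, maxit=100):
    """Gauss-Seidel coupling of two black-box solvers with Aitken
    dynamic relaxation of the interface residual r_k = S(F(x_k)) - x_k."""
    x = x0.copy()
    omega = omega0
    r_prev = None
    for k in range(maxit):
        x_tilde = structure(fluid(x))          # one fixed-point sweep
        r = x_tilde - x                        # interface residual
        if np.linalg.norm(r) < tol:
            return x, k
        if r_prev is not None:
            dr = r - r_prev
            omega = -omega * (r_prev @ dr) / (dr @ dr)   # Aitken update
        x = x + omega * r                      # relaxed update
        r_prev = r
    return x, maxit
```

For a linear problem the Aitken factor reproduces the optimal scalar relaxation after one update, which is why the method is a popular baseline against the quasi-Newton (IBQN-LS) schemes the paper extends.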
Santos, Eduarda M; Hamilton, Patrick B; Coe, Tobias S; Ball, Jonathan S; Cook, Alastair C; Katsiadaki, Ioanna; Tyler, Charles R
2013-10-15
Pollution is a significant environmental pressure on fish populations in both freshwater and marine environments. Populations subjected to chronic exposure to pollutants can experience impacts ranging from altered reproductive capacity to changes in population genetic structure. Few studies, however, have examined the reproductive vigor of individuals within populations inhabiting environments characterized by chronic pollution. In this study we undertook an analysis of populations of three-spined sticklebacks (Gasterosteus aculeatus) from polluted sites, to determine levels of genetic diversity, assess for evidence of historic population genetic bottlenecks and determine the reproductive competitiveness of males from these locations. The sites chosen included locations in the River Aire, the River Tees and the River Birket, English rivers that have been impacted by pollution from industrial and/or domestic effluents for over 100 years. Male reproductive competitiveness was determined via competitive breeding experiments with males and females derived from a clean water site, employing DNA microsatellites to determine parentage outcome. Populations of stickleback collected from the three historically polluted sites showed evidence of recent population bottlenecks, although only the River Aire population showed low genetic diversity. In contrast, fish collected from two relatively unpolluted sites within the River Gowy and Houghton Springs showed weak, or no evidence of such bottlenecks. Nevertheless, males derived from polluted sites were able to reproduce successfully in competition with males derived from clean water exposures, indicating that these bottlenecks have not resulted in any substantial loss of reproductive fitness in males. Copyright © 2013 Elsevier B.V. All rights reserved.
Pedersen, Casper-Emil T; Lohmueller, Kirk E; Grarup, Niels; Bjerregaard, Peter; Hansen, Torben; Siegismund, Hans R; Moltke, Ida; Albrechtsen, Anders
2017-02-01
The genetic consequences of population bottlenecks on patterns of deleterious genetic variation in human populations are of tremendous interest. Based on exome sequencing of 18 Greenlandic Inuit we show that the Inuit have undergone a severe ∼20,000-year-long bottleneck. This has led to a markedly more extreme distribution of allele frequencies than seen for any other human population tested to date, making the Inuit the perfect population for investigating the effect of a bottleneck on patterns of deleterious variation. When comparing proxies for genetic load that assume an additive effect of deleterious alleles, the Inuit show, at most, a slight increase in load compared to European, East Asian, and African populations. Specifically, we observe Inuit. In contrast, proxies for genetic load under a recessive model suggest that the Inuit have a significantly higher load (20% increase or more) compared to other less bottlenecked human populations. Forward simulations under realistic models of demography support our empirical findings, showing up to a 6% increase in the genetic load for the Inuit population across all models of dominance. Further, the Inuit population carries fewer deleterious variants than other human populations, but those that are present tend to be at higher frequency than in other populations. Overall, our results show how recent demographic history has affected patterns of deleterious variants in human populations. Copyright © 2017 by the Genetics Society of America.
s-Step Krylov Subspace Methods as Bottom Solvers for Geometric Multigrid
Energy Technology Data Exchange (ETDEWEB)
Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lijewski, Mike [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Almgren, Ann [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Straalen, Brian Van [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Carson, Erin [Univ. of California, Berkeley, CA (United States); Knight, Nicholas [Univ. of California, Berkeley, CA (United States); Demmel, James [Univ. of California, Berkeley, CA (United States)
2014-08-14
Geometric multigrid solvers within adaptive mesh refinement (AMR) applications often reach a point where further coarsening of the grid becomes impractical as individual subdomain sizes approach unity. At this point the most common solution is to use a bottom solver, such as BiCGStab, to reduce the residual by a fixed factor at the coarsest level. Each iteration of BiCGStab requires multiple global reductions (MPI collectives). As the number of BiCGStab iterations required for convergence grows with problem size, and the time for each collective operation increases with machine scale, bottom solves in large-scale applications can constitute a significant fraction of the overall multigrid solve time. In this paper, we implement, evaluate, and optimize a communication-avoiding s-step formulation of BiCGStab (CABiCGStab for short) as a high-performance, distributed-memory bottom solver for geometric multigrid solvers. This is the first time s-step Krylov subspace methods have been leveraged to improve multigrid bottom solver performance. We use a synthetic benchmark for detailed analysis and integrate the best implementation into BoxLib in order to evaluate the benefit of an s-step Krylov subspace method on the multigrid solves found in the applications LMC and Nyx on up to 32,768 cores on the Cray XE6 at NERSC. Overall, we see bottom solver improvements of up to 4.2x on synthetic problems and up to 2.7x in real applications. This results in as much as a 1.5x improvement in solver performance in real applications.
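For reference, each iteration of textbook BiCGStab performs several independent dot products, and on a distributed machine each of these becomes a global reduction (an MPI_Allreduce); the s-step CABiCGStab of the paper batches the communication to once every s iterations. The serial NumPy sketch below (plain BiCGStab, not the paper's communication-avoiding variant) marks where those reductions occur:

```python
import numpy as np

def bicgstab(A, b, x0=None, tol=1e-8, maxit=200):
    """Textbook BiCGStab (van der Vorst). Each dot product marked below
    would be a separate global reduction on a distributed machine."""
    n = b.size
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    r0 = r.copy()                            # shadow residual
    rho = alpha = omega = 1.0
    v = np.zeros(n)
    p = np.zeros(n)
    for k in range(maxit):
        rho_new = r0 @ r                     # reduction 1
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r0 @ v)               # reduction 2
        s = r - alpha * v
        t = A @ s
        omega = (t @ s) / (t @ t)            # reductions 3 and 4
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol * np.linalg.norm(b):   # reduction 5
            return x, k + 1
    return x, maxit
```

Counting five reductions per iteration makes clear why, at 32K cores on a tiny coarse grid, communication rather than flops dominates the bottom solve.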
Directory of Open Access Journals (Sweden)
Alice Mühlroth
2013-11-01
Full Text Available The importance of n-3 long chain polyunsaturated fatty acids (LC-PUFAs) for human health has received more focus over the last decades, and the global consumption of n-3 LC-PUFAs has increased. Seafood, the natural n-3 LC-PUFA source, is harvested beyond a sustainable capacity, and it is therefore imperative to develop alternative n-3 LC-PUFA sources for both eicosapentaenoic acid (EPA, 20:5n-3) and docosahexaenoic acid (DHA, 22:6n-3). Genera of algae such as Nannochloropsis, Schizochytrium, Isochrysis and Phaeodactylum within the kingdom Chromista have received attention due to their ability to produce n-3 LC-PUFAs. Knowledge of LC-PUFA synthesis and its regulation in algae at the molecular level is fragmentary and represents a bottleneck for attempts to enhance n-3 LC-PUFA levels for industrial production. In the present review, Phaeodactylum tricornutum is used to exemplify the synthesis and compartmentalization of n-3 LC-PUFAs. Based on recent transcriptome data, a co-expression network of 106 genes involved in lipid metabolism has been created. Together with recent molecular biological and metabolic studies, a model pathway for n-3 LC-PUFA synthesis in P. tricornutum has been proposed and is compared to industrialized species of Chromista. Limitations of n-3 LC-PUFA synthesis by enzymes such as thioesterases, elongases, acyl-CoA synthetases and acyltransferases are discussed, and metabolic bottlenecks are hypothesized, such as the supply of acetyl-CoA and NADPH. Future industrialization will depend on optimization of chemical compositions and increased biomass production, which can be achieved by exploitation of the physiological potential, by selective breeding and by genetic engineering.
Directory of Open Access Journals (Sweden)
Yingni Zhai
2014-10-01
Full Text Available Purpose: A decomposition heuristic based on multi-bottleneck machines for large-scale job shop scheduling problems (JSP) is proposed. Design/methodology/approach: In the algorithm, a number of sub-problems are constructed by iteratively decomposing the large-scale JSP according to the process route of each job. The solution of the large-scale JSP can then be obtained by iteratively solving the sub-problems. In order to improve the sub-problems' solving efficiency and the solution quality, a detection method for multi-bottleneck machines based on the critical path is proposed. Thereby, the unscheduled operations can be decomposed into bottleneck operations and non-bottleneck operations. According to the principle of “the bottleneck leads the performance of the whole manufacturing system” in TOC (Theory of Constraints), the bottleneck operations are scheduled by a genetic algorithm for high solution quality, and the non-bottleneck operations are scheduled by dispatching rules to improve solving efficiency. Findings: In the process of the sub-problems' construction, partial operations in the previously scheduled sub-problem are moved into the successive sub-problem for re-optimization. This strategy improves the solution quality of the algorithm. In the process of solving the sub-problems, evaluating a chromosome's fitness by predicting the global scheduling objective value improves the solution quality. Research limitations/implications: In this research, there are some assumptions that reduce the complexity of the large-scale scheduling problem. They are as follows: the processing route of each job is predetermined, and the processing time of each operation is fixed. There is no machine breakdown, and no preemption of the operations is allowed. These assumptions should be reconsidered if the algorithm is used in an actual job shop. Originality/value: The research provides an efficient scheduling method for the
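The idea of detecting bottleneck machines via the critical path can be sketched on a toy precedence DAG of operations. This is an illustrative model only; the `ops`/`edges` representation and the single-pass longest-path computation are assumptions, and the paper's full detection method, genetic algorithm, and dispatching rules are not reproduced:

```python
from collections import defaultdict

def critical_path_machines(ops, edges):
    """ops: {op_id: (machine, duration)}; edges: precedence pairs (u, v).
    Returns (makespan, set of machines owning an operation on a longest
    path) -- a toy proxy for 'bottleneck machine' detection."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    # Kahn-style topological order
    order, stack = [], [op for op in ops if indeg[op] == 0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    # earliest start times (forward longest-path DP)
    start = {op: 0 for op in ops}
    for u in order:
        for v in succ[u]:
            start[v] = max(start[v], start[u] + ops[u][1])
    makespan = max(start[u] + ops[u][1] for u in ops)
    # longest tail from each op (backward DP); op is critical iff
    # start + tail reaches the makespan
    tail = {op: ops[op][1] for op in ops}
    for u in reversed(order):
        if succ[u]:
            tail[u] = ops[u][1] + max(tail[v] for v in succ[u])
    critical = {ops[u][0] for u in ops if start[u] + tail[u] == makespan}
    return makespan, critical
```

Machines flagged here would receive the expensive genetic-algorithm scheduling, with dispatching rules left for the rest.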
Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications
Sun, Xian-He
1997-01-01
Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm
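Tridiagonal systems are the serial kernel underlying such Poisson solvers; the Thomas algorithm below is the sequential building block that methods like the Parallel Diagonal Dominant (PDD) algorithm partition across processors. This is a plain serial sketch, not the PDD partitioning itself:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.
    a: sub-diagonal (length n-1), b: diagonal (length n),
    c: super-diagonal (length n-1), d: right-hand side (length n).
    Assumes diagonal dominance (no pivoting), as holds for the
    standard Poisson discretization."""
    n = len(b)
    cp = np.empty(n - 1)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The forward/backward data dependence is what makes this kernel serial in nature, motivating the reformulated parallel algorithms the abstract refers to.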
International Nuclear Information System (INIS)
Van Werven, M.J.N.; De Joode, J.; Scheepers, M.J.J.
2006-02-01
It is uncertain how the electricity system in Europe, and in particular northwest Europe and the Netherlands, will develop in the next fifteen years. The main objective of this report is to identify possible bottlenecks that may hamper the northwest European electricity system from developing into an optimal system in the long term (until 2020). Subsequently, based on the identified bottlenecks, the report attempts to indicate relevant market responses and policy options. To identify possible bottlenecks in the development towards an optimal electricity system, an analytical framework has been set up with the aim of identifying possible (future) problems in a structured way. The segments generation, network, demand, balancing, and policy and regulation are analysed, as well as the interactions between these segments. Each identified bottleneck is assessed against the criteria of reliability, sustainability and affordability. Three bottlenecks are analysed in more detail: (1) The increasing penetration of distributed generation (DG) and its interaction with the electricity network. Dutch policy could be aimed at: (a) Gaining more insight into the costs and benefits that result from the increasing penetration of DG; (b) Creating possibilities for DSOs to experiment with innovative (network management) concepts; (c) Introducing locational signals; and (d) Further analysing the possibility of ownership unbundling; (2) The problem of intermittency and its implications for balancing the electricity system. Dutch policy could be aimed at: (a) Creating the environment in which the market is able to respond in an efficient way; (b) Monitoring market responses; (c) Market coupling; and (d) Discussing the timing of the gate closure; and (3) Interconnection and congestion issues in combination with generation. Dutch policy could be aimed at: (a) Using the existing interconnection capacity as efficiently as possible; (b) Identifying the causes behind price differences; and (c) Harmonise market
Directory of Open Access Journals (Sweden)
A. K. Thiruvenkadan
2014-09-01
Full Text Available Aim: The present study was undertaken in the Salem Black goat population for genetic analysis at the molecular level, to exploit the breed for planning sustainable improvement, conservation and utilization, which subsequently can improve the livelihood of its stakeholders. Materials and Methods: Genomic DNA was isolated from blood samples of 50 unrelated Salem Black goats with typical phenotypic features in several villages in the breeding tract, and the genetic characterization and bottleneck analysis in the Salem Black goat was done using 25 microsatellite markers as recommended by the Food and Agriculture Organization, Rome, Italy. The basic measures of genetic variation were computed using bioinformatic software. To evaluate the Salem Black goats for mutation drift equilibrium, three tests were performed under three different mutation models, viz., the infinite allele model (IAM), stepwise mutation model (SMM) and two-phase model (TPM), and the observed gene diversity (He) and expected equilibrium gene diversity (Heq) were estimated under the different models of microsatellite evolution. Results: The study revealed that the observed number of alleles ranged from 4 (ETH10, ILSTS008) to 17 (BM64444), with a total of 213 alleles and a mean of 10.14±0.83 alleles across loci. The overall observed heterozygosity, expected heterozygosity, inbreeding estimate and polymorphism information content values were 0.631±0.041, 0.820±0.024, 0.233±0.044 and 0.786±0.023, respectively, indicating high genetic diversity. The average observed gene diversity (He) pooled over the different markers was 0.829±0.024, and the average expected gene diversities under the IAM, TPM and SMM models were 0.769±0.026, 0.808±0.024 and 0.837±0.020, respectively. The number of loci found to exhibit gene diversity excess under the IAM, TPM and SMM models was 18, 17 and 12, respectively. Conclusion: All three statistical tests, viz., the sign test, standardized differences test and Wilcoxon sign rank test, revealed
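The diversity statistics reported above are computed from per-locus allele frequencies; a minimal sketch of expected heterozygosity (gene diversity) and polymorphism information content follows. These are only the standard formulas; the mutation-model bottleneck tests themselves require the coalescent machinery of BOTTLENECK-style software and are not reproduced here:

```python
def expected_heterozygosity(freqs):
    """Nei's gene diversity at one locus: He = 1 - sum(p_i^2),
    given the allele frequencies p_i at that locus."""
    return 1.0 - sum(p * p for p in freqs)

def pic(freqs):
    """Polymorphism information content (Botstein et al. form):
    PIC = He - sum over allele pairs i<j of 2 * p_i^2 * p_j^2."""
    het = expected_heterozygosity(freqs)
    penalty = sum(2.0 * freqs[i] ** 2 * freqs[j] ** 2
                  for i in range(len(freqs))
                  for j in range(i + 1, len(freqs)))
    return het - penalty
```

In a bottleneck sign test, the observed He at each locus is compared against the Heq expected at mutation-drift equilibrium under IAM, TPM or SMM; an excess of loci with He > Heq signals a recent bottleneck.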
Balancing Energy and Performance in Dense Linear System Solvers for Hybrid ARM+GPU platforms
Directory of Open Access Journals (Sweden)
Juan P. Silva
2016-04-01
Full Text Available The high performance computing community has traditionally focused solely on the reduction of execution time, though in recent years the optimization of energy consumption has become a main issue. A reduction of energy usage without a degradation of performance requires the adoption of energy-efficient hardware platforms accompanied by the development of energy-aware algorithms and computational kernels. The solution of linear systems is a key operation for many scientific and engineering problems. Its relevance has motivated an important amount of work, and consequently it is possible to find high performance solvers for a wide variety of hardware platforms. In this work, we aim to develop a high performance and energy-efficient linear system solver. In particular, we develop two solvers for a low-power CPU-GPU platform, the NVIDIA Jetson TK1. These solvers implement the Gauss-Huard algorithm, yielding an efficient usage of the target hardware as well as efficient memory access. The experimental evaluation shows that the novel proposal reports important savings in both time and energy consumption when compared with the state-of-the-art solvers of the platform.
A CFD Heterogeneous Parallel Solver Based on Collaborating CPU and GPU
Lai, Jianqi; Tian, Zhengyu; Li, Hua; Pan, Sha
2018-03-01
Since the Graphics Processing Unit (GPU) has a strong floating-point computation ability and high memory bandwidth for data parallelism, it has been widely used in general computing areas such as molecular dynamics (MD), computational fluid dynamics (CFD) and so on. The emergence of the compute unified device architecture (CUDA), which reduces the complexity of program development, brings great opportunities to CFD. There are three different modes for the parallel solution of the NS equations: a parallel solver based on the CPU, a parallel solver based on the GPU, and a heterogeneous parallel solver based on collaborating CPU and GPU. GPUs are relatively rich in compute capacity but poor in memory capacity, and CPUs are the opposite. To make full use of both, a CFD heterogeneous parallel solver based on collaborating CPU and GPU has been established. Three cases are presented to analyse the solver’s computational accuracy and heterogeneous parallel efficiency. The numerical results agree well with experimental results, demonstrating that the heterogeneous parallel solver has high computational precision. The speedup on a single GPU is more than 40 for laminar flow; it decreases for turbulent flow but can still reach more than 20. Moreover, the speedup increases as the grid size becomes larger.
A comparison of viscous-plastic sea ice solvers with and without replacement pressure
Kimmritz, Madlen; Losch, Martin; Danilov, Sergey
2017-07-01
Recent developments of the explicit elastic-viscous-plastic (EVP) solvers call for a new comparison with implicit solvers for the equations of viscous-plastic sea ice dynamics. In Arctic sea ice simulations, the modified and the adaptive EVP solvers, and the implicit Jacobian-free Newton-Krylov (JFNK) solver are compared against each other. The adaptive EVP method shows convergence rates that are generally similar or even better than those of the modified EVP method, but the convergence of the EVP methods is found to depend dramatically on the use of the replacement pressure (RP). Apparently, using the RP can affect the pseudo-elastic waves in the EVP methods by introducing extra non-physical oscillations so that, in the extreme case, convergence to the VP solution can be lost altogether. The JFNK solver also suffers from higher failure rates with RP implying that with RP the momentum equations are stiffer and more difficult to solve. For practical purposes, both EVP methods can be used efficiently with an unexpectedly low number of sub-cycling steps without compromising the solutions. The differences between the RP solutions and the NoRP solutions (when the RP is not being used) can be reduced with lower thresholds of viscous regularization at the cost of increasing stiffness of the equations, and hence the computational costs of solving them.
Neutron capture cross section of $^{90}$Zr: Bottleneck in the s-process reaction flow
Tagliente, G; Milazzo, P M; Moreau, C; Aerts, G; Abbondanno, U; Alvarez, H; Alvarez-Velarde, F; Andriamonje, Samuel A; Andrzejewski, J; Assimakopoulos, Panayiotis; Audouin, L; Badurek, G; Baumann, P; Bečvář, F; Berthoumieux, E; Bisterzo, S; Calviño, F; Calviani, M; Cano-Ott, D; Capote, R; Carrapiço, C; Cennini, P; Chepel, V; Chiaveri, Enrico; Colonna, N; Cortés, G; Couture, A; Cox, J; Dahlfors, M; David, S; Dillman, I; Domingo-Pardo, C; Dridi, W; Durán, I; Eleftheriadis, C; Embid-Segura, M; Ferrant, L; Ferrari, A; Ferreira-Marques, R; Furman, W; Gallino, R; Gonçalves, I; Gonzalez-Romero, E; Gramegna, F; Guerrero, C; Gunsing, F; Haas, B; Haight, R; Heil, M; Herrera-Martínez, A; Igashira, M; Jericha, E; Käppeler, F; Kadi, Y; Karadimos, D; Karamanis, D; Kerveno, M; Köhler, P; Kossionides, E; Krtička, M; Lamboudis, C; Leeb, H; Lindote, A; Lopes, I; Lozano, M; Lukic, S; Marganiec, J; Marrone, S; Martínez, T; Massimi, C; Mastinu, P; Mengoni, A; Mosconi, M; Neves, F; Oberhummer, Heinz; O'Brien, S; Pancin, J; Papachristodoulou, C; Papadopoulos, C; Paradela, C; Patronis, N; Pavlik, A; Pavlopoulos, P; Perrot, L; Pigni, M T; Plag, R; Plompen, A; Plukis, A; Poch, A; Praena, J; Pretel, C; Quesada, J; Rauscher, T; Reifarth, R; Rubbia, Carlo; Rudolf, G; Rullhusen, P; Salgado, J; Santos, J; Sarchiapone, L; Savvidis, I; Stéphan, C; Taín, J L; Tassan-Got, L; Tavora, L; Terlizzi, R; Vannini, G; Vaz, P; Ventura, A; Villamarín, D; Vincente, M, C; Vlachoudis, V; Vlastou, R; Voss, F; Walter, S; Wendler, H; Wiescher, M; Wisshak, K
2008-01-01
The neutron capture cross sections of the Zr isotopes have important implications in nuclear astrophysics and for reactor design. The neutron magic nucleus 90Zr, which accounts for more than 50% of natural zirconium, is one of the key isotopes for the stellar s-process: its small cross section makes it act as a bottleneck in the neutron capture chain between the Fe seed and the heavier isotopes. The same element, Zr, is also an important component of the structural materials used in traditional and advanced nuclear reactors. The (n,γ) cross section has been measured at CERN, using the n_TOF spallation neutron source. In total, 45 resonances could be resolved in the neutron energy range below 70 keV, 10 being observed for the first time thanks to the high resolution and low backgrounds at n_TOF. On average, the Γγ widths obtained in resonance analyses with the R-matrix code SAMMY were 15% smaller than reported previously. By these results, the accuracy of the Maxwellian averaged cross section f...
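The Maxwellian-averaged cross section (MACS) referred to above folds the energy-dependent cross section with the stellar neutron spectrum. A sketch of the standard definition, evaluated by trapezoidal quadrature on a truncated grid (the grid choice and the σ(E) passed in the test are illustrative, not the measured 90Zr data):

```python
import numpy as np

def macs(sigma, kT, emax_factor=40.0, npts=200001):
    """Maxwellian-averaged cross section at thermal energy kT:
    <sigma> = (2/sqrt(pi)) * (kT)^-2 * integral_0^inf sigma(E) E exp(-E/kT) dE,
    with the integral truncated at emax_factor*kT (the exponential tail
    beyond is negligible) and evaluated by the trapezoidal rule."""
    E = np.linspace(1e-6 * kT, emax_factor * kT, npts)
    f = sigma(E) * E * np.exp(-E / kT)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))
    return 2.0 / np.sqrt(np.pi) * integral / kT**2
```

A useful sanity check: for a 1/v cross section, σ(E) ∝ 1/√E, the MACS equals σ evaluated at E = kT exactly, independent of temperature.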
Purine biosynthesis is the bottleneck in trimethoprim-treated Bacillus subtilis.
Stepanek, Jennifer Janina; Schäkermann, Sina; Wenzel, Michaela; Prochnow, Pascal; Bandow, Julia Elisabeth
2016-10-01
Trimethoprim is a folate biosynthesis inhibitor. Tetrahydrofolates are essential for the transfer of C1 units in several biochemical pathways including purine, thymine, methionine, and glycine biosynthesis. This study addressed the effects of folate biosynthesis inhibition on bacterial physiology. Two complementary proteomic approaches were employed to analyze the response of Bacillus subtilis to trimethoprim. Acute changes in protein synthesis rates were monitored by radioactive pulse labeling of newly synthesized proteins and subsequent 2DE analysis. Changes in protein levels were detected using gel-free quantitative MS. Proteins involved in purine and histidine biosynthesis, the σB-dependent general stress response, and sporulation were upregulated. Most prominently, the PurR regulon required for de novo purine biosynthesis was derepressed, indicating purine depletion. The general stress response was activated in an energy-dependent manner, and in a subpopulation of treated cultures an early onset of sporulation was observed, most likely triggered by low guanosine triphosphate levels. Supplementation of adenosine triphosphate, adenosine, and guanosine to the medium substantially decreased antibacterial activity, showing that purine depletion becomes the bottleneck in trimethoprim-treated B. subtilis. The frequently prescribed antibiotic trimethoprim causes purine depletion in B. subtilis, which can be complemented by supplementing purines to the medium. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Task-set inertia and memory-consolidation bottleneck in dual tasks.
Koch, Iring; Rumiati, Raffaella I
2006-11-01
Three dual-task experiments examined the influence of processing a briefly presented visual object for deferred verbal report on performance in an unrelated auditory-manual reaction time (RT) task. RT was increased at short stimulus-onset asynchronies (SOAs) relative to long SOAs, showing that memory consolidation processes can produce a functional processing bottleneck in dual-task performance. In addition, the experiments manipulated the spatial compatibility of the orientation of the visual object and the side of the speeded manual response. This cross-task compatibility produced relative RT benefits only when the instruction for the visual task emphasized overlap at the level of response codes across the task sets (Experiment 1). However, once the effective task set was in place, it continued to produce cross-task compatibility effects even in single-task situations ("ignore" trials in Experiment 2) and when instructions for the visual task did not explicitly require spatial coding of object orientation (Experiment 3). Taken together, the data suggest a considerable degree of task-set inertia in dual-task performance, which is also reinforced by finding costs of switching task sequences (e.g., AC --> BC vs. BC --> BC) in Experiment 3.
Variable Speed Limits: Strategies to Improve Safety and Traffic Parameters for a Bottleneck
Directory of Open Access Journals (Sweden)
M. Z. Hasanpour
2017-04-01
Full Text Available The primary purpose of a speed limit system is to enforce reasonable and safe speeds. To reduce secondary problems such as accidents and queuing, Variable Speed Limits (VSL) have been suggested. In this paper, VSL is used to improve safety and traffic parameters. The traffic parameters considered include speed, queue length and stopping time. For VSL, an optimization decision tree algorithm coupled with microscopic simulation was used. The results for the subsaturated, saturated and supersaturated cases at a bottleneck are examined and compared with the Allaby logic tree. The results show that the proposed decision tree gives improved performance in terms of safety and comfort along the highway. The VSL pilot project is part of the Road Safety Improvement Program included in Iran’s road safety action plan, currently in the research process at the BHRC Research Institute of Road, Housing & Urban Development, and planned for the next 10-year transportation safety plan.
Directory of Open Access Journals (Sweden)
Max Schelker
2016-10-01
Full Text Available After endocytic uptake, influenza viruses transit early endosomal compartments and eventually reach late endosomes. There, the viral glycoprotein hemagglutinin (HA) triggers fusion between the endosomal and viral membranes, a critical step that leads to release of the viral segmented genome, destined to reach the cell nucleus. Endosomal maturation is a complex process involving acidification of the endosomal lumen as well as endosome motility along microtubules. While the pH drop is clearly critical for the conformational change and membrane fusion activity of HA, the effect of intracellular transport dynamics on the progress of infection remains largely unclear. In this study, we developed a comprehensive mathematical model accounting for the first steps of influenza virus infection. We calibrated our model with experimental data and challenged its predictions using recombinant viruses with altered pH sensitivity of HA. We identified the time point of virus-endosome fusion, and thereby the diffusion distance of the released viral genome to the nucleus, as a critical bottleneck for efficient virus infection. Further, we concluded and supported experimentally that the viral RNA is subjected to cytosolic degradation, strongly limiting the probability of a successful genome import into the nucleus.
Austin, A.; Ballare, C. L.; Méndez, M. S.
2015-12-01
Plant litter decomposition is an essential process in the first stages of carbon and nutrient turnover in terrestrial ecosystems and, together with soil microbial biomass, provides the principal inputs of carbon for the formation of soil organic matter. Photodegradation, the photochemical mineralization of organic matter, has recently been identified as a mechanism behind previously unexplained high rates of litter mass loss in low-rainfall ecosystems; however, the generality of this process as a control on carbon cycling in terrestrial ecosystems is not known, and the indirect effects of photodegradation on biotic stimulation of carbon turnover have been debated in recent studies. We demonstrate that in a wide range of plant species, previous exposure to solar radiation, and visible light in particular, enhanced subsequent biotic degradation of leaf litter. Moreover, we demonstrate that the mechanism for this enhancement involves increased accessibility of plant litter carbohydrates to microbial enzymes due to a reduction in lignin content. Photodegradation of plant litter reduces the structural and chemical bottleneck imposed by lignin in secondary cell walls. In litter from woody plant species, specific interactions with ultraviolet radiation obscured facilitative effects of solar radiation on biotic decomposition. The generalized positive effect of solar radiation exposure on subsequent microbial activity is mediated by increased accessibility to cell wall polysaccharides, which suggests that photodegradation is quantitatively important in determining rates of mass loss, nutrient release and the carbon balance in a broad range of terrestrial ecosystems.
End-Devonian extinction and a bottleneck in the early evolution of modern jawed vertebrates.
Sallan, Lauren Cole; Coates, Michael I
2010-06-01
The Devonian marks a critical stage in the early evolution of vertebrates: It opens with an unprecedented diversity of fishes and closes with the earliest evidence of limbed tetrapods. However, the latter part of the Devonian has also been characterized as a period of global biotic crisis marked by two large extinction pulses: a "Big Five" mass extinction event at the Frasnian-Famennian stage boundary (374 Ma) and the less well-documented Hangenberg event some 15 million years later at the Devonian-Carboniferous boundary (359 Ma). Here, we report the results of a wide-ranging analysis of the impact of these events on early vertebrate evolution, which was obtained from a database of vertebrate occurrences sampling over 1,250 taxa from 66 localities spanning Givetian to Serpukhovian stages (391 to 318 Ma). We show that major vertebrate clades suffered acute and systematic effects centered on the Hangenberg extinction involving long-term losses of over 50% of diversity and the restructuring of vertebrate ecosystems worldwide. Marine and nonmarine faunas were equally affected, precluding the existence of environmental refugia. The subsequent recovery of previously diverse groups (including placoderms, sarcopterygian fish, and acanthodians) was minimal. Tetrapods, actinopterygians, and chondrichthyans, all scarce within the Devonian, undergo large diversification events in the aftermath of the extinction, dominating all subsequent faunas. The Hangenberg event represents a previously unrecognized bottleneck in the evolutionary history of vertebrates as a whole and a historical contingency that shaped the roots of modern biodiversity.
Only adding stationary storage to vaccine supply chains may create and worsen transport bottlenecks.
Haidari, Leila A; Connor, Diana L; Wateska, Angela R; Brown, Shawn T; Mueller, Leslie E; Norman, Bryan A; Schmitz, Michelle M; Paul, Proma; Rajgopal, Jayant; Welling, Joel S; Leonard, Jim; Claypool, Erin G; Weng, Yu-Ting; Chen, Sheng-I; Lee, Bruce Y
2013-01-01
Although vaccine supply chains in many countries require additional stationary storage and transport capacity to meet current and future needs, international donors tend to donate stationary storage devices far more often than transport equipment. To investigate the impact of only adding stationary storage equipment on the capacity requirements of transport devices and vehicles, we used HERMES (Highly Extensible Resource for Modeling Supply Chains) to construct a discrete event simulation model of the Niger vaccine supply chain. We measured the transport capacity requirement for each mode of transport used in the Niger vaccine cold chain, both before and after adding cold rooms and refrigerators to relieve all stationary storage constraints in the system. With the addition of necessary stationary storage, the average transport capacity requirement increased from 88% to 144% for cold trucks, from 101% to 197% for pickup trucks, and from 366% to 420% for vaccine carriers. Therefore, adding stationary storage alone may worsen or create new transport bottlenecks as more vaccines flow through the system, preventing many vaccines from reaching their target populations. Dynamic modeling can reveal such relationships between stationary storage capacity and transport constraints.
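The capacity-requirement metric reported above is, at its core, a ratio of the vaccine volume that must move along a route to what the fleet can carry over the same period. A minimal sketch (the volumes and trip counts below are invented; the real figures come from the HERMES discrete event simulation):

```python
def transport_utilization(annual_volume_l: float, trips_per_year: int,
                          capacity_per_trip_l: float) -> float:
    """Transport capacity requirement as a percentage: volume to be moved
    per trip divided by what one trip can carry. Values above 100% mean a
    bottleneck (shipments must be split, delayed, or dropped)."""
    return 100.0 * annual_volume_l / (trips_per_year * capacity_per_trip_l)

# Illustrative numbers only: relieving upstream storage constraints lets
# more vaccine flow downstream, pushing the same route past 100%.
before = transport_utilization(8_800, 100, 100.0)    # 88.0
after  = transport_utilization(14_400, 100, 100.0)   # 144.0
print(before, after)
```

This is why adding cold rooms alone can worsen delivery: the storage fix raises `annual_volume_l` while `trips_per_year` and `capacity_per_trip_l` stay fixed.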
Lim, Chee Han; Voedisch, Sabrina; Wahl, Benjamin; Rouf, Syed Fazle; Geffers, Robert; Rhen, Mikael; Pabst, Oliver
2014-07-01
Vaccination represents an important instrument to control typhoid fever in humans and protects mice from lethal infection with mouse-pathogenic serovars of Salmonella species. Mixed infections with tagged Salmonella can be used in combination with probabilistic models to describe the dynamics of the infection process. Here we used mixed oral infections with tagged Salmonella strains to identify bottlenecks in the infection process in naïve and vaccinated mice. We established a next-generation sequencing based method that offers a fast and reliable way to characterize the composition of genome-tagged Salmonella strains. We show that initial colonization of Salmonella was distinguished by a non-Darwinian selection of few bacteria setting up the infection independently in gut-associated lymphoid tissue and systemic compartments. Colonization of Peyer's patches fuels the sustained spread of bacteria into mesenteric lymph nodes via dendritic cells. In contrast, infection of liver and spleen originated from an independent pool of bacteria. Vaccination only moderately reduced invasion of Peyer's patches but potently uncoupled the bacterial populations present in different systemic compartments. Our data indicate that vaccination differentially skews the capacity of Salmonella to colonize systemic and gut immune compartments and provide a framework for the further dissection of infection dynamics.
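One simple way to turn tag counts into a bottleneck estimate (a toy moment estimator, not the probabilistic model the authors used): if N founding bacteria are drawn with replacement from an inoculum carrying T equally abundant tags, the expected number of distinct tags recovered is T(1 - (1 - 1/T)^N), which can be inverted numerically.

```python
def expected_unique(T: int, N: int) -> float:
    """Expected number of distinct tags among N founders drawn with
    replacement from T equally abundant tags."""
    return T * (1.0 - (1.0 - 1.0 / T) ** N)

def estimate_founders(T: int, observed_unique: int) -> int:
    """Smallest founding-population size N whose expected distinct-tag
    count reaches the observed value (binary search; the expectation is
    monotone increasing in N)."""
    lo, hi = 1, 10 ** 7
    while lo < hi:
        mid = (lo + hi) // 2
        if expected_unique(T, mid) < observed_unique:
            lo = mid + 1
        else:
            hi = mid
    return lo

# With 8 tags in the inoculum and only 5 recovered from an organ,
# the estimated founding population is small:
print(estimate_founders(8, 5))   # -> 8
```

A real analysis would also account for unequal tag abundances and sequencing noise, which is what the sequencing-based pipeline in the paper provides.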
Nicholson, Arwen E.; Wilkinson, David M.; Williams, Hywel T. P.; Lenton, Timothy M.
2018-06-01
The search for habitable exoplanets inspires the question - how do habitable planets form? Planet habitability models traditionally focus on abiotic processes and neglect a biotic response to changing conditions on an inhabited planet. The Gaia hypothesis postulates that life influences the Earth's feedback mechanisms to form a self-regulating system, and hence that life can maintain habitable conditions on its host planet. If life has a strong influence, it will have a role in determining a planet's habitability over time. We present the ExoGaia model - a model of simple `planets' host to evolving microbial biospheres. Microbes interact with their host planet via consumption and excretion of atmospheric chemicals. Model planets orbit a `star' that provides incoming radiation, and atmospheric chemicals have either an albedo or a heat-trapping property. Planetary temperatures can therefore be altered by microbes via their metabolisms. We seed multiple model planets with life while their atmospheres are still forming and find that the microbial biospheres are, under suitable conditions, generally able to prevent the host planets from reaching inhospitable temperatures, as would happen on a lifeless planet. We find that the underlying geochemistry plays a strong role in determining long-term habitability prospects of a planet. We find five distinct classes of model planets, including clear examples of `Gaian bottlenecks' - a phenomenon whereby life either rapidly goes extinct leaving an inhospitable planet or survives indefinitely maintaining planetary habitability. These results suggest that life might play a crucial role in determining the long-term habitability of planets.
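The Gaian feedback at the heart of ExoGaia can be caricatured with a single loop: microbes draw down a heat-trapping gas faster when the planet is hot, pulling temperature toward a habitable band. All constants below are invented for the sketch; the published model tracks many atmospheric chemicals and evolving microbial metabolisms.

```python
def simulate(steps: int = 500, T0: float = 350.0, gas0: float = 1.0) -> float:
    """Toy single-feedback caricature of the ExoGaia idea (not the
    published model). Temperature is set by a heat-trapping gas; biology
    consumes the gas faster when hot, opposing geological outgassing."""
    T, gas = T0, gas0
    T_star, greenhouse = 260.0, 60.0            # illustrative constants
    for _ in range(steps):
        uptake = 0.02 * gas if T > 290.0 else 0.005 * gas
        gas += 0.01 - uptake                    # outgassing minus biology
        T = T_star + greenhouse * gas           # temperature from gas level
    return T

# The loop settles near the 290 K switching point of the biotic response:
print(round(simulate(), 1))
```

Remove the temperature dependence of `uptake` and the regulation disappears, which is the abiotic baseline the model planets are compared against; a "Gaian bottleneck" corresponds to the biota dying before this feedback can establish itself.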
Policy redesign for solving the financial bottleneck in demand side management (DSM) in China
International Nuclear Information System (INIS)
Yu, Yongzhen
2010-01-01
DSM is one of the best and most practical policy tools for China to balance environmental protection and economic growth. However, the bottleneck lies in the lack of long-term, stable, sufficient and gradually increasing funds flowing into DSM projects. The author redesigns the practical 'system benefit charge (SBC)' policy, which will provide long-term and stable funding for DSM, proposes policies to facilitate financial support from the banking sector and capital markets, and investigates the possibility of DSM funding from CDM projects. SBC is at present the best way to secure long-term, stable and sufficient funding for DSM in China. The current low inflation rate and low natural resource prices favor expediting the implementation of SBC and DSM development. Given uneven regional development, China needs to design corresponding policies to offset the impact in different areas, such as tax reductions and fiscal subsidies. It is time for China to set a definite and clear target and timetable for implementing DSM, giving the public and enterprises a clear expectation for the future. The government should publish a clear and integrated DSM development plan and policy outline for the near, medium, and long term. (author)
Photodegradation alleviates the lignin bottleneck for carbon turnover in terrestrial ecosystems.
Austin, Amy T; Méndez, M Soledad; Ballaré, Carlos L
2016-04-19
A mechanistic understanding of the controls on carbon storage and losses is essential for our capacity to predict and mitigate human impacts on the global carbon cycle. Plant litter decomposition is an important first step for carbon and nutrient turnover, and litter inputs and losses are essential in determining soil organic matter pools and the carbon balance in terrestrial ecosystems. Photodegradation, the photochemical mineralization of organic matter, has been recently identified as a mechanism for previously unexplained high rates of litter mass loss in arid lands; however, the global significance of this process as a control on carbon cycling in terrestrial ecosystems is not known. Here we show that, across a wide range of plant species, photodegradation enhanced subsequent biotic degradation of leaf litter. Moreover, we demonstrate that the mechanism for this enhancement involves increased accessibility to plant litter carbohydrates for microbial enzymes. Photodegradation of plant litter, driven by UV radiation, and especially visible (blue-green) light, reduced the structural and chemical bottleneck imposed by lignin in secondary cell walls. In leaf litter from woody species, specific interactions with UV radiation obscured facilitative effects of solar radiation on biotic decomposition. The generalized effect of sunlight exposure on subsequent microbial activity, mediated by increased accessibility to cell wall polysaccharides, suggests that photodegradation is quantitatively important in determining rates of mass loss, nutrient release, and the carbon balance in a broad range of terrestrial ecosystems.
Parallelization of the preconditioned IDR solver for modern multicore computer systems
Bessonov, O. A.; Fedoseyev, A. I.
2012-10-01
This paper presents the analysis, parallelization and optimization approach for the large sparse matrix solver CNSPACK on modern multicore microprocessors. CNSPACK is an advanced solver successfully used for the coupled solution of stiff problems arising in multiphysics applications such as CFD, semiconductor transport, and kinetic and quantum problems. It employs an iterative IDR algorithm with ILU preconditioning of user-chosen order. CNSPACK has been used successfully during the last decade for solving problems in several application areas, including fluid dynamics and semiconductor device simulation. However, recent years have brought dramatic changes in processor architectures and computer system organization. Because of this, performance criteria and methods have been revisited, and the solver and preconditioner have been parallelized using the OpenMP environment. Results of an efficient parallel implementation are presented for recent computer systems (Intel Core i7-9xx and two-processor Xeon 55xx/56xx).
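IDR with ILU preconditioning is too involved to sketch here, but the basic shape of a preconditioned iteration, and the per-row work an OpenMP parallelization would distribute across threads, can be shown with a simpler stand-in: a Jacobi-preconditioned Richardson iteration on a small system (a toy example, not the CNSPACK algorithm).

```python
def precond_richardson(A, b, max_iter=200, tol=1e-10):
    """Solve A x = b with a diagonally preconditioned fixed-point
    iteration: x += M^{-1} (b - A x), with M = diag(A). The residual
    computation and the update loop over rows are the independent,
    per-row work a multicore parallelization would split up."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        if max(abs(ri) for ri in r) < tol:
            break
        for i in range(n):
            x[i] += r[i] / A[i][i]
    return x

# Diagonally dominant test system with exact solution [1, 2, 3]:
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
x = precond_richardson(A, b)
print([round(v, 6) for v in x])   # -> [1.0, 2.0, 3.0]
```

An ILU preconditioner replaces the trivial diagonal `M` with approximate triangular factors, which converges far faster but serializes the triangular solves, precisely the tension the paper's parallelization has to manage.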
Status and Perspective of the Hydraulic Solver development for SPACE code
International Nuclear Information System (INIS)
Lee, S. Y.; Oh, M. T.; Park, J. C.; Ahn, S. J.; Park, C. E.; Lee, E. J.; Na, Y. W.
2008-01-01
KOPEC has been developing a hydraulic solver for the SPACE code. The governing equations for the solver are obtained through several steps of modeling and approximation from basic material transport principles. Once the governing equations are fixed, a proper discretization procedure is followed to obtain difference equations that can be solved by well-established matrix solvers; mesh generation and handling procedures are required for the discretization process. At present, a preliminary test version has been constructed and is being tested. The choice of implementation language was debated openly; C++ was chosen as the base language, though other languages such as FORTRAN can be used where necessary. The steps mentioned above are explained in the following sections. Test results are presented in companion papers at this meeting, and future activities are described in the conclusion.
A Parallel Multigrid Solver for Viscous Flows on Anisotropic Structured Grids
Prieto, Manuel; Montero, Ruben S.; Llorente, Ignacio M.; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
This paper presents an efficient parallel multigrid solver for speeding up the computation of a 3-D model that treats the flow of a viscous fluid over a flat plate. The main interest of this simulation lies in exhibiting some basic difficulties that prevent optimal multigrid efficiencies from being achieved. As the computing platform, we have used Coral, a Beowulf-class system based on Intel Pentium processors and equipped with GigaNet cLAN and switched Fast Ethernet networks. Our study not only examines the scalability of the solver but also includes a performance evaluation of Coral where the investigated solver has been used to compare several of its design choices, namely, the interconnection network (GigaNet versus switched Fast-Ethernet) and the node configuration (dual nodes versus single nodes). As a reference, the performance results have been compared with those obtained with the NAS-MG benchmark.
A Kohn–Sham equation solver based on hexahedral finite elements
International Nuclear Information System (INIS)
Fang Jun; Gao Xingyu; Zhou Aihui
2012-01-01
We design a Kohn–Sham equation solver based on hexahedral finite element discretizations. The solver integrates three schemes proposed in this paper. The first scheme arranges one a priori locally-refined hexahedral mesh with appropriate multiresolution. The second one is a modified mass-lumping procedure which accelerates the diagonalization in the self-consistent field iteration. The third one is a finite element recovery method which enhances the eigenpair approximations with small extra work. We carry out numerical tests on each scheme to investigate the validity and efficiency, and then apply them to calculate the ground state total energies of the nanosystems C60, C120, and C275H172. It is shown that our solver appears to be computationally attractive for finite element applications in electronic structure study.
Towards Green Multi-frontal Solver for Adaptive Finite Element Method
AbbouEisha, H.; Moshkov, Mikhail; Jopek, K.; Gepner, P.; Kitowski, J.; Paszyn'ski, M.
2015-01-01
In this paper we present the optimization of the energy consumption for the multi-frontal solver algorithm executed over two-dimensional grids with point singularities. The multi-frontal solver algorithm is controlled by a so-called elimination tree, defining the order of elimination of rows from particular frontal matrices, as well as the order of memory transfers for Schur complement matrices. For a given mesh there are many possible elimination trees, resulting in different numbers of floating point operations (FLOPs) for the solver and different amounts of data transferred via memory transfers. In this paper we utilize a dynamic programming optimization procedure and compare elimination trees optimized with respect to FLOPs with elimination trees optimized with respect to energy consumption.
Efficiency optimization of a fast Poisson solver in beam dynamics simulation
Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula
2016-01-01
Calculating the solution of Poisson's equation for the space charge force is still the dominant cost in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver in beam dynamics simulations: the integrated Green's function method. We introduce three optimizations of the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy, providing a fast routine for high-performance calculation of the space charge effect in accelerators.
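The zero-padding that the third optimization streamlines exists because an FFT of unpadded arrays computes a *circular* convolution, whose wrap-around corrupts an open-boundary space-charge calculation. A toy 1D illustration, with plain loops standing in for FFTs and made-up density and Green's function samples:

```python
def circular_conv(f, g):
    """Periodic convolution: what an FFT product of unpadded length-n
    arrays computes."""
    n = len(f)
    return [sum(f[j] * g[(i - j) % n] for j in range(n)) for i in range(n)]

def linear_conv_padded(f, g):
    """Zero-pad both arrays to length 2n-1 before the circular
    convolution; wrap-around then lands on zeros, recovering the
    open-boundary (free-space) result a space-charge solver needs."""
    n = len(f)
    m = 2 * n - 1
    fp = f + [0.0] * (m - n)
    gp = g + [0.0] * (m - n)
    return circular_conv(fp, gp)[:n]

rho = [1.0, 2.0, 0.0]    # toy charge density samples
G = [1.0, 0.5, 0.25]     # toy Green's function samples
print(circular_conv(rho, G))       # first entry contaminated by wrap-around
print(linear_conv_padded(rho, G))  # correct open-boundary values
```

The padded transform doubles the grid in every dimension, which is exactly the memory and FLOP overhead the paper's implicit-padding convolution routine avoids.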
International Nuclear Information System (INIS)
Secher, Bernard; Belliard, Michel; Calvin, Christophe
2009-01-01
This paper describes a tool called 'Numerical Platon' developed by the French Atomic Energy Commission (CEA). It provides a freely available (GNU LGPL license) interface for coupling scientific computing applications to various freeware linear solver libraries (essentially PETSc, SuperLU and HyPre), together with some proprietary CEA solvers, for high-performance computers that may be used in industrial software written in various programming languages. This tool was developed as part of considerable efforts by the CEA Nuclear Energy Division in the past years to promote massively parallel software and off-the-shelf parallel tools to help develop new-generation simulation codes. After the presentation of the package architecture and the available algorithms, we show examples of how Numerical Platon is used in sequential and parallel CEA codes. Compared with in-house solvers, the gain in computation capacity or in parallel performance is notable, without considerable extra development cost.
A fast direct solver for boundary value problems on locally perturbed geometries
Zhang, Yabin; Gillman, Adrianna
2018-03-01
Many applications, including optimal design and adaptive discretization techniques, involve solving several boundary value problems on geometries that are local perturbations of an original geometry. This manuscript presents a fast direct solver for boundary value problems that are recast as boundary integral equations. The idea is to write the discretized boundary integral equation on a new geometry as a low-rank update to the discretized problem on the original geometry. Using the Sherman-Morrison formula, the inverse can be expressed in terms of the inverse of the original system applied to the low-rank factors and the right-hand side. Numerical results illustrate that, for problems where the perturbation is localized, the fast direct solver is about three times faster than building a new solver from scratch.
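The Sherman-Morrison mechanics for a rank-one update can be shown at 2x2 scale, with a direct solve standing in for the stored factorization of the original geometry. The matrices and vectors below are arbitrary examples, not discretized integral operators.

```python
def solve2(A, b):
    """Direct 2x2 solve by Cramer's rule; stands in for applying the
    precomputed factorization of the original system."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def solve_perturbed(A, u, v, b):
    """Solve (A + u v^T) x = b via Sherman-Morrison, reusing only solves
    with the original A:
        x = A^{-1} b - A^{-1} u (v^T A^{-1} b) / (1 + v^T A^{-1} u)."""
    Ainv_b = solve2(A, b)
    Ainv_u = solve2(A, u)
    vAb = v[0] * Ainv_b[0] + v[1] * Ainv_b[1]
    vAu = v[0] * Ainv_u[0] + v[1] * Ainv_u[1]
    s = vAb / (1.0 + vAu)
    return [Ainv_b[i] - s * Ainv_u[i] for i in range(2)]

A = [[3.0, 1.0], [1.0, 2.0]]
u, v = [1.0, 0.0], [0.0, 1.0]       # u v^T adds 1 to the (1,2) entry
b = [5.0, 5.0]
x = solve_perturbed(A, u, v, b)
Ap = [[3.0, 2.0], [1.0, 2.0]]       # the perturbed matrix, for checking
print(x, solve2(Ap, b))
```

For a rank-k perturbation the same idea generalizes to the Woodbury formula, with a small k x k system replacing the scalar denominator; the savings come from never refactoring the large original system.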
A Direct Elliptic Solver Based on Hierarchically Low-Rank Schur Complements
Chávez, Gustavo
2017-03-17
A parallel fast direct solver for rank-compressible block tridiagonal linear systems is presented. Algorithmic synergies between Cyclic Reduction and Hierarchical matrix arithmetic operations result in a solver with O(N log² N) arithmetic complexity and O(N log N) memory footprint. We provide a baseline for performance and applicability by comparing with well-known implementations of the H-LU factorization and algebraic multigrid within a shared-memory parallel environment that leverages the concurrency features of the method. Numerical experiments reveal that this method is comparable with other fast direct solvers based on Hierarchical matrices, such as H-LU, and that it can tackle problems where algebraic multigrid fails to converge.
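The recurrence being accelerated is that of a (block) tridiagonal solve. The scalar version, the Thomas algorithm, makes the structure visible; the paper's solver replaces each scalar operation with a rank-compressed block operation and reorders eliminations via cyclic reduction. A plain-Python sketch on an invented system:

```python
def thomas(a, b, c, d):
    """Scalar Thomas algorithm for a tridiagonal system: sub-diagonal a
    (a[0] unused), diagonal b, super-diagonal c (c[-1] unused), rhs d.
    O(n) forward elimination followed by back substitution."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1-D Poisson stencil [-1, 2, -1] on 4 interior points, rhs all ones:
x = thomas([0, -1, -1, -1], [2, 2, 2, 2], [-1, -1, -1, 0], [1, 1, 1, 1])
print([round(v, 6) for v in x])   # -> [2.0, 3.0, 3.0, 2.0]
```

This forward sweep is inherently sequential; cyclic reduction instead eliminates all even-indexed unknowns simultaneously, halving the system at each level, which is what exposes the concurrency the shared-memory implementation exploits.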
Wavelet-Based Poisson Solver for Use in Particle-in-Cell Simulations
Terzic, Balsa; Mihalcea, Daniel; Pogorelov, Ilya V
2005-01-01
We report on a successful implementation of a wavelet-based Poisson solver for use in 3D particle-in-cell simulations. One new aspect of our algorithm is its ability to treat the general (inhomogeneous) Dirichlet boundary conditions. The solver harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, existence of effective preconditioners, and the ability simultaneously to remove numerical noise and further compress relevant data sets. Having tested our method as a stand-alone solver on two model problems, we merged it into IMPACT-T to obtain a fully functional serial PIC code. We present and discuss preliminary results of application of the new code to the modelling of the Fermilab/NICADD and AES/JLab photoinjectors.
Wavelet-based Poisson Solver for use in Particle-In-Cell Simulations
International Nuclear Information System (INIS)
Terzic, B.; Mihalcea, D.; Bohn, C.L.; Pogorelov, I.V.
2005-01-01
We report on a successful implementation of a wavelet-based Poisson solver for use in 3D particle-in-cell (PIC) simulations. One new aspect of our algorithm is its ability to treat the general (inhomogeneous) Dirichlet boundary conditions (BCs). The solver harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, existence of effective preconditioners, and the ability simultaneously to remove numerical noise and further compress relevant data sets. Having tested our method as a stand-alone solver on two model problems, we merged it into IMPACT-T to obtain a fully functional serial PIC code. We present and discuss preliminary results of application of the new code to the modeling of the Fermilab/NICADD and AES/JLab photoinjectors.
Rahaman, Md. Mashiur; Islam, Hafizul; Islam, Md. Tariqul; Khondoker, Md. Reaz Hasan
2017-12-01
Maneuverability and resistance prediction with suitable accuracy is essential for optimum ship design and propulsion power prediction. This paper aims at providing some of the maneuverability characteristics of a Japanese bulk carrier model, JBC, in calm water, using two computational fluid dynamics solvers: SHIP Motion and OpenFOAM. The solvers are based on the Reynolds-averaged Navier-Stokes (RANS) method and solve structured grids using the finite volume method (FVM). The paper compares numerical calm-water test results for the JBC model against available experimental results, including the total drag coefficient, average sinkage, and trim data, together with visualizations of the pressure distribution on the hull surface and the free water surface. The paper concludes that the presented solvers predict the resistance and maneuverability characteristics of the bulk carrier with reasonable accuracy while utilizing minimal computational resources.
Abir, Mahshid; Davis, Matthew M; Sankar, Pratap; Wong, Andrew C; Wang, Stewart C
2013-02-01
To design and test a model to predict surge capacity bottlenecks at a large academic medical center in response to a mass-casualty incident (MCI) involving multiple burn victims. Using the simulation software ProModel, a model of patient flow and anticipated resource use, according to principles of disaster management, was developed based upon historical data from the University Hospital of the University of Michigan Health System. Model inputs included: (a) age and weight distribution for casualties, and distribution of size and depth of burns; (b) rate of arrival of casualties to the hospital, and triage to ward or critical care settings; (c) eligibility for early discharge of non-MCI inpatients at the time of the MCI; (d) baseline occupancy of the intensive care unit (ICU), surgical step-down, and ward; (e) staff availability: the number of physicians, nurses, and respiratory therapists, and the expected ratio of each group to patients; (f) floor and operating room resources: anticipating the need for mechanical ventilators, burn care and surgical resources, blood products, and intravenous fluids; (g) average hospital length of stay and mortality rate for patients with inhalation injury and different size burns; and (h) average number of times that different size burns undergo surgery. Key model outputs include time to bottleneck for each limiting resource and average waiting time to hospital bed availability. Given base-case model assumptions (including 100 mass casualties with an inter-arrival rate to the hospital of one patient every three minutes), hospital utilization is constrained within the first 120 minutes to 21 casualties, due to the limited number of beds. The first bottleneck is attributable to exhausting critical care beds, followed by floor beds. Given this limitation in the number of patients, the temporal order of the ensuing bottlenecks is as follows: Lactated Ringer's solution (4 h), silver sulfadiazine/Silvadene (6 h), albumin (48 h), thrombin topical (72 h), type
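The first reported bottleneck (intake capped at 21 casualties within 120 minutes) follows from the arrival rate and bed count alone, as a minimal event-driven sketch shows. This is a hand-rolled toy, not the ProModel model; lengths of stay are simply set longer than the observation window, which is realistic for major burns.

```python
import heapq

def simulate_surge(n_beds: int = 21, interarrival_min: float = 3.0,
                   los_min: float = 100000.0, n_casualties: int = 100) -> int:
    """Count casualties admitted in the first 120 minutes when each
    admission ties up a bed for `los_min` minutes. Discharge events sit
    in a min-heap keyed by time, the core of any discrete event engine."""
    free_beds = n_beds
    discharges = []                      # min-heap of discharge times
    admitted_by_120 = 0
    for k in range(n_casualties):
        t = k * interarrival_min         # deterministic arrivals
        while discharges and discharges[0] <= t:
            heapq.heappop(discharges)    # bed freed before this arrival
            free_beds += 1
        if free_beds > 0 and t <= 120.0:
            free_beds -= 1
            heapq.heappush(discharges, t + los_min)
            admitted_by_120 += 1
    return admitted_by_120

# Burn stays far exceed 120 min, so the bed count caps intake:
print(simulate_surge())   # -> 21
```

The full model layers stochastic arrivals, triage levels, staff ratios, and consumable supplies on the same event queue, which is how it recovers the ordered sequence of supply bottlenecks reported above.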
International Nuclear Information System (INIS)
Nelson, E.M.
1993-12-01
Some two-dimensional finite element electromagnetic field solvers are described and tested. For TE and TM modes in homogeneous cylindrical waveguides and monopole modes in homogeneous axisymmetric structures, the solvers find approximate solutions to a weak formulation of the wave equation. Second-order isoparametric lagrangian triangular elements represent the field. For multipole modes in axisymmetric structures, the solver finds approximate solutions to a weak form of the curl-curl formulation of Maxwell's equations. Second-order triangular edge elements represent the radial (ρ) and axial (z) components of the field, while a second-order lagrangian basis represents the azimuthal (φ) component of the field weighted by the radius ρ. A reduced set of basis functions is employed for elements touching the axis. With this basis the spurious modes of the curl-curl formulation have zero frequency, so spurious modes are easily distinguished from non-static physical modes. Tests on an annular ring, a pillbox and a sphere indicate the solutions converge rapidly as the mesh is refined. Computed eigenvalues with relative errors of less than a few parts per million are obtained. Boundary conditions for symmetric, periodic and symmetric-periodic structures are discussed and included in the field solver. Boundary conditions for structures with inversion symmetry are also discussed. Special corner elements are described and employed to improve the accuracy of cylindrical waveguide and monopole modes with singular fields at sharp corners. The field solver is applied to three problems: (1) cross-field amplifier slow-wave circuits, (2) a detuned disk-loaded waveguide linear accelerator structure and (3) a 90-degree overmoded waveguide bend. The detuned accelerator structure is a critical application of this high-accuracy field solver: to maintain low long-range wakefields, tight design and manufacturing tolerances are required.
A high-performance Riccati based solver for tree-structured quadratic programs
DEFF Research Database (Denmark)
Frison, Gianluca; Kouzoupis, Dimitris; Diehl, Moritz
2017-01-01
[…] the online solution of such problems challenging and the development of tailored solvers crucial. In this paper, an interior point method is presented that can solve Quadratic Programs (QPs) arising in multi-stage MPC efficiently by means of a tree-structured Riccati recursion and a high-performance linear algebra library. A performance comparison with code-generated and general-purpose sparse QP solvers shows that the computation times can be significantly reduced for all problem sizes that are practically relevant in embedded MPC applications. The presented implementation is freely available as part […]
High-Order Calderón Preconditioned Time Domain Integral Equation Solvers
Valdes, Felipe
2013-05-01
Two high-order accurate Calderón preconditioned time domain electric field integral equation (TDEFIE) solvers are presented. In contrast to existing Calderón preconditioned time domain solvers, the proposed preconditioner allows for high-order surface representations and current expansions by using a novel set of fully-localized high-order div-and quasi curl-conforming (DQCC) basis functions. Numerical results demonstrate that the linear systems of equations obtained using the proposed basis functions converge rapidly, regardless of the mesh density and of the order of the current expansion. © 1963-2012 IEEE.
A Comparison Between Microsoft Excel Solver and NCSS, SPSS Routines for Nonlinear Regression Models
Directory of Open Access Journals (Sweden)
Didem Tetik Küçükelçi
2018-02-01
In this study we compared the results obtained with the Microsoft Excel Solver program to those of NCSS and SPSS for some nonlinear regression models. We fit nonlinear models to data available at http://itl.nist.gov/div898/strd/nls/nls_main.shtml using the three packages. Although Excel did not succeed as well as the other packages, we conclude that Microsoft Excel Solver provides a cheaper and more interactive way of studying nonlinear models.
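The NIST StRD problems referenced above all have this flavor: a small nonlinear model fit by least squares, where the choice of starting values matters. A minimal sketch of the same exercise (synthetic data standing in for the NIST sets, which are not reproduced here; `scipy.optimize.curve_fit` plays the role of Solver/NCSS/SPSS):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, b1, b2):
    # Saturating-exponential model of the kind used in the NIST nls suite.
    return b1 * (1.0 - np.exp(-b2 * x))

# Synthetic data: true parameters (240, 5e-4) plus small Gaussian noise.
rng = np.random.default_rng(1)
x = np.linspace(50.0, 800.0, 40)
y = model(x, 240.0, 5e-4) + rng.normal(0.0, 0.05, x.size)

# Deliberately rough starting values, as a user of any of the three
# packages would supply.
popt, pcov = curve_fit(model, x, y, p0=(500.0, 1e-4))
print(popt)
```

As with Excel Solver, convergence from poor starting values is not guaranteed; the StRD certified values exist precisely to expose such failures.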
Collier, Nathan; Pardo, David; Dalcí n, Lisandro D.; Paszyński, Maciej R.; Calo, Victor M.
2012-01-01
We study the performance of direct solvers on linear systems of equations resulting from isogeometric analysis. The problem of choice is the canonical Laplace equation in three dimensions. From this study we conclude that for a fixed number of unknowns and polynomial degree of approximation, a higher degree of continuity k drastically increases the CPU time and RAM needed to solve the problem when using a direct solver. This paper presents numerical results detailing the phenomenon as well as a theoretical analysis that explains the underlying cause. © 2011 Elsevier B.V.
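The continuity effect described above can be mimicked in one dimension with a toy banded system: widening the coupling stencil (a stand-in for raising the continuity k, which in 3D couples many more basis functions) inflates the LU factors that a direct solver must compute and store. A hedged sketch using SciPy's SuperLU interface, not the solvers or meshes of the paper:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def lu_fill(n, half_bw):
    """Nonzeros in the LU factors of a diagonally dominant banded matrix
    with half-bandwidth `half_bw` (a proxy for basis coupling width)."""
    diags = {0: 2.0 * half_bw + 1.0}
    for k in range(1, half_bw + 1):
        diags[k] = diags[-k] = -1.0
    A = sp.diags([np.full(n - abs(k), v) for k, v in diags.items()],
                 list(diags.keys()), format="csc")
    lu = splu(A, permc_spec="NATURAL")  # natural ordering keeps the band
    return lu.L.nnz + lu.U.nnz

print(lu_fill(2000, 2), lu_fill(2000, 8))  # wider coupling -> more fill
```

The paper's point is the 3D analogue of this: for a fixed number of unknowns, higher continuity densifies the coupling, and direct-solver cost grows with the resulting fill-in.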
Development of a global toroidal gyrokinetic Vlasov code with new real space field solver
International Nuclear Information System (INIS)
Obrejan, Kevin; Imadera, Kenji; Li, Ji-Quan; Kishimoto, Yasuaki
2015-01-01
This work introduces a new full-f toroidal gyrokinetic (GK) Vlasov simulation code that uses a real space field solver. This solver enables us to compute the gyro-averaging operators in real space to allow proper treatment of finite Larmor radius (FLR) effects without requiring any particular hypothesis and in any magnetic field configuration (X-point, D-shaped etc). The code was well verified through benchmark tests such as toroidal Ion Temperature Gradient (ITG) instability and collisionless damping of zonal flow. (author)
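The essential operation of a real-space gyro-averaging operator can be illustrated with a toy sketch (simplified assumptions, not the code's actual discretization): the field is averaged over a ring of Larmor radius ρ around the gyrocenter, which for a plane wave reproduces the familiar J0(k⊥ρ) finite-Larmor-radius factor.

```python
import numpy as np
from scipy.special import j0

def gyro_average(field, x, y, rho, n_points=32):
    """Ring average of `field` at Larmor radius `rho` around (x, y).
    `field` is a callable here; on a simulation grid one would
    interpolate instead. A toy sketch, not the paper's scheme."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    return np.mean(field(x + rho * np.cos(theta), y + rho * np.sin(theta)))

# Sanity check: gyro-averaging the plane wave cos(k*x) at the origin
# yields the FLR reduction factor J0(k * rho).
k, rho = 2.0, 0.5
avg = gyro_average(lambda X, Y: np.cos(k * X), 0.0, 0.0, rho)
print(avg, j0(k * rho))
```

Doing this averaging directly in real space is what frees the solver from assumptions about the magnetic geometry (X-point, D-shaped, etc.).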
Galerkin CFD solvers for use in a multi-disciplinary suite for modeling advanced flight vehicles
Moffitt, Nicholas J.
This work extends existing Galerkin CFD solvers for use in a multi-disciplinary suite. The suite is proposed as a means of modeling advanced flight vehicles, which exhibit strong coupling between aerodynamics, structural dynamics, controls, rigid body motion, propulsion, and heat transfer. Such applications include aeroelastics, aeroacoustics, stability and control, and other highly coupled applications. The suite uses NASA STARS for modeling structural dynamics and heat transfer. Aerodynamics, propulsion, and rigid body dynamics are modeled in one of the five CFD solvers below. Euler2D and Euler3D are Galerkin CFD solvers created at OSU by Cowan (2003). These solvers are capable of modeling compressible inviscid aerodynamics with modal elastics and rigid body motion. This work reorganized these solvers to improve efficiency during editing and at run time. Simple and efficient propulsion models were added, including rocket, turbojet, and scramjet engines. Viscous terms were added to the previous solvers to create NS2D and NS3D. The viscous contributions were demonstrated in the inertial and non-inertial frames. Variable viscosity (Sutherland's equation) and heat transfer boundary conditions were added to both solvers but not verified in this work. Two turbulence models were implemented in NS2D and NS3D: the Spalart-Allmaras (SA) model of Deck, et al. (2002) and Menter's SST model (1994). A rotation correction term (Shur, et al., 2000) was added to the production of turbulence. Local time stepping and artificial dissipation were adapted to each model. CFDsol is a Taylor-Galerkin solver with an SA turbulence model. This work improved the time accuracy, far field stability, viscous terms, Sutherland's equation, and SA model with NS3D as a guideline and added the propulsion models from Euler3D to CFDsol. Simple geometries were demonstrated to utilize current meshing and processing capabilities. Air-breathing hypersonic flight vehicles (AHFVs) represent the ultimate
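Sutherland's equation, used above for the variable-viscosity terms, is a standard relation; a self-contained sketch with the usual textbook constants for air (the constants are standard values, not taken from this work):

```python
def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Dynamic viscosity of air [Pa*s] from Sutherland's law:
    mu = mu_ref * (T / T_ref)**1.5 * (T_ref + S) / (T + S),
    with reference viscosity mu_ref at T_ref and Sutherland constant S."""
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

print(sutherland_viscosity(300.0))  # ~1.85e-5 Pa*s near room temperature
```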
Nearly Interactive Parabolized Navier-Stokes Solver for High Speed Forebody and Inlet Flows
Benson, Thomas J.; Liou, May-Fun; Jones, William H.; Trefny, Charles J.
2009-01-01
A system of computer programs is being developed for the preliminary design of high speed inlets and forebodies. The system comprises four functions: geometry definition, flow grid generation, flow solver, and graphics post-processor. The system runs on a dedicated personal computer using the Windows operating system and is controlled by graphical user interfaces written in MATLAB (The Mathworks, Inc.). The flow solver uses the Parabolized Navier-Stokes equations to compute millions of mesh points in several minutes. Sample two-dimensional and three-dimensional calculations are demonstrated in the paper.
Sriyudthsak, Kansuporn; Shiraishi, Fumihide
2010-11-01
A number of recent research studies have focused on theoretical and experimental investigation of a bottleneck in a metabolic reaction network. However, there has been no study of how the bottleneck affects the performance of a fermentation process when a product is highly toxic and strongly influences the growth and death of cells. The present work therefore studies the effect of a bottleneck on product concentrations under different product toxicity conditions. A generalized bottleneck model of a fed-batch fermentation is constructed, including both the bottleneck and the product's influence on cell growth and death. The simulation result reveals that when the toxic product strongly influences cell growth and death, the final product concentration is hardly changed even if the bottleneck is removed, whereas it is markedly changed by the degree of product toxicity. The performance of an ethanol fermentation process is also discussed as a case example to validate this result. In conclusion, when the product is highly toxic, one cannot expect a significant increase in the final product concentration even by removing the bottleneck; rather, it may be more effective to somehow protect the cells so that they can continuously produce the product. Copyright © 2010 Elsevier Inc. All rights reserved.
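The qualitative claim can be reproduced with a deliberately simple toy model (an assumption-laden sketch, not the authors' generalized bottleneck model): biomass X grows exponentially, dies at a rate proportional to how far the product P exceeds a toxicity threshold, and produces P at a specific rate qp that plays the role of the bottleneck.

```python
from scipy.integrate import solve_ivp

def final_product(qp, kd0, T=20.0, mu=0.5, P_tox=1.0, X0=0.1):
    """Final product concentration in a toy growth/death/production model.
    qp  : specific production rate (the 'bottleneck'),
    kd0 : death-rate sensitivity to product in excess of threshold P_tox."""
    def rhs(t, y):
        X, P = y
        death = kd0 * max(P - P_tox, 0.0)   # toxicity kicks in above P_tox
        return [(mu - death) * X, qp * X]
    sol = solve_ivp(rhs, (0.0, T), [X0, 0.0], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1]

# With a highly toxic product (large kd0), removing the bottleneck
# (raising qp) barely changes the final titer, because the culture dies
# once P passes the threshold; without toxicity (kd0 = 0) the final
# titer scales with qp.
p_slow, p_fast = final_product(0.5, 50.0), final_product(2.0, 50.0)
print(p_slow, p_fast)
```

In the high-toxicity case the two final concentrations differ by only a few percent, echoing the paper's conclusion that removing the bottleneck alone cannot raise the titer of a highly toxic product.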
Friess, Daniel A.; Krauss, Ken W.; Horstman, Erik M.; Balke, Thorsten; Bouma, Tjeerd J.; Galli, Demis; Webb, Edward L.
2011-01-01
Intertidal wetlands such as saltmarshes and mangroves provide numerous important ecological functions, though they are in rapid and global decline. To better conserve and restore these wetland ecosystems, we need an understanding of the fundamental natural bottlenecks and thresholds to their establishment and long-term ecological maintenance. Despite inhabiting similar intertidal positions, the biological traits of these systems differ markedly in structure, phenology, life history, phylogeny and dispersal, suggesting large differences in biophysical interactions. By providing the first systematic comparison between saltmarshes and mangroves, we unravel how the interplay between species-specific life-history traits, biophysical interactions and biogeomorphological feedback processes determine where, when and what wetland can establish, the thresholds to long-term ecosystem stability, and constraints to genetic connectivity between intertidal wetland populations at the landscape level. To understand these process interactions, research into the constraints to wetland development, and biological adaptations to overcome these critical bottlenecks and thresholds requires a truly interdisciplinary approach.
Khurmi, Manpreet Singh; Sayinzoga, Felix; Berhe, Atakilt; Bucyana, Tatien; Mwali, Assumpta Kayinamura; Manzi, Emmanuel; Muthu, Maharajan
2017-01-01
The Newborn Survival Case study in Rwanda provides an analysis of the newborn health and survival situation in the country. It reviews evidence-based interventions and coverage levels already implemented in the country; identifies key issues and bottlenecks in service delivery and uptake of services by the community/beneficiaries; and provides key recommendations aimed at a faster reduction in the newborn mortality rate. This study utilized mixed-methods research, including qualitative and quantitative analyses of various maternal and newborn health programs implemented in the country. This included interviewing key stakeholders at each level, conducting field visits, and interviewing beneficiaries to assess uptake of services. Monitoring systems such as the Health Management Information System (HMIS) and maternal and newborn death audits were reviewed and their data analyzed. Policies, protocols, and various guidelines and tools for monitoring are already in place; however, their implementation remains a challenge, e.g. infection control practices to reduce deaths due to sepsis. Although existing staff are knowledgeable and highly motivated, the shortage of health personnel, especially doctors, is an issue. New facilities are being operationalized, e.g. at Gisenyi, but the existing facilities need expansion. It is essential to implement high-impact evidence-based interventions, but coverage levels need to be significantly high in order to achieve a greater reduction in the newborn mortality rate. An equity approach should be considered in planning so that services are better implemented and the poor and needy can receive the benefits of public health programs.
Pang-Ching, Joshua M.; Paxton, Kristina L.; Paxton, Eben H.; Pack, Adam A.; Hart, Patrick J.
2018-01-01
Little is known about how important social behaviors such as song vary within and among populations for any of the endemic Hawaiian honeycreepers. Habitat loss and non‐native diseases (e.g., avian malaria) have resulted in isolation and fragmentation of Hawaiian honeycreepers within primarily high elevation forests. In this study, we examined how isolation of Hawai'i ‘amakihi (Chlorodrepanis virens) populations within a fragmented landscape influences acoustic variability in song. In the last decade, small, isolated populations of disease tolerant ‘amakihi have been found within low elevation forests, allowing us to record ‘amakihi songs across a large elevational gradient (10–1800 m) that parallels disease susceptibility on Hawai'i island. To understand underlying differences among populations, we examined the role of geographic distance, elevation, and habitat structure on acoustic characteristics of ‘amakihi songs. We found that the acoustic characteristics of ‘amakihi songs and song‐type repertoires varied most strongly across an elevational gradient. Differences in ‘amakihi song types were primarily driven by less complex songs (e.g., fewer frequency changes, shorter songs) of individuals recorded at low elevation sites compared to mid and high elevation populations. The reduced complexity of ‘amakihi songs at low elevation sites is most likely shaped by the effects of habitat fragmentation and a disease‐driven population bottleneck associated with avian malaria, and maintained through isolation, localized song learning and sharing, and cultural drift. These results highlight how a non‐native disease through its influence on population demographics may have also indirectly played a role in shaping the acoustic characteristics of a species.
Gómez-Brandón, María; Aira, Manuel; Lores, Marta; Domínguez, Jorge
2011-01-01
Earthworms play a critical role in organic matter decomposition because of the interactions they establish with microorganisms. The ingestion, digestion and assimilation of organic material in the gut, followed by casting, is the first step in earthworm-microorganism interactions. The current knowledge of these direct effects is still limited for epigeic earthworm species, mainly those living in man-made environments. Here we tested whether and to what extent the earthworm Eisenia andrei is capable of altering the microbiological properties of fresh organic matter through gut-associated processes, and whether these direct effects are related to the earthworm diet. To address these questions we determined the microbial community structure (phospholipid fatty acid profiles) and microbial activity (fluorescein diacetate hydrolysis) in earthworm casts derived from three types of animal manure (cow, horse and pig manure), which differed in microbial composition. The passage of the organic material through the gut of E. andrei reduced the total microbial biomass irrespective of the type of manure, and resulted in a decrease in bacterial biomass in all the manures, whilst leaving the fungi unaffected in the egested materials. However, unlike the microbial biomass, no such reduction was detected in the total microbial activity of cast samples derived from the pig manure. Moreover, no differences were found between cast samples derived from the different types of manure with regard to microbial community structure, which provides strong evidence for a bottleneck effect of worm digestion on the microbial populations of the original material consumed. Our data reveal that the earthworm gut is a major shaper of microbial communities, favouring the existence of a reduced but more active microbial population in the egested materials, which is of great importance for understanding how biotic interactions within the decomposer food web influence nutrient cycling.
Fan, Liqiang; Zheng, Honglei; Milne, Richard I; Zhang, Lei; Mao, Kangshan
2018-03-14
Glacial refugia and inter-/postglacial recolonization routes during the Quaternary of tree species in Europe and North America are well understood, but far less is known about those of tree species in subtropical eastern Asia. Thus, we have examined the phylogeographic history of Populus adenopoda (Salicaceae), one of the few poplars that naturally occur in this subtropical area. Genetic variations across the range of the species in subtropical China were surveyed using ten nuclear microsatellite loci and four chloroplast fragments (matK, trnG-psbK, psbK-psbI and ndhC-trnV). Coalescent-based analyses were used to test demographic and migration hypotheses. In addition, species distribution models (SDMs) were constructed to infer past, present and future potential distributions of the species. Thirteen chloroplast haplotypes were detected, and haplotype-rich populations were found in central and southern parts of the species' range. STRUCTURE analyses of nuclear microsatellite loci suggest obvious lineage admixture, especially in peripheral and northern populations. DIYABC analysis suggests that the species might have experienced two independent rounds of demographic expansions and a strong bottleneck in the late Quaternary. SDMs indicate that the species' range contracted during the Last Glacial Maximum (LGM), and contracted northward but expanded eastward during the Last Interglacial (LIG). Chloroplast data and SDMs suggest that P. adenopoda might have survived in multiple glacial refugia in central and southern parts of its range during the LGM. Populations of the Yunnan-Guizhou Plateau in the southern part have high chloroplast DNA diversity, but may have contributed little to the postglacial recolonization of northern and eastern parts. The three major demographic events inferred by DIYABC coincide with the initiation of the LIG, start of the LGM and end of the LGM, respectively. The species may have experienced multiple rounds of range contraction during
Addressing an I/O Bottleneck in a Web-Based CERES QC Tool
Heckert, E.; Sun-Mack, S.; Chen, Y.; Chu, C.; Smith, R. A.
2016-12-01
In this poster, we explore the technologies we have used to overcome the problem of transmitting and analyzing large datasets in our web-based CERES Quality Control tool and consider four technologies to potentially adopt for future performance improvements. The CERES team uses this tool to validate pixel-level data from Terra, Aqua, SNPP, MSG, MTSAT, and many geostationary GOES satellites, as well as to develop cloud retrieval algorithms. The tool includes a histogram feature that allows the user to aggregate data from many different timestamps and different scenes globally or locally selected by the user by drawing bounding boxes. In order to provide a better user experience, the tool passes a large amount of data to the user's browser. The browser then processes the data in order to present it to users in various formats, for example as a histogram. In addition to using multiple servers to subset data and pass a smaller set of data to the browser, the tool also makes use of a compression technology, Gzip, to reduce the size of the data. However, sometimes the application in the browser is still slow when dealing with these large sets of data due to the delay in the browser receiving the server's response. To address this I/O bottleneck, we will investigate four alternatives and present the results in this poster: 1) sending uncompressed data, 2) ESRI's Limited Error Raster Compression (LERC), 3) Gzip, and 4) WebSocket protocol. These approaches are compared to each other and to the uncompressed control to determine the optimal solution.
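The trade-off among options 1–3 is easy to sketch: for repetitive record-oriented payloads such as per-pixel QC fields, a general-purpose compressor like Gzip shrinks the response substantially at modest CPU cost. A minimal, self-contained illustration (synthetic records, not CERES data):

```python
import gzip
import json
import random

random.seed(0)
# Synthetic stand-in for pixel-level records: repeated keys compress well.
records = [{"lat": round(random.uniform(-90, 90), 2),
            "lon": round(random.uniform(-180, 180), 2),
            "cloud_fraction": round(random.random(), 3)}
           for _ in range(10_000)]
raw = json.dumps(records).encode("utf-8")
packed = gzip.compress(raw, compresslevel=6)
print(f"raw {len(raw)} bytes -> gzip {len(packed)} bytes "
      f"({len(packed) / len(raw):.2%})")
```

The remaining option, WebSocket, attacks a different part of the bottleneck: it removes per-request HTTP overhead rather than shrinking the payload itself, which is why the poster compares the approaches empirically rather than assuming one dominates.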
Population aging and migrant workers: bottlenecks in tuberculosis control in rural China.
Bele, Sumedh; Jiang, Wei; Lu, Hui; You, Hua; Fan, Hong; Huang, Lifang; Wang, Qungang; Shen, Hongbing; Wang, Jianming
2014-01-01
Tuberculosis is a serious global health problem. Its paradigms are shifting through time, especially in rapidly developing countries such as China. Health providers in China are at the forefront of the battle against tuberculosis; however, there are few empirical studies on health providers' perspectives on the challenges they face in tuberculosis control at the county level in China. This study was conducted among health providers to explore their experiences with tuberculosis control in order to identify bottlenecks and emerging challenges in controlling tuberculosis in rural China. A qualitative approach was used. Semi-structured, in-depth interviews were conducted with 17 health providers working in various positions within the health system of one rural county (ZJG) of China. Data were analyzed based on thematic content analysis using MAXQDA 10 qualitative data analysis software. Health providers reported several problems in tuberculosis control in ZJG county. Migrant workers and the elderly were repeatedly documented as the main obstacles in effective tuberculosis control in the county. At a personal level, doctors showed their frustration with the lack of new drugs for treating tuberculosis patients, and their opinions varied regarding incentives for referring patients. The results suggest that several problems still remain for controlling tuberculosis in rural China. Tuberculosis control efforts need to make reaching the most vulnerable populations a priority and encourage local health providers to adopt innovative practices in the local context based on national guidelines to achieve the best results. Considerable changes in China's National Tuberculosis Control Program are needed to tackle these emerging challenges faced by health workers at the county level.
Sousa, Ana; Ramiro, Ricardo S; Barroso-Batista, João; Güleresi, Daniela; Lourenço, Marta; Gordo, Isabel
2017-11-01
The evolution of new strains within the gut ecosystem is poorly understood. We used a natural but controlled system to follow the emergence of intraspecies diversity of commensal Escherichia coli, during three rounds of adaptation to the mouse gut (∼1,300 generations). We previously showed that, in the first round, a strongly beneficial phenotype (loss-of-function for galactitol consumption; gat-negative) spread to >90% frequency in all colonized mice. Here, we show that this loss-of-function is repeatedly reversed when a gat-negative clone colonizes new mice. The regain of function occurs via compensatory mutation and reversion, the latter leaving no trace of past adaptation. We further show that loss-of-function adaptive mutants reevolve, after colonization with an evolved gat-positive clone. Thus, even under strong bottlenecks a regime of strong-mutation-strong-selection dominates adaptation. Coupling experiments and modeling, we establish that reverse evolution recurrently generates two coexisting phenotypes within the microbiota that can or cannot consume galactitol (gat-positive and gat-negative, respectively). Although the abundance of the dominant strain, the gat-negative, depends on the microbiota composition, gat-positive abundance is independent of the microbiota composition and can be precisely manipulated by supplementing the diet with galactitol. These results show that a specific diet is able to change the abundance of specific strains. Importantly, we find polymorphism for these phenotypes in indigenous Enterobacteria of mice and man. Our results demonstrate that natural selection can greatly overwhelm genetic drift in structuring the strain diversity of gut commensals and that competition for limiting resources may be a key mechanism for maintaining polymorphism in the gut. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Energy Technology Data Exchange (ETDEWEB)
Gallego, Antonio-Roman Munoz; Lucas, Manuela De; Casado, Eva; Ferrer, Miguel
2011-07-01
Full text: To assess and monitor the impact of wind farms on fauna is crucial if we want to achieve ecologically sustainable development of this renewable energy resource. Today there is clear evidence that the probability of raptor collision depends critically on species behaviour and weather conditions, and on the topographic factors related to each windmill. In our study area EIAs were not able to predict this differential risk, and in these circumstances mitigating the causes of bird mortality becomes a task of major importance, especially for those wind farms located in the Strait of Gibraltar, a water crossing of 14 km at its shortest distance acting as a major migration bottleneck for Paleo-African soaring migrants. We collected all available information on raptor collisions from 1992, when the first wind farm was installed, and from 2005 until the present a total of 262 turbines, grouped into 20 wind farms, were surveyed on a daily basis through a surveillance program with the main goal of registering the actual mortality of birds. A total of 1291 raptors of 19 species were found, of which 78.5% correspond to two species, the griffon vulture (Gyps fulvus) and the kestrel (Falco tinnunculus). In order to mitigate the impact on raptors, and particularly on the griffon vulture, in 2007 a program based on selective stopping of turbines was imposed, in collaboration with the competent environmental authority, on newly approved projects. During 2008 there was a reduction in mortality of 48%, which was maintained in 2009 at a remarkably lower economic cost. An analysis of the temporal collision patterns will be presented and discussed, with special attention to those species suffering a higher mortality rate and to those with some degree of threat. (Author)
Kirchhoff, K N; Hauffe, T; Stelbrink, B; Albrecht, C; Wilke, T
2017-08-01
Species richness in freshwater bony fishes depends on two main processes: the transition into and the diversification within freshwater habitats. In contrast to bony fishes, only a few cartilaginous fishes, mostly stingrays (Myliobatoidei), were able to colonize fresh water. Respective transition processes have mainly been assessed from a physiological and morphological perspective, indicating that the freshwater lifestyle is strongly limited by the ability to perform osmoregulatory adaptations. However, the transition history and the effect of physiological constraints on diversification in stingrays remain poorly understood. Herein, we estimated the geographic pathways of freshwater colonization and inferred the mode of habitat transitions. Further, we assessed habitat-related speciation rates in a time-calibrated phylogenetic framework to understand the factors driving the transition of stingrays into and their diversification within fresh water. Using South American and Southeast Asian freshwater taxa as model organisms, we found one independent freshwater colonization event by stingrays in South America and at least three in Southeast Asia. We revealed that vicariant processes most likely caused freshwater transition during the time of major marine incursions. The habitat transition rates indicate that brackish water species preferentially switch back into marine habitats rather than into freshwater ones. Moreover, our results showed significantly lower diversification rates in brackish water lineages, whereas freshwater and marine lineages exhibit similar rates. Thus, brackish water habitats may have functioned as evolutionary bottlenecks for the colonization of fresh water by stingrays, probably because of the higher variability of environmental conditions in brackish water. © 2017 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2017 European Society For Evolutionary Biology.
Xenikoudakis, G; Ersmark, E; Tison, J-L; Waits, L; Kindberg, J; Swenson, J E; Dalén, L
2015-07-01
The Scandinavian brown bear went through a major decline in population size approximately 100 years ago, due to intense hunting. After being protected, the population subsequently recovered and today numbers in the thousands. The genetic diversity in the contemporary population has been investigated in considerable detail, and it has been shown that the population consists of several subpopulations that display relatively high levels of genetic variation. However, previous studies have been unable to resolve the degree to which the demographic bottleneck impacted the contemporary genetic structure and diversity. In this study, we used mitochondrial and microsatellite DNA markers from pre- and postbottleneck Scandinavian brown bear samples to investigate the effect of the bottleneck. Simulation and multivariate analysis suggested the same genetic structure for the historical and modern samples, which are clustered into three subpopulations in southern, central and northern Scandinavia. However, the southern subpopulation appears to have gone through a marked change in allele frequencies. When comparing the mitochondrial DNA diversity in the whole population, we found a major decline in haplotype numbers across the bottleneck. However, the loss of autosomal genetic diversity was less pronounced, although a significant decline in allelic richness was observed in the southern subpopulation. Approximate Bayesian computations provided clear support for a decline in effective population size during the bottleneck, in both the southern and northern subpopulations. These results have implications for the future management of the Scandinavian brown bear because they indicate a recent loss in genetic diversity and also that the current genetic structure may have been caused by historical ecological processes rather than recent anthropogenic persecution. © 2015 John Wiley & Sons Ltd.
Saeed, Muhammad
2012-01-01
The thesis study reveals that the position of the bottleneck is of significant importance in the supply-chain process. The modern supply chain is characterized as having diverse products due to mass customization, dynamic production technology and ever-changing customer demand. Usually a customized supply chain process consists of an assemble-to-order (ATO) or make-to-order (MTO) type of operation. By controlling the supply constraints upstream, a smooth material flow is achieved downstream. Effective ma...
Directory of Open Access Journals (Sweden)
Damien C Tully
2016-05-01
Due to the stringent population bottleneck that occurs during sexual HIV-1 transmission, systemic infection is typically established by a limited number of founder viruses. Elucidation of the precise forces influencing the selection of founder viruses may reveal key vulnerabilities that could aid in the development of a vaccine or other clinical interventions. Here, we utilize deep sequencing data and apply a genetic distance-based method to investigate whether the mode of sexual transmission shapes the nascent founder viral genome. Analysis of 74 acute and early HIV-1 infected subjects revealed that 83% of men who have sex with men (MSM) exhibit a single founder virus, levels similar to those previously observed in heterosexual (HSX) transmission. In a metadata analysis of a total of 354 subjects, including HSX, MSM and injecting drug users (IDU), we also observed no significant differences in the frequency of single founder virus infections between HSX and MSM transmissions. However, comparison of HIV-1 envelope sequences revealed that HSX founder viruses exhibited a greater number of codon sites under positive selection, as well as stronger transmission indices possibly reflective of higher fitness variants. Moreover, specific genetic "signatures" within MSM and HSX founder viruses were identified, with single polymorphisms within gp41 enriched among HSX viruses while more complex patterns, including clustered polymorphisms surrounding the CD4 binding site, were enriched in MSM viruses. While our findings do not support an influence of the mode of sexual transmission on the number of founder viruses, they do demonstrate that there are marked differences in the selection bottleneck that can significantly shape their genetic composition. This study illustrates the complex dynamics of the transmission bottleneck and reveals that distinct genetic bottleneck processes exist dependent upon the mode of HIV-1 transmission.
Frise, Rebecca; Bradley, Konrad; van Doremalen, Neeltje; Galiano, Monica; Elderfield, Ruth A.; Stilwell, Peter; Ashcroft, Jonathan W.; Fernandez-Alonso, Mirian; Miah, Shahjahan; Lackenby, Angie; Roberts, Kim L.; Donnelly, Christl A.; Barclay, Wendy S.
2016-01-01
Influenza viruses cause annual seasonal epidemics and occasional pandemics. It is important to elucidate the stringency of bottlenecks during transmission to shed light on mechanisms that underlie the evolution and propagation of antigenic drift, host range switching or drug resistance. The virus spreads between people by different routes, including through the air in droplets and aerosols, and by direct contact. By housing ferrets under different conditions, it is possible to mimic various routes of transmission. Here, we inoculated donor animals with a mixture of two viruses whose genomes differed by one or two reverse engineered synonymous mutations, and measured the transmission of the mixture to exposed sentinel animals. Transmission through the air imposed a tight bottleneck since most recipient animals became infected by only one virus. In contrast, a direct contact transmission chain propagated a mixture of viruses suggesting the dose transferred by this route was higher. From animals with a mixed infection of viruses that were resistant and sensitive to the antiviral drug oseltamivir, resistance was propagated through contact transmission but not by air. These data imply that transmission events with a looser bottleneck can propagate minority variants and may be an important route for influenza evolution. PMID:27430528
Sutton, Jolene T; Robertson, Bruce C; Grueber, Catherine E; Stanton, Jo-Ann L; Jamieson, Ian G
2013-08-01
The major histocompatibility complex (MHC) is integral to the vertebrate adaptive immune system. Characterizing diversity at functional MHC genes is invaluable for elucidating patterns of adaptive variation in wild populations, and is particularly interesting in species of conservation concern, which may suffer from reduced genetic diversity and compromised disease resilience. Here, we use next generation sequencing to investigate MHC class II B (MHCIIB) diversity in two sister taxa of New Zealand birds: South Island saddleback (SIS), Philesturnus carunculatus, and North Island saddleback (NIS), Philesturnus rufusater. These two species represent a passerine family outside the more extensively studied Passerida infraorder, and both have experienced historic bottlenecks. We examined exon 2 sequence data from populations that represent the majority of genetic diversity remaining in each species. A high level of locus co-amplification was detected, with from 1 to 4 and 3 to 12 putative alleles per individual for South and North Island birds, respectively. We found strong evidence for historic balancing selection in peptide-binding regions of putative alleles, and we identified a cluster combining non-classical loci and pseudogene sequences from both species, although no sequences were shared between the species. Fewer total alleles and fewer alleles per bird in SIS may be a consequence of their more severe bottleneck history; however, overall nucleotide diversity was similar between the species. Our characterization of MHCIIB diversity in two closely related species of New Zealand saddlebacks provides an important step in understanding the mechanisms shaping MHC diversity in wild, bottlenecked populations.
Directory of Open Access Journals (Sweden)
Catherine E Grueber
Toll-like receptors (TLRs) are an ancient family of genes encoding transmembrane proteins that bind pathogen-specific molecules and initiate both innate and adaptive aspects of the immune response. Our goal was to determine whether these genes show sufficient genetic diversity in a bottlenecked population to be a useful addition or alternative to the more commonly employed major histocompatibility complex (MHC) genotyping in a conservation genetics context. We amplified all known avian TLR genes in a severely bottlenecked population of New Zealand's Stewart Island robin (Petroica australis rakiura), for which reduced microsatellite diversity was previously observed. We genotyped 17-24 birds from a reintroduced island population (including the 12 founders) for nine genes, seven of which were polymorphic. We observed a total of 24 single-nucleotide polymorphisms overall, 15 of which were non-synonymous, representing up to five amino-acid variants at a locus. One locus (TLR1LB) showed evidence of past directional selection. Results also confirmed a passerine duplication of TLR7. The levels of TLR diversity that we observe are sufficient to justify their further use in addressing conservation genetic questions, even in bottlenecked populations.
A multilevel in space and energy solver for multigroup diffusion eigenvalue problems
Directory of Open Access Journals (Sweden)
Ben C. Yee
2017-09-01
In this paper, we present a new multilevel in space and energy diffusion (MSED) method for solving multigroup diffusion eigenvalue problems. The MSED method can be described as a PI scheme with three additional features: (1) a grey (one-group) diffusion equation used to efficiently converge the fission source and eigenvalue, (2) a space-dependent Wielandt shift technique used to reduce the number of PIs required, and (3) a multigrid-in-space linear solver for the linear solves required by each PI step. In MSED, the convergence of the solution of the multigroup diffusion eigenvalue problem is accelerated by performing work on lower-order equations with only one group and/or coarser spatial grids. Results from several Fourier analyses and a one-dimensional test code are provided to verify the efficiency of the MSED method and to justify the incorporation of the grey diffusion equation and the multigrid linear solver. These results highlight the potential efficiency of the MSED method as a solver for multidimensional multigroup diffusion eigenvalue problems, and they serve as a proof of principle for future work. Our ultimate goal is to implement the MSED method as an efficient solver for the two-dimensional/three-dimensional coarse mesh finite difference diffusion system in the Michigan parallel characteristics transport code. The work in this paper represents a necessary step towards that goal.
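The benefit of a Wielandt-type shift, feature (2) above, can be illustrated on a small generic eigenvalue problem. The sketch below (plain power iteration versus shifted-inverse iteration, in Python with NumPy) is a minimal stand-in, not the MSED implementation; the test matrix and the shift value are invented for illustration.

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_it=10000):
    # Plain power iteration: converges at the ratio of the two largest
    # eigenvalue magnitudes, which is slow when they are close.
    x = np.ones(A.shape[0])
    k = 0.0
    for n in range(1, max_it + 1):
        y = A @ x
        k_new = np.linalg.norm(y)
        x = y / k_new
        if abs(k_new - k) < tol:
            return k_new, n
        k = k_new
    return k, max_it

def wielandt_power_iteration(A, shift, tol=1e-10, max_it=10000):
    # Shifted-inverse iteration: eigenvalues of (A - shift*I)^-1 are
    # 1/(lam - shift), so a shift near the sought eigenvalue widens the
    # gap between dominant and subdominant modes and speeds convergence.
    n_dim = A.shape[0]
    B = np.linalg.inv(A - shift * np.eye(n_dim))
    x = np.ones(n_dim)
    mu = 0.0
    for n in range(1, max_it + 1):
        y = B @ x
        mu_new = x @ y / (x @ x)   # Rayleigh quotient estimate of 1/(lam-shift)
        x = y / np.linalg.norm(y)
        if abs(mu_new - mu) < tol:
            return shift + 1.0 / mu_new, n
        mu = mu_new
    return shift + 1.0 / mu, max_it

# Close subdominant eigenvalue (1.9 vs 2.0) makes plain PI slow.
A = np.diag([2.0, 1.9, 1.0, 0.5])
k_pi, n_pi = power_iteration(A)
k_ws, n_ws = wielandt_power_iteration(A, shift=2.1)
```

Both runs recover the dominant eigenvalue 2.0, but the shifted iteration needs far fewer sweeps, which is the effect the space-dependent Wielandt shift exploits at scale.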
High-performance small-scale solvers for linear Model Predictive Control
DEFF Research Database (Denmark)
Frison, Gianluca; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd
2014-01-01
, with the two main research areas of explicit MPC and tailored on-line MPC. State-of-the-art solvers in this second class can outperform optimized linear-algebra libraries (BLAS) only for very small problems, and do not explicitly exploit the hardware capabilities, relying on compilers for that. This approach...
Efficient Implementation of Solvers for Linear Model Predictive Control on Embedded Devices
DEFF Research Database (Denmark)
Frison, Gianluca; Kwame Minde Kufoalor, D.; Imsland, Lars
2014-01-01
This paper proposes a novel approach for the efficient implementation of solvers for linear MPC on embedded devices. The main focus is to explain in detail the approach used to optimize the linear algebra for selected low-power embedded devices, and to show how the high-performance implementation...
A wavelet-based PWTD algorithm-accelerated time domain surface integral equation solver
Liu, Yang
2015-10-26
© 2015 IEEE. The multilevel plane-wave time-domain (PWTD) algorithm allows for fast and accurate analysis of transient scattering from, and radiation by, electrically large and complex structures. When used in tandem with marching-on-in-time (MOT)-based surface integral equation (SIE) solvers, it reduces the computational and memory costs of transient analysis from equation and equation to equation and equation, respectively, where Nt and Ns denote the number of temporal and spatial unknowns (Ergin et al., IEEE Trans. Antennas Mag., 41, 39-52, 1999). In the past, PWTD-accelerated MOT-SIE solvers have been applied to transient problems involving half a million spatial unknowns (Shanker et al., IEEE Trans. Antennas Propag., 51, 628-641, 2003). Recently, a scalable parallel PWTD-accelerated MOT-SIE solver that leverages a hierarchical parallelization strategy has been developed and successfully applied to transient problems involving ten million spatial unknowns (Liu et al., in URSI Digest, 2013). We further enhanced the capabilities of this solver by implementing a compression scheme based on local cosine wavelet bases (LCBs) that exploits the sparsity in the temporal dimension (Liu et al., in URSI Digest, 2014). Specifically, the LCB compression scheme was used to reduce the memory requirement of the PWTD ray data and the computational cost of operations in the PWTD translation stage.
Computational cost estimates for parallel shared memory isogeometric multi-frontal solvers
Woźniak, Maciej; Kuźnik, Krzysztof M.; Paszyński, Maciej R.; Calo, Victor M.; Pardo, D.
2014-06-01
In this paper we present computational cost estimates for parallel shared memory isogeometric multi-frontal solvers. The estimates show that the ideal isogeometric shared memory parallel direct solver scales as O(p^2 log(N/p)) for one-dimensional problems, O(N p^2) for two-dimensional problems, and O(N^{4/3} p^2) for three-dimensional problems, where N is the number of degrees of freedom and p is the polynomial order of approximation. The computational costs of the shared memory parallel isogeometric direct solver are compared with those of the sequential isogeometric direct solver, the latter being equal to O(N p^2) for the one-dimensional case, O(N^{1.5} p^3) for the two-dimensional case, and O(N^2 p^3) for the three-dimensional case. The shared memory version thus significantly reduces the cost with respect to both N and p. Theoretical estimates are compared with numerical experiments performed with linear, quadratic, cubic, quartic, and quintic B-splines, in one and two spatial dimensions. © 2014 Elsevier Ltd. All rights reserved.
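Treating the estimates above as order-of-magnitude formulas (all constants dropped), the sequential and parallel costs can be compared directly. The helper below is a hypothetical sketch based only on the exponents quoted in the abstract; the problem size and polynomial order are invented for illustration.

```python
import math

def sequential_cost(N, p, d):
    # Sequential multi-frontal direct solver estimates (constants dropped):
    # O(N p^2), O(N^1.5 p^3), O(N^2 p^3) for d = 1, 2, 3.
    return {1: N * p**2, 2: N**1.5 * p**3, 3: N**2 * p**3}[d]

def parallel_cost(N, p, d):
    # Ideal shared-memory parallel estimates:
    # O(p^2 log(N/p)), O(N p^2), O(N^(4/3) p^2) for d = 1, 2, 3.
    return {1: p**2 * math.log(N / p), 2: N * p**2, 3: N ** (4 / 3) * p**2}[d]

# Hypothetical problem: 10^6 degrees of freedom, cubic B-splines (p = 3).
speedups = {d: sequential_cost(10**6, 3, d) / parallel_cost(10**6, 3, d)
            for d in (1, 2, 3)}
```

Under these formulas the ideal parallel solver is cheaper in every dimension, with the largest asymptotic gain in 1D, where the N-dependence drops to a logarithm.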
A fast mass spring model solver for high-resolution elastic objects
Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian
2017-03-01
Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and lack of mechanical realism for a surface geometry model, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells as cages through the mean value coordinate method to reflect its internal structure and mechanics properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which can make the fast mass spring model solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual realism and physical fidelity, which has great potential for applications in computer animation.
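The substitution of a conjugate gradient method for Cholesky decomposition, described above, can be sketched in a few lines. The code below is a generic CG solver applied to a toy symmetric positive-definite system standing in for the global-step matrix of a fast mass spring solver; it is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_it=1000):
    # Plain conjugate gradients for a symmetric positive-definite system
    # A x = b, the role played by the global-step matrix in a fast mass
    # spring method. Unlike a Cholesky factorization, CG needs only
    # matrix-vector products, which parallelize well on a GPU.
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_it):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy SPD system standing in for the global-step matrix (invented sizes).
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)   # SPD and well conditioned
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
```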
Integrated tokamak modelling with the fast-ion Fokker–Planck solver adapted for transient analyses
International Nuclear Information System (INIS)
Toma, M; Hamamatsu, K; Hayashi, N; Honda, M; Ide, S
2015-01-01
Integrated tokamak modelling that enables the simulation of an entire discharge period is indispensable for designing advanced tokamak plasmas. For this purpose, we extend the integrated code TOPICS to make it more suitable for transient analyses in the fast-ion part. The fast-ion Fokker–Planck solver is integrated into TOPICS at the same level as the bulk transport solver so that the time evolutions of the fast ions and the bulk plasma are consistent with each other as well as with the equilibrium magnetic field. The fast-ion solver simultaneously handles neutral beam-injected ions and alpha particles. Parallelisation of the fast-ion solver, in addition to its computational lightness owing to a dimensional reduction in the phase space, enables transient analyses for long periods on the order of tens of seconds. The fast-ion Fokker–Planck calculation is compared with an orbit-following Monte Carlo calculation and confirmed to be in good agreement. The integrated code is applied to ramp-up simulations for JT-60SA and ITER to confirm its capability and effectiveness in transient analyses. In the integrated simulations, the coupled evolution of the fast ions, plasma profiles, and equilibrium magnetic fields is presented. In addition, the electric acceleration effect on fast ions is shown and discussed. (paper)
Experimental validation of a boundary element solver for exterior acoustic radiation problems
Visser, Rene; Nilsson, A.; Boden, H.
2003-01-01
The relation between harmonic structural vibrations and the corresponding acoustic radiation is given by the Helmholtz integral equation (HIE). To solve this integral equation a new solver (BEMSYS) based on the boundary element method (BEM) has been implemented. This numerical tool can be used for
Status for the two-dimensional Navier-Stokes solver EllipSys2D
DEFF Research Database (Denmark)
Bertagnolio, F.; Sørensen, Niels N.; Johansen, J.
2001-01-01
This report sets up an evaluation of the two-dimensional Navier-Stokes solver EllipSys2D in its present state. This code is used for blade aerodynamics simulations in the Aeroelastic Design group at Risø. Two airfoils are investigated by computing the flow at several angles of attack ranging from...
Hybrid direct and iterative solvers for h refined grids with singularities
Paszyński, Maciej R.; Paszyńska, Anna; Dalcin, Lisandro; Calo, Victor M.
2015-01-01
on top of it. The hybrid solver is applied for two or three dimensional grids automatically h refined towards point or edge singularities. The automatic refinement is based on the relative error estimations between the coarse and fine mesh solutions [2
A Comparison of Monte Carlo and Deterministic Solvers for keff and Sensitivity Calculations
Energy Technology Data Exchange (ETDEWEB)
Haeck, Wim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, Donald Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); White, Morgan Curtis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Saller, Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-12-12
Verification and validation of our solutions for calculating the neutron reactivity for nuclear materials is a key issue to address for many applications, including criticality safety, research reactors, power reactors, and nuclear security. Neutronics codes solve variations of the Boltzmann transport equation. The two main variants are Monte Carlo versus deterministic solutions, e.g. the MCNP [1] versus PARTISN [2] codes, respectively. There have been many studies over the decades that examined the accuracy of such solvers and the general conclusion is that when the problems are well-posed, either solver can produce accurate results. However, the devil is always in the details. The current study examines the issue of self-shielding and the stress it puts on deterministic solvers. Most Monte Carlo neutronics codes use continuous-energy descriptions of the neutron interaction data that are not subject to this effect. The issue of self-shielding occurs because of the discretisation of data used by the deterministic solutions. Multigroup data used in these solvers are the average cross section and scattering parameters over an energy range. Resonances in cross sections can occur that change the likelihood of interaction by one to three orders of magnitude over a small energy range. Self-shielding is the numerical effect that the average cross section in groups with strong resonances can be strongly affected as neutrons within that material are preferentially absorbed or scattered out of the resonance energies. This affects both the average cross section and the scattering matrix.
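The self-shielding effect described above can be illustrated numerically: averaging a resonant cross section with a flat flux versus a narrow-resonance (1/σ) flux gives very different group constants. All numbers below are invented for illustration and are not evaluated nuclear data.

```python
import numpy as np

# Toy resonance: a Lorentzian peak on a constant background (barns),
# on a uniform energy grid (eV). Purely illustrative values.
E = np.linspace(1.0, 100.0, 20001)
sigma = 10.0 + 5000.0 / (1.0 + ((E - 50.0) / 0.5) ** 2)

# Flat-flux (unshielded) group average: the resonance dominates.
flat_avg = sigma.mean()

# Narrow-resonance approximation: the flux dips as 1/sigma inside the
# resonance, so the resonance energies contribute far less weight and
# the effective group cross section drops toward the background value.
phi = 1.0 / sigma
shielded_avg = (sigma * phi).sum() / phi.sum()
```

The shielded average lands near the 10-barn background while the flat-flux average is an order of magnitude larger, which is the numerical effect that stresses multigroup deterministic solvers but not continuous-energy Monte Carlo.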
Development of a CANDU Moderator Analysis Model; Based on Coupled Solver
International Nuclear Information System (INIS)
Yoon, Churl; Park, Joo Hwan
2006-01-01
A CFD model for predicting the CANDU-6 moderator temperature has been developed over several years at KAERI, based on CFX-4. This analytic model (CFX4-CAMO) has some strength in the modeling of hydraulic resistance in the core region and in the treatment of the heat source term in the energy equations. However, convergence difficulties and slow computing speed reveal the limitations of this model, because the CFX-4 code adopts a segregated solver to solve the strongly coupled governing equations. Compared to CFX-4 with its segregated solver, CFX-10 adopts a highly efficient and robust coupled solver. Before December 2005, when CFX-10 was distributed, the previous version of CFX-10 (the CFX-5 series) also adopted a coupled solver but did not have any capability to apply porous media approaches correctly. In this study, the developed moderator analysis model based on CFX-4 (CFX4-CAMO) is transformed into a new moderator analysis model based on CFX-10. The new model is examined and the results are compared to the former
A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments
International Nuclear Information System (INIS)
Fisicaro, G.; Goedecker, S.; Genovese, L.; Andreussi, O.; Marzari, N.
2016-01-01
The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively within some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
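A minimal sketch of the kind of discretized problem such a solver targets is a 1D generalized Poisson equation with a spatially varying permittivity. The example below uses a direct solve for brevity rather than preconditioned conjugate gradients, and every coefficient (cavity width, permittivities, charge profile) is an illustrative assumption, not data from the cited work.

```python
import numpy as np

# 1D generalized Poisson: d/dx( eps(x) dphi/dx ) = -rho(x),
# with phi(0) = phi(L) = 0 and a low-permittivity cavity in a
# high-permittivity solvent (illustrative values only).
n, L = 200, 1.0
x = np.linspace(0.0, L, n + 2)
h = x[1] - x[0]
eps = np.where(np.abs(x - 0.5) < 0.2, 1.0, 80.0)   # cavity vs. water-like
rho = np.exp(-((x - 0.5) / 0.05) ** 2)             # localized charge

# Finite-difference assembly on interior nodes with face-centred
# permittivities; the resulting matrix is symmetric positive definite,
# which is what makes (preconditioned) CG applicable.
eps_f = 0.5 * (eps[:-1] + eps[1:])                 # eps at cell faces
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = (eps_f[i] + eps_f[i + 1]) / h**2
    if i > 0:
        A[i, i - 1] = -eps_f[i] / h**2
    if i < n - 1:
        A[i, i + 1] = -eps_f[i + 1] / h**2
b = rho[1:-1]
phi = np.zeros(n + 2)
phi[1:-1] = np.linalg.solve(A, b)                  # stand-in for PCG
```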
Modelling dynamic liquid-gas systems: Extensions to the volume-of-fluid solver
CSIR Research Space (South Africa)
Heyns, Johan A
2013-06-01
This study presents the extension of the volume-of-fluid solver, interFoam, for improved accuracy and efficiency when modelling dynamic liquid-gas systems. Examples of these include the transportation of liquids, such as in the case of fuel carried...
VDJSeq-Solver: in silico V(D)J recombination detection tool.
Directory of Open Access Journals (Sweden)
Giulia Paciello
In this paper we present VDJSeq-Solver, a methodology and tool to identify clonal lymphocyte populations from paired-end RNA Sequencing reads derived from the sequencing of mRNA neoplastic cells. The tool detects the main clone that characterises the tissue of interest by recognizing the most abundant V(D)J rearrangement among the existing ones in the sample under study. The exact sequence of the clone identified is capable of accounting for the modifications introduced by the enzymatic processes. The proposed tool overcomes limitations of currently available lymphocyte rearrangements recognition methods, working on a single sequence at a time, that are not applicable to high-throughput sequencing data. In this work, VDJSeq-Solver has been applied to correctly detect the main clone and identify its sequence on five Mantle Cell Lymphoma samples; then the tool has been tested on twelve Diffuse Large B-Cell Lymphoma samples. In order to comply with the privacy, ethics and intellectual property policies of the University Hospital and the University of Verona, data is available upon request to supporto.utenti@ateneo.univr.it after signing a mandatory Materials Transfer Agreement. VDJSeq-Solver JAVA/Perl/Bash software implementation is free and available at http://eda.polito.it/VDJSeq-Solver/.
Effects of high-frequency damping on iterative convergence of implicit viscous solver
Nishikawa, Hiroaki; Nakashima, Yoshitaka; Watanabe, Norihiko
2017-11-01
This paper discusses effects of high-frequency damping on iterative convergence of an implicit defect-correction solver for viscous problems. The study targets a finite-volume discretization with a one parameter family of damped viscous schemes. The parameter α controls high-frequency damping: zero damping with α = 0, and larger damping for larger α (> 0). Convergence rates are predicted for a model diffusion equation by a Fourier analysis over a practical range of α. It is shown that the convergence rate attains its minimum at α = 1 on regular quadrilateral grids, and deteriorates for larger values of α. A similar behavior is observed for regular triangular grids. In both quadrilateral and triangular grids, the solver is predicted to diverge for α smaller than approximately 0.5. Numerical results are shown for the diffusion equation and the Navier-Stokes equations on regular and irregular grids. The study suggests that α = 1 and 4/3 are suitable values for robust and efficient computations, and α = 4 / 3 is recommended for the diffusion equation, which achieves higher-order accuracy on regular quadrilateral grids. Finally, a Jacobian-Free Newton-Krylov solver with the implicit solver (a low-order Jacobian approximately inverted by a multi-color Gauss-Seidel relaxation scheme) used as a variable preconditioner is recommended for practical computations, which provides robust and efficient convergence for a wide range of α.
A heterogeneous CPU+GPU Poisson solver for space charge calculations in beam dynamics studies
Energy Technology Data Exchange (ETDEWEB)
Zheng, Dawei; Rienen, Ursula van [University of Rostock, Institute of General Electrical Engineering (Germany)
2016-07-01
In beam dynamics studies in accelerator physics, space charge plays a central role in the low energy regime of an accelerator. Numerical space charge calculations are required both in the design phase and in the operation of the machines. Due to its efficiency, mostly the Particle-In-Cell (PIC) method is chosen for the space charge calculation. Then, the solution of Poisson's equation for the charge distribution in the rest frame is the most prominent part of the solution process. The Poisson solver directly affects the accuracy of the self-field applied on the charged particles when the equation of motion is solved in the laboratory frame. As the Poisson solver consumes the major part of the computing time in most simulations, it has to be as fast as possible since it has to be carried out once per time step. In this work, we demonstrate a novel heterogeneous CPU+GPU routine for the Poisson solver. The novel solver also benefits from our new research results on the utilization of a discrete cosine transform within Hockney and Eastwood's classical convolution routine.
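Although the cited routine uses Hockney and Eastwood's zero-padded free-space convolution with a discrete cosine transform, the core spectral idea can be sketched with a plain periodic FFT Poisson solve. The example below is an illustrative stand-in with an invented source term, not the CPU+GPU solver itself.

```python
import numpy as np

# Periodic 2D Poisson solve, lap(phi) = -rho, via FFT. For the
# bandlimited source chosen here the spectral solution is exact:
# phi = rho / ((2*pi)^2 + (4*pi)^2).
n, L = 64, 1.0
x = np.arange(n) * (L / n)
X, Y = np.meshgrid(x, x, indexing="ij")
rho = np.sin(2 * np.pi * X) * np.sin(4 * np.pi * Y)   # zero-mean source

k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                        # avoid dividing the mean mode by zero
rho_hat = np.fft.fft2(rho)
phi_hat = rho_hat / k2                # -lap -> k^2 in Fourier space
phi_hat[0, 0] = 0.0                   # fix the free additive constant
phi = np.real(np.fft.ifft2(phi_hat))
```

Hockney's trick extends this periodic solve to open (free-space) boundary conditions by doubling the grid and convolving with the free-space Green's function, which is where the transform choice (FFT versus DCT) enters.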
A parallel direct solver for the self-adaptive hp Finite Element Method
Paszyński, Maciej R.; Pardo, David; Torres-Verdí n, Carlos; Demkowicz, Leszek F.; Calo, Victor M.
2010-01-01
measurement simulations problems. We measure the execution time and memory usage of the solver over a large regular mesh with 1.5 million degrees of freedom as well as on the highly non-regular mesh, generated by the self-adaptive h p-FEM, with finite elements
2017-11-13
finite element flow solver JENRE developed at the Naval Research Laboratory. The Crocco-Busemann relation is used to account for the compressibility. In this wall-model implementation, the first
Seo, Jongmin; Schiavazzi, Daniele; Marsden, Alison
2017-11-01
Cardiovascular simulations are increasingly used in clinical decision making, surgical planning, and disease diagnostics. Patient-specific modeling and simulation typically proceeds through a pipeline from anatomic model construction using medical image data to blood flow simulation and analysis. To provide confidence intervals on simulation predictions, we use an uncertainty quantification (UQ) framework to analyze the effects of numerous uncertainties that stem from clinical data acquisition, modeling, material properties, and boundary condition selection. However, UQ poses a computational challenge requiring multiple evaluations of the Navier-Stokes equations in complex 3-D models. To achieve efficiency in UQ problems with many function evaluations, we implement and compare a range of iterative linear solver and preconditioning techniques in our flow solver. We then discuss applications to patient-specific cardiovascular simulation and how the problem/boundary condition formulation in the solver affects the selection of the most efficient linear solver. Finally, we discuss performance improvements in the context of uncertainty propagation. Support from National Institute of Health (R01 EB018302) is greatly appreciated.
Computational cost estimates for parallel shared memory isogeometric multi-frontal solvers
Woźniak, Maciej; Kuźnik, Krzysztof M.; Paszyński, Maciej R.; Calo, Victor M.; Pardo, D.
2014-01-01
In this paper we present computational cost estimates for parallel shared memory isogeometric multi-frontal solvers. The estimates show that the ideal isogeometric shared memory parallel direct solver scales as O(p^2 log(N/p)) for one dimensional problems, O(Np^2) for two dimensional problems, and O(N^(4/3)p^2) for three dimensional problems, where N is the number of degrees of freedom and p is the polynomial order of approximation. The computational costs of the shared memory parallel isogeometric direct solver are compared with those of the sequential isogeometric direct solver, the latter being equal to O(Np^2) for the one dimensional case, O(N^1.5 p^3) for the two dimensional case, and O(N^2 p^3) for the three dimensional case. The shared memory version thus significantly reduces the computational cost in terms of both N and p. Theoretical estimates are compared with numerical experiments performed with linear, quadratic, cubic, quartic, and quintic B-splines, in one and two spatial dimensions. © 2014 Elsevier Ltd. All rights reserved.
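The asymptotic estimates in this abstract can be turned into a back-of-the-envelope comparison. The sketch below is not from the paper: all leading constants are dropped, so only the relative trends between the sequential and shared-memory costs are meaningful.

```python
# Illustrative comparison of the theoretical cost estimates quoted above.
# N = number of degrees of freedom, p = B-spline polynomial order.
import math

def sequential_cost(N, p, dim):
    # O(N p^2), O(N^1.5 p^3), O(N^2 p^3) for 1D, 2D, 3D respectively
    return {1: N * p**2, 2: N**1.5 * p**3, 3: N**2 * p**3}[dim]

def parallel_cost(N, p, dim):
    # O(p^2 log(N/p)), O(N p^2), O(N^(4/3) p^2) for 1D, 2D, 3D respectively
    return {1: p**2 * math.log(N / p),
            2: N * p**2,
            3: N**(4 / 3) * p**2}[dim]

# Example: ideal shared-memory speedup for a 2D problem with cubic B-splines
N, p = 10**6, 3
speedup_2d = sequential_cost(N, p, 2) / parallel_cost(N, p, 2)  # = sqrt(N) * p
```

For the 2D case the ratio simplifies to sqrt(N) * p, which shows why the parallel estimate improves with both problem size and polynomial order.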
An Analysis of Elliptic Grid Generation Techniques Using an Implicit Euler Solver.
1986-06-09
at M = 0.90 and α = 0° ... when interpolating for the radius of curvature (r), one expects the computed shock strength ... a second examination is ... solver to yield accurate second-order ... solutions. References: ... Finite Difference Methods in Computational Fluid Dynamics, to be published
Determining the Optimal Values of Exponential Smoothing Constants--Does Solver Really Work?
Ravinder, Handanhal V.
2013-01-01
A key issue in exponential smoothing is the choice of the values of the smoothing constants used. One approach that is becoming increasingly popular in introductory management science and operations management textbooks is the use of Solver, an Excel-based non-linear optimizer, to identify values of the smoothing constants that minimize a measure…
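As an illustration of what Solver is asked to optimize in this setting, the sketch below picks the simple exponential smoothing constant alpha that minimizes the sum of squared one-step-ahead forecast errors. The demand series is hypothetical, and a plain grid search stands in for Excel's non-linear optimizer.

```python
# Simple exponential smoothing: choose alpha to minimize the sum of squared
# one-step-ahead forecast errors (SSE), the criterion Solver typically targets.
def sse(alpha, series):
    forecast = series[0]            # initialize forecast with first observation
    total = 0.0
    for actual in series[1:]:
        total += (actual - forecast) ** 2
        forecast = alpha * actual + (1 - alpha) * forecast
    return total

def best_alpha(series, step=0.01):
    # Grid search over (0, 1); Solver would use a gradient-based NLP method.
    grid = [i * step for i in range(1, int(1 / step))]
    return min(grid, key=lambda a: sse(a, series))

demand = [120, 132, 125, 140, 152, 148, 160, 171, 165, 180]  # made-up data
alpha = best_alpha(demand)
```

Because the example series trends upward, the optimal alpha lands near the high end of the grid, mirroring the behavior the article discusses.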
Mathematical Tasks without Words and Word Problems: Perceptions of Reluctant Problem Solvers
Holbert, Sydney Margaret
2013-01-01
This qualitative research study used a multiple, holistic case study approach (Yin, 2009) to explore the perceptions of reluctant problem solvers related to mathematical tasks without words and word problems. Participants were given a choice of working a mathematical task without words or a word problem during four problem-solving sessions. Data…
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
International Nuclear Information System (INIS)
Desai, Ajit; Pettit, Chris; Poirel, Dominique; Sarkar, Abhijit
2017-01-01
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
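One ingredient the abstract highlights, the sparse matrix-vector product, can be sketched independently of the authors' implementation. The CSR layout and the tiny 3x3 matrix below are illustrative only; the point is that the multiply touches only stored nonzeros rather than all n*n entries.

```python
# CSR (compressed sparse row) matrix-vector product: the memory- and
# flop-saving kernel at the heart of the sparse iterative solvers described.
def csr_matvec(data, indices, indptr, x):
    y = []
    for row in range(len(indptr) - 1):
        s = 0.0
        for idx in range(indptr[row], indptr[row + 1]):  # nonzeros of this row
            s += data[idx] * x[indices[idx]]
        y.append(s)
    return y

# Example matrix:  [[2, 0, 1],
#                   [0, 3, 0],
#                   [4, 0, 5]]
data    = [2.0, 1.0, 3.0, 4.0, 5.0]   # nonzero values, row by row
indices = [0, 2, 1, 0, 2]             # column index of each nonzero
indptr  = [0, 2, 3, 5]                # row start offsets into data/indices
y = csr_matvec(data, indices, indptr, [1.0, 1.0, 1.0])
```

For a matrix with nnz nonzeros this costs O(nnz) operations and storage, which is what makes billion-unknown Krylov iterations feasible at all.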
Graph Grammar-Based Multi-Frontal Parallel Direct Solver for Two-Dimensional Isogeometric Analysis
Kuźnik, Krzysztof
2012-06-02
This paper introduces a graph grammar based model for developing a multi-thread multi-frontal parallel direct solver for the two dimensional isogeometric finite element method. Execution of the solver algorithm has been expressed as a sequence of graph grammar productions. At the beginning, productions construct the elimination tree with leaves corresponding to finite elements. The following sequence of graph grammar productions generates element frontal matrices at leaf nodes, merges matrices at parent nodes and eliminates rows corresponding to fully assembled degrees of freedom. Finally, there are graph grammar productions responsible for the root problem solution and recursive backward substitutions. Expressing the solver algorithm by graph grammar productions allows us to explore the concurrency of the algorithm. The graph grammar productions are grouped into sets of independent tasks that can be executed concurrently. The resulting concurrent multi-frontal solver algorithm is implemented and tested on an NVIDIA GPU, providing O(N log N) execution time complexity where N is the number of degrees of freedom. We have confirmed this complexity by solving up to 1 million degrees of freedom with a 448-core GPU.
A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments
Energy Technology Data Exchange (ETDEWEB)
Fisicaro, G., E-mail: giuseppe.fisicaro@unibas.ch; Goedecker, S. [Department of Physics, University of Basel, Klingelbergstrasse 82, 4056 Basel (Switzerland); Genovese, L. [University of Grenoble Alpes, CEA, INAC-SP2M, L-Sim, F-38000 Grenoble (France); Andreussi, O. [Institute of Computational Science, Università della Svizzera Italiana, Via Giuseppe Buffi 13, CH-6904 Lugano (Switzerland); Theory and Simulations of Materials (THEOS) and National Centre for Computational Design and Discovery of Novel Materials (MARVEL), École Polytechnique Fédérale de Lausanne, Station 12, CH-1015 Lausanne (Switzerland); Marzari, N. [Theory and Simulations of Materials (THEOS) and National Centre for Computational Design and Discovery of Novel Materials (MARVEL), École Polytechnique Fédérale de Lausanne, Station 12, CH-1015 Lausanne (Switzerland)
2016-01-07
The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
WIENER-HOPF SOLVER WITH SMOOTH PROBABILITY DISTRIBUTIONS OF ITS COMPONENTS
Directory of Open Access Journals (Sweden)
Mr. Vladimir A. Smagin
2016-12-01
The Wiener-Hopf solver with smooth probability distributions of its components is presented. The method is based on hyper-delta approximations of the initial distributions. The use of the Fourier series transformation and the characteristic function allows working with the random variable method concentrated on the transversal axis of absc.
A coupled systems code-CFD MHD solver for fusion blanket design
Energy Technology Data Exchange (ETDEWEB)
Wolfendale, Michael J., E-mail: m.wolfendale11@imperial.ac.uk; Bluck, Michael J.
2015-10-15
Highlights: • A coupled systems code-CFD MHD solver for fusion blanket applications is proposed. • Development of a thermal hydraulic systems code with MHD capabilities is detailed. • A code coupling methodology based on the use of TCP socket communications is detailed. • Validation cases are briefly discussed for the systems code and coupled solver. - Abstract: The network of flow channels in a fusion blanket can be modelled using a 1D thermal hydraulic systems code. For more complex components such as junctions and manifolds, the simplifications employed in such codes can become invalid, requiring more detailed analyses. For magnetic confinement reactor blanket designs using a conducting fluid as coolant/breeder, the difficulties in flow modelling are particularly severe due to MHD effects. Blanket analysis is an ideal candidate for the application of a code coupling methodology, with a thermal hydraulic systems code modelling portions of the blanket amenable to 1D analysis, and CFD providing detail where necessary. A systems code, MHD-SYS, has been developed and validated against existing analyses. The code shows good agreement in the prediction of MHD pressure loss and the temperature profile in the fluid and wall regions of the blanket breeding zone. MHD-SYS has been coupled to an MHD solver developed in OpenFOAM and the coupled solver validated for test geometries in preparation for modelling blanket systems.
AQUASOL: An efficient solver for the dipolar Poisson-Boltzmann-Langevin equation.
Koehl, Patrice; Delarue, Marc
2010-02-14
The Poisson-Boltzmann (PB) formalism is among the most popular approaches to modeling the solvation of molecules. It assumes a continuum model for water, leading to a dielectric permittivity that only depends on position in space. In contrast, the dipolar Poisson-Boltzmann-Langevin (DPBL) formalism represents the solvent as a collection of orientable dipoles with nonuniform concentration; this leads to a nonlinear permittivity function that depends both on the position and on the local electric field at that position. The differences in the assumptions underlying these two models lead to significant differences in the equations they generate. The PB equation is a second order, elliptic, nonlinear partial differential equation (PDE). Its response coefficients correspond to the dielectric permittivity and are therefore constant within each subdomain of the system considered (i.e., inside and outside of the molecules considered). While the DPBL equation is also a second order, elliptic, nonlinear PDE, its response coefficients are nonlinear functions of the electrostatic potential. Many solvers have been developed for the PB equation; to our knowledge, none of these can be directly applied to the DPBL equation. The methods they use may adapt to the difference; their implementations, however, are PBE-specific. We adapted the PBE solver originally developed by Holst and Saied [J. Comput. Chem. 16, 337 (1995)] to the problem of solving the DPBL equation. This solver uses a truncated Newton method with a multigrid preconditioner. Numerical evidence suggests that it converges for the DPBL equation and that the convergence is superlinear. It is found, however, to be slow and memory-hungry for problems commonly encountered in computational biology and computational chemistry. To circumvent these problems, we propose two variants, a quasi-Newton solver based on a simplified, inexact Jacobian and an iterative self-consistent solver that is based directly on the PBE
International Nuclear Information System (INIS)
Na, Y. W.; Park, C. E.; Lee, S. Y.
2009-01-01
As a part of the Ministry of Knowledge Economy (MKE) project, 'Development of safety analysis codes for nuclear power plants', KOPEC has been developing the hydraulic solver code package applicable to the safety analyses of nuclear power plants (NPP's). The matrices of the hydraulic solver are usually sparse and may be asymmetric. In the earlier stage of this project, the typical direct matrix solver packages MA48 and MA28 had been tested as the matrix solver for the hydraulic solver code, SPACE. The selection was based on the reasonably reliable performance experience with their former version MA18 in the RELAP computer code. In the later stage of this project, iterative methodologies have been tested in the SPACE code. Among the few candidate iterative solution methodologies tested so far, the biconjugate gradient stabilization methodology (BICGSTAB) has shown the best performance in the applicability test and in the application to the SPACE code. Regardless of all the merits of using the direct solver packages, there are other advantages to tackling the iterative solution methodologies. The algorithm is much simpler and easier to handle. The potential problems related to the robustness of the iterative solution methodologies have been resolved by applying pre-conditioning methods adjusted and modified as appropriate to the application in the SPACE code. The application strategy of the conjugate gradient method was introduced in detail by Shewchuk, Golub and Saad in the mid-1990s. The application of this methodology to nuclear engineering in Korea started about the same time and is still going on, and there are quite a few examples of application to neutronics. Besides, Yang introduced a conjugate gradient method programmed in the C++ language. The purpose of this study is to assess the performance and behavior of the iterative solution methodology compared to those of the direct solution methodology, which is still preferred due to its robustness and reliability. The
Uysal, Ismail Enes
2016-10-01
Plasmonic structures are utilized in many applications ranging from bio-medicine to solar energy generation and transfer. Numerical schemes capable of solving equations of classical electrodynamics have been the method of choice for characterizing scattering properties of such structures. However, as dimensions of these plasmonic structures reduce to nanometer scale, quantum mechanical effects start to appear. These effects cannot be accurately modeled by available classical numerical methods. One of these quantum effects is the tunneling, which is observed when two structures are located within a sub-nanometer distance of each other. At these small distances electrons "jump" from one structure to another and introduce a path for electric current to flow. Classical equations of electrodynamics and the schemes used for solving them do not account for this additional current path. This limitation can be lifted by introducing an auxiliary tunnel with material properties obtained using quantum models and applying a classical solver to the structures connected by this auxiliary tunnel. Early work on this topic focused on quantum models that are generated using a simple one-dimensional wave function to find the tunneling probability and assume a simple Drude model for the permittivity of the tunnel. These tunnel models are then used together with a classical frequency domain solver. In this thesis, a time domain surface integral equation solver for quantum corrected analysis of transient plasmonic interactions is proposed. This solver has several advantages: (i) As opposed to frequency domain solvers, it provides results at a broad band of frequencies with a single simulation. (ii) As opposed to differential equation solvers, it only discretizes surfaces (reducing number of unknowns), enforces the radiation condition implicitly (increasing the accuracy), and allows for time step selection independent of spatial discretization (increasing efficiency). The quantum model
Directory of Open Access Journals (Sweden)
Jürgen Schmidhuber
2013-06-01
Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. The novel algorithmic framework POWERPLAY (2011) continually searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Wow-effects are achieved by continually making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. POWERPLAY's search orders candidate pairs of tasks and solver modifications by their conditional computational (time & space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. The computational costs of validating new tasks need not grow with task repertoire size. POWERPLAY's ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Goedel's sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem solving procedures can be exploited by a parallel search for solutions to additional externally posed tasks. POWERPLAY may be viewed as a greedy but practical implementation of basic principles of creativity. A first experimental analysis can be found in separate papers [58, 56, 57].
Linear systems solvers - recent developments and implications for lattice computations
International Nuclear Information System (INIS)
Frommer, A.
1996-01-01
We review the numerical analysis' understanding of Krylov subspace methods for solving (non-hermitian) systems of equations and discuss its implications for lattice gauge theory computations using the example of the Wilson fermion matrix. Our thesis is that mature methods like QMR, BiCGStab or restarted GMRES are close to optimal for the Wilson fermion matrix. Consequently, preconditioning appears to be the crucial issue for further improvements. (orig.)
Fauteux-Daniel, Sébastien; Larouche, Ariane; Calderon, Virginie; Boulais, Jonathan; Béland, Chanel; Ransy, Doris G; Boucher, Marc; Lamarre, Valérie; Lapointe, Normand; Boucoiran, Isabelle; Le Campion, Armelle; Soudeyns, Hugo
2017-12-01
Hepatitis C virus (HCV) can be transmitted from mother to child during pregnancy and childbirth. However, the timing and precise biological mechanisms that are involved in this process are incompletely understood, as are the determinants that influence transmission of particular HCV variants. Here we report results of a longitudinal assessment of HCV quasispecies diversity and composition in 5 cases of vertical HCV transmission, including 3 women coinfected with human immunodeficiency virus type 1 (HIV-1). The population structure of HCV variant spectra based on E2 envelope gene sequences (nucleotide positions 1491 to 1787), including hypervariable regions 1 and 2, was characterized using next-generation sequencing and median-joining network analysis. Compatible with a loose transmission bottleneck, larger numbers of shared HCV variants were observed in the presence of maternal coinfection. Coalescent Bayesian Markov chain Monte Carlo simulations revealed median times of transmission between 24.9 weeks and 36.1 weeks of gestation, with some confidence intervals ranging into the 1st trimester, considerably earlier than previously thought. Using recombinant autologous HCV pseudoparticles, differences were uncovered in HCV-specific antibody responses between coinfected mothers and mothers infected with HCV alone, in whom generalized absence of neutralization was observed. Finally, shifts in HCV quasispecies composition were seen in children around 1 year of age, compatible with the disappearance of passively transferred maternal immunoglobulins and/or the development of HCV-specific humoral immunity. Taken together, these results provide insights into the timing, dynamics, and biologic mechanisms involved in vertical HCV transmission and inform preventative strategies. IMPORTANCE Although it is well established that hepatitis C virus (HCV) can be transmitted from mother to child, the manner and the moment at which transmission operates have been the subject of
Poisson solvers for self-consistent multi-particle simulations
International Nuclear Information System (INIS)
Qiang, J; Paret, S
2014-01-01
Self-consistent multi-particle simulation plays an important role in studying beam-beam effects and space charge effects in high-intensity beams. The Poisson equation has to be solved at each time-step based on the particle density distribution in the multi-particle simulation. In this paper, we review a number of numerical methods that can be used to solve the Poisson equation efficiently. The computational complexity of those numerical methods will be O(N log(N)) or O(N) instead of O(N2), where N is the total number of grid points used to solve the Poisson equation
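A minimal instance of the O(N log(N)) class of methods the review covers is an FFT-based solve of the 1D periodic Poisson problem u'' = f. The pure-Python radix-2 FFT below is only there to keep the sketch self-contained; in practice any FFT library would be used.

```python
# FFT-based Poisson solve on [0, 2*pi) with periodic boundary conditions.
# In Fourier space, u'' = f becomes -k^2 * u_hat[k] = f_hat[k].
import cmath, math

def fft(a, inverse=False):
    # Recursive Cooley-Tukey FFT; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return a[:]
    sign = 1 if inverse else -1
    even = fft(a[0::2], inverse)
    odd = fft(a[1::2], inverse)
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * math.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def poisson_periodic(f):
    n = len(f)
    fhat = fft([complex(v) for v in f])
    uhat = [0j] * n                        # k = 0 mode fixed to 0 (zero mean)
    for k in range(1, n):
        kk = k if k <= n // 2 else k - n   # signed wavenumber
        uhat[k] = fhat[k] / (-kk * kk)
    return [v.real / n for v in fft(uhat, inverse=True)]  # unnormalized IFFT

n = 64
x = [2 * math.pi * i / n for i in range(n)]
f = [-math.sin(xi) for xi in x]            # u'' = -sin(x)  ->  u = sin(x)
u = poisson_periodic(f)
```

Each solve costs two FFTs plus an O(N) scaling in Fourier space, which is exactly the O(N log(N)) complexity quoted in the abstract, versus O(N^2) for a naive dense approach.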
International Nuclear Information System (INIS)
Devals, C; Zhang, Y; Dompierre, J; Guibault, F; Vu, T C; Mangani, L
2014-01-01
Nowadays, computational fluid dynamics is commonly used by design engineers to evaluate and compare losses in hydraulic components as it is less expensive and less time consuming than model tests. For that purpose, an automatic tool for casing and distributor analysis will be presented in this paper. An in-house mesh generator and a Reynolds Averaged Navier-Stokes equation solver using the standard k-ω SST turbulence model will be used to perform all computations. Two solvers based on the C++ OpenFOAM library will be used and compared to a commercial solver. The performance of the new fully coupled block solver developed by the University of Lucerne and Andritz will be compared to the standard 1.6ext segregated simpleFoam solver and to a commercial solver. In this study, relative comparisons of different geometries of casing and distributor will be performed. The present study is thus aimed at validating the block solver and the tool chain and providing design engineers with a faster and more reliable analysis tool that can be integrated into their design process
The Quantum Mechanics Solver: How to Apply Quantum Theory to Modern Physics, 2nd edition
International Nuclear Information System (INIS)
Robbin, J M
2007-01-01
The hallmark of a good book of problems is that it allows you to become acquainted with an unfamiliar topic quickly and efficiently. The Quantum Mechanics Solver fits this description admirably. The book contains 27 problems based mainly on recent experimental developments, including neutrino oscillations, tests of Bell's inequality, Bose-Einstein condensates, and laser cooling and trapping of atoms, to name a few. Unlike many collections, in which problems are designed around a particular mathematical method, here each problem is devoted to a small group of phenomena or experiments. Most problems contain experimental data from the literature, and readers are asked to estimate parameters from the data, or compare theory to experiment, or both. Standard techniques (e.g., degenerate perturbation theory, addition of angular momentum, asymptotics of special functions) are introduced only as they are needed. The style is closer to a non-specialist seminar rather than an undergraduate lecture. The physical models are kept simple; the emphasis is on cultivating conceptual and qualitative understanding (although in many of the problems, the simple models fit the data quite well). Some less familiar theoretical techniques are introduced, e.g. a variational method for lower (not upper) bounds on ground-state energies for many-body systems with two-body interactions, which is then used to derive a surprisingly accurate relation between baryon and meson masses. The exposition is succinct but clear; the solutions can be read as worked examples if you don't want to do the problems yourself. Many problems have additional discussion on limitations and extensions of the theory, or further applications outside physics (e.g., the accuracy of GPS positioning in connection with atomic clocks; proton and ion tumor therapies in connection with the Bethe-Bloch formula for charged particles in solids). The problems use mainly non-relativistic quantum mechanics and are organised into three
Gas to Power in China. Gas-fired Power in China. Clearing the policy bottleneck
International Nuclear Information System (INIS)
Chen, Xavier
2005-12-01
-to-be-defined elusive competitive power pooling system. This makes it difficult for gas-fired power plants to fulfil their obligations with the gas suppliers under the long-term take-or-pay gas sales contracts. It also increases the perceived risks of the Chinese market. Last but not least, gas-fired power is such a new phenomenon in the coal-dominated market that power sector professionals have a limited understanding of the gas economics. This led them to treat gas-fired power simply as a fossil-fuel based generation source. In addition to the difficulties in sourcing LNG for the proposed LNG projects, the lack of a clear supportive policy for gas-fired power at the initial stage of China's gas market development also casts serious doubts about the country's ambitious gas market development plans. To resolve those conflicting issues facing gas-fired power, the Chinese government needs to make a policy pronouncement on gas-fired power in the context of the overall national energy strategies and policies. The key may lie with a differentiated and flexible approach that recognises both the difficulties of the power reform process and the urgency of clearing the policy bottleneck on gas-fired power
Cardall, Christian Y.; Budiardja, Reuben D.
2018-01-01
The large-scale computer simulation of a system of physical fields governed by partial differential equations requires some means of approximating the mathematical limit of continuity. For example, conservation laws are often treated with a 'finite-volume' approach in which space is partitioned into a large number of small 'cells,' with fluxes through cell faces providing an intuitive discretization modeled on the mathematical definition of the divergence operator. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of simple meshes and the evolution of generic conserved currents thereon, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes inaugurate the Mathematics division of our developing astrophysics simulation code GENASIS (General Astrophysical Simulation System), which will be expanded over time to include additional meshing options, mathematical operations, solver types, and solver variations appropriate for many multiphysics applications.
Ramses-GPU: Second order MUSCL-Hancock finite volume fluid solver
Kestener, Pierre
2017-10-01
RamsesGPU is a reimplementation of RAMSES (ascl:1011.007) which drops the adaptive mesh refinement (AMR) features to optimize 3D uniform grid algorithms for modern graphics processing units (GPU), providing an efficient software package for astrophysics applications that do not need AMR features but do require a very large number of integration time steps. RamsesGPU provides a very efficient C++/CUDA/MPI software implementation of a second order MUSCL-Hancock finite volume fluid solver for compressible hydrodynamics as well as a magnetohydrodynamics solver based on the constrained transport technique. Other useful modules include static gravity, dissipative terms (viscosity, resistivity), and a forcing source term for turbulence studies; special care was taken to enhance parallel input/output performance by using state-of-the-art libraries such as HDF5 and Parallel-NetCDF.
A Massively Parallel Solver for the Mechanical Harmonic Analysis of Accelerator Cavities
International Nuclear Information System (INIS)
2015-01-01
ACE3P is a 3D massively parallel simulation suite developed at SLAC National Accelerator Laboratory that can perform coupled electromagnetic, thermal and mechanical studies. Effectively utilizing supercomputer resources, ACE3P has become a key simulation tool for particle accelerator R and D. A new frequency domain solver to perform mechanical harmonic response analysis of accelerator components has been developed within the existing parallel framework. This solver is designed to determine the frequency response of the mechanical system to external harmonic excitations for time-efficient accurate analysis of large-scale problems. Coupled with the ACE3P electromagnetic modules, this capability complements a set of multi-physics tools for a comprehensive study of microphonics in superconducting accelerating cavities in order to understand the RF response and feedback requirements for the operational reliability of a particle accelerator. (auth)
A Generic High-performance GPU-based Library for PDE solvers
DEFF Research Database (Denmark)
Glimberg, Stefan Lemvig; Engsig-Karup, Allan Peter
, the privilege of high-performance parallel computing is now in principle accessible for many scientific users, no matter their economic resources. Though being highly effective units, GPUs and parallel architectures in general pose challenges for software developers to utilize their efficiency. Sequential... legacy codes are not always easily parallelized and the time spent on conversion might not pay off in the end. We present a highly generic C++ library for fast assembling of partial differential equation (PDE) solvers, aiming at utilizing the computational resources of GPUs. The library requires a minimum... of GPU computing knowledge, while still offering the possibility to customize user-specific solvers at kernel level if desired. Spatial differential operators are based on matrix-free flexible order finite difference approximations. These matrix-free operators minimize both memory consumption and main memory access...
Constraint Solver Techniques for Implementing Precise and Scalable Static Program Analysis
DEFF Research Database (Denmark)
Zhang, Ye
solver using unification we could make a program analysis easier to design and implement, much more scalable, and still as precise as expected. We present an inclusion constraint language with explicit equality constructs for specifying program analysis problems, and a parameterized framework... developers to build reliable software systems more quickly and with fewer bugs or security defects. While designing and implementing a program analysis remains hard work, making it both scalable and precise is even more challenging. In this dissertation, we show that with a general inclusion constraint... data flow analyses for the C language, we demonstrate that a large number of equivalences could be detected by off-line analyses, and they could then be used by a constraint solver to significantly improve the scalability of an analysis without sacrificing any precision.
A fast, high-order solver for the Grad–Shafranov equation
International Nuclear Information System (INIS)
Pataki, Andras; Cerfon, Antoine J.; Freidberg, Jeffrey P.; Greengard, Leslie; O’Neil, Michael
2013-01-01
We present a new fast solver to calculate fixed-boundary plasma equilibria in toroidally axisymmetric geometries. By combining conformal mapping with Fourier and integral equation methods on the unit disk, we show that high-order accuracy can be achieved for the solution of the equilibrium equation and its first and second derivatives. Smooth arbitrary plasma cross-sections as well as arbitrary pressure and poloidal current profiles are used as initial data for the solver. Equilibria with large Shafranov shifts can be computed without difficulty. Spectral convergence is demonstrated by comparing the numerical solution with a known exact analytic solution. A fusion-relevant example of an equilibrium with a pressure pedestal is also presented.
Solving non-linear Horn clauses using a linear Horn clause solver
DEFF Research Database (Denmark)
Kafle, Bishoksan; Gallagher, John Patrick; Ganty, Pierre
2016-01-01
In this paper we show that checking satisfiability of a set of non-linear Horn clauses (also called a non-linear Horn clause program) can be achieved using a solver for linear Horn clauses. We achieve this by interleaving a program transformation with a satisfiability checker for linear Horn...... clauses (also called a solver for linear Horn clauses). The program transformation is based on the notion of tree dimension, which we apply to a set of non-linear clauses, yielding a set whose derivation trees have bounded dimension. Such a set of clauses can be linearised. The main algorithm...... dimension. We constructed a prototype implementation of this approach and performed experiments on a set of verification problems, which show some promise....
Wu, Jiayang; Cao, Pan; Hu, Xiaofeng; Jiang, Xinhong; Pan, Ting; Yang, Yuxing; Qiu, Ciyuan; Tremblay, Christine; Su, Yikai
2014-10-20
We propose and experimentally demonstrate an all-optical temporal differential-equation solver that can be used to solve ordinary differential equations (ODEs) characterizing general linear time-invariant (LTI) systems. The photonic device, implemented by an add-drop microring resonator (MRR) with two tunable interferometric couplers, is monolithically integrated on a silicon-on-insulator (SOI) wafer with a compact footprint of ~60 μm × 120 μm. By thermally tuning the phase shifts along the bus arms of the two interferometric couplers, the proposed device is capable of solving first-order ODEs with two variable coefficients. The operation principle is theoretically analyzed, and system testing of ODE solving with tunable coefficients is carried out for 10-Gb/s optical Gaussian-like pulses. The experimental results verify the effectiveness of the fabricated device as a tunable photonic ODE solver.
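As a numerical point of comparison, a first-order ODE of the kind the photonic device targets can be solved in software. The sketch below uses SciPy; the coefficient k(t) and input x(t) are illustrative assumptions, not the paper's experimental parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order ODE with a variable coefficient, dy/dt = -k(t)*y + x(t):
# the class of LTI-style equations the photonic device solves optically.
# k(t) and x(t) are illustrative choices, not the paper's parameters.

def k(t):
    return 1.0 + 0.5 * np.sin(t)          # time-varying damping coefficient

def x(t):
    return np.exp(-((t - 2.0) ** 2))      # Gaussian-like input pulse

def rhs(t, y):
    return -k(t) * y + x(t)

sol = solve_ivp(rhs, (0.0, 10.0), [0.0], dense_output=True, rtol=1e-8)
y5 = sol.sol(5.0)[0]                      # response sampled at t = 5
```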
SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems
Energy Technology Data Exchange (ETDEWEB)
Li, Xiaoye S.; Demmel, James W.
2002-03-27
In this paper, we present the main algorithmic features in the software package SuperLU_DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with a focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication patterns for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve, and we show that they are suitable for large-scale distributed memory machines.
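SciPy exposes the sequential SuperLU library (a relative of SuperLU_DIST) through `scipy.sparse.linalg.splu`; the sketch below shows the factor-once, solve-many use pattern. The test matrix is an illustrative unsymmetric convection-diffusion stencil, not from the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Factor once, then reuse the LU factors for cheap triangular solves.
n = 1000
# Unsymmetric sparse test matrix: a 1-D convection-diffusion stencil,
# where convection skews the off-diagonals.
A = sp.diags([-1.2 * np.ones(n - 1), 2.0 * np.ones(n), -0.8 * np.ones(n - 1)],
             [-1, 0, 1], format="csc")

lu = splu(A)                    # sparse LU factorization via SuperLU
b = np.ones(n)
x = lu.solve(b)                 # forward/back substitution, reusable for many b
residual = np.linalg.norm(A @ x - b)
```

Further right-hand sides reuse `lu.solve` without refactoring, which is the payoff of a direct solver.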
Analysis of transient plasmonic interactions using an MOT-PMCHWT integral equation solver
Uysal, Ismail Enes
2014-07-01
Device design involving metals and dielectrics at nano-scales and optical frequencies calls for simulation tools capable of analyzing plasmonic interactions. To this end, finite difference time domain (FDTD) and finite element methods have been used extensively. Since these methods require volumetric meshes, the discretization size should be very small to accurately resolve fast-decaying fields in the vicinity of metal/dielectric interfaces. This can be avoided using integral equation (IE) techniques that discretize only on the interfaces. Additionally, IE solvers implicitly enforce the radiation condition and consequently do not need (approximate) absorbing boundary conditions. Despite these advantages, IE solvers, especially in the time domain, have not been used for analyzing plasmonic interactions.
Linear optical response of finite systems using multishift linear system solvers
Energy Technology Data Exchange (ETDEWEB)
Hübener, Hannes; Giustino, Feliciano [Department of Materials, University of Oxford, Oxford OX1 3PH (United Kingdom)
2014-07-28
We discuss the application of multishift linear system solvers to linear-response time-dependent density functional theory. Using this technique the complete frequency-dependent electronic density response of finite systems to an external perturbation can be calculated at the cost of a single solution of a linear system via conjugate gradients. We show that multishift time-dependent density functional theory yields excitation energies and oscillator strengths in perfect agreement with the standard diagonalization of the response matrix (Casida's method), while being computationally advantageous. We present test calculations for benzene, porphin, and chlorophyll molecules. We argue that multishift solvers may find broad applicability in the context of excited-state calculations within density-functional theory and beyond.
Multitasking domain decomposition fast Poisson solvers on the Cray Y-MP
Chan, Tony F.; Fatoohi, Rod A.
1990-01-01
The results of multitasking implementation of a domain decomposition fast Poisson solver on eight processors of the Cray Y-MP are presented. The object of this research is to study the performance of domain decomposition methods on a Cray supercomputer and to analyze the performance of different multitasking techniques using highly parallel algorithms. Two implementations of multitasking are considered: macrotasking (parallelism at the subroutine level) and microtasking (parallelism at the do-loop level). A conventional FFT-based fast Poisson solver is also multitasked. The results of different implementations are compared and analyzed. A speedup of over 7.4 on the Cray Y-MP running in a dedicated environment is achieved for all cases.
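A minimal serial version of the FFT-based fast Poisson kernel described above can be sketched in a few lines of NumPy; the periodic boundary conditions and manufactured solution are assumptions for illustration.

```python
import numpy as np

# FFT-based fast Poisson solver on a periodic 2-D grid: the kind of
# kernel the paper multitasks. Solves lap(u) = f for zero-mean f.
n = 64
L = 2 * np.pi
xs = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing="ij")

# Manufactured solution: u = sin(x)*cos(2y)  =>  f = lap(u) = -5*u.
u_exact = np.sin(X) * np.cos(2 * Y)
f = -5.0 * u_exact

k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # integer wavenumbers for L = 2*pi
KX, KY = np.meshgrid(k, k, indexing="ij")
denom = -(KX**2 + KY**2)
denom[0, 0] = 1.0                            # avoid division by zero at the mean mode

f_hat = np.fft.fft2(f)
u_hat = f_hat / denom
u_hat[0, 0] = 0.0                            # pin the solution mean to zero
u = np.real(np.fft.ifft2(u_hat))

err = np.max(np.abs(u - u_exact))
```

For band-limited data like this, the spectral solve is exact to machine precision; domain decomposition, as in the paper, splits such solves across subdomains.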
Steady-State Anderson Accelerated Coupling of Lattice Boltzmann and Navier–Stokes Solvers
Atanasov, Atanas
2016-10-17
We present an Anderson acceleration-based approach to spatially couple three-dimensional Lattice Boltzmann and Navier–Stokes (LBNS) flow simulations. This makes it possible to locally exploit the computational features of both fluid flow solver approaches to the fullest extent and yields enhanced control to match the LB and NS degrees of freedom within the LBNS overlap layer. Designed for parallel Schwarz coupling, the Anderson acceleration allows for the simultaneous execution of both the Lattice Boltzmann and Navier–Stokes solvers. We detail our coupling methodology, validate it, and study the convergence and accuracy of the Anderson accelerated coupling, considering three steady-state scenarios: plane channel flow, flow around a sphere, and channel flow across a porous structure. We find that the Anderson accelerated coupling yields a speed-up (in terms of iteration steps) of up to 40% in the considered scenarios, compared to strictly sequential Schwarz coupling.
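Anderson acceleration itself is a general fixed-point accelerator. A minimal sketch (Type-II Anderson with a sliding window, applied here to a scalar toy problem rather than an LBNS coupling) could look like:

```python
import numpy as np

def anderson_fixed_point(g, x0, m=5, tol=1e-10, maxit=100):
    """Type-II Anderson acceleration for the fixed-point problem x = g(x)."""
    x = np.asarray(x0, dtype=float)
    X, G = [], []                               # iterate and g-value histories
    for it in range(maxit):
        gx = g(x)
        X.append(x.copy())
        G.append(gx.copy())
        if np.linalg.norm(gx - x) < tol:
            return gx, it                       # converged
        mk = min(m, len(X))                     # sliding window of size <= m
        Xk = np.array(X[-mk:])
        Gk = np.array(G[-mk:])
        Fk = Gk - Xk                            # residual history f_i = g(x_i) - x_i
        if mk == 1:
            x = gx                              # plain fixed-point step to start
            continue
        dF = (Fk[1:] - Fk[:-1]).T               # residual differences, shape (n, mk-1)
        dG = (Gk[1:] - Gk[:-1]).T
        # Least-squares coefficients minimising ||f_k - dF @ gamma||.
        gamma, *_ = np.linalg.lstsq(dF, Fk[-1], rcond=None)
        x = Gk[-1] - dG @ gamma                 # accelerated update
    return x, maxit

# Toy usage: the classic fixed point of cos(x), reached in a handful of steps.
root, n_iters = anderson_fixed_point(np.cos, np.array([1.0]))
```

In the paper's setting, g would be one parallel Schwarz sweep of the coupled LB and NS solvers over the overlap layer; here it is just a scalar map.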
Pyrolysis and gasification of single biomass particle – new openFoam solver
International Nuclear Information System (INIS)
Kwiatkowski, K; Zuk, P J; Bajer, K; Dudyński, M
2014-01-01
We present a new solver, biomassGasificationFoam, that extends the functionality of the well-supported open-source CFD code OpenFOAM. The main goal of this development is to provide a comprehensive computational environment for a wide range of applications involving reacting gases and solids. biomassGasificationFoam is an integrated solver capable of modelling the thermal conversion, including evaporation, pyrolysis, gasification, and combustion, of various solid materials. In the paper we show that the gas is hotter than the solid except at the centre of the sample, where the temperature of the solid is higher. This effect is expected because the thermal conductivity of the porous solid matrix is higher than that of the gases. This effect, which cannot be captured if thermal equilibrium between the gas and solid is assumed, leads to a precise description of heat transfer into wood particles.
Sayed, Sadeed Bin; Uysal, Ismail Enes; Bagci, Hakan; Ulku, H. Arda
2018-01-01
Quantum tunneling is observed between two nanostructures separated by a sub-nanometer gap. Electrons “jumping” from one structure to another create an additional current path. To account for this effect in a classical electromagnetic solver, an auxiliary tunnel is introduced between the two structures. The dispersive permittivity of the tunnel is represented by a Drude model, whose parameters are obtained from the electron tunneling probability. The transient scattering from the connected nanostructures (i.e., nanostructures plus auxiliary tunnel) is analyzed using a time domain volume integral equation solver. Numerical results demonstrating the effect of quantum tunneling on the scattered fields are provided.
Essential imposition of Neumann condition in Galerkin-Legendre elliptic solvers
Auteri, F; Quartapelle, L
2003-01-01
A new Galerkin-Legendre direct spectral solver for the Neumann problem associated with Laplace and Helmholtz operators in rectangular domains is presented. The algorithm differs from other Neumann spectral solvers by the high sparsity of the matrices, exploited in conjunction with the direct product structure of the problem. The homogeneous boundary condition is satisfied exactly by expanding the unknown variable into a polynomial basis of functions which are built upon the Legendre polynomials and have a zero slope at the interval extremes. A double diagonalization process is employed, pivoting around the eigenstructure of the pentadiagonal mass matrices in both directions, instead of the full stiffness matrices encountered in the classical variational formulation of the problem with a weak natural imposition of the derivative boundary condition. Nonhomogeneous Neumann data are accounted for by means of a lifting. Numerical results are given to illustrate the performance of the proposed spectral elliptic solver.
Identification of severe wind conditions using a Reynolds Averaged Navier-Stokes solver
International Nuclear Information System (INIS)
Soerensen, N N; Bechmann, A; Johansen, J; Myllerup, L; Botha, P; Vinther, S; Nielsen, B S
2007-01-01
The present paper describes the application of a Navier-Stokes solver to predict the presence of severe flow conditions in complex terrain, capturing conditions that may be critical to the siting of wind turbines in the terrain. First it is documented that the flow solver is capable of predicting the flow in the complex terrain by comparing with measurements from two meteorology masts. Next, it is illustrated how levels of turbulent kinetic energy can be used to easily identify areas with severe flow conditions, relying on a high correlation between high turbulence intensity and severe flow conditions, in the form of high wind shear and directional shear, which may seriously lower the lifetime of a wind turbine.
Energy Technology Data Exchange (ETDEWEB)
Toumi, I.; Kumbaro, A.; Paillere, H
1999-07-01
These course notes, presented at the 30th Von Karman Institute Lecture Series in Computational Fluid Dynamics, give a detailed and thorough review of upwind differencing methods for two-phase flow models. After recalling some fundamental aspects of two-phase flow modelling, from mixture models to two-fluid models, the mathematical properties of the general 6-equation model are analysed by examining the eigenstructure of the system and deriving conditions under which the model can be made hyperbolic. The following chapters are devoted to extensions of state-of-the-art upwind differencing schemes, such as Roe's Approximate Riemann Solver or the Characteristic Flux Splitting method, to two-phase flow. Non-trivial steps in the construction of such solvers include the linearization, the treatment of non-conservative terms, and the construction of a Roe-type matrix on which the numerical dissipation of the schemes is based. Extension of the 1-D models to multiple dimensions in an unstructured finite volume formulation is also described. Finally, numerical results for a variety of test cases are shown to illustrate the accuracy and robustness of the methods. (authors)
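The scalar Burgers equation gives a minimal analogue of the Roe-type approximate Riemann solvers covered in the notes. The sketch below (with an illustrative shock setup and no entropy fix) shows the Roe-averaged upwind flux:

```python
import numpy as np

# Roe-type approximate Riemann flux for the scalar Burgers equation
# u_t + (u^2/2)_x = 0, with Roe-averaged wave speed a = (uL + uR)/2.
# No entropy fix is included in this minimal sketch.

def roe_flux(uL, uR):
    a = 0.5 * (uL + uR)                       # Roe average for Burgers' flux
    fL, fR = 0.5 * uL**2, 0.5 * uR**2
    return 0.5 * (fL + fR) - 0.5 * np.abs(a) * (uR - uL)

# Propagate a right-moving shock (uL = 1, uR = 0) on a periodic grid.
n = 200
dx, dt = 1.0 / n, 0.5 / n                     # CFL = 0.5 for |u| <= 1
xs = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.where(xs < 0.5, 1.0, 0.0)
for _ in range(100):                          # advance to t = 0.25
    F = roe_flux(u, np.roll(u, -1))           # flux at each cell's right face
    u = u - dt / dx * (F - np.roll(F, 1))     # conservative finite-volume update
# The exact shock speed is (uL + uR)/2 = 0.5, so the front sits near x = 0.625.
```

The two-phase extension in the notes follows the same pattern with a Roe-type matrix in place of the scalar Roe average.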
Directory of Open Access Journals (Sweden)
Ricardo França Santos
2012-01-01
This work addresses a typical logistics problem of the Brazilian Navy regarding the allocation, transportation, and distribution of refrigerated goods to Military Organizations within Grande Rio (RJ). After a brief review of the literature on Linear/Integer Programming and some of its applications, we propose the use of Integer Programming, with Excel’s Solver as a tool, to obtain the optimal load configuration for the fleet and the lowest distribution costs while meeting the demand schedule. A first attempt with a single spreadsheet could not find a convergent solution free of degeneracy problems within a reasonable solution time. A second solution was proposed, separating the problem into three phases, which allowed us to highlight the potential and limitations of the Solver tool. This study showed the importance of formulating a realistic model and of a detailed critical analysis, as seen in the lack of convergence of the first solution and the success achieved by the second one.
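The same class of transportation model can be set up outside Excel. The sketch below uses SciPy's `linprog` with toy costs, supplies, and demands; the numbers are illustrative assumptions, not the Navy's data.

```python
import numpy as np
from scipy.optimize import linprog

# Toy transportation model: two depots ship refrigerated goods to
# three military units. All numbers are illustrative, not the paper's data.
cost = np.array([[4.0, 6.0, 9.0],
                 [5.0, 4.0, 7.0]])            # cost[i, j]: depot i -> unit j
supply = np.array([30.0, 40.0])
demand = np.array([20.0, 25.0, 25.0])

m, n = cost.shape
c = cost.ravel()                              # variables x[i, j], flattened row-wise

# Each depot ships at most its stock: sum_j x[i, j] <= supply[i].
A_ub = np.zeros((m, m * n))
for i in range(m):
    A_ub[i, i * n:(i + 1) * n] = 1.0

# Each unit receives exactly its demand: sum_i x[i, j] == demand[j].
A_eq = np.zeros((n, m * n))
for j in range(n):
    A_eq[j, j::n] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=(0, None), method="highs")
plan = res.x.reshape(m, n)
# The transportation constraint matrix is totally unimodular, so this
# LP optimum is integral without declaring explicit integer variables.
```

For genuinely combinatorial variants (fixed vehicle loads, routing choices), `scipy.optimize.milp` adds true integer variables, mirroring the Integer Programming setup the paper builds in Solver.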
The Quantum Mechanics Solver How to Apply Quantum Theory to Modern Physics
Basdevant, Jean-Louis
2006-01-01
The Quantum Mechanics Solver grew from topics which are part of the final examination in quantum theory at the Ecole Polytechnique at Palaiseau near Paris, France. The aim of the text is to guide the student towards applying quantum mechanics to research problems in fields such as atomic and molecular physics, condensed matter physics, and laser physics. Advanced undergraduates and graduate students will find a rich and challenging source for improving their skills in this field.
The value of continuity: Refined isogeometric analysis and fast direct solvers
Garcia, Daniel
2016-08-26
We propose the use of highly continuous finite element spaces interconnected with low continuity hyperplanes to maximize the performance of direct solvers. Starting from a highly continuous Isogeometric Analysis (IGA) discretization, we introduce C0-separators to reduce the interconnection between degrees of freedom in the mesh. By doing so, both the solution time and best approximation errors are simultaneously improved. We call the resulting method
libmpdata++ 1.0: a library of parallel MPDATA solvers for systems of generalised transport equations
Jaruga, A.; Arabas, S.; Jarecka, D.; Pawlowska, H.; Smolarkiewicz, P. K.; Waruszewski, M.
2015-04-01
This paper accompanies the first release of libmpdata++, a C++ library implementing the multi-dimensional positive-definite advection transport algorithm (MPDATA) on regular structured grid. The library offers basic numerical solvers for systems of generalised transport equations. The solvers are forward-in-time, conservative and non-linearly stable. The libmpdata++ library covers the basic second-order-accurate formulation of MPDATA, its third-order variant, the infinite-gauge option for variable-sign fields and a flux-corrected transport extension to guarantee non-oscillatory solutions. The library is equipped with a non-symmetric variational elliptic solver for implicit evaluation of pressure gradient terms. All solvers offer parallelisation through domain decomposition using shared-memory parallelisation. The paper describes the library programming interface, and serves as a user guide. Supported options are illustrated with benchmarks discussed in the MPDATA literature. Benchmark descriptions include code snippets as well as quantitative representations of simulation results. Examples of applications include homogeneous transport in one, two and three dimensions in Cartesian and spherical domains; a shallow-water system compared with analytical solution (originally derived for a 2-D case); and a buoyant convection problem in an incompressible Boussinesq fluid with interfacial instability. All the examples are implemented out of the library tree. Regardless of the differences in the problem dimensionality, right-hand-side terms, boundary conditions and parallelisation approach, all the examples use the same unmodified library, which is a key goal of libmpdata++ design. The design, based on the principle of separation of concerns, prioritises the user and developer productivity. The libmpdata++ library is implemented in C++, making use of the Blitz++ multi-dimensional array containers, and is released as free/libre and open-source software.
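The donor-cell (first-order upwind) step is the core that MPDATA then corrects with antidiffusive velocities. A minimal 1-D sketch of that first pass (not the full MPDATA algorithm, and in Python rather than the library's C++) is:

```python
import numpy as np

# One donor-cell (first-order upwind) step on a periodic 1-D grid;
# courant = u * dt / dx is assumed constant and in [0, 1].
# This is only MPDATA's first pass, without the antidiffusive corrections.

def donor_cell_step(psi, courant):
    flux_pos = max(courant, 0.0) * psi               # rightward flux out of cell i
    flux_neg = min(courant, 0.0) * np.roll(psi, -1)  # leftward flux from cell i+1
    flux = flux_pos + flux_neg                       # net flux through right face of i
    return psi - (flux - np.roll(flux, 1))           # conservative flux-form update

n, courant = 100, 0.5
xs = np.linspace(0.0, 1.0, n, endpoint=False)
psi = np.exp(-200.0 * (xs - 0.3) ** 2)               # Gaussian signal
total0 = psi.sum()
for _ in range(100):                                 # advect across half the domain
    psi = donor_cell_step(psi, courant)
conservative = abs(psi.sum() - total0) < 1e-9        # flux form conserves mass
```

The step is conservative and positive-definite but strongly diffusive; MPDATA's subsequent corrective passes recover second-order accuracy while keeping those properties.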
libmpdata++ 0.1: a library of parallel MPDATA solvers for systems of generalised transport equations
Jaruga, A.; Arabas, S.; Jarecka, D.; Pawlowska, H.; Smolarkiewicz, P. K.; Waruszewski, M.
2014-11-01
This paper accompanies the first release of libmpdata++, a C++ library implementing the Multidimensional Positive-Definite Advection Transport Algorithm (MPDATA). The library offers basic numerical solvers for systems of generalised transport equations. The solvers are forward-in-time, conservative and non-linearly stable. The libmpdata++ library covers the basic second-order-accurate formulation of MPDATA, its third-order variant, the infinite-gauge option for variable-sign fields and a flux-corrected transport extension to guarantee non-oscillatory solutions. The library is equipped with a non-symmetric variational elliptic solver for implicit evaluation of pressure gradient terms. All solvers offer parallelisation through domain decomposition using shared-memory parallelisation. The paper describes the library programming interface, and serves as a user guide. Supported options are illustrated with benchmarks discussed in the MPDATA literature. Benchmark descriptions include code snippets as well as quantitative representations of simulation results. Examples of applications include: homogeneous transport in one, two and three dimensions in Cartesian and spherical domains; shallow-water system compared with analytical solution (originally derived for a 2-D case); and a buoyant convection problem in an incompressible Boussinesq fluid with interfacial instability. All the examples are implemented out of the library tree. Regardless of the differences in the problem dimensionality, right-hand-side terms, boundary conditions and parallelisation approach, all the examples use the same unmodified library, which is a key goal of libmpdata++ design. The design, based on the principle of separation of concerns, prioritises the user and developer productivity. The libmpdata++ library is implemented in C++, making use of the Blitz++ multi-dimensional array containers, and is released as free/libre and open-source software.
The value of continuity: Refined isogeometric analysis and fast direct solvers
Garcia, Daniel; Pardo, David; Dalcin, Lisandro; Paszyński, Maciej; Collier, Nathan; Calo, Victor M.
2016-01-01
We propose the use of highly continuous finite element spaces interconnected with low continuity hyperplanes to maximize the performance of direct solvers. Starting from a highly continuous Isogeometric Analysis (IGA) discretization, we introduce C0-separators to reduce the interconnection between degrees of freedom in the mesh. By doing so, both the solution time and best approximation errors are simultaneously improved. We call the resulting method
Iterative linear solvers in a 2D radiation-hydrodynamics code: Methods and performance
International Nuclear Information System (INIS)
Baldwin, C.; Brown, P.N.; Falgout, R.; Graziani, F.; Jones, J.
1999-01-01
Computer codes containing both hydrodynamics and radiation play a central role in simulating both astrophysical and inertial confinement fusion (ICF) phenomena. A crucial aspect of these codes is that they require an implicit solution of the radiation diffusion equations. The authors present in this paper the results of a comparison of five different linear solvers on a range of complex radiation and radiation-hydrodynamics problems. The linear solvers used are diagonally scaled conjugate gradient, GMRES with incomplete LU preconditioning, conjugate gradient with incomplete Cholesky preconditioning, multigrid, and multigrid-preconditioned conjugate gradient. These problems involve shock propagation, opacities varying over 5-6 orders of magnitude, tabular equations of state, and dynamic ALE (Arbitrary Lagrangian Eulerian) meshes. They perform a problem size scalability study by comparing linear solver performance over a wide range of problem sizes from 1,000 to 100,000 zones. The fundamental question they address in this paper is: Is it more efficient to invert the matrix in many inexpensive steps (like diagonally scaled conjugate gradient) or in fewer expensive steps (like multigrid)? In addition, what is the answer to this question as a function of problem size, and is the answer problem-dependent? They find that the diagonally scaled conjugate gradient method performs poorly with the growth of problem size, increasing in both iteration count and overall CPU time with the size of the problem and also increasing for larger time steps. For all problems considered, the multigrid algorithms scale almost perfectly (i.e., the iteration count is approximately independent of problem size and problem time step). For pure radiation flow problems (i.e., no hydrodynamics), they see speedups in CPU time of factors of ~15-30 for the largest problems, when comparing the multigrid solvers to diagonally scaled conjugate gradient.
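The cheapest-per-step method in the comparison, diagonally scaled (Jacobi-preconditioned) conjugate gradient, can be reproduced with SciPy on a model 1-D diffusion matrix; the growth of iteration counts with problem size illustrates the scaling behaviour the authors report. The test matrix is an assumption for illustration, not one of the paper's radiation problems.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

# Jacobi-preconditioned (diagonally scaled) CG on a 1-D diffusion matrix.
# The iteration count grows with problem size, unlike multigrid, whose
# counts stay roughly constant.

def jacobi_cg_iters(n):
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    d = A.diagonal()
    Minv = LinearOperator((n, n), matvec=lambda r: r / d)  # diagonal scaling
    count = [0]
    def tick(xk):
        count[0] += 1                      # one CG iteration completed
    x, info = cg(A, np.ones(n), M=Minv, callback=tick)
    assert info == 0                       # converged to the default tolerance
    return count[0]

iters_small = jacobi_cg_iters(100)
iters_large = jacobi_cg_iters(400)         # 4x the unknowns, many more iterations
```

A multigrid run on the same family of matrices (e.g. via PyAMG, not shown) would keep the iteration count nearly flat as n grows, which is the paper's central finding.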