WorldWideScience

Sample records for parallel engineering optimisation

  1. Parallel search engine optimisation and pay-per-click campaigns: A comparison of cost per acquisition

    Directory of Open Access Journals (Sweden)

    Wouter T. Kritzinger

    2017-07-01

    Background: It is imperative that commercial websites should rank highly in search engine result pages because these provide the main entry point to paying customers. There are two main methods to achieve high rankings: search engine optimisation (SEO) and pay-per-click (PPC) systems. Both require a financial investment – SEO mainly at the beginning, and PPC spread over time in regular amounts. If marketing budgets are applied in the wrong area, this could lead to losses and possibly financial ruin. Objectives: The objective of this research was to investigate, using three real-world case studies, the actual expenditure on and income from both SEO and PPC systems. These figures were then compared, and specifically, the cost per acquisition (CPA) was used to decide which system yielded the best results. Methodology: Three diverse websites were chosen, and analytics data for all three were compared over a 3-month period. Calculations were performed to reduce the figures to single ratios, to make comparisons between them possible. Results: Some of the resultant ratios varied widely between websites. However, the CPA was shown to be on average 52.1 times lower for SEO than for PPC systems. Conclusion: It was concluded that SEO should be the marketing system of preference for e-commerce-based websites. However, there are cases where PPC would yield better results – when instant traffic is required, and when a large initial expenditure is not possible.
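
    The cost per acquisition used above is simply total marketing spend divided by the number of conversions it produced. A minimal sketch of the comparison in Python, using invented figures rather than the study's data:

        def cpa(total_spend, acquisitions):
            """Cost per acquisition: total spend divided by conversions won."""
            return total_spend / acquisitions

        # Hypothetical numbers for illustration only.
        seo_cpa = cpa(total_spend=5000.0, acquisitions=400)  # mostly up-front work
        ppc_cpa = cpa(total_spend=5000.0, acquisitions=25)   # ongoing ad spend

        print(f"SEO CPA: {seo_cpa:.2f}, PPC CPA: {ppc_cpa:.2f}, "
              f"PPC/SEO ratio: {ppc_cpa / seo_cpa:.1f}x")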

  2. Optimising a parallel conjugate gradient solver

    Energy Technology Data Exchange (ETDEWEB)

    Field, M.R. [O'Reilly Institute, Dublin (Ireland)]

    1996-12-31

    This work arises from the introduction of a parallel iterative solver to a large structural analysis finite element code. The code is called FEX and it was developed at Hitachi's Mechanical Engineering Laboratory. The FEX package can deal with a large range of structural analysis problems using a large number of finite element techniques. FEX can solve either stress or thermal analysis problems of a range of different types from plane stress to a full three-dimensional model. These problems can consist of a number of different materials which can be modelled by a range of material models. The structure being modelled can have the load applied at either a point or a surface, or by a pressure, a centrifugal force or just gravity. Alternatively a thermal load can be applied with a given initial temperature. The displacement of the structure can be constrained by having a fixed boundary or by prescribing the displacement at a boundary.
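
    The abstract above describes the host finite element code rather than the solver itself. For context, a minimal serial conjugate gradient iteration is sketched below in NumPy; in a parallel solver of this kind, the matrix-vector product and the two dot products are the kernels that get distributed. This is a textbook sketch, not FEX code:

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
            """Minimal CG for a symmetric positive-definite system Ax = b."""
            x = np.zeros_like(b, dtype=float)
            r = b - A @ x                     # initial residual
            p = r.copy()                      # initial search direction
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = A @ p                    # the matvec dominates the cost
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(conjugate_gradient(A, b))       # approx. [0.0909, 0.6364]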

  3. Optimisation of a parallel ocean general circulation model

    Science.gov (United States)

    Beare, M. I.; Stevens, D. P.

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.
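
    One common way to provide parallelism "as a modular option via high-level message-passing routines" is to hide the raw message passing inside small helpers such as a halo exchange. The sketch below assumes mpi4py and a one-dimensional row decomposition of the model grid; it illustrates the idea and is not the model's actual interface:

        # Run under MPI, e.g.: mpiexec -n 4 python halo_demo.py
        import numpy as np
        from mpi4py import MPI

        def exchange_halos(field, comm=MPI.COMM_WORLD):
            """Swap halo rows with north/south neighbours; field[0] and
            field[-1] are halo rows, field[1] and field[-2] are owned rows."""
            rank, size = comm.Get_rank(), comm.Get_size()
            north, south = rank - 1, rank + 1
            if south < size:   # send my southern edge, receive south's halo
                comm.Sendrecv(field[-2], dest=south,
                              recvbuf=field[-1], source=south)
            if north >= 0:     # send my northern edge, receive north's halo
                comm.Sendrecv(field[1], dest=north,
                              recvbuf=field[0], source=north)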

  4. Comparing and Optimising Parallel Haskell Implementations for Multicore Machines

    DEFF Research Database (Denmark)

    Berthold, Jost; Marlow, Simon; Hammond, Kevin

    2009-01-01

    In this paper, we investigate the differences and tradeoffs imposed by two parallel Haskell dialects running on multicore machines. GpH and Eden are both constructed using the highly-optimising sequential GHC compiler, and share thread scheduling, and other elements, from a common code base. The ...

  5. Optimisation of a parallel ocean general circulation model

    Directory of Open Access Journals (Sweden)

    M. I. Beare

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  6. Optimisation of a parallel ocean general circulation model

    Directory of Open Access Journals (Sweden)

    M. I. Beare

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  7. Optimisation of Multilayer Insulation an Engineering Approach

    CERN Document Server

    Chorowski, M; Parente, C; Riddone, G

    2001-01-01

    A mathematical model has been developed to describe the heat flux through multilayer insulation (MLI). The total heat flux between the layers is the result of three distinct heat transfer modes: radiation, residual gas conduction and solid spacer conduction. The model describes the MLI behaviour considering a layer-to-layer approach and is based on an electrical analogy, in which the three heat transfer modes are treated as parallel thermal impedances. The values of each of the transfer modes vary from layer to layer, although the total heat flux remains constant across the whole MLI blanket. The model enables the optimisation of the insulation with regard to different MLI parameters, such as residual gas pressure, number of layers and boundary temperatures. The model has been tested with experimental measurements carried out at CERN and the results proved to be in good agreement, especially for insulation vacuum between 10^-5 Pa and 10^-3 Pa.
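
    In the electrical analogy described above, the three heat transfer modes act as parallel conductances within each layer-to-layer gap, and the gaps act in series across the blanket. A linearised sketch with made-up conductance values (the real radiation and gas-conduction laws are nonlinear in temperature and must be solved iteratively):

        def mli_heat_flux(t_hot, t_cold, n_layers, g_rad, g_gas, g_solid):
            """Heat flux (W/m^2) through an MLI blanket, treating the three
            modes as parallel conductances (W/m^2/K) in each gap."""
            n_gaps = n_layers + 1            # gaps include the two boundaries
            g_gap = g_rad + g_gas + g_solid  # parallel: conductances add
            r_total = n_gaps / g_gap         # series: resistances add
            return (t_hot - t_cold) / r_total

        # e.g. 30 layers between 77 K and 4.2 K, illustrative values only
        print(mli_heat_flux(77.0, 4.2, 30, g_rad=0.01, g_gas=0.002, g_solid=0.005))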

  8. Optimisation of a parallel ocean general circulation model

    OpenAIRE

    M. I. Beare; D. P. Stevens

    1997-01-01

    International audience; This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by...

  9. Biorefinery plant design, engineering and process optimisation

    DEFF Research Database (Denmark)

    Holm-Nielsen, Jens Bo; Ehimen, Ehiazesebhor Augustine

    2014-01-01

    Before new biorefinery systems can be implemented, or the modification of existing single product biomass processing units into biorefineries can be carried out, proper planning of the intended biorefinery scheme must be performed initially. This chapter outlines design and synthesis approaches applicable for the planning and upgrading of intended biorefinery systems, and includes discussions on the operation of an existing lignocellulosic-based biorefinery platform. Furthermore, technical considerations and tools (i.e., process analytical tools) which could be applied to optimise the operations of existing and potential biorefinery plants are elucidated.

  10. Optimising parallel R correlation matrix calculations on gene expression data using MapReduce.

    Science.gov (United States)

    Wang, Shicai; Pandis, Ioannis; Johnson, David; Emam, Ibrahim; Guitton, Florian; Oehmichen, Axel; Guo, Yike

    2014-11-05

    High-throughput molecular profiling data has been used to improve clinical decision making by stratifying subjects based on their molecular profiles. Unsupervised clustering algorithms can be used for stratification purposes. However, the current speed of the clustering algorithms cannot meet the requirement of large-scale molecular data due to poor performance of the correlation matrix calculation. With high-throughput sequencing technologies promising to produce even larger datasets per subject, we expect the performance of the state-of-the-art statistical algorithms to be further impacted unless efforts towards optimisation are carried out. MapReduce is a widely used high-performance parallel framework that can solve the problem. In this paper, we evaluate the current parallel modes for correlation calculation methods and introduce an efficient data distribution and parallel calculation algorithm based on MapReduce to optimise the correlation calculation. We studied the performance of our algorithm using two gene expression benchmarks. In the micro-benchmark, our implementation using MapReduce, based on the R package RHIPE, demonstrates a 3.26-5.83 fold increase compared to the default Snowfall and a 1.56-1.64 fold increase compared to the basic RHIPE in the Euclidean, Pearson and Spearman correlations. Though vanilla R and the optimised Snowfall outperform our optimised RHIPE in the micro-benchmark, they do not scale well with the macro-benchmark. In the macro-benchmark the optimised RHIPE performs 2.03-16.56 times faster than vanilla R. Benefiting from the 3.30-5.13 times faster data preparation, the optimised RHIPE performs 1.22-1.71 times faster than the optimised Snowfall. Both the optimised RHIPE and the optimised Snowfall successfully perform the Kendall correlation with the TCGA dataset within 7 hours. Both run more than 30 times faster than the estimated vanilla R. The performance evaluation found that the new MapReduce algorithm and its
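
    The map/reduce split described above can be sketched generically: each map task computes the correlations for a block of gene pairs, and the reduce step assembles the symmetric matrix. The sketch below uses Python's multiprocessing as a stand-in for RHIPE/Hadoop and NumPy's Pearson correlation; it illustrates the partitioning idea, not the paper's implementation:

        import numpy as np
        from itertools import combinations
        from multiprocessing import Pool

        def _corr_block(args):
            # Map step: correlations for one chunk of gene-pair indices.
            data, pairs = args
            return [(i, j, np.corrcoef(data[i], data[j])[0, 1])
                    for i, j in pairs]

        def parallel_corr(data, n_workers=4):
            n_genes = data.shape[0]
            pairs = list(combinations(range(n_genes), 2))
            chunks = [pairs[k::n_workers] for k in range(n_workers)]
            with Pool(n_workers) as pool:   # call from under a __main__ guard
                blocks = pool.map(_corr_block, [(data, c) for c in chunks])
            corr = np.eye(n_genes)
            for block in blocks:            # Reduce step: assemble the matrix
                for i, j, r in block:
                    corr[i, j] = corr[j, i] = r
            return corr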

  11. An effective approach to reducing strategy space for maintenance optimisation of multistate series–parallel systems

    International Nuclear Information System (INIS)

    Zhou, Yifan; Lin, Tian Ran; Sun, Yong; Bian, Yangqing; Ma, Lin

    2015-01-01

    Maintenance optimisation of series–parallel systems is a research topic of practical significance. Nevertheless, a cost-effective maintenance strategy is difficult to obtain due to the large strategy space for maintenance optimisation of such systems. The heuristic algorithm is often employed to deal with this problem. However, the solution obtained by the heuristic algorithm is not always the global optimum and the algorithm itself can be very time consuming. An alternative method based on linear programming is thus developed in this paper to overcome such difficulties by reducing strategy space of maintenance optimisation. A theoretical proof is provided in the paper to verify that the proposed method is at least as effective as the existing methods for strategy space reduction. Numerical examples for maintenance optimisation of series–parallel systems having multistate components and considering both economic dependence among components and multiple-level imperfect maintenance are also presented. The simulation results confirm that the proposed method is more effective than the existing methods in removing inappropriate maintenance strategies of multistate series–parallel systems. - Highlights: • A new method using linear programming is developed to reduce the strategy space. • The effectiveness of the new method for strategy reduction is theoretically proved. • Imperfect maintenance and economic dependence are considered during optimisation

  12. Parallel unstructured mesh optimisation for 3D radiation transport and fluids modelling

    International Nuclear Information System (INIS)

    Gorman, G.J.; Pain, Ch. C.; Oliveira, C.R.E. de; Umpleby, A.P.; Goddard, A.J.H.

    2003-01-01

    In this paper we describe the theory and application of a parallel mesh optimisation procedure to obtain self-adapting finite element solutions on unstructured tetrahedral grids. The optimisation procedure adapts the tetrahedral mesh to the solution of a radiation transport or fluid flow problem without sacrificing the integrity of the boundary (geometry), or internal boundaries (regions) of the domain. The objective is to obtain a mesh which has a uniform interpolation error in any direction and element shapes of good quality. This is accomplished with use of a non-Euclidean (anisotropic) metric which is related to the Hessian of the solution field. Appropriate scaling of the metric enables the resolution of multi-scale phenomena as encountered in transient incompressible fluids and multigroup transport calculations. The resulting metric is used to calculate element size and shape quality. The mesh optimisation method is based on a series of mesh connectivity and node position searches of the landscape defining mesh quality, which is gauged by a functional. The mesh modification thus fits the solution field(s) in an optimal manner. The parallel mesh optimisation/adaptivity procedure presented in this paper is of general applicability. We illustrate this by applying it to a transient CFD (computational fluid dynamics) problem. Incompressible flow past a cylinder at moderate Reynolds numbers is modelled to demonstrate that the mesh can follow transient flow features. (authors)
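
    A common construction of the anisotropic metric mentioned above symmetrises the solution Hessian, replaces its eigenvalues by their absolute values so the metric is positive definite, and scales by the target interpolation error. A sketch of that construction, which may differ in detail from the paper's exact scaling:

        import numpy as np

        def hessian_metric(H, target_error):
            """Anisotropic mesh metric from a 3x3 solution Hessian."""
            H_sym = 0.5 * (H + H.T)          # symmetrise first
            evals, evecs = np.linalg.eigh(H_sym)
            # |eigenvalues| keep the metric positive definite; the scaling
            # sets the desired edge length in each principal direction.
            return evecs @ np.diag(np.abs(evals) / target_error) @ evecs.T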

  13. Parallel shooting methods for finding steady state solutions to engine simulation models

    DEFF Research Database (Denmark)

    Andersen, Stig Kildegård; Thomsen, Per Grove; Carlsen, Henrik

    2007-01-01

    Parallel single- and multiple shooting methods were tested for finding periodic steady state solutions to a Stirling engine model. The model was used to illustrate features of the methods and possibilities for optimisations. Performance was measured using simulation of an experimental data set...

  14. Efficient Parallel Engineering Computing on Linux Workstations

    Science.gov (United States)

    Lou, John Z.

    2010-01-01

    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).

  15. Aspects of parallel processing and control engineering

    OpenAIRE

    McKittrick, Brendan J

    1991-01-01

    The concept of parallel processing is not a new one, but the application of it to control engineering tasks is a relatively recent development, made possible by contemporary hardware and software innovation. It has long been accepted that, if properly orchestrated, several processors/CPUs combined can form a powerful processing entity. What prevented this from being implemented in commercial systems was the adequacy of the microprocessor for most tasks and hence the expense of a multi-pro...

  16. Economic and Mathematical Modelling of Optimisation of Transaction Expenses of Engineering Enterprises

    OpenAIRE

    Makaliuk Iryna V.

    2014-01-01

    The article identifies stages of the process of optimisation of transaction expenses. It develops an economic and mathematical model of optimisation of transaction expenses of engineering enterprises, with the criterion of maximisation of income from realisation of products and a system of restrictions requiring that the income growth rate exceed the expenses growth rate. The article proposes using expense types from accounting accounts as indicators of transaction expenses. In the result o...

  17. An empirical study on website usability elements and how they affect search engine optimisation

    Directory of Open Access Journals (Sweden)

    Eugene B. Visser

    2011-03-01

    The primary objective of this research project was to identify and investigate the website usability attributes which are in contradiction with search engine optimisation elements. The secondary objective was to determine if these usability attributes affect conversion. Although the literature review identifies the contradictions, experts disagree about their existence. An experiment was conducted, whereby the conversion and/or traffic ratio results of an existing control website were compared to a usability-designed version of the control website, namely the experimental website. All optimisation elements were ignored, thus implementing only usability. The results clearly show that inclusion of the usability attributes positively affects conversion, indicating that usability is a prerequisite for effective website design. Search engine optimisation is also a prerequisite for the very reason that if a website does not rank on the first page of the search engine result page for a given keyword, then that website might as well not exist. According to this empirical work, usability is in contradiction to search engine optimisation best practices. Therefore the two need to be weighed up in terms of importance towards search engines and visitors.

  18. Parallel science and engineering applications the Charm++ approach

    CERN Document Server

    Kale, Laxmikant V

    2016-01-01

    Developed in the context of science and engineering applications, with each abstraction motivated by and further honed by specific application needs, Charm++ is a production-quality system that runs on almost all parallel computers available. Parallel Science and Engineering Applications: The Charm++ Approach surveys a diverse and scalable collection of science and engineering applications, most of which are used regularly on supercomputers by scientists to further their research. After a brief introduction to Charm++, the book presents several parallel CSE codes written in the Charm++ model, along with their underlying scientific and numerical formulations, explaining their parallelization strategies and parallel performance. These chapters demonstrate the versatility of Charm++ and its utility for a wide variety of applications, including molecular dynamics, cosmology, quantum chemistry, fracture simulations, agent-based simulations, and weather modeling. The book is intended for a wide audience of people i...

  19. Integrative Dynamic Reconfiguration in a Parallel Stream Processing Engine

    DEFF Research Database (Denmark)

    Madsen, Kasper Grud Skat; Zhou, Yongluan; Cao, Jianneng

    2017-01-01

    Load balancing, operator instance collocations and horizontal scaling are critical issues in Parallel Stream Processing Engines to achieve low data processing latency, optimized cluster utilization and minimized communication cost respectively. In previous work, these issues are typically tackled ... solution called ALBIC, which supports general jobs. We implement the proposed techniques on top of Apache Storm, an open-source Parallel Stream Processing Engine. The extensive experimental results over both synthetic and real datasets show that our techniques clearly outperform existing approaches.

  20. Environmental optimisation of natural gas fired engines. Main report

    Energy Technology Data Exchange (ETDEWEB)

    Kvist, T. et al.

    2010-10-15

    The overall aim of the project has been to assess to what extent it is possible to reduce the emissions by adjusting the different engines examined and to determine the cost of the damage caused by emissions from natural gas combustion. However, only health and climate effects are included. The emissions of NOx, CO and UHC as well as the composition of the hydrocarbon emissions were measured for four different stationary lean-burn natural-gas fired engines installed at different combined heat and power (CHP) units in Denmark. The units were chosen to be representative of the natural gas fired engine-based power production in Denmark. The measurements showed that NOx emissions were relatively more sensitive to engine setting than UHC, CO and formaldehyde emissions. By reducing the NOx emissions to 40 % of the initial value (from 500 to 200 mg/m3(n) at 5 % O2) the UHC emission was increased by 10 % to 50 % of the initial value. The electrical efficiency was reduced by 0.5 to 1.0 percentage point. Externalities in relation to power production are defined as the costs which are not directly included in the price of the produced power. Health effects related to air pollution from power plants fall under this definition and usually dominate the results on external costs. For determination of these effects the exposure of the population, the impact of the exposure and the societal costs accompanying the impacts have been evaluated. As expected, it was found that when the engines are adjusted in order to reduce NOx emissions, the emission of UHC increases and vice versa. It was found that at high NOx emission levels (500 mg/m3(n) at 5 % O2) the external costs related to the NOx emissions are 15 to 25 times the costs related to UHC emissions. At low NOx emission levels (200 mg/m3(n) at 5 % O2) the costs related to NOx are 5 to 8 times the costs related to UHC emissions. Apparently, the harmfulness

  1. An empirical study on website usability elements and how they affect search engine optimisation

    OpenAIRE

    Eugene B. Visser; Melius Weideman

    2011-01-01

    The primary objective of this research project was to identify and investigate the website usability attributes which are in contradiction with search engine optimisation elements. The secondary objective was to determine if these usability attributes affect conversion. Although the literature review identifies the contradictions, experts disagree about their existence. An experiment was conducted, whereby the conversion and/or traffic ratio results of an existing control website were compared...

  2. Optimising the cam profile of an electronic unit pump for a heavy-duty diesel engine

    International Nuclear Information System (INIS)

    Qiu, Tao; Dai, Hefei; Lei, Yan; Cao, Chunlei; Li, Xuchu

    2015-01-01

    For a fuel system with a tangent cam or a constant-velocity cam, the peak injection pressure continues to rise as the injection duration increases, but overly high peak pressures induce mechanical loads and wear, limiting the maximum engine speed and injection quantity. To improve the performance of an EUP (Electronic Unit Pump) fuel system for heavy-duty diesel engines, this work proposes a new pump cam, namely the constant-pressure cam. It helps the EUP run at a higher speed and deliver larger fuel quantities while maintaining a constant peak injection pressure, which improves the power of the heavy-duty diesel engine. A model based on the EUP was built to determine the three constraints for optimising the constant-pressure cam: 1) the pump pressure should equal the nozzle pressure; 2) the cam speed should decrease with the increase in the injection duration; and 3) the cam acceleration gradient should be zero. An EUP system was tested with the tangent cam and the optimised cam under different conditions. The experimental results show that the EUP system with the optimised cam delivers more injection quantity and runs at higher engine speeds while maintaining the same peak pressure as the tangent cam. - Highlights: • We propose a constant-pressure cam to improve the power of heavy-duty diesel engine. • We deduce three constraints for the CP (constant-peak pressure) cam based on a model. • The EUP system with the new cam works well under higher engine speed. • The peak pressure of the constant-pressure cam fuel system maintains high

  3. Parallelization of Rocket Engine Simulator Software (PRESS)

    Science.gov (United States)

    Cezzar, Ruknet

    1998-01-01

    We have outlined our work in the last half of the funding period. We have shown how a demo package for RESSAP using MPI can be done. However, we also mentioned the difficulties with the UNIX platform. We have reiterated some of the suggestions made during the presentation of the progress at the Fourth Annual HBCU Conference. Although we have discussed, in some detail, how TURBDES/PUMPDES software can be run in parallel using MPI, at present, we are unable to experiment any further with either MPI or PVM. Due to X windows not being implemented, we are also not able to experiment further with XPVM, which, it will be recalled, has a nice GUI interface. There are also some concerns, on our part, about MPI being an appropriate tool. The best thing about MPI is that it is public domain. Although plenty of documentation exists for the intricacies of using MPI, little information is available on its actual implementations. Other than very typical, somewhat contrived examples, such as the Jacobi algorithm for solving Laplace's equation, there are few examples which can readily be applied to real situations, such as in our case. In effect, the review of literature on both MPI and PVM, and there is a lot, indicates something similar to the enormous effort which was spent on LISP and LISP-like languages as tools for artificial intelligence research. During the development of a book on programming languages [12], when we searched the literature for very simple examples like taking averages, reading and writing records, multiplying matrices, etc., we could hardly find any! Yet, so much was said and done on that topic in academic circles. It appears that we faced the same problem with MPI, where despite significant documentation, we could not find even a simple example which supports coarse-grain parallelism involving only a few processes. From the foregoing, it appears that a new direction may be required for more productive research during the extension period (10/19/98 - 10

  4. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    Science.gov (United States)

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels.
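
    Peak detection parallelises naturally as a predicate evaluated independently at every sample, followed by a compaction that gathers the surviving indices; this is presumably the pattern behind the toolbox's "parallel compact operation". A CPU analogue in NumPy (the toolbox itself targets GPUs):

        import numpy as np

        def detect_peaks(signal, threshold):
            """Indices of local maxima above a threshold: a per-sample
            predicate followed by a compaction into a dense index list."""
            s = np.asarray(signal, dtype=float)
            is_peak = np.zeros(s.shape, dtype=bool)
            is_peak[1:-1] = ((s[1:-1] > s[:-2]) & (s[1:-1] >= s[2:])
                             & (s[1:-1] > threshold))
            return np.flatnonzero(is_peak)  # the compaction step

        print(detect_peaks([0, 2, 1, 0, 3, 0], threshold=1.5))  # -> [1 4]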

  5. Engineering-Based Thermal CFD Simulations on Massive Parallel Systems

    KAUST Repository

    Frisch, Jérôme

    2015-05-22

    The development of parallel Computational Fluid Dynamics (CFD) codes is a challenging task that entails efficient parallelization concepts and strategies in order to achieve good scalability values when running those codes on modern supercomputers with several thousands to millions of cores. In this paper, we present a hierarchical data structure for massive parallel computations that supports the coupling of a Navier–Stokes-based fluid flow code with the Boussinesq approximation in order to address complex thermal scenarios for energy-related assessments. The newly designed data structure is specifically designed with the idea of interactive data exploration and visualization during runtime of the simulation code; a major shortcoming of traditional high-performance computing (HPC) simulation codes. We further show and discuss speed-up values obtained on one of Germany’s top-ranked supercomputers with up to 140,000 processes and present simulation results for different engineering-based thermal problems.

  6. Tolerating correlated failures in Massively Parallel Stream Processing Engines

    DEFF Research Database (Denmark)

    Su, L.; Zhou, Y.

    2016-01-01

    Fault-tolerance techniques for stream processing engines can be categorized into passive and active approaches. A typical passive approach periodically checkpoints a processing task's runtime states and can recover a failed task by restoring its runtime state using its latest checkpoint. On the other hand, an active approach usually employs backup nodes to run replicated tasks. Upon failure, the active replica can take over the processing of the failed task with minimal latency. However, both approaches have their own inadequacies in Massively Parallel Stream Processing Engines (MPSPEs).

  7. Environmental optimisation of natural gas fired engines. Measurement on four different engines. Project report

    Energy Technology Data Exchange (ETDEWEB)

    Kvist, T.

    2010-10-15

    The emissions of NOx, CO and UHC as well as the composition of the hydrocarbon emissions were measured for four different stationary lean burn natural gas fired engines installed at different combined heat and power (CHP) units in Denmark. The units were chosen to be representative of the natural gas fired engine-based power production in Denmark. The NOx emissions were varied from around 200 to 500 mg/m3(n) by varying the ignition timing and the excess of air. For each of the examined engines, measurements were conducted at different combinations of ignition timing and excess of air. The measurements showed that the NOx emissions were relatively more sensitive to engine setting than UHC, CO and formaldehyde emissions. By reducing the NOx emissions to 40 % of the initial value (from 500 to 200 mg/m3(n)) the UHC emission was increased by 10 % to 50 % of the initial value. The electrical efficiency was reduced by 0.5 to 1.0 percentage point. (Author)

  8. Engineering Computer Games: A Parallel Learning Opportunity for Undergraduate Engineering and Primary (K-5) Students

    Directory of Open Access Journals (Sweden)

    Mark Michael Budnik

    2011-04-01

    In this paper, we present how our College of Engineering is developing a growing portfolio of engineering computer games as a parallel learning opportunity for undergraduate engineering and primary (grade K-5) students. Around the world, many schools provide secondary students (grade 6-12) with opportunities to pursue pre-engineering classes. However, by the time students reach this age, many of them have already determined their educational goals and preferred careers. Our College of Engineering is developing resources to provide primary students, still in their educational formative years, with opportunities to learn more about engineering. One of these resources is a library of engineering games targeted to the primary student population. The games are designed by sophomore students in our College of Engineering. During their Introduction to Computational Techniques course, the students use the LabVIEW environment to develop the games. This software provides a wealth of design resources for the novice programmer; using it to develop the games strengthens the undergraduates

  9. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  10. Engineering-Based Thermal CFD Simulations on Massive Parallel Systems

    KAUST Repository

    Frisch, Jérôme; Mundani, Ralf-Peter; Rank, Ernst; van Treeck, Christoph

    2015-01-01

    The development of parallel Computational Fluid Dynamics (CFD) codes is a challenging task that entails efficient parallelization concepts and strategies in order to achieve good scalability values when running those codes on modern supercomputers

  11. Parallel multiphysics algorithms and software for computational nuclear engineering

    International Nuclear Information System (INIS)

    Gaston, D; Hansen, G; Kadioglu, S; Knoll, D A; Newman, C; Park, H; Permann, C; Taitano, W

    2009-01-01

    There is a growing trend in nuclear reactor simulation to consider multiphysics problems. This can be seen in reactor analysis where analysts are interested in coupled flow, heat transfer and neutronics, and in fuel performance simulation where analysts are interested in thermomechanics with contact coupled to species transport and chemistry. These more ambitious simulations usually motivate some level of parallel computing. Many of the coupling efforts to date utilize simple code coupling or first-order operator splitting, often referred to as loose coupling. While these approaches can produce answers, they usually leave questions of accuracy and stability unanswered. Additionally, the different physics often reside on separate grids which are coupled via simple interpolation, again leaving open questions of stability and accuracy. Utilizing state of the art mathematics and software development techniques we are deploying next generation tools for nuclear engineering applications. The Jacobian-free Newton-Krylov (JFNK) method combined with physics-based preconditioning provide the underlying mathematical structure for our tools. JFNK is understood to be a modern multiphysics algorithm, but we are also utilizing its unique properties as a scale bridging algorithm. To facilitate rapid development of multiphysics applications we have developed the Multiphysics Object-Oriented Simulation Environment (MOOSE). Examples from two MOOSE-based applications: PRONGHORN, our multiphysics gas cooled reactor simulation tool and BISON, our multiphysics, multiscale fuel performance simulation tool will be presented.
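
    The core of JFNK is that a Krylov method needs only Jacobian-vector products, which can be approximated by a finite difference of the nonlinear residual, Jv ≈ (F(u + hv) - F(u))/h, so the Jacobian is never formed. A minimal sketch with SciPy's GMRES; it is illustrative only and omits the physics-based preconditioning highlighted above:

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def jfnk_step(F, u, eps=1e-7):
            """One Newton step J*du = -F(u) without forming the Jacobian."""
            Fu = F(u)

            def jv(v):
                norm_v = np.linalg.norm(v)
                if norm_v == 0.0:
                    return np.zeros_like(v)
                h = eps / norm_v              # finite-difference step
                return (F(u + h * v) - Fu) / h

            J = LinearOperator((u.size, u.size), matvec=jv)
            du, info = gmres(J, -Fu)          # Krylov solve, matvec-only
            return u + du

        # Toy residual: F(u) = u^2 - 2 componentwise, root at sqrt(2)
        F = lambda u: u**2 - 2.0
        u = np.array([1.0, 1.5])
        for _ in range(6):
            u = jfnk_step(F, u)
        print(u)                              # approx. [1.4142, 1.4142]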

  12. Teaching ethics to engineers: ethical decision making parallels the engineering design process.

    Science.gov (United States)

    Bero, Bridget; Kuhlman, Alana

    2011-09-01

    In order to fulfill ABET requirements, Northern Arizona University's Civil and Environmental engineering programs incorporate professional ethics in several of its engineering courses. This paper discusses an ethics module in a 3rd year engineering design course that focuses on the design process and technical writing. Engineering students early in their student careers generally possess good black/white critical thinking skills on technical issues. Engineering design is the first time students are exposed to "grey" or multiple possible solution technical problems. To identify and solve these problems, the engineering design process is used. Ethical problems are also "grey" problems and present similar challenges to students. Students need a practical tool for solving these ethical problems. The step-wise engineering design process was used as a model to demonstrate a similar process for ethical situations. The ethical decision making process of Martin and Schinzinger was adapted for parallelism to the design process and presented to students as a step-wise technique for identification of the pertinent ethical issues, relevant moral theories, possible outcomes and a final decision. Students had greatest difficulty identifying the broader, global issues presented in an ethical situation, but by the end of the module, were better able to not only identify the broader issues, but also to more comprehensively assess specific issues, generate solutions and a desired response to the issue.

  13. MR-sialography: optimisation and evaluation of an ultra-fast sequence in parallel acquisition technique and different functional conditions of salivary glands

    International Nuclear Information System (INIS)

    Habermann, C.R.; Cramer, M.C.; Aldefeld, D.; Weiss, F.; Kaul, M.G.; Adam, G.; Graessner, J.; Reitmeier, F.; Jaehne, M.; Petersen, K.U.

    2005-01-01

    Purpose: To optimise a fast sequence for MR-sialography and to compare a parallel and non-parallel acquisition technique. Additionally, the effect of oral stimulation regarding the image quality was evaluated. Material and Methods: All examinations were performed by using a 1.5-T superconducting system. After developing a sufficient sequence for MR-sialography, a single-shot turbo-spin-echo sequence (ss-TSE) with an acquisition time of 2.8 sec was used in transverse and oblique sagittal orientation in 27 healthy volunteers. All images were performed with and without parallel imaging technique. The assessment of the ductal system of the submandibular and parotid gland was performed using a 1 to 5 visual scale for each side separately. Images were evaluated by four independent experienced radiologists. For statistical evaluation, an ANOVA with post-hoc comparisons was used with an overall two-tailed significance level of P=.05. For evaluation of interobserver variability, an intraclass correlation was computed and a correlation >0.8 was determined to indicate a high correlation. Results: All parts of the salivary excretory ducts could be visualised in all volunteers, with an overall rating for all ducts of 2.26 (SD±1.09). Between the four observers a high correlation could be obtained with an intraclass correlation of 0.9475. A significant influence regarding the slice angulations could not be obtained (p=0.74). In all healthy volunteers the visibility of excretory ducts improved significantly after oral application of a Sialogogum (p<0.001; η²=0.049). The use of a parallel imaging technique did not lead to an improvement of visualisation, showing a significant loss of image quality compared to an acquisition technique without parallel imaging (p<0.001; η²=0.013). Conclusion: The optimised ss-TSE MR-sialography seems to be a fast and sufficient technique for visualisation of excretory ducts of the main salivary glands, with no elaborate post-processing needed. To improve results of MR

  14. Effects of Pulsating Flow on Mass Flow Balance and Surge Margin in Parallel Turbocharged Engines

    OpenAIRE

    Thomasson, Andreas; Eriksson, Lars

    2015-01-01

    The paper extends a mean value model of a parallel turbocharged internal combustion engine with a crank angle resolved cylinder model. The result is a 0D engine model that includes the pulsating flow from the intake and exhaust valves. The model captures variations in turbo speed and pressure, and therefore variations in the compressor operating point, during an engine cycle. The model is used to study the effect of the pulsating flow on mass flow balance and surge margin in parallel turbocha...

  15. Design Patterns: establishing a discipline of parallel software engineering

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    Many-core processors present us with a software challenge. We must turn our serial code into parallel code. To accomplish this wholesale transformation of our software ecosystem, we must define established practice in parallel programming and then develop tools to support that practice. This leads to design patterns supported by frameworks optimized at runtime with advanced autotuning compilers. In this talk I provide an update of my ongoing research with the ParLab at UC Berkeley to realize this vision. In particular, I will describe our draft parallel pattern language, our early experiments with software frameworks, and the associated runtime optimization tools. About the speaker: Tim Mattson is a parallel programmer (Ph.D. Chemistry, UCSC, 1985). He does linear algebra, finds oil, shakes molecules, solves differential equations, and models electrons in simple atomic systems. He has spent his career working with computer scientists to make sure the needs of parallel applications programmers are met. Tim has ...

  16. MR-sialography: optimisation and evaluation of an ultra-fast sequence in parallel acquisition technique and different functional conditions of salivary glands; MR-Sialographie: Optimierung und Bewertung ultraschneller Sequenzen mit paralleler Bildgebung und oraler Stimulation

    Energy Technology Data Exchange (ETDEWEB)

    Habermann, C.R.; Cramer, M.C.; Aldefeld, D.; Weiss, F.; Kaul, M.G.; Adam, G. [Radiologisches Zentrum, Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie, Universitaetsklinikum Hamburg-Eppendorf (Germany); Graessner, J. [Siemens Medical Systems, Hamburg (Germany); Reitmeier, F.; Jaehne, M. [Kopf- und Hautzentrum, Klinik und Poliklinik fuer Hals-, Nasen- und Ohrenheilkunde, Universitaetsklinikum Hamburg-Eppendorf (Germany); Petersen, K.U. [Zentrum fuer Psychosoziale Medizin, Klinik und Poliklinik fuer Psychiatrie und Psychotherapie, Universitaetsklinikum Hamburg-Eppendorf (Germany)

    2005-04-01

    Purpose: To optimise a fast sequence for MR-sialography and to compare a parallel and non-parallel acquisition technique. Additionally, the effect of oral stimulation regarding the image quality was evaluated. Material and Methods: All examinations were performed by using a 1.5-T superconducting system. After developing a sufficient sequence for MR-sialography, a single-shot turbo-spin-echo sequence (ss-TSE) with an acquisition time of 2.8 sec was used in transverse and oblique sagittal orientation in 27 healthy volunteers. All images were performed with and without parallel imaging technique. The assessment of the ductal system of the submandibular and parotid gland was performed using a 1 to 5 visual scale for each side separately. Images were evaluated by four independent experienced radiologists. For statistical evaluation, an ANOVA with post-hoc comparisons was used with an overall two-tailed significance level of P=.05. For evaluation of interobserver variability, an intraclass correlation was computed and a correlation >0.8 was determined to indicate a high correlation. Results: All parts of the salivary excretory ducts could be visualised in all volunteers, with an overall rating for all ducts of 2.26 (SD±1.09). Between the four observers a high correlation could be obtained with an intraclass correlation of 0.9475. A significant influence regarding the slice angulations could not be obtained (p=0.74). In all healthy volunteers the visibility of excretory ducts improved significantly after oral application of a Sialogogum (p<0.001; η²=0.049). The use of a parallel imaging technique did not lead to an improvement of visualisation, showing a significant loss of image quality compared to an acquisition technique without parallel imaging (p<0.001; η²=0.013). Conclusion: The optimised ss-TSE MR-sialography seems to be a fast and sufficient technique for visualisation of excretory ducts of the main salivary glands, with no elaborate post

  17. Advanced FDTD methods parallelization, acceleration, and engineering applications

    CERN Document Server

    Yu, Wenhua

    2011-01-01

    The finite-difference time-domain (FDTD) method has revolutionized antenna design and electromagnetics engineering. Here's a cutting-edge book that focuses on the performance optimization and engineering applications of FDTD simulation systems. Covering the latest developments in this area, this unique resource offers you expert advice on the FDTD method, hardware platforms, and network systems. Moreover the book offers guidance in distinguishing between the many different electromagnetics software packages on the market today. You also find a complete chapter dedicated to large multi-scale pro

  18. Resource optimised reconfigurable modular parallel pipelined stochastic approximation-based self-tuning regulator architecture with reduced latency

    Directory of Open Access Journals (Sweden)

    Varghese Mathew Vaidyan

    2015-09-01

    Present self-tuning regulator architectures based on recursive least-squares estimation are computationally expensive and require large amounts of resources and time to generate the first control signal, owing to computational bottlenecks imposed by the calculations involved in the estimation stage, the different stages of matrix multiplication and the number of intermediate variables at each iteration; this precludes their use in applications that require fast response times and those which run on embedded computing platforms with low-power or low-cost requirements and constraints on resource usage. A salient feature of this study is that a new modular parallel pipelined stochastic approximation-based self-tuning regulator architecture is proposed which reduces the time required to generate the first control signal, reduces resource usage and reduces the number of intermediate variables. Fast matrix multiplication, pipelining and high-speed arithmetic function implementations were used for improving the performance. Results of implementation demonstrate that the proposed architecture has an improvement in control signal generation time by 38% and reduction in resource usage by 41% in terms of multipliers and 44.4% in terms of adders compared with the best existing related work, opening up new possibilities for the application of online embedded self-tuning regulators.
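
    The saving of a stochastic-approximation estimator over recursive least squares comes from replacing the O(n^2) covariance update with an O(n) gradient-style correction. A minimal Robbins-Monro (LMS-style) sketch of the estimation idea, not of the proposed hardware architecture:

        import numpy as np

        def sa_update(theta, phi, y, k):
            """One stochastic-approximation parameter update: O(n) work,
            versus the O(n^2) covariance update of recursive least squares."""
            gain = 1.0 / (k + 1)            # decreasing step size a_k
            error = y - phi @ theta         # prediction error
            return theta + gain * error * phi

        rng = np.random.default_rng(0)
        true_theta = np.array([0.5, -0.2, 0.1])
        theta = np.zeros(3)
        for k in range(5000):
            phi = rng.normal(size=3)        # regressor (past inputs/outputs)
            y = phi @ true_theta + 0.01 * rng.normal()
            theta = sa_update(theta, phi, y, k)
        print(theta)                        # converges towards true_theta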

  19. Emission Constrained Multiple-Pulse Fuel Injection Optimisation and Control for Fuel-Efficient Diesel Engines

    NARCIS (Netherlands)

    Luo, X.; Jager, B. de; Willems, F.P.T.

    2015-01-01

    With the application of multiple-pulse fuel injection profiles, the performance of diesel engines is enhanced in terms of low fuel consumption and low engine-out emission levels. However, the calibration effort increases due to a larger number of injection timing parameters. The difficulty of

  20. Emission constrained multiple-pulse fuel injection optimisation and control for fuel-efficient diesel engines

    NARCIS (Netherlands)

    Luo, X.; Jager, de A.G.; Willems, F.P.T.

    2015-01-01

    With the application of multiple-pulse fuel injection profiles, the performance of diesel engines is enhanced in terms of low fuel consumption and low engine-out emission levels. However, the calibration effort increases due to a larger number of injection timing parameters. The difficulty of

  1. Analytical expression for an optimised link bar mechanism for a beta-type Stirling engine

    DEFF Research Database (Denmark)

    Carlsen, Henrik; Bovin, Jonas Kabell

    2007-01-01

    The design of a mechanism for kinematic beta-type Stirling engines, where the displacer piston and the working piston share the same cylinder, is complicated. A well-known solution is the rhombic drive, but this solution depends on oil lubrication because of the gear wheels connecting the two counter-rotating crank shafts. In a hermetically sealed Stirling engine it is an advantage to avoid oil in the crank case, making the application of the rhombic drive difficult. In this paper, another crank mechanism is presented, which has been developed for a 9 kW single cylinder engine. The new crank mechanism is a further development of the mechanism in a previous 9 kW engine. The crank mechanism for the beta-type Stirling engine is based on two four-link straight line mechanisms pointing up and down, respectively. The mechanism pointing upwards is connected to the working piston, while the mechanism

  2. Comparative case study on website traffic generated by search engine optimisation and a pay-per-click campaign, versus marketing expenditure

    OpenAIRE

    Wouter T. Kritzinger; Melius Weideman

    2015-01-01

    Background: No empirical work was found on how marketing expenses compare when used solely for either the one or the other of the two main types of search engine marketing. Objectives: This research set out to determine how the results of the implementation of a pay-per-click campaign compared to those of a search engine optimisation campaign, given the same website and environment. At the same time, the expenses incurred on both these marketing methods were recorded and compared. ...

  3. Experimental analysis of ethanol dual-fuel combustion in a heavy-duty diesel engine: An optimisation at low load

    International Nuclear Information System (INIS)

    Pedrozo, Vinícius B.; May, Ian; Dalla Nora, Macklini; Cairns, Alasdair; Zhao, Hua

    2016-01-01

    Highlights: • Dual-fuel combustion offers promising results on a stock heavy-duty diesel engine. • The use of split diesel injections extends the benefits of the dual-fuel mode. • Ethanol–diesel dual-fuel combustion results in high indicated efficiencies. • NOx and soot emissions are significantly reduced. • Combustion efficiency reaches 98% with an ethanol energy ratio of 53%. - Abstract: Conventional diesel combustion produces harmful exhaust emissions which adversely affect the air quality if not controlled by in-cylinder measures and exhaust aftertreatment systems. Dual-fuel combustion can potentially reduce the formation of nitrogen oxides (NOx) and soot which are characteristic of diesel diffusion flame. The in-cylinder blending of different fuels to control the charge reactivity allows for lower local equivalence ratios and temperatures. The use of ethanol, an oxygenated biofuel with high knock resistance and high latent heat of vaporisation, increases the reactivity gradient. In addition, renewable biofuels can provide a sustainable alternative to petroleum-based fuels as well as reduce greenhouse gas emissions. However, ethanol–diesel dual-fuel combustion suffers from poor engine efficiency at low load due to incomplete combustion. Therefore, experimental studies were carried out at 1200 rpm and 0.615 MPa indicated mean effective pressure on a heavy-duty diesel engine. Fuel delivery was in the form of port fuel injection of ethanol and common rail direct injection of diesel. The objective was to improve combustion efficiency, maximise ethanol substitution, and minimise NOx and soot emissions. Ethanol energy fractions up to 69% were explored in conjunction with the effect of different diesel injection strategies on combustion, emissions, and efficiency. Optimisation tests were performed for the optimum fuelling and diesel injection strategy. The resulting effects of exhaust gas recirculation, intake air pressure, and rail pressure were

  4. Comparison and combination of NLPQL and MOGA algorithms for a marine medium-speed diesel engine optimisation

    International Nuclear Information System (INIS)

    Hu, Nao; Zhou, Peilin; Yang, Jianguo

    2017-01-01

    Highlights: • The NLPQL algorithm is not effective when used alone to optimise seven engine parameters. • The MOGA algorithm is time consuming but offers broader and finer solutions. • A better design is obtained by the NLPQL algorithm when it starts from a MOGA design. • SOI has dominant and clearly opposite effects on NOx and SFOC. • Late injection, low swirl and large spray angle can lower NOx and soot simultaneously. - Abstract: Seven engine design parameters were investigated by use of the NLPQL algorithm and MOGA, separately and together. Detailed comparisons were made on NOx, soot, SFOC, and also on the design parameters. Results indicate that the NLPQL algorithm failed to approach optimal designs, while MOGA offered more and better feasible Pareto designs. Then, an optimal design obtained by MOGA, offering the trade-off between NOx and soot, was set as the starting point of the NLPQL algorithm. In this situation, an even better design with lower NOx and soot was approached. Combustion processes of the optimal designs were also disclosed and compared in detail. Late injection and small swirl were reckoned to be the main reasons for reducing NOx. In the end, RSM contour maps were applied in order to gain a better understanding of the sensitivity of NOx, soot and SFOC to the input parameters.
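
    The combination that worked here, a population-based global search whose best design seeds a gradient-based local method, can be sketched generically with SciPy: differential evolution stands in for MOGA, and SLSQP (an SQP method related to NLPQL) refines its best point, on a toy objective standing in for the engine response:

        import numpy as np
        from scipy.optimize import differential_evolution, minimize

        # Toy objective in place of the NOx/soot/SFOC engine response.
        def objective(x):
            return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

        bounds = [(-2.0, 2.0), (-2.0, 2.0)]

        # Stage 1: global, population-based search (MOGA's role in the study).
        ga = differential_evolution(objective, bounds, seed=1, maxiter=50)

        # Stage 2: gradient-based refinement from the stage-1 design
        # (NLPQL's role; SLSQP is SciPy's closest SQP relative).
        local = minimize(objective, ga.x, method="SLSQP", bounds=bounds)
        print(ga.x, "->", local.x)          # local.x near [1, 1]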

  5. Parallel Hybrid Gas-Electric Geared Turbofan Engine Conceptual Design and Benefits Analysis

    Science.gov (United States)

    Lents, Charles; Hardin, Larry; Rheaume, Jonathan; Kohlman, Lee

    2016-01-01

    The conceptual design of a parallel gas-electric hybrid propulsion system for a conventional single aisle twin engine tube and wing vehicle has been developed. The study baseline vehicle and engine technology are discussed, followed by results of the hybrid propulsion system sizing and performance analysis. The weights analysis for the electric energy storage & conversion system and thermal management system is described. Finally, the potential system benefits are assessed.

  6. Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications

    Science.gov (United States)

    OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)

    1998-01-01

    This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).

  7. Engine-start Control Strategy of P2 Parallel Hybrid Electric Vehicle

    Science.gov (United States)

    Xiangyang, Xu; Siqi, Zhao; Peng, Dong

    2017-12-01

    A smooth and fast engine-start process is important to parallel hybrid electric vehicles with an electric motor mounted in front of the transmission. However, there are some challenges during the engine-start control. Firstly, the electric motor must simultaneously provide a stable driving torque to ensure drivability and a compensative torque to drag the engine before ignition. Secondly, engine-start time is a trade-off control objective because both fast start and smooth start have to be considered. To solve these problems, this paper first analyzed the resistance of the engine start process, and established a physical model in MATLAB/Simulink. Then a model-based coordinated control strategy among engine, motor and clutch was developed. Two basic control strategies, covering the fast-start and smooth-start processes, were studied. Simulation results showed that the control objectives were realized by applying the given control strategies, which can meet different requirements from the driver.

  8. Environmental optimisation of natural gas fired engines - calculation of health externalities

    Energy Technology Data Exchange (ETDEWEB)

    Frohn, L.M.; Becker, T.; Christensen, Jesper; Hertel, O.; Silver, J.D.; Villadsen, H. (Aarhus Univ., National Environmental Research Institute, Dept. of Atmospheric Environment, Roskilde (Denmark)); Soees Hansen, M. (Aarhus Univ., National Environmental Research Institute, Dept. of Policy Analysis, Roskilde (Denmark)); Skou Andersen, M. (European Environment Agency, Copenhagen (Denmark))

    2010-07-01

    The emissions measured in WP1 of the project have been applied as input for model calculations with the EVA model system. The DEHM model, which calculates the regional-scale delta-concentrations, has been further developed to handle the low signal-to-noise ratio of the delta-concentrations related to the small sources that the gas fired engines constitute. All combinations of engine settings and locations have been run as scenarios with the EVA system; the results have, however, been grouped into themes to investigate changes related to location as well as changes related to engine settings. New exposure-response relations have been implemented in the system for the chemical components nitrogen dioxide, formaldehyde, ethene and propene. The choice of high-exposure location in the calculations has unfortunately turned out to be suboptimal. The location at Store Valby has previously been applied in studies with the EVA system as a high-exposure site; in those applications, however, the emission sources were large power plants with stack heights of around 150 meters. The stack height of the gas fired engines is only around 30 meters, and the consequence is that the emitted components reach the surface closer to the stack, thereby giving high exposure in an area located further to the southwest, where the population density is not as high as in central Copenhagen. In general the marginal health costs (in Euro per kg) of carbon monoxide and formaldehyde emissions are very small. The emissions of formaldehyde are also small, and the resulting costs for this component are therefore very small. The emission of carbon monoxide is much larger; however, the small marginal cost keeps its contribution to the total costs small as well. The marginal health costs of nitrogen oxides and ethene emissions show little variation with engine scenario. However, the general picture is that as the NO{sub x} emissions increase (either by increasing ignition

  9. Socio economic analysis of environmental optimisation of natural gas fired engines

    Energy Technology Data Exchange (ETDEWEB)

    Joergensen, Sisse Liv; Moeller, F.

    2011-02-15

    This report analyses the budget and welfare costs associated with changing the settings of a gas engine. The purpose is to analyse what it would cost the plant owner and society if the engine settings were changed in order to obtain lower NO{sub x} emissions. The plant owner will lose while society will gain wealth when aiming for lower NO{sub x} emissions. The loss for the plant owner is primarily caused by taxes, while the gain for society comes from lower health expenses. The report also analyses whether the plant's location has any effect for society; however, since the population density does not differ very much across Denmark, this does not have any major effect. (Author)

  10. Comparative case study on website traffic generated by search engine optimisation and a pay-per-click campaign, versus marketing expenditure

    Directory of Open Access Journals (Sweden)

    Wouter T. Kritzinger

    2015-09-01

    Background: No empirical work was found on how marketing expenses compare when used solely for either the one or the other of the two main types of search engine marketing. Objectives: This research set out to determine how the results of the implementation of a pay-per-click campaign compared to those of a search engine optimisation campaign, given the same website and environment. At the same time, the expenses incurred on both these marketing methods were recorded and compared. Method: The active website of an existing, successful e-commerce concern was used as platform. The company had been using pay-per-click only for a period, whilst traffic was monitored. This system was decommissioned on a particular date and time, and an alternative search engine optimisation system was started at the same time. Again, both traffic and expenses were monitored. Results: The results indicate that the pay-per-click system did produce favourable results, but on the condition that a monthly fee must be set aside to guarantee consistent traffic. The implementation of search engine optimisation required a relatively large investment at the outset, but it was once-off. After a drop in traffic owing to crawler visitation delays, the website traffic surpassed the average figure achieved during the pay-per-click period after a little over three months, whilst the expenditure crossed over after just six months. Conclusion: Whilst considering the specific parameters of this study, an investment in search engine optimisation rather than a pay-per-click campaign appears to produce better results at a lower cost, after a given period of time.

  11. Civil Engineering Optimisation Tool for the Study of CERN's Future Circular Colliders

    OpenAIRE

    Cook, Charlie; Goddard, Brennan; Lebrun, Philippe; Osborne, John; Robert, Youri; Sturzaker, C; Sykes, M; Loo, Y; Brasser, J; Trunk, R

    2015-01-01

    The feasibility of Future Circular Colliders (FCC), possible successors to the Large Hadron Collider (LHC), is currently under investigation at CERN. This paper describes how CERN’s civil engineering team are utilising an interactive tool containing a 3D geological model of the Geneva basin. This tool will be used to investigate the optimal position of the proposed 80km-100km tunnel. The benefits of using digital modelling during the feasibility stage are discussed and some early results of t...

  12. Numerical Optimisation in Non Reacting Conditions of the Injector Geometry for a Continuous Detonation Wave Rocket Engine

    Science.gov (United States)

    Gaillard, T.; Davidenko, D.; Dupoirieux, F.

    2015-06-01

    The paper presents the methodology and the results of a numerical study aimed at the investigation and optimisation of different means of fuel and oxidizer injection adapted to rocket engines operating in the rotating detonation mode. As the simulations are performed at the local scale of a single injection element, only one periodic pattern of the whole geometry can be calculated, so the travelling detonation waves and the associated chemical reactions cannot be taken into account. Here, separate injection of fuel and oxidizer is considered, because premixed injection carries the risk of upstream propagation of the detonation wave. Different combinations of geometrical periodicity and symmetry are investigated for the injection elements distributed over the injector head. To analyse the injection and mixing processes, a non-reacting 3D flow is simulated using the LES approach. The performance of the studied configurations is analysed using the results on instantaneous and mean flowfields, as well as by comparing the mixing efficiency and the total pressure recovery evaluated for each configuration.

  13. Optimising ventilation-system design for a container-housed engine

    Energy Technology Data Exchange (ETDEWEB)

    Sala, J.M.; Eguia, J.; Flores, I. [Escuela Superior de Ingenieros Industriales de Bilbao, Universidad del Pais Vasco, Alameda de Urquijo, s/n 48013 Bilbao (Bizkaia) (Spain); Lopez-Gonzalez, L.M.; Ruiz de Adana, M. [Escuela Tecnica Superior de Ingenieria Industrial, Depto de Ingenieria Mecanica, Universidad de La Rioja, C/Luis de Ulloa, 20, E-26004 Logrono (La Rioja) (Spain); Miguez, J.L. [Universidad de Vigo, Escuela Tecnica Superior de Ingenieros Industriales, C/Lagoas-Marcosende, s/n 36200 Vigo (Pontevedra) (Spain)

    2006-10-15

    Containerised cogeneration sets, CCSs, are an efficient answer for remote developing regions which do not have alternative energy sources, and for applications requiring mobility and the quick installation of energy plants. Nevertheless, CCSs can present over-heating problems as a result of inefficient ventilation. The heat dissipated by each of the 28 elements under consideration in the engine compartment was assessed, together with the mass flow rate of air supplied to the cab and the air temperature at the inlet and outlet. A Computational Fluid Dynamics (CFD) model has been developed that allows for simulation of the velocity, temperature and pressure fields and for calculating the heat flows in a CCS with a reciprocating diesel engine with an alternator power of 903 kW. Predictions from this model have been contrasted with the experimental data obtained in a series of measurements. The CFD model has been used to analyse possible alternatives for improving the ventilation system. Besides the use of insulation to reduce the heat dissipated, other alternatives have been studied: e.g., improving the airflow by fitting a metal sheet as a deflector, or using a third fan. Of the three alternatives analysed, the company has decided to incorporate the simplest and cheapest, consisting of fitting a metal sheet around the alternator. (author)

  14. Selenium fuel: Surface engineering of U(Mo) particles to optimise fuel performance

    International Nuclear Information System (INIS)

    Van den Berghe, S.; Leenaers, A.; Detavernier, C.

    2010-01-01

    Recent developments on the stabilisation of U(Mo) in-pile behaviour in plate-type fuel have focussed almost exclusively on the addition of Si to the Al matrix of the fuel. This has now culminated in a qualification effort in the form of the European LEONIDAS initiative, for which irradiations will start in 2010. In this framework, many discussions have been held on the Si content of the matrix needed for stabilisation of the interaction phase and the requirement for the formation of Si-rich layers around the particles during the fabrication steps. However, it is clear that the Si needs to be incorporated in the interaction phase for it to be effective; the currently proposed methods for this depend on a diffusion mechanism, which is difficult to control. This has led to the concept of a Si-coated particle as a more efficient way of incorporating the Si in the fuel, by putting it immediately where it will be required: at the fuel-matrix interface. As part of the SELENIUM (Surface Engineered Low ENrIched Uranium-Molybdenum fuel) project, SCK CEN has built a sputter coater for PVD magnetron sputter coating of particles in collaboration with the University of Ghent. The coater is equipped with three 3-inch magnetron sputter heads, allowing deposition of 3 different elements or of a single element at high deposition speed. The particles are slowly rotated in a drum to produce homogeneous layer thicknesses. (author)

  15. An Introduction to Parallel Cluster Computing Using PVM for Computer Modeling and Simulation of Engineering Problems

    International Nuclear Information System (INIS)

    Spencer, VN

    2001-01-01

    An investigation has been conducted into the ability of clustered personal computers to improve the performance of software simulations for solving engineering problems. The power and utility of personal computers continues to grow exponentially through advances in computing capabilities such as newer microprocessors, advances in microchip technologies, electronic packaging, and cost-effective gigabyte-size hard drive capacity. Many engineering problems require significant computing power. Traditionally, such computation has been done by high-performance computer systems that cost millions of dollars and need gigabytes of memory to complete the task. Alternatively, it is feasible to provide adequate computing in the form of clustered personal computers. This method cuts cost and size by linking (clustering) personal computers together across a network. Clusters also have the advantage that they can be used as stand-alone computers when they are not operating as a parallel computer. Parallel computing software to exploit clusters is available for computer operating systems like Unix, Windows NT, or Linux. This project concentrates on the use of Windows NT and the Parallel Virtual Machine (PVM) system to solve an engineering dynamics problem in Fortran.

  16. Parallel computing in cluster of GPU applied to a problem of nuclear engineering

    International Nuclear Information System (INIS)

    Moraes, Sergio Ricardo S.; Heimlich, Adino; Resende, Pedro

    2013-01-01

    Cluster computing has been widely used as a low-cost alternative for parallel processing in scientific applications. With the use of the Message-Passing Interface (MPI) protocol, development became even more accessible and widespread in the scientific community. A more recent trend is the use of the Graphics Processing Unit (GPU), a powerful co-processor able to perform hundreds of instructions in parallel, reaching a throughput hundreds of times that of a CPU. However, a standard PC does not, in general, accommodate more than two GPUs. Hence, this work proposes the development and evaluation of a hybrid, low-cost parallel approach to the solution of a typical nuclear engineering problem. The idea is to use cluster parallelism technology (MPI) together with GPU programming techniques (CUDA - Compute Unified Device Architecture) to simulate neutron transport through a slab using the Monte Carlo method. Using a cluster comprising four quad-core computers with 2 GPUs each, programs were developed with the MPI and CUDA technologies. Experiments applying different configurations, from 1 to 8 GPUs, were performed and the results were compared with the sequential (non-parallel) version. A speed-up of about 2,000 times was observed when comparing the 8-GPU configuration with the sequential version. The results presented here are discussed and analysed with the objective of outlining the gains and possible limitations of the proposed approach. (author)
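
    The cluster level of such a hybrid scheme reduces to splitting the particle histories across MPI ranks and reducing the tallies at the end; the per-node GPU offload is omitted here. The sketch below is a simplified, absorption-only slab-transmission estimate with assumed cross-section and thickness, not the code described in the record.

        # MPI Monte Carlo estimate of neutron transmission through a slab.
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        sigma_t, thickness = 1.0, 2.0       # total XS (1/cm), slab width (cm)
        n_total = 1_000_000
        n_local = n_total // size           # histories per rank

        rng = np.random.default_rng(seed=rank)
        # Free-flight distance of each neutron in a mono-directional beam.
        paths = rng.exponential(1.0 / sigma_t, n_local)
        transmitted = comm.reduce(int(np.count_nonzero(paths > thickness)),
                                  op=MPI.SUM, root=0)
        if rank == 0:
            print("transmission =", transmitted / (n_local * size))
            # Analytic answer for this toy case: exp(-2) = 0.1353...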

  17. Numerical Prediction of CCV in a PFI Engine using a Parallel LES Approach

    Energy Technology Data Exchange (ETDEWEB)

    Ameen, Muhsin M; Mirzaeian, Mohsen; Millo, Federico; Som, Sibendu

    2017-10-15

    Cycle-to-cycle variability (CCV) is detrimental to IC engine operation and can lead to partial burn, misfire, and knock. Predicting CCV numerically is extremely challenging for two key reasons. Firstly, high-fidelity methods such as large eddy simulation (LES) are required to accurately resolve the in-cylinder turbulent flowfield both spatially and temporally. Secondly, CCV is experienced over long timescales, and hence the simulations need to be performed for hundreds of consecutive cycles. Ameen et al. (Int. J. Eng. Res., 2017) developed a parallel perturbation model (PPM) approach to dissociate this long-timescale problem into several shorter-timescale problems. The strategy is to perform multiple single-cycle simulations in parallel by perturbing the initial velocity field based on the intensity of the in-cylinder turbulence. This strategy was demonstrated for a motored engine, and it was shown that the mean and variance of the in-cylinder flowfield were captured reasonably well by this approach. In the present study, the PPM approach is extended to simulate the CCV in a fired port-fuel-injected (PFI) SI engine. Two operating conditions are considered: a medium-CCV case at 2500 rpm and 16 bar BMEP, and a low-CCV case at 4000 rpm and 12 bar BMEP. The predictions from this approach are also shown to be similar to those of consecutive LES cycles. Both the consecutive and PPM LES cycles are observed to under-predict the variability in the early stage of combustion. The parallel approach slightly underpredicts the cyclic variability at all stages of combustion as compared to the consecutive LES cycles. However, it is shown that the parallel approach is able to predict the coefficient of variation (COV) of the in-cylinder pressure and burn-rate-related parameters with sufficient accuracy, and is also able to predict the qualitative trends in CCV with changing operating conditions. The convergence of the statistics
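
    In outline, the PPM idea is to seed N independent single-cycle runs with the same mean field plus turbulence-scaled random perturbations and launch them concurrently instead of consecutively. The sketch below is a schematic assumption of that workflow; solve_engine_cycle is a stub standing in for the LES solver, and the field shape and intensity are arbitrary.

        # Parallel perturbation sketch: N perturbed initial fields, N
        # independent "cycles" run concurrently.
        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def solve_engine_cycle(u0):
            # Stub for one LES engine cycle; returns a fake peak pressure.
            return 40.0 + 5.0 * float(u0.mean())

        def run_cycle(seed, u_mean, turb_intensity):
            rng = np.random.default_rng(seed)
            u0 = u_mean + turb_intensity * rng.standard_normal(u_mean.shape)
            return solve_engine_cycle(u0)

        if __name__ == "__main__":
            n = 20
            u_mean = np.zeros((32, 32, 32, 3))    # illustrative mean field
            with ProcessPoolExecutor() as pool:
                peaks = list(pool.map(run_cycle, range(n),
                                      [u_mean] * n, [0.5] * n))
            print("mean peak pressure:", np.mean(peaks))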

  18. Stage-by-Stage and Parallel Flow Path Compressor Modeling for a Variable Cycle Engine

    Science.gov (United States)

    Kopasakis, George; Connolly, Joseph W.; Cheng, Larry

    2015-01-01

    This paper covers the development of stage-by-stage and parallel flow path compressor modeling approaches for a Variable Cycle Engine. The stage-by-stage compressor modeling approach is an extension of a technique for lumped volume dynamics and performance characteristic modeling. It was developed to improve the accuracy of axial compressor dynamics over lumped volume dynamics modeling. The stage-by-stage compressor model presented here is formulated into a parallel flow path model that includes both axial and rotational dynamics. This is done to enable the study of compressor and propulsion system dynamic performance under flow distortion conditions. The approaches utilized here are generic and should be applicable for the modeling of any axial flow compressor design.

  19. Parallel local search for solving Constraint Problems on the Cell Broadband Engine (Preliminary Results)

    Directory of Open Access Journals (Sweden)

    Salvator Abreu

    2009-10-01

    We explore the use of the Cell Broadband Engine (Cell/BE for short) for combinatorial optimization applications: we present a parallel version of a constraint-based local search algorithm that has been implemented on a multiprocessor BladeCenter machine with twin Cell/BE processors (a total of 16 SPUs per blade). This algorithm was chosen because it fits the Cell/BE architecture very well and requires neither shared memory nor communication between processors, while retaining a compact memory footprint. We study the performance on several large optimization benchmarks and show that this achieves mostly linear time speedups, sometimes even super-linear ones. This is possible because the parallel implementation may simultaneously explore different parts of the search space and therefore converge faster towards the best sub-space and thus towards a solution. Besides yielding speedups, the resulting times exhibit a much smaller variance, which benefits applications where a timely reply is critical.
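
    Because the workers share nothing, the scheme amounts to independent multi-start local search with the best result kept, which is easy to reproduce on any multicore machine. The sketch below is a toy stand-in (a random-restart hill climber on an assignment-style problem), not the paper's constraint solver.

        # Communication-free parallel local search: independent workers,
        # best result wins.
        import random
        from concurrent.futures import ProcessPoolExecutor

        def local_search(seed, n=50, iters=20000):
            rng = random.Random(seed)
            x = [rng.randrange(n) for _ in range(n)]
            cost = sum(abs(x[i] - i) for i in range(n))
            for _ in range(iters):
                i, v = rng.randrange(n), rng.randrange(n)
                delta = abs(v - i) - abs(x[i] - i)
                if delta <= 0:               # accept non-worsening moves
                    x[i], cost = v, cost + delta
            return cost

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:
                print("best cost:", min(pool.map(local_search, range(16))))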

  20. A design concept of parallel elasticity extracted from biological muscles for engineered actuators.

    Science.gov (United States)

    Chen, Jie; Jin, Hongzhe; Iida, Fumiya; Zhao, Jie

    2016-08-23

    Series elastic actuation, which takes inspiration from biological muscle-tendon units, has been extensively studied and used to address the challenges (e.g. energy efficiency, robustness) existing in purely stiff robots. However, there also exists another form of passive property in biological actuation, parallel elasticity within muscles themselves, and our knowledge of it is limited: for example, there is still no general design strategy for the elasticity profile. When we look at nature, on the other hand, there seems to be a universal agreement in biological systems: experimental evidence suggests that a concave-upward elasticity behaviour is exhibited within the muscles of animals. Seeking to draw possible design clues for elasticity in parallel with actuators, we use a simplified joint model to investigate the mechanisms behind this biologically universal preference of muscles. Actuation of the model is identified from general biological joints and further reduced with a specific focus on muscle elasticity aspects, for the sake of easy implementation. By examining various elasticity scenarios, one without elasticity and three with elasticity of different profiles, we find that parallel elasticity generally exerts contradictory influences on energy efficiency and disturbance rejection, owing to the mechanical impedance shift it causes. The trade-off analysis between them also reveals that concave parallel elasticity is able to achieve a more advantageous balance than linear and convex ones. It is expected that the results could contribute to our further understanding of muscle elasticity and provide a theoretical guideline on how to properly design parallel elasticity behaviours for engineering systems such as artificial actuators and robotic joints.

  1. Passive and partially active fault tolerance for massively parallel stream processing engines

    DEFF Research Database (Denmark)

    Su, Li; Zhou, Yongluan

    2018-01-01

    ... On the other hand, an active approach usually employs backup nodes to run replicated tasks. Upon failure, the active replica can take over the processing of the failed task with minimal latency. However, both approaches have their own inadequacies in Massively Parallel Stream Processing Engines (MPSPEs). ... We also propose effective and efficient algorithms to optimize a partially active replication plan to maximize the quality of tentative outputs. We implemented PPA on top of Storm, an open-source MPSPE, and conducted extensive experiments using both real and synthetic datasets to verify its effectiveness ...

  2. Current Trends in Numerical Simulation for Parallel Engineering Environments: New Directions and Work-in-Progress

    International Nuclear Information System (INIS)

    Trinitis, C; Schulz, M

    2006-01-01

    In today's world, the use of parallel programming and architectures is essential for simulating practical problems in engineering and related disciplines. Remarkable progress in CPU architecture, system scalability, and interconnect technology continues to provide new opportunities, as well as new challenges for both system architects and software developers. These trends are paralleled by progress in parallel algorithms, simulation techniques, and software integration from multiple disciplines. ParSim brings together researchers from both application disciplines and computer science and aims at fostering closer cooperation between these fields. Since its successful introduction in 2002, ParSim has established itself as an integral part of the EuroPVM/MPI conference series. In contrast to traditional conferences, emphasis is put on the presentation of up-to-date results with a short turn-around time. This offers a unique opportunity to present new aspects in this dynamic field and discuss them with a wide, interdisciplinary audience. The EuroPVM/MPI conference series, as one of the prime events in parallel computation, serves as an ideal surrounding for ParSim. This combination enables the participants to present and discuss their work within the scope of both the session and the host conference. This year, eleven papers from authors in nine countries were submitted to ParSim, and we selected five of them. They cover a wide range of application fields, including gas flow simulations, thermo-mechanical processes in nuclear waste storage, and cosmological simulations. At the same time, the selected contributions also address the computer science side of their codes and discuss different parallelization strategies, programming models and languages, as well as the use of nonblocking collective operations in MPI. We are confident that this provides an attractive program and that ParSim will be an informal setting for lively discussions and for fostering new

  3. Modeling and Control of a Parallel Waste Heat Recovery System for Euro-VI Heavy-Duty Diesel Engines

    NARCIS (Netherlands)

    Feru, E.; Willems, F.P.T.; Jager, B. de; Steinbuch, M.

    2014-01-01

    This paper presents the modeling and control of a waste heat recovery system for a Euro-VI heavy-duty truck engine. The considered waste heat recovery system consists of two parallel evaporators with expander and pumps mechanically coupled to the engine crankshaft. Compared to previous work, the

  4. Modeling and control of a parallel waste heat recovery system for Euro-VI heavy-duty diesel engines

    NARCIS (Netherlands)

    Feru, E.; Willems, F.P.T.; Jager, de A.G.; Steinbuch, M.

    2014-01-01

    This paper presents the modeling and control of a waste heat recovery system for a Euro-VI heavy-duty truck engine. The considered waste heat recovery system consists of two parallel evaporators with expander and pumps mechanically coupled to the engine crankshaft. Compared to previous work, the

  5. Simulation optimisation

    International Nuclear Information System (INIS)

    Anon

    2010-01-01

    Over the past decade there has been a significant advance in flotation circuit optimisation through performance benchmarking using metallurgical modelling and steady-state computer simulation. This benchmarking includes traditional measures, such as grade and recovery, as well as new flotation measures, such as ore floatability, bubble surface area flux and froth recovery. To further this optimisation, Outotec has released its HSC Chemistry software with simulation modules. The flotation model developed by the AMIRA P9 Project, of which Outotec is a sponsor, is regarded by industry as the most suitable flotation model to use for circuit optimisation. This model incorporates ore floatability with flotation cell pulp and froth parameters, residence time, entrainment and water recovery. Outotec's HSC Sim enables the simulation of mineral processes at different levels of detail: from comminution circuits with size but no composition information, through flotation processes with minerals by size by floatability components, to full processes with true particles with MLA data.

  6. Life-cycle energy optimisation : A proposed methodology for integrating environmental considerations early in the vehicle engineering design process

    OpenAIRE

    O'Reilly, Ciarán J.; Göransson, Peter; Funazaki, Atsushi; Suzuki, Tetsuya; Edlund, Stefan; Gunnarsson, Cecilia; Lundow, Jan-Olov; Cerin, Pontus; Cameron, Christopher J.; Wennhage, Per; Potting, José

    2016-01-01

    To enable the consideration of life cycle environmental impacts in the early stages of vehicle design, a methodology using the proxy of life cycle energy is proposed in this paper. The trade-offs in energy between vehicle production, operational performance and end-of-life are formulated as a mathematical problem, and simultaneously balanced with other transport-related functionalities, and may be optimised. The methodology is illustrated through an example design study, which is deliberately...

  7. Beam position optimisation for IMRT

    International Nuclear Information System (INIS)

    Holloway, L.; Hoban, P.

    2001-01-01

    Full text: The introduction of IMRT has not generally resulted in the use of optimised beam positions, because finding the global solution of the problem requires a time-consuming stochastic optimisation method. Although a deterministic method may not achieve the global minimum, it should achieve a superior dose distribution compared to no optimisation. This study aimed to develop and test such a method. The beam optimisation method developed relies on an iterative process to reach the desired number of beams from a large initial number of beams. The number of beams is reduced in a 'weeding-out' process based on the total fluence that each beam delivers. The process is gradual, with only three beams removed each time (following a small number of iterations), ensuring that the reduction in beams does not dramatically affect the fluence maps of those remaining. A comparison was made between the dose distributions achieved when the beam positions were optimised in this fashion and when the beam positions were evenly distributed. The method has been shown to work effectively and efficiently. The Figure shows a comparison of dose distributions with optimised and non-optimised beam positions for 5 beams. It can be clearly seen that there is an improvement in the dose distribution delivered to the tumour and a reduction in the dose to the critical structure with beam position optimisation. A method for beam position optimisation for use in IMRT optimisations has been developed. This method, although not guaranteed to reach the global minimum, still achieves a dramatic improvement compared with no beam position optimisation, and does so very efficiently. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine
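
    The 'weeding-out' loop described above can be summarised as a greedy pruning procedure. The following sketch is an illustration of that idea only; the fluence maps are random, and the re-optimisation step is an identity placeholder rather than an IMRT optimiser.

        # Greedy beam weeding: repeatedly drop the beams with the lowest
        # total fluence, re-optimising in between, until the target count.
        import numpy as np

        def prune_beams(fluence_maps, target, reoptimise):
            beams = dict(enumerate(fluence_maps))
            while len(beams) > target:
                ranked = sorted(beams, key=lambda b: beams[b].sum())
                for b in ranked[:min(3, len(beams) - target)]:
                    del beams[b]
                beams = reoptimise(beams)   # a few optimiser iterations
            return sorted(beams)

        maps = [np.random.rand(10, 10) for _ in range(36)]
        print(prune_beams(maps, 5, reoptimise=lambda b: b))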

  8. Performance assessment and optimisation of a large information system by combined customer relationship management and resilience engineering: a mathematical programming approach

    Science.gov (United States)

    Azadeh, A.; Foroozan, H.; Ashjari, B.; Motevali Haghighi, S.; Yazdanparast, R.; Saberi, M.; Torki Nejad, M.

    2017-10-01

    Information systems (ISs) and information technologies (ITs) play a critical role in large, complex gas corporations. Many factors, such as human, organisational and environmental factors, affect ISs in an organisation; investigating IS success is therefore considered a complex problem. Moreover, because of the competitive business environment and the high volume of information flow in organisations, new issues such as resilient ISs and successful customer relationship management (CRM) have emerged. A resilient IS will provide sustainable delivery of information to internal and external customers. This paper presents an integrated approach to enhance and optimise the performance of each component of a large IS based on CRM and resilience engineering (RE) in a gas company. Enhancing this performance can help ISs perform business tasks efficiently. The data are collected from standard questionnaires and are then analysed by data envelopment analysis, selecting the optimal mathematical programming approach. The selected model is validated and verified by the principal component analysis method. Finally, CRM and RE factors are identified as influential factors through sensitivity analysis for this particular case study. To the best of our knowledge, this is the first study of performance assessment and optimisation of a large IS combining RE and CRM.

  9. Human factors engineering measures taken by nuclear power plant owners/operators for optimisation of the man-machine interface

    International Nuclear Information System (INIS)

    Eisgruber, H.

    1996-01-01

    Both operating results and human factors studies show that man is able to meet the requirements of this working environment. Hence the degree of human reliability required by the design basis of nuclear power plants is ensured. This means: - Nuclear technology for electricity generation is justifiable from the human factors point of view. - The chief opponent is not right in saying that man is unable to cope with the risks and challenges brought about by nuclear technology applications. The human factors concept for optimisation or configuration of man-machine systems represents an additional endeavor on the part of nuclear power plant operators within the framework of their responsibilities. Human factors analyses meet with a good response from the personnel, as analysis results and the clarification of causes of accident scenarios contribute to relieving the personnel (exoneration) and to finding ways for remedial action. (orig./DG)

  10. Evaluation and optimisation of phenomenological multi-step soot model for spray combustion under diesel engine-like operating conditions

    Science.gov (United States)

    Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song; Schramm, Jesper

    2015-05-01

    In this work, a two-dimensional computational fluid dynamics study of an n-heptane combustion event and the associated soot formation process in a constant-volume combustion chamber is reported. The key interest here is to evaluate the sensitivity of the chemical kinetics and the submodels of a semi-empirical soot model in predicting the associated events. The numerical computation is performed using an open-source code, and a chemistry coordinate mapping approach is used to expedite the calculation. A library consisting of various phenomenological multi-step soot models is constructed and integrated with the spray combustion solver. Prior to the soot modelling, combustion simulations are carried out. Numerical results show that the ignition delay times and lift-off lengths exhibit good agreement with the experimental measurements across a wide range of operating conditions, apart from the cases with ambient temperature lower than 850 K. The variation of the soot precursor production with respect to the change of ambient oxygen levels qualitatively agrees with that of the conceptual models when the skeletal n-heptane mechanism is integrated with a reduced pyrene chemistry. Subsequently, a comprehensive sensitivity analysis is carried out to appraise the existing soot formation and oxidation submodels. It is revealed that the soot formation is captured when the surface growth rate is calculated using a square root function of the soot specific surface area and when a pressure-dependent model constant is considered. An optimised soot model is then proposed based on the knowledge gained through this exercise. With the implementation of the optimised model, the simulated soot onset and transport phenomena before reaching quasi-steady state agree reasonably well with the experimental observation. The variations of spatial soot distribution and soot mass produced at oxygen molar fractions ranging from 10.0 to 21.0%, for both low- and high-density conditions, are also reproduced.
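
    The surface-growth form singled out by the sensitivity analysis (a square-root dependence on the soot specific surface area with a pressure-dependent model constant) can be written compactly. The constants, reference pressure and exponent below are placeholder assumptions, not the calibrated values of the study.

        # Illustrative soot surface-growth term: rate ~ C(p) * sqrt(S).
        import math

        def surface_growth_rate(spec_surface_area, p,
                                p_ref=5.0e6, c0=1.0e-2, n=0.5):
            c_p = c0 * (p / p_ref) ** n     # pressure-dependent constant
            return c_p * math.sqrt(spec_surface_area)

        print(surface_growth_rate(2.0e4, 8.0e6))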

  11. Engineering Intersubband Nonlinearities in GaN/AlGaN Coupled Quantum Wells for Optimised Performance in Wide Bandwidth Applications

    National Research Council Canada - National Science Library

    Soref, Richard A; Sun, Gregory; Khurgin, Jacob B

    2005-01-01

    We investigate nonlinear optical properties of coupled GaN/AlGaN quantum wells and show that one can engineer the response time and nonlinear phase shift within wide limits and thus achieve optimized...

  12. Genetic algorithms and artificial neural networks for loading pattern optimisation of advanced gas-cooled reactors

    Energy Technology Data Exchange (ETDEWEB)

    Ziver, A.K. E-mail: a.k.ziver@imperial.ac.uk; Pain, C.C; Carter, J.N.; Oliveira, C.R.E. de; Goddard, A.J.H.; Overton, R.S

    2004-03-01

    A non-generational genetic algorithm (GA) has been developed for fuel management optimisation of Advanced Gas-Cooled Reactors, which are operated by British Energy and produce around 20% of the UK's electricity requirements. An evolutionary search is coded using the genetic operators, namely tournament selection, two-point crossover, mutation and random assessment of the population, for multi-cycle loading pattern (LP) optimisation. A detailed description of the chromosomes in the coded genetic algorithm is presented. Artificial Neural Networks (ANNs) have been constructed and trained to accelerate the GA-based search during the optimisation process. The whole package, called GAOPT, is linked to the reactor analysis code PANTHER, which performs fresh fuel loading, burn-up and power shaping calculations for each reactor cycle by imposing station-specific safety and operational constraints. GAOPT has been verified by performing a number of tests applied to the Hinkley Point B and Hartlepool reactors. The test results, giving loading pattern (LP) scenarios obtained from single- and multi-cycle optimisation calculations applied to realistic reactor states of the Hartlepool and Hinkley Point B reactors, are discussed. The results have shown that the GA/ANN algorithms developed can help the fuel engineer to optimise loading patterns in a more efficient and profitable way than currently available for multi-cycle refuelling of AGRs. Research leading to parallel GAs applied to LP optimisation, which can be adapted to present-day LWR fuel management problems, is outlined.
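
    The genetic operators named above (tournament selection, two-point crossover, mutation) are standard and can be sketched on a toy problem. In the sketch below a bit string stands in for a loading-pattern chromosome, and a dummy fitness replaces the PANTHER evaluation; none of this reflects GAOPT's actual encoding.

        # Steady-state GA with tournament selection, two-point crossover
        # and bit-flip mutation on a toy bit-string problem.
        import random

        def fitness(ind):                    # dummy stand-in for PANTHER
            return sum(ind)

        def tournament(pop, k=3):
            return max(random.sample(pop, k), key=fitness)

        def two_point_crossover(a, b):
            i, j = sorted(random.sample(range(len(a)), 2))
            return a[:i] + b[i:j] + a[j:]

        def mutate(ind, rate=0.02):
            return [g ^ 1 if random.random() < rate else g for g in ind]

        pop = [[random.randint(0, 1) for _ in range(40)] for _ in range(30)]
        for _ in range(2000):                # non-generational replacement
            child = mutate(two_point_crossover(tournament(pop),
                                               tournament(pop)))
            worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
            pop[worst] = child
        print("best fitness:", max(fitness(p) for p in pop))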

  13. A corporate ALARA engineering support for all EDF sites. A major improvement: the generic work areas optimisation studies

    Energy Technology Data Exchange (ETDEWEB)

    Quiot, Alain [EDF, SPT, UTO, Le Central, Bat. 420, BP 129, 93162 Noisy-le-Grand Cedex (France); Lebeau, Jacques [Electricite de France, ALARA Project, Site Cap Ampere, 1, place Pleyel, 93282 Saint Denis Cedex (France)

    2004-07-01

    ALARA studies performed by EDF plants are quite simple and empirical. Most often, operating experience feedback and common sense, helped by simple calculations, allow useful and efficient decisions to be reached. This is particularly the case when the exposure situations are not complex: a simple environment with a single source, or one major source. In more complex cases, however, this is not enough to guarantee that truly ALARA solutions are implemented. EDF has therefore decided to use its national corporate engineering as a support for its sites. That engineering support is in charge of using very efficient tools such as PANTHER-RP. The objective of the presentation is to describe the engineering process and tools now available at EDF, to illustrate them with a few case studies and to describe the goals and procedures set up by EDF. (authors)

  14. 14th ACIS/IEEE International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing

    CERN Document Server

    Studies in Computational Intelligence : Volume 492

    2013-01-01

    This edited book presents scientific results of the 14th ACIS/IEEE International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2013), held in Honolulu, Hawaii, USA on July 1-3, 2013. The aim of this conference was to bring together scientists, engineers, computer users, and students to share their experiences and exchange new ideas and research results about all aspects (theory, applications and tools) of computer and information science, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them. The conference organizers selected 17 outstanding papers from those accepted for presentation at the conference.

  15. 15th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing

    CERN Document Server

    2015-01-01

    This edited book presents scientific results of the 15th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2014), held on June 30 - July 2, 2014 in Las Vegas, Nevada, USA. The aim of this conference was to bring together scientists, engineers, computer users, and students to share their experiences and exchange new ideas and research results about all aspects (theory, applications and tools) of computer and information science, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them. The conference organizers selected 13 outstanding papers from those accepted for presentation at the conference.

  16. System analysis and optimisation of a Kalina split-cycle for waste heat recovery on large marine diesel engines

    DEFF Research Database (Denmark)

    Larsen, Ulrik; Nguyen, Tuong-Van; Knudsen, Thomas

    2014-01-01

    Waste heat recovery systems can produce power from heat without using fuel or emitting CO2, and their implementation is therefore becoming increasingly relevant. The Kalina cycle is proposed as an efficient process for this purpose. The main reason for its high efficiency is the non-isothermal phase change characteristics of the ammonia-water working fluid. The present study investigates a unique type of Kalina process called the Split-cycle, applied to the exhaust heat recovery from large marine engines. In the Split-cycle, the working fluid concentration can be changed during the evaporation

  17. Design modification and optimisation of the perfusion system of a tri-axial bioreactor for tissue engineering.

    Science.gov (United States)

    Hussein, Husnah; Williams, David J; Liu, Yang

    2015-07-01

    A systematic design of experiments (DOE) approach was used to optimize the perfusion process of a tri-axial bioreactor designed for translational tissue engineering exploiting mechanical stimuli and mechanotransduction. Four controllable design parameters affecting the perfusion process were identified in a cause-effect diagram as potential improvement opportunities. A screening process was used to separate out the factors that have the largest impact from the insignificant ones. DOE was employed to find the settings of the platen design, return tubing configuration and the elevation difference that minimise the load on the pump and variation in the perfusion process and improve the controllability of the perfusion pressures within the prescribed limits. DOE was very effective for gaining increased knowledge of the perfusion process and optimizing the process for improved functionality. It is hypothesized that the optimized perfusion system will result in improved biological performance and consistency.

  18. Optimising generators

    Energy Technology Data Exchange (ETDEWEB)

    Guerra, E.J.; Garcia, A.O.; Graffigna, F.M.; Verdu, C.A. (IMPSA (Argentina). Generators Div.)

    1994-11-01

    A new computer tool, the ARGEN program, has been developed for dimensioning large hydroelectric generators. This results in better designs, and reduces calculation time for engineers. ARGEN performs dimensional tailoring of salient pole synchronous machines in generators, synchronous condensers, and generator-motors. The operation and uses of ARGEN are explained and its advantages are listed in this article. (UK)

  19. CFD Based Shape Optimization of IC Engine / Optimisation de l'admission et des chambres de combustion des moteurs avec la modélisation 3D

    Directory of Open Access Journals (Sweden)

    Griaznov V.

    2006-12-01

    Intense competition and global regulations in the automotive industry have placed unprecedented demands on the performance, efficiency, and emissions of today's IC engines. The success or failure of a new engine design in meeting these often-conflicting requirements is primarily dictated by its capability to provide minimal restriction for the inducted and exhausted flow, and by its capability to generate strong large-scale in-cylinder motion. The first criterion is directly linked to the power performance of the engine, while the latter has been shown to control the burn rate in IC engines. Enhanced burn rates are favourable to engine efficiency and partial-load performance. CFD-based numerical simulations have recently made it possible to study the development of such engine flows in great detail. However, they offer little guidance for modifying the ports and chamber geometry controlling the flow to meet the desired performance. This paper presents a methodology which combines 3D, steady-state CFD techniques with robust numerical optimization tools to design, rather than just evaluate the performance of, IC engine ports and chambers.

  20. Energy balance of the optimised CVT-hybrid-driveline

    Energy Technology Data Exchange (ETDEWEB)

    Hoehn, Bernd-Robert; Pflaum, Hermann; Lechner, Claus [Forschungsstelle fuer Zahnraeder und Getriebebau, Technische Univ. Muenchen, Garching (Germany)

    2009-07-01

    Funded by the DFG (German Research Foundation) and industry partners such as GM Powertrain Europe, ZF and EPCOS, the Optimised CVT-Hybrid was developed at Technische Universitaet Muenchen in close collaboration with industry and is currently under scientific investigation. Designed as a parallel hybrid vehicle, the Optimised CVT-Hybrid combines a series-production diesel engine with a small electric motor. The core element of the driveline is a two-range continuously variable transmission (i√i transmission), which is based on a chain variator. Through a special shifting process without interruption of the traction force, the ratio range of the chain variator is used twice; a wide transmission-ratio spread is thereby achieved with low complexity. Thus the transmission provides a large pull-away ratio for the small electric motor and a fuel-efficient overdrive ratio for the IC engine. Instead of heavy and space-consuming accumulators, a small, efficient package of double-layer capacitors (UltraCaps) is used for electric energy and power storage. The driveline management is handled by an optimised vehicle controller. Within the scope of the research project, two prototype drivelines were manufactured. One driveline is integrated into an Opel Vectra Caravan and is available for investigations on the roller dynamometer and in actual road traffic. The second hybrid driveline is assembled on the powertrain test rig of the FZG for detailed analysis of system behaviour and fuel consumption. Based on measurements of standardised driving cycles, the system behaviour, the fuel consumption and a detailed energy balance of the Optimised CVT-Hybrid are presented, and the fuel savings in comparison to the series-production vehicle are shown. (orig.)

  1. Parallel Multi-cycle LES of an Optical Pent-roof DISI Engine Under Motored Operating Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Van Dam, Noah; Sjöberg, Magnus; Zeng, Wei; Som, Sibendu

    2017-10-15

    The use of Large-eddy Simulation (LES) has increased due to its ability to resolve the turbulent fluctuations of engine flows and capture the resulting cycle-to-cycle variability. One drawback of LES, however, is the requirement to run multiple engine cycles to obtain the necessary cycle statistics for full validation. The standard method of obtaining these cycles, running a single simulation through many engine cycles sequentially, can take a long time to complete. Recently, a new strategy has been proposed by our research group to reduce the time needed to simulate the many engine cycles by running individual engine-cycle simulations in parallel. With modern large computing systems, this has the potential to reduce the wall-clock time for a full set of simulated engine cycles by up to an order of magnitude. In this paper, the Parallel Perturbation Methodology (PPM) is used to simulate up to 35 engine cycles of an optically accessible, pent-roof Direct-injection Spark-ignition (DISI) engine at two different motored engine operating conditions, one throttled and one un-throttled. Comparisons are made against corresponding sequential-cycle simulations to verify the similarity of results using either methodology. Mean results from the PPM approach are very similar to sequential-cycle results, with less than 0.5% difference in pressure and a magnitude structure index (MSI) of 0.95. Differences in cycle-to-cycle variability (CCV) predictions are larger, but close to the statistical uncertainty in the measurement for the number of cycles simulated. PPM LES results were also compared against experimental data. Mean quantities such as pressure or mean velocities were typically matched to within 5-10%. Pressure CCVs were under-predicted, mostly due to the lack of any perturbations in the pressure boundary conditions between cycles. Velocity CCVs for the simulations had the same average magnitude as the experiments, but the experimental data showed
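
    Whichever way the individual cycles are produced, the CCV statistics are computed the same way over the ensemble; for example, the coefficient of variation (COV) of peak pressure is the ensemble standard deviation divided by the mean. The numbers below are synthetic, for illustration only.

        # COV of peak in-cylinder pressure over an ensemble of cycles.
        import numpy as np

        rng = np.random.default_rng(0)
        peak_p = rng.normal(40.0, 1.2, size=35)   # bar; one value per cycle

        cov = peak_p.std(ddof=1) / peak_p.mean()
        print(f"COV of peak pressure: {100 * cov:.1f}%")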

  2. Laminar dispersion in parallel plate sections of flowing systems used in analytical chemistry and chemical engineering

    NARCIS (Netherlands)

    Kolev, S.D.; Kolev, Spas D.; van der Linden, W.E.

    1991-01-01

    An exact solution of the convective-diffusion equation for fully developed parallel plate laminar flow was obtained. It allows the derivation of theoretical relationships for calculating the Peclet number in the axially dispersed plug flow model and the concentration distribution perpendicular to
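
    For reference, the governing equation in this configuration is commonly written as below, for fully developed laminar flow between plates separated by 2h, with mean velocity U and molecular diffusivity D; this is the standard textbook (Taylor-Aris-type) form, not necessarily the exact notation of the paper.

        \[
          \frac{\partial c}{\partial t}
          + \frac{3}{2}\,U\left(1 - \frac{y^{2}}{h^{2}}\right)
            \frac{\partial c}{\partial x}
          = D\left(\frac{\partial^{2} c}{\partial x^{2}}
                 + \frac{\partial^{2} c}{\partial y^{2}}\right)
        \]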

  3. Computing Infrastructure and Remote, Parallel Data Mining Engine for Virtual Observatories, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — SciberQuest, Inc. proposes to develop a state-of-the-art data mining engine that extends the functionality of Virtual Observatories (VO) from data portal to science...

  4. Computing Infrastructure and Remote, Parallel Data Mining Engine for Virtual Observatories, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a state-of-the-art data mining engine that extends the functionality of Virtual Observatories (VO) from data portal to science analysis...

  5. 16th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing

    CERN Document Server

    2016-01-01

    This edited book presents scientific results of the 16th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2015), which was held on June 1-3, 2015 in Takamatsu, Japan. The aim of this conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users, and students to discuss the numerous fields of computer science, to share their experiences and exchange new ideas and information in a meaningful way, to present research results about all aspects (theory, applications and tools) of computer and information science, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them.

  6. 17th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing

    CERN Document Server

    SNPD 2016

    2016-01-01

    This edited book presents scientific results of the 17th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2016), which was held on May 30 - June 1, 2016 in Shanghai, China. The aim of this conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users, and students to discuss the numerous fields of computer science, to share their experiences and exchange new ideas and information in a meaningful way, to present research results about all aspects (theory, applications and tools) of computer and information science, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them.

  7. Multigrid Implementation of Cellular Automata for Topology Optimisation of Continuum Structures with Design Dependent loads

    NARCIS (Netherlands)

    Zakhama, R.

    2009-01-01

    Topology optimisation of continuum structures has become mature enough to be often applied in industry and continues to attract the attention of researchers and software companies in various engineering fields. Traditionally, most available algorithms for solving topology optimisation problems are

  8. Parallel comparative studies on toxicity of quantum dots synthesized and surface engineered with different methods in vitro and in vivo

    Directory of Open Access Journals (Sweden)

    Liu F

    2017-07-01

    Quantum dots (QDs) have been considered to be promising probes for biosensing, bioimaging, and diagnosis. However, their toxicity issues caused by heavy metals in QDs remain to be addressed, in particular for their in vivo biomedical applications. In this study, a parallel comparative investigation in vitro and in vivo is presented to disclose the impact of synthetic methods and their subsequent surface modifications on the toxicity of QDs. Cellular assays after exposure to QDs were conducted, including cell viability assessment, DNA breakage study at the single-cell level, intracellular reactive oxygen species (ROS) measurement, and transmission electron microscopy, to evaluate their toxicity in vitro. Mice experiments after QD administration, including analysis of hemobiological indices, pharmacokinetics, histological examination, and body weight, were further carried out to evaluate their systemic toxicity in vivo. Results show that QDs fabricated by the thermal decomposition approach in organic phase and encapsulated by an amphiphilic polymer (denoted QDs-1) present the least acute toxicity, compared with QDs surface engineered by glutathione-mediated ligand exchange (denoted QDs-2) and QDs prepared by the coprecipitation approach in aqueous phase with mercaptopropionic acid capping (denoted QDs-3). With the extension of the investigation time of mice respectively injected with QDs, we found that the damage caused by QDs to the organs can be

  9. Computer Based Optimisation Routines

    DEFF Research Database (Denmark)

    Dragsted, Birgitte; Olsen, Flemming Ove

    1996-01-01

    In this paper the need for optimisation methods for the laser cutting process is identified for three different situations. Demands on the optimisation methods for these situations are presented, and one method for each situation is suggested. The adaptation and implementation of the methods...

  10. Optimal Optimisation in Chemometrics

    NARCIS (Netherlands)

    Hageman, J.A.

    2004-01-01

    The use of global optimisation methods is not straightforward, especially for the more difficult optimisation problems. Solutions have to be found for items such as the evaluation function, representation, step function and meta-parameters, before any useful results can be obtained. This thesis aims

  11. Spatial-structural interaction and strain energy structural optimisation

    NARCIS (Netherlands)

    Hofmeyer, H.; Davila Delgado, J.M.; Borrmann, A.; Geyer, P.; Rafiq, Y.; Wilde, de P.

    2012-01-01

    A research engine iteratively transforms spatial designs into structural designs and vice versa. Furthermore, spatial and structural designs are optimised. It is suggested to optimise a structural design by evaluating the strain energy of its elements and by then removing, adding, or changing the

  12. ATLAS software configuration and build tool optimisation

    Science.gov (United States)

    Rybkin, Grigory; Atlas Collaboration

    2014-06-01

    The ATLAS software code base is over 6 million lines, organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers and is used by more than 2500 physicists from over 200 universities and laboratories on 6 continents. To meet the challenge of configuring and building this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and the project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build-time performance, which was optimised through several approaches: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of the CMT commands used for the build; and introduction of package-level build parallelism, i.e., parallelising the build of independent packages. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on optimisation of CMT commands in general, which made them approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CERN VMFS. The use of parallelism, caching and code optimisation significantly (by several times) reduced software build time and environment setup time, and increased the efficiency of
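
    Package-level build parallelism of the kind described here amounts to scheduling a dependency DAG onto a worker pool: a package is built as soon as all of its dependencies are done. The sketch below is a generic illustration of that scheduling idea, not CMT code; the package names and dependencies are invented.

        # Build independent packages concurrently once their deps are done.
        from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
        import time

        deps = {"core": set(), "evt": {"core"}, "reco": {"core", "evt"},
                "analysis": {"reco"}, "tools": {"core"}}

        def build(pkg):
            time.sleep(0.1)                  # stand-in for compilation
            return pkg

        done, running = set(), {}
        with ThreadPoolExecutor(max_workers=4) as pool:
            while len(done) < len(deps):
                for pkg, d in deps.items():  # schedule everything ready
                    if pkg not in done and pkg not in running and d <= done:
                        running[pkg] = pool.submit(build, pkg)
                finished, _ = wait(running.values(),
                                   return_when=FIRST_COMPLETED)
                for f in finished:
                    done.add(f.result())
                    del running[f.result()]
        print("built:", sorted(done))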

  13. Optimised intake stroke analysis for flat and dome head pistons ...

    African Journals Online (AJOL)

    Optimised intake stroke analysis for flat and dome head pistons. ... in understanding the performance characteristics obtained between flat head and dome head pistons in engine design. ...

  14. Shape optimisation and performance analysis of flapping wings

    KAUST Repository

    Ghommem, Mehdi; Collier, Nathan; Niemi, Antti; Calo, Victor M.

    2012-01-01

    optimised shapes produce efficient flapping flights, the wake pattern and its vorticity strength are examined. The work described in this paper should facilitate better guidance for the shape design of engineered flying systems.

  15. Optimised Renormalisation Group Flows

    CERN Document Server

    Litim, Daniel F

    2001-01-01

    Exact renormalisation group (ERG) flows interpolate between a microscopic or classical theory and the corresponding macroscopic or quantum effective theory. For most problems of physical interest, the efficiency of the ERG is constrained due to unavoidable approximations. Approximate solutions of ERG flows depend spuriously on the regularisation scheme, which is determined by a regulator function. This is similar to the spurious dependence on the ultraviolet regularisation known from perturbative QCD. Providing good control over approximated ERG flows is at the root of reliable physical predictions. We explain why the convergence of approximate solutions towards the physical theory is optimised by appropriate choices of the regulator. We study specific optimised regulators for bosonic and fermionic fields and compare the optimised ERG flows with generic ones. This is done up to second order in the derivative expansion at both vanishing and non-vanishing temperature. An optimised flow for a ``proper-time ren...

  16. Optimising Magnetostatic Assemblies

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Smith, Anders

    theorem. This theorem formulates an energy equivalence principle with several implications concerning the optimisation of objective functionals that are linear with respect to the magnetic field. Linear functionals represent different optimisation goals, e.g. maximising a certain component of the field...... approached employing a heuristic algorithm, which led to new design concepts. Some of the procedures developed for linear objective functionals have been extended to non-linear objectives, by employing iterative techniques. Even though most of the optimality results discussed in this work have been derived

  17. Modified cuckoo search: A new gradient free optimisation algorithm

    International Nuclear Information System (INIS)

    Walton, S.; Hassan, O.; Morgan, K.; Brown, M.R.

    2011-01-01

    Highlights: → Modified cuckoo search (MCS) is a new gradient free optimisation algorithm. → MCS shows a high convergence rate, able to outperform other optimisers. → MCS is particularly strong at high dimension objective functions. → MCS performs well when applied to engineering problems. - Abstract: A new robust optimisation algorithm, which can be regarded as a modification of the recently developed cuckoo search, is presented. The modification involves the addition of information exchange between the top eggs, or the best solutions. Standard optimisation benchmarking functions are used to test the effects of these modifications and it is demonstrated that, in most cases, the modified cuckoo search performs as well as, or better than, the standard cuckoo search, a particle swarm optimiser, and a differential evolution strategy. In particular the modified cuckoo search shows a high convergence rate to the true global minimum even at high numbers of dimensions.
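    To make the modification concrete (a loose sketch under assumed parameters, not the authors' implementation), the information exchange between top eggs can be realised as a step along the line joining two randomly chosen top solutions, on top of a heavy-tailed random walk:

        # Sketch of modified cuckoo search: standard Levy-style steps plus
        # information exchange between the best ("top egg") solutions.
        import numpy as np

        rng = np.random.default_rng(0)

        def sphere(x):                       # benchmark objective
            return float(np.sum(x * x))

        def mcs(f, dim=10, nests=25, iters=200, top_frac=0.25):
            X = rng.uniform(-5, 5, (nests, dim))
            F = np.array([f(x) for x in X])
            for _ in range(iters):
                order = np.argsort(F)        # sort nests, best first
                X, F = X[order], F[order]
                n_top = max(2, int(top_frac * nests))
                # heavy-tailed random walk for every nest
                cand = X + 0.01 * rng.standard_cauchy((nests, dim))
                # information exchange: move one top egg towards another
                i, j = rng.choice(n_top, size=2, replace=False)
                cand[i] = X[i] + rng.random() * (X[j] - X[i])
                for k in range(nests):       # greedy replacement
                    fk = f(cand[k])
                    if fk < F[k]:
                        X[k], F[k] = cand[k], fk
                # abandon a fraction of the (previously) worst nests
                n_bad = nests // 4
                X[-n_bad:] = rng.uniform(-5, 5, (n_bad, dim))
                F[-n_bad:] = [f(x) for x in X[-n_bad:]]
            return X[np.argmin(F)], float(F.min())

        best_x, best_f = mcs(sphere)
        print(best_f)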

  18. Acoustic Resonator Optimisation for Airborne Particle Manipulation

    Science.gov (United States)

    Devendran, Citsabehsan; Billson, Duncan R.; Hutchins, David A.; Alan, Tuncay; Neild, Adrian

    Advances in micro-electromechanical systems (MEMS) technology and biomedical research necessitate micro-machined manipulators to capture, handle and position delicate micron-sized particles. To this end, a parallel plate acoustic resonator system has been investigated for the purposes of manipulation and entrapment of micron-sized particles in air. Numerical and finite element modelling was performed to optimise the design of the layered acoustic resonator. To obtain an optimised resonator design, careful consideration of the effects of thickness and material properties is required. Furthermore, the effect of acoustic attenuation, which is frequency dependent, is also considered within this study, leading to an optimum operational frequency range. Finally, experimental results demonstrated good levitation and capture of particles of various properties and sizes, down to 14.8 μm.

  19. Tensor contraction engine: Abstraction and automated parallel implementation of configuration-interaction, coupled-cluster, and many-body perturbation theories

    International Nuclear Information System (INIS)

    Hirata, So

    2003-01-01

    We develop a symbolic manipulation program and program generator (Tensor Contraction Engine or TCE) that automatically derives the working equations of a well-defined model of second-quantized many-electron theories and synthesizes efficient parallel computer programs on the basis of these equations. Provided an ansatz of a many-electron theory model, TCE performs valid contractions of creation and annihilation operators according to Wick's theorem, consolidates identical terms, and reduces the expressions into the form of multiple tensor contractions acted on by permutation operators. Subsequently, it determines the binary contraction order for each multiple tensor contraction with the minimal operation and memory cost, factorizes common binary contractions (defines intermediate tensors), and identifies reusable intermediates. The resulting ordered list of binary tensor contractions, additions, and index permutations is translated into an optimized program that is combined with the NWChem and UTChem computational chemistry software packages. The programs synthesized by TCE take advantage of spin symmetry, Abelian point-group symmetry, and index permutation symmetry at every stage of calculations to minimize the number of arithmetic operations and storage requirement, adjust the peak local memory usage by index range tiling, and support parallel I/O interfaces and dynamic load balancing for parallel executions. We demonstrate the utility of TCE through automatic derivation and implementation of parallel programs for various models of configuration-interaction theory (CISD, CISDT, CISDTQ), many-body perturbation theory [MBPT(2), MBPT(3), MBPT(4)], and coupled-cluster theory (LCCD, CCD, LCCSD, CCSD, QCISD, CCSDT, and CCSDTQ).
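    The cost-driven choice of binary contraction order that TCE automates can be indicated with a small, self-contained search (an illustrative sketch only; the common index range and the example contraction are assumed):

        # Illustrative sketch: pick the pairwise contraction order of a
        # multiple tensor contraction with minimal naive flop count.
        from itertools import combinations

        DIM = 50  # assumed common index range

        def best_order(tensors):
            """tensors: list of frozensets of index labels."""
            if len(tensors) == 1:
                return 0, []
            best = None
            for i, j in combinations(range(len(tensors)), 2):
                a, b = tensors[i], tensors[j]
                rest = [t for k, t in enumerate(tensors) if k not in (i, j)]
                ext = set().union(*rest) if rest else set()
                keep = (a ^ b) | ((a & b) & ext)  # indices still needed later
                step = DIM ** len(a | b)          # flops for this contraction
                sub, plan = best_order(rest + [frozenset(keep)])
                entry = f"{''.join(sorted(a))}*{''.join(sorted(b))}->{''.join(sorted(keep))}"
                if best is None or step + sub < best[0]:
                    best = (step + sub, [entry] + plan)
            return best

        # Hypothetical contraction T1_ia T2_ab T3_bj -> R_ij
        flops, plan = best_order([frozenset("ia"), frozenset("ab"), frozenset("bj")])
        print(flops, plan)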

  20. Intelligent Support for a Computer Aided Design Optimisation Cycle

    OpenAIRE

    B. Dolšak; M. Novak; J. Kaljun

    2006-01-01

    It is becoming more and more evident that adding intelligence to existing computer aids, such as computer aided design systems, can lead to significant improvements in the effective and reliable performance of various engineering tasks, including design optimisation. This paper presents three different intelligent modules to be applied within a computer aided design optimisation cycle to enable more intelligent and less experience-dependent design performance.

  1. A supportive architecture for CFD-based design optimisation

    Science.gov (United States)

    Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong

    2014-03-01

    Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO requiring the analysis of fluid dynamics raises a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their applications in various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, comparable to analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under the given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward. It is a challenge to solve a CFD-based design problem, which usually has high dimensionality and multiple objectives and constraints. It is desirable to have an integrated architecture for CFD-based design optimisation. However, our review of existing works has found that very few researchers have studied assistive tools to facilitate CFD-based design optimisation. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing techniques and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or the data level to fully utilise the capabilities of different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided, and the result has shown that the proposed architecture

  2. Optimisation of radiation protection

    International Nuclear Information System (INIS)

    1988-01-01

    Optimisation of radiation protection is one of the key elements in the current radiation protection philosophy. The present system of dose limitation was issued in 1977 by the International Commission on Radiological Protection (ICRP) and includes, in addition to the requirements of justification of practices and limitation of individual doses, the requirement that all exposures be kept as low as is reasonably achievable, taking social and economic factors into account. This last principle is usually referred to as optimisation of radiation protection, or the ALARA principle. The NEA Committee on Radiation Protection and Public Health (CRPPH) organised an ad hoc meeting, in liaison with the NEA committees on the safety of nuclear installations and radioactive waste management. Separate abstracts were prepared for individual papers presented at the meeting

  3. Optimisation by hierarchical search

    Science.gov (United States)

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
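    A minimal sketch of the idea (generic block-coordinate style with an invented toy cost function; not the authors' algorithm) optimises one group of variables at a time and sweeps over the groups:

        # Sketch: hierarchical/groupwise optimisation of a coupled cost function.
        import random

        def cost(x):                         # toy cost with coupling between variables
            return sum((xi - 1) ** 2 for xi in x) + 0.5 * sum(
                abs(x[i] - x[i - 1]) for i in range(1, len(x)))

        def optimise_groups(x, group_size=4, sweeps=50, trials=200):
            x = list(x)
            for _ in range(sweeps):
                for start in range(0, len(x), group_size):
                    idx = range(start, min(start + group_size, len(x)))
                    best = cost(x)
                    for _ in range(trials):  # local random search inside one group
                        cand = list(x)
                        for i in idx:
                            cand[i] += random.gauss(0, 0.1)
                        c = cost(cand)
                        if c < best:
                            x, best = cand, c
            return x, cost(x)

        random.seed(1)
        x0 = [random.uniform(-3, 3) for _ in range(12)]
        print(round(optimise_groups(x0)[1], 4))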

  4. Topology Optimisation for Coupled Convection Problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Andreasen, Casper Schousboe; Aage, Niels

    stabilised finite elements implemented in a parallel multiphysics analysis and optimisation framework DFEM [1], developed and maintained in house. Focus is put on control of the temperature field within the solid structure and the problems can therefore be seen as conjugate heat transfer problems, where heat...... conduction governs in the solid parts of the design domain and couples to convection-dominated heat transfer to a surrounding fluid. Both loosely coupled and tightly coupled problems are considered. The loosely coupled problems are convection-diffusion problems, based on an advective velocity field from...

  5. Advanced optimisation - coal fired power plant operations

    Energy Technology Data Exchange (ETDEWEB)

    Turney, D.M.; Mayes, I. [E.ON UK, Nottingham (United Kingdom)

    2005-03-01

    The purpose of this unit optimisation project is to develop an integrated approach to unit optimisation and an overall optimiser that is able to resolve any conflicts between the individual optimisers. The individual optimisers considered during this project are: an on-line thermal efficiency package, the GNOCIS boiler optimiser, the GNOCIS steam-side optimiser, ESP optimisation, and an intelligent sootblowing system. 6 refs., 7 figs., 3 tabs.

  6. Multidisciplinary Design Optimisation (MDO) Methods: Their Synergy with Computer Technology in the Design Process

    Science.gov (United States)

    Sobieszczanski-Sobieski, Jaroslaw

    1999-01-01

    The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behaviour by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.

  7. Optimisation in radiotherapy II: Programmed and inversion optimisation algorithms

    International Nuclear Information System (INIS)

    Ebert, M.

    1997-01-01

    This is the second article in a three-part examination of optimisation in radiotherapy. The previous article established the bases of optimisation in radiotherapy and the formulation of the optimisation problem. This paper outlines several algorithms that have been used in radiotherapy for searching for the best irradiation strategy within the full set of possible strategies. Two principal classes of algorithm are considered - those associated with mathematical programming, which employ specific search techniques, linear-programming-type searches or artificial intelligence - and those which seek to perform a numerical inversion of the optimisation problem, finishing with deterministic iterative inversion. (author)

  8. Optimisation of monochrome images

    International Nuclear Information System (INIS)

    Potter, R.

    1983-01-01

    Gamma cameras with modern imaging systems usually digitize the signals to allow storage and processing of the image in a computer. Although such computer systems are widely used for the extraction of quantitative uptake estimates and the analysis of time variant data, the vast majority of nuclear medicine images is still interpreted on the basis of an observer's visual assessment of a photographic hardcopy image. The optimisation of hardcopy devices is therefore vital, and factors such as resolution, uniformity, noise, grey scales and display matrices are discussed. Once optimum display parameters have been determined, routine procedures for quality control need to be established; suitable procedures are discussed. (U.K.)

  9. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    Science.gov (United States)

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-09

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state the dependencies of each constituent part, algorithms only need to be described on a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
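    The dataflow idea can be sketched in a few lines (a toy executor, not Copernicus itself; the task names and the averaging step are invented): anything whose inputs are ready runs in parallel, so the parallelism follows directly from the declared dependencies:

        # Toy dataflow executor: run tasks as soon as their inputs exist.
        from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

        def run_dataflow(tasks, deps):
            """tasks: name -> callable(*dep_results); deps: name -> [names]."""
            results, running = {}, {}
            with ThreadPoolExecutor() as pool:
                while len(results) < len(tasks):
                    for name in tasks:
                        if (name not in results and name not in running
                                and all(d in results for d in deps[name])):
                            args = [results[d] for d in deps[name]]
                            running[name] = pool.submit(tasks[name], *args)
                    done, _ = wait(running.values(), return_when=FIRST_COMPLETED)
                    for name, fut in list(running.items()):
                        if fut in done:
                            results[name] = fut.result()
                            del running[name]
            return results

        # Hypothetical workflow: three independent simulations, one analysis.
        tasks = {
            "sim1": lambda: 1.0, "sim2": lambda: 2.0, "sim3": lambda: 4.0,
            "analyse": lambda a, b, c: (a + b + c) / 3,
        }
        deps = {"sim1": [], "sim2": [], "sim3": [],
                "analyse": ["sim1", "sim2", "sim3"]}
        print(run_dataflow(tasks, deps)["analyse"])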

  10. Internal combustion engine control for series hybrid electric vehicles by parallel and distributed genetic programming/multiobjective genetic algorithms

    Science.gov (United States)

    Gladwin, D.; Stewart, P.; Stewart, J.

    2011-02-01

    This article addresses the problem of maintaining a stable rectified DC output from the three-phase AC generator in a series-hybrid vehicle powertrain. The series-hybrid prime power source generally comprises an internal combustion (IC) engine driving a three-phase permanent magnet generator whose output is rectified to DC. A recent development has been to control the engine/generator combination by an electronically actuated throttle. This system can be represented as a nonlinear system with significant time delay. Previously, voltage control of the generator output has been achieved by model predictive methods such as the Smith Predictor. These methods rely on the incorporation of an accurate system model and time delay into the control algorithm, with a consequent increase in computational complexity in the real-time controller, and as a necessity rely to some extent on the accuracy of the models. Two complementary performance objectives exist for the control system. Firstly, to maintain the IC engine at its optimal operating point, and secondly, to supply a stable DC supply to the traction drive inverters. Achievement of these goals minimises the transient energy storage requirements at the DC link, with a consequent reduction in both weight and cost. These objectives imply constant velocity operation of the IC engine under external load disturbances and changes in both operating conditions and vehicle speed set-points. In order to achieve these objectives, and reduce the complexity of implementation, in this article a controller is designed by the use of Genetic Programming methods in the Simulink modelling environment, with the aim of obtaining a relatively simple controller for the time-delay system which does not rely on the implementation of real-time system models or time delay approximations in the controller. A methodology is presented to utilise the myriad of existing control blocks in the Simulink libraries to automatically evolve optimal control

  11. Stage-by-Stage and Parallel Flow Path Compressor Modeling for a Variable Cycle Engine, NASA Advanced Air Vehicles Program - Commercial Supersonic Technology Project - AeroServoElasticity

    Science.gov (United States)

    Kopasakis, George; Connolly, Joseph W.; Cheng, Larry

    2015-01-01

    This paper covers the development of stage-by-stage and parallel flow path compressor modeling approaches for a Variable Cycle Engine. The stage-by-stage compressor modeling approach is an extension of a technique for lumped volume dynamics and performance characteristic modeling. It was developed to improve the accuracy of axial compressor dynamics over lumped volume dynamics modeling. The stage-by-stage compressor model presented here is formulated into a parallel flow path model that includes both axial and rotational dynamics. This is done to enable the study of compressor and propulsion system dynamic performance under flow distortion conditions. The approaches utilized here are generic and should be applicable to the modeling of any axial flow compressor design in accurate time domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.

  12. Optimisation of occupational exposure

    International Nuclear Information System (INIS)

    Webb, G.A.M.; Fleishman, A.B.

    1982-01-01

    The general concept of the optimisation of protection of the public is briefly described. Some ideas being developed for extending the cost benefit framework to include radiation workers with full implementation of the ALARA criterion are described. The role of cost benefit analysis in radiological protection and the valuation of health detriment including the derivation of monetary values and practical implications are discussed. Cost benefit analysis can lay out for inspection the doses, the associated health detriment costs and the costs of protection for alternative courses of action. However it is emphasised that the cost benefit process is an input to decisions on what is 'as low as reasonably achievable' and not a prescription for making them. (U.K.)

  13. Standardised approach to optimisation

    International Nuclear Information System (INIS)

    Warren-Forward, Helen M.; Beckhaus, Ronald

    2004-01-01

    Optimisation of radiographic images is said to have been obtained if the patient has received an acceptable level of dose and the image is of diagnostic value. In the near future, it will probably be recommended that radiographers measure patient doses and compare them to reference levels. The aim of this paper is to describe a standardised approach to the optimisation of radiographic examinations in a diagnostic imaging department. A three-step approach is outlined with specific examples for some common examinations (chest, abdomen, pelvis and lumbar spine series). Step One: patient doses are calculated. Step Two: doses are compared to existing reference levels and the technique used is compared to image quality criteria. Step Three: appropriate action is taken if doses are above the reference level. Results: average entrance surface doses for two rooms were as follows: AP Abdomen (6.3 mGy and 3.4 mGy); AP Lumbar Spine (6.4 mGy and 4.1 mGy); AP Pelvis (4.8 mGy and 2.6 mGy); and PA Chest (0.19 mGy and 0.20 mGy). Comparison with the Commission of the European Communities (CEC) recommended techniques identified large differences in the applied potential. The kVp values in this study were significantly lower (by up to 10 kVp) than the CEC recommendations. The results of this study have indicated that there is a need to monitor the radiation doses received by patients undergoing diagnostic radiography examinations. Not only has the assessment allowed valuable comparison with International Diagnostic Reference Levels and Radiography Good Practice, but it has also demonstrated large variations in the mean doses delivered from different rooms of the same radiology department. Following the simple three-step approach advocated in this paper should either provide evidence that departments are practising the ALARA principle or assist in making suitable changes to current practice. Copyright (2004) Australian Institute of Radiography
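    The comparison step lends itself to a trivial check (a minimal sketch; the reference levels below are invented placeholders, not recommended values, while the measured doses are those of the first room above):

        # Sketch of Step Two: compare measured doses against reference levels.
        REFERENCE_mGy = {"AP Abdomen": 10.0, "AP Lumbar Spine": 10.0,
                         "AP Pelvis": 10.0, "PA Chest": 0.3}   # assumed examples

        measured_mGy = {"AP Abdomen": 6.3, "AP Lumbar Spine": 6.4,
                        "AP Pelvis": 4.8, "PA Chest": 0.19}

        for exam, dose in measured_mGy.items():
            ref = REFERENCE_mGy[exam]
            status = "review technique" if dose > ref else "within reference level"
            print(f"{exam}: {dose} mGy vs {ref} mGy -> {status}")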

  14. An optimised portfolio management model, incorporating best practices

    OpenAIRE

    2015-01-01

    M.Ing. (Engineering Management) Driving sustainability, optimising return on investments and cultivating a competitive market advantage are imperative for organisational success and growth. In order to achieve the business objectives and value proposition, effective management strategies must be efficiently implemented, monitored and controlled. Failure to do so ultimately results in financial loss due to increased capital and operational expenditure, schedule slippages, substandard deliv...

  15. Topology optimised wavelength dependent splitters

    DEFF Research Database (Denmark)

    Hede, K. K.; Burgos Leon, J.; Frandsen, Lars Hagedorn

    A photonic crystal wavelength dependent splitter has been constructed by utilising topology optimisation [1]. The splitter has been fabricated in a silicon-on-insulator material (Fig. 1). The topology optimised wavelength dependent splitter demonstrates promising 3D FDTD simulation results.... This complex photonic crystal structure is very sensitive to small fabrication variations from the expected topology optimised design. A wavelength dependent splitter is an important basic building block for high-performance nanophotonic circuits. [1] J. S. Jensen and O. Sigmund, Appl. Phys. Lett. 84, 2022...

  16. Parallel auto-correlative statistics with VTK.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
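    For reference, the statistic itself is small enough to sketch inline (a standalone lag-k sample autocorrelation in plain numpy; the VTK engines compute this over distributed data, which this illustration does not attempt):

        # Lag-k sample autocorrelation of a 1-D series.
        import numpy as np

        def autocorrelation(x, k):
            x = np.asarray(x, dtype=float)
            xm = x - x.mean()                       # centre the series
            denom = float(np.dot(xm, xm))           # lag-0 normalisation
            return float(np.dot(xm[:-k] if k else xm, xm[k:])) / denom

        rng = np.random.default_rng(0)
        series = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
        print([round(autocorrelation(series, k), 3) for k in (0, 1, 10)])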

  17. Optimisation of the Laser Cutting Process

    DEFF Research Database (Denmark)

    Dragsted, Birgitte; Olsen, Flemming Ove

    1996-01-01

    The problem in optimising the laser cutting process is outlined. Basic optimisation criteria and principles for adapting an optimisation method, the simplex method, are presented. The results of implementing a response function in the optimisation are discussed with respect to the quality as well...

  18. Turbulence optimisation in stellarator experiments

    Energy Technology Data Exchange (ETDEWEB)

    Proll, Josefine H.E. [Max-Planck/Princeton Center for Plasma Physics (Germany); Max-Planck-Institut fuer Plasmaphysik, Wendelsteinstr. 1, 17491 Greifswald (Germany); Faber, Benjamin J. [HSX Plasma Laboratory, University of Wisconsin-Madison, Madison, WI 53706 (United States); Helander, Per; Xanthopoulos, Pavlos [Max-Planck/Princeton Center for Plasma Physics (Germany); Lazerson, Samuel A.; Mynick, Harry E. [Plasma Physics Laboratory, Princeton University, P.O. Box 451 Princeton, New Jersey 08543-0451 (United States)

    2015-05-01

    Stellarators, the twisted siblings of the axisymmetric fusion experiments called tokamaks, have historically suffered from insufficient confinement of the plasma heat compared with tokamaks and were therefore considered to be less promising candidates for a fusion reactor. This has changed, however, with the advent of stellarators in which the laminar transport is reduced to levels below that of tokamaks by shaping the magnetic field accordingly. As in tokamaks, the turbulent transport now remains the dominant transport channel. Recent analytical theory suggests that the large configuration space of stellarators allows for an additional optimisation of the magnetic field to also reduce the turbulent transport. In this talk, the idea behind the turbulence optimisation is explained. We also present how an optimised equilibrium is obtained and how it might differ from the equilibrium field of an already existing device, and we compare experimental turbulence measurements in different configurations of the HSX stellarator in order to test the optimisation procedure.

  19. Optimisation of load control

    International Nuclear Information System (INIS)

    Koponen, P.

    1998-01-01

    Electricity cannot be stored in large quantities. That is why the electricity supply and consumption are always almost equal in large power supply systems. If this balance were disturbed beyond stability, the system or a part of it would collapse until a new stable equilibrium is reached. The balance between supply and consumption is mainly maintained by controlling the power production, but also the electricity consumption or, in other words, the load is controlled. Controlling the load of the power supply system is important, if easily controllable power production capacity is limited. Temporary shortage of capacity causes high peaks in the energy price in the electricity market. Load control either reduces the electricity consumption during peak consumption and peak price or moves electricity consumption to some other time. The project Optimisation of Load Control is a part of the EDISON research program for distribution automation. The following areas were studied: Optimization of space heating and ventilation, when electricity price is time variable, load control model in power purchase optimization, optimization of direct load control sequences, interaction between load control optimization and power purchase optimization, literature on load control, optimization methods and field tests and response models of direct load control and the effects of the electricity market deregulation on load control. An overview of the main results is given in this chapter

  1. SPS batch spacing optimisation

    CERN Document Server

    Velotti, F M; Carlier, E; Goddard, B; Kain, V; Kotzian, G

    2017-01-01

    Until 2015, the LHC filling schemes used the batch spacing as specified in the LHC design report. The maximum number of bunches injectable in the LHC directly depends on the batch spacing at injection in the SPS and hence on the MKP rise time. As part of the LHC Injectors Upgrade project for LHC heavy ions, a reduction of the batch spacing is needed. In this direction, studies to approach the MKP design rise time of 150 ns (2-98%) have been carried out. These measurements gave clear indications that such optimisation, and beyond, could be done also for higher injection momentum beams, where the additional slower MKP (MKP-L) is needed. After the successful results from the 2015 SPS batch spacing optimisation for the Pb-Pb run [1], the same concept was thought to be used also for proton beams. In fact, thanks to the SPS transverse feedback, it was already observed that lower batch spacing than the design one (225 ns) could be achieved. For the 2016 p-Pb run, a batch spacing of 200 ns for the proton beam with 100 ns bunch spacing was reque...

  2. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  3. Topology optimisation of passive coolers for light-emitting diode lamps

    DEFF Research Database (Denmark)

    Alexandersen, Joe

    2015-01-01

    This work applies topology optimisation to the design of passive coolers for light-emitting diode (LED) lamps. The heat sinks are cooled by the natural convection currents arising from the temperature difference between the LED lamp and the surrounding air. A large scale parallel computational....... The optimisation results show interesting features that are currently being incorporated into industrial designs for enhanced passive cooling abilities....

  4. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  5. Vaccine strategies: Optimising outcomes.

    Science.gov (United States)

    Hardt, Karin; Bonanni, Paolo; King, Susan; Santos, Jose Ignacio; El-Hodhod, Mostafa; Zimet, Gregory D; Preiss, Scott

    2016-12-20

    factors that encourage success, which often include strong support from government and healthcare organisations, as well as tailored, culturally-appropriate local approaches to optimise outcomes. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Analytical Solutions and Optimization of the Exo-Irreversible Schmidt Cycle with Imperfect Regeneration for the 3 Classical Types of Stirling Engine

    Directory of Open Access Journals (Sweden)

    Rochelle P.

    2011-11-01

    Full Text Available The “old” Stirling engine is one of the most promising multi-heat-source engines for the future. Simple and realistic basic models are useful to aid in optimizing a preliminary engine configuration. In addition to new proper analytical solutions for regeneration that dramatically reduce computing time, this study of the Schmidt-Stirling engine cycle is carried out from an engineer-friendly viewpoint introducing exo-irreversible heat transfers. The reference parameters are the technological or physical constraints: the maximum pressure, the maximum volume, the extreme wall temperatures and the overall thermal conductance, while the adjustable optimization variables are the volumetric compression ratio, the dead volume ratios, the volume phase-lag, the gas characteristics, the hot-to-cold conductance ratio and the regenerator efficiency. New normalized analytical expressions for the operating characteristics of the engine - power, work, efficiency, mean pressure, maximum speed of revolution - are derived, and some dimensionless and dimensional reference numbers are presented, as well as power optimization examples with respect to non-dimensional speed, volume ratio and volume phase-lag angle.
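    The closed-form character of the Schmidt analysis that the paper builds on can be indicated by the classic textbook pressure relation (a hedged sketch in standard notation, not the paper's exo-irreversible extension): with sinusoidal volume variations and isothermal spaces at T_c and T_h, mass conservation of the ideal gas gives

        % Classic Schmidt relations (textbook form; notation assumed)
        \[
          p(\theta) \;=\;
          \frac{m R}{\frac{V_c(\theta)}{T_c} + \frac{V_r}{T_r} + \frac{V_e(\theta)}{T_h}}
          \;=\; p_{\mathrm{mean}}\,\frac{\sqrt{1-\delta^{2}}}{1-\delta\cos(\theta-\varphi)},
          \qquad 0 \le \delta < 1,
        \]

    where the modulus δ and the phase φ collect the volume amplitudes, the temperature ratio and the volume phase-lag of the particular engine type.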

  7. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  8. Multi-Optimisation Consensus Clustering

    Science.gov (United States)

    Li, Jian; Swift, Stephen; Liu, Xiaohui

    Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
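    The consensus step that CC-style methods share is easy to sketch (illustrative only; MOCC's agreement-separation criterion and multi-optimisation framework are not reproduced here): ensemble labelings vote on whether each pair of points belongs together:

        # Co-association matrix: the raw material consensus clustering works on.
        import numpy as np

        def coassociation(labelings):
            """labelings: list of 1-D arrays of cluster ids for the same n points."""
            L = np.asarray(labelings)       # shape (n_runs, n_points)
            n = L.shape[1]
            A = np.zeros((n, n))
            for run in L:
                A += (run[:, None] == run[None, :])  # 1 where a run agrees
            return A / len(L)               # fraction of runs agreeing

        runs = [np.array([0, 0, 1, 1, 2]),
                np.array([1, 1, 0, 0, 0]),
                np.array([0, 0, 0, 1, 1])]
        print(coassociation(runs).round(2))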

  9. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  10. Isogeometric Analysis and Shape Optimisation

    DEFF Research Database (Denmark)

    Gravesen, Jens; Evgrafov, Anton; Gersborg, Allan Roulund

    of the whole domain. So in every optimisation cycle we need to extend a parametrisation of the boundary of a domain to the whole domain. It has to be fast in order not to slow the optimisation down, but it also has to be robust and give a parametrisation of high quality. These are conflicting requirements, so we...... will explain how the validity of a parametrisation can be checked and we will describe various ways to parametrise a domain. We will in particular study the Winslow functional, which turns out to have some desirable properties. Other problems we touch upon include clustering of boundary control points (design...

  11. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts is shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
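    For the SENSE case mentioned above, the core reconstruction can be stated compactly (standard textbook form; the notation is assumed, with Ψ denoting the receiver noise covariance):

        \[
          a \;=\; S\rho + n, \qquad
          \hat{\rho} \;=\; \left(S^{H}\Psi^{-1}S\right)^{-1} S^{H}\Psi^{-1} a,
        \]

    where each aliased pixel vector a (one entry per coil) mixes R true pixel values ρ through the coil sensitivity matrix S; the noise amplification of this unfolding is the g-factor referred to above.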

  12. Design optimisation of a flywheel hybrid vehicle

    Energy Technology Data Exchange (ETDEWEB)

    Kok, D.B.

    1999-11-04

    This thesis describes the design optimisation of a flywheel hybrid vehicle with respect to fuel consumption and exhaust gas emissions. The driveline of this passenger car uses two power sources: a small spark ignition internal combustion engine with three-way catalyst, and a high-speed flywheel system for kinetic energy storage. A custom-made continuously variable transmission (CVT) with so-called i² control transports energy between these power sources and the vehicle wheels. The driveline includes auxiliary systems for hydraulic, vacuum and electric purposes. In this fully mechanical driveline, parasitic energy losses determine the vehicle's fuel saving potential to a large extent. Practicable energy loss models have been derived to quantify friction losses in bearings, gearwheels, the CVT, clutches and dynamic seals. In addition, the aerodynamic drag in the flywheel system and the power consumption of auxiliaries are charted. With the energy loss models available, a calculation procedure is introduced to optimise the flywheel as a subsystem in which the rotor geometry, the safety containment, and the vacuum system are designed for minimum energy use within the context of automotive applications. A first prototype of the flywheel system was tested experimentally and subsequently redesigned to improve rotordynamics and safety aspects. Coast-down experiments with the improved version show that the energy losses have been lowered significantly. The use of a kinetic energy storage device enables the uncoupling of vehicle wheel power and engine power. Therefore, the engine can be smaller and it can be chosen to operate in its region of best efficiency in start-stop mode. On a test-rig, the measured engine fuel consumption was reduced by more than 30 percent when the engine is intermittently restarted with the aid of the flywheel system. Although the start-stop mode proves to be advantageous for fuel consumption, exhaust gas emissions increase temporarily

  13. Cogeneration technologies, optimisation and implementation

    CERN Document Server

    Frangopoulos, Christos A

    2017-01-01

    Cogeneration refers to the use of a power station to deliver two or more useful forms of energy, for example, to generate electricity and heat at the same time. This book provides an integrated treatment of cogeneration, including a tour of the available technologies and their features, and how these systems can be analysed and optimised.

  14. For Time-Continuous Optimisation

    DEFF Research Database (Denmark)

    Heinrich, Mary Katherine; Ayres, Phil

    2016-01-01

    Strategies for optimisation in design normatively assume an artefact end-point, disallowing continuous architecture that engages living systems, dynamic behaviour, and complex systems. In our Flora Robotica investigations of symbiotic plant-robot bio-hybrids, we require computational tools...

  15. Optimisation in X-ray and Molecular Imaging 2015

    International Nuclear Information System (INIS)

    Baath, Magnus; Hoeschen, Christoph; Mattsson, Soeren; Mansson, Lars Gunnar

    2016-01-01

    This issue of Radiation Protection Dosimetry is based on contributions to Optimisation in X-ray and Molecular Imaging 2015 - the 4th Malmoe Conference on Medical Imaging (OXMI 2015). The conference was jointly organised by members of former and current research projects supported by the European Commission EURATOM Radiation Protection Research Programme, in cooperation with the Swedish Society for Radiation Physics. The conference brought together over 150 researchers and other professionals from hospitals, universities and industries with interests in different aspects of the optimisation of medical imaging. More than 100 presentations were given at this international gathering of medical physicists, radiologists, engineers, technicians, nurses and educational researchers. Additionally, invited talks were offered by world-renowned experts on radiation protection, spectral imaging and medical image perception, thus covering several important aspects of the generation and interpretation of medical images. The conference consisted of 13 oral sessions and a poster session, with topics, as reflected by the conference title, connected by their focus on the optimisation of the use of ionising radiation in medical imaging. The conference included technology-specific topics such as computed tomography and tomosynthesis, but also generic issues of interest for the optimisation of all medical imaging, such as image perception and quality assurance. Radiation protection was covered by e.g. sessions on patient dose benchmarking and occupational exposure. Technically advanced topics such as modelling, Monte Carlo simulation, reconstruction, classification, and segmentation were seen taking advantage of recent developments in hardware and software, showing that the optimisation community is at the forefront of technology and adapts well to new requirements. These peer-reviewed proceedings, representing a continuation of a series of selected reports from meetings in the field of medical imaging

  16. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
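    The parallelisable components the paper identifies are the independent function and gradient evaluations; the DFP update itself is compact (a plain numpy sketch with a fixed step standing in for a proper line search; the step size and iteration counts are arbitrary):

        # Sketch of the Davidon-Fletcher-Powell inverse-Hessian update.
        import numpy as np

        def grad(f, x, h=1e-6):
            # central differences: each component is independent, which is
            # the part that maps naturally onto parallel processors
            return np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                             for e in np.eye(len(x))])

        def dfp(f, x0, iters=100, alpha=0.05):
            x = np.asarray(x0, dtype=float)
            H = np.eye(len(x))           # inverse-Hessian approximation
            g = grad(f, x)
            for _ in range(iters):
                s = -alpha * (H @ g)     # fixed step in the quasi-Newton direction
                x_new = x + s
                g_new = grad(f, x_new)
                y = g_new - g
                if abs(s @ y) > 1e-12 and abs(y @ H @ y) > 1e-12:
                    H = H + np.outer(s, s) / (s @ y) \
                          - (H @ np.outer(y, y) @ H) / (y @ H @ y)
                x, g = x_new, g_new
            return x

        print(dfp(lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2, [0.0, 0.0]))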

  17. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  18. Optimising Comprehensibility in Interlingual Translation

    DEFF Research Database (Denmark)

    Nisbeth Jensen, Matilde

    2015-01-01

    The increasing demand for citizen engagement in areas traditionally belonging exclusively to experts, such as health, law and technology has given rise to the necessity of making expert knowledge available to the general public through genres such as instruction manuals for consumer goods, patien...... the functional text type of Patient Information Leaflet. Finally, the usefulness of applying the principles of Plain Language and intralingual translation for optimising comprehensibility in interlingual translation is discussed....

  19. TEM turbulence optimisation in stellarators

    Science.gov (United States)

    Proll, J. H. E.; Mynick, H. E.; Xanthopoulos, P.; Lazerson, S. A.; Faber, B. J.

    2016-01-01

    With the advent of neoclassically optimised stellarators, optimising stellarators for turbulent transport is an important next step. The reduction of ion-temperature-gradient-driven turbulence has been achieved via shaping of the magnetic field, and the reduction of trapped-electron mode (TEM) turbulence is addressed in the present paper. Recent analytical and numerical findings suggest TEMs are stabilised when a large fraction of trapped particles experiences favourable bounce-averaged curvature. This is the case for example in Wendelstein 7-X (Beidler et al 1990 Fusion Technol. 17 148) and other Helias-type stellarators. Using this knowledge, a proxy function was designed to estimate the TEM dynamics, allowing optimal configurations for TEM stability to be determined with the STELLOPT (Spong et al 2001 Nucl. Fusion 41 711) code without extensive turbulence simulations. A first proof-of-principle optimised equilibrium stemming from the TEM-dominated stellarator experiment HSX (Anderson et al 1995 Fusion Technol. 27 273) is presented for which a reduction of the linear growth rates is achieved over a broad range of the operational parameter space. As an important consequence of this property, the turbulent heat flux levels are reduced compared with the initial configuration.

  20. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    The work in the field of parallel processing has developed as research activities using several numerical Monte Carlo simulations related to basic or applied current problems of nuclear and particle physics. For the applications utilizing the GEANT code, development or improvement work was done on parts simulating low-energy physical phenomena like radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program of neutron tracking in the range of low energies up to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other works in this field refer to simulations for nuclear medicine applications like, for instance, the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness) as well as methods for dosimetric calculations. In particular, these calculations are suited for a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other works mentioned in the same field refer to the simulation of electron channelling in crystals and the simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment

  1. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  2. Particle swarm optimisation classical and quantum perspectives

    CERN Document Server

    Sun, Jun; Wu, Xiao-Jun

    2016-01-01

    Introduction: Optimisation Problems and Optimisation Methods; Random Search Techniques; Metaheuristic Methods; Swarm Intelligence. Particle Swarm Optimisation: Overview; Motivations; PSO Algorithm: Basic Concepts and the Procedure; Paradigm: How to Use PSO to Solve Optimisation Problems; Some Harder Examples. Some Variants of Particle Swarm Optimisation: Why Does the PSO Algorithm Need to Be Improved?; Inertia and Constriction-Acceleration Techniques for PSO; Local Best Model; Probabilistic Algorithms; Other Variants of PSO. Quantum-Behaved Particle Swarm Optimisation: Overview; Motivation: From Classical Dynamics to Quantum Mechanics; Quantum Model: Fundamentals of QPSO; QPSO Algorithm; Some Essential Applications; Some Variants of QPSO; Summary. Advanced Topics: Behaviour Analysis of Individual Particles; Convergence Analysis of the Algorithm; Time Complexity and Rate of Convergence; Parameter Selection and Performance; Summary. Industrial Applications: Inverse Problems for Partial Differential Equations; Inverse Problems for Non-Linear Dynamical Systems; Optimal De...
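    The "basic concepts" part of the outline corresponds to the classical PSO update rule, sketched below with typical textbook coefficients (values assumed, not taken from the book):

        # Minimal classical particle swarm optimisation.
        import numpy as np

        rng = np.random.default_rng(0)

        def pso(f, dim=5, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5):
            X = rng.uniform(-5, 5, (swarm, dim))      # positions
            V = np.zeros_like(X)                      # velocities
            P, Pf = X.copy(), np.array([f(x) for x in X])  # personal bests
            g = P[np.argmin(Pf)]                      # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, swarm, dim))
                V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
                X = X + V
                F = np.array([f(x) for x in X])
                better = F < Pf
                P[better], Pf[better] = X[better], F[better]
                g = P[np.argmin(Pf)]
            return g, float(Pf.min())

        print(pso(lambda x: float(np.sum(x ** 2)))[1])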

  3. An Optimisation Approach for Room Acoustics Design

    DEFF Research Database (Denmark)

    Holm-Jørgensen, Kristian; Kirkegaard, Poul Henning; Andersen, Lars

    2005-01-01

    This paper discusses, on a conceptual level, the value of optimisation techniques in architectural acoustics room design from a practical point of view. It is chosen to optimise one objective room acoustics design criterion estimated from the sound field inside the room. The sound field is modeled using the boundary element method, where absorption is incorporated. An example is given where the geometry of a room is defined by four design modes. The room geometry is optimised to get a uniform sound pressure.

  4. Optimisation of technical specifications using probabilistic methods

    International Nuclear Information System (INIS)

    Ericsson, G.; Knochenhauer, M.; Hultqvist, G.

    1986-01-01

    During the last few years the development of methods for modifying and optimising nuclear power plant Technical Specifications (TS) for plant operations has received increased attention. Probabilistic methods in general, and the plant and system models of probabilistic safety assessment (PSA) in particular, seem to provide the most powerful tools for optimisation. This paper first gives some general comments on optimisation, identifying important parameters, and then describes recent Swedish experiences from the use of nuclear power plant PSA models and results for TS optimisation.

  5. Layout Optimisation of Wave Energy Converter Arrays

    DEFF Research Database (Denmark)

    Ruiz, Pau Mercadé; Nava, Vincenzo; Topper, Mathew B. R.

    2017-01-01

    This paper proposes an optimisation strategy for the layout design of wave energy converter (WEC) arrays. Optimal layouts are sought so as to maximise the absorbed power given a minimum q-factor, the minimum distance between WECs, and an area of deployment. To guarantee an efficient optimisation, a four-parameter layout description is proposed. Three different optimisation algorithms are further compared in terms of performance and computational cost. These are the covariance matrix adaptation evolution strategy (CMA), a genetic algorithm (GA) and the glowworm swarm optimisation (GSO) algorithm...

  6. Optimisation of the LHCb detector

    CERN Document Server

    Hierck, R H

    2003-01-01

    This thesis describes a comparison of the LHCb classic and LHCb light concepts from a tracking perspective. The comparison includes the detector occupancies, the various pattern recognition algorithms and the reconstruction performance. The final optimised LHCb setup is used to study the physics performance of LHCb for the Bs->DsK and Bs->DsPi decay channels. This includes both the event selection and a study of the sensitivity for the Bs oscillation frequency, Δm_s, the Bs lifetime difference, ΔΓ_s, and the CP parameter γ - 2δγ.

  7. Optimisation combinatoire Theorie et algorithmes

    CERN Document Server

    Korte, Bernhard; Fonlupt, Jean

    2010-01-01

    This book is the French translation of the fourth and final edition of Combinatorial Optimization: Theory and Algorithms, written by two eminent specialists in the field: Bernhard Korte and Jens Vygen of the University of Bonn in Germany. It emphasises the theoretical aspects of combinatorial optimisation as well as efficient and exact algorithms for solving problems, and in this it differs from the simpler heuristic approaches often described elsewhere. The book contains numerous concise and elegant proofs of difficult results. Intended for studen...

  8. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing, including a Torus, a collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. A DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  9. Modelling of a Spark Ignition Engine for Power-Heat Production Optimization Modèle de moteur à allumage commandé en vue de l’optimisation de la production chaleur-force

    Directory of Open Access Journals (Sweden)

    Descieux D.

    2011-09-01

    Spark ignition gas engines are increasingly used to produce electricity and heat simultaneously. The engine crankshaft drives a synchronous electric generator. The thermal power output is recovered from the engine coolant system and exhaust gas, and is generally used to produce hot water for heating systems. In order to achieve a better match between supply (the production of the engine) and user demand, good knowledge of the engine and of the phenomena involved is necessary. A generic methodology is proposed to simulate the stationary-state response of an SI engine. The engine simulation is based on a one-zone thermodynamic model, which characterizes each phase of the engine cycle to predict energy performance: exergy efficiency as high as 0.70 is attainable.

  10. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship. ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  11. Optimising resource management in neurorehabilitation.

    Science.gov (United States)

    Wood, Richard M; Griffiths, Jeff D; Williams, Janet E; Brouwers, Jakko

    2014-01-01

    To date, little research has been published regarding the effective and efficient management of resources (beds and staff) in neurorehabilitation, despite it being an expensive service in limited supply. To demonstrate how mathematical modelling can be used to optimise service delivery, a case study was undertaken at a major 21-bed neurorehabilitation unit in the UK. An automated computer program for assigning weekly treatment sessions is developed. Queue modelling is used to construct a mathematical model of the hospital in terms of referral submissions to a waiting list, admission and treatment, and ultimately discharge. This is used to analyse the impact of hypothetical strategic decisions on a variety of performance measures and costs. The project culminates in a hybridised model of these two approaches, since a relationship is found between the number of therapy hours received each week (scheduling output) and length of stay (queuing model input). The introduction of the treatment scheduling program has substantially improved timetable quality (meaning a better and fairer service to patients) and has reduced employee time expended in its creation by approximately six hours each week (freeing up time for clinical work). The queuing model has been used to assess the effect of potential strategies, such as increasing the number of beds or employing more therapists. The use of mathematical modelling has not only optimised resources in the short term, but has allowed the optimality of longer term strategic decisions to be assessed.
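
    The record does not give the model's equations, but the queueing side of such a study can be sketched with a standard M/M/c model, treating beds as servers; the arrival rate, mean stay and bed count below are illustrative, not the unit's actual figures.

      import math

      def erlang_c(lam, mu, c):
          # Probability that an arrival must queue in an M/M/c system
          # (lam: arrival rate, mu: service rate per server, c: servers).
          a = lam / mu                      # offered load; requires a < c
          s = sum(a**k / math.factorial(k) for k in range(c))
          top = a**c / (math.factorial(c) * (1 - a / c))
          return top / (s + top)

      def mean_wait(lam, mu, c):
          # Mean waiting time in the queue, Wq = C(c, a) / (c*mu - lam).
          return erlang_c(lam, mu, c) / (c * mu - lam)

      # Illustration: 0.5 referrals/day, 40-day mean stay, 21 beds.
      print(mean_wait(0.5, 1 / 40, 21))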

  12. The reduction of CO{sub 2} emissions from a turbocharged DI gasoline engine through optimised cooling system control; CO{sub 2}-Minderung bei einem Turbo-DI-Ottomotor durch optimiertes Thermomanagement

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, S.; Mueller, R.; Feldhaus, G. [Behr GmbH, Stuttgart (Germany); Finkeldei, T. [BHTC GmbH, Lippstadt (Germany); Neubauer, M. [AVL List GmbH, Graz (Austria)

    2008-01-15

    In a joint project Behr, Behr-Hella Thermocontrol (BHTC) and AVL List have investigated various thermal management technologies in order to reduce the CO{sub 2} emissions of a turbocharged direct injection gasoline engine. Through the use of cooled EGR, the fuel consumption at part load was reduced by up to 5%; at full load the consumption was reduced by up to 18%, since no enrichment was needed. Under real driving conditions a saving of 6% was achieved. A further reduction of about 3% in the NEDC was possible via coolant standstill during engine warm-up. Additionally, it was shown that a change in the engine coolant temperature of 10 K, made possible by the application of a map-controlled thermostat, has the potential for savings of up to 1.4%. (orig.)

  13. A combined system comprising a biomass gasifier and a Stirling engine. Design and optimisation for continuous operation; Eine Anlagenkombination aus Biomassevergaser und Stirlingmotor. Anlagendesign und Auslegung fuer den Dauerbetrieb

    Energy Technology Data Exchange (ETDEWEB)

    Huelscher, Manfred [Qalovis Farmer Automatic Energy GmbH, Laer (Germany)

    2010-07-01

    Conventional wood gasifiers consist of a gasifier, a gas filter, and an internal combustion engine. This contribution presents a novel system comprising a gasifier, a burner, and a Stirling engine. To enhance the electric efficiency, the burner is operated with air preheated via recuperation. The Stirling characteristic is known, so the gasification/combustion system can be calculated and designed on the basis of the Stirling data. The dust problem of the Stirling heat exchanger is solved by an automatic filter system, so that low-maintenance long-term operation becomes possible.

  14. Evolutionary programming for neutron instrument optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Bentley, Phillip M. [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany)]. E-mail: phillip.bentley@hmi.de; Pappas, Catherine [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany); Habicht, Klaus [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany); Lelievre-Berna, Eddy [Institut Laue-Langevin, 6 rue Jules Horowitz, BP 156, 38042 Grenoble Cedex 9 (France)

    2006-11-15

    Virtual instruments based on Monte-Carlo techniques are now an integral part of novel instrumentation development, and the existing codes (McSTAS and Vitess) are extensively used to define and optimise novel instrumental concepts. Neutron spectrometers, however, involve a large number of parameters, and their optimisation is often a complex and tedious procedure. Artificial intelligence algorithms are proving increasingly useful in such situations. Here, we present an automatic, reliable and scalable numerical optimisation concept based on the canonical genetic algorithm (GA). The algorithm was used to optimise the 3D magnetic field profile of the NSE spectrometer SPAN at the HMI. We discuss the potential of the GA which, combined with the existing Monte-Carlo codes (Vitess, McSTAS, etc.), leads to a very powerful tool for automated global optimisation of a general neutron scattering instrument, avoiding local optimum configurations.

  15. Evolutionary programming for neutron instrument optimisation

    International Nuclear Information System (INIS)

    Bentley, Phillip M.; Pappas, Catherine; Habicht, Klaus; Lelievre-Berna, Eddy

    2006-01-01

    Virtual instruments based on Monte-Carlo techniques are now an integral part of novel instrumentation development, and the existing codes (McSTAS and Vitess) are extensively used to define and optimise novel instrumental concepts. Neutron spectrometers, however, involve a large number of parameters, and their optimisation is often a complex and tedious procedure. Artificial intelligence algorithms are proving increasingly useful in such situations. Here, we present an automatic, reliable and scalable numerical optimisation concept based on the canonical genetic algorithm (GA). The algorithm was used to optimise the 3D magnetic field profile of the NSE spectrometer SPAN at the HMI. We discuss the potential of the GA which, combined with the existing Monte-Carlo codes (Vitess, McSTAS, etc.), leads to a very powerful tool for automated global optimisation of a general neutron scattering instrument, avoiding local optimum configurations.
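
    Both records describe the canonical genetic algorithm; a minimal real-coded GA of that kind is sketched below. The population size, operator rates and the placeholder fitness function are generic illustrations, not the SPAN field-profile objective.

      import random

      def genetic_algorithm(fitness, dim, pop_size=40, gens=100,
                            p_cross=0.9, p_mut=0.1, lo=-1.0, hi=1.0, seed=1):
          # Canonical real-coded GA: tournament selection, one-point
          # crossover, Gaussian mutation, two-elite survival. Requires dim >= 2.
          rng = random.Random(seed)
          pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
          for _ in range(gens):
              def tournament():
                  a, b = rng.sample(pop, 2)
                  return a if fitness(a) < fitness(b) else b
              nxt = sorted(pop, key=fitness)[:2]           # elitism
              while len(nxt) < pop_size:
                  p1, p2 = tournament(), tournament()
                  if rng.random() < p_cross:
                      cut = rng.randrange(1, dim)          # one-point crossover
                      child = p1[:cut] + p2[cut:]
                  else:
                      child = p1[:]
                  child = [g + rng.gauss(0, 0.1) if rng.random() < p_mut else g
                           for g in child]
                  nxt.append([min(max(g, lo), hi) for g in child])
              pop = nxt
          return min(pop, key=fitness)

      # Placeholder objective: eight hypothetical field-profile coefficients.
      print(genetic_algorithm(lambda x: sum(g * g for g in x), dim=8))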

  16. Optimizing the efficiency of a diesel engine for a hybrid wind-diesel experimental validation; Optimisation de l'efficacite du moteur diesel pour un systeme hybride eolien-diesel-validation experimentale

    Energy Technology Data Exchange (ETDEWEB)

    Ibrahim, H.; Dimitrova, M. [TechnoCentre Eolien, Murdochville, PQ (Canada); Ilinca, A. [Quebec Univ., Rimouski, PQ (Canada); Perron, J. [Quebec Univ., Chicoutimi, PQ (Canada)

    2010-07-01

    This study examined the feasibility of using a wind-diesel compressed air storage system with large-scale gas turbines at remote sites where a good wind resource is available. Studies have shown that the system can increase the wind energy penetration rate, particularly when combined with a turbo diesel engine. The system increases the power and performance of the diesel engine and reduces fuel consumption and emissions of greenhouse gases (GHG). This study included a comparison of different technical solutions for the compressed air energy storage system, and described the one that optimized the performance and cost of the overall system. The optimal solution allowed the turbocharger to operate independently of the engine due to the energy provided by the compressed air in the air turbine. Optimization required maximizing the compressor power as an objective function. The energy balance of the engine itself had to be taken into account, along with the turbocharging system. 12 refs., 2 tabs., 16 figs.

  17. Dose optimisation in computed radiography

    International Nuclear Information System (INIS)

    Schreiner-Karoussou, A.

    2005-01-01

    After the installation of computed radiography (CR) systems in three hospitals in Luxembourg, a patient dose survey was carried out for three radiographic examinations: thorax, pelvis and lumbar spine. It was found that patient doses had changed in comparison with the doses measured for conventional radiography in the same three hospitals. Close collaboration between the manufacturers of the X-ray installations, the CR imaging systems and the medical physicists led to the discovery that the speed class with which each radiographic examination was to be performed had been ignored during installation of the digital imaging systems. A number of procedures were carried out in order to calibrate and program the X-ray installations in conjunction with the CR systems. Following this optimisation procedure, a new patient dose survey was carried out for the three radiographic examinations. It was found that patient doses for the three hospitals were reduced. (authors)

  18. Optimising costs in WLCG operations

    CERN Document Server

    Pradillo, Mar; Flix, Josep; Forti, Alessandra; Sciabà, Andrea

    2015-01-01

    The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse the 50 Petabytes of data annually generated by the LHC. The WLCG operations are coordinated by a distributed team of managers and experts and performed by people at all participating sites and from all the experiments. Several improvements in the WLCG infrastructure have been implemented during the first long LHC shutdown to prepare for the increasing needs of the experiments during Run2 and beyond. However, constraints in funding will affect not only the computing resources but also the available effort for operations. This paper presents the results of a detailed investigation on the allocation of the effort in the different areas of WLCG operations, identifies the most important sources of inefficiency and proposes viable strategies for optimising the operational cost, taking into account the current trends in the evolution of the computing infrastruc...

  19. Distributed parallel messaging for multiprocessor systems

    Science.gov (United States)

    Chen, Dong; Heidelberger, Philip; Salapura, Valentina; Senger, Robert M; Steinmacher-Burrow, Burhard; Sugawara, Yutaka

    2013-06-04

    A method and apparatus for distributed parallel messaging in a parallel computing system. The apparatus includes, at each node of a multiprocessor network, multiple injection messaging engine units and reception messaging engine units, each implementing a DMA engine and each supporting both multiple packet injection into and multiple reception from a network, in parallel. The reception side of the messaging unit (MU) includes a switch interface enabling writing of data of a packet received from the network to the memory system. The transmission side of the messaging unit includes a switch interface for reading from the memory system when injecting packets into the network.

  20. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics: Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) and provides a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, singularity models taking into account motion and force transmissibility, and others. This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  1. Optimum layout of engine thermal management; Optimale Auslegung des Motor-Thermomanagements

    Energy Technology Data Exchange (ETDEWEB)

    Beykirch, Ruediger; Knauf, Juergen; Lehmann, Joerg [FEV GmbH, Aachen (Germany). Simulation Ottomotoren; Beulshausen, Johannes [RWTH Aachen Univ. (Germany). Lehrstuhl fuer Verbrennungskraftmaschinen

    2013-05-01

    Optimising an engine's thermal management on the basis of different driving cycles and vehicle and engine tests is both time-consuming and costly. FEV GmbH, in cooperation with the Institute for Combustion Engines at RWTH Aachen University, has developed a holistic simulation model that enables the thermal management of an individual engine to be optimised.

  2. Parallel implementation of DNA sequences matching algorithms using PWM on GPU architecture.

    Science.gov (United States)

    Sharma, Rahul; Gupta, Nitin; Narang, Vipin; Mittal, Ankush

    2011-01-01

    Positional Weight Matrices (PWMs) are widely used in the representation and detection of Transcription Factor Binding Sites (TFBSs) on DNA. We implement an online PWM search algorithm on a parallel architecture. Large PWM data sets can be processed in parallel on Graphics Processing Unit (GPU) systems, which helps in matching sequences at a faster rate. Our method makes extensive use of the highly multithreaded architecture and shared memory of multi-core GPUs. Efficient use of shared memory is required to optimise parallel reduction in CUDA. Our optimised method achieves a speedup of 230-280x over the linear implementation, on a GeForce GTX 280 GPU.
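
    The paper's CUDA kernels are not reproduced in the record, but the underlying PWM windowed-scoring computation, which the GPU version parallelises over windows, can be sketched in vectorised Python. The toy 3-column matrix is illustrative, and NumPy >= 1.20 is assumed for sliding_window_view.

      import numpy as np

      BASE = {"A": 0, "C": 1, "G": 2, "T": 3}

      def pwm_scan(pwm, seq):
          # Score every length-L window of seq against a 4 x L log-odds PWM.
          L = pwm.shape[1]
          idx = np.array([BASE[b] for b in seq])
          windows = np.lib.stride_tricks.sliding_window_view(idx, L)
          return pwm[windows, np.arange(L)].sum(axis=1)

      # Toy 3-column matrix; rows are A, C, G, T; uniform 0.25 background.
      probs = np.array([[0.80, 0.10, 0.10],
                        [0.05, 0.70, 0.10],
                        [0.05, 0.10, 0.70],
                        [0.10, 0.10, 0.10]])
      pwm = np.log2(probs / 0.25)
      print(pwm_scan(pwm, "ACGTACGA"))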

  3. Digital tomosynthesis parallel imaging computational analysis with shift and add and back projection reconstruction algorithms.

    Science.gov (United States)

    Chen, Ying; Balla, Apuroop; Rayford II, Cleveland E; Zhou, Weihua; Fang, Jian; Cong, Linlin

    2010-01-01

    Digital tomosynthesis is a novel technology that has been developed for various clinical applications. The parallel imaging configuration is utilised in a few tomosynthesis imaging areas, such as digital chest tomosynthesis. Recently, parallel imaging configurations for breast tomosynthesis have begun to appear as well. In this paper, we present an investigation of the computational analysis of impulse response characterisation as the starting point of our research efforts to optimise parallel imaging configurations. Results suggest that impulse response computational analysis is an effective method to compare and optimise imaging configurations.

  4. Reduction environmental effects of civil aircraft through multi-objective flight plan optimisation

    International Nuclear Information System (INIS)

    Lee, D S; Gonzalez, L F; Walker, R; Periaux, J; Onate, E

    2010-01-01

    With rising environmental concern, the reduction of critical aircraft emissions, including carbon dioxide (CO2) and nitrogen oxides (NOx), is one of the most important aeronautical problems. There are many possible ways to address the problem, such as designing new wing/aircraft shapes or new, more efficient engines. This paper instead provides a set of acceptable flight plans as a first step, short of replacing current aircraft. The paper investigates green aircraft design optimisation in terms of aircraft range, mission fuel weight (CO2) and NOx using advanced Evolutionary Algorithms coupled to flight optimisation system software. Two multi-objective design optimisations are conducted to find the best set of flight plans for current aircraft, considering discretised altitudes and Mach numbers, without redesigning the aircraft shape or engine type. The objectives of the first optimisation are to maximise the range of the aircraft while minimising NOx with constant mission fuel weight. The second optimisation considers minimisation of mission fuel weight and NOx with fixed aircraft range. Numerical results show that the method is able to capture a set of useful trade-offs that reduce NOx and CO2 (minimum mission fuel weight).
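
    Multi-objective optimisation of this kind returns a set of non-dominated trade-offs rather than a single optimum. A minimal Pareto-front filter, with hypothetical (fuel, NOx) values standing in for evaluated flight plans, illustrates the idea.

      def pareto_front(points):
          # Keep the non-dominated points (minimisation in every objective);
          # O(n^2), which is fine for a sketch.
          def dominates(q, p):
              return all(qi <= pi for qi, pi in zip(q, p)) and q != p
          return [p for p in points if not any(dominates(q, p) for q in points)]

      # Hypothetical (mission fuel weight, NOx) values for candidate plans.
      plans = [(100, 5.0), (90, 6.0), (95, 5.5), (110, 4.5), (105, 5.6)]
      print(pareto_front(plans))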

  5. Power supply of Eurotunnel. Optimisation based on traffic and simulation studies

    Energy Technology Data Exchange (ETDEWEB)

    Marie, Stephane [SNCF, Direction de l' Ingenierie, Saint-Denis (France). Dept. des Installations Fixes de Traction Electrique; Dupont, Jean-Pierre; Findinier, Bertrand; Maquaire, Christian [Eurotunnel, Coquelles (France)

    2010-12-15

    In order to reduce electrical power costs and to cope with the significant traffic increase, a new study was carried out on feeding the tunnel section from the French power station, thus improving and reinforcing the existing network. Based on a design study established by the SNCF engineering department, EUROTUNNEL chose a new electrical scheme to cope with the traffic increase and optimise investments. (orig.)

  6. Synchronization Techniques in Parallel Discrete Event Simulation

    OpenAIRE

    Lindén, Jonatan

    2018-01-01

    Discrete event simulation is an important tool for evaluating system models in many fields of science and engineering. To improve the performance of large-scale discrete event simulations, several techniques to parallelize discrete event simulation have been developed. In parallel discrete event simulation, the work of a single discrete event simulation is distributed over multiple processing elements. A key challenge in parallel discrete event simulation is to ensure that causally dependent ...

  7. Combining simulation and multi-objective optimisation for equipment quantity optimisation in container terminals

    OpenAIRE

    Lin, Zhougeng

    2013-01-01

    This thesis proposes a combination framework to integrate simulation and multi-objective optimisation (MOO) for container terminal equipment optimisation. It addresses how the strengths of simulation and multi-objective optimisation can be integrated to find high quality solutions for multiple objectives with low computational cost. Three structures for the combination framework are proposed respectively: pre-MOO structure, integrated MOO structure and post-MOO structure. The applications of ...

  8. Parallel Algorithms for Groebner-Basis Reduction

    Science.gov (United States)

    1987-09-25

    Parallel Algorithms for Groebner-Basis Reduction. Technical report, Productivity Engineering in the UNIX Environment.

  9. Layout Optimisation of Wave Energy Converter Arrays

    Directory of Open Access Journals (Sweden)

    Pau Mercadé Ruiz

    2017-08-01

    This paper proposes an optimisation strategy for the layout design of wave energy converter (WEC) arrays. Optimal layouts are sought so as to maximise the absorbed power given a minimum q-factor, the minimum distance between WECs, and an area of deployment. To guarantee an efficient optimisation, a four-parameter layout description is proposed. Three different optimisation algorithms are further compared in terms of performance and computational cost. These are the covariance matrix adaptation evolution strategy (CMA), a genetic algorithm (GA) and the glowworm swarm optimisation (GSO) algorithm. The results show slightly higher performances for the latter two algorithms; however, the first turns out to be significantly less computationally demanding.
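
    The abstract does not spell out the four layout parameters, so the sketch below assumes a grid-style parameterisation (rows, columns, spacing, orientation) purely for illustration; it shows how a four-number description generates candidate layouts, and how the spacing and area constraints can be checked, so that an optimiser such as CMA, GA or GSO searches four numbers instead of all individual coordinates.

      import numpy as np

      def layout(n_rows, n_cols, spacing, angle):
          # Positions of n_rows x n_cols WECs on a rotated regular grid.
          r, c = np.meshgrid(np.arange(n_rows), np.arange(n_cols), indexing="ij")
          pts = np.stack([c.ravel() * spacing, r.ravel() * spacing], axis=1)
          rot = np.array([[np.cos(angle), -np.sin(angle)],
                          [np.sin(angle),  np.cos(angle)]])
          return pts @ rot.T

      def feasible(pts, min_dist, area_side):
          # Minimum WEC separation and a square deployment-area constraint.
          d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
          np.fill_diagonal(d, np.inf)
          return bool(d.min() >= min_dist and np.all(np.ptp(pts, axis=0) <= area_side))

      pts = layout(2, 4, spacing=80.0, angle=np.pi / 6)
      print(feasible(pts, min_dist=50.0, area_side=400.0))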

  10. Topology optimisation of natural convection problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe

    2014-01-01

    This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach...
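
    The Brinkman penalisation and conductivity interpolation the record describes can be illustrated with common interpolation formulas from the density-based topology optimisation literature; the specific RAMP/SIMP forms and constants below are typical choices, not necessarily the ones used in the paper.

      import numpy as np

      def brinkman_alpha(rho, alpha_max=1e7, alpha_min=0.0, q=0.1):
          # Inverse permeability: alpha_max in solid (rho = 0), alpha_min in
          # fluid (rho = 1); RAMP-style convex interpolation.
          return alpha_max + (alpha_min - alpha_max) * rho * (1 + q) / (rho + q)

      def effective_conductivity(rho, k_fluid=1.0, k_solid=10.0, p=3.0):
          # SIMP-style interpolation between fluid and solid conductivities.
          return k_fluid + (k_solid - k_fluid) * (1 - rho) ** p

      rho = np.linspace(0.0, 1.0, 5)
      print(brinkman_alpha(rho))
      print(effective_conductivity(rho))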

  11. Topology Optimisation for Coupled Convection Problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe

    This thesis deals with topology optimisation for coupled convection problems. The aim is to extend and apply topology optimisation to steady-state conjugate heat transfer problems, where the heat conduction equation governs the heat transfer in a solid and is coupled to thermal transport in a surrounding fluid, governed by a convection-diffusion equation, where the convective velocity field is found from solving the isothermal incompressible steady-state Navier-Stokes equations. Topology optimisation is also applied to steady-state natural convection problems. The modelling is done using stabilised finite elements, the formulation and implementation of which was done partly during a special course as preparatory work for this thesis. The formulation is extended with a Brinkman friction term in order to facilitate the topology optimisation of fluid flow and convective cooling problems. The derived...

  12. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    When algorithms solve dynamic multi-objective optimisation problems (DMOOPs), benchmark functions should be used to determine whether the algorithm can overcome specific difficulties that can occur in real-world problems. However, for dynamic multi...

  13. Credit price optimisation within retail banking

    African Journals Online (AJOL)

    2014-02-14

    Cost-based pricing, where the price of a product or service is based on the ... function obtained from fitting a logistic regression model ... Note that the proposed optimisation approach below will allow us to also incorporate ...
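
    Only fragments of the abstract survive, but the approach it hints at, a logistic take-up model feeding a price optimisation, can be sketched as follows; the response coefficients, funding cost and loss rate are invented for illustration.

      import numpy as np

      def take_up_probability(rate, a=8.0, b=-60.0):
          # Hypothetical logistic response: take-up falls as the rate rises.
          return 1.0 / (1.0 + np.exp(-(a + b * rate)))

      def expected_profit(rate, funding_cost=0.06, loss_rate=0.02):
          # Expected profit per offer = P(take-up) x net interest margin.
          return take_up_probability(rate) * (rate - funding_cost - loss_rate)

      # Grid search over candidate interest rates.
      rates = np.linspace(0.05, 0.30, 251)
      best = rates[np.argmax(expected_profit(rates))]
      print(round(float(best), 3))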

  14. User perspectives in public transport timetable optimisation

    DEFF Research Database (Denmark)

    Jensen, Jens Parbo; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2014-01-01

    The present paper deals with timetable optimisation from the perspective of minimising the waiting time experienced by passengers when transferring either to or from a bus. Due to its inherent complexity, this bi-level minimisation problem is extremely difficult to solve mathematically, since tim... on the large-scale public transport network in Denmark. The timetable optimisation approach yielded a yearly reduction in weighted waiting time equivalent to approximately 45 million Danish kroner (9 million USD).

  15. Methodological principles for optimising functional MRI experiments

    International Nuclear Information System (INIS)

    Wuestenberg, T.; Giesel, F.L.; Strasburger, H.

    2005-01-01

    Functional magnetic resonance imaging (fMRI) is one of the most common methods for localising neuronal activity in the brain. Even though the sensitivity of fMRI is comparatively low, the optimisation of certain experimental parameters allows obtaining reliable results. In this article, approaches for optimising the experimental design, imaging parameters and analytic strategies will be discussed. Clinical neuroscientists and interested physicians will receive practical rules of thumb for improving the efficiency of brain imaging experiments. (orig.)

  16. Optimisation: how to develop stake holder involvement

    International Nuclear Information System (INIS)

    Weiss, W.

    2003-01-01

    The Precautionary Principle is an internationally recognised approach for dealing with risk situations characterised by uncertainties and potentially irreversible damages. Since the late fifties, ICRP has adopted this prudent attitude because of the lack of scientific evidence concerning the existence of a threshold at low doses for stochastic effects. The 'linear, no-threshold' model and the 'optimisation of protection' principle have been developed as a pragmatic response for the management of the risk. The progress in epidemiology and radiobiology over the last decades has affirmed the initial assumption, and optimisation remains the appropriate response for the application of the precautionary principle in the context of radiological protection. The basic objective of optimisation is, for any source within the system of radiological protection, to maintain the level of exposure as low as reasonably achievable, taking into account social and economic factors. Methods, tools and procedures have been developed over the last two decades to put the optimisation principle into practice, with a central role given to cost-benefit analysis as a means to determine the optimised level of protection. However, with the advancement in the implementation of the principle, more emphasis was progressively given to good practice, as well as to the importance of controlling individual levels of exposure through the optimisation process. In the context of the revision of its present recommendations, the Commission is reinforcing the emphasis on protection of the individual with the adoption of an equity-based system that recognizes individual rights and a basic level of health protection. Another advancement is the role now accorded to 'stakeholder involvement' in the optimisation process as a means to improve the quality of the decision-aiding process for identifying and selecting protection actions accepted by all those involved. The paper

  17. Shape optimisation and performance analysis of flapping wings

    KAUST Repository

    Ghommem, Mehdi

    2012-09-04

    ... time-averaged thrust, while the average aerodynamic power is increased. Furthermore, increasing the number of variables (i.e., providing the wing shape with greater degrees of spatial freedom) is observed to enable superior designs. To gain a better understanding of the reasons for which the obtained optimised shapes produce efficient flapping flights, the wake pattern and its vorticity strength are examined. The work described in this paper should facilitate better guidance for the shape design of engineered flying systems.

  18. Dose optimisation in single plane interstitial brachytherapy

    DEFF Research Database (Denmark)

    Tanderup, Kari; Hellebust, Taran Paulsen; Honoré, Henriette Benedicte

    2006-01-01

    BACKGROUND AND PURPOSE: Brachytherapy dose distributions can be optimised by modulation of source dwell times. In this study dose optimisation in single planar interstitial implants was evaluated in order to quantify the potential benefit in patients. MATERIAL AND METHODS: In 14 patients, treated for recurrent rectal and cervical cancer, flexible catheters were sutured intra-operatively to the tumour bed in areas with compromised surgical margin. Both non-optimised, geometrically and graphically optimised CT-based dose plans were made. The overdose index ... on the regularity of the implant, such that the benefit of optimisation was larger for irregular implants. OI and HI correlated strongly with target volume, limiting the usability of these parameters for comparison of dose plans between patients. CONCLUSIONS: Dwell time optimisation significantly ...

  19. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that probably are neither limited to this particular application nor do they seem likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.

  20. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  1. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
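
    The contrast the abstract draws, output polarization as a weighted sum of independently modulated components rather than a product of transformations, can be illustrated with Jones vectors; the four-component basis and complex weights below are a simplification of the paper's intensity-modulation scheme.

      import numpy as np

      # Jones vectors of four spatially separated components (assumed basis).
      H = np.array([1, 0], dtype=complex)
      V = np.array([0, 1], dtype=complex)
      D = np.array([1, 1], dtype=complex) / np.sqrt(2)
      R = np.array([1, -1j], dtype=complex) / np.sqrt(2)

      def combine(weights, components=(H, V, D, R)):
          # Parallel synthesis: a weighted sum of independently modulated
          # components, in contrast to a serial product of transformations.
          out = sum(w * c for w, c in zip(weights, components))
          return out / np.linalg.norm(out)

      # Equal H and V with a 90-degree phase yields circular polarization.
      print(combine([1.0, 1.0j, 0.0, 0.0]))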

  2. Thermal performance monitoring and optimisation

    International Nuclear Information System (INIS)

    Sunde, Svein; Berg, Oeyvind

    1998-01-01

    Monitoring of the thermal efficiency of nuclear power plants is expected to become increasingly important as energy-market liberalisation exposes plants to increasing availability requirements and fiercer competition. The general goal in thermal performance monitoring is straightforward: to maximise the ratio of profit to cost under the constraints of safe operation. One may perceive this goal to be pursued in two ways, one oriented towards fault detection and cost-optimal predictive maintenance, and another aimed at optimising target values of parameters in response to any component degradation detected, changes in ambient conditions, or the like. Annual savings associated with effective thermal-performance monitoring are expected to be in the order of $100 000 for power plants of representative size. A literature review shows that a number of computer systems for thermal-performance monitoring exist, either as prototypes or commercially available. The characteristics and needs of power plants may vary widely, however, and decisions concerning the exact scope, content and configuration of a thermal-performance monitor may well follow a heuristic approach. Furthermore, re-use of existing software modules may be desirable. Therefore, we suggest here the design of a flexible workbench for easy assembly of an experimental thermal-performance monitor at the Halden Project. The suggested design draws heavily on our extended experience in implementing control-room systems characterised by high levels of customisation, flexible configuration and modular structure, and on a number of relevant adjoining activities. The design includes a multi-computer communication system and a graphical user interface, and aims at a system adaptable to any combination of in-house or end-user modules, as well as commercially available software. (author)

  3. A maintenance policy for two-unit parallel systems based on imperfect monitoring information

    Energy Technology Data Exchange (ETDEWEB)

    Barros, Anne [Department Genie des Systems Industiels (GSI), Universite de technologie de Troyes, 12 rue Marie Curie, BP 2060, 10010 Troyes, Cedex (France)]. E-mail: anne.barros@utt.fr; Berenguer, Christophe [Department Genie des Systems Industiels (GSI), Universite de technologie de Troyes, 12 rue Marie Curie, BP 2060, 10010 Troyes, Cedex (France); Grall, Antoine [Department Genie des Systems Industiels (GSI), Universite de technologie de Troyes, 12 rue Marie Curie, BP 2060, 10010 Troyes, Cedex (France)

    2006-02-01

    In this paper a maintenance policy is optimised for a two-unit system with a parallel structure and stochastic dependences. Monitoring problems are taken into account in the optimisation scheme: the failure time of each unit may go undetected with a given probability. Conditions on the system parameters (unit failure rates) and on the non-detection probabilities must be verified to make the optimisation scheme valid. These conditions are clearly identified. Numerical experiments show the relevance of taking monitoring problems into account in the maintenance model.

  4. A maintenance policy for two-unit parallel systems based on imperfect monitoring information

    International Nuclear Information System (INIS)

    Barros, Anne; Berenguer, Christophe; Grall, Antoine

    2006-01-01

    In this paper a maintenance policy is optimised for a two-unit system with a parallel structure and stochastic dependences. Monitoring problems are taken into account in the optimisation scheme: the failure time of each unit may go undetected with a given probability. Conditions on the system parameters (unit failure rates) and on the non-detection probabilities must be verified to make the optimisation scheme valid. These conditions are clearly identified. Numerical experiments show the relevance of taking monitoring problems into account in the maintenance model.
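
    The records' analytic optimisation is not reproduced here, but the effect of imperfect monitoring on such a policy can be explored with a simple discrete-time Monte Carlo sketch: two parallel units, periodic inspections, and failures detected only with probability q; all rates and costs are illustrative.

      import math
      import random

      def avg_cost_rate(T, q, lam=0.01, horizon=200_000, dt=1.0,
                        c_insp=5.0, c_rep=50.0, c_down=20.0, seed=0):
          # Discrete-time Monte Carlo of a two-unit parallel system: failures
          # are only found at inspections (every T), each with probability q.
          rng = random.Random(seed)
          p_fail = 1 - math.exp(-lam * dt)      # per-step failure probability
          up, cost, next_insp, t = [True, True], 0.0, T, 0.0
          while t < horizon:
              for i in (0, 1):
                  if up[i] and rng.random() < p_fail:
                      up[i] = False
              if not any(up):
                  cost += c_down * dt           # parallel system is down
              if t >= next_insp:
                  cost += c_insp
                  for i in (0, 1):
                      if not up[i] and rng.random() < q:   # imperfect detection
                          cost += c_rep
                          up[i] = True
                  next_insp += T
              t += dt
          return cost / horizon

      # Compare inspection intervals under imperfect monitoring (q = 0.9).
      for T in (10, 50, 100, 200):
          print(T, round(avg_cost_rate(T, q=0.9), 4))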

  5. Parametric studies and optimisation of pumped thermal electricity storage

    International Nuclear Information System (INIS)

    McTigue, Joshua D.; White, Alexander J.; Markides, Christos N.

    2015-01-01

    Highlights: • PTES is modelled by cycle analysis and a Schumann-style model of the thermal stores. • Optimised trade-off surfaces show a flat efficiency vs. energy density profile. • Overall roundtrip efficiencies of around 70% are not inconceivable. - Abstract: Several of the emerging technologies for electricity storage are based on some form of thermal energy storage (TES). Examples include liquid air energy storage, pumped heat energy storage and, at least in part, advanced adiabatic compressed air energy storage. Compared to other large-scale storage methods, TES benefits from relatively high energy densities, which should translate into a low cost per MW h of storage capacity and a small installation footprint. TES is also free from the geographic constraints that apply to hydro storage schemes. TES concepts for electricity storage rely on either a heat pump or refrigeration cycle during the charging phase to create a hot or a cold storage space (the thermal stores), or in some cases both. During discharge, the thermal stores are depleted by reversing the cycle such that it acts as a heat engine. The present paper is concerned with a form of TES that has both hot and cold packed-bed thermal stores, and for which the heat pump and heat engine are based on a reciprocating Joule cycle, with argon as the working fluid. A thermodynamic analysis is presented based on traditional cycle calculations coupled with a Schumann-style model of the packed beds. Particular attention is paid to the various loss-generating mechanisms and their effect on roundtrip efficiency and storage density. A parametric study is first presented that examines the sensitivity of results to assumed values of the various loss factors and demonstrates the rather complex influence of the numerous design variables. Results of an optimisation study are then given in the form of trade-off surfaces for roundtrip efficiency, energy density and power density. The optimised designs show a

  6. Status of achievements reached in applying optimisation of protection in prevention and mitigation of accidents in nuclear facilities

    International Nuclear Information System (INIS)

    Bengtsson, G.; Hoegberg, L.

    1988-01-01

    Optimisation of protection in a broad sense is basically a political undertaking, where the resources put into protection are balanced against other factors - quantifiable and non-quantifiable - to obtain the best protection that can be achieved under the circumstances. In a narrower sense, optimisation can be evaluated in procedures allowing for a few quantifiable factors, such as cost/effectiveness analysis. These procedures are used as inputs to the broader optimisation. The paper discusses several examples from Sweden concerning evaluations and decisions relating to prevention of accidents and mitigation of their consequences. Comparison is made with typical optimisation criteria proposed for radiation protection work and for cost/effectiveness analysis in the USA, notably NUREG-1150 (draft). The examples show that optimisation procedures in a narrower sense have not been decisive. Individual dose limits seem to be increasingly important as compared to collective dose optimisation, and political, commercial or engineering judgements may lead to decisions far away from those suggested by simple optimisation considerations.

  7. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    Science.gov (United States)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
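
    The roofline model the authors correlate their results against reduces to a one-line bound: attainable throughput is the lesser of the compute roof and arithmetic intensity times the bandwidth roof. A small calculator with illustrative machine numbers (not the paper's measurements):

      def roofline(arith_intensity, peak_gflops, peak_bw_gbs):
          # Attainable GFLOP/s = min(compute roof, AI x memory-bandwidth roof).
          return min(peak_gflops, arith_intensity * peak_bw_gbs)

      # Illustrative numbers: e.g. a flux kernel at 0.5 FLOP/byte on a node
      # with a 500 GFLOP/s peak and 60 GB/s of memory bandwidth is
      # bandwidth-bound at 30 GFLOP/s.
      for ai in (0.25, 0.5, 1.0, 4.0, 16.0):
          print(ai, roofline(ai, peak_gflops=500.0, peak_bw_gbs=60.0))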

  8. Engineered Resistant-Starch (ERS) Diet Shapes Colon Microbiota Profile in Parallel with the Retardation of Tumor Growth in In Vitro and In Vivo Pancreatic Cancer Models

    Science.gov (United States)

    Panebianco, Concetta; Adamberg, Kaarel; Adamberg, Signe; Saracino, Chiara; Jaagura, Madis; Kolk, Kaia; Di Chio, Anna Grazia; Graziano, Paolo; Vilu, Raivo; Pazienza, Valerio

    2017-01-01

    Background/aims: Pancreatic cancer (PC) is ranked as the fourth leading cause of cancer-related deaths worldwide. Despite recent advances in treatment options, a modest impact on the outcome of the disease is observed so far. We have previously demonstrated that short-term fasting cycles have the potential to improve the efficacy of chemotherapy against PC. The aim of this study was to assess the effect of an engineered resistant-starch (ERS) mimicking diet on the growth of cancer cell lines in vitro, on the composition of fecal microbiota, and on tumor growth in an in vivo pancreatic cancer mouse xenograft model. Materials and Methods: BxPC-3, MIA PaCa-2 and PANC-1 cells were cultured in the control, and in the ERS-mimicking diet culturing condition, to evaluate tumor growth and proliferation pathways. Pancreatic cancer xenograft mice were subjected to an ERS diet to assess tumor volume and weight as compared to mice fed with a control diet. The composition and activity of fecal microbiota were further analyzed in growth experiments by isothermal microcalorimetry. Results: Pancreatic cancer cells cultured in an ERS diet-mimicking medium showed decreased levels of phospho-ERK1/2 (extracellular signal-regulated kinase proteins) and phospho-mTOR (mammalian target of rapamycin) levels, as compared to those cultured in standard medium. Consistently, xenograft pancreatic cancer mice subjected to an ERS diet displayed significant retardation in tumor growth. In in vitro growth experiments, the fecal microbial cultures from mice fed with an ERS diet showed enhanced growth on residual substrates, higher production of formate and lactate, and decreased amounts of propionate, compared to fecal microbiota from mice fed with the control diet. Conclusion: A positive effect of the ERS diet on composition and metabolism of mouse fecal microbiota shown in vitro is associated with the decrease of tumor progression in the in vivo PC xenograft mouse model. These results suggest that

  9. Information engineering

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, D.N.

    1997-02-01

    The Information Engineering thrust area develops information technology to support the programmatic needs of Lawrence Livermore National Laboratory's Engineering Directorate. Progress in five programmatic areas is described in separate reports contained herein. These are entitled Three-dimensional Object Creation, Manipulation, and Transport; Zephyr: A Secure Internet-Based Process to Streamline Engineering Procurements; Subcarrier Multiplexing: Optical Network Demonstrations; Parallel Optical Interconnect Technology Demonstration; and Intelligent Automation Architecture.

  10. Productive Parallel Programming: The PCN Approach

    Directory of Open Access Journals (Sweden)

    Ian Foster

    1992-01-01

    We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

  11. Computers in engineering. 1988

    International Nuclear Information System (INIS)

    Tipnis, V.A.; Patton, E.M.

    1988-01-01

    These proceedings discuss the following subjects: knowledge-based systems; computers in design; uses of artificial intelligence; engineering optimization and expert systems for accelerators; and parallel processing in design.

  12. Programming massively parallel processors a hands-on approach

    CERN Document Server

    Kirk, David B

    2010-01-01

    Programming Massively Parallel Processors discusses basic concepts about parallel programming and GPU architecture. "Massively parallel" refers to the use of a large number of processors to perform a set of computations in a coordinated parallel way. The book details various techniques for constructing parallel programs. It also discusses the development process, performance level, floating-point format, parallel patterns, and dynamic parallelism. The book serves as a teaching guide where parallel programming is the main topic of the course. It builds on the basics of C programming for CUDA, a parallel programming environment that is supported on NVIDIA GPUs. Composed of 12 chapters, the book begins with basic information about the GPU as a parallel computer source. It also explains the main concepts of CUDA, data parallelism, and the importance of memory access efficiency using CUDA. The target audience of the book is graduate and undergraduate students from all science and engineering disciplines who ...

  13. Optimisation of Investment Resources at Small Enterprises

    Directory of Open Access Journals (Sweden)

    Shvets Iryna B.

    2014-03-01

    The goal of the article is to study the process of optimising the structure of investment resources and to develop criteria and stages for optimising the volumes of investment resources at small enterprises by type of economic activity. The article characterises the process of transformation of investment resources into assets and liabilities on the balance sheets of small enterprises, and calculates the structure of sources of formation of investment resources at small enterprises in Ukraine by type of economic activity in 2011. On the basis of this analysis of the structure of investment resources of small enterprises, the article forms main groups of optimisation criteria in the context of individual small enterprises by type of economic activity. The article offers an algorithm and a step-by-step scheme for optimising investment resources at small enterprises, in the form of a multi-stage process of managing investment resources aimed at increasing their mobility and the rate of transformation of existing resources into investments. A prospect for further study in this direction is the development of a structural and logic scheme for optimising the volumes of investment resources at small enterprises.

  14. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  15. Multicriteria Optimisation in Logistics Forwarder Activities

    Directory of Open Access Journals (Sweden)

    Tanja Poletan Jugović

    2007-05-01

    Full Text Available The logistics forwarder, as organizer and planner of the coordination and integration of all the transport and logistics chain elements, uses adequate ways and methods in the process of planning and decision-making. One of these methods, analysed in this paper, which could be used in the optimisation of transport and logistics processes and activities of the logistics forwarder, is the multicriteria optimisation method. Using that method, this paper suggests a model of multicriteria optimisation of logistics forwarder activities. The suggested optimisation model is justified in keeping with the principles of multicriteria optimization, which belongs to operations research methods and represents the process of multicriteria optimization of variants. Among the many different processes of multicriteria optimization, PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations) and Promcalc & Gaia V. 3.2, a computer program of multicriteria programming based on the mentioned process, were used.
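
    Since the abstract names PROMETHEE but does not spell it out, here is a hedged sketch of the PROMETHEE II net-flow ranking it refers to, with made-up alternatives, criteria and weights (the "usual" preference function is one standard choice among several, not necessarily the paper's):

        # PROMETHEE II in miniature: pairwise preference degrees are
        # aggregated over weighted criteria, then net outranking flows
        # rank the alternatives. All data below are illustrative.
        scores  = [[8, 70, 3],        # hypothetical forwarder option A
                   [6, 90, 4],        # option B
                   [9, 60, 2]]        # option C (criteria: higher = better)
        weights = [0.5, 0.3, 0.2]     # criterion weights, summing to 1

        def pref(d):                  # "usual" criterion: prefer iff strictly better
            return 1.0 if d > 0 else 0.0

        n = len(scores)

        def pi(a, b):                 # aggregated preference of a over b
            return sum(w * pref(sa - sb)
                       for w, sa, sb in zip(weights, scores[a], scores[b]))

        phi = [sum(pi(a, b) - pi(b, a) for b in range(n) if b != a) / (n - 1)
               for a in range(n)]
        print("net flows:", phi, "ranking:", sorted(range(n), key=lambda a: -phi[a]))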

  16. Noise aspects at aerodynamic blade optimisation projects

    International Nuclear Information System (INIS)

    Schepers, J.G.

    1997-06-01

    The Netherlands Energy Research Foundation (ECN) has often been involved in industrial projects in which blade geometries are created automatically by means of numerical optimisation. Usually, these projects aim at determining the aerodynamically optimal wind turbine blade, i.e. the goal is to design a blade which is optimal with regard to energy yield. In other cases, blades have been designed which are optimal with regard to the cost of generated energy. However, it is obvious that the wind turbine blade designs which result from these optimisations are not necessarily optimal with regard to noise emission. In this paper an example is shown of an aerodynamic blade optimisation using the ECN program PVOPT. PVOPT calculates the optimal wind turbine blade geometry such that the maximum energy yield is obtained. Using the aerodynamically optimal blade design as a basis, the possibilities of noise reduction are investigated. 11 figs., 8 refs.

  17. Topology Optimisation of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Thike Aye Min

    2016-01-01

    Full Text Available Wireless sensor networks are widely used in a variety of fields, including industrial environments. In a clustered network, the location of the cluster head affects the reliability of network operation; finding the optimum location of the cluster head is therefore critical for the design of a network. This paper discusses an optimisation approach, based on the brute force algorithm, in the context of topology optimisation of a cluster-structure centralised wireless sensor network. Two examples are given to verify the approach, demonstrating the implementation of the brute force algorithm to find an optimum location of the cluster head.
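
    To make the brute-force idea concrete, here is a hedged sketch (the coordinates, candidate grid and cost function are ours; the paper's own reliability criterion may differ): every candidate location is evaluated exhaustively, and the one minimising the worst-case distance to the sensor nodes wins.

        # Exhaustive (brute-force) search for a cluster-head location:
        # score every candidate grid point, keep the best one.
        from math import hypot
        from itertools import product

        sensors = [(1, 2), (4, 1), (3, 5), (6, 4)]   # hypothetical node positions

        def cost(head):                              # worst-case node distance
            return max(hypot(head[0] - x, head[1] - y) for x, y in sensors)

        best = min(product(range(8), range(8)), key=cost)
        print("optimum cluster-head location:", best, "cost:", round(cost(best), 3))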

  18. Development of parallel/serial program analyzing tool

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Nagao, Saichi; Takigawa, Yoshio; Kumakura, Toshimasa

    1999-03-01

    Japan Atomic Energy Research Institute has been developing 'KMtool', a parallel/serial program analyzing tool, in order to promote the parallelization of science and engineering computation programs. KMtool analyzes the performance of programs written in FORTRAN77 and MPI, and it reduces the effort required for parallelization. This paper describes the development purpose, design, utilization and evaluation of KMtool. (author)

  19. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    Full Text Available There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with equational programming language (EPL. Our approach is based on a program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  20. Application of Surpac and Whittle Software in Open Pit Optimisation ...

    African Journals Online (AJOL)

    Application of Surpac and Whittle Software in Open Pit Optimisation and Design. ... This paper studies the Surpac and Whittle software and their application in designing an optimised pit. ...

  1. (MBO) algorithm in multi-reservoir system optimisation

    African Journals Online (AJOL)

    A comparative study of marriage in honey bees optimisation (MBO) algorithm in ... A practical application of the marriage in honey bees optimisation (MBO) ... to those of other evolutionary algorithms, such as the genetic algorithm (GA), ant ...

  2. Mechanical design of a free-wheel clutch for the thermal engine of a parallel hybrid vehicle with thermal and electrical power-train; Conception mecanique d'un accouplement a roue libre pour le moteur thermique d'un vehicule hybride parallele thermique et electrique

    Energy Technology Data Exchange (ETDEWEB)

    Santin, J.J.

    2001-07-01

    This thesis deals with the design of a free-wheel clutch. This unit is intended to replace the automated dry single-plate clutch of a parallel hybrid car with thermal and electric power-train. Furthermore, the car is a single shaft zero emission vehicle fitted with a controlled gearbox. Chapter one focuses on the type of hybrid vehicle studied. It shows the need to isolate the engine from the rest of the drive train, depending on the driving conditions. Chapter two presents and compares the two alternatives: automated clutch and free-wheel. In order to develop the free-wheel option, the torsional vibrations in the automotive drive line had to be closely studied. It required the design of a specific modular tool, as presented in chapter three, with the help of MATLAB SIMULINK. Lastly, chapter four shows how this tool was used during the design stage and specifies the way to build it. The free-wheel is then to be fitted to a prototype hybrid vehicle, constructed by both the LAMIH and PSA. (author)

  3. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment. Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. This …

  4. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computers, in which the whole sequence to be sorted can fit in the …
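
    One of the classic linear-array algorithms covered by books of this kind is odd-even transposition sort. The sketch below (ours, emulated sequentially) shows why it parallelises: within each phase, all compare-exchange pairs are disjoint and can run simultaneously.

        # Odd-even transposition sort: n phases of disjoint neighbour
        # compare-exchanges; on a linear processor array each phase runs
        # in one parallel step. Sequential emulation for illustration.
        def odd_even_sort(a):
            a = list(a)
            n = len(a)
            for step in range(n):
                for i in range(step % 2, n - 1, 2):   # disjoint pairs
                    if a[i] > a[i + 1]:
                        a[i], a[i + 1] = a[i + 1], a[i]
            return a

        print(odd_even_sort([5, 2, 9, 1, 7, 3]))      # [1, 2, 3, 5, 7, 9]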

  5. Optimising agile development practices for the maintenance operation: nine heuristics

    DEFF Research Database (Denmark)

    Heeager, Lise Tordrup; Rose, Jeremy

    2014-01-01

    Agile methods are widely used and successful in many development situations and beginning to attract attention amongst the software maintenance community – both researchers and practitioners. However, it should not be assumed that implementing a well-known agile method for a maintenance department is therefore a trivial endeavour – the maintenance operation differs in some important respects from development work. Classical accounts of software maintenance emphasise more traditional software engineering processes, whereas recent research accounts of agile maintenance efforts uncritically focus on benefits. In an action research project at Aveva in Denmark we assisted with the optimisation of SCRUM, tailoring the standard process to the immediate needs of the developers. We draw on both theoretical and empirical learning to formulate nine heuristics for maintenance practitioners wishing to go agile.

  6. Optimised to Fail: Card Readers for Online Banking

    Science.gov (United States)

    Drimer, Saar; Murdoch, Steven J.; Anderson, Ross

    The Chip Authentication Programme (CAP) has been introduced by banks in Europe to deal with the soaring losses due to online banking fraud. A handheld reader is used together with the customer’s debit card to generate one-time codes for both login and transaction authentication. The CAP protocol is not public, and was rolled out without any public scrutiny. We reverse engineered the UK variant of card readers and smart cards and here provide the first public description of the protocol. We found numerous weaknesses that are due to design errors such as reusing authentication tokens, overloading data semantics, and failing to ensure freshness of responses. The overall strategic error was excessive optimisation. There are also policy implications. The move from signature to PIN for authorising point-of-sale transactions shifted liability from banks to customers; CAP introduces the same problem for online banking. It may also expose customers to physical harm.

  7. Extending Particle Swarm Optimisers with Self-Organized Criticality

    DEFF Research Database (Denmark)

    Løvbjerg, Morten; Krink, Thiemo

    2002-01-01

    Particle swarm optimisers (PSOs) show potential in function optimisation, but still have room for improvement. Self-organized criticality (SOC) can help control the PSO and add diversity. Extending the PSO with SOC seems promising, reaching faster convergence and better solutions.
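
    For readers unfamiliar with the baseline being extended, a minimal global-best PSO (without the SOC extension) looks roughly as follows; the inertia and acceleration constants are common textbook values, not the paper's:

        # Standard global-best PSO minimising the sphere function.
        import random

        def f(x):                                    # objective to minimise
            return sum(xi * xi for xi in x)

        dim, swarm, iters = 2, 20, 100
        w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social
        pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
        vel = [[0.0] * dim for _ in range(swarm)]
        pbest = [p[:] for p in pos]
        gbest = min(pbest, key=f)[:]

        for _ in range(iters):
            for i, p in enumerate(pos):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * random.random() * (pbest[i][d] - p[d])
                                 + c2 * random.random() * (gbest[d] - p[d]))
                    p[d] += vel[i][d]
                if f(p) < f(pbest[i]):
                    pbest[i] = p[:]
                    if f(p) < f(gbest):
                        gbest = p[:]

        print("best position:", gbest, "value:", f(gbest))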

  8. Operational Radiological Protection and Aspects of Optimisation

    International Nuclear Information System (INIS)

    Lazo, E.; Lindvall, C.G.

    2005-01-01

    Since 1992, the Nuclear Energy Agency (NEA), along with the International Atomic Energy Agency (IAEA), has sponsored the Information System on Occupational Exposure (ISOE). ISOE collects and analyses occupational exposure data and experience from over 400 nuclear power plants around the world and is a forum for radiological protection experts from both nuclear power plants and regulatory authorities to share lessons learned and best practices in the management of worker radiation exposures. In connection to the ongoing work of the International Commission on Radiological Protection (ICRP) to develop new recommendations, the ISOE programme has been interested in how the new recommendations would affect operational radiological protection application at nuclear power plants. Bearing in mind that the ICRP is developing, in addition to new general recommendations, a new recommendation specifically on optimisation, the ISOE programme created a working group to study the operational aspects of optimisation, and to identify the key factors in optimisation that could usefully be reflected in ICRP recommendations. In addition, the Group identified areas where further ICRP clarification and guidance would be of assistance to practitioners, both at the plant and the regulatory authority. The specific objective of this ISOE work was to provide operational radiological protection input, based on practical experience, to the development of new ICRP recommendations, particularly in the area of optimisation. This will help assure that new recommendations will best serve the needs of those implementing radiation protection standards, for the public and for workers, at both national and international levels. (author)

  9. Optimisation of surgical care for rectal cancer

    NARCIS (Netherlands)

    Borstlap, W.A.A.

    2017-01-01

    Optimisation of surgical care means weighing the risk of treatment-related morbidity against the patient's potential benefits from a surgical intervention. The first part of this thesis focusses on the anaemic patient undergoing colorectal surgery. Hypothesizing that a more profound haemoglobin …

  10. On optimal development and becoming an optimiser

    NARCIS (Netherlands)

    de Ruyter, D.J.

    2012-01-01

    The article aims to provide a justification for the claim that optimal development and becoming an optimiser are educational ideals that parents should pursue in raising their children. Optimal development is conceptualised as enabling children to grow into flourishing persons, that is, persons who …

  11. Particle Swarm Optimisation with Spatial Particle Extension

    DEFF Research Database (Denmark)

    Krink, Thiemo; Vesterstrøm, Jakob Svaneborg; Riget, Jacques

    2002-01-01

    In this paper, we introduce spatial extension to particles in the PSO model in order to overcome premature convergence in iterative optimisation. The standard PSO and the new model (SEPSO) are compared w.r.t. performance on well-studied benchmark problems. We show that the SEPSO indeed managed...

  12. OPTIMISATION OF COMPRESSIVE STRENGTH OF PERIWINKLE ...

    African Journals Online (AJOL)

    In this paper, a regression model is developed to predict and optimise the compressive strength of periwinkle shell aggregate concrete using Scheffe's regression theory. The results obtained from the derived regression model agreed favourably with the experimental data. The model was tested for adequacy using a student ...

  13. An efficient optimisation method in groundwater resource ...

    African Journals Online (AJOL)

    2003-10-04

    Oct 4, 2003 ... theories developed in the field of stochastic subsurface hydrology. In reality, many ... Recently, some researchers have applied the multi-stage ... Then a robust solution of the optimisation problem given by Eqs. (1) to (3) is as ...

  14. Water distribution systems design optimisation using metaheuristics ...

    African Journals Online (AJOL)

    The topic of multi-objective water distribution systems (WDS) design optimisation using metaheuristics is investigated, comparing numerous modern metaheuristics, including several multi-objective evolutionary algorithms, an estimation of distribution algorithm and a recent hyperheuristic named AMALGAM (an evolutionary ...

  15. Optimisation of efficiency of axial fans

    NARCIS (Netherlands)

    Kruyt, Nicolaas P.; Pennings, P.C.; Faasen, R.

    2014-01-01

    A three-stage research project has been executed to develop ducted axial fans with increased efficiency. In the first stage a design method has been developed in which various conflicting design criteria can be incorporated. Based on this design method, an optimised design has been determined.

  16. Thermodynamic optimisation of a heat exchanger

    NARCIS (Netherlands)

    Cornelissen, Rene; Hirs, Gerard

    1999-01-01

    The objective of this paper is to show that for the optimal design of an energy system, where there is a trade-off between exergy saving during operation and exergy use during construction of the energy system, exergy analysis and life cycle analysis should be combined. An exergy optimisation of a …

  17. Self-optimising control of sewer systems

    DEFF Research Database (Denmark)

    Mauricio Iglesias, Miguel; Montero-Castro, Ignacio; Mollerup, Ane Loft

    2013-01-01

    … The definition of optimal performance was carried out through a two-stage optimisation (stochastic and deterministic) to take into account both the overflow during the current rain event and the expected overflow, given the probability of a future rain event. The methodology is successfully applied...

  18. Structural optimisation of a high speed Organic Rankine Cycle generator using a genetic algorithm and a finite element method

    Energy Technology Data Exchange (ETDEWEB)

    Palko, S. [Machines Division, ABB industry Oy, Helsinki (Finland)

    1997-12-31

    The aim of this work is to design a 250 kW high-speed asynchronous generator for an Organic Rankine Cycle, using a genetic algorithm and a finite element method. The characteristics of the induction motors are evaluated using a two-dimensional finite element method (FEM). The movement of the rotor and the non-linearity of the iron are included. In numerical field problems it is possible to find several local extrema for an optimisation problem, and the algorithm therefore has to be capable of determining relevant changes and of avoiding becoming trapped in a local minimum. In this work the electromagnetic (EM) losses at the rated point are minimised. The optimisation includes the air gap region. Parallel computing is applied to speed up the optimisation. (orig.) 2 refs.
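
    The optimisation loop described — a genetic algorithm minimising FEM-evaluated losses, with parallel evaluation — has the general shape sketched below. This is a hedged illustration: the quadratic stand-in replaces the actual FEM loss evaluation, and the operators and constants are generic textbook choices, not the paper's.

        # Generic real-coded GA loop: in the paper's setting, em_loss would
        # call the 2D FEM solver; those evaluations are independent, which
        # is exactly where parallel computing speeds up the optimisation.
        import random

        def em_loss(x):                 # stand-in for the FEM loss evaluation
            return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + 0.5 * x[2] ** 2

        def ga(pop_size=30, dim=3, gens=60, lo=-5.0, hi=5.0):
            pop = [[random.uniform(lo, hi) for _ in range(dim)]
                   for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=em_loss)                 # rank by fitness
                parents = pop[: pop_size // 2]        # truncation selection
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    child = [(ai + bi) / 2 + random.gauss(0, 0.1)  # crossover
                             for ai, bi in zip(a, b)]              # + mutation
                    children.append(child)
                pop = parents + children
            return min(pop, key=em_loss)

        print("best design parameters:", ga())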

  19. Optimising Shovel-Truck Fuel Consumption using Stochastic ...

    African Journals Online (AJOL)

    Optimising the fuel consumption and truck waiting time can result in significant fuel savings. The paper demonstrates that stochastic simulation is an effective tool for optimising the utilisation of fossil-based fuels in mining and related industries. Keywords: Stochastic, Simulation Modelling, Mining, Optimisation, Shovel-Truck ...

  20. Design of optimised backstepping controller for the synchronisation ...

    Indian Academy of Sciences (India)

    Ehsan Fouladi

    2017-12-18

    Dec 18, 2017 ... for the proposed optimised method compared to PSO optimised controller or any non-optimised backstepping controller. Keywords. Colpitts oscillator; backstepping controller; chaos synchronisation; shark smell algorithm; particle .... The velocity model is based on the gradient of the objective function, tilting ...

  1. Efficient topology optimisation of multiscale and multiphysics problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe

    The aim of this Thesis is to present efficient methods for optimising high-resolution problems of a multiscale and multiphysics nature. The Thesis consists of two parts: one treating topology optimisation of microstructural details and the other treating topology optimisation of conjugate heat...

  2. Massively Parallel Computing: A Sandia Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Dosanjh, Sudip S.; Greenberg, David S.; Hendrickson, Bruce; Heroux, Michael A.; Plimpton, Steve J.; Tomkins, James L.; Womble, David E.

    1999-05-06

    The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. The expectation for these machines has been great. The reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant breakthroughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large-scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software and algorithms, and we also offer our view of the forces shaping the parallel computing industry.

  3. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  4. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop …

  5. Sensitivity Filters In Topology Optimisation As A Solution To Helmholtz Type Differential Equation

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Sigmund, Ole

    2009-01-01

    The focus of the study in this article is on the use of a Helmholtz type differential equation as a filter for topology optimisation problems. Until now various filtering schemes have been utilised in order to impose mesh independence in this type of problems. The usual techniques require topology … information about the neighbour sub-domains is an expensive operation. The proposed filtering technique requires only mesh information necessary for the finite element discretisation of the problem. The main idea is to define the filtered variable implicitly as a solution of a Helmholtz type differential equation with homogeneous Neumann boundary conditions. The properties of the filter are demonstrated for various 2D and 3D topology optimisation problems in linear elasticity, solved on sequential and parallel computers.
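
    The filter equation itself, -r^2 * laplacian(u_f) + u_f = u with homogeneous Neumann boundary conditions, is easy to demonstrate in one dimension. Below is a hedged finite-difference sketch (our discretisation and parameters, not the article's): solving the system smooths the raw design field over a length scale set by r.

        # 1D Helmholtz PDE filter: -r^2 u'' + u = rho, Neumann boundaries.
        import numpy as np

        n = 50
        h, r = 1.0 / n, 0.05
        rho = np.zeros(n)
        rho[20:30] = 1.0                     # raw (unfiltered) design field

        k = (r / h) ** 2
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = 1.0 + 2.0 * k
            if i > 0:
                A[i, i - 1] = -k
            if i < n - 1:
                A[i, i + 1] = -k
        A[0, 0] = A[-1, -1] = 1.0 + k        # one-sided Neumann approximation

        rho_filtered = np.linalg.solve(A, rho)
        print(rho_filtered.round(2))         # smoothed field, peak spread out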

  6. Iterative algorithms for large sparse linear systems on parallel computers

    Science.gov (United States)

    Adams, L. M.

    1982-01-01

    Algorithms are developed for assembling in parallel the sparse systems of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed, and results of this model for the algorithms are given.

  7. Finite element model updating in structural dynamics using design sensitivity and optimisation

    OpenAIRE

    Calvi, Adriano

    1998-01-01

    Model updating is an important issue in engineering. In fact, a well-correlated model provides for accurate evaluation of the structure's loads and responses. The main objectives of the study were to exploit available optimisation programs to create an error localisation and updating procedure for finite element models that minimises the "error" between experimental and analytical modal data, addressing in particular the updating of large-scale finite element models with se…

  8. Massively parallel evolutionary computation on GPGPUs

    CERN Document Server

    Tsutsui, Shigeyoshi

    2013-01-01

    Evolutionary algorithms (EAs) are metaheuristics that learn from natural collective behavior and are applied to solve optimization problems in domains such as scheduling, engineering, bioinformatics, and finance. Such applications demand acceptable solutions with high-speed execution using finite computational resources. Therefore, there have been many attempts to develop platforms for running parallel EAs using multicore machines, massively parallel cluster machines, or grid computing environments. Recent advances in general-purpose computing on graphics processing units (GPGPU) have opened up …

  9. PARALLEL SOLUTION METHODS OF PARTIAL DIFFERENTIAL EQUATIONS

    Directory of Open Access Journals (Sweden)

    Korhan KARABULUT

    1998-03-01

    Full Text Available Partial differential equations arise in almost all fields of science and engineering. Computer time spent in solving partial differential equations is much greater than that for any other problem class. For this reason, partial differential equations are well suited to being solved on parallel computers, which offer great computational power. In this study, the parallel solution of partial differential equations with the Jacobi, Gauss-Seidel, SOR (Successive Over-Relaxation) and SSOR (Symmetric SOR) algorithms is studied.
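
    Of the methods named, Jacobi is the most naturally parallel, because every component update depends only on the previous iterate. A hedged sketch on a toy system of our choosing:

        # Jacobi iteration for A x = b: each sweep's n updates are mutually
        # independent, so one sweep maps directly onto one parallel step.
        import numpy as np

        A = np.array([[ 4.0, -1.0,  0.0],
                      [-1.0,  4.0, -1.0],
                      [ 0.0, -1.0,  4.0]])   # diagonally dominant -> converges
        b = np.array([15.0, 10.0, 10.0])

        D = np.diag(A)                        # diagonal entries
        R = A - np.diagflat(D)                # off-diagonal remainder
        x = np.zeros_like(b)
        for _ in range(50):
            x = (b - R @ x) / D               # fully parallel component updates
        print(x, "residual:", np.linalg.norm(A @ x - b))

    Gauss-Seidel and SOR reuse freshly computed values within a sweep, which converges faster but serialises the updates unless the grid is coloured.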

  10. On Parallel Software Engineering Education Using Python

    Science.gov (United States)

    Marowka, Ami

    2018-01-01

    Python is gaining popularity in academia as the preferred language to teach novices serial programming. The syntax of Python is clean, easy, and simple to understand. At the same time, it is a high-level programming language that supports multi programming paradigms such as imperative, functional, and object-oriented. Therefore, by default, it is…

  11. Synchronization Of Parallel Discrete Event Simulations

    Science.gov (United States)

    Steinman, Jeffrey S.

    1992-01-01

    Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.

  12. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.

  13. Structural-electrical coupling optimisation for radiating and scattering performances of active phased array antenna

    Science.gov (United States)

    Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng

    2018-04-01

    It is well known that calculating and reducing the radar cross section (RCS) of an active phased array antenna (APAA) is both difficult and complicated, and balancing radiating and scattering performance while the RCS is reduced remains unresolved. This paper therefore develops a coupled structural-scattering array factor model of the APAA, based on the phase errors of the radiating elements generated by structural distortion and installation errors of the array. To obtain optimal radiating and scattering performance, an integrated optimisation model is built to optimise the installation height of all the radiating elements in the normal direction of the array; the particle swarm optimisation method is adopted, with the gain loss and the scattering array factor selected as the fitness function. The simulation indicates that the proposed coupling model and integrated optimisation method can effectively decrease the RCS while simultaneously guaranteeing the necessary radiating performance, which demonstrates an important application value in the engineering design and structural evaluation of APAAs.

  14. Statistical meandering wake model and its application to yaw-angle optimisation of wind farms

    International Nuclear Information System (INIS)

    Thøgersen, E; Tranberg, B; Greiner, M; Herp, J

    2017-01-01

    The wake produced by a wind turbine is dynamically meandering and of rather narrow nature. Only when looking at large time averages, the wake appears to be static and rather broad, and is then well described by simple engineering models like the Jensen wake model (JWM). We generalise the latter deterministic models to a statistical meandering wake model (SMWM), where a random directional deflection is assigned to a narrow wake in such a way that on average it resembles a broad Jensen wake. In a second step, the model is further generalised to wind-farm level, where the deflections of the multiple wakes are treated as independently and identically distributed random variables. When carefully calibrated to the Nysted wind farm, the ensemble average of the statistical model produces the same wind-direction dependence of the power efficiency as obtained from the standard Jensen model. Upon using the JWM to perform a yaw-angle optimisation of wind-farm power output, we find an optimisation gain of 6.7% for the Nysted wind farm when compared to zero yaw angles and averaged over all wind directions. When applying the obtained JWM-based optimised yaw angles to the SMWM, the ensemble-averaged gain is calculated to be 7.5%. This outcome indicates the possible operational robustness of an optimised yaw control for real-life wind farms. (paper)

  15. Statistical meandering wake model and its application to yaw-angle optimisation of wind farms

    Science.gov (United States)

    Thøgersen, E.; Tranberg, B.; Herp, J.; Greiner, M.

    2017-05-01

    The wake produced by a wind turbine is dynamically meandering and of rather narrow nature. Only when looking at large time averages, the wake appears to be static and rather broad, and is then well described by simple engineering models like the Jensen wake model (JWM). We generalise the latter deterministic models to a statistical meandering wake model (SMWM), where a random directional deflection is assigned to a narrow wake in such a way that on average it resembles a broad Jensen wake. In a second step, the model is further generalised to wind-farm level, where the deflections of the multiple wakes are treated as independently and identically distributed random variables. When carefully calibrated to the Nysted wind farm, the ensemble average of the statistical model produces the same wind-direction dependence of the power efficiency as obtained from the standard Jensen model. Upon using the JWM to perform a yaw-angle optimisation of wind-farm power output, we find an optimisation gain of 6.7% for the Nysted wind farm when compared to zero yaw angles and averaged over all wind directions. When applying the obtained JWM-based optimised yaw angles to the SMWM, the ensemble-averaged gain is calculated to be 7.5%. This outcome indicates the possible operational robustness of an optimised yaw control for real-life wind farms.
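
    For reference, the deterministic Jensen model that both of the records above build on reduces to a one-line deficit formula. The sketch below is illustrative only (the induction factor, wake decay constant and rotor radius are our assumed values, not calibrated to Nysted):

        # Jensen (park) wake model: top-hat wake expanding linearly with
        # distance; the velocity deficit decays with the expansion squared.
        def jensen_deficit(x, a=1.0 / 3.0, k=0.05, r0=40.0):
            """Fractional deficit at distance x downstream (a: induction
            factor, k: wake decay constant, r0: rotor radius)."""
            return 2.0 * a / (1.0 + k * x / r0) ** 2

        u0 = 8.0                                  # free-stream speed [m/s]
        for x in (200.0, 500.0, 1000.0):          # downstream distances [m]
            u = u0 * (1.0 - jensen_deficit(x))
            print(f"x = {x:6.0f} m -> wake wind speed = {u:.2f} m/s")

    Since turbine power scales roughly with the cube of wind speed, even modest wake deficits compound into the farm-level efficiency losses that yaw-angle optimisation recovers by deflecting wakes away from downstream turbines.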

  16. Real-time optimisation of the Hoa Binh reservoir, Vietnam

    DEFF Research Database (Denmark)

    Richaud, Bertrand; Madsen, Henrik; Rosbjerg, Dan

    2011-01-01

    Multi-purpose reservoirs often have to be managed according to conflicting objectives, which requires efficient tools for trading off the objectives. This paper proposes a multi-objective simulation-optimisation approach that couples off-line rule curve optimisation with on-line real-time optimisation. First, the simulation-optimisation framework is applied for optimising reservoir operating rules. Secondly, real-time and forecast information is used for on-line optimisation that focuses on short-term goals, such as flood control or hydropower generation, without compromising the deviation … in the downstream part of the Red River, and at the same time to increase hydropower generation and to save water for the dry season. The real-time optimisation procedure further improves the efficiency of the reservoir operation and enhances the flexibility for the decision-making. Finally, the quality …

  17. Techno-economic optimisation of energy systems

    International Nuclear Information System (INIS)

    Mansilla Pellen, Ch.

    2006-07-01

    The traditional approach currently used to assess the economic interest of energy systems is based on a defined flow-sheet. Some studies have shown that the flow-sheets corresponding to the best thermodynamic efficiencies do not necessarily lead to the best production costs. A method called techno-economic optimisation was proposed. This method aims at minimising the production cost of a given energy system, including both investment and operating costs. It was implemented using genetic algorithms. This approach was compared to the heat integration method on two different examples, thus validating its interest. Techno-economic optimisation was then applied to different energy systems dealing with hydrogen as well as electricity production. (author)

  18. Pre-operative optimisation of lung function

    Directory of Open Access Journals (Sweden)

    Naheed Azhar

    2015-01-01

    Full Text Available The anaesthetic management of patients with pre-existing pulmonary disease is a challenging task. It is associated with increased morbidity in the form of post-operative pulmonary complications. Pre-operative optimisation of lung function helps in reducing these complications. Patients are advised to stop smoking for a period of 4–6 weeks. This reduces airway reactivity, improves mucociliary function and decreases carboxy-haemoglobin. The widely used incentive spirometry may be useful only when combined with other respiratory muscle exercises. Volume-based inspiratory devices have the best results. Pharmacotherapy of asthma and chronic obstructive pulmonary disease must be optimised before considering the patient for elective surgery. Beta 2 agonists, inhaled corticosteroids and systemic corticosteroids, are the main drugs used for this and several drugs play an adjunctive role in medical therapy. A graded approach has been suggested to manage these patients for elective surgery with an aim to achieve optimal pulmonary function.

  19. Optimised dipper fine tunes shovel performance

    Energy Technology Data Exchange (ETDEWEB)

    Fiscor, S.

    2005-06-01

    Joint efforts between mine operators, OEMs and researchers yield unexpected benefits: dippers for shovels used in coal, oil or hardrock mining can now be tailored to meet site-specific conditions. The article outlines a process being developed by CRCMining and P&H Mining Equipment to optimise the dipper, involving rapid prototyping and scale modelling of the dipper and the mine conditions. Scale models have been successfully field tested. 2 photos.

  20. Public transport optimisation emphasising passengers’ travel behaviour.

    OpenAIRE

    Jensen, Jens Parbo; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

    Passengers in public transport complaining about their travel experiences are not uncommon. This might seem counterintuitive since several operators worldwide are presenting better key performance indicators year by year. The present PhD study focuses on developing optimisation algorithms to enhance the operations of public transport while explicitly emphasising passengers’ travel behaviour and preferences. Similar to economic theory, interactions between supply and demand are omnipresent in ...

  1. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  2. A high performance data parallel tensor contraction framework: Application to coupled electro-mechanics

    Science.gov (United States)

    Poya, Roman; Gil, Antonio J.; Ortigosa, Rogelio

    2017-07-01

    The paper presents aspects of implementation of a new high performance tensor contraction framework for the numerical analysis of coupled and multi-physics problems on streaming architectures. In addition to explicit SIMD instructions and smart expression templates, the framework introduces domain specific constructs for the tensor cross product and its associated algebra recently rediscovered by Bonet et al. (2015, 2016) in the context of solid mechanics. The two key ingredients of the presented expression template engine are as follows. First, the capability to mathematically transform complex chains of operations to simpler equivalent expressions, while potentially avoiding routes with higher levels of computational complexity and, second, to perform a compile time depth-first or breadth-first search to find the optimal contraction indices of a large tensor network in order to minimise the number of floating point operations. For optimisations of tensor contraction such as loop transformation, loop fusion and data locality optimisations, the framework relies heavily on compile time technologies rather than source-to-source translation or JIT techniques. Every aspect of the framework is examined through relevant performance benchmarks, including the impact of data parallelism on the performance of isomorphic and nonisomorphic tensor products, the FLOP and memory I/O optimality in the evaluation of tensor networks, the compilation cost and memory footprint of the framework and the performance of tensor cross product kernels. The framework is then applied to finite element analysis of coupled electro-mechanical problems to assess the speed-ups achieved in kernel-based numerical integration of complex electroelastic energy functionals. In this context, domain-aware expression templates combined with SIMD instructions are shown to provide a significant speed-up over the classical low-level style programming techniques.
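
    The compile-time contraction-order search the framework performs has a familiar run-time analogue in NumPy's einsum machinery. The hedged sketch below only illustrates why the order matters (pairing the cheapest contractions first can cut the FLOP count by orders of magnitude); it is not the paper's C++ implementation:

        # Contraction-order search for a three-tensor network.
        import numpy as np

        A = np.random.rand(30, 40)
        B = np.random.rand(40, 50)
        C = np.random.rand(50, 60)

        # einsum_path searches over contraction orders and reports the
        # chosen pairing and its estimated FLOP cost.
        path, info = np.einsum_path("ij,jk,kl->il", A, B, C, optimize="optimal")
        print(info)

        D = np.einsum("ij,jk,kl->il", A, B, C, optimize=path)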

  3. Natural Erosion of Sandstone as Shape Optimisation.

    Science.gov (United States)

    Ostanin, Igor; Safonov, Alexander; Oseledets, Ivan

    2017-12-11

    Natural arches, pillars and other exotic sandstone formations have always been attracting attention for their unusual shapes and amazing mechanical balance that leave a strong impression of intelligent design rather than the result of a stochastic process. It has been recently demonstrated that these shapes could have been the result of the negative feedback between stress and erosion that originates in fundamental laws of friction between the rock's constituent particles. Here we present a deeper analysis of this idea and bridge it with the approaches utilized in shape and topology optimisation. It appears that the processes of natural erosion, driven by stochastic surface forces and Mohr-Coulomb law of dry friction, can be viewed within the framework of local optimisation for minimum elastic strain energy. Our hypothesis is confirmed by numerical simulations of the erosion using the topological-shape optimisation model. Our work contributes to a better understanding of stochastic erosion and feasible landscape formations that could be found on Earth and beyond.

  4. Exploration of automatic optimisation for CUDA programming

    KAUST Repository

    Al-Mouhamed, Mayez

    2014-09-16

    © 2014 Taylor & Francis. Writing optimised compute unified device architecture (CUDA) programs for graphics processing units (GPUs) is complex even for experts. We present a design methodology for a restructuring tool that converts C-loops into optimised CUDA kernels based on a three-step algorithm consisting of loop tiling, coalesced memory access and resource optimisation. A method for finding possible loop tiling solutions with coalesced memory access is developed, and a simplified algorithm for restructuring C-loops into an efficient CUDA kernel is presented. In the evaluation, we implement matrix multiply (MM), matrix transpose (M-transpose), matrix scaling (M-scaling) and matrix vector multiply (MV) using the proposed algorithm. We present the analysis of the execution time and GPU throughput for the above applications, which compare favourably to other proposals. Evaluation is carried out while scaling the problem size and running under a variety of kernel configurations. The obtained speedup is about 28-35% for M-transpose compared to the NVIDIA Software Development Kit, 33% for MV compared to a general-purpose computation on graphics processing unit compiler, and more than 80% for MM and M-scaling compared to CUDA-lite.
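
    The first of the three steps, loop tiling, is independent of CUDA and can be shown on a plain matrix multiply. Below is a hedged Python sketch of the blocking transformation only (the actual tool restructures C loops and emits CUDA kernels):

        # Loop tiling: iterate over T x T blocks so each block's working set
        # stays in fast memory (shared memory on a GPU, cache on a CPU).
        def matmul_tiled(A, B, T=32):
            n, m, p = len(A), len(B), len(B[0])
            C = [[0.0] * p for _ in range(n)]
            for ii in range(0, n, T):
                for kk in range(0, m, T):
                    for jj in range(0, p, T):
                        for i in range(ii, min(ii + T, n)):
                            for k in range(kk, min(kk + T, m)):
                                a = A[i][k]
                                for j in range(jj, min(jj + T, p)):
                                    C[i][j] += a * B[k][j]
            return C

        A = [[1.0, 2.0], [3.0, 4.0]]
        B = [[5.0, 6.0], [7.0, 8.0]]
        print(matmul_tiled(A, B, T=2))    # [[19.0, 22.0], [43.0, 50.0]]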

  5. Exploration of automatic optimisation for CUDA programming

    KAUST Repository

    Al-Mouhamed, Mayez; Khan, Ayaz ul Hassan

    2014-01-01

    © 2014 Taylor & Francis. Writing optimised compute unified device architecture (CUDA) programs for graphics processing units (GPUs) is complex even for experts. We present a design methodology for a restructuring tool that converts C-loops into optimised CUDA kernels based on a three-step algorithm consisting of loop tiling, coalesced memory access and resource optimisation. A method for finding possible loop tiling solutions with coalesced memory access is developed, and a simplified algorithm for restructuring C-loops into an efficient CUDA kernel is presented. In the evaluation, we implement matrix multiply (MM), matrix transpose (M-transpose), matrix scaling (M-scaling) and matrix vector multiply (MV) using the proposed algorithm. We present the analysis of the execution time and GPU throughput for the above applications, which compare favourably to other proposals. Evaluation is carried out while scaling the problem size and running under a variety of kernel configurations. The obtained speedup is about 28-35% for M-transpose compared to the NVIDIA Software Development Kit, 33% for MV compared to a general-purpose computation on graphics processing unit compiler, and more than 80% for MM and M-scaling compared to CUDA-lite.

  6. Optimisation and symmetry in experimental radiation physics

    International Nuclear Information System (INIS)

    Ghose, A.

    1988-01-01

    The present monograph is concerned with the optimisation of geometric factors in radiation physics experiments. The discussions are essentially confined to those systems in which optimisation is equivalent to symmetrical configurations of the measurement systems. They include measurements of interaction cross sections of diverse types, determination of polarisations, development of detectors with almost ideal characteristics, production of radiations with continuously variable energies, and development of high-efficiency spectrometers. The monograph is intended for use by experimental physicists investigating primary interactions of radiations with matter and associated technologies. We have illustrated the various optimisation procedures by considering the cases of the so-called "14 MeV" d-t neutrons and gamma rays with energies less than 3 MeV. Developments in fusion technology are critically dependent on the availability of accurate cross sections of nuclei for fast neutrons with energies at least as high as those of d-t neutrons. In this monograph we have discussed various techniques which can be used to improve the accuracy of such measurements, and have also presented a method for generating almost monoenergetic neutrons in the 8 MeV to 13 MeV energy range which can be used to measure cross sections in this sparsely investigated region.

  7. Methodology and Toolset for Model Verification, Hardware/Software co-simulation, Performance Optimisation and Customisable Source-code generation

    DEFF Research Database (Denmark)

    Berger, Michael Stübert; Soler, José; Yu, Hao

    2013-01-01

    The MODUS project aims to provide a pragmatic and viable solution that will allow SMEs to substantially improve their positioning in the embedded-systems development market. The MODUS tool will provide a model verification and Hardware/Software co-simulation tool (TRIAL) and a performance optimisation and customisable source-code generation tool (TUNE). The overall concept is automated modelling and optimisation of embedded-systems development. The tool will enable model verification by guiding the selection of existing open-source model verification engines, based on the automated analysis...

  8. Building a parallel file system simulator

    International Nuclear Information System (INIS)

    Molina-Estolano, E; Maltzahn, C; Brandt, S A; Bent, J

    2009-01-01

    Parallel file systems are gaining in popularity in high-end computing centers as well as commercial data centers. High-end computing systems are expected to scale exponentially and to pose new challenges to their storage scalability in terms of cost and power. To address these challenges scientists and file system designers will need a thorough understanding of the design space of parallel file systems. Yet there exist few systematic studies of parallel file system behavior at petabyte- and exabyte scale. An important reason is the significant cost of getting access to large-scale hardware to test parallel file systems. To contribute to this understanding we are building a parallel file system simulator that can simulate parallel file systems at very large scale. Our goal is to simulate petabyte-scale parallel file systems on a small cluster or even a single machine in reasonable time and fidelity. With this simulator, file system experts will be able to tune existing file systems for specific workloads, scientists and file system deployment engineers will be able to better communicate workload requirements, file system designers and researchers will be able to try out design alternatives and innovations at scale, and instructors will be able to study very large-scale parallel file system behavior in the class room. In this paper we describe our approach and provide preliminary results that are encouraging both in terms of fidelity and simulation scalability.

  9. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuations…

  10. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa…

  11. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with graphical processing units, have empowered parallelism broadly. Compilers are being updated to address the resulting synchronization and threading challenges. Appropriate program and algorithm classifications will be of great advantage to software engineers, creating opportunities for effective parallelization. In the present work we investigate current species for the classification of algorithms; related work on classification is discussed, along with a comparison of the issues that challenge classification. A set of algorithms is chosen whose structures match different issues and perform the given tasks. We tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We added functionalities to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant data which is not captured by the original species of algorithms. We implemented the new theories in the tool, enabling automatic characterization of program code.

  12. Large scale three-dimensional topology optimisation of heat sinks cooled by natural convection

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Sigmund, Ole; Aage, Niels

    2016-01-01

    … the Boussinesq approximation. The fully coupled non-linear multiphysics system is solved using stabilised trilinear equal-order finite elements in a parallel framework, allowing for the optimisation of large-scale problems with on the order of 20-330 million state degrees of freedom. The flow is assumed to be laminar … topologies verify prior conclusions regarding fin length/thickness ratios and Biot numbers, but also indicate that carefully tailored and complex geometries may improve cooling behaviour considerably compared to simple heat fin geometries. (C) 2016 Elsevier Ltd. All rights reserved.

  13. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of … in the optimal O(sort_P(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and sort_P(N) is the parallel I/O complexity of sorting N elements using P processors.

  14. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.

  15. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
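
    Of the patterns listed, the prefix scan is the least obvious to parallelise. A hedged sketch of the classic work-efficient (Blelloch) exclusive scan follows, emulated sequentially; within each level of the up-sweep and down-sweep, all inner-loop iterations are independent and hence parallel:

        # Work-efficient exclusive prefix scan (Blelloch): an up-sweep
        # (reduction tree) followed by a down-sweep. Each level's inner
        # loop is embarrassingly parallel.
        def exclusive_scan(a):
            a = list(a)
            n = len(a)                    # assumes n is a power of two
            d = 1
            while d < n:                  # up-sweep
                for i in range(0, n, 2 * d):
                    a[i + 2 * d - 1] += a[i + d - 1]
                d *= 2
            a[n - 1] = 0
            d = n // 2
            while d >= 1:                 # down-sweep
                for i in range(0, n, 2 * d):
                    t = a[i + d - 1]
                    a[i + d - 1] = a[i + 2 * d - 1]
                    a[i + 2 * d - 1] += t
                d //= 2
            return a

        print(exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))
        # -> [0, 3, 4, 11, 11, 15, 16, 22]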

  16. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collection of networked computers.) Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  17. A knowledge representation model for the optimisation of electricity generation mixes

    International Nuclear Information System (INIS)

    Chee Tahir, Aidid; Bañares-Alcántara, René

    2012-01-01

    Highlights: ► Prototype energy model which uses semantic representation (ontologies). ► Model accepts both quantitative and qualitative based energy policy goals. ► Uses logic inference to formulate equations for linear optimisation. ► Proposes electricity generation mix based on energy policy goals. -- Abstract: Energy models such as MARKAL, MESSAGE and DNE-21 are optimisation tools which aid in the formulation of energy policies. The strength of these models lies in their solid theoretical foundations built on rigorous mathematical equations designed to process numerical (quantitative) data related to economics and the environment. Nevertheless, a complete consideration of energy policy issues also requires the consideration of the political and social aspects of energy. These political and social issues are often associated with non-numerical (qualitative) information. To enable the evaluation of these aspects in a computer model, we hypothesise that a different approach to energy model optimisation design is required. A prototype energy model that is based on a semantic representation using ontologies and is integrated to engineering models implemented in Java has been developed. The model provides both quantitative and qualitative evaluation capabilities through the use of logical inference. The semantic representation of energy policy goals is used (i) to translate a set of energy policy goals into a set of logic queries which is then used to determine the preferred electricity generation mix and (ii) to assist in the formulation of a set of equations which is then solved in order to obtain a proposed electricity generation mix. Scenario case studies have been developed and tested on the prototype energy model to determine its capabilities. Knowledge queries were made on the semantic representation to determine an electricity generation mix which fulfilled a set of energy policy goals (e.g. CO2 emissions reduction, water conservation, energy supply
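
    To make the final optimisation step concrete, here is a minimal sketch of a linear-programming generation-mix problem, assuming the inferred policy goals reduce to a cost objective, a demand balance and a CO2 cap; the technologies, all figures and the use of scipy are illustrative and are not taken from the paper.

        from scipy.optimize import linprog

        # Hypothetical technologies: [coal, gas, nuclear, wind]
        cost = [30.0, 50.0, 60.0, 70.0]      # $/MWh (illustrative)
        co2 = [900.0, 400.0, 0.0, 0.0]       # kgCO2/MWh (illustrative)
        demand = 1000.0                      # MWh to supply
        co2_cap = 250_000.0                  # policy goal: total emissions cap

        # minimise cost.x  s.t.  sum(x) == demand,  co2.x <= cap,  x >= 0
        res = linprog(
            c=cost,
            A_ub=[co2], b_ub=[co2_cap],
            A_eq=[[1.0, 1.0, 1.0, 1.0]], b_eq=[demand],
            bounds=[(0, None)] * 4,
        )
        print(dict(zip(["coal", "gas", "nuclear", "wind"], res.x.round(1))))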

  18. Impact of Battery Ageing on an Electric Vehicle Powertrain Optimisation

    Directory of Open Access Journals (Sweden)

    Daniel J. Auger

    2014-12-01

    Full Text Available An electric vehicle’s battery is its most expensive component, and it cannot be charged and discharged indefinitely. This affects a consumer vehicle’s end-user value. Ageing is tolerated as an unwanted operational side-effect; manufacturers have little control over it. Recent publications have considered trade-offs between efficiency and ageing in plug-in hybrids (PHEVs), but there is no equivalent literature for pure EVs. For PHEVs, battery ageing has been modelled by translating current demands into chemical degradation. Given such models it is possible to produce similar trade-offs for EVs. We consider the effects of varying battery size and introducing a parallel supercapacitor pack. (Supercapacitors can smooth current demands, but their weight and electronics reduce economy.) We extend existing EV optimisation techniques to include battery ageing, illustrated with vehicle case studies. We comment on the applicability to similar EV problems and identify where additional research is needed to improve on our assumptions.

  19. Normal tissue dose-effect models in biological dose optimisation

    International Nuclear Information System (INIS)

    Alber, M.

    2008-01-01

    Sophisticated radiotherapy techniques like intensity modulated radiotherapy with photons and protons rely on numerical dose optimisation. The evaluation of normal tissue dose distributions that deviate significantly from the common clinical routine and also the mathematical expression of desirable properties of a dose distribution is difficult. In essence, a dose evaluation model for normal tissues has to express the tissue specific volume effect. A formalism of local dose effect measures is presented, which can be applied to serial and parallel responding tissues as well as target volumes and physical dose penalties. These models allow a transparent description of the volume effect and an efficient control over the optimum dose distribution. They can be linked to normal tissue complication probability models and the equivalent uniform dose concept. In clinical applications, they provide a means to standardize normal tissue doses in the face of inevitable anatomical differences between patients and a vastly increased freedom to shape the dose, without being overly limiting like sets of dose-volume constraints. (orig.)
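
    The equivalent uniform dose concept mentioned above can be stated compactly. The sketch below computes the generalised EUD for a discretised dose-volume histogram, with the volume-effect parameter a distinguishing serial-like from parallel-like response; the dose bins and parameter values are illustrative assumptions, not data from the paper.

        import numpy as np

        def geud(doses, volumes, a):
            """Generalised equivalent uniform dose for a dose-volume histogram.

            a >> 1 mimics a serial organ (dominated by the hottest voxels);
            a = 1 gives the mean dose (parallel organ behaviour)."""
            v = np.asarray(volumes) / np.sum(volumes)   # fractional volumes
            return (np.sum(v * np.asarray(doses) ** a)) ** (1.0 / a)

        dvh_dose = [10.0, 30.0, 60.0]     # Gy, illustrative dose bins
        dvh_vol = [0.5, 0.3, 0.2]         # fractional organ volume per bin

        print("serial-like   (a=10):", round(geud(dvh_dose, dvh_vol, 10), 1))
        print("parallel-like (a=1): ", round(geud(dvh_dose, dvh_vol, 1), 1))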

  20. Xyce parallel electronic simulator design.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

    2010-09-01

    This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to diversity of background, it is to be expected on long-term projects for there to be a certain amount of staff turnover, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.

  1. Methods for Optimisation of the Laser Cutting Process

    DEFF Research Database (Denmark)

    Dragsted, Birgitte

    This thesis deals with the adaptation and implementation of various optimisation methods, in the field of experimental design, for the laser cutting process. The problem in optimising the laser cutting process has been defined and a structure for a Decision Support System (DSS......) for the optimisation of the laser cutting process has been suggested. The DSS consists of a database with the currently used and old parameter settings. Also one of the optimisation methods has been implemented in the DSS in order to facilitate the optimisation procedure for the laser operator. The Simplex Method has...... been adapted in two versions. A qualitative one that optimises the process by comparing the laser-cut items, and a quantitative one that uses a weighted quality response in order to achieve a satisfactory quality and after that maximises the cutting speed, thus increasing the productivity of the process...

  2. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  3. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and, relative to scalar calculations, it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively, on the CRAY C-90

  4. Convex optimisation approach to constrained fuel optimal control of spacecraft in close relative motion

    Science.gov (United States)

    Massioni, Paolo; Massari, Mauro

    2018-05-01

    This paper describes an interesting and powerful approach to the constrained fuel-optimal control of spacecraft in close relative motion. The proposed approach is well suited for problems under linear dynamic equations, therefore perfectly fitting to the case of spacecraft flying in close relative motion. If the solution of the optimisation is approximated as a polynomial with respect to the time variable, then the problem can be approached with a technique developed in the control engineering community, known as "Sum Of Squares" (SOS), and the constraints can be reduced to bounds on the polynomials. Such a technique allows rewriting polynomial bounding problems in the form of convex optimisation problems, at the cost of a certain amount of conservatism. The principles of the techniques are explained and some application related to spacecraft flying in close relative motion are shown.

  5. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  6. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: ``Can parallel computers be used to do large-scale scientific computations?'' As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  7. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
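
    A minimal Python sketch of the segmented, data-decomposed sieve idea follows: the base primes are found serially and disjoint segments are sieved in parallel worker processes. This is only an illustrative analogue of the decomposition approach, not the hypercube implementation described in the report.

        from multiprocessing import Pool
        import math

        def sieve_segment(args):
            """Mark composites in [lo, hi) using the base primes up to sqrt(n)."""
            lo, hi, base_primes = args
            flags = bytearray([1]) * (hi - lo)
            for p in base_primes:
                start = max(p * p, (lo + p - 1) // p * p)
                for m in range(start, hi, p):
                    flags[m - lo] = 0
            return [lo + i for i, f in enumerate(flags) if f]

        def parallel_sieve(n, workers=4):
            root = math.isqrt(n) + 1
            # Base primes found serially; the segments are sieved in parallel.
            base = [p for p in range(2, root)
                    if all(p % q for q in range(2, math.isqrt(p) + 1))]
            step = (n - root) // workers + 1
            tasks = [(lo, min(lo + step, n), base) for lo in range(root, n, step)]
            with Pool(workers) as pool:
                out = pool.map(sieve_segment, tasks)
            return base + [p for seg in out for p in seg]

        if __name__ == "__main__":
            print(parallel_sieve(100))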

  8. Mechatronic System Design Based On An Optimisation Approach

    DEFF Research Database (Denmark)

    Andersen, Torben Ole; Pedersen, Henrik Clemmensen; Hansen, Michael Rygaard

    The envisaged objective of this project is to extend the current state of the art regarding the design of complex mechatronic systems utilizing an optimisation approach. We propose to investigate a novel framework for mechatronic system design, the novelty and originality being the use...... of optimisation techniques. The methods used to optimise/design within the classical disciplines will be identified and extended to mechatronic system design....

  9. Analysis and optimisation of heterogeneous real-time embedded systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2005-01-01

    . The success of such new design methods depends on the availability of analysis and optimisation techniques. Analysis and optimisation techniques for heterogeneous real-time embedded systems are presented in the paper. The authors address in more detail a particular class of such systems called multi...... of application messages to frames. Optimisation heuristics for frame packing aimed at producing a schedulable system are presented. Extensive experiments and a real-life example show the efficiency of the frame-packing approach....

  10. Analysis and optimisation of heterogeneous real-time embedded systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2006-01-01

    . The success of such new design methods depends on the availability of analysis and optimisation techniques. Analysis and optimisation techniques for heterogeneous real-time embedded systems are presented in the paper. The authors address in more detail a particular class of such systems called multi...... of application messages to frames. Optimisation heuristics for frame packing aimed at producing a schedulable system are presented. Extensive experiments and a real-life example show the efficiency of the frame-packing approach....

  11. Flotation process control optimisation at Prominent Hill

    International Nuclear Information System (INIS)

    Lombardi, Josephine; Muhamad, Nur; Weidenbach, M.

    2012-01-01

    OZ Minerals' Prominent Hill copper-gold concentrator is located 130 km south east of the town of Coober Pedy in the Gawler Craton of South Australia. The concentrator was built in 2008 and commenced commercial production in early 2009. The Prominent Hill concentrator is comprised of a conventional grinding and flotation processing plant with a 9.6 Mtpa ore throughput capacity. The flotation circuit includes six rougher cells, an IsaMill for regrinding the rougher concentrate and a Jameson cell heading up the three-stage conventional cell cleaner circuit. In total there are four level controllers in the rougher train and ten level controllers in the cleaning circuit for 18 cells. Generic proportional-integral-derivative (PID) control used on the level controllers alone propagated any disturbances downstream in the circuit that were generated from the grinding circuit, hoppers, between cells and interconnected banks of cells, having a negative impact on plant performance. To better control such disturbances, FloatStar level stabiliser was selected for installation on the flotation circuit to account for the interaction between the cells. Multivariable control was also installed on the five concentrate hoppers to maintain consistent feed to the cells and to the IsaMill. An additional area identified for optimisation in the flotation circuit was the mass pull rate from the rougher cells. FloatStar flow optimiser was selected to be installed subsequent to the FloatStar level stabiliser. This allowed for a unified, consistent and optimal approach to running the rougher circuit. This paper describes the improvement in the stabilisation of the circuit achieved by the FloatStar level stabiliser by using the interaction matrix between cell level controllers and the results and benefits of implementing the FloatStar flow optimiser on the rougher train.

  12. Risk based test interval and maintenance optimisation - Application and uses

    International Nuclear Information System (INIS)

    Sparre, E.

    1999-10-01

    The project is part of an IAEA co-ordinated Research Project (CRP) on 'Development of Methodologies for Optimisation of Surveillance Testing and Maintenance of Safety Related Equipment at NPPs'. The purpose of the project is to investigate the sensitivity of the results obtained when performing risk based optimisation of the technical specifications. Previous projects have shown that complete LPSA models can be created and that these models allow optimisation of technical specifications. However, these optimisations did not include any in-depth check of the result sensitivity with regard to methods, model completeness, etc. Four different test intervals have been investigated in this study. Aside from an original, nominal optimisation, a set of sensitivity analyses has been performed and the results from these analyses have been compared to the original optimisation. The analyses indicate that the result of an optimisation is rather stable. However, it is not possible to draw any certain conclusions without performing a number of sensitivity analyses. Significant differences in the optimisation result were discovered when analysing an alternative configuration. Also, deterministic uncertainties seem to affect the result of an optimisation considerably. The sensitivity of failure data uncertainties is important to investigate in detail since the methodology is based on the assumption that the unavailability of a component is dependent on the length of the test interval.

  13. Improving Vector Evaluated Particle Swarm Optimisation by incorporating nondominated solutions.

    Science.gov (United States)

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.
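
    The cross-swarm guidance that the paper modifies can be sketched compactly. The toy Python example below runs the conventional VEPSO scheme on a standard two-objective test problem (Schaffer), with each swarm's social guide taken from the other swarm's best solution; the archive bookkeeping and all parameter values are illustrative assumptions, not the authors' implementation.

        import random

        def f1(x): return x * x                # first objective (Schaffer problem)
        def f2(x): return (x - 2.0) ** 2       # second objective

        def vepso(iters=200, n=20, lo=-4.0, hi=4.0):
            """Toy VEPSO: swarm A optimises f1, swarm B optimises f2, and each
            swarm is guided socially by the *other* swarm's best solution."""
            random.seed(1)
            swarms = []
            for f in (f1, f2):
                pos = [random.uniform(lo, hi) for _ in range(n)]
                swarms.append({"f": f, "x": pos, "v": [0.0] * n,
                               "pbest": pos[:], "gbest": min(pos, key=f)})
            archive = []                                    # nondominated solutions
            for _ in range(iters):
                for s, other in ((swarms[0], swarms[1]), (swarms[1], swarms[0])):
                    guide = other["gbest"]                  # cross-swarm guidance
                    for i in range(n):
                        r1, r2 = random.random(), random.random()
                        s["v"][i] = (0.7 * s["v"][i]
                                     + 1.5 * r1 * (s["pbest"][i] - s["x"][i])
                                     + 1.5 * r2 * (guide - s["x"][i]))
                        s["x"][i] = min(hi, max(lo, s["x"][i] + s["v"][i]))
                        if s["f"](s["x"][i]) < s["f"](s["pbest"][i]):
                            s["pbest"][i] = s["x"][i]
                    s["gbest"] = min(s["pbest"], key=s["f"])
                    for x in s["x"]:                        # update Pareto archive
                        if not any(f1(a) <= f1(x) and f2(a) <= f2(x) for a in archive):
                            archive = [a for a in archive
                                       if not (f1(x) <= f1(a) and f2(x) <= f2(a))] + [x]
            return sorted(archive)

        front = vepso()
        print(f"{len(front)} nondominated points in [{front[0]:.2f}, {front[-1]:.2f}]")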

  14. Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions

    Directory of Open Access Journals (Sweden)

    Kian Sheng Lim

    2013-01-01

    Full Text Available The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.

  15. Optimising end of generation of Magnox reactors

    International Nuclear Information System (INIS)

    Hall, D.; Hopper, E.D.A.

    2014-01-01

    Designing, justifying and gaining regulatory approval for optimised, terminal fuel cycles for the last 4 of the 13-strong Magnox fleet is described, covering: - constraints set by the plant owner's integrated closure plan, opportunities for innovative fuel cycles while preserving flexibility to respond to business changes; - methods of collectively determining best options for each site; - selected strategies including lower fuel element retention and inter-reactor transfer of fuel; - the required work scope, its technical, safety case and resource challenges and how they were met; - achieving additional electricity generation worth in excess of £1 billion from 4 sites (a total of 8 reactors); - the keys to success. (authors)

  16. Advanced manufacturing: optimising the factories of tomorrow

    International Nuclear Information System (INIS)

    Philippon, Patrick

    2013-01-01

    Patrick Philippon - Les Defis du CEA no. 179 - April 2013. Faced with competition from the emerging countries, the competitiveness of the industrialised nations depends on the ability of their industries to innovate. This strategy necessarily entails the reorganisation and optimisation of the production systems. This is the whole challenge for 'advanced manufacturing', which relies on the new information and communication technologies. Interactive robotics, virtual reality and non-destructive testing are all technological building blocks developed by CEA, now approved within a cross-cutting programme, to meet the needs of industry and together build the factories of tomorrow. (author)

  17. Specification, Verification and Optimisation of Business Processes

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas

    is extended with stochastic branching, message passing and reward annotations which allow for the modelling of resources consumed during the execution of a business process. Further, it is shown how this structure can be used to formalise the established business process modelling language Business Process...... fault tree analysis and the automated optimisation of business processes by means of an evolutionary algorithm. This work is motivated by problems that stem from the healthcare sector, and examples encountered in this field are used to illustrate these developments....

  18. Cost optimisation studies of high power accelerators

    Energy Technology Data Exchange (ETDEWEB)

    McAdams, R.; Nightingale, M.P.S.; Godden, D. [AEA Technology, Oxon (United Kingdom)] [and others

    1995-10-01

    Cost optimisation studies are carried out for an accelerator based neutron source consisting of a series of linear accelerators. The characteristics of the lowest cost design for a given beam current and energy, such as power and length, are found to depend on the lifetime envisaged for the machine. For a fixed neutron yield it is preferable to have a low current, high energy machine. The benefits of superconducting technology are also investigated. A Separated Orbit Cyclotron (SOC) has the potential to reduce capital and operating costs and initial estimates for the transverse and longitudinal current limits of such machines are made.

  19. HVAC system optimisation-in-building section

    Energy Technology Data Exchange (ETDEWEB)

    Lu, L.; Cai, W.; Xie, L.; Li, S.; Soh, Y.C. [School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore (Singapore)

    2004-07-01

    This paper presents a practical method to optimise the in-building section of centralised Heating, Ventilation and Air-Conditioning (HVAC) systems which consist of indoor air loops and chilled water loops. First, through component characteristic analysis, mathematical models associated with cooling loads and energy consumption for heat exchangers and energy consuming devices are established. By considering the variation of the cooling load of each end user, an adaptive neuro-fuzzy inference system (ANFIS) is employed to model duct and pipe networks and obtain optimal differential pressure (DP) set points based on limited sensor information. A mixed-integer nonlinear constrained optimisation of system energy is formulated and solved by a modified genetic algorithm. The main feature of our paper is a systematic approach to optimising the overall system energy consumption rather than that of an individual component. A simulation study for a typical centralised HVAC system is provided to compare the proposed optimisation method with traditional ones. The results show that the proposed method indeed improves the system performance significantly. (author)

  20. Optimisation of milling parameters using neural network

    Directory of Open Access Journals (Sweden)

    Lipski Jerzy

    2017-01-01

    Full Text Available The purpose of this study was to design and test intelligent computer software developed with the purpose of increasing the average productivity of milling without compromising the design features of the final product. The developed system generates optimal milling parameters based on the extent of tool wear. The introduced optimisation algorithm employs a multilayer model of a milling process developed in the artificial neural network. The input parameters for model training are the following: cutting speed vc, feed per tooth fz and the degree of tool wear measured by means of localised flank wear (VB3). The output parameter is the surface roughness of a machined surface Ra. Since the model in the neural network exhibits good approximation of functional relationships, it was applied to determine optimal milling parameters in changeable tool wear conditions (VB3) and stabilisation of the surface roughness parameter Ra. Our solution enables constant control over surface roughness parameters and productivity of the milling process after each assessment of tool condition. The recommended parameters, i.e. those which applied in milling ensure desired surface roughness and maximal productivity, are selected from all the parameters generated by the model. The developed software may constitute an expert system supporting a milling machine operator. In addition, the application may be installed on a mobile device (smartphone), connected to a tool wear diagnostics instrument and the machine tool controller, in order to supply updated optimal parameters of milling. The presented solution facilitates tool life optimisation and decreases tool change costs, particularly during prolonged operation.
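
    A minimal sketch of the modelling-plus-selection idea follows, assuming scikit-learn's MLPRegressor as the network and synthetic data in place of the measured cutting trials; the roughness formula, grid and threshold are invented for illustration only.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Synthetic stand-in for measured data: cutting speed vc [m/min],
        # feed per tooth fz [mm] and flank wear VB3 [mm] -> roughness Ra [um].
        n = 500
        vc = rng.uniform(100, 300, n)
        fz = rng.uniform(0.05, 0.3, n)
        vb = rng.uniform(0.0, 0.4, n)
        ra = 0.5 + 8.0 * fz ** 2 + 2.0 * vb - 0.002 * vc + rng.normal(0, 0.05, n)

        X = np.column_stack([vc, fz, vb])
        model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                             random_state=0).fit(X, ra)

        # Parameter selection for the current wear level: scan a grid and keep
        # the highest productivity proxy (vc * fz) that still meets the Ra limit.
        wear, ra_max = 0.2, 1.6
        gv, gf = np.meshgrid(np.linspace(100, 300, 41), np.linspace(0.05, 0.3, 26))
        cand = np.column_stack([gv.ravel(), gf.ravel(), np.full(gv.size, wear)])
        ok = cand[model.predict(cand) <= ra_max]
        best = ok[np.argmax(ok[:, 0] * ok[:, 1])]
        print(f"suggested vc = {best[0]:.0f} m/min, fz = {best[1]:.3f} mm")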

  1. Noise aspects at aerodynamic blade optimisation projects

    Energy Technology Data Exchange (ETDEWEB)

    Schepers, J.G. [Netherlands Energy Research Foundation, Petten (Netherlands)

    1997-12-31

    This paper shows an example of an aerodynamic blade optimisation, using the program PVOPT. PVOPT calculates the optimal wind turbine blade geometry such that the maximum energy yield is obtained. Using the aerodynamically optimal blade design as a basis, the possibilities of noise reduction are investigated. The aerodynamically optimised geometry from PVOPT is the `real` optimum (up to the latest decimal). The most important conclusion from this study is that it is worthwhile to investigate the behaviour of the objective function (in the present case the energy yield) around the optimum: If the optimum is flat, there is a possibility to apply modifications to the optimum configuration with only a limited loss in energy yield. It is obvious that the modified configurations emit a different (and possibly lower) noise level. In the BLADOPT program (the successor of PVOPT) it will be possible to quantify the noise level and hence to assess the reduced noise emission more thoroughly. At present the most promising approaches for noise reduction are believed to be a reduction of the rotor speed (if at all possible), and a reduction of the tip angle by means of low lift profiles, or decreased twist at the outboard stations. These modifications were possible without a significant loss in energy yield. (LN)

  2. Direct comparison of an engine working under Otto, Miller and Diesel cycles: thermodynamic analysis and real engine performance

    OpenAIRE

    Ribeiro, Bernardo Sousa; Martins, Jorge

    2007-01-01

    One of the ways to improve thermodynamic efficiency of Spark Ignition engines is by the optimisation of valve timing and lift and compression ratio. The throttleless engine and the Miller cycle engine are proven concepts for efficiency improvements of such engines. This paper reports on an engine with variable valve timing (VVT) and variable compression ratio (VCR) in order to fulfill such an enhancement of efficiency. Engine load is controlled by the valve opening per...

  3. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  4. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  5. Suppression of plasma turbulence during optimised shear configurations in JET

    International Nuclear Information System (INIS)

    Conway, G.D.; Borba, D.N.; Alper, B.

    1999-08-01

    throughout the plasma as the radial location of the cutoff layer depends on the launched microwave frequency, the toroidal magnetic field B_T, plasma current I_p, and plasma density n_e. Reflectometers are primarily sensitive to long-wavelength transverse fluctuations, i.e. wavelengths greater than the beam radius w. For the JET reflectometers w ∼ 5 cm, so they are predominantly sensitive to perpendicular wavenumbers k_perp of order w^-1 or smaller. Spatially, the turbulence in optimised shear discharges can be separated into three regions: outside the ITB (edge), within the ITB gradient, and inside the ITB (core). The turbulence behaves differently in each region. The core turbulence (ITB and within) evolves through four distinct phases. (1) Ohmic breakdown. (2) L-mode pre-heat, using Ion Cyclotron Resonance Heating (ICRH) to slow the current penetration and control the q profile evolution. (3) Main heating using combined co-injected (parallel to I_p) Neutral Beam Injection (NBI) and ICRH, and (4) the ITB formation. The edge turbulence by contrast shows little variation as the discharge evolves. (author)

  6. Parallelization of the model-based iterative reconstruction algorithm DIRA

    International Nuclear Information System (INIS)

    Oertenberg, A.; Sandborg, M.; Alm Carlsson, G.; Malusek, A.; Magnusson, M.

    2016-01-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelization of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelization of the model-based iterative reconstruction algorithm DIRA with the aim of significantly shortening the code's execution time. Selected routines were parallelized using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelization of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelization with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. (authors)
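
    The report's parallelisation was done with OpenMP and OpenCL in compiled code; purely as an illustration of the underlying loop-level decomposition, the Python sketch below distributes independent, CPU-heavy reconstruction steps across worker processes. The workload function is a hypothetical stand-in, not part of DIRA.

        from multiprocessing import Pool
        import math

        def reconstruct_slice(seed):
            """Stand-in for one independent, CPU-heavy reconstruction step."""
            acc = 0.0
            for k in range(1, 200_000):
                acc += math.sin(seed * k) / k
            return acc

        if __name__ == "__main__":
            slices = list(range(16))
            with Pool() as pool:            # one worker per core, analogous to
                results = pool.map(reconstruct_slice, slices)   # a parallel for
            print(len(results), "slices reconstructed")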

  7. Optimising polarised neutron scattering measurements--XYZ and polarimetry analysis

    International Nuclear Information System (INIS)

    Cussen, L.D.; Goossens, D.J.

    2002-01-01

    The analytic optimisation of neutron scattering measurements made using XYZ polarisation analysis and neutron polarimetry techniques is discussed. Expressions for the 'quality factor' and the optimum division of counting time for the XYZ technique are presented. For neutron polarimetry the optimisation is identified as analogous to that for measuring the flipping ratio and reference is made to the results already in the literature

  8. Optimising polarised neutron scattering measurements--XYZ and polarimetry analysis

    CERN Document Server

    Cussen, L D

    2002-01-01

    The analytic optimisation of neutron scattering measurements made using XYZ polarisation analysis and neutron polarimetry techniques is discussed. Expressions for the 'quality factor' and the optimum division of counting time for the XYZ technique are presented. For neutron polarimetry the optimisation is identified as analogous to that for measuring the flipping ratio and reference is made to the results already in the literature.

  9. Application of ant colony optimisation in distribution transformer sizing

    African Journals Online (AJOL)

    This study proposes an optimisation method for transformer sizing in power systems using ant colony optimisation and a verification of the process by MATLAB software. The aim is to address the issue of transformer sizing, which is a major challenge affecting its effective performance, longevity, huge capital cost and power ...
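
    The abstract gives no algorithmic detail, so the following is only a generic sketch of how ant colony optimisation can pick discrete transformer ratings against load and cost penalties; the catalogue, loads and penalty weights are invented for illustration.

        import random

        random.seed(3)

        # Hypothetical catalogue of transformer ratings [kVA] and unit costs.
        ratings = [100, 160, 200, 250, 315, 400, 500]
        cost = {100: 2.0, 160: 2.8, 200: 3.2, 250: 3.8, 315: 4.5, 400: 5.4, 500: 6.5}
        loads = [120.0, 90.0, 180.0, 240.0]      # peak load per substation [kVA]

        def objective(choice):
            """Purchase cost plus penalties for overload and heavy oversizing."""
            total = 0.0
            for load, r in zip(loads, choice):
                total += cost[r]
                if r < load:                     # overloaded unit: heavy penalty
                    total += 10.0
                total += 0.002 * max(0.0, r - 1.3 * load)
            return total

        tau = [[1.0] * len(ratings) for _ in loads]        # pheromone trails
        best, best_val = None, float("inf")
        for _ in range(100):                               # iterations
            for _ in range(10):                            # ants per iteration
                choice = [random.choices(ratings, weights=row)[0] for row in tau]
                val = objective(choice)
                if val < best_val:
                    best, best_val = choice, val
            for row in tau:                                # evaporation
                for j in range(len(row)):
                    row[j] *= 0.9
            for site, r in enumerate(best):                # reinforce best-so-far
                tau[site][ratings.index(r)] += 1.0 / best_val

        print("sizes:", best, "objective:", round(best_val, 2))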

  10. Multi-objective evolutionary optimisation for product design and manufacturing

    CERN Document Server

    2011-01-01

    Presents state-of-the-art research in the area of multi-objective evolutionary optimisation for integrated product design and manufacturing. Provides a comprehensive review of the literature. Gives in-depth descriptions of recently developed innovative and novel methodologies, algorithms and systems in the area of modelling, simulation and optimisation.

  11. Design Optimisation and Control of a Pilot Operated Seat Valve

    DEFF Research Database (Denmark)

    Nielsen, Brian; Andersen, Torben Ole; Hansen, Michael Rygaard

    2004-01-01

    The paper gives an approach for optimisation of the bandwidth of a pilot operated seat valve for mobile applications. Physical dimensions as well as parameters of the implemented control loop are optimised simultaneously. The frequency response of the valve varies as a function of the pressure drop...

  12. DACIA LOGAN LIVE AXLE OPTIMISATION USING COMPUTER GRAPHICS

    Directory of Open Access Journals (Sweden)

    KIRALY Andrei

    2017-05-01

    Full Text Available The paper presents some contributions to the calculation and optimisation of a live axle used on the Dacia Logan, using computer graphics software for creating the model and afterwards using FEA evaluation to determine the effectiveness of the optimisation. Thus, using specialised computer software, a simulation is made and the results are compared to measurements on the real prototype.

  13. Adjoint Optimisation of the Turbulent Flow in an Annular Diffuser

    DEFF Research Database (Denmark)

    Gotfredsen, Erik; Agular Knudsen, Christian; Kunoy, Jens Dahl

    2017-01-01

    In the present study, a numerical optimisation of guide vanes in an annular diffuser is performed. The optimisation is performed for the purpose of improving the following two parameters simultaneously; the first parameter is the uniformity perpendicular to the flow direction, a 1/3 diameter do...

  14. Optimising of Steel Fiber Reinforced Concrete Mix Design | Beddar ...

    African Journals Online (AJOL)

    Optimising of Steel Fiber Reinforced Concrete Mix Design. ... as a result of the loss of mixture workability that will be translated into a difficult concrete casting in site. ... An experimental study of an optimisation method of fibres in reinforced ...

  15. GAOS: Spatial optimisation of crop and nature within agricultural fields

    NARCIS (Netherlands)

    Bruin, de S.; Janssen, H.; Klompe, A.; Lerink, P.; Vanmeulebrouk, B.

    2010-01-01

    This paper proposes and demonstrates a spatial optimiser that allocates areas of inefficient machine manoeuvring to field margins thus improving the use of available space and supporting map-based Controlled Traffic Farming. A prototype web service (GAOS) allows farmers to optimise tracks within

  16. Parallel algorithms for numerical linear algebra

    CERN Document Server

    van der Vorst, H

    1990-01-01

    This is the first in a new series of books presenting research results and developments concerning the theory and applications of parallel computers, including vector, pipeline, array, fifth/future generation computers, and neural computers.All aspects of high-speed computing fall within the scope of the series, e.g. algorithm design, applications, software engineering, networking, taxonomy, models and architectural trends, performance, peripheral devices.Papers in Volume One cover the main streams of parallel linear algebra: systolic array algorithms, message-passing systems, algorithms for p

  17. Vectorization, parallelization and porting of nuclear codes (vectorization and parallelization). Progress report fiscal 1998

    International Nuclear Information System (INIS)

    Ishizuki, Shigeru; Kawai, Wataru; Nemoto, Toshiyuki; Ogasawara, Shinobu; Kume, Etsuo; Adachi, Masaaki; Kawasaki, Nobuo; Yatake, Yo-ichi

    2000-03-01

    Several computer codes in the nuclear field have been vectorized, parallelized and transported on the FUJITSU VPP500 system, the AP3000 system and the Paragon system at Center for Promotion of Computational Science and Engineering in Japan Atomic Energy Research Institute. We dealt with 12 codes in fiscal 1998. These results are reported in 3 parts, i.e., the vectorization and parallelization on vector processors part, the parallelization on scalar processors part and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In this vectorization and parallelization on vector processors part, the vectorization of General Tokamak Circuit Simulation Program code GTCSP, the vectorization and parallelization of Molecular Dynamics NTV (n-particle, Temperature and Velocity) Simulation code MSP2, Eddy Current Analysis code EDDYCAL, Thermal Analysis Code for Test of Passive Cooling System by HENDEL T2 code THANPACST2 and MHD Equilibrium code SELENEJ on the VPP500 are described. In the parallelization on scalar processors part, the parallelization of Monte Carlo N-Particle Transport code MCNP4B2, Plasma Hydrodynamics code using Cubic Interpolated Propagation Method PHCIP and Vectorized Monte Carlo code (continuous energy model / multi-group model) MVP/GMVP on the Paragon are described. In the porting part, the porting of Monte Carlo N-Particle Transport code MCNP4B2 and Reactor Safety Analysis code RELAP5 on the AP3000 are described. (author)

  18. Numerical Forming Simulations and Optimisation in Advanced Materials

    International Nuclear Information System (INIS)

    Huetink, J.; Boogaard, A. H. van den; Geijselears, H. J. M.; Meinders, T.

    2007-01-01

    With the introduction of new materials such as high strength steels, metastable steels and fibre reinforced composites, the need for advanced physically valid constitutive models arises. In finite deformation problems constitutive relations are commonly formulated in terms of the Cauchy stress as a function of the elastic Finger tensor and an objective rate of the Cauchy stress as a function of the rate of deformation tensor. For isotropic material models this is rather straightforward, but for anisotropic material models, including elastic anisotropy as well as plastic anisotropy, this may lead to confusing formulations. It will be shown that it is more convenient to define the constitutive relations in terms of invariant tensors referred to the deformed metric. Experimental results are presented that show new combinations of strain rate and strain path sensitivity. An adaptive through-thickness integration scheme for plate elements is developed, which improves the accuracy of springback prediction at minimal cost. A procedure is described to automatically compensate the CAD tool shape numerically to obtain the desired product shape. Forming processes need to be optimised for cost saving and product improvement. Until recently, this optimisation was primarily done by trial and error in the factory. An optimisation strategy is proposed that assists an engineer in modelling an optimisation problem that suits his needs, including an efficient algorithm for solving the problem

  19. Results of the 2010 IGSC Topical Session on Optimisation

    International Nuclear Information System (INIS)

    Bailey, Lucy

    2014-01-01

    Document available in abstract form only. Full text follows: The 2010 IGSC topical session on optimisation explored a wide range of issues concerning optimisation throughout the radioactive waste management process. Philosophical and ethical questions were discussed, such as: - To what extent is the process of optimisation more important than the end result? - How do we balance long-term environmental safety with near-term operational safety? - For how long should options be kept open? - In balancing safety and excessive cost, when is BAT achieved and who decides on this? - How should we balance the needs of current society with those of future generations? It was clear that optimisation is about getting the right balance between a range of issues that cover: radiation protection, environmental protection, operational safety, operational requirements, social expectations and cost. The optimisation process will also need to respect various constraints, which are likely to include: regulatory requirements, site restrictions, community-imposed requirements or restrictions and resource constraints. These issues were explored through a number of presentations that discussed practical cases of optimisation occurring at different stages of international radioactive waste management programmes. These covered: - Operations and decommissioning - management of large disused components, from the findings of an international study, presented by WPDD; - Concept option selection, prior to site selection - upstream and disposal system optioneering in the UK; - Siting decisions - examples from both Germany and France, explaining how optimisation is being used to support site comparisons and communicate siting decisions; - Repository design decisions - comparison of KBS-3 horizontal and vertical deposition options in Finland; and - On-going optimisation during repository operation - operational experience from WIPP in the US. The variety of the remarks and views expressed during the

  20. Work management to optimise occupational radiological protection

    International Nuclear Information System (INIS)

    Ahier, B.

    2009-01-01

    Although work management is no longer a new concept, continued efforts are still needed to ensure that good performance, outcomes and trends are maintained in the face of current and future challenges. The ISOE programme thus created an Expert Group on Work Management in 2007 to develop an updated report reflecting the current state of knowledge, technology and experience in the occupational radiological protection of workers at nuclear power plants. Published in 2009, the new ISOE report on Work Management to Optimise Occupational Radiological Protection in the Nuclear Power Industry provides up-to-date practical guidance on the application of work management principles. Work management measures aim at optimising occupational radiological protection in the context of the economic viability of the installation. Important factors in this respect are measures and techniques influencing i) dose and dose rate, including source-term reduction; ii) exposure, including amount of time spent in controlled areas for operations; and iii) efficiency in short- and long-term planning, worker involvement, coordination and training. Equally important due to their broad, cross-cutting nature are the motivational and organisational arrangements adopted. The responsibility for these aspects may reside in various parts of an installation's organisational structure, and thus, a multi-disciplinary approach must be recognised, accounted for and well-integrated in any work. Based on the operational experience within the ISOE programme, the following key areas of work management have been identified: - regulatory aspects; - ALARA management policy; - worker involvement and performance; - work planning and scheduling; - work preparation; - work implementation; - work assessment and feedback; - ensuring continuous improvement. The details of each of these areas are elaborated and illustrated in the report through examples and case studies arising from ISOE experience. They are intended to

  1. A reliability-based maintenance technicians' workloads optimisation model with stochastic consideration

    Science.gov (United States)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2016-06-01

    The growing interest in technicians' workloads research is probably associated with the recent surge in competition. This was prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this worldwide intense competition in industries has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model which considers technicians' reliability that complements factory information obtained. The information used emerged from technicians' productivity and earned-values using the concept of multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we consider these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned-values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that our model was able to generate information that practicing maintenance engineers can apply in making more informed decisions on technicians' management.

  2. Parallel processing of structural integrity analysis codes

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.

    1996-01-01

    Structural integrity analysis forms an important role in assessing and demonstrating the safety of nuclear reactor components. This analysis is performed using analytical tools such as Finite Element Method (FEM) with the help of digital computers. The complexity of the problems involved in nuclear engineering demands high speed computation facilities to obtain solutions in reasonable amount of time. Parallel processing systems such as ANUPAM provide an efficient platform for realising the high speed computation. The development and implementation of software on parallel processing systems is an interesting and challenging task. The data and algorithm structure of the codes plays an important role in exploiting the parallel processing system capabilities. Structural analysis codes based on FEM can be divided into two categories with respect to their implementation on parallel processing systems. The first category codes such as those used for harmonic analysis, mechanistic fuel performance codes need not require the parallelisation of individual modules of the codes. The second category of codes such as conventional FEM codes require parallelisation of individual modules. In this category, parallelisation of equation solution module poses major difficulties. Different solution schemes such as domain decomposition method (DDM), parallel active column solver and substructuring method are currently used on parallel processing systems. Two codes, FAIR and TABS belonging to each of these categories have been implemented on ANUPAM. The implementation details of these codes and the performance of different equation solvers are highlighted. (author). 5 refs., 12 figs., 1 tab

  3. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  4. A comparison of forward planning and optimised inverse planning

    International Nuclear Information System (INIS)

    Oldham, Mark; Neal, Anthony; Webb, Steve

    1995-01-01

    A radiotherapy treatment plan optimisation algorithm has been applied to 48 prostate plans and the results compared with those of an experienced human planner. Twelve patients were used in the study, and 3-, 4-, 6- and 8-field plans (with standard coplanar beam angles for each plan type) were optimised by both the human planner and the optimisation algorithm. The human planner 'optimised' the plan by conventional forward planning techniques. The optimisation algorithm was based on fast simulated annealing. 'Importance factors' assigned to different regions of the patient provide a method for controlling the algorithm, and it was found that the same values gave good results for almost all plans. The plans were compared on the basis of dose statistics, normal tissue complication probability (NTCP) and tumour control probability (TCP). The results show that the optimisation algorithm yielded results that were at least as good as the human planner's for all plan types, and on the whole slightly better. A study of the beam weights chosen by the optimisation algorithm and the planner will be presented. The optimisation algorithm showed greater variation in response to individual patient geometry. For simple (e.g. 3-field) plans it was found to consistently achieve slightly higher TCP and lower NTCP values. For more complicated (e.g. 8-field) plans the optimisation also achieved slightly better results with generally fewer beams. The optimisation time was always ≤ 5 minutes, up to 20 times faster than the human planner
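
    The fast-simulated-annealing idea with importance factors can be sketched in a few lines. The toy Python example below anneals three beam weights against a target-dose term and a weighted organ-at-risk term; the geometry, importance factor and cooling schedule are illustrative assumptions, not the study's algorithm.

        import math, random

        random.seed(0)

        # Toy geometry: dose to (target, organ-at-risk) per unit weight of 3 beams.
        beams = [(1.0, 0.6), (0.8, 0.2), (0.9, 0.4)]
        target_dose = 70.0

        def cost(w):
            t = sum(wi * b[0] for wi, b in zip(w, beams))
            oar = sum(wi * b[1] for wi, b in zip(w, beams))
            return (t - target_dose) ** 2 + 0.5 * oar ** 2   # importance factor 0.5

        w = [20.0, 20.0, 20.0]
        c = cost(w)
        for step in range(20000):
            temp = 50.0 / (1 + step)                         # fast annealing schedule
            i = random.randrange(len(w))
            trial = w[:]
            trial[i] = max(0.0, trial[i] + random.gauss(0, 1))
            ct = cost(trial)
            if ct < c or random.random() < math.exp((c - ct) / temp):
                w, c = trial, ct

        print("weights:", [round(x, 1) for x in w], "cost:", round(c, 2))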

  5. Automatic optimisation of gamma dose rate sensor networks: The DETECT Optimisation Tool

    DEFF Research Database (Denmark)

    Helle, K.B.; Müller, T.O.; Astrup, Poul

    2014-01-01

    of the EU FP 7 project DETECT. It evaluates the gamma dose rates that a proposed set of sensors might measure in an emergency and uses this information to optimise the sensor locations. The gamma dose rates are taken from a comprehensive library of simulations of atmospheric radioactive plumes from 64......Fast delivery of comprehensive information on the radiological situation is essential for decision-making in nuclear emergencies. Most national radiological agencies in Europe employ gamma dose rate sensor networks to monitor radioactive pollution of the atmosphere. Sensor locations were often...... source locations. These simulations cover the whole European Union, so the DOT allows evaluation and optimisation of sensor networks for all EU countries, as well as evaluation of fencing sensors around possible sources. Users can choose from seven cost functions to evaluate the capability of a given...

  6. Topology optimisation of micro fluidic mixers considering fluid-structure interactions with a coupled Lattice Boltzmann algorithm

    Science.gov (United States)

    Munk, David J.; Kipouros, Timoleon; Vio, Gareth A.; Steven, Grant P.; Parks, Geoffrey T.

    2017-11-01

    Recently, the study of micro fluidic devices has gained much interest in various fields from biology to engineering. In the constant development cycle, the need to optimise the topology of the interior of these devices, where there are two or more optimality criteria, is always present. In this work, twin physical situations, whereby optimal fluid mixing in the form of vorticity maximisation is accompanied by the requirement that the casing in which the mixing takes place has the best structural performance in terms of the greatest specific stiffness, are considered. In the steady state of mixing this also means that the stresses in the casing are as uniform as possible, thus giving a desired operating life with minimum weight. The ultimate aim of this research is to couple two key disciplines, fluids and structures, into a topology optimisation framework, which shows fast convergence for multidisciplinary optimisation problems. This is achieved by developing a bi-directional evolutionary structural optimisation algorithm that is directly coupled to the Lattice Boltzmann method, used for simulating the flow in the micro fluidic device, for the objectives of minimum compliance and maximum vorticity. The needs for the exploration of larger design spaces and to produce innovative designs make meta-heuristic algorithms, such as genetic algorithms, particle swarms and Tabu Searches, less efficient for this task. The multidisciplinary topology optimisation framework presented in this article is shown to increase the stiffness of the structure from the datum case and produce physically acceptable designs. Furthermore, the topology optimisation method outperforms a Tabu Search algorithm in designing the baffle to maximise the mixing of the two fluids.

  7. VEHICLE DRIVING CYCLE OPTIMISATION ON THE HIGHWAY

    Directory of Open Access Journals (Sweden)

    Zinoviy STOTSKO

    2016-06-01

    Full Text Available This paper is devoted to the problem of reducing vehicle energy consumption. The authors consider optimisation of the highway driving cycle as a way to use the kinetic energy of a car more effectively under various road conditions. A model of vehicle driving control on the highway was designed, consisting of elementary cycles such as acceleration, free rolling and deceleration under the forces of external resistance. Braking, as an energy dissipation regime, was not included. The influence of various longitudinal road profiles was taken into consideration and included in the model. Ways to use the results of monitoring road and traffic conditions are presented. The method of non-linear programming is used to design the optimal vehicle control function and phase trajectory. The results are presented as improved typical driving cycles that treat energy saving as a subject of choice at a specified schedule.
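
    A toy version of the nonlinear-programming formulation involved, choosing an acceleration phase followed by free rolling that covers a fixed distance in a fixed time at minimum traction energy, might look as follows. All vehicle constants and the energy model are invented for illustration:

      import numpy as np
      from scipy.optimize import minimize

      m, c_r, dist, T = 1500.0, 0.03, 1000.0, 60.0  # kg, rolling resistance, m, s

      def cycle(p):
          """Accelerate at a for t1 seconds, then roll freely for the rest of
          the horizon; return (traction energy, distance covered)."""
          a, t1 = p
          v1 = a * t1                     # speed at the end of the traction phase
          d1 = 0.5 * a * t1 ** 2
          decel = c_r * 9.81              # free-rolling deceleration (no braking)
          t2 = max(T - t1, 0.0)
          t_roll = min(t2, v1 / decel)    # roll until the horizon or standstill
          d2 = v1 * t_roll - 0.5 * decel * t_roll ** 2
          energy = m * (a + decel) * 0.5 * a * t1 ** 2   # traction force x distance
          return energy, d1 + d2

      res = minimize(lambda p: cycle(p)[0], x0=[1.0, 20.0],
                     constraints=[{"type": "eq", "fun": lambda p: cycle(p)[1] - dist}],
                     bounds=[(0.05, 3.0), (1.0, T)])
      a_opt, t1_opt = res.x
      print("accelerate at %.2f m/s^2 for %.1f s, energy %.0f J"
            % (a_opt, t1_opt, cycle(res.x)[0]))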

  8. Optimisation algorithms for ECG data compression.

    Science.gov (United States)

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
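
    The flavour of such an exact time-domain method can be sketched with a small dynamic program that keeps k samples of a signal (endpoints included) so as to minimise the squared error of piecewise-linear reconstruction. This is a generic illustration of the idea, not the paper's network-model formulation:

      import numpy as np

      def seg_error(x, i, j):
          """Squared error of linearly interpolating x between kept samples i and j."""
          t = np.arange(i, j + 1)
          interp = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
          return float(np.sum((x[i:j + 1] - interp) ** 2))

      def best_subset(x, k):
          """Pick k samples (first and last always kept) minimising the total
          error of piecewise-linear reconstruction, by dynamic programming."""
          n = len(x)
          err = {(i, j): seg_error(x, i, j) for i in range(n) for j in range(i + 1, n)}
          INF = float("inf")
          dp = [[INF] * n for _ in range(k)]      # dp[m][j]: best error using m+1
          prev = [[-1] * n for _ in range(k)]     # samples, the last one at index j
          dp[0][0] = 0.0
          for m in range(1, k):
              for j in range(1, n):
                  for i in range(j):
                      c = dp[m - 1][i] + err[(i, j)]
                      if c < dp[m][j]:
                          dp[m][j], prev[m][j] = c, i
          idx, j = [], n - 1                      # backtrack the optimal indices
          for m in range(k - 1, 0, -1):
              idx.append(j)
              j = prev[m][j]
          idx.append(0)
          return sorted(idx), dp[k - 1][n - 1]

      sig = np.sin(np.linspace(0, 3 * np.pi, 60))
      samples, err_tot = best_subset(sig, k=12)
      print("kept samples:", samples, "- total squared error: %.5f" % err_tot)

    The cost of this sketch is dominated by the O(n^2) table of segment errors and the O(k n^2) recursion, in line with the cubic behaviour mentioned in the abstract.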

  9. Optimisation and constraints - a view from ICRP

    International Nuclear Information System (INIS)

    Dunster, H.J.

    1994-01-01

    The optimisation of protection has been the major policy underlying the recommendations of the International Commission on Radiological Protection for more than 20 years. In earlier forms, the concept can be traced back to 1951. Constraints are more recent, appearing in their present form only in the 1990 recommendations of the Commission. The requirement to keep all exposures as low as reasonably achievable applies to both normal and potential exposures. The policy and the techniques are well established for normal exposures, i.e. exposures that are certain to occur. The application to potential exposures, i.e. exposures that have a probability of occurring that is less than unity, is more difficult and is still under international discussion. Constraints are needed to limit the inequity associated with the use of collective dose in cost-benefit analysis and to provide a margin to protect individuals who may be exposed to more than one source. (author)

  10. Optimising Impact in Astronomy for Development Projects

    Science.gov (United States)

    Grant, Eli

    2015-08-01

    Positive outcomes in the fields of science education and international development are notoriously difficult to achieve. Among the challenges facing projects that use astronomy to improve education and socio-economic development is how to optimise project design in order to achieve the greatest possible benefits. Over the past century, medical scientists, along with statisticians and economists, have developed an increasingly sophisticated and scientific approach to designing, testing and improving social intervention and public health education strategies. This talk offers a brief review of the history and current state of 'intervention science'. A similar framework is then proposed for astronomy outreach and education projects, with applied examples given of how existing evidence can be used to inform project design, predict and estimate cost-effectiveness, minimise the risk of unintended negative consequences and increase the likelihood of target outcomes being achieved.

  11. Public transport optimisation emphasising passengers’ travel behaviour

    DEFF Research Database (Denmark)

    Jensen, Jens Parbo

    Passengers in public transport complaining about their travel experiences are not uncommon. This might seem counterintuitive, since several operators worldwide are presenting better key performance indicators year by year. The present PhD study focuses on developing optimisation algorithms to enhance the operations of public transport while explicitly emphasising passengers' travel behaviour and preferences. Similar to economic theory, interactions between supply and demand are omnipresent in the context of public transport operations. In public transport, the demand is represented ... The PhD study develops a metaheuristic algorithm to adapt the line plan configuration in order to better match passengers' travel demand in terms of transfers as well ... compared to the case where the two problems are solved sequentially without taking into account interdependencies.

  12. Value Chain Optimisation of Biogas Production

    DEFF Research Database (Denmark)

    Jensen, Ida Græsted

    ... economically feasible. In this PhD thesis, the focus is to create models for investigating the profitability of biogas projects by: 1) including the whole value chain in a mathematical model and considering mass and energy changes on the upstream part of the chain; and 2) including profit allocation in a value chain ... For the first point, the costs of the biogas plant have been included in the model using economies of scale. For the second point, a mathematical model considering profit allocation was developed, applying three allocation mechanisms. This mathematical model can be applied as a second step after the value chain optimisation. ... in the energy systems model to find the optimal end use of each type of gas and fuel. The main contributions of this thesis are the methods developed at plant level. Both the mathematical model for the value chain and the profit allocation model can be generalised and used in other industries where mass ...

  13. Expert systems and optimisation in process control

    Energy Technology Data Exchange (ETDEWEB)

    Mamdani, A.; Efstathiou, J. (eds.)

    1986-01-01

    This report brings together recent developments both in expert systems and in optimisation, and deals with current applications in industry. Part One is concerned with Artificial Intelligence in planning and scheduling and with rule-based control implementation. The tasks of control maintenance, rescheduling and planning are each discussed in relation to new theoretical developments, techniques available, and sample applications. Part Two covers model-based control techniques in which the control decisions are used in a computer model of the process. Fault diagnosis, maintenance and trouble-shooting are just some of the activities covered. Part Three contains case studies of projects currently in progress, giving details of the software available and the likely future trends. One of these, on qualitative plant modelling as a basis for knowledge-based operator aids in nuclear power stations, is indexed separately.

  14. Expert systems and optimisation in process control

    International Nuclear Information System (INIS)

    Mamdani, A.; Efstathiou, J.

    1986-01-01

    This report brings together recent developments both in expert systems and in optimisation, and deals with current applications in industry. Part One is concerned with Artificial Intelligence in planning and scheduling and with rule-based control implementation. The tasks of control maintenance, rescheduling and planning are each discussed in relation to new theoretical developments, techniques available, and sample applications. Part Two covers model-based control techniques in which the control decisions are used in a computer model of the process. Fault diagnosis, maintenance and trouble-shooting are just some of the activities covered. Part Three contains case studies of projects currently in progress, giving details of the software available and the likely future trends. One of these, on qualitative plant modelling as a basis for knowledge-based operator aids in nuclear power stations, is indexed separately. (author)

  15. Improving and optimising road pricing in Copenhagen

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker; Larsen, Marie Karen

    2008-01-01

    The question whether to introduce toll rings or road pricing in Copenhagen has been discussed intensively during the last 10 years. The main results of previous analyses are that none of the systems would make a positive contribution at present, when considered from a socio-economic view. Even though quite a number of proposed charging systems have been examined, only a few pricing strategies have been investigated. This paper deals with the optimisation of different designs for a road pricing system in the Greater Copenhagen area with respect to temporal and spatial differentiation of the pricing levels. A detailed transport model was used to describe the demand effects. The model was based on data from a real test of road pricing on 500 car drivers. The paper compares the price systems with regard to traffic effects and generalised costs for users and society. It is shown how important ...

  16. A code for optimising triplet layout

    CERN Document Server

    AUTHOR|(CDS)2141109; Seryi, Andrei; Abelleira, Jose; Cruz Alaniz, Emilia

    2017-01-01

    One of the main challenges when designing final focus systems of particle accelerators is maximising the beam stay clear in the strong quadrupole magnets of the inner triplet. Moreover, it is desirable to keep the quadrupoles in the inner triplet as short as possible, for space and cost reasons, but also to reduce chromaticity and simplify correction schemes. An algorithm that explores the triplet parameter space to optimise both these aspects was written. It uses thin lenses as a first approximation for a broad parameter scan, and MADX for more precise calculations. The thin-lens algorithm is significantly faster than a full scan using MADX and relatively precise at indicating the approximate area where the optimum solution lies.
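
    The thin-lens stage of such a code can be imitated with a brute-force scan over focal lengths using 2x2 transfer matrices. The layout, scan ranges and figure of merit below are illustrative assumptions; a real tool would pass the shortlisted candidates to MADX:

      import numpy as np
      from itertools import product

      def quad(f):      # thin-lens quadrupole, horizontal plane
          return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

      def drift(L):
          return np.array([[1.0, L], [0.0, 1.0]])

      best = None
      # Broad scan over focal lengths (m) of a focus-defocus-focus triplet.
      for f1, f2, f3 in product(np.linspace(5.0, 25.0, 21), repeat=3):
          M = drift(2.0) @ quad(f3) @ drift(1.0) @ quad(-f2) \
              @ drift(1.0) @ quad(f1) @ drift(10.0)
          # Toy figure of merit: demagnify an incoming parallel ray bundle
          # (small |M[0,0]|) while penalising strong, i.e. short, lenses.
          merit = abs(M[0, 0]) + 0.05 * (1.0 / f1 + 1.0 / f2 + 1.0 / f3)
          if best is None or merit < best[0]:
              best = (merit, f1, f2, f3)

      print("best (merit, f1, f2, f3):", tuple(round(float(v), 3) for v in best))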

  17. Optimising Signalised Intersection Using Wireless Vehicle Detectors

    DEFF Research Database (Denmark)

    Adjin, Daniel Michael Okwabi; Torkudzor, Moses; Asare, Jack

    Traffic congestion on roads wastes travel time. In this paper, we developed a vehicular traffic model to optimise a signalised intersection in Accra, using wireless vehicle detectors. Traffic volume gathered was extrapolated to cover 2011 and 2016 and was analysed to obtain the peak hour traffic volume causing congestion. The intersection was modelled and simulated in Synchro7 as an actuated signalised model using results from the analysed data. The model for morning peak periods gave optimal cycle lengths of 100 s and 150 s, with corresponding intersection delays of 48.9 s and 90.6 s in 2011 and 2016 respectively, while that for the evening was 55 s, giving delays of 14.2 s and 16.3 s respectively. It is shown that the model will improve traffic flow at the intersection.

  18. Dynamic optimisation of an industrial web process

    Directory of Open Access Journals (Sweden)

    M Soufian

    2008-09-01

    Full Text Available An industrial web process has been studied and it is shown that the underlying physics of such processes is governed by the Navier-Stokes partial differential equations with moving boundary conditions, which in turn have to be determined by the solution of the thermodynamics equations. The development of a two-dimensional continuous-discrete model structure based on this study is presented. Other models are constructed based on this model for better identification and optimisation purposes. The parameters of the proposed models are then estimated using real data obtained from identification experiments with the process plant. Various simulation tests for validation accompanied the design, development and real-time industrial implementation of an optimal controller for dynamic optimisation of this web process. It is shown that, in comparison with the traditional controller, the new controller resulted in better performance, an improvement in film quality and savings in raw materials. This demonstrates the efficiency and validity of the developed models.

  19. Recent perspectives on optimisation of radiological protection

    International Nuclear Information System (INIS)

    Robb, J.D.; Croft, J.R.

    1992-01-01

    The ALARA principle as a requirement in radiological protection has evolved from its theoretical roots. Based on several years' work, this paper provides a backdrop to practical approaches to ALARA for the 1990s. The key step, developing ALARA thinking so that it becomes an integral part of radiological protection programmes, is discussed using examples from the UK and France, as is the role of tools to help standardise judgements for decision-making. In its latest recommendations, ICRP has suggested that the optimisation of protection should be constrained by restrictions on the doses to individuals. This paper also considers the function of such restrictions for occupational, public and medical exposure, and in the design process. (author)

  20. Optimisation of parameters of DCD for PHWRs

    International Nuclear Information System (INIS)

    Velmurugan, S.; Sathyaseelan, V.S.; Narasimhan, S.V.; Mathur, P.K.

    1991-01-01

    A decontamination formulation based on EDTA, oxalic acid and citric acid was evaluated for its efficacy in removing oxide layers in PHWRs. An ion exchange system specifically suitable for fission-product-dominated contamination in PHWRs was optimised for the reagent regeneration stage of the decontamination process. An analysis of the nature of the complexed metal species formed in the dissolution process and electrochemical measurements were employed as tools to follow the course of oxide removal during dissolution. An attempt was made to understand the redeposition behaviour of various isotopes during the decontamination process. SEM and ESCA studies of metal coupons before and after dissolution were used to analyse the deposits in the above context. The pick-up of DCD reagents on the ion exchangers and material compatibility tests on carbon steel, Monel-400 and Zircaloy-2 with the decontaminant under the conditions of the decontamination experiment are reported. (author)

  1. Optimisation of Inulinase Production by Kluyveromyces bulgaricus

    Directory of Open Access Journals (Sweden)

    Darija Vranešić

    2002-01-01

    Full Text Available The present work is based on observation of the effects of pH and temperature of fermentation on the production of the microbial enzyme inulinase by Kluyveromyces marxianus var. bulgaricus. Inulinase hydrolyzes inulin, a polysaccharide which can be isolated from plants such as Jerusalem artichoke, chicory or dahlia, and transformed into pure fructose or fructooligosaccharides. Fructooligosaccharides have great potential in the food industry because they can be used as calorie-reduced compounds and noncariogenic sweeteners as well as soluble fibre and prebiotic compounds. Fructose formation from inulin is a single-step enzymatic reaction with yields of up to 95 % fructose. On the contrary, conventional fructose production from starch needs at least three enzymatic steps, yielding only 45 % fructose. The process of inulinase production was optimised using the experimental design method. The pH value of the cultivation medium proved to be the most significant variable and should be maintained at the optimum value of 3.6. The effect of temperature was slightly lower and optimal values were between 30 and 33 °C. At a low pH value of the cultivation medium, the microorganism was not able to produce enough enzyme and enzyme activities were low. A similar effect was caused by high temperature. The highest values of enzyme activities were achieved at optimal fermentation conditions, and the values were 100.16–124.36 IU/mL (with sucrose as substrate for determination of enzyme activity) or 8.6–11.6 IU/mL (with inulin as substrate), respectively. The method of factorial design and response surface analysis makes it possible to study several factors simultaneously, to quantify the individual effect of each factor and to investigate their possible interactions. As a comparison to this method, optimisation of a physiological enzyme activity model depending on pH and temperature was also studied.

  2. Optimisation on processing parameters for minimising warpage on side arm using response surface methodology (RSM) and particle swarm optimisation (PSO)

    Science.gov (United States)

    Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.

    2017-09-01

    This study presents the application of optimisation methods to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was integrated into this study to analyse the warpage. A design of experiments (DOE) for response surface methodology (RSM) was constructed, and using the equation from RSM, particle swarm optimisation (PSO) was applied. The optimisation yields optimised processing parameters with minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters. Parameter selection was based on the most significant factors affecting warpage reported by previous researchers. The results show that warpage was improved by 28.16% for RSM and 28.17% for PSO; the improvement of PSO over RSM is only 0.01%. Thus, optimisation using RSM is already sufficient to give the best combination of parameters and an optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
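
    The PSO step is easy to sketch once the RSM has supplied a closed-form response surface. The quadratic surface below is a made-up stand-in for the fitted warpage model, with five normalised parameters in [0, 1]:

      import numpy as np

      rng = np.random.default_rng(4)

      def warpage(x):
          """Hypothetical RSM surface: warpage vs. five normalised parameters
          (mould temp, melt temp, packing pressure, packing time, cooling time)."""
          coef = np.array([0.8, 0.5, 1.6, 0.7, 0.4])   # packing pressure dominates
          return 1.0 + np.sum(coef * (x - 0.3) ** 2) - 0.2 * x[2] * x[3]

      n_particles, n_dim, iters = 30, 5, 200
      x = rng.random((n_particles, n_dim))             # positions in [0, 1]^5
      v = np.zeros_like(x)
      pbest, pbest_f = x.copy(), np.array([warpage(p) for p in x])
      gbest = pbest[np.argmin(pbest_f)].copy()

      for _ in range(iters):
          r1, r2 = rng.random((2, n_particles, n_dim))
          v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
          x = np.clip(x + v, 0.0, 1.0)
          f = np.array([warpage(p) for p in x])
          better = f < pbest_f
          pbest[better], pbest_f[better] = x[better], f[better]
          gbest = pbest[np.argmin(pbest_f)].copy()

      print("minimum warpage %.4f at" % warpage(gbest), np.round(gbest, 3))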

  3. Model-based online optimisation. Pt. 1: active learning; Modellbasierte Online-Optimierung moderner Verbrennungsmotoren. T. 1: Aktives Lernen

    Energy Technology Data Exchange (ETDEWEB)

    Poland, J.; Knoedler, K.; Zell, A. [Tuebingen Univ. (Germany). Lehrstuhl fuer Rechnerarchitektur; Fleischhauer, T.; Mitterer, A.; Ullmann, S. [BMW Group (Germany)

    2003-05-01

    This two-part article presents the model-based optimisation algorithm ''mbminimize''. It was developed in a cooperative project of the University of Tuebingen and the BMW Group for the purpose of optimising internal combustion engines online on the engine test bed. The first part concentrates on the basic algorithmic design, as well as on modelling, experimental design and active learning. The second part will discuss strategies for dealing with limits such as knocking. (orig.)
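
    The active-learning loop behind a model-based online optimiser of this kind can be caricatured as: fit a cheap surrogate to the operating points measured so far, then measure next where the surrogate is most promising or most uncertain. Everything below, from the 'test bench' response to the bootstrap ensemble of quadratic surrogates, is an illustrative stand-in rather than the mbminimize algorithm itself:

      import numpy as np

      rng = np.random.default_rng(5)

      def test_bench(x):
          """Stand-in for a noisy engine test-bed measurement at point x."""
          return -(x - 0.6) ** 2 + 0.02 * rng.normal()

      X = list(rng.random(8))              # a few initial operating points
      y = [test_bench(x) for x in X]
      grid = np.linspace(0.0, 1.0, 101)

      for _ in range(15):
          # Bootstrap ensemble of quadratic surrogates fitted to the data so far.
          preds = []
          for _ in range(20):
              idx = rng.integers(0, len(X), len(X))
              c = np.polyfit(np.asarray(X)[idx], np.asarray(y)[idx], 2)
              preds.append(np.polyval(c, grid))
          preds = np.asarray(preds)
          # Active learning: measure where prediction plus uncertainty is largest.
          x_next = float(grid[np.argmax(preds.mean(axis=0) + preds.std(axis=0))])
          X.append(x_next)
          y.append(test_bench(x_next))

      print("best measured point: x=%.3f, y=%.4f" % (X[int(np.argmax(y))], max(y)))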

  4. Optimisation multi-objectif des systemes energetiques

    Science.gov (United States)

    Dipama, Jean

    The increasing demand for energy and the environmental concerns related to greenhouse gas emissions lead more and more private or public utilities to turn to nuclear energy as an alternative for the future. Nuclear power plants are therefore expected to undergo a large expansion in the coming years, and improved technologies will be put in place to support the development of these plants. This thesis considers the optimization of the thermodynamic cycle of the secondary loop of the Gentilly-2 nuclear power plant in terms of output power and thermal efficiency. Investigations are carried out to determine the optimal operating conditions of steam power cycles through the judicious use of steam extraction at the different stages of the turbines. Whether in the case of superheating or regeneration, we are confronted in all cases with an optimization problem involving two conflicting objectives, since increasing the efficiency implies a decrease in mechanical work and vice versa. Solving this kind of problem does not lead to a unique solution, but to a set of solutions that are trade-offs between the conflicting objectives. To find all of these solutions, called Pareto optimal solutions, an appropriate optimization algorithm is required. Before starting the optimization of the secondary loop, we developed a thermodynamic model of the secondary loop which includes models of the main thermal components (e.g., turbine, moisture separator-superheater, condenser, feedwater heater and deaerator). This model is used to calculate the thermodynamic state of the steam and water at the different points of the installation. The thermodynamic model was developed with Matlab and validated by comparing its predictions with the operating data provided by the engineers of the power plant. The optimizer, developed in VBA (Visual Basic for Applications), uses an optimization algorithm based on the principle of genetic algorithms, a stochastic ...

  5. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10⁴ parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
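
    The parallelisation idea, independent walkers that pool their statistics at each weight update, can be caricatured in a few lines. The random walk over an energy ladder below stands in for the GPU walkers of the paper:

      import numpy as np

      rng = np.random.default_rng(6)
      n_bins, n_walkers, sweeps = 20, 8, 200
      log_w = np.zeros(n_bins)                 # multicanonical log-weights, shared
      state = rng.integers(0, n_bins, n_walkers)

      for it in range(30):
          hist = np.zeros(n_bins)
          for _ in range(sweeps):
              # Each walker proposes a +/-1 move on the ladder (parallelisable).
              prop = np.clip(state + rng.choice([-1, 1], n_walkers), 0, n_bins - 1)
              acc = rng.random(n_walkers) < np.exp(log_w[prop] - log_w[state])
              state = np.where(acc, prop, state)
              np.add.at(hist, state, 1.0)
          # Communication step: merge all walkers' statistics into one update
          # W <- W / H, which progressively flattens the sampled histogram.
          log_w -= np.log(hist + 1.0)
          log_w -= log_w.max()

      print("visit histogram after flattening:", hist.astype(int))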

  6. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

  7. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop Parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most of this book.

  8. Parallel visualization on leadership computing resources

    Energy Technology Data Exchange (ETDEWEB)

    Peterka, T; Ross, R B [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Shen, H-W [Department of Computer Science and Engineering, Ohio State University, Columbus, OH 43210 (United States); Ma, K-L [Department of Computer Science, University of California at Davis, Davis, CA 95616 (United States); Kendall, W [Department of Electrical Engineering and Computer Science, University of Tennessee at Knoxville, Knoxville, TN 37996 (United States); Yu, H, E-mail: tpeterka@mcs.anl.go [Sandia National Laboratories, California, Livermore, CA 94551 (United States)

    2009-07-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  9. Parallel visualization on leadership computing resources

    International Nuclear Information System (INIS)

    Peterka, T; Ross, R B; Shen, H-W; Ma, K-L; Kendall, W; Yu, H

    2009-01-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  10. OpenMP Issues Arising in the Development of Parallel BLAS and LAPACK Libraries

    Directory of Open Access Journals (Sweden)

    C. Addison

    2003-01-01

    Full Text Available Dense linear algebra libraries need to cope efficiently with a range of input problem sizes and shapes. Inherently this means that parallel implementations have to exploit parallelism wherever it is present. While OpenMP allows relatively fine-grain parallelism to be exploited in a shared memory environment, it currently lacks features to make it easy to partition computation over multiple array indices or to overlap sequential and parallel computations. The inherently flexible nature of shared memory paradigms such as OpenMP poses other difficulties when it becomes necessary to optimise performance across successive parallel library calls. Notions borrowed from distributed memory paradigms, such as explicit data distributions, help address some of these problems, but the focus on data rather than work distribution appears misplaced in an SMP context.

  11. Profile control studies for JET optimised shear regime

    Energy Technology Data Exchange (ETDEWEB)

    Litaudon, X.; Becoulet, A.; Eriksson, L.G.; Fuchs, V.; Huysmans, G.; How, J.; Moreau, D.; Rochard, F.; Tresset, G.; Zwingmann, W. [Association Euratom-CEA, CEA/Cadarache, Dept. de Recherches sur la Fusion Controlee, DRFC, 13 - Saint-Paul-lez-Durance (France); Bayetti, P.; Joffrin, E.; Maget, P.; Mayorat, M.L.; Mazon, D.; Sarazin, Y. [JET Abingdon, Oxfordshire (United Kingdom); Voitsekhovitch, I. [Universite de Provence, LPIIM, Aix-Marseille 1, 13 (France)

    2000-03-01

    This report summarises the profile control studies, i.e. preparation and analysis of JET Optimised Shear plasmas, carried out during the year 1999 within the framework of the Task-Agreement (RF/CEA/02) between JET and the Association Euratom-CEA/Cadarache. We report on our participation in the preparation of the JET Optimised Shear experiments together with their comprehensive analyses and the modelling. Emphasis is put on the various aspects of pressure profile control (core and edge pressure) together with detailed studies of current profile control by non-inductive means, in the prospects of achieving steady, high performance, Optimised Shear plasmas. (authors)

  12. Program For Parallel Discrete-Event Simulation

    Science.gov (United States)

    Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.

    1991-01-01

    User does not have to add any special logic to aid in synchronization. Time Warp Operating System (TWOS) computer program is special-purpose operating system designed to support parallel discrete-event simulation. Complete implementation of Time Warp mechanism. Supports only simulations and other computations designed for virtual time. Time Warp Simulator (TWSIM) subdirectory contains sequential simulation engine interface-compatible with TWOS. TWOS and TWSIM written in, and support simulations in, C programming language.

  13. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. (CERN); Tejedor, E. (CERN); Guiraud, E. (CERN); Ganis, G. (CERN); Mato, P. (CERN); Moneta, L. (CERN); Valls Pla, X. (CERN); Canal, P. (Fermilab)

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  14. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  15. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    1998-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were ...

  16. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  17. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  18. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  19. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
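
    For reference, the seeding rule itself is short. The sketch below is plain serial NumPy; the cited work parallelises the distance computation (the hot loop marked below) across CUDA, OpenMP or XMT threads:

      import numpy as np

      def kmeanspp_seeds(points, k, rng):
          """k-means++: pick seeds with probability proportional to the squared
          distance from the nearest seed chosen so far."""
          seeds = [points[rng.integers(len(points))]]
          for _ in range(k - 1):
              # The distance computation below is the hot loop that the GPU /
              # multicore / multithreaded implementations parallelise over points.
              d2 = np.min([np.sum((points - s) ** 2, axis=1) for s in seeds], axis=0)
              probs = d2 / d2.sum()
              seeds.append(points[rng.choice(len(points), p=probs)])
          return np.array(seeds)

      rng = np.random.default_rng(7)
      pts = np.vstack([rng.normal(c, 0.2, size=(100, 2)) for c in (0.0, 2.0, 5.0)])
      print(kmeanspp_seeds(pts, k=3, rng=rng).round(2))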

  20. Parallel plate detectors

    International Nuclear Information System (INIS)

    Gardes, D.; Volkov, P.

    1981-01-01

    Parallel plate avalanche counters (PPACs) of 5×3 cm² (timing only) and 15×5 cm² (timing and position) are considered. The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters. [fr]

  1. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  2. Algorithme intelligent d'optimisation d'un design structurel de grande envergure

    Science.gov (United States)

    Dominique, Stephane

    genetic algorithm that prevents new individuals from being born too close to previously evaluated solutions. The restricted area becomes smaller or larger during the optimisation to allow global or local search when necessary. Also, a new search operator named Substitution Operator is incorporated in GATE. This operator allows an ANN surrogate model to guide the algorithm toward the most promising areas of the design space. The suggested CBR approach and GATE were tested on several simple test problems, as well as on the industrial problem of designing a gas turbine engine rotor's disc. These results are compared to other results obtained for the same problems by many other popular optimisation algorithms, such as (depending on the problem) gradient algorithms, a binary genetic algorithm, a real-number genetic algorithm, a genetic algorithm using multiple-parent crossovers, a differential evolution genetic algorithm, the Hooke & Jeeves generalized pattern search method and POINTER from the software I-SIGHT 3.5. Results show that GATE is quite competitive, giving the best results for 5 of the 6 constrained optimisation problems. GATE also provided the best results of all on problems produced by a Maximum Set Gaussian landscape generator. Finally, GATE provided a disc 4.3% lighter than the best other tested algorithm (POINTER) for the gas turbine engine rotor's disc problem. One drawback of GATE is a lesser efficiency for highly multimodal unconstrained problems, for which it gave quite poor results with respect to its implementation cost. To conclude, according to the preliminary results obtained during this thesis, the suggested CBR process, combined with GATE, seems to be a very good candidate to automate and accelerate the structural design of mechanical devices, potentially reducing significantly the cost of industrial preliminary design processes.

  3. Highly efficient parallel direct solver for solving dense complex matrix equations from method of moments

    Directory of Open Access Journals (Sweden)

    Yan Chen

    2017-03-01

    Full Text Available Based on a vectorised and cache-optimised kernel, a parallel lower-upper (LU) decomposition with a novel communication-avoiding pivoting scheme is developed to solve the dense complex matrix equations generated by the method of moments. Fine-grain data rearrangement and assembler instructions are adopted to reduce memory access times and improve CPU cache utilisation, which also facilitates vectorisation of the code. By grouping processes in a binary tree, a parallel pivoting scheme is designed to optimise the communication pattern and thus reduce the solving time of the proposed solver. Two large electromagnetic radiation problems are solved on two supercomputers, respectively, and the numerical results demonstrate that the proposed method outperforms those in open source and commercial libraries.
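
    For orientation, a plain right-looking LU factorisation with partial pivoting is sketched below. The cited solver goes much further, replacing the per-column pivot search (a communication hot-spot, marked below) with a tournament-style scheme across process groups and vectorising the trailing update:

      import numpy as np

      def lu_partial_pivot(A):
          """Right-looking LU with partial pivoting: returns P, L, U with PA = LU."""
          A = A.astype(complex)
          n = A.shape[0]
          piv = np.arange(n)
          for k in range(n - 1):
              # Pivot search down column k: the step that communication-avoiding
              # schemes reorganise into a reduction over process groups.
              p = k + int(np.argmax(np.abs(A[k:, k])))
              if p != k:
                  A[[k, p]] = A[[p, k]]
                  piv[[k, p]] = piv[[p, k]]
              A[k + 1:, k] /= A[k, k]
              # Rank-1 trailing-submatrix update: the vectorised, cache-friendly kernel.
              A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
          L = np.tril(A, -1) + np.eye(n)
          U = np.triu(A)
          P = np.eye(n)[piv]
          return P, L, U

      rng = np.random.default_rng(10)
      M = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
      P, L, U = lu_partial_pivot(M)
      print("max |PA - LU| =", np.max(np.abs(P @ M - L @ U)))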

  4. Weight Optimisation of Steel Monopile Foundations for Offshore Windfarms

    DEFF Research Database (Denmark)

    Fog Gjersøe, Nils; Bouvin Pedersen, Erik; Kristensen, Brian

    2015-01-01

    The potential for mass reduction of monopiles in offshore windfarms using current design practice is investigated. Optimisation by sensitivity analysis is carried out for the following important parameters: wall thickness distribution between tower and monopile, soil stiffness, damping ratio...

  5. Protection against natural radiation: Optimisation and decision exercises

    International Nuclear Information System (INIS)

    O'Riordan, M.C.

    1984-02-01

    Six easy exercises are presented in which cost-benefit analysis is used to optimise protection against natural radiation or to decide whether protection is appropriate. The exercises are illustrative only and do not commit the Board. (author)

  6. Optimisation of wheat-sprouted soybean flour bread using response ...

    African Journals Online (AJOL)

    Jideani, V.A.; Onwubali, F.C.

    2009-11-16

    Nov 16, 2009 ... Full Length Research Paper. Optimisation of ... Victoria A. Jideani and Felix C. Onwubali. Department of Food Technology, Cape Peninsula University of Technology, P.O. Box 652, Cape Town 8000, South Africa.

  7. Distributed optimisation problem with communication delay and external disturbance

    Science.gov (United States)

    Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu

    2017-12-01

    This paper investigates the distributed optimisation problem for multi-agent systems (MASs) with the simultaneous presence of an external disturbance and communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed to solve the distributed optimisation problem for the MASs with the simultaneous presence of disturbance and communication delay. Moreover, in the proposed algorithm, each agent interacts with its neighbours through the connected topology, and the delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays, respectively, to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.
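
    A toy version of such a distributed algorithm is sketched below: each agent holds a private convex cost, exchanges one-step-delayed states with its neighbours over a connected ring, and descends its own gradient. The network, costs and gains are invented, and the internal-model disturbance compensation is omitted:

      import numpy as np

      # Four agents on a ring; agent i privately knows f_i(x) = (x - a_i)^2,
      # so the team optimum of sum_i f_i is the mean of a, here 4.0.
      a = np.array([1.0, 3.0, 5.0, 7.0])
      neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
      x = np.zeros(4)
      buffer = [x.copy(), x.copy()]       # holds delayed states (delay = 1 step)
      alpha, beta = 0.05, 0.3             # gradient and consensus gains

      for t in range(600):
          x_delayed = buffer.pop(0)       # neighbour states arrive one step late
          new_x = x.copy()
          for i in range(4):
              consensus = sum(x_delayed[j] - x[i] for j in neighbours[i])
              grad_i = 2.0 * (x[i] - a[i])
              new_x[i] = x[i] + beta * consensus - alpha * grad_i
          buffer.append(new_x.copy())
          x = new_x

      print("agent states:", np.round(x, 3), "(optimum 4.0)")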

  8. Share-of-Surplus Product Line Optimisation with Price Levels

    Directory of Open Access Journals (Sweden)

    X. G. Luo

    2014-01-01

    Full Text Available Kraus and Yano (2003) established the share-of-surplus product line optimisation model and developed a heuristic procedure for this nonlinear mixed-integer optimisation model. In their model, the price of a product is defined as a continuous decision variable. However, because product line optimisation is a planning process in the early stage of product development, pricing decisions usually are not very precise. In this research, a nonlinear integer programming share-of-surplus product line optimisation model that allows the selection of candidate price levels for products is established. The model is further transformed into an equivalent linear mixed-integer optimisation model by applying linearisation techniques. Experimental results in different market scenarios show that the computation time of the transformed model is much less than that of the original model.

  9. Optimising a fall out dust monitoring sampling programme at a ...

    African Journals Online (AJOL)

    Key words: fall out dust monitoring, cement plant, optimising, air pollution sampling, fall out dust sampler locations. ... applied for those areas where controls are in place. Sampling ... mass balance in the total cement manufacturing process.

  10. Issues with performance measures for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available Presented at the Symposium on Computational Intelligence in Dynamic and Uncertain Environments (CIDUE), Mexico, 20-23 June 2013. Issues with Performance Measures for Dynamic Multi-objective Optimisation. Mardé Helbig, CSIR Meraka Institute, Brummeria, South Africa ...

  11. Optimisation Study on the Production of Anaerobic Digestate ...

    African Journals Online (AJOL)

    optimise the production of ADC from organic fractions of domestic wastes and the effects of ADC amendments on soil ... (22%), cooked meat (9%), lettuce (11%), carrots (3%), potato (44%) ... seed was obtained from a mesophilic anaerobic ...

  12. Algorithm for optimisation of paediatric chest radiography

    International Nuclear Information System (INIS)

    Kostova-Lefterova, D.

    2016-01-01

    The purpose of this work was to assess the current practice and patient doses in paediatric chest radiography in a large university hospital, where the X-ray unit is used by the paediatric department for respiratory diseases, and to recommend and apply optimised protocols that reduce patient dose while maintaining diagnostic image quality. The practice of two different radiographers was studied. The results were compared with the existing practice in paediatric chest radiography, and opportunities for optimisation were identified in order to reduce patient doses. A methodology was developed for optimisation of the X-ray examinations by grouping children into age groups, or according to other appropriate indications, and creating an algorithm for proper selection of the exposure parameters for each group. The algorithm for the optimisation of paediatric chest radiography reduced patient doses (PKA, organ dose, effective dose) by a factor of 1.5 to 6 for the different age groups, the average glandular dose by up to a factor of 10, and the lung dose by a factor of 2 to 5. The resulting X-ray images were of good diagnostic quality. Subjectivity in the choice of exposure parameters was reduced, and standardisation was achieved in the work of the radiographers. The roles of the radiologist, the medical physicist and the radiographer in the optimisation process were shown, demonstrating the effect of teamwork in reducing patient doses while keeping adequate image quality. Key words: Chest Radiography. Paediatric Radiography. Optimization. Radiation Exposure. Radiation Protection

  13. Optimising preterm nutrition: present and future

    LENUS (Irish Health Repository)

    Brennan, Ann-Marie

    2016-04-01

    The goal of preterm nutrition in achieving growth and body composition approximating that of the fetus of the same postmenstrual age is difficult to achieve. Current nutrition recommendations depend largely on expert opinion, due to lack of evidence, and are primarily birth weight based, with no consideration given to gestational age and/or need for catch-up growth. Assessment of growth is based predominately on anthropometry, which gives insufficient attention to the quality of growth. The present paper provides a review of the current literature on the nutritional management and assessment of growth in preterm infants. It explores several approaches that may be required to optimise nutrient intakes in preterm infants, such as personalising nutritional support, collection of nutrient intake data in real-time, and measurement of body composition. In clinical practice, the response to inappropriate nutrient intakes is delayed as the effects of under- or overnutrition are not immediate, and there is limited nutritional feedback at the cot-side. The accurate and non-invasive measurement of infant body composition, assessed by means of air displacement plethysmography, has been shown to be useful in assessing quality of growth. The development and implementation of personalised, responsive nutritional management of preterm infants, utilising real-time nutrient intake data collection, with ongoing nutritional assessments that include measurement of body composition is required to help meet the individual needs of preterm infants.

  14. Optimising Boltzmann codes for the PLANCK era

    International Nuclear Information System (INIS)

    Hamann, Jan; Lesgourgues, Julien; Balbi, Amedeo; Quercellini, Claudia

    2009-01-01

    High precision measurements of the Cosmic Microwave Background (CMB) anisotropies, as can be expected from the PLANCK satellite, will require high-accuracy theoretical predictions as well. One possible source of theoretical uncertainty is the numerical error in the output of the Boltzmann codes used to calculate angular power spectra. In this work, we carry out an extensive study of the numerical accuracy of the public Boltzmann code CAMB, and identify a set of parameters which determine the error of its output. We show that at the current default settings, the cosmological parameters extracted from data of future experiments like Planck can be biased by several tenths of a standard deviation for the six parameters of the standard ΛCDM model, and potentially more seriously for extended models. We perform an optimisation procedure that leads the code to achieve sufficient precision while at the same time keeping the computation time within reasonable limits. Our conclusion is that the contribution of numerical errors to the theoretical uncertainty of model predictions is well under control—the main challenges for more accurate calculations of CMB spectra will be of an astrophysical nature instead

  15. Geometrical exploration of a flux-optimised sodium receiver through multi-objective optimisation

    Science.gov (United States)

    Asselineau, Charles-Alexis; Corsi, Clothilde; Coventry, Joe; Pye, John

    2017-06-01

    A stochastic multi-objective optimisation method is used to determine receiver geometries with maximum second law efficiency, minimal average temperature and minimal surface area. The method is able to identify a set of Pareto optimal candidates that show advantageous geometrical features, mainly in being able to maximise the intercepted flux within the geometrical boundaries set. Receivers with first law thermal efficiencies ranging from 87% to 91% are also evaluated using the second law of thermodynamics and found to have similar efficiencies of over 60%, highlighting the influence that the geometry can play in the maximisation of the work output of receivers by influencing the distribution of the flux from the concentrator.
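
    Extracting the Pareto optimal candidates from a population of evaluated geometries reduces to a non-dominated filter over the three objectives named above (efficiency maximised; temperature and area minimised). The candidate data below are random placeholders:

      import numpy as np

      rng = np.random.default_rng(8)
      # Columns: second-law efficiency (max), average temperature (min), area (min).
      cand = np.column_stack([
          rng.uniform(0.55, 0.65, 50),
          rng.uniform(700.0, 900.0, 50),
          rng.uniform(1.0, 3.0, 50),
      ])

      def dominates(a, b):
          """a dominates b: no worse in every objective, strictly better in one."""
          no_worse = (a[0] >= b[0]) and (a[1] <= b[1]) and (a[2] <= b[2])
          strictly = (a[0] > b[0]) or (a[1] < b[1]) or (a[2] < b[2])
          return no_worse and strictly

      pareto = [i for i, a in enumerate(cand)
                if not any(dominates(b, a) for b in cand)]
      print("non-dominated candidates:", pareto)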

  16. OPTIMISATION OF A DRIVE SYSTEM AND ITS EPICYCLIC GEAR SET

    OpenAIRE

    Bellegarde, Nicolas; Dessante, Philippe; Vidal, Pierre; Vannier, Jean-Claude

    2007-01-01

    International audience; This paper describes the design of a drive consisting of a DC motor, a speed reducer, a lead screw transformation system, a power converter and its associated DC source. The objective is to reduce the mass of the system. Indeed, the volume and weight optimisation of an electrical drive is an important issue for embedded applications. Here, we present an analytical model of the system in a specific application and afterwards an optimisation of the motor and speed reduce...

  17. Multiobjective optimisation of bogie suspension to boost speed on curves

    Science.gov (United States)

    Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor

    2016-01-01

    To improve safety and maximum admissible speed in different operational scenarios, multiobjective optimisation of the bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in the multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on track plane accelerations up to 1.5 m/s2. To reduce the number of design parameters for optimisation and improve the computational efficiency, a global sensitivity analysis is accomplished using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on a genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The bogie conventional secondary and primary suspension components are chosen as the design parameters in the first two steps, respectively. In the last step semi-active suspension is in focus. The input electrical current to magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and the respective effects on bogie dynamics are explored. The safety Pareto optimised results are compared with those associated with in-service values. The global sensitivity analysis and multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised values of the design parameters makes it possible to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.

  18. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
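
    The two-phase scheme described above can be sketched as follows, shown serially with explicit per-processor loops standing in for concurrent execution, and with objects reduced to 1-D intervals for brevity:

      import numpy as np

      rng = np.random.default_rng(9)
      n_proc = 4
      # Grid: unit interval cut into n_proc slabs (the distinct grid portions).
      slabs = np.linspace(0.0, 1.0, n_proc + 1)

      # Objects: intervals (xmin, xmax), divided into n_proc distinct sets.
      lo = rng.random((40, 1))
      boxes = np.hstack([lo, lo + 0.1])
      obj_sets = np.array_split(boxes, n_proc)

      # Phase 1: each "processor" finds which slabs bound each of its objects.
      routed = [[] for _ in range(n_proc)]
      for p in range(n_proc):                   # would run concurrently
          for box in obj_sets[p]:
              for s in range(n_proc):
                  if box[0] < slabs[s + 1] and box[1] > slabs[s]:   # overlap test
                      routed[s].append(box)

      # Phase 2: each "processor" populates its own slab with the routed objects.
      population = {s: len(routed[s]) for s in range(n_proc)}  # would run concurrently
      print("objects per grid portion:", population)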

  19. More parallel please

    DEFF Research Database (Denmark)

    Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert

    Abstract [en] More parallel, please is the result of the work of an inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations and finally, you may study the reasoning behind each of them. At the end of the text, we give ...

  20. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Full Text Available Parallel moving mechanical systems are solid, fast, and accurate. Among parallel systems, the Stewart platform is notable as the oldest, being fast, solid and precise. This work outlines the main elements of Stewart platforms. It begins with the platform geometry and its kinematic elements, and then presents some elements of its dynamics; the primary dynamic quantity derived is the kinetic energy of the entire Stewart platform mechanism. The kinematics of the mobile platform are then tracked using rotation matrices. If a structural element consists of two parts in relative translation, it is more convenient for the drive train, and especially for the dynamics, to represent it as a single moving component. There are thus seven moving parts: the six motor elements (the feet), to which the mobile platform is added as the seventh, plus one fixed base.
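
    For concreteness, a small sketch of the standard Stewart platform inverse kinematics implied by the rotation-matrix treatment above: each leg (motor element) has length |p + R·b_i − a_i|, where a_i and b_i are the base and platform joint positions. The joint coordinates and pose below are hypothetical.

        import numpy as np

        def rot_z(theta):
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

        # Hypothetical joint layouts: base joints a_i, platform joints b_i
        angles = np.deg2rad([0, 60, 120, 180, 240, 300])
        a = np.stack([2.0 * np.cos(angles), 2.0 * np.sin(angles), np.zeros(6)], axis=1)
        b = np.stack([1.0 * np.cos(angles), 1.0 * np.sin(angles), np.zeros(6)], axis=1)

        p = np.array([0.1, -0.05, 1.5])  # platform position
        R = rot_z(np.deg2rad(5.0))       # platform orientation (yaw only, for brevity)

        # Leg lengths |p + R b_i - a_i| for the six motor elements (feet)
        legs = p + b @ R.T - a
        print(np.linalg.norm(legs, axis=1))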

  1. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  2. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of algorithmically specialized computers. The book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. Architectures and algorithms for digital signal, speech, and image processing, as well as specialized architectures for numerical computations, are also elaborated. Other topics include a model for analyzing generalized inter-processor communication, a pipelined architecture for search tree maintenance, and specialized computer organization for raster...

  3. Optimisation of Lagrangian Flash Flood Microsensors Dropped by Unmanned Aerial Vehicle

    KAUST Repository

    Abdulaal, Mohammed

    2014-05-01

    Floods are the most common natural disasters, causing thousands of casualties every year in the world. Flash flood events in particular are deadly because of the short timescales on which they occur. Classical sensing solutions such as fixed wireless sensor networks or satellite imagery are either too expensive or too inaccurate. Nevertheless, Unmanned Aerial Vehicles equipped with mobile microsensors could be capable of sensing flash floods in real time for a low overall cost, saving lives and greatly improving the efficiency of the emergency response. Using flood simulation data, we show that this system could be used to detect flash floods. We also present an ongoing implementation of this system using 3D printed sensors and sensor delivery systems on a UAV testbed, as well as some preliminary results.

  4. Design of optimised backstepping controller for the synchronisation of chaotic Colpitts oscillator using shark smell algorithm

    Science.gov (United States)

    Fouladi, Ehsan; Mojallali, Hamed

    2018-01-01

    In this paper, an adaptive backstepping controller is tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using the shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of the particle swarm optimisation (PSO) algorithm. Simulation results show better performance in terms of accuracy and convergence for the proposed method compared to a PSO-optimised or non-optimised backstepping controller.

  5. Performance evaluation for compressible flow calculations on five parallel computers of different architectures

    International Nuclear Information System (INIS)

    Kimura, Toshiya.

    1997-03-01

    A two-dimensional explicit Euler solver has been implemented on five MIMD parallel computers of different machine architectures at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. These parallel computers are the Fujitsu VPP300, NEC SX-4, CRAY T94, IBM SP2, and Hitachi SR2201. The code was parallelized by several parallelization methods, and a typical compressible flow problem was calculated for different grid sizes while varying the number of processors. Their effective performances for parallel calculations, such as calculation speed, speed-up ratio and parallel efficiency, have been investigated and evaluated. The communication time among processors has also been measured and evaluated. As a result, the differences in performance and characteristics between vector-parallel and scalar-parallel computers can be pointed out, providing basic data for the efficient use of parallel computers and for large-scale CFD simulations on parallel computers. (author)
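
    The quantities evaluated follow the usual definitions: speed-up S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p. A minimal illustration with hypothetical wall-clock timings:

        # Hypothetical wall-clock times (seconds) for a fixed problem size
        timings = {1: 1200.0, 2: 640.0, 4: 350.0, 8: 205.0, 16: 130.0}

        t1 = timings[1]
        for p, tp in sorted(timings.items()):
            speedup = t1 / tp          # S(p) = T(1) / T(p)
            efficiency = speedup / p   # E(p) = S(p) / p
            print(f"p={p:3d}  speed-up={speedup:5.2f}  efficiency={efficiency:6.1%}")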

  6. Optimisation of the formulation of a bubble bath by a chemometric approach market segmentation and optimisation.

    Science.gov (United States)

    Marengo, Emilio; Robotti, Elisa; Gennaro, Maria Carla; Bertetto, Mariella

    2003-03-01

    The optimisation of the formulation of a commercial bubble bath was performed by chemometric analysis of panel test results. A first panel test was performed to choose the best essence among four proposed to the consumers; the essence chosen was used in the revised commercial bubble bath. Afterwards, the effect of changing the amounts of four components of the bubble bath (the primary surfactant, the essence, the hydrating agent and the colouring agent) was studied by a fractional factorial design. The segmentation of the bubble bath market was performed by a second panel test, in which the consumers were requested to evaluate the samples coming from the experimental design. The results were then treated by principal component analysis. The market had two segments: people preferring a product with a rich formulation, and people preferring a poor one. The final target, i.e. the optimisation of the formulation for each segment, was achieved by calculating regression models relating the subjective evaluations given by the panel to the compositions of the samples. The regression models allowed the identification of the best formulations for the two segments of the market.

  7. LHCb: Optimising query execution time in LHCb Bookkeeping System using partition pruning and partition wise joins

    CERN Multimedia

    Mathe, Z

    2013-01-01

    The LHCb experiment produces a huge amount of data which has associated metadata such as run number, data taking condition (detector status when the data was taken), simulation condition, etc. The data are stored in files, replicated on the Computing Grid around the world. The LHCb Bookkeeping System provides methods for retrieving datasets based on their metadata. The metadata are stored in a hybrid database model, which is a mixture of relational and hierarchical database models and is based on the Oracle Relational Database Management System (RDBMS). The database access has to be reliable and fast. In order to achieve high timing performance, the tables are partitioned and the queries are executed in parallel. When large amounts of data are stored, partition pruning is essential for database performance, because it reduces the amount of data retrieved from the disk and optimises resource utilisation. The research presented here focuses on the extended composite partitioning strategy such as rang...

  8. An exergy-based multi-objective optimisation model for energy retrofit strategies in non-domestic buildings

    International Nuclear Information System (INIS)

    García Kerdan, Iván; Raslan, Rokia; Ruyssevelt, Paul

    2016-01-01

    While the building sector has a significant thermodynamic improvement potential, exergy analysis has been shown to provide new insight for the optimisation of building energy systems. This paper presents an exergy-based multi-objective optimisation tool that aims to assess the impact of a diverse range of retrofit measures, with a focus on non-domestic buildings. EnergyPlus was used as the dynamic calculation engine for first-law analysis, while a Python add-on was developed to link dynamic exergy analysis and a genetic algorithm (GA) optimisation process with the aforementioned software. Two UK archetype case studies (an office and a primary school) were used to test the feasibility of the proposed framework. Different combinations of measures based on retrofitting the envelope insulation levels and applying different HVAC configurations were assessed. The objective functions in this study are annual energy use, occupants' thermal comfort, and total building exergy destruction. A large range of optimal solutions was achieved, highlighting the framework's capabilities. The model achieved improvements of 53% in annual energy use, 51% in exergy destruction and 66% in thermal comfort for the school building, and 50%, 33%, and 80%, respectively, for the office building. This approach can be extended by using exergoeconomic optimisation. - Highlights: • Integration of dynamic exergy analysis into a retrofit-oriented simulation tool. • Two UK non-domestic building archetypes are used as case studies. • The model delivers non-dominated solutions based on energy, exergy and comfort. • Exergy destruction of ERMs is optimised using genetic algorithms. • Strengths and limitations of the proposed exergy-based framework are discussed.

  9. Optimisation of integrated energy and materials systems

    International Nuclear Information System (INIS)

    Gielen, D.J.; Okken, P.A.

    1994-06-01

    To define cost-effective long-term CO2 reduction strategies, an integrated energy and materials system model for the Netherlands for the period 2000-2040 has been developed. The model is based on the energy system model MARKAL, which configures an optimal mix of technologies to satisfy the specified energy and product/materials service demands. This study concentrates on CO2 emission reduction in the materials system. For this purpose, the energy system model is enlarged with a materials system model including all steps 'from cradle to grave'. The materials system model includes 29 materials, 20 product groups and 30 waste materials. The system is divided into seven types of technologies; 250 technologies are modelled. The results show that the integrated optimisation of the energy system and the materials system can significantly reduce the emission reduction costs, especially at higher reduction percentages. The reduction is achieved through shifts in materials production and waste handling and through materials substitution in products. Shifts in materials production and waste management seem cost-effective, while the cost-effectiveness of shifts in product composition is more uncertain owing to the cost structure of products. For the building sector, transportation applications and packaging, CO2 policies show a significant impact on prices, and shifts in product composition could occur. For other products, reduction through materials substitution seems less promising. The impact on materials consumption is most significant for cement (reduced) and for timber and aluminium (both increased). For steel and plastics, the net effect is balanced, but shifts between applications do occur. The MARKAL approach is feasible for studying integrated energy and materials systems. The progress compared to other environmental system analysis instruments is much more insight into the interaction of technologies on a national scale and over time.

  10. An approach to next step device optimisation

    International Nuclear Information System (INIS)

    Salpietro, E.

    2000-01-01

    The requirements for the ITER EDA were to achieve ignition with a good safety margin and a controlled, long inductive burn. These requirements led to a big device, which represented too ambitious a step for the world fusion community to undertake. More realistic objectives for a next step device are to demonstrate the net production of energy with a high energy gain factor (Q) and the high bootstrap current fraction (>60%) required for a Fusion Power Plant (FPP). The Next Step Device (NSD) shall also allow operational flexibility in order to explore a large range of plasma parameters to find the optimum concept for the fusion power plant prototype. These requirements could be too demanding for one single device and could probably be better explored in a strongly integrated world programme. The cost of one or more devices is the decisive factor in the choice of the fusion power development programme strategy. The plasma elongation and triangularity have a strong impact on the cost of the device and are limited by the plasma vertical position control issue. The distance between the plasma separatrix and the toroidal field conductor does not vary much between devices. It is determined by the sum of the distance between the first wall and the plasma separatrix and the thickness of the nuclear shield required to protect the toroidal field coil insulation. The thickness of the TF coil is determined by the allowable stresses and superconducting characteristics. The outer radius of the central solenoid is the result of an optimisation to provide the magnetic flux to inductively drive the plasma. Therefore, in order to achieve the objectives for Q and bootstrap current fraction at minimum cost, the plasma aspect ratio and magnetic field value must be determined. The paper presents the critical issues for the next device and considers the optimal way to proceed towards the realisation of the fusion power plant.

  11. Optimisation of material discrimination using spectral CT

    International Nuclear Information System (INIS)

    Nik, S.J.; Meyer, J.; Watts, R.

    2010-01-01

    Full text: Spectral computed tomography (CT) using novel X-ray photon counting detectors (PCDs) with energy resolving capabilities is capable of providing energy-selective images. This extra energy information may allow materials such as iodine and calcium, or water and fat, to be distinguished. PCDs have energy thresholds, enabling the classification of photons into multiple energy bins. The information content of spectral CT images depends on how the photons are grouped together. In this work, a method is presented to optimise energy windows for maximum material discrimination. Given a combination of thicknesses, the reference number of expected photons in each energy bin is computed using the Beer-Lambert equation. A similar calculation is performed for an exhaustive range of thicknesses, and the number of photons in each case is compared to the reference, allowing a statistical map of the uncertainty in the thickness parameters to be constructed. The 63%-confidence region in the two-dimensional thickness space is a representation of how optimal the bins are for material separation. The model is demonstrated with 0.1 mm of iodine and 2.2 mm of calcium using two adjacent bins encompassing the entire energy range. Bins bordering at the iodine k-edge of 33.2 keV are found to be optimal. When compared to two abutted energy bins with equal incident counts as used in the literature (bordering at 54 keV), the thickness uncertainties are reduced from approximately 4% to less than 1%. This approach has been developed for two materials and is expandable to an arbitrary number of materials and bins.
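
    A sketch of the reference-count calculation described above: the expected number of photons in each energy bin follows the Beer-Lambert law, N(E) = N0(E)·exp(−μ_I(E)·t_I − μ_Ca(E)·t_Ca), summed over the bin. The attenuation coefficients below are hypothetical placeholders rather than tabulated data.

        import numpy as np

        energies = np.arange(20, 81)       # keV sampling grid
        n0 = np.full(energies.shape, 1e4)  # incident photons per keV (hypothetical)

        # Hypothetical linear attenuation coefficients (1/mm); real values would be
        # interpolated from tabulated data, with the iodine k-edge at 33.2 keV.
        mu_iodine = 8.0 * (energies / 33.2) ** -2.8 + 6.0 * (energies >= 33.2)
        mu_calcium = 1.5 * (energies / 33.2) ** -2.7

        def expected_counts(t_i, t_ca, bin_edges):
            trans = n0 * np.exp(-mu_iodine * t_i - mu_calcium * t_ca)
            return [trans[(energies >= lo) & (energies < hi)].sum()
                    for lo, hi in zip(bin_edges[:-1], bin_edges[1:])]

        # Two adjacent bins bordering at the iodine k-edge (the optimum found above)
        print(expected_counts(t_i=0.1, t_ca=2.2, bin_edges=[20, 33.2, 81]))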

  12. Future aircraft cabins and design thinking: optimisation vs. win-win scenarios

    Directory of Open Access Journals (Sweden)

    A. Hall

    2013-06-01

    ...requirements using experiences from the A350 XWB and future cabin design concepts. In particular, the paper explores the value of implementing design thinking insights in engineering practice and discusses the relative merits of decisions based on optimisation versus win-win scenarios for aircraft cabin design and wider applications in aerospace environments. The increasing densification of technological opportunities and shifting consumer demand, coupled with highly complex systems, may ultimately challenge our ability to make decisions based on optimisation balances. From an engineering design perspective, optimisation tends to preclude certain strategies that deliver high-quality results in consumer scenarios, whereas win-win solutions may face challenges in complex technical environments.

  13. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
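
    Such tables rest on the parallel-resistance formula 1/R_T = 1/R_1 + 1/R_2, i.e. R_T = R_1·R_2/(R_1 + R_2) for two resistors; a few lines of Python can regenerate one by scanning for pairs whose combined resistance is a whole number:

        # Resistor pairs (ohms) whose parallel combination is a whole number
        for r1 in range(1, 25):
            for r2 in range(r1, 25):
                product, total = r1 * r2, r1 + r2
                if product % total == 0:  # R_T = R1*R2/(R1+R2) is integral
                    print(f"{r1:2d} || {r2:2d} = {product // total}")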

  14. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The object of this research is tools to support the development of parallel programs in C/C++. Methods and software which automate the process of designing parallel applications are proposed.

  15. Directions in parallel processor architecture, and GPUs too

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Modern computing is power-limited in every domain of computing. Performance increments extracted from instruction-level parallelism (ILP) are no longer power-efficient; they haven't been for some time. Thread-level parallelism (TLP) is a more easily exploited form of parallelism, at the expense of programmer effort to expose it in the program. In this talk, I will introduce you to disparate topics in parallel processor architecture that will impact programming models (and you) in both the near and far future. About the speaker Olivier is a senior GPU (SM) architect at NVIDIA and an active participant in the concurrency working group of the ISO C++ committee. He has also worked on very large diesel engines as a mechanical engineer, and taught at McGill University (Canada) as a faculty instructor.

  16. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O-efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest common ancestors... Our solutions achieve an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

  17. Massive hybrid parallelism for fully implicit multiphysics

    International Nuclear Information System (INIS)

    Gaston, D. R.; Permann, C. J.; Andrs, D.; Peterson, J. W.

    2013-01-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in substantial hours of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them to both take advantage of current HPC architectures, and efficiently prepare for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design are given. Representative massively parallel results from several applications areas are presented, and a brief discussion of future areas of research for the framework are provided. (authors)

  20. Pure random search for ambient sensor distribution optimisation in a smart home environment.

    Science.gov (United States)

    Poland, Michael P; Nugent, Chris D; Wang, Hui; Chen, Liming

    2011-01-01

    Smart homes are living spaces facilitated with technology to allow individuals to remain in their own homes for longer, rather than be institutionalised. Sensors are the fundamental physical layer within any smart home, as the data they generate are used to inform decision support systems, facilitating appropriate actuator actions. The positioning of sensors is therefore a fundamental characteristic of a smart home. Contemporary smart home sensor distribution is aligned to either (a) a total coverage approach or (b) a human assessment approach. These methods for sensor arrangement are not data-driven strategies; they are unempirical and frequently irrational. This study hypothesised that sensor deployment directed by an optimisation method that utilises inhabitants' spatial frequency data as the search space would produce better sensor distributions than the current method of deployment by engineers. Seven human engineers were tasked to create sensor distributions based on perceived utility for 9 deployment scenarios. A pure random search (PRS) algorithm was then tasked to create matched sensor distributions. The PRS method produced superior distributions in 98.4% of test cases (n=64) against human-engineer-instructed deployments when the engineers had no access to the spatial frequency data, and in 92.0% of test cases (n=64) when the engineers had full access to these data. These results thus confirmed the hypothesis.
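
    A minimal sketch of pure random search over a spatial frequency map: candidate sensor placements are sampled uniformly at random and the best one found is kept. The grid size, sensor count, coverage model and fitness function are hypothetical, standing in for the study's inhabitant data and utility measure.

        import random

        random.seed(1)
        W, H, N_SENSORS, RADIUS, ITERS = 10, 8, 4, 2, 2000

        # Hypothetical inhabitant spatial frequency data: visits per grid cell
        freq = {(x, y): random.randint(0, 50) for x in range(W) for y in range(H)}

        def fitness(sensors):
            # Total visit frequency covered by at least one sensor
            covered = {c for c in freq for sx, sy in sensors
                       if max(abs(c[0] - sx), abs(c[1] - sy)) <= RADIUS}
            return sum(freq[c] for c in covered)

        best, best_fit = None, -1
        for _ in range(ITERS):  # pure random search: sample uniformly, keep the best
            candidate = random.sample(sorted(freq), N_SENSORS)
            f = fitness(candidate)
            if f > best_fit:
                best, best_fit = candidate, f
        print(best_fit, best)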

  1. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Interactions between parallel channels are examined. Results of the analysis of the phenomenon and of the mechanisms of parallel channel interaction, obtained from experimental investigations of non-stationary flow regimes in three parallel vertical channels, are shown for adiabatic conditions with single-phase fluid and two-phase mixture flow. (author)

  2. High-speed detection of emergent market clustering via an unsupervised parallel genetic algorithm

    Directory of Open Access Journals (Sweden)

    Dieter Hendricks

    2016-02-01

    Full Text Available We implement a master-slave parallel genetic algorithm with a bespoke log-likelihood fitness function to identify emergent clusters within price evolutions. We use graphics processing units (GPUs) to implement the parallel genetic algorithm and visualise the results using disjoint minimal spanning trees. We demonstrate that our GPU parallel genetic algorithm, implemented on a commercially available general-purpose GPU, is able to recover stock clusters at sub-second speed, based on a subset of stocks in the South African market. This approach represents a pragmatic choice for low-cost, scalable parallel computing and is significantly faster than a prototype serial implementation in an optimised C-based fourth-generation programming language, although the results are not directly comparable because of compiler differences. Combined with fast online intraday correlation matrix estimation from high frequency data for cluster identification, the proposed implementation offers cost-effective, near-real-time risk assessment for financial practitioners.
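
    The master-slave pattern parallelises fitness evaluation only: the master runs the GA loop while the slaves score individuals. A CPU-bound sketch using Python's multiprocessing (the paper's GPU implementation and log-likelihood fitness are beyond a few lines, so a toy fitness is substituted):

        import random
        from multiprocessing import Pool

        def fitness(individual):
            # Toy stand-in for the bespoke log-likelihood cluster fitness
            return -sum((g - 0.5) ** 2 for g in individual)

        def offspring(parent):
            return [min(1.0, max(0.0, g + random.gauss(0, 0.05))) for g in parent]

        if __name__ == "__main__":
            pop = [[random.random() for _ in range(20)] for _ in range(64)]
            with Pool() as slaves:                      # slave processes
                for gen in range(30):                   # master runs the GA loop
                    scores = slaves.map(fitness, pop)   # parallel fitness evaluation
                    ranked = [p for _, p in sorted(zip(scores, pop), reverse=True)]
                    elite = ranked[: len(pop) // 2]
                    pop = elite + [offspring(random.choice(elite)) for _ in elite]
                print(max(slaves.map(fitness, pop)))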

  3. Massively Parallel QCD

    International Nuclear Information System (INIS)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-01-01

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results

  4. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  6. Fast parallel event reconstruction

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120,000, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...

  7. A methodological approach to the design of optimising control strategies for sewer systems

    DEFF Research Database (Denmark)

    Mollerup, Ane Loft; Mikkelsen, Peter Steen; Sin, Gürkan

    2016-01-01

    This study focuses on designing an optimisation-based control for a sewer system in a methodological way and linking it to a regulatory control. Optimisation-based design is found to depend on the proper choice of a model, the formulation of the objective function and the tuning of optimisation parameters. Accordingly, two novel optimisation configurations are developed, where the optimisation either acts on the actuators or acts on the regulatory control layer. These two optimisation designs are evaluated on a sub-catchment of the sewer system in Copenhagen, and found to perform better than the existing...

  8. Energy-Aware Software Engineering

    DEFF Research Database (Denmark)

    Eder, Kerstin; Gallagher, John Patrick

    2017-01-01

    A great deal of energy in Information and Communication Technology (ICT) systems can be wasted by software, regardless of how energy-efficient the underlying hardware is. To avoid such waste, programmers need to understand the energy consumption of programs during the development process rather than after deployment. The chapter discusses how energy analysis and modelling techniques can be incorporated into software engineering tools, including existing compilers, to assist the energy-aware programmer in optimising the energy consumption of code.

  9. Transmit Power Optimisation in Wireless Network

    Directory of Open Access Journals (Sweden)

    Besnik Terziu

    2011-09-01

    Full Text Available Transmit power optimisation in wireless networks based on beamforming has emerged as a promising technique to enhance the spectrum efficiency of present and future wireless communication systems. The aim of this study is to minimise the access point power consumption in cellular networks while maintaining a targeted quality of service (QoS) for the mobile terminals. In this study, the targeted quality of service is delivered to a mobile station by providing a desired level of Signal to Interference and Noise Ratio (SINR). Base stations are coordinated across multiple cells in a multi-antenna beamforming system. This study focuses on a multi-cell multi-antenna downlink scenario where each mobile user is equipped with a single antenna, but where multiple mobile users may be active simultaneously in each cell and are separated via spatial multiplexing using beamforming. The design criterion is to minimise the total weighted transmit power across the base stations subject to SINR constraints at the mobile users. The main contribution of this study is to define an iterative algorithm that is capable of finding the joint optimal beamformers for all base stations, based on a correlation-based channel model, the full-correlation model. Among all correlated channel models, the one used in this study is the most accurate, giving the best performance in terms of power consumption. The environment in this study is chosen to be a Non-Line-of-Sight (NLOS) condition, where a signal from a wireless transmitter passes several obstructions before arriving at a wireless receiver. Moreover, there are many scatterers local to the mobile, and multiple reflections can occur among them before energy arrives at the mobile. The proposed algorithm is based on uplink-downlink duality using Lagrangian duality theory. Time-Division Duplex (TDD) is chosen as the platform for this study since it has been adopted in the latest technologies in Fourth...
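
    The paper's algorithm solves the joint beamforming problem via uplink-downlink duality; as a simpler illustration of the same 'minimum power subject to SINR targets' idea, here is the classic Foschini-Miljanic power-control iteration with hypothetical channel gains. This is a stand-in, not the authors' method:

        import numpy as np

        G = np.array([[1.0, 0.10, 0.05],   # G[i, j]: gain from transmitter j to user i
                      [0.2, 0.80, 0.10],
                      [0.1, 0.15, 0.90]])
        noise, gamma = 0.01, 2.0           # noise power and common SINR target

        def sinr(p):
            own = np.diag(G) * p
            return own / (G @ p - own + noise)

        p = np.ones(3)
        for _ in range(100):
            p = (gamma / sinr(p)) * p      # Foschini-Miljanic update
        print(p, sinr(p))                  # converges to the SINR targets if feasible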

  10. Cellular internalisation kinetics and cytotoxic properties of statistically designed and optimised neo-geometric copper nanocrystals.

    Science.gov (United States)

    Murugan, Karmani; Choonara, Yahya E; Kumar, Pradeep; du Toit, Lisa C; Pillay, Viness

    2017-09-01

    This study aimed to highlight a statistical design to precisely engineer homogeneous geometric copper nanoparticles (CuNPs) for enhanced intracellular drug delivery as a function of geometric structure. CuNPs with a dual functionality, comprising geometric attributes for enhanced cell uptake and cytotoxic activity against proliferating cells, were synthesized as a novel drug delivery system. This paper investigated defined concentrations of two key surfactants used in the reaction to jointly control and manipulate nano-shape and to optimise the geometric nanosystems. A statistical experimental design comprising a full factorial model served as a refining factor to achieve homogeneous geometric nanoparticles using a one-pot method for the systematic optimisation of the geometric CuNPs. The shapes of the nanoparticles were investigated to determine the effect of the surfactant variation, the central aim of the study, and the zeta potential was studied to ensure the stability of the system and establish a nanosystem of low aggregation potential. After optimisation of the nano-shapes, extensive cellular internalisation studies were conducted to elucidate the effect of geometric CuNPs on uptake rates, in addition to vital toxicity assays to further understand the cellular effect of geometric CuNPs as a drug delivery system. In addition to geometry, the volume, surface area, orientation to the cell membrane and colloidal stability are also addressed. The outcomes of the study demonstrated the successful formation of homogeneous geometric NPs with a stable surface charge. The findings of the study can be utilised for the development of a drug delivery system for promoted cellular internalisation and effective drug delivery. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Energy and wear optimisation of train longitudinal dynamics and of traction and braking systems

    Science.gov (United States)

    Conti, R.; Galardi, E.; Meli, E.; Nocciolini, D.; Pugi, L.; Rindi, A.

    2015-05-01

    Traction and braking systems deeply affect longitudinal train dynamics, especially when an extensive blending phase among different pneumatic, electric and magnetic devices is required. The energy and wear optimisation of longitudinal vehicle dynamics has a crucial economic impact and involves several engineering problems such as wear of braking friction components, energy efficiency, thermal load on components, and the level of safety under degraded adhesion conditions (often constrained by the regulations in force on signalling or other safety-related subsystems). In fact, the application of energy storage systems can lead to an efficiency improvement of at least 10%, while, as regards wear reduction, the improvement due to distributed traction systems and optimised traction devices can be quantified at about 50%. In this work, an innovative integrated procedure is proposed by the authors to optimise longitudinal train dynamics and traction and braking manoeuvres in terms of both energy and wear. The new approach has been applied to existing test cases and validated with experimental data provided by Breda; for some components and their homologation process, the results of experimental activities derive from cooperation with relevant industrial partners such as Trenitalia and Italcertifer. In particular, simulation results refer to tests performed on a high-speed train (AnsaldoBreda EMU V250) and on a tram (AnsaldoBreda Sirio Tram). The proposed approach is based on a modular simulation platform in which the sub-models corresponding to different subsystems can be easily customised, depending on the considered application, the availability of technical data and the homologation process of different components.

  12. Mutual information-based LPI optimisation for radar network

    Science.gov (United States)

    Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun

    2015-07-01

    A radar network can offer significant performance improvement for target detection and information extraction by employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may extend beyond a predefined threshold under full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve the LPI performance of a radar network. Based on the radar network system model, we first adopt the Schleher intercept factor for the radar network as an optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented in which, for a predefined MI threshold, the Schleher intercept factor for the radar network is minimised by optimising the transmission power allocation among the radars in the network, such that enhanced LPI performance for the radar network can be achieved. A genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Simulations demonstrate that the proposed algorithm is valuable and effective in improving the LPI performance of the radar network.
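
    Schematically, the problem is: minimise total transmit power across the netted radars subject to a network MI floor. The sketch below uses a hypothetical closed form for the MI and a plain sum of powers as a proxy for the intercept factor, and a general-purpose SLSQP solver rather than the paper's GA-NP method:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n_radar = 4
        gain = rng.uniform(0.5, 2.0, n_radar)  # hypothetical per-radar target returns
        mi_min = 3.0                           # predefined MI threshold

        def mutual_info(p):
            # Hypothetical MI model: sum of per-radar log(1 + SNR) terms
            return np.sum(np.log(1.0 + gain * p))

        objective = lambda p: np.sum(p)        # proxy for the intercept factor
        cons = [{"type": "ineq", "fun": lambda p: mutual_info(p) - mi_min}]
        res = minimize(objective, x0=np.ones(n_radar), method="SLSQP",
                       bounds=[(0.0, 10.0)] * n_radar, constraints=cons)
        print(res.x, mutual_info(res.x))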

  13. Robustness analysis of bogie suspension components Pareto optimised values

    Science.gov (United States)

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, a robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of the bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of the bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with COV up to 0.1.
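
    Varying a parameter around its Pareto optimised value according to a lognormal distribution with a prescribed COV uses the standard relations σ² = ln(1 + COV²) and μ = ln(m) − σ²/2, so that the samples preserve the nominal mean m. A short sketch with a hypothetical nominal stiffness:

        import numpy as np

        rng = np.random.default_rng(42)
        nominal = 4.0e5  # hypothetical Pareto optimised stiffness value (N/m)
        cov = 0.1        # coefficient of variation

        sigma2 = np.log(1.0 + cov ** 2)
        mu = np.log(nominal) - sigma2 / 2.0
        samples = rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=10_000)

        print(samples.mean() / nominal)        # ~1.0: nominal mean preserved
        print(samples.std() / samples.mean())  # ~0.1: target COV recovered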

  14. A conceptual optimisation strategy for radiography in a digital environment

    International Nuclear Information System (INIS)

    Baath, M.; Haakansson, M.; Hansson, J.; Maansson, L. G.

    2005-01-01

    Using a completely digital environment for the entire imaging process leads to new possibilities for optimisation of radiography since many restrictions of screen/film systems, such as the small dynamic range and the lack of possibilities for image processing, do not apply any longer. However, at the same time these new possibilities lead to a more complicated optimisation process, since more freedom is given to alter parameters. This paper focuses on describing an optimisation strategy that concentrates on taking advantage of the conceptual differences between digital systems and screen/film systems. The strategy can be summarised as: (a) always include the anatomical background during the optimisation, (b) perform all comparisons at a constant effective dose and (c) separate the image display stage from the image collection stage. A three-step process is proposed where the optimal setting of the technique parameters is determined at first, followed by an optimisation of the image processing. In the final step the optimal dose level - given the optimal settings of the image collection and image display stages - is determined. (authors)

  15. Establishing Local Reference Dose Values and Optimisation Strategies

    International Nuclear Information System (INIS)

    Connolly, P.; Moores, B.M.

    2000-01-01

    The revised EC Patient Directive 97/43 EURATOM introduces the concepts of clinical audit, diagnostic reference levels and optimisation of radiation protection in diagnostic radiology. The application of reference dose levels in practice involves the establishment of reference dose values as actual measurable operational quantities. These values should then form part of an ongoing optimisation and audit programme against which routine performance can be compared. The CEC Quality Criteria for Radiographic Images provide guidance reference dose values against which local performance can be compared; in many cases these values can be improved upon quite considerably. This paper presents the results of a local initiative in the North West of the UK aimed at establishing local reference dose values for a number of major hospital sites. The purpose of this initiative is to establish a foundation for both optimisation strategies and clinical audit as an ongoing and routine practice. The paper presents results from an ongoing trial involving patient dose measurements for several radiological examinations at these sites. The results of an attempt to establish local reference dose values from measured dose values and to employ them in optimisation strategies are presented. In particular, emphasis is placed on the routine quality control programmes necessary to underpin this strategy, including the effective data management of results from such programmes and how they can be employed in optimisation practice. (author)

  16. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  17. Optimising Transport in a Homogeneous Network

    OpenAIRE

    WEAIRE, DENIS LAWRENCE

    2004-01-01

    Many situations in physics, biology, and engineering consist of the transport of some physical quantity through a network of narrow channels. The ability of a network to transport such a quantity in every direction can be described by the average conductivity associated with it. When the flow through each channel is conserved and derives from a potential function, we show that there exists an upper bound on the average conductivity and explicitly give the expression f...

  18. Parallel imaging microfluidic cytometer.

    Science.gov (United States)

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Data for TROTS – The Radiotherapy Optimisation Test Set

    Directory of Open Access Journals (Sweden)

    Sebastiaan Breedveld

    2017-06-01

    Full Text Available The Radiotherapy Optimisation Test Set (TROTS) is an extensive set of problems originating from radiotherapy (radiation therapy) treatment planning. The dataset was created for two purposes: (1) to supply a large-scale dense dataset with which to measure the performance and quality of mathematical solvers, and (2) to supply a dataset with which to investigate the multi-criteria optimisation and decision-making nature of the radiotherapy problem. The dataset contains 120 problems (patients), divided over 6 different treatment protocols/tumour types. Each problem contains numerical data, a configuration for the optimisation problem, and the data required to visualise and interpret the results. The data are stored as HDF5-compatible Matlab files, and scripts to work with the dataset are included.
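
    Since the files are HDF5-compatible (v7.3) Matlab files, they can be inspected with h5py; the file name and dataset paths below are hypothetical, not the actual TROTS layout:

        import h5py

        # Walk the HDF5 tree of a (hypothetical) TROTS problem file
        with h5py.File("Protons_01.mat", "r") as f:
            def show(name, obj):
                if isinstance(obj, h5py.Dataset):
                    print(name, obj.shape, obj.dtype)
            f.visititems(show)
            # A dose-influence style matrix could then be read as, e.g.:
            # A = f["problem/matrix/A"][()]  # hypothetical path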

  20. CLIC crab cavity design optimisation for maximum luminosity

    Energy Technology Data Exchange (ETDEWEB)

    Dexter, A.C., E-mail: a.dexter@lancaster.ac.uk [Lancaster University, Lancaster, LA1 4YR (United Kingdom); Cockcroft Institute, Daresbury, Warrington, WA4 4AD (United Kingdom); Burt, G.; Ambattu, P.K. [Lancaster University, Lancaster, LA1 4YR (United Kingdom); Cockcroft Institute, Daresbury, Warrington, WA4 4AD (United Kingdom); Dolgashev, V. [SLAC, Menlo Park, CA 94025 (United States); Jones, R. [University of Manchester, Manchester, M13 9PL (United Kingdom)

    2011-11-21

    The bunch size and crossing angle planned for CERN's compact linear collider CLIC dictate that crab cavities on opposing linacs will be needed to rotate bunches of particles into alignment at the interaction point if the desired luminosity is to be achieved. Wakefield effects, RF phase errors between crab cavities on opposing linacs and unpredictable beam loading can each act to reduce luminosity below that anticipated for bunches colliding in perfect alignment. Unlike acceleration cavities, which are normally optimised for gradient, crab cavities must be optimised primarily for luminosity. Accepting the crab cavity technology choice of a 12 GHz, normal conducting, travelling wave structure as explained in the text, this paper develops an analytical approach to optimise cell number and iris diameter.

  1. Cultural-based particle swarm for dynamic optimisation problems

    Science.gov (United States)

    Daneshyari, Moayed; Yen, Gary G.

    2012-07-01

    Many practical optimisation problems involve uncertainties, and a significant number of these belong to the dynamic optimisation problem (DOP) category, in which the fitness function changes through time. In this study, we propose cultural-based particle swarm optimisation (PSO) to solve DOPs. A cultural framework is adopted, incorporating the required information from the PSO into five sections of the belief space, namely situational, temporal, domain, normative and spatial knowledge. The stored information is used to detect changes in the environment, assists in responding to a change through diversity-based repulsion among particles and migration among swarms in the population space, and also helps in selecting the leading particles at three different levels: personal, swarm and global. Comparison of the proposed heuristic over several difficult dynamic benchmark problems demonstrates better or equal performance with respect to most of the other selected state-of-the-art dynamic PSO heuristics.
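
    The population-space machinery that the cultural belief space steers is ordinary global-best PSO. A minimal core follows, with a hypothetical moving-peak fitness standing in for the DOP benchmarks and a crude re-evaluation of memory when the environment changes:

        import numpy as np

        rng = np.random.default_rng(7)
        dim, n_particles, w, c1, c2 = 2, 30, 0.7, 1.5, 1.5

        peak = np.array([2.0, -1.0])  # hypothetical optimum that will move
        fitness = lambda x: -np.sum((x - peak) ** 2, axis=-1)

        x = rng.uniform(-5, 5, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), fitness(x)

        for t in range(200):
            if t == 100:                            # environment change (DOP)
                peak = peak + np.array([1.0, 2.0])
                pbest_f = fitness(pbest)            # re-evaluate stale memory
            gbest = pbest[np.argmax(pbest_f)]
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            f = fitness(x)
            better = f > pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
        print(gbest, pbest_f.max())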

  2. Vectorization, parallelization and porting of nuclear codes. Vectorization and parallelization. Progress report fiscal 1999

    Energy Technology Data Exchange (ETDEWEB)

    Adachi, Masaaki; Ogasawara, Shinobu; Kume, Etsuo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ishizuki, Shigeru; Nemoto, Toshiyuki; Kawasaki, Nobuo; Kawai, Wataru [Fujitsu Ltd., Tokyo (Japan); Yatake, Yo-ichi [Hitachi Ltd., Tokyo (Japan)

    2001-02-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system, the AP3000 system, the SX-4 system and the Paragon system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. We dealt with 18 codes in fiscal 1999. The results are reported in three parts: the vectorization and parallelization part on vector processors, the parallelization part on scalar processors, and the porting part. This report describes the vectorization and parallelization on vector processors: the vectorization of the Relativistic Molecular Orbital Calculation code RSCAT, the microscopic transport code for high-energy nuclear collisions JAM, the three-dimensional non-steady thermal-fluid analysis code STREAM, the Relativistic Density Functional Theory code RDFT and the High Speed Three-Dimensional Nodal Diffusion code MOSRA-Light on the VPP500 system and the SX-4 system. (author)

  3. The principle of optimisation: reasons for success and legal criticism

    International Nuclear Information System (INIS)

    Fernandez Regalado, Luis

    2008-01-01

    The International Commission on Radiological Protection (ICRP) adopted new recommendations in 2007. In broad outline, they continue the recommendations already approved in 1990 and later. The principle of optimisation of protection, together with the principles of justification and dose limits, continues to play a key role in the ICRP recommendations, as it has for many years. This principle, somewhat reinforced in the 2007 ICRP recommendations, has been incorporated into norms and legislation that have been in force, without controversy, in many countries all over the world. There are three main reasons for the success of the principle of optimisation in radiological protection. First, the subjectivity of the phrase that embodies it, 'as low as reasonably achievable' (ALARA), which allows different valid interpretations under different circumstances. Second, the pragmatism and adaptability of ALARA to all exposure situations. And third, the scientific humility behind the principle, in clear contrast with the old-fashioned scientific positivism that enshrined scientists' opinions. Nevertheless, from a legal point of view, some criticism has been cast on the principle of optimisation where it has been transformed into a compulsory norm. This criticism rests on two arguments: the lack of democratic participation in the process of elaborating the norm, and the legal uncertainty associated with its application. Both arguments are to some extent acknowledged by the ICRP which, on the one hand, has broadened the participation of experts, associations and the professional radiological protection community, increasing the transparency of how decisions on recommendations are taken, and on the other hand, has warned about the need for authorities to specify general criteria to develop the principle of optimisation in national

  4. Revisiting EOR Projects in Indonesia through Integrated Study: EOR Screening, Predictive Model, and Optimisation

    KAUST Repository

    Hartono, A. D.; Hakiki, Farizal; Syihab, Z.; Ambia, F.; Yasutra, A.; Sutopo, S.; Efendi, M.; Sitompul, V.; Primasari, I.; Apriandi, R.

    2017-01-01

    A preliminary EOR analysis is pivotal at an early stage of assessment in order to establish EOR feasibility. This study proposes an in-depth analysis toolkit for preliminary EOR evaluation. The toolkit incorporates EOR screening, predictive, economic, risk-analysis and optimisation modules. The screening module introduces algorithms that assimilate statistical and engineering notions. The United States Department of Energy (U.S. DOE) predictive models are implemented in the predictive module. The economic module assesses project attractiveness, while Monte Carlo simulation is applied to quantify the risk and uncertainty of the evaluated project. Optimisation scenarios of EOR practice can be evaluated using the optimisation module, in which the stochastic methods Genetic Algorithms (GA), Particle Swarm Optimisation (PSO) and Evolution Strategy (ES) are applied. The modules are combined into an integrated package for preliminary EOR assessment. Finally, we used the toolkit to evaluate several Indonesian oil fields for EOR evaluation (past projects) and feasibility (future projects). The exercise updated previous assessments of EOR attractiveness and opens new opportunities for EOR implementation in Indonesia.

  5. Process optimisation in waste combustion and gasification; Prozessoptimierung bei der Verbrennung und Vergasung von Abfaellen

    Energy Technology Data Exchange (ETDEWEB)

    Born, M. [Technische Univ. Bergakademie Freiberg, Inst. IEC, Fakultaet 4 (Germany)

    1998-09-01

    Optimisation of thermal treatment processes is chiefly geared to the following aims: in process engineering terms, to the homogenisation of input materials, improvement of process effectiveness (increased reaction rates), and intensification of mixing and exploitation of residence time (approximation to thermodynamic equilibria); in ecological terms, to the minimisation of material flows and pollutant generation and the limitation of emissions; and in economic terms, to the simplification of process techniques, maximisation of net energy production, and minimisation of the quantity and pollutant content of the resulting wastes. The present contribution takes a closer look at some of these optimisation options. (orig./SR)

  6. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    In recent years, efforts have been made to delineate a stable and unified framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with those efforts. This paper aims to make a small contribution to them: we propose an overview of parallel programming, parallel execution and collaborative systems.

  7. Numerical optimisation of friction stir welding: review of future challenges

    DEFF Research Database (Denmark)

    Tutum, Cem Celal; Hattel, Jesper Henri

    2011-01-01

    During the last decade, the combination of increasingly advanced numerical simulation software with high computational power has resulted in models for friction stir welding (FSW) which have substantially improved the understanding of the determining physical phenomena behind the process. This has made optimisation of certain process parameters possible and has in turn led to better-performing friction stir welded products, thus contributing to a general increase in the popularity of the process and its applications. However, most of these optimisation studies do not go well beyond manual...

  8. Alternatives for optimisation of rumen fermentation in ruminants

    Directory of Open Access Journals (Sweden)

    T. Slavov

    2017-06-01

    Proper knowledge of the variety of events occurring in the rumen makes possible their optimisation with respect to complete feed conversion and increased productive performance of ruminants. The inclusion of various dietary additives (supplements, biologically active substances, nutritional antibiotics, probiotics, enzymatic preparations, plant extracts, etc.) affects the intensity and specific pathway of fermentation and, thus, overall digestion and systemic metabolism. The optimisation of rumen digestion is a method with substantial potential for improving the efficiency of ruminant husbandry, increasing the quality of its produce, and maintaining animal health.

  9. Separative power of an optimised concurrent gas centrifuge

    Energy Technology Data Exchange (ETDEWEB)

    Bogovalov, Sergey; Boman, Vladimir [National Research Nuclear University (MEPHI), Moscow (Russian Federation)

    2016-06-15

    The problem of separation of isotopes in a concurrent gas centrifuge is solved analytically for an arbitrary binary mixture of isotopes. The separative power of optimised concurrent gas centrifuges for the uranium isotopes is δU = 12.7 (V/700 m·s⁻¹)² (300 K/T) (L/1 m) kg·SWU/yr, where L and V are the length and linear velocity of the rotor of the gas centrifuge and T is the temperature. This equation agrees well with the empirically determined separative power of optimised counter-current gas centrifuges.
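
    Read as a scaling law, the quoted formula is straightforward to evaluate; the small helper below does so (the function name and the second operating point are ours, chosen so that the reference case V = 700 m/s, T = 300 K, L = 1 m returns 12.7 kg SWU/yr).

```python
def separative_power(V_mps: float, T_kelvin: float, L_m: float) -> float:
    """Separative power in kg SWU/yr: 12.7 * (V/700)^2 * (300/T) * (L/1 m)."""
    return 12.7 * (V_mps / 700.0) ** 2 * (300.0 / T_kelvin) * L_m

print(separative_power(700.0, 300.0, 1.0))   # reference point -> 12.7
print(separative_power(600.0, 320.0, 0.5))   # slower, warmer, shorter rotor
```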

  10. MANAGEMENT OPTIMISATION OF MASS CUSTOMISATION MANUFACTURING USING COMPUTATIONAL INTELLIGENCE

    Directory of Open Access Journals (Sweden)

    Louwrens Butler

    2018-05-01

    Computational intelligence paradigms can be used for advanced manufacturing system optimisation. A static simulation model of an advanced manufacturing system, whose purpose is to mass-produce a customisable product range at a competitive cost, was developed. The aim of this study was to determine whether the new algorithm could outperform traditional optimisation methods. The algorithm produced a lower-cost plan than a simulated annealing algorithm, and had a lower impact on the workforce.

  11. Optimising a shaft's geometry by applying genetic algorithms

    Directory of Open Access Journals (Sweden)

    María Alejandra Guzmán

    2005-05-01

    Many engineering design tasks involve optimising several conflicting goals; these types of problem are known as multiobjective optimisation problems (MOPs). Evolutionary techniques have proved to be an effective tool for finding solutions to MOPs during the last decade, and variations on the basic genetic algorithm have in particular been proposed by different researchers for rapidly finding optimal solutions to MOPs. In this paper, the NSGA (Non-dominated Sorting Genetic Algorithm) has been implemented to find an optimal design for a shaft subjected to cyclic loads, the conflicting goals being minimum weight and minimum lateral deflection.
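
    For readers unfamiliar with the Pareto machinery underlying NSGA, the sketch below ranks candidate stepped-shaft designs by non-domination under the two goals named in the record. It is an illustration only, not the paper's implementation: the mass formula is exact for two cylindrical segments, the deflection measure is a simple additive per-segment proxy, and all material and load values are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
L_SEG, E, RHO, FORCE = 0.5, 210e9, 7850.0, 1e4   # assumed steel cantilever, end load

def objectives(d1, d2):
    """Mass and a proxy deflection for a two-step shaft with diameters d1, d2."""
    mass = RHO * np.pi * (d1**2 + d2**2) / 4 * L_SEG
    defl = sum(FORCE * L_SEG**3 / (3 * E * np.pi * d**4 / 64) for d in (d1, d2))
    return mass, defl

designs = rng.uniform(0.02, 0.10, (60, 2))            # candidate (d1, d2) pairs [m]
objs = np.array([objectives(*d) for d in designs])

def dominates(a, b):
    """a dominates b if it is no worse in both goals and strictly better in one."""
    return np.all(a <= b) and np.any(a < b)

# First non-dominated front: designs no other design dominates.
front = [i for i in range(len(objs))
         if not any(dominates(objs[j], objs[i]) for j in range(len(objs)) if j != i)]
print("non-dominated designs (d1, d2):")
print(np.round(designs[front], 4))
```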

  12. The optimisation study of TBP synthesis process by phosphoric acid

    International Nuclear Information System (INIS)

    Amedjkouh, A.; Attou, M.; Azzouz, A.; Zaoui, B.

    1995-07-01

    The present work deals with the optimisation of a TBP synthesis process based on phosphoric acid. This synthesis route is more advantageous than using POCl3 or P2O5 as phosphating agents, as the latter are toxic and hazardous to the environment. The optimisation study is based on a series of 16 experiments spanning the ranges of variation of the following parameters: temperature, pressure, reagent mole ratio and promoter content. The yield calculation is based on an equation including all parameters; its resolution gave a 30% TBP molar ratio, a value in agreement with the experimental data.

  13. Optimisation of BPMN Business Models via Model Checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2013-01-01

    We present a framework for the optimisation of business processes modelled in the business process modelling language BPMN, which builds upon earlier work in which we developed a model-checking-based method for the analysis of BPMN models. We define a structure for expressing optimisation goals for synthesised BPMN components, based on probabilistic computation tree logic and real-valued reward structures of the BPMN model, allowing for the specification of complex quantitative goals. We present a simple algorithm, inspired by concepts from evolutionary algorithms, which iteratively generates...

  14. The optimisation of wedge filters in radiotherapy of the prostate

    International Nuclear Information System (INIS)

    Oldham, Mark; Neal, Anthony J.; Webb, Steve

    1995-01-01

    A treatment plan optimisation algorithm has been applied to 12 patients with early prostate cancer in order to determine the optimum beam-weights and wedge angles for a standard conformal three-field treatment technique. The optimisation algorithm was based on fast-simulated-annealing using a cost function designed to achieve a uniform dose in the planning-target-volume (PTV) and to minimise the integral doses to the organs-at-risk. The algorithm has been applied to standard conformal three-field plans created by an experienced human planner, and run in three PLAN MODES: (1) where the wedge angles were fixed by the human planner and only the beam-weights were optimised; (2) where both the wedge angles and beam-weights were optimised; and (3) where both the wedge angles and beam-weights were optimised and a non-uniform dose was prescribed to the PTV. In the latter PLAN MODE, a uniform 100% dose was prescribed to all of the PTV except for that region that overlaps with the rectum where a lower (e.g., 90%) dose was prescribed. The resulting optimised plans have been compared with those of the human planner who found beam-weights by conventional forward planning techniques. Plans were compared on the basis of dose statistics, normal-tissue-complication-probability (NTCP) and tumour-control-probability (TCP). The results of the comparison showed that all three PLAN MODES produced plans with slightly higher TCP for the same rectal NTCP, than the human planner. The best results were observed for PLAN MODE 3, where an average increase in TCP of 0.73% (± 0.20, 95% confidence interval) was predicted by the biological models. This increase arises from a beneficial dose gradient which is produced across the tumour. Although the TCP gain is small it comes with no increase in treatment complexity, and could translate into increased cures given the large numbers of patients being referred. A study of the beam-weights and wedge angles chosen by the optimisation algorithm revealed
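
    The cost-function-plus-annealing recipe in this record is easy to miniaturise. The toy below optimises three beam weights against an invented 2x3 dose-deposition matrix (one row for the PTV, one for the rectum); real planning, as in the paper, uses full 3D dose grids, wedge angles and biological models, so this is only a shape-of-the-algorithm sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
D = np.array([[0.50, 0.40, 0.45],    # dose to PTV per unit weight of each beam
              [0.20, 0.05, 0.10]])   # dose to rectum per unit weight

def cost(w):
    ptv, rectum = D @ w
    return (ptv - 1.0) ** 2 + 0.3 * rectum ** 2   # uniform PTV dose, spare rectum

w = np.full(3, 0.5)
best_w, best_c = w.copy(), cost(w)
T = 1.0
for step in range(5000):
    cand = np.clip(w + rng.normal(0.0, 0.05, 3), 0.0, None)
    dc = cost(cand) - cost(w)
    if dc < 0 or rng.random() < np.exp(-dc / T):   # Metropolis acceptance
        w = cand
        if cost(w) < best_c:
            best_w, best_c = w.copy(), cost(w)
    T *= 0.999                                     # fast cooling schedule

print("optimised beam weights:", np.round(best_w, 3), "cost:", round(best_c, 6))
```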

  15. A high-speed linear algebra library with automatic parallelism

    Science.gov (United States)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to obtaining the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited, even though there are numerous computationally demanding programs that would significantly benefit from it. This paper describes DSSLIB, a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

  16. Time varying acceleration coefficients particle swarm optimisation (TVACPSO): A new optimisation algorithm for estimating parameters of PV cells and modules

    International Nuclear Information System (INIS)

    Jordehi, Ahmad Rezaee

    2016-01-01

    Highlights: • A modified PSO has been proposed for parameter estimation of PV cells and modules. • In the proposed modified PSO, acceleration coefficients are changed during the run. • The proposed modified PSO mitigates the premature convergence problem. • The parameter estimation problem has been solved for both PV cells and PV modules. • The results show that the proposed PSO outperforms other state-of-the-art algorithms. - Abstract: Estimating circuit model parameters of PV cells/modules represents a challenging problem. The PV cell/module parameter estimation problem is typically translated into an optimisation problem and solved by metaheuristic optimisation algorithms. Particle swarm optimisation (PSO) is a popular and well-established optimisation algorithm. Despite all its advantages, PSO suffers from a premature convergence problem, meaning that it may get trapped in local optima. Personal and social acceleration coefficients are two control parameters that, owing to their effect on explorative and exploitative capabilities, play important roles in the computational behaviour of PSO. In this paper, in an attempt to mitigate premature convergence in PSO, the personal acceleration coefficient is decreased during the course of the run, while the social acceleration coefficient is increased. In this way, an appropriate trade-off between the explorative and exploitative capabilities of PSO is established during the course of the run and the premature convergence problem is significantly mitigated. The results show that in parameter estimation of PV cells and modules, the proposed time-varying acceleration coefficients PSO (TVACPSO) offers more accurate parameters than conventional PSO, the teaching-learning-based optimisation (TLBO) algorithm, the imperialistic competitive algorithm (ICA), grey wolf optimisation (GWO), the water cycle algorithm (WCA), pattern search (PS) and the Newton algorithm. For validation of the proposed methodology, parameter estimation has been done both for
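
    The coefficient schedule itself is one line of arithmetic. A sketch, assuming a linear ramp and the common 2.5 -> 0.5 endpoint values from the TVAC literature (the paper's exact endpoints are not given in this record):

```python
def tvac_coefficients(t: int, t_max: int,
                      c1_start: float = 2.5, c1_end: float = 0.5,
                      c2_start: float = 0.5, c2_end: float = 2.5):
    """Personal coefficient c1 ramps down (less exploration), social c2 ramps up."""
    frac = t / t_max
    return (c1_start + (c1_end - c1_start) * frac,
            c2_start + (c2_end - c2_start) * frac)

# In a PSO loop these replace the fixed coefficients in the velocity update:
# vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(gbest - pos)
for t in (0, 500, 1000):
    print(t, tvac_coefficients(t, 1000))
```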

  17. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    This paper describes an object-oriented framework designed to be used in the parallelisation of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they run in cycles, and their work can be split between several "processing units". The parallel framework uses the message-passing communication paradigm and is organised as a master-slave system. Two applications are presented: an Ant Colony Optimisation (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.
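
    The master-slave, cycle-based pattern described here can be shown in a few lines. The sketch below uses Python's multiprocessing queues as the message-passing layer and scores random TSP tours as a stand-in for one cycle of an ACO run; it illustrates the pattern, not the authors' framework.

```python
import multiprocessing as mp
import random

random.seed(42)                      # same cities in every spawned process
CITIES = [(random.random(), random.random()) for _ in range(8)]

def tour_length(tour):
    return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
               for a, b in zip(tour, tour[1:] + tour[:1]))

def slave(tasks, results):
    while True:
        tour = tasks.get()
        if tour is None:             # poison pill: master signals shutdown
            break
        results.put((tour_length(tour), tour))

if __name__ == "__main__":
    tasks, results = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=slave, args=(tasks, results)) for _ in range(4)]
    for w in workers:
        w.start()
    candidates = [random.sample(range(8), 8) for _ in range(100)]   # one "cycle"
    for c in candidates:
        tasks.put(c)
    best = min(results.get() for _ in candidates)                   # gather phase
    for _ in workers:
        tasks.put(None)
    for w in workers:
        w.join()
    print("best tour length this cycle:", round(best[0], 4))
```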

  18. CTF3 Drive Beam Injector Optimisation

    CERN Document Server

    AUTHOR|(CDS)2082899; Doebert, S

    2015-01-01

    In the Compact Linear Collider (CLIC), the RF power for the acceleration of the Main Beam is extracted from a high-current Drive Beam that runs parallel to the main linac. The main feasibility issues of the two-beam acceleration scheme are being demonstrated at CLIC Test Facility 3 (CTF3). The CTF3 Drive Beam injector consists of a thermionic gun followed by the bunching system and two accelerating structures, all embedded in a solenoidal magnetic field, and a magnetic chicane. Three sub-harmonic bunchers (SHB), a prebuncher and a travelling wave buncher constitute the bunching system. The phase coding process done by the sub-harmonic bunching system produces unwanted satellite bunches between the successive main bunches. The beam dynamics of the CTF3 Drive Beam injector is reoptimised with the goal of improving the injector performance, in particular decreasing the satellite population, the beam loss in the magnetic chicane and the transverse beam emittance compared to the original model based on P. Ur...

  19. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

    The phosphoramidites of the DNA monomers 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. The thermal stability increased by about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modelling results showed that, in the case of nucleobases Y and Z, a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one of the phosphate groups in the Watson-Crick strand. It was also shown that nucleobase Y stacked and bound well with the other nucleobases in the TFO and the Watson-Crick duplex, respectively. In contrast, nucleobase Z with the LNA moiety was forced to twist out of the plane of the Watson-Crick base pair, which...

  1. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.

  2. Agent-Based Decision Control—How to Appreciate Multivariate Optimisation in Architecture

    DEFF Research Database (Denmark)

    Negendahl, Kristoffer; Perkov, Thomas Holmer; Kolarik, Jakub

    2015-01-01

    ...in the early design stage. The main focus is to demonstrate the optimisation method, which is done in two ways. Firstly, the newly developed agent-based optimisation algorithm, named Moth, is tested on three different single-objective search spaces; here Moth is compared to two evolutionary algorithms. Secondly, the method is applied to a multivariate optimisation problem, the aim being specifically to demonstrate optimisation of entire-building energy consumption, daylight distribution and capital cost. Based on the demonstrations, Moth's ability to find local minima is discussed. It is concluded that agent-based optimisation algorithms like Moth open up new uses of optimisation in the early design stage. With Moth the final outcome is less dependent on pre- and post-processing, and Moth allows user intervention during optimisation. Therefore, agent-based models for optimisation such as Moth can be a powerful...

  3. Solving dynamic multi-objective problems with vector evaluated particle swarm optimisation

    CSIR Research Space (South Africa)

    Greeff, M

    2008-06-01

    Many optimisation problems are multi-objective and change dynamically. Many methods use a weighted-average approach to the multiple objectives. This paper introduces the use of the vector evaluated particle swarm optimiser (VEPSO) to solve dynamic...

  4. A Parallel Particle Swarm Optimizer

    National Research Council Canada - National Science Library

    Schutte, J.F.; Fregly, B.J.; Haftka, R.T.; George, A.D.

    2003-01-01

    Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population-based global optimizer, the Particle Swarm...

  5. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading for understanding patterns for parallel programming. Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managing...

  6. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    ...adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...

  7. hydroPSO: A Versatile Particle Swarm Optimisation R Package for Calibration of Environmental Models

    Science.gov (United States)

    Zambrano-Bigiarini, M.; Rojas, R.

    2012-04-01

    Particle Swarm Optimisation (PSO) is a recent and powerful population-based stochastic optimisation technique inspired by social behaviour of bird flocking, which shares similarities with other evolutionary techniques such as Genetic Algorithms (GA). In PSO, however, each individual of the population, known as particle in PSO terminology, adjusts its flying trajectory on the multi-dimensional search-space according to its own experience (best-known personal position) and the one of its neighbours in the swarm (best-known local position). PSO has recently received a surge of attention given its flexibility, ease of programming, low memory and CPU requirements, and efficiency. Despite these advantages, PSO may still get trapped into sub-optimal solutions, suffer from swarm explosion or premature convergence. Thus, the development of enhancements to the "canonical" PSO is an active area of research. To date, several modifications to the canonical PSO have been proposed in the literature, resulting into a large and dispersed collection of codes and algorithms which might well be used for similar if not identical purposes. In this work we present hydroPSO, a platform-independent R package implementing several enhancements to the canonical PSO that we consider of utmost importance to bring this technique to the attention of a broader community of scientists and practitioners. hydroPSO is model-independent, allowing the user to interface any model code with the calibration engine without having to invest considerable effort in customizing PSO to a new calibration problem. Some of the controlling options to fine-tune hydroPSO are: four alternative topologies, several types of inertia weight, time-variant acceleration coefficients, time-variant maximum velocity, regrouping of particles when premature convergence is detected, different types of boundary conditions and many others. Additionally, hydroPSO implements recent PSO variants such as: Improved Particle Swarm

  8. Optimising operations - Baking energetically 'clean' bread; Energetisch 'saubere' Broetchen backen

    Energy Technology Data Exchange (ETDEWEB)

    Uetz, R.; Krischat, J. [Amstein und Walthert AG, Zuerich (Switzerland)

    2010-07-01

    This article discusses how an energy analysis has delivered quickly realisable, profitable measures that can be taken. The article discusses how the Swiss Jowa bakery company with several bakeries at various locations in Switzerland reduced its CO{sub 2} emissions by around 1,000 tonnes per year, and, at the same time, improved the efficiency of its building technical services. The energy infrastructures of ten bakeries were analysed and recommendations made in the areas of heating, process heat, process steam, compressed air, refrigeration and air-conditioning, water and electricity. The details of the analysis and the realisation of the various measures proposed by the engineering consultants who carried out the study are presented and discussed. Several measures could be applied at several locations, whereby, initially, those measures with a pay-back time of up to two years were implemented. The optimisation process was carried out using simple means and is on-going.

  9. Multi-terminal pipe routing by Steiner minimal tree and particle swarm optimisation

    Science.gov (United States)

    Liu, Qiang; Wang, Chengen

    2012-08-01

    Computer-aided design of pipe routing is of fundamental importance for the development of complex equipment. In this article, non-rectilinear branch pipe routing with multiple terminals, which can be formulated as a Euclidean Steiner Minimal Tree with Obstacles (ESMTO) problem, is studied in the context of integrated aeroengine design engineering. Unlike traditional methods that connect pipe terminals sequentially, this article presents a new branch pipe routing algorithm based on Steiner tree theory. The article begins with a new algorithm for solving the ESMTO problem using particle swarm optimisation (PSO), and then extends the method to surface cases by using geodesics, to meet the requirements of routing non-rectilinear pipes on the surfaces of aeroengines. Subsequently, an adaptive region strategy and the basic visibility graph method are adopted to increase computational efficiency. Numerical computations show that the proposed routing algorithm can find satisfactory routing layouts while running in polynomial time.
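
    A feel for the Euclidean Steiner objective can be had from the smallest case: for three terminals the single Steiner point is the Fermat point, which Weiszfeld iteration finds quickly. The sketch below is ours, not the paper's algorithm; a full ESMTO solver would embed such point placement inside a topology search (here, the PSO the record describes) plus obstacle handling.

```python
import numpy as np

def fermat_point(terminals, iters=200, eps=1e-12):
    """Weiszfeld iteration for the point minimising total distance to terminals."""
    pts = np.asarray(terminals, dtype=float)
    x = pts.mean(axis=0)                      # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(pts - x, axis=1)
        w = 1.0 / np.maximum(d, eps)          # inverse-distance weights
        x = (pts * w[:, None]).sum(axis=0) / w.sum()
    return x

terminals = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
s = fermat_point(terminals)
total = sum(np.linalg.norm(np.array(t) - s) for t in terminals)
print("Steiner point:", np.round(s, 4), "total pipe length:", round(total, 4))
```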

  10. An optimisation approach for capacity planning: modelling insights and empirical findings from a tactical perspective

    Directory of Open Access Journals (Sweden)

    Andréa Nunes Carvalho

    2017-09-01

    The academic literature presents a research-practice gap in the application of decision support tools to tactical planning problems in real-world organisations. This paper addresses this gap and extends previous action research on an optimisation model applied to tactical capacity planning in an engineer-to-order industrial setting. The issues discussed herein provide new insights for better understanding the practical results that can be achieved with the proposed model. The topics presented include the modelling of objectives, the representation of the production process and the costing approach, as well as findings regarding managerial decisions and the scope of action considered. These insights may inspire ideas for academics and practitioners developing tools for capacity planning problems in similar contexts.

  11. SINGLE FIXED CRANE OPTIMISATION WITHIN A DISTRIBUTION CENTRE

    Directory of Open Access Journals (Sweden)

    J. Matthews

    2012-01-01

    This paper considers the optimisation of the movement of a fixed crane operating in a single aisle of a distribution centre. The crane must move pallets in inventory between docking bays, storage locations, and picking lines. Both a static and a dynamic approach to the problem are presented. The optimisation is performed by means of tabu search, ant colony metaheuristics, and hybrids of these two methods. All these solution approaches were tested on real-life data obtained from an operational distribution centre. Results indicate that the hybrid methods outperform the other approaches.
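
    To make the tabu-search half of this hybrid concrete, the toy below sequences a batch of crane moves along a one-dimensional aisle so as to minimise empty travel. Everything here (the aisle model, the cost function, the tenure of 10) is invented for illustration; the paper's model, with docking bays and picking lines, is far richer.

```python
import random

random.seed(3)
moves = [(random.randint(0, 50), random.randint(0, 50)) for _ in range(12)]  # (from, to) slots

def cost(seq):
    """Total travel: empty approach to each pick-up plus the loaded move itself."""
    c, pos = 0, 0
    for i in seq:
        src, dst = moves[i]
        c += abs(pos - src) + abs(dst - src)
        pos = dst
    return c

cur = list(range(len(moves)))
best, best_c = cur[:], cost(cur)
tabu = []                                       # short-term memory of swaps
for _ in range(300):
    neighbours = []
    for a in range(len(cur) - 1):
        for b in range(a + 1, len(cur)):
            if (a, b) in tabu:
                continue
            nb = cur[:]
            nb[a], nb[b] = nb[b], nb[a]
            neighbours.append((cost(nb), (a, b), nb))
    c, swap, nb = min(neighbours)               # best admissible neighbour
    cur = nb
    tabu.append(swap)
    if len(tabu) > 10:                          # tabu tenure
        tabu.pop(0)
    if c < best_c:
        best, best_c = nb[:], c

print("best move order:", best, "cost:", best_c)
```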

  12. Formulation of self-compacting concretes: optimisation of the granular skeleton by the Dreux-Gorisse graphical method

    African Journals Online (AJOL)

    Formulation of self-compacting concretes: optimisation of the granular skeleton by the Dreux-Gorisse graphical method. Fatiha Boumaza-Zeraoulia & Mourad Behim. Materials, Geo-Materials and Environment Laboratory, Department of Civil Engineering, Badji Mokhtar University, Annaba, BP 12, 23000 Annaba - ...

  13. Parameter Optimisation for the Behaviour of Elastic Models over Time

    DEFF Research Database (Denmark)

    Mosegaard, Jesper

    2004-01-01

    Optimisation of parameters for elastic models is essential for comparing, or finding equivalent behaviour of, elastic models when parameters cannot simply be transferred or converted. This is the case with a large range of commonly used elastic models. In this paper we present a general method that...

  14. Development of an Optimised Losartan Potassium Press-Coated ...

    African Journals Online (AJOL)

    The optimised formulation was further characterised with Fourier-transform infrared spectroscopy (FTIR) and powder X-ray diffractometry (PXRD) to investigate any drug/excipient modifications or interactions. Results: The tensile strength values of all the PCT were between 1.12 and 1.23 MN m-2, and friability was < 0.36 %.

  15. Statistical Optimisation of Fermentation Conditions for Citric Acid ...

    African Journals Online (AJOL)

    This study investigated the optimisation of fermentation conditions during citric acid production via solid state fermentation (SSF) of pineapple peels using Aspergillus niger. A three-variable, three-level Box-Behnken design (BBD) comprising 17 experimental runs was used to develop a statistical model for the fermentation ...

  16. Optimising the Blended Learning Environment: The Arab Open University Experience

    Science.gov (United States)

    Hamdi, Tahrir; Abu Qudais, Mohammed

    2018-01-01

    This paper will offer some insights into possible ways to optimise the blended learning environment based on experience with this modality of teaching at the Arab Open University/Jordan branch, and also by reflecting upon the results of several meta-analytical studies, which have shown blended learning environments to be more effective than their face-to-face…

  17. Larval feeding inhibition assay – need for optimisation

    DEFF Research Database (Denmark)

    Azuhnwi, Blasius; Desrues, O.; Hoste, H.

    2013-01-01

    ...for this observed variation in results include: parasite (species/strain); material tested; or season. There is thus a need to optimise the LFIA to permit intra- and inter-laboratory comparison of results. We investigate here whether changes in EC50 values occur over the patency phase of a nematode species using two test...

  18. A comparative study of marriage in honey bees optimisation (MBO ...

    African Journals Online (AJOL)

    2012-02-15

    Feb 15, 2012 ... In a typical mating, the queen mates with 7 to 20 drones. Each time the .... Honey bee mating optimisation model's pseudo-code ... for this analysis, which consists of 47 years of monthly time ... tive of Karkheh Reservoir is to control and regulate the flow of ..... Masters thesis, Maastricht University, Maastricht.

  19. Analysing the performance of dynamic multi-objective optimisation algorithms

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    ...and the goal of the algorithm is to track a set of trade-off solutions over time. Analysing the performance of a dynamic multi-objective optimisation algorithm (DMOA) is not a trivial task. For each environment (before a change occurs), the DMOA has to find a set...

  20. Optimisation of dialysate flow in on-line hemodiafiltration

    Directory of Open Access Journals (Sweden)

    Francisco Maduell

    2015-09-01

    Conclusion: Qd variations in OL-HDF do not change convective volume. A higher Qd was associated with a slightly increased urea clearance, with no change observed for medium and large molecules. Qd optimisation to the minimal level that assures an adequate dialysis dose, and allows water and dialysate use to be rationalised, should be recommended.

  1. Day-ahead economic optimisation of energy storage

    NARCIS (Netherlands)

    Lampropoulos, I.; Garoufalis, P.; van den Bosch, P.P.J.; de Groot, R.J.W.; Kling, W.L.

    2014-01-01

    This article addresses the day-ahead economic optimisation of energy storage systems within the setting of electricity spot markets. The case study is about a lithium-ion battery system integrated in a low voltage distribution grid with residential customers and photovoltaic generation in the

  2. Optimisation of the image resolution of a positron emission tomograph

    International Nuclear Information System (INIS)

    Ziemons, K.

    1993-10-01

    The resolution and the signal-to-noise ratios of reconstructed images were the main focus of this work on the optimisation of PET systems. Monte Carlo modelling calculations were applied to derive possible improvements to the technical design and performance of the PET system. (DG)

  3. FISHRENT; Bio-economic simulation and optimisation model

    NARCIS (Netherlands)

    Salz, P.; Buisman, F.C.; Soma, K.; Frost, H.; Accadia, P.; Prellezo, R.

    2011-01-01

    Key findings: The FISHRENT model is a major step forward in bio-economic modelling, combining features that have not been fully integrated in earlier models: 1) incorporation of any number of species (or stocks) and/or fleets; 2) integration of simulation and optimisation over a period of 25 years; 3)

  4. Design of optimised backstepping controller for the synchronisation

    Indian Academy of Sciences (India)

    In this paper, an adaptive backstepping controller has been tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using the shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of particle swarm ...

  5. Plant-wide performance optimisation – The refrigeration system case

    DEFF Research Database (Denmark)

    Green, Torben; Razavi-Far, Roozbeh; Izadi-Zamanabadi, Roozbeh

    2012-01-01

    ...applications in the process industry. The paper addresses the fact that the dynamic performance of the system is important to ensure optimal changes between different operating conditions. To enable optimisation of the dynamic controller behaviour, a method for designing the required excitation signal is presented...

  6. Preconditioned stochastic gradient descent optimisation for monomodal image registration

    NARCIS (Netherlands)

    Klein, S.; Staring, M.; Andersson, J.P.; Pluim, J.P.W.; Fichtinger, G.; Martel, A.; Peters, T.

    2011-01-01

    We present a stochastic optimisation method for intensity-based monomodal image registration. The method is based on a Robbins-Monro stochastic gradient descent method with adaptive step size estimation, and adds a preconditioning matrix. The derivation of the preconditioner is based on the
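
    The Robbins-Monro recursion with a preconditioner is compact enough to sketch. Below, a decaying gain a_k = a/(k+A)^alpha and an ideal preconditioner are applied to a badly scaled toy quadratic; the gain parameters and the cost function are our assumptions, not the paper's derived preconditioner.

```python
import numpy as np

rng = np.random.default_rng(4)
H = np.diag([100.0, 1.0])            # badly scaled problem (e.g. mm vs radians)
x_opt = np.array([1.0, -2.0])

def noisy_grad(x):
    """Stochastic gradient of 0.5*(x - x_opt)^T H (x - x_opt) plus noise."""
    return H @ (x - x_opt) + rng.normal(0, 0.5, 2)

P = np.linalg.inv(H)                 # ideal preconditioner for this toy cost
x = np.zeros(2)
a, A, alpha = 1.0, 10.0, 0.6         # Robbins-Monro gain sequence parameters
for k in range(500):
    step = a / (k + 1 + A) ** alpha
    x -= step * P @ noisy_grad(x)

print("estimate:", np.round(x, 3), "target:", x_opt)
```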

  7. Optimisation of Hidden Markov Model using Baum–Welch algorithm

    Indian Academy of Sciences (India)

    Optimisation of Hidden Markov Model using Baum–Welch algorithm for prediction of maximum and minimum temperature over Indian Himalaya. J C Joshi, Tankeshwar Kumar, Sunita Srivastava, Divya Sachdeva. Journal of Earth System Science, Volume 126, Issue 1, February 2017.

  8. Optimisation of compressive strength of periwinkle shell aggregate concrete

    African Journals Online (AJOL)

    2017-01-01

    In this paper, a regression model is developed to predict and optimise the compressive strength of periwinkle shell aggregate concrete using Scheffe's regression theory. The results obtained from the derived regression model agreed favourably with the experimental data. The model was tested for ...

  9. Smart optimisation and sensitivity analysis in water distribution systems

    CSIR Research Space (South Africa)

    Page, Philip R

    2015-12-01

    ...optimisation of a water distribution system by keeping the average pressure unchanged as water demands change, by changing the speed of the pumps. Another application area considered, using the same mathematical notions, is the study of the sensitivity...

  10. Estimators for initial conditions for optimisation in learning hydraulic systems

    NARCIS (Netherlands)

    Post, W.J.A.E.M.; Burrows, C.R.; Edge, K.A.

    1998-01-01

    In Learning Hydraulic Systems (LHS), developed at the Eindhoven University of Technology, a specialised optimisation routine is employed in order to reduce energy losses in hydraulic systems. Typical load situations that can be managed by LHS are variable cyclic loads, as can be observed in many

  11. Optimisation of searches for Supersymmetry with the ATLAS detector

    Energy Technology Data Exchange (ETDEWEB)

    Zvolsky, Milan

    2012-01-15

    The ATLAS experiment is one of the four large experiments at the Large Hadron Collider and is specifically designed to search for the Higgs boson and physics beyond the Standard Model. The aim of this thesis is the optimisation of searches for Supersymmetry in decays with two leptons and missing transverse energy in the final state. Two optimisation studies have been performed for two important analysis aspects: the final signal region selection and the choice of the trigger selection. In the first part of the analysis, a cut-based optimisation of the signal regions is performed, maximising the signal for minimal background contamination; in this way the signal yield can in places be more than doubled. The second approach is to introduce di-lepton triggers, which allow the lepton transverse momentum threshold to be lowered, thus significantly enhancing the number of selected signal events. The signal region optimisation informed the choice of the final event selection in the ATLAS di-lepton analyses. The trigger study contributed to the incorporation of di-lepton triggers into the ATLAS trigger menu. (orig.)

  12. Optimisation Study on the Production of Anaerobic Digestate ...

    African Journals Online (AJOL)

    Organic fraction of municipal solid waste (OFMSW) is a rich substrate for biogas and compost production. Anaerobic Digestate compost (ADC) is an organic fertilizer produced from stabilized residuals of anaerobic digestion of OFMSW. This paper reports the result of studies carried out to optimise the production of ADC from ...

  13. Optimising performance in steady state for a supermarket refrigeration system

    DEFF Research Database (Denmark)

    Green, Torben; Kinnaert, Michel; Razavi-Far, Roozbeh

    2012-01-01

    Using a supermarket refrigeration system as an illustrative example, the paper postulates that by appropriately utilising knowledge of plant operation, the plant wide performance can be optimised based on a small set of variables. Focusing on steady state operations, the total system performance...

  14. A Bayesian Approach for Sensor Optimisation in Impact Identification

    Directory of Open Access Journals (Sweden)

    Vincenzo Mallardo

    2016-11-01

    This paper presents a Bayesian approach for optimising the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data is represented by statistical distributions of the recorded signals. An optimisation strategy based on a genetic algorithm is proposed to find the best sensor combination for locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence.
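
    A stripped-down version of the search described here is shown below: a genetic algorithm selects k of n candidate sensor positions. The fitness function, a coverage score discounted by a per-sensor failure probability, is an invented stand-in for the paper's Bayesian meta-model indicator.

```python
import random

random.seed(5)
N_CAND, K, P_FAIL = 20, 4, 0.1
cand = [(random.random(), random.random()) for _ in range(N_CAND)]
impacts = [(random.random(), random.random()) for _ in range(200)]

def score(combo):
    """Reward combos whose nearest (and, if it fails, second-nearest) sensor is close."""
    s = 0.0
    for ix, iy in impacts:
        dists = sorted(((cx - ix) ** 2 + (cy - iy) ** 2) ** 0.5
                       for cx, cy in (cand[i] for i in combo))
        s += (1 - P_FAIL) / (1e-3 + dists[0]) + P_FAIL / (1e-3 + dists[1])
    return s

def offspring(a, b):
    """Crossover of two sensor sets, with repair to keep exactly K distinct sensors."""
    child = list(set(random.sample(a, 2) + random.sample(b, 2)))
    while len(child) < K:
        child.append(random.randrange(N_CAND))    # mutation / repair
        child = list(set(child))
    return child

pop = [random.sample(range(N_CAND), K) for _ in range(30)]
for gen in range(50):
    pop.sort(key=score, reverse=True)
    pop = pop[:10] + [offspring(*random.sample(pop[:10], 2)) for _ in range(20)]

best = max(pop, key=score)
print("best sensor combination:", sorted(best))
```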

  15. Optimisation of selective breeding program for Nile tilapia (Oreochromis niloticus)

    NARCIS (Netherlands)

    Trong, T.Q.

    2013-01-01

    The aim of this thesis was to optimise the selective breeding program for Nile tilapia in the Mekong Delta region of Vietnam. Two breeding schemes, the “classic” BLUP scheme following the GIFT method (with pair mating) and a rotational mating scheme with own performance selection and

  16. Optimised cantilever biosensor with piezoresistive read-out

    DEFF Research Database (Denmark)

    Rasmussen, Peter; Thaysen, J.; Hansen, Ole

    2003-01-01

    We present a cantilever-based biochemical sensor with piezoresistive read-out which has been optimised for measuring surface stress. The resistors and the electrical wiring on the chip are encapsulated in low-pressure chemical vapor deposition (LPCVD) silicon nitride, so that the chip is well suited...

  17. Optimisation of Heterogeneous Migration Paths to High Bandwidth Home Connections

    NARCIS (Netherlands)

    Phillipson, F.

    2017-01-01

    Operators are building architectures and systems for delivering voice, audio, and data services at the required speed for now and in the future. For fixed access networks, this means in many countries a shift from copper based to fibre based access networks. This paper proposes a method to optimise

  18. Optimisation of wort production from rice malt using enzymes and ...

    African Journals Online (AJOL)

    Commercially, rice malt has never been successfully used in brewing because of its low free α-amino nitrogen (FAN) content. This study was designed to optimise rice malt replacement for barley malt in wort production and to improve FAN by adding α-amylase and protease. The response surface methodology (RSM) ...

  19. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    Science.gov (United States)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite this initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is: 1) develop highly accurate parallel numerical algorithms; 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms; 3) incorporate newly developed algorithms into actual simulation packages. This plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested, based on two different approaches: (1) adopting a mathematical geometry that better describes the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretisation. The previously developed Parallel Diagonal Dominant (PDD) algorithm

  20. Performance optimisations for distributed analysis in ALICE

    International Nuclear Information System (INIS)

    Betev, L; Gheata, A; Grigoras, C; Hristov, P; Gheata, M

    2014-01-01

    Performance is a critical issue in a production system accommodating hundreds of analysis users. Compared to a local session, distributed analysis is exposed to services and network latencies, remote data access and a heterogeneous computing infrastructure, creating a more complex performance and efficiency optimisation matrix. During the last two years, ALICE analysis shifted from a fast development phase to more mature and stable code. At the same time, the frameworks and tools for deployment, monitoring and management of large productions have evolved considerably too. The ALICE Grid production system is currently used by a fair share of organised and individual user analysis, consuming up to 30% of the available resources and ranging from fully I/O-bound analysis code to CPU-intensive correlation or resonance studies. While the intrinsic analysis performance is unlikely to improve by a large factor during the LHC long shutdown (LS1), the overall efficiency of the system still has to be improved by an important factor to satisfy the analysis needs. We have instrumented all analysis jobs with "sensors" collecting comprehensive monitoring information on the job running conditions and performance in order to identify bottlenecks in the data processing flow. These data are collected by the MonALISA-based ALICE Grid monitoring system and are used to steer and improve the job submission and management policy, to identify operational problems in real time and to perform automatic corrective actions. In parallel with an upgrade of our production system, we are aiming for low-level improvements related to data format, data management and merging of results to allow for better-performing ALICE analysis.

  1. Optimal design of CHP-based microgrids: Multiobjective optimisation and life cycle assessment

    International Nuclear Information System (INIS)

    Zhang, Di; Evangelisti, Sara; Lettieri, Paola; Papageorgiou, Lazaros G.

    2015-01-01

    As an alternative to current centralised energy generation systems, microgrids are adopted to provide local energy with lower energy expenses and gas emissions by utilising distributed energy resources (DER). Several micro combined heat and power (CHP) technologies have been developed recently for applications at the domestic scale. The optimal design of DERs within CHP-based microgrids plays an important role in promoting the penetration of microgrid systems. In this work, the optimal design of microgrids with CHP units is addressed by coupling environmental and economic sustainability in a multi-objective optimisation model which integrates the results of a life cycle assessment of the microgrids investigated. The results show that the installation of multiple CHP technologies has a lower cost and higher environmental saving compared with the case where only a single technology is installed at each site, meaning that the microgrid works more efficiently when multiple technologies are selected. In general, proton exchange membrane (PEM) fuel cells are chosen as the basic CHP technology for most solutions, offering lower environmental impacts at low cost. However, internal combustion engines (ICE) and Stirling engines (SE) are preferred if the heat demand is high. - Highlights: • Optimal design of microgrids is addressed by coupling environmental and economic aspects. • An MILP model is formulated based on the ε-constraint method. • The model selects a combination of CHP technologies with different technical characteristics for optimum scenarios. • The global warming potential (GWP) and the acidification potential (AP) are determined. • The output of the LCA is used as an input for the optimisation model
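
    The ε-constraint formulation named in the highlights can be sketched in a few lines of PuLP: minimise cost subject to an emissions budget ε, then sweep ε to trace the cost-emissions Pareto front. The three technologies echo the record, but all cost, emission and demand numbers below are invented.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, PULP_CBC_CMD, value

techs = ["PEM_FC", "ICE", "SE"]
cost = {"PEM_FC": 90.0, "ICE": 55.0, "SE": 70.0}        # cost per unit capacity
co2 = {"PEM_FC": 0.2, "ICE": 0.7, "SE": 0.5}            # emissions per unit
DEMAND = 100.0

for eps in (70.0, 50.0, 30.0):                          # emission budgets to sweep
    prob = LpProblem("chp_design", LpMinimize)
    cap = {t: LpVariable(f"cap_{t}", lowBound=0) for t in techs}
    prob += lpSum(cost[t] * cap[t] for t in techs)      # economic objective
    prob += lpSum(cap[t] for t in techs) >= DEMAND      # meet energy demand
    prob += lpSum(co2[t] * cap[t] for t in techs) <= eps  # epsilon-constraint
    prob.solve(PULP_CBC_CMD(msg=False))
    print(eps, {t: round(value(cap[t]), 1) for t in techs},
          "cost:", round(value(prob.objective), 1))
```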

  2. A multitransputer parallel processing system (MTPPS)

    International Nuclear Information System (INIS)

    Jethra, A.K.; Pande, S.S.; Borkar, S.P.; Khare, A.N.; Ghodgaonkar, M.D.; Bairi, B.R.

    1993-01-01

    This report describes the design and implementation of a 16-node Multi Transputer Parallel Processing System (MTPPS), which is a platform for parallel program development. It is an MIMD machine based on the message-passing paradigm. The basic compute engine is an Inmos IMS T800-20 transputer; a transputer with local memory constitutes the processing element (node) of this MIMD architecture. Multiple nodes can be connected to each other in an identifiable network topology through the high-speed serial links of the transputer. A Network Configuration Unit (NCU) incorporates the necessary hardware to provide software-controlled network configuration. The system is modularly expandable, and more nodes can be added to achieve the required processing power. The system is a backend to an IBM PC, which has been integrated into the system to provide the user I/O interface; PC resources are available to the programmer. The interface hardware between the PC and the network of transputers is Inmos-compatible, so all commercially available development software compatible with Inmos products can run on this system. While giving the details of the design and implementation, this report briefly summarises MIMD architectures, the transputer architecture and parallel processing software development issues. A LINPACK performance evaluation of the system and solutions of neutron physics and plasma physics problems are discussed along with results. (author). 12 refs., 22 figs., 3 tabs., 3 appendixes

  3. Optimisation of a novel trailing edge concept for a high lift device

    CSIR Research Space (South Africa)

    Botha, JDM

    2014-09-01

    A novel concept (referred to as the flap extension) is implemented on the leading edge of the flap of a three-element high-lift device. The concept is optimised using two approaches based on genetic algorithm optimisation. A zero order...

  4. The development and use of plant models to assist with both the commissioning and performance optimisation of plant control systems

    International Nuclear Information System (INIS)

    Conner, A.S.; Region, S.E.

    1984-01-01

    Successful engagement of the cascade control systems used to control complex nuclear plant often presents control engineers with difficulties when trying to obtain early automatic operation of these systems. These difficulties often arise because, prior to the start of live plant operation, control equipment performance can only be assessed using open-loop techniques. By simulating simple models of the plant on a computer and linking them to the site control equipment, the performance of the system can be examined and optimised prior to live plant operation. This significantly reduces the plant down time required to correct control equipment performance faults during live plant operation.

  5. Optimisation of warpage on plastic injection moulding part using response surface methodology (RSM) and genetic algorithm method (GA)

    Science.gov (United States)

    Miza, A. T. N. A.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

    In this study, computer-aided engineering was used for injection moulding simulation. A design of experiments (DOE) was constructed according to a Latin square orthogonal array, and the relationships between the injection moulding parameters and warpage were identified from the resulting experimental data. Response surface methodology (RSM) was used to validate the model accuracy. The RSM and GA methods were then combined to determine the optimum injection moulding process parameters. The optimisation of injection moulding is thereby largely improved, and the results show increased accuracy and reliability. The proposed method combining RSM and GA also contributes to minimising the warpage that occurs.
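
    To make the combination concrete, here is a pure-Python sketch of a GA searching a fitted quadratic response surface for minimum warpage. It illustrates the RSM+GA idea only: the surface coefficients, the two scaled parameters (e.g. melt temperature and packing pressure) and all GA settings are hypothetical, not the study's values.

        import random

        # Hypothetical quadratic response surface for warpage (mm), standing in
        # for one fitted from Latin-square DOE runs; x1, x2 are process
        # parameters scaled to [0, 1].
        def warpage(x1, x2):
            return 1.2 - 0.8*x1 - 0.5*x2 + 0.9*x1*x1 + 0.6*x2*x2 + 0.3*x1*x2

        def clamp(v):
            return min(1.0, max(0.0, v))

        def ga(pop_size=40, generations=60, mut_sigma=0.1, seed=1):
            rng = random.Random(seed)
            pop = [(rng.random(), rng.random()) for _ in range(pop_size)]
            for _ in range(generations):
                def pick():  # tournament selection: better of two random individuals
                    a, b = rng.sample(pop, 2)
                    return a if warpage(*a) < warpage(*b) else b
                children = []
                while len(children) < pop_size:
                    p, q = pick(), pick()
                    w = rng.random()  # blend crossover plus Gaussian mutation
                    children.append(tuple(clamp(w*pi + (1 - w)*qi + rng.gauss(0, mut_sigma))
                                          for pi, qi in zip(p, q)))
                pop = children
            return min(pop, key=lambda x: warpage(*x))

        best = ga()
        print("optimum (scaled):", best, "-> predicted warpage:", round(warpage(*best), 4))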

  6. ExRET-Opt: An automated exergy/exergoeconomic simulation framework for building energy retrofit analysis and design optimisation

    International Nuclear Information System (INIS)

    García Kerdan, Iván; Raslan, Rokia; Ruyssevelt, Paul; Morillón Gálvez, David

    2017-01-01

    Highlights: • Development of a building retrofit-oriented exergoeconomic-based optimisation tool. • A new exergoeconomic cost-benefit indicator is developed for design comparison. • Thermodynamic and thermal comfort variables used as constraints and/or objectives. • Irreversibilities and exergetic cost for end-use processes are substantially reduced. • Robust methodology that should be pursued in everyday building retrofit practice. - Abstract: Energy simulation tools have a major role in the assessment of building energy retrofit (BER) measures. Exergoeconomic analysis and optimisation is common practice in sectors such as power generation and chemical processing, aiding engineers in obtaining more energy-efficient and cost-effective energy system designs. ExRET-Opt, a retrofit-oriented, modular dynamic simulation framework, has been developed by embedding a comprehensive exergy/exergoeconomic calculation method into a typical open-source building energy simulation tool (EnergyPlus). The aim of this paper is to present the decomposition of ExRET-Opt into the modules, submodules and subroutines used for the framework's development, and to verify the outputs against existing research data. In addition, the possibility of performing multi-objective optimisation based on genetic algorithms combined with multi-criteria decision-making methods was included within the simulation framework. This addition could enable BER design teams to perform rapid exergy/exergoeconomic optimisation in order to find opportunities for thermodynamic improvements along the building's active and passive energy systems. The enhanced simulation framework is tested using a primary school building as a case study. Results demonstrate that the proposed simulation framework provides users with thermodynamically efficient and cost-effective designs, even under tight thermodynamic and economic constraints, suggesting its use in everyday BER practice.

  7. Coil optimisation for transcranial magnetic stimulation in realistic head geometry.

    Science.gov (United States)

    Koponen, Lari M; Nieminen, Jaakko O; Mutanen, Tuomas P; Stenroos, Matti; Ilmoniemi, Risto J

    Transcranial magnetic stimulation (TMS) allows focal, non-invasive stimulation of the cortex. A TMS pulse is inherently weakly coupled to the cortex; thus, magnetic stimulation requires both high current and high voltage to reach sufficient intensity. These requirements limit, for example, the maximum repetition rate and the maximum number of consecutive pulses with the same coil, due to the rise of its temperature. Our objective was to develop methods to optimise, design, and manufacture energy-efficient TMS coils in realistic head geometry with an arbitrary overall coil shape. We derive a semi-analytical integration scheme for computing the magnetic field energy of an arbitrary surface current distribution, compute the electric field induced by this distribution with a boundary element method, and optimise a TMS coil for focal stimulation. Additionally, we introduce a method for manufacturing such a coil by using Litz wire and a coil former machined from polyvinyl chloride. We designed, manufactured, and validated an optimised TMS coil and applied it to brain stimulation. Our simulations indicate that this coil requires less than half the power of a commercial figure-of-eight coil, with a 41% reduction due to the optimised winding geometry and a partial contribution due to our thinner coil former and reduced conductor height. With the optimised coil, the resting motor threshold of abductor pollicis brevis was reached with the capacitor voltage below 600 V and peak current below 3000 A. The described method allows designing practical TMS coils that have considerably higher efficiency than conventional figure-of-eight coils. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. digital control of external devices through the parallel port

    African Journals Online (AJOL)

    2012-11-03

    Nov 3, 2012 ... PARALLEL PORT OF A COMPUTER USING VISUAL BASIC. P.E. Orukpe, A. Adesemowo, Department of Electrical/Electronic Engineering, University of Benin, Nigeria. .... These are software (written programs) used for controlling the ..... Computer Aided Design and Manufacturing. Prentice Hall, First ...

  9. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    Full Text Available The problem of parallel imports is an urgent question today. The legalisation of parallel imports in Russia is expedient; this statement is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of this decision and to apply remedies to minimise them.

  10. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  11. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer
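
    The key idea, distributing the plane-wave components of each state across processors, can be caricatured in a few lines of mpi4py (a hypothetical toy, not the FLAPW implementation): each rank holds a contiguous slice of one state's coefficient vector, does BLAS-level work on its slice, and global scalars are formed by a single reduction.

        # Run with e.g.: mpiexec -n 4 python flapw_split.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_pw = 1 << 14                        # total plane-wave coefficients (toy size)
        assert n_pw % size == 0
        local = np.random.default_rng(rank).standard_normal(n_pw // size)

        local_sq = float(local @ local)       # BLAS dot product on the local slice
        norm_sq = comm.allreduce(local_sq, op=MPI.SUM)   # one global reduction
        if rank == 0:
            print("global |c|^2 =", norm_sq)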

  12. Parallelization of the FLAPW method

    Science.gov (United States)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

  13. Reuse of low specific activity material as part of LLWR design optimisation

    International Nuclear Information System (INIS)

    Huntington, Amy; Cummings, Richard; Shevelan, John; Sumerling, Trevor; Baker, Andrew J.

    2013-01-01

    A final cap will be emplaced over the disposed waste as part of the closure engineering for the UK's Low Level Waste Repository (LLWR). Additional profiling material will be required above the waste to obtain the required land-form. Consideration has been given to the potential opportunity to reuse Low Specific Activity Material (LSAM, defined as up to 200 Bq/g) imported from other sites as a component of the necessary profiling material for the final repository cap. Justification of such a strategy would ultimately require a demonstration that the solution is optimal with respect to other options for the long-term management of such materials. The proposal is currently at the initial evaluation stage and seeks to establish how LSAM reuse within the cap could be achieved within the framework of an optimised safety case for the LLWR, should such a management approach be pursued. The key considerations include the following: The LSAM must provide the same engineering function as the remainder of the profiling material. The cap design must ensure efficient leachate collection, drainage and control for Low Level Waste (LLW) (and, by extension, LSAM) during the Period of Authorisation. In the longer term the engineering design must passively direct any accumulating waters preferentially away from surface water systems. An initial design has been developed that would allow the placement of around 220,000 m³ of LSAM. The potential impact of the proposal has been assessed against the current Environmental Safety Case. (authors)

  14. Design of a chemical batch plant : a study of dedicated parallel lines with intermediate storage and the plant performance

    OpenAIRE

    Verbiest, Floor; Cornelissens, Trijntje; Springael, Johan

    2016-01-01

    Abstract: Production plants worldwide face huge challenges in satisfying high service levels and outperforming competition. These challenges require appropriate strategic decisions on plant design and production strategies. In this paper, we focus on multiproduct chemical batch plants, which are typically equipped with multiple production lines and intermediate storage tanks. First we extend the existing MI(N)LP design models with the concept of parallel production lines, and optimise the as...

  15. Mathematics for Physicists and Engineers.

    Science.gov (United States)

    Organisation for Economic Cooperation and Development, Paris (France).

    The text is a report of the OEEC Seminar on "The Mathematical Knowledge Required by the Physicist and Engineer" held in Paris, 1961. There are twelve major papers presented: (1) An American Parallel (describes the work of the Panel on Physical Sciences and Engineering of the Committee on the Undergraduate Program in Mathematics of the Mathematical…

  16. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation, used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
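
    The cycle-end rendezvous described here is easy to see in code. The mpi4py sketch below is a toy criticality loop (hypothetical, not the paper's program): each rank tracks its histories independently, then every rank must block at the allgather/allreduce pair so the next cycle's fission source and multiplication estimate can be formed, which is exactly the synchronisation point the paper identifies.

        # Run with e.g.: mpiexec -n 8 python mc_cycles.py
        import random
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        rng = random.Random(rank)

        HISTORIES = 10_000                            # histories per rank per cycle
        source = [rng.random() for _ in range(100)]   # toy 1-D fission-site bank

        for cycle in range(10):
            sites, production = [], 0
            for _ in range(HISTORIES):                # perfectly parallel phase
                x = rng.choice(source)                # sample a fission site
                production += rng.choice((2, 3))      # toy neutron yield
                sites.append(min(1.0, max(0.0, x + rng.gauss(0.0, 0.05))))
            # Rendezvous: all ranks block here once per cycle to rebuild the
            # global fission source and form the global tally -- the
            # bottleneck the paper analyses.
            bank = [s for chunk in comm.allgather(sites) for s in chunk]
            total = comm.allreduce(production, op=MPI.SUM)
            source = rng.sample(bank, 100)            # population control for next cycle
            if rank == 0:
                print(f"cycle {cycle}: produced {total} neutrons across {size} ranks")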

  17. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation, used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  18. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
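
    The abstract gives no pseudocode, so the following is only a sequential sketch of the bucket-distribution idea behind barrel-sort: each processor owns a contiguous key range, keys are routed to their owner in one bulk exchange (suiting machines with high message-passing overhead, like the iPSC/860), and each owner then sorts locally.

        def barrel_sort(keys, n_procs, key_max):
            """Sequential sketch of the barrel-sort idea: route each key to the
            'processor' owning its range (one large message per destination in
            MPI terms), then sort each barrel locally."""
            width = key_max // n_procs + 1
            barrels = [[] for _ in range(n_procs)]
            for k in keys:              # distribution phase (all-to-all exchange)
                barrels[k // width].append(k)
            out = []
            for b in barrels:           # local sort phase, parallel on a real machine
                out.extend(sorted(b))
            return out

        print(barrel_sort([42, 7, 99, 13, 58, 7, 81], n_procs=4, key_max=100))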

  19. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN]; Inglett, Todd Alan [Rochester, MN]

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
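
    The patent text describes comparing each node's checkpoint blocks against a previously stored template, rsync-style. A minimal single-node sketch of that comparison (hypothetical block size and hash choice; the abstract specifies neither) is:

        import hashlib

        BLOCK = 4096  # bytes per block (hypothetical)

        def block_checksums(data):
            """Checksum every fixed-size block of a byte string."""
            return [hashlib.md5(data[i:i + BLOCK]).digest()
                    for i in range(0, len(data), BLOCK)]

        def delta_against_template(template, current):
            """Return only the (index, bytes) blocks that differ from the template,
            i.e. the small delta that would be saved instead of a full checkpoint."""
            t_sums = block_checksums(template)
            delta = []
            for j in range((len(current) + BLOCK - 1) // BLOCK):
                blk = current[j * BLOCK:(j + 1) * BLOCK]
                if j >= len(t_sums) or hashlib.md5(blk).digest() != t_sums[j]:
                    delta.append((j, blk))
            return delta

        template = b"A" * 16384
        current = b"A" * 8192 + b"B" * 4096 + b"A" * 4096
        print([j for j, _ in delta_against_template(template, current)])  # -> [2]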

  20. Optimisation of quality in environmental education by means of software support

    Directory of Open Access Journals (Sweden)

    Katarína Čekanová

    2015-12-01

    Full Text Available The main topic of this article is based on the fact that environmental education and edification have an irreplaceable and preferred position within the framework of sustainable socio-economic development. Environmental education performed at technical universities has to offer professional and methodical knowledge concerning questions of the environment for students of various technical branches. This education is performed in such a way that the graduates, after entering practical professional life, will be able to participate in solving the new, topical problems related to the environment and its protection. Nowadays, it is also necessary to introduce technical developments into the educational process to a greater extent. Taking the above-mentioned facts into consideration, educational support for environmental studies is a relevant aspect that should be integrated into the university educational process. It is a positive development trend that greater emphasis is focused on the quality of university education for environmental engineers. Our society requires an increasing number of environmentally educated engineers, the so-called environmentalists, who are able to participate in qualitative academic preparation. But the worldwide phenomena of technical development and globalisation also place high demands on the quality of their preparation, including equipment and computer skills. The Department of Process and Environmental Engineering at the Faculty of Mechanical Engineering, Technical University of Košice, the Slovak Republic, is the institution specified and intended for this quality optimisation. This Department introduced into its study programmes ("Environmental Management" and "Technology of Environmental Protection") study subjects with software support, which are oriented towards the indoor and outdoor environment and in this way the Department of Process and

  1. Développement d'un moteur 4-soupapes fonctionnant en mélange dilué. Une nouvelle approche basée sur l'optimisation de l'aérodynamique interne Application of Flow Field Optimization to Lean Burn Engine Development. A New Approach Based on Internal Flow Field Optimization

    Directory of Open Access Journals (Sweden)

    Henriot S.

    2006-11-01

    between 0.60 and 0.70 depending on load and engine speed. The resulting reduction in nitrogen oxide emissions averages 74%, over points representative of the ECE15 cycle, compared with stoichiometric operation. Finally, tests with exhaust gas recirculation demonstrated the potential of the aerodynamically optimised solution for every dilute-mixture operating mode. The constraints that must be accounted for in building a new engine are increasing. It is necessary to contend with user demands for performance, while improving efficiency, limiting pollutant emissions and reducing knock sensitivity. Given the number of parameters to be taken into account, the response to these demands would require large investments in time and resources if conventional development methods (which usually follow an empirical approach) were applied. The purpose of this research, supported by the Groupement Scientifique Moteur (GSM), was to design a lean-burn spark-ignition engine with the object of reducing pollutant emissions and fuel consumption. We try to develop a promising new approach that follows a scientific methodology based in particular on the use of predictive computer codes. The objective of this project is to design an engine that runs on a lean or possibly dilute mixture that will meet European pollution regulations. This lean-burn solution, combined with oxidation catalysis, is considered as an alternative to three-way catalysis, which imposes operation at stoichiometry. The original configuration is a 4-valve engine. One of the advantages this engine offers is great flexibility in changing the inlet conditions. This provides a way of optimizing internal fluid motion, which turns out to be a determining factor in the ability to operate with a lean or dilute mixture. The operation of an engine with a lean or dilute mixture results in substantially reducing nitrogen oxide (NOx) emissions, virtually eliminating carbon dioxide (CO

  2. Optimizing the gear efficiency under consideration of thermal optimisation strategies and customer-specific load conditions; Optimierung des Getriebewirkungsgrads unter Beruecksichtigung thermischer Optimierungsstrategien und kundenspezifischer Lastkollektive

    Energy Technology Data Exchange (ETDEWEB)

    Inderwisch, Kathrien; Kuecuekay, Ferit [Technische Univ. Braunschweig (Germany). Inst. fuer Fahrzeugtechnik]

    2012-11-01

    Nowadays, improving transmission efficiency has received more attention in the automotive industry. Most research has concentrated on the development and optimisation of transmission actuators, shifting elements, bearings, lubricants or lightweight constructions. Due to the low load requirements, and the associated low efficiencies, of transmissions in driving cycles, transmissions cause energy losses that cannot be neglected. Two main strategies can be followed for the optimisation of transmission efficiency. First, the efficiency benefit of transmissions through the optimisation of hardware components will be presented. The second possibility is the implementation of optimal thermal management, especially at low temperatures. Warming up the transmission oil or transmission components can increase the efficiency of transmissions significantly. Techniques like this become more important in the course of the electrification of drive trains and the consequently decreased availability of heat. A simulation tool for the calculation and minimisation of power loss for manual and dual-clutch transmissions was developed at the Institute of Automotive Engineering and verified by measurements. The simulation tool calculates the total transmission efficiency as well as the losses of individual transmission components depending on various environmental conditions. In this paper, the results in terms of increasing the efficiency of transmissions by optimisation of hardware components will be presented. Furthermore, the effects of temperature distribution in the transmission as well as the potential of minimising losses at low temperatures through thermal management will be illustrated. (orig.)

  3. Optimisation of X-ray examinations: General principles and an Irish perspective

    International Nuclear Information System (INIS)

    Matthews, Kate; Brennan, Patrick C.

    2009-01-01

    In Ireland, the European Medical Exposures Directive [Council Directive 97/43] was enacted into national law in Statutory Instrument 478 of 2002. This series of three review articles discusses the status of justification and optimisation of X-ray examinations nationally, and progress with the establishment of Irish diagnostic reference levels. In this second article, literature relating to optimisation issues arising in SI 478 of 2002 is reviewed. Optimisation associated with X-ray equipment and optimisation during day-to-day practice are considered. Optimisation proposals found in published research are summarised, and indicate the complex nature of optimisation. A paucity of current, research-based guidance documentation is identified. This is needed in order to support a range of professional staff in their practical implementation of optimisation.

  4. Mesh dependence in PDE-constrained optimisation an application in tidal turbine array layouts

    CERN Document Server

    Schwedes, Tobias; Funke, Simon W; Piggott, Matthew D

    2017-01-01

    This book provides an introduction to PDE-constrained optimisation using finite elements and the adjoint approach. The practical impact of the mathematical insights presented here are demonstrated using the realistic scenario of the optimal placement of marine power turbines, thereby illustrating the real-world relevance of best-practice Hilbert space aware approaches to PDE-constrained optimisation problems. Many optimisation problems that arise in a real-world context are constrained by partial differential equations (PDEs). That is, the system whose configuration is to be optimised follows physical laws given by PDEs. This book describes general Hilbert space formulations of optimisation algorithms, thereby facilitating optimisations whose controls are functions of space. It demonstrates the importance of methods that respect the Hilbert space structure of the problem by analysing the mathematical drawbacks of failing to do so. The approaches considered are illustrated using the optimisation problem arisin...

  5. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  6. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost-effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short-circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests.

  7. Modelling and genetic algorithm based optimisation of inverse supply chain

    Science.gov (United States)

    Bányai, T.

    2009-04-01

    The design and control of recycling systems for products with environmental risk have been discussed worldwide for a long time. The main reasons to address this subject are the following: reduction of waste volume, intensification of the recycling of materials, closing the loop, use of fewer resources, and reducing environmental risk [1, 2]. The development of recycling systems is based on the integrated solution of technological and logistic resources and know-how [3]. The financial conditions of recycling systems are partly based on the recovery, disassembly and remanufacturing options of the used products [4, 5, 6], but the investment and operating costs of recycling systems are characterised by high logistics costs, caused by a geographically wide collection system with several collection levels and a high number of operation points in the inverse supply chain. The reduction of these costs is a popular area of logistics research. This research includes the design and implementation of comprehensive environmental waste and recycling programs to suit business strategies (global system), the design and supply of all equipment for production-line collection (external system), and the design of logistics processes to suit economic and ecological requirements (external system) [7]. To the knowledge of the author, there has been no research work on supply chain design problems whose purpose is the logistics-oriented optimisation of an inverse supply chain in the case of a non-linear total cost function consisting not only of operating costs but also of environmental risk costs. The antecedent of this research is that the author has taken part in several research projects in the field of the closed-loop economy ("Closing the loop of electr(on)ic products and domestic appliances from product planning to end-of-life technologies"), environmentally friendly disassembly ("Concept for logistical and environmental disassembly technologies") and the design of recycling systems for household appliances

  8. DEPOSITION DISTRIBUTION AMONG THE PARALLEL PATHWAYS IN THE HUMAN LUNG CONDUCTING AIRWAY STRUCTURE.

    Science.gov (United States)

    DEPOSITION DISTRIBUTION AMONG THE PARALLEL PATHWAYS IN THE HUMAN LUNG CONDUCTING AIRWAY STRUCTURE. Chong S. Kim*, USEPA National Health and Environmental Effects Research Lab. RTP, NC 27711; Z. Zhang and C. Kleinstreuer, Department of Mechanical and Aerospace Engineering, North C...

  9. Model Driven Engineering

    Science.gov (United States)

    Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan

    A relevant initiative from the software engineering community called Model Driven Engineering (MDE) is being developed in parallel with the Semantic Web (Mellor et al. 2003a). The MDE approach to software development suggests that one should first develop a model of the system under study, which is then transformed into the real thing (i.e., an executable software entity). The most important research initiative in this area is the Model Driven Architecture (MDA), which is being developed under the umbrella of the Object Management Group (OMG). This chapter describes the basic concepts of this software engineering effort.

  10. Current NRPB recommendations on optimisation of protection of workers

    International Nuclear Information System (INIS)

    Wrixon, A.D.

    1994-01-01

    The National Radiological Protection Board is required by Ministerial Direction to provide advice on the relevance of the recommendations of the International Commission on Radiological Protection to the UK. Its advice was published in the Spring of 1993 after a period of consultation. In this article, which formed the basis of a presentation at an SRP Meeting on 29 April 1994, the Board's advice on the optimisation of protection of workers is explored and presented in the context of the developments in the understanding of the principle that have taken place in recent years. The most significant developments are the realisation that implementation of the principle is an essential function of good management and the recognition that the interests of the individual are not sufficiently taken into account by the dose limits alone but doses to individuals should be both constrained and optimised. (author)

  11. Optimised Design and Analysis of All-Optical Networks

    DEFF Research Database (Denmark)

    Glenstrup, Arne John

    2002-01-01

    This PhD thesis presents a suite of methods for optimising design and for analysing blocking probabilities of all-optical networks. It thus contributes methodical knowledge to the field of computer assisted planning of optical networks. A two-stage greenfield optical network design optimiser is developed, based on shortest-path algorithms and a comparatively new metaheuristic called simulated allocation. It is able to handle design of all-optical mesh networks with optical cross-connects, considers duct as well as fibre and node costs, and can also design protected networks. The method is assessed through various experiments and is shown to produce good results and to be able to scale up to networks of realistic sizes. A novel method, subpath wavelength grouping, for routing connections in a multigranular all-optical network where several wavelengths can be grouped and switched at band and fibre...

  12. Optimised mounting conditions for poly (ether sulfone) in radiation detection.

    Science.gov (United States)

    Nakamura, Hidehito; Shirakawa, Yoshiyuki; Sato, Nobuhiro; Yamada, Tatsuya; Kitamura, Hisashi; Takahashi, Sentaro

    2014-09-01

    Poly (ether sulfone) (PES) is a candidate for use as a scintillation material in radiation detection. Its characteristics, such as its emission spectrum and its effective refractive index (based on the emission spectrum), directly affect the propagation of light generated to external photodetectors. It is also important to examine the presence of background radiation sources in manufactured PES. Here, we optimise the optical coupling and surface treatment of the PES, and characterise its background. Optical grease was used to enhance the optical coupling between the PES and the photodetector; absorption by the grease of short-wavelength light emitted from PES was negligible. Diffuse reflection induced by surface roughening increased the light yield for PES, despite the high effective refractive index. Background radiation derived from the PES sample and its impurities was negligible above the ambient, natural level. Overall, these results serve to optimise the mounting conditions for PES in radiation detection. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Optimisation of patient and staff exposure in interventional cardiology

    International Nuclear Information System (INIS)

    Padovani, R.; Malisan, M.R.; Bernardi, G.; Vano, E.; Neofotistou, V.

    2001-01-01

    The Council Directive of the European Community 97/43/Euratom (MED) deals with the health protection of individuals against dangers of ionising radiation in relation to medical exposure, and also focuses attention on some special practices (Art. 9), including interventional radiology, a technique involving high doses to the patient. The paper presents the European approach to optimisation of exposure in interventional cardiology. The DIMOND research consortium (DIMOND: Digital Imaging: Measures for Optimising Radiological Information Content and Dose) is working to develop quality criteria for cineangiographic images, to develop procedures for the classification of complexity of therapeutic and diagnostic procedures and to derive reference levels, related also to procedure complexity. DIMOND project also includes aspects of equipment characteristics and performance and content of training in radiation protection of personnel working in interventional radiology field. (author)

  14. Biomass supply chain optimisation for Organosolv-based biorefineries.

    Science.gov (United States)

    Giarola, Sara; Patel, Mayank; Shah, Nilay

    2014-05-01

    This work aims at providing a Mixed Integer Linear Programming modelling framework to help define planning strategies for the development of sustainable biorefineries. The up-scaling of an Organosolv biorefinery was addressed via optimisation of the whole system economics. Three real world case studies were addressed to show the high-level flexibility and wide applicability of the tool to model different biomass typologies (i.e. forest fellings, cereal residues and energy crops) and supply strategies. Model outcomes have revealed how supply chain optimisation techniques could help shed light on the development of sustainable biorefineries. Feedstock quality, quantity, temporal and geographical availability are crucial to determine biorefinery location and the cost-efficient way to supply the feedstock to the plant. Storage costs are relevant for biorefineries based on cereal stubble, while wood supply chains present dominant pretreatment operations costs. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Optimisation of VSC-HVDC Transmission for Wind Power Plants

    DEFF Research Database (Denmark)

    Silva, Rodrigo Da

    Connection of Wind Power Plants (WPP), typically offshore, using VSC-HVDC transmission is an emerging solution with many benefits compared to the traditional AC solution, especially concerning the impact on the control architecture of the wind farms and the grid. The VSC-HVDC solution is likely to meet more stringent grid codes than a conventional AC transmission connection. The purpose of this project is to analyse how the HVDC solution, considering voltage-source-converter-based technology, for grid connection of large wind power plants can be designed and optimised. By optimisation, the project ... The case where the robust control technique is applied is compared with classical proportional-integral (PI) performance, by means of time-domain simulation in a point-to-point HVDC connection. The three main parameters in the discussion are the wind power delivered from the offshore wind power plant, the variation

  16. Computed tomography dose optimisation in cystic fibrosis: A review.

    LENUS (Irish Health Repository)

    Ferris, Helena

    2016-04-28

    Cystic fibrosis (CF) is the most common autosomal recessive disease of the Caucasian population worldwide, with respiratory disease remaining the most relevant source of morbidity and mortality. Computed tomography (CT) is frequently used for monitoring disease complications and progression. Over the last fifteen years there has been a six-fold increase in the use of CT, which has led to growing concern in relation to cumulative radiation exposure. The challenge to the medical profession is to identify dose reduction strategies that meet acceptable image quality, but fulfil the requirements of a diagnostic quality CT. Dose optimisation, particularly in CT, is essential as it reduces the chances of patients receiving cumulative radiation doses in excess of 100 mSv, a dose deemed significant by the United Nations Scientific Committee on the Effects of Atomic Radiation. This review article explores the current trends in imaging in CF with particular emphasis on new developments in dose optimisation.

  17. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    Full Text Available As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, small self-weight/load ratio, good dynamic behaviour and easy control; hence its range of application domains has been extended. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematic solution and the limitation of link lengths has been introduced. This paper analyses the position workspace and orientation workspace of a six-degree-of-freedom parallel robot. The results show that changing the lengths of the branches of the parallel mechanism is the main means of increasing or decreasing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but will change its position.
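
    As an illustration of such a boundary search (not the paper's algorithm or geometry), the sketch below grids the plane, keeps poses whose inverse kinematics give every leg a length within its limits, and flags reachable poses with an unreachable neighbour as boundary points; the planar three-legged geometry and all dimensions are hypothetical.

        import math

        # Hypothetical planar parallel mechanism: fixed base anchors, platform
        # anchors given in the platform frame, and actuator (leg) length limits.
        BASE = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.8)]
        PLAT = [(-0.1, -0.1), (0.1, -0.1), (0.0, 0.15)]
        L_MIN, L_MAX = 0.5, 2.0

        def reachable(x, y, phi=0.0):
            """Inverse kinematics test: a pose is in the workspace iff each leg
            length lies within the actuator limits (orientation phi fixed here)."""
            c, s = math.cos(phi), math.sin(phi)
            for (bx, by), (px, py) in zip(BASE, PLAT):
                lx = x + c * px - s * py - bx
                ly = y + s * px + c * py - by
                if not (L_MIN <= math.hypot(lx, ly) <= L_MAX):
                    return False
            return True

        STEP = 0.05
        grid = {(i, j) for i in range(-20, 60) for j in range(-20, 60)
                if reachable(i * STEP, j * STEP)}
        boundary = [(i * STEP, j * STEP) for (i, j) in grid
                    if any((i + di, j + dj) not in grid
                           for di in (-1, 0, 1) for dj in (-1, 0, 1))]
        print(len(grid), "reachable grid poses,", len(boundary), "on the boundary")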

  18. "Feeling" Series and Parallel Resistances.

    Science.gov (United States)

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
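
    The classroom demonstration maps onto the standard formulas, which a few lines make explicit (a sketch with arbitrary example values): series resistances add directly, while parallel resistances add as conductances.

        def series(*rs):
            """Series combination: resistances simply add."""
            return sum(rs)

        def parallel(*rs):
            """Parallel combination: conductances add, so 1/R = sum(1/R_i)."""
            return 1.0 / sum(1.0 / r for r in rs)

        print(series(100, 220))              # 320 ohms
        print(round(parallel(100, 220), 1))  # 68.8 ohms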

  19. Parallel encoders for pixel detectors

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1991-01-01

    A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example of the construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs.

  20. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundred cores. We describe routines for distributed storage of all major components, coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.