WorldWideScience

Sample records for efficient optimisation-based method

  1. Optimisation of efficiency of axial fans

    NARCIS (Netherlands)

    Kruyt, Nicolaas P.; Pennings, P.C.; Faasen, R.

    2014-01-01

    A three-stage research project was carried out to develop ducted axial fans with increased efficiency. In the first stage, a design method was developed in which various conflicting design criteria can be incorporated. Based on this design method, an optimised design has been determined

  2. Efficient topology optimisation of multiscale and multiphysics problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe

    The aim of this Thesis is to present efficient methods for optimising high-resolution problems of a multiscale and multiphysics nature. The Thesis consists of two parts: one treating topology optimisation of microstructural details and the other treating topology optimisation of conjugate heat...

  3. The bases for optimisation of scheduled repairs and tests of safety systems to improve the NPP productive efficiency

    International Nuclear Information System (INIS)

    Bilej, D.V.; Vasil'chenko, S.V.; Vlasenko, N.I.; Vasil'chenko, V.N.; Skalozubov, V.I.

    2004-01-01

    Within a risk-informed framework, the paper proposes theoretical bases for methods of optimising the scheduled repairs and tests of safety systems at nuclear power plants. The optimisation criterion is minimisation of an objective risk function, which depends on the periodicity of the scheduled repairs/tests and on the allowed time for which a system channel may remain non-operable. The main direction of optimisation is to reduce repair time so as to enhance productive efficiency

  4. Energy efficiency optimisation for distillation column using artificial neural network models

    International Nuclear Information System (INIS)

    Osuolale, Funmilayo N.; Zhang, Jie

    2016-01-01

    This paper presents a neural network based strategy for the modelling and optimisation of energy efficiency in distillation columns incorporating the second law of thermodynamics. Real-time optimisation of distillation columns based on mechanistic models is often infeasible due to the effort involved in model development and the large computational effort associated with mechanistic model evaluation. This issue can be addressed by using neural network models, which can be developed quickly from process operation data. The computation time for neural network model evaluation is very short, making them ideal for real-time optimisation. Bootstrap aggregated neural networks are used in this study for enhanced model accuracy and reliability. Aspen HYSYS is used for the simulation of the distillation systems. Neural network models for exergy efficiency and product compositions are developed from simulated process operation data and are used to maximise exergy efficiency while satisfying product quality constraints. Applications to binary methanol-water and benzene-toluene separations culminate in reductions of utility consumption of 8.2% and 28.2% respectively. Applications to multi-component separation columns also demonstrate the effectiveness of the proposed method, with a 32.4% improvement in exergy efficiency. - Highlights: • Neural networks can accurately model exergy efficiency in distillation columns. • Bootstrap aggregated neural networks offer improved model prediction accuracy. • Improved exergy efficiency is obtained through model-based optimisation. • Reductions in utility consumption of 8.2% and 28.2% were achieved for binary systems. • The exergy efficiency for multi-component distillation is increased by 32.4%.
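
    The bootstrap aggregation idea used in this record can be sketched generically: train several models on bootstrap resamples of the operating data and average their predictions, with the spread across models giving a rough reliability measure. The sketch below uses simple least-squares models in place of the paper's neural networks, and the toy data and function names are illustrative assumptions, not from the paper.

```python
import numpy as np

def fit_linear(X, y):
    # least-squares fit with a bias column appended
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict_linear(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ w

def bagged_predict(X_train, y_train, X_test, n_models=20, seed=0):
    """Bootstrap-aggregated ensemble: fit each model on a resample,
    average the predictions; the spread estimates model uncertainty."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, n, n)          # bootstrap resample with replacement
        w = fit_linear(X_train[idx], y_train[idx])
        preds.append(predict_linear(w, X_test))
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)

# toy data: "exergy efficiency" as a noisy linear function of two inputs
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 2))
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.5 + rng.normal(0, 0.01, 200)
mean, std = bagged_predict(X, y, np.array([[0.5, 0.5]]))
```

    In a real-time optimisation loop, the cheap `bagged_predict` call would replace the expensive mechanistic model evaluation inside the optimiser.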

  5. Optimisation of trawl energy efficiency under fishing effort constraint

    OpenAIRE

    Priour, Daniel; Khaled, Ramez

    2009-01-01

    Trawl energy efficiency is greatly affected by the drag, as well as by the swept area. The drag increases energy consumption, while the sweeping influences the catch. Methods for optimising the trawl design have been developed in order to reduce the volume of fuel per kg of caught fish, and consequently the drag per swept area of the trawl. Based on a finite element method model for flexible netting structures, the tool modifies a reference design step by step. For e...

  6. Optimising Job-Shop Functions Utilising the Score-Function Method

    DEFF Research Database (Denmark)

    Nielsen, Erland Hejn

    2000-01-01

    During the last 1-2 decades, simulation optimisation of discrete event dynamic systems (DEDS) has made considerable theoretical progress with respect to computational efficiency. The score-function (SF) method and the infinitesimal perturbation analysis (IPA) are two candidates belonging to this ... of a Job-Shop can be handled by the SF method....

  7. A supportive architecture for CFD-based design optimisation

    Science.gov (United States)

    Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong

    2014-03-01

    Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO requiring the analysis of fluid dynamics raises a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their application in various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, comparable to analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward. It is a challenge to solve a CFD-based design problem, which usually has high dimensionality and multiple objectives and constraints. It is therefore desirable to have an integrated architecture for CFD-based design optimisation, yet our review of existing works has found that very few researchers have studied assistive tools to facilitate it. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing techniques and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or the data level to fully utilise the capabilities of different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided, and the results have shown that the proposed architecture

  8. Beam position optimisation for IMRT

    International Nuclear Information System (INIS)

    Holloway, L.; Hoban, P.

    2001-01-01

    Full text: The introduction of IMRT has not generally resulted in the use of optimised beam positions, because finding the global solution of the problem requires a time-consuming stochastic optimisation method. Although a deterministic method may not achieve the global minimum, it should achieve a superior dose distribution compared to no optimisation. This study aimed to develop and test such a method. The beam optimisation method developed relies on an iterative process to reach the desired number of beams from a large initial number of beams. The number of beams is reduced in a 'weeding-out' process based on the total fluence which each beam delivers. The process is gradual, with only three beams removed each time (following a small number of iterations), ensuring that the reduction in beams does not dramatically affect the fluence maps of those remaining. A comparison was made between the dose distributions achieved when the beam positions were optimised in this fashion and when the beam positions were evenly distributed. The method has been shown to work effectively and efficiently. The Figure shows a comparison of dose distributions with optimised and non-optimised beam positions for 5 beams. It can be clearly seen that there is an improvement in the dose distribution delivered to the tumour and a reduction in the dose to the critical structure with beam position optimisation. A method for beam position optimisation for use in IMRT optimisations has been developed. This method, although not necessarily achieving the global minimum in beam position, still achieves quite a dramatic improvement compared with no beam position optimisation, and does so very efficiently. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine
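
    The 'weeding-out' loop described in this record has a simple skeleton: rank beams by total delivered fluence and drop the lowest few per pass until the target count remains. The following is a minimal sketch of that loop only; the beam labels, map shapes and the omission of the intermediate fluence re-optimisation step are simplifications, not the study's implementation.

```python
import numpy as np

def weed_out_beams(fluence_maps, target=5, batch=3):
    """Iteratively discard the beams delivering the least total fluence.
    fluence_maps: dict beam_angle -> 2-D fluence map.
    In the study, a few fluence-optimisation iterations run between
    removals; here ranking by total fluence stands in for that step."""
    beams = dict(fluence_maps)
    while len(beams) > target:
        # rank beams by total delivered fluence, ascending
        ranked = sorted(beams, key=lambda a: beams[a].sum())
        n_remove = min(batch, len(beams) - target)
        for angle in ranked[:n_remove]:
            del beams[angle]
        # (fluence maps of remaining beams would be re-optimised here)
    return sorted(beams)

# example: 12 hypothetical beams whose total fluence grows with index
fluence_maps = {a: np.full((8, 8), float(a + 1)) for a in range(12)}
kept = weed_out_beams(fluence_maps, target=5)
```

    Removing only a small batch per pass keeps each removal from drastically perturbing the remaining fluence maps, which is the gradualness the abstract emphasises.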

  9. An efficient optimisation method in groundwater resource ...

    African Journals Online (AJOL)

    DRINIE

    2003-10-04

    ... theories developed in the field of stochastic subsurface hydrology. In reality, many ... Recently, some researchers have applied the multi-stage ... Then a robust solution of the optimisation problem given by Eqs. (1) to (3) is as ...

  10. Reference voltage calculation method based on zero-sequence component optimisation for a regional compensation DVR

    Science.gov (United States)

    Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang

    2018-04-01

    This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults and the effect of the wiring mode of the transformer on these characteristics, the optimisation target of the reference voltage calculation is presented with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which can reduce the degree of swell in the phase-to-ground voltage after compensation to the maximum extent and can improve the symmetry degree of the output voltages of the DVR, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.

  11. Optimisation of test and maintenance based on probabilistic methods

    International Nuclear Information System (INIS)

    Cepin, M.

    2001-01-01

    This paper presents a method which, based on the models and results of probabilistic safety assessment, minimises nuclear power plant risk by optimising the arrangement of safety equipment outages. The test and maintenance activities of the safety equipment are arranged in time, so the classical static fault tree models are extended with time requirements to be capable of modelling real plant states. A house event matrix is used, which enables modelling of the equipment arrangements through discrete points in time. The result of the method is the determination of the configuration of equipment outages which results in minimal risk. Minimal risk is represented by system unavailability. (authors)

  12. Computer Based Optimisation Routines

    DEFF Research Database (Denmark)

    Dragsted, Birgitte; Olsen, Flemming Ove

    1996-01-01

    In this paper, the need for optimisation methods for the laser cutting process is identified in three different situations. Demands on the optimisation methods for these situations are presented, and one method is suggested for each situation. The adaptation and implementation of the methods...

  13. Optimising Reactive Control in non-ideal Efficiency Wave Energy Converters

    DEFF Research Database (Denmark)

    Strager, Thomas; Lopez, Pablo Fernandez; Giorgio, Giuseppe

    2014-01-01

    When analytically optimising the control strategy in wave energy converters which use a point absorber, the efficiency aspect is generally neglected. The results presented in this paper provide an analytical expression for the mean harvested electrical power in non-ideal efficiency situations. These have been derived under the assumptions of monochromatic incoming waves and linear system behaviour. This allows the power factor of a system with non-ideal efficiency to be established. The locus of the optimal reactive control parameters is then studied, and an alternative method of representation is developed to model the optimal control parameters. Ultimately, we present a simple method of choosing optimal control parameters for any combination of efficiency and wave frequency.

  14. Agent-Based Decision Control—How to Appreciate Multivariate Optimisation in Architecture

    DEFF Research Database (Denmark)

    Negendahl, Kristoffer; Perkov, Thomas Holmer; Kolarik, Jakub

    2015-01-01

    ...in the early design stage. The main focus is to demonstrate the optimisation method, which is done in two ways. Firstly, the newly developed agent-based optimisation algorithm named Moth is tested on three different single objective search spaces. Here Moth is compared to two evolutionary algorithms. Secondly, the method is applied to a multivariate optimisation problem. The aim is specifically to demonstrate optimisation for entire building energy consumption, daylight distribution and capital cost. Based on the demonstrations, Moth's ability to find local minima is discussed. It is concluded that agent-based optimisation algorithms like Moth open up new uses of optimisation in the early design stage. With Moth the final outcome is less dependent on pre- and post-processing, and Moth allows user intervention during optimisation. Therefore, agent-based models for optimisation such as Moth can be a powerful...

  15. Multi-Optimisation Consensus Clustering

    Science.gov (United States)

    Li, Jian; Swift, Stephen; Liu, Xiaohui

    Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
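
    Consensus clustering of the kind CC builds on is commonly based on a co-association matrix: for each pair of items, the fraction of base clusterings that place them in the same cluster. The sketch below shows that construction plus a greedy grouping step with a fixed agreement threshold; MOCC's optimised agreement-separation criterion and multi-optimisation framework are not reproduced here.

```python
import numpy as np

def coassociation_matrix(labelings):
    """Consensus (co-association) matrix: entry (i, j) is the fraction of
    base clusterings that put items i and j in the same cluster."""
    labelings = np.asarray(labelings)        # shape (n_clusterings, n_items)
    m, n = labelings.shape
    C = np.zeros((n, n))
    for labels in labelings:
        C += labels[:, None] == labels[None, :]
    return C / m

def consensus_clusters(labelings, threshold=0.5):
    """Greedy grouping: items whose co-association exceeds the agreement
    threshold join the same consensus cluster. A simplification of CC;
    MOCC additionally optimises the separation threshold itself."""
    C = coassociation_matrix(labelings)
    n = C.shape[0]
    assigned = [-1] * n
    cluster = 0
    for i in range(n):
        if assigned[i] == -1:
            assigned[i] = cluster
            for j in range(i + 1, n):
                if assigned[j] == -1 and C[i, j] > threshold:
                    assigned[j] = cluster
            cluster += 1
    return assigned

# three base clusterings of four items; the third disagrees on item 1
labelings = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 1]]
consensus = consensus_clusters(labelings)
```

    Averaging over base clusterings is what lets the ensemble outvote the biases of any single clustering algorithm.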

  16. Optimisation of technical specifications using probabilistic methods

    International Nuclear Information System (INIS)

    Ericsson, G.; Knochenhauer, M.; Hultqvist, G.

    1986-01-01

    During the last few years, the development of methods for modifying and optimising nuclear power plant Technical Specifications (TS) for plant operations has received increased attention. Probabilistic methods in general, and the plant and system models of probabilistic safety assessment (PSA) in particular, seem to provide the most powerful tools for optimisation. This paper first gives some general comments on optimisation, identifying important parameters, and then describes recent Swedish experience with the use of nuclear power plant PSA models and results for TS optimisation

  17. Analysis of optimisation method for a two-stroke piston ring using the Finite Element Method and the Simulated Annealing Method

    Science.gov (United States)

    Kaliszewski, M.; Mazuro, P.

    2016-09-01

    The Simulated Annealing Method is tested for optimisation of the sealing piston ring geometry. The aim of the optimisation is to develop a ring geometry which exerts the demanded pressure on the cylinder while being bent to fit it. A method for FEM analysis of an arbitrary piston ring geometry is implemented in the ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the Simulated Annealing Method to the piston ring optimisation task is proposed and visualised. Difficulties that can lead to a lack of convergence of the optimisation are presented. An example of an unsuccessful optimisation performed in APDL is discussed. A possible line of further improvement of the optimisation is proposed.
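
    The core of the Simulated Annealing Method is accepting occasional uphill moves with a temperature-controlled probability so the search can escape local minima, with the temperature cooled over time. A generic sketch follows, applied to a toy stand-in objective (squared deviation from demanded values); in the paper the objective is evaluated through FEM analysis of the ring geometry, which is not reproduced here.

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.995,
                        iters=5000, seed=42):
    """Generic simulated annealing: accept worse moves with probability
    exp(-delta/T) so the search can escape local minima; T decays
    geometrically. Illustrative of the method, not the APDL setup."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = objective(cand)
        # always accept improvements; accept worsening with prob exp(-delta/T)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# toy objective standing in for "squared deviation from demanded pressure"
demanded = [2.0, -1.0]
objective = lambda p: sum((pi - di) ** 2 for pi, di in zip(p, demanded))
best, fbest = simulated_annealing(objective, [0.0, 0.0])
```

    In the ring-geometry setting, the candidate move would perturb the polynomial coefficients of the geometry and the objective would compare FEM-computed contact pressure against the demanded pressure function.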

  18. A comparison of an energy/economic-based against an exergoeconomic-based multi-objective optimisation for low carbon building energy design

    International Nuclear Information System (INIS)

    García Kerdan, Iván; Raslan, Rokia; Ruyssevelt, Paul; Morillón Gálvez, David

    2017-01-01

    This study presents a comparison of the optimisation of building energy retrofit strategies from two different perspectives: an energy/economic-based analysis and an exergy/exergoeconomic-based analysis. A recently retrofitted community centre is used as a case study. ExRET-Opt, a novel building energy/exergy simulation tool with multi-objective optimisation capabilities based on NSGA-II, is used to run both analyses. The first analysis, based on the 1st Law only, simultaneously optimises building energy use and the design's Net Present Value (NPV). The second analysis, based on the 1st and 2nd Laws, simultaneously optimises exergy destructions and the exergoeconomic cost-benefit index. Occupant thermal comfort is considered as a common objective function for both approaches. The aim is to assess the difference between the methods and to compare performance across the main indicators, considering the same decision variables and constraints. Outputs show that including exergy/exergoeconomics as objective functions in the optimisation procedure results in similar 1st Law and thermal comfort outputs, while providing solutions with less environmental impact under similar capital investments. These outputs demonstrate that the 1st Law provides only a necessary condition, while the combined use of the 1st and 2nd Laws becomes a sufficient condition for the analysis and design of low carbon buildings. - Highlights: • The study compares an energy-based and an exergy-based building design optimisation. • Occupant thermal comfort is considered as a common objective function. • A comparison of thermodynamic outputs is made against the actual retrofit design. • Under similar constraints, second law optimisation presents better overall results. • Exergoeconomic optimisation solutions double the building exergy efficiency.

  19. Methods for Optimisation of the Laser Cutting Process

    DEFF Research Database (Denmark)

    Dragsted, Birgitte

    This thesis deals with the adaptation and implementation of various optimisation methods, in the field of experimental design, for the laser cutting process. The problem of optimising the laser cutting process has been defined, and a structure for a Decision Support System (DSS) for the optimisation of the laser cutting process has been suggested. The DSS consists of a database with the currently used and old parameter settings. One of the optimisation methods has also been implemented in the DSS in order to facilitate the optimisation procedure for the laser operator. The Simplex Method has been adapted in two versions: a qualitative one, which optimises the process by comparing the laser-cut items, and a quantitative one, which uses a weighted quality response in order to achieve a satisfactory quality and thereafter maximises the cutting speed, thus increasing the productivity of the process...

  20. Ontology-based coupled optimisation design method using state-space analysis for the spindle box system of large ultra-precision optical grinding machine

    Science.gov (United States)

    Wang, Qianren; Chen, Xing; Yin, Yuehong; Lu, Jian

    2017-08-01

    With the increasing complexity of mechatronic products, traditional empirical or step-by-step design methods are facing great challenges, as various factors and different stages inevitably become coupled during the design process. Management of massive information, or big data, as well as the efficient operation of information flow, is deeply involved in the process of coupled design. Designers have to address increasingly sophisticated situations when coupled optimisation is also engaged. Aiming to overcome the difficulties involved in the design of the spindle box system of an ultra-precision optical grinding machine, this paper proposes a coupled optimisation design method based on state-space analysis, with the design knowledge represented by ontologies and their semantic networks. An electromechanical coupled model integrating the mechanical structure, the control system and the driving system of the motor is established, mainly concerning the stiffness matrices of the hydrostatic bearings, ball screw nut and rolling guide sliders. The effectiveness and precision of the method are validated by simulation results for the natural frequency and deformation of the spindle box when an impact force is applied to the grinding wheel.

  1. Integration of Monte-Carlo ray tracing with a stochastic optimisation method: application to the design of solar receiver geometry.

    Science.gov (United States)

    Asselineau, Charles-Alexis; Zapata, Jose; Pye, John

    2015-06-01

    A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.

  2. An improved Lobatto discrete variable representation by a phase optimisation and variable mapping method

    International Nuclear Information System (INIS)

    Yu, Dequan; Cong, Shu-Lin; Sun, Zhigang

    2015-01-01

    Highlights: • An optimised finite element discrete variable representation method is proposed. • The method is tested by solving one- and two-dimensional Schrödinger equations. • The method is quite efficient in solving the molecular Schrödinger equation. • It is very easy to generalise the method to multidimensional problems. - Abstract: The Lobatto discrete variable representation (LDVR) proposed by Manolopoulos and Wyatt (1988) has unique features but has not been generally applied in the field of chemical dynamics. Instead, it has found popular application in solving atomic physics problems in combination with the finite element method (FE-DVR), due to its inherent ability to treat the Coulomb singularity in spherical coordinates. In this work, an efficient phase optimisation and variable mapping procedure is proposed to improve the grid efficiency of the LDVR/FE-DVR method, which not only makes it competitive with popular DVR methods such as the Sinc-DVR, but also preserves its advantages in treating the Coulomb singularity. The method is illustrated by calculations for a one-dimensional Coulomb potential, and the vibrational states of a one-dimensional Morse potential, a two-dimensional Morse potential and a two-dimensional Hénon–Heiles potential, which prove the efficiency of the proposed scheme and promise more general applications of the LDVR/FE-DVR method

  3. An improved Lobatto discrete variable representation by a phase optimisation and variable mapping method

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Dequan [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Cong, Shu-Lin, E-mail: shlcong@dlut.edu.cn [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); Sun, Zhigang, E-mail: zsun@dicp.ac.cn [State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Center for Advanced Chemical Physics and 2011 Frontier Center for Quantum Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei 230026 (China)

    2015-09-08

    Highlights: • An optimised finite element discrete variable representation method is proposed. • The method is tested by solving one- and two-dimensional Schrödinger equations. • The method is quite efficient in solving the molecular Schrödinger equation. • It is very easy to generalise the method to multidimensional problems. - Abstract: The Lobatto discrete variable representation (LDVR) proposed by Manolopoulos and Wyatt (1988) has unique features but has not been generally applied in the field of chemical dynamics. Instead, it has found popular application in solving atomic physics problems in combination with the finite element method (FE-DVR), due to its inherent ability to treat the Coulomb singularity in spherical coordinates. In this work, an efficient phase optimisation and variable mapping procedure is proposed to improve the grid efficiency of the LDVR/FE-DVR method, which not only makes it competitive with popular DVR methods such as the Sinc-DVR, but also preserves its advantages in treating the Coulomb singularity. The method is illustrated by calculations for a one-dimensional Coulomb potential, and the vibrational states of a one-dimensional Morse potential, a two-dimensional Morse potential and a two-dimensional Hénon–Heiles potential, which prove the efficiency of the proposed scheme and promise more general applications of the LDVR/FE-DVR method.

  4. Techno-economic optimisation of energy systems; Contribution a l'optimisation technico-economique de systemes energetiques

    Energy Technology Data Exchange (ETDEWEB)

    Mansilla Pellen, Ch

    2006-07-15

    The traditional approach currently used to assess the economic interest of energy systems is based on a defined flow-sheet. Some studies have shown that the flow-sheets corresponding to the best thermodynamic efficiencies do not necessarily lead to the best production costs. A method called techno-economic optimisation was proposed. This method aims at minimising the production cost of a given energy system, including both investment and operating costs. It was implemented using genetic algorithms. This approach was compared to the heat integration method on two different examples, thus validating its interest. Techno-economic optimisation was then applied to different energy systems dealing with hydrogen as well as electricity production. (author)
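
    A genetic algorithm for techno-economic optimisation evolves a population of candidate designs and ranks them by total production cost. The sketch below is a minimal real-coded GA with elitist selection (parents drawn from the better half), uniform crossover and Gaussian mutation; the one-variable cost function is a made-up stand-in for an investment-plus-operating-cost trade-off, not the thesis's flow-sheet model.

```python
import random

def genetic_minimise(cost, bounds, pop_size=40, generations=120,
                     mutation=0.1, seed=3):
    """Minimal real-coded genetic algorithm. A sketch of the
    techno-economic idea (minimise total production cost), not the
    author's actual implementation."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=cost)
        elite = scored[: pop_size // 2]          # keep the cheaper half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            # uniform crossover, then bounded Gaussian mutation
            child = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
            child = [min(max(c + rng.gauss(0, mutation), lo), hi)
                     for c, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

# hypothetical cost curve: investment rises with "efficiency" x while
# operating cost falls, giving a minimum-cost design at x = 0.7
cost = lambda v: (v[0] - 0.7) ** 2 + 0.1
best = genetic_minimise(cost, [(0.0, 1.0)])
```

    The key point of the approach is that the GA searches over flow-sheet parameters directly on production cost, rather than first maximising thermodynamic efficiency and costing the result afterwards.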

  5. Optimisation on processing parameters for minimising warpage on side arm using response surface methodology (RSM) and particle swarm optimisation (PSO)

    Science.gov (United States)

    Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.

    2017-09-01

    This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was used to analyse the warpage. A design of experiments (DOE) for response surface methodology (RSM) was constructed, and using the regression equation from the RSM, particle swarm optimisation (PSO) was applied. The optimisation method yields optimised processing parameters with minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters. Parameter selection was based on the most significant factors affecting warpage reported by previous researchers. The results show that warpage was improved by 28.16% for RSM and 28.17% for PSO; the improvement of PSO over RSM is only 0.01%. Thus, optimisation using RSM alone is already sufficient to give the best combination of parameters and an optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
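
    The RSM-then-PSO pipeline can be sketched as follows: the fitted regression equation serves as a cheap surrogate for the Moldflow simulation, and a plain particle swarm searches it for the parameter combination minimising predicted warpage. The quadratic surrogate and the [0, 1] parameter scaling below are hypothetical stand-ins, not the study's fitted model.

```python
import random

def pso_minimise(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5,
                 c2=1.5, seed=7):
    """Plain particle swarm optimisation: each particle is pulled toward
    its personal best and the swarm's global best."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    pbest_f = [f(p) for p in pos]
    g = list(pbest[pbest_f.index(min(pbest_f))])
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (g[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = list(pos[i]), fi
                if fi < f(g):
                    g = list(pos[i])
    return g

# hypothetical RSM surrogate: warpage as a quadratic in melt temperature
# and packing pressure (both scaled to [0, 1])
warpage = lambda x: 0.3 + (x[0] - 0.4) ** 2 + 2 * (x[1] - 0.8) ** 2
best = pso_minimise(warpage, [(0.0, 1.0), (0.0, 1.0)])
```

    Because the surrogate is only a fitted polynomial, the tiny 0.01% gap the study reports between RSM and PSO is plausible: PSO can do no better than the minimum of the surface RSM already describes.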

  6. Risk based test interval and maintenance optimisation - Application and uses

    International Nuclear Information System (INIS)

    Sparre, E.

    1999-10-01

    The project is part of an IAEA Co-ordinated Research Project (CRP) on 'Development of Methodologies for Optimisation of Surveillance Testing and Maintenance of Safety Related Equipment at NPPs'. The purpose of the project is to investigate the sensitivity of the results obtained when performing risk-based optimisation of the technical specifications. Previous projects have shown that complete LPSA models can be created and that these models allow optimisation of technical specifications. However, these optimisations did not include any in-depth check of the sensitivity of the results with regard to methods, model completeness etc. Four different test intervals have been investigated in this study. Aside from an original (nominal) optimisation, a set of sensitivity analyses has been performed, and the results from these analyses have been compared to the original optimisation. The analyses indicate that the result of an optimisation is rather stable. However, it is not possible to draw any firm conclusions without performing a number of sensitivity analyses. Significant differences in the optimisation result were discovered when analysing an alternative configuration. Deterministic uncertainties also seem to affect the result of an optimisation considerably. The sensitivity to failure data uncertainties is important to investigate in detail, since the methodology is based on the assumption that the unavailability of a component depends on the length of the test interval

  7. Techno-economic optimisation of energy systems

    International Nuclear Information System (INIS)

    Mansilla Pellen, Ch.

    2006-07-01

    The traditional approach currently used to assess the economic interest of energy systems is based on a defined flow-sheet. Some studies have shown that the flow-sheets corresponding to the best thermodynamic efficiencies do not necessarily lead to the best production costs. A method called techno-economic optimisation was proposed. This method aims at minimising the production cost of a given energy system, including both investment and operating costs. It was implemented using genetic algorithms. This approach was compared to the heat integration method on two different examples, thus validating its interest. Techno-economic optimisation was then applied to different energy systems dealing with hydrogen as well as electricity production. (author)

  8. Optimisation-based worst-case analysis and anti-windup synthesis for uncertain nonlinear systems

    Science.gov (United States)

    Menon, Prathyush Purushothama

    This thesis describes the development and application of optimisation-based methods for worst-case analysis and anti-windup synthesis for uncertain nonlinear systems. The worst-case analysis methods developed in the thesis are applied to the problem of nonlinear flight control law clearance for highly augmented aircraft. Local, global and hybrid optimisation algorithms are employed to evaluate worst-case violations of a nonlinear response clearance criterion, for a highly realistic aircraft simulation model and flight control law. The reliability and computational overheads associated with different optimisation algorithms are compared, and the capability of optimisation-based approaches to clear flight control laws over continuous regions of the flight envelope is demonstrated. An optimisation-based method for computing worst-case pilot inputs is also developed, and compared with current industrial approaches for this problem. The importance of explicitly considering uncertainty in aircraft parameters when computing worst-case pilot demands is clearly demonstrated. Preliminary results on extending the proposed framework to the problems of limit-cycle analysis and robustness analysis in the presence of time-varying uncertainties are also included. A new method for the design of anti-windup compensators for nonlinear constrained systems controlled using nonlinear dynamics inversion control schemes is presented and successfully applied to some simple examples. An algorithm based on the use of global optimisation is proposed to design the anti-windup compensator. Some conclusions are drawn from the results of the research presented in the thesis, and directions for future work are identified.

  9. Power law-based local search in spider monkey optimisation for lower order system modelling

    Science.gov (United States)

    Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala

    2017-01-01

    The nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm that obtains a better approximation of the lower order system while preserving the characteristics of the original higher order system. Further, a local search strategy, namely power law-based local search, is incorporated with SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.
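    The record does not reproduce the PLSMO update rules, but the core idea of a power law-based local search can be sketched as a greedy refinement whose step lengths follow a heavy-tailed power law: mostly small refinements with occasional large escape moves. Everything below (the sphere benchmark, the step distribution, the acceptance rule) is an illustrative assumption, not the authors' algorithm:

```python
import random

def sphere(x):
    # Benchmark objective: sum of squares, global minimum 0 at the origin.
    return sum(v * v for v in x)

def power_law_step(scale, alpha=2.5):
    # Step length drawn from a heavy-tailed power-law distribution.
    u = 1.0 - random.random()  # u in (0, 1]
    return scale * (u ** (-1.0 / (alpha - 1.0)) - 1.0)

def power_law_local_search(x, f, iters=500, scale=0.1, seed=1):
    random.seed(seed)
    best, best_val = list(x), f(x)
    for _ in range(iters):
        cand = [v + random.choice((-1, 1)) * power_law_step(scale) for v in best]
        val = f(cand)
        if val < best_val:  # greedy acceptance of improving moves only
            best, best_val = cand, val
    return best, best_val

start = [0.8, -0.5, 0.3]
sol, val = power_law_local_search(start, sphere)
```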

  10. Biomass supply chain optimisation for Organosolv-based biorefineries.

    Science.gov (United States)

    Giarola, Sara; Patel, Mayank; Shah, Nilay

    2014-05-01

    This work aims at providing a Mixed Integer Linear Programming modelling framework to help define planning strategies for the development of sustainable biorefineries. The up-scaling of an Organosolv biorefinery was addressed via optimisation of the whole system economics. Three real world case studies were addressed to show the high-level flexibility and wide applicability of the tool to model different biomass typologies (i.e. forest fellings, cereal residues and energy crops) and supply strategies. Model outcomes have revealed how supply chain optimisation techniques could help shed light on the development of sustainable biorefineries. Feedstock quality, quantity, temporal and geographical availability are crucial to determine biorefinery location and the cost-efficient way to supply the feedstock to the plant. Storage costs are relevant for biorefineries based on cereal stubble, while wood supply chains present dominant pretreatment operations costs. Copyright © 2014 Elsevier Ltd. All rights reserved.
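    A full MILP formulation of the supply chain needs a solver, but the flavour of the location/supply trade-off can be shown with a tiny brute-force plan: choose the plant site that meets demand at minimum fixed-plus-transport cost. All data below (sites, fixed costs, supplies, transport costs) are invented for illustration and are not taken from the study:

```python
# Hypothetical data: candidate plant sites with fixed costs, feedstock
# sources with available supply (kt) and per-kt transport cost to each site.
sites = {"A": 120.0, "B": 100.0}
supply = {"forest": 40.0, "stubble": 25.0}
transport = {("forest", "A"): 1.0, ("forest", "B"): 2.5,
             ("stubble", "A"): 3.0, ("stubble", "B"): 1.2}
demand = 50.0  # kt of biomass the plant must receive

def best_plan():
    best = None
    for site in sites:
        # Cheapest-first feedstock assignment; greedy is exact here because
        # transport costs are linear and there are no per-source fixed costs.
        order = sorted(supply, key=lambda s: transport[(s, site)])
        remaining, cost = demand, sites[site]
        for s in order:
            take = min(supply[s], remaining)
            cost += take * transport[(s, site)]
            remaining -= take
        if remaining <= 0 and (best is None or cost < best[1]):
            best = (site, cost)
    return best

site, cost = best_plan()
```

    The real model adds time periods, storage, pretreatment and integer build decisions, which is what makes the MILP machinery necessary.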

  11. Satellite Vibration Testing: Angle optimisation method to Reduce Overtesting

    Science.gov (United States)

    Knight, Charly; Remedia, Marcello; Aglietti, Guglielmo S.; Richardson, Guy

    2018-06-01

    Spacecraft overtesting is a long running problem, and the main focus of most attempts to reduce it has been to adjust the base vibration input (i.e. notching). Instead this paper examines testing alternatives for secondary structures (equipment) coupled to the main structure (satellite) when they are tested separately. Even if the vibration source is applied along one of the orthogonal axes at the base of the coupled system (satellite plus equipment), the dynamics of the system and potentially the interface configuration mean the vibration at the interface may not occur all along one axis, much less the corresponding orthogonal axis of the base excitation. This paper proposes an alternative testing methodology in which the testing of a piece of equipment occurs at an offset angle. This Angle Optimisation method may involve multiple tests, each with an altered input direction, allowing for the best match between all specified equipment system responses and those of the coupled system tests. An optimisation process compares the calculated equipment RMS values for a range of inputs with the maximum coupled-system RMS values, and is used to find the optimal testing configuration for the given parameters. A case study was performed to find the best testing angles to match the acceleration responses of the centre of mass and sum of interface forces for all three axes, as well as the von Mises stress for an element by a fastening point. The angle optimisation method resulted in RMS values and PSD responses that were much closer to the coupled system when compared with traditional testing. The optimum testing configuration resulted in an overall average error significantly smaller than the traditional method. Crucially, this case study shows that the optimum test campaign could be a single equipment level test as opposed to the traditional three orthogonal direction tests.
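    As a hedged, one-parameter illustration of the angle-optimisation idea (the real method matches RMS and PSD responses of full structural models), the sketch below picks the offset angle of a single-axis input so that the equipment-level RMS responses best match hypothetical coupled-system targets. All numbers are invented:

```python
import math

def coupled_rms():
    # Hypothetical coupled-system (satellite + equipment) target RMS
    # responses along x, y, z at the equipment interface.
    return (1.0, 0.6, 0.2)

def equipment_rms(angle_deg, level=1.2):
    # Toy equipment-level test: a single-axis input applied at an offset
    # angle produces responses proportional to its direction cosines
    # (the z response is held fixed in this sketch).
    a = math.radians(angle_deg)
    return (level * math.cos(a), level * math.sin(a), 0.2)

def mismatch(angle_deg):
    # Sum of squared errors between test responses and coupled targets.
    return sum((e - c) ** 2 for e, c in zip(equipment_rms(angle_deg), coupled_rms()))

best_angle = min(range(0, 91), key=mismatch)
```

    A 0° input (the traditional single-axis choice) leaves the lateral response unmatched; the offset angle recovers most of it in one test.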

  12. Optimisation and validation of methods to assess single nucleotide polymorphisms (SNPs) in archival histological material

    DEFF Research Database (Denmark)

    Andreassen, C N; Sørensen, Flemming Brandt; Overgaard

    2004-01-01

    only archival specimens are available. This study was conducted to validate protocols optimised for assessment of SNPs based on paraffin embedded, formalin fixed tissue samples.PATIENTS AND METHODS: In 137 breast cancer patients, three TGFB1 SNPs were assessed based on archival histological specimens...... precipitation).RESULTS: Assessment of SNPs based on archival histological material is encumbered by a number of obstacles and pitfalls. However, these can be widely overcome by careful optimisation of the methods used for sample selection, DNA extraction and PCR. Within 130 samples that fulfil the criteria...

  13. Optimisation of Software-Defined Networks Performance Using a Hybrid Intelligent System

    Directory of Open Access Journals (Sweden)

    Ann Sabih

    2017-06-01

    Full Text Available This paper proposes a novel intelligent technique designed to optimise the performance of Software Defined Networks (SDN). The proposed hybrid intelligent system employs the integration of intelligence-based optimisation approaches with an artificial neural network. These heuristic optimisation methods include Genetic Algorithms (GA) and Particle Swarm Optimisation (PSO). These methods were utilised separately in order to select the best inputs to maximise SDN performance. In order to identify SDN behaviour, the neural network model is trained and applied. The best-performing optimisation approach was identified using an analytical approach that considered SDN performance and the computational time as objective functions. Initially, the general model of the neural network was tested with unseen data before implementing the model using GA and PSO to determine the optimal performance of SDN. The results showed that the SDN, represented by an Artificial Neural Network (ANN) and optimised by PSO, generated a better configuration with regard to computational efficiency and performance index.
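    The trained ANN and network data are not available here, but the surrogate-assisted optimisation loop itself can be sketched: a minimal PSO searching the input space of a stand-in surrogate. The simple quadratic below is an assumption standing in for the trained network, and all coefficients are illustrative:

```python
import random

def surrogate_performance(x):
    # Placeholder for the trained ANN surrogate: peak performance at (0.3, 0.7).
    return -((x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2)

def pso_maximise(f, dim=2, n=20, iters=100, seed=0):
    # Textbook global-best PSO with inertia 0.7 and cognitive/social weights 1.5.
    random.seed(seed)
    pos = [[random.random() for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = max(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v > pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v > gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

best, val = pso_maximise(surrogate_performance)
```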

  14. Optimisation-Based Solution Methods for Set Partitioning Models

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel

    The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown...

  15. Mechatronic System Design Based On An Optimisation Approach

    DEFF Research Database (Denmark)

    Andersen, Torben Ole; Pedersen, Henrik Clemmensen; Hansen, Michael Rygaard

    The envisaged objective of this project is to extend the current state of the art regarding the design of complex mechatronic systems utilising an optimisation approach. We propose to investigate a novel framework for mechatronic system design, the novelty and originality being the use of optimisation techniques. The methods used to optimise/design within the classical disciplines will be identified and extended to mechatronic system design.

  16. Multiobjective optimisation of bogie suspension to boost speed on curves

    Science.gov (United States)

    Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor

    2016-01-01

    To improve safety and maximum admissible speed on different operational scenarios, multiobjective optimisation of bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in the multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on track plane accelerations up to 1.5 m/s2. To reduce the number of design parameters for optimisation and improve the computational efficiency, a global sensitivity analysis is accomplished using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on a genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The bogie conventional secondary and primary suspension components are chosen as the design parameters in the first two steps, respectively. In the last step, semi-active suspension is in focus. The input electrical current to magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and the respective effects on bogie dynamics are explored. The safety Pareto optimised results are compared with those associated with in-service values. The global sensitivity analysis and multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised values of the design parameters gives the possibility to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.

  17. Metal Removal Process Optimisation using Taguchi Method - Simplex Algorithm (TM-SA) with Case Study Applications

    OpenAIRE

    Ajibade, Oluwaseyi A.; Agunsoye, Johnson O.; Oke, Sunday A.

    2018-01-01

    In the metal removal process industry, the current practice to optimise cutting parameters adopts a conventional method. It is based on trial and error, in which the machine operator uses experience, coupled with handbook guidelines, to determine optimal parametric values of choice. This method is not accurate, is time-consuming and costly. Therefore, there is a need for a method that is scientific, cost-effective and precise. Keeping this in mind, a different direction for process optimisation is ...

  18. Optimisation by hierarchical search

    Science.gov (United States)

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
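    The hierarchical grouping idea can be illustrated, under strong simplifying assumptions, as repeated greedy optimisation over blocks of variables of a toy cost function (the actual method in the record is more general than this block-coordinate sketch):

```python
def energy(x):
    # Toy cost function: quadratic wells plus a weak coupling between neighbours.
    return sum((x[i] - i) ** 2 for i in range(len(x))) + 0.1 * sum(
        (x[i] - x[i + 1]) ** 2 for i in range(len(x) - 1))

def optimise_group(x, group, f, step=0.1, sweeps=200):
    # Greedy coordinate moves restricted to one group of variables.
    x = x[:]
    for _ in range(sweeps):
        for i in group:
            for delta in (step, -step):
                cand = x[:]
                cand[i] += delta
                if f(cand) < f(x):
                    x = cand
    return x

def hierarchical_search(x, f, groups, rounds=3):
    # Optimise each group of variables in turn, then repeat.
    for _ in range(rounds):
        for g in groups:
            x = optimise_group(x, g, f)
    return x

x0 = [0.0] * 6
x_opt = hierarchical_search(x0, energy, groups=[[0, 1, 2], [3, 4, 5]])
```

    Optimising groups rather than single variables lets correlated variables move together, which is the motivation the abstract gives for the hierarchical approach.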

  19. Optimisation of groundwater level monitoring networks using geostatistical modelling based on the Spartan family variogram and a genetic algorithm method

    Science.gov (United States)

    Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2016-04-01

    Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, development of tools, which can be used by regulators for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The network optimisation tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest, which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently-established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the

  20. Group search optimiser-based optimal bidding strategies with no Karush-Kuhn-Tucker optimality conditions

    Science.gov (United States)

    Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.

    2017-03-01

    The general strategic bidding procedure has been formulated in the literature as a bi-level searching problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex and hence, the researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to the classical optimisation algorithms. The problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem as a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on IEEE 14 as well as IEEE 30 bus systems and the performance is compared against differential evolution-based strategic bidding, genetic algorithm-based strategic bidding and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.

  1. Mutual information-based LPI optimisation for radar network

    Science.gov (United States)

    Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun

    2015-07-01

    Radar network can offer significant performance improvement for target detection and information extraction employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may extend beyond a predefined threshold with full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve LPI performance for radar network. Based on the radar network system model, we first provide the Schleher intercept factor for a radar network as an optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented, where for a predefined MI threshold, the Schleher intercept factor for the radar network is minimised by optimising the transmission power allocation among radars in the network such that enhanced LPI performance for the radar network can be achieved. The genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Some simulations demonstrate that the proposed algorithm is valuable and effective in improving the LPI performance for radar network.
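    The Schleher intercept factor and the GA-NP solver are not reproduced here; as a sketch of the underlying trade-off, the following greedy water-filling allocates transmit power across radars to meet a mutual-information target with minimum total power, using a simplified log2(1 + SNR) MI proxy and invented gain values:

```python
import math

def mutual_information(powers, gains):
    # Simplified MI proxy: sum of per-radar log2(1 + SNR) terms.
    return sum(math.log2(1.0 + p * g) for p, g in zip(powers, gains))

def min_power_allocation(gains, mi_target, step=0.01, p_max=5.0):
    # Greedy increments: give the next slice of power to the radar with the
    # largest marginal MI per watt until the target is met (water-filling).
    powers = [0.0] * len(gains)
    while mutual_information(powers, gains) < mi_target:
        candidates = [(g / (1.0 + p * g), i)
                      for i, (p, g) in enumerate(zip(powers, gains)) if p < p_max]
        if not candidates:
            raise ValueError("MI target infeasible within the power limits")
        powers[max(candidates)[1]] += step
    return powers

gains = [1.0, 0.5, 2.0]  # hypothetical per-radar propagation/gain factors
alloc = min_power_allocation(gains, mi_target=3.0)
```

    Minimising total radiated power for a fixed MI threshold is the same direction of trade-off the paper exploits to lower the intercept factor.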

  2. SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres

    Science.gov (United States)

    Bi, Jing; Yuan, Haitao; Tie, Ming; Tan, Wei

    2015-10-01

    Dynamic virtualised resource allocation is the key to quality of service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture of cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the amount of virtual machines for each tier of the virtualised application service environments. In addition, we propose a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers, and to meet performance requirements from different clients as well. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving the overall performance and reducing the resource energy cost.

  3. Thermodynamic optimisation and analysis of four Kalina cycle layouts for high temperature applications

    International Nuclear Information System (INIS)

    Modi, Anish; Haglind, Fredrik

    2015-01-01

    The Kalina cycle has seen increased interest in the last few years as an efficient alternative to the conventional steam Rankine cycle. However, the available literature gives little information on the algorithms to solve or optimise this inherently complex cycle. This paper presents a detailed approach to solve and optimise a Kalina cycle for high temperature (a turbine inlet temperature of 500 °C) and high pressure (over 100 bar) applications using a computationally efficient solution algorithm. A central receiver solar thermal power plant with direct steam generation was considered as a case study. Four different layouts for the Kalina cycle based on the number and/or placement of the recuperators in the cycle were optimised and compared based on performance parameters such as the cycle efficiency and the cooling water requirement. The cycles were modelled in steady state and optimised with the maximisation of the cycle efficiency as the objective function. It is observed that the different cycle layouts result in different regions for the optimal value of the turbine inlet ammonia mass fraction. Out of the four compared layouts, the most complex layout KC1234 gives the highest efficiency. The cooling water requirement is closely related to the cycle efficiency, i.e., the better the efficiency, the lower is the cooling water requirement. - Highlights: • Detailed methodology for solving and optimising Kalina cycle for high temperature applications. • A central receiver solar thermal power plant with direct steam generation considered as a case study. • Four Kalina cycle layouts based on the placement of recuperators optimised and compared

  4. Geometrical exploration of a flux-optimised sodium receiver through multi-objective optimisation

    Science.gov (United States)

    Asselineau, Charles-Alexis; Corsi, Clothilde; Coventry, Joe; Pye, John

    2017-06-01

    A stochastic multi-objective optimisation method is used to determine receiver geometries with maximum second law efficiency, minimal average temperature and minimal surface area. The method is able to identify a set of Pareto optimal candidates that show advantageous geometrical features, chiefly the ability to maximise the intercepted flux within the geometrical boundaries set. Receivers with first law thermal efficiencies ranging from 87% to 91% are also evaluated using the second law of thermodynamics and found to have similar efficiencies of over 60%, highlighting the influence that the geometry can play in maximising the work output of receivers by shaping the distribution of the flux from the concentrator.
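    Independent of the receiver physics, the multi-objective machinery rests on non-dominated (Pareto) filtering, which can be sketched directly. The candidate tuples below (negated second-law efficiency, average temperature, surface area, all minimised) are invented for illustration:

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one (all objectives are minimised here).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep only the non-dominated candidates.
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical receiver candidates: (-efficiency, avg. temperature K, area m^2).
candidates = [(-0.62, 820.0, 12.0), (-0.60, 800.0, 11.0),
              (-0.58, 850.0, 10.0), (-0.55, 860.0, 13.0)]
front = pareto_front(candidates)
```

    Negating the efficiency turns the maximisation objective into a minimisation one, so a single dominance rule covers all three objectives.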

  5. Probabilistic sensitivity analysis of optimised preventive maintenance strategies for deteriorating infrastructure assets

    International Nuclear Information System (INIS)

    Daneshkhah, A.; Stocks, N.G.; Jeffrey, P.

    2017-01-01

    Efficient life-cycle management of civil infrastructure systems under continuous deterioration can be improved by studying the sensitivity of optimised preventive maintenance decisions with respect to changes in model parameters. Sensitivity analysis in maintenance optimisation problems is important because if the calculation of the cost of preventive maintenance strategies is not sufficiently robust, the use of the maintenance model can generate optimised maintenance strategies that are not cost-effective. Probabilistic sensitivity analysis methods (particularly variance-based ones) only partially respond to this issue and their use is limited to evaluating the extent to which uncertainty in each input contributes to the overall output's variance. These methods do not take account of the decision-making problem in a straightforward manner. To address this issue, we use the concept of the Expected Value of Perfect Information (EVPI) to perform decision-informed sensitivity analysis: to identify the key parameters of the problem and quantify the value of learning about certain aspects of the life-cycle management of civil infrastructure systems. This approach allows us to quantify the benefits of the maintenance strategies in terms of expected costs and in the light of accumulated information about the model parameters and aspects of the system, such as the ageing process. We use a Gamma process model to represent the uncertainty associated with asset deterioration, illustrating the use of EVPI to perform sensitivity analysis on the optimisation problem for age-based and condition-based preventive maintenance strategies. The evaluation of EVPI indices is computationally demanding and Markov Chain Monte Carlo techniques would not be helpful. To overcome this computational difficulty, we approximate the EVPI indices using Gaussian process emulators.
The implications of the worked numerical examples discussed in the context of analytical efficiency and organisational
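    The EVPI used in the record above has a compact definition: the expected value of deciding after the uncertainty resolves, minus the value of the best decision made now. A minimal worked example with an invented two-action, two-scenario maintenance problem (utilities are negative costs):

```python
def evpi(actions, scenarios, probs, utility):
    # EVPI = E_s[max_a u(a, s)] - max_a E_s[u(a, s)]
    best_expected = max(sum(p * utility(a, s) for p, s in zip(probs, scenarios))
                        for a in actions)
    expected_best = sum(p * max(utility(a, s) for a in actions)
                        for p, s in zip(probs, scenarios))
    return expected_best - best_expected

# Hypothetical costs: preventive maintenance is a flat 100; corrective
# maintenance is cheap if deterioration is slow, expensive if it is fast.
cost = {("preventive", "slow"): 100.0, ("preventive", "fast"): 100.0,
        ("corrective", "slow"): 50.0, ("corrective", "fast"): 200.0}
value = evpi(["preventive", "corrective"], ["slow", "fast"], [0.5, 0.5],
             lambda a, s: -cost[(a, s)])
```

    Here acting now costs 100 (preventive is the best hedge), while knowing the deterioration rate first would cost 75 on average, so learning that parameter is worth 25 per asset.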

  6. Robustness analysis of bogie suspension components Pareto optimised values

    Science.gov (United States)

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, a robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of bogie suspension is robust against uncertainties in the design parameters and that the probability of failure is small for parameter uncertainties with COV up to 0.1.
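    The lognormal perturbation scheme described above can be sketched generically: sample each design parameter from a lognormal distribution whose mean is the nominal (Pareto optimised) value and whose spread is set by a chosen COV, then propagate through the response model. The one-line response function below is a placeholder, not the 50-DOF vehicle model:

```python
import math
import random

def lognormal_around(nominal, cov, rng):
    # Lognormal with mean `nominal` and coefficient of variation `cov`.
    sigma2 = math.log(1.0 + cov * cov)
    mu = math.log(nominal) - 0.5 * sigma2
    return rng.lognormvariate(mu, math.sqrt(sigma2))

def response(stiffness, damping):
    # Toy stand-in for a vehicle dynamics response (e.g. a wear index).
    return 1.0 / (stiffness * damping)

rng = random.Random(42)
nominal = {"stiffness": 2.0, "damping": 0.5}  # invented nominal values
samples = [response(lognormal_around(nominal["stiffness"], 0.1, rng),
                    lognormal_around(nominal["damping"], 0.1, rng))
           for _ in range(2000)]
mean_resp = sum(samples) / len(samples)
```

    Monte Carlo sampling is used here for simplicity; the paper instead uses M-DRM with fractional moments precisely to avoid this many model evaluations.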

  7. Analysis and optimisation of heterogeneous real-time embedded systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2005-01-01

    . The success of such new design methods depends on the availability of analysis and optimisation techniques. Analysis and optimisation techniques for heterogeneous real-time embedded systems are presented in the paper. The authors address in more detail a particular class of such systems called multi...... of application messages to frames. Optimisation heuristics for frame packing aimed at producing a schedulable system are presented. Extensive experiments and a real-life example show the efficiency of the frame-packing approach....

  8. Analysis and optimisation of heterogeneous real-time embedded systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2006-01-01

    . The success of such new design methods depends on the availability of analysis and optimisation techniques. Analysis and optimisation techniques for heterogeneous real-time embedded systems are presented in the paper. The authors address in more detail a particular class of such systems called multi...... of application messages to frames. Optimisation heuristics for frame packing aimed at producing a schedulable system are presented. Extensive experiments and a real-life example show the efficiency of the frame-packing approach....

  9. Study on the evolutionary optimisation of the topology of network control systems

    Science.gov (United States)

    Zhou, Zude; Chen, Benyuan; Wang, Hong; Fan, Zhun

    2010-08-01

    Computer networks have become very popular in enterprise applications. However, optimisation of network designs that allows networks to be used more efficiently in industrial environments and enterprise applications remains an interesting research topic. This article mainly discusses the topology optimisation theory and methods of network control systems based on switched Ethernet in an industrial context. Factors that affect the real-time performance of the industrial control network are presented in detail, and optimisation criteria with their internal relations are analysed. After the definition of performance parameters, normalised indices for the evaluation of the topology optimisation are proposed. The topology optimisation problem is formulated as a multi-objective optimisation problem and an evolutionary algorithm is applied to solve it. Special communication characteristics of the industrial control network are considered in the optimisation process. With respect to the design of the evolutionary algorithm, an improved arena algorithm is proposed for the construction of the non-dominated set of the population. In addition, for the evaluation of individuals, the integrated use of the dominance relation method and the objective function combination method is proposed to reduce the computational cost of the algorithm. Simulation tests show that the performance of the proposed algorithm is preferable and superior compared to other algorithms. The final solution greatly improves the following indices: traffic localisation, traffic balance and utilisation rate balance of switches. In addition, a new performance index with its estimation process is proposed.

  10. Self-optimisation and model-based design of experiments for developing a C–H activation flow process

    Directory of Open Access Journals (Sweden)

    Alexander Echtermeyer

    2017-01-01

    Full Text Available A recently described C(sp3)–H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the smallest number of experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled the rapid generation of a process model.

  11. Advanced optimisation - coal fired power plant operations

    Energy Technology Data Exchange (ETDEWEB)

    Turney, D.M.; Mayes, I. [E.ON UK, Nottingham (United Kingdom)

    2005-03-01

    The purpose of this unit optimisation project is to develop an integrated approach to unit optimisation and an overall optimiser that is able to resolve any conflicts between the individual optimisers. The individual optimisers considered during this project are: on-line thermal efficiency package, GNOCIS boiler optimiser, GNOCIS steam-side optimiser, ESP optimisation, and intelligent sootblowing system. 6 refs., 7 figs., 3 tabs.

  12. Parametric studies and optimisation of pumped thermal electricity storage

    International Nuclear Information System (INIS)

    McTigue, Joshua D.; White, Alexander J.; Markides, Christos N.

    2015-01-01

    Highlights: • PTES is modelled by cycle analysis and a Schumann-style model of the thermal stores. • Optimised trade-off surfaces show a flat efficiency vs. energy density profile. • Overall roundtrip efficiencies of around 70% are not inconceivable. - Abstract: Several of the emerging technologies for electricity storage are based on some form of thermal energy storage (TES). Examples include liquid air energy storage, pumped heat energy storage and, at least in part, advanced adiabatic compressed air energy storage. Compared to other large-scale storage methods, TES benefits from relatively high energy densities, which should translate into a low cost per MW h of storage capacity and a small installation footprint. TES is also free from the geographic constraints that apply to hydro storage schemes. TES concepts for electricity storage rely on either a heat pump or refrigeration cycle during the charging phase to create a hot or a cold storage space (the thermal stores), or in some cases both. During discharge, the thermal stores are depleted by reversing the cycle such that it acts as a heat engine. The present paper is concerned with a form of TES that has both hot and cold packed-bed thermal stores, and for which the heat pump and heat engine are based on a reciprocating Joule cycle, with argon as the working fluid. A thermodynamic analysis is presented based on traditional cycle calculations coupled with a Schumann-style model of the packed beds. Particular attention is paid to the various loss-generating mechanisms and their effect on roundtrip efficiency and storage density. A parametric study is first presented that examines the sensitivity of results to assumed values of the various loss factors and demonstrates the rather complex influence of the numerous design variables. Results of an optimisation study are then given in the form of trade-off surfaces for roundtrip efficiency, energy density and power density. The optimised designs show a

  13. Methodological principles for optimising functional MRI experiments

    International Nuclear Information System (INIS)

    Wuestenberg, T.; Giesel, F.L.; Strasburger, H.

    2005-01-01

    Functional magnetic resonance imaging (fMRI) is one of the most common methods for localising neuronal activity in the brain. Even though the sensitivity of fMRI is comparatively low, optimising certain experimental parameters allows reliable results to be obtained. In this article, approaches for optimising the experimental design, imaging parameters and analytic strategies are discussed. Clinical neuroscientists and interested physicians will receive practical rules of thumb for improving the efficiency of brain imaging experiments. (orig.) [de]

  14. An Optimised Method to Determine PAHs in a Contaminated Soil; Metodo Optimizado para la Determinacion de PAHs en un Suelo Contaminado

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Alonso, S.; Perez Pastor, R. M.; Sevillano Castano, M. L.; Escolano Segovia, O.; Garcia Frutos, F. J.

    2007-07-20

    An analytical study is presented based on an optimised method to determine selected polycyclic aromatic hydrocarbons (PAHs) by High Performance Liquid Chromatography (HPLC) with fluorescence detection. The work focused on obtaining reliable measurements of PAHs in a gasworks-contaminated soil and was performed within the framework of the project 'Assessment of natural remediation technologies for PAHs in contaminated soils' (Spanish Plan Nacional I+D+i, CTM 2004-05832-CO2-01). First assays evaluated an initially proposed sonication-extraction procedure on the contaminated soil. Afterwards, to improve efficiency and reduce the solvent and time consumption of the extraction procedures, the most relevant parameters affecting the extraction step were investigated. Sonication and microwave procedures were compared, and the influence of sample grinding was studied. In general, both extraction techniques led to comparable results, although the sonication procedure needs to be optimised more carefully. Finally, as an application of the optimised method, the effect of particle size on the relative distribution of selected PAHs in the contaminated soil was investigated. The relative abundance of the more volatile PAHs decreased with decreasing grain size, while that of the less volatile compounds increased at lower grain sizes. (Author) 10 refs.

  15. Design of optimised backstepping controller for the synchronisation ...

    Indian Academy of Sciences (India)

    Ehsan Fouladi

    2017-12-18

    Dec 18, 2017 ... for the proposed optimised method compared to PSO optimised controller or any non-optimised backstepping controller. Keywords. Colpitts oscillator; backstepping controller; chaos synchronisation; shark smell algorithm; particle .... The velocity model is based on the gradient of the objective function, tilting ...

  16. Sizing Combined Heat and Power Units and Domestic Building Energy Cost Optimisation

    Directory of Open Access Journals (Sweden)

    Dongmin Yu

    2017-06-01

    Full Text Available Many combined heat and power (CHP) units have been installed in domestic buildings to increase energy efficiency and reduce energy costs. However, inappropriate sizing of a CHP may actually increase energy costs and reduce energy efficiency. Moreover, the high manufacturing cost of batteries makes batteries less affordable. Therefore, this paper attempts to size the capacity of CHP and optimise daily energy costs for a domestic building with only CHP installed. Electricity and heat loads are first used as sizing criteria in finding the best capacities of different types of CHP with the help of the maximum rectangle (MR) method. Subsequently, a genetic algorithm (GA) is used to optimise the daily energy costs of the different cases. Then, heat and electricity loads are jointly considered for sizing different types of CHP and for optimising the daily energy costs through the GA method. The optimisation results show that the GA sizing method gives a higher average daily energy cost saving, a 13% reduction compared to a building without CHP. However, achieving this entails about a 3% reduction in energy efficiency and a 7% reduction in the input-power-to-rated-power ratio compared to using the MR method and heat demand in sizing the CHP.
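
The GA sizing step in the record above can be illustrated with a generic real-coded genetic algorithm. This is a hedged Python sketch of the technique only; the cost function, bounds and GA parameters below are invented for illustration and are not the paper's building model:

```python
import random

def genetic_minimise(cost, lo, hi, pop_size=40, gens=120, mut=0.1, seed=1):
    """Minimise a scalar cost over [lo, hi] with a real-coded GA:
    tournament selection, blend crossover and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a = min(rng.sample(pop, 3), key=cost)   # tournament parent 1
            b = min(rng.sample(pop, 3), key=cost)   # tournament parent 2
            child = a + rng.random() * (b - a)      # blend crossover
            if rng.random() < mut:                  # Gaussian mutation
                child += rng.gauss(0.0, 0.1 * (hi - lo))
            nxt.append(min(max(child, lo), hi))     # clip to bounds
        pop = nxt
    return min(pop, key=cost)

# Toy stand-in for a daily energy-cost curve with a single best CHP size:
best_size = genetic_minimise(lambda p: (p - 3.2) ** 2 + 5.0, 0.0, 10.0)
```

The same loop generalises to a vector of decision variables (CHP capacity per site, dispatch schedule) by making each individual a list instead of a scalar.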

  17. Shape optimisation and performance analysis of flapping wings

    KAUST Repository

    Ghommem, Mehdi

    2012-09-04

    In this paper, shape optimisation of flapping wings in forward flight is considered. This analysis is performed by combining a local gradient-based optimizer with the unsteady vortex lattice method (UVLM). Although the UVLM applies only to incompressible, inviscid flows where the separation lines are known a priori, Persson et al. [1] showed through a detailed comparison between UVLM and higher-fidelity computational fluid dynamics methods for flapping flight that the UVLM schemes produce accurate results for attached flow cases and even remain trend-relevant in the presence of flow separation. As such, they recommended the use of an aerodynamic model based on UVLM to perform preliminary design studies of flapping wing vehicles. Unlike standard computational fluid dynamics schemes, this method requires meshing of the wing surface only and not of the whole flow domain [2]. From the design or optimisation perspective taken in our work, it is fairly common (and sometimes entirely necessary, as a result of the excessive computational cost of the highest-fidelity tools such as Navier-Stokes solvers) to rely upon such a moderate level of modelling fidelity to traverse the design space in an economical manner. The objective of the work described in this paper is to identify a set of optimised shapes that maximise the propulsive efficiency, defined as the ratio of the propulsive power over the aerodynamic power, under lift, thrust, and area constraints. The shape of the wings is modelled using B-splines, a technology used in the computer-aided design (CAD) field for decades. This basis can be used to smoothly discretize wing shapes with few degrees of freedom, referred to as control points. The locations of the control points constitute the design variables. The results suggest that changing the shape yields significant improvement in the performance of the flapping wings. The optimisation pushes the design to "bird-like" shapes with substantial increase in the time

  18. A hybrid credibility-based fuzzy multiple objective optimisation to differential pricing and inventory policies with arbitrage consideration

    Science.gov (United States)

    Ghasemy Yaghin, R.; Fatemi Ghomi, S. M. T.; Torabi, S. A.

    2015-10-01

    In most markets, price differentiation mechanisms enable manufacturers to offer different prices for their products or services in different customer segments; however, the perfect price discrimination is usually impossible for manufacturers. The importance of accounting for uncertainty in such environments spurs an interest to develop appropriate decision-making tools to deal with uncertain and ill-defined parameters in joint pricing and lot-sizing problems. This paper proposes a hybrid bi-objective credibility-based fuzzy optimisation model including both quantitative and qualitative objectives to cope with these issues. Taking marketing and lot-sizing decisions into account simultaneously, the model aims to maximise the total profit of manufacturer and to improve service aspects of retailing simultaneously to set different prices with arbitrage consideration. After applying appropriate strategies to defuzzify the original model, the resulting non-linear multi-objective crisp model is then solved by a fuzzy goal programming method. An efficient stochastic search procedure using particle swarm optimisation is also proposed to solve the non-linear crisp model.

  19. Cultural-based particle swarm for dynamic optimisation problems

    Science.gov (United States)

    Daneshyari, Moayed; Yen, Gary G.

    2012-07-01

    Many practical optimisation problems involve uncertainties, and a significant number of them belong to the dynamic optimisation problem (DOP) category, in which the fitness function changes through time. In this study, we propose a cultural-based particle swarm optimisation (PSO) to solve DOPs. A cultural framework is adopted that incorporates the required information from the PSO into five sections of the belief space, namely situational, temporal, domain, normative and spatial knowledge. The stored information is used to detect changes in the environment and assists the response to change through diversity-based repulsion among particles and migration among swarms in the population space; it also helps in selecting the leading particles at three levels: personal, swarm and global. Comparison of the proposed heuristic over several difficult dynamic benchmark problems demonstrates performance better than or equal to most other selected state-of-the-art dynamic PSO heuristics.
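
For reference, the core PSO update that cultural variants build on looks as follows. This is a minimal global-best Python sketch with standard inertia/cognitive/social parameters assumed; none of the belief-space machinery described above is reproduced:

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f over a box with a basic global-best particle swarm."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

A dynamic variant would additionally re-evaluate `pbest_val` when a change is detected, which is where the belief-space knowledge described in the record comes in.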

  20. Multicriteria Optimisation in Logistics Forwarder Activities

    Directory of Open Access Journals (Sweden)

    Tanja Poletan Jugović

    2007-05-01

    Full Text Available Logistics forwarders, as organizers and planners of the coordination and integration of all the elements of transport and logistics chains, use adequate methods in the process of planning and decision-making. One such method, analysed in this paper, which can be used to optimise the transport and logistics processes and activities of a logistics forwarder, is multicriteria optimisation. Using that method, this paper suggests a model of multicriteria optimisation of logistics forwarder activities. The suggested optimisation model is justified in keeping with the principles of multicriteria optimisation, which belongs to operations research methods and represents a process of multicriteria optimisation of variants. Among the many multicriteria optimisation procedures, PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations) and Promcalc & Gaia V. 3.2, a multicriteria programming computer program based on that procedure, were used.
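
The PROMETHEE II ranking named above can be sketched compactly. This is an illustrative Python version using the "usual" (strict) preference function, with invented alternatives and weights rather than the paper's logistics data:

```python
def promethee_ii(scores, weights, maximise):
    """PROMETHEE II with the 'usual' (strict) preference function:
    rank alternatives by net outranking flow phi = phi_plus - phi_minus.
    scores[a][c] is alternative a's value on criterion c."""
    n = len(scores)

    def pref(a, b):  # aggregated preference index pi(a, b)
        total = 0.0
        for c, w in enumerate(weights):
            d = scores[a][c] - scores[b][c]
            if not maximise[c]:        # for a cost criterion, lower is better
                d = -d
            total += w * (1.0 if d > 0 else 0.0)
        return total / sum(weights)

    phi = []
    for a in range(n):
        plus = sum(pref(a, b) for b in range(n) if b != a) / (n - 1)
        minus = sum(pref(b, a) for b in range(n) if b != a) / (n - 1)
        phi.append(plus - minus)
    ranking = sorted(range(n), key=lambda a: -phi[a])
    return ranking, phi

# Three hypothetical transport options scored on cost (minimise) and speed (maximise):
ranking, phi = promethee_ii([[100, 60], [80, 50], [90, 70]],
                            weights=[0.5, 0.5], maximise=[False, True])
```

Production tools such as Promcalc & Gaia also support linear, level and Gaussian preference functions with indifference/preference thresholds; only the simplest case is shown here.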

  1. Optimisation of the energy efficiency of bread-baking ovens using a combined experimental and computational approach

    International Nuclear Information System (INIS)

    Khatir, Zinedine; Paton, Joe; Thompson, Harvey; Kapur, Nik; Toropov, Vassili

    2013-01-01

    Highlights: ► A scientific framework for optimising oven operating conditions is presented. ► Experiments measuring local convective heat transfer coefficient are undertaken. ► An energy efficiency model is developed with experimentally calibrated CFD analysis. ► Designing ovens with optimum heat transfer coefficients reduces energy use. ► Results demonstrate a strong case to design and manufacture energy optimised ovens. - Abstract: Changing legislation and rising energy costs are bringing the need for efficient baking processes into much sharper focus. High-speed air impingement bread-baking ovens are complex systems using air flow to transfer heat to the product. In this paper, computational fluid dynamics (CFD) is combined with experimental analysis to develop a rigorous scientific framework for the rapid generation of forced convection oven designs. A design parameterisation of a three-dimensional generic oven model is carried out for a wide range of oven sizes and flow conditions to optimise desirable features such as temperature uniformity throughout the oven, energy efficiency and manufacturability. Coupled with the computational model, a series of experiments measuring the local convective heat transfer coefficient (h_c) are undertaken. The facility used for the heat transfer experiments is representative of a scaled-down production oven where the air temperature and velocity as well as important physical constraints such as nozzle dimensions and nozzle-to-surface distance can be varied. An efficient energy model is developed using a CFD analysis calibrated using experimentally determined inputs. Results from a range of oven designs are presented together with ensuing energy usage and savings.

  2. Exploration of automatic optimisation for CUDA programming

    KAUST Repository

    Al-Mouhamed, Mayez; Khan, Ayaz ul Hassan

    2014-01-01

    © 2014 Taylor & Francis. Writing optimised compute unified device architecture (CUDA) programs for graphics processing units (GPUs) is complex even for experts. We present a design methodology for a restructuring tool that converts C-loops into optimised CUDA kernels based on a three-step algorithm: loop tiling, coalesced memory access and resource optimisation. A method for finding possible loop-tiling solutions with coalesced memory access is developed, and a simplified algorithm for restructuring C-loops into an efficient CUDA kernel is presented. In the evaluation, we implement matrix multiply (MM), matrix transpose (M-transpose), matrix scaling (M-scaling) and matrix-vector multiply (MV) using the proposed algorithm. We present the analysis of the execution time and GPU throughput for the above applications, which compare favourably to other proposals. Evaluation is carried out while scaling the problem size and running under a variety of kernel configurations. The obtained speedup is about 28-35% for M-transpose compared to the NVIDIA Software Development Kit, 33% for MV compared to a general-purpose computation on graphics processing units compiler, and more than 80% for MM and M-scaling compared to CUDA-lite.
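
Of the three restructuring steps, loop tiling is the easiest to show in isolation. Below is a plain-Python sketch of a blocked matrix multiply; the CUDA tool described above maps such tiles onto thread blocks and shared memory, which is not reproduced here:

```python
def matmul_tiled(A, B, tile=32):
    """Blocked (tiled) matrix multiply: each loop is split into tile-sized
    blocks so a tile of A and B is reused while it is hot in fast memory --
    the same restructuring applied to C-loops before mapping tiles to CUDA
    thread blocks and shared memory."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for ii in range(0, n, tile):            # block loops
        for kk in range(0, m, tile):
            for jj in range(0, p, tile):
                for i in range(ii, min(ii + tile, n)):   # intra-tile loops
                    for k in range(kk, min(kk + tile, m)):
                        a = A[i][k]
                        for j in range(jj, min(jj + tile, p)):
                            C[i][j] += a * B[k][j]
    return C
```

In the CUDA mapping, the `jj`/`ii` block indices become the thread-block grid, the inner loops become per-thread work, and the current tiles of A and B are staged in shared memory so that global-memory reads are coalesced.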

  4. Smart meter deployment optimisation and its analysis for appliance load monitoring

    Directory of Open Access Journals (Sweden)

    Ahmed Shaharyar Khwaja

    2015-04-01

    Full Text Available In this study, the authors address the problem of smart meter deployment optimisation for appliance load monitoring, that is, monitoring a number of devices without ambiguity using the minimum number of low-cost smart meters. The problem matters because reducing the number of meters decreases deployment cost, improves reliability and decreases congestion. In this way, future smart meters can provide additional information about the type and number of distinct devices connected, besides their normal functions of providing overall energy measurements and communicating them. The authors present two exact smart meter deployment optimisation algorithms, one based on exhaustive search and the other on an efficient implementation of the exhaustive search. They formulate the problem mathematically and present a computational complexity analysis of their algorithms. Simulation scenarios show that, for a typical number of home appliances, the efficient search method is significantly faster than the exhaustive search and provides the same optimal solution. The authors also show the dependency of their method on the distribution of the load patterns that can occur in a typical household.
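
The exhaustive deployment search can be modelled as a minimum test-cover problem. The small Python sketch below uses an assumed encoding (each candidate meter observes a known subset of appliances); this encoding is illustrative, not the authors' formulation:

```python
from itertools import combinations

def min_meter_set(coverage, n_appliances):
    """Smallest set of candidate meters whose combined readings identify every
    appliance unambiguously (a test-cover problem): each appliance must be
    observed by at least one chosen meter and have a unique signature of
    covering meters. coverage[m] is the set of appliance ids meter m observes."""
    meters = range(len(coverage))
    for k in range(1, len(coverage) + 1):   # grow subset size: first hit is minimal
        for subset in combinations(meters, k):
            sigs = [frozenset(m for m in subset if a in coverage[m])
                    for a in range(n_appliances)]
            if all(sigs) and len(set(sigs)) == n_appliances:
                return subset
    return None   # no subset of meters can disambiguate all appliances
```

Iterating subset sizes from small to large makes the first feasible answer minimal, at exponential worst-case cost; the record's "efficient implementation" prunes this search without changing the optimum.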

  5. Statistical methods towards more efficient infiltration measurements.

    Science.gov (United States)

    Franz, T; Krebs, P

    2006-01-01

    Comprehensive knowledge of the infiltration situation in a catchment is required for operation and maintenance. Because of the high expenditures involved, optimising the necessary measurement campaigns is essential. Methods based on multivariate statistics were developed to improve the information yield of measurements by identifying appropriate gauge locations. The methods allow a high degree of freedom with respect to data needs. They were successfully tested on real and artificial data. For suitable catchments, it is estimated that the optimisation potential amounts to up to a 30% accuracy improvement compared with non-optimised gauge distributions. Besides this, a correlation between independent reach parameters and dependent infiltration rates could be identified, which is not dominated by the groundwater head.

  6. Coil optimisation for transcranial magnetic stimulation in realistic head geometry.

    Science.gov (United States)

    Koponen, Lari M; Nieminen, Jaakko O; Mutanen, Tuomas P; Stenroos, Matti; Ilmoniemi, Risto J

    Transcranial magnetic stimulation (TMS) allows focal, non-invasive stimulation of the cortex. A TMS pulse is inherently weakly coupled to the cortex; thus, magnetic stimulation requires both high current and high voltage to reach sufficient intensity. These requirements limit, for example, the maximum repetition rate and the maximum number of consecutive pulses with the same coil due to the rise of its temperature. The objective was to develop methods to optimise, design, and manufacture energy-efficient TMS coils in realistic head geometry with an arbitrary overall coil shape. We derive a semi-analytical integration scheme for computing the magnetic field energy of an arbitrary surface current distribution, compute the electric field induced by this distribution with a boundary element method, and optimise a TMS coil for focal stimulation. Additionally, we introduce a method for manufacturing such a coil by using Litz wire and a coil former machined from polyvinyl chloride. We designed, manufactured, and validated an optimised TMS coil and applied it to brain stimulation. Our simulations indicate that this coil requires less than half the power of a commercial figure-of-eight coil, with a 41% reduction due to the optimised winding geometry and a partial contribution due to our thinner coil former and reduced conductor height. With the optimised coil, the resting motor threshold of abductor pollicis brevis was reached with the capacitor voltage below 600 V and peak current below 3000 A. The described method allows designing practical TMS coils with considerably higher efficiency than conventional figure-of-eight coils.

  7. PHYSICAL-MATHEMATICAL SCIENCE MECHANICS SIMULATION CHALLENGES IN OPTIMISING THEORETICAL METAL CUTTING TASKS

    Directory of Open Access Journals (Sweden)

    Rasul V. Guseynov

    2017-01-01

    Full Text Available Abstract. Objectives: In the article, problems in optimising machining operations, which must provide end-unit production of the required quality at minimum processing cost, are addressed. Methods: The effectiveness of experimental research was increased through the use of mathematical methods for planning experiments for optimising metal cutting tasks. A minimal processing-cost model, in which the objective function is polynomial, is adopted as the criterion for selecting optimal parameters. Results: Polynomial models of the influence of the angles φ, α and γ on the torque applied when cutting threads in various steels are constructed. Optimum values of the geometrical tool parameters were obtained using the criterion of minimum cutting forces during processing. The high stability of tools with optimal geometric parameters is determined. It is shown that the use of experiment planning methods allows the optimisation of cutting parameters. In optimising solutions to metal cutting problems, it is expedient to use multifactor experiment planning methods and to select the cutting force as the optimisation parameter when determining tool geometry. Conclusion: The joint use of geometric programming and experiment planning methods to optimise cutting parameters significantly increases the efficiency of technological metal processing approaches.

  8. Modulation aware cluster size optimisation in wireless sensor networks

    Science.gov (United States)

    Sriram Naik, M.; Kumar, Vinay

    2017-07-01

    Wireless sensor networks (WSNs) play a great role because of their numerous advantages to mankind. The main challenge with WSNs is energy efficiency. In this paper, we focus on energy minimisation through cluster size optimisation, taking into account the effect of modulation when nodes cannot communicate using the baseband technique. Cluster size optimisation is an important technique for improving the performance of WSNs: it improves energy efficiency, network scalability, network lifetime and latency. We propose an analytical expression for cluster size optimisation using the traditional sensing model of nodes for a square sensing field, with modulation effects taken into account. Energy minimisation can be achieved by changing the modulation scheme (e.g. BPSK, QPSK, 16-QAM, 64-QAM), so we consider the effect of different modulation techniques on cluster formation. The nodes are deployed randomly and uniformly in the sensing field. It is also observed that placing the base station at the centre of the field allows only a small number of modulation schemes to operate energy-efficiently, whereas placing it at the corner of the sensing field allows a larger number of schemes to do so.
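
A flavour of such analytical cluster sizing is the classic first-order radio model estimate of the optimal cluster count. The Python sketch below uses the standard LEACH-style formula and the usual amplifier constants, without the modulation-dependent terms the record above adds:

```python
import math

def optimal_cluster_count(n_nodes, field_side, d_to_bs,
                          eps_fs=10e-12, eps_mp=0.0013e-12):
    """LEACH-style first-order radio model estimate of the energy-optimal
    number of clusters for n_nodes in a square field of side field_side (m),
    with the base station at distance d_to_bs (m). eps_fs and eps_mp are the
    customary free-space and multipath amplifier constants (J/bit/m^2 and
    J/bit/m^4); modulation effects are deliberately omitted here."""
    k = (math.sqrt(n_nodes / (2 * math.pi))
         * math.sqrt(eps_fs / eps_mp)
         * field_side / d_to_bs ** 2)
    return max(1, round(k))
```

For example, 100 nodes in a 100 m field with the base station 75 m away yields a handful of clusters; a modulation-aware model would scale the per-bit energies with the constellation size before applying the same balance between intra-cluster and cluster-head transmission costs.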

  9. Optimal design of CHP-based microgrids: Multiobjective optimisation and life cycle assessment

    International Nuclear Information System (INIS)

    Zhang, Di; Evangelisti, Sara; Lettieri, Paola; Papageorgiou, Lazaros G.

    2015-01-01

    As an alternative to current centralised energy generation systems, microgrids are adopted to provide local energy with lower energy expenses and gas emissions by utilising distributed energy resources (DER). Several micro combined heat and power (CHP) technologies have been developed recently for applications at domestic scale. The optimal design of DERs within CHP-based microgrids plays an important role in promoting the penetration of microgrid systems. In this work, the optimal design of microgrids with CHP units is addressed by coupling environmental and economic sustainability in a multi-objective optimisation model which integrates the results of a life cycle assessment of the microgrids investigated. The results show that the installation of multiple CHP technologies has a lower cost with higher environmental savings compared with the case when only a single technology is installed in each site, meaning that the microgrid works more efficiently when multiple technologies are selected. In general, proton exchange membrane (PEM) fuel cells are chosen as the basic CHP technology for most solutions, offering lower environmental impacts at low cost. However, internal combustion engines (ICE) and Stirling engines (SE) are preferred if the heat demand is high. - Highlights: • Optimal design of microgrids is addressed by coupling environmental and economic aspects. • An MILP model is formulated based on the ε-constraint method. • The model selects a combination of CHP technologies with different technical characteristics for optimum scenarios. • The global warming potential (GWP) and the acidification potential (AP) are determined. • The output of LCA is used as an input for the optimisation model

  10. ICT for whole life optimisation of residential buildings

    Energy Technology Data Exchange (ETDEWEB)

    Haekkinen, T.; Vares, S.; Huovila, P.; Vesikari, E.; Porkka, J. (VTT Technical Research Centre of Finland, Espoo (FI)); Nilsson, L.-O.; Togeroe, AA. (Lund University (SE)); Jonsson, C.; Suber, K. (Skanska Sverige AB (SE)); Andersson, R.; Larsson, R. (Cementa, Malmoe (SE)); Nuorkivi, I. (Skanska Oyj, Helsinki (FI))

    2007-08-15

    The research project 'ICT for whole life optimisation of residential buildings' (ICTWLORB) developed and tested whole life design and optimisation methods for residential buildings. The objective of the ICTWLORB project was to develop and implement an ICT-based tool box for integrated life cycle design of residential buildings. ICTWLORB was performed in cooperation with Swedish and Finnish partners. The ICTWLORB project took as its premise that an industrialised building process is characterised by two main elements: a building-concept-based approach and efficient information management. The building-concept-based approach enables (1) product development of the end product, (2) repetition of the basic elements of the building from one project to another and (3) customisation of the end product considering the specific needs of the case and the client. Information management enables (1) the consideration of a wide spectrum of aspects, including building performance, environmental aspects, life cycle costs and service life, and (2) rapid adaptation of the design to the specific requirements of the case. (orig.)

  11. Structure and weights optimisation of a modified Elman network emotion classifier using hybrid computational intelligence algorithms: a comparative study

    Science.gov (United States)

    Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood

    2015-10-01

    Artificial neural networks are efficient models in pattern recognition applications, but their performance is dependent on employing suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier based on gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering the features of speech signal that were related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on particle swarm optimisation (PSO) algorithm and its binary version, PSO and discrete firefly algorithm, and hybrid of error back-propagation and genetic algorithm that were used for optimisation. Experimental tests on Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.

  12. Optimisation of Oil Production in Two – Phase Flow Reservoir Using Simultaneous Method and Interior Point Optimiser

    DEFF Research Database (Denmark)

    Lerch, Dariusz Michal; Völcker, Carsten; Capolei, Andrea

    2012-01-01

    in the reservoir. A promising reduction of these remaining resources can be provided by smart wells applying water injection to sustain a satisfactory pressure level in the reservoir throughout the whole process of oil production. Basically, to enhance secondary recovery of the remaining oil after drilling, water ... is injected at the injection wells of the down-hole pipes. This sustains the pressure in the reservoir and drives oil towards the production wells. There are, however, many factors contributing to the poor performance of conventional secondary recovery methods, e.g. strong surface tension and the heterogeneity of the porous rock ... fields, or closed-loop optimisation, can be used for optimising reservoir performance in terms of the net present value of oil recovery or another economic objective. In order to solve the optimal control problem we use a direct collocation method, in which we translate a continuous problem into a discrete

  13. A New Computational Technique for the Generation of Optimised Aircraft Trajectories

    Science.gov (United States)

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto

    2017-12-01

    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ε-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ε-constraint method is presented to allow an efficient solution of problems in which two or more performance indices are to be minimised simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
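
The ε-constraint idea itself is simple to demonstrate on a toy bi-objective problem: minimise one objective subject to a swept bound on the other. The Python sketch below substitutes a grid search for the constrained NLP solver that would be used with pseudospectral discretisation, and uses a uniform ε sweep rather than the adaptive bisection of the record above:

```python
def epsilon_constraint_front(f1, f2, xs, n_points=5):
    """Trace a bi-objective Pareto front by minimising f1 subject to
    f2(x) <= eps for a sweep of eps values; exhaustive search over the
    candidate set xs stands in for a constrained NLP solve."""
    f2_vals = [f2(x) for x in xs]
    lo, hi = min(f2_vals), max(f2_vals)
    front = []
    for i in range(n_points):
        eps = lo + (hi - lo) * i / (n_points - 1)          # constraint level
        feasible = [x for x in xs if f2(x) <= eps + 1e-12]
        if feasible:
            x_best = min(feasible, key=f1)                 # minimise f1 s.t. f2 <= eps
            front.append((f1(x_best), f2(x_best)))
    return front

# Toy conflicting objectives: f1 pulls x towards 0, f2 pulls x towards 2.
front = epsilon_constraint_front(lambda x: x * x,
                                 lambda x: (x - 2) ** 2,
                                 [i * 0.01 for i in range(401)])
```

Tightening ε trades f1 for f2 monotonically, which is exactly the trade-off curve a trajectory optimiser sweeps between, say, fuel burn and flight time.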

  14. Classification of osteoporosis by artificial neural network based on monarch butterfly optimisation algorithm.

    Science.gov (United States)

    Devikanniga, D; Joshua Samuel Raj, R

    2018-04-01

    Osteoporosis is a life-threatening disease that commonly affects women, mostly after menopause. It primarily causes mild bone fractures, which at an advanced stage can lead to death. Osteoporosis is diagnosed from bone mineral density (BMD) values obtained through various clinical methods applied to various skeletal regions. The main objective of the authors' work is to develop a hybrid classifier model that discriminates osteoporotic patients from healthy persons based on BMD values. In this Letter, the authors propose a monarch butterfly optimisation-based artificial neural network classifier that helps in earlier diagnosis and prevention of osteoporosis. The experiments were conducted using 10-fold cross-validation on two datasets, lumbar spine and femoral neck, and the results were compared with other similar hybrid approaches. The proposed method achieved accuracy, specificity and sensitivity of 97.9% ± 0.14, 98.33% ± 0.03 and 95.24% ± 0.08, respectively, for the lumbar spine dataset and 99.3% ± 0.16, 99.2% ± 0.13 and 100%, respectively, for the femoral neck dataset. Further, its performance was compared using receiver operating characteristic analysis and the Wilcoxon signed-rank test. The results showed that the proposed classifier is efficient and outperformed the other approaches in all cases.

  15. Real-time optimisation of the Hoa Binh reservoir, Vietnam

    DEFF Research Database (Denmark)

    Richaud, Bertrand; Madsen, Henrik; Rosbjerg, Dan

    2011-01-01

    Multi-purpose reservoirs often have to be managed according to conflicting objectives, which requires efficient tools for trading off the objectives. This paper proposes a multi-objective simulation-optimisation approach that couples off-line rule curve optimisation with on-line real-time optimisation. First, the simulation-optimisation framework is applied for optimising reservoir operating rules. Secondly, real-time and forecast information is used for on-line optimisation that focuses on short-term goals, such as flood control or hydropower generation, without compromising the deviation ... in the downstream part of the Red River, and at the same time to increase hydropower generation and to save water for the dry season. The real-time optimisation procedure further improves the efficiency of the reservoir operation and enhances the flexibility for the decision-making. Finally, the quality ...

  16. Optimisation of the Laser Cutting Process

    DEFF Research Database (Denmark)

    Dragsted, Birgitte; Olsen, Flemming Ove

    1996-01-01

    The problem in optimising the laser cutting process is outlined. Basic optimisation criteria and principles for adapting an optimisation method, the simplex method, are presented. The results of implementing a response function in the optimisation are discussed with respect to the quality as well...

  17. Optimisation in radiotherapy II: Programmed and inversion optimisation algorithms

    International Nuclear Information System (INIS)

    Ebert, M.

    1997-01-01

    This is the second article in a three-part examination of optimisation in radiotherapy. The previous article established the bases of optimisation in radiotherapy and the formulation of the optimisation problem. This paper outlines several algorithms that have been used in radiotherapy to search for the best irradiation strategy within the full set of possible strategies. Two principal classes of algorithm are considered - those associated with mathematical programming, which employ specific search techniques, linear programming type searches or artificial intelligence - and those which seek to perform a numerical inversion of the optimisation problem, finishing with deterministic iterative inversion. (author)

  18. Module detection in complex networks using integer optimisation

    Directory of Open Access Journals (Sweden)

    Tsoka Sophia

    2010-11-01

    Full Text Available Abstract Background The detection of modules or community structure is widely used to reveal the underlying properties of complex networks in biology, as well as physical and social sciences. Since the adoption of modularity as a measure of network topological properties, several methodologies for the discovery of community structure based on modularity maximisation have been developed. However, satisfactory partitions of large graphs with modest computational resources are particularly challenging due to the NP-hard nature of the related optimisation problem. Furthermore, it has been suggested that optimising the modularity metric can reach a resolution limit whereby the algorithm fails to detect communities smaller than a specific size in large networks. Results We present a novel solution approach to identify community structure in large complex networks and address resolution limitations in module detection. The proposed algorithm employs modularity to express network community structure and is based on mixed integer optimisation models. The solution procedure is extended through an iterative procedure to diminish effects that tend to agglomerate smaller modules (resolution limitations). Conclusions A comprehensive comparative analysis of methodologies for module detection based on modularity maximisation shows that our approach outperforms previously reported methods. Furthermore, in contrast to previous reports, we propose a strategy to handle resolution limitations in modularity maximisation. Overall, we illustrate ways to improve existing methodologies for community structure identification so as to increase their efficiency and applicability.
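Modularity, the objective maximised by the integer optimisation models above, scores a partition by the fraction of intra-community edges minus the fraction expected under random rewiring. A hedged pure-Python sketch of Newman's Q (the metric only, not the authors' mixed integer solution procedure):

```python
def modularity(edges, community):
    """Newman modularity Q of a partition of an undirected graph.
    edges: list of (u, v) pairs; community: dict node -> community id."""
    m = len(edges)
    degree, comm_degree = {}, {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    # Observed fraction of edges falling inside communities.
    inside = sum(1 for u, v in edges if community[u] == community[v]) / m
    # Expected fraction under the configuration (random-rewiring) model.
    for node, d in degree.items():
        c = community[node]
        comm_degree[c] = comm_degree.get(c, 0) + d
    expected = sum((d / (2 * m)) ** 2 for d in comm_degree.values())
    return inside - expected

# Two triangles joined by a single bridge edge: a clearly modular graph.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
q = modularity(edges, part)
```

Maximising Q over all partitions is the NP-hard problem the abstract refers to; the resolution limit arises because the expected-fraction term shrinks quadratically with total edge count, making small communities hard to separate in large graphs.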

  19. Numerical Analysis and Geometry Optimisation of Vertical Vane of Room Air-conditioner

    Directory of Open Access Journals (Sweden)

    Al-Obaidi Abdulkareem Sh. Mahdi

    2018-01-01

    Full Text Available Vertical vanes of room air-conditioners are used to control and direct cold air. This paper aims to study the vertical vane as one of the parameters that affect the efficiency of dissipating cold air to a given space. The vertical vane geometry is analysed and optimised for lower production cost using CFD. The optimised geometry of the vertical vane should have the same or increased efficiency of dissipating cold air and a lower mass compared to the existing original design. The existing original design of the vertical vane is simplified and analysed by using ANSYS Fluent. Efficiency of wind direction is defined as how accurately the airflow coming out of the vertical vane is directed. In order to calculate the efficiency of wind direction, 15° and 30° rotations of the vertical vane inside the room air-conditioner are simulated. The efficiency of wind direction for 15° rotation of the vertical vane is 57.81%, while that for 30° rotation is 47.54%. These results are used as the base reference for a parametric study. The parameters investigated for optimisation of the vertical vane are the length of the long span, the tip chord and the short span. The design with a 15% decrease in vane surface area at the tip chord is the best optimised design of the vertical vane because its efficiency of wind direction is the highest, at 60.32%.

  20. Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation

    Science.gov (United States)

    Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari

    2016-07-01

    In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiance. Due to the variations of maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for the given number of PV modules, the optimal size and operating condition of a PV/EL system are achieved. The approach can be applied for different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that for the given location and the PV system, the energy transfer efficiency of the PV/EL system can reach up to 97.83%.
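Particle swarm optimisation, the algorithm named above, moves a population of candidate solutions under attraction to each particle's personal best and the swarm's global best. A hedged one-dimensional sketch on a toy objective (the PV/EL sizing model itself is not reproduced; coefficients w, c1, c2 are conventional defaults, not the paper's settings):

```python
import random

def pso(f, lo, hi, n_particles=20, iters=100, seed=1):
    """Minimise f on [lo, hi] with a basic 1-D particle swarm (sketch)."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                  # personal best positions
    gbest = min(xs, key=f)         # global best position
    w, c1, c2 = 0.7, 1.5, 1.5      # inertia and attraction weights
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)   # clamp to bounds
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

# Toy objective: the minimum of (x - 3)^2 lies at x = 3.
best = pso(lambda x: (x - 3) ** 2, 0.0, 10.0)
```

Because PSO needs only objective evaluations, no gradients, it suits the nonlinear, non-smooth coupling of PV maximum power points and electrolyser operating curves described in the abstract.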

  1. Alternatives for optimisation of rumen fermentation in ruminants

    Directory of Open Access Journals (Sweden)

    T. Slavov

    2017-06-01

    Full Text Available Abstract. Proper knowledge of the variety of events occurring in the rumen makes possible their optimisation with respect to complete feed conversion and increased productive performance of ruminants. The inclusion of various dietary additives (supplements, biologically active substances, nutritional antibiotics, probiotics, enzymatic preparations, plant extracts etc.) has an effect on the intensity and specific pathway of fermentation, and thus on overall digestion and systemic metabolism. The optimisation of rumen digestion is a method with substantial potential for improving the efficiency of ruminant husbandry, increasing the quality of their produce and maintaining health.

  2. Optimizing the gear efficiency under consideration of thermal optimisation strategies and customer-specific load conditions; Optimierung des Getriebewirkungsgrads unter Beruecksichtigung thermischer Optimierungsstrategien und kundenspezifischer Lastkollektive

    Energy Technology Data Exchange (ETDEWEB)

    Inderwisch, Kathrien; Kuecuekay, Ferit [Technische Univ. Braunschweig (Germany). Inst. fuer Fahrzeugtechnik

    2012-11-01

    Nowadays, the automotive industry is paying more attention to improving transmission efficiency. Most research has concentrated on the development and optimisation of transmission actuators, shifting elements, bearings, lubricants or lightweight constructions. Due to the low load requirements, and the associated low efficiencies, of transmissions in driving cycles, transmissions cause energy losses which cannot be neglected. Two main strategies can be followed for the optimisation of transmission efficiency. First, the efficiency benefit of transmissions through optimisation of hardware components will be presented. The second possibility is an optimal thermal management, especially at low temperatures. Warming up the transmission oil or transmission components can increase the efficiency of transmissions significantly. Techniques like this become more important in the course of the electrification of drive trains and the accordingly decreased availability of heat. A simulation tool for the calculation and minimisation of power loss for manual and dual-clutch transmissions was developed at the Institute of Automotive Engineering and verified by measurements. The simulation tool calculates the total transmission efficiency as well as the losses of individual transmission components depending on various environmental conditions. In this paper, the results in terms of increasing the efficiency of transmissions by optimisation of hardware components will be presented. Furthermore, the effects of temperature distribution in the transmission as well as the potential of minimising losses at low temperatures through thermal management will be illustrated. (orig.)

  3. Optimisation of timetable-based, stochastic transit assignment models based on MSA

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker; Frederiksen, Rasmus Dyhr

    2006-01-01

    (CRM), such a large-scale transit assignment model was developed and estimated. The Stochastic User Equilibrium problem was solved by the Method of Successive Averages (MSA). However, the model suffered from very large calculation times. The paper focuses on how to optimise transit assignment models...
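The Method of Successive Averages named in that record blends, at iteration n, the current flows with an auxiliary all-or-nothing assignment using step size 1/n. A hedged toy sketch, two parallel routes with linear congestion costs rather than the paper's large-scale timetable-based model:

```python
def msa_two_routes(demand=10.0, free=(1.0, 2.0), iters=200):
    """Method of Successive Averages on a toy two-route assignment.
    Route cost grows linearly with its flow: c_i = free_i + flow_i."""
    flow = [demand / 2, demand / 2]
    for n in range(1, iters + 1):
        cost = [free[i] + flow[i] for i in range(2)]
        # All-or-nothing auxiliary assignment to the currently cheaper route.
        aux = [demand, 0.0] if cost[0] <= cost[1] else [0.0, demand]
        step = 1.0 / n
        flow = [flow[i] + step * (aux[i] - flow[i]) for i in range(2)]
    return flow

flow = msa_two_routes()
```

The fixed point equalises the two route costs (here at flows 5.5 and 4.5). The 1/n step guarantees convergence but makes it slow, which is consistent with the abstract's motivation for optimising calculation times in large-scale MSA-based assignment.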

  4. ExRET-Opt: An automated exergy/exergoeconomic simulation framework for building energy retrofit analysis and design optimisation

    International Nuclear Information System (INIS)

    García Kerdan, Iván; Raslan, Rokia; Ruyssevelt, Paul; Morillón Gálvez, David

    2017-01-01

    Highlights: • Development of a building retrofit-oriented exergoeconomic-based optimisation tool. • A new exergoeconomic cost-benefit indicator is developed for design comparison. • Thermodynamic and thermal comfort variables used as constraints and/or objectives. • Irreversibilities and exergetic cost for end-use processes are substantially reduced. • Robust methodology that should be pursued in everyday building retrofit practice. - Abstract: Energy simulation tools have a major role in the assessment of building energy retrofit (BER) measures. Exergoeconomic analysis and optimisation is common practice in sectors such as power generation and chemical processes, helping engineers to obtain more energy-efficient and cost-effective energy system designs. ExRET-Opt, a retrofit-oriented, modular dynamic simulation framework, has been developed by embedding a comprehensive exergy/exergoeconomic calculation method into a typical open-source building energy simulation tool (EnergyPlus). The aim of this paper is to show the decomposition of ExRET-Opt by presenting the modules, submodules and subroutines used for the framework’s development, as well as to verify the outputs with existing research data. In addition, the possibility to perform multi-objective optimisation analysis based on genetic algorithms combined with multi-criteria decision-making methods was included within the simulation framework. This addition could enable BER design teams to perform rapid exergy/exergoeconomic optimisation in order to find opportunities for thermodynamic improvements along the building’s active and passive energy systems. The enhanced simulation framework is tested using a primary school building as a case study. Results demonstrate that the proposed simulation framework provides users with thermodynamically efficient and cost-effective designs, even under tight thermodynamic and economic constraints, suggesting its use in everyday BER practice.

  5. Optimised Renormalisation Group Flows

    CERN Document Server

    Litim, Daniel F

    2001-01-01

    Exact renormalisation group (ERG) flows interpolate between a microscopic or classical theory and the corresponding macroscopic or quantum effective theory. For most problems of physical interest, the efficiency of the ERG is constrained due to unavoidable approximations. Approximate solutions of ERG flows depend spuriously on the regularisation scheme which is determined by a regulator function. This is similar to the spurious dependence on the ultraviolet regularisation known from perturbative QCD. Providing a good control over approximated ERG flows is at the root for reliable physical predictions. We explain why the convergence of approximate solutions towards the physical theory is optimised by appropriate choices of the regulator. We study specific optimised regulators for bosonic and fermionic fields and compare the optimised ERG flows with generic ones. This is done up to second order in the derivative expansion at both vanishing and non-vanishing temperature. An optimised flow for a ``proper-time ren...

  6. Layout Optimisation of Wave Energy Converter Arrays

    DEFF Research Database (Denmark)

    Ruiz, Pau Mercadé; Nava, Vincenzo; Topper, Mathew B. R.

    2017-01-01

    This paper proposes an optimisation strategy for the layout design of wave energy converter (WEC) arrays. Optimal layouts are sought so as to maximise the absorbed power given a minimum q-factor, the minimum distance between WECs, and an area of deployment. To guarantee an efficient optimisation......, a four-parameter layout description is proposed. Three different optimisation algorithms are further compared in terms of performance and computational cost. These are the covariance matrix adaptation evolution strategy (CMA), a genetic algorithm (GA) and the glowworm swarm optimisation (GSO) algorithm...

  7. Sequential projection pursuit for optimised vibration-based damage detection in an experimental wind turbine blade

    Science.gov (United States)

    Hoell, Simon; Omenzetter, Piotr

    2018-02-01

    To advance the concept of smart structures in large systems, such as wind turbines (WTs), it is desirable to be able to detect structural damage early while using minimal instrumentation. Data-driven vibration-based damage detection methods can be competitive in that respect because global vibrational responses encompass the entire structure. Multivariate damage sensitive features (DSFs) extracted from acceleration responses enable changes in a structure to be detected via statistical methods. However, even though such DSFs contain information about the structural state, they may not be optimised for the damage detection task. This paper addresses this shortcoming by exploring a DSF projection technique specialised for statistical structural damage detection. High-dimensional initial DSFs are projected onto a low-dimensional space for improved damage detection performance and a simultaneous reduction of the computational burden. The technique is based on sequential projection pursuit, where the projection vectors are optimised one by one using an advanced evolutionary strategy. The approach is applied to laboratory experiments with a small-scale WT blade under wind-like excitations. Autocorrelation function coefficients calculated from acceleration signals are employed as DSFs. The optimal numbers of projection vectors are identified with the help of a fast forward selection procedure. To benchmark the proposed method, selections of original DSFs as well as principal component analysis scores from these features are additionally investigated. The optimised DSFs are tested for damage detection on previously unseen data from the healthy state and a wide range of damage scenarios. It is demonstrated that using selected subsets of the initial and transformed DSFs improves damage detectability compared to the full set of features. Furthermore, superior results can be achieved by projecting autocorrelation coefficients onto just a single optimised projection vector.
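The autocorrelation function coefficients used as DSFs above are lagged self-correlations of the acceleration signal. A minimal sketch of the feature extraction step (the projection pursuit optimisation itself needs far more than a few lines and is not shown):

```python
def autocorr_coeffs(x, max_lag):
    """Normalised autocorrelation function coefficients of a signal,
    usable as damage-sensitive features (DSFs)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    # Coefficient at lag k: covariance of x[t] with x[t+k], over variance.
    return [sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / var
            for k in range(1, max_lag + 1)]

# A perfectly alternating toy signal: strong negative lag-1 correlation.
r = autocorr_coeffs([1, -1] * 4, max_lag=2)
```

Stacking such coefficients over many lags and sensors yields the high-dimensional DSF vector that the paper then compresses with optimised projection vectors.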

  8. Optimising the operation of hybrid coolers by means of efficient control equipment; Optimierung der Betriebsweise von Hybridkuehlern durch effiziente Steuerungstechnik

    Energy Technology Data Exchange (ETDEWEB)

    Odrich, T.; Koenig, H. [Jaeggi/Guentner (Schweiz) AG, Trimbach (Switzerland)

    2007-07-01

    Due to its functional principle and design, the hybrid dry cooler holds a high potential for saving water and energy. Purely convective heat discharge during dry operation in the case of a high annual rate of utilisation, evaporative cooling during the wetting cycle at peak load times or high ambient temperatures and infinitely adjustable fan speed in both operating modes permit a very substantial recooling performance at low operating costs and with little space requirement. However, the efficiency of hybrid dry coolers depends to a large degree on how intelligently the cooling functions are controlled and on the control strategy. The present article demonstrates that the control strategy contributes decisively to minimising water and energy consumption and costs. Besides describing the actual functions of a hybrid cooler control system it presents a control strategy for automatic lowering of the setpoint and hence optimisation of the refrigeration process. It discusses the option of operating multiple hybrid coolers by means of a hydraulic network and presents an optimised control concept for this purpose which is based on a master control unit. In conclusion the study shows that hybrid coolers need their own optimised control unit if maximum savings in energy and water are to be achieved.

  9. Dose optimisation in single plane interstitial brachytherapy

    DEFF Research Database (Denmark)

    Tanderup, Kari; Hellebust, Taran Paulsen; Honoré, Henriette Benedicte

    2006-01-01

    BACKGROUND AND PURPOSE: Brachytherapy dose distributions can be optimised by modulation of source dwell times. In this study dose optimisation in single planar interstitial implants was evaluated in order to quantify the potential benefit in patients. MATERIAL AND METHODS: In 14 patients, treated for recurrent rectal and cervical cancer, flexible catheters were sutured intra-operatively to the tumour bed in areas with compromised surgical margin. Both non-optimised, geometrically and graphically optimised CT-based dose plans were made. The overdose index ... on the regularity of the implant, such that the benefit of optimisation was larger for irregular implants. OI and HI correlated strongly with target volume, limiting the usability of these parameters for comparison of dose plans between patients. CONCLUSIONS: Dwell time optimisation significantly ...

  10. Optimisation of logistics processes of energy grass collection

    Science.gov (United States)

    Bányai, Tamás.

    2010-05-01

    The objective function of the optimisation is the maximisation of the profit, i.e. the maximisation of the difference between revenue and cost. The objective function trades off the income of the assigned transportation demands against the logistics costs. The constraints are the following: (1) the free capacity of the assigned transportation resource is more than the requested capacity of the transportation demand; (2) the calculated arrival time of the transportation resource at the harvesting place is not later than its requested arrival time; (3) the calculated arrival time of the transportation demand at the processing and production facility is not later than the requested arrival time; (4) one transportation demand is assigned to one transportation resource and one resource is assigned to one transportation demand. The decision variables of the optimisation problem are the set of scheduling variables and the assignment of resources to transportation demands. The evaluation parameters of the optimised system are the following: total costs of the collection process; utilisation of transportation resources and warehouses; and efficiency of production and/or processing facilities. The multidimensional heuristic optimisation method is based on a genetic algorithm, while the routing sequence is optimised by an ant colony algorithm: the optimal routes are calculated by the ant colony algorithm as a subroutine of the global optimisation method, and the optimal assignment is given by the genetic algorithm. One important part of the mathematical method is the sensibility analysis of the objective function, which shows the influence rate of the different input parameters. Acknowledgements This research was implemented within the frame of the project entitled "Development and operation of the Technology and Knowledge Transfer Centre of the University of Miskolc", with support by the European Union and co-funding of the European Social ...

  11. Finite Element Optimised Back Analysis of In Situ Stress Field and Stability Analysis of Shaft Wall in the Underground Gas Storage

    Directory of Open Access Journals (Sweden)

    Yifei Yan

    2016-01-01

    Full Text Available A novel optimised back analysis method is proposed in this paper. The in situ stress field of an underground gas storage (UGS) reservoir in a Turkish salt cavern is analysed by the basic theory of elastic mechanics. A finite element method is implemented to optimise and approximate the objective function by systematically adjusting boundary loads. The optimising calculation is performed based on a novel method that reduces the error between measurement and calculation as much as possible. Compared with common back analysis methods such as the regression method, the proposed method can further improve the calculation precision. By constructing a large circular geometric model, the effect of stress concentration is eliminated and a minimum difference between computed and measured stress can be guaranteed in the rectangular objective region. The efficiency of the proposed method is investigated and confirmed by its capability of restoring the in situ stress field, which agrees well with experimental results. The characteristics of the stress distribution of the chosen UGS wells are obtained based on the back analysis results, and by applying the corresponding fracture criterion, the shaft walls are proven safe.

  12. Energy balance of the optimised CVT-hybrid-driveline

    Energy Technology Data Exchange (ETDEWEB)

    Hoehn, Bernd-Robert; Pflaum, Hermann; Lechner, Claus [Forschungsstelle fuer Zahnraeder und Getriebebau, Technische Univ. Muenchen, Garching (Germany)

    2009-07-01

    Funded by the DFG (German Research Foundation) and industry partners such as GM Powertrain Europe, ZF and EPCOS, the Optimised CVT-Hybrid was developed at Technische Universitaet Muenchen in close collaboration with the industry and is currently under scientific investigation. Designed as a parallel hybrid vehicle, the Optimised CVT-Hybrid combines a series-production diesel engine with a small electric motor. The core element of the driveline is a two-range continuously variable transmission (i√i-transmission), which is based on a chain variator. By a special shifting process without interruption of traction force, the ratio range of the chain variator is used twice; thereby a wide transmission-ratio spread is achieved with low complexity. Thus the transmission provides a large pull-away ratio for the small electric motor and a fuel-efficient overdrive ratio for the IC engine. Instead of heavy and space-consuming accumulators, a small efficient package of double-layer capacitors (UltraCaps) is used for electric energy and power storage. The driveline management is done by an optimised vehicle controller. Within the scope of the research project, two prototype drivelines were manufactured. One driveline is integrated into an Opel Vectra Caravan and is available for investigations on the roller dynamometer and in actual road traffic. The second hybrid driveline is assembled at the powertrain test rig of the FZG for detailed analysis of system behaviour and fuel consumption. Based on measurements of standardised driving cycles, the system behaviour, fuel consumption and a detailed energy balance of the Optimised CVT-Hybrid are presented, and the fuel savings in comparison to the series-production vehicle are shown. (orig.)

  13. Preconditioned stochastic gradient descent optimisation for monomodal image registration

    NARCIS (Netherlands)

    Klein, S.; Staring, M.; Andersson, J.P.; Pluim, J.P.W.; Fichtinger, G.; Martel, A.; Peters, T.

    2011-01-01

    We present a stochastic optimisation method for intensity-based monomodal image registration. The method is based on a Robbins-Monro stochastic gradient descent method with adaptive step size estimation, and adds a preconditioning matrix. The derivation of the pre-conditioner is based on the

  14. Optimisation of synergistic biomass-degrading enzyme systems for efficient rice straw hydrolysis using an experimental mixture design.

    Science.gov (United States)

    Suwannarangsee, Surisa; Bunterngsook, Benjarat; Arnthong, Jantima; Paemanee, Atchara; Thamchaipenet, Arinthip; Eurwilaichitr, Lily; Laosiripojana, Navadol; Champreda, Verawat

    2012-09-01

    A synergistic enzyme system for the hydrolysis of alkali-pretreated rice straw was optimised based on the synergy of crude fungal enzyme extracts with a commercial cellulase (Celluclast™). Among 13 enzyme extracts, the enzyme preparation from Aspergillus aculeatus BCC 199 exhibited the highest level of synergy with Celluclast™. This synergy was based on the complementary cellulolytic and hemicellulolytic activities of the BCC 199 enzyme extract. A mixture design was used to optimise the ternary enzyme complex based on the synergistic enzyme mixture with Bacillus subtilis expansin. Using the full cubic model, the optimal formulation of the enzyme mixture was predicted to be Celluclast™:BCC 199:expansin = 41.4:37.0:21.6 (by percentage), which produced 769 mg reducing sugar/g biomass using 2.82 FPU/g of enzymes. This work demonstrated the use of a systematic approach for the design and optimisation of a synergistic enzyme mixture of fungal enzymes and expansin for lignocellulose degradation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Optimisation of preparation conditions and properties of phytosterol liposome-encapsulating nattokinase.

    Science.gov (United States)

    Dong, Xu-Yan; Kong, Fan-Pi; Yuan, Gang-You; Wei, Fang; Jiang, Mu-Lan; Li, Guang-Ming; Wang, Zhan; Zhao, Yuan-Di; Chen, Hong

    2012-01-01

    Phytosterol liposomes were prepared using the thin film method and used to encapsulate nattokinase (NK). In order to obtain a high encapsulation efficiency within the liposome, an orthogonal experiment (L9(3⁴)) was applied to optimise the preparation conditions. The molar ratio of lecithin to phytosterols, the NK activity and the mass ratio of mannite to lecithin were the main factors that influenced the encapsulation efficiency of the liposomes. Based on the results of a single-factor test, these three factors were chosen for this study. We determined the optimum preparation conditions to be as follows: a molar ratio of lecithin to phytosterol of 2:1, an NK activity of 2500 U mL⁻¹ and a mass ratio of mannite to lecithin of 3:1. Under these optimised conditions, an encapsulation efficiency of 65.25% was achieved, which agreed closely with the predicted result. Moreover, the zeta potential, size distribution and microstructure of the prepared liposomes were measured; the zeta potential was -51 ± 3 mV and the mean diameter was 194.1 nm. Scanning electron microscopy showed that the phytosterol liposomes were round and regular in shape with no aggregation.
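An L9(3⁴) orthogonal array assigns four three-level factors to just nine runs so that, for every pair of columns, each ordered pair of levels occurs exactly once. A hedged sketch of the standard array and a check of that balance property (the array is the textbook Taguchi L9; it is not taken from the paper):

```python
# Standard L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels each.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

def is_orthogonal(array):
    """Every pair of columns contains each of the 9 ordered level pairs
    exactly once (9 rows, 9 distinct pairs)."""
    n_cols = len(array[0])
    for a in range(n_cols):
        for b in range(a + 1, n_cols):
            pairs = {(row[a], row[b]) for row in array}
            if len(pairs) != 9:
                return False
    return True
```

A full factorial for three three-level factors would need 27 runs; the orthogonal array reaches a balanced estimate of each factor's main effect with 9, which is why it suits wet-lab optimisation like the liposome preparation above.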

  16. Optimising parallel R correlation matrix calculations on gene expression data using MapReduce.

    Science.gov (United States)

    Wang, Shicai; Pandis, Ioannis; Johnson, David; Emam, Ibrahim; Guitton, Florian; Oehmichen, Axel; Guo, Yike

    2014-11-05

    High-throughput molecular profiling data has been used to improve clinical decision making by stratifying subjects based on their molecular profiles. Unsupervised clustering algorithms can be used for stratification purposes. However, the current speed of the clustering algorithms cannot meet the requirements of large-scale molecular data due to the poor performance of the correlation matrix calculation. With high-throughput sequencing technologies promising to produce even larger datasets per subject, we expect the performance of the state-of-the-art statistical algorithms to be further impacted unless efforts towards optimisation are carried out. MapReduce is a widely used high-performance parallel framework that can solve the problem. In this paper, we evaluate the current parallel modes for correlation calculation methods and introduce an efficient data distribution and parallel calculation algorithm based on MapReduce to optimise the correlation calculation. We studied the performance of our algorithm using two gene expression benchmarks. In the micro-benchmark, our implementation using MapReduce, based on the R package RHIPE, demonstrates a 3.26-5.83 fold increase compared to the default Snowfall and a 1.56-1.64 fold increase compared to the basic RHIPE in the Euclidean, Pearson and Spearman correlations. Though vanilla R and the optimised Snowfall outperform our optimised RHIPE in the micro-benchmark, they do not scale well with the macro-benchmark. In the macro-benchmark the optimised RHIPE performs 2.03-16.56 times faster than vanilla R. Benefiting from the 3.30-5.13 times faster data preparation, the optimised RHIPE performs 1.22-1.71 times faster than the optimised Snowfall. Both the optimised RHIPE and the optimised Snowfall successfully perform the Kendall correlation with the TCGA dataset within 7 hours, more than 30 times faster than the estimated vanilla R time. The performance evaluation found that the new MapReduce algorithm and its
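The bottleneck described above is the all-pairs correlation matrix: for n expression profiles there are O(n²) pairwise correlations, and it is exactly this pair loop that a MapReduce scheme shards across workers. A hedged serial sketch of the computation being distributed (pure Python, not the authors' RHIPE/R implementation):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def correlation_matrix(rows):
    """All-pairs Pearson correlations. In a MapReduce setting the (i, j)
    index pairs of this O(n^2) loop are what gets sharded to mappers."""
    n = len(rows)
    return [[pearson(rows[i], rows[j]) for j in range(n)] for i in range(n)]

# Three toy expression profiles: row 1 is row 0 scaled, row 2 is reversed.
rows = [[1, 2, 3], [2, 4, 6], [3, 2, 1]]
m = correlation_matrix(rows)
```

Since the matrix is symmetric, a practical mapper would emit only the i < j pairs and mirror the result in the reducer, halving the work.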

  17. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    Science.gov (United States)

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

    The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising the posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods were assessed: an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison to the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, at only a quarter of the computational resources used by the smallest specified GA configuration. The GA solution set showed more inconsistency if the number of iterations or the population size was small, and more so for a complex prior flux covariance matrix. If the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA may outperform the IO. The first scenario considered an established network, where the optimisation was required to add an additional five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction.
These results suggest
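The greedy logic of an IO-style routine can be illustrated with a toy model. Everything below is invented for the sketch: the candidate sites, their uncertainty-reduction scores and the pairwise redundancy penalties stand in for the Bayesian posterior-uncertainty computation used in the paper.

```python
import itertools

def uncertainty(network, scores, synergy):
    """Toy posterior-uncertainty model: a base value minus each station's
    score, plus a penalty when two stations carry overlapping information."""
    u = 100.0
    for s in network:
        u -= scores[s]
    for a, b in itertools.combinations(network, 2):
        u += synergy.get(frozenset((a, b)), 0.0)  # redundant-coverage penalty
    return u

def incremental_optimisation(candidates, n_stations, scores, synergy):
    """IO routine: at each step, add the station giving the largest drop
    in the (toy) posterior uncertainty, never revisiting earlier choices."""
    network = []
    for _ in range(n_stations):
        best = min((c for c in candidates if c not in network),
                   key=lambda c: uncertainty(network + [c], scores, synergy))
        network.append(best)
    return network

# Five hypothetical candidate sites; A and B see nearly the same fluxes.
scores = {"A": 30, "B": 28, "C": 10, "D": 9, "E": 1}
synergy = {frozenset(("A", "B")): 25}
net = incremental_optimisation(list(scores), 2, scores, synergy)
```

A GA instead searches over whole subsets and can, in principle, escape cases where the greedy path is sub-optimal; in this toy instance the greedy pick happens to coincide with the exhaustive optimum.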

  18. Optimisation of Storage for Concentrated Solar Power Plants

    Directory of Open Access Journals (Sweden)

    Luigi Cirocco

    2014-12-01

    Full Text Available The proliferation of non-scheduled generation from renewable electrical energy sources such as concentrated solar power (CSP) presents a need for enabling scheduled generation by incorporating energy storage, either via directly coupled Thermal Energy Storage (TES) or via Electrical Storage Systems (ESS) distributed within the electrical network or grid. The challenges for 100% renewable energy generation are to minimise capitalisation cost and to maximise energy dispatch capacity. The aims of this review article are twofold: to review storage technologies and to survey the optimisation techniques most appropriate for determining the optimal operation and size of storage for a system operating in the Australian National Energy Market (NEM). Storage technologies are reviewed to establish indicative characterisations of energy density, conversion efficiency, charge/discharge rates and costings. Optimisation techniques are then partitioned by the time scales for which they are most appropriate: from “whole of year”, seasonal, monthly, weekly and daily averaging down to the NEM bid timing of five-minute dispatch bidding, averaged on the half hour as the trading settlement spot price. Finally, a selection of the most promising research directions and methods to determine the optimal operation and sizing of storage for renewables in the grid is presented.

  19. Optimisation of BPMN Business Models via Model Checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2013-01-01

    We present a framework for the optimisation of business processes modelled in the business process modelling language BPMN, which builds upon earlier work, where we developed a model checking based method for the analysis of BPMN models. We define a structure for expressing optimisation goals...... for synthesized BPMN components, based on probabilistic computation tree logic and real-valued reward structures of the BPMN model, allowing for the specification of complex quantitative goals. We here present a simple algorithm, inspired by concepts from evolutionary algorithms, which iteratively generates...

  20. A fuzzy-based particle swarm optimisation approach for task assignment in home healthcare

    Directory of Open Access Journals (Sweden)

    Mutingi, Michael

    2014-11-01

    Full Text Available Home healthcare (HHC organisations provide coordinated healthcare services to patients at their homes. Motivated by the ever-increasing need for home-based care, the assignment of tasks to available healthcare staff is a common and complex problem in homecare organisations. Designing high quality task schedules is critical for improving worker morale, job satisfaction, service efficiency, service quality, and competitiveness over the long term. The desire is to provide high quality task assignment schedules that satisfy the patient, the care worker, and the management. This translates to maximising schedule fairness in terms of workload assignments, avoiding task time window violation, and meeting management goals as much as possible. However, in practice, these desires are often subjective as they involve imprecise human perceptions. This paper develops a fuzzy multi-criteria particle swarm optimisation (FPSO approach for task assignment in a home healthcare setting in a fuzzy environment. The proposed approach uses a fuzzy evaluation method from a multi-criteria point of view. Results from illustrative computational experiments show that the approach is promising.

  1. Layout Optimisation of Wave Energy Converter Arrays

    Directory of Open Access Journals (Sweden)

    Pau Mercadé Ruiz

    2017-08-01

    Full Text Available This paper proposes an optimisation strategy for the layout design of wave energy converter (WEC arrays. Optimal layouts are sought so as to maximise the absorbed power given a minimum q-factor, the minimum distance between WECs, and an area of deployment. To guarantee an efficient optimisation, a four-parameter layout description is proposed. Three different optimisation algorithms are further compared in terms of performance and computational cost. These are the covariance matrix adaptation evolution strategy (CMA, a genetic algorithm (GA and the glowworm swarm optimisation (GSO algorithm. The results show slightly higher performances for the latter two algorithms; however, the first turns out to be significantly less computationally demanding.

  2. Optimal Optimisation in Chemometrics

    NARCIS (Netherlands)

    Hageman, J.A.

    2004-01-01

    The use of global optimisation methods is not straightforward, especially for the more difficult optimisation problems. Solutions have to be found for items such as the evaluation function, representation, step function and meta-parameters, before any useful results can be obtained. This thesis aims

  3. Optimisation of expansion liquefaction processes using mixed refrigerant N_2–CH_4

    International Nuclear Information System (INIS)

    Ding, He; Sun, Heng; He, Ming

    2016-01-01

    Highlights: • A refrigerant composition matching method for N_2–CH_4 expansion processes. • Efficiency improvements for propane pre-cooled N_2–CH_4 expansion processes. • The process shows good adaptability to varying natural gas compositions. - Abstract: An expansion process with a pre-cooling system is simulated and optimised by Aspen HYSYS and MATLAB™. Taking advantage of the higher specific refrigeration effect of methane and the easily reduced refrigeration temperature of nitrogen, the designed process adopts N_2–CH_4 as a mixed refrigerant. Based on the different thermodynamic properties and sensitivities of N_2 and CH_4 over the same heat transfer temperature range, this work proposes a novel method of matching refrigerant composition, aimed at single-stage or multi-stage series expansion liquefaction processes with pre-cooling systems. The method is applied successfully to a propane pre-cooled N_2–CH_4 expansion process, and the unit power consumption is reduced to 7.09 kWh/kmol, only 5.35% higher than the globally optimised solution obtained by a genetic algorithm. The method thus achieves low energy consumption and a high liquefaction rate, narrowing the gap in energy consumption between mixed refrigerant and expansion processes. Furthermore, the high exergy efficiency of the process indicates good adaptability to varying natural gas compositions.

  4. Process and Economic Optimisation of a Milk Processing Plant with Solar Thermal Energy

    DEFF Research Database (Denmark)

    Bühler, Fabian; Nguyen, Tuong-Van; Elmegaard, Brian

    2016-01-01

    This work investigates the integration of solar thermal systems for process energy use. A shift from fossil fuels to renewable energy could be beneficial both from environmental and economic perspectives, after the process itself has been optimised and efficiency measures have been implemented. Based on the case study of a dairy factory, where first a heat integration is performed to optimise the system, a model for solar thermal process integration is developed. The detailed model is based on annual hourly global direct and diffuse solar radiation, from which the radiation on a defined surface is calculated. Based on hourly process stream data from the dairy factory, the optimal streams for solar thermal process integration are found, with an optimal thermal storage tank volume. The last step consists of an economic optimisation of the problem to determine the optimal size...

  5. Particle swarm optimisation classical and quantum perspectives

    CERN Document Server

    Sun, Jun; Wu, Xiao-Jun

    2016-01-01

    Introduction: Optimisation Problems and Optimisation Methods; Random Search Techniques; Metaheuristic Methods; Swarm Intelligence. Particle Swarm Optimisation: Overview; Motivations; PSO Algorithm: Basic Concepts and the Procedure; Paradigm: How to Use PSO to Solve Optimisation Problems; Some Harder Examples. Some Variants of Particle Swarm Optimisation: Why Does the PSO Algorithm Need to Be Improved?; Inertia and Constriction-Acceleration Techniques for PSO; Local Best Model; Probabilistic Algorithms; Other Variants of PSO. Quantum-Behaved Particle Swarm Optimisation: Overview; Motivation: From Classical Dynamics to Quantum Mechanics; Quantum Model: Fundamentals of QPSO; QPSO Algorithm; Some Essential Applications; Some Variants of QPSO; Summary. Advanced Topics: Behaviour Analysis of Individual Particles; Convergence Analysis of the Algorithm; Time Complexity and Rate of Convergence; Parameter Selection and Performance; Summary. Industrial Applications: Inverse Problems for Partial Differential Equations; Inverse Problems for Non-Linear Dynamical Systems; Optimal De...
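The core PSO iteration outlined in the book's opening chapters can be sketched in a few lines. This is a generic illustration rather than code from the book; the inertia weight `w`, acceleration coefficients `c1`/`c2`, swarm size and the sphere test function are assumed demo values.

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Canonical gbest PSO with inertia weight: each velocity is pulled toward
    the particle's personal best and the swarm's global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:          # update personal and global bests
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)    # toy minimisation target
best, best_val = pso(sphere, dim=3)
```

The global-best topology shown here is the simplest of the variants the book surveys; the local-best and quantum-behaved (QPSO) versions change only the attractor each particle follows.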

  6. Suitability of monitoring methods for the optimisation of Radiological Protection in the case of internal exposure through inhalation

    International Nuclear Information System (INIS)

    Degrange, J.P.; Gibert, B.; Basire, D.

    2000-01-01

    The radiological protection system recommended by the International Commission on Radiological Protection (ICRP) for justified practices relies on the limitation and optimisation principles. The monitoring of internal exposure is most often based on the periodic assessment of individual exposure, essentially to ensure simple compliance with the annual dose limits. Optimisation of protection implies a realistic, sensitive and analytical assessment of individual and collective exposures in order to allow the identification of the main sources of exposure (main sources of contamination, most exposed operators, work activities contributing the most to the exposure) and the selection of the optimal protection options. The monitoring methods must therefore allow the realistic assessment of individual dose levels far lower than the annual limits, together with measurements as frequent as possible. The aim of this presentation is to discuss the ability of various monitoring methods (collective and individual air sampling, in vivo and in vitro bioassays) to fulfil those needs. This discussion is illustrated by the particular case of internal exposure to natural uranium compounds through inhalation. Firstly, the sensitivity and the degree to which each monitoring method is realistic are quantified and discussed on the basis of the application of the new ICRP dosimetric model, and their analytical capability for the optimisation of radiological protection is then indicated. Secondly, a case study is presented which shows the capability of individual air sampling techniques to analyse the exposure of the workers and the inadequacy of static air sampling to accurately estimate the exposures when contamination varies significantly over time and space in the workstations. As far as exposure to natural uranium compounds through inhalation is concerned, the study assessing the sensitivity, analytic ability and accuracy of the different measuring systems shows that

  7. Energy-optimised mini-refrigerators; Energieoptimierter Minikuehlschrank. Massnahmen zur Optimierung der Energieeffizienz von Minikuehlschraenken

    Energy Technology Data Exchange (ETDEWEB)

    Burri, A.

    2007-10-15

    This illustrated final report for the Swiss Federal Office of Energy (SFOE) takes a look at measures to be taken to optimise the energy-efficiency of small refrigerators. Such devices are typically to be found in hotel rooms and on boats as well as in caravans and motor homes. The majority of these mini-refrigerators use the absorption principle for cooling. Although less efficient than their compressor-driven counterparts, absorption refrigerators satisfy market requirements and customer wishes with regard to noiseless and maintenance-free operation. The report discusses how optimisation of the absorption principle could lead to energy savings in the long term. The operating principles and energy balances of such refrigerators are discussed and a market overview is presented. Energy consumption of the refrigerators, possible savings and energy costs are discussed. The advantages and disadvantages of various cooling systems are examined as are further possibilities for making savings such as the optimisation of sizing, installation methods, airflow factors and operating temperatures.

  8. Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology

    Science.gov (United States)

    Kumar, Amit; Soota, Tarun; Kumar, Jitendra

    2018-03-01

    Wire electric discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with the Grey relation analysis method has been proposed and used to optimise the machining parameters of WEDM. A face-centred cubic design is used for conducting experiments on high speed steel (HSS) M2 grade workpiece material. The regression model of significant factors such as pulse-on time, pulse-off time, peak current, and wire feed is considered for optimising the response variables: material removal rate (MRR), surface roughness and kerf width. The optimal combination of machining parameters was obtained using the Grey relational grade. ANOVA is applied to determine the significance of the input parameters for optimising the Grey relational grade.
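The Grey relational grade used to rank experimental runs can be computed as below. The WEDM response values are invented for illustration; only the normalisation, the grey relational coefficient with distinguishing coefficient ζ = 0.5, and the weighted grade follow the standard grey relational analysis recipe.

```python
def grey_relational_grade(runs, larger_better, weights=None, zeta=0.5):
    """Normalise each response to [0, 1], compute grey relational coefficients
    against the ideal (all-ones) sequence, then average into a single grade."""
    n_resp = len(runs[0])
    weights = weights or [1.0 / n_resp] * n_resp
    cols = list(zip(*runs))
    norm = []
    for j, col in enumerate(cols):
        lo, hi = min(col), max(col)
        if larger_better[j]:                       # larger-the-better response
            norm.append([(v - lo) / (hi - lo) for v in col])
        else:                                      # smaller-the-better response
            norm.append([(hi - v) / (hi - lo) for v in col])
    grades = []
    for i in range(len(runs)):
        coeffs = []
        for j in range(n_resp):
            delta = 1.0 - norm[j][i]               # deviation from the ideal
            # Delta_min = 0 and Delta_max = 1 after normalisation, so the
            # coefficient reduces to zeta / (delta + zeta).
            coeffs.append(zeta / (delta + zeta))
        grades.append(sum(w * c for w, c in zip(weights, coeffs)))
    return grades

# Hypothetical WEDM runs: (MRR, surface roughness, kerf width).
runs = [(10.2, 2.9, 0.32), (12.8, 3.4, 0.35), (9.1, 2.4, 0.30)]
grades = grey_relational_grade(runs, larger_better=[True, False, False])
best_run = max(range(len(runs)), key=grades.__getitem__)
```

The run with the highest grade is the best compromise across all three responses; ANOVA on the grades then attributes that compromise to the input factors.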

  9. Multiobjective optimisation of energy systems and building envelope retrofit in a residential community

    International Nuclear Information System (INIS)

    Wu, Raphael; Mavromatidis, Georgios; Orehounig, Kristina; Carmeliet, Jan

    2017-01-01

    Highlights: • Simultaneous optimisation of building envelope retrofit and energy systems. • Retrofit and energy system changes interact and should be considered simultaneously. • Case study quantifies cost-GHG emission tradeoffs for different retrofit options. - Abstract: In this paper, a method for a multi-objective and simultaneous optimisation of building energy systems and retrofit is presented. Tailored to be suitable for the diverse range of existing buildings in terms of age, size, and use, it combines dynamic energy demand simulation to explore individual retrofit scenarios with an energy hub optimisation. Implemented as an epsilon-constrained mixed integer linear program (MILP), the optimisation matches envelope retrofit with renewable and high-efficiency energy supply technologies such as biomass boilers, heat pumps, photovoltaic and solar thermal panels to minimise life cycle cost and greenhouse gas (GHG) emissions. Due to its multi-objective, integrated assessment of building transformation options and its ability to capture both individual building characteristics and trends within a neighbourhood, this method aims to provide developers and neighbourhood and town policy makers with the information necessary to make adequate decisions. Our method is deployed in a case study of typical residential buildings in the Swiss village of Zernez, simulating energy demands in EnergyPlus and solving the optimisation problem with CPLEX. Although common trade-offs in energy system and retrofit choice can be observed, optimisation results suggest that the diversity in building age and size leads to optimal retrofitting and building-system strategies that are specific to the different building categories. With this method, GHG emissions of the entire community can be reduced by up to 76% compared with current levels, at a cost increase of 3%, if an optimised solution is selected for each building category.
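The epsilon-constrained idea (minimise cost subject to a progressively tightened emissions cap) can be illustrated with brute-force enumeration instead of a MILP solver. The retrofit and supply options and their (cost, GHG) figures below are hypothetical.

```python
from itertools import product

# Hypothetical options: (capital cost, annual GHG effect) per choice.
retrofits = {"none": (0, 0), "moderate": (40, -15), "deep": (90, -25)}
systems = {"gas boiler": (20, 60), "heat pump": (60, 20), "biomass": (50, 30)}

def epsilon_constraint(eps_values):
    """For each emissions cap eps, pick the cheapest feasible combination;
    sweeping eps traces an approximation of the cost-GHG Pareto front."""
    front = []
    for eps in eps_values:
        feasible = []
        for (rc, rg), (sc, sg) in product(retrofits.values(), systems.values()):
            cost, ghg = rc + sc, rg + sg
            if ghg <= eps:                  # the epsilon constraint
                feasible.append((cost, ghg))
        if feasible:
            front.append((eps, min(feasible)))  # min() compares cost first
    return front

front = epsilon_constraint([60, 40, 20, 0])
```

A real implementation would hand each epsilon-constrained subproblem to a MILP solver such as CPLEX; the sweep structure is the same.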

  10. Study on highly efficient seismic data acquisition and processing methods based on sparsity constraint

    Science.gov (United States)

    Wang, H.; Chen, S.; Tao, C.; Qiu, L.

    2017-12-01

    High-density, high-fold and wide-azimuth seismic data acquisition methods are widely used to image increasingly complex exploration targets, but acquisition periods are growing longer and acquisition costs higher. We carry out a study of highly efficient seismic data acquisition and processing methods based on sparse representation theory (compressed sensing theory) and achieve some innovative results. First, the theoretical principles of highly efficient acquisition and processing are studied. We reveal a sparse representation theory based on the wave equation, study highly efficient seismic sampling methods, and present an optimised piecewise-random sampling method based on sparsity prior information. A reconstruction strategy with a sparsity constraint is then developed, and a two-step recovery approach combining a sparsity-promoting method with the hyperbolic Radon transform is put forward. These three aspects constitute the enhanced theory of highly efficient seismic data acquisition. Second, specific implementation strategies are studied according to this theory. We propose a method for designing highly efficient acquisition networks using the optimised piecewise-random sampling method; we propose two types of highly efficient seismic data acquisition based on (1) single sources and (2) blended (or simultaneous) sources; and we develop the corresponding reconstruction procedures to recover the seismic data on a regular acquisition network. The impact of blended shooting on the imaging result is discussed. Finally, we implement numerical tests based on the Marmousi model. The achieved results show: (1) the theoretical framework of highly efficient seismic data acquisition and processing
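A piecewise-random (jittered) sampling scheme of the kind referred to above can be sketched as follows; the segment-wise construction bounds the largest gap while keeping the randomness that sparsity-based reconstruction relies on. The grid size and decimation factor are arbitrary demo values.

```python
import random

def piecewise_random_sampling(n_total, n_keep, seed=0):
    """Jittered sampling: split a regular grid of n_total positions into
    n_keep equal pieces and draw one sample uniformly inside each piece,
    so randomness is preserved but the maximum gap stays bounded."""
    rng = random.Random(seed)
    size = n_total / n_keep
    picks = []
    for k in range(n_keep):
        lo = int(round(k * size))
        hi = max(lo + 1, int(round((k + 1) * size)))
        picks.append(rng.randrange(lo, min(hi, n_total)))
    return picks

# Keep 25 of 100 regular grid positions (4x decimation).
idx = piecewise_random_sampling(n_total=100, n_keep=25)
```

With one sample per segment of length s, neighbouring picks can never be more than 2s - 1 grid points apart, unlike fully random decimation, whose worst-case gaps are unbounded.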

  11. How to apply the Score-Function method to standard discrete event simulation tools in order to optimise a set of system parameters simultaneously: A Job-Shop example will be discussed

    DEFF Research Database (Denmark)

    Nielsen, Erland Hejn

    2000-01-01

    During the last 1-2 decades, simulation optimisation of discrete event dynamic systems (DEDS) has made considerable theoretical progress with respect to computational efficiency. The score-function (SF) method and the infinitesimal perturbation analysis (IPA) are two candidates belonging to this ...
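The score-function method estimates the gradient of a simulation output by weighting samples with the score, the derivative of the log-density with respect to the parameter. A minimal sketch, using an exponential distribution where the exact gradient is known, might look like this (the sample size and seed are arbitrary):

```python
import random

def sf_gradient(theta, n=200_000, seed=42):
    """Score-function estimator of d/dtheta E[X] for X ~ Exponential(rate=theta).
    The score is d/dtheta log p(x; theta) = 1/theta - x, so the estimator
    averages H(x) * score with H(x) = x."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(theta)       # sample from the current distribution
        total += x * (1.0 / theta - x)   # H(x) * score(x)
    return total / n

theta = 2.0
est = sf_gradient(theta)
exact = -1.0 / theta ** 2                # d/dtheta E[X] = d/dtheta (1/theta)
```

The appeal for discrete-event simulation is that the same sample path yields gradients for all parameters at once, which is what makes simultaneous optimisation of a parameter set feasible.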

  12. Crystal structure optimisation using an auxiliary equation of state

    Science.gov (United States)

    Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron

    2015-11-01

    Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy-volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other "beyond" density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.
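The idea of predicting the equilibrium volume from a handful of single-point energies can be illustrated with a quadratic stand-in for the equation of state (the paper itself fits proper equations of state such as Birch–Murnaghan). The synthetic E(V) data below are generated from a parabola with a known minimum:

```python
def quadratic_minimum(points):
    """Fit E(V) = a*V^2 + b*V + c exactly through three (V, E) samples and
    return the stationary volume V0 = -b / (2a)."""
    (x1, y1), (x2, y2), (x3, y3) = points
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    return -b / (2.0 * a)

# Synthetic energies from a parabola with its minimum at V = 40.0 (arb. units).
samples = [(v, 0.05 * (v - 40.0) ** 2 - 5.0) for v in (36.0, 40.5, 44.0)]
v0 = quadratic_minimum(samples)
```

In the real workflow each energy comes from an expensive electronic-structure calculation, so predicting V0 from very few points, then refining if needed, is exactly the saving the method targets.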

  13. Crystal structure optimisation using an auxiliary equation of state

    International Nuclear Information System (INIS)

    Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron

    2015-01-01

    Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy–volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other “beyond” density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1

  14. Crystal structure optimisation using an auxiliary equation of state

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T. [Centre for Sustainable Chemical Technologies and Department of Chemistry, University of Bath, Claverton Down, Bath BA2 7AY (United Kingdom)]; Walsh, Aron, E-mail: a.walsh@bath.ac.uk [Centre for Sustainable Chemical Technologies and Department of Chemistry, University of Bath, Claverton Down, Bath BA2 7AY (United Kingdom); Global E3 Institute and Department of Materials Science and Engineering, Yonsei University, Seoul 120-749 (Korea, Republic of)]

    2015-11-14

    Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy–volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other “beyond” density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.

  15. An Efficient Evolutionary Based Method For Image Segmentation

    OpenAIRE

    Aslanzadeh, Roohollah; Qazanfari, Kazem; Rahmati, Mohammad

    2017-01-01

    The goal of this paper is to present a new efficient image segmentation method based on evolutionary computation, which is a model inspired from human behavior. Based on this model, a four-layer process for image segmentation is proposed using the split/merge approach. In the first layer, an image is split into numerous regions using the watershed algorithm. In the second layer, a co-evolutionary process is applied to form centers of final segments by merging similar primary regions. In the t...

  16. Results of the 2010 IGSC Topical Session on Optimisation

    International Nuclear Information System (INIS)

    Bailey, Lucy

    2014-01-01

    The topical session reflected the diversity of optimisation goals that may be pursued in the framework of a geological disposal programme. While optimisation of protection, as defined by ICRP, is regarded as a process to keep the magnitude of individual doses, the number of people exposed, and the likelihood of potential exposure as low as reasonably achievable with economic and social factors being taken into account, optimisation can also be seen as a way of increasing the technical quality and robustness of the whole waste management process. An optimal solution means addressing safety requirements whilst balancing other factors such as the need to use resources efficiently, political and acceptance issues and any other boundary conditions imposed by society. It was noted that optimisation variables are not well defined and could be quite programme-specific. However, the discussion showed a lot of agreement and consensus of views. In particular, the summary noted general agreement on the following points: - Optimisation is a process that can be checked and reviewed and needs to be transparent. Optimisation is therefore a learning process, and as such can contribute to building confidence in the safety case by the demonstration of ongoing learning across the organisation. - Optimisation occurs at each stage of the disposal facility development programme, and is therefore forward looking rather than focussed on re-examining past decisions. Optimisation should be about the right way forward at each stage, making the best decisions to move forward from the present situation based on current knowledge and understanding. - Regulators need to be clear about their requirements and these requirements become constraints on the optimisation process, together with any societal constraints that may be applied in certain programmes. Optimisation therefore requires a permanent dialogue between regulator and implementer. - Once the safety objectives (dose/risk targets and other

  17. Zipf's Law of Abbreviation and the Principle of Least Effort: Language users optimise a miniature lexicon for efficient communication.

    Science.gov (United States)

    Kanwal, Jasmeen; Smith, Kenny; Culbertson, Jennifer; Kirby, Simon

    2017-08-01

    The linguist George Kingsley Zipf made a now classic observation about the relationship between a word's length and its frequency: the more frequent a word is, the shorter it tends to be. He claimed that this "Law of Abbreviation" is a universal structural property of language. The Law of Abbreviation has since been documented in a wide range of human languages, and extended to animal communication systems and even computer programming languages. Zipf hypothesised that this universal design feature arises as a result of individuals optimising form-meaning mappings under competing pressures to communicate accurately but also efficiently: his famous Principle of Least Effort. In this study, we use a miniature artificial language learning paradigm to provide direct experimental evidence for this explanatory hypothesis. We show that language users optimise form-meaning mappings only when pressures for accuracy and efficiency both operate during a communicative task, supporting Zipf's conjecture that the Principle of Least Effort can explain this universal feature of word length distributions. Copyright © 2017 Elsevier B.V. All rights reserved.
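The Law of Abbreviation can be checked on any corpus by correlating word frequency with word length; a negative correlation is its signature. The toy corpus below is invented for the sketch:

```python
import math
from collections import Counter

def pearson(xs, ys):
    """Pearson correlation coefficient computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A toy corpus: frequent function words are short, rare content words long.
text = ("the cat sat on the mat and the dog lay by the door while "
        "the children watched an extraordinary performance outside").split()
counts = Counter(text)
types = list(counts)
freqs = [counts[w] for w in types]
lengths = [len(w) for w in types]
r = pearson(freqs, lengths)       # expected to be negative under Zipf's law
```

Corpus studies of the law typically use rank correlations over much larger samples; the point here is only the direction of the relationship.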

  18. Evolutionary programming for neutron instrument optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Bentley, Phillip M. [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany)]. E-mail: phillip.bentley@hmi.de; Pappas, Catherine [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany); Habicht, Klaus [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany); Lelievre-Berna, Eddy [Institut Laue-Langevin, 6 rue Jules Horowitz, BP 156, 38042 Grenoble Cedex 9 (France)

    2006-11-15

    Virtual instruments based on Monte-Carlo techniques are now an integral part of novel instrumentation development, and the existing codes (McSTAS and Vitess) are extensively used to define and optimise novel instrumental concepts. Neutron spectrometers, however, involve a large number of parameters, and their optimisation is often a complex and tedious procedure. Artificial intelligence algorithms are proving increasingly useful in such situations. Here, we present an automatic, reliable and scalable numerical optimisation concept based on the canonical genetic algorithm (GA). The algorithm was used to optimise the 3D magnetic field profile of the NSE spectrometer SPAN at the HMI. We discuss the potential of the GA, which, combined with the existing Monte-Carlo codes (Vitess, McSTAS, etc.), leads to a very powerful tool for automated global optimisation of a general neutron scattering instrument, avoiding local optimum configurations.
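A canonical GA of the kind described follows the selection-crossover-mutation cycle. The sketch below optimises the toy "one-max" objective rather than a magnetic field profile, and all parameter values are illustrative:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=40, gens=60,
                      p_cross=0.9, p_mut=0.02, seed=3):
    """Canonical GA: binary encoding, tournament selection, one-point
    crossover and per-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return max(a, b, key=fitness)

    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < p_cross:             # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                child = [b ^ (rng.random() < p_mut) for b in child]  # mutate
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

# Toy objective: "one-max" counts set bits; the optimum is all ones.
best = genetic_algorithm(fitness=sum)
```

For an instrument, the bit string would encode coil currents or component positions and the fitness would be a Monte-Carlo figure of merit, which is why population-based search pays off: it tolerates noisy, gradient-free evaluations.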

  19. Evolutionary programming for neutron instrument optimisation

    International Nuclear Information System (INIS)

    Bentley, Phillip M.; Pappas, Catherine; Habicht, Klaus; Lelievre-Berna, Eddy

    2006-01-01

    Virtual instruments based on Monte-Carlo techniques are now an integral part of novel instrumentation development, and the existing codes (McSTAS and Vitess) are extensively used to define and optimise novel instrumental concepts. Neutron spectrometers, however, involve a large number of parameters, and their optimisation is often a complex and tedious procedure. Artificial intelligence algorithms are proving increasingly useful in such situations. Here, we present an automatic, reliable and scalable numerical optimisation concept based on the canonical genetic algorithm (GA). The algorithm was used to optimise the 3D magnetic field profile of the NSE spectrometer SPAN at the HMI. We discuss the potential of the GA, which, combined with the existing Monte-Carlo codes (Vitess, McSTAS, etc.), leads to a very powerful tool for automated global optimisation of a general neutron scattering instrument, avoiding local optimum configurations

  20. Development of the hard and soft constraints based optimisation model for unit sizing of the hybrid renewable energy system designed for microgrid applications

    Science.gov (United States)

    Sundaramoorthy, Kumaravel

    2017-02-01

    Hybrid energy system (HES)-based electricity generation has become an attractive solution for rural electrification. Economically feasible and technically reliable HESs rest on a sound optimisation stage. This article discusses an optimal unit-sizing model whose objective function minimises the total cost of the HES. Three typical rural sites from the southern part of India have been selected for the application of the developed optimisation methodology. Feasibility studies and sensitivity analysis on the optimal HES are discussed elaborately in this article. A comparison has been carried out with the Hybrid Optimization Model for Electric Renewable optimisation model for the three sites. The optimal HES is found to have a lower total net present rate and rate of energy than the existing method

  1. Optimisation of the PCR-invA primers for the detection of Salmonella ...

    African Journals Online (AJOL)

    A polymerase chain reaction (PCR)-based method for the detection of Salmonella species in water samples was optimised and evaluated for speed, specificity and sensitivity. Optimisation of the Mg2+ and primer concentrations and the cycling parameters increased the sensitivity and limit of detection of the PCR to 2.6 × 10^4 cfu/mℓ.

  2. A comparison of forward planning and optimised inverse planning

    International Nuclear Information System (INIS)

    Oldham, Mark; Neal, Anthony; Webb, Steve

    1995-01-01

    A radiotherapy treatment plan optimisation algorithm has been applied to 48 prostate plans and the results compared with those of an experienced human planner. Twelve patients were used in the study, and 3-, 4-, 6- and 8-field plans (with standard coplanar beam angles for each plan type) were optimised by both the human planner and the optimisation algorithm. The human planner 'optimised' the plans by conventional forward planning techniques. The optimisation algorithm was based on fast simulated annealing. 'Importance factors' assigned to different regions of the patient provide a method for controlling the algorithm, and the same values were found to give good results for almost all plans. The plans were compared on the basis of dose statistics, normal-tissue complication probability (NTCP) and tumour control probability (TCP). The results show that the optimisation algorithm yielded results at least as good as the human planner's for all plan types, and on the whole slightly better. A study of the beam weights chosen by the optimisation algorithm and the planner is presented. The optimisation algorithm showed greater variation in response to individual patient geometry. For simple (e.g. 3-field) plans it consistently achieved slightly higher TCP and lower NTCP values. For more complicated (e.g. 8-field) plans the optimisation also achieved slightly better results, generally with fewer beams. The optimisation time was always ≤5 minutes: a factor of up to 20 times faster than the human planner.
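The annealing loop behind such a planner can be sketched with a toy two-region, three-beam model. The dose-deposition matrix, prescription and importance factors below are invented; the cost is the importance-weighted squared deviation from the prescribed dose in each region.

```python
import math
import random

# Hypothetical 3-beam problem: DOSE[i][j] = dose to region j per unit weight of beam i.
DOSE = [[0.6, 0.3],
        [0.5, 0.4],
        [0.7, 0.2]]
PRESCRIBED = [2.0, 0.0]   # target dose: 2.0 to the tumour, 0.0 to normal tissue
IMPORTANCE = [1.0, 0.3]   # importance factors weight each region's error

def cost(weights):
    c = 0.0
    for j, (target, imp) in enumerate(zip(PRESCRIBED, IMPORTANCE)):
        dose_j = sum(w * DOSE[i][j] for i, w in enumerate(weights))
        c += imp * (dose_j - target) ** 2
    return c

def anneal(iters=20000, t0=1.0, seed=7):
    """Simulated annealing over non-negative beam weights with 1/k cooling."""
    rng = random.Random(seed)
    w = [1.0, 1.0, 1.0]
    best, best_cost = w[:], cost(w)
    for k in range(iters):
        t = t0 / (1 + k)                               # fast cooling schedule
        cand = [max(0.0, wi + rng.gauss(0, 0.1)) for wi in w]
        d = cost(cand) - cost(w)
        if d < 0 or rng.random() < math.exp(-d / max(t, 1e-9)):
            w = cand
            if cost(w) < best_cost:
                best, best_cost = w[:], cost(w)
    return best, best_cost

weights, final_cost = anneal()
```

Raising the normal-tissue importance factor steers the search toward plans that spare that region at the expense of tumour-dose conformity, which is exactly the control knob the abstract describes.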

  3. Reduction of the environmental effects of civil aircraft through multi-objective flight plan optimisation

    International Nuclear Information System (INIS)

    Lee, D S; Gonzalez, L F; Walker, R; Periaux, J; Onate, E

    2010-01-01

    With rising environmental concern, the reduction of critical aircraft emissions, including carbon dioxide (CO 2 ) and nitrogen oxides (NO x ), is one of the most important aeronautical problems. Many approaches are possible, such as designing a new wing/aircraft shape or a new, more efficient engine. This paper instead provides a set of acceptable flight plans as a first step, avoiding the need to replace current aircraft. The paper investigates a green aircraft design optimisation in terms of aircraft range, mission fuel weight (CO 2 ) and NO x using advanced Evolutionary Algorithms coupled to flight optimisation system software. Two multi-objective design optimisations are conducted to find the best set of flight plans for current aircraft, considering discretised altitudes and Mach numbers without redesigning the aircraft shape or engine type. The objectives of the first optimisation are to maximise aircraft range while minimising NO x at constant mission fuel weight. The second optimisation minimises mission fuel weight and NO x at fixed aircraft range. Numerical results show that the method is able to capture a set of useful trade-offs that reduce NO x and CO 2 (minimum mission fuel weight).

  4. Computational intelligence-based polymerase chain reaction primer selection based on a novel teaching-learning-based optimisation.

    Science.gov (United States)

    Cheng, Yu-Huei

    2014-12-01

    Specific primers play an important role in polymerase chain reaction (PCR) experiments, and it is therefore essential to find specific primers of outstanding quality. Unfortunately, many PCR constraints must be inspected simultaneously, which makes specific primer selection difficult and time-consuming. This paper introduces a novel computational intelligence-based method, Teaching-Learning-Based Optimisation (TLBO), to select specific and feasible primers. PCR product lengths of 150-300 bp and 500-800 bp were specified, with three melting temperature formulae: Wallace's formula, Bolton and McCarthy's formula and SantaLucia's formula. The authors calculate the optimal frequency to estimate the quality of primer selection based on a total of 500 runs for 50 random nucleotide sequences of 'Homo species' retrieved from the National Center for Biotechnology Information. The method was then compared fairly with the genetic algorithm (GA) and the memetic algorithm (MA) for primer selection in the literature. The results show that the method easily found suitable primers satisfying the set primer constraints and performed better than the GA and the MA. Furthermore, the method was also compared with the commonly used Primer3 in terms of method type, primer presentation, parameter settings, speed and memory usage. In conclusion, it is an interesting primer selection method and a valuable tool for automatic high-throughput analysis. In the future, the primers should be validated carefully in the wet lab to increase the reliability of the method.
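TLBO itself is well documented: each iteration has a teacher phase (pull learners towards the current best, away from the class mean) and a learner phase (pairwise learning between random learners). A compact sketch on a generic continuous test problem is shown below; the paper's primer-selection encoding and PCR constraint handling are not reproduced here:

```python
import random

def tlbo_minimise(f, bounds, pop_size=20, iters=100, seed=0):
    """Teaching-Learning-Based Optimisation sketch (minimisation)."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    pop = [clip([rng.uniform(lo, hi) for lo, hi in bounds]) for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        # Teacher phase: move towards the best learner, away from the class mean
        teacher = pop[min(range(pop_size), key=lambda i: fit[i])]
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i in range(pop_size):
            tf = rng.choice((1, 2))  # teaching factor, randomly 1 or 2
            cand = clip([pop[i][d] + rng.random() * (teacher[d] - tf * mean[d])
                         for d in range(dim)])
            fc = f(cand)
            if fc < fit[i]:  # greedy acceptance
                pop[i], fit[i] = cand, fc
        # Learner phase: learn from a randomly chosen peer
        for i in range(pop_size):
            j = rng.randrange(pop_size)
            if j == i:
                continue
            sign = 1.0 if fit[i] < fit[j] else -1.0
            cand = clip([pop[i][d] + sign * rng.random() * (pop[i][d] - pop[j][d])
                         for d in range(dim)])
            fc = f(cand)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

For primer selection, `f` would instead score a candidate primer pair against the melting-temperature and product-length constraints listed in the abstract.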

  5. An exergy-based multi-objective optimisation model for energy retrofit strategies in non-domestic buildings

    International Nuclear Information System (INIS)

    García Kerdan, Iván; Raslan, Rokia; Ruyssevelt, Paul

    2016-01-01

    While the building sector has a significant thermodynamic improvement potential, exergy analysis has been shown to provide new insight for the optimisation of building energy systems. This paper presents an exergy-based multi-objective optimisation tool that aims to assess the impact of a diverse range of retrofit measures with a focus on non-domestic buildings. EnergyPlus was used as a dynamic calculation engine for first-law analysis, while a Python add-on was developed to link dynamic exergy analysis and a Genetic Algorithm optimisation process with the aforementioned software. Two UK archetype case studies (an office and a primary school) were used to test the feasibility of the proposed framework. Different combinations of measures based on retrofitting the envelope insulation levels and applying different HVAC configurations were assessed. The objective functions in this study are annual energy use, occupants' thermal comfort and total building exergy destruction. A large range of optimal solutions was achieved, highlighting the framework's capabilities. The model achieved improvements of 53% in annual energy use, 51% in exergy destruction and 66% in thermal comfort for the school building, and 50%, 33% and 80%, respectively, for the office building. This approach can be extended by using exergoeconomic optimisation. - Highlights: • Integration of dynamic exergy analysis into a retrofit-oriented simulation tool. • Two UK non-domestic building archetypes are used as case studies. • The model delivers non-dominated solutions based on energy, exergy and comfort. • Exergy destructions of ERMs are optimised using GA algorithms. • Strengths and limitations of the proposed exergy-based framework are discussed.

  6. Methods of Optimisation of the Structure of the Dealing Bank with a Limited Base of Counter-Agents

    Directory of Open Access Journals (Sweden)

    Novak Sergіy M.

    2013-11-01

    Full Text Available The article considers methods for assessing the optimal parameters of a dealing bank service with a limited base of counter-agents. The methods are based on a mathematical model of the micro-structure of the inter-bank currency market. The key parameters of the infrastructure of the dealing service within the framework of the model are: the number of authorised traders, the contingent of counter-agents, the quotation policy, the main parameters of the currency market (spread and volatility of quotations) and the resulting indicators of the efficiency of the dealing service (profit and probability of break-even operation). The methods allow identification of the optimal parameters of the infrastructure of the dealing bank service based on indicators of the dynamics of currency risks and the market environment of the bank. On the basis of the developed mathematical model, the article develops methods for planning calculations of the parameters of the infrastructure of the dealing bank service that are required to ensure a necessary level of efficiency under given currency market parameters. Application of these methods makes it possible to assess indicators of the operation of the bank's front office depending on its scale.

  7. An efficient digital signal processing method for RRNS-based DS-CDMA systems

    Directory of Open Access Journals (Sweden)

    Peter Olsovsky

    2017-09-01

    Full Text Available This paper deals with an efficient method for achieving low power and high speed in advanced Direct-Sequence Code Division Multiple-Access (DS-CDMA) wireless communication systems based on the Residue Number System (RNS). A modified algorithm for multiuser DS-CDMA signal generation in MATLAB is proposed and investigated. The most important characteristics of the generated PN code are also presented. Subsequently, a DS-CDMA system based on the RNS combined with the so-called Redundant Residue Number System (RRNS) is proposed. An enhanced method using a spectrally efficient 8-PSK data modulation scheme to improve the bandwidth efficiency of RRNS-based DS-CDMA systems is presented. By using the C-measure (complexity measure) of the error detection function, it is possible to estimate the size of the circuit. The error detection function in RRNSs can be efficiently implemented by LookUp Table (LUT) cascades.
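The RNS/RRNS machinery referred to above is standard: an integer is carried as residues modulo pairwise-coprime moduli, reconstructed via the Chinese Remainder Theorem, and redundant moduli allow a corrupted residue to be detected when the decoded value falls outside the legitimate range. A small sketch with example moduli (the paper's LUT-cascade hardware implementation is not modelled here):

```python
from math import prod

def rns_encode(x, moduli):
    """Represent an integer by its residues w.r.t. pairwise-coprime moduli."""
    return [x % m for m in moduli]

def rns_decode(residues, moduli):
    """Chinese Remainder Theorem reconstruction."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse of Mi mod m (Python 3.8+)
    return x % M

def rrns_detect_error(residues, moduli, n_info):
    """Redundant RNS check: legal values fit within the product of the first
    n_info (information) moduli; a decoded value outside that range flags
    a residue error."""
    legal_range = prod(moduli[:n_info])
    return rns_decode(residues, moduli) >= legal_range
```

With moduli (3, 5, 7, 11) and two information moduli, any value below 15 round-trips cleanly, while flipping a single residue pushes the decoded value outside the legal range and is detected.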

  8. Electricity storages - optimised operation based on spot market prices; Stromspeicher. Optimierte Fahrweise auf Basis der Spotmarktpreise

    Energy Technology Data Exchange (ETDEWEB)

    Bernhard, Dominik; Roon, Serafin von [FfE Forschungsstelle fuer Energiewirtschaft e.V., Muenchen (Germany)

    2010-06-15

    With its integrated energy and climate package the last federal government set itself ambitious goals for the improvement of energy efficiency and the growth of renewable energy production. These goals were confirmed by the new government in its coalition agreement. However, they can only be realised if the supply of electricity from fluctuating renewable sources can be made to coincide with electricity demand. Electricity storage is therefore an indispensable component of the future energy supply system. This article studies the optimised operation of an electricity storage facility based on spot market prices and the influence of wind power production up to the year 2020.
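As a rough illustration of spot-price-driven storage operation, the sketch below pairs the cheapest charge hours with the most expensive discharge hours for as long as the price spread still covers the round-trip losses. It is a greedy upper-bound heuristic under assumed capacity and efficiency figures, not the optimisation model used in the study:

```python
def storage_schedule(prices, capacity_h, efficiency):
    """Greedy spot-price arbitrage sketch for a 1 MW storage.
    Returns (schedule, profit), with schedule[t] in {-1, 0, +1}
    (charge / idle / discharge). Intertemporal ordering constraints are
    ignored, so this bounds the profit rather than giving a dispatch plan."""
    order = sorted(range(len(prices)), key=lambda t: prices[t])
    schedule = [0] * len(prices)
    profit = 0.0
    for k in range(min(capacity_h, len(prices) // 2)):
        buy, sell = order[k], order[-1 - k]       # cheapest vs. priciest hour
        gain = efficiency * prices[sell] - prices[buy]
        if gain <= 0:                             # spread no longer beats losses
            break
        schedule[buy], schedule[sell] = -1, 1
        profit += gain
    return schedule, profit
```

For example, with hourly prices [20, 60, 10, 80] EUR/MWh, two hours of capacity and 75% round-trip efficiency, the heuristic charges in the two cheap hours and discharges in the two expensive ones.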

  9. Modelling study, efficiency analysis and optimisation of large-scale Adiabatic Compressed Air Energy Storage systems with low-temperature thermal storage

    International Nuclear Information System (INIS)

    Luo, Xing; Wang, Jihong; Krupke, Christopher; Wang, Yue; Sheng, Yong; Li, Jian; Xu, Yujie; Wang, Dan; Miao, Shihong; Chen, Haisheng

    2016-01-01

    Highlights: • The paper presents an A-CAES system thermodynamic model with low temperature thermal energy storage integration. • The initial parameter value ranges for A-CAES system simulation are identified from the study of a CAES plant in operation. • The strategies of system efficiency improvement are investigated via a parametric study with a sensitivity analysis. • Various system configurations are discussed for analysing the efficiency improvement potentials. - Abstract: The key feature of Adiabatic Compressed Air Energy Storage (A-CAES) is the reuse of the heat generated from the air compression process at the stage of air expansion. This increases the complexity of the whole system since the heat exchange and thermal storage units must have the capacities and performance to match the air compression/expansion units. It thus creates a strong demand for a whole-system modelling and simulation tool for A-CAES system optimisation. The paper presents a new whole-system mathematical model for A-CAES with a simulation implementation, developed with a view to lowering the capital cost of the system. The paper then focuses on the study of system efficiency improvement strategies via parametric analysis and system structure optimisation. The paper investigates how the system efficiency is affected by the system component performance and parameters. From the study, the key parameters are identified which have a dominant influence in improving the system efficiency. The study is extended to optimal system configuration and recommendations are made for achieving higher efficiency, which provides useful guidance for A-CAES system design.

  10. Optimisation and symmetry in experimental radiation physics

    International Nuclear Information System (INIS)

    Ghose, A.

    1988-01-01

    The present monograph is concerned with the optimisation of geometric factors in radiation physics experiments. The discussions are essentially confined to those systems in which optimisation is equivalent to symmetrical configurations of the measurement systems. They include measurements of interaction cross sections of diverse types, determination of polarisations, development of detectors with almost ideal characteristics, production of radiations with continuously variable energies and development of high-efficiency spectrometers. The monograph is intended for use by experimental physicists investigating primary interactions of radiations with matter and associated technologies. We have illustrated the various optimisation procedures by considering the cases of the so-called '14 MeV' or d-t neutrons and gamma rays with energies less than 3 MeV. Developments in fusion technology are critically dependent on the availability of accurate cross sections of nuclei for fast neutrons of energies at least as high as those of d-t neutrons. In this monograph we have discussed various techniques which can be used to improve the accuracy of such measurements and have also presented a method for generating almost monoenergetic neutrons in the 8 MeV to 13 MeV energy range, which can be used to measure cross sections in this sparingly investigated region.

  11. HVAC system optimisation-in-building section

    Energy Technology Data Exchange (ETDEWEB)

    Lu, L.; Cai, W.; Xie, L.; Li, S.; Soh, Y.C. [School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore (Singapore)

    2004-07-01

    This paper presents a practical method to optimise the in-building section of centralised Heating, Ventilation and Air-Conditioning (HVAC) systems, which consists of indoor air loops and chilled water loops. First, through component characteristic analysis, mathematical models associated with cooling loads and energy consumption for heat exchangers and energy-consuming devices are established. By considering the variation of the cooling load of each end user, an adaptive neuro-fuzzy inference system (ANFIS) is employed to model the duct and pipe networks and obtain optimal differential pressure (DP) set points based on limited sensor information. A mixed-integer nonlinear constrained optimisation of system energy is formulated and solved by a modified genetic algorithm. The main feature of the paper is a systematic approach that optimises the overall system energy consumption rather than that of individual components. A simulation study for a typical centralised HVAC system is provided to compare the proposed optimisation method with traditional ones. The results show that the proposed method indeed improves the system performance significantly. (author)

  12. Topology optimisation of natural convection problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe

    2014-01-01

    This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach...

  13. A systematic procedure to optimise dose and image quality for the measurement of inter-vertebral angles from lateral spinal projections using Cobb and superimposition methods.

    Science.gov (United States)

    Al Qaroot, Bashar; Hogg, Peter; Twiste, Martin; Howard, David

    2014-01-01

    Patients with vertebral column deformations are exposed to high risks associated with ionising radiation exposure. Risks are further increased by the serial X-ray images needed to measure and assess spinal deformation using Cobb or superimposition methods. Optimising such X-ray practice, by reducing dose whilst maintaining image quality, is therefore a necessity. With a specific focus on lateral thoraco-lumbar images for Cobb and superimposition measurements, this paper outlines a systematic procedure for the optimisation of X-ray practice. Optimisation was conducted on the basis of suitable image quality from minimal dose. Image quality was appraised using a visual analogue rating scale, and Monte Carlo modelling was used for dose estimation. The optimised X-ray practice was identified by imaging healthy, normal-weight, male adult living human volunteers. The optimised practice consisted of: anode towards the head, broad focus, no OID or grid, 80 kVp, 32 mAs and 130 cm SID. Images of suitable quality for laterally assessing spinal conditions using Cobb or superimposition measurements were produced from an effective dose of 0.05 mSv, which is 83% less than the average effective dose used in the UK for lateral thoracic/lumbar exposures. This optimisation procedure can be adopted and used for the optimisation of other radiographic techniques.

  14. Optimising Shovel-Truck Fuel Consumption using Stochastic ...

    African Journals Online (AJOL)

    Optimising the fuel consumption and truck waiting time can result in significant fuel savings. The paper demonstrates that stochastic simulation is an effective tool for optimising the utilisation of fossil-based fuels in mining and related industries. Keywords: Stochastic, Simulation Modelling, Mining, Optimisation, Shovel-Truck ...

  15. A constriction factor based particle swarm optimisation algorithm to solve the economic dispatch problem including losses

    Energy Technology Data Exchange (ETDEWEB)

    Young, Steven; Montakhab, Mohammad; Nouri, Hassan

    2011-07-15

    Economic dispatch (ED) is one of the most important problems to be solved in power generation, as fractional percentage fuel reductions represent significant cost savings. ED seeks to optimise the power generated by each generating unit in a system in order to find the minimum operating cost at a required load demand, whilst ensuring both equality and inequality constraints are met. For the process of optimisation, a model must be created for each generating unit. Particle swarm optimisation is an evolutionary computation technique and one of the most powerful methods for solving global optimisation problems. The aim of this paper is to add a constriction factor to the particle swarm optimisation algorithm (CFBPSO). Results show that the algorithm is very good at solving the ED problem; for CFBPSO to be usable in a practical environment, valve-point effects and transmission losses should be included in future work.

  16. Intelligent Internet-based information system optimises diabetes mellitus management in communities.

    Science.gov (United States)

    Wei, Xuejuan; Wu, Hao; Cui, Shuqi; Ge, Caiying; Wang, Li; Jia, Hongyan; Liang, Wannian

    2018-05-01

    To evaluate the effect of an intelligent Internet-based information system on optimising the management of patients diagnosed with type 2 diabetes mellitus (T2DM). In 2015, a T2DM information system was introduced to optimise the management of T2DM patients for 1 year in the Fangzhuang community of Beijing, China. A total of 602 T2DM patients registered in the health service centre of the Fangzhuang community were enrolled based on an isometric sampling technique. The data from 587 patients were used in the final analysis. The intervention effect was subsequently assessed by statistically comparing multiple parameters, such as the prevalence of glycaemic control, standard health management and annual outpatient consultation visits per person, before and after the implementation of the T2DM information system. In 2015, a total of 1668 T2DM patients were newly registered in the Fangzhuang community. The glycaemic control rate was 37.65% in 2014 and rose significantly to 62.35% in 2015. After the introduction of the information system, the rate of standard health management increased significantly from 48.04% to 85.01%. The information system optimised the management of T2DM patients in the Fangzhuang community and decreased outpatient numbers in both community and general hospitals, playing a positive role in assisting T2DM patients and their healthcare providers to better manage this chronic illness.

  17. Optimisation of liquid scintillation counting conditions to determine low activity levels of tritium and radiostrontium in aqueous solution

    Energy Technology Data Exchange (ETDEWEB)

    Rauret, Gemma; Mestres, J.S.; Ribera, Merce; Rajadel, Pilar (Barcelona Univ. (Spain). Dept. de Quimica Analitica)

    1990-08-01

    An optimisation of the counting conditions for the measurement of aqueous solutions of tritium or radiostrontium using Insta-Gel II as scintillator is presented. The variables optimised were the counting window, the ratio of the mass of sample to the mass of scintillator, and the total volume of the counting mixture. An optimisation function which takes into account each of these variables, as well as the background and the counting efficiency, is proposed. The conditions established allow the lowest possible detection limit to be reached. For tritium, this value was compared with that obtained when the standard method for water analysis was applied. (author).

  18. Optimised Design and Analysis of All-Optical Networks

    DEFF Research Database (Denmark)

    Glenstrup, Arne John

    2002-01-01

    This PhD thesis presents a suite of methods for optimising design and for analysing blocking probabilities of all-optical networks. It thus contributes methodical knowledge to the field of computer-assisted planning of optical networks. A two-stage greenfield optical network design optimiser is developed, based on shortest-path algorithms and a comparatively new metaheuristic called simulated allocation. It is able to handle design of all-optical mesh networks with optical cross-connects, considers duct as well as fibre and node costs, and can also design protected networks. The method is assessed through various experiments and is shown to produce good results and to be able to scale up to networks of realistic sizes. A novel method, subpath wavelength grouping, is presented for routing connections in a multigranular all-optical network where several wavelengths can be grouped and switched at band and fibre...

  19. Optimisation of electrical system for offshore wind farms via genetic algorithm

    DEFF Research Database (Denmark)

    Chen, Zhe; Zhao, Menghua; Blaabjerg, Frede

    2009-01-01

    An optimisation platform based on a genetic algorithm (GA) is presented, where the main components of a wind farm and key technical specifications are used as input parameters and the electrical system design of the wind farm is optimised in terms of both production cost and system reliability. The power losses, wind power production, initial investment and maintenance costs are considered in the production cost. The availability of components and network redundancy are included in the reliability evaluation. The method of coding an electrical system to a binary string, which is processed by the GA, is developed. Different GA techniques are investigated based on a real example offshore wind farm. This optimisation platform has been demonstrated as a powerful tool for offshore wind farm design and evaluation.
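The core idea of coding an electrical system design to a binary string can be illustrated with a toy catalogue: each subsystem gets a 2-bit gene selecting one of four component options, and fitness combines investment cost with an expected-outage cost computed from series availabilities. All figures below are hypothetical, and the real platform's network model is far richer:

```python
import random

# Hypothetical component catalogue: (investment cost, availability) per option.
# One 2-bit gene per subsystem -> a 6-bit design string.
OPTIONS = {
    "cable":       [(1.0, 0.95), (1.4, 0.97), (1.9, 0.99), (2.6, 0.995)],
    "transformer": [(2.0, 0.96), (2.8, 0.98), (3.5, 0.99), (4.5, 0.997)],
    "switchgear":  [(0.5, 0.97), (0.8, 0.98), (1.1, 0.99), (1.5, 0.998)],
}
SUBSYSTEMS = list(OPTIONS)
OUTAGE_COST = 40.0  # assumed cost weight on expected unavailability

def decode(bits):
    """Map each 2-bit gene to its catalogue option."""
    return [OPTIONS[s][2 * bits[2 * i] + bits[2 * i + 1]]
            for i, s in enumerate(SUBSYSTEMS)]

def fitness(bits):
    """Lower is better: investment plus expected outage cost (series system)."""
    invest = sum(cost for cost, _ in decode(bits))
    availability = 1.0
    for _, a in decode(bits):
        availability *= a
    return invest + OUTAGE_COST * (1.0 - availability)

def ga(pop_size=40, gens=60, seed=7):
    """Binary-string GA: tournament selection, one-point crossover, bit flips."""
    rng = random.Random(seed)
    n = 2 * len(SUBSYSTEMS)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            a, b = (min(rng.sample(pop, 3), key=fitness) for _ in range(2))
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]            # one-point crossover
            for i in range(n):
                if rng.random() < 0.05:
                    child[i] ^= 1                # bit-flip mutation
            nxt.append(child)
        pop = nxt
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(best):        # keep best-ever design
            best = cand
    return best
```

The paper's encoding additionally covers network topology and redundancy; the mechanics of selection, crossover and mutation on the bit string are the same.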

  20. Optimisation of a Swedish district heating system with reduced heat demand due to energy efficiency measures in residential buildings

    International Nuclear Information System (INIS)

    Åberg, M.; Henning, D.

    2011-01-01

    The development towards more energy efficient buildings, as well as the expansion of district heating (DH) networks, is generally considered to reduce environmental impact. But the combined effect of these two progressions is more controversial. A reduced heat demand (HD) due to higher energy efficiency in buildings might hamper co-production of electricity and DH. In Sweden, co-produced electricity is normally considered to displace electricity from less efficient European condensing power plants. In this study, a potential HD reduction due to energy efficiency measures in the existing building stock in the Swedish city Linköping is calculated. The impact of HD reduction on heat and electricity production in the Linköping DH system is investigated by using the energy system optimisation model MODEST. Energy efficiency measures in buildings reduce seasonal HD variations. Model results show that HD reductions primarily decrease heat-only production. The electricity-to-heat output ratio for the system is increased for HD reductions up to 30%. Local and global CO 2 emissions are reduced. If co-produced electricity replaces electricity from coal-fired condensing power plants, a 20% HD reduction is optimal for decreasing global CO 2 emissions in the analysed DH system. - Highlights: ► A MODEST optimisation model of the Linköping district heating system is used. ► The impact of heat demand reduction on heat and electricity production is examined. ► Model results show that heat demand reductions decrease heat-only production. ► Local and global CO 2 emissions are reduced. ► The system electricity-to-heat output increases for reduced heat demand up to 30%.

  1. Time since discharge of 9mm cartridges by headspace analysis, part 1: Comprehensive optimisation and validation of a headspace sorptive extraction (HSSE) method.

    Science.gov (United States)

    Gallidabino, M; Romolo, F S; Weyermann, C

    2017-03-01

    Estimating the time since discharge of spent cartridges can be a valuable tool in the forensic investigation of firearm-related crimes. To reach this aim, it was previously proposed to monitor over time the decrease of volatile organic compounds released during discharge, using non-destructive headspace extraction techniques. While promising results were obtained for large-calibre cartridges (e.g., shotgun shells), handgun calibres yielded unsatisfactory results. Besides the natural complexity of the specimen itself, this can also be attributed to some selective choices made during method development. The present series of papers therefore aims to evaluate more systematically the potential of headspace analysis to estimate the time since discharge of cartridges, through the use of more comprehensive analytical and interpretative techniques. Specifically, in this first part, a method based on headspace sorptive extraction (HSSE) was comprehensively optimised and validated, as the latter recently proved to be a more efficient alternative to previous approaches. For this purpose, 29 volatile organic compounds were preliminarily selected on the basis of previous work. A multivariate statistical approach based on design of experiments (DOE) was used to optimise variables potentially involved in interaction effects. The introduction of deuterated analogues into the sampling vials was also investigated as a strategy to account for analytical variation. Analysis was carried out in selected ion mode by gas chromatography coupled to mass spectrometry (GC-MS). Results showed good chromatographic resolution as well as good detection limits and peak area repeatability. Application to 9mm spent cartridges confirmed that the use of co-extracted internal standards improved the reproducibility of the measured signals. The validated method will be applied in the second part of this work to estimate the time since discharge of 9mm spent cartridges using multivariate models.

  2. A method for the calculation of collision strengths for complex atomic structures based on Slater parameter optimisation

    International Nuclear Information System (INIS)

    Fawcett, B.C.; Mason, H.E.

    1989-02-01

    This report presents details of a new method enabling the computation of collision strengths for complex ions, adapted from long-established optimisation techniques previously applied to the calculation of atomic structures and oscillator strengths. The procedure involves the adjustment of Slater parameters so that they determine improved energy levels and eigenvectors. These provide a basis for collision strength calculations in ions where ab initio computations break down or result in reducible errors. The application is demonstrated through modifications of the DISTORTED WAVE collision code and the SUPERSTRUCTURE atomic-structure code, which interface via a transformation code, JAJOM, which processes their output. (author)

  3. Optimisation of a double-centrifugation method for preparation of canine platelet-rich plasma.

    Science.gov (United States)

    Shin, Hyeok-Soo; Woo, Heung-Myong; Kang, Byung-Jae

    2017-06-26

    Platelet-rich plasma (PRP) holds promise for regenerative medicine because of its growth factors. However, there is considerable variability in the recovery and yield of platelets and in the concentration of growth factors in PRP preparations. The aim of this study was to identify the optimal relative centrifugal force and spin time for the preparation of PRP from canine blood using a double-centrifugation tube method. Whole blood samples were collected in citrate blood collection tubes from 12 healthy beagles. For the first centrifugation step, 10 different run conditions were compared to determine which produced optimal recovery of platelets. Once the optimal condition was identified, platelet-containing plasma prepared under that condition was subjected to a second centrifugation to pellet the platelets. For the second centrifugation, 12 different run conditions were compared to identify the centrifugal force and spin time producing maximal pellet recovery and concentration increase. Growth factor levels were estimated by using ELISA to measure platelet-derived growth factor-BB (PDGF-BB) concentrations in optimised CaCl 2 -activated platelet fractions. The highest platelet recovery rate and yield were obtained by first centrifuging whole blood at 1000 g for 5 min and then centrifuging the recovered platelet-enriched plasma at 1500 g for 15 min. This protocol recovered 80% of platelets from whole blood, increased the platelet concentration six-fold and produced the highest concentration of PDGF-BB in activated fractions. The optimised double-centrifugation tube method described here does not require particularly expensive equipment or high technical ability and can readily be carried out in a veterinary clinical setting.

  4. Multi-Dimensional Bitmap Indices for Optimising Data Access within Object Oriented Databases at CERN

    CERN Document Server

    Stockinger, K

    2001-01-01

    Efficient query processing in high-dimensional search spaces is an important requirement for many analysis tools. The literature on index data structures offers a wide range of methods for optimising database access. In particular, bitmap indices have recently gained substantial popularity in data warehouse applications with large amounts of read-mostly data. Bitmap indices are implemented in various commercial database products and are used for querying typical business applications. However, scientific data, which is mostly characterised by non-discrete attribute values, cannot be queried efficiently by the techniques currently supported. In this thesis we propose a novel access method based on bitmap indices that efficiently handles multi-dimensional queries against typical scientific data. The algorithm is called GenericRangeEval and is an extension of a bitmap index for discrete attribute values. By means of a cost model we study the performance of queries with various selectivities against uniform...
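The general idea behind bitmap indices for non-discrete data is to bin the attribute, keep one bitmap per bin, answer a range query by OR-ing the fully covered bins, and re-check only the rows from the partially covered edge bins against the raw data. The sketch below illustrates that candidate-check pattern; it is a toy illustration, not the GenericRangeEval algorithm itself:

```python
def build_bitmap_index(values, bin_edges):
    """One bitmap per bin; a Python int serves as the bit vector."""
    bitmaps = [0] * (len(bin_edges) - 1)
    for row, v in enumerate(values):
        for b in range(len(bin_edges) - 1):
            if bin_edges[b] <= v < bin_edges[b + 1]:
                bitmaps[b] |= 1 << row
                break
    return bitmaps

def range_query(values, bitmaps, bin_edges, lo, hi):
    """Rows with lo <= value < hi: OR the fully covered bin bitmaps, then
    re-check rows from the partially covered edge bins against raw data."""
    result = 0
    for b in range(len(bitmaps)):
        b_lo, b_hi = bin_edges[b], bin_edges[b + 1]
        if lo <= b_lo and b_hi <= hi:           # bin fully inside the query
            result |= bitmaps[b]
        elif b_hi > lo and b_lo < hi:           # edge bin: candidate check
            cand = bitmaps[b]
            row = 0
            while cand:
                if cand & 1 and lo <= values[row] < hi:
                    result |= 1 << row
                cand >>= 1
                row += 1
    return {r for r in range(len(values)) if result >> r & 1}
```

The attraction is that the bitwise ORs over fully covered bins are cheap, and raw data is only touched for the (at most two) edge bins of each query dimension.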

  5. Ant colony optimisation for economic dispatch problem with non-smooth cost functions

    Energy Technology Data Exchange (ETDEWEB)

    Pothiya, Saravuth; Kongprawechnon, Waree [School of Communication, Instrumentation and Control, Sirindhorn International Institute of Technology, Thammasat University, P.O. Box 22, Pathumthani (Thailand); Ngamroo, Issarachai [Center of Excellence for Innovative Energy Systems, Faculty of Engineering, King Mongkut's Institute of Technology Ladkrabang, Bangkok 10520 (Thailand)

    2010-06-15

    This paper presents a novel and efficient optimisation approach based on the ant colony optimisation (ACO) for solving the economic dispatch (ED) problem with non-smooth cost functions. In order to improve the performance of ACO algorithm, three additional techniques, i.e. priority list, variable reduction, and zoom feature are presented. To show its efficiency and effectiveness, the proposed ACO is applied to two types of ED problems with non-smooth cost functions. Firstly, the ED problem with valve-point loading effects consists of 13 and 40 generating units. Secondly, the ED problem considering the multiple fuels consists of 10 units. Additionally, the results of the proposed ACO are compared with those of the conventional heuristic approaches. The experimental results show that the proposed ACO approach is comparatively capable of obtaining higher quality solution and faster computational time. (author)

  6. Structural optimisation of a high speed Organic Rankine Cycle generator using a genetic algorithm and a finite element method

    Energy Technology Data Exchange (ETDEWEB)

    Palko, S. [Machines Division, ABB industry Oy, Helsinki (Finland)

    1997-12-31

    The aim of this work is to design a 250 kW high-speed asynchronous generator for an Organic Rankine Cycle using a genetic algorithm and a finite element method. The characteristics of the induction motors are evaluated using a two-dimensional finite element method (FEM). The movement of the rotor and the non-linearity of the iron are included. In numerical field problems it is possible to find several local extrema for an optimisation problem, and therefore the algorithm has to be capable of determining relevant changes and of avoiding entrapment in a local minimum. In this work the electromagnetic (EM) losses at the rated point are minimised. The optimisation includes the air-gap region. Parallel computing is applied to speed up the optimisation. (orig.) 2 refs.

  7. Efficient Rank Reduction of Correlation Matrices

    NARCIS (Netherlands)

    I. Grubisic (Igor); R. Pietersz (Raoul)

    2005-01-01

    Geometric optimisation algorithms are developed that efficiently find the nearest low-rank correlation matrix. We show, in numerical tests, that our methods compare favourably to the existing methods in the literature. The connection with the Lagrange multiplier method is established,
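As a rough illustration of the problem being solved (not the geometric algorithms of the paper), a simple projection heuristic alternates eigenvalue truncation with rescaling so the diagonal stays exactly one:

```python
import numpy as np

def nearest_low_rank_correlation(C, rank, iters=50):
    # Alternate a rank/PSD projection (eigenvalue truncation) with row
    # rescaling of the factor so the result is a rank-<=`rank` matrix
    # with unit diagonal, i.e. a low-rank correlation matrix near C.
    X = np.array(C, dtype=float)
    for _ in range(iters):
        w, V = np.linalg.eigh((X + X.T) / 2.0)
        w[:-rank] = 0.0                    # keep the `rank` largest eigenvalues
        w = np.clip(w, 0.0, None)          # drop any negative eigenvalues
        B = V * np.sqrt(w)                 # factor with X ~= B @ B.T
        norms = np.linalg.norm(B, axis=1, keepdims=True)
        B = B / np.where(norms > 0.0, norms, 1.0)
        X = B @ B.T                        # unit diagonal by construction
    return X
```

This heuristic is not guaranteed to find the *nearest* such matrix, which is exactly the gap the paper's geometric optimisation closes.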

  8. Optimisation: how to develop stake holder involvement

    International Nuclear Information System (INIS)

    Weiss, W.

    2003-01-01

    The Precautionary Principle is an internationally recognised approach for dealing with risk situations characterised by uncertainties and potentially irreversible damages. Since the late fifties, ICRP has adopted this prudent attitude because of the lack of scientific evidence concerning the existence of a threshold at low doses for stochastic effects. The 'linear, no-threshold' model and the 'optimisation of protection' principle have been developed as a pragmatic response for the management of the risk. The progress in epidemiology and radiobiology over the last decades has affirmed the initial assumption, and optimisation remains the appropriate response for the application of the precautionary principle in the context of radiological protection. The basic objective of optimisation is, for any source within the system of radiological protection, to maintain the level of exposure as low as reasonably achievable, taking into account social and economic factors. Methods, tools and procedures have been developed over the last two decades to put the optimisation principle into practice, with a central role given to cost-benefit analysis as a means to determine the optimised level of protection. However, with the advancement of the implementation of the principle, more emphasis was progressively given to good practice, as well as to the importance of controlling individual levels of exposure through the optimisation process. In the context of the revision of its present recommendations, the Commission is reinforcing the emphasis on protection of the individual with the adoption of an equity-based system that recognises individual rights and a basic level of health protection. Another advancement is the role now given to 'stakeholder involvement' in the optimisation process as a means to improve the quality of the decision-aiding process for identifying and selecting protection actions accepted by all those involved. The paper

  9. Methods for Distributed Optimal Energy Management

    DEFF Research Database (Denmark)

    Brehm, Robert

    The presented research deals with the fundamental underlying methods and concepts of how the growing number of distributed generation units based on renewable energy resources and distributed storage devices can be most efficiently integrated into the existing utility grid. In contrast to conventional centralised optimal energy flow management systems, the focus herein is set on how optimal energy management can be achieved in a decentralised distributed architecture such as a multi-agent system. Distributed optimisation methods are introduced, targeting optimisation of energy flow in virtual ... -consumption of renewable energy resources in low voltage grids. It can be shown that this method prevents mutual discharging of batteries and prevents peak loads; a supervisory control instance can dictate the level of autarchy from the utility grid. Further it is shown that the problem of optimal energy flow management...

  10. Application of Three Existing Stope Boundary Optimisation Methods in an Operating Underground Mine

    Science.gov (United States)

    Erdogan, Gamze; Yavuz, Mahmut

    2017-12-01

    The underground mine planning and design optimisation process has received little attention because of the complexity and variability of problems in underground mines. Although a number of optimisation studies and software tools are available, and some of them in particular have been implemented effectively to determine the ultimate-pit limits of an open-pit mine, there is still a lack of studies on the optimisation of ultimate stope boundaries in underground mines. The proposed approaches for this purpose aim at maximising the economic profit by selecting the best possible layout under operational, technical and physical constraints. In this paper, three existing heuristic techniques, the Floating Stope Algorithm, the Maximum Value Algorithm and the Mineable Shape Optimiser (MSO), are examined for optimisation of the stope layout in a case study. Each technique is assessed in terms of applicability, algorithm capabilities and limitations, considering the underground mine planning challenges. Finally, the results are evaluated and compared.

  11. Optimisation of pressurised liquid extraction (PLE) for rapid and efficient extraction of superficial and total mineral oil contamination from dry foods.

    Science.gov (United States)

    Moret, Sabrina; Scolaro, Marianna; Barp, Laura; Purcaro, Giorgia; Sander, Maren; Conte, Lanfranco S

    2014-08-15

    Pressurised liquid extraction (PLE) is a powerful technique which can be conveniently used for rapid extraction of mineral oil saturated hydrocarbons (MOSH) and mineral oil aromatic hydrocarbons (MOAH) from dry foods with a low fat content, such as semolina pasta, rice and other cereals. Two different PLE methods, one for rapid determination of superficial contamination (mainly from the packaging), the other for efficient extraction of total contamination from different sources, have been developed and optimised. The two methods presented good performance characteristics in terms of repeatability (relative standard deviation lower than 5%) and recoveries (higher than 95%). To demonstrate their potential, the two methods were applied in combination to semolina pasta and rice packaged in direct contact with recycled cardboard. In the case of semolina pasta it was possible to discriminate between superficial contamination coming from the packaging and pre-existing contamination firmly enclosed in the matrix.

  12. Optimisation of warpage on plastic injection moulding part using response surface methodology (RSM) and genetic algorithm method (GA)

    Science.gov (United States)

    Miza, A. T. N. A.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

    In this study, computer-aided engineering was used for injection moulding simulation. A design of experiments (DOE) was constructed according to a Latin square orthogonal array, and the relationship between the injection moulding parameters and warpage was identified from the experimental data. Response surface methodology (RSM) was used to validate the model accuracy. The RSM and GA methods were then combined to find the optimum injection moulding process parameters. The optimisation of injection moulding is thereby largely improved, and the results show increased accuracy and reliability. The proposed method combining RSM and GA also contributes to minimising the occurrence of warpage.
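The RSM-plus-GA pipeline can be sketched as follows: fit a quadratic response surface to DOE data, then search it with a small genetic algorithm. The factor ranges and the synthetic "warpage" data in the test are invented for illustration; this is not the authors' model:

```python
import numpy as np

def fit_quadratic_surface(X, y):
    # Least-squares fit of a full quadratic response surface in two factors
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, x1, x2):
    return (coef[0] + coef[1] * x1 + coef[2] * x2
            + coef[3] * x1**2 + coef[4] * x2**2 + coef[5] * x1 * x2)

def ga_minimise(coef, bounds, pop=40, gens=80, seed=0):
    # Minimal real-coded GA: tournament selection plus Gaussian mutation,
    # tracking the best point seen over all generations
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    P = rng.uniform(lo, hi, size=(pop, 2))
    best_x, best_f = None, np.inf
    for _ in range(gens):
        f = predict(coef, P[:, 0], P[:, 1])
        k = int(np.argmin(f))
        if f[k] < best_f:
            best_f, best_x = float(f[k]), P[k].copy()
        i = rng.integers(0, pop, pop)
        j = rng.integers(0, pop, pop)
        parents = np.where((f[i] < f[j])[:, None], P[i], P[j])
        P = np.clip(parents + rng.normal(0.0, 0.05 * (hi - lo), size=P.shape),
                    lo, hi)
    return best_x, best_f
```

The surface model stands in for the expensive moulding simulation, so the GA can afford thousands of cheap evaluations.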

  13. Friction stir welding: multi-response optimisation using Taguchi-based GRA

    Directory of Open Access Journals (Sweden)

    Jitender Kundu

    2016-01-01

    Full Text Available In the present experimental work, friction stir welding of aluminium alloy 5083-H321 is performed to optimise the process parameters for maximum tensile strength. Taguchi's L9 orthogonal array has been used for three parameters, namely tool rotational speed (TRS), traverse speed (TS) and tool tilt angle (TTA), each at three levels. Multi-response optimisation has been carried out through Taguchi-based grey relational analysis. The grey relational grade has been calculated from all three responses: ultimate tensile strength, percentage elongation and micro-hardness. Analysis of variance applied to the grey relational grade is used to identify the significant process parameters. TRS and TS are the two most significant parameters, influencing most of the quality characteristics of the friction stir welded joint. Validation of the predicted values through confirmation experiments at the optimum setting shows good agreement with the experimental values.
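The grey relational grade computation described above can be sketched directly. The distinguishing coefficient ζ = 0.5 is the conventional choice; the response values used in the test are hypothetical:

```python
def grey_relational_grades(responses, larger_better, zeta=0.5):
    # Normalise each response column to [0, 1], measure the deviation of
    # each experiment from the ideal sequence (all ones), convert each
    # deviation to a grey relational coefficient and average per experiment.
    cols = list(zip(*responses))
    norm_cols = []
    for col, larger in zip(cols, larger_better):
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0
        norm_cols.append([(v - lo) / span if larger else (hi - v) / span
                          for v in col])
    grades = []
    for row in zip(*norm_cols):
        # After normalisation delta_min = 0 and delta_max = 1, so the
        # coefficient (delta_min + zeta*delta_max)/(delta + zeta*delta_max)
        # simplifies to zeta / (delta + zeta).
        coeffs = [zeta / ((1.0 - v) + zeta) for v in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

The experiment with the highest grade is the one closest to the ideal across all responses simultaneously, which is how the multi-response problem collapses to a single Taguchi criterion.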

  14. High efficient plastic solar cells fabricated with a high-throughput gravure printing method

    Energy Technology Data Exchange (ETDEWEB)

    Kopola, P.; Jin, H.; Tuomikoski, M.; Maaninen, A.; Hast, J. [VTT, Kaitovaeylae 1, FIN-90571 Oulu (Finland); Aernouts, T. [IMEC, Organic PhotoVoltaics, Polymer and Molecular Electronics, Kapeldreef 75, B-3001 Leuven (Belgium); Guillerez, S. [CEA-INES RDI, 50 Avenue Du Lac Leman, 73370 Le Bourget Du Lac (France)

    2010-10-15

    We report on polymer-based solar cells prepared by the high-throughput roll-to-roll gravure printing method. The engravings of the printing plate, along with process parameters like printing speed and ink properties, are studied to optimise the printability of the photoactive as well as the hole transport layer. For the hole transport layer, the focus is on testing different formulations to produce thorough wetting of the indium-tin-oxide (ITO) substrate. The challenge for the photoactive layer is to form a uniform layer with optimal nanomorphology in the poly-3-hexylthiophene (P3HT) and [6,6]-phenyl-C61-butyric acid methyl ester (PCBM) blend. This results in a power conversion efficiency of 2.8% under simulated AM1.5G solar illumination for a solar cell device with gravure-printed hole transport and a photoactive layer. (author)

  15. An effective approach to reducing strategy space for maintenance optimisation of multistate series–parallel systems

    International Nuclear Information System (INIS)

    Zhou, Yifan; Lin, Tian Ran; Sun, Yong; Bian, Yangqing; Ma, Lin

    2015-01-01

    Maintenance optimisation of series–parallel systems is a research topic of practical significance. Nevertheless, a cost-effective maintenance strategy is difficult to obtain due to the large strategy space for maintenance optimisation of such systems. The heuristic algorithm is often employed to deal with this problem. However, the solution obtained by the heuristic algorithm is not always the global optimum and the algorithm itself can be very time consuming. An alternative method based on linear programming is thus developed in this paper to overcome such difficulties by reducing strategy space of maintenance optimisation. A theoretical proof is provided in the paper to verify that the proposed method is at least as effective as the existing methods for strategy space reduction. Numerical examples for maintenance optimisation of series–parallel systems having multistate components and considering both economic dependence among components and multiple-level imperfect maintenance are also presented. The simulation results confirm that the proposed method is more effective than the existing methods in removing inappropriate maintenance strategies of multistate series–parallel systems. - Highlights: • A new method using linear programming is developed to reduce the strategy space. • The effectiveness of the new method for strategy reduction is theoretically proved. • Imperfect maintenance and economic dependence are considered during optimisation
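The paper's linear-programming reduction is not reproduced here, but the underlying idea of removing inappropriate strategies can be illustrated with a simple dominance filter over hypothetical (cost, reliability) pairs:

```python
def prune_dominated(strategies):
    # Keep only maintenance strategies that are not dominated by another
    # strategy with lower-or-equal cost AND higher-or-equal reliability
    # (strictly better in at least one of the two).
    kept = []
    for i, (ci, ri) in enumerate(strategies):
        dominated = any(
            cj <= ci and rj >= ri and (cj < ci or rj > ri)
            for j, (cj, rj) in enumerate(strategies)
            if j != i
        )
        if not dominated:
            kept.append((ci, ri))
    return kept
```

Any exact search (heuristic or otherwise) only needs to consider the surviving strategies, which is the sense in which strategy-space reduction speeds up the optimisation.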

  16. Simultaneous Topology, Shape, and Sizing Optimisation of Plane Trusses with Adaptive Ground Finite Elements Using MOEAs

    Directory of Open Access Journals (Sweden)

    Norapat Noilublao

    2013-01-01

    Full Text Available This paper proposes a novel integrated design strategy to accomplish simultaneous topology, shape, and sizing optimisation of a two-dimensional (2D) truss. An optimisation problem is posed to find the structural topology, shape, and element sizes of the truss such that two objective functions, mass and compliance, are minimised. Design constraints include stress, buckling, and compliance. The procedure for an adaptive ground elements approach is proposed and its encoding/decoding process is detailed. Two sets of design variables, defining truss layout, shape, and element sizes at the same time, are applied. A number of multiobjective evolutionary algorithms (MOEAs) are implemented to solve the design problem. A comparison of performance based on the hypervolume indicator shows that multiobjective population-based incremental learning (PBIL) is the best performer. Optimising the three design variable types simultaneously is more efficient and effective.
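The hypervolume indicator used for the comparison can be computed exactly in two dimensions by a sweep over the sorted non-dominated front (a generic sketch, not the paper's implementation):

```python
def hypervolume_2d(front, ref):
    # Area dominated by a 2-D minimisation front with respect to a
    # reference point; assumes every point dominates the reference.
    pts = sorted(set(front))               # ascending in the first objective
    nd, best2 = [], float("inf")
    for f1, f2 in pts:
        if f2 < best2:                     # keep non-dominated points only
            nd.append((f1, f2))
            best2 = f2
    area = 0.0
    for i, (f1, f2) in enumerate(nd):
        next1 = nd[i + 1][0] if i + 1 < len(nd) else ref[0]
        area += (next1 - f1) * (ref[1] - f2)
    return area
```

A larger hypervolume means the front dominates more of the objective space, which is why it serves as a single scalar for ranking MOEAs.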

  17. A methodological approach to the design of optimising control strategies for sewer systems

    DEFF Research Database (Denmark)

    Mollerup, Ane Loft; Mikkelsen, Peter Steen; Sin, Gürkan

    2016-01-01

    This study focuses on designing an optimisation-based control for a sewer system in a methodological way and linking it to a regulatory control. Optimisation-based design is found to depend on the proper choice of a model, the formulation of the objective function and the tuning of optimisation parameters. Accordingly, two novel optimisation configurations are developed, where the optimisation either acts on the actuators or acts on the regulatory control layer. These two optimisation designs are evaluated on a sub-catchment of the sewer system in Copenhagen, and found to perform better than the existing...

  18. Efficient generalized Golub-Kahan based methods for dynamic inverse problems

    Science.gov (United States)

    Chung, Julianne; Saibaba, Arvind K.; Brown, Matthew; Westman, Erik

    2018-02-01

    We consider efficient methods for computing solutions to and estimating uncertainties in dynamic inverse problems, where the parameters of interest may change during the measurement procedure. Compared to static inverse problems, incorporating prior information in both space and time in a Bayesian framework can become computationally intensive, in part, due to the large number of unknown parameters. In these problems, explicit computation of the square root and/or inverse of the prior covariance matrix is not possible, so we consider efficient, iterative, matrix-free methods based on the generalized Golub-Kahan bidiagonalization that allow automatic regularization parameter and variance estimation. We demonstrate that these methods for dynamic inversion can be more flexible than standard methods and develop efficient implementations that can exploit structure in the prior, as well as possible structure in the forward model. Numerical examples from photoacoustic tomography, space-time deblurring, and passive seismic tomography demonstrate the range of applicability and effectiveness of the described approaches. Specifically, in passive seismic tomography, we demonstrate our approach on both synthetic and real data. To demonstrate the scalability of our algorithm, we solve a dynamic inverse problem with approximately 43 000 measurements and 7.8 million unknowns in under 40 s on a standard desktop.
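The standard Golub-Kahan bidiagonalization at the core of these methods can be sketched in a few lines (the authors' generalized, matrix-free variant with automatic regularization and variance estimation is not reproduced):

```python
import numpy as np

def golub_kahan(A, b, k):
    # k steps of Golub-Kahan bidiagonalization: builds orthonormal bases
    # U (m, k+1) and V (n, k) and a lower-bidiagonal B (k+1, k) such that
    # A @ V == U @ B, starting from u_1 = b / ||b||.
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    beta = 0.0
    for i in range(k):
        v = A.T @ U[:, i] - beta * (V[:, i - 1] if i > 0 else 0.0)
        alpha = np.linalg.norm(v)
        V[:, i] = v / alpha
        u = A @ V[:, i] - alpha * U[:, i]
        beta = np.linalg.norm(u)
        U[:, i + 1] = u / beta
        B[i, i] = alpha          # diagonal entry
        B[i + 1, i] = beta       # subdiagonal entry
    return U, B, V
```

Only matrix-vector products with A and A.T are needed, which is exactly the property that makes this family of methods attractive for large dynamic inverse problems where A is never formed explicitly.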

  19. A pilot investigation to optimise methods for a future satiety preload study

    OpenAIRE

    Hobden, Mark R.; Guérin-Deremaux, Laetitia; Commane, Daniel M.; Rowland, Ian; Gibson, Glenn R.; Kennedy, Orla B.

    2017-01-01

    Background Preload studies are used to investigate the satiating effects of foods and food ingredients. However, the design of preload studies is complex, with many methodological considerations influencing appetite responses. The aim of this pilot investigation was to determine acceptability, and optimise methods, for a future satiety preload study. Specifically, we investigated the effects of altering (i) energy intake at a standardised breakfast (gender-specific or non-gender specific), an...

  20. The Energy-Efficient Quarry: Towards improved understanding and optimisation of energy use and minimisation of CO2 generation in the aggregates industry.

    Science.gov (United States)

    Hill, Ian; White, Toby; Owen, Sarah

    2014-05-01

    ... industry on a web-based platform. This tool guides quarry managers and operators through the complex, multi-layered, iterative, process of assessing the energy efficiency of their own quarry operation. They are able to evaluate the optimisation of the energy-efficiency of the overall quarry through examining both the individual stages of processing, and the interactions between them. The project is also developing on-line distance learning modules designed for Continuous Professional Development (CPD) activities for staff across the quarrying industry in the EU and beyond. The presentation will describe development of the model, and the format and scope of the resulting software tool and its user-support available to the quarrying industry.

  1. Validation of a large-scale audit technique for CT dose optimisation

    International Nuclear Information System (INIS)

    Wood, T. J.; Davis, A. W.; Moore, C. S.; Beavis, A. W.; Saunderson, J. R.

    2008-01-01

    The expansion and increasing availability of computed tomography (CT) imaging means that there is a greater need for the development of efficient optimisation strategies that are able to inform clinical practice, without placing a significant burden on limited departmental resources. One of the most fundamental aspects to any optimisation programme is the collection of patient dose information, which can be compared with appropriate diagnostic reference levels. This study has investigated the implementation of a large-scale audit technique, which utilises data that already exist in the radiology information system, to determine typical doses for a range of examinations on four CT scanners. This method has been validated against what is considered the 'gold standard' technique for patient dose audits, and it has been demonstrated that results equivalent to the 'standard-sized patient' can be inferred from this much larger data set. This is particularly valuable where CT optimisation is concerned as it is considered a 'high dose' technique, and hence close monitoring of patient dose is particularly important. (authors)

  2. An Efficient Graph-based Method for Long-term Land-use Change Statistics

    Directory of Open Access Journals (Sweden)

    Yipeng Zhang

    2015-12-01

    Full Text Available Statistical analysis of land-use change plays an important role in sustainable land management and has received increasing attention from scholars and administrative departments. However, the statistical process involving spatial overlay analysis remains difficult and needs improvement to deal with massive land-use data. In this paper, we introduce a spatio-temporal flow network model to reveal the hidden relational information among spatio-temporal entities. Based on graph theory, the constant condition of saturated multi-commodity flow is derived. A new method based on a network partition technique for the spatio-temporal flow network is proposed to optimise the transition statistical process. The effectiveness and efficiency of the proposed method are verified through experiments using land-use data for Hunan from 2009 to 2014. In a comparison of three different land-use change statistical methods, the proposed method exhibits remarkable superiority in efficiency.
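The basic overlay statistic that the paper accelerates, a cell-wise transition matrix between two classified maps, can be sketched as follows (the graph-partition optimisation itself is not reproduced, and the class labels are hypothetical):

```python
from collections import Counter

def transition_matrix(before, after):
    # Cell-wise land-use transition counts between two classified maps,
    # given as equal-length sequences of class labels per cell
    counts = Counter(zip(before, after))
    classes = sorted(set(before) | set(after))
    return {(a, b): counts.get((a, b), 0) for a in classes for b in classes}
```

Each entry (a, b) counts cells that changed from class a to class b between the two dates; the paper's contribution is computing such statistics efficiently at scale over many dates.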

  3. Energy thermal management in commercial bread-baking using a multi-objective optimisation framework

    International Nuclear Information System (INIS)

    Khatir, Zinedine; Taherkhani, A.R.; Paton, Joe; Thompson, Harvey; Kapur, Nik; Toropov, Vassili

    2015-01-01

    In response to increasing energy costs and legislative requirements energy efficient high-speed air impingement jet baking systems are now being developed. In this paper, a multi-objective optimisation framework for oven designs is presented which uses experimentally verified heat transfer correlations and high fidelity Computational Fluid Dynamics (CFD) analyses to identify optimal combinations of design features which maximise desirable characteristics such as temperature uniformity in the oven and overall energy efficiency of baking. A surrogate-assisted multi-objective optimisation framework is proposed and used to explore a range of practical oven designs, providing information on overall temperature uniformity within the oven together with ensuing energy usage and potential savings. - Highlights: • A multi-objective optimisation framework to design commercial ovens is presented. • High fidelity CFD embeds experimentally calibrated heat transfer inputs. • The optimum oven design minimises specific energy and bake time. • The Pareto front outlining the surrogate-assisted optimisation framework is built. • Optimisation of industrial bread-baking ovens reveals an energy saving of 637.6 GWh

  4. ATLAS software configuration and build tool optimisation

    Science.gov (United States)

    Rybkin, Grigory; Atlas Collaboration

    2014-06-01

    ATLAS software code base is over 6 million lines organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers and used by more than 2500 physicists from over 200 universities and laboratories on 6 continents. To meet the challenge of configuring and building this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and the project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build time performance, which was optimised through several approaches: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at the package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of the CMT commands used for the build; and introduction of package-level build parallelism, i.e., parallelising the build of independent packages. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on optimisation of CMT commands in general, which made them approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CERN VMFS. The use of parallelism, caching and code optimisation reduced software build time and environment setup time several-fold, and increased the efficiency of
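Package-level build parallelism of the kind described, building all currently independent packages in parallel waves while respecting dependencies, can be sketched generically (this is not CMT's implementation, and the package names in the test are invented):

```python
import concurrent.futures as cf

def parallel_build(packages, deps, build, workers=4):
    # Build packages wave by wave: every package whose prerequisites are
    # already done is built in parallel with the other ready packages.
    remaining = {p: set(deps.get(p, ())) for p in packages}
    done, order = set(), []
    with cf.ThreadPoolExecutor(max_workers=workers) as pool:
        while remaining:
            ready = [p for p, d in remaining.items() if d <= done]
            if not ready:
                raise ValueError("dependency cycle detected")
            for p in ready:
                del remaining[p]
            list(pool.map(build, ready))   # one parallel build wave
            done.update(ready)
            order.extend(sorted(ready))
    return order
```

The speed-up comes from the width of each wave: packages with no mutual dependencies never wait for one another, only for their actual prerequisites.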

  5. A method to derive fixed budget results from expected optimisation times

    DEFF Research Database (Denmark)

    Doerr, Benjamin; Jansen, Thomas; Witt, Carsten

    2013-01-01

    At last year's GECCO a novel perspective for the theoretical performance analysis of evolutionary algorithms and other randomised search heuristics was introduced that concentrates on the expected function value after a pre-defined number of steps, called the budget. This is significantly different from the common perspective, where the expected optimisation time is analysed. While there is a huge body of work and a large collection of tools for the analysis of the expected optimisation time, the new fixed budget perspective introduces new analytical challenges. Here it is shown how results on the expected optimisation time that are strengthened by deviation bounds can be systematically turned into fixed budget results. We demonstrate our approach by considering the (1+1) EA on LeadingOnes and significantly improving previous results. We prove that deviating from the expected time by an additive term of ω(n3...
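The fixed budget perspective can be illustrated by running the (1+1) EA on LeadingOnes and recording the best fitness after every evaluation up to a budget (a plain sketch of the analysed algorithm, not of the paper's proof technique; n and the budget are arbitrary):

```python
import random

def leading_ones(bits):
    # LeadingOnes: number of consecutive one-bits counted from the left
    count = 0
    for b in bits:
        if not b:
            break
        count += 1
    return count

def one_plus_one_ea(n, budget, seed=0):
    # (1+1) EA: flip each bit independently with probability 1/n and
    # accept the offspring if it is at least as good; return the fitness
    # trajectory over a fixed budget of evaluations.
    rng = random.Random(seed)
    x = [rng.random() < 0.5 for _ in range(n)]
    fx = leading_ones(x)
    trace = [fx]
    for _ in range(budget):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = leading_ones(y)
        if fy >= fx:
            x, fx = y, fy
        trace.append(fx)
    return trace
```

Fixed budget analysis asks about the distribution of `trace[budget]`, whereas classical runtime analysis asks for the expected index at which the trace first hits n.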

  6. Optimisation of Heterogeneous Migration Paths to High Bandwidth Home Connections

    NARCIS (Netherlands)

    Phillipson, F.

    2017-01-01

    Operators are building architectures and systems for delivering voice, audio, and data services at the required speed for now and in the future. For fixed access networks, this means in many countries a shift from copper-based to fibre-based access networks. This paper proposes a method to optimise

  7. Shape optimisation and performance analysis of flapping wings

    KAUST Repository

    Ghommem, Mehdi; Collier, Nathan; Niemi, Antti; Calo, Victor M.

    2012-01-01

    ... the optimised shapes produce efficient flapping flights, the wake pattern and its vorticity strength are examined. The work described in this paper should facilitate better guidance for the shape design of engineered flying systems.

  8. An Optimisation Approach for Room Acoustics Design

    DEFF Research Database (Denmark)

    Holm-Jørgensen, Kristian; Kirkegaard, Poul Henning; Andersen, Lars

    2005-01-01

    This paper discusses, at a conceptual level, the value of optimisation techniques in architectural room acoustics design from a practical point of view. A single room acoustics design criterion, estimated from the sound field inside the room, is chosen as the optimisation objective. The sound field is modelled using the boundary element method, with absorption incorporated. An example is given where the geometry of a room is defined by four design modes. The room geometry is optimised to obtain a uniform sound pressure.

  9. Study on Parallel Processing for Efficient Flexible Multibody Analysis based on Subsystem Synthesis Method

    Energy Technology Data Exchange (ETDEWEB)

    Han, Jong-Boo; Song, Hajun; Kim, Sung-Soo [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-06-15

    Flexible multibody simulations are widely used in industry to design mechanical systems. In flexible multibody dynamics, deformation coordinates are described either relatively, in a body reference frame floating in space, or in the inertial reference frame. Moreover, these deformation coordinates are generated based on the discretisation of the body according to the finite element approach. Therefore, the formulation of a flexible multibody system always deals with a huge number of degrees of freedom, and the numerical solution methods require a substantial amount of computational time. Parallel computational methods are a solution for efficient computation. However, most parallel computational methods focus on the efficient solution of large-sized linear equations. For multibody analysis, we need to develop an efficient formulation that is suitable for parallel computation. In this paper, we developed a subsystem synthesis method for a flexible multibody system and proposed efficient parallel computational schemes based on the OpenMP API. Simulations of a rotating blade system, which consists of three identical blades, were carried out with two different parallel computational schemes. Actual CPU times were measured to investigate the efficiency of the proposed parallel schemes.

  10. Assessment of grid optimisation measures for the German transmission grid using open source grid data

    Science.gov (United States)

    Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.

    2018-02-01

    The expansion of capacities in the German transmission grid is a necessity for further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodical approach of the simulation model is presented and detailed descriptions of the grid model and the used grid data, which partly originates from open-source platforms, are provided. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures and how the depreciation of economic costs can be determined considering construction costs. The developed simulations show that the conventional grid expansion is more efficient and implies more grid relieving effects than the evaluated grid optimisation measures.

  11. Radiation dose to children in diagnostic radiology. Measurements and methods for clinical optimisation studies

    Energy Technology Data Exchange (ETDEWEB)

    Almen, A J

    1995-09-01

    A method for estimating the mean absorbed dose to different organs and tissues was developed for paediatric patients undergoing X-ray investigations. The absorbed dose distribution in water was measured for the specific X-ray beam used. Clinical images were studied to determine X-ray beam positions and field sizes. The size and position of organs in the patient were estimated using ORNL phantoms and complementary clinical information. Conversion factors between the mean absorbed dose to various organs and the entrance surface dose were calculated for five different body sizes. Direct measurements on patients estimating entrance surface dose and energy imparted were performed for common X-ray investigations. The examination technique for a number of paediatric X-ray investigations used in 19 Swedish hospitals was studied. For a simulated pelvis investigation of a 1-year-old child the entrance surface dose was measured and image quality was estimated using a contrast-detail phantom. Mean absorbed doses to organs and tissues in urography, lung, pelvis, thoracic spine, lumbar spine and scoliosis investigations were calculated. Calculations of effective dose were supplemented with risk calculations for special organs, e.g. the female breast. The work shows that the examination technique in paediatric radiology is not yet optimised, and that the non-optimised procedures contribute to a considerable variation in radiation dose. In order to optimise paediatric radiology there is a need for more standardised methods in patient dosimetry. It is especially important to relate measured quantities to the size of the patient, using e.g. the patient weight and length. 91 refs, 17 figs, 8 tabs.

  12. Radiation dose to children in diagnostic radiology. Measurements and methods for clinical optimisation studies

    International Nuclear Information System (INIS)

    Almen, A.J.

    1995-09-01

    A method for estimating mean absorbed dose to different organs and tissues was developed for paediatric patients undergoing X-ray investigations. The absorbed dose distribution in water was measured for the specific X-ray beam used. Clinical images were studied to determine X-ray beam positions and field sizes. Size and position of organs in the patient were estimated using ORNL phantoms and complementary clinical information. Conversion factors between the mean absorbed dose to various organs and entrance surface dose for five different body sizes were calculated. Direct measurements on patients estimating entrance surface dose and energy imparted for common X-ray investigations were performed. The examination technique for a number of paediatric X-ray investigations used in 19 Swedish hospitals was studied. For a simulated pelvis investigation of a 1-year old child the entrance surface dose was measured and image quality was estimated using a contrast-detail phantom. Mean absorbed doses to organs and tissues in urography, lung, pelvis, thoracic spine, lumbar spine and scoliosis investigations were calculated. Calculations of effective dose were supplemented with risk calculations for specific organs, e.g. the female breast. The work shows that the examination technique in paediatric radiology is not yet optimised, and that the non-optimised procedures contribute to a considerable variation in radiation dose. In order to optimise paediatric radiology there is a need for more standardised methods in patient dosimetry. It is especially important to relate measured quantities to the size of the patient, using e.g. the patient's weight and length. 91 refs, 17 figs, 8 tabs

  13. Approaches and challenges to optimising primary care teams’ electronic health record usage

    Directory of Open Access Journals (Sweden)

    Nancy Pandhi

    2014-07-01

    Full Text Available Background Although the presence of an electronic health record (EHR) alone does not ensure high quality, efficient care, few studies have focused on the work of those charged with optimising use of existing EHR functionality. Objective To examine the approaches used and challenges perceived by analysts supporting the optimisation of primary care teams’ EHR use at a large U.S. academic health care system. Methods A qualitative study was conducted. Optimisation analysts and their supervisor were interviewed and data were analysed for themes. Results Analysts needed to reconcile the tension created by organisational mandates focused on the standardisation of EHR processes with the primary care teams’ demand for EHR customisation. They gained an understanding of health information technology (HIT) leadership’s and primary care team’s goals through attending meetings, reading meeting minutes and visiting with clinical teams. Within what was organisationally possible, EHR education could then be tailored to fit team needs. Major challenges were related to organisational attempts to standardise EHR use despite varied clinic contexts, personnel readiness and technical issues with the EHR platform. Forcing standardisation upon clinical needs that current EHR functionality could not satisfy was difficult. Conclusions Dedicated optimisation analysts can add value to health systems through playing a mediating role between HIT leadership and care teams. Our findings imply that EHR optimisation should be performed with an in-depth understanding of the workflow, cognitive and interactional activities in primary care.

  14. Combining optimisation and simulation in an energy systems analysis of a Swedish iron foundry

    International Nuclear Information System (INIS)

    Mardan, Nawzad; Klahr, Roger

    2012-01-01

    To face global competition, and also reduce environmental and climate impact, industry-wide changes are needed, especially regarding energy use, which is closely related to global warming. Energy efficiency is therefore an essential task for the future as it has a significant impact on both business profits and the environment. For the analysis of possible changes in industrial production processes, and to choose what changes should be made, various modelling tools can be used as decision support. This paper uses two types of energy analysis tool: Discrete Event Simulation (DES) and Energy Systems Optimisation (ESO). The aim of this study is to describe how a DES and an ESO tool can be combined. A comprehensive five-step approach is proposed for reducing system costs and making a more robust production system. A case study representing a new investment in part of a Swedish iron foundry is also included to illustrate the method's use. The method described in this paper is based on the use of the DES program QUEST and the ESO tool reMIND. The method combination itself is generic, i.e. other similar programs can be used as well with some adjustments and adaptations. The results from the case study show that when different boundary conditions are used, the result obtained from the simulation tool is not optimal; in other words, it is only a feasible solution, not the best way to run the factory. It is therefore important to use the optimisation tool in such cases in order to obtain the optimum operating strategy. By using the optimisation tool a substantial amount of resources can be saved. The results also show that the combination of optimisation and simulation tools is useful for providing very detailed information about how the system works and for predicting system behaviour, as well as for minimising the system cost. -- Highlights: ► This study describes how a simulation and an optimisation tool can be combined. ► A case study representing a new

  15. Highly efficient parallel direct solver for solving dense complex matrix equations from method of moments

    Directory of Open Access Journals (Sweden)

    Yan Chen

    2017-03-01

    Full Text Available Based on the vectorised and cache-optimised kernel, a parallel lower–upper (LU) decomposition with a novel communication-avoiding pivoting scheme is developed to solve dense complex matrix equations generated by the method of moments. The fine-grain data rearrangement and assembler instructions are adopted to reduce memory accesses and improve CPU cache utilisation, which also facilitates vectorisation of the code. Through grouping processes in a binary tree, a parallel pivoting scheme is designed to optimise the communication pattern and thus reduce the solving time of the proposed solver. Two large electromagnetic radiation problems are solved on two supercomputers, respectively, and the numerical results demonstrate that the proposed method outperforms those in open-source and commercial libraries.
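
    For orientation, the serial core that such solvers accelerate is plain LU factorisation with partial pivoting. The textbook sketch below works on complex matrices; it shows only the basic factorisation, not the authors' vectorised kernel or communication-avoiding parallel pivoting.

```python
def lu_partial_pivot(A):
    """Textbook in-place LU factorisation with partial pivoting (PA = LU).
    A is a list of lists (possibly complex); returns the packed LU factors
    (unit-lower L below the diagonal, U on and above) and the row permutation."""
    n = len(A)
    A = [row[:] for row in A]                    # work on a copy
    piv = list(range(n))
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))   # pivot row
        if p != k:
            A[k], A[p] = A[p], A[k]
            piv[k], piv[p] = piv[p], piv[k]
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]                   # multiplier L[i][k]
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]     # Schur-complement update
    return A, piv

# small complex example: the larger |entry| in the first column pivots up
LU, piv = lu_partial_pivot([[2 + 0j, 1], [4, 3]])
```

    The paper's contribution lies in how the pivot search and row exchanges are organised across processes (a binary tree of process groups) so that pivoting does not dominate communication at scale.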

  16. Exergy analysis and optimisation of a marine molten carbonate fuel cell system in simple and combined cycle configuration

    International Nuclear Information System (INIS)

    Dimopoulos, George G.; Stefanatos, Iason C.; Kakalis, Nikolaos M.P.

    2016-01-01

    Highlights: • Process modelling and optimisation of an integrated marine MCFC system. • Component-level and spatially distributed exergy analysis and balances. • Optimal simple cycle MCFC system with 45.5% overall exergy efficiency. • Optimal combined cycle MCFC system with 60% overall exergy efficiency. • Combined cycle MCFC system yields 30% CO₂ relative emissions reduction. - Abstract: In this paper we present the exergy analysis and design optimisation of an integrated molten carbonate fuel cell (MCFC) system for marine applications, considering waste heat recovery options for additional power production. High temperature fuel cells are attractive solutions for marine energy systems, as they can significantly reduce gaseous emissions, increase efficiency and facilitate the introduction of more environmentally-friendly fuels, like LNG and biofuels. We consider an already installed MCFC system onboard a sea-going vessel, which has many tightly integrated sub-systems and components: fuel delivery and pre-reforming, internal reforming sections, electrochemical conversion, catalytic burner, air supply and high temperature exhaust gas. The high temperature exhaust gases offer significant potential for heat recovery that can be directed into both covering the system’s auxiliary heat requirements and power production. Therefore, an integrated systems approach is employed to accurately identify the true sources of losses in the various components and to optimise the overall system with respect to its energy efficiency, taking into account the various trade-offs and subject to several constraints. Here, we present a four-step approach: a. dynamic process models development of simple and combined-cycle MCFC system; b. MCFC components and system models calibration via onboard MCFC measurements; c. exergy analysis, and d. optimisation of the simple and combined-cycle systems with respect to their exergetic performance. Our methodology is based on the

  17. Statistical meandering wake model and its application to yaw-angle optimisation of wind farms

    DEFF Research Database (Denmark)

    Thøgersen, Emil; Tranberg, Bo; Herp, Jürgen

    2017-01-01

    deterministic models to a statistical meandering wake model (SMWM), where a random directional deflection is assigned to a narrow wake in such a way that on average it resembles a broad Jensen wake. In a second step, the model is further generalised to wind-farm level, where the deflections of the multiple...... wakes are treated as independently and identically distributed random variables. When carefully calibrated to the Nysted wind farm, the ensemble average of the statistical model produces the same wind-direction dependence of the power efficiency as obtained from the standard Jensen model. Upon using...... the JWM to perform a yaw-angle optimisation of wind-farm power output, we find an optimisation gain of 6.7% for the Nysted wind farm when compared to zero yaw angles and averaged over all wind directions. When applying the obtained JWM-based optimised yaw angles to the SMWM, the ensemble-averaged gain...
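
    The broad Jensen wake that the statistical model reproduces on average follows the standard top-hat deficit formula. A minimal sketch, with illustrative offshore parameter values (not calibrated to Nysted):

```python
import math

def jensen_deficit(ct, k, x, r0):
    """Fractional velocity deficit a distance x downstream of a rotor of
    radius r0 in the Jensen (top-hat) wake model; ct is the thrust
    coefficient and k the wake decay constant."""
    return (1 - math.sqrt(1 - ct)) / (1 + k * x / r0) ** 2

# illustrative values: ct = 0.8, k = 0.04 (offshore), 40 m rotor, 400 m downstream
deficit = jensen_deficit(0.8, 0.04, 400.0, 40.0)
wake_wind = 8.0 * (1 - deficit)     # wind speed in the wake for 8 m/s free wind
print(round(deficit, 3))            # 0.282
```

    In the statistical meandering variant, a narrow wake of this form is given a random lateral deflection per realisation, so that the ensemble average recovers the broad deterministic profile.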

  18. Optimisation of flavour ester biosynthesis in an aqueous system of coconut cream and fusel oil catalysed by lipase.

    Science.gov (United States)

    Sun, Jingcan; Yu, Bin; Curran, Philip; Liu, Shao-Quan

    2012-12-15

    Coconut cream and fusel oil, two low-cost natural substances, were used as starting materials for the biosynthesis of flavour-active octanoic acid esters (ethyl-, butyl-, isobutyl- and (iso)amyl octanoate) using lipase Palatase as the biocatalyst. The Taguchi design method was used for the first time to optimise the biosynthesis of esters by a lipase in an aqueous system of coconut cream and fusel oil. Temperature, time and enzyme amount were found to be statistically significant factors and the optimal conditions were determined to be as follows: temperature 30°C, fusel oil concentration 9% (v/w), reaction time 24 h, pH 6.2 and enzyme amount 0.26 g. Under the optimised conditions, a yield of 14.25 mg/g (based on cream weight) and a signal-to-noise (S/N) ratio of 23.07 dB were obtained. The results indicate that the Taguchi design method was an efficient and systematic approach to the optimisation of lipase-catalysed biological processes. Copyright © 2012 Elsevier Ltd. All rights reserved.
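
    The reported S/N ratio is consistent with Taguchi's larger-is-better definition. A quick check, simplifying to a single mean yield value (the study would pool replicate measurements):

```python
import math

def sn_larger_is_better(ys):
    """Taguchi larger-is-better S/N ratio in dB: -10*log10(mean(1/y^2))."""
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

# the optimum yield of 14.25 mg/g gives ~23.08 dB, matching the reported
# 23.07 dB to within rounding
print(round(sn_larger_is_better([14.25]), 2))
```

    For a single observation this collapses to 20*log10(y), which is why the yield and the S/N ratio carry the same information here; with replicates the S/N additionally penalises variability.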

  19. Optimising social information by game theory and ant colony method to enhance routing protocol in opportunistic networks

    Directory of Open Access Journals (Sweden)

    Chander Prabha

    2016-09-01

    Full Text Available The data loss and disconnection of nodes are frequent in opportunistic networks. The social information plays an important role in reducing the data loss because it depends on the connectivity of nodes. The appropriate selection of the next hop based on social information is critical for improving the performance of routing in opportunistic networks. The frequent disconnection problem is overcome by optimising the social information with the Ant Colony Optimization method, which depends on the topology of the opportunistic network. The proposed protocol is examined thoroughly via analysis and simulation in order to assess its performance in comparison with other social-based routing protocols in opportunistic networks under various parameter settings.
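
    The ant-colony style next-hop choice can be sketched as the usual ACO transition rule, with a social-tie measure playing the role of the heuristic. Node names, weights and exponents below are invented for illustration; the paper's protocol defines its own social metric and update rules.

```python
import random

def choose_next_hop(pheromone, social, alpha=1.0, beta=2.0, rng=random):
    """Pick the next hop with probability proportional to
    pheromone^alpha * social^beta (the standard ACO transition rule)."""
    nodes = list(pheromone)
    weights = [pheromone[n] ** alpha * social[n] ** beta for n in nodes]
    return rng.choices(nodes, weights=weights, k=1)[0]

pheromone = {"B": 0.6, "C": 0.3, "D": 0.1}   # learned trail strength
social    = {"B": 0.9, "C": 0.5, "D": 0.2}   # e.g. contact frequency with destination
hop = choose_next_hop(pheromone, social)
```

    Over many forwarding decisions, node "B" is selected most often, since it dominates on both pheromone and social score; the randomness keeps alternative paths alive when connectivity changes.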

  20. An efficient and robust method for shape-based image retrieval

    International Nuclear Information System (INIS)

    Salih, N.D.; Besar, R.; Abas, F.S.

    2007-01-01

    Shapes can be thought of as the words of the visual language. Shape boundaries need to be simplified and estimated in a wide variety of image analysis applications. Representation and description of shapes is one of the major problems in content-based image retrieval (CBIR). This paper presents a novel method for shape representation and description named block-based shape representation (BSR), which is capable of extracting reliable information about the object outline in a concise manner. Our technique is translation, scale, and rotation invariant. It works well on different types of shapes and is fast enough for use in real time. This technique has been implemented and evaluated in order to analyse its accuracy and efficiency. Based on the experimental results, we argue that the proposed BSR is a compact and reliable shape representation method. (author)

  1. Power supply of Eurotunnel. Optimisation based on traffic and simulation studies

    Energy Technology Data Exchange (ETDEWEB)

    Marie, Stephane [SNCF, Direction de l' Ingenierie, Saint-Denis (France). Dept. des Installations Fixes de Traction Electrique; Dupont, Jean-Pierre; Findinier, Bertrand; Maquaire, Christian [Eurotunnel, Coquelles (France)

    2010-12-15

    In order to reduce electrical power costs and also to cope with the significant traffic increase, a new study was carried out on feeding the tunnel section from the French power station, thus improving and reinforcing the existing network. Based on a design study established by SNCF engineering department, EUROTUNNEL chose a new electrical scheme to cope with the traffic increase and optimise investments. (orig.)

  2. Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study

    Science.gov (United States)

    Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M

    2017-01-01

    Background The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. Objective The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. Methods We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. Results We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). Conclusions In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms
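
    The cost-efficiency comparison reduces to a per-method average cost. The sketch below back-calculates total spend from the averages quoted in the abstract, so the totals are illustrative rather than the study's raw budget figures:

```python
# average cost per participant = total cost of a method / participants recruited
recruitment = {
    "online (own ads)": {"participants": 3985, "cost_eur": 3985 * 6.22},
    "offline":          {"participants": 803,  "cost_eur": 803 * 9.06},
}

def cost_per_participant(entry):
    return entry["cost_eur"] / entry["participants"]

for method, data in sorted(recruitment.items()):
    print(f"{method}: EUR {cost_per_participant(data):.2f} per participant")
```

    Note that the averages hide large within-category spreads (EUR 2.74 to 105.53 for individual online channels), so channel-level tracking via unique URLs, as the study did, is what makes the comparison meaningful.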

  3. Optimising a Model of Minimum Stock Level Control and a Model of Standing Order Cycle in Selected Foundry Plant

    Directory of Open Access Journals (Sweden)

    Szymszal J.

    2013-09-01

    Full Text Available It has been found that the area where one can look for significant reserves in the procurement logistics is a rational management of the stock of raw materials. Currently, the main purpose of projects which increase the efficiency of inventory management is to rationalise all the activities in this area, taking into account and minimising at the same time the total inventory costs. The paper presents a method for optimising the inventory level of raw materials under a foundry plant conditions using two different control models. The first model is based on the estimate of an optimal level of the minimum emergency stock of raw materials, giving information about the need for an order to be placed immediately and about the optimal size of consignments ordered after the minimum emergency level has occurred. The second model is based on the estimate of a maximum inventory level of raw materials and an optimal order cycle. Optimisation of the presented models has been based on the previously done selection and use of rational methods for forecasting the time series of the delivery of a chosen auxiliary material (ceramic filters to a casting plant, including forecasting a mean size of the delivered batch of products and its standard deviation.

  4. Optimising mobile phase composition, its flow-rate and column temperature in HPLC using taboo search.

    Science.gov (United States)

    Guillaume, Y C; Peyrin, E

    2000-03-06

    A chemometric methodology is proposed to study the separation of seven p-hydroxybenzoic esters in reversed-phase liquid chromatography (RPLC). Fifteen experiments were found to be necessary to find a mathematical model which linked a novel chromatographic response function (CRF) with the column temperature, the water fraction in the mobile phase and its flow rate. The CRF optimum was determined using a new algorithm based on Glover's taboo search (TS). A flow rate of 0.9 ml min⁻¹ with a water fraction of 0.64 in the ACN-water mixture and a column temperature of 10°C gave the most efficient separation conditions. The usefulness of TS was compared with the pure random search (PRS) and simplex search (SS). As demonstrated by calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimisation, this procedure is generally applicable, easy to implement, derivative-free, conceptually simple and could be used in the future for much more complex optimisation problems.
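
    A toy tabu search on a one-dimensional objective illustrates the mechanism exploited here: a recency-based tabu list forbids revisiting recent points, which helps the search escape local minima. This sketch is not the paper's CRF model; objective, step size and memory length are all illustrative.

```python
import random

def tabu_search(f, x0, step=0.1, iters=200, tabu_len=10, seed=0):
    """Minimal tabu search: move to the best sampled neighbour that is not
    on the tabu list, while tracking the best point seen overall."""
    rng = random.Random(seed)
    x = best = x0
    tabu = []                                     # recently visited points
    for _ in range(iters):
        neighbours = [x + rng.uniform(-step, step) for _ in range(20)]
        allowed = [n for n in neighbours
                   if round(n, 2) not in tabu] or neighbours
        x = min(allowed, key=f)                   # best admissible move
        tabu.append(round(x, 2))
        tabu = tabu[-tabu_len:]                   # fixed-length recency memory
        if f(x) < f(best):
            best = x
    return best

# toy objective with minimum at x = 2; the search walks there from x = 0
best = tabu_search(lambda x: (x - 2) ** 2, x0=0.0)
```

    The key design choice, as in Glover's original formulation, is that moves are accepted even when they worsen the objective, provided they are not tabu; only the separately stored incumbent is guaranteed to improve monotonically.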

  5. Energy efficiency analysis method based on fuzzy DEA cross-model for ethylene production systems in chemical industry

    International Nuclear Information System (INIS)

    Han, Yongming; Geng, Zhiqiang; Zhu, Qunxiong; Qu, Yixin

    2015-01-01

    DEA (data envelopment analysis) has been widely used for the efficiency analysis of industrial production processes. However, it is difficult for the conventional DEA model to distinguish the pros and cons of multiple DMUs (decision-making units). The DEACM (DEA cross-model) can distinguish the pros and cons of the effective DMUs, but it is unable to take the effect of uncertain data into account. This paper proposes an efficiency analysis method based on the FDEACM (fuzzy DEA cross-model) with fuzzy data. The proposed method has better objectivity and resolving power for decision-making. First we obtain the minimum, median and maximum values of the multi-criteria ethylene energy consumption data by data fuzzification. On the basis of the multi-criteria fuzzy data, the benchmarks of effective production situations and the improvement directions for ineffective ethylene plants under different production data configurations are obtained by the FDEACM. The experimental results show that the proposed method can improve ethylene production conditions and guide the efficiency of energy utilisation during the ethylene production process. - Highlights: • This paper proposes an efficiency analysis method based on FDEACM (fuzzy DEA cross-model) with data fuzzification. • The proposed method is more efficient and accurate than other methods. • We obtain an energy efficiency analysis framework and process based on FDEACM in the ethylene production industry. • The proposed method is valid and efficient in improving energy efficiency in ethylene plants
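
    The data fuzzification step, representing each uncertain criterion by its (minimum, median, maximum), can be sketched directly; the consumption readings below are invented:

```python
from statistics import median

def fuzzify(samples):
    """Represent uncertain measurements as a triangular fuzzy number
    (min, median, max), the pre-processing step applied to the
    multi-criteria energy consumption data before the fuzzy DEA cross-model."""
    return (min(samples), median(samples), max(samples))

energy_use = [412, 398, 405, 430, 401]   # illustrative consumption readings
print(fuzzify(energy_use))               # (398, 405, 430)
```

    Each DMU's inputs and outputs then become fuzzy triples rather than point values, and the cross-efficiency scores are computed over these triples, which is what gives the method its robustness to measurement uncertainty.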

  6. The Inertia Weight Updating Strategies in Particle Swarm Optimisation Based on the Beta Distribution

    Directory of Open Access Journals (Sweden)

    Petr Maca

    2015-01-01

    Full Text Available The presented paper deals with the comparison of selected random updating strategies for the inertia weight in particle swarm optimisation. Six versions of particle swarm optimisation were analysed on 28 benchmark functions, prepared for the Special Session on Real-Parameter Single Objective Optimisation at CEC2013. The random components of the tested inertia weights were generated from the Beta distribution with different values of the shape parameters. The best-performing version analysed is the multiswarm PSO, which combines two strategies for updating the inertia weight. The first is driven by temporally varying shape parameters, while the second is based on random control of the shape parameters of the Beta distribution.
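
    The Beta-distributed inertia weight amounts to a standard PSO velocity update in which w is redrawn from Beta(a, b) each iteration. The sketch below shows the idea on a 1-D objective; the shape parameters and acceleration coefficients are illustrative defaults, not the CEC2013 settings.

```python
import random

def pso_step(pos, vel, pbest, gbest, a=2.0, b=2.0, c1=1.5, c2=1.5, rng=random):
    """One velocity/position update with the inertia weight drawn fresh
    from a Beta(a, b) distribution, i.e. w is random in (0, 1) each step."""
    w = rng.betavariate(a, b)
    for i in range(len(pos)):
        r1, r2 = rng.random(), rng.random()
        vel[i] = (w * vel[i]
                  + c1 * r1 * (pbest[i] - pos[i])   # cognitive pull
                  + c2 * r2 * (gbest - pos[i]))     # social pull
        pos[i] += vel[i]
    return pos, vel

def optimise(f, n=10, iters=100, seed=1):
    """Minimise a 1-D function f with the Beta-inertia PSO sketch above."""
    rng = random.Random(seed)
    pos = [rng.uniform(-5, 5) for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]
    gbest = min(pbest, key=f)
    for _ in range(iters):
        pos, vel = pso_step(pos, vel, pbest, gbest, rng=rng)
        for i in range(n):
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=f)
    return gbest

best = optimise(lambda x: (x - 1) ** 2)   # converges near x = 1
```

    With a = b = 2 the weight has mean 0.5; varying the shape parameters over time, as the multiswarm variant does, shifts the balance between exploration (large w) and exploitation (small w).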

  7. Optimising Aesthetic Reconstruction of Scalp Soft Tissue by an Algorithm Based on Defect Size and Location.

    Science.gov (United States)

    Ooi, Adrian Sh; Kanapathy, Muholan; Ong, Yee Siang; Tan, Kok Chai; Tan, Bien Keem

    2015-11-01

    Scalp soft tissue defects are common and result from a variety of causes. Reconstructive methods should maximise cosmetic outcomes by maintaining hair-bearing tissue and aesthetic hairlines. This article outlines an algorithm based on a diverse clinical case series to optimise scalp soft tissue coverage. A retrospective analysis of scalp soft tissue reconstruction cases performed at the Singapore General Hospital between January 2004 and December 2013 was conducted. Forty-one patients were included in this study. The majority of defects aesthetic outcome while minimising complications and repeat procedures.

  8. Multi-terminal pipe routing by Steiner minimal tree and particle swarm optimisation

    Science.gov (United States)

    Liu, Qiang; Wang, Chengen

    2012-08-01

    Computer-aided design of pipe routing is of fundamental importance for the development of complex equipment. In this article, non-rectilinear branch pipe routing with multiple terminals, which can be formulated as a Euclidean Steiner Minimal Tree with Obstacles (ESMTO) problem, is studied in the context of aeroengine integrated design engineering. Unlike the traditional methods that connect pipe terminals sequentially, this article presents a new branch pipe routing algorithm based on the Steiner tree theory. The article begins with a new algorithm for solving the ESMTO problem by using particle swarm optimisation (PSO), and then extends the method to the surface cases by using geodesics to meet the requirements of routing non-rectilinear pipes on the surfaces of aeroengines. Subsequently, the adaptive region strategy and the basic visibility graph method are adopted to increase the computation efficiency. Numerical computations show that the proposed routing algorithm can find satisfactory routing layouts while running in polynomial time.
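
    As a building block of the ESMTO formulation, the optimal junction for a single Steiner point is the geometric median of its terminals, computable by Weiszfeld iteration. The paper instead places multiple Steiner points with PSO and handles obstacles; this is a simplified obstacle-free sketch.

```python
import math

def geometric_median(terminals, iters=100):
    """Weiszfeld iteration: the point minimising total Euclidean distance
    to the terminals, i.e. the single-junction pipe layout of least length."""
    x = sum(p[0] for p in terminals) / len(terminals)   # start at centroid
    y = sum(p[1] for p in terminals) / len(terminals)
    for _ in range(iters):
        nx = ny = den = 0.0
        for px, py in terminals:
            d = math.hypot(x - px, y - py) or 1e-12     # avoid division by zero
            nx += px / d
            ny += py / d
            den += 1.0 / d
        x, y = nx / den, ny / den                       # distance-weighted mean
    return x, y

# three terminals; by symmetry the junction lies on the axis x = 2
jx, jy = geometric_median([(0, 0), (4, 0), (2, 3)])
```

    For three terminals this recovers the Fermat point (here y = 2/sqrt(3) by the 120-degree condition); with many terminals and obstacles the junction positions interact, which is the combinatorial difficulty the PSO formulation addresses.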

  9. Automation of route identification and optimisation based on data-mining and chemical intuition.

    Science.gov (United States)

    Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G

    2017-09-21

    Data-mining of Reaxys and network analysis of the combined literature and in-house reactions set were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of data provides a rich knowledge-base for generation of the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of the continuous flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and generation of environmental and other performance indicators, such as cost indicators. However, the identified further challenge is to automate model generation to evolve optimal multi-step chemical routes and optimal process configurations.

  10. Topology optimisation of micro fluidic mixers considering fluid-structure interactions with a coupled Lattice Boltzmann algorithm

    Science.gov (United States)

    Munk, David J.; Kipouros, Timoleon; Vio, Gareth A.; Steven, Grant P.; Parks, Geoffrey T.

    2017-11-01

    Recently, the study of micro fluidic devices has gained much interest in various fields from biology to engineering. In the constant development cycle, the need to optimise the topology of the interior of these devices, where there are two or more optimality criteria, is always present. In this work, twin physical situations, whereby optimal fluid mixing in the form of vorticity maximisation is accompanied by the requirement that the casing in which the mixing takes place has the best structural performance in terms of the greatest specific stiffness, are considered. In the steady state of mixing this also means that the stresses in the casing are as uniform as possible, thus giving a desired operating life with minimum weight. The ultimate aim of this research is to couple two key disciplines, fluids and structures, into a topology optimisation framework, which shows fast convergence for multidisciplinary optimisation problems. This is achieved by developing a bi-directional evolutionary structural optimisation algorithm that is directly coupled to the Lattice Boltzmann method, used for simulating the flow in the micro fluidic device, for the objectives of minimum compliance and maximum vorticity. The need to explore larger design spaces and to produce innovative designs makes meta-heuristic algorithms, such as genetic algorithms, particle swarms and tabu searches, less efficient for this task. The multidisciplinary topology optimisation framework presented in this article is shown to increase the stiffness of the structure from the datum case and produce physically acceptable designs. Furthermore, the topology optimisation method outperforms a Tabu Search algorithm in designing the baffle to maximise the mixing of the two fluids.

  11. The optimisation of the laser-induced forward transfer process for fabrication of polyfluorene-based organic light-emitting diode pixels

    Science.gov (United States)

    Shaw-Stewart, James; Mattle, Thomas; Lippert, Thomas; Nagel, Matthias; Nüesch, Frank; Wokaun, Alexander

    2013-08-01

    Laser-induced forward transfer (LIFT) has already been used to fabricate various types of organic light-emitting diodes (OLEDs), and the process itself has been optimised and refined considerably since OLED pixels were first demonstrated. In particular, a dynamic release layer (DRL) of triazene polymer has been used, the environmental pressure has been reduced to a medium vacuum, and the donor–receiver gap has been controlled with the use of spacers. Insight into the LIFT process's effect upon OLED pixel performance is presented here, obtained through optimisation of three-colour polyfluorene-based OLEDs. A marked dependence of the pixel morphology quality on the cathode metal is observed, and the laser transfer fluence dependence is also analysed. The pixel device performances are compared to conventionally fabricated devices, and cathode effects have been examined in detail. The silver cathode pixels show more heterogeneous pixel morphologies and correspondingly poorer efficiency characteristics. The aluminium cathode pixels have greater green electroluminescent emission than both the silver cathode pixels and the conventionally fabricated aluminium devices, and the green emission has a fluence dependence for silver cathode pixels.

  12. The optimisation of the laser-induced forward transfer process for fabrication of polyfluorene-based organic light-emitting diode pixels

    Energy Technology Data Exchange (ETDEWEB)

    Shaw-Stewart, James, E-mail: james.shaw-stewart@ed.ac.uk [Materials Group, General Energies Department, Paul Scherrer Institut, CH-5232 Villigen-PSI (Switzerland); Laboratory for Functional Polymers, Empa Swiss Federal Laboratories for Materials Science and Technology, Überlandstrasse 129, CH-8600 Dübendorf (Switzerland); Mattle, Thomas [Materials Group, General Energies Department, Paul Scherrer Institut, CH-5232 Villigen-PSI (Switzerland); Lippert, Thomas, E-mail: thomas.lippert@psi.ch [Materials Group, General Energies Department, Paul Scherrer Institut, CH-5232 Villigen-PSI (Switzerland); Nagel, Matthias [Laboratory for Functional Polymers, Empa Swiss Federal Laboratories for Materials Science and Technology, Überlandstrasse 129, CH-8600 Dübendorf (Switzerland); Nüesch, Frank, E-mail: frank.nueesch@empa.ch [Laboratory for Functional Polymers, Empa Swiss Federal Laboratories for Materials Science and Technology, Überlandstrasse 129, CH-8600 Dübendorf (Switzerland); Section de science et génie des matériaux, EPFL, CH-1015 Lausanne (Switzerland); Wokaun, Alexander [Materials Group, General Energies Department, Paul Scherrer Institut, CH-5232 Villigen-PSI (Switzerland)

    2013-08-01

    Laser-induced forward transfer (LIFT) has already been used to fabricate various types of organic light-emitting diodes (OLEDs), and the process itself has been optimised and refined considerably since OLED pixels were first demonstrated. In particular, a dynamic release layer (DRL) of triazene polymer has been used, the environmental pressure has been reduced to a medium vacuum, and the donor–receiver gap has been controlled with the use of spacers. Insight into the LIFT process's effect upon OLED pixel performance is presented here, obtained through optimisation of three-colour polyfluorene-based OLEDs. A marked dependence of the pixel morphology quality on the cathode metal is observed, and the laser transfer fluence dependence is also analysed. The pixel device performances are compared to conventionally fabricated devices, and cathode effects have been examined in detail. The silver cathode pixels show more heterogeneous pixel morphologies and correspondingly poorer efficiency characteristics. The aluminium cathode pixels have greater green electroluminescent emission than both the silver cathode pixels and the conventionally fabricated aluminium devices, and the green emission has a fluence dependence for silver cathode pixels.

  13. An Energy Efficiency Evaluation Method Based on Energy Baseline for Chemical Industry

    OpenAIRE

    Yao, Dong-mei; Zhang, Xin; Wang, Ke-feng; Zou, Tao; Wang, Dong; Qian, Xin-hua

    2016-01-01

    According to the requirements and structure of the ISO 50001 energy management system, this study proposes an energy efficiency evaluation method based on an energy baseline for the chemical industry. Using this method, the effect of energy plan implementation in chemical production processes can be evaluated quantitatively, and evidence for system fault diagnosis can be provided. This method establishes the energy baseline models which can meet the demand of the different kinds of production proce...

  14. Milk bottom-up proteomics: method optimisation.

    Directory of Open Access Journals (Sweden)

    Delphine eVincent

    2016-01-01

    Full Text Available Milk is a complex fluid whose proteome displays a diverse set of proteins of high abundance, such as caseins, and of medium to low abundance, such as the whey proteins β-lactoglobulin, lactoferrin, immunoglobulins, glycoproteins, peptide hormones and enzymes. A sample preparation method that enables high reproducibility and throughput is key to reliably identifying the proteins present, or the proteins responding to conditions such as diet, health or genetics. Using skim milk samples from Jersey and Holstein-Friesian cows, we compared three extraction procedures which have not previously been applied to samples of cows’ milk. Method A (urea) involved a simple dilution of the milk in a urea-based buffer, method B (TCA/acetone) involved a trichloroacetic acid (TCA)/acetone precipitation and method C (methanol/chloroform) involved a tri-phasic partition method in chloroform/methanol solution. Protein assays, SDS-PAGE profiling, and trypsin digestion followed by nanoHPLC-electrospray ionisation-tandem mass spectrometry (nLC-ESI-MS/MS) analyses were performed to assess their efficiency. Replicates were used at each analytical step (extraction, digestion, injection) to assess reproducibility. Mass spectrometry (MS) data are available via ProteomeXchange with identifier PXD002529. Overall, 186 unique accessions, major and minor proteins, were identified with a combination of methods. Method C (methanol/chloroform) yielded the best resolved SDS patterns and highest protein recovery rates, method A (urea) yielded the greatest number of accessions, and, of the three procedures, method B (TCA/acetone) was the least compatible with a wide range of downstream analytical procedures. Our results also highlighted breed differences between the proteins in milk of Jersey and Holstein-Friesian cows.

  15. A critical evaluation of deterministic methods in size optimisation of reliable and cost effective standalone hybrid renewable energy systems

    International Nuclear Information System (INIS)

    Maheri, Alireza

    2014-01-01

    Reliability of a hybrid renewable energy system (HRES) strongly depends on various uncertainties affecting the amount of power produced by the system. In the design of systems subject to uncertainties, both deterministic and nondeterministic design approaches can be adopted. In a deterministic design approach, the designer considers the presence of uncertainties and incorporates them indirectly into the design by applying safety factors. It is assumed that, by employing suitable safety factors and considering worst-case scenarios, reliable systems can be designed. In fact, the multi-objective optimisation problem with two objectives of reliability and cost is reduced to a single-objective optimisation problem with the objective of cost only. In this paper the competence of deterministic design methods in size optimisation of reliable standalone wind–PV–battery, wind–PV–diesel and wind–PV–battery–diesel configurations is examined. For each configuration, first, using different values of safety factors, the optimal size of the system components which minimises the system cost is found deterministically. Then, for each case, using a Monte Carlo simulation, the effect of safety factors on the reliability and the cost is investigated. In performing reliability analysis, several reliability measures, namely, unmet load, blackout durations (total, maximum and average) and mean time between failures are considered. It is shown that the traditional methods of considering the effect of uncertainties in deterministic designs, such as design for an autonomy period and employing safety factors, have either little or unpredictable impact on the actual reliability of the designed wind–PV–battery configuration. In the case of wind–PV–diesel and wind–PV–battery–diesel configurations it is shown that, while using a high-enough margin of safety in sizing the diesel generator leads to reliable systems, the optimum value for this margin of safety leading to a

  16. Optimising of Steel Fiber Reinforced Concrete Mix Design | Beddar ...

    African Journals Online (AJOL)

    Optimising of Steel Fiber Reinforced Concrete Mix Design. ... as a result of the loss of mixture workability that will be translated into a difficult concrete casting in site. ... An experimental study of an optimisation method of fibres in reinforced ...

  17. Use of artificial intelligence techniques for optimisation of co-combustion of coal with biomass

    Energy Technology Data Exchange (ETDEWEB)

    Tan, C.K.; Wilcox, S.J.; Ward, J. [University of Glamorgan, Pontypridd (United Kingdom). Division of Mechanical Engineering

    2006-03-15

    The optimisation of burner operation in conventional pulverised-coal-fired boilers for co-combustion applications represents a significant challenge. This paper describes a strategic framework in which Artificial Intelligence (AI) techniques can be applied to solve such an optimisation problem. The effectiveness of the proposed system is demonstrated by a case study that simulates the co-combustion of coal with sewage sludge in a 500-kW pilot-scale combustion rig equipped with a swirl stabilised low-NOx burner. A series of Computational Fluid Dynamics (CFD) simulations were performed to generate data for different operating conditions, which were then used to train several Artificial Neural Networks (ANNs) to predict the co-combustion performance. Once trained, the ANNs were able to make estimations of unseen situations in a fraction of the time taken by the CFD simulation. Consequently, the networks were capable of representing the underlying physics of the CFD models and could be executed efficiently for a large number of iterations as required by optimisation techniques based on Evolutionary Algorithms (EAs). Four operating parameters of the burner, namely the swirl angles and flow rates of the secondary and tertiary combustion air, were optimised with the objective of minimising the NOx and CO emissions as well as the unburned carbon at the furnace exit. The results suggest that ANNs combined with EAs provide a useful tool for optimising co-combustion processes.
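    The surrogate-plus-evolutionary-algorithm loop described in this record can be sketched in a few lines. Note the assumptions: the surrogate function below is a hypothetical stand-in for the trained ANN (in the study it would return predicted NOx, CO and unburned carbon for a given burner setting), its optimum at (0.3, 0.7, 0.5, 0.4) is invented, and the search is a minimal truncation-selection scheme with Gaussian mutation rather than the authors' exact EA.

```python
import random

def surrogate_emissions(x):
    """Hypothetical stand-in for the trained ANN: maps four normalised
    burner settings (secondary/tertiary swirl angle and air flow rate)
    to a combined NOx/CO/unburned-carbon penalty, minimal at an
    assumed optimum of (0.3, 0.7, 0.5, 0.4)."""
    optimum = (0.3, 0.7, 0.5, 0.4)
    return sum((xi - oi) ** 2 for xi, oi in zip(x, optimum))

def evolve(fitness, dim=4, pop_size=30, generations=200, sigma=0.1, seed=1):
    """Minimal evolutionary algorithm: keep the better half of the
    population, refill it with Gaussian-mutated copies of the parents."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness)[: pop_size // 2]
        children = [[min(1.0, max(0.0, g + rng.gauss(0.0, sigma)))
                     for g in p] for p in parents]
        pop = parents + children
    return min(pop, key=fitness)

best_setting = evolve(surrogate_emissions)
```

The point of the pattern is that `surrogate_emissions` is cheap, so the EA can afford the thousands of evaluations that a full CFD run could never supply.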

  18. Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study.

    Science.gov (United States)

    Christensen, Tina; Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M

    2017-03-01

    The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms of efficiency (total number of participants

  19. Structural-electrical coupling optimisation for radiating and scattering performances of active phased array antenna

    Science.gov (United States)

    Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng

    2018-04-01

    It is well known that calculating and reducing the radar cross section (RCS) of an active phased array antenna (APAA) are both difficult and complicated, and balancing radiating and scattering performance while the RCS is reduced remains an unresolved problem. Therefore, this paper develops a structure and scattering array factor coupling model of the APAA based on the phase errors of the radiated elements generated by structural distortion and installation error of the array. To obtain the optimal radiating and scattering performance, an integrated optimisation model is built to optimise the installation height of all the radiated elements in the normal direction of the array, in which the particle swarm optimisation method is adopted and the gain loss and scattering array factor are selected as the fitness function. The simulation indicates that the proposed coupling model and integrated optimisation method can effectively decrease the RCS while simultaneously guaranteeing the necessary radiating performance, which demonstrates important application value in the engineering design and structural evaluation of APAAs.
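    The particle swarm optimisation step named in this record can be sketched minimally. Everything numeric here is an assumption: the fitness is a toy height-error penalty standing in for the paper's coupled gain-loss/scattering-array-factor objective, and the swarm parameters are textbook defaults rather than the authors' values.

```python
import random

def fitness(heights):
    # Toy objective standing in for the coupled gain-loss / scattering
    # array factor: penalise each element's installation-height error.
    return sum(h * h for h in heights)

def pso(fitness, dim=8, n_particles=20, iters=150, seed=2):
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia, cognitive and social weights
    pos = [[rng.uniform(-1.0, 1.0) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]             # each particle's best position
    gbest = min(pbest, key=fitness)[:]      # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

best_heights = pso(fitness)
```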

  20. A method to identify energy efficiency measures for factory systems based on qualitative modeling

    CERN Document Server

    Krones, Manuela

    2017-01-01

    Manuela Krones develops a method that supports factory planners in generating energy-efficient planning solutions. The method provides qualitative description concepts for factory planning tasks and energy efficiency knowledge as well as an algorithm-based linkage between these measures and the respective planning tasks. Its application is guided by a procedure model which allows a general applicability in the manufacturing sector. The results contain energy efficiency measures that are suitable for a specific planning task and reveal the roles of various actors for the measures’ implementation. Contents Driving Concerns for and Barriers against Energy Efficiency Approaches to Increase Energy Efficiency in Factories Socio-Technical Description of Factory Planning Tasks Description of Energy Efficiency Measures Case Studies on Welding Processes and Logistics Systems Target Groups Lecturers and Students of Industrial Engineering, Production Engineering, Environmental Engineering, Mechanical Engineering Practi...

  1. Optimised heat recovery steam generators for integrated solar combined cycle plants

    Science.gov (United States)

    Peterseim, Jürgen H.; Huschka, Karsten

    2017-06-01

    The cost of concentrating solar power (CSP) plants is decreasing but, due to the cost differences and the currently limited value of energy storage, implementation of new facilities is still slow compared to photovoltaic systems. One recognised option to lower cost instantly is the hybridisation of CSP with other energy sources, such as natural gas or biomass. Various references exist for the combination of CSP with natural gas in combined cycle plants, also known as Integrated Solar Combined Cycle (ISCC) plants. One problem with current ISCC concepts is the so-called ISCC crisis, which occurs when CSP is not contributing and cycle efficiency falls below the efficiency levels of conventional natural-gas-fired combined cycle plants. This paper analyses current ISCC concepts and compares them with two optimised designs. The comparison is based on a Kuraymat-type ISCC plant and shows that cycle optimisation enables a net capacity increase of 1.4% and additional daily generation of up to 7.9%. The specific investment of the optimised Integrated Solar Combined Cycle plant results in a 0.4% cost increase, which is below the additional net capacity and daily generation increase.

  2. Utility systems operation: Optimisation-based decision making

    International Nuclear Information System (INIS)

    Velasco-Garcia, Patricia; Varbanov, Petar Sabev; Arellano-Garcia, Harvey; Wozny, Guenter

    2011-01-01

    Utility systems provide heat and power to industrial sites. The importance of operating these systems in an optimal way has increased significantly due to the unstable and, in the long term, rising prices of fossil fuels, as well as the need to reduce greenhouse gas emissions. This paper presents an analysis of the problem of supporting operator decision making under conditions of variable steam demands from the production processes on an industrial site. An optimisation model has been developed in which, besides the costs of running the utility system, the costs associated with starting up the operating units are also modelled. The illustrative case study shows that accounting for the shut-downs and start-ups of utility operating units can bring significant cost savings. - Highlights: → Optimisation methodology for decision making on running utility systems. → Accounting for varying steam demands. → Optimal operating specifications when a demand change occurs. → Operating costs include start-up costs of boilers and other units. → Validated on a real-life case study. Up to 20% cost savings are possible.
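    The effect of including start-up costs in the objective can be illustrated with a toy scheduling problem. The two boilers, their capacities, demands and costs below are invented for illustration, and the exhaustive on/off enumeration stands in for the paper's real optimisation model: note how the cheapest schedule keeps the low-running-cost unit on continuously rather than restarting it.

```python
from itertools import product

# Hourly steam demand (t/h) and two hypothetical boilers, each with a
# capacity, an hourly running cost when on, and a one-off start-up cost.
demand = [40, 40, 90, 90, 40, 40]
boilers = [
    {"cap": 50, "run": 100.0, "start": 400.0},
    {"cap": 50, "run": 120.0, "start": 150.0},
]

def schedule_cost(schedule):
    """schedule[t][b] is 1 if boiler b is on during hour t."""
    cost = 0.0
    for t, hour in enumerate(schedule):
        if sum(boilers[b]["cap"] * on for b, on in enumerate(hour)) < demand[t]:
            return float("inf")   # steam demand not met: infeasible
        for b, on in enumerate(hour):
            if on:
                cost += boilers[b]["run"]
                if t == 0 or not schedule[t - 1][b]:
                    cost += boilers[b]["start"]   # unit was just started up
    return cost

# Exhaustive search over all on/off schedules (4**6 = 4096 candidates);
# a real utility system would need a proper MILP solver instead.
states = list(product([0, 1], repeat=len(boilers)))
best_schedule = min(product(states, repeat=len(demand)), key=schedule_cost)
```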

  3. Improving linear transport infrastructure efficiency by automated learning and optimised predictive maintenance techniques (INFRALERT)

    Science.gov (United States)

    Jiménez-Redondo, Noemi; Calle-Cordón, Alvaro; Kandler, Ute; Simroth, Axel; Morales, Francisco J.; Reyes, Antonio; Odelius, Johan; Thaduri, Aditya; Morgado, Joao; Duarte, Emmanuele

    2017-09-01

    The on-going H2020 project INFRALERT aims to increase rail and road infrastructure capacity in the current framework of increased transportation demand by developing and deploying solutions to optimise the planning of maintenance interventions. It includes two real pilots for road and railway infrastructure. INFRALERT develops an ICT platform (the expert-based Infrastructure Management System, eIMS) which follows a modular approach including several expert-based toolkits. This paper presents the methodologies and preliminary results of the toolkits for i) nowcasting and forecasting of asset condition, ii) alert generation, iii) RAMS & LCC analysis and iv) decision support. The results of these toolkits in a meshed road network in Portugal under the jurisdiction of Infraestruturas de Portugal (IP) are presented, showing the capabilities of the approaches.

  4. Hydroxyapatite, fluor-hydroxyapatite and fluorapatite produced via the sol-gel method. Optimisation, characterisation and rheology.

    Science.gov (United States)

    Tredwin, Christopher J; Young, Anne M; Georgiou, George; Shin, Song-Hee; Kim, Hae-Won; Knowles, Jonathan C

    2013-02-01

    Currently, most titanium implant coatings are made using hydroxyapatite and a plasma spraying technique. There are, however, limitations associated with plasma spraying processes, including poor adherence, high porosity and cost. An alternative method utilising the sol-gel technique offers many potential advantages but is currently lacking research data for this application. It was the objective of this study to characterise and optimise the production of hydroxyapatite (HA), fluor-hydroxyapatite (FHA) and fluorapatite (FA) using a sol-gel technique and to assess the rheological properties of these materials. HA, FHA and FA were synthesised by a sol-gel method. Calcium nitrate and triethylphosphite were used as precursors under an ethanol-water based solution. Different amounts of ammonium fluoride (NH4F) were incorporated for the preparation of the sol-gel derived FHA and FA. Optimisation of the chemistry and subsequent characterisation of the sol-gel derived materials was carried out using X-ray Diffraction (XRD) and Differential Thermal Analysis (DTA). Rheology of the sol-gels was investigated using a viscometer and contact angle measurement. A protocol was established that allowed synthesis of HA, FHA and FA that were at least 99% phase pure. The more fluoride incorporated into the apatite structure, the lower the crystallisation temperature, the smaller the unit cell size (changes in the a-axis), and the higher the viscosity and contact angle of the sol-gel derived apatite. A technique has been developed for the production of HA, FHA and FA by the sol-gel technique. Increasing fluoride substitution in the apatite structure alters the potential coating properties. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  5. Genetic algorithms and artificial neural networks for loading pattern optimisation of advanced gas-cooled reactors

    Energy Technology Data Exchange (ETDEWEB)

    Ziver, A.K. E-mail: a.k.ziver@imperial.ac.uk; Pain, C.C; Carter, J.N.; Oliveira, C.R.E. de; Goddard, A.J.H.; Overton, R.S

    2004-03-01

    A non-generational genetic algorithm (GA) has been developed for fuel management optimisation of Advanced Gas-Cooled Reactors, which are operated by British Energy and produce around 20% of the UK's electricity requirements. An evolutionary search is coded using the genetic operators, namely tournament selection, two-point crossover, mutation and random assessment of the population, for multi-cycle loading pattern (LP) optimisation. A detailed description of the chromosome coding used in the genetic algorithm is presented. Artificial Neural Networks (ANNs) have been constructed and trained to accelerate the GA-based search during the optimisation process. The whole package, called GAOPT, is linked to the reactor analysis code PANTHER, which performs fresh fuel loading, burn-up and power shaping calculations for each reactor cycle by imposing station-specific safety and operational constraints. GAOPT has been verified by performing a number of tests applied to the Hinkley Point B and Hartlepool reactors. The test results, giving loading pattern (LP) scenarios obtained from single and multi-cycle optimisation calculations applied to realistic reactor states of the Hartlepool and Hinkley Point B reactors, are discussed. The results have shown that the GA/ANN algorithms developed can help the fuel engineer to optimise loading patterns in an efficient and more profitable way than currently available for multi-cycle refuelling of AGRs. Research leading to parallel GAs applied to LP optimisation is outlined, which can be adapted to present-day LWR fuel management problems.
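    The genetic operators named in this record (tournament selection, two-point crossover, mutation) combined in a non-generational, steady-state loop can be sketched as follows. The fitness function is a toy stand-in for the PANTHER reactor-physics evaluation, and the chromosome is a simplified binary loading pattern with an invented "no adjacent fresh assemblies" penalty; only the operator structure mirrors the record.

```python
import random

def fitness(chrom):
    # Toy stand-in for the PANTHER evaluation: reward fresh-fuel
    # positions (1s) but penalise two adjacent fresh assemblies,
    # mimicking a simple operational constraint.
    ones = sum(chrom)
    adjacent = sum(1 for a, b in zip(chrom, chrom[1:]) if a == b == 1)
    return ones - 3 * adjacent

def tournament(pop, rng, k=3):
    """Tournament selection: best of k randomly drawn individuals."""
    return max(rng.sample(pop, k), key=fitness)

def two_point_crossover(a, b, rng):
    i, j = sorted(rng.sample(range(len(a)), 2))
    return a[:i] + b[i:j] + a[j:]

def mutate(chrom, rng, rate=0.05):
    return [1 - g if rng.random() < rate else g for g in chrom]

def ga(n_genes=20, pop_size=40, steps=2000, seed=3):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(steps):
        # Non-generational (steady-state): one child per step replaces
        # the worst individual if it improves on it.
        child = mutate(two_point_crossover(tournament(pop, rng),
                                           tournament(pop, rng), rng), rng)
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child
    return max(pop, key=fitness)

best_pattern = ga()
```

The steady-state replacement is what makes the GA "non-generational": the population is updated one individual at a time instead of wholesale each generation.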

  6. Application of the adjoint optimisation of shock control bump for ONERA-M6 wing

    Science.gov (United States)

    Nejati, A.; Mazaheri, K.

    2017-11-01

    This article is devoted to the numerical investigation of the shock wave/boundary layer interaction (SWBLI) as the main factor influencing the aerodynamic performance of transonic bumped airfoils and wings. The numerical analysis is conducted for the ONERA-M6 wing through a shock control bump (SCB) shape optimisation process using the adjoint optimisation method. SWBLI is analysed for both clean and bumped airfoils and wings, and it is shown how the modified wave structure originating from upstream of the SCB reduces the wave drag, by improving the boundary layer velocity profile downstream of the shock wave. The numerical simulation of the turbulent viscous flow and a gradient-based adjoint algorithm are used to find the optimum location and shape of the SCB for the ONERA-M6 airfoil and wing. Two different geometrical models are introduced for the 3D SCB, one with linear variations, and another with periodic variations. Both configurations result in drag reduction and improvement in the aerodynamic efficiency, but the periodic model is more effective. Although the three-dimensional flow structure involves much more complexities, the overall results are shown to be similar to the two-dimensional case.

  7. Infrastructure optimisation via MBR retrofit: a design guide.

    Science.gov (United States)

    Bagg, W K

    2009-01-01

    Wastewater management is continually evolving with the development and implementation of new, more efficient technologies. One of these is the Membrane Bioreactor (MBR). Although a relatively new technology in Australia, MBR wastewater treatment has been widely used elsewhere for over 20 years, with thousands of MBRs now in operation worldwide. Over the past 5 years, MBR technology has been enthusiastically embraced in Australia as a potential treatment upgrade option, and via retrofit typically offers two major benefits: (1) more capacity using mostly existing facilities, and (2) very high quality treated effluent. However, infrastructure optimisation via MBR retrofit is not a simple or low-cost solution and there are many factors which should be carefully evaluated before deciding on this method of plant upgrade. The paper reviews a range of design parameters which should be carefully evaluated when considering an MBR retrofit solution. Several actual and conceptual case studies are considered to demonstrate both advantages and disadvantages. Whilst optimising existing facilities and production of high quality water for reuse are powerful drivers, it is suggested that MBRs are perhaps not always the most sustainable Whole-of-Life solution for a wastewater treatment plant upgrade, especially by way of a retrofit.

  8. Calorimetric Measurement for Internal Conversion Efficiency of Photovoltaic Cells/Modules Based on Electrical Substitution Method

    Science.gov (United States)

    Saito, Terubumi; Tatsuta, Muneaki; Abe, Yamato; Takesawa, Minato

    2018-02-01

    We have succeeded in the direct measurement of solar cell/module internal conversion efficiency based on a calorimetric method, or electrical substitution method, by which the absorbed radiant power is determined by replacing the heat absorbed in the cell/module with electrical power. The technique is advantageous in that the reflectance and transmittance measurements required by conventional methods are not necessary. Also, the internal quantum efficiency can be derived from the conversion efficiencies by using the average photon energy. Agreement of the measured data with values estimated from the nominal specifications supports the validity of this technique.
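    The underlying arithmetic can be illustrated as follows, with all measurement values assumed for illustration (none are from the paper): the substitution heating power that reproduces the light-induced temperature rise, plus the electrical power extracted from the cell, gives the absorbed radiant power, from which the internal conversion efficiency and, via the average photon energy, the internal quantum efficiency follow.

```python
E_CHARGE = 1.602176634e-19   # elementary charge, C

# Assumed illustrative measurements (not values from the paper):
P_sub = 0.70      # substitution power matching the light-induced heating, W
P_out = 0.30      # electrical power delivered by the illuminated cell, W
E_avg_eV = 1.6    # average photon energy of the source spectrum, eV
V_op = 0.6        # cell operating voltage, V

# Absorbed radiant power: the heat replaced by electrical substitution
# plus the power extracted electrically (which never appeared as heat).
P_abs = P_sub + P_out
eta_internal = P_out / P_abs            # internal conversion efficiency

# Internal quantum efficiency: electrons extracted per absorbed photon,
# derived via the average photon energy.
I_op = P_out / V_op                     # operating current, A
electrons_per_s = I_op / E_CHARGE
photons_per_s = P_abs / (E_avg_eV * E_CHARGE)
iqe = electrons_per_s / photons_per_s
```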

  9. Investigation of Heat Sink Efficiency for Electronic Component Cooling Applications

    DEFF Research Database (Denmark)

    Staliulionis, Ž.; Zhang, Zhe; Pittini, Riccardo

    2014-01-01

    Research and optimisation of cooling of electronic components using heat sinks becomes increasingly important in modern industry. Numerical methods with experimental real-world verification are the main tools to evaluate the efficiency of heat sinks or heat sink systems. Here the investigation of a relatively simple heat sink application is performed using modeling based on the finite element method, and the potential of such analysis is demonstrated by real-world measurements and comparison of the obtained results. Thermal modeling was accomplished using the finite element analysis software COMSOL and thermo...

  10. Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods.

    Science.gov (United States)

    Arcos-García, Álvaro; Álvarez-García, Juan A; Soria-Morillo, Luis M

    2018-03-01

    This paper presents a Deep Learning approach for traffic sign recognition systems. Several classification experiments are conducted over publicly available traffic sign datasets from Germany and Belgium using a Deep Neural Network which comprises Convolutional layers and Spatial Transformer Networks. Such trials are built to measure the impact of diverse factors with the end goal of designing a Convolutional Neural Network that can improve the state-of-the-art of traffic sign classification task. First, different adaptive and non-adaptive stochastic gradient descent optimisation algorithms such as SGD, SGD-Nesterov, RMSprop and Adam are evaluated. Subsequently, multiple combinations of Spatial Transformer Networks placed at distinct positions within the main neural network are analysed. The recognition rate of the proposed Convolutional Neural Network reports an accuracy of 99.71% in the German Traffic Sign Recognition Benchmark, outperforming previous state-of-the-art methods and also being more efficient in terms of memory requirements. Copyright © 2018 Elsevier Ltd. All rights reserved.
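    The optimiser comparison described in this record can be illustrated on a toy problem. The snippet below implements the standard plain SGD and Adam update rules (not the paper's training code) on an ill-conditioned quadratic standing in for a network loss; the learning rate, step count and loss surface are all assumptions chosen for illustration.

```python
import math

def grad(w):
    # Gradient of the toy loss f(w) = 0.5*(w0^2 + 10*w1^2), an
    # ill-conditioned quadratic standing in for a network's loss surface.
    return [w[0], 10.0 * w[1]]

def sgd(w, lr=0.05, steps=200):
    w = w[:]
    for _ in range(steps):
        w = [wi - lr * gi for wi, gi in zip(w, grad(w))]
    return w

def adam(w, lr=0.05, steps=200, b1=0.9, b2=0.999, eps=1e-8):
    w = w[:]
    m = [0.0] * len(w)   # first-moment (momentum) estimate
    v = [0.0] * len(w)   # second-moment estimate
    for t in range(1, steps + 1):
        g = grad(w)
        m = [b1 * mi + (1 - b1) * gi for mi, gi in zip(m, g)]
        v = [b2 * vi + (1 - b2) * gi * gi for vi, gi in zip(v, g)]
        mhat = [mi / (1 - b1 ** t) for mi in m]   # bias correction
        vhat = [vi / (1 - b2 ** t) for vi in v]
        w = [wi - lr * mh / (math.sqrt(vh) + eps)
             for wi, mh, vh in zip(w, mhat, vhat)]
    return w

w_sgd = sgd([5.0, 5.0])
w_adam = adam([5.0, 5.0])
```

Adam's per-coordinate normalisation is what lets it handle the mismatched curvatures without per-axis learning-rate tuning.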

  11. Efficiency profile method to study the hit efficiency of drift chambers

    International Nuclear Information System (INIS)

    Abyzov, A.; Bel'kov, A.; Lanev, A.; Spiridonov, A.; Walter, M.; Hulsbergen, W.

    2002-01-01

    A method based on the use of an efficiency profile is proposed to estimate the hit efficiency of drift chambers with a large number of channels. The performance of the method under real conditions of detector operation has been tested by analysing the experimental data from the HERA-B drift chambers.
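    A sketch of the idea, under the simplifying assumption that the profile is binned by track-to-wire distance: the efficiency in each bin is the ratio of found hits to expected track crossings. The synthetic crossings and the drop of efficiency near the cell edge are invented for illustration; the real analysis uses reconstructed tracks and hits.

```python
import random

def efficiency_profile(crossings, n_bins=10, cell_half_width=1.0):
    """Hit efficiency versus track-to-wire distance: found hits divided
    by expected crossings in each distance bin."""
    expected = [0] * n_bins
    found = [0] * n_bins
    for distance, hit in crossings:
        b = min(int(distance / cell_half_width * n_bins), n_bins - 1)
        expected[b] += 1
        found[b] += hit
    return [f / e if e else None for f, e in zip(found, expected)]

# Synthetic crossings with an assumed efficiency drop near the cell edge.
rng = random.Random(0)
crossings = []
for _ in range(20000):
    d = rng.random()                       # distance from wire, cell units
    eff = 0.98 if d < 0.8 else 0.70        # toy inefficiency model
    crossings.append((d, 1 if rng.random() < eff else 0))

profile = efficiency_profile(crossings)
```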

  12. Optimisation of X-ray examinations: General principles and an Irish perspective

    International Nuclear Information System (INIS)

    Matthews, Kate; Brennan, Patrick C.

    2009-01-01

    In Ireland, the European Medical Exposures Directive [Council Directive 97/43] was enacted into national law in Statutory Instrument 478 of 2002. This series of three review articles discusses the status of justification and optimisation of X-ray examinations nationally, and progress with the establishment of Irish diagnostic reference levels. In this second article, literature relating to optimisation issues arising in SI 478 of 2002 is reviewed. Optimisation associated with X-ray equipment and optimisation during day-to-day practice are considered. Optimisation proposals found in published research are summarised, and indicate the complex nature of optimisation. A paucity of current, research-based guidance documentation is identified. This is needed in order to support a range of professional staff in their practical implementation of optimisation.

  13. Characterisation and optimisation of a method for the detection and quantification of atmospherically relevant carbonyl compounds in aqueous medium

    Science.gov (United States)

    Rodigast, M.; Mutzel, A.; Iinuma, Y.; Haferkorn, S.; Herrmann, H.

    2015-01-01

    Carbonyl compounds are ubiquitous in the atmosphere and are either emitted primarily from anthropogenic and biogenic sources or produced secondarily from the oxidation of volatile organic compounds (VOCs). Despite a number of studies on the quantification of carbonyl compounds, comprehensive descriptions of optimised methods for the quantification of atmospherically relevant carbonyl compounds are scarce. Thus, a method was systematically characterised and improved to quantify carbonyl compounds. Quantification with the present method can be carried out for each carbonyl compound sampled in the aqueous phase regardless of its source. The method optimisation was conducted for seven atmospherically relevant carbonyl compounds: acrolein, benzaldehyde, glyoxal, methyl glyoxal, methacrolein, methyl vinyl ketone and 2,3-butanedione. O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride (PFBHA) was used as the derivatisation reagent and the formed oximes were detected by gas chromatography/mass spectrometry (GC/MS). The main advantage of the improved method presented in this study is the low detection limit, in the range of 0.01 to 0.17 μmol L-1 depending on the carbonyl compound. Furthermore, the best results were found for extraction with dichloromethane for 30 min, followed by derivatisation with PFBHA for 24 h with 0.43 mg mL-1 PFBHA at a pH value of 3. The optimised method was evaluated in the present study by the OH-radical-initiated oxidation of 3-methylbutanone in the aqueous phase. Methyl glyoxal and 2,3-butanedione were found as oxidation products in the samples, with yields of 2% for methyl glyoxal and 14% for 2,3-butanedione.

  14. Topology optimised photonic crystal waveguide intersections with high-transmittance and low crosstalk

    DEFF Research Database (Denmark)

    Ikeda, N; Sugimoto, Y; Watanabe, Y

    2006-01-01

    Numerical and experimental studies on the photonic crystal waveguide intersection based on the topology optimisation design method are reported and the effectiveness of this technique is shown by achieving high transmittance spectra with low crosstalk for the straightforward beam-propagation line...

  15. Visual grading characteristics and ordinal regression analysis during optimisation of CT head examinations.

    Science.gov (United States)

    Zarb, Francis; McEntee, Mark F; Rainford, Louise

    2015-06-01

    To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that optimised protocols had image quality similar to that of current protocols. Ordinal logistic regression analysis provided an in-depth evaluation, criterion by criterion, allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24 % to 36 %. In the second centre a 29 % reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.

  16. Optimising stochastic trajectories in exact quantum jump approaches of interacting systems

    International Nuclear Information System (INIS)

    Lacroix, D.

    2004-11-01

    The standard methods used to substitute the quantum dynamics of two interacting systems by a quantum jump approach based on the Stochastic Schroedinger Equation (SSE) are described. It turns out that, for a given situation, there exists an infinite number of SSE reformulations. This fact is used to propose general strategies to optimise the stochastic paths in order to reduce the statistical fluctuations. In this procedure, called the 'adaptive noise method', a specific SSE is obtained for which the noise depends explicitly on both the initial state and the properties of the interaction Hamiltonian. It is also shown that this method can be further improved by the introduction of a mean-field dynamics. The different optimisation procedures are illustrated quantitatively in the case of interacting spins. A significant reduction of the statistical fluctuations is obtained. Consequently, a much smaller number of trajectories is needed to accurately reproduce the exact dynamics as compared to the standard SSE method. (author)

  17. Artificial Intelligence Mechanisms on Interactive Modified Simplex Method with Desirability Function for Optimising Surface Lapping Process

    Directory of Open Access Journals (Sweden)

    Pongchanun Luangpaiboon

    2014-01-01

    A study has been made to optimise the influential parameters of the surface lapping process. Lapping time, lapping speed, downward pressure, and charging pressure were chosen from the preliminary studies as parameters to determine process performance in terms of material removal, lap width, and clamp force. Desirability functions of the nominal-the-best type were used to compromise multiple responses into the overall desirability function level, or D response. The conventional modified simplex (Nelder-Mead simplex) method and the interactive desirability function are performed online to optimise the parameter levels in order to maximise the D response. In order to determine the lapping process parameters effectively, this research then applies two powerful artificial intelligence optimisation mechanisms: harmony search and firefly algorithms. The recommended condition of (lapping time, lapping speed, downward pressure, charging pressure) = (33, 35, 6.0, 5.0) has been verified by performing confirmation experiments. It showed that the D response level increased to 0.96. When compared with the current operating condition, there is a decrease of the material removal and lap width with improved process performance indices of 2.01 and 1.14, respectively. Similarly, there is an increase of the clamp force with an improved process performance index of 1.58.
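
    The nominal-the-best desirability aggregation this record describes can be sketched as follows; the specification limits and targets below are illustrative placeholders, not the lapping study's actual values:

```python
def desirability_nominal(y, low, target, high, s=1.0, t=1.0):
    """Nominal-the-best desirability: 1.0 at the target, falling to 0.0
    at the lower/upper specification limits."""
    if y < low or y > high:
        return 0.0
    if y <= target:
        return ((y - low) / (target - low)) ** s
    return ((high - y) / (high - target)) ** t

def overall_desirability(responses, specs):
    """Overall D response: geometric mean of the individual desirabilities."""
    ds = [desirability_nominal(y, *spec) for y, spec in zip(responses, specs)]
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Example: two responses, the first exactly on target, the second halfway
# between its lower limit and its target.
specs = [(0.0, 5.0, 10.0), (0.0, 10.0, 20.0)]
D = overall_desirability([5.0, 5.0], specs)   # d1 = 1.0, d2 = 0.5
```

    A simplex search such as Nelder-Mead, or the harmony search and firefly algorithms mentioned above, would then maximise D over the four process parameters.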

  18. Application of ant colony optimisation in distribution transformer sizing

    African Journals Online (AJOL)

    This study proposes an optimisation method for transformer sizing in power systems using ant colony optimisation and a verification of the process by MATLAB software. The aim is to address the issue of transformer sizing, which is a major challenge affecting its effective performance, longevity, huge capital cost and power ...

  19. Synthesis of A Sustainable Sago-Based Value Chain via Fuzzy Optimisation Approach

    Directory of Open Access Journals (Sweden)

    Chong Jeffrey Hong Seng

    2018-01-01

    Sago starch is one of the staple foods for humans, especially in Asia. It can be produced via the sago starch extraction process (SSEP). During the SSEP, several types of sago wastes are generated, such as sago fibre (SF), sago bark (SB) and sago wastewater (SW). With the increase in production of existing factories and sago mills, the sago industry's practice in waste disposal management is gaining more attention, thus the implementation of effective waste management is vital. One of the promising ways to achieve effective waste management is to create value out of the sago wastes. In a recent study, a sago-based refinery, which is a facility to convert sago wastes into value-added products (e.g., bio-ethanol and energy), was found feasible. However, the conversion of other value-added products from sago wastes while considering the environmental impact has not been considered in the sago value chain. Therefore, this work aims to synthesise an optimum sago value chain involving conversion activities of sago wastes into value-added products. The optimum sago value chain is evaluated based on profit and carbon emissions using a fuzzy-based optimisation approach via a commercial optimisation software, Lingo 16.0. To illustrate the developed approach, an industrial case study has been solved in this work.
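
    The max-min (fuzzy) trade-off between profit and carbon emissions used in such studies can be sketched on a toy decision; all coefficients and bounds below are invented for illustration, not taken from the sago case study:

```python
def membership_max(value, worst, best):
    """Linear membership for a benefit objective (higher is better)."""
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

def membership_min(value, worst, best):
    """Linear membership for a cost objective (lower is better)."""
    return max(0.0, min(1.0, (worst - value) / (worst - best)))

# Toy decision: fraction x of sago fibre routed to bio-ethanol,
# the remainder burnt for energy. Coefficients are hypothetical.
def profit(x):    return 25.0 + 15.0 * x   # $/t, best at x = 1
def emissions(x): return 12.0 + 8.0 * x    # t CO2, best at x = 0

# Fuzzy optimisation maximises lambda = min of the two memberships.
best_x, best_lambda = None, -1.0
for i in range(1001):
    x = i / 1000.0
    lam = min(membership_max(profit(x), 25.0, 40.0),
              membership_min(emissions(x), 20.0, 12.0))
    if lam > best_lambda:
        best_x, best_lambda = x, lam
```

    With these linear memberships the two objectives conflict symmetrically, so the max-min compromise lands at x = 0.5 with lambda = 0.5; a solver like Lingo handles the same construction for full-size value chains.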

  20. SINGLE FIXED CRANE OPTIMISATION WITHIN A DISTRIBUTION CENTRE

    Directory of Open Access Journals (Sweden)

    J. Matthews

    2012-01-01

    ENGLISH ABSTRACT: This paper considers the optimisation of the movement of a fixed crane operating in a single aisle of a distribution centre. The crane must move pallets in inventory between docking bays, storage locations, and picking lines. Both a static and a dynamic approach to the problem are presented. The optimisation is performed by means of tabu search, ant colony metaheuristics, and hybrids of these two methods. All these solution approaches were tested on real-life data obtained from an operational distribution centre. Results indicate that the hybrid methods outperform the other approaches.

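
    The tabu search component can be sketched on a toy single-aisle instance; the pallet positions and the crane's start position below are invented, and the real problem also involves docking bays and picking lines:

```python
def route_cost(order, start=0.0):
    """Total crane travel along one aisle, visiting positions in sequence."""
    cost, pos = 0.0, start
    for p in order:
        cost += abs(p - pos)
        pos = p
    return cost

def tabu_search(positions, iters=60, tenure=3):
    """Best-improvement tabu search over the swap neighbourhood."""
    current = list(positions)
    best, best_cost = list(current), route_cost(current)
    tabu = {}  # swap (i, j) -> iteration until which the move stays tabu
    for k in range(iters):
        move, cand, cand_cost = None, None, float("inf")
        for i in range(len(current)):
            for j in range(i + 1, len(current)):
                nb = list(current)
                nb[i], nb[j] = nb[j], nb[i]
                c = route_cost(nb)
                # aspiration: a tabu move is allowed if it beats the best so far
                if (tabu.get((i, j), -1) < k or c < best_cost) and c < cand_cost:
                    move, cand, cand_cost = (i, j), nb, c
        if cand is None:
            break
        current = cand
        tabu[move] = k + tenure
        if cand_cost < best_cost:
            best, best_cost = list(cand), cand_cost
    return best, best_cost

order, cost = tabu_search([5.0, 1.0, 9.0, 3.0, 7.0])
```

    The hybrid methods in the paper would seed or repair such a search with ant colony solutions rather than a fixed initial permutation.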

  1. Methods of Increasing the Performance of Radionuclide Generators Used in Nuclear Medicine: Daughter Nuclide Build-Up Optimisation, Elution-Purification-Concentration Integration, and Effective Control of Radionuclidic Purity

    Directory of Open Access Journals (Sweden)

    Van So Le

    2014-06-01

    Methods of increasing the performance of radionuclide generators used in nuclear medicine radiotherapy and SPECT/PET imaging were developed and detailed for the 99Mo/99mTc and 68Ge/68Ga radionuclide generators as case studies. Optimisation methods for the daughter nuclide build-up versus stand-by time and/or specific activity, using mean progress functions, were developed to increase the performance of radionuclide generators. As a result of this optimisation, the separation of the daughter nuclide from its parent should be performed at a defined optimal time to avoid deterioration in the specific activity of the daughter nuclide and wasted stand-by time of the generator, while the daughter nuclide yield is maintained to a reasonably high extent. A new characteristic parameter of the formation-decay kinetics of the parent/daughter nuclide system was found and effectively used in the practice of generator production and utilisation. A method of “early elution schedule” was also developed to increase the daughter nuclide production yield and specific radioactivity, thus saving the cost of the generator and improving the quality of the daughter radionuclide solution. These newly developed optimisation methods, in combination with a recently developed integrated elution-purification-concentration system for radionuclide generators, are the most suitable way to operate the generator effectively on the basis of economic use and improvement of purposely suitable quality and specific activity of the produced daughter radionuclides. All these features benefit the economic use of the generator, the improved quality of labelling/scan, and the lowered cost of nuclear medicine procedures. Besides, a new method of quality control protocol set-up for post-delivery testing of radionuclidic purity has been developed based on the relationship between the gamma ray spectrometric detection limit, the required limit of impure radionuclide activity and its measurement
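
    The build-up optimisation rests on the classical Bateman parent/daughter solution: after a complete elution, the daughter activity peaks at a time fixed by the two decay constants. A minimal sketch for the 99Mo/99mTc pair (half-lives 66 h and 6.01 h), giving the well-known optimum of roughly 23 h:

```python
import math

def optimal_buildup_time(t_half_parent, t_half_daughter):
    """Time of maximum daughter activity after a complete elution,
    from the Bateman parent/daughter solution:
        t_max = ln(ld / lp) / (ld - lp),
    where lp and ld are the parent and daughter decay constants."""
    lp = math.log(2.0) / t_half_parent
    ld = math.log(2.0) / t_half_daughter
    return math.log(ld / lp) / (ld - lp)

t_max = optimal_buildup_time(66.0, 6.01)   # 99Mo -> 99mTc, in hours
```

    Eluting well before t_max wastes yield, while waiting much longer wastes stand-by time and specific activity, which is exactly the trade-off the mean-progress-function optimisation in this record quantifies.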

  2. Achieving a Sustainable Urban Form through Land Use Optimisation: Insights from Bekasi City’s Land-Use Plan (2010–2030)

    Directory of Open Access Journals (Sweden)

    Rahmadya Trias Handayanto

    2017-02-01

    Cities worldwide have been trying to achieve a sustainable urban form to handle their rapid urban growth. Many sustainable urban forms have been studied, and two of them, the compact city and the eco city, were chosen in this study as urban form foundations. Based on these forms, four sustainable city criteria (compactness, compatibility, dependency, and suitability) were considered as necessary functions for land use optimisation. This study presents land use optimisation as a method for achieving a sustainable urban form. Three optimisation methods (particle swarm optimisation, genetic algorithms, and a local search method) were combined into a single hybrid optimisation method for land use in Bekasi city, Indonesia. It was also used for examining Bekasi city’s land-use plan (2010–2030) after optimising current (2015) and future (2030) land use. After current land use optimisation, the score of the sustainable city criteria increased significantly. Three important centres of land use (commercial, industrial, and residential) were also created through clustering the results. These centres were slightly different from the centres of the city plan zones. Additional land uses in 2030 were predicted using a nonlinear autoregressive neural network with external input. Three scenarios were used for allocating these additional land uses: sustainable development, government policy, and business-as-usual. Future land use allocation in 2030 found that the sustainable development scenario showed better performance compared to the government policy and business-as-usual scenarios.

  3. Identification of transformer fault based on dissolved gas analysis using hybrid support vector machine-modified evolutionary particle swarm optimisation

    Science.gov (United States)

    2018-01-01

    Early detection of power transformer fault is important because it can reduce the maintenance cost of the transformer and it can ensure continuous electricity supply in power systems. Dissolved Gas Analysis (DGA) technique is commonly used to identify oil-filled power transformer fault type but utilisation of artificial intelligence method with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnosis, unoptimised SVM and previous reported works. Data reduction was also applied using stepwise regression prior to the training process of SVM to reduce the training time. It was found that the proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique results in the highest correct identification percentage of faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions to identify the transformer fault type based on DGA data on site. PMID:29370230

  4. Identification of transformer fault based on dissolved gas analysis using hybrid support vector machine-modified evolutionary particle swarm optimisation.

    Directory of Open Access Journals (Sweden)

    Hazlee Azil Illias

    Early detection of power transformer fault is important because it can reduce the maintenance cost of the transformer and it can ensure continuous electricity supply in power systems. Dissolved Gas Analysis (DGA) technique is commonly used to identify oil-filled power transformer fault type but utilisation of artificial intelligence method with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnosis, unoptimised SVM and previous reported works. Data reduction was also applied using stepwise regression prior to the training process of SVM to reduce the training time. It was found that the proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique results in the highest correct identification percentage of faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions to identify the transformer fault type based on DGA data on site.

  5. Identification of transformer fault based on dissolved gas analysis using hybrid support vector machine-modified evolutionary particle swarm optimisation.

    Science.gov (United States)

    Illias, Hazlee Azil; Zhao Liang, Wee

    2018-01-01

    Early detection of power transformer fault is important because it can reduce the maintenance cost of the transformer and it can ensure continuous electricity supply in power systems. Dissolved Gas Analysis (DGA) technique is commonly used to identify oil-filled power transformer fault type but utilisation of artificial intelligence method with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnosis, unoptimised SVM and previous reported works. Data reduction was also applied using stepwise regression prior to the training process of SVM to reduce the training time. It was found that the proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique results in the highest correct identification percentage of faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions to identify the transformer fault type based on DGA data on site.
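
    The time-varying acceleration coefficient (TVAC) PSO used in these records can be sketched in isolation. The objective below is a synthetic stand-in for the SVM cross-validation error over (log C, log gamma), not the authors' DGA data, and the coefficient schedules are the commonly cited TVAC defaults:

```python
import random

def cv_error(p):
    """Synthetic stand-in for an SVM cross-validation error surface;
    the minimum sits at (1, -2) in (log C, log gamma) space."""
    return (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2

def pso_tvac(f, bounds, n=20, iters=200, seed=7):
    random.seed(seed)
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for k in range(iters):
        t = k / (iters - 1)
        w = 0.9 - 0.5 * t       # inertia weight 0.9 -> 0.4
        c1 = 2.5 - 2.0 * t      # cognitive coefficient 2.5 -> 0.5 (TVAC)
        c2 = 0.5 + 2.0 * t      # social coefficient    0.5 -> 2.5 (TVAC)
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

best, err = pso_tvac(cv_error, [(-3.0, 3.0), (-5.0, 1.0)])
```

    In the actual method, f would train an SVM on the (reduced) DGA features and return the cross-validation misclassification rate.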

  6. Method for a component-based economic optimisation in design of whole building renovation versus demolishing and rebuilding

    International Nuclear Information System (INIS)

    Morelli, Martin; Harrestrup, Maria; Svendsen, Svend

    2014-01-01

    Aim: This paper presents a two-fold evaluation method determining whether to renovate an existing building or to demolish it and thereafter erect a new building. Scope: The method determines a combination of energy saving measures that have been optimised with regard to the future cost of energy. Subsequently, the method evaluates the cost of undertaking the retrofit measures as compared to the cost of demolishing the existing building and thereafter erecting a new one. Several economically beneficial combinations of energy saving measures can be determined. All of them are a trade-off between investing in retrofit measures and buying renewable energy. The overall cost of the renovation considers the market value of the property, the investment in the renovation, and the operational and maintenance costs. A multi-family building is used as an example to clearly illustrate the application of the method from macroeconomic and private financial perspectives. Conclusion: The example shows that the investment cost and future market value of the building are the dominant factors in deciding whether to renovate an existing building or to demolish it and thereafter erect a new building. Additionally, it is concluded in the example that multi-family buildings erected in the period 1850–1930 should be renovated. - Highlights: • Development of a method for evaluation of renovation projects. • Determination of an economically optimal combination of various energy saving measures. • The method compared the renovation cost to that of demolishing and building new. • The decision was highly influenced by the investment cost and the building's market value. • The results indicate that buildings should be renovated and not demolished
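
    The core comparison, an up-front investment plus the discounted future energy and O&M costs, less the resulting market value, can be sketched as follows. All monetary figures are invented placeholders, not the paper's case-study values:

```python
def present_value(annual_cost, rate, years):
    """Present value of a constant annual cost (ordinary annuity)."""
    return annual_cost * (1.0 - (1.0 + rate) ** -years) / rate

def lifecycle_cost(investment, annual_energy_om, market_value, rate=0.03, years=30):
    """Net cost of an alternative: up-front investment plus discounted
    energy/O&M costs, less the resulting market value of the property."""
    return investment + present_value(annual_energy_om, rate, years) - market_value

# Hypothetical figures (EUR) for a multi-family building:
renovate = lifecycle_cost(investment=1.2e6, annual_energy_om=40e3, market_value=2.0e6)
rebuild  = lifecycle_cost(investment=3.5e6, annual_energy_om=25e3, market_value=3.0e6)
# The alternative with the lower net cost wins.
```

    With these placeholder numbers renovation wins, which mirrors the paper's finding that investment cost and future market value dominate the decision.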

  7. A highly efficient pricing method for European-style options based on Shannon wavelets

    NARCIS (Netherlands)

    L. Ortiz Gracia (Luis); C.W. Oosterlee (Cornelis)

    2017-01-01

    textabstractIn the search for robust, accurate and highly efficient financial option valuation techniques, we present here the SWIFT method (Shannon Wavelets Inverse Fourier Technique), based on Shannon wavelets. SWIFT comes with control over approximation errors made by means of sharp quantitative

  8. Cost evaluation to optimise radiation therapy implementation in different income settings: A time-driven activity-based analysis.

    Science.gov (United States)

    Van Dyk, Jacob; Zubizarreta, Eduardo; Lievens, Yolande

    2017-11-01

    With increasing recognition of growing cancer incidence globally, efficient means of expanding radiotherapy capacity is imperative, and understanding the factors impacting human and financial needs is valuable. A time-driven activity-based costing analysis was performed, using a base case of 2-machine departments, with defined cost inputs and operating parameters. Four income groups were analysed, ranging from low to high income. Scenario analyses included department size, operating hours, fractionation, treatment complexity, efficiency, and centralised versus decentralised care. The base case cost/course is US$5,368 in HICs, US$2,028 in LICs; the annual operating cost is US$4,595,000 and US$1,736,000, respectively. Economies of scale show cost/course decreasing with increasing department size, mainly related to the equipment cost and most prominent up to 3 linacs. The cost in HICs is two or three times as high as in U-MICs or LICs, respectively. Decreasing operating hours below 8h/day has a dramatic impact on the cost/course. IMRT increases the cost/course by 22%. Centralising preparatory activities has a moderate impact on the costs. The results indicate trends that are useful for optimising local and regional circumstances. This methodology can provide input into a uniform and accepted approach to evaluating the cost of radiotherapy. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
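
    The time-driven activity-based costing step can be sketched in miniature: divide the annual resource cost by the practical capacity in minutes to get a capacity cost rate, then charge each course for the minutes it consumes. The staffing level and activity times below are invented, and a full model would also spread equipment depreciation across machine time:

```python
def capacity_cost_rate(annual_cost, staff, hours_per_day, days_per_year):
    """Cost per available staffed minute (the TDABC capacity cost rate)."""
    minutes = staff * hours_per_day * days_per_year * 60.0
    return annual_cost / minutes

def cost_per_course(rate, activity_minutes):
    """Cost of one treatment course: rate times the minutes of each activity."""
    return rate * sum(activity_minutes.values())

rate = capacity_cost_rate(annual_cost=4.6e6, staff=30,
                          hours_per_day=8, days_per_year=250)
course = cost_per_course(rate, {
    "consultation": 60,
    "planning": 180,
    "delivery (20 fractions)": 20 * 15,
    "follow-up": 30,
})
```

    Shortening operating hours shrinks the denominator (practical capacity) while the annual cost barely moves, which is why the record reports a dramatic rise in cost per course below 8 h/day.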

  9. Optimisation of a novel trailing edge concept for a high lift device

    CSIR Research Space (South Africa)

    Botha, JDM

    2014-09-01

    Full Text Available A novel concept (referred to as the flap extension) is implemented on the leading edge of the flap of a three element high lift device. The concept is optimised using two optimisation approaches based on Genetic Algorithm optimisations. A zero order...

  10. A New Efficient Algorithm for the 2D WLP-FDTD Method Based on Domain Decomposition Technique

    Directory of Open Access Journals (Sweden)

    Bo-Ao Xu

    2016-01-01

    Full Text Available This letter introduces a new efficient algorithm for the two-dimensional weighted Laguerre polynomials finite difference time-domain (WLP-FDTD method based on domain decomposition scheme. By using the domain decomposition finite difference technique, the whole computational domain is decomposed into several subdomains. The conventional WLP-FDTD and the efficient WLP-FDTD methods are, respectively, used to eliminate the splitting error and speed up the calculation in different subdomains. A joint calculation scheme is presented to reduce the amount of calculation. Through our work, the iteration is not essential to obtain the accurate results. Numerical example indicates that the efficiency and accuracy are improved compared with the efficient WLP-FDTD method.

  11. Distributed optimisation problem with communication delay and external disturbance

    Science.gov (United States)

    Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu

    2017-12-01

    This paper investigates the distributed optimisation problem for multi-agent systems (MASs) with the simultaneous presence of external disturbance and communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed to solve the distributed optimisation problem for the MASs with the simultaneous presence of disturbance and communication delay. Moreover, in the proposed algorithm, each agent interacts with its neighbours through the connected topology and the delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.
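
    The consensus-plus-gradient structure underlying such algorithms can be sketched on a toy problem; this sketch omits the delay compensation and internal-model disturbance terms that are the paper's actual contribution, and the local objectives are invented quadratics:

```python
# Four agents on a ring cooperatively minimise f(x) = sum_i (x - a_i)^2;
# the global minimiser is the average of the a_i, here 2.5.
a = [1.0, 2.0, 3.0, 4.0]
n = len(a)

def mix(x):
    """One consensus step with doubly stochastic ring weights
    (each agent averages itself with its two neighbours)."""
    return [0.5 * x[i] + 0.25 * x[(i - 1) % n] + 0.25 * x[(i + 1) % n]
            for i in range(n)]

x = [0.0, 10.0, -5.0, 3.0]           # arbitrary initial agent states
for k in range(4000):
    step = 0.5 / (k + 1)             # diminishing step size
    x = mix(x)                       # exchange with neighbours
    x = [x[i] - step * 2.0 * (x[i] - a[i]) for i in range(n)]  # local gradient
```

    Each agent only ever uses its own gradient and its neighbours' states, yet all states converge to the minimiser of the sum.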

  12. A Bayesian Approach for Sensor Optimisation in Impact Identification

    Directory of Open Access Journals (Sweden)

    Vincenzo Mallardo

    2016-11-01

    This paper presents a Bayesian approach for optimising the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data has been represented by statistical distributions of the recorded signals. An optimisation strategy based on the genetic algorithm is proposed to find the best sensor combination aimed at locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence.
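
    A genetic search over sensor combinations can be sketched with a simple geometric objective; the grid, the candidate positions and the worst-case-distance fitness below are invented stand-ins for the paper's Bayesian meta-model objective:

```python
import random

impacts = [(x, y) for x in range(0, 10, 3) for y in range(0, 10, 3)]  # impact sites
candidates = [(x, y) for x in range(10) for y in range(10)]           # sensor spots

def worst_case_distance(sensors):
    """Largest impact-to-nearest-sensor distance (to be minimised)."""
    return max(min(((ix - sx) ** 2 + (iy - sy) ** 2) ** 0.5
                   for sx, sy in sensors) for ix, iy in impacts)

def ga_place(k=3, pop_size=30, gens=40, seed=3):
    """Elitist GA over k-sensor combinations."""
    random.seed(seed)
    pop = [random.sample(candidates, k) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=worst_case_distance)
        elite = pop[: pop_size // 3]
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            child = random.sample(list(set(p1 + p2)), k)   # crossover
            if random.random() < 0.3:                      # mutation
                child[random.randrange(k)] = random.choice(candidates)
            children.append(child)
        pop = elite + children
    return min(pop, key=worst_case_distance)

best = ga_place()
```

    In the paper the fitness would instead be the Bayesian objective evaluated on the meta-models, possibly degraded by a sensor-malfunction probability.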

  13. Improved DEA Cross Efficiency Evaluation Method Based on Ideal and Anti-Ideal Points

    Directory of Open Access Journals (Sweden)

    Qiang Hou

    2018-01-01

    A new model is introduced in the process of evaluating the efficiency value of decision making units (DMUs) through the data envelopment analysis (DEA) method. Two virtual DMUs, called the ideal point DMU and the anti-ideal point DMU, are combined to form a comprehensive model based on the DEA method. The ideal point DMU adopts a self-assessment system according to the efficiency concept. The anti-ideal point DMU adopts an other-assessment system according to the fairness concept. The two distinctive ideal point models are introduced into the DEA method and combined using a variance ratio. From the new model, a reasonable result can be obtained. Numerical examples are provided to illustrate the newly constructed model and certify its rationality through comparative analysis with the traditional DEA model.
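
    In the single-input, single-output case the CCR efficiency score reduces to a ratio normalisation, which makes the ideal-point idea easy to sketch: append a virtual DMU built from the best observed input and output. The data below are invented, and the general multi-input/multi-output model requires solving one linear programme per DMU:

```python
def ccr_efficiency(inputs, outputs):
    """CCR efficiency for the single-input, single-output case:
    each DMU's output/input ratio, scaled by the best observed ratio."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

inputs  = [2.0, 4.0, 3.0]
outputs = [4.0, 6.0, 9.0]

# Virtual ideal-point DMU: the least input and the most output observed.
ideal_in, ideal_out = min(inputs), max(outputs)
eff = ccr_efficiency(inputs + [ideal_in], outputs + [ideal_out])
```

    Scoring against the appended ideal point discriminates between units that would all be rated fully efficient against the observed frontier alone.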

  14. Methodology implementation for multi objective optimisation for nuclear fleet evolution scenarios

    International Nuclear Information System (INIS)

    Freynet, David

    2016-01-01

    The issue of the evolution of the French nuclear fleet can be considered through the study of nuclear transition scenarios. These studies are of paramount importance as their results can greatly affect the decision making process, given that they take into account industrial concerns, investments, time, and nuclear system complexity. Such studies can be performed with the COSI code (developed at the CEA/DEN), which enables the calculation of matter inventories and fluxes across the fuel cycle (nuclear reactors and associated facilities), especially when coupled with the CESAR depletion code. The studies performed today with COSI require the definition of the various scenarios' input parameters, in order to fulfil different objectives such as minimising natural uranium consumption, waste production and so on. These parameters concern the quantities and the scheduling of spent fuel destined for reprocessing, and the number, the type and the commissioning dates of deployed reactors. This work aims to develop, validate and apply a multi-objective optimisation methodology coupled with COSI, in order to determine optimal nuclear transition scenarios. Firstly, this methodology is based on accelerating scenario evaluation, enabling the use of optimisation methods in a reasonable time-frame. With this goal in mind, artificial neural network irradiation surrogate models are created with the URANIE platform (developed at the CEA/DEN) and are implemented within COSI. The next step in this work is to use, adapt and compare different optimisation methods, such as URANIE's genetic algorithm and particle swarm methods, in order to define a methodology suited to this type of study. This methodology development is based on an incremental approach which progressively adds objectives, constraints and decision variables to the optimisation problem definition. The added variables, which are related to reactor deployment and spent fuel reprocessing strategies, are chosen

  15. Pre-segmented 2-Step IMRT with subsequent direct machine parameter optimisation – a planning study

    International Nuclear Information System (INIS)

    Bratengeier, Klaus; Meyer, Jürgen; Flentje, Michael

    2008-01-01

    Modern intensity modulated radiotherapy (IMRT) mostly uses iterative optimisation methods. The integration of machine parameters into the optimisation process of step-and-shoot leaf positions has been shown to be successful. For IMRT segmentation algorithms based on the analysis of the geometrical structure of the planning target volumes (PTV) and the organs at risk (OAR), the potential of such procedures has not yet been fully explored. In this work, 2-Step IMRT was combined with subsequent direct machine parameter optimisation (DMPO; RaySearch Laboratories, Sweden) to investigate this potential. In a planning study, DMPO on a commercial planning system was compared with manual primary 2-Step IMRT segment generation followed by DMPO optimisation. Fifteen clinical cases and the ESTRO Quasimodo phantom were employed. Both the same number of optimisation steps and the same set of objective values were used. The plans were compared with a clinical DMPO reference plan and a traditional IMRT plan based on fluence optimisation and subsequent segmentation. The composite objective value (the weighted sum of quadratic deviations of the objective values and the related points in the dose volume histogram) was used as a measure of plan quality. Additionally, a more extended set of parameters was used for the breast cases to compare the plans. The plans with segments pre-defined with 2-Step IMRT were slightly superior to DMPO alone in the majority of cases. The composite objective value tended to be even lower for a smaller number of segments. The total number of monitor units was slightly higher than for the DMPO plans. Traditional IMRT fluence optimisation with subsequent segmentation could not compete. 2-Step IMRT segmentation is suitable as a starting point for further DMPO optimisation and, in general, results in less complex plans which are equal or superior to plans generated by DMPO alone

  16. A Comfort-Aware Energy Efficient HVAC System Based on the Subspace Identification Method

    Directory of Open Access Journals (Sweden)

    O. Tsakiridis

    2016-01-01

    A proactive heating method is presented aiming at reducing the energy consumption in an HVAC system while maintaining the thermal comfort of the occupants. The proposed technique fuses time predictions for the zones’ temperatures, based on a deterministic subspace identification method, and zones’ occupancy predictions, based on a mobility model, in a decision scheme that is capable of regulating the balance between the total energy consumed and the total discomfort cost. Simulation results for various occupation-mobility models demonstrate the efficiency of the proposed technique.

  17. Optimised performance of industrial high resolution computerised tomography

    International Nuclear Information System (INIS)

    Maangaard, M.

    2000-01-01

    The purpose of non-destructive evaluation (NDE) is to acquire knowledge of the investigated sample. Digital X-ray imaging techniques such as radiography or computerised tomography (CT) produce images of the interior of a sample. The obtained image quality determines the possibility of detecting sample-related features, e.g. details and flaws. This thesis presents a method of optimising the performance of industrial X-ray equipment for the imaging task at issue in order to obtain images of high quality. CT produces maps of the X-ray linear attenuation of the sample's interior. CT can produce two-dimensional cross-section images or three-dimensional images with volumetric information on the investigated sample. The image contrast and noise depend on both the investigated sample and the equipment and settings used (X-ray tube potential, X-ray filtration, exposure time, etc.). Hence, it is vital to find the optimal equipment settings in order to obtain images of high quality. To be able to mathematically optimise the image quality, it is necessary to have a model of the X-ray imaging system together with an appropriate measure of image quality. The optimisation is performed with a developed model for an X-ray image-intensifier-based radiography system. The model predicts the mean value and variance of the measured signal level in the collected radiographic images. The traditionally used measure of physical image quality is the signal-to-noise ratio (SNR). To calculate the signal-to-noise ratio, a well-defined detail (flaw) is required. It was found that maximising the SNR leads to ambiguities: the optimised settings found by maximising the SNR were dependent on the material in the detail. When CT is performed on irregularly shaped samples containing density and compositional variations, it is difficult to define which SNR to use for optimisation. This difficulty is solved by the measures of physical image quality proposed here, the ratios geometry
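
    The detail-based SNR that the thesis identifies as ambiguous can be sketched as the signal difference between a detail region and its background, divided by the background noise. The pixel values below are invented:

```python
def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    """Population standard deviation."""
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def detail_snr(detail_pixels, background_pixels):
    """Signal-difference-to-noise ratio of a detail against its background."""
    return abs(mean(detail_pixels) - mean(background_pixels)) / std(background_pixels)

snr = detail_snr([12.0, 12.5, 11.5, 12.0], [9.0, 11.0, 9.0, 11.0])
```

    Because the numerator depends on the detail material, settings that maximise this SNR change with the chosen detail, which is the ambiguity motivating the thesis's alternative quality measures.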

  18. Multi-objective optimisation of aircraft flight trajectories in the ATM and avionics context

    Science.gov (United States)

    Gardi, Alessandro; Sabatini, Roberto; Ramasamy, Subramanian

    2016-05-01

    The continuous increase of air transport demand worldwide and the push for a more economically viable and environmentally sustainable aviation are driving significant evolutions of aircraft, airspace and airport systems design and operations. Although extensive research has been performed on the optimisation of aircraft trajectories and very efficient algorithms were widely adopted for the optimisation of vertical flight profiles, it is only in the last few years that higher levels of automation were proposed for integrated flight planning and re-routing functionalities of innovative Communication Navigation and Surveillance/Air Traffic Management (CNS/ATM) and Avionics (CNS+A) systems. In this context, the implementation of additional environmental targets and of multiple operational constraints introduces the need to efficiently deal with multiple objectives as part of the trajectory optimisation algorithm. This article provides a comprehensive review of Multi-Objective Trajectory Optimisation (MOTO) techniques for transport aircraft flight operations, with a special focus on the recent advances introduced in the CNS+A research context. In the first section, a brief introduction is given, together with an overview of the main international research initiatives where this topic has been studied, and the problem statement is provided. The second section introduces the mathematical formulation and the third section reviews the numerical solution techniques, including discretisation and optimisation methods for the specific problem formulated. The fourth section summarises the strategies to articulate the preferences and to select optimal trajectories when multiple conflicting objectives are introduced. 
The fifth section introduces a number of models defining the optimality criteria and constraints typically adopted in MOTO studies, including fuel consumption, air pollutant and noise emissions, operational costs, condensation trails, airspace and airport operations

  19. Credit price optimisation within retail banking

    African Journals Online (AJOL)

    2014-02-14

cost based pricing, where the price of a product or service is based on the .... function obtained from fitting a logistic regression model .... Note that the proposed optimisation approach below will allow us to also incorporate.

  20. Design optimisation of powers-of-two FIR filter using self-organising random immigrants GA

    Science.gov (United States)

    Chandra, Abhijit; Chattopadhyay, Sudipta

    2015-01-01

In this communication, we propose a novel design strategy for a multiplier-less low-pass finite impulse response (FIR) filter with the aid of a recent evolutionary optimisation technique known as the self-organising random immigrants genetic algorithm. The individual impulse response coefficients of the proposed filter have been encoded as sums of signed powers-of-two. During the formulation of the cost function for the optimisation algorithm, both the frequency response characteristic and the hardware cost of the discrete-coefficient FIR filter have been considered. The role of the crossover probability of the optimisation technique has been evaluated on the overall performance of the proposed strategy. For this purpose, the convergence characteristic of the optimisation technique has been included in the simulation results. In our analysis, two design examples with different specifications have been taken into account. In order to substantiate the efficiency of our proposed structure, a number of state-of-the-art design strategies for multiplier-less FIR filters have also been included in this article for the purpose of comparison. Critical analysis of the results unambiguously establishes the usefulness of our proposed approach for the hardware-efficient design of digital filters.
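The signed powers-of-two encoding that makes the filter multiplier-less can be sketched in a few lines. This is a simple greedy illustration, not the paper's encoding routine, and the word length and test coefficients are invented; the point is that once a coefficient is a sum of signed powers of two, each multiplication reduces to shifts and adds.

```python
import math

def to_signed_powers_of_two(coeff, wordlength=8):
    """Greedily express a coefficient as a sum of signed powers of two.

    Illustrative sketch only: repeatedly subtract the signed power of two
    closest to the remaining residual, up to `wordlength` terms.
    """
    terms = []
    residual = coeff
    for _ in range(wordlength):
        if residual == 0:
            break
        exponent = round(math.log2(abs(residual)))
        term = math.copysign(2.0 ** exponent, residual)
        terms.append(term)
        residual -= term
    return terms

# 0.625 = 0.5 + 0.125, so two shift-and-add terms replace one multiplier
print(to_signed_powers_of_two(0.625))
```

A hardware-cost term in the GA's fitness function can then simply count the number of terms returned per coefficient.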

  1. An operations research and simulation based study on improving the efficiency of a slurry drying tower

    Directory of Open Access Journals (Sweden)

    De Jongh, E.

    2013-08-01

    Full Text Available This paper relates to a company that produces washing powders. The focus is on improving the efficiency of gas usage (per unit of powder produced in the furnace that produces hot air. This hot air is an integral part of washing powder production: it dries the viscous slurry and transforms it into the base powder used in all washing powders. The cost of gas is the factorys largest expense. This paper attempts to increase the productivity and profitability of the operations by applying operations research using MATLAB and the non-linear optimiser called SNOPT (sparse non-linear optimiser. Using these techniques, a proposed solution that aims to balance the amount of open space between spraying slurry, as well as the overlap of spraying slurry within the furnace, is obtained. This is achieved by optimising the positioning of the top layer of 24 lances. The placement of the bottom layer of lances is done by positioning them in the areas of biggest overlap. These improvements result in a positive impact on the amount of gas burnt within the furnace to dry slurry to powder.

  2. Development of efficient time-evolution method based on three-term recurrence relation

    International Nuclear Information System (INIS)

    Akama, Tomoko; Kobayashi, Osamu; Nanbu, Shinkoh

    2015-01-01

The advantage of the real-time (RT) propagation method is a direct solution of the time-dependent Schrödinger equation, which describes frequency properties as well as all dynamics of a molecular system composed of electrons and nuclei in quantum physics and chemistry. Its applications have been limited by computational feasibility, as the evaluation of the time-evolution operator is computationally demanding. In this article, a new efficient time-evolution method based on the three-term recurrence relation (3TRR) was proposed to reduce the time-consuming numerical procedure. The basic formula of this approach was derived by introducing a transformation of the operator using the arcsine function. Since this operator transformation causes a transformation of time, we derived the relation between the original and transformed time. The formula was adapted to assess the performance of the RT time-dependent Hartree-Fock (RT-TDHF) method and time-dependent density functional theory. Compared to the commonly used fourth-order Runge-Kutta method, our new approach decreased the computational time of the RT-TDHF calculation by about a factor of four, showing the 3TRR formula to be an efficient time-evolution method for reducing computational cost.
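To see why a three-term recurrence makes time evolution cheap, consider the classic Chebyshev propagator for exp(-iHt): each expansion term is generated from the recurrence T_{k+1} = 2x T_k - T_{k-1}, i.e. one matrix-vector product per term and no diagonalisation. The paper's 3TRR uses an arcsine operator transformation rather than the Chebyshev/Bessel expansion shown here; this is a sketch of the recurrence idea on an invented toy Hamiltonian, not the authors' formula.

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import jv  # Bessel functions supply the expansion weights

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
H = (A + A.T) / 2                         # a small Hermitian "Hamiltonian"
t = 1.0

# scale the spectrum into [-1, 1], as the Chebyshev expansion requires
evals = np.linalg.eigvalsh(H)
emin, emax = evals[0], evals[-1]
dE, Ebar = (emax - emin) / 2, (emax + emin) / 2
Hs = (H - Ebar * np.eye(6)) / dE
alpha = dE * t

psi0 = np.zeros(6, complex)
psi0[0] = 1.0

phi_prev = psi0.copy()                    # T_0(Hs) psi
phi_curr = Hs @ psi0                      # T_1(Hs) psi
psi = jv(0, alpha) * phi_prev - 2j * jv(1, alpha) * phi_curr
for k in range(2, 40):
    # the three-term recurrence: one matrix-vector product per term
    phi_prev, phi_curr = phi_curr, 2 * (Hs @ phi_curr) - phi_prev
    psi = psi + 2 * (-1j) ** k * jv(k, alpha) * phi_curr
psi *= np.exp(-1j * Ebar * t)

exact = expm(-1j * H * t) @ psi0          # brute-force reference
```

With the Bessel weights decaying super-exponentially, a few dozen recurrence steps reproduce the exact propagator to machine precision.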

  3. Optimisation of substrate blends in anaerobic co-digestion using adaptive linear programming.

    Science.gov (United States)

    García-Gen, Santiago; Rodríguez, Jorge; Lema, Juan M

    2014-12-01

Anaerobic co-digestion of multiple substrates has the potential to enhance biogas productivity by making use of the complementary characteristics of different substrates. A blending strategy based on a linear programming optimisation method is proposed, aiming at maximising COD conversion into methane while simultaneously maintaining digestate and biogas quality. The method incorporates experimental and heuristic information to define the objective function and the linear restrictions. The active constraints are continuously adapted (by relaxing the restriction boundaries) such that further optimisations in terms of methane productivity can be achieved. The feasibility of the blends calculated with this methodology was previously tested and accurately predicted with an ADM1-based co-digestion model. This was validated in a continuously operated pilot plant, treating for several months different mixtures of glycerine, gelatine and pig manure at organic loading rates from 1.50 to 4.93 gCOD/Ld and hydraulic retention times between 32 and 40 days under mesophilic conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
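The core blending step can be sketched as a small linear programme. Every number below is hypothetical and not from the paper: feed fractions of the three substrates are chosen to maximise the methane potential of the blend, subject to a nitrogen cap (a stand-in for digestate quality) and the fractions summing to one.

```python
import numpy as np
from scipy.optimize import linprog

substrates = ["glycerine", "gelatine", "pig manure"]
methane_potential = np.array([0.40, 0.35, 0.20])   # CH4 yield per unit feed (assumed)
nitrogen_content = np.array([0.00, 0.12, 0.05])    # N per unit feed (assumed)

res = linprog(
    c=-methane_potential,                  # linprog minimises, so negate
    A_ub=[nitrogen_content], b_ub=[0.06],  # nitrogen cap on the blend
    A_eq=[np.ones(3)], b_eq=[1.0],         # feed fractions sum to one
    bounds=[(0.0, 0.6)] * 3,               # no single substrate dominates
    method="highs",
)
blend = dict(zip(substrates, res.x))
print(blend)
```

Relaxing the nitrogen bound `b_ub` mimics the paper's adaptive step, in which active constraint boundaries are loosened to seek further gains in methane productivity.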

  4. Reliability analysis and optimisation of subsea compression system facing operational covariate stresses

    International Nuclear Information System (INIS)

    Okaro, Ikenna Anthony; Tao, Longbin

    2016-01-01

This paper proposes an enhanced Weibull-Corrosion Covariate model for reliability assessment of a system facing operational stresses. The newly developed model is applied to a subsea gas compression system planned for offshore West Africa to predict its reliability index. System technical failure was modelled by developing a Weibull failure model incorporating a physically tested corrosion profile as a stress in order to quantify the survival rate of the system under additional operational covariates including marine pH, temperature and pressure. Using reliability block diagrams and enhanced Fussell-Vesely formulations, the whole system was systematically decomposed into sub-systems to analyse the criticality of each component and optimise them. Human reliability was addressed using an enhanced barrier weighting method. A rapid degradation curve is obtained for the subsea system relative to the base case when subjected to a time-dependent corrosion stress factor. It reveals that subsea system components failed faster than their mean-time-to-failure specifications from the Offshore Reliability Database as a result of the cumulative marine stresses exerted. The case study demonstrated that the reliability of a subsea system can be systematically optimised by modelling the system under higher technical and organisational stresses, prioritising the critical sub-systems and making befitting provisions for redundancy and tolerances. - Highlights: • Novel Weibull Corrosion-Covariate model for reliability analysis of subsea assets. • Predicts the accelerated degradation profile of a subsea gas compression system. • An enhanced optimisation method based on the Fussell-Vesely decomposition process. • New optimisation approach for smoothing of over- and under-designed components. • Demonstrated a significant improvement in producing more realistic failure rates.
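The modelling idea of a Weibull lifetime accelerated by covariate stresses can be sketched with a proportional-hazards (Cox-type) link. This is a minimal illustration only: the scale, shape and covariate coefficients below are invented, not values from the paper's subsea case study.

```python
import math

def reliability(t, eta=8.0, beta=1.7, covariates=(), coeffs=()):
    """Weibull reliability with covariates acting multiplicatively on the
    hazard: R(t) = exp(-(t/eta)^beta * exp(sum(b_i * z_i))).

    eta is the scale (e.g. years), beta the shape; covariates z_i could
    encode marine pH, temperature, pressure, etc. All values are assumed.
    """
    stress = math.exp(sum(b * z for b, z in zip(coeffs, covariates)))
    return math.exp(-((t / eta) ** beta) * stress)

# corrosive service (e.g. low pH, high temperature) accelerates degradation
r_base = reliability(5.0)
r_stressed = reliability(5.0, covariates=(1.0, 1.0), coeffs=(0.4, 0.3))
print(r_base, r_stressed)
```

The stressed curve falls below the base case at every time, which is the qualitative behaviour the paper reports against Offshore Reliability Database figures.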

  5. Optimisation of laser welding parameters for welding of P92 material using Taguchi based grey relational analysis

    Directory of Open Access Journals (Sweden)

    Shanmugarajan B.

    2016-08-01

Full Text Available Creep strength enhanced ferritic (CSEF) steels are used in advanced power plant systems for high-temperature applications. P92 (Cr–W–Mo–V) steel, classified under CSEF steels, is a candidate material for piping, tubing, etc., in ultra-supercritical and advanced ultra-supercritical boiler applications. In the present work, the laser welding process has been optimised for P92 material by using Taguchi-based grey relational analysis (GRA). Bead-on-plate (BOP) trials were carried out using a 3.5 kW diffusion-cooled slab CO2 laser by varying laser power, welding speed and focal position. The optimum parameters have been derived by considering the responses of depth of penetration, weld width and heat affected zone (HAZ) width. Analysis of variance (ANOVA) has been used to analyse the effect of the different parameters on the responses. Based on ANOVA, a laser power of 3 kW, a welding speed of 1 m/min and a focal plane at −4 mm emerged as the optimised set of parameters. The responses of the optimised parameters obtained using the GRA have been verified experimentally and found to correlate closely with the predicted values.
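The grey relational analysis step, which collapses several weld responses into a single grade per parameter combination, can be sketched as follows. The response values are invented for illustration and are not the paper's measurements; the mechanics (normalisation, grey relational coefficients, averaging into grades) follow the standard GRA recipe.

```python
import numpy as np

# Hypothetical bead-on-plate responses for four parameter combinations:
# depth of penetration (larger-is-better), weld width and HAZ width
# (both smaller-is-better). All numbers invented.
responses = np.array([
    # depth, width, HAZ
    [2.1, 1.8, 0.9],
    [2.6, 2.0, 1.1],
    [2.4, 1.6, 0.8],
    [1.9, 1.5, 0.7],
])
larger_is_better = np.array([True, False, False])

# normalise to [0, 1] so that 1 is always the ideal value
lo, hi = responses.min(axis=0), responses.max(axis=0)
norm = np.where(larger_is_better,
                (responses - lo) / (hi - lo),
                (hi - responses) / (hi - lo))

delta = 1.0 - norm                     # deviation from the ideal sequence
zeta = 0.5                             # distinguishing coefficient
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grades = coeff.mean(axis=1)            # grey relational grade per run
best_run = int(np.argmax(grades))      # run with the best overall trade-off
print(grades, best_run)
```

In the Taguchi setting, ANOVA on these grades then apportions each factor's contribution, which is how the paper singles out the optimal power/speed/focus combination.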

  6. Combining simulation and multi-objective optimisation for equipment quantity optimisation in container terminals

    OpenAIRE

    Lin, Zhougeng

    2013-01-01

    This thesis proposes a combination framework to integrate simulation and multi-objective optimisation (MOO) for container terminal equipment optimisation. It addresses how the strengths of simulation and multi-objective optimisation can be integrated to find high quality solutions for multiple objectives with low computational cost. Three structures for the combination framework are proposed respectively: pre-MOO structure, integrated MOO structure and post-MOO structure. The applications of ...

  7. Discontinuous permeable adsorptive barrier design and cost analysis: a methodological approach to optimisation.

    Science.gov (United States)

    Santonastaso, Giovanni Francesco; Bortone, Immacolata; Chianese, Simeone; Di Nardo, Armando; Di Natale, Michele; Erto, Alessandro; Karatza, Despina; Musmarra, Dino

    2017-09-19

This paper presents a method to optimise a discontinuous permeable adsorptive barrier (PAB-D). The method is based on the comparison of different PAB-D configurations obtained by changing some of the main PAB-D design parameters. In particular, the well diameters, the distance between two consecutive passive wells and the distance between two consecutive well lines were varied, and a cost analysis for each configuration was carried out in order to define the best-performing and most cost-effective PAB-D configuration. As a case study, a benzene-contaminated aquifer located in an urban area in the north of Naples (Italy) was considered. The PAB-D configuration with a well diameter of 0.8 m proved to be the best layout in terms of performance and cost-effectiveness. Moreover, in order to identify the best configuration for the remediation of the aquifer studied, a comparison with a continuous permeable adsorptive barrier (PAB-C) was added. In particular, this comparison showed a 40% reduction of the total remediation costs when using the optimised PAB-D.

  8. Algorithme intelligent d'optimisation d'un design structurel de grande envergure

    Science.gov (United States)

    Dominique, Stephane

genetic algorithm that prevents new individuals from being born too close to previously evaluated solutions. The restricted area becomes smaller or larger during the optimisation to allow global or local search when necessary. Also, a new search operator named the Substitution Operator is incorporated in GATE. This operator allows an ANN surrogate model to guide the algorithm toward the most promising areas of the design space. The suggested CBR approach and GATE were tested on several simple test problems, as well as on the industrial problem of designing a gas turbine engine rotor's disc. These results are compared to results obtained for the same problems by many other popular optimisation algorithms, such as (depending on the problem) gradient algorithms, a binary genetic algorithm, a real-number genetic algorithm, a genetic algorithm using multiple-parent crossovers, a differential evolution genetic algorithm, the Hooke & Jeeves generalized pattern search method and POINTER from the software I-SIGHT 3.5. Results show that GATE is quite competitive, giving the best results for 5 of the 6 constrained optimisation problems. GATE also provided the best results of all on problems produced by a Maximum Set Gaussian landscape generator. Finally, GATE provided a disc 4.3% lighter than the best other tested algorithm (POINTER) for the gas turbine engine rotor's disc problem. One drawback of GATE is its lesser efficiency on highly multimodal unconstrained problems, for which it gave quite poor results with respect to its implementation cost. To conclude, according to the preliminary results obtained during this thesis, the suggested CBR process, combined with GATE, seems to be a very good candidate to automate and accelerate the structural design of mechanical devices, potentially reducing the cost of industrial preliminary design processes significantly.

  9. Solving dynamic multi-objective problems with vector evaluated particle swarm optimisation

    CSIR Research Space (South Africa)

    Greeff, M

    2008-06-01

    Full Text Available Many optimisation problems are multi-objective and change dynamically. Many methods use a weighted average approach to the multiple objectives. This paper introduces the usage of the vector evaluated particle swarm optimiser (VEPSO) to solve dynamic...

  10. Statistical meandering wake model and its application to yaw-angle optimisation of wind farms

    International Nuclear Information System (INIS)

    Thøgersen, E; Tranberg, B; Greiner, M; Herp, J

    2017-01-01

    The wake produced by a wind turbine is dynamically meandering and of rather narrow nature. Only when looking at large time averages, the wake appears to be static and rather broad, and is then well described by simple engineering models like the Jensen wake model (JWM). We generalise the latter deterministic models to a statistical meandering wake model (SMWM), where a random directional deflection is assigned to a narrow wake in such a way that on average it resembles a broad Jensen wake. In a second step, the model is further generalised to wind-farm level, where the deflections of the multiple wakes are treated as independently and identically distributed random variables. When carefully calibrated to the Nysted wind farm, the ensemble average of the statistical model produces the same wind-direction dependence of the power efficiency as obtained from the standard Jensen model. Upon using the JWM to perform a yaw-angle optimisation of wind-farm power output, we find an optimisation gain of 6.7% for the Nysted wind farm when compared to zero yaw angles and averaged over all wind directions. When applying the obtained JWM-based optimised yaw angles to the SMWM, the ensemble-averaged gain is calculated to be 7.5%. This outcome indicates the possible operational robustness of an optimised yaw control for real-life wind farms. (paper)
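The Jensen wake model that both this record and the next generalise can be sketched in a few lines: the wake expands linearly with a decay constant and the velocity deficit scales with the thrust coefficient. The parameter values below are illustrative defaults, not the Nysted calibration.

```python
import math

def jensen_deficit(u0, x, ct=0.8, r0=40.0, k=0.05):
    """Wind speed at distance x (m) downstream of a turbine under the
    Jensen (top-hat) wake model.

    u0: free-stream speed (m/s); ct: thrust coefficient; r0: rotor
    radius (m); k: wake decay constant. All values here are assumed.
    """
    a = 1.0 - math.sqrt(1.0 - ct)              # initial velocity deficit
    return u0 * (1.0 - a * (r0 / (r0 + k * x)) ** 2)

u_near = jensen_deficit(8.0, x=200.0)
u_far = jensen_deficit(8.0, x=2000.0)          # wake recovers downstream
print(u_near, u_far)
```

The statistical meandering model replaces this static broad wake with a narrow, randomly deflected one whose ensemble average reproduces the same deficit profile.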

  11. Statistical meandering wake model and its application to yaw-angle optimisation of wind farms

    Science.gov (United States)

    Thøgersen, E.; Tranberg, B.; Herp, J.; Greiner, M.

    2017-05-01

    The wake produced by a wind turbine is dynamically meandering and of rather narrow nature. Only when looking at large time averages, the wake appears to be static and rather broad, and is then well described by simple engineering models like the Jensen wake model (JWM). We generalise the latter deterministic models to a statistical meandering wake model (SMWM), where a random directional deflection is assigned to a narrow wake in such a way that on average it resembles a broad Jensen wake. In a second step, the model is further generalised to wind-farm level, where the deflections of the multiple wakes are treated as independently and identically distributed random variables. When carefully calibrated to the Nysted wind farm, the ensemble average of the statistical model produces the same wind-direction dependence of the power efficiency as obtained from the standard Jensen model. Upon using the JWM to perform a yaw-angle optimisation of wind-farm power output, we find an optimisation gain of 6.7% for the Nysted wind farm when compared to zero yaw angles and averaged over all wind directions. When applying the obtained JWM-based optimised yaw angles to the SMWM, the ensemble-averaged gain is calculated to be 7.5%. This outcome indicates the possible operational robustness of an optimised yaw control for real-life wind farms.

  12. Research and development of methods and technologies for CO2 capture in fossil fuel power plants and storage in geological formations in the Czech Republic, stage 2.3. Conceptual proposal for technological solution for application of the oxyfuel method. Revision 0

    International Nuclear Information System (INIS)

    Dlouhy, Tomas

    2010-12-01

Technological solution for application of the oxyfuel method at a typical power unit was proposed and optimised. Based on a comparison of options, wet recirculation taken downstream of the electrostatic fly ash separator was selected. Suppressing false air suction into the facility to the minimum is imperative. The selected type design of the oxyfuel technology was integrated into the typical thermal design of the power unit in order to assess the effects on its efficiency. Calculations suggest that the variant with a boiler modified to accommodate the oxyfuel technology may attain a higher efficiency than the unit based on combustion with air. The thermal design was optimised, i.e. the excessively hot flue gases were cooled and the heat was fed to the feedwater preheater.

  13. Optimisation of Protection as applicable to geological disposal: the ICRP view

    International Nuclear Information System (INIS)

    Weiss, W.

    2010-01-01

Wolfgang Weiss (BfS), vice-chair of ICRP Committee 4, recalled that the role of optimisation is to select the best protection options under the prevailing circumstances based on scientific considerations, societal concerns and ethical aspects as well as considerations of transparency. An important role of the concept of optimisation of protection is to foster a 'safety culture' and thereby to engender a state of thinking in everyone responsible for control of radiation exposures, such that they are continuously asking themselves the question, 'Have I done all that I reasonably can to avoid or reduce these doses?' Clearly, the answer to this question is a matter of judgement and necessitates co-operation between all parties involved and, as a minimum, the operating management and the regulatory agencies, but the dialogue would be more complete if other stakeholders were also involved. What kinds of checks and balances or factors would need to be considered for an 'optimal' system? Can indicators be identified? Quantitative methods may provide input to this dialogue but they should never be the sole input. The ICRP considers that the parameters to take into account also include social considerations and values and environmental considerations, as well as technical and economic considerations. Wolfgang Weiss approached the question of the distinction to be made between system optimisation (in the sense of taking account of social and economic factors as well as of all types of hazards) and optimisation of radiological protection. The position of the ICRP is that the system of protection that it proposes is based on both science (quantification of the health risk) and value judgement (what is an acceptable risk?) and optimisation is the recommended process to integrate both aspects. Indeed, there has been evolution since the old system of intervention levels to the new system, whereby, even if the level of the dose or risk (which is called constraint in ICRP-81) is met

  14. MODELLING AND OPTIMISATION OF A BIMORPH PIEZOELECTRIC CANTILEVER BEAM IN AN ENERGY HARVESTING APPLICATION

    Directory of Open Access Journals (Sweden)

    CHUNG KET THEIN

    2016-02-01

Full Text Available Piezoelectric materials are excellent transducers in converting vibrational energy into electrical energy, and vibration-based piezoelectric generators are seen as an enabling technology for wireless sensor networks, especially in self-powered devices. This paper proposes an alternative method for predicting the power output of a bimorph cantilever beam using a finite element method for both static and dynamic frequency analyses. Experiments are performed to validate the model and the simulation results. In addition, a novel approach is presented for optimising the structure of the bimorph cantilever beam, by which the power output is maximised and the structural volume is minimised simultaneously. Finally, the results of the optimised design are presented and compared with other designs.
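A quick analytical cross-check of the kind finite element beam models are commonly validated against is the first bending natural frequency of a uniform Euler-Bernoulli cantilever. All dimensions and material values below are invented for a plain uniform beam, not the paper's bimorph laminate, so the number is illustrative only.

```python
import math

# Invented geometry and material (a plain steel strip, not a bimorph)
E = 200e9                      # Young's modulus, Pa
rho = 7850.0                   # density, kg/m^3
L, b, h = 0.05, 0.01, 0.001    # length, width, thickness, m

I = b * h**3 / 12              # second moment of area of the cross-section
A = b * h                      # cross-sectional area

# lam1 is the first root of the cantilever characteristic equation
lam1 = 1.8751
f1 = (lam1**2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))
print(f1)                      # first natural frequency in Hz
```

Matching the harvester's resonance to the ambient vibration frequency is exactly the kind of constraint the paper's structural optimisation has to respect while trading power output against volume.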

  15. GAOS: Spatial optimisation of crop and nature within agricultural fields

    NARCIS (Netherlands)

    Bruin, de S.; Janssen, H.; Klompe, A.; Lerink, P.; Vanmeulebrouk, B.

    2010-01-01

    This paper proposes and demonstrates a spatial optimiser that allocates areas of inefficient machine manoeuvring to field margins thus improving the use of available space and supporting map-based Controlled Traffic Farming. A prototype web service (GAOS) allows farmers to optimise tracks within

  16. Quantitative Efficiency Evaluation Method for Transportation Networks

    Directory of Open Access Journals (Sweden)

    Jin Qin

    2014-11-01

Full Text Available An effective evaluation of transportation network efficiency/performance is essential to the establishment of sustainable development in any transportation system. Based on a redefinition of transportation network efficiency, a quantitative efficiency evaluation method for transportation networks is proposed, which can reflect the effects of network structure, traffic demands, travel choice, and travel costs on network efficiency. Furthermore, an efficiency-oriented importance measure for network components is presented, which can be used to help engineers identify the critical nodes and links in the network. The numerical examples show that, compared with existing efficiency evaluation methods, the network efficiency value calculated by the method proposed in this paper can portray the real operating situation of the transportation network as well as the effects of the main factors on network efficiency. We also find that the network efficiency and the importance values of the network components are both functions of demands and network structure in the transportation network.
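The structural core of such efficiency measures can be sketched as the average of inverse shortest travel costs over all origin-destination pairs (a Latora-Marchiori-style global efficiency). The paper's measure additionally weights demand and route choice; this toy four-node network and its costs are invented for illustration.

```python
import math

INF = math.inf
nodes = range(4)
cost = {(i, j): INF for i in nodes for j in nodes}
for i in nodes:
    cost[i, i] = 0.0
# undirected links with travel costs (invented toy network)
for i, j, c in [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 3, 5.0)]:
    cost[i, j] = cost[j, i] = c

# Floyd-Warshall all-pairs shortest path costs
for k in nodes:
    for i in nodes:
        for j in nodes:
            cost[i, j] = min(cost[i, j], cost[i, k] + cost[k, j])

n = len(nodes)
efficiency = sum(1.0 / cost[i, j]
                 for i in nodes for j in nodes if i != j) / (n * (n - 1))
print(efficiency)
```

An importance value for a link then follows by removing it, recomputing the efficiency and taking the drop, which is how critical links are flagged.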

  17. Warpage optimisation on the moulded part with straight-drilled and conformal cooling channels using response surface methodology (RSM) and glowworm swarm optimisation (GSO)

    Science.gov (United States)

    Hazwan, M. H. M.; Shayfull, Z.; Sharif, S.; Nasir, S. M.; Zainal, N.

    2017-09-01

In the injection moulding process, quality and productivity are notably important and must be controlled for each product type produced. Quality is measured as the extent of warpage of the moulded parts, while productivity is measured as the duration of the moulding cycle time. To control quality, many researchers have introduced various optimisation approaches which have been proven to enhance the quality of the moulded parts produced. To improve the productivity of the injection moulding process, some researchers have proposed the application of conformal cooling channels, which have been proven to reduce the duration of the moulding cycle time. Therefore, this paper presents an application of an alternative optimisation approach, Response Surface Methodology (RSM) with Glowworm Swarm Optimisation (GSO), on a moulded part with straight-drilled and conformal cooling channel moulds. This study examined the warpage condition of the moulded parts before and after the optimisation work was applied, for both cooling channel types. A front panel housing has been selected as the specimen, and the performance of the proposed optimisation approach has been analysed on the conventional straight-drilled cooling channels compared to the Milled Groove Square Shape (MGSS) conformal cooling channels by simulation analysis using Autodesk Moldflow Insight (AMI) 2013. Based on the results, melt temperature is the most significant factor contributing to warpage, which was improved by 39.1% after optimisation for the straight-drilled cooling channels; cooling time is the most significant factor contributing to warpage, which was improved by 38.7% after optimisation for the MGSS conformal cooling channels. In addition, the findings show that the application of the optimisation work on the conformal cooling channels offers better quality and productivity of the moulded part produced.
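The RSM step can be sketched as fitting a quadratic response surface to a handful of simulation runs and then searching it for the minimum-warpage setting. The process settings and warpage values below are invented, and a plain grid search stands in for the paper's glowworm swarm optimisation.

```python
import numpy as np

# Invented design points: warpage (mm) vs melt temperature (C) and
# cooling time (s); in the paper these would come from Moldflow runs.
melt = np.array([200., 200., 230., 230., 215., 215., 215.])
cool = np.array([10., 20., 10., 20., 15., 10., 20.])
warp = np.array([0.62, 0.55, 0.48, 0.52, 0.45, 0.50, 0.49])

# full quadratic response surface: 1, x1, x2, x1*x2, x1^2, x2^2
X = np.column_stack([np.ones_like(melt), melt, cool,
                     melt * cool, melt**2, cool**2])
beta, *_ = np.linalg.lstsq(X, warp, rcond=None)

# search the fitted surface on a grid (GSO would search it instead)
mg, cg = np.meshgrid(np.linspace(200, 230, 61), np.linspace(10, 20, 41))
G = np.column_stack([np.ones(mg.size), mg.ravel(), cg.ravel(),
                     (mg * cg).ravel(), mg.ravel()**2, cg.ravel()**2])
pred = G @ beta
best = int(np.argmin(pred))
best_melt, best_cool = mg.ravel()[best], cg.ravel()[best]
print(best_melt, best_cool, pred[best])
```

Swapping the grid search for a swarm optimiser changes only how the fitted surface is explored, not how it is built.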

  18. Analysis and optimisation of a mixed fluid cascade (MFC) process

    Science.gov (United States)

    Ding, He; Sun, Heng; Sun, Shoujun; Chen, Cheng

    2017-04-01

A mixed fluid cascade (MFC) process that comprises three refrigeration cycles has great capacity for large-scale LNG production, which consumes a great amount of energy. Therefore, any performance enhancement of the liquefaction process will significantly reduce the energy consumption. The MFC process is simulated and analysed by use of proprietary software, Aspen HYSYS. The effects of feed gas pressure, LNG storage pressure, water-cooler outlet temperature, different pre-cooling regimes, and liquefaction and sub-cooling refrigerant composition on MFC performance are investigated and presented. The excellent numerical calculation ability and user-friendly interface of MATLAB™ are combined with the powerful thermo-physical property package of Aspen HYSYS. A genetic algorithm is then invoked to optimise the MFC process globally. After optimisation, the unit power consumption can be reduced to 4.655 kW h/kmol, or 4.366 kW h/kmol, on condition that the compressor adiabatic efficiency is 80% or 85%, respectively. Additionally, to improve the process further with regard to its thermodynamic efficiency, configuration optimisation is conducted for the MFC process and several configurations are established. By analysing heat transfer and thermodynamic performances, the configuration entailing a pre-cooling cycle with three pressure levels, liquefaction, and a sub-cooling cycle with one pressure level is identified as the most efficient and thus optimal: its unit power consumption is 4.205 kW h/kmol. Additionally, the mechanism responsible for the weak performance of the suggested liquefaction cycle configuration lies in the unbalanced distribution of cold energy in the liquefaction temperature range.

  19. A novel low-power fluxgate sensor using a macroscale optimisation technique for space physics instrumentation

    Science.gov (United States)

    Dekoulis, G.; Honary, F.

    2007-05-01

This paper describes the design of a novel low-power single-axis fluxgate sensor. Several soft magnetic alloy materials have been considered, and the choice was based on the balance between maximum permeability and minimum saturation flux density values. The sensor has been modelled using the Finite Integration Theory (FIT) method. The sensor was subjected to a custom macroscale optimisation technique that significantly reduced the power consumption, by a factor of 16. The results of the sensor's optimisation technique will subsequently be used in the development of a cutting-edge ground-based magnetometer for the study of the complex solar wind-magnetospheric-ionospheric system.

  20. Auto-optimisation for three-dimensional conformal radiotherapy of nasopharyngeal carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Wu, V.W.C. E-mail: orvinwu@polyu.edu.hk; Kwong, D.W.L.; Sham, J.S.T.; Mui, A.W.L

    2003-08-01

Purpose: The purpose of this study was to evaluate the application of auto-optimisation in the treatment planning of three-dimensional conformal radiotherapy (3DCRT) of nasopharyngeal carcinoma (NPC). Methods: Twenty-nine NPC patients were planned by both forward planning and auto-optimisation methods. The forward plans, which consisted of three coplanar facial fields, were produced according to the routine planning criteria. The auto-optimised plans, which consisted of 5-15 (median 9) fields, were generated by the planning system after prescribing the dose requirements and the importance weightings of the planning target volume and organs at risk. Plans produced by the two planning methods were compared by the dose volume histogram, tumour control probability (TCP), conformity index and normal tissue complication probability (NTCP). Results: The auto-optimised plans reduced the average planner's time by over 35 min. They demonstrated a better TCP and conformity index than the forward plans (P=0.03 and 0.04, respectively). Besides, the parotid gland and temporo-mandibular (TM) joint were better spared, with mean dose reductions of 31.8 and 17.7%, respectively. The slight trade-off was a mild dose increase in the spinal cord and brain stem, with their maximum doses remaining within the tolerance limits. Conclusions: The findings demonstrated the potential of auto-optimisation for improving target dose and parotid sparing in the 3DCRT of NPC with saving of the planner's time.

  1. Auto-optimisation for three-dimensional conformal radiotherapy of nasopharyngeal carcinoma

    International Nuclear Information System (INIS)

    Wu, V.W.C.; Kwong, D.W.L.; Sham, J.S.T.; Mui, A.W.L.

    2003-01-01

Purpose: The purpose of this study was to evaluate the application of auto-optimisation in the treatment planning of three-dimensional conformal radiotherapy (3DCRT) of nasopharyngeal carcinoma (NPC). Methods: Twenty-nine NPC patients were planned by both forward planning and auto-optimisation methods. The forward plans, which consisted of three coplanar facial fields, were produced according to the routine planning criteria. The auto-optimised plans, which consisted of 5-15 (median 9) fields, were generated by the planning system after prescribing the dose requirements and the importance weightings of the planning target volume and organs at risk. Plans produced by the two planning methods were compared by the dose volume histogram, tumour control probability (TCP), conformity index and normal tissue complication probability (NTCP). Results: The auto-optimised plans reduced the average planner's time by over 35 min. They demonstrated a better TCP and conformity index than the forward plans (P=0.03 and 0.04, respectively). Besides, the parotid gland and temporo-mandibular (TM) joint were better spared, with mean dose reductions of 31.8 and 17.7%, respectively. The slight trade-off was a mild dose increase in the spinal cord and brain stem, with their maximum doses remaining within the tolerance limits. Conclusions: The findings demonstrated the potential of auto-optimisation for improving target dose and parotid sparing in the 3DCRT of NPC with saving of the planner's time.

  2. A target recognition method for maritime surveillance radars based on hybrid ensemble selection

    Science.gov (United States)

    Fan, Xueman; Hu, Shengliang; He, Jingbo

    2017-11-01

    In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different tradeoffs between the classification error and diversity. During the dynamic selection phase, the meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance based on three different aspects, namely, feature space, decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built full polarimetric high resolution range profile data-set. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.
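The Pareto front sought in the optimisation phase is the set of candidate ensembles that no other candidate beats on both classification error and diversity at once. A minimal, illustrative sketch of extracting such a front by non-dominated filtering (the candidate values are invented placeholders; this is not the authors' NSGA-II implementation):

```python
# Non-dominated filtering: keep only points not dominated by any other point,
# where q dominates p if q is no worse in every objective and q != p.
# Both objectives are treated as minimised, e.g. (error, 1 - diversity).

def pareto_front(points):
    """Return the non-dominated subset of objective tuples (minimisation)."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical candidate ensembles: (classification error, 1 - diversity)
candidates = [(0.10, 0.40), (0.12, 0.30), (0.15, 0.35), (0.08, 0.50)]
print(pareto_front(candidates))
```

Here (0.15, 0.35) is dropped because (0.12, 0.30) is better on both objectives; the remaining three candidates represent different trade-offs, which is exactly the set a dynamic-selection phase would then choose from per query instance.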

  3. A Mechatronic Solution for Efficiency Optimisation

    DEFF Research Database (Denmark)

    Conrad, Finn; Hansen, M.R.; Andersen, T.O.

    2004-01-01

    This paper presents and discusses concepts concerning regeneration of potential energy in hydraulic forklift trucks. A conventional forklift system has been investigated for energy efficiency and compared to an ...

  4. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    Science.gov (United States)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
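The symmetric rank-one (SR1) update mentioned above builds a Hessian approximation from gradient differences rather than explicit second derivatives: given a step s and the gradient change y, it sets B+ = B + (y - Bs)(y - Bs)^T / ((y - Bs)^T s). A self-contained sketch of the textbook formula (not the authors' code) showing that, for a quadratic function, two SR1 updates along independent directions recover the exact Hessian:

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-one quasi-Newton update of Hessian approximation B."""
    r = y - B @ s
    denom = r @ s
    # Standard SR1 safeguard: skip the update when the denominator is tiny.
    if abs(denom) <= tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom

# For a quadratic f(x) = 0.5 x^T A x, gradients satisfy y = A s exactly,
# so SR1 recovers A after updates along a full set of directions.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.eye(2)
for s in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    B = sr1_update(B, s, A @ s)
print(np.allclose(B, A))
```

Unlike BFGS, SR1 does not force the approximation to stay positive definite, which is one reason it is attractive for curvature estimation in second-order reliability calculations.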

  5. An Efficient Mesh Generation Method for Fractured Network System Based on Dynamic Grid Deformation

    Directory of Open Access Journals (Sweden)

    Shuli Sun

    2013-01-01

    Meshing quality of the discrete model influences the accuracy, convergence, and efficiency of the solution for fractured network systems in geological problems. However, modeling and meshing of such a fractured network system are usually tedious and difficult due to the geometric complexity of the computational domain induced by the existence and extension of fractures. The traditional meshing method to deal with fractures usually involves a boundary recovery operation based on topological transformation, which relies on many complicated techniques and skills. This paper presents an alternative and efficient approach for meshing fractured network systems. The method first presets points on fractures and then performs Delaunay triangulation to obtain a preliminary mesh by a point-by-point centroid insertion algorithm. The fractures are then exactly recovered by local correction with a revised dynamic grid deformation approach. A smoothing algorithm is finally applied to improve the quality of the mesh. The proposed approach is efficient, easy to implement, and applicable to the cases of initially existing fractures and extension of fractures. The method is successfully applied to the modeling of two- and three-dimensional discrete fracture network (DFN) systems in geological problems to demonstrate its effectiveness and high efficiency.

  6. Optimisation of a direct plating method for the detection and enumeration of Alicyclobacillus acidoterrestris spores.

    Science.gov (United States)

    Henczka, Marek; Djas, Małgorzata; Filipek, Katarzyna

    2013-01-01

    A direct plating method for the detection and enumeration of Alicyclobacillus acidoterrestris spores has been optimised. The results of the application of four types of growth media (BAT agar, YSG agar, K agar and SK agar) regarding the recovery and enumeration of A. acidoterrestris spores were compared. The influence of the type of applied growth medium, heat shock conditions, incubation temperature, incubation time, plating technique and the presence of apple juice in the sample on the accuracy of the detection and enumeration of A. acidoterrestris spores was investigated. Among the investigated media, YSG agar was the most sensitive medium, and its application resulted in the highest recovery of A. acidoterrestris spores, while K agar and BAT agar were the least suitable media. The effect of the heat shock time on the recovery of spores was negligible. When there was a low concentration of spores in a sample, the membrane filtration method was superior to the spread plating method. The obtained results show that heat shock carried out at 80°C for 10 min and plating samples in combination with membrane filtration on YSG agar, followed by incubation at 46°C for 3 days provided the optimal conditions for the detection and enumeration of A. acidoterrestris spores. Application of the presented method allows highly efficient, fast and sensitive identification and enumeration of A. acidoterrestris spores in food products. This methodology will be useful for the fruit juice industry for identifying products contaminated with A. acidoterrestris spores, and its practical application may prevent economic losses for manufacturers.

  7. Simulation optimisation

    International Nuclear Information System (INIS)

    Anon

    2010-01-01

    Over the past decade there has been a significant advance in flotation circuit optimisation through performance benchmarking using metallurgical modelling and steady-state computer simulation. This benchmarking includes traditional measures, such as grade and recovery, as well as new flotation measures, such as ore floatability, bubble surface area flux and froth recovery. To further this optimisation, Outotec has released its HSC Chemistry software with simulation modules. The flotation model developed by the AMIRA P9 Project, of which Outotec is a sponsor, is regarded by industry as the most suitable flotation model to use for circuit optimisation. This model incorporates ore floatability with flotation cell pulp and froth parameters, residence time, entrainment and water recovery. Outotec's HSC Sim enables you to simulate mineral processes in different levels, from comminution circuits with sizes and no composition, through to flotation processes with minerals by size by floatability components, to full processes with true particles with MLA data.

  8. Optimised operation of an off-grid hybrid wind-diesel-battery system using genetic algorithm

    International Nuclear Information System (INIS)

    Gan, Leong Kit; Shek, Jonathan K.H.; Mueller, Markus A.

    2016-01-01

    Highlights: • Diesel generator’s operation is optimised in a hybrid wind-diesel-battery system. • Optimisation is performed using wind speed and load demand forecasts. • The objective is to maximise wind energy utilisation with limited battery storage. • Physical modelling approach (Simscape) is used to verify mathematical model. • Sensitivity analyses are performed with synthesised wind and load forecast errors. - Abstract: In an off-grid hybrid wind-diesel-battery system, the diesel generator is often not utilised efficiently, therefore compromising its lifetime. In particular, the general rule of thumb of running the diesel generator at more than 40% of its rated capacity is often unmet. This is due to the variation in power demand and wind speed which needs to be supplied by the diesel generator. In addition, the frequent start-stop of the diesel generator leads to additional mechanical wear and fuel wastage. This research paper proposes a novel control algorithm which optimises the operation of a diesel generator using a genetic algorithm. With a given day-ahead forecast of the local renewable energy resource and load demand, it is possible to optimise the operation of a diesel generator, subject to other pre-defined constraints. Thus, the utilisation of the renewable energy sources to supply electricity can be maximised. Usually, the optimisation studies of a hybrid system are conducted through simple analytical modelling, coupled with a selected optimisation algorithm to seek the optimised solution. The obtained solution is not verified using a more realistic system model, for instance the physical modelling approach. This often leads to the question of whether such optimised operation is applicable in reality. In order to take a step further, model-based design using Simulink is employed in this research to perform a comparison through a physical modelling approach. The Simulink model has the capability to incorporate the electrical ...
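The kind of day-ahead scheduling described above can be sketched with a toy genetic algorithm: a binary chromosome encodes the generator's hourly on/off state, and the fitness penalises unmet demand and hours in which the generator would run below 40% of its rating. All numbers below (wind and load forecasts, rating, penalty weights) are invented placeholders, not the paper's model:

```python
import random

random.seed(0)
HOURS, RATING = 6, 10.0
wind = [8.0, 6.0, 2.0, 1.0, 5.0, 9.0]      # forecast wind power (kW), assumed
load = [7.0, 7.0, 7.0, 7.0, 7.0, 7.0]      # forecast demand (kW), assumed

def cost(schedule):
    """Fuel proxy plus penalties for unmet load and sub-40% loading."""
    c = 0.0
    for on, w, d in zip(schedule, wind, load):
        deficit = max(d - w, 0.0)           # power the generator must supply
        if on:
            c += 1.0                        # fixed running cost per hour
            if deficit < 0.4 * RATING:
                c += 0.5                    # inefficient low-load running
        elif deficit > 0:
            c += 100.0                      # unmet demand is heavily penalised
    return c

def ga(pop_size=30, gens=60):
    """Elitist GA with one-point crossover and single-bit mutation."""
    pop = [[random.randint(0, 1) for _ in range(HOURS)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]        # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, HOURS)
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:       # occasional single-bit mutation
                child[random.randrange(HOURS)] ^= 1
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = ga()
print(best, cost(best))
```

With these placeholder forecasts the generator is only worth running in the hours with a wind deficit, so the GA converges to a schedule that keeps it off when wind covers the load.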

  9. Optimisation of key performance measures in air cargo demand management

    OpenAIRE

    Alexander May; Adrian Anslow; Udechukwu Ojiako; Yue Wu; Alasdair Marshall; Maxwell Chipulu

    2014-01-01

    This article sought to facilitate the optimisation of key performance measures utilised for demand management in air cargo operations. The focus was on the Revenue Management team at Virgin Atlantic Cargo and a fuzzy group decision-making method was used. Utilising intelligent fuzzy multi-criteria methods, the authors generated a ranking order of ten key outcome-based performance indicators for Virgin Atlantic air cargo Revenue Management. The result of this industry-driven study showed that ...

  10. An optimised portfolio management model, incorporating best practices

    OpenAIRE

    2015-01-01

    M.Ing. (Engineering Management) Driving sustainability, optimising return on investments and cultivating a competitive market advantage are imperative for organisational success and growth. In order to achieve the business objectives and value proposition, effective management strategies must be efficiently implemented, monitored and controlled. Failure to do so ultimately results in financial loss due to increased capital and operational expenditure, schedule slippages, and substandard deliv...

  11. CREATIV: Research-based innovation for industry energy efficiency

    International Nuclear Information System (INIS)

    Tangen, Grethe; Hemmingsen, Anne Karin T.; Neksa, Petter

    2011-01-01

    Improved energy efficiency is imperative to minimise greenhouse gas emissions and to ensure future energy security. It is also a key to continued profitability in energy-consuming industry. The project CREATIV is a research initiative for industry energy efficiency focusing on utilisation of surplus heat and efficient heating and cooling. In CREATIV, international research groups work together with key vendors of energy efficiency equipment and an industry consortium including the areas metallurgy, pulp and paper, food and fishery, and commercial refrigeration supermarkets. The ambition of CREATIV is to bring forward technology and solutions enabling Norway to reduce both energy consumption and greenhouse gas emissions by 25% by 2020. The main research topics are electricity production from low-temperature heat sources in supercritical CO2 cycles, energy-efficient end-user technology for heating and cooling based on natural working fluids and system optimisation, and efficient utilisation of low-temperature heat by developing new sorption systems and compact compressor-expander units. A defined innovation strategy in the project will ensure exploitation of research results and promote implementation in industry processes. CREATIV will contribute to the recruitment of competent personnel to industry and academia by educating PhD and post doc candidates and several MSc students. The paper presents the CREATIV project, discusses its scientific achievements so far, and outlines how the project results can contribute to reducing industry energy consumption. - Highlights: → New technology for improved energy efficiency relevant across several industries. → Surplus heat exploitation and efficient heating and cooling are important means. → Focus on power production from low temperature heat and heat pumping technologies. → Education and competence building are given priority. → The project consortium includes 20 international industry companies and ...

  12. Optimising Service Delivery of AAC AT Devices and Compensating AT for Dyslexia.

    Science.gov (United States)

    Roentgen, Uta R; Hagedoren, Edith A V; Horions, Katrien D L; Dalemans, Ruth J P

    2017-01-01

    To promote successful use of Assistive Technology (AT) supporting Augmentative and Alternative Communication (AAC) and compensating for dyslexia, the last steps of their provision (delivery and instruction, use, maintenance and evaluation) were optimised. In co-creation with all stakeholders, an integral method and tools were developed based on a list of requirements.

  13. Energy Savings from Optimised In-Field Route Planning for Agricultural Machinery

    Directory of Open Access Journals (Sweden)

    Efthymios Rodias

    2017-10-01

    Various sensor technologies, such as machine vision and the global positioning system (GPS), have been implemented in the navigation of agricultural vehicles. Automated navigation systems have demonstrated the potential for the execution of optimised route plans for field area coverage. This paper presents an assessment of the reduction in energy requirements derived from the implementation of optimised field area coverage planning. The assessment regards the analysis of the energy requirements and the comparison between the non-optimised and optimised plans for field area coverage in the whole sequence of operations required in two different cropping systems: Miscanthus and Switchgrass production. An algorithmic approach for the simulation of the executed field operations by following both non-optimised and optimised field-work patterns was developed. As a result, the corresponding time requirements were estimated as the basis of the subsequent energy cost analysis. Based on the results, the optimised routes reduce the fuel energy consumption by up to 8%, the embodied energy consumption by up to 7%, and the total energy consumption by 3% up to 8%.

  14. Operating conditions of an open and direct solar thermal Brayton cycle with optimised cavity receiver and recuperator

    International Nuclear Information System (INIS)

    Le Roux, W.G.; Bello-Ochende, T.; Meyer, J.P.

    2011-01-01

    The small-scale open and direct solar thermal Brayton cycle with recuperator has several advantages, including low cost, low operation and maintenance costs and it is highly recommended. The main disadvantages of this cycle are the pressure losses in the recuperator and receiver, turbomachine efficiencies and recuperator effectiveness, which limit the net power output of such a system. The irreversibilities of the solar thermal Brayton cycle are mainly due to heat transfer across a finite temperature difference and fluid friction. In this paper, thermodynamic optimisation is applied to concentrate on these disadvantages in order to optimise the receiver and recuperator and to maximise the net power output of the system at various steady-state conditions, limited to various constraints. The effects of wind, receiver inclination, rim angle, atmospheric temperature and pressure, recuperator height, solar irradiance and concentration ratio on the optimum geometries and performance were investigated. The dynamic trajectory optimisation method was applied. Operating points of a standard micro-turbine operating at its highest compressor efficiency and a parabolic dish concentrator diameter of 16 m were considered. The optimum geometries, minimum irreversibility rates and maximum receiver surface temperatures of the optimised systems are shown. For an environment with specific conditions and constraints, there exists an optimum receiver and recuperator geometry so that the system produces maximum net power output. -- Highlights: → Optimum geometries exist such that the system produces maximum net power output. → Optimum operating conditions are shown. → Minimum irreversibility rates and minimum entropy generation rates are shown. → Net power output was described in terms of total entropy generation rate. → Effects such as wind, recuperator height and irradiance were investigated.

  15. Plant-wide dynamic and static optimisation of supermarket refrigeration systems

    DEFF Research Database (Denmark)

    Green, Torben; Izadi-Zamanabadi, Roozbeh; Razavi-Far, Roozbeh

    2013-01-01

    Optimising the operation of a supermarket refrigeration system under dynamic as well as steady-state conditions is addressed in this paper. For this purpose an appropriate performance function that encompasses food quality, system efficiency, and also component reliability is established. The choice...... in the system. Simulation results are used to substantiate the suggested methodology....

  16. Optimisation of Marine Boilers using Model-based Multivariable Control

    DEFF Research Database (Denmark)

    Solberg, Brian

    Traditionally, marine boilers have been controlled using classical single loop controllers. To optimise marine boiler performance, reduce new installation time and minimise the physical dimensions of these large steel constructions, a more comprehensive and coherent control strategy is needed....... This research deals with the application of advanced control to a specific class of marine boilers combining well-known design methods for multivariable systems. This thesis presents contributions for modelling and control of the one-pass smoke tube marine boilers as well as for hybrid systems control. Much...... of the focus has been directed towards water level control which is complicated by the nature of the disturbances acting on the system as well as by low frequency sensor noise. This focus was motivated by an estimated large potential to minimise the boiler geometry by reducing water level fluctuations...

  17. A Proposal of Estimation Methodology to Improve Calculation Efficiency of Sampling-based Method in Nuclear Data Sensitivity and Uncertainty Analysis

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2014-01-01

    The uncertainty with the sampling-based method is evaluated by repeating transport calculations with a number of cross section data sampled from the covariance uncertainty data. In the transport calculation with the sampling-based method, the transport equation is not modified; therefore, all uncertainties of the responses such as k-eff, reaction rates, flux and power distribution can be directly obtained all at one time without code modification. However, a major drawback of the sampling-based method is that it requires an expensive computational load to obtain statistically reliable results (within a confidence level of 0.95) in the uncertainty analysis. The purpose of this study is to develop a method for improving the computational efficiency and obtaining highly reliable uncertainty results when using the sampling-based method with Monte Carlo simulation. The proposed method reduces the convergence time of the response uncertainty by using multiple sets of sampled group cross sections in a single Monte Carlo simulation. The proposed method was verified by estimating the GODIVA benchmark problem and the results were compared with those of the conventional sampling-based method. In this study, a sampling-based method based on the central limit theorem is proposed to improve calculation efficiency by reducing the number of repetitive Monte Carlo transport calculations required to obtain reliable uncertainty analysis results. Each set of sampled group cross sections is assigned to each active cycle group in a single Monte Carlo simulation. The criticality uncertainty for the GODIVA problem is evaluated by the proposed and previous methods. The results show that the proposed sampling-based method can efficiently decrease the number of Monte Carlo simulations required to evaluate the uncertainty of k-eff. It is expected that the proposed method will improve the computational efficiency of uncertainty analysis with the sampling-based method.
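A toy numerical sketch of the idea (all numbers are invented stand-ins, not the paper's GODIVA results): each group of active cycles uses a different sampled cross-section perturbation, and the spread of the per-group k-eff estimates then reflects both the statistical noise and the nuclear-data uncertainty, as the central limit theorem suggests.

```python
import random
import statistics

random.seed(1)
N_SETS, CYCLES_PER_SET = 100, 50            # sampled data sets x cycles per set

def keff_tally(xs_perturbation):
    """Stand-in for one active cycle's k-eff estimate under a sampled
    cross-section set (assumed true value 1.0, statistical noise 0.002)."""
    return 1.0 + xs_perturbation + random.gauss(0.0, 0.002)

group_means = []
for _ in range(N_SETS):
    dxs = random.gauss(0.0, 0.005)          # assumed nuclear-data perturbation
    cycles = [keff_tally(dxs) for _ in range(CYCLES_PER_SET)]
    group_means.append(statistics.fmean(cycles))

# Spread across groups combines data uncertainty with residual statistics.
total_sigma = statistics.stdev(group_means)
print(f"combined k-eff uncertainty ~ {total_sigma:.4f}")
```

Because the per-group statistical noise is averaged over many cycles, the group-to-group spread is dominated by the sampled data perturbation (here 0.005), which is the quantity the sampling-based method is after.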

  18. Optimisation of amplitude distribution of magnetic Barkhausen noise

    Science.gov (United States)

    Pal'a, Jozef; Jančárik, Vladimír

    2017-09-01

    The magnetic Barkhausen noise (MBN) measurement method is a widely used non-destructive evaluation technique for the inspection of ferromagnetic materials. Among other influences, the excitation yoke lift-off is a significant issue for this method, deteriorating the measurement accuracy. In this paper, the lift-off effect is analysed mainly on grain-oriented Fe-3%Si steel subjected to various heat treatment conditions. Based on an investigation of the relationship between the amplitude distribution of MBN and lift-off, an approach to suppress the lift-off effect is proposed. The proposed approach utilizes digital feedback to optimise the measurement based on the amplitude distribution of MBN. The results demonstrated that the approach can strongly suppress the lift-off effect for lift-offs up to 2 mm.

  19. Towards Cost-efficient Sampling Methods

    OpenAIRE

    Peng, Luo; Yongli, Li; Chong, Wu

    2014-01-01

    Sampling methods have received much attention in the field of complex networks in general and statistical physics in particular. This paper presents two new sampling methods based on the perspective that a small fraction of vertices with high node degree can possess most of the structural information of a network. The two proposed sampling methods are efficient in sampling the nodes with high degree. The first new sampling method is improved on the basis of the stratified random sampling method and...

  20. Optimisation of active suspension control inputs for improved performance of active safety systems

    Science.gov (United States)

    Čorić, Mirko; Deur, Joško; Xu, Li; Tseng, H. Eric; Hrovat, Davor

    2018-01-01

    A collocation-type control variable optimisation method is used to investigate the extent to which the fully active suspension (FAS) can be applied to improve the vehicle electronic stability control (ESC) performance and reduce the braking distance. First, the optimisation approach is applied to the scenario of vehicle stabilisation during the sine-with-dwell manoeuvre. The results are used to provide insights into different FAS control mechanisms for vehicle performance improvements related to responsiveness and yaw rate error reduction indices. The FAS control performance is compared to performances of the standard ESC system, optimal active brake system and combined FAS and ESC configuration. Second, the optimisation approach is employed to the task of FAS-based braking distance reduction for straight-line vehicle motion. Here, the scenarios of uniform and longitudinally or laterally non-uniform tyre-road friction coefficient are considered. The influences of limited anti-lock braking system (ABS) actuator bandwidth and limit-cycle ABS behaviour are also analysed. The optimisation results indicate that the FAS can provide competitive stabilisation performance and improved agility when compared to the ESC system, and that it can reduce the braking distance by up to 5% for distinctively non-uniform friction conditions.

  1. Rotational degree-of-freedom synthesis: An optimised finite difference method for non-exact data

    Science.gov (United States)

    Gibbons, T. J.; Öztürk, E.; Sims, N. D.

    2018-01-01

    Measuring the rotational dynamic behaviour of a structure is important for many areas of dynamics such as passive vibration control, acoustics, and model updating. Specialist and dedicated equipment is often needed, unless the rotational degree-of-freedom is synthesised based upon translational data. However, this involves numerically differentiating the translational mode shapes to approximate the rotational modes, for example using a finite difference algorithm. A key challenge with this approach is choosing the measurement spacing between the data points, an issue which has often been overlooked in the published literature. The present contribution will for the first time prove that the use of a finite difference approach can be unstable when using non-exact measured data and a small measurement spacing, for beam-like structures. Then, a generalised analytical error analysis is used to propose an optimised measurement spacing, which balances the numerical error of the finite difference equation with the propagation error from the perturbed data. The approach is demonstrated using both numerical and experimental investigations. It is shown that by obtaining a small number of test measurements it is possible to optimise the measurement accuracy, without any further assumptions on the boundary conditions of the structure.
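The balance described above between finite-difference truncation error and noise-propagation error has a simple closed form in the classical central-difference setting: with noise amplitude eps on the data and M bounding the relevant higher derivative, the total error bound M*h^2/6 + eps/h is minimised at h* = (3*eps/M)**(1/3), where the measurement spacing plays the role of h. A small sketch under those textbook assumptions (not the paper's generalised analysis for beam-like structures):

```python
def optimal_spacing(eps, M):
    """Spacing minimising the total error bound M*h^2/6 + eps/h,
    obtained by setting its derivative M*h/3 - eps/h^2 to zero."""
    return (3.0 * eps / M) ** (1.0 / 3.0)

def total_error(h, eps, M):
    """Truncation term (grows with h) plus noise-propagation term (shrinks)."""
    return M * h**2 / 6.0 + eps / h

# Assumed illustrative values: noise amplitude 1e-6, curvature bound 1.0.
eps, M = 1e-6, 1.0
h_star = optimal_spacing(eps, M)
print(h_star, total_error(h_star, eps, M))
```

Halving or doubling the spacing away from h* strictly increases the bound, which mirrors the paper's observation that too small a measurement spacing makes the synthesis unstable on non-exact data.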

  2. A simple and efficient method for assembling TALE protein based on plasmid library.

    Science.gov (United States)

    Zhang, Zhiqiang; Li, Duo; Xu, Huarong; Xin, Ying; Zhang, Tingting; Ma, Lixia; Wang, Xin; Chen, Zhilong; Zhang, Zhiying

    2013-01-01

    The DNA binding domain of the transcription activator-like effectors (TALEs) from Xanthomonas sp. consists of tandem repeats that can be rearranged according to a simple cipher to target new DNA sequences with high DNA-binding specificity. This technology has been successfully applied in a variety of species for genome engineering. However, assembling long TALE tandem repeats remains a big challenge precluding wide use of this technology. Although several new methodologies for efficiently assembling TALE repeats have been recently reported, all of them require either sophisticated facilities or skilled technicians to carry them out. Here, we describe a simple and efficient method for generating customized TALE nucleases (TALENs) and TALE transcription factors (TALE-TFs) based on a TALE repeat tetramer library. A tetramer library consisting of 256 tetramers covers all possible combinations of 4 base pairs. A set of unique primers was designed for amplification of these tetramers. PCR products were assembled by one step of digestion/ligation reaction. 12 TALE constructs, including 4 TALEN pairs targeted to mouse Gt(ROSA)26Sor gene and mouse Mstn gene sequences as well as 4 TALE-TF constructs targeted to mouse Oct4, c-Myc, Klf4 and Sox2 gene promoter sequences, were generated using our method. The construction routines took 3 days, and constructions could be run in parallel. The rate of positive clones during colony PCR verification was 64% on average. Sequencing results suggested that all TALE constructs were assembled with a high success rate. This is a rapid and cost-efficient method using the most common enzymes and facilities, with a high success rate.

  3. Mesh dependence in PDE-constrained optimisation an application in tidal turbine array layouts

    CERN Document Server

    Schwedes, Tobias; Funke, Simon W; Piggott, Matthew D

    2017-01-01

    This book provides an introduction to PDE-constrained optimisation using finite elements and the adjoint approach. The practical impact of the mathematical insights presented here are demonstrated using the realistic scenario of the optimal placement of marine power turbines, thereby illustrating the real-world relevance of best-practice Hilbert space aware approaches to PDE-constrained optimisation problems. Many optimisation problems that arise in a real-world context are constrained by partial differential equations (PDEs). That is, the system whose configuration is to be optimised follows physical laws given by PDEs. This book describes general Hilbert space formulations of optimisation algorithms, thereby facilitating optimisations whose controls are functions of space. It demonstrates the importance of methods that respect the Hilbert space structure of the problem by analysing the mathematical drawbacks of failing to do so. The approaches considered are illustrated using the optimisation problem arisin...

  4. Experimental design-based isotope-dilution SPME-GC/MS method development for the analysis of smoke flavouring products.

    Science.gov (United States)

    Giri, Anupam; Zelinkova, Zuzana; Wenzl, Thomas

    2017-12-01

    For the implementation of Regulation (EC) No 2065/2003, related to smoke flavourings used or intended for use in or on foods, a method based on solid-phase microextraction (SPME) GC/MS was developed for the characterisation of liquid smoke products. A statistically based experimental design (DoE) was used for method optimisation. The best general conditions to quantitatively analyse the liquid smoke compounds were obtained with a polydimethylsiloxane/divinylbenzene (PDMS/DVB) fibre, 60°C extraction temperature, 30 min extraction time, 250°C desorption temperature, 180 s desorption time, 15 s agitation time, and 250 rpm agitation speed. Under the optimised conditions, 119 wood pyrolysis products, including furan/pyran derivatives, phenols, guaiacol, syringol, benzenediol and their derivatives, cyclic ketones, and several other heterocyclic compounds, were identified. The proposed method was repeatable (RSD% <5) and the calibration functions were linear for all compounds under study. Nine isotopically labelled internal standards were used to improve the quantification of analytes by compensating for matrix effects that might affect headspace equilibrium and extractability of compounds. The optimised isotope-dilution SPME-GC/MS analytical method proved to be fit for purpose, allowing the rapid identification and quantification of volatile compounds in liquid smoke flavourings.

  5. A reliability-based maintenance technicians' workloads optimisation model with stochastic consideration

    Science.gov (United States)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2016-06-01

    The growing interest in technicians' workloads research is probably associated with the recent surge in competition. This was prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this worldwide intense competition in industries has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model which considers technicians' reliability that complements factory information obtained. The information used emerged from technicians' productivity and earned-values using the concept of multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we consider these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned-values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that our model was able to generate information that practicing maintenance engineers can apply in making more informed decisions on technicians' management.

  6. Pareto Efficient Solutions of Attack-Defence Trees

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi; Nielson, Flemming

    2015-01-01

    Attack-defence trees are a promising approach for representing threat scenarios and possible countermeasures in a concise and intuitive manner. An attack-defence tree describes the interaction between an attacker and a defender, and is evaluated by assigning parameters to the nodes, such as probability or cost of attacks and defences. In case of multiple parameters most analytical methods optimise one parameter at a time, e.g., minimise cost or maximise probability of an attack. Such methods may lead to sub-optimal solutions when optimising conflicting parameters, e.g., minimising cost while maximising probability. In order to tackle this challenge, we devise automated techniques that optimise all parameters at once. Moreover, in the case of conflicting parameters our techniques compute the set of all optimal solutions, defined in terms of Pareto efficiency. The developments are carried out...
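
    Pareto-efficient selection over two conflicting parameters can be sketched as follows; the (cost, probability) outcomes are hypothetical, not derived from a real attack-defence tree.

```python
# Keep the (cost, success probability) pairs not dominated by any other
# pair, i.e. minimise cost while maximising probability.
def pareto_front(points):
    front = []
    for c, p in points:
        dominated = any(c2 <= c and p2 >= p and (c2, p2) != (c, p)
                        for c2, p2 in points)
        if not dominated:
            front.append((c, p))
    return sorted(front)

# Illustrative outcomes of different attack strategies.
outcomes = [(10, 0.4), (10, 0.6), (20, 0.6), (30, 0.9), (25, 0.5)]
print(pareto_front(outcomes))  # → [(10, 0.6), (30, 0.9)]
```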

  7. Formulation and optimisation of raft-forming chewable tablets containing H2 antagonist.

    Science.gov (United States)

    Prajapati, Shailesh T; Mehta, Anant P; Modhia, Ishan P; Patel, Chhagan N

    2012-10-01

    The purpose of this research work was to formulate raft-forming chewable tablets of an H2 antagonist (famotidine) using a raft-forming agent along with antacid- and gas-generating agents. Tablets were prepared by wet granulation and evaluated for raft strength, acid neutralisation capacity, weight variation, % drug content, thickness, hardness, friability and in vitro drug release. Various raft-forming agents were used in preliminary screening. A 2^3 full-factorial design was used in the present study for optimisation. The amounts of sodium alginate, calcium carbonate and sodium bicarbonate were selected as independent variables. Raft strength, acid neutralisation capacity and drug release at 30 min were selected as responses. Tablets containing sodium alginate had the maximum raft strength compared with the other raft-forming agents. Acid neutralisation capacity and in vitro drug release of all factorial batches were found to be satisfactory. The F5 batch was optimised based on maximum raft strength and good acid neutralisation capacity. A drug-excipient compatibility study showed no interaction between the drug and excipients. A stability study of the optimised formulation showed that the tablets were stable under accelerated environmental conditions. It was concluded that raft-forming chewable tablets prepared using optimum amounts of sodium alginate, calcium carbonate and sodium bicarbonate could be an efficient dosage form in the treatment of gastro-oesophageal reflux disease.
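
    The coded design matrix of a three-factor, two-level full-factorial experiment like the one above can be generated as follows; the factor names echo the abstract, and the coded levels are the standard low/high pair.

```python
from itertools import product

# Coded 2^3 full-factorial design: each factor varied between a low
# (-1) and a high (+1) coded level, giving 8 experimental runs.
factors = ["sodium_alginate", "calcium_carbonate", "sodium_bicarbonate"]
design = list(product([-1, +1], repeat=len(factors)))

for run_no, coded_levels in enumerate(design, start=1):
    print(run_no, dict(zip(factors, coded_levels)))
```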

  8. Energy efficient maintenance. Project report; Energioptimerende vedligehold. Projektrapport

    Energy Technology Data Exchange (ETDEWEB)

    Bjerg, J. (Center for Drift og Vedligehold, Frederici (Denmark)); Dam Wied, M.; Skjershede Nielsen, P.; Holt, J. (NRGi Raadgivning A/S, Aarhus (Denmark)); Dam, M. (Energi Horsens, Horsens (Denmark)); Holk Lauridsen, V. (Teknologisk Institut, Energieffektivisering og Ventilation, Taastrup (Denmark))

    2010-03-15

    Together with four case companies, the project developed and tested a model for energy-efficient maintenance. In each of the companies, the model was adjusted through a cooperation process aiming at combining energy optimisation and maintenance as part of specific production optimisation. When correctly planned, energy-efficient maintenance is interesting for all companies. An overall solution was made, which can facilitate major energy savings and production efficiency improvement. (LN)

  9. Design of optimised backstepping controller for the synchronisation of chaotic Colpitts oscillator using shark smell algorithm

    Science.gov (United States)

    Fouladi, Ehsan; Mojallali, Hamed

    2018-01-01

    In this paper, an adaptive backstepping controller has been tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using the shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of the particle swarm optimisation (PSO) algorithm. Simulation results show better performance in terms of accuracy and convergence for the proposed method compared to a PSO-optimised controller or a non-optimised backstepping controller.

  10. Topology Optimisation of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Thike Aye Min

    2016-01-01

    Wireless sensor networks are widely used in a variety of fields, including industrial environments. In a clustered network, the location of the cluster head affects the reliability of the network operation. Finding the optimum location of the cluster head is therefore critical for the design of a network. This paper discusses an optimisation approach, based on the brute force algorithm, in the context of topology optimisation of a cluster-structure centralised wireless sensor network. Two examples are given to verify the approach, demonstrating the implementation of the brute force algorithm to find an optimum location of the cluster head.
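
    A brute-force search of the kind described can be sketched as follows; the sensor coordinates and the candidate grid are illustrative.

```python
# Evaluate every candidate grid location and keep the one minimising
# the total squared distance to the sensor nodes.
nodes = [(0, 0), (4, 0), (0, 4), (4, 4), (2, 3)]
candidates = [(x, y) for x in range(5) for y in range(5)]

def total_sq_dist(head):
    hx, hy = head
    return sum((x - hx) ** 2 + (y - hy) ** 2 for x, y in nodes)

best = min(candidates, key=total_sq_dist)
print(best)  # → (2, 2)
```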

  11. Field-Based Supercritical Fluid Extraction of Hydrocarbons at Industrially Contaminated Sites

    Directory of Open Access Journals (Sweden)

    Peggy Rigou

    2002-01-01

    Examination of organic pollutants in groundwaters should also consider the source of the pollution, which is often a solid matrix such as soil, landfill waste, or sediment. This premise should be viewed alongside the growing trend towards field-based characterisation of contaminated sites for reasons of speed and cost. Field-based methods for the extraction of organic compounds from solid samples are generally cumbersome, time consuming, or inefficient. This paper describes the development of a field-based supercritical fluid extraction (SFE) system for the recovery of organic contaminants (benzene, toluene, ethylbenzene, and xylene) and polynuclear aromatic hydrocarbons from soils. A simple, compact, and robust SFE system has been constructed and was found to offer the same extraction efficiency as a well-established laboratory SFE system. Extraction optimisation was statistically evaluated using a factorial analysis procedure. Under optimised conditions, the device yielded recovery efficiencies of >70% with RSD values of 4% against the standard EPA Soxhlet method, compared with a mean recovery efficiency of 48% for a commercially available field-extraction kit. The device will next be evaluated with real samples prior to field deployment.

  12. A Hybrid Method for the Modelling and Optimisation of Constrained Search Problems

    Directory of Open Access Journals (Sweden)

    Sitek Pawel

    2014-08-01

    The paper presents a concept and the outline of the implementation of a hybrid approach to modelling and solving constrained problems. Two environments, mathematical programming (in particular, integer programming) and declarative programming (in particular, constraint logic programming), were integrated. The strengths of integer programming and constraint logic programming, in which constraints are treated in different ways and different methods are implemented, were combined to exploit the advantages of both. The hybrid method is not worse than either of its components used independently. The proposed approach is particularly important for decision models with an objective function and many discrete decision variables added up in multiple constraints. To validate the proposed approach, two illustrative examples are presented and solved. The first example is the authors' original model of cost optimisation in the supply chain with multimodal transportation. The second one is the two-echelon variant of the well-known capacitated vehicle routing problem.

  13. Optimisation of phenolic extraction from Averrhoa carambola pomace by response surface methodology and its microencapsulation by spray and freeze drying.

    Science.gov (United States)

    Saikia, Sangeeta; Mahnot, Nikhil Kumar; Mahanta, Charu Lata

    2015-03-15

    Optimisation of the extraction of polyphenols from star fruit (Averrhoa carambola) pomace using response surface methodology was carried out. Two variables, temperature (°C) and ethanol concentration (%), with five levels (-1.414, -1, 0, +1 and +1.414) were used to design the optimisation model using a central composite rotatable design, where ±1.414 are the axial points, ±1 the factorial points and 0 the centre point of the design. A temperature of 40°C and an ethanol concentration of 65% were the optimised conditions for the response variables of total phenolic content, ferric reducing antioxidant capacity and 2,2-diphenyl-1-picrylhydrazyl scavenging activity. The reverse-phase high-pressure liquid chromatography chromatogram of the polyphenol extract showed eight phenolic acids and ascorbic acid. The extract was then encapsulated with maltodextrin (⩽ DE 20) by spray and freeze drying methods at three different concentrations. The highest encapsulating efficiency was obtained in freeze-dried encapsulates (78-97%). The obtained optimised model could be used for polyphenol extraction from star fruit pomace, and the microencapsulates can be incorporated in different food systems to enhance their antioxidant property. Copyright © 2014 Elsevier Ltd. All rights reserved.
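
    The coded points of a two-factor central composite rotatable design, with the five levels quoted above, can be listed as follows.

```python
# 2^2 factorial points, 4 axial points at the rotatable distance
# +/-1.414, and a centre point: nine runs over five coded levels.
ALPHA = 1.414  # axial distance, (2^2)^(1/4) for two factors

factorial = [(-1, -1), (-1, +1), (+1, -1), (+1, +1)]
axial = [(-ALPHA, 0), (+ALPHA, 0), (0, -ALPHA), (0, +ALPHA)]
centre = [(0, 0)]
design = factorial + axial + centre
print(len(design))  # → 9
```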

  14. An Energy Efficiency Evaluation Method Based on Energy Baseline for Chemical Industry

    Directory of Open Access Journals (Sweden)

    Dong-mei Yao

    2016-01-01

    According to the requirements and structure of the ISO 50001 energy management system, this study proposes an energy efficiency evaluation method based on an energy baseline for the chemical industry. Using this method, the effect of energy plan implementation in chemical production processes can be evaluated quantitatively, and evidence for system fault diagnosis can be provided. The method establishes energy baseline models that can meet the demands of different kinds of production processes and gives a general solving method for each kind of model based on production data. The energy plan implementation effect can then be evaluated, and whether the system is running normally can be determined, through the baseline model. Finally, the method is applied to the cracked gas compressor unit of an ethylene plant in a petrochemical enterprise, demonstrating that it is correct and practical.
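
    An energy baseline of the simplest (linear) kind can be sketched as follows; the production and energy figures are illustrative, not plant data.

```python
# Least-squares fit of an energy baseline E = a*P + b from historical
# production/energy pairs; a new period is flagged if its consumption
# exceeds the baseline prediction.
production = [100, 120, 140, 160, 180]   # throughput (illustrative)
energy     = [205, 245, 285, 325, 365]   # consumption, exactly 2*P + 5

n = len(production)
mean_p = sum(production) / n
mean_e = sum(energy) / n
a = (sum((p - mean_p) * (e - mean_e) for p, e in zip(production, energy))
     / sum((p - mean_p) ** 2 for p in production))
b = mean_e - a * mean_p

def baseline(p):
    return a * p + b

# A period producing 150 units while consuming 310 exceeds the baseline.
print(round(a, 3), round(b, 3), baseline(150) < 310)  # → 2.0 5.0 True
```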

  15. A study of certain Monte Carlo search and optimisation methods

    International Nuclear Information System (INIS)

    Budd, C.

    1984-11-01

    Studies are described which might lead to the development of a search and optimisation facility for the Monte Carlo criticality code MONK. The facility envisaged could be used to maximise a function of k-effective with respect to certain parameters of the system or, alternatively, to find the system (in a given range of systems) for which that function takes a given value. (UK)
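
    A Monte Carlo random-search sketch of such a facility, with a hypothetical quadratic stand-in for the function of k-effective, might look like this; it is not a criticality calculation.

```python
import random

# Sample a system parameter uniformly and keep the value maximising the
# objective. The objective below is a stand-in peaking at x = 3.
random.seed(0)

def objective(x):
    return -(x - 3.0) ** 2 + 1.0

best_x = max((random.uniform(0.0, 6.0) for _ in range(10_000)),
             key=objective)
print(round(best_x, 2))
```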

  16. Methodological principles for optimising functional MRI experiments; Methodische Grundlagen der Optimierung funktioneller MR-Experimente

    Energy Technology Data Exchange (ETDEWEB)

    Wuestenberg, T. [Georg-August-Universitaet Goettingen, Abteilung fuer Medizinische Psychologie (Germany); Georg-August-Universitaet, Abteilung fuer Medizinische Psychologie, Goettingen (Germany); Giesel, F.L. [Deutsches Krebsforschungszentrum (DKFZ) Heidelberg, Abteilung fuer Radiologische Diagnostik (Germany); Strasburger, H. [Georg-August-Universitaet Goettingen, Abteilung fuer Medizinische Psychologie (Germany)

    2005-02-01

    Functional magnetic resonance imaging (fMRI) is one of the most common methods for localising neuronal activity in the brain. Even though the sensitivity of fMRI is comparatively low, the optimisation of certain experimental parameters allows obtaining reliable results. In this article, approaches for optimising the experimental design, imaging parameters and analytic strategies will be discussed. Clinical neuroscientists and interested physicians will receive practical rules of thumb for improving the efficiency of brain imaging experiments. (orig.)

  17. Strategies and Methods for Optimisation of Protection against Internal Exposures of Workers from Industrial Natural Sources (SMOPIE)

    International Nuclear Information System (INIS)

    Van der Steen, J.; Timmermans, C.W.M.; Van Weers, A.W.; Degrange, J.P.; Lefaure, C.; Shaw, P.V.

    2004-01-01

    The report provides summaries on the Work Packages 1 and 2 (see Annex 1 and 2 below) and describes the work carried out in Work Packages 3, 4 and 5. In addition it provides a summary of the main achievements of the project. The objective of Work Package 3 was to try to categorise exposure situations described in the case studies in terms of a limited number of exposure parameters relevant to the implementation of ALARA. It became clear that the characterisation criteria considered for the many different exposure situations in the industrial cases led to an important practical conclusion, namely that the preferred choice of the air sampling method (i.e. to implement ALARA) will be the same in all the industries considered. The aim of work package 4 (Review and evaluation of monitoring strategies and methods) was to review the technical capabilities and limitations of different forms of internal radiation monitoring. This included a consideration of monitoring strategies, methods and equipment, as appropriate. The review considered which types of monitoring (if any) are the most effective in terms of contributing to the optimisation of internal exposures (from inhalation) and whether further developments are needed, especially in relation to existing monitoring equipment. One of the main conclusions is: personal air sampling (PAS) is the best method for assessing occupational doses from inhalation of aerosols. The first step in any monitoring strategy should be an assessment of worker doses using this technique. The Appendices 1-4 of Annex 3 provide the detailed supporting material for Work Package 4. Work Package 5 provides recommended strategies, methods and tools for optimisation of internal exposures in industrial work activities involving natural radionuclides. It is based on the case studies as described in Work Package 2 and the analysis of these studies in Work Package 3. It also takes into account the assessment of monitoring strategies, methods and tools

  18. Characterisation and optimisation of a sample preparation method for the detection and quantification of atmospherically relevant carbonyl compounds in aqueous medium

    Science.gov (United States)

    Rodigast, M.; Mutzel, A.; Iinuma, Y.; Haferkorn, S.; Herrmann, H.

    2015-06-01

    Carbonyl compounds are ubiquitous in the atmosphere, either emitted primarily from anthropogenic and biogenic sources or produced secondarily from the oxidation of volatile organic compounds. Despite a number of studies on the quantification of carbonyl compounds, a comprehensive description of optimised methods for quantifying atmospherically relevant carbonyl compounds is scarce. The method optimisation was conducted for seven atmospherically relevant carbonyl compounds: acrolein, benzaldehyde, glyoxal, methyl glyoxal, methacrolein, methyl vinyl ketone and 2,3-butanedione. O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride (PFBHA) was used as the derivatisation reagent and the formed oximes were detected by gas chromatography/mass spectrometry (GC/MS). With the present method, quantification can be carried out for each carbonyl compound originating from fog, cloud and rain or sampled from the gas and particle phase into water. Detection limits between 0.01 and 0.17 μmol L-1 were found, depending on the carbonyl compound. Furthermore, the best results were found for derivatisation with a PFBHA concentration of 0.43 mg mL-1 for 24 h, followed by extraction with dichloromethane for 30 min at pH = 1. The optimised method was evaluated in the present study by the OH radical initiated oxidation of 3-methylbutanone in the aqueous phase. Methyl glyoxal and 2,3-butanedione were found to be oxidation products in the samples, with yields of 2% for methyl glyoxal and 14% for 2,3-butanedione after a reaction time of 5 h.

  19. Automatic optimisation of beam orientations using the simplex algorithm and optimisation of quality control using statistical process control (S.P.C.) for intensity modulated radiation therapy (I.M.R.T.); Optimisation automatique des incidences des faisceaux par l'algorithme du simplexe et optimisation des controles qualite par la Maitrise Statistique des Processus (MSP) en Radiotherapie Conformationnelle par Modulation d'Intensite (RCMI)

    Energy Technology Data Exchange (ETDEWEB)

    Gerard, K

    2008-11-15

    Intensity Modulated Radiation Therapy (I.M.R.T.) is currently considered as a technique of choice to increase the local control of the tumour while reducing the dose to surrounding organs at risk. However, its routine clinical implementation is partially held back by the excessive amount of work required to prepare the patient treatment. In order to increase the efficiency of the treatment preparation, two axes of work have been defined. The first axis concerned the automatic optimisation of beam orientations. We integrated the simplex algorithm in the treatment planning system. Starting from the dosimetric objectives set by the user, it can automatically determine the optimal beam orientations that best cover the target volume while sparing organs at risk. In addition to time sparing, the simplex results of three patients with a cancer of the oropharynx, showed that the quality of the plan is also increased compared to a manual beam selection. Indeed, for an equivalent or even a better target coverage, it reduces the dose received by the organs at risk. The second axis of work concerned the optimisation of pre-treatment quality control. We used an industrial method: Statistical Process Control (S.P.C.) to retrospectively analyse the absolute dose quality control results performed using an ionisation chamber at Centre Alexis Vautrin (C.A.V.). This study showed that S.P.C. is an efficient method to reinforce treatment security using control charts. It also showed that our dose delivery process was stable and statistically capable for prostate treatments, which implies that a reduction of the number of controls can be considered for this type of treatment at the C.A.V.. (author)
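
    The S.P.C. idea can be sketched as an individuals control chart; the dose-deviation values (%) are illustrative, not the C.A.V. results.

```python
# Control limits at mean +/- 3 sigma of an in-control history flag
# out-of-control quality-control measurements.
deviations = [0.5, -0.3, 0.8, 0.1, -0.6, 0.4, -0.2, 0.7, 5.0, 0.2]

history = deviations[:8]  # establish the limits from in-control data
mean = sum(history) / len(history)
sigma = (sum((d - mean) ** 2 for d in history) / (len(history) - 1)) ** 0.5
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

out_of_control = [d for d in deviations if not (lcl <= d <= ucl)]
print(out_of_control)  # → [5.0]
```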

  20. Statistical optimisation techniques in fatigue signal editing problem

    International Nuclear Information System (INIS)

    Nopiah, Z. M.; Osman, M. H.; Baharin, N.; Abdullah, S.

    2015-01-01

    Success in fatigue signal editing is determined by the level of length reduction without compromising statistical constraints. A great reduction rate can be achieved by removing small-amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting. This has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single-constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable amplitude strain data into different segments, whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelled segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.
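
    A minimal GA sketch of constrained segment selection, with illustrative segment data and a single length budget standing in for the paper's three statistical constraints, might look like this.

```python
import random

# Bit-string individuals choose segments maximising retained fatigue
# damage under a total-length budget; infeasible selections are
# penalised. All values and GA settings are illustrative.
random.seed(1)
segments = [(5, 10), (3, 4), (8, 12), (2, 3), (7, 9), (1, 2)]  # (damage, length)
MAX_LENGTH = 20

def fitness(bits):
    damage = sum(d for b, (d, l) in zip(bits, segments) if b)
    length = sum(l for b, (d, l) in zip(bits, segments) if b)
    return damage if length <= MAX_LENGTH else -1  # penalise infeasible

def mutate(bits):
    child = list(bits)
    child[random.randrange(len(child))] ^= 1  # flip one segment in/out
    return child

# Seed the population with the empty (always feasible) selection.
population = [[0] * len(segments)] + \
             [[random.randint(0, 1) for _ in segments] for _ in range(19)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]
    population = elite + [mutate(random.choice(elite)) for _ in range(10)]

best = max(population, key=fitness)
print(fitness(best))
```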

  1. Statistical optimisation techniques in fatigue signal editing problem

    Energy Technology Data Exchange (ETDEWEB)

    Nopiah, Z. M.; Osman, M. H. [Fundamental Engineering Studies Unit Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 UKM (Malaysia); Baharin, N.; Abdullah, S. [Department of Mechanical and Materials Engineering Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 UKM (Malaysia)

    2015-02-03

    Success in fatigue signal editing is determined by the level of length reduction without compromising statistical constraints. A great reduction rate can be achieved by removing small-amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting. This has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single-constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable amplitude strain data into different segments, whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelled segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.

  2. Using Field Data for Energy Efficiency Based on Maintenance and Operational Optimisation. A Step towards PHM in Process Plants

    Directory of Open Access Journals (Sweden)

    Micaela Demichela

    2018-03-01

    Energy saving is an important issue for any industrial sector; in particular, for the process industry, it can help to minimize both energy costs and environmental impact. Maintenance optimization and operational procedures can offer margins to increase energy efficiency in process plants, even if they are seldom explicitly taken into account in the predictive models guiding the energy saving policies. To ensure that the plant achieves the desired performance, maintenance operations and maintenance results should be monitored, and the connection between the inputs and the outcomes of the maintenance process, in terms of total contribution to manufacturing performance, should be explicit. In this study, a model for energy efficiency analysis was developed, based on a cost and benefits balance. It is aimed at supporting decision making on technical and operational solutions for energy efficiency, through the optimization of maintenance interventions and operational procedures. A case study is described: the effects on energy efficiency of technical and operational optimization measures for bituminous materials production process equipment. The idea of the Conservation Supply Curve (CSC) was used to capture both the cost effectiveness and the energy efficiency effectiveness of the measures. The optimization was thus based on energy consumption data registered on-site: data collection and modelling of the relevant data were used as a base to implement a prognostic and health management (PHM) policy in the company. Based on the results from the analysis, efficiency measures for the industrial case study were proposed, also in relation to maintenance optimization and operating procedures. In the end, the impacts of the implementation of energy saving measures on the performance of the system, in terms of technical and economic feasibility, were demonstrated.
The results showed that maintenance optimization could help in reaching

  3. Work management to optimise occupational radiological protection

    International Nuclear Information System (INIS)

    Ahier, B.

    2009-01-01

    Although work management is no longer a new concept, continued efforts are still needed to ensure that good performance, outcomes and trends are maintained in the face of current and future challenges. The ISOE programme thus created an Expert Group on Work Management in 2007 to develop an updated report reflecting the current state of knowledge, technology and experience in the occupational radiological protection of workers at nuclear power plants. Published in 2009, the new ISOE report on Work Management to Optimise Occupational Radiological Protection in the Nuclear Power Industry provides up-to-date practical guidance on the application of work management principles. Work management measures aim at optimising occupational radiological protection in the context of the economic viability of the installation. Important factors in this respect are measures and techniques influencing i) dose and dose rate, including source- term reduction; ii) exposure, including amount of time spent in controlled areas for operations; and iii) efficiency in short- and long-term planning, worker involvement, coordination and training. Equally important due to their broad, cross-cutting nature are the motivational and organisational arrangements adopted. The responsibility for these aspects may reside in various parts of an installation's organisational structure, and thus, a multi-disciplinary approach must be recognised, accounted for and well-integrated in any work. Based on the operational experience within the ISOE programme, the following key areas of work management have been identified: - regulatory aspects; - ALARA management policy; - worker involvement and performance; - work planning and scheduling; - work preparation; - work implementation; - work assessment and feedback; - ensuring continuous improvement. The details of each of these areas are elaborated and illustrated in the report through examples and case studies arising from ISOE experience. 
They are intended to

  4. Optimisation of tungsten ore processing through a deep mineralogical characterisation and the study of the crushing process

    OpenAIRE

    Bascompte Vaquero, Jordi

    2017-01-01

    The unstoppable increase in global demand for metals calls for urgent development of more efficient extraction and processing methods in the mining industry. Comminution is responsible for nearly half of the energy consumption of the entire mining process, and in the majority of cases it is far from optimised. Within comminution, grinding is widely known to be less efficient than crushing; however, it is needed to reach liberation at ultrafine particle sizes. ...

  5. Water quality modelling and optimisation of wastewater treatment network using mixed integer programming

    CSIR Research Space (South Africa)

    Mahlathi, Christopher

    2016-10-01

    Instream water quality management encompasses field monitoring and utilisation of mathematical models. These models can be coupled with optimisation techniques to determine more efficient water quality management alternatives. Among these activities...

  6. Toward cost-efficient sampling methods

    Science.gov (United States)

    Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie

    2015-09-01

    Sampling methods have received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small number of vertices with high node degree can carry most of the structural information of a complex network. The two proposed sampling methods are efficient in sampling high-degree nodes, so that they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method is developed on the basis of the widely used stratified random sampling (SRS) method, and the second one improves the well-known snowball sampling (SBS) method. In order to demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods in three commonly used simulated networks (scale-free, random and small-world) and in two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in terms of recovering the true network structure characteristics reflected by the clustering coefficient, Bonacich centrality and average path length, especially when the sampling rate is low.
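
    The idea of preferentially sampling high-degree vertices can be sketched on a toy graph; the graph, the sample size and the always-keep-the-hub rule are illustrative simplifications of the SRS/SBS variants described.

```python
import random

# Rank vertices by degree, always retain the top-degree vertex (the
# hub), then sample the remainder at random.
random.seed(2)
adjacency = {
    "hub": ["a", "b", "c", "d", "e"],
    "a": ["hub"], "b": ["hub"], "c": ["hub"],
    "d": ["hub", "e"], "e": ["hub", "d"],
}
degree = {v: len(nbrs) for v, nbrs in adjacency.items()}

ranked = sorted(degree, key=degree.get, reverse=True)
k = 3  # sample size
sample = ranked[:1] + random.sample(ranked[1:], k - 1)
print(sample[0])  # → hub
```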

  7. Calibration of a single hexagonal NaI(Tl) detector using a new numerical method based on the efficiency transfer method

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Mahmoud I., E-mail: mabbas@physicist.net [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Badawi, M.S. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Ruskov, I.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia (Bulgaria); El-Khatib, A.M. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Grozdanov, D.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia (Bulgaria); Thabet, A.A. [Department of Medical Equipment Technology, Faculty of Allied Medical Sciences, Pharos University in Alexandria (Egypt); Kopatch, Yu.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Gouda, M.M. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Skoy, V.R. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation)

    2015-01-21

    Gamma-ray detector systems are important instruments in a broad range of science, and new setups are continually being developed. The most recent step in the evolution of detectors for nuclear spectroscopy is the construction of large arrays of detectors of different shapes (for example, conical, pentagonal, hexagonal, etc.) and sizes, with which the performance and the efficiency can be increased. In this work, a new direct numerical method (NAM), in an integral form and based on the efficiency transfer (ET) method, is used to calculate the full-energy peak efficiency of a single hexagonal NaI(Tl) detector. The core of this ET method is the algorithms and calculations of the effective solid angle ratios for a point (isotropically irradiating) gamma-source situated coaxially at different distances from the detector front-end surface, taking into account the attenuation of the gamma-rays in the detector's material, end-cap and the other materials between the gamma-source and the detector. The full-energy peak efficiency values calculated by the NAM are found to be in good agreement with the measured experimental data.
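    The efficiency-transfer principle, scaling a known reference efficiency by a ratio of solid angles, can be illustrated for the simplest possible geometry: a point source on the detector axis, with the plain geometrical solid angle standing in for the attenuation-weighted effective solid angle used in the paper. All numerical values below are hypothetical.

```python
import math

def solid_angle_on_axis(d, radius):
    """Solid angle subtended by a circular detector face of the given
    radius at a point source on its axis, a distance d away."""
    return 2.0 * math.pi * (1.0 - d / math.hypot(d, radius))

def efficiency_transfer(eps_ref, d_ref, d_new, radius):
    """Transfer a reference full-energy peak efficiency measured at
    d_ref to a new source-detector distance d_new via the solid-angle
    ratio (geometrical only; the paper also folds in attenuation)."""
    return eps_ref * solid_angle_on_axis(d_new, radius) / solid_angle_on_axis(d_ref, radius)
```

    For example, a hypothetical reference efficiency of 5% at 10 cm transfers to a larger value at 5 cm, since the solid angle grows as the source approaches the detector face.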

  8. TU-AB-BRA-02: An Efficient Atlas-Based Synthetic CT Generation Method

    International Nuclear Information System (INIS)

    Han, X

    2016-01-01

    Purpose: A major obstacle for MR-only radiotherapy is the need to generate an accurate synthetic CT (sCT) from MR image(s) of a patient for the purposes of dose calculation and DRR generation. We propose here an accurate and efficient atlas-based sCT generation method, which has a computation speed largely independent of the number of atlases used. Methods: Atlas-based sCT generation requires a set of atlases with co-registered CT and MR images. Unlike existing methods that align each atlas to the new patient independently, we first create an average atlas and pre-align every atlas to the average atlas space. When a new patient arrives, we compute only one deformable image registration to align the patient MR image to the average atlas, which indirectly aligns the patient to all pre-aligned atlases. A patch-based non-local weighted fusion is performed in the average atlas space to generate the sCT for the patient, which is then warped back to the original patient space. We further adapt a PatchMatch algorithm that can quickly find top matches between patches of the patient image and all atlas images, which makes the patch fusion step also independent of the number of atlases used. Results: Nineteen brain tumour patients with both CT and T1-weighted MR images are used as testing data and a leave-one-out validation is performed. Each sCT generated is compared against the original CT image of the same patient on a voxel-by-voxel basis. The proposed method produces a mean absolute error (MAE) of 98.6±26.9 HU overall. The accuracy is comparable with a conventional implementation scheme, but the computation time is reduced from over an hour to four minutes. Conclusion: An average atlas space patch fusion approach can produce highly accurate sCT estimations very efficiently. Further validation on dose computation accuracy and using a larger patient cohort is warranted. The author is a full time employee of Elekta, Inc.
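    The patch-based non-local weighted fusion step can be sketched in one dimension: each atlas MR patch is compared with the patient patch by sum of squared differences, the distance is converted to a Gaussian weight, and the paired atlas CT values are averaged with those weights. This is a minimal sketch of the fusion idea, not the paper's implementation; the decay parameter `h` and the 1-D patches are assumptions.

```python
import math

def nonlocal_patch_fusion(patient_patch, atlas_mr_patches, atlas_ct_values, h=1.0):
    """Estimate one synthetic-CT value: weight each atlas CT value by a
    Gaussian of the SSD between its MR patch and the patient MR patch."""
    weights = []
    for mr in atlas_mr_patches:
        ssd = sum((a - b) ** 2 for a, b in zip(patient_patch, mr))
        weights.append(math.exp(-ssd / (h * h)))
    total = sum(weights)
    return sum(w * ct for w, ct in zip(weights, atlas_ct_values)) / total
```

    An atlas patch identical to the patient patch dominates the weighted average, while dissimilar patches contribute almost nothing, which is what makes the fusion robust to poor matches.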

  9. Optimisation of a propagation-based x-ray phase-contrast micro-CT system

    Science.gov (United States)

    Nesterets, Yakov I.; Gureyev, Timur E.; Dimmock, Matthew R.

    2018-03-01

    Micro-CT scanners find applications in many areas ranging from biomedical research to material sciences. In order to provide spatial resolution on a micron scale, these scanners are usually equipped with micro-focus, low-power x-ray sources and hence require long scanning times to produce high resolution 3D images of the object with acceptable contrast-to-noise. Propagation-based phase-contrast tomography (PB-PCT) has the potential to significantly improve the contrast-to-noise ratio (CNR) or, alternatively, reduce the image acquisition time while preserving the CNR and the spatial resolution. We propose a general approach for the optimisation of the PB-PCT imaging system. When applied to an imaging system with fixed parameters of the source and detector, this approach requires optimisation of only two independent geometrical parameters of the imaging system, i.e. the source-to-object distance R1 and the geometrical magnification M, in order to produce the best spatial resolution and CNR. If, in addition to R1 and M, the system parameter space also includes the source size and the anode potential, this approach allows one to find a unique configuration of the imaging system that produces the required spatial resolution and the best CNR.

  10. Optimisation Platform for copper ore processing at the Division of Concentrator of KGHM Polska Miedz S.A.

    Directory of Open Access Journals (Sweden)

    Kuzba Bogdan

    2016-01-01

    Full Text Available The idea of the Optimisation Platform is to create an innovative system dedicated to improving the technology and cost efficiency of the processes realised at the Division of Concentrators of KGHM Polska Miedz SA. This highly sophisticated tool is based on visual, acoustic and vibration detection systems. The range of its functionality is described in this work, covering three main utility modules: froth flotation image processing (FloVis), grinding and classification monitoring (MillVis) and a belt conveyor control unit (ConVis). The effects of implementing the system under KGHM conditions are described. It is concluded that the Optimisation Platform is one of the most promising solutions for improving the technological and economic performance of the Division of Concentrators of KGHM Polska Miedz S.A.

  11. Hybrid real-code ant colony optimisation for constrained mechanical design

    Science.gov (United States)

    Pholdee, Nantiwat; Bureerat, Sujin

    2016-01-01

    This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
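    One ingredient of the hybrids above, initial-population generation by Latin hypercube sampling, can be sketched as follows: each variable's range is cut into as many equal strata as there are points, and every stratum is used exactly once per variable. This is a generic LHS sketch, not the TPLHD variant used in the paper.

```python
import random

def latin_hypercube(n_points, bounds, seed=0):
    """Generate n_points design points within `bounds` (list of (lo, hi)
    per variable) so that each variable's range is covered by one point
    per stratum, giving a more even spread than plain Monte Carlo."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        strata = list(range(n_points))
        rng.shuffle(strata)           # random pairing of strata across variables
        width = (hi - lo) / n_points
        cols.append([lo + (s + rng.random()) * width for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n_points)]
```

    Each returned point could then seed one ant (or one simplex vertex) in the hybrid optimiser.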

  12. Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics.

    Science.gov (United States)

    Trianni, Vito; López-Ibáñez, Manuel

    2015-01-01

    The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics.

  13. Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics.

    Directory of Open Access Journals (Sweden)

    Vito Trianni

    Full Text Available The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics.

  14. Research on Duct Flow Field Optimisation of a Robot Vacuum Cleaner

    Directory of Open Access Journals (Sweden)

    Xiao-bo Lai

    2011-11-01

    Full Text Available The duct of a robot vacuum cleaner is the length of the flow channel between the inlet of the rolling brush blower and the outlet of the vacuum blower. To cope with the pressure drop problem of the duct flow field in a robot vacuum cleaner, a method based on the Pressure Implicit with Splitting of Operators (PISO) algorithm is introduced and the optimisation design of the duct flow field is implemented. Firstly, the duct structure in a robot vacuum cleaner is taken as a research object and, with computational fluid dynamics (CFD) theories adopted, a three-dimensional fluid model of the duct is established by means of the FLUENT solver of the CFD software. Secondly, with the k-ε turbulence model of three-dimensional incompressible fluid considered and the PISO pressure-correction algorithm employed, the flow field numerical simulations inside the duct of the robot vacuum cleaner are carried out. Then, the velocity vector plots on an arbitrary plane of the duct flow field are obtained. Finally, an investigation of the dynamic characteristics of the duct flow field is done, defects of the original duct flow field are analysed, and the optimisation of the original flow field is then conducted. Experimental results show that the duct flow field after optimisation can effectively reduce the pressure drop, and the feasibility as well as the correctness of the theoretical modelling and optimisation approaches are validated.

  15. Research on Duct Flow Field Optimisation of a Robot Vacuum Cleaner

    Directory of Open Access Journals (Sweden)

    Xiao-bo Lai

    2011-11-01

    Full Text Available The duct of a robot vacuum cleaner is the length of the flow channel between the inlet of the rolling brush blower and the outlet of the vacuum blower. To cope with the pressure drop problem of the duct flow field in a robot vacuum cleaner, a method based on the Pressure Implicit with Splitting of Operators (PISO) algorithm is introduced and the optimisation design of the duct flow field is implemented. Firstly, the duct structure in a robot vacuum cleaner is taken as a research object and, with computational fluid dynamics (CFD) theories adopted, a three-dimensional fluid model of the duct is established by means of the FLUENT solver of the CFD software. Secondly, with the k-ε turbulence model of three-dimensional incompressible fluid considered and the PISO pressure-correction algorithm employed, the flow field numerical simulations inside the duct of the robot vacuum cleaner are carried out. Then, the velocity vector plots on an arbitrary plane of the duct flow field are obtained. Finally, an investigation of the dynamic characteristics of the duct flow field is done, defects of the original duct flow field are analysed, and the optimisation of the original flow field is then conducted. Experimental results show that the duct flow field after optimisation can effectively reduce the pressure drop, and the feasibility as well as the correctness of the theoretical modelling and optimisation approaches are validated.

  16. Parallel unstructured mesh optimisation for 3D radiation transport and fluids modelling

    International Nuclear Information System (INIS)

    Gorman, G.J.; Pain, Ch. C.; Oliveira, C.R.E. de; Umpleby, A.P.; Goddard, A.J.H.

    2003-01-01

    In this paper we describe the theory and application of a parallel mesh optimisation procedure to obtain self-adapting finite element solutions on unstructured tetrahedral grids. The optimisation procedure adapts the tetrahedral mesh to the solution of a radiation transport or fluid flow problem without sacrificing the integrity of the boundary (geometry) or internal boundaries (regions) of the domain. The objective is to obtain a mesh in which the interpolation error is uniform in every direction and the elements are of good shape quality. This is accomplished with the use of a non-Euclidean (anisotropic) metric which is related to the Hessian of the solution field. Appropriate scaling of the metric enables the resolution of multi-scale phenomena as encountered in transient incompressible fluids and multigroup transport calculations. The resulting metric is used to calculate element size and shape quality. The mesh optimisation method is based on a series of mesh connectivity and node position searches of the landscape defining mesh quality, which is gauged by a functional. The mesh modification thus fits the solution field(s) in an optimal manner. The parallel mesh optimisation/adaptivity procedure presented in this paper is of general applicability. We illustrate this by applying it to a transient CFD (computational fluid dynamics) problem. Incompressible flow past a cylinder at moderate Reynolds numbers is modelled to demonstrate that the mesh can follow transient flow features. (authors)
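    The Hessian-based metric construction can be sketched for a 2x2 Hessian: take the absolute values of its eigenvalues, floor them, scale by a target interpolation error, and reassemble a symmetric positive-definite tensor. This is a minimal sketch of the standard construction; the floor `eps` and target error `err` are illustrative assumed values.

```python
import math

def hessian_metric_2d(hxx, hxy, hyy, eps=1e-6, err=0.01):
    """Build an anisotropic metric (Mxx, Mxy, Myy) from the symmetric
    Hessian [[hxx, hxy], [hxy, hyy]]: edge lengths measured in this
    metric then correspond to roughly uniform interpolation error."""
    # eigenvalues of the symmetric 2x2 Hessian
    tr, det = hxx + hyy, hxx * hyy - hxy * hxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc
    # rotation angle of the principal axes (eigenvector of l1)
    theta = 0.5 * math.atan2(2.0 * hxy, hxx - hyy)
    c, s = math.cos(theta), math.sin(theta)
    # absolute eigenvalues, floored at eps and scaled by the target error
    a1, a2 = max(abs(l1), eps) / err, max(abs(l2), eps) / err
    # reassemble M = R diag(a1, a2) R^T
    return (a1 * c * c + a2 * s * s,
            (a1 - a2) * c * s,
            a1 * s * s + a2 * c * c)
```

    A direction with larger curvature thus gets a larger metric entry, which the mesh optimiser interprets as a request for shorter edges in that direction.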

  17. Energy and wear optimisation of train longitudinal dynamics and of traction and braking systems

    Science.gov (United States)

    Conti, R.; Galardi, E.; Meli, E.; Nocciolini, D.; Pugi, L.; Rindi, A.

    2015-05-01

    Traction and braking systems deeply affect longitudinal train dynamics, especially when an extensive blending phase among different pneumatic, electric and magnetic devices is required. The energy and wear optimisation of longitudinal vehicle dynamics has a crucial economic impact and involves several engineering problems such as wear of braking friction components, energy efficiency, thermal load on components, and level of safety under degraded adhesion conditions (often constrained by the current regulation in force on signalling or other safety-related subsystems). In fact, the application of energy storage systems can lead to an efficiency improvement of at least 10% while, as regards wear reduction, the improvement due to distributed traction systems and to optimised traction devices can be quantified at about 50%. In this work, an innovative integrated procedure is proposed by the authors to optimise longitudinal train dynamics and traction and braking manoeuvres in terms of both energy and wear. The new approach has been applied to existing test cases and validated with experimental data provided by Breda; for some components and their homologation process, the experimental results derive from cooperation with relevant industrial partners such as Trenitalia and Italcertifer. In particular, simulation results refer to tests performed on a high-speed train (Ansaldo Breda Emu V250) and on a tram (Ansaldo Breda Sirio Tram). The proposed approach is based on a modular simulation platform in which the sub-models corresponding to different subsystems can be easily customised, depending on the considered application, on the availability of technical data and on the homologation process of different components.

  18. Intelligent Data Storage and Retrieval for Design Optimisation – an Overview

    Directory of Open Access Journals (Sweden)

    C. Peebles

    2005-01-01

    Full Text Available This paper documents the findings of a literature review conducted by the Sir Lawrence Wackett Centre for Aerospace Design Technology at RMIT University. The review investigates aspects of a proposed system for intelligent design optimisation. Such a system would be capable of efficiently storing (and compressing if required) a range of types of design data in an intelligent database. This database would be accessed by the system during subsequent design processes, allowing relevant design data to be searched for and re-used in later designs, so that the system becomes increasingly efficient at reducing design time as the database grows. Extensive research has been performed on both the theoretical aspects of the project and practical examples of current similar systems. This research covers the areas of database systems, database queries, representation and compression of design data, geometric representation, and heuristic methods for design applications.

  19. APPLICATION OF A PRIMAL-DUAL INTERIOR POINT ALGORITHM USING EXACT SECOND ORDER INFORMATION WITH A NOVEL NON-MONOTONE LINE SEARCH METHOD TO GENERALLY CONSTRAINED MINIMAX OPTIMISATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    INTAN S. AHMAD

    2008-04-01

    Full Text Available This work presents the application of a primal-dual interior point method to minimax optimisation problems. The algorithm differs significantly from previous approaches as it involves a novel non-monotone line search procedure, which is based on the use of standard penalty methods as the merit function for the line search. The crucial novel concept is the discretisation of the penalty parameter over a finite range of orders of magnitude and the provision of a memory list for each such order. An implementation within a logarithmic barrier algorithm for bounds handling is presented, with capabilities for large scale application. The case studies presented demonstrate the capabilities of the proposed methodology, which relies on the reformulation of minimax models into standard nonlinear optimisation models. Some previously reported case studies from the open literature have been solved, with significantly better optimal solutions identified. We believe that the nature of the non-monotone line search scheme allows the search procedure to escape from local minima, hence the encouraging results obtained.

  20. MRI to X-ray mammography intensity-based registration with simultaneous optimisation of pose and biomechanical transformation parameters.

    Science.gov (United States)

    Mertzanidou, Thomy; Hipwell, John; Johnsen, Stian; Han, Lianghao; Eiben, Bjoern; Taylor, Zeike; Ourselin, Sebastien; Huisman, Henkjan; Mann, Ritse; Bick, Ulrich; Karssemeijer, Nico; Hawkes, David

    2014-05-01

    Determining corresponding regions between an MRI and an X-ray mammogram is a clinically useful task that is challenging for radiologists due to the large deformation that the breast undergoes between the two image acquisitions. In this work we propose an intensity-based image registration framework, where the biomechanical transformation model parameters and the rigid-body transformation parameters are optimised simultaneously. Patient-specific biomechanical modelling of the breast derived from diagnostic, prone MRI has been previously used for this task. However, the high computational time associated with breast compression simulation using commercial packages did not allow the optimisation of both pose and FEM parameters in the same framework. We use a fast explicit Finite Element (FE) solver that runs on a graphics card, enabling the FEM-based transformation model to be fully integrated into the optimisation scheme. The transformation model has seven degrees of freedom, which include parameters for both the initial rigid-body pose of the breast prior to mammographic compression, and those of the biomechanical model. The framework was tested on ten clinical cases and the results were compared against an affine transformation model previously proposed for the same task. The mean registration error was 11.6 ± 3.8 mm for the CC and 11 ± 5.4 mm for the MLO view registrations, indicating that this could be a useful clinical tool. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  1. Topology optimised wavelength dependent splitters

    DEFF Research Database (Denmark)

    Hede, K. K.; Burgos Leon, J.; Frandsen, Lars Hagedorn

    A photonic crystal wavelength dependent splitter has been constructed by utilising topology optimisation [1]. The splitter has been fabricated in a silicon-on-insulator material (Fig. 1). The topology optimised wavelength dependent splitter demonstrates promising 3D FDTD simulation results....... This complex photonic crystal structure is very sensitive against small fabrication variations from the expected topology optimised design. A wavelength dependent splitter is an important basic building block for high-performance nanophotonic circuits. [1] J. S. Jensen and O. Sigmund, Appl. Phys. Lett. 84, 2022...

  2. Time varying acceleration coefficients particle swarm optimisation (TVACPSO): A new optimisation algorithm for estimating parameters of PV cells and modules

    International Nuclear Information System (INIS)

    Jordehi, Ahmad Rezaee

    2016-01-01

    Highlights: • A modified PSO has been proposed for parameter estimation of PV cells and modules. • In the proposed modified PSO, acceleration coefficients are changed during the run. • The proposed modified PSO mitigates the premature convergence problem. • The parameter estimation problem has been solved for both PV cells and PV modules. • The results show that the proposed PSO outperforms other state of the art algorithms. - Abstract: Estimating circuit model parameters of PV cells/modules represents a challenging problem. The PV cell/module parameter estimation problem is typically translated into an optimisation problem and is solved by metaheuristic optimisation algorithms. Particle swarm optimisation (PSO) is considered a popular and well-established optimisation algorithm. Despite all its advantages, PSO suffers from a premature convergence problem, meaning that it may get trapped in local optima. Personal and social acceleration coefficients are two control parameters that, due to their effect on explorative and exploitative capabilities, play important roles in the computational behaviour of PSO. In this paper, in an attempt toward premature convergence mitigation in PSO, its personal acceleration coefficient is decreased during the course of the run, while its social acceleration coefficient is increased. In this way, an appropriate tradeoff between the explorative and exploitative capabilities of PSO is established during the course of the run and the premature convergence problem is significantly mitigated. The results vividly show that in parameter estimation of PV cells and modules, the proposed time varying acceleration coefficients PSO (TVACPSO) offers more accurate parameters than conventional PSO, teaching learning-based optimisation (TLBO) algorithm, imperialistic competitive algorithm (ICA), grey wolf optimisation (GWO), water cycle algorithm (WCA), pattern search (PS) and Newton algorithm. For validation of the proposed methodology, parameter estimation has been done both for
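    The time-varying acceleration coefficient scheme described above can be sketched as a standard PSO in which the cognitive coefficient c1 is ramped down and the social coefficient c2 ramped up over the run, shifting the swarm from exploration to exploitation. The endpoint values 2.5/0.5 and the linearly ramped inertia weight are common literature choices, assumed here rather than taken from the paper.

```python
import random

def tvac_pso(f, bounds, n_particles=20, iters=200, seed=1):
    """Minimise f over box `bounds` with time-varying acceleration
    coefficients: c1 ramps 2.5 -> 0.5, c2 ramps 0.5 -> 2.5."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(iters):
        frac = t / (iters - 1)
        c1, c2 = 2.5 - 2.0 * frac, 0.5 + 2.0 * frac   # the TVAC schedule
        w = 0.9 - 0.5 * frac                          # inertia also ramped down
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

    For PV parameter estimation, `f` would be the root-mean-square error between measured and modelled I-V points; here any test function works.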

  3. Aspects of approximate optimisation: overcoming the curse of dimensionality and design of experiments

    NARCIS (Netherlands)

    Trichon, Sophie; Bonte, M.H.A.; Ponthot, Jean-Philippe; van den Boogaard, Antonius H.

    2007-01-01

    Coupling optimisation algorithms to Finite Element Methods (FEM) is a very promising way to achieve optimal metal forming processes. However, many optimisation algorithms exist and it is not clear which of these algorithms to use. This paper investigates the sensitivity of a Sequential Approximate

  4. Methodical approach to financial stimulation of logistics managers

    Directory of Open Access Journals (Sweden)

    Melnykova Kateryna V.

    2014-01-01

    Full Text Available The article offers a methodical approach to the financial stimulation of logistics managers, which allows the incentive amount to be calculated with consideration of the profit obtained from introducing optimising logistics solutions. The author generalises measures that would allow enterprise top managers to strengthen the work incentives of logistics managers. The article identifies motivation factors that influence logistics managers' attitude towards executing optimising logistical solutions that minimise logistical costs. The author builds a scale of financial encouragement for the introduction of optimising logistical solutions proposed by logistics managers. This scale is the basis for the functioning of the encouragement system and helps to increase the operational efficiency of logistics managers as well as to optimise enterprise logistical solutions.

  5. Ants Colony Optimisation of a Measuring Path of Prismatic Parts on a CMM

    Directory of Open Access Journals (Sweden)

    Stojadinovic Slavenko M.

    2016-03-01

    Full Text Available This paper presents optimisation of a measuring probe path in inspecting the prismatic parts on a CMM. The optimisation model is based on: (i) the mathematical model that establishes an initial collision-free path presented by a set of points, and (ii) the solution of the Travelling Salesman Problem (TSP) obtained with Ant Colony Optimisation (ACO). In order to solve the TSP, an ACO algorithm that aims to find the shortest path of ant colony movement (i.e. the optimised path) is applied. Then, the optimised path is compared with the measuring path obtained with online programming on the CMM ZEISS UMM500 and with the measuring path obtained in the CMM inspection module of Pro/ENGINEER® software. The results of comparing the optimised path with the other two generated paths show that the optimised path is at least 20% shorter than the path obtained by on-line programming on the CMM ZEISS UMM500, and at least 10% shorter than the path obtained by using the CMM module in Pro/ENGINEER®.
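    The ACO-for-TSP step can be sketched with a basic ant system: ants build tours guided by pheromone and inverse distance, pheromone evaporates each iteration, and the best tour found so far is reinforced. Parameter values (alpha, beta, rho, colony size) are illustrative defaults, not the paper's settings.

```python
import math
import random

def aco_tsp(points, n_ants=20, iters=100, alpha=1.0, beta=3.0, rho=0.5, seed=0):
    """Find a short closed tour over `points` (2-D coordinates) with a
    basic ant system; returns (tour as index list, tour length)."""
    rng = random.Random(seed)
    n = len(points)
    dist = [[math.dist(a, b) for b in points] for a in points]
    tau = [[1.0] * n for _ in range(n)]          # pheromone matrix
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                # probability ~ pheromone^alpha * (1/distance)^beta
                weights = [(j, tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta)
                           for j in unvisited]
                r = rng.random() * sum(w for _, w in weights)
                for j, w in weights:
                    r -= w
                    if r <= 0:
                        break
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporate, then reinforce the best tour found so far
        tau = [[(1.0 - rho) * t for t in row] for row in tau]
        for k in range(n):
            i, j = best_tour[k], best_tour[(k + 1) % n]
            tau[i][j] += 1.0 / best_len
            tau[j][i] += 1.0 / best_len
    return best_tour, best_len
```

    In the CMM setting, `points` would be the collision-free inspection points of the prismatic part, and the returned order is the optimised probe path.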

  6. Optimising Magnetostatic Assemblies

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Smith, Anders

    theorem. This theorem formulates an energy equivalence principle with several implications concerning the optimisation of objective functionals that are linear with respect to the magnetic field. Linear functionals represent different optimisation goals, e.g. maximising a certain component of the field...... approached employing a heuristic algorithm, which led to new design concepts. Some of the procedures developed for linear objective functionals have been extended to non-linear objectives, by employing iterative techniques. Even though most of the optimality results discussed in this work have been derived

  7. Selecting a climate model subset to optimise key ensemble properties

    Directory of Open Access Journals (Sweden)

    N. Herger

    2018-02-01

    Full Text Available End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
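    The simplest instance of the subset-selection idea above, choosing the k-member subset whose mean bias has the smallest magnitude, can be written as a brute-force search over combinations. The paper's tool additionally handles spread, model interdependence and out-of-sample checks; this sketch covers only the mean-bias criterion.

```python
from itertools import combinations

def best_subset(model_biases, k):
    """Return the indices of the k ensemble members whose mean bias has
    the smallest magnitude; ties resolve to the first subset enumerated."""
    return min(combinations(range(len(model_biases)), k),
               key=lambda idx: abs(sum(model_biases[i] for i in idx) / k))
```

    For larger ensembles the combinatorial search becomes expensive, which is one motivation for the optimisation-based solver used in the paper.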

  8. Selecting a climate model subset to optimise key ensemble properties

    Science.gov (United States)

    Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.

    2018-02-01

    End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.

  9. Optimisation of searches for Supersymmetry with the ATLAS detector

    Energy Technology Data Exchange (ETDEWEB)

    Zvolsky, Milan

    2012-01-15

    The ATLAS experiment is one of the four large experiments at the Large Hadron Collider, specifically designed to search for the Higgs boson and physics beyond the Standard Model. The aim of this thesis is the optimisation of searches for Supersymmetry in decays with two leptons and missing transverse energy in the final state. Two optimisation studies have been performed for two important analysis aspects: the final signal region selection and the choice of the trigger selection. In the first part of the analysis, a cut-based optimisation of signal regions is performed, maximising the signal while minimising background contamination. In this way, the signal yield can in some cases be more than doubled. The second approach is to introduce di-lepton triggers, which allow the lepton transverse momentum threshold to be lowered, significantly increasing the number of selected signal events. The signal region optimisation was considered for the choice of the final event selection in the ATLAS di-lepton analyses. The trigger study contributed to the incorporation of di-lepton triggers into the ATLAS trigger menu. (orig.)

  10. Assessment concept for the building design process using the Eco-factor method

    DEFF Research Database (Denmark)

    Wahlström, Åsa; Brohus, Henrik

    2006-01-01

    In recent years, the pressure for energy improvements has increased. However, a one-sided focus on energy efficiency might come at the expense of the indoor climate. It is therefore essential that energy optimisation be integrated with assessment of the indoor climate. A guideline tool with an assessment concept based on the so-called Eco-factor method has been developed for an integrated design process.

  11. Dispersion-Flattened Composite Highly Nonlinear Fibre Optimised for Broadband Pulsed Four-Wave Mixing

    DEFF Research Database (Denmark)

    Lillieholm, Mads; Galili, Michael; Oxenløwe, Leif Katsuo

    2016-01-01

    We present a segmented composite HNLF optimised for mitigation of dispersion-fluctuation impairments for broadband pulsed four-wave mixing. The HNLF segmentation allows for pulsed FWM processing of a 13-nm wide input WDM signal with -4.6-dB conversion efficiency...

  12. Automatic optimisation of beam orientations using the simplex algorithm and optimisation of quality control using statistical process control (S.P.C.) for intensity modulated radiation therapy (I.M.R.T.)

    International Nuclear Information System (INIS)

    Gerard, K.

    2008-11-01

    Intensity Modulated Radiation Therapy (I.M.R.T.) is currently considered a technique of choice to increase local control of the tumour while reducing the dose to surrounding organs at risk. However, its routine clinical implementation is partially held back by the excessive amount of work required to prepare the patient treatment. In order to increase the efficiency of treatment preparation, two axes of work were defined. The first axis concerned the automatic optimisation of beam orientations. We integrated the simplex algorithm into the treatment planning system. Starting from the dosimetric objectives set by the user, it can automatically determine the optimal beam orientations that best cover the target volume while sparing organs at risk. In addition to saving time, the simplex results for three patients with cancer of the oropharynx showed that the quality of the plan is also increased compared to manual beam selection: for an equivalent or even better target coverage, it reduces the dose received by the organs at risk. The second axis of work concerned the optimisation of pre-treatment quality control. We used an industrial method, Statistical Process Control (S.P.C.), to retrospectively analyse the absolute dose quality control results obtained with an ionisation chamber at Centre Alexis Vautrin (C.A.V.). This study showed that S.P.C. is an efficient method to reinforce treatment security using control charts. It also showed that our dose delivery process was stable and statistically capable for prostate treatments, which implies that a reduction in the number of controls can be considered for this type of treatment at the C.A.V. (author)
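    The S.P.C. component can be illustrated with an individuals control chart: estimate the mean and standard deviation from historical QC results and flag any measurement outside the mean ± 3σ limits. The deviation values below are hypothetical, not C.A.V. data.

```python
import statistics

# Illustrative QC results: relative dose deviations (%) between measured and
# planned dose for successive IMRT pre-treatment controls (hypothetical data).
deviations = [0.5, -0.2, 0.8, 0.1, -0.6, 0.4, 0.0, 0.7, -0.3, 0.2]

mean = statistics.mean(deviations)
sigma = statistics.stdev(deviations)
# Individuals control chart: upper and lower control limits at mean +/- 3 sigma.
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

# Screen the history plus one new (deliberately aberrant) measurement.
out_of_control = [d for d in deviations + [2.9] if not lcl <= d <= ucl]
print(f"limits: [{lcl:.2f}, {ucl:.2f}]  out of control: {out_of_control}")
```

    A point outside the limits signals a special cause (e.g. a delivery or setup fault) that should be investigated before treatment proceeds.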

  13. Microalgae based biorefinery: evaluation of oil extraction methods in terms of efficiency, costs, toxicity and energy in lab-scale

    Directory of Open Access Journals (Sweden)

    Ángel Darío González-Delgado

    2013-06-01

    Full Text Available Several alternatives for the extraction and transformation of microalgal metabolites are being studied with the aim of total utilisation of this energy crop of great interest worldwide. Microalgae oil extraction is a key stage in microalgal biodiesel production chains, and its efficiency significantly affects the efficiency of the overall process. In this study, five lab-scale oil extraction methods were compared, taking as additional parameters, besides extraction efficiency, the cost of performing each method, its energy requirements, and the toxicity of the solvents used, in order to assess their suitability for incorporation into a microalgae-based biorefinery topology. The methods analysed were solvent extraction assisted by high-speed homogenisation (SHE), continuous reflux solvent extraction (CSE), hexane-based extraction (HBE), cyclohexane-based extraction (CBE) and ethanol-hexane extraction (EHE). The microalgae strains Nannochloropsis sp., Guinardia sp., Closterium sp., Amphiprora sp. and Navicula sp., obtained from a Colombian microalgae bioprospecting programme, were used for this evaluation. In addition, the morphological response of the strains to the oil extraction methods was evaluated by optical microscopy. The results show that although no single oil extraction method excels in all the parameters evaluated, CSE, SHE and HBE appear to be promising alternatives, while the HBE method is the most convenient for lab-scale use and is potentially scalable for implementation in a microalgae-based biorefinery

  14. A novel sleep optimisation programme to improve athletes' well-being and performance.

    Science.gov (United States)

    Van Ryswyk, Emer; Weeks, Richard; Bandick, Laura; O'Keefe, Michaela; Vakulin, Andrew; Catcheside, Peter; Barger, Laura; Potter, Andrew; Poulos, Nick; Wallace, Jarryd; Antic, Nick A

    2017-03-01

    To improve well-being and performance indicators in a group of Australian Football League (AFL) players via a six-week sleep optimisation programme. Prospective intervention study following observations suggestive of reduced sleep and excessive daytime sleepiness in an AFL group. Athletes from the Adelaide Football Club were invited to participate if they had played AFL senior-level football for 1-5 years, or if they had excessive daytime sleepiness (Epworth Sleepiness Scale [ESS] >10), measured via ESS. An initial education session explained normal sleep needs, and how to achieve increased sleep duration and quality. Participants (n = 25) received ongoing feedback on their sleep, and a mid-programme education and feedback session. Sleep duration, quality and related outcomes were measured during week one and at the conclusion of the six-week intervention period using sleep diaries, actigraphy, ESS, Pittsburgh Sleep Quality Index, Profile of Mood States, Training Distress Scale, Perceived Stress Scale and the Psychomotor Vigilance Task. Sleep diaries demonstrated an increase in total sleep time of approximately 20 min (498.8 ± 53.8 to 518.7 ± 34.3 min) and an improvement in sleep efficiency. The improvements in sleep efficiency, fatigue and vigour indicate that a sleep optimisation programme may improve athletes' well-being. More research is required into the effects of sleep optimisation on athletic performance.

  15. Optimisation in X-ray and Molecular Imaging 2015

    International Nuclear Information System (INIS)

    Baath, Magnus; Hoeschen, Christoph; Mattsson, Soeren; Mansson, Lars Gunnar

    2016-01-01

    This issue of Radiation Protection Dosimetry is based on contributions to Optimisation in X-ray and Molecular Imaging 2015 - the 4. Malmoe Conference on Medical Imaging (OXMI 2015). The conference was jointly organised by members of former and current research projects supported by the European Commission EURATOM Radiation Protection Research Programme, in cooperation with the Swedish Society for Radiation Physics. The conference brought together over 150 researchers and other professionals from hospitals, universities and industry with interests in different aspects of the optimisation of medical imaging. More than 100 presentations were given at this international gathering of medical physicists, radiologists, engineers, technicians, nurses and educational researchers. Additionally, invited talks were offered by world-renowned experts on radiation protection, spectral imaging and medical image perception, thus covering several important aspects of the generation and interpretation of medical images. The conference consisted of 13 oral sessions and a poster session covering a wide range of topics which, as reflected by the conference title, are connected by their focus on the optimisation of the use of ionising radiation in medical imaging. The conference included technology-specific topics such as computed tomography and tomosynthesis, but also generic issues of interest for the optimisation of all medical imaging, such as image perception and quality assurance. Radiation protection was covered by, e.g., sessions on patient dose benchmarking and occupational exposure. Technically advanced topics such as modelling, Monte Carlo simulation, reconstruction, classification and segmentation were seen taking advantage of recent developments in hardware and software, showing that the optimisation community is at the forefront of technology and adapts well to new requirements. These peer-reviewed proceedings, representing a continuation of a series of selected reports from meetings in the field of medical imaging

  16. Bus Access Optimisation for FlexRay-based Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Traian; Pop, Paul; Eles, Petru

    2007-01-01

    ...real-time communication in a deterministic manner. In this paper, we propose techniques for optimising the FlexRay bus access mechanism of a distributed system, so that the hard real-time deadlines are met for all the tasks and messages in the system. We have evaluated the proposed techniques using...

  17. Iterative optimisation of Monte Carlo detector models using measurements and simulations

    Energy Technology Data Exchange (ETDEWEB)

    Marzocchi, O., E-mail: olaf@marzocchi.net [European Patent Office, Rijswijk (Netherlands); Leone, D., E-mail: debora.leone@kit.edu [Institute for Nuclear Waste Disposal, Karlsruhe Institute of Technology, Karlsruhe (Germany)

    2015-04-11

    This work proposes a new technique to optimise the Monte Carlo models of radiation detectors, offering the advantage of significantly lower user effort and therefore improved work efficiency compared to prior techniques. The method consists of four steps, two of which are iterative and suitable for automation using scripting languages: the acquisition in the laboratory of measurement data to be used as a reference; the modification of a previously available detector model; the simulation of a tentative model of the detector to obtain the coefficients of a set of linear equations; and the solution of the system of equations and the update of the detector model. Steps three and four can be repeated for more accurate results. This method avoids the “try and fail” approach typical of the prior techniques.
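    A one-parameter caricature of the calibrate-simulate-update loop might look as follows; the "simulation" is a toy linear function standing in for a Monte Carlo run, and the dead-layer parameter, efficiency model and all numbers are hypothetical.

```python
# Toy stand-in for a Monte Carlo detector simulation: efficiency falls
# linearly with an unknown dead-layer thickness t (hypothetical model).
def simulate_efficiency(t):
    return 0.35 * (1.0 - 0.12 * t)

measured = 0.301          # reference efficiency from a laboratory measurement
t = 0.0                   # initial guess for the model parameter

for _ in range(20):
    eff = simulate_efficiency(t)
    # Build a local linear model eff(t + dt) ~ eff + slope*dt from a second run,
    # then solve the resulting 1-D linear equation for the parameter update.
    slope = (simulate_efficiency(t + 0.01) - eff) / 0.01
    t += (measured - eff) / slope
    if abs(simulate_efficiency(t) - measured) < 1e-6:
        break

print(round(t, 4))
```

    With several free parameters the same idea yields a system of linear equations per iteration, as in steps three and four of the paper's method.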

  18. MANAGEMENT OPTIMISATION OF MASS CUSTOMISATION MANUFACTURING USING COMPUTATIONAL INTELLIGENCE

    Directory of Open Access Journals (Sweden)

    Louwrens Butler

    2018-05-01

    Full Text Available Computational intelligence paradigms can be used for advanced manufacturing system optimisation. A static simulation model of an advanced manufacturing system was developed in order to simulate such a system. The purpose of this advanced manufacturing system was to mass-produce a customisable product range at a competitive cost. The aim of this study was to determine whether the new algorithm could outperform traditional optimisation methods. The algorithm produced a lower-cost plan than a simulated annealing algorithm, and had a lower impact on the workforce.

  19. Efficient parsimony-based methods for phylogenetic network reconstruction.

    Science.gov (United States)

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2007-01-15

    Phylogenies, the evolutionary histories of groups of organisms, play a major role in representing relationships among biological entities. Although many biological processes can be effectively modeled as tree-like relationships, others, such as hybrid speciation and horizontal gene transfer (HGT), result in networks, rather than trees, of relationships. Hybrid speciation is a significant evolutionary mechanism in plants, fish and other groups of species. HGT plays a major role in bacterial genome diversification and is a significant mechanism by which bacteria develop resistance to antibiotics. Maximum parsimony is one of the most commonly used criteria for phylogenetic tree inference. Roughly speaking, inference based on this criterion seeks the tree that minimizes the amount of evolution. In 1990, Jotun Hein proposed using this criterion for inferring the evolution of sequences subject to recombination. Preliminary results on small synthetic datasets (Nakhleh et al., 2005) demonstrated the criterion's application to phylogenetic network reconstruction in general and HGT detection in particular. However, the naive algorithms used by the authors are inapplicable to large datasets due to their demanding computational requirements. Further, no rigorous theoretical analysis of computing the criterion was given, nor was it tested on biological data. In the present work we prove that the problem of scoring the parsimony of a phylogenetic network is NP-hard and provide an improved fixed parameter tractable algorithm for it. Further, we devise efficient heuristics for parsimony-based reconstruction of phylogenetic networks. We test our methods on both synthetic and biological data (rbcL gene in bacteria) and obtain very promising results.
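    The tree case of the parsimony criterion can be sketched with Fitch's small-parsimony algorithm for a single character; the paper's contribution extends such scoring to networks with reticulation edges, which is not reproduced here. The tree and character states below are illustrative.

```python
# Fitch small-parsimony scoring of one character on a fixed binary tree.
# A tree is a nested tuple of leaf names; `states` maps leaf -> character.
def fitch(node, states):
    if isinstance(node, str):                 # leaf: its observed state set
        return {states[node]}, 0
    left, right = node
    ls, lc = fitch(left, states)
    rs, rc = fitch(right, states)
    inter = ls & rs
    if inter:                                 # children agree: no mutation
        return inter, lc + rc
    return ls | rs, lc + rc + 1               # children disagree: one mutation

tree = (("A", "B"), ("C", "D"))
score = fitch(tree, {"A": "G", "B": "G", "C": "T", "D": "G"})[1]
print(score)  # → 1 (a single T/G mutation explains the data on this tree)
```

    On a network, each way of resolving the reticulation edges induces a tree, and scoring minimises over those resolutions, which is the source of the NP-hardness proved in the paper.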

  20. Optimisation models for decision support in the development of biomass-based industrial district-heating networks in Italy

    International Nuclear Information System (INIS)

    Chinese, Damiana; Meneghetti, Antonella

    2005-01-01

    A system optimisation approach is proposed to design biomass-based district-heating networks in the context of industrial districts, which are one of the main successful productive features of Italian industry. Two different perspectives are taken into account, those of utilities and of policy makers, leading to two optimisation models to be further integrated. A mixed integer linear-programming model is developed for a utility company's profit maximisation, while a linear-programming model aims at minimising the balance of greenhouse-gas emissions related to the proposed energy system and the avoided emissions due to the substitution of current fossil-fuel boilers with district-heating connections. To systematically compare their results, a sensitivity analysis is performed with respect to network size in order to identify how the optimal system configuration, in terms of boilers selected for connection to a multiple energy-source network, may vary in the two cases and to detect possible optimal sizes. A factorial analysis is then adopted to rank desirable client types under the two perspectives and identify proper marketing strategies. The proposed optimisation approach was applied to the design of a new district-heating network in the chair-manufacturing district of North-Eastern Italy. (Author)

  1. Sizing Combined Heat and Power Units and Domestic Building Energy Cost Optimisation

    OpenAIRE

    Dongmin Yu; Yuanzhu Meng; Gangui Yan; Gang Mu; Dezhi Li; Simon Le Blond

    2017-01-01

    Many combined heat and power (CHP) units have been installed in domestic buildings to increase energy efficiency and reduce energy costs. However, inappropriate sizing of a CHP may actually increase energy costs and reduce energy efficiency. Moreover, the high manufacturing cost of batteries makes batteries less affordable. Therefore, this paper will attempt to size the capacity of CHP and optimise daily energy costs for a domestic building with only CHP installed. In this paper, electricity ...

  2. User-friendly Tool for Power Flow Analysis and Distributed Generation Optimisation in Radial Distribution Networks

    Directory of Open Access Journals (Sweden)

    M. F. Akorede

    2017-06-01

    Full Text Available The intent of power distribution companies (DISCOs) is to deliver electric power to their customers in an efficient and reliable manner, with minimal energy loss cost. One major way to minimise power loss on a given power system is to install distributed generation (DG) units on the distribution networks. However, to maximise benefits, it is highly crucial for a DISCO to ensure that these DG units are of optimal size and sited in the best locations on the network. This paper gives an overview of a software package developed in this study, called Power System Analysis and DG Optimisation Tool (PFADOT). The main purpose of the graphical user interface-based package is to guide a DISCO in finding the optimal size and location for DG placement in radial distribution networks. The package, which is also suitable for load flow analysis, employs the GUI feature of MATLAB. Three objective functions are formulated into a single optimisation problem and solved with a fuzzy genetic algorithm to simultaneously obtain the DG optimal size and location. The accuracy and reliability of the developed tool were validated using several radial test systems, and the results obtained were evaluated against an existing similar package cited in the literature; they are impressive and computationally efficient.
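    A toy sketch of the genetic-algorithm side of such a tool (ignoring the fuzzy component and any real network model) might look like this; the loss surface, bus numbers and DG sizes are entirely hypothetical.

```python
import random

random.seed(3)
BUSES, SIZES = range(1, 11), [0.5, 1.0, 1.5, 2.0]   # candidate buses and MW sizes

def loss(bus, size):
    # Hypothetical network-loss surface with a minimum at bus 6, size 1.5 MW.
    return (bus - 6) ** 2 * 0.01 + (size - 1.5) ** 2 * 0.1

# Each chromosome is a (bus, size) pair; evolve a small population.
pop = [(random.choice(BUSES), random.choice(SIZES)) for _ in range(20)]
for _ in range(40):
    pop.sort(key=lambda g: loss(*g))
    parents = pop[:10]                               # elitist selection
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = (a[0], b[1])                         # one-point crossover
        if random.random() < 0.2:                    # mutate the bus gene
            child = (random.choice(BUSES), child[1])
        children.append(child)
    pop = parents + children

best = min(pop, key=lambda g: loss(*g))
print(best)
```

    The real package evaluates each candidate with a load-flow solve over the actual feeder data instead of a closed-form loss function.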

  3. Optimisation of production from an oil-reservoir using augmented Lagrangian methods

    Energy Technology Data Exchange (ETDEWEB)

    Doublet, Daniel Christopher

    2007-07-01

    This work studies the use of augmented Lagrangian methods for water flooding production optimisation of an oil reservoir. Water flooding is commonly used as a means to enhance oil recovery, and due to heterogeneous rock properties, water will flow with different velocities throughout the reservoir. As a result, water breakthrough can occur while large regions of the reservoir are still unflooded, so that much of the oil may become 'trapped' in the reservoir. To avoid or reduce this problem, one can control the production so that the oil recovery rate is maximised, or alternatively the net present value (NPV) of the reservoir is maximised. We have considered water flooding using smart wells. Smart wells with down-hole valves give us the possibility to control the injection/production at each of the valve openings along the well, so that it is possible to control the flow regime. One can control the injection/production at all valve openings, and the setting of the valves may be changed during the production period, which gives us a great deal of control over the production; we want to control the injection/production so that the profit obtained from the reservoir is maximised. The problem is regarded as an optimal control problem, and it is formulated as an augmented Lagrangian saddle point problem. We develop a method for optimal control based on solving the Karush-Kuhn-Tucker conditions for the augmented Lagrangian functional, a method which, to my knowledge, has not been presented in the literature before. The advantage of this method is that we do not need to solve the forward problem for each new estimate of the control variables, which reduces the computational effort compared to other methods that require the solution of the forward problem every time a new estimate of the control variables is found, such as the adjoint method. We test this method on several examples, where it is compared to the adjoint method. Our numerical experiments show
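    The saddle-point idea can be illustrated on a one-dimensional toy problem: minimise the augmented Lagrangian in the control variable, then update the multiplier. The quadratic objective and constraint below are invented for illustration and bear no relation to the reservoir simulator.

```python
# Augmented-Lagrangian sketch for:  minimise f(x) = (x - 3)^2
#                                   subject to g(x) = x - 2 = 0
def solve():
    g = lambda x: x - 2.0
    lam, mu, x = 0.0, 10.0, 0.0          # multiplier, penalty weight, start
    for _ in range(50):
        # Stationarity of L_A = f + lam*g + (mu/2)*g^2 in x:
        # 2(x - 3) + lam + mu*(x - 2) = 0, solved in closed form here.
        x = (6.0 - lam + 2.0 * mu) / (2.0 + mu)
        lam += mu * g(x)                 # first-order multiplier update
        if abs(g(x)) < 1e-12:            # constraint satisfied: stop
            break
    return x, lam

x, lam = solve()
print(round(x, 6), round(lam, 6))  # converges to x = 2, lam = 2
```

    In the thesis the inner problem is of course not available in closed form; the point of the KKT-based approach is precisely to avoid repeated full forward solves while driving the constraint residual to zero.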

  4. Efficient numerical method for district heating system hydraulics

    International Nuclear Information System (INIS)

    Stevanovic, Vladimir D.; Prica, Sanja; Maslovaric, Blazenka; Zivkovic, Branislav; Nikodijevic, Srdjan

    2007-01-01

    An efficient method for numerical simulation and analyses of the steady state hydraulics of complex pipeline networks is presented. It is based on the loop model of the network and the method of square roots for solving the system of linear equations. The procedure is presented in the comprehensive mathematical form that could be straightforwardly programmed into a computer code. An application of the method to energy efficiency analyses of a real complex district heating system is demonstrated. The obtained results show a potential for electricity savings in pumps operation. It is shown that the method is considerably more effective than the standard Hardy Cross method still widely used in engineering practice. Because of the ease of implementation and high efficiency, the method presented in this paper is recommended for hydraulic steady state calculations of complex networks
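    For contrast with the loop-based formulation, the standard Hardy Cross iteration mentioned above can be sketched for the simplest possible network, two parallel pipes forming one loop; the resistance coefficients and flows are illustrative.

```python
# Hardy Cross flow correction for a single loop: two parallel pipes carrying
# a fixed total flow, with head loss h = r * Q * |Q| in each pipe.
r1, r2 = 4.0, 1.0          # illustrative pipe resistance coefficients
q_total = 3.0              # total flow entering the junction
q1 = 1.5                   # initial guess for pipe 1; pipe 2 carries the rest

for _ in range(30):
    q2 = q_total - q1
    # Loop head-loss imbalance and its derivative w.r.t. the flow correction.
    h = r1 * q1 * abs(q1) - r2 * q2 * abs(q2)
    dh = 2 * (r1 * abs(q1) + r2 * abs(q2))
    q1 -= h / dh                     # Hardy Cross (Newton) correction
    if abs(h) < 1e-12:
        break

print(round(q1, 6), round(q_total - q1, 6))  # → 1.0 2.0
```

    In a real network every loop needs such a correction per sweep, which is what the paper's loop formulation with a direct linear solve avoids.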

  5. An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method

    International Nuclear Information System (INIS)

    Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.

    2015-01-01

    Sample size and computational uncertainty were varied in order to investigate the sample efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate an LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. The mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties for 10 n-sample replicates was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on the reactor k_eff was obtained by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed by the Monte Carlo process in the MCNPX code. (author)
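    The sampling-based propagation itself can be sketched in a few lines: draw input samples, run the "model" on each, and report the output standard deviation. The quadratic response function, the input distribution and the numbers below are invented stand-ins for MCNPX and the benchmark model.

```python
import random
import statistics

random.seed(7)

def model(radius):
    # Toy k-effective-like response to an uncertain input radius (cm);
    # a real run would be a full MCNPX transport calculation.
    return 1.0 + 0.02 * (radius - 0.45) - 0.5 * (radius - 0.45) ** 2

nominal, sigma, n = 0.45, 0.01, 93         # 1-sigma input uncertainty, sample size
outputs = [model(random.gauss(nominal, sigma)) for _ in range(n)]
uncertainty_pcm = statistics.stdev(outputs) * 1e5   # 1 pcm = 1e-5 in k-eff
print(round(uncertainty_pcm, 1))
```

    With the linearised sensitivity of 0.02 per cm, the expected propagated uncertainty is about 20 pcm; the sample estimate fluctuates around that value, which is why the paper tracks convergence across replicates.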

  6. Turbulence optimisation in stellarator experiments

    Energy Technology Data Exchange (ETDEWEB)

    Proll, Josefine H.E. [Max-Planck/Princeton Center for Plasma Physics (Germany); Max-Planck-Institut fuer Plasmaphysik, Wendelsteinstr. 1, 17491 Greifswald (Germany); Faber, Benjamin J. [HSX Plasma Laboratory, University of Wisconsin-Madison, Madison, WI 53706 (United States); Helander, Per; Xanthopoulos, Pavlos [Max-Planck/Princeton Center for Plasma Physics (Germany); Lazerson, Samuel A.; Mynick, Harry E. [Plasma Physics Laboratory, Princeton University, P.O. Box 451 Princeton, New Jersey 08543-0451 (United States)

    2015-05-01

    Stellarators, the twisted siblings of the axisymmetric fusion experiments called tokamaks, have historically confined the heat of the plasma less well than tokamaks and were therefore considered to be less promising candidates for a fusion reactor. This has changed, however, with the advent of stellarators in which the laminar transport is reduced to levels below that of tokamaks by shaping the magnetic field accordingly. As in tokamaks, the turbulent transport remains as the now dominant transport channel. Recent analytical theory suggests that the large configuration space of stellarators allows for an additional optimisation of the magnetic field to also reduce the turbulent transport. In this talk, the idea behind the turbulence optimisation is explained. We also present how an optimised equilibrium is obtained and how it might differ from the equilibrium field of an already existing device, and we compare experimental turbulence measurements in different configurations of the HSX stellarator in order to test the optimisation procedure.

  7. Optimising the refrigeration cycle with a two-stage centrifugal compressor and a flash intercooler

    Energy Technology Data Exchange (ETDEWEB)

    Roeyttae, Pekka; Turunen-Saaresti, Teemu; Honkatukia, Juha [Lappeenranta University of Technology, Laboratory of Energy and Environmental Technology, PO Box 20, 53851 Lappeenranta (Finland)

    2009-09-15

    The optimisation of a refrigeration process with a two-stage centrifugal compressor and flash intercooler is presented in this paper. The two-stage centrifugal compressor stages are on the same shaft and the electric motor is cooled with the refrigerant. The performance of the centrifugal compressor is evaluated based on semi-empirical specific-speed curves and the effect of the Reynolds number, surface roughness and tip clearance have also been taken into account. The thermodynamic and transport properties of the working fluids are modelled with a real-gas model. The condensing and evaporation temperatures, the temperature after the flash intercooler, and cooling power have been chosen as fixed values in the process. The aim is to gain a maximum coefficient of performance (COP). The method of optimisation, the operation of the compressor and flash intercooler, and the method for estimating the electric motor cooling are also discussed in the article. (author)
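    A much-simplified illustration of the underlying trade-off: with ideal-gas isentropic stages and perfect intercooling, total compression work is minimised near the geometric-mean intermediate pressure. This toy model ignores the real-gas refrigerant properties, semi-empirical compressor curves and motor-cooling effects used in the paper.

```python
# Two-stage isentropic compression with intercooling back to the inlet
# temperature: specific work in units of cp*T1, as a function of the
# intermediate pressure pi between p1 and p2 (ideal-gas sketch).
k = 0.2857                    # (gamma - 1) / gamma for a diatomic-like gas

def work(p1, p2, pi):
    return (pi / p1) ** k - 1 + (p2 / pi) ** k - 1

p1, p2 = 1.0, 9.0
# Brute-force search over the intermediate pressure on a fine grid.
pis = [p1 + 0.001 * i for i in range(1, int((p2 - p1) / 0.001))]
best_pi = min(pis, key=lambda pi: work(p1, p2, pi))
print(round(best_pi, 2))      # → close to sqrt(p1 * p2) = 3.0
```

    The paper's optimisation does the analogous search over the full refrigeration cycle, maximising COP rather than minimising idealised compression work.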

  8. Performance assessment and optimisation of a large information system by combined customer relationship management and resilience engineering: a mathematical programming approach

    Science.gov (United States)

    Azadeh, A.; Foroozan, H.; Ashjari, B.; Motevali Haghighi, S.; Yazdanparast, R.; Saberi, M.; Torki Nejad, M.

    2017-10-01

    Information systems (ISs) and information technologies (ITs) play a critical role in large, complex gas corporations. Many factors, including human, organisational and environmental factors, affect IS success in an organisation; investigating IS success is therefore considered a complex problem. Moreover, because of the competitive business environment and the high volume of information flow in organisations, new issues such as resilient ISs and successful customer relationship management (CRM) have emerged. A resilient IS will provide sustainable delivery of information to internal and external customers. This paper presents an integrated approach to enhance and optimise the performance of each component of a large IS based on CRM and resilience engineering (RE) in a gas company. The enhancement of performance can help ISs to perform business tasks efficiently. The data are collected from standard questionnaires and then analysed by data envelopment analysis, selecting the optimal mathematical programming approach. The selected model is validated and verified by the principal component analysis method. Finally, CRM and RE factors are identified as influential factors through sensitivity analysis for this particular case study. To the best of our knowledge, this is the first study of performance assessment and optimisation of a large IS by combined RE and CRM.

  9. Model-based online optimisation. Pt. 1: active learning; Modellbasierte Online-Optimierung moderner Verbrennungsmotoren. T. 1: Aktives Lernen

    Energy Technology Data Exchange (ETDEWEB)

    Poland, J.; Knoedler, K.; Zell, A. [Tuebingen Univ. (Germany). Lehrstuhl fuer Rechnerarchitektur; Fleischhauer, T.; Mitterer, A.; Ullmann, S. [BMW Group (Germany)

    2003-05-01

    This two-part article presents the model-based optimisation algorithm ''mbminimize''. It was developed in a collaborative project between the University of Tuebingen and the BMW Group for the purpose of optimising internal combustion engines online on the engine test bed. The first part concentrates on the basic algorithmic design, as well as on modelling, experimental design and active learning. The second part will discuss strategies for dealing with limits such as knocking. (orig.)

  10. Efficient Kinect Sensor-Based Reactive Path Planning Method for Autonomous Mobile Robots in Dynamic Environments

    Energy Technology Data Exchange (ETDEWEB)

    Tuvshinjargal, Doopalam; Lee, Deok Jin [Kunsan National University, Gunsan (Korea, Republic of)

    2015-06-15

    In this paper, an efficient dynamic reactive motion planning method for an autonomous vehicle in a dynamic environment is proposed. The purpose of the proposed method is to improve the robustness of autonomous robot motion planning capabilities within dynamic, uncertain environments by integrating a virtual plane-based reactive motion planning technique with a sensor fusion-based obstacle detection approach. The dynamic reactive motion planning method assumes a local observer in the virtual plane, which allows the effective transformation of complex dynamic planning problems into simple stationary ones, providing the speed and orientation information between the robot and the obstacles. In addition, the sensor fusion-based obstacle detection technique allows the pose estimation of moving obstacles using a Kinect sensor and sonar sensors, thus improving the accuracy and robustness of the reactive motion planning approach. The performance of the proposed method was demonstrated through not only simulation studies but also field experiments using multiple moving obstacles in hostile dynamic environments.

  11. Efficient Kinect Sensor-Based Reactive Path Planning Method for Autonomous Mobile Robots in Dynamic Environments

    International Nuclear Information System (INIS)

    Tuvshinjargal, Doopalam; Lee, Deok Jin

    2015-01-01

    In this paper, an efficient dynamic reactive motion planning method for an autonomous vehicle in a dynamic environment is proposed. The purpose of the proposed method is to improve the robustness of autonomous robot motion planning capabilities within dynamic, uncertain environments by integrating a virtual plane-based reactive motion planning technique with a sensor fusion-based obstacle detection approach. The dynamic reactive motion planning method assumes a local observer in the virtual plane, which allows the effective transformation of complex dynamic planning problems into simple stationary ones, providing the speed and orientation information between the robot and the obstacles. In addition, the sensor fusion-based obstacle detection technique allows the pose estimation of moving obstacles using a Kinect sensor and sonar sensors, thus improving the accuracy and robustness of the reactive motion planning approach. The performance of the proposed method was demonstrated through not only simulation studies but also field experiments using multiple moving obstacles in hostile dynamic environments.

  12. Optimisation of vertical flight trajectories using the harmony search method

    Science.gov (United States)

    Ruby, Margaux

    In the face of global warming, solutions to reduce CO2 emissions are urgently needed. Trajectory optimisation is one way to reduce fuel consumption during a flight. To determine the optimal trajectory of the aircraft, different algorithms have been developed. The goal of these algorithms is to minimise the total cost of an aircraft's flight, which is directly linked to fuel consumption and flight time. Another parameter, called the cost index, enters the definition of the flight cost. The fuel consumption is provided via performance data for each flight phase. In this thesis, the phases of a complete flight, namely a climb phase, a cruise phase and a descent phase, are studied. "Step climbs", defined as climbs of 2,000 ft during the cruise phase, are also studied. The algorithm developed in this thesis is a metaheuristic called harmony search, which combines two types of search: local search and population-based search. The algorithm is based on the observation of musicians during a concert, or more exactly on the capacity of the music to find its best harmony, that is, in optimisation terms, the lowest cost. Various inputs such as the aircraft weight, the destination, the initial aircraft speed and the number of iterations must, among others, be supplied to the algorithm so that it can determine the optimal solution, defined as: [climb speed, altitude, cruise speed, descent speed]. The algorithm was developed in MATLAB and tested for several destinations and several weights for a single aircraft type.
For validation, the results obtained by this algorithm were first compared with the results obtained from an exhaustive search which
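The harmony search metaheuristic described in record 12 can be sketched minimally as below. The memory-recall rate (hmcr), pitch-adjustment rate (par), bounds and the toy cost function are illustrative assumptions, not the thesis's flight-cost model.

```python
import random

def harmony_search(cost, bounds, hms=10, hmcr=0.9, par=0.3, iters=500, seed=1):
    """Minimal harmony search: keep a memory of candidate 'harmonies' and
    improvise new ones by memory recall (hmcr), pitch adjustment (par)
    or random choice, keeping a new harmony if it beats the worst stored one."""
    rng = random.Random(seed)
    memory = sorted(([rng.uniform(lo, hi) for lo, hi in bounds]
                     for _ in range(hms)), key=cost)
    for _ in range(iters):
        h = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                x = memory[rng.randrange(hms)][d]  # recall from memory
                if rng.random() < par:             # small pitch adjustment
                    x += rng.uniform(-1.0, 1.0) * 0.05 * (hi - lo)
            else:
                x = rng.uniform(lo, hi)            # random improvisation
            h.append(min(hi, max(lo, x)))
        if cost(h) < cost(memory[-1]):             # replace the worst harmony
            memory[-1] = h
            memory.sort(key=cost)
    return memory[0]

# Illustrative 2-D cost with minimum at (3, -1); not the flight-cost model.
best = harmony_search(lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2,
                      [(-10.0, 10.0), (-10.0, 10.0)])
```

In the thesis the harmony vector would instead hold [climb speed, altitude, cruise speed, descent speed] and the cost would come from the performance data.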

  13. Mix design of self-compacting concretes: optimisation of the granular skeleton ...

    African Journals Online (AJOL)

    Mix design of self-compacting concretes: optimisation of the granular skeleton using the Dreux-Gorisse graphical method. Fatiha Boumaza-Zeraoulia* & Mourad Behim. Laboratoire Materiaux, Geo-Materiaux et Environnement - Departement de Genie Civil, Universite Badji Mokhtar Annaba - BP 12, 23000 Annaba - ...

  14. Topology Optimisation for Coupled Convection Problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe

    This thesis deals with topology optimisation for coupled convection problems. The aim is to extend and apply topology optimisation to steady-state conjugate heat transfer problems, where the heat conduction equation governs the heat transfer in a solid and is coupled to thermal transport in a surrounding fluid, governed by a convection-diffusion equation, where the convective velocity field is found from solving the isothermal incompressible steady-state Navier-Stokes equations. Topology optimisation is also applied to steady-state natural convection problems. The modelling is done using stabilised finite elements, the formulation and implementation of which was done partly during a special course as preparatory work for this thesis. The formulation is extended with a Brinkman friction term in order to facilitate the topology optimisation of fluid flow and convective cooling problems. The derived...

  15. Thermodynamic optimisation and analysis of four Kalina cycle layouts for high temperature applications

    DEFF Research Database (Denmark)

    Modi, Anish; Haglind, Fredrik

    2015-01-01

    The Kalina cycle has seen increased interest in the last few years as an efficient alternative to the conventional steam Rankine cycle. However, the available literature gives little information on the algorithms to solve or optimise this inherently complex cycle. This paper presents a detailed a...

  16. Optimisation Of Process Parameters In High Energy Mixing As A Method Of Cohesive Powder Flowability Improvement

    Directory of Open Access Journals (Sweden)

    Leś Karolina

    2015-12-01

    Full Text Available Flowability of a fine, highly cohesive calcium carbonate powder was improved using high energy mixing (dry coating), a method consisting of coating CaCO3 particles with a small amount of Aerosil nanoparticles in a planetary ball mill. The angle of repose and the compressibility index were used as measures of flowability. The mixing speed, mixing time, amount of Aerosil and amount of isopropanol were chosen as process variables. To obtain optimal values of the process variables, Response Surface Methodology (RSM) based on a Central Composite Rotatable Design (CCRD) was applied. To meet the RSM requirements it was necessary to perform a total of 31 experimental tests needed to complete the mathematical model equations. The equations, which are second-order response functions representing the angle of repose and the compressibility index, were expressed as functions of all the process variables. Predicted values of the responses were found to be in good agreement with experimental values. The models were presented as 3-D response surface plots from which the optimal values of the process variables could be correctly assigned. The proposed mechanochemical method of powder treatment coupled with response surface methodology is a new, effective approach to improving the flowability of cohesive powders and to powder processing optimisation.
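The CCRD analysis above fits a second-order response surface to the experimental runs. A sketch of such a fit by ordinary least squares follows; the pure-Python normal-equations solver and the synthetic 3x3 factorial design stand in for the paper's 31-run CCRD and real measurements.

```python
def fit_quadratic_surface(xs, ys):
    """Ordinary least squares fit of the second-order model used in RSM:
    y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    rows = [[1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2] for x1, x2 in xs]
    n = 6
    # Normal equations (A^T A) b = A^T y, solved by Gaussian elimination.
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    beta = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(ata[r][c] * beta[c] for c in range(r + 1, n))
        beta[r] = (aty[r] - s) / ata[r][r]
    return beta

# Synthetic 3x3 factorial design and known coefficients, for illustration only.
pts = [(a, b) for a in (-1.0, 0.0, 1.0) for b in (-1.0, 0.0, 1.0)]
true = [2.0, 1.0, -0.5, 0.3, 0.2, 0.4]
obs = [true[0] + true[1] * a + true[2] * b
       + true[3] * a * a + true[4] * b * b + true[5] * a * b for a, b in pts]
beta = fit_quadratic_surface(pts, obs)
```

With noise-free synthetic data the fitted coefficients recover the generating ones exactly, which is a useful sanity check before fitting real runs.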

  17. Risk-informed optimisation of railway tracks inspection and maintenance procedures

    International Nuclear Information System (INIS)

    Podofillini, Luca; Zio, Enrico; Vatn, Jorn

    2006-01-01

    Nowadays, efforts are being made by the railway industry to apply reliability-based and risk-informed approaches to the maintenance optimisation of railway infrastructures, with the aim of reducing operation and maintenance expenditures while still assuring high safety standards. In particular, in this paper, we address the use of ultrasonic inspection cars and develop a methodology for the determination of an optimal strategy for their use. A model is developed to calculate the risks and costs associated with an inspection strategy, giving credit to the realistic issues of the rail failure process and including the actual inspection and maintenance procedures followed by the railway company. A multi-objective optimisation viewpoint is adopted in an effort to optimise inspection and maintenance procedures with respect to both economic and safety-related aspects. More precisely, the objective functions here considered are such as to drive the search towards solutions characterised by low expenditures and low derailment probability. The optimisation is performed by means of a genetic algorithm. The work has been carried out within a study of the Norwegian National Rail Administration (Jernbaneverket)
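The multi-objective search described above retains strategies that are non-dominated in (expenditure, derailment probability). A minimal dominance filter, assuming the strategies have already been evaluated and that the pairs are distinct, might look like this; the numbers are illustrative, not data from the paper.

```python
def pareto_front(solutions):
    """Keep the non-dominated strategies, each given as a
    (cost, derailment_probability) pair with both objectives minimised.
    Assumes the evaluated pairs are distinct."""
    return [s for s in solutions
            if not any(o[0] <= s[0] and o[1] <= s[1] and o != s
                       for o in solutions)]

# Illustrative evaluated inspection strategies.
front = pareto_front([(10.0, 0.2), (5.0, 0.9), (7.0, 0.5),
                      (12.0, 0.1), (7.0, 0.6)])
```

Strategy (7.0, 0.6) is dropped because (7.0, 0.5) is no worse on both objectives; the remaining four form the Pareto front a genetic algorithm would evolve towards.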

  18. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    Science.gov (United States)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices, including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assess their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
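Correlating measured kernel rates with the Roofline model amounts to comparing them against min(compute peak, bandwidth x arithmetic intensity). A one-line sketch follows; the peak figures are placeholders, not those of the SandyBridge, Haswell or Xeon Phi parts benchmarked in the paper.

```python
def roofline_bound(peak_gflops, peak_gbs, flops, bytes_moved):
    """Roofline model: attainable performance is capped either by the compute
    roof or by memory bandwidth times arithmetic intensity (FLOP/byte)."""
    intensity = flops / bytes_moved
    return min(peak_gflops, peak_gbs * intensity)

# Placeholder peaks (GFLOP/s, GB/s): a stream-like kernel at 1 FLOP per
# 8 bytes is bandwidth bound; a dense flux kernel at 25 FLOP/byte is not.
low_intensity = roofline_bound(500.0, 50.0, 1.0, 8.0)     # memory bound
high_intensity = roofline_bound(500.0, 50.0, 100.0, 4.0)  # compute bound
```

A kernel whose measured rate sits near its bound is considered efficient for that architecture; a large gap signals remaining optimisation headroom.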

  19. Computationally efficient dynamic modeling of robot manipulators with multiple flexible-links using acceleration-based discrete time transfer matrix method

    DEFF Research Database (Denmark)

    Zhang, Xuping; Sørensen, Rasmus; RahbekIversen, Mathias

    2018-01-01

    This paper presents a novel and computationally efficient modeling method for the dynamics of flexible-link robot manipulators. In this method, a robot manipulator is decomposed into components/elements. The component/element dynamics is established using Newton–Euler equations, and then is linearized based on the acceleration-based state vector. The transfer matrices for each type of components/elements are developed, and used to establish the system equations of a flexible robot manipulator by concatenating the state vector from the base to the end-effector. With this strategy, the size ... manipulators, and only involves calculating and transferring component/element dynamic equations that have small size. The numerical simulations and experimental testing of flexible-link manipulators are conducted to validate the proposed methodologies.
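The transfer matrix strategy chains small per-element matrices from the base to the end-effector, so the assembled relation never grows with the number of links. A toy sketch with 2x2 matrices follows; real flexible-link transfer matrices are larger and come from the linearised Newton-Euler equations, so the matrices below are purely illustrative.

```python
def matmul(a, b):
    """Plain matrix product for small dense matrices (lists of lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def system_transfer_matrix(element_matrices):
    """Chain the element transfer matrices from base to end-effector:
    T = T_n ... T_2 T_1. The product stays the size of one element's
    matrix no matter how many links are concatenated."""
    size = len(element_matrices[0])
    result = [[1.0 if i == j else 0.0 for j in range(size)]  # identity
              for i in range(size)]
    for t in element_matrices:
        result = matmul(t, result)
    return result

# Two toy 2x2 element matrices; the system matrix is still 2x2.
T = system_transfer_matrix([[[1.0, 1.0], [0.0, 1.0]],
                            [[2.0, 0.0], [0.0, 2.0]]])
```

This is why the method's computational cost scales with the number of elements rather than with the size of a monolithic system matrix.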

  20. Optimised cantilever biosensor with piezoresistive read-out

    DEFF Research Database (Denmark)

    Rasmussen, Peter; Thaysen, J.; Hansen, Ole

    2003-01-01

    We present a cantilever-based biochemical sensor with piezoresistive read-out which has been optimised for measuring surface stress. The resistors and the electrical wiring on the chip are encapsulated in low-pressure chemical vapor deposition (LPCVD) silicon nitride, so that the chip is well sui...

  1. hydroPSO: A Versatile Particle Swarm Optimisation R Package for Calibration of Environmental Models

    Science.gov (United States)

    Zambrano-Bigiarini, M.; Rojas, R.

    2012-04-01

    Particle Swarm Optimisation (PSO) is a recent and powerful population-based stochastic optimisation technique inspired by social behaviour of bird flocking, which shares similarities with other evolutionary techniques such as Genetic Algorithms (GA). In PSO, however, each individual of the population, known as particle in PSO terminology, adjusts its flying trajectory on the multi-dimensional search-space according to its own experience (best-known personal position) and the one of its neighbours in the swarm (best-known local position). PSO has recently received a surge of attention given its flexibility, ease of programming, low memory and CPU requirements, and efficiency. Despite these advantages, PSO may still get trapped into sub-optimal solutions, suffer from swarm explosion or premature convergence. Thus, the development of enhancements to the "canonical" PSO is an active area of research. To date, several modifications to the canonical PSO have been proposed in the literature, resulting into a large and dispersed collection of codes and algorithms which might well be used for similar if not identical purposes. In this work we present hydroPSO, a platform-independent R package implementing several enhancements to the canonical PSO that we consider of utmost importance to bring this technique to the attention of a broader community of scientists and practitioners. hydroPSO is model-independent, allowing the user to interface any model code with the calibration engine without having to invest considerable effort in customizing PSO to a new calibration problem. Some of the controlling options to fine-tune hydroPSO are: four alternative topologies, several types of inertia weight, time-variant acceleration coefficients, time-variant maximum velocity, regrouping of particles when premature convergence is detected, different types of boundary conditions and many others. Additionally, hydroPSO implements recent PSO variants such as: Improved Particle Swarm
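For reference, the canonical PSO that hydroPSO extends can be written compactly as below. The inertia weight and acceleration coefficients are common defaults, not hydroPSO's fine-tuned options, and the sphere function stands in for a model-calibration objective.

```python
import random

def pso(cost, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical PSO: each particle's velocity blends inertia, attraction to
    its personal best and attraction to the swarm's global best position."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d, (lo, hi) in enumerate(bounds):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

# Sphere function as a stand-in for calibrating an environmental model.
best, best_cost = pso(lambda v: sum(x * x for x in v), [(-5.0, 5.0)] * 3)
```

The enhancements listed in the record (alternative topologies, time-variant coefficients, regrouping on premature convergence) all modify pieces of this loop.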

  2. Retinal Vessel Segmentation Based on Primal-Dual Asynchronous Particle Swarm Optimisation (pdAPSO) Algorithm

    Directory of Open Access Journals (Sweden)

    E. G. Dada

    2017-04-01

    Full Text Available Acute damage to the retinal vessels has been identified as the main reason for blindness and impaired vision all over the world. Timely detection and control of these illnesses can greatly decrease the number of loss-of-sight cases. Developing a high-performance unsupervised retinal vessel segmentation technique is an uphill task. This paper presents a study of the Primal-Dual Asynchronous Particle Swarm Optimisation (pdAPSO) method for the segmentation of retinal vessels. A maximum average accuracy rate of 0.9243, with an average specificity rate of 0.9834 and an average sensitivity rate of 0.5721, was achieved on the DRIVE database. The proposed method produces higher mean sensitivity and accuracy rates in the same range of very good specificity.
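The DRIVE figures quoted above are pixel-wise rates. A sketch of how sensitivity, specificity and accuracy are computed from binary masks (flattened to 1-D lists of 0/1 here for brevity; the toy masks are not DRIVE data):

```python
def segmentation_scores(pred, truth):
    """Pixel-wise sensitivity, specificity and accuracy from binary masks.
    Vessel pixels are 1 (positive), background pixels are 0 (negative)."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    sensitivity = tp / (tp + fn)   # fraction of vessel pixels found
    specificity = tn / (tn + fp)   # fraction of background kept clean
    accuracy = (tp + tn) / len(pred)
    return sensitivity, specificity, accuracy

sens, spec, acc = segmentation_scores([1, 1, 0, 0, 1, 0],
                                      [1, 0, 0, 0, 1, 1])
```

The low sensitivity relative to specificity reported in the record is typical of vessel segmentation, where thin vessels are easily missed against a dominant background class.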

  3. The optimisation study of the TBP synthesis process by phosphoric acid

    International Nuclear Information System (INIS)

    Amedjkouh, A.; Attou, M.; Azzouz, A.; Zaoui, B.

    1995-07-01

    The present work deals with the optimisation of the TBP synthesis process using phosphoric acid. This synthesis route is more advantageous than using POCl3 or P2O5 as phosphating agents, as the latter are toxic and dangerous for the environment. The optimisation study is based on a series of 16 experiments covering the range of variation of the following parameters: temperature, pressure, reagent mole ratio and promoter content. The yield calculation is based on the randomisation of an equation including all parameters. The resolution of this equation gave a 30 % TBP molar ratio, in agreement with the experimental data.

  4. Intelligent inversion method for pre-stack seismic big data based on MapReduce

    Science.gov (United States)

    Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua

    2018-01-01

    Seismic exploration is a method of oil exploration that uses seismic information; that is, through the inversion of seismic information, useful information on the reservoir parameters can be obtained to carry out exploration effectively. Pre-stack data are characterised by a large amount of data and abundant information, and through their inversion abundant information on the reservoir parameters can be obtained. Owing to the large amount of pre-stack seismic data, existing single-machine environments cannot meet the computational needs of such a huge amount of data; thus, a method with high efficiency and speed for solving the inversion problem of pre-stack seismic data is urgently needed. Optimising the elastic parameters with a genetic algorithm easily falls into local optima, which degrades the inversion results, especially for the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic parameter inversion of pre-stack seismic data. This algorithm improves the population initialisation strategy by using the Gardner formula, as well as the genetic operations of the algorithm, and the improved algorithm obtains better inversion results in a model test with logging data. All of the elastic parameters obtained by inversion fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to solve the seismic big data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
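The initialisation improvement ties each individual's density guess to its P-velocity guess via the Gardner relation rather than drawing density independently. A sketch follows, using the classic coefficients a = 0.31, b = 0.25 (for Vp in m/s and density in g/cm^3) and a hypothetical ±5 % perturbation; the paper's exact constants and ranges are not given here.

```python
import random

def gardner_density(vp, a=0.31, b=0.25):
    """Gardner's relation rho = a * Vp**b (Vp in m/s, rho in g/cm^3)."""
    return a * vp ** b

def init_population(size, vp_range, seed=7):
    """Seed each individual's density from its P-velocity via Gardner's
    formula plus a small perturbation, instead of sampling it independently.
    This keeps the initial (Vp, rho) pairs physically plausible."""
    rng = random.Random(seed)
    pop = []
    for _ in range(size):
        vp = rng.uniform(*vp_range)
        rho = gardner_density(vp) * rng.uniform(0.95, 1.05)
        pop.append((vp, rho))
    return pop

pop = init_population(50, (2000.0, 4000.0))
```

Constraining the initial population this way narrows the density search space, which is the parameter the record reports as hardest to invert.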

  5. Optimal risky bidding strategy for a generating company by self-organising hierarchical particle swarm optimisation

    International Nuclear Information System (INIS)

    Boonchuay, Chanwit; Ongsakul, Weerakorn

    2011-01-01

    In this paper, an optimal risky bidding strategy for a generating company (GenCo) by self-organising hierarchical particle swarm optimisation with time-varying acceleration coefficients (SPSO-TVAC) is proposed. A significant risk index based on the mean-standard deviation ratio (MSR) is maximised to provide the optimal bid prices and quantities. The Monte Carlo (MC) method is employed to simulate rivals' behaviour in a competitive environment. Non-convex operating cost functions of thermal generating units and minimum up/down time constraints are taken into account. The proposed bidding strategy is implemented in multi-hourly trading in a uniform price spot market and compared to other particle swarm optimisation (PSO) methods. Test results indicate that the proposed SPSO-TVAC approach can provide a higher MSR than the other PSO methods. It is potentially applicable to risk management of profit variation of GenCo in the spot market.
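The MSR risk index maximised above is the mean of simulated profits divided by their standard deviation. A sketch follows; the toy Monte Carlo of rival behaviour (normal clearing price, fixed unit cost, uniform-price payment) is an assumption for illustration, not the paper's market model.

```python
import random

def mean_std_ratio(profits):
    """Mean-standard deviation ratio (MSR): expected profit per unit of
    risk; the index maximised when choosing bid prices and quantities."""
    n = len(profits)
    mean = sum(profits) / n
    var = sum((p - mean) ** 2 for p in profits) / n
    return mean / var ** 0.5 if var > 0 else float("inf")

def simulate_profits(bid_price, n=1000, seed=3):
    """Toy Monte Carlo of rivals: the market-clearing price is random; the
    unit is dispatched (and paid the uniform clearing price) only when its
    bid does not exceed it. Cost and price distribution are assumptions."""
    rng = random.Random(seed)
    unit_cost = 30.0                        # $/MWh, illustrative
    profits = []
    for _ in range(n):
        clearing = rng.gauss(45.0, 5.0)     # rival behaviour collapsed into noise
        profits.append(clearing - unit_cost if bid_price <= clearing else 0.0)
    return profits
```

A high bid raises the margin when accepted but lowers the acceptance probability, so sweeping `bid_price` and picking the MSR maximum captures the risk-return trade-off the paper optimises with SPSO-TVAC.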

  6. A robust algorithm for optimisation and customisation of fractal dimensions of time series modified by nonlinearly scaling their time derivatives: mathematical theory and practical applications.

    Science.gov (United States)

    Fuss, Franz Konstantin

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.
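The record's central idea, that an amplitude multiplier acts non-linearly on the time derivative and therefore shifts the computed fractal dimension, can be illustrated with the simple Katz curve-length estimator. This estimator is used here only as a stand-in for the record's robust algorithm, and the signals are toy examples.

```python
import math

def katz_fd(signal):
    """Katz curve-length fractal dimension of a time series:
    D = log10(n) / (log10(n) + log10(d / L)), with L the total curve
    length and d the farthest distance from the first sample."""
    n = len(signal) - 1
    L = sum(math.hypot(1.0, signal[i + 1] - signal[i]) for i in range(n))
    d = max(math.hypot(i, signal[i] - signal[0]) for i in range(1, n + 1))
    return math.log10(n) / (math.log10(n) + math.log10(d / L))

def scaled_fd(signal, multiplier):
    """FD after multiplying the amplitude: the multiplier changes the
    signal's time derivative non-linearly, shifting the dimension."""
    return katz_fd([multiplier * x for x in signal])

line = [float(i) for i in range(11)]   # a straight line has D = 1
zigzag = [0.0, 1.0] * 8                # alternating toy signal
```

The optimisation method in the record searches over such multipliers to maximise the separation between two fractal dimensions of interest.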

  7. Effectiveness of an implementation optimisation intervention aimed at increasing parent engagement in HENRY, a childhood obesity prevention programme - the Optimising Family Engagement in HENRY (OFTEN) trial: study protocol for a randomised controlled trial.

    Science.gov (United States)

    Bryant, Maria; Burton, Wendy; Cundill, Bonnie; Farrin, Amanda J; Nixon, Jane; Stevens, June; Roberts, Kim; Foy, Robbie; Rutter, Harry; Hartley, Suzanne; Tubeuf, Sandy; Collinson, Michelle; Brown, Julia

    2017-01-24

    Family-based interventions to prevent childhood obesity depend upon parents' taking action to improve diet and other lifestyle behaviours in their families. Programmes that attract and retain high numbers of parents provide an enhanced opportunity to improve public health and are also likely to be more cost-effective than those that do not. We have developed a theory-informed optimisation intervention to promote parent engagement within an existing childhood obesity prevention group programme, HENRY (Health Exercise Nutrition for the Really Young). Here, we describe a proposal to evaluate the effectiveness of this optimisation intervention in regard to the engagement of parents and cost-effectiveness. The Optimising Family Engagement in HENRY (OFTEN) trial is a cluster randomised controlled trial being conducted across 24 local authorities (approximately 144 children's centres) which currently deliver HENRY programmes. The primary outcome will be parental enrolment and attendance at the HENRY programme, assessed using routinely collected process data. Cost-effectiveness will be presented in terms of primary outcomes using acceptability curves and through eliciting the willingness to pay for the optimisation from HENRY commissioners. Secondary outcomes include the longitudinal impact of the optimisation, parent-reported infant intake of fruits and vegetables (as a proxy to compliance) and other parent-reported family habits and lifestyle. This innovative trial will provide evidence on the implementation of a theory-informed optimisation intervention to promote parent engagement in HENRY, a community-based childhood obesity prevention programme. The findings will be generalisable to other interventions delivered to parents in other community-based environments. This research meets the expressed needs of commissioners, children's centres and parents to optimise the potential impact that HENRY has on obesity prevention. 
A subsequent cluster randomised controlled pilot

  8. Optimisation of Inulinase Production by Kluyveromyces bulgaricus

    Directory of Open Access Journals (Sweden)

    Darija Vranešić

    2002-01-01

    Full Text Available The present work is based on observation of the effects of pH and temperature of fermentation on the production of the microbial enzyme inulinase by Kluyveromyces marxianus var. bulgaricus. Inulinase hydrolyzes inulin, a polysaccharide which can be isolated from plants such as Jerusalem artichoke, chicory or dahlia, and transformed into pure fructose or fructooligosaccharides. Fructooligosaccharides have great potential in the food industry because they can be used as calorie-reduced compounds and noncariogenic sweeteners as well as soluble fibre and prebiotic compounds. Fructose formation from inulin is a single-step enzymatic reaction with yields of up to 95 % fructose. In contrast, conventional fructose production from starch needs at least three enzymatic steps, yielding only 45 % fructose. The process of inulinase production was optimised by using the experimental design method. The pH value of the cultivation medium proved to be the most significant variable and should be maintained at the optimum value of 3.6. The effect of temperature was slightly lower and optimal values were between 30 and 33 °C. At a low pH value of the cultivation medium, the microorganism was not able to produce enough enzyme and enzyme activities were low. A similar effect was caused by high temperature. The highest values of enzyme activities were achieved at optimal fermentation conditions: 100.16–124.36 IU/mL (with sucrose as substrate for determination of enzyme activity) or 8.6–11.6 IU/mL (with inulin as substrate), respectively. The method of factorial design and response surface analysis makes it possible to study several factors simultaneously, to quantify the individual effect of each factor and to investigate their possible interactions. As a comparison to this method, optimisation of a physiological enzyme activity model depending on pH and temperature was also studied.

  9. High efficiency USC power plant - present status and future potential

    Energy Technology Data Exchange (ETDEWEB)

    Blum, R [Faelleskemikerne I/S Fynsvaerket (Denmark); Hald, J [Elsam/Elkraft/TU Denmark (Denmark)

    1999-12-31

    Increasing demand for energy production with low environmental impact and minimised fuel consumption can be met by highly efficient coal-fired power plants with advanced steam parameters. An important key to this improvement is the development of high temperature materials with optimised mechanical strength. Based on the results of more than ten years of development, a coal-fired power plant with an efficiency above 50 % can now be realised. Future developments focus on materials which enable an efficiency of 52-55 %. (orig.) 25 refs.

  10. High efficiency USC power plant - present status and future potential

    Energy Technology Data Exchange (ETDEWEB)

    Blum, R. [Faelleskemikerne I/S Fynsvaerket (Denmark); Hald, J. [Elsam/Elkraft/TU Denmark (Denmark)

    1998-12-31

    Increasing demand for energy production with low environmental impact and minimised fuel consumption can be met by highly efficient coal-fired power plants with advanced steam parameters. An important key to this improvement is the development of high temperature materials with optimised mechanical strength. Based on the results of more than ten years of development, a coal-fired power plant with an efficiency above 50 % can now be realised. Future developments focus on materials which enable an efficiency of 52-55 %. (orig.) 25 refs.

  11. Operational Radiological Protection and Aspects of Optimisation

    International Nuclear Information System (INIS)

    Lazo, E.; Lindvall, C.G.

    2005-01-01

    Since 1992, the Nuclear Energy Agency (NEA), along with the International Atomic Energy Agency (IAEA), has sponsored the Information System on Occupational Exposure (ISOE). ISOE collects and analyses occupational exposure data and experience from over 400 nuclear power plants around the world and is a forum for radiological protection experts from both nuclear power plants and regulatory authorities to share lessons learned and best practices in the management of worker radiation exposures. In connection to the ongoing work of the International Commission on Radiological Protection (ICRP) to develop new recommendations, the ISOE programme has been interested in how the new recommendations would affect operational radiological protection application at nuclear power plants. Bearing in mind that the ICRP is developing, in addition to new general recommendations, a new recommendation specifically on optimisation, the ISOE programme created a working group to study the operational aspects of optimisation, and to identify the key factors in optimisation that could usefully be reflected in ICRP recommendations. In addition, the Group identified areas where further ICRP clarification and guidance would be of assistance to practitioners, both at the plant and the regulatory authority. The specific objective of this ISOE work was to provide operational radiological protection input, based on practical experience, to the development of new ICRP recommendations, particularly in the area of optimisation. This will help assure that new recommendations will best serve the needs of those implementing radiation protection standards, for the public and for workers, at both national and international levels. (author)

  12. Site-specific optimisation of the countermeasure structure on rehabilitation of radioactive contaminated territories

    Energy Technology Data Exchange (ETDEWEB)

    Yatsalo, B.I.; Okhrimenko, D.V.; Lisyanski, B.G.; Okhrimenko, I.V.; Mirzeabassov, O.A. [Obninsk Institute of Nuclear Power Engineering, Obninsk, Kaluga (Russian Federation)

    2000-05-01

    The use of 'soft' countermeasures (CMs) (agricultural CMs and some administrative ones, except for relocation/resettlement and some 'strong' measures on restriction of living conditions) allows considering the radiological and economic parameters for assessing their effectiveness. In this case cost-benefit analysis (CBA) or some of its modifications are used. However, the determination of various radiological and economic parameters (and their combination) is not enough for making final decisions on countermeasure implementation. All radiological, ecological and economic characteristics and other expert knowledge, corresponding standards and regulations should be taken into account, and many of them may not be usable in analytical methods directly. The approaches to evaluating 'strong' CMs are based, as a rule, on expert judgements (MAUA, M-Crit). However, in practice they can lead to any result given in advance, or to the choice of a weighted solution which does not comply with the opinions of most experts, due to the considerable range of expert opinions and the subjective weights for the chosen attributes/criteria. Implementation of CMs for the rehabilitation of contaminated territories should be based on the radiation protection principles. However, these principles are only declared when realising CMs on rehabilitation of contaminated territories after the Chernobyl accident; in practice some national or departmental standards are used and the principles of 'justification'/'optimisation' are not examined. Taking into consideration the complex character of tasks on CM analysis and comparison of various alternatives it is quite necessary to use up-to-date computer decision support systems (DSSs). One of the systems which is directly intended for site-specific rehabilitation of territories subjected to radioactive contamination after the Chernobyl accident is the PRANA DSS. A key block of PRANA is 'analysis and optimisation of CMs structure'

  13. Site-specific optimisation of the countermeasure structure on rehabilitation of radioactive contaminated territories

    International Nuclear Information System (INIS)

    Yatsalo, B.I.; Okhrimenko, D.V.; Lisyanski, B.G.; Okhrimenko, I.V.; Mirzeabassov, O.A.

    2000-01-01

    The use of 'soft' countermeasures (CMs) (agricultural CMs and some administrative ones, except for relocation/resettlement and some 'strong' measures on restriction of living conditions) allows considering the radiological and economic parameters for assessing their effectiveness. In this case cost-benefit analysis (CBA) or some of its modifications are used. However, the determination of various radiological and economic parameters (and their combination) is not enough for making final decisions on countermeasure implementation. All radiological, ecological and economic characteristics and other expert knowledge, corresponding standards and regulations should be taken into account, and many of them may not be usable in analytical methods directly. The approaches to evaluating 'strong' CMs are based, as a rule, on expert judgements (MAUA, M-Crit). However, in practice they can lead to any result given in advance, or to the choice of a weighted solution which does not comply with the opinions of most experts, due to the considerable range of expert opinions and the subjective weights for the chosen attributes/criteria. Implementation of CMs for the rehabilitation of contaminated territories should be based on the radiation protection principles. However, these principles are only declared when realising CMs on rehabilitation of contaminated territories after the Chernobyl accident; in practice some national or departmental standards are used and the principles of 'justification'/'optimisation' are not examined. Taking into consideration the complex character of tasks on CM analysis and comparison of various alternatives it is quite necessary to use up-to-date computer decision support systems (DSSs). One of the systems which is directly intended for site-specific rehabilitation of territories subjected to radioactive contamination after the Chernobyl accident is the PRANA DSS. A key block of PRANA is 'analysis and optimisation of CMs structure'.
It is intended for: - determination of territories

  14. The optimization of treatment and management of schizophrenia in Europe (OPTiMiSE) trial

    DEFF Research Database (Denmark)

    Leucht, Stefan; Winter-van Rossum, Inge; Heres, Stephan

    2015-01-01

    Commission sponsored "Optimization of Treatment and Management of Schizophrenia in Europe" (OPTiMiSE) trial which aims to provide a treatment algorithm for patients with a first episode of schizophrenia. METHODS: We searched Pubmed (October 29, 2014) for randomized controlled trials (RCTs) that examined...... switching the drug in nonresponders to another antipsychotic. We described important methodological choices of the OPTiMiSE trial. RESULTS: We found 10 RCTs on switching antipsychotic drugs. No trial was conclusive and none was concerned with first-episode schizophrenia. In OPTiMiSE, 500 first episode...

  15. Optimising resource management in neurorehabilitation.

    Science.gov (United States)

    Wood, Richard M; Griffiths, Jeff D; Williams, Janet E; Brouwers, Jakko

    2014-01-01

    To date, little research has been published regarding the effective and efficient management of resources (beds and staff) in neurorehabilitation, despite its being an expensive service in limited supply. To demonstrate how mathematical modelling can be used to optimise service delivery, by way of a case study at a major 21-bed neurorehabilitation unit in the UK. An automated computer program for assigning weekly treatment sessions is developed. Queue modelling is used to construct a mathematical model of the hospital in terms of referral submissions to a waiting list, admission and treatment, and ultimately discharge. This is used to analyse the impact of hypothetical strategic decisions on a variety of performance measures and costs. The project culminates in a hybridised model of these two approaches, since a relationship is found between the number of therapy hours received each week (scheduling output) and length of stay (queuing model input). The introduction of the treatment scheduling program has substantially improved timetable quality (meaning a better and fairer service to patients) and has reduced employee time expended in its creation by approximately six hours each week (freeing up time for clinical work). The queuing model has been used to assess the effect of potential strategies, such as increasing the number of beds or employing more therapists. The use of mathematical modelling has not only optimised resources in the short term, but has allowed the optimality of longer term strategic decisions to be assessed.
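The queue-modelling step described in this record (referrals join a waiting list, beds act as servers, discharge frees a bed) can be sketched with a standard M/M/c calculation. This is a generic illustration, not the paper's calibrated model; the referral rate, length of stay and bed count used below are hypothetical.

```python
import math

def erlang_c_wait_probability(arrival_rate, service_rate, beds):
    """M/M/c probability that an arriving patient must wait for a bed.

    arrival_rate: referrals per week; service_rate: discharges per bed per
    week (1 / mean length of stay); beds: number of beds (servers).
    """
    a = arrival_rate / service_rate      # offered load in Erlangs
    rho = a / beds                       # utilisation, must be < 1
    if rho >= 1:
        raise ValueError("unstable queue: utilisation >= 1")
    top = a ** beds / math.factorial(beds) / (1.0 - rho)
    bottom = sum(a ** k / math.factorial(k) for k in range(beds)) + top
    return top / bottom

def mean_wait(arrival_rate, service_rate, beds):
    """Mean time on the waiting list, in the same time unit as the rates."""
    pw = erlang_c_wait_probability(arrival_rate, service_rate, beds)
    return pw / (beds * service_rate - arrival_rate)
```

For a hypothetical 21-bed unit with 1.5 referrals per week and a 12-week mean stay, the offered load is 18 Erlangs (about 86% utilisation), and the effect of adding or removing beds on waiting probability can be read off directly, which is the kind of strategic what-if the paper's queuing model answers.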

  16. Extending Particle Swarm Optimisers with Self-Organized Criticality

    DEFF Research Database (Denmark)

    Løvbjerg, Morten; Krink, Thiemo

    2002-01-01

    Particle swarm optimisers (PSOs) show potential in function optimisation, but still have room for improvement. Self-organized criticality (SOC) can help control the PSO and add diversity. Extending the PSO with SOC seems promising, reaching faster convergence and better solutions.
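The underlying particle swarm optimiser that the SOC extension builds on can be sketched as follows. This is a generic global-best PSO on a test function, not the authors' SOC variant (the criticality mechanism itself is omitted), and the inertia and acceleration constants are conventional choices.

```python
import random

def pso_minimise(f, dim, bounds, n_particles=20, iters=200, seed=1):
    """Global-best PSO with conventional inertia/acceleration weights."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    w, c1, c2 = 0.72, 1.49, 1.49
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The SOC idea in the record would sit on top of this loop, for example by tracking how often particles crowd together and relocating "critical" particles to restore diversity.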

  17. Regulation of Voltage and Frequency in Solid Oxide Fuel Cell-Based Autonomous Microgrids Using the Whales Optimisation Algorithm

    Directory of Open Access Journals (Sweden)

    Sajid Hussain Qazi

    2018-05-01

    Full Text Available This study explores the Whales Optimization Algorithm (WOA)-based PI controller for regulating the voltage and frequency of an inverter-based autonomous microgrid (MG). The MG comprises two 50 kW DGs (solid oxide fuel cells, SOFCs) interfaced through a power-electronics-based voltage source inverter (VSI) with a 120-kV conventional grid. Four PI controller schemes for the MG are implemented: (i) a stationary PI controller with fixed gain values (Kp and Ki), (ii) a PSO-tuned PI controller, (iii) a GWO-tuned PI controller, and (iv) a WOA-tuned PI controller. The performance of these controllers is evaluated by monitoring the system voltage and frequency during the transition of the MG operation mode and changes in the load. The MATLAB/SIMULINK tool is utilised to design the proposed model of the grid-tied MG, alongside a MATLAB m-file to apply the optimisation technique. The simulation results show that the WOA-based PI controller, which optimises the control parameters, achieves 62.7% and 59% better results for voltage and frequency regulation, respectively. An eigenvalue analysis is also provided to check the stability of the proposed controller. Furthermore, the proposed system satisfies the limits specified in IEEE 1547-2003 for voltage and frequency.

  18. Economic optimisation of a wind power plant for isolated locations

    International Nuclear Information System (INIS)

    Fortunato, B.; Mummolo, G.; Cavallera, G.

    1997-01-01

    This paper presents a model of a wind power plant for isolated locations composed of a vertical axis wind turbine connected to a self-excited induction generator operating at constant voltage and frequency; a back-up diesel generator and a battery system are also included in the system. Constant voltage and frequency are obtained only by controlling the generator appropriately. The control system is assumed to be optimised so that the system operates at the highest efficiency. In order to improve the total efficiency even further, a gear-box to vary the gear transmission ratio between the turbine and the generator has been considered. A 'Monte Carlo' type simulation has been used to analyse the operation of the system over a one-year period. The model is based on a probability density function of the wind speed derived from statistical data for a given location and on the probabilistic curve of the load required by an isolated location. The cost per kWh for different dimensions of the main components has been evaluated and the optimum configuration has been identified. (author)

  19. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    Science.gov (United States)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
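The core idea of this record, decomposing a separable correlation matrix into 1-D factors recombined by a Kronecker product, can be illustrated in two dimensions. The Gaussian correlation shape, grid sizes and length scales below are illustrative assumptions; the paper's EOF truncation and spline-interpolation steps are omitted.

```python
import math

def corr_1d(n, length_scale):
    """1-D Gaussian correlation matrix on a unit-spaced grid."""
    return [[math.exp(-((i - j) ** 2) / (2.0 * length_scale ** 2))
             for j in range(n)] for i in range(n)]

def kron(a, b):
    """Kronecker product of two matrices given as lists of lists."""
    rb, cb = len(b), len(b[0])
    return [[a[i // rb][j // cb] * b[i % rb][j % cb]
             for j in range(len(a[0]) * cb)]
            for i in range(len(a) * rb)]

# A separable 2-D correlation over a 3 x 4 grid is exactly the Kronecker
# product of its 1-D factors: C[(ix,iy),(jx,jy)] = Cx[ix][jx] * Cy[iy][jy],
# so only the small 1-D matrices ever need to be decomposed.
cx, cy = corr_1d(3, 2.0), corr_1d(4, 1.5)
c2d = kron(cx, cy)
```

The computational saving in the paper comes from exactly this structure: eigendecompositions of the small 1-D matrices are cheap, whereas decomposing the full high-dimensional matrix directly is not.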

  20. Improving Vector Evaluated Particle Swarm Optimisation by incorporating nondominated solutions.

    Science.gov (United States)

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.
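The nondominated-solution bookkeeping at the heart of the improved algorithm reduces to a Pareto-dominance filter. A minimal sketch for minimisation problems (the swarm-update logic itself is not reproduced here):

```python
def dominates(u, v):
    """u dominates v (minimisation): no worse everywhere, better somewhere."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated(points):
    """Return the nondominated (Pareto-optimal) subset of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

In the improved algorithm, a guidance particle would be drawn from this nondominated set instead of from another swarm's single best solution.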

  1. Knowledge based decision making method for the selection of mixed refrigerant systems for energy efficient LNG processes

    International Nuclear Information System (INIS)

    Khan, Mohd Shariq; Lee, Sanggyu; Rangaiah, G.P.; Lee, Moonyong

    2013-01-01

    Highlights: • Practical method for finding optimum refrigerant composition is proposed for an LNG plant. • Knowledge of boiling point differences in refrigerant components is employed. • Implementation of process knowledge notably makes the LNG process energy efficient. • Optimization of the LNG plant is more transparent using process knowledge. - Abstract: Mixed refrigerant (MR) systems are used in many industrial applications because of their high energy efficiency, compact design and energy-efficient heat transfer compared to other processes operating with pure refrigerants. The performance of MR systems depends strongly on the optimum refrigerant composition, which is difficult to obtain. This paper proposes a simple and practical method for selecting the appropriate refrigerant composition, which was inspired by (i) knowledge of the boiling point difference in MR components, and (ii) their specific refrigeration effect in bringing an MR system close to reversible operation. A feasibility plot and composite curves were used for full enforcement of the approach temperature. The proposed knowledge-based optimization approach was described and applied to a single MR and a propane precooled MR system for natural gas liquefaction. Maximization of the heat exchanger exergy efficiency was considered as the optimization objective to achieve an energy efficient design goal. Several case studies on single MR and propane precooled MR processes were performed to show the effectiveness of the proposed method. The application of the proposed method is not restricted to liquefiers, and can be applied to any refrigerator or cryogenic cooler where an MR is involved

  2. Analysis of recovery efficiency in high-temperature aquifer thermal energy storage: a Rayleigh-based method

    Science.gov (United States)

    Schout, Gilian; Drijver, Benno; Gutierrez-Neri, Mariene; Schotting, Ruud

    2014-01-01

    High-temperature aquifer thermal energy storage (HT-ATES) is an important technique for energy conservation. A controlling factor for the economic feasibility of HT-ATES is the recovery efficiency. Due to the effects of density-driven flow (free convection), HT-ATES systems applied in permeable aquifers typically have lower recovery efficiencies than conventional (low-temperature) ATES systems. For a reliable estimation of the recovery efficiency it is, therefore, important to take the effect of density-driven flow into account. A numerical evaluation of the prime factors influencing the recovery efficiency of HT-ATES systems is presented. Sensitivity runs evaluating the effects of aquifer properties, as well as operational variables, were performed to deduce the most important factors that control the recovery efficiency. A correlation was found between the dimensionless Rayleigh number (a measure of the relative strength of free convection) and the calculated recovery efficiencies. Based on a modified Rayleigh number, two simple analytical solutions are proposed to calculate the recovery efficiency, each one covering a different range of aquifer thicknesses. The analytical solutions accurately reproduce all numerically modeled scenarios with an average error of less than 3%. The proposed method can be of practical use when considering or designing an HT-ATES system.
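As a rough illustration of the controlling dimensionless group in this record, one common density-difference form of the porous-medium Rayleigh number can be evaluated directly. The exact nondimensionalisation varies by author, and the paper's modified Rayleigh number and fitted analytical efficiency formulas are not reproduced here; all parameter values below are illustrative.

```python
def rayleigh_number(g, permeability, delta_rho, thickness, mu, alpha_m):
    """Ra = g * k * delta_rho * H / (mu * alpha_m): a density-difference
    form of the porous-medium Rayleigh number (one of several conventions)."""
    return g * permeability * delta_rho * thickness / (mu * alpha_m)

# Illustrative values: warm-water storage in a 20 m thick sandy aquifer
ra = rayleigh_number(
    g=9.81,              # gravity, m/s2
    permeability=1e-11,  # intrinsic permeability, m2
    delta_rho=10.0,      # density contrast, warm vs ambient water, kg/m3
    thickness=20.0,      # aquifer thickness, m
    mu=5e-4,             # dynamic viscosity, Pa.s
    alpha_m=1e-6,        # thermal diffusivity of the porous medium, m2/s
)
```

Larger Ra means stronger tilting of the warm-water body by free convection and hence, per the record's correlation, lower recovery efficiency.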

  3. Optimisation of decision making under uncertainty throughout field lifetime: A fractured reservoir example

    Science.gov (United States)

    Arnold, Dan; Demyanov, Vasily; Christie, Mike; Bakay, Alexander; Gopa, Konstantin

    2016-10-01

    Assessing the change in uncertainty in reservoir production forecasts over field lifetime is rarely undertaken because of the complexity of joining together the individual workflows. This becomes particularly important in complex fields such as naturally fractured reservoirs. The impact of this problem has been identified in previous studies, and many solutions have been proposed, but never implemented on complex reservoir problems due to the computational cost of quantifying uncertainty and optimising the reservoir development, specifically knowing how many and what kind of simulations to run. This paper demonstrates a workflow that propagates uncertainty throughout field lifetime, and into the decision making process, by a combination of a metric-based approach, multi-objective optimisation and Bayesian estimation of uncertainty. The workflow propagates uncertainty estimates from appraisal into initial development optimisation, then updates uncertainty through history matching and finally propagates it into late-life optimisation. The combination of techniques applied, namely the metric approach and multi-objective optimisation, helps evaluate development options under uncertainty. This was achieved with a significantly reduced number of flow simulations, such that the combined workflow is computationally feasible to run for a real-field problem. This workflow is applied to two synthetic naturally fractured reservoir (NFR) case studies in appraisal, field development, history matching and mid-life EOR stages. The first is a simple sector model, while the second is a more complex full field example based on a real-life analogue. This study infers geological uncertainty from an ensemble of models based on a Brazilian carbonate outcrop, which are propagated through the field lifetime, before and after the start of production, with the inclusion of production data significantly collapsing the spread of P10-P90 in reservoir forecasts. The workflow links uncertainty

  4. Occupancy-based illumination control of LED lighting systems

    NARCIS (Netherlands)

    Caicedo Fernandez, D.R.; Pandharipande, A.; Leus, G.

    2011-01-01

    Light emitting diode (LED)-based systems are considered to be the future of lighting. We consider the problem of energy-efficient illumination control of such systems. Energy-efficient system design is based on two aspects: localised information on occupancy and optimisation of dimming levels of the

  5. Design of Circularly-Polarised, Crossed Drooping Dipole, Phased Array Antenna Using Genetic Algorithm Optimisation

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal

    2007-01-01

    A printed drooping dipole array is designed and constructed. The design is based on a genetic algorithm optimisation procedure used in conjunction with the software programme AWAS. By optimising the array G/T for specific combinations of scan angles and frequencies an optimum design is obtained...

  6. An Efficient Seam Elimination Method for UAV Images Based on Wallis Dodging and Gaussian Distance Weight Enhancement.

    Science.gov (United States)

    Tian, Jinyan; Li, Xiaojuan; Duan, Fuzhou; Wang, Junqian; Ou, Yang

    2016-05-10

    The rapid development of Unmanned Aerial Vehicle (UAV) remote sensing meets the increasing demand for low-altitude very high resolution (VHR) image data. However, fast processing of massive UAV data has become an indispensable prerequisite for its application in various industry sectors. In this paper, we developed an effective and efficient seam elimination approach for UAV images based on Wallis dodging and Gaussian distance weight enhancement (WD-GDWE). The method encompasses two major steps: first, Wallis dodging was introduced to adjust the difference in brightness between the two matched images, and the parameters of the algorithm were derived in this study. Second, a Gaussian distance weight distribution method was proposed to fuse the two matched images in the overlap region based on the First Law of Geography, which can spread the partial dislocation at the seam across the whole overlap region with a smooth transition effect. This method was validated at a study site located in Hanwang (Sichuan, China), an area seriously damaged in the 12 May 2008 Wenchuan Earthquake. Then, a performance comparison between WD-GDWE and five classical seam-elimination algorithms was conducted in terms of efficiency and effectiveness. Results showed that WD-GDWE is not only efficient, but also satisfactorily effective. This method is promising for advancing applications in the UAV industry, especially in emergency situations.
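The two steps of WD-GDWE can be caricatured in a few lines: a Wallis-style linear adjustment that matches a patch's mean and standard deviation to a reference image, followed by Gaussian distance weighting across the overlap. The paper's actual parameter derivations are not reproduced; the `sigma` value and the distance convention here are assumptions.

```python
import math

def wallis_match(values, target_mean, target_std):
    """Wallis-style linear adjustment: rescale a patch so its mean and
    standard deviation match those of the reference image."""
    n = len(values)
    m = sum(values) / n
    s = (sum((v - m) ** 2 for v in values) / n) ** 0.5 or 1.0  # guard s == 0
    return [(v - m) * (target_std / s) + target_mean for v in values]

def gaussian_blend(v1, v2, d1, d2, sigma):
    """Fuse two overlapping pixel values with Gaussian distance weights;
    d1/d2 are the pixel's distances from the interior of image 1 / image 2,
    so each image's influence decays smoothly across the overlap."""
    w1 = math.exp(-d1 ** 2 / (2.0 * sigma ** 2))
    w2 = math.exp(-d2 ** 2 / (2.0 * sigma ** 2))
    return (w1 * v1 + w2 * v2) / (w1 + w2)
```

A pixel equidistant from both images gets their average, while a pixel deep inside one image is dominated by that image, which is the "smooth transition" behaviour the record describes.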

  7. A knowledge representation model for the optimisation of electricity generation mixes

    International Nuclear Information System (INIS)

    Chee Tahir, Aidid; Bañares-Alcántara, René

    2012-01-01

    Highlights: ► Prototype energy model which uses semantic representation (ontologies). ► Model accepts both quantitative and qualitative based energy policy goals. ► Uses logic inference to formulate equations for linear optimisation. ► Proposes electricity generation mix based on energy policy goals. -- Abstract: Energy models such as MARKAL, MESSAGE and DNE-21 are optimisation tools which aid in the formulation of energy policies. The strength of these models lie in their solid theoretical foundations built on rigorous mathematical equations designed to process numerical (quantitative) data related to economics and the environment. Nevertheless, a complete consideration of energy policy issues also requires the consideration of the political and social aspects of energy. These political and social issues are often associated with non-numerical (qualitative) information. To enable the evaluation of these aspects in a computer model, we hypothesise that a different approach to energy model optimisation design is required. A prototype energy model that is based on a semantic representation using ontologies and is integrated to engineering models implemented in Java has been developed. The model provides both quantitative and qualitative evaluation capabilities through the use of logical inference. The semantic representation of energy policy goals is used (i) to translate a set of energy policy goals into a set of logic queries which is then used to determine the preferred electricity generation mix and (ii) to assist in the formulation of a set of equations which is then solved in order to obtain a proposed electricity generation mix. Scenario case studies have been developed and tested on the prototype energy model to determine its capabilities. Knowledge queries were made on the semantic representation to determine an electricity generation mix which fulfilled a set of energy policy goals (e.g. CO2 emissions reduction, water conservation, energy supply

  8. A biophysical approach to the optimisation of dendritic-tumour cell electrofusion

    International Nuclear Information System (INIS)

    Sukhorukov, Vladimir L.; Reuss, Randolph; Endter, Joerg M.; Fehrmann, Steffen; Katsen-Globa, Alisa; Gessner, Petra; Steinbach, Andrea; Mueller, Kilian J.; Karpas, Abraham; Zimmermann, Ulrich; Zimmermann, Heiko

    2006-01-01

    Electrofusion of tumour and dendritic cells (DCs) is a promising approach for production of DC-based anti-tumour vaccines. Although human DCs are well characterised immunologically, little is known about their biophysical properties, including dielectric and osmotic parameters, both of which are essential for the development of efficient electrofusion protocols. In the present study, human DCs from the peripheral blood along with a tumour cell line used as a model fusion partner were examined by means of time-resolved cell volumetry and electrorotation. Based on the biophysical cell data, the electrofusion protocol could be rapidly optimised with respect to the sugar composition of the fusion medium, duration of hypotonic treatment, frequency range for stable cell alignment, and field strengths of breakdown pulses triggering membrane fusion. The hypotonic electrofusion consistently gave a tumour-DC hybrid rate of up to 19%, as determined by counting dually labelled fluorescent hybrids in a microscope. This fusion rate is nearly twice as high as that usually reported in the literature for isotonic media. The experimental findings and biophysical approach presented here are generally useful for the development of efficient electrofusion protocols, especially for rare and valuable human cells

  9. The optimisation of a water distribution system using Bentley WaterGEMS software

    Directory of Open Access Journals (Sweden)

    Świtnicka Karolina

    2017-01-01

    Full Text Available The proper maintenance of water distribution systems (WDSs) requires operators to take multiple actions to ensure optimal functioning, and usually all requirements must be balanced simultaneously. Therefore, the decision-making process is often supported by multi-criteria optimisation methods. Significant improvements in the operating conditions of WDSs can be achieved by connecting small water supply networks into group systems. Among the many potential tools supporting advanced maintenance and management of WDSs, particularly valuable are those that can find an optimal solution through built-in metaheuristic methods, such as the genetic algorithm. In this paper, an exemplary optimisation of WDS functioning is presented for a group water supply system. The optimised parameters included: maximisation of water flow velocity, regulation of pressure head, minimisation of water retention time in the network (water age) and minimisation of pump energy consumption. All simulations were performed in Bentley WaterGEMS software.
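The metaheuristic engine such tools embed can be sketched as a minimal real-coded genetic algorithm. The WDS cost function itself (weighted terms for velocity, pressure head, water age and pump energy, each evaluated by the hydraulic solver) is internal to the modelling software, so a generic quadratic stands in for it here; all settings are illustrative.

```python
import random

def genetic_minimise(cost, dim, bounds, pop_size=30, gens=100, seed=7):
    """Minimal real-coded GA: elitist truncation selection, blend
    crossover and Gaussian mutation over a vector of design variables."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]          # survivors (elitism)
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)       # parents drawn from the elite
            child = [(x + y) / 2 + rng.gauss(0.0, 0.1 * (hi - lo))
                     for x, y in zip(a, b)]   # blend crossover + mutation
            children.append([min(hi, max(lo, v)) for v in child])
        pop = elite + children
    return min(pop, key=cost)
```

In a WDS setting the decision vector would encode, for example, pipe diameters or pump schedules, and `cost` would run a hydraulic simulation and return the weighted penalty.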

  10. Optimisation based design of a district energy system for an eco-town in the United Kingdom

    International Nuclear Information System (INIS)

    Weber, C.; Shah, N.

    2011-01-01

    The reduction of CO2 emissions linked with human activities (mainly energy services and transport), together with the increased use of renewable energies, remain high priorities on various political agendas. However, considering the increased consumption of energy services (especially electricity), and the stochastic nature of some of the most promising renewable energies (wind for instance), the challenge is to find the optimal mix of technologies that will provide the energy services without increasing CO2 emissions, while nonetheless ensuring reliability of supply. The focus of this paper is to present the DESDOP tool, based on mixed integer linear optimisation techniques, which gives insight into the optimal mix of technologies that will simultaneously decrease emissions and guarantee resilience of supply. The results show that while it is not yet possible to avoid electricity from the grid completely (hence nuclear or fossil fuel), CO2 reductions of up to 20%, at no extra cost compared to the business-as-usual case, are easily achievable. -- Research highlights: → Mixed integer linear optimisation techniques are a powerful tool to design and optimise district energy systems. → Integrated energy conversion systems (especially combinations of CHPs and heat-pumps) allow CO2 reductions for energy services of at least 20% at no extra cost compared to business-as-usual (grid and boiler). → While the grid (hence nuclear and/or fossil fuels) cannot be avoided as long as electricity storage doesn't come of age, the criticism of anti-wind lobbyists regarding the ineffectiveness of wind power could not be verified.

  11. Numerical optimisation of an axial turbine; Numerische Optimierung einer Axialturbine

    Energy Technology Data Exchange (ETDEWEB)

    Welzel, B.

    1998-12-31

    The author presents a method for the automatic shape optimisation of components with internal or external flow. The method couples a program for the numerical calculation of frictional turbulent flow with an optimisation algorithm; the algorithms used are a simplex search strategy and an evolution strategy. The shape of the component to be optimised is made variable through shape parameters that the algorithm modifies. For each shape, a flow calculation is carried out, and from it a functional value such as efficiency, loss, lift or drag force is computed. For validation, the optimisation method is first applied to simple examples with known solutions. It is then applied to the individual components of a slow-running axial turbine: components with accelerated and decelerated rotationally symmetric flow, as well as 2D blade profiles, are optimised. (orig.)
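The coupling described above, an optimiser repeatedly evaluating a flow solver, can be sketched with a (1+1) evolution strategy using the classical 1/5th-success step-size rule. A cheap quadratic stands in for the CFD-computed loss functional, and all settings are illustrative.

```python
import random

def evolution_strategy(cost, x0, sigma=0.5, iters=300, seed=3):
    """(1+1) evolution strategy with 1/5th-success-rule step adaptation.
    `cost` plays the role of the flow solver returning e.g. a loss value."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    successes = 0
    for t in range(1, iters + 1):
        y = [xi + rng.gauss(0.0, sigma) for xi in x]  # mutate shape parameters
        fy = cost(y)
        if fy < fx:                                   # keep only improvements
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:                               # adapt the step size
            sigma *= 1.5 if successes > 4 else 0.7
            successes = 0
    return x, fx
```

Each `cost` call corresponds to one full flow calculation in the paper's setup, which is why sample-efficient strategies and simple search schemes like this one were chosen.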

  12. Development of methods for determination of PAH based on measured CO-content

    International Nuclear Information System (INIS)

    Ingman, Rolf; Schuster, Robert

    2001-02-01

    The aim of the project 'Development of methods for determination of PAH based on measured CO-content' is to investigate the possibility of developing a method for continuous optimisation of NOx emissions by decreasing the air ratio, without a significant increase of polyaromatic hydrocarbons (PAH). The general idea has been to find an indirect online method to predict the emissions of heavier hydrocarbons by: - creating a correlation between the content of CO and PAH, - controlling the air ratio by the CO content, and - integrating the PAH content calculated from the CO content. Today many boilers are operated with a low air ratio to minimise the NOx content and the NOx fee. A low ratio increases the risk of high CO contents in the flue gas as well as increased contents of VOC and PAH. Other boilers are operated with high air ratios in order to minimise the CO content, which in some cases results in unnecessarily high NOx emissions. One of the main difficulties in optimising the air ratio to the most environmentally friendly level is the lack of a suitable and well-proven PAH instrument. There are today no available instruments for instantaneous and continuous measurement of PAH. PAH is normally measured as an average value over a period of at least one hour, so it is not possible to detect short peaks. The development of the CO method has been based on data from a CFB boiler in Korsta in Sundsvall (Vaermeforskrapport 541). The data show a clear correlation between THC and CO. The correlation seems to depend mostly on moisture content and load. The development presented in the report shows that it is possible to find a method to predict the PAH content from the CO content in the flue gas. The next phase aims to improve and implement the method through measurements and adaptation in a plant. The practical use of the method is as a tool to optimise the emissions of CO, NOx, THC and PAH and/or to predict the PAH emission during continuous operation
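The proposed CO method amounts to mapping a CO time series through a plant-specific CO-to-PAH correlation and integrating the result. A sketch assuming a simple linear correlation (the report's actual correlation also depends on moisture content and load, and the coefficients here are hypothetical):

```python
def estimate_pah(co_series, dt_minutes, slope, intercept):
    """Cumulative PAH estimate from a CO time series via an assumed
    linear CO-to-PAH correlation, integrated with the trapezoidal rule.

    co_series: CO readings sampled every dt_minutes; slope/intercept:
    plant-specific regression coefficients (hypothetical here).
    """
    rates = [max(0.0, slope * co + intercept) for co in co_series]
    hours = dt_minutes / 60.0
    return sum((a + b) / 2.0 * hours for a, b in zip(rates, rates[1:]))
```

Because CO is measured continuously, such an integrated estimate can track short PAH peaks that hour-averaged direct PAH sampling misses, which is the point of the indirect method.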

  13. Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions

    Directory of Open Access Journals (Sweden)

    Kian Sheng Lim

    2013-01-01

    Full Text Available The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.

  14. The optimisation of wedge filters in radiotherapy of the prostate

    International Nuclear Information System (INIS)

    Oldham, Mark; Neal, Anthony J.; Webb, Steve

    1995-01-01

    A treatment plan optimisation algorithm has been applied to 12 patients with early prostate cancer in order to determine the optimum beam-weights and wedge angles for a standard conformal three-field treatment technique. The optimisation algorithm was based on fast-simulated-annealing, using a cost function designed to achieve a uniform dose in the planning-target-volume (PTV) and to minimise the integral doses to the organs-at-risk. The algorithm has been applied to standard conformal three-field plans created by an experienced human planner, and run in three PLAN MODES: (1) where the wedge angles were fixed by the human planner and only the beam-weights were optimised; (2) where both the wedge angles and beam-weights were optimised; and (3) where both the wedge angles and beam-weights were optimised and a non-uniform dose was prescribed to the PTV. In the third PLAN MODE, a uniform 100% dose was prescribed to all of the PTV except for the region that overlaps with the rectum, where a lower (e.g., 90%) dose was prescribed. The resulting optimised plans have been compared with those of the human planner, who found beam-weights by conventional forward planning techniques. Plans were compared on the basis of dose statistics, normal-tissue-complication-probability (NTCP) and tumour-control-probability (TCP). The results of the comparison showed that all three PLAN MODES produced plans with slightly higher TCP for the same rectal NTCP than the human planner. The best results were observed for PLAN MODE 3, where an average increase in TCP of 0.73% (± 0.20, 95% confidence interval) was predicted by the biological models. This increase arises from a beneficial dose gradient which is produced across the tumour. Although the TCP gain is small it comes with no increase in treatment complexity, and could translate into increased cures given the large numbers of patients being referred. 
A study of the beam-weights and wedge angles chosen by the optimisation algorithm revealed
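The fast-simulated-annealing loop at the heart of such planners can be sketched compactly. This is a toy illustration, not the authors' code: the cost function below is a hypothetical scalar stand-in (the study's real cost combines PTV dose uniformity with organ-at-risk integral doses), and the 1/k cooling schedule, step size and non-negativity bound are assumed values.

```python
import math
import random

def cost(weights, target=1.0):
    # Toy surrogate for the plan cost: penalise deviation of the mean
    # "dose" from the target (standing in for the PTV-uniformity and
    # organ-at-risk terms of a real planning cost function).
    dose = sum(weights) / len(weights)
    return (dose - target) ** 2

def anneal(n_beams=3, steps=2000, t0=1.0, seed=0):
    rng = random.Random(seed)
    w = [rng.random() for _ in range(n_beams)]
    best, best_cost = list(w), cost(w)
    for k in range(1, steps + 1):
        t = t0 / k  # fast-annealing style 1/k cooling schedule (assumed)
        # Perturb all beam weights, clipping at zero.
        cand = [max(0.0, wi + rng.gauss(0.0, 0.1)) for wi in w]
        delta = cost(cand) - cost(w)
        # Metropolis acceptance: always take improvements, sometimes worse moves.
        if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
            w = cand
            c = cost(w)
            if c < best_cost:
                best, best_cost = list(w), c
    return best, best_cost
```

In a real planner the candidate move would also perturb wedge angles, and the cost would be evaluated on the full 3-D dose distribution.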

  15. An Efficient Method for Electron-Atom Scattering Using Ab-initio Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Yuan; Yang, Yonggang; Xiao, Liantuan; Jia, Suotang [Shanxi University, Taiyuan (China)

    2017-02-15

We present an efficient method based on ab-initio calculations to investigate electron-atom scattering. These calculations benefit from methods implemented in standard quantum chemistry programs. The new approach is applied to electron-helium scattering. The results are compared with experimental and other theoretical references to demonstrate the efficiency of our method.


  16. Comparison of elution efficiency of 99Mo/99mTc generator using theoretical and a free web based software method

    International Nuclear Information System (INIS)

    Kiran Kumar, J.K.; Sharma, S.; Chakraborty, D.; Singh, B.; Bhattacharaya, A.; Mittal, B.R.; Gayana, S.

    2010-01-01

Full text: A generator is constructed on the principle of the decay-growth relationship between a long-lived parent radionuclide and a short-lived daughter radionuclide. The difference in chemical properties of the daughter and parent radionuclides allows efficient separation of the two. Aim and Objectives: The present study was designed to calculate the elution efficiency of the generator using the traditional formula-based method and a free web-based software method. Materials and Methods: A 99Mo/99mTc MON.TEK (Monrol, Gebze) generator, a sterile 0.9% NaCl vial and a vacuum vial in a lead shield were used for the elution. A new 99Mo/99mTc generator (calibrated activity 30 GBq, calibrated for Thursday) was received on Monday morning in our department. The generator was placed behind lead bricks in a fume hood. The rubber plugs of both the vacuum and 0.9% NaCl vials were wiped with 70% isopropyl alcohol swabs. The vacuum vial, placed inside the lead shield, was inserted in the vacuum position while the 10 ml NaCl vial was inserted in the second slot. After 1-2 min the vacuum vial was removed without moving the emptied 0.9% NaCl vial. The vacuum slot was covered with another sterile vial to maintain sterility. The RAC was measured in the calibrated dose calibrator (Capintec, 15 CRC). The elution efficiency was calculated theoretically and using free web-based software (built on the Apache web server, www.apache.org, and PHP, www.php.net) available on the website of the Italian Association of Nuclear Medicine and Molecular Imaging (www.aimn.it). Results: The mean elution efficiency calculated by the theoretical method was 93.95% ± 0.61. The mean elution efficiency calculated by the software was 92.85% ± 0.89. There was no statistically significant difference between the two methods. Conclusion: The free web-based software provides precise and reproducible results and thus saves time and manual calculation steps. This enables rational use of the available activity and also enables a selection of the type and number of
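The "theoretical" efficiency in studies like this follows from the parent-daughter decay-growth (Bateman) relationship: the measured eluted activity is divided by the 99mTc activity that has grown in since the previous elution. A minimal sketch, using the standard half-lives (99Mo ≈ 65.94 h, 99mTc ≈ 6.01 h) and the ≈87.5% branching fraction of 99Mo decays that feed 99mTc; function names are illustrative, not from the paper.

```python
import math

T_HALF_MO = 65.94   # h, 99Mo half-life
T_HALF_TC = 6.01    # h, 99mTc half-life
BRANCH = 0.875      # fraction of 99Mo decays that populate 99mTc

def tc99m_available(a_mo_0, hours):
    """Theoretical 99mTc activity (same units as a_mo_0) grown in the
    generator 'hours' after the previous complete elution (Bateman)."""
    lam_mo = math.log(2) / T_HALF_MO
    lam_tc = math.log(2) / T_HALF_TC
    return (BRANCH * a_mo_0 * lam_tc / (lam_tc - lam_mo)
            * (math.exp(-lam_mo * hours) - math.exp(-lam_tc * hours)))

def elution_efficiency(measured_activity, a_mo_0, hours):
    # Percentage of the theoretically available 99mTc actually eluted.
    return 100.0 * measured_activity / tc99m_available(a_mo_0, hours)
```

For a 30 GBq 99Mo load eluted 24 h after the previous elution, roughly 20 GBq of 99mTc is theoretically available; dividing the dose-calibrator reading by that figure gives the efficiency.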

  17. Optimisation of the parameters of a pump chamber for solid-state lasers with diode pumping by the optical boiler method

    Energy Technology Data Exchange (ETDEWEB)

    Kiyko, V V; Kislov, V I; Ofitserov, E N; Suzdal' tsev, A G [A M Prokhorov General Physics Institute, Russian Academy of Sciences, Moscow (Russian Federation)

    2015-06-30

A pump chamber of the optical boiler type for solid-state lasers with transverse laser diode pumping is studied theoretically and experimentally. The pump chamber parameters are optimised using the geometrical optics approximation for the pump radiation. According to calculations, the integral absorption coefficient of the active element at a wavelength of 808 nm is 0.75 - 0.8 and the relative inhomogeneity of the pump radiation distribution over the active element volume is 17% - 19%. The developed pump chamber was used in a Nd:YAG laser. The maximum cw output power at a wavelength of 1064 nm was ∼480 W with an optical efficiency of up to 19.6%, which agrees with theoretical estimates. (lasers)

  18. A knowledge-based control system for air-scour optimisation in membrane bioreactors.

    Science.gov (United States)

    Ferrero, G; Monclús, H; Sancho, L; Garrido, J M; Comas, J; Rodríguez-Roda, I

    2011-01-01

Although membrane bioreactor (MBR) technology is still a growing sector, its progressive implementation all over the world, together with great technical achievements, has allowed it to reach a degree of maturity comparable to other, more conventional wastewater treatment technologies. With current energy requirements around 0.6-1.1 kWh/m³ of treated wastewater and investment costs similar to conventional treatment plants, the main market niche for MBRs is areas with very restrictive discharge limits, where treatment plants have to be compact or where water reuse is necessary. Operational costs are higher than for conventional treatments; consequently there is still a need and scope for energy saving and optimisation. This paper presents the development of a knowledge-based decision support system (DSS) for the integrated operation and remote control of the biological and physical (filtration and backwashing or relaxation) processes in MBRs. The core of the DSS is a knowledge-based control module for air-scour consumption automation and energy consumption minimisation.

  19. Analyzing parameters optimisation in minimising warpage on side arm using response surface methodology (RSM)

    Science.gov (United States)

    Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.

    2017-09-01

This paper presents a systematic methodology for analysing the warpage of the side arm part using Autodesk Moldflow Insight software. Response Surface Methodology (RSM) was proposed to optimise the processing parameters so as to efficiently minimise the warpage of the side arm part. The variable parameters considered in this study were based on the parameters most significantly affecting warpage reported by previous researchers, that is, melt temperature, mould temperature and packing pressure, with packing time and cooling time added as these are parameters commonly used by researchers. The results show that warpage was improved by 10.15% and that the parameter most significantly affecting warpage is packing pressure.
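An RSM study of this kind fits a low-order polynomial response surface to designed experiments and reads the optimum off the fitted model. The sketch below fits a one-factor quadratic slice exactly through three (packing pressure, warpage) points and returns its stationary point; a real RSM study fits all five factors jointly by least squares, and the data here are invented.

```python
def quadratic_fit(points):
    """Fit w = a + b*p + c*p**2 exactly through three (p, w) points
    (a one-factor slice of an RSM model) via Newton divided differences."""
    (p1, w1), (p2, w2), (p3, w3) = points
    d1 = (w2 - w1) / (p2 - p1)
    d2 = (w3 - w2) / (p3 - p2)
    c = (d2 - d1) / (p3 - p1)
    b = d1 - c * (p1 + p2)
    a = w1 - b * p1 - c * p1 ** 2
    return a, b, c

def stationary_point(a, b, c):
    # Minimiser of the fitted quadratic when c > 0.
    return -b / (2 * c)
```

With invented runs such as packing pressures 60, 80 and 100 MPa giving warpages 1.2, 0.9 and 1.1 mm, the fitted surface predicts the minimum-warpage pressure between the two best runs.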

  20. Image quality and dose optimisation for infant CT using a paediatric phantom

    Energy Technology Data Exchange (ETDEWEB)

    Lambert, Jack W.; Phelps, Andrew S.; Courtier, Jesse L.; Gould, Robert G.; MacKenzie, John D. [University of California, San Francisco, Department of Radiology and Biomedical Imaging, San Francisco, CA (United States)

    2016-05-15

To optimise image quality and reduce radiation exposure for infant body CT imaging. An image quality CT phantom was created to model the infant body habitus. Image noise, spatial resolution, low contrast detectability and tube current modulation (TCM) were measured after adjusting CT protocol parameters. Reconstruction method (FBP, hybrid iterative and model-based iterative), image quality reference parameter, helical pitch and beam collimation were systematically investigated for their influence on image quality and radiation output. Both spatial and low contrast resolution were significantly improved with model-based iterative reconstruction (p < 0.05). A change in the helical pitch from 0.969 to 1.375 resulted in a 23 % reduction in total TCM, while a change in collimation from 20 to 40 mm resulted in a 46 % TCM reduction. Image noise and radiation output were both unaffected by changes in collimation, while an increase in pitch enabled a dose length product reduction of ∼6 % at equivalent noise. An optimised protocol with ∼30 % dose reduction was identified using model-based iterative reconstruction. CT technology continues to evolve and require protocol redesign. This work provides an example of how an infant-specific phantom is essential for leveraging this technology to maintain image quality while reducing radiation exposure. (orig.)

  1. Workflow optimisation for multimodal imaging procedures: a case of combined X-ray and MRI-guided TACE.

    Science.gov (United States)

    Fernández-Gutiérrez, Fabiola; Wolska-Krawczyk, Malgorzata; Buecker, Arno; Houston, J Graeme; Melzer, Andreas

    2017-02-01

This study presents a framework for workflow optimisation of multimodal image-guided procedures (MIGP) based on discrete event simulation (DES). A case of a combined X-ray and magnetic resonance image-guided transarterial chemoembolisation (TACE) is presented to illustrate the application of this method. We used a ranking and selection optimisation algorithm to measure the performance of a number of proposed alternatives for improving the current scenario. A DES model was implemented with detailed data collected from 59 TACE procedures and durations of magnetic resonance imaging (MRI) diagnostic procedures usually performed in a common MRI suite. Fourteen alternatives were proposed and assessed to minimise waiting times and improve workflow. Data analysis observed an average of 20.68 (7.68) min of waiting between angiography and MRI for TACE patients in 71.19% of the cases. Following the optimisation analysis, an alternative was identified that reduces waiting times in the angiography suite by up to 48.74%. The model helped to understand and detect 'bottlenecks' during multimodal TACE procedures, identifying a better alternative to the current workflow and reducing waiting times. Simulation-based workflow analysis provides a cost-effective way to face some of the challenges of introducing MIGP in clinical radiology, as highlighted in this study.
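A discrete event simulation of the angiography-to-MRI handover can be prototyped in a few lines. This is a deliberately simplified sketch, not the study's model: one angiography room feeds one MRI suite, and the exponential arrival and service-time means are invented.

```python
import random

def simulate_waits(n_patients=59, inter_arrival=50.0,
                   mean_angio=45.0, mean_mri=30.0, seed=7):
    """Waiting time (min) each patient spends between finishing
    angiography and starting MRI, for one simulated patient sequence."""
    rng = random.Random(seed)
    t = 0.0                 # arrival clock
    angio_free_at = 0.0     # when the angiography room next frees up
    mri_free_at = 0.0       # when the MRI suite next frees up
    waits = []
    for _ in range(n_patients):
        t += rng.expovariate(1.0 / inter_arrival)
        start_angio = max(t, angio_free_at)
        end_angio = start_angio + rng.expovariate(1.0 / mean_angio)
        angio_free_at = end_angio
        start_mri = max(end_angio, mri_free_at)
        mri_free_at = start_mri + rng.expovariate(1.0 / mean_mri)
        waits.append(start_mri - end_angio)
    return waits
```

Comparing alternatives (for example an extra MRI slot or a different scheduling rule) then amounts to re-running the model with changed parameters and ranking the resulting wait statistics, which is essentially what the ranking-and-selection step does.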

  2. A Synchronous-Asynchronous Particle Swarm Optimisation Algorithm

    Science.gov (United States)

    Ab Aziz, Nor Azlina; Mubin, Marizan; Mohamad, Mohd Saberi; Ab Aziz, Kamarulzaman

    2014-01-01

    In the original particle swarm optimisation (PSO) algorithm, the particles' velocities and positions are updated after the whole swarm performance is evaluated. This algorithm is also known as synchronous PSO (S-PSO). The strength of this update method is in the exploitation of the information. Asynchronous update PSO (A-PSO) has been proposed as an alternative to S-PSO. A particle in A-PSO updates its velocity and position as soon as its own performance has been evaluated. Hence, particles are updated using partial information, leading to stronger exploration. In this paper, we attempt to improve PSO by merging both update methods to utilise the strengths of both methods. The proposed synchronous-asynchronous PSO (SA-PSO) algorithm divides the particles into smaller groups. The best member of a group and the swarm's best are chosen to lead the search. Members within a group are updated synchronously, while the groups themselves are asynchronously updated. Five well-known unimodal functions, four multimodal functions, and a real world optimisation problem are used to study the performance of SA-PSO, which is compared with the performances of S-PSO and A-PSO. The results are statistically analysed and show that the proposed SA-PSO has performed consistently well. PMID:25121109
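The synchronous update scheme that S-PSO (and the synchronous half of SA-PSO) relies on can be shown on a toy problem: the whole swarm is evaluated first, and only then does every particle move using the shared bests. The inertia and acceleration coefficients below are common textbook values, not the paper's settings.

```python
import random

def sphere(x):
    # Classic unimodal benchmark; global minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def pso(f, dim=2, n=20, iters=200, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [list(p) for p in pos]
    pcost = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = list(pbest[g]), pcost[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # assumed textbook coefficients
    for _ in range(iters):
        # Synchronous step 1: evaluate the whole swarm first...
        costs = [f(p) for p in pos]
        for i in range(n):
            if costs[i] < pcost[i]:
                pcost[i], pbest[i] = costs[i], list(pos[i])
                if costs[i] < gcost:
                    gcost, gbest = costs[i], list(pos[i])
        # ...step 2: then move every particle using the shared bests.
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
    return gbest, gcost
```

An asynchronous variant would move each particle immediately after its own evaluation; SA-PSO mixes the two by updating group members synchronously while the groups themselves update asynchronously.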

  3. Pre-operative optimisation of lung function

    Directory of Open Access Journals (Sweden)

    Naheed Azhar

    2015-01-01

    Full Text Available The anaesthetic management of patients with pre-existing pulmonary disease is a challenging task. It is associated with increased morbidity in the form of post-operative pulmonary complications. Pre-operative optimisation of lung function helps in reducing these complications. Patients are advised to stop smoking for a period of 4–6 weeks. This reduces airway reactivity, improves mucociliary function and decreases carboxy-haemoglobin. The widely used incentive spirometry may be useful only when combined with other respiratory muscle exercises. Volume-based inspiratory devices have the best results. Pharmacotherapy of asthma and chronic obstructive pulmonary disease must be optimised before considering the patient for elective surgery. Beta 2 agonists, inhaled corticosteroids and systemic corticosteroids, are the main drugs used for this and several drugs play an adjunctive role in medical therapy. A graded approach has been suggested to manage these patients for elective surgery with an aim to achieve optimal pulmonary function.

  4. Plant-wide performance optimisation – The refrigeration system case

    DEFF Research Database (Denmark)

    Green, Torben; Razavi-Far, Roozbeh; Izadi-Zamanabadi, Roozbeh

    2012-01-01

applications in the process industry. The paper addresses the fact that the dynamic performance of the system is important to ensure optimal changes between different operating conditions. To enable optimisation of the dynamic controller behaviour, a method for designing the required excitation signal is presented...

  5. A Robust Algorithm for Optimisation and Customisation of Fractal Dimensions of Time Series Modified by Nonlinearly Scaling Their Time Derivatives: Mathematical Theory and Practical Applications

    Directory of Open Access Journals (Sweden)

    Franz Konstantin Fuss

    2013-01-01

Full Text Available Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere-differentiable functions, but not benchmarked against actual signals; they can therefore produce contradictory results for extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running-average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.
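The paper's optimisation method is more elaborate, but the kind of fractal-dimension estimator such methods build on can be illustrated with Higuchi's algorithm, which regresses log curve length against log scale. This is a standard textbook implementation, not the authors' code.

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi estimate of a time series' fractal dimension."""
    n = len(x)
    logs = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            pts = [x[i] for i in range(m, n, k)]
            if len(pts) < 2:
                continue
            # Curve length of the subsampled series, with Higuchi's
            # normalisation for the number of intervals and the scale k.
            lm = sum(abs(pts[i + 1] - pts[i]) for i in range(len(pts) - 1))
            norm = (n - 1) / (k * (len(pts) - 1))
            lengths.append(lm * norm / k)
        logs.append((math.log(1.0 / k),
                     math.log(sum(lengths) / len(lengths))))
    # Least-squares slope of log L(k) against log(1/k) gives the FD.
    mx = sum(p[0] for p in logs) / len(logs)
    my = sum(p[1] for p in logs) / len(logs)
    num = sum((p[0] - mx) * (p[1] - my) for p in logs)
    den = sum((p[0] - mx) ** 2 for p in logs)
    return num / den
```

A straight line has fractal dimension 1; noisier signals push the estimate towards 2, which is the property the paper's amplitude-multiplier optimisation exploits for decision making.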

  6. Radiation protection optimisation techniques and their application in industry

    Energy Technology Data Exchange (ETDEWEB)

    Lefaure, C

    1997-12-31

Since International Commission on Radiological Protection (ICRP) Recommendation 60, the optimisation principle has been the core of the radiation protection system. In practice, applying it means implementing an approach that is both predictive and evolutionary, and that relies essentially on a prudent and responsible state of mind. The formal expression of this process, called the optimisation procedure, implies an indispensable tool for its implementation: the system of monetary values for the unit of collective dose. During the last few years, experience has shown that applying the ALARA principle means that a global work management approach must be adopted, considering together all factors contributing to radiation dose. In the nuclear field, the ALARA approach appears to be more successful when implemented in the framework of a managerial approach through structured ALARA programmes. Outside the nuclear industry it is necessary to clearly define priorities through generic optimisation studies and ALARA audits. At the international level, much effort remains to be done to extend the ALARA process efficiently to internal exposure as well as to public exposure. (author) 2 graphs, 5 figs., 3 tabs.

  9. The principle of optimisation: reasons for success and legal criticism

    International Nuclear Information System (INIS)

    Fernandez Regalado, Luis

    2008-01-01

The International Commission on Radiological Protection (ICRP) adopted new recommendations in 2007. In broad outline, they fundamentally continue the recommendations already approved in 1990 and later. The principle of optimisation of protection, together with the principles of justification and dose limits, continues to play a key role in the ICRP recommendations, as it has for the last few years. This principle, somewhat reinforced in the 2007 ICRP recommendations, has been incorporated into norms and legislation which have peacefully been in force in many countries all over the world. There are three main reasons for the success of the application of the principle of optimisation in radiological protection. First, the subjectivity of the sentence that embraces the principle, 'as low as reasonably achievable' (ALARA), which allows different valid interpretations under different circumstances. Second, the pragmatism and adaptability of ALARA to all exposure situations. And third, the scientific humility behind the principle of optimisation, which makes a clear contrast with the old-fashioned scientific positivism that enshrined scientists' opinions. Nevertheless, from a legal point of view, some criticism has been cast over the principle of optimisation in radiological protection where it has been transformed into a compulsory norm. This criticism is based on two arguments: the lack of democratic participation in the process of elaborating the norm, and the legal uncertainty associated with its application. Both arguments are somehow known by the ICRP which, on the one hand, has broadened the participation of experts, associations and the professional radiological protection community, increasing the transparency of how decisions on recommendations have been taken, and on the other hand, has warned about the need for authorities to specify general criteria to develop the principle of optimisation in national

  10. Modelling and genetic algorithm based optimisation of inverse supply chain

    Science.gov (United States)

    Bányai, T.

    2009-04-01

(Recycling of household appliances with emphasis on reuse options). The purpose of this paper is to present a possible method for avoiding the unnecessary environmental risk and land use caused by unjustifiably large supply chains in the collection systems of recycling processes. In the first part of the paper the author presents the mathematical model of recycling-related collection systems (applied especially to wastes of electric and electronic products), and in the second part a genetic algorithm based optimisation method is demonstrated, by the aid of which it is possible to determine the optimal structure of the inverse supply chain from the point of view of economic, ecological and logistic objective functions. The model of the inverse supply chain is based on a multi-level, hierarchical collection system. In the case of this static model it is assumed that technical conditions are permanent. The total costs consist of three parts: total infrastructure costs, total material handling costs and environmental risk costs. The infrastructure-related costs depend only on the specific fixed costs and the specific unit costs of the operation points (collection, pre-treatment, treatment, recycling and reuse plants). The costs of warehousing and transportation are represented by the material handling related costs. The most important factors determining the level of environmental risk cost are the number of products recycled (treated or reused) out of time, the number of supply chain objects and the length of transportation routes. The objective function is the minimisation of the total cost taking the constraints into consideration. A lot of research work has discussed the design of supply chains [8], but most of it concentrates on linear cost functions. In the case of this model non-linear cost functions were used. The non-linear cost functions and the possibly high number of objects in the inverse supply chain led to the problem of choosing a
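The genetic-algorithm layer can be sketched for a toy version of the problem: binary genes encode which facilities are open, and the cost is deliberately non-linear in the load, echoing the abstract's point about non-linear cost functions. All cost figures are invented.

```python
import random

def total_cost(mask, fixed=(40, 55, 30, 70), unit=(9.0, 6.0, 11.0, 4.0),
               demand=100):
    """Toy non-linear cost: fixed costs of opened facilities plus a
    handling cost that grows super-linearly as demand concentrates."""
    n_open = sum(mask)
    if n_open == 0:
        return float("inf")
    load = demand / n_open
    handling = sum(u * load ** 1.3 for u, m in zip(unit, mask) if m)
    return sum(f for f, m in zip(fixed, mask) if m) + handling

def ga(n_bits=4, pop_size=30, gens=60, seed=3):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=total_cost)                   # elitist selection
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.2:                 # bit-flip mutation
                j = rng.randrange(n_bits)
                child[j] ^= 1
            children.append(child)
        pop = survivors + children
    best = min(pop, key=total_cost)
    return best, total_cost(best)
```

A real instance would encode multi-level facility assignments and add the environmental-risk term; with only four facilities the GA's answer can be checked by enumeration.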

  11. Necessity and complexity of order picking routing optimisation based on pallet loading features

    Directory of Open Access Journals (Sweden)

    Bódis Tamás

    2017-12-01

Full Text Available Order picking is the most labour-intensive and costly activity of warehouses. The main challenges in improving it are the synchronisation of warehouse layout, storage assignment policy, routing, zoning, and batching. Furthermore, the competitiveness of the warehouse depends on how it adapts to unique customer demands, product parameters and their changes. The operators usually have to manage the picking sequence based on best practices, taking into consideration the product stacking factors and minimising the lead time. It is usually necessary to support the operators in making effective decisions. Researchers of the pallet loading problem, bin packing problem, and order picking optimisation provide a wide horizon of solutions, but their results are rarely synchronised.

  12. Drip and Surface Irrigation Water Use Efficiency of Tomato Crop Using Nuclear Techniques

    International Nuclear Information System (INIS)

    Mellouli, H.J.; Askri, H.; Mougou, R.

    2003-01-01

Nations in the arid and semi-arid regions, especially the Arab countries, will have to take up an important challenge at the beginning of the 21st century: increasing food production in order to realise food security for a growing population, while optimising the use of limited water resources. The latter could be obtained by using and adapting management techniques such as the drip irrigation system, which allows a reduction in water losses by bare soil evaporation and deep percolation; consequently, improved water use efficiency could be realised. In this way, this work was conducted as a contribution to the Tunisian national programmes on the optimisation of water use. By means of a field study at Cherfech Experimental Station (30 km from Tunis), the effect of the irrigation system on the water use efficiency (WUE) of a season tomato crop was monitored by comparing three treatments receiving equivalent quantities of fertiliser: fertigation, drip irrigation and furrow irrigation. Irrigation was scheduled by calculating the water requirement based on agrometeorological data, the plant physiological stage and the soil water characteristics (clay loam). The plant water consumption (ETR) was determined by using the soil water balance method, where rainfall and the amount of irrigation water were readily measured
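The soil water balance used to determine ETR, and the WUE derived from it, reduce to simple arithmetic: ETR = I + P − ΔS − D, with WUE expressed as yield per unit of water consumed. A minimal sketch; the figures in the test are illustrative, not the station's data.

```python
def crop_water_use(irrigation, rainfall, delta_storage, drainage):
    """Seasonal soil-water-balance estimate of actual evapotranspiration
    ETR (all terms in mm): ETR = I + P - ΔS - D."""
    return irrigation + rainfall - delta_storage - drainage

def water_use_efficiency(yield_kg_ha, etr_mm):
    # WUE as marketable yield per unit of water consumed (kg/ha per mm).
    return yield_kg_ha / etr_mm
```

Comparing fertigation, drip and furrow treatments then amounts to computing WUE for each treatment's measured yield and water balance.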

  13. Adaptive plan selection vs. re-optimisation in radiotherapy for bladder cancer: A dose accumulation comparison

    International Nuclear Information System (INIS)

    Vestergaard, Anne; Muren, Ludvig Paul; Søndergaard, Jimmi; Elstrøm, Ulrik Vindelev; Høyer, Morten; Petersen, Jørgen B.

    2013-01-01

    Purpose: Patients with urinary bladder cancer are obvious candidates for adaptive radiotherapy (ART) due to large inter-fractional variation in bladder volumes. In this study we have compared the normal tissue sparing potential of two ART strategies: daily plan selection (PlanSelect) and daily plan re-optimisation (ReOpt). Materials and methods: Seven patients with bladder cancer were included in the study. For the PlanSelect strategy, a patient-specific library of three plans was generated, and the most suitable plan based on the pre-treatment cone beam CT (CBCT) was selected. For the daily ReOpt strategy, plans were re-optimised based on the CBCT from each daily fraction. Bladder contours were propagated to the CBCT scan using deformable image registration (DIR). Accumulated dose distributions for the ART strategies as well as the non-adaptive RT were calculated. Results: A considerable sparing of normal tissue was achieved with both ART approaches, with ReOpt being the superior technique. Compared to non-adaptive RT, the volume receiving more than 57 Gy (corresponding to 95% of the prescribed dose) was reduced to 66% (range 48–100%) for PlanSelect and to 41% (range 33–50%) for ReOpt. Conclusion: This study demonstrated a considerable normal tissue sparing potential of ART for bladder irradiation, with clearly superior results by daily adaptive re-optimisation

  14. An Efficient Sleepy Algorithm for Particle-Based Fluids

    Directory of Open Access Journals (Sweden)

    Xiao Nie

    2014-01-01

Full Text Available We present a novel Smoothed Particle Hydrodynamics (SPH) based algorithm for efficiently simulating compressible and weakly compressible particle fluids. Prior particle-based methods simulate all fluid particles; however, in many cases some particles appearing to be at rest can be safely ignored without notably affecting the fluid flow behavior. To identify these particles, a novel sleepy strategy is introduced. By utilizing this strategy, only a portion of the fluid particles requires computational resources; thus an obvious performance gain can be achieved. In addition, in order to resolve the unphysical clumping issue due to tensile instability in SPH-based methods, a new artificial repulsive force is provided. We demonstrate that our approach can be easily integrated with existing SPH-based methods to improve efficiency without sacrificing visual quality.
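The core idea of such a sleepy strategy — skip particles that are effectively at rest — can be sketched with a simple threshold test. The paper's actual activation/deactivation criteria are richer than this velocity/acceleration check, and the thresholds are assumed values.

```python
def classify_sleepy(particles, v_eps=1e-3, a_eps=1e-3):
    """Flag particles whose velocity and acceleration magnitudes fall
    below the thresholds as sleeping; return the list still to simulate.
    Each particle is a dict with 'vel' and 'acc' vectors."""
    awake = []
    for p in particles:
        speed2 = sum(v * v for v in p["vel"])
        accel2 = sum(a * a for a in p["acc"])
        p["asleep"] = speed2 < v_eps ** 2 and accel2 < a_eps ** 2
        if not p["asleep"]:
            awake.append(p)
    return awake
```

Only the returned list would receive the full SPH neighbour search and force evaluation; sleeping particles are woken when a neighbour's state change reaches them.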

  15. Vibration isolation design for periodically stiffened shells by the wave finite element method

    Science.gov (United States)

    Hong, Jie; He, Xueqing; Zhang, Dayi; Zhang, Bing; Ma, Yanhong

    2018-04-01

Periodically stiffened shell structures are widely used due to their excellent specific strength, in particular for aeronautical and astronautical components. This paper presents an improved Wave Finite Element Method (Wave FEM) that can be employed to predict the band-gap characteristics of stiffened shell structures efficiently. An aero-engine casing, which is a typical periodically stiffened shell structure, was employed to verify the validity and efficiency of the Wave FEM. Good agreement has been found between the Wave FEM and the classical FEM for different boundary conditions. One effective wave selection method based on the Wave FEM has thus been put forward to filter the radial modes of a shell structure. Furthermore, an optimisation strategy combining the Wave FEM and a genetic algorithm is presented for periodically stiffened shell structures. The optimal out-of-plane band gap and the mass of the whole structure can be achieved by the optimisation strategy under an aerodynamic load. Results also indicate that the geometric parameters of the stiffeners can be selected so that out-of-plane vibration attenuates significantly in the frequency band of interest. This study can provide valuable references for designing the band gaps of vibration isolation.

  16. Evaluation method of economic efficiency of industrial scale research based on an example of coking blend pre-drying technology

    Directory of Open Access Journals (Sweden)

    Żarczyński Piotr

    2017-01-01

Full Text Available Research on new and innovative solutions, technologies and products carried out on an industrial scale is the most reliable method of verifying the validity of their implementation. The results obtained with this method give almost complete certainty, although, at the same time, research on an industrial scale requires the greatest expenditure of money. Therefore, this method is not commonly applied in industrial practice. When deciding to implement new and innovative technologies, it is nevertheless reasonable to carry out industrial research, both because of its cognitive value and its economic efficiency. Research on an industrial scale may prevent investment failure as well as lead to an improvement of the technology, which is the source of economic efficiency. In this paper, an evaluation model of the economic efficiency of industrial-scale research is presented. This model is based on the discount method and the decision tree model. A practical application of the proposed evaluation model is presented using the example of coal charge pre-drying technology before coke making in a coke oven battery, which may be preceded by industrial-scale research on a new type of coal charge dryer.
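The discount-method plus decision-tree core of such an evaluation model reduces to two small formulas: the net present value of a cash-flow stream, and the expected value of a decision node that pays for the industrial trial before branching on its outcome. A minimal sketch with illustrative conventions (year-0 outlay undiscounted), not the paper's model.

```python
def npv(rate, cashflows):
    """Net present value: cashflows[0] is the (usually negative) outlay
    at t = 0, cashflows[t] the net flow at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def trial_option_value(p_success, npv_success, npv_failure, trial_cost):
    # Expected value of one decision-tree node: pay for the industrial
    # trial, then branch on its (binary) outcome.
    return p_success * npv_success + (1.0 - p_success) * npv_failure - trial_cost
```

If the trial's cost is smaller than the expected loss it averts (the failure branch it lets the firm walk away from), the industrial-scale research is economically efficient even before its cognitive value is counted.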

  17. Geometric Generalisation of Surrogate Model-Based Optimisation to Combinatorial and Program Spaces

    Directory of Open Access Journals (Sweden)

    Yong-Hyuk Kim

    2014-01-01

    Full Text Available Surrogate models (SMs can profitably be employed, often in conjunction with evolutionary algorithms, in optimisation in which it is expensive to test candidate solutions. The spatial intuition behind SMs makes them naturally suited to continuous problems, and the only combinatorial problems that have been previously addressed are those with solutions that can be encoded as integer vectors. We show how radial basis functions can provide a generalised SM for combinatorial problems which have a geometric solution representation, through the conversion of that representation to a different metric space. This approach allows an SM to be cast in a natural way for the problem at hand, without ad hoc adaptation to a specific representation. We test this adaptation process on problems involving binary strings, permutations, and tree-based genetic programs.
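The generalisation described — a radial basis function surrogate over a combinatorial metric — can be sketched with Gaussian RBFs over Hamming distance on binary strings. The kernel width and the tiny training set are invented, and a real implementation would use a proper linear-algebra library rather than the naive elimination shown here.

```python
import math

def hamming(a, b):
    # Combinatorial metric: number of differing positions.
    return sum(x != y for x, y in zip(a, b))

def rbf_surrogate(samples, values, eps=0.5):
    """Interpolating surrogate f(x) = sum_i w_i * exp(-(eps*d(x, x_i))**2)
    over Hamming distance; weights solve the interpolation system."""
    n = len(samples)
    A = [[math.exp(-(eps * hamming(samples[i], samples[j])) ** 2)
          for j in range(n)] for i in range(n)]
    w = solve(A, list(values))
    def predict(x):
        return sum(wi * math.exp(-(eps * hamming(x, s)) ** 2)
                   for wi, s in zip(w, samples))
    return predict

def solve(A, b):
    # Naive Gaussian elimination with partial pivoting (stdlib only).
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x
```

The same construction works for permutations (swap distance) or trees (edit distance) by swapping out `hamming`, which is the representation-independence the abstract argues for.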

  18. Marriage in honey bees optimisation (MBO) algorithm in multi-reservoir system optimisation

    African Journals Online (AJOL)

    A comparative study of marriage in honey bees optimisation (MBO) algorithm in ... A practical application of the marriage in honey bees optimisation (MBO) ... to those of other evolutionary algorithms, such as the genetic algorithm (GA), ant ...

  19. Manufacturing footprint optimisation: a necessity for manufacturing network in changing business environment

    DEFF Research Database (Denmark)

    Yang, Cheng; Farooq, Sami; Johansen, John

    2010-01-01

    Facing the unpredictable financial crisis, optimising the footprint can be the biggest and most important transformation a manufacturer can undertake. In order to realise this optimisation, a fundamental understanding of the manufacturing footprint is required. Different elements of the manufacturing footprint have been investigated independently in the existing literature. In this paper, for the purpose of exploring the relationships between different elements, the manufacturing footprints of three industrial companies are traced historically. Based on them, four reasons for the transformation...

  20. Chapter 1: Introduction. The Uniform Methods Project: Methods for Determining Energy-Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Li, Michael [Dept. of Energy (DOE), Washington DC (United States). Office of Energy Efficiency and Renewable Energy; Haeri, Hossein [The Cadmus Group, Portland, OR (United States); Reynolds, Arlis [The Cadmus Group, Portland, OR (United States)

    2017-09-28

    This chapter provides a set of model protocols for determining energy and demand savings that result from specific energy efficiency measures implemented through state and utility efficiency programs. The methods described here are, or are among, the most commonly used and accepted in the energy efficiency industry for certain measures or programs. As such, they draw from the existing body of research and best practices for energy efficiency program evaluation, measurement, and verification (EM&V). These protocols were developed as part of the Uniform Methods Project (UMP), funded by the U.S. Department of Energy (DOE). The principal objective of the project was to establish easy-to-follow protocols based on commonly accepted methods for a core set of widely deployed energy efficiency measures.

  1. Optimisation of Investment Resources at Small Enterprises

    Directory of Open Access Journals (Sweden)

    Shvets Iryna B.

    2014-03-01

    Full Text Available The goal of the article is to study the process of optimisation of the structure of investment resources and to develop criteria and stages of optimisation of volumes of investment resources for small enterprises by type of economic activity. The article characterises the process of transformation of investment resources into assets and liabilities of the balances of small enterprises and calculates the structure of sources of formation of investment resources at small enterprises in Ukraine by type of economic activity in 2011. On the basis of the conducted analysis of the structure of investment resources of small enterprises, the article forms the main groups of criteria of optimisation in the context of individual small enterprises by type of economic activity. The article offers an algorithm and a step-by-step scheme of optimisation of investment resources at small enterprises, in the form of a multi-stage process of management of investment resources aimed at increasing their mobility and the rate of transformation of existing resources into investments. A prospect for further study in this direction is the development of a structural and logic scheme of optimisation of volumes of investment resources at small enterprises.

  2. The efficiency of different estimation methods of hydro-physical limits

    Directory of Open Access Journals (Sweden)

    Emma María Martínez

    2012-12-01

    Full Text Available The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as permanent wilting point (PWP) and field capacity (FC), is the selection of a suitable method based on a multi-criteria analysis that is not always clear and defined. In this kind of analysis, the time required for measurements must be taken into consideration as well as other external measurement factors, e.g., the reliability and suitability of the study area, measurement uncertainty, cost, effort and labour invested. In this paper, the efficiency of different methods for determining hydro-physical limits is evaluated by using indices that allow for the calculation of efficiency in terms of effort and cost. The analysis evaluates both direct determination methods (pressure plate, PP, and water activity meter, WAM) and indirect estimation methods (pedotransfer functions, PTFs). The PTFs must be validated for the area of interest before use, but the time and cost associated with this validation are not included in the cost of analysis. Compared to the other methods, the combined use of PP and WAM to determine hydro-physical limits differs significantly in the time and cost required and in the quality of information. For direct methods, increasing sample size significantly reduces cost and time. This paper assesses the effectiveness of combining a general analysis based on efficiency indices with more specific analyses based on the different influencing factors, which were considered separately so as not to mask potential benefits or drawbacks that are not evidenced in the efficiency estimation.

  3. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    Science.gov (United States)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among which is a nearly linear increase per iteration in the number of cycles of the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested and it is concluded that the presented iteration method is near optimal.
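The interplay of a growing history count and a shrinking relaxation factor can be illustrated with a toy stand-in for the Monte Carlo tally. This is a hedged sketch of a stochastic-approximation-style update, not the coupled reactor code: the noise model and the α_i = n_i/N_i weighting (which gives every simulated history equal weight) are assumptions, and the exact scheme in the record may differ.

```python
import random

def relaxed_iteration(n_steps=20, n0=1000, seed=42):
    """Couple a noisy 'Monte Carlo' sample into a relaxed running estimate.

    Each step simulates n_i = n0 * i histories; the stand-in tally has
    statistical noise shrinking as 1/sqrt(n_i).  The relaxation factor
    alpha_i = n_i / N_i (N_i = cumulative histories) decreases towards 0,
    so early, noisy samples are progressively forgotten."""
    rng = random.Random(seed)
    estimate, total = 0.0, 0
    for i in range(1, n_steps + 1):
        n_i = n0 * i                  # histories grow linearly per step
        total += n_i                  # N_i: cumulative histories so far
        alpha = n_i / total           # decreasing relaxation factor
        sample = 1.0 + rng.gauss(0.0, 1.0) / n_i ** 0.5  # noisy tally
        estimate = (1 - alpha) * estimate + alpha * sample
    return estimate

print(round(relaxed_iteration(), 3))
```

With this weighting, the final estimate is the average of all samples weighted by their history counts, so its variance decays like 1/N_total, mirroring why the scheme converges despite the stochastic tallies.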

  4. Balanced the Trade-offs problem of ANFIS Using Particle Swarm Optimisation

    Directory of Open Access Journals (Sweden)

    Dian Palupi Rini

    2013-11-01

    Full Text Available Improving the approximation accuracy and interpretability of fuzzy systems is an important issue in both fuzzy systems theory and its applications. Simultaneous optimisation of both issues is known to be a trade-off problem, but it improves the performance of the system and avoids overtraining of the data. Particle swarm optimisation (PSO) is an evolutionary algorithm that is a good candidate for problems with multiple optimal solutions and offers better global search of the space. This paper introduces an integration of PSO and ANFIS to optimise ANFIS learning, especially for tuning membership function parameters and finding the optimal rules for better classification. The proposed method has been tested on four standard datasets from UCI machine learning, i.e. Iris Flower, Haberman's Survival, Balloon and Thyroid. The results have shown better classification using the proposed PSO-ANFIS, and the time complexity is reduced accordingly.
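A plain global-best PSO of the kind used to tune ANFIS parameters can be sketched as follows. The sphere function stands in for the actual membership-function tuning objective, and the swarm size, inertia and acceleration coefficients are illustrative defaults, not values from the paper.

```python
import random

def pso(f, dim, n_particles=20, iters=100, seed=1):
    """Global-best particle swarm minimisation (standard inertia form)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function as the stand-in objective to be minimised.
best, best_val = pso(lambda x: sum(t * t for t in x), dim=3)
print(round(best_val, 8))
```

In the PSO-ANFIS coupling described above, the particle position vector would encode the membership function parameters, and `f` would return the classification error of the resulting fuzzy system.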

  5. Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model

    Science.gov (United States)

    Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.

    2017-09-01

    The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
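The SSA step in the record (perturb one gauge location at a time, accept worse designs with a decaying probability) can be sketched with a simplified criterion. The mean distance from prediction points to the nearest gauge stands in for the space-time averaged KED variance, which would require the full geostatistical model; all parameter values here are illustrative assumptions.

```python
import math
import random

def mean_nearest_gauge_dist(gauges, grid):
    """Proxy design criterion: mean distance from prediction points to the
    nearest gauge (the record minimises the space-time KED variance)."""
    return sum(min(math.dist(p, g) for g in gauges) for p in grid) / len(grid)

def spatial_simulated_annealing(n_gauges=5, iters=2000, seed=0):
    rng = random.Random(seed)
    grid = [(x / 9, y / 9) for x in range(10) for y in range(10)]
    gauges = [(rng.random(), rng.random()) for _ in range(n_gauges)]
    cost = best = mean_nearest_gauge_dist(gauges, grid)
    temp = 0.1
    for _ in range(iters):
        cand = list(gauges)
        i = rng.randrange(n_gauges)      # perturb one gauge location
        cand[i] = (min(1.0, max(0.0, cand[i][0] + rng.gauss(0, 0.1))),
                   min(1.0, max(0.0, cand[i][1] + rng.gauss(0, 0.1))))
        c = mean_nearest_gauge_dist(cand, grid)
        # Accept improvements always, deteriorations with decaying probability.
        if c < cost or rng.random() < math.exp((cost - c) / temp):
            gauges, cost = cand, c
            best = min(best, cost)
        temp *= 0.999                    # geometric cooling schedule
    return best

print(round(spatial_simulated_annealing(), 3))
```

Replacing the proxy criterion with the recalibrated KED variance, averaged over the daily prediction time steps, recovers the optimisation actually performed in the study.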

  6. An environmentally-friendly fluorescent method for quantification of lipid contents in yeast

    DEFF Research Database (Denmark)

    Severo Poli, Jandora; Lützhøft, Hans-Christian Holten; Karakashev, Dimitar Borisov

    2014-01-01

    This study aimed at developing an efficient, fast and environmentally-friendly method to quantify neutral lipid contents in yeast. After optimising the fluorescence instrument parameters and the influence of organic solvent concentrations, a new method to quantify neutral lipids in yeast based on fluorescence was developed; the calibration curve showed linearity (R2 = 0.994) between 0.50 and 25 mg/L of lipid. Compared with traditional gravimetric analysis, the developed method is much faster and uses less organic solvents. Lipid contents determined by fluorescence and gravimetry were the same for some strains, but for other strains the lipid contents determined by fluorescence were less. This new method will therefore be suitable for fast screening purposes.

  7. Optimisation of the microencapsulation of tuna oil in gelatin-sodium hexametaphosphate using complex coacervation.

    Science.gov (United States)

    Wang, Bo; Adhikari, Benu; Barrow, Colin J

    2014-09-01

    The microencapsulation of tuna oil in gelatin-sodium hexametaphosphate (SHMP) using complex coacervation was optimised for the stabilisation of omega-3 oils for use as a functional food ingredient. Firstly, oil stability was optimised by comparing the accelerated stability of tuna oil in the presence of various commercial antioxidants, using a Rancimat™. Then zeta-potential (mV), turbidity and coacervate yield (%) were measured and optimised for complex coacervation. The highest yield of complex coacervate was obtained at pH 4.7 and at a gelatin to SHMP ratio of 15:1. Multi-core microcapsules were formed when the mixed microencapsulation system was cooled to 5 °C at a rate of 12 °C/h. Crosslinking with transglutaminase followed by freeze drying resulted in a dried powder with an encapsulation efficiency of 99.82% and a payload of 52.56%. Some 98.56% of the oil was successfully microencapsulated, and accelerated stability testing using a Rancimat™ showed more than double the stability of non-encapsulated oil.

  8. An efficient fermentation method for the degradation of cyanogenic glycosides in flaxseed.

    Science.gov (United States)

    Wu, C-F; Xu, X-M; Huang, S-H; Deng, M-C; Feng, A-J; Peng, J; Yuan, J-P; Wang, J-H

    2012-01-01

    Recently, flaxseed has become increasingly popular in the health food market because it contains a considerable amount of specific beneficial nutrients such as lignans and omega-3 fatty acids. However, the presence of cyanogenic glycosides (CGs) in flaxseed severely limits the exploitation of its health benefits and nutritive value. We, therefore, developed an effective fermentation method, optimised by response surface methodology (RSM), for degrading CGs with an enzymatic preparation that includes 12.5% β-glucosidase and 8.9% cyanide hydratase. These optimised conditions resulted in a maximum CG degradation level of 99.3%, reducing the concentration of cyanide in the flaxseed powder from 1.156 to 0.015 mg g⁻¹ after 48 h of fermentation. The avoidance of steam heat to evaporate hydrocyanic acid (HCN) results in lower energy consumption and no environmental pollution. In addition, the detoxified flaxseed retained the beneficial nutrients, lignans and fatty acids, at the same level as untreated flaxseed, and this method could provide a new means of removing CGs from other edible plants, such as cassava, almond and sorghum, by simultaneously expressing cyanide hydratase and β-glucosidase.

  9. An optimised method for the extraction of bacterial mRNA from plant roots infected with Escherichia coli O157:H7

    Directory of Open Access Journals (Sweden)

    Ashleigh Holmes

    2014-06-01

    Full Text Available Analysis of microbial gene expression during host colonisation provides valuable information on the nature of interaction, beneficial or pathogenic, and the adaptive processes involved. Isolation of bacterial mRNA for in planta analysis can be challenging where host nucleic acid may dominate the preparation, or inhibitory compounds affect downstream analysis, e.g. qPCR, microarray or RNA-seq. The goal of this work was to optimise the isolation of bacterial mRNA of food-borne pathogens from living plants. Reported methods for recovery of phytopathogen-infected plant material, using hot phenol extraction and high concentration of bacterial inoculation or large amounts of infected tissues, were found to be inappropriate for plant roots inoculated with Escherichia coli O157:H7. The bacterial RNA yields were too low and increased plant material resulted in a dominance of plant RNA in the sample. To improve the yield of bacterial RNA and reduce the number of plants required, an optimised method was developed which combines bead beating with directed bacterial lysis using SDS and lysozyme. Inhibitory plant compounds, such as phenolics and polysaccharides, were counteracted with the addition of HMW-PEG and CTAB. The new method increased the total yield of bacterial mRNA substantially and allowed assessment of gene expression by qPCR. This method can be applied to other bacterial species associated with plant roots, and also in the wider context of food safety.

  10. Optimisation of Transmission Systems by use of Phase Shifting Transformers

    Energy Technology Data Exchange (ETDEWEB)

    Verboomen, J

    2008-10-13

    In this thesis, transmission grids with PSTs (Phase Shifting Transformers) are investigated. In particular, the following goals are put forward: (a) The analysis and quantification of the impact of a PST on a meshed grid. This includes the development of models for the device; (b) The development of methods to obtain optimal coordination of several PSTs in a meshed grid. An objective function should be formulated, and an optimisation method must be adopted to solve the problem; and (c) The investigation of different strategies to use a PST. Chapter 2 gives a short overview of active power flow controlling devices. In chapter 3, a first step towards optimal PST coordination is taken. In chapter 4, metaheuristic optimisation methods are discussed. Chapter 5 introduces DC load flow approximations, leading to analytically closed equations that describe the relation between PST settings and active power flows. In chapter 6, some applications of the methods that are developed in earlier chapters are presented. Chapter 7 contains the conclusions of this thesis, as well as recommendations for future work.

  11. Active vibration reduction of a flexible structure bonded with optimised piezoelectric pairs using half and quarter chromosomes in genetic algorithms

    International Nuclear Information System (INIS)

    Daraji, A H; Hale, J M

    2012-01-01

    The optimal placement of sensors and actuators in active vibration control is limited by the number of candidates in the search space. The search space of a small structure discretized to one hundred elements for optimising the location of ten actuators gives 1.73 × 10^13 possible solutions, one of which is the global optimum. In this work, a new quarter- and half-chromosome technique based on symmetry is developed, by which the search space for optimisation of sensor/actuator locations in active vibration control of flexible structures may be greatly reduced. The technique is applied to the optimisation for eight and ten actuators located on a 500 × 500 mm square plate, in which the search space is reduced by up to 99.99%. The technique also allows the genetic algorithm program to update natural frequencies and mode shapes in each generation, so that the global optimal solution is found in a greatly reduced number of generations. An isotropic plate with piezoelectric sensor/actuator pairs bonded to its surface was investigated using the finite element method and Hamilton's principle based on first-order shear deformation theory. The placement and feedback gain of ten and eight sensor/actuator pairs were optimised for a cantilever and a clamped-clamped plate to attenuate the first six modes of vibration, using minimisation of a linear quadratic index as the objective function.
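The scale of the search-space reduction can be checked directly: choosing 10 actuator sites out of 100 elements gives C(100, 10) ≈ 1.73 × 10^13 candidates, matching the figure in the record, while a symmetry-based encoding only enumerates positions on half or a quarter of the plate. The mapping of actuator counts onto the reduced encodings below is an illustrative assumption, not the paper's exact scheme.

```python
from math import comb

# Full search: place 10 actuators on a plate discretized to 100 elements.
full = comb(100, 10)            # the ~1.73e13 quoted in the record

# Half chromosome: encode 5 actuators on 50 elements and mirror the
# choice onto the other half of the symmetric plate (assumed mapping).
half = comb(50, 5)

# Quarter chromosome: for 8 actuators, encode 2 on a 25-element quarter
# and mirror onto the remaining three quarters (assumed mapping).
quarter = comb(25, 2)

reduction = 100 * (1 - half / full)
print(full, half, quarter, round(reduction, 4))
```

Even the half-chromosome encoding shrinks the candidate count from about 1.73 × 10^13 to about 2.1 × 10^6, consistent with the "up to 99.99%" reduction stated in the record.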

  12. Development, optimisation, and application of ICP-SFMS methods for the measurement of isotope ratios

    International Nuclear Information System (INIS)

    Stuerup, S.

    2000-07-01

    The measurement of isotopic composition and isotope ratios in biological and environmental samples requires sensitive, precise, and accurate analytical techniques. The analytical techniques used are traditionally based on mass spectrometry; among these is the ICP-SFMS technique, which became commercially available in the mid 1990s. This technique is characterised by high sensitivity, low background, and the ability to separate analyte signals from spectral interferences. These features are beneficial for the measurement of isotope ratios and enable the measurement of isotope ratios of elements which it has not previously been possible to measure, due to either spectral interferences or poor sensitivity. The overall purpose of the project was to investigate the potential of the single-detector ICP-SFMS technique for the measurement of isotope ratios in biological and environmental samples. One part of the work focused on the fundamental aspects of the ICP-SFMS technique, with special emphasis on the features important to the measurement of isotope ratios, while another part focused on the development, optimisation and application of specific methods for the measurement of isotope ratios of elements of nutritional interest and of radionuclides. The fundamental aspects of the ICP-SFMS technique were investigated theoretically and experimentally by the measurement of isotope ratios under different experimental conditions. It was demonstrated that isotope ratios can be measured reliably using ICP-SFMS by an educated choice of acquisition parameters, scanning mode and mass discrimination correction, and by eliminating the influence of detector dead time.
Applying the knowledge gained through the fundamental study, ICP-SFMS methods were developed for the measurement of isotope ratios of calcium, zinc, molybdenum and iron in human samples, together with a method for the measurement of plutonium isotope ratios and ultratrace levels of plutonium and neptunium in environmental samples.

  13. A trap-based pulsed positron beam optimised for positronium laser spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, B. S., E-mail: ben.cooper.13@ucl.ac.uk; Alonso, A. M.; Deller, A.; Wall, T. E.; Cassidy, D. B. [Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT (United Kingdom)

    2015-10-15

    We describe a pulsed positron beam that is optimised for positronium (Ps) laser-spectroscopy experiments. The system is based on a two-stage Surko-type buffer gas trap that produces 4 ns wide pulses containing up to 5 × 10^5 positrons at a rate of 0.5-10 Hz. By implanting positrons from the trap into a suitable target material, a dilute positronium gas with an initial density of the order of 10^7 cm^-3 is created in vacuum. This is then probed with pulsed (ns) laser systems, where various Ps-laser interactions have been observed via changes in Ps annihilation rates using a fast gamma ray detector. We demonstrate the capabilities of the apparatus and detection methodology via the observation of Rydberg positronium atoms with principal quantum numbers ranging from 11 to 22 and the Stark broadening of the n = 2 → 11 transition in electric fields.

  14. Methods for optimizing over the efficient and weakly efficient sets of an affine fractional vector optimization program

    DEFF Research Database (Denmark)

    Le, T.H.A.; Pham, D. T.; Canh, Nam Nguyen

    2010-01-01

    Both the efficient and weakly efficient sets of an affine fractional vector optimization problem, in general, are neither convex nor given explicitly. Optimization problems over one of these sets are thus nonconvex. We propose two methods for optimizing a real-valued function over the efficient and weakly efficient sets of an affine fractional vector optimization problem. The first method is a local one. By using a regularization function, we reformulate the problem into a standard smooth mathematical programming problem that allows applying available methods for smooth programming. In case the objective function is linear, we have investigated a global algorithm based upon a branch-and-bound procedure. The algorithm uses a Lagrangian bound coupled with a simplicial bisection in the criteria space. Preliminary computational results show that the global algorithm is promising.

  15. Dimensional ranges and rolling efficiency in a tandem cold rolling mill

    Energy Technology Data Exchange (ETDEWEB)

    Larkiola, J.

    1997-12-31

    In this work, physical models and neural network theory have been combined in order to predict the properties of a steel strip and to optimise the process parameters in cold rolling. The prediction of the deformation resistance of the material and the friction parameter is based on the physical model presented by Bland, Ford and Ellis and on artificial neural network computing (ANN). The accuracy of these models has been tested and proved using a large amount of measured data. With the aid of these models it has been shown that (a) a small change to the relative reduction distribution can have a clear effect upon the rolling efficiency, (b) the dimensional ranges of the tandem cold rolling mill can be determined and optimised and (c) the possibility to cold roll a new product of new width, strength or thickness can be determined and the parameters of the tandem cold rolling process can be optimised. (orig.) 43 refs.

  16. Detecting Android Malwares with High-Efficient Hybrid Analyzing Methods

    Directory of Open Access Journals (Sweden)

    Yu Liu

    2018-01-01

    Full Text Available In order to tackle the security issues caused by malwares of Android OS, we proposed a highly efficient hybrid detection scheme for Android malwares. Our scheme employed different analyzing methods (static and dynamic methods) to construct a flexible detection scheme. In this paper, we proposed detection techniques such as the Com+ feature, based on traditional Permission and API call features, to improve the performance of static detection. The collapsing issue of traditional function call graph-based malware detection was also avoided, as we adopted feature selection and a clustering method to unify function call graph features of various dimensions into the same dimension. In order to verify the performance of our scheme, we built an open-access malware dataset for our experiments. The experimental results showed that the suggested scheme achieves high malware-detecting accuracy, and the scheme could be used to establish Android malware-detecting cloud services, which can automatically adopt high-efficiency analyzing methods according to the properties of the Android applications.

  17. Flow optimisation of a biomass mixer; Stroemungstechnische Optimierung eines Biomasse-Ruehrwerks

    Energy Technology Data Exchange (ETDEWEB)

    Casartelli, E.; Waser, R. [Hochschule fuer Technik und Architektur Luzern (HTA), Horw (Switzerland); Fankhauser, H. [Fankhauser Maschinenfabrik, Malters (Switzerland)

    2007-03-15

    This illustrated final report for the Swiss Federal Office of Energy (SFOE) reports on the optimisation of a mixing system used in biomass reactors. Aim of this work was to improve the fluid dynamic qualities of the mixer in order to increase its efficiency while, at the same time, maintaining robustness and low price. Investigative work performed with CFD (Computational Fluid Dynamics) is reported on. CFD is quoted by the authors as being very effective in solving such optimisation problems as it is suited to flows that are not easily accessible for analysis. Experiments were performed on a fermenter / mixer model in order to confirm the computational findings. The results obtained with two and three-dimensional simulations are presented and discussed, as are those resulting from the tests with the 1:10 scale model of a digester. Initial tests with the newly developed mixer-propellers in a real-life biogas installation are reported on and further tests to be made are listed.

  18. An efficient multilevel optimization method for engineering design

    Science.gov (United States)

    Vanderplaats, G. N.; Yang, Y. J.; Kim, D. S.

    1988-01-01

    An efficient multilevel design optimization technique is presented. The proposed method is based on the concept of providing linearized information between the system-level and subsystem-level optimization tasks. The advantages of the method are that it does not require optimum sensitivities, nonlinear equality constraints are not needed, and the method is relatively easy to use. The disadvantage is that the coupling between subsystems is not dealt with in a precise mathematical manner.

  19. MULTI-OBJECTIVE OPTIMISATION OF LASER CUTTING USING CUCKOO SEARCH ALGORITHM

    Directory of Open Access Journals (Sweden)

    M. MADIĆ

    2015-03-01

    Full Text Available Determining optimal laser cutting conditions for improving cut quality characteristics is of great importance in process planning. This paper presents multi-objective optimisation of the CO2 laser cutting process considering three cut quality characteristics: surface roughness, heat affected zone (HAZ) and kerf width. It combines an experimental design using Taguchi’s method, modelling of the relationships between the laser cutting factors (laser power, cutting speed, assist gas pressure and focus position) and the cut quality characteristics by artificial neural networks (ANNs), formulation of the multi-objective optimisation problem using the weighted sum method, and its solution by the novel meta-heuristic cuckoo search algorithm (CSA). The objective is to obtain optimal cutting conditions dependent on the importance order of the cut quality characteristics, for each of four case studies presented in this paper: minimisation of the cut quality characteristics with equal priority, and minimisation with priority given to surface roughness, to HAZ, and to kerf width, respectively. The results indicate that the applied CSA is effective for solving the multi-objective optimisation problem, and that the proposed approach can be used to select the optimal laser cutting factors for specific production requirements.
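A minimal cuckoo search with Lévy flights over a weighted-sum scalarised objective can be sketched as follows. The two quality criteria, the equal weights, the bounds and all algorithm parameters are illustrative stand-ins for the ANN-modelled cut characteristics, not values from the paper.

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for a heavy-tailed Levy-stable step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = rng.gauss(0, sigma), rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim=2, n_nests=15, iters=200, pa=0.25, seed=3):
    rng = random.Random(seed)
    nests = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    best = min(range(n_nests), key=lambda i: fit[i])
    for _ in range(iters):
        for i in range(n_nests):
            # Generate a new egg by a Levy flight around the best nest.
            cand = [nests[i][d]
                    + 0.01 * levy_step(rng) * (nests[i][d] - nests[best][d])
                    for d in range(dim)]
            fc = f(cand)
            j = rng.randrange(n_nests)
            if fc < fit[j]:              # replace a randomly chosen worse nest
                nests[j], fit[j] = cand, fc
        for i in range(n_nests):         # abandon a fraction pa of nests
            if i != best and rng.random() < pa:
                nests[i] = [rng.uniform(-2, 2) for _ in range(dim)]
                fit[i] = f(nests[i])
        best = min(range(n_nests), key=lambda i: fit[i])
    return nests[best], fit[best]

# Weighted-sum scalarisation of two illustrative quality criteria.
w1, w2 = 0.5, 0.5
objective = lambda x: w1 * sum(t * t for t in x) + w2 * sum(abs(t) for t in x)
_, val = cuckoo_search(objective)
print(round(val, 4))
```

In the record's setting, the scalarised objective would instead combine the three ANN predictions (roughness, HAZ, kerf width) with weights reflecting the chosen priority order.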

  20. Model-based monitoring, optimisation and cogeneration plant billing in heating power stations; Modellgestuetzte Ueberwachung, Optimierung und KWK Abrechnung in Heizkraftwerken

    Energy Technology Data Exchange (ETDEWEB)

    Deeskow, P. [STEAG KETEK IT GmbH, Oberhausen (Germany); Pawellek, R. [Sofbid GmbH, Zwingenberg (Germany)

    2005-07-01

    On the basis of thermodynamic modelling, efficient online systems can be constructed which provide multiple commercial uses for plant operation. Incipient failures are recognized earlier, so that countermeasures can be taken at an early stage and long-term maintenance measures can be planned. Performance can be optimised, and - last but not least - the multitude of processed data enables workflow analysis, e.g. for simplifying billing processes in secondary relational databases. Performance data are presented of two coal power plants with 350 MWel/250MWth and 450MWel/50MWth in which systems of this type have been in use for several years now. (orig.)

  1. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals

    Energy Technology Data Exchange (ETDEWEB)

    Zuehlsdorff, T. J., E-mail: tjz21@cam.ac.uk; Payne, M. C. [Cavendish Laboratory, J. J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Hine, N. D. M. [Department of Physics, University of Warwick, Coventry CV4 7AL (United Kingdom); Haynes, P. D. [Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Department of Physics, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Thomas Young Centre for Theory and Simulation of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom)

    2015-11-28

    We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.

  2. Critical factors for optimising skill-grade-mix based on principles of Lean Management - a qualitative substudy

    Science.gov (United States)

    Inauen, Alice; Rettke, Horst; Fridrich, Annemarie; Spirig, Rebecca; Bauer, Georg F

    2017-01-01

    Background: Due to scarce resources in health care, staff deployment has to be matched to demand. To optimise skill-grade-mix, a Swiss University Hospital initiated a project based on principles of Lean Management. The project team accompanied each participating nursing department and scientifically evaluated the results of the project. Aim: The aim of this qualitative sub-study was to identify critical success factors of this project. Method: In four focus groups, participants discussed their experience of the project. Participants were recruited from departments that retrospectively assessed the impact of the project as either positive or critical. In addition, the degree of direct involvement in the project served as a distinguishing criterion. Results: While the degree of direct involvement in the project was not decisive, conflicting opinions and experiences appeared in the groups with more positive or critical project evaluation. Transparency, context and attitude proved critical for the project’s success. Conclusions: Project managers should ensure transparency of the project’s progress and matching of the project structure with local conditions in order to support participants in their critical or positive attitude towards the project.

  3. An Energy-Efficient Cluster-Based Vehicle Detection on Road Network Using Intention Numeration Method

    Directory of Open Access Journals (Sweden)

    Deepa Devasenapathy

    2015-01-01

    Full Text Available The traffic in the road network is increasing at an ever greater rate. Good knowledge of network traffic can minimize congestion using information pertaining to the road network obtained with the aid of communal callers, pavement detectors, and so on. These methods, however, generate only low-level information about users of the road network. Although the existing schemes obtain urban traffic information, they fail to calculate the energy drain rate of nodes and to strike a balance between the overhead and the quality of the routing protocol, which poses a great challenge. Thus, an energy-efficient cluster-based vehicle detection in road network using the intention numeration method (CVDRN-IN) is developed. Initially, sensor nodes that detect a vehicle are grouped into separate clusters. Further, we approximate the strength of the node drain rate for a cluster using a polynomial regression function. In addition, the total node energy is estimated by taking the integral over the area. Finally, enhanced data aggregation is performed to reduce the amount of data transmission using a digital signature tree. The experimental performance is evaluated with the Dodgers loop sensor data set from the UCI repository, and the method outperforms existing work on energy consumption, clustering efficiency, and node drain rate.
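    The drain-rate estimation described in the abstract above can be illustrated with a small sketch: fit a polynomial to a cluster's residual-energy readings, differentiate it to obtain the drain rate, and integrate to estimate total energy spent. The data, the polynomial degree and all names below are illustrative assumptions, not the CVDRN-IN implementation.

```python
import numpy as np

# Hypothetical per-cluster readings: time in s, residual energy in J.
# The real CVDRN-IN data and polynomial degree are not given in the abstract;
# a degree-2 fit is an illustrative assumption.
times = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
energy = np.array([100.0, 96.2, 91.8, 86.9, 81.4, 75.3])

# Fit a polynomial E(t) to the cluster's residual energy over time.
poly = np.poly1d(np.polyfit(times, energy, deg=2))

# The drain rate is the negative derivative of the fitted energy curve.
drain_rate = -poly.deriv()

# Total energy drained over the window: integrate the drain rate over time.
antideriv = np.polyint(drain_rate)
total_drained = float(antideriv(50.0) - antideriv(0.0))

print(round(float(drain_rate(25.0)), 3))  # instantaneous drain rate at t = 25 s
print(round(total_drained, 1))            # roughly energy[0] - energy[-1]
```

    A design note: with a quadratic fit, the drain rate is linear in time, so a steadily accelerating battery drain is captured with only three coefficients per cluster.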

  4. An energy-efficient cluster-based vehicle detection on road network using intention numeration method.

    Science.gov (United States)

    Devasenapathy, Deepa; Kannan, Kathiravan

    2015-01-01

    The traffic in the road network is increasing at an ever greater rate. Good knowledge of network traffic can minimize congestion using information pertaining to the road network obtained with the aid of communal callers, pavement detectors, and so on. These methods, however, generate only low-level information about users of the road network. Although the existing schemes obtain urban traffic information, they fail to calculate the energy drain rate of nodes and to strike a balance between the overhead and the quality of the routing protocol, which poses a great challenge. Thus, an energy-efficient cluster-based vehicle detection in road network using the intention numeration method (CVDRN-IN) is developed. Initially, sensor nodes that detect a vehicle are grouped into separate clusters. Further, we approximate the strength of the node drain rate for a cluster using a polynomial regression function. In addition, the total node energy is estimated by taking the integral over the area. Finally, enhanced data aggregation is performed to reduce the amount of data transmission using a digital signature tree. The experimental performance is evaluated with the Dodgers loop sensor data set from the UCI repository, and the method outperforms existing work on energy consumption, clustering efficiency, and node drain rate.

  5. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Directory of Open Access Journals (Sweden)

    Ruohua Zhou

    2009-01-01

    Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
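    The peak-picking stage mentioned above is the simplest part of the pipeline, and a minimal sketch is easy to give. The spectrum below is a synthetic stand-in, not an RTFI pitch energy spectrum, and the threshold is an assumed parameter.

```python
import numpy as np

# Toy pitch energy spectrum: the RTFI-based spectrum of the paper is not
# reproduced here; this stand-in simply has strong peaks at bins 10 and 25.
spectrum = np.zeros(40)
spectrum[10], spectrum[25] = 1.0, 0.8
spectrum += 0.05 * np.cos(np.arange(40))  # small background ripple

def pick_peaks(s, threshold=0.5):
    """Return indices of local maxima whose energy exceeds a threshold."""
    return [i for i in range(1, len(s) - 1)
            if s[i] > s[i - 1] and s[i] > s[i + 1] and s[i] >= threshold]

print(pick_peaks(spectrum))  # → [10, 25]
```

    In the paper's pipeline these preliminary picks would then be pruned using spectral irregularity and instrument harmonic-structure knowledge.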

  6. Optimisation of key performance measures in air cargo demand management

    Directory of Open Access Journals (Sweden)

    Alexander May

    2014-04-01

    Full Text Available This article sought to facilitate the optimisation of key performance measures utilised for demand management in air cargo operations. The focus was on the Revenue Management team at Virgin Atlantic Cargo and a fuzzy group decision-making method was used. Utilising intelligent fuzzy multi-criteria methods, the authors generated a ranking order of ten key outcome-based performance indicators for Virgin Atlantic air cargo Revenue Management. The result of this industry-driven study showed that for Air Cargo Revenue Management, ‘Network Optimisation’ represents a critical outcome-based performance indicator. This collaborative study contributes to existing logistics management literature, especially in the area of Revenue Management, and it seeks to enhance Revenue Management practice. It also provides a platform for Air Cargo operators seeking to improve reliability values for their key performance indicators as a means of enhancing operational monitoring power.

  7. An Efficient Parallel Multi-Scale Segmentation Method for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Haiyan Gu

    2018-04-01

    Full Text Available Remote sensing (RS) image segmentation is an essential step in geographic object-based image analysis (GEOBIA) to ultimately derive “meaningful objects”. While many segmentation methods exist, most of them are not efficient for large data sets. Thus, the goal of this research is to develop an efficient parallel multi-scale segmentation method for RS imagery by combining graph theory and the fractal net evolution approach (FNEA). Specifically, a minimum spanning tree (MST) algorithm in graph theory is proposed to be combined with a minimum heterogeneity rule (MHR) algorithm that is used in FNEA. The MST algorithm is used for the initial segmentation while the MHR algorithm is used for object merging. An efficient implementation of the segmentation strategy is presented using data partition and the “reverse searching-forward processing” chain based on message passing interface (MPI) parallel technology. Segmentation results of the proposed method using images from multiple sensors (airborne, SPECIM AISA EAGLE II, WorldView-2, RADARSAT-2) and different selected landscapes (residential/industrial, residential/agriculture) covering four test sites demonstrated its accuracy and speed. We conclude that the proposed method is applicable and efficient for the segmentation of a variety of RS imagery (airborne optical, satellite optical, SAR, hyperspectral), while the accuracy is comparable with that of the FNEA method.
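    The MST-based initial segmentation can be sketched in miniature: build a graph over neighbouring pixels weighted by intensity difference, run Kruskal's algorithm, and refuse edges heavier than a merge threshold so that the resulting forest's trees are the initial segments. The 1-D "image" and the threshold below are illustrative assumptions, not the paper's parallel implementation.

```python
import numpy as np

# Toy 1-D "image" with two flat regions separated by a sharp jump.
pixels = np.array([10.0, 11.0, 10.5, 50.0, 51.0, 49.5])
edges = sorted((abs(pixels[i + 1] - pixels[i]), i, i + 1)
               for i in range(len(pixels) - 1))

parent = list(range(len(pixels)))
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

# Kruskal's MST construction, but edges above the merge threshold are
# rejected: the surviving forest defines the initial segments.
threshold = 5.0
for w, a, b in edges:
    ra, rb = find(a), find(b)
    if ra != rb and w <= threshold:
        parent[ra] = rb

labels = [find(i) for i in range(len(pixels))]
segments = len(set(labels))
print(segments)  # → 2, the two flat regions
```

    In the full method, the object-merging stage (MHR) would then operate on these initial segments rather than on raw pixels.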

  8. Efficient decomposition and linearization methods for the stochastic transportation problem

    International Nuclear Information System (INIS)

    Holmberg, K.

    1993-01-01

    The stochastic transportation problem can be formulated as a convex transportation problem with nonlinear objective function and linear constraints. We compare several different methods based on decomposition techniques and linearization techniques for this problem, trying to find the most efficient method or combination of methods. We discuss and test a separable programming approach, the Frank-Wolfe method with and without modifications, the new technique of mean value cross decomposition and the better known Lagrangian relaxation with subgradient optimization, as well as combinations of these approaches. Computational tests are presented, indicating that some new combination methods are quite efficient for large scale problems. (authors) (27 refs.)
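    The Frank-Wolfe method named above can be sketched in a few lines: each iteration linearises the convex objective at the current point, solves the resulting linear problem over the feasible polytope, and steps towards that solution. For brevity the feasible set here is the probability simplex rather than a transportation polytope, and the quadratic objective is an illustrative assumption.

```python
import numpy as np

# Frank-Wolfe (conditional gradient) sketch. Over the probability simplex the
# linearised subproblem is solved by picking the single best vertex; for a
# transportation polytope one would instead solve a linear transportation
# problem at each iteration.
def frank_wolfe(grad, x0, steps=2000):
    x = x0.copy()
    for k in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[int(np.argmin(g))] = 1.0   # vertex minimising the linearisation
        gamma = 2.0 / (k + 2.0)      # classical step-size rule
        x = (1.0 - gamma) * x + gamma * s
    return x

# Illustrative convex objective f(x) = ||x - t||^2 with a target t inside
# the simplex (not the stochastic transportation objective).
target = np.array([0.2, 0.5, 0.3])
x = frank_wolfe(lambda x: 2.0 * (x - target), np.ones(3) / 3.0)
print(np.round(x, 2))  # close to the target distribution
```

    Because every iterate is a convex combination of simplex vertices, feasibility is maintained exactly, which is the property that makes the method attractive for linearly constrained convex problems.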

  9. High-efficiency power transfer for silicon-based photonic devices

    Science.gov (United States)

    Son, Gyeongho; Yu, Kyoungsik

    2018-02-01

    We demonstrate an efficient coupling of guided light of 1550 nm from a standard single-mode optical fiber to a silicon waveguide using the finite-difference time-domain method and propose a fabrication method of tapered optical fibers for efficient power transfer to silicon-based photonic integrated circuits. Adiabatically-varying fiber core diameters with a small tapering angle can be obtained using the tube etching method with hydrofluoric acid and standard single-mode fibers covered by plastic jackets. The optical power transmission of the fundamental HE11 and TE-like modes between the fiber tapers and the inversely-tapered silicon waveguides was calculated with the finite-difference time-domain method to be more than 99% at a wavelength of 1550 nm. The proposed method for adiabatic fiber tapering can be applied in quantum optics, silicon-based photonic integrated circuits, and nanophotonics. Furthermore, efficient coupling within the telecommunication C-band is a promising approach for quantum networks in the future.

  10. Contribution à l'optimisation de la conduite des procédés alimentaires

    OpenAIRE

    Olmos-Perez , Alejandra

    2003-01-01

    The main objective of this work is the development of a methodology to calculate the optimal operating conditions applicable in food process control. In the first part of this work, we developed an optimization strategy in two stages. Firstly, the optimisation problem is constructed. Afterwards, a feasible optimisation method is chosen to solve the problem. This choice is made through a decision diagram, which proposes a deterministic (sequential quadratic programming, SQP), a stochastic ...

  11. Optimisation of a parallel ocean general circulation model

    Science.gov (United States)

    Beare, M. I.; Stevens, D. P.

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  12. Some Current Problems in Optimisation of Radiation Protection System

    International Nuclear Information System (INIS)

    Franic, Z.; Prlic, I.

    2001-01-01

    Full text: The current system of radiation protection is generally based on recommendations promulgated in the International Commission on Radiological Protection (ICRP) publication 60. These principles and recommendations were subsequently adopted by the International Atomic Energy Agency (IAEA) in International Basic Safety Standards for Protection against Ionising Radiation and for the Safety of Radiation Sources (BSS). However, in recent years certain problems have arisen, such as the application of risk factors at low doses, the use and interpretation of a collective dose, the concept of dose commitment, the optimisation of all types of occupational exposure and practices, the implementation of the ALARA approach in common occupational as well as in quite complex situations, etc. This paper presents some of the issues that have to be addressed in the development of the new ICRP Recommendations, which are planned to be developed in the next four or five years. As the new radiation protection philosophy shifts from society-based control of stochastic risks to an individual-based policy, it will consequently require a modified approach to the optimisation process and probably the introduction of some new dosimetric quantities. (author)

  13. IT support of energy-sensitive product development. Energy-efficient product and process innovations in production engineering. Virtual product development for energy-efficient products and processes; IT-Unterstuetzung zur energiesensitiven Produktentwicklung. Energieeffiziente Produkt- und Prozessinnovationen in der Produktionstechnik. Handlungsfeld virtuelle Produktentwicklung fuer energieeffiziente Produkte und Prozesse (PE)

    Energy Technology Data Exchange (ETDEWEB)

    Reichel, Thomas; Ruenger, Gudula; Steger, Daniel; Xu, Haibin

    2010-07-07

    The development of low-cost, energy-saving and resource-saving products is increasingly important. The calculation of life cycle costs is an important basis for this. For this, it is necessary to extract empirical, decision-relevant data from IT systems of product development (e.g. product data management systems) and operation (e.g. enterprise resource planning systems), and to give the planner appropriate methods for data aggregation. Life cycle data are particularly important for optimising energy efficiency, which may be achieved either by enhanced productivity at constant energy consumption or by reduced energy consumption at constant productivity. The report presents an IT view of the product development process. First, modern methods of product development are analysed including IT support and IT systems. Requirements on IT systems are formulated which enable energy efficiency assessment and optimisation in all phases of product development on the basis of the IT systems used. IT systems for energy-sensitive product development will support the construction engineer in the development of energy-efficient products. For this, the functionalities of existing PDM systems must be enhanced by methods of analysis, synthesis and energy efficiency assessment. Finally, it is shown how the methods for analyzing energy-relevant data can be integrated into the workflow.

  14. The open-pit truck dispatching method based on the completion of production target and the truck flow saturation

    Energy Technology Data Exchange (ETDEWEB)

    Xing, J.; Sun, X. [Northeastern University, Shenyang (China)

    2007-05-15

    To address current problems in the 'modular dispatch' dynamic programming system widely used in open-pit truck real-time dispatching, two concepts, completion of production targets and truck-flow saturation, were proposed. Using truck flow programming and taking into account stochastic factors and transportation distance, truck real-time dispatching was optimised. The method is applicable both to matched and mismatched shovel-truck configurations and to the dispatching of empty and loaded trucks. In an open-pit mine the production efficiency could be increased by between 8% and 18%. 6 refs.

  15. Starch/polyester films: simultaneous optimisation of the properties for the production of biodegradable plastic bags

    OpenAIRE

    Olivato, J. B.; Grossmann, M. V. E.; Bilck, A. P.; Yamashita, F.; Oliveira, L. M.

    2013-01-01

    Blends of starch/polyester have been of great interest in the development of biodegradable packaging. A method based on multiple responses optimisation (Desirability) was used to evaluate the properties of tensile strength, perforation force, elongation and seal strength of cassava starch/poly(butylene adipate-co-terephthalate) (PBAT) blown films produced via a one-step reactive extrusion using tartaric acid (TA) as a compatibiliser. Maximum results for all the properties were set as more des...

  16. Comparing and Optimising Parallel Haskell Implementations for Multicore Machines

    DEFF Research Database (Denmark)

    Berthold, Jost; Marlow, Simon; Hammond, Kevin

    2009-01-01

    In this paper, we investigate the differences and tradeoffs imposed by two parallel Haskell dialects running on multicore machines. GpH and Eden are both constructed using the highly-optimising sequential GHC compiler, and share thread scheduling, and other elements, from a common code base. The ...

  17. Measurement of energy efficiency based on economic foundations

    International Nuclear Information System (INIS)

    Filippini, Massimo; Hunt, Lester C.

    2015-01-01

    Energy efficiency policy is seen as a very important activity by almost all policy makers. In practical energy policy analysis, the typical indicator used as a proxy for energy efficiency is energy intensity. However, this simple indicator is not necessarily an accurate measure, given that changes in energy intensity are a function of changes in several factors as well as ‘true’ energy efficiency; hence, it is difficult to draw conclusions for energy policy based upon simple energy intensity measures. Related to this, some published academic papers over the last few years have attempted to use empirical methods to measure the efficient use of energy based on the economic theory of production. However, these studies do not generally provide a systematic discussion of either the theoretical basis or the possible parametric empirical approaches that are available for estimating the level of energy efficiency. The objective of this paper, therefore, is to sketch out and explain from an economic perspective the theoretical framework as well as the empirical methods for measuring the level of energy efficiency. Additionally, in the second part of the paper, some of the empirical studies that have attempted to measure energy efficiency using such an economics approach are summarized and discussed.

  18. Modified cuckoo search: A new gradient free optimisation algorithm

    International Nuclear Information System (INIS)

    Walton, S.; Hassan, O.; Morgan, K.; Brown, M.R.

    2011-01-01

    Highlights: • Modified cuckoo search (MCS) is a new gradient free optimisation algorithm. • MCS shows a high convergence rate, able to outperform other optimisers. • MCS is particularly strong on high-dimensional objective functions. • MCS performs well when applied to engineering problems. - Abstract: A new robust optimisation algorithm, which can be regarded as a modification of the recently developed cuckoo search, is presented. The modification involves the addition of information exchange between the top eggs, or the best solutions. Standard optimisation benchmarking functions are used to test the effects of these modifications and it is demonstrated that, in most cases, the modified cuckoo search performs as well as, or better than, the standard cuckoo search, a particle swarm optimiser, and a differential evolution strategy. In particular, the modified cuckoo search shows a high convergence rate to the true global minimum even at high numbers of dimensions.
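    The algorithm described above can be sketched as Lévy-flight cuckoo search plus an information-exchange step between the best solutions. The sketch below is a loose reading of the abstract, not the authors' code: the step-size schedule, nest count, abandonment fraction and benchmark are all assumed parameters.

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x ** 2))  # benchmark objective, global minimum 0 at the origin

def levy_step(dim, beta=1.5):
    # Mantegna's algorithm for Lévy-distributed step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def modified_cuckoo_search(f, dim=5, n_nests=20, iters=300, pa=0.25):
    nests = rng.uniform(-5.0, 5.0, (n_nests, dim))
    fit = np.array([f(x) for x in nests])
    for t in range(iters):
        # Lévy-flight phase with a shrinking step size; keep only improvements.
        step = 1.0 / np.sqrt(t + 1.0)
        for i in range(n_nests):
            cand = nests[i] + step * levy_step(dim)
            fc = f(cand)
            if fc < fit[i]:
                nests[i], fit[i] = cand, fc
        # MCS-style modification: information exchange between the top eggs.
        top = np.argsort(fit)[: n_nests // 4]
        a, b = rng.choice(top, size=2, replace=False)
        cand = nests[a] + rng.random() * (nests[b] - nests[a])
        fc = f(cand)
        worst = int(np.argmax(fit))
        if fc < fit[worst]:
            nests[worst], fit[worst] = cand, fc
        # Abandon a fraction pa of the worst nests (the best are never touched).
        for i in np.argsort(fit)[-int(pa * n_nests):]:
            nests[i] = rng.uniform(-5.0, 5.0, dim)
            fit[i] = f(nests[i])
    best = int(np.argmin(fit))
    return nests[best], float(fit[best])

x_best, f_best = modified_cuckoo_search(sphere)
print(f_best)  # small value near the global minimum of 0
```

    The exchange step is what distinguishes MCS from plain cuckoo search: moving one good solution towards another shares information between eggs, which the paper credits for the improved convergence rate.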

  19. Optimisation of high-performance liquid chromatography with diode array detection using an automatic peak tracking procedure based on augmented iterative target transformation factor analysis

    NARCIS (Netherlands)

    van Zomeren, Paul; Hoogvorst, A.; Coenegracht, P.M J; de Jong, G.J.

    2004-01-01

    An automated method for the optimisation of high-performance liquid chromatography is developed. First of all, the sample of interest is analysed with various eluent compositions. All obtained data are combined into one augmented data matrix. Subsequently, augmented iterative target transformation

  20. Removal of Cr(VI) from aqueous solutions by a bacterial biofilm supported on zeolite: optimisation of the operational conditions and Scale-Up of the bioreactor

    Energy Technology Data Exchange (ETDEWEB)

    Pazos, M. [IBB - Instituto de Biotecnologia e Bioengenharia, Centro de Engenharia Biologica, Universidade do Minho, Braga (Portugal); Departamento de Ingenieria Quimica, Universidade de Vigo, Vigo (Spain); Branco, M.; Tavares, T. [IBB - Instituto de Biotecnologia e Bioengenharia, Centro de Engenharia Biologica, Universidade do Minho, Braga (Portugal); Neves, I.C. [Departamento de Quimica, Centro de Quimica, Universidade do Minho, Braga (Portugal); Sanroman, M.A. [Departamento de Ingenieria Quimica, Universidade de Vigo, Vigo (Spain)

    2010-12-15

    The aim of this study was to investigate the feasibility of a bioreactor system and its scale-up to remove Cr(VI) from solution. The bioreactor is based on an innovative process that combines bioreduction of Cr(VI) to Cr(III) by the bacterium Arthrobacter viscosus and Cr(III) sorption by a specific zeolite. Batch studies were conducted in a laboratory-scale bioreactor, taking into account different operating conditions. Several variables, such as biomass concentration, pH and zeolite pre-treatment, were evaluated to increase removal efficiency. The obtained results suggest that the Cr removal efficiency is improved when the initial biomass concentration is approximately 5 g L⁻¹ and the pH in the system is maintained at an acidic level. Under the optimised conditions, approximately 100% of the Cr(VI) was removed. The scale-up of the developed biofilm process operating under the optimised conditions was satisfactorily tested in a 150-L bioreactor. (Copyright 2010 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  1. Development of an optimised 1:1 physiotherapy intervention post first-time lumbar discectomy: a mixed-methods study

    Science.gov (United States)

    Rushton, A; White, L; Heap, A; Heneghan, N; Goodwin, P

    2016-01-01

    Objectives To develop an optimised 1:1 physiotherapy intervention that reflects best practice, with flexibility to tailor management to individual patients, thereby ensuring patient-centred practice. Design Mixed-methods combining evidence synthesis, expert review and focus groups. Setting Secondary care involving 5 UK specialist spinal centres. Participants A purposive panel of clinical experts from the 5 spinal centres, comprising spinal surgeons, inpatient and outpatient physiotherapists, provided expert review of the draft intervention. Purposive samples of patients (n=10) and physiotherapists (n=10) (inpatient/outpatient physiotherapists managing patients with lumbar discectomy) were invited to participate in the focus groups at 1 spinal centre. Methods A draft intervention developed from 2 systematic reviews; a survey of current practice and research related to stratified care was circulated to the panel of clinical experts. Lead physiotherapists collaborated with physiotherapy and surgeon colleagues to provide feedback that informed the intervention presented at 2 focus groups investigating acceptability to patients and physiotherapists. The focus groups were facilitated by an experienced facilitator, recorded in written and tape-recorded forms by an observer. Tape recordings were transcribed verbatim. Data analysis, conducted by 2 independent researchers, employed an iterative and constant comparative process of (1) initial descriptive coding to identify categories and subsequent themes, and (2) deeper, interpretive coding and thematic analysis enabling concepts to emerge and overarching pattern codes to be identified. Results The intervention reflected best available evidence and provided flexibility to ensure patient-centred care. The intervention comprised up to 8 sessions of 1:1 physiotherapy over 8 weeks, starting 4 weeks postsurgery. The intervention was acceptable to patients and physiotherapists. Conclusions A rigorous process informed an

  2. An investigation into the application of modern heuristic optimisation techniques to problems in power and processing utilities

    International Nuclear Information System (INIS)

    Dahal, Keshav Prasad

    2000-01-01

    The work contained in this thesis demonstrates that there is a significant requirement for the development and application of new optimisation techniques for solving industrial scheduling problems, in order to achieve a better schedule with significant economic and operational impact. It investigates how modern heuristic approaches, such as the genetic algorithm (GA), simulated annealing (SA), fuzzy logic and hybrids of these techniques, may be developed, designed and implemented appropriately for solving the short-term and long-term NP-hard scheduling problems that exist in electric power utilities and process facilities. GA and SA based methods are developed for generator maintenance scheduling using a novel integer encoding and appropriate GA and SA operators. Three hybrid approaches (an inoculated GA, a GA/SA and a GA with fuzzy logic) are proposed in order to improve the solution performance, and to take advantage of any flexibilities inherent in the problem. Five different GA-based approaches are investigated for solving the generation scheduling problem. Of those, a knowledge-based hybrid GA approach achieves better solutions in a shorter computational time. This approach integrates problem specific knowledge, heuristic dispatch calculation and linear programming within the GA-framework. The application of a GA-based methodology is proposed for the scheduling of storage tanks of a water treatment facility. The proposed approach is an integration of a GA and a heuristic rule-base. The GA string considers the tank allocation problem, and the heuristic approach solves the rate determination problems within the framework of the GA. For optimising the schedule of operations of a bulk handling port facility, a generic modelling tool is developed characterising the operational and maintenance activities of the facility. A GA-based approach is integrated with the simulation software for optimising the scheduling of operations of the facility. 
Each of these approaches is

  3. Design optimisation of a flywheel hybrid vehicle

    Energy Technology Data Exchange (ETDEWEB)

    Kok, D.B.

    1999-11-04

    This thesis describes the design optimisation of a flywheel hybrid vehicle with respect to fuel consumption and exhaust gas emissions. The driveline of this passenger car uses two power sources: a small spark ignition internal combustion engine with three-way catalyst, and a high-speed flywheel system for kinetic energy storage. A custom-made continuously variable transmission (CVT) with so-called i² control transports energy between these power sources and the vehicle wheels. The driveline includes auxiliary systems for hydraulic, vacuum and electric purposes. In this fully mechanical driveline, parasitic energy losses determine the vehicle's fuel saving potential to a large extent. Practicable energy loss models have been derived to quantify friction losses in bearings, gearwheels, the CVT, clutches and dynamic seals. In addition, the aerodynamic drag in the flywheel system and power consumption of auxiliaries are charted. With the energy loss models available, a calculation procedure is introduced to optimise the flywheel as a subsystem in which the rotor geometry, the safety containment, and the vacuum system are designed for minimum energy use within the context of automotive applications. A first prototype of the flywheel system was tested experimentally and subsequently redesigned to improve rotordynamics and safety aspects. Coast-down experiments with the improved version show that the energy losses have been lowered significantly. The use of a kinetic energy storage device enables the uncoupling of vehicle wheel power and engine power. Therefore, the engine can be smaller and it can be chosen to operate in its region of best efficiency in start-stop mode. On a test-rig, the measured engine fuel consumption was reduced by more than 30 percent when the engine is intermittently restarted with the aid of the flywheel system. Although the start-stop mode proves to be advantageous for fuel consumption, exhaust gas emissions increase temporarily

  4. Comprehensive optimisation of China’s energy prices, taxes and subsidy policies based on the dynamic computable general equilibrium model

    International Nuclear Information System (INIS)

    He, Y.X.; Liu, Y.Y.; Du, M.; Zhang, J.X.; Pang, Y.X.

    2015-01-01

    Highlights: • Energy policy is defined as a compilation of energy price, tax and subsidy policies. • The maximisation of total social benefit is the optimisation objective. • A more rational carbon tax ranges from 10 to 20 Yuan/ton under the current situation. • The optimal coefficient pricing is more conducive to maximising total social benefit. - Abstract: Under the condition of increasingly serious environmental pollution, rational energy policy plays an important role in the practical significance of energy conservation and emission reduction. This paper defines energy policies as the compilation of energy prices, taxes and subsidy policies. Moreover, it establishes the optimisation model of China’s energy policy based on the dynamic computable general equilibrium model, which maximises the total social benefit, in order to explore the comprehensive influences of a carbon tax, the sales pricing mechanism and the renewable energy fund policy. The results show that when the change rates of gross domestic product and consumer price index are ±2%, ±5% and the renewable energy supply structure ratio is 7%, the more reasonable carbon tax ranges from 10 to 20 Yuan/ton, and the optimal coefficient pricing mechanism is more conducive to the objective of maximising the total social benefit. From the perspective of optimising the overall energy policies, if the upper limit of change rate in consumer price index is 2.2%, the existing renewable energy fund should be improved

  5. Parameter Optimisation for the Behaviour of Elastic Models over Time

    DEFF Research Database (Denmark)

    Mosegaard, Jesper

    2004-01-01

    Optimisation of parameters for elastic models is essential for comparison or finding equivalent behaviour of elastic models when parameters cannot simply be transferred or converted. This is the case with a large range of commonly used elastic models. In this paper we present a general method tha...

  6. Chinese License Plates Recognition Method Based on A Robust and Efficient Feature Extraction and BPNN Algorithm

    Science.gov (United States)

    Zhang, Ming; Xie, Fei; Zhao, Jing; Sun, Rui; Zhang, Lei; Zhang, Yue

    2018-04-01

    The prosperity of license plate recognition technology has made a great contribution to the development of Intelligent Transport Systems (ITS). In this paper, a robust and efficient license plate recognition method is proposed, based on a combined feature extraction model and a BPNN (Back Propagation Neural Network) algorithm. Firstly, a method for detecting and segmenting the candidate license-plate region is developed. Secondly, a new feature extraction model is designed by combining three sets of features. Thirdly, a license plate classification and recognition method using the combined feature model and the BPNN algorithm is presented. Finally, the experimental results indicate that both license plate segmentation and recognition can be achieved effectively by the proposed algorithm. Compared with three traditional methods, the recognition accuracy of the proposed method has increased to 95.7% and the processing time has decreased to 51.4 ms.
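    The BPNN classification stage can be illustrated with a minimal back-propagation network. The sketch below trains a one-hidden-layer network on toy XOR data; the architecture, learning rate and data are illustrative assumptions standing in for the paper's character classifier, whose feature vectors are far larger.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data standing in for licence-plate character features: XOR inputs/labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 sigmoid units; sizes are illustrative choices.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1.0 - out)    # back-propagate the squared error
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()
print(pred)  # the four XOR labels, if training has converged
```

    The same forward/backward pattern scales directly to the larger input and output layers a real character classifier would use.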

  7. Model-Free Trajectory Optimisation for Unmanned Aircraft Serving as Data Ferries for Widespread Sensors

    Directory of Open Access Journals (Sweden)

    Ben Pearre

    2012-10-01

    Full Text Available Given multiple widespread stationary data sources such as ground-based sensors, an unmanned aircraft can fly over the sensors and gather the data via a wireless link. Performance criteria for such a network may incorporate costs such as trajectory length for the aircraft or the energy required by the sensors for radio transmission. Planning is hampered by the complex vehicle and communication dynamics and by uncertainty in the locations of sensors, so we develop a technique based on model-free learning. We present a stochastic optimisation method that allows the data-ferrying aircraft to optimise data collection trajectories through an unknown environment in situ, obviating the need for system identification. We compare two trajectory representations, one that learns near-optimal trajectories at low data requirements but that fails at high requirements, and one that gives up some performance in exchange for a data collection guarantee. With either encoding the ferry is able to learn significantly improved trajectories compared with alternative heuristics. To demonstrate the versatility of the model-free learning approach, we also learn a policy to minimise the radio transmission energy required by the sensor nodes, allowing prolonged network lifetime.

  8. Advances in the optimisation of apparel heating products: A numerical approach to study heat transport through a blanket with an embedded smart heating system

    International Nuclear Information System (INIS)

    Neves, S.F.; Couto, S.; Campos, J.B.L.M.; Mayor, T.S.

    2015-01-01

    The optimisation of the performance of products with smart/active functionalities (e.g. in protective clothing, home textile products, automotive seats, etc.) is still a challenge for manufacturers and developers. The aim of this study was to optimise the thermal performance of a heating product by a numerical approach, analysing several opposing requirements and defining solutions for the identified limitations before the construction of the first prototype. A heat transfer model was developed to investigate the transport of heat from the skin to the environment, across a heating blanket with an embedded smart heating system. Several parameters of the textile material and of the heating system were studied in order to optimise the thermal performance of the heating blanket. Focus was put on the effects of the thickness and thermal conductivity of each layer, and on parameters associated with the heating elements, e.g. the position of the heating wires relative to the skin, the distance between heating wires, the applied heating power, and the temperature range for operation of the heating system. Furthermore, several configurations of the blanket (and corresponding heating powers) were analysed in order to minimise the heat loss from the body to the environment, and the temperature distribution along the skin. The results show that, to ensure an optimal compromise between the thermal performance of the product and the temperature oscillation along its surface, the distance between the wires should be small (no bigger than 50 mm), and each layer of the heating blanket should have a specific thermal resistance, based on the expected external conditions during use and the requirements of the heating system (i.e. requirements regarding energy consumption/efficiency and the capacity to effectively regulate body heat exchange with the surrounding environment). The heating system should operate in an ON/OFF mode based on the body's heating needs and within a temperature range specified based on
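
The ON/OFF regulation described above can be sketched as a simple hysteresis (bang-bang) controller driving a lumped thermal model; the temperature band, heating power and thermal coefficients below are illustrative assumptions, not the study's values:

```python
def make_controller(t_low=34.0, t_high=36.0):
    """ON/OFF controller with hysteresis: heat below t_low, stop above t_high."""
    state = {"on": False}
    def step(temp):
        if temp < t_low:
            state["on"] = True
        elif temp > t_high:
            state["on"] = False
        return state["on"]
    return step

# Toy lumped thermal model of the blanket (all coefficients assumed).
ctrl = make_controller()
temp, t_env = 20.0, 15.0
history = []
for _ in range(600):                       # one-second steps
    heating = ctrl(temp)
    temp += 0.05 * ((3.0 if heating else 0.0) - 0.02 * (temp - t_env))
    history.append(temp)
```

After a warm-up phase the temperature cycles within the hysteresis band; narrowing the band reduces the surface oscillation at the price of more frequent switching, which is the compromise the abstract describes.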

  9. 3D based integrated support concept for improving safety and cost-efficiency of nuclear decommissioning projects

    International Nuclear Information System (INIS)

    Szoeke, Istvan

    2016-01-01

    extensive rework during the decommissioning phase. 3D technologies have the potential to minimise knowledge loss during the transition to decommissioning, and support efficient reconstruction of design and other knowledge, supporting more optimised decommissioning strategies. Advanced 3D visualisation technologies are applied for planning the manipulation of large, heavy components in decommissioning projects. With the decreasing time and cost investment required for the application of 3D simulation and AR technology, an increasing number of experts are exploring the possibilities of using such methods for supporting on-site logistics (categorisation, transportation, and temporary storage) of contaminated and activated components during decommissioning. 3D radiological simulation and visualisation technology provides a new, more efficient way of explaining complex radiological conditions, work plans, and compliance with regulatory requirements. Advanced support systems based on 3D technologies have successfully been applied in the decommissioning of a number of nuclear installations (e.g. Fugen NPP, Chernobyl NPP, Leningrad NPP, Andreeva Bay branch of the Northwest Center for Radioactive Waste Management in NW Russia) to increase safety and optimise costs. For further details the reader is referred to the literature

  10. An Efficient Simulation Method for Rare Events

    KAUST Repository

    Rached, Nadhir B.

    2015-01-07

    Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Closed-form expressions for the sum distribution do not generally exist, which has led to an increasing interest in simulation approaches. A crude Monte Carlo (MC) simulation is the standard technique for the estimation of this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. Variance reduction techniques are alternative approaches that can improve the computational efficiency of naive MC simulations. We propose an Importance Sampling (IS) simulation technique based on the well-known hazard rate twisting approach, that presents the advantage of being asymptotically optimal for any arbitrary RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. It is worth observing that this interesting feature is rarely satisfied by variance reduction algorithms whose performances were only proven under some restrictive assumptions. It comes along with a good efficiency, illustrated by some selected simulation results comparing the performance of our method with that of an algorithm based on a conditional MC technique.
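
The twisting idea can be illustrated on the textbook case of a sum of i.i.d. exponential variables, where hazard rate twisting reduces to exponential twisting and the twisted distribution remains exponential. This is a simplified sketch of the general construction; here the twisting parameter is simply chosen so that the mean of the twisted sum equals the threshold:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def crude_mc(n, lam, gamma, N):
    # Naive Monte Carlo estimate of P(X1 + ... + Xn > gamma), Xi ~ Exp(lam).
    s = rng.exponential(1.0 / lam, size=(N, n)).sum(axis=1)
    return float((s > gamma).mean())

def is_twisted(n, lam, gamma, N):
    # Importance sampling with exponential twisting: sample from a lighter
    # exponential whose mean sum equals the threshold (requires gamma > n/lam).
    theta = lam - n / gamma
    lam_t = lam - theta
    x = rng.exponential(1.0 / lam_t, size=(N, n))
    s = x.sum(axis=1)
    weights = (lam / lam_t) ** n * np.exp(-theta * s)   # likelihood ratios
    return float((weights * (s > gamma)).mean())

def exact(n, lam, gamma):
    # Reference value: the sum of n i.i.d. Exp(lam) variables is Gamma(n, lam).
    return math.exp(-lam * gamma) * sum(
        (lam * gamma) ** k / math.factorial(k) for k in range(n))
```

With `n = 5`, `lam = 1` and `gamma = 20` the target probability is about 1.7e-5, so a crude MC run of modest size sees only a handful of threshold crossings, while the twisted sampler concentrates its samples near the rare event and estimates the same probability accurately.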

  11. An Efficient Simulation Method for Rare Events

    KAUST Repository

    Rached, Nadhir B.; Benkhelifa, Fatma; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Closed-form expressions for the sum distribution do not generally exist, which has led to an increasing interest in simulation approaches. A crude Monte Carlo (MC) simulation is the standard technique for the estimation of this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. Variance reduction techniques are alternative approaches that can improve the computational efficiency of naive MC simulations. We propose an Importance Sampling (IS) simulation technique based on the well-known hazard rate twisting approach, that presents the advantage of being asymptotically optimal for any arbitrary RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. It is worth observing that this interesting feature is rarely satisfied by variance reduction algorithms whose performances were only proven under some restrictive assumptions. It comes along with a good efficiency, illustrated by some selected simulation results comparing the performance of our method with that of an algorithm based on a conditional MC technique.

  12. Development and optimisation by means of sensory analysis of new beverages based on different fruit juices and sherry wine vinegar.

    Science.gov (United States)

    Cejudo-Bastante, María Jesús; Rodríguez Dodero, M Carmen; Durán Guerrero, Enrique; Castro Mejías, Remedios; Natera Marín, Ramón; García Barroso, Carmelo

    2013-03-15

    Despite the long history of sherry wine vinegar, new alternatives of consumption are being developed, with the aim of diversifying its market. Several new acetic-based fruit juices have been developed by optimising the amount of sherry wine vinegar added to different fruit juices: apple, peach, orange and pineapple. Once the concentrations of wine vinegar were optimised by an expert panel, the aforementioned new acetic fruit juices were tasted by 86 consumers. Three different aspects were taken into account: habits of consumption of vinegar and fruit juices, gender and age. Based on the sensory analysis, 50 g kg(-1) of wine vinegar was the optimal and preferred amount of wine vinegar added to the apple, orange and peach juices, whereas 10 g kg(-1) was the favourite for the pineapple fruit. Based on the olfactory and gustatory impression, and 'purchase intent', the acetic beverages made from peach and pineapple juices were the most appreciated, followed by apple juice, while those obtained from orange juice were the least preferred by consumers. New opportunities for diversification of the oenological market could be possible as a result of the development of this type of new product which can be easily developed by any vinegar or fruit juice maker company. © 2012 Society of Chemical Industry.

  13. Cooled solar PV panels for output energy efficiency optimisation

    International Nuclear Information System (INIS)

    Peng, Zhijun; Herfatmanesh, Mohammad R.; Liu, Yiming

    2017-01-01

    Highlights: • Effects of cooling on solar PV performance have been experimentally investigated. • As a solar panel is cooled down, the electric output can increase significantly. • A cooled solar PV system has been proposed for residential application. • Life cycle assessment suggests the cost payback time of cooled PV can be reduced. - Abstract: As working temperature plays a critical role in a solar PV’s electrical output and efficiency, it is necessary to examine possible ways of maintaining an appropriate temperature for solar panels. This research aims to investigate the practical effects of solar PV surface temperature on output performance, in particular efficiency. Experiments were carried out under different radiation conditions to explore the variation of the output voltage, current, output power and efficiency. A cooling test was then conducted to determine how much efficiency improvement cooling can achieve. As the test results show that the efficiency of the solar PV can increase by up to 47% under cooled conditions, a cooling system is proposed as a possible system setup for residential solar PV applications. The system performance and life cycle assessment suggest that the annual PV electric output efficiency can increase by up to 35%, and the annual total system energy efficiency, including electric output and hot-water energy output, can increase by up to 107%. The cost payback time can be reduced to 12.1 years, compared with the 15-year baseline of a similar system without a cooling sub-system.

  14. Efficient learning strategy of Chinese characters based on network approach.

    Directory of Open Access Journals (Sweden)

    Xiaoyong Yan

    Full Text Available We develop an efficient learning strategy for Chinese characters based on the network of hierarchical structural relations between Chinese characters. A more efficient strategy is one that learns the same number of useful Chinese characters with less effort or time. We construct a node-weighted network of Chinese characters, where character usage frequencies are used as node weights. Using this hierarchical node-weighted network, we propose a new learning method, the distributed node weight (DNW) strategy, which is based on a new measure of node importance that considers both the weight of a node and its location in the network's hierarchical structure. Chinese character learning strategies, particularly their learning order, are analyzed as dynamical processes over the network. We compare the efficiency of three theoretical learning methods and two commonly used methods from mainstream Chinese textbooks, one for Chinese elementary school students and the other for students learning Chinese as a second language. We find that the DNW method significantly outperforms the others, implying that the efficiency of the current learning methods in major textbooks can be greatly improved.
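
A toy version of such a node-weighted ordering can be sketched as follows. The character network, the frequencies, and the particular importance measure are all invented for illustration; they follow the spirit of the DNW idea (usage-frequency weight combined with position in the component hierarchy), not the paper's exact definition:

```python
# Toy character network (all data invented): each compound lists its parts.
components = {
    "木": [], "口": [], "日": [], "月": [],
    "林": ["木", "木"], "明": ["日", "月"], "森": ["木", "林"],
}
freq = {"木": 5, "口": 9, "日": 8, "月": 6, "林": 2, "明": 4, "森": 1}

def importance(ch):
    # A node's own usage frequency plus that of every character
    # that directly or indirectly builds on it.
    total, stack, seen = freq[ch], [ch], set()
    while stack:
        cur = stack.pop()
        for comp, parts in components.items():
            if cur in parts and comp not in seen:
                seen.add(comp)
                total += freq[comp]
                stack.append(comp)
    return total

order, learned = [], set()
while len(order) < len(components):
    # Only characters whose components are already known are candidates.
    ready = [c for c in components
             if c not in learned and all(p in learned for p in components[c])]
    nxt = max(ready, key=importance)
    order.append(nxt)
    learned.add(nxt)
```

The greedy loop always teaches components before the compounds that contain them, and among the ready characters it prefers those that are frequent themselves or unlock frequent compounds, so "日" and "月" come early because they open the way to "明".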

  15. Optimising performance in steady state for a supermarket refrigeration system

    DEFF Research Database (Denmark)

    Green, Torben; Kinnaert, Michel; Razavi-Far, Roozbeh

    2012-01-01

    Using a supermarket refrigeration system as an illustrative example, the paper postulates that by appropriately utilising knowledge of plant operation, the plant wide performance can be optimised based on a small set of variables. Focusing on steady state operations, the total system performance...

  16. An efficient method for sampling the essential subspace of proteins

    NARCIS (Netherlands)

    Amadei, A; Linssen, A.B M; de Groot, B.L.; van Aalten, D.M.F.; Berendsen, H.J.C.

    A method is presented for a more efficient sampling of the configurational space of proteins as compared to conventional sampling techniques such as molecular dynamics. The method is based on the large conformational changes in proteins revealed by the "essential dynamics" analysis. A form of

  17. Method for determining efficiency in a liquid scintillation system

    International Nuclear Information System (INIS)

    Laney, B.H.

    1975-01-01

    In a liquid scintillation system utilizing plural photomultiplier means, a method for determining the efficiency of coincident pulse detection. Various incremental counting efficiency levels are associated with asymptotic functions in a two-dimensional matrix in which the abscissa and ordinate correspond to the pulse heights of each of a pair of coincident pulses from different photomultiplier means. An efficiency-determining point is located in the matrix based on the sum of the pulse heights of the coincident pulses as well as on the amplitude of the smaller of the coincident pulses. The single counting-efficiency-determining point is recorded as the level of efficiency at which the photomultiplier means detect scintillations that generate coincident pulses having pulse heights equal to those recorded. (Patent Office Record)
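
The lookup described above can be sketched as a table indexed by the pulse-height sum and the smaller of the two coincident pulses; all bin edges and efficiency values below are invented calibration data, not figures from the patent:

```python
import numpy as np

# Invented calibration: efficiency indexed by the coincident pulse-height
# sum (rows) and the smaller of the two pulses (columns).
sum_edges = np.array([0.0, 10.0, 20.0, 30.0])
min_edges = np.array([0.0, 2.0, 5.0, 10.0])
eff_table = np.array([[0.40, 0.55, 0.65],
                      [0.60, 0.75, 0.85],
                      [0.70, 0.85, 0.95]])

def counting_efficiency(p1, p2):
    # Locate the single efficiency-determining point from the pulse-height
    # sum and the amplitude of the smaller coincident pulse.
    i = min(np.searchsorted(sum_edges, p1 + p2, side="right") - 1,
            len(eff_table) - 1)
    j = min(np.searchsorted(min_edges, min(p1, p2), side="right") - 1,
            eff_table.shape[1] - 1)
    return float(eff_table[i, j])
```

A pair of pulses with a large sum and a large smaller pulse lands in the high-efficiency corner of the table, mirroring the patent's observation that both quantities are needed to fix the detection efficiency.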

  18. Is there evidence of optimisation for carbon efficiency in plant proteomes?

    KAUST Repository

    Jankovic, Boris R.

    2011-07-25

    Flowering plants, angiosperms, can be divided into two major clades, monocots and dicots, and while differences in amino acid composition in different species from the two clades have been reported, a systematic analysis of amino acid content and distribution remains outstanding. Here, we show that monocot and dicot proteins have developed distinct amino acid content. In Arabidopsis thaliana and poplar, as in the ancestral moss Physcomitrella patens, the average mass per amino acid appears to be independent of protein length, while in the monocots rice, maize and sorghum, shorter proteins tend to be made of lighter amino acids. An examination of the elemental content of these proteomes reveals that the difference between monocot and dicot proteins can be largely attributed to their different carbon signatures. In monocots, the shorter proteins, which comprise the majority of all proteins, are made of amino acids with less carbon, while the nitrogen content is unchanged in both monocots and dicots. We hypothesise that this signature could be the result of carbon use and energy optimisation in fast-growing annual Poaceae (grasses). © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.

  19. Automatic optimisation of gamma dose rate sensor networks: The DETECT Optimisation Tool

    DEFF Research Database (Denmark)

    Helle, K.B.; Müller, T.O.; Astrup, Poul

    2014-01-01

    Fast delivery of comprehensive information on the radiological situation is essential for decision-making in nuclear emergencies. Most national radiological agencies in Europe employ gamma dose rate sensor networks to monitor radioactive pollution of the atmosphere. Sensor locations were often...... The DETECT Optimisation Tool (DOT) was developed as part of the EU FP 7 project DETECT. It evaluates the gamma dose rates that a proposed set of sensors might measure in an emergency and uses this information to optimise the sensor locations. The gamma dose rates are taken from a comprehensive library of simulations of atmospheric radioactive plumes from 64...... source locations. These simulations cover the whole European Union, so the DOT allows evaluation and optimisation of sensor networks for all EU countries, as well as evaluation of fencing sensors around possible sources. Users can choose from seven cost functions to evaluate the capability of a given......

  20. Optimisation and application of electrochemical techniques for high temperature aqueous environments

    International Nuclear Information System (INIS)

    Bojinov, M.; Laitinen, B. T.; Maekelae, K.; Maekelae, M; Saario, T.; Sirkiae, P.; Beverskog, B.

    1999-01-01

    Different localised corrosion phenomena may pose a serious hazard to construction materials employed in high-temperature aqueous environments. The operating temperatures in electric power production have been increased to improve plant efficiencies. This has led to the demand for new, further improved engineering materials. The applicability of these materials in the operating power plant environments largely depends on the existence of a protective surface oxide film. Extensive rupture of these films can lead to increased reaction of the underlying metal with the environment. Therefore, by modifying the composition of the base metal, the properties of the surface oxides can be optimised to withstand the new operational environments of interest. To mitigate the risk of detrimental corrosion phenomena in structural materials, mechanistic understanding of the contributing processes is required. This calls for more experimental information and necessitates the development of new experimental techniques and procedures capable of operating in situ in high-temperature aqueous environments. The low conductivity of the aqueous medium complicates electrochemical studies on construction and fuel cladding materials carried out in simulated LWR coolant conditions or in actual plant conditions, especially in typical BWR environments. To obtain useful information on reactions and transport processes occurring on and within oxide films on different materials, an electrochemical arrangement based on a thin-layer electrode (TLEC) concept was developed. In this presentation the main results are shown from work carried out to further optimise the geometry of the TLEC arrangement and to propose recommendations for how to use this arrangement in different low-conductivity environments. Results will also be given from the test in which the TLEC arrangement was equipped with a detector electrode. The detector electrode allows detecting soluble products and reaction intermediates at

  1. A model-based combinatorial optimisation approach for energy-efficient processing of microalgae

    NARCIS (Netherlands)

    Slegers, P.M.; Koetzier, B.J.; Fasaei, F.; Wijffels, R.H.; Straten, van G.; Boxtel, van A.J.B.

    2014-01-01

    The analyses of algae biorefinery performance are commonly based on fixed performance data for each processing step. In this work, we demonstrate a model-based combinatorial approach to derive the design-specific upstream energy consumption and biodiesel yield in the production of biodiesel from

  2. An Efficient Upscaling Procedure Based on Stokes-Brinkman Model and Discrete Fracture Network Method for Naturally Fractured Carbonate Karst Reservoirs

    KAUST Repository

    Qin, Guan; Bi, Linfeng; Popov, Peter; Efendiev, Yalchin; Espedal, Magne

    2010-01-01

    , fractures and their interconnectivities in coarse-scale simulation models. In this paper, we present a procedure based on our previously proposed Stokes-Brinkman model (SPE 125593) and the discrete fracture network method for accurate and efficient upscaling

  3. Economical Efficiency of Combined Cooling Heating and Power Systems Based on an Enthalpy Method

    Directory of Open Access Journals (Sweden)

    Yan Xu

    2017-11-01

    Full Text Available As the living standards of Chinese people have been improving, the energy demand for cooling and heating, mainly in the form of electricity, has also expanded. Since an integrated cooling, heating and power supply system (CCHP) will serve this demand better, the government is now attaching more importance to the application of CCHP energy systems. Based on the characteristics of the combined cooling, heating and power supply system and the method of levelized cost of energy, two calculation methods for evaluating the economic efficiency of the system are employed, with the energy production of the system treated from the perspective of exergy. According to the first method, fuel costs account for about 75% of the total cost. In the second method, the profits from heating and cooling are converted to fuel costs, resulting in a significant reduction of fuel costs, which then account for 60% of the total cost. The heating and cooling parameters of the gas turbine exhaust, heat recovery boiler and lithium-bromide heat-cooler, and the commercial tariffs of provincial capitals, were then set as benchmarks based on geographic differences among provinces, and the economic efficiency of combined cooling, heating and power systems in each province was evaluated. The results show that the combined cooling, heating and power system is economical in the developed areas of central and eastern China, especially in Hubei and Zhejiang provinces, while in other regions it is not. A sensitivity analysis was also performed on the related influencing factors of fuel cost, demand intensity for heating and cooling energy, and bank loan ratio. The analysis shows that the levelized cost of energy of combined cooling, heating and power systems is very sensitive to exergy consumption and fuel costs. When the consumption of heating and cooling energy increases, the unit cost decreases by 0.1 yuan/kWh, and when the on-grid power ratio decreases by 20%, the cost may increase by 0.1 yuan
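
The levelized-cost-of-energy calculation underlying both evaluation methods can be sketched as discounted lifetime cost divided by discounted lifetime output; the figures in the usage note are illustrative, not the paper's data:

```python
def lcoe(capex, annual_cost, annual_energy_kwh, years, rate):
    """Levelized cost of energy: discounted lifetime costs divided by
    discounted lifetime energy output (all inputs are illustrative)."""
    costs = capex + sum(annual_cost / (1.0 + rate) ** t
                        for t in range(1, years + 1))
    energy = sum(annual_energy_kwh / (1.0 + rate) ** t
                 for t in range(1, years + 1))
    return costs / energy
```

For example, a hypothetical plant with 8000 yuan capex, 900 yuan/year running cost (fuel-dominated, as in the first method above), 20 000 kWh/year output and a 6% discount rate over 20 years comes out at roughly 0.08 yuan/kWh; converting heating and cooling revenue into a fuel-cost offset, as the second method does, lowers the `annual_cost` term and hence the LCOE.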

  4. Efficiency of two-phase methods with focus on a planned population-based case-control study on air pollution and stroke

    Directory of Open Access Journals (Sweden)

    Strömberg Ulf

    2007-11-01

    Full Text Available Abstract Background: We plan to conduct a case-control study to investigate whether exposure to nitrogen dioxide (NO2) increases the risk of stroke. In case-control studies, selective participation can lead to bias and loss of efficiency. A two-phase design can reduce bias and improve efficiency by combining information on the non-participating subjects with information from the participating subjects. In our planned study, we will have access to individual disease status and data on NO2 exposure at the group (area) level for a large population sample of Scania, southern Sweden. A smaller sub-sample will be selected for the second phase for individual-level assessment of exposure and covariables. In this paper, we simulate a case-control study based on our planned study. We develop a two-phase method for this study and compare its performance with that of other two-phase methods. Methods: A two-phase case-control study was simulated with a varying number of first- and second-phase subjects. Estimation methods: Method 1: effect estimation with second-phase data only. Method 2: effect estimation by adjusting the first-phase estimate with the difference between the adjusted and unadjusted second-phase estimates; the first-phase estimate is based on individual disease status and residential address for all study subjects, linked to register data on NO2 exposure for each geographical area. Method 3: effect estimation using the expectation-maximization (EM) algorithm without taking area-level register data on exposure into account. Method 4: effect estimation using the EM algorithm, incorporating group-level register data on NO2 exposure. Results: The simulated scenarios were such that unbiased or marginally biased (… Conclusion: In the setting described here, method 4 performed best in improving efficiency while adjusting for varying participation rates across areas.

  5. Spatial-structural interaction and strain energy structural optimisation

    NARCIS (Netherlands)

    Hofmeyer, H.; Davila Delgado, J.M.; Borrmann, A.; Geyer, P.; Rafiq, Y.; Wilde, de P.

    2012-01-01

    A research engine iteratively transforms spatial designs into structural designs and vice versa. Furthermore, spatial and structural designs are optimised. It is suggested to optimise a structural design by evaluating the strain energy of its elements and by then removing, adding, or changing the

  6. Profile control studies for JET optimised shear regime

    Energy Technology Data Exchange (ETDEWEB)

    Litaudon, X.; Becoulet, A.; Eriksson, L.G.; Fuchs, V.; Huysmans, G.; How, J.; Moreau, D.; Rochard, F.; Tresset, G.; Zwingmann, W. [Association Euratom-CEA, CEA/Cadarache, Dept. de Recherches sur la Fusion Controlee, DRFC, 13 - Saint-Paul-lez-Durance (France); Bayetti, P.; Joffrin, E.; Maget, P.; Mayorat, M.L.; Mazon, D.; Sarazin, Y. [JET Abingdon, Oxfordshire (United Kingdom); Voitsekhovitch, I. [Universite de Provence, LPIIM, Aix-Marseille 1, 13 (France)

    2000-03-01

    This report summarises the profile control studies, i.e. preparation and analysis of JET Optimised Shear plasmas, carried out during the year 1999 within the framework of the Task-Agreement (RF/CEA/02) between JET and the Association Euratom-CEA/Cadarache. We report on our participation in the preparation of the JET Optimised Shear experiments together with their comprehensive analyses and the modelling. Emphasis is put on the various aspects of pressure profile control (core and edge pressure) together with detailed studies of current profile control by non-inductive means, in the prospects of achieving steady, high performance, Optimised Shear plasmas. (authors)

  7. Optimisation of radiation protection

    International Nuclear Information System (INIS)

    1988-01-01

    Optimisation of radiation protection is one of the key elements in the current radiation protection philosophy. The present system of dose limitation was issued in 1977 by the International Commission on Radiological Protection (ICRP) and includes, in addition to the requirements of justification of practices and limitation of individual doses, the requirement that all exposures be kept as low as is reasonably achievable, taking social and economic factors into account. This last principle is usually referred to as optimisation of radiation protection, or the ALARA principle. The NEA Committee on Radiation Protection and Public Health (CRPPH) organised an ad hoc meeting, in liaison with the NEA committees on the safety of nuclear installations and radioactive waste management. Separate abstracts were prepared for individual papers presented at the meeting

  8. Radiation dose optimisation for conventional imaging in infants and newborns using automatic dose management software: an application of the new 2013/59 EURATOM directive.

    Science.gov (United States)

    Alejo, L; Corredoira, E; Sánchez-Muñoz, F; Huerga, C; Aza, Z; Plaza-Núñez, R; Serrada, A; Bret-Zurita, M; Parrón, M; Prieto-Areyano, C; Garzón-Moll, G; Madero, R; Guibelalde, E

    2018-04-09

    Objective: The new 2013/59 EURATOM Directive (ED) demands dosimetric optimisation procedures without undue delay. The aim of this study was to optimise paediatric conventional radiology examinations applying the ED without compromising clinical diagnosis. Automatic dose management software (ADMS) was used to analyse 2678 studies of children from birth to 5 years of age, obtaining local diagnostic reference levels (DRLs) in terms of entrance surface air kerma. Since the local DRL for infants' chest examinations exceeded the European Commission (EC) DRL, an optimisation was performed by decreasing the kVp and applying automatic exposure control. To assess image quality, an analysis of high-contrast spatial resolution (HCSR), signal-to-noise ratio (SNR) and figure of merit (FOM) was performed, as well as a blind test based on the generalised estimating equations method. For newborns' chest examinations, the local DRL exceeded the EC DRL by 113%. After the optimisation, a dose reduction of 54% was obtained. No significant differences were found in the image quality blind test. A decrease in SNR (-37%) and HCSR (-68%), and an increase in FOM (42%), was observed. ADMS allows the fast calculation of local DRLs and the performance of optimisation procedures in babies without delay. However, physical and clinical analyses of image quality are still needed to ensure diagnostic integrity after the optimisation process. Advances in knowledge: ADMS is useful for detecting radiation protection problems and performing optimisation procedures in paediatric conventional imaging without undue delay, as the ED requires.
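
Deriving a local DRL from a dose survey is conventionally done by taking the 75th percentile (third quartile) of the observed dose distribution, which is presumably what the ADMS automates; a minimal sketch:

```python
import numpy as np

def local_drl(doses):
    """Local diagnostic reference level, conventionally the 75th percentile
    of the observed dose distribution (entrance surface air kerma, mGy)."""
    return float(np.percentile(doses, 75))
```

A survey whose doses have a heavy upper tail will produce a local DRL above the published reference value, flagging the need for optimisation, which is exactly the situation the study reports for newborn chest examinations.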

  9. Optimisation of a parallel ocean general circulation model

    Directory of Open Access Journals (Sweden)

    M. I. Beare

    1997-10-01

    Full Text Available This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  10. Optimisation of a parallel ocean general circulation model

    Directory of Open Access Journals (Sweden)

    M. I. Beare

    Full Text Available This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  11. TEM turbulence optimisation in stellarators

    Science.gov (United States)

    Proll, J. H. E.; Mynick, H. E.; Xanthopoulos, P.; Lazerson, S. A.; Faber, B. J.

    2016-01-01

    With the advent of neoclassically optimised stellarators, optimising stellarators for turbulent transport is an important next step. The reduction of ion-temperature-gradient-driven turbulence has been achieved via shaping of the magnetic field, and the reduction of trapped-electron mode (TEM) turbulence is addressed in the present paper. Recent analytical and numerical findings suggest TEMs are stabilised when a large fraction of trapped particles experiences favourable bounce-averaged curvature. This is the case for example in Wendelstein 7-X (Beidler et al 1990 Fusion Technol. 17 148) and other Helias-type stellarators. Using this knowledge, a proxy function was designed to estimate the TEM dynamics, allowing optimal configurations for TEM stability to be determined with the STELLOPT (Spong et al 2001 Nucl. Fusion 41 711) code without extensive turbulence simulations. A first proof-of-principle optimised equilibrium stemming from the TEM-dominated stellarator experiment HSX (Anderson et al 1995 Fusion Technol. 27 273) is presented for which a reduction of the linear growth rates is achieved over a broad range of the operational parameter space. As an important consequence of this property, the turbulent heat flux levels are reduced compared with the initial configuration.

  12. Optimisation of selective breeding program for Nile tilapia (Oreochromis niloticus)

    NARCIS (Netherlands)

    Trong, T.Q.

    2013-01-01

    The aim of this thesis was to optimise the selective breeding program for Nile tilapia in the Mekong Delta region of Vietnam. Two breeding schemes, the “classic” BLUP scheme following the GIFT method (with pair mating) and a rotational mating scheme with own performance selection and

  13. A review of patient dose and optimisation methods in adult and paediatric CT scanning

    International Nuclear Information System (INIS)

    Dougeni, E.; Faulkner, K.; Panayiotakis, G.

    2012-01-01

    Highlights: ► CT scanning frequency has grown with the development of new clinical applications. ► Up to 32-fold dose variation was observed for similar type of procedures. ► Scanning parameters should be optimised for patient size and clinical indication. ► Cancer risks knowledge amongst physicians of certain specialties was poor. ► A significant number of non-indicated CT scans could be eliminated. - Abstract: An increasing number of publications and international reports on computed tomography (CT) have addressed important issues on optimised imaging practice and patient dose. This is partially due to recent technological developments as well as to the striking rise in the number of CT scans being requested. CT imaging has extended its role to newer applications, such as cardiac CT, CT colonography, angiography and urology. The proportion of paediatric patients undergoing CT scans has also increased. The published scientific literature was reviewed to collect information regarding effective dose levels during the most common CT examinations in adults and paediatrics. Large dose variations were observed (up to 32-fold) with some individual sites exceeding the recommended dose reference levels, indicating a large potential to reduce dose. Current estimates on radiation-related cancer risks are alarming. CT doses account for about 70% of collective dose in the UK and are amongst the highest in diagnostic radiology, however the majority of physicians underestimate the risk, demonstrating a decreased level of awareness. Exposure parameters are not always adjusted appropriately to the clinical question or to patient size, especially for children. Dose reduction techniques, such as tube-current modulation, low-tube voltage protocols, prospective echocardiography-triggered coronary angiography and iterative reconstruction algorithms can substantially decrease doses. An overview of optimisation studies is provided. The justification principle is discussed along

  14. Sliver Solar Cells: High-Efficiency, Low-Cost PV Technology

    Directory of Open Access Journals (Sweden)

    Evan Franklin

    2007-01-01

    Full Text Available Sliver cells are thin, single-crystal silicon solar cells fabricated using standard fabrication technology. Sliver modules, composed of several thousand individual Sliver cells, can be efficient, low-cost, bifacial, transparent, flexible, shadow tolerant, and lightweight. Compared with current PV technology, mature Sliver technology will need 10% of the pure silicon and fewer than 5% of the wafer starts per MW of factory output. This paper deals with two distinct challenges related to Sliver cell and Sliver module production: providing a mature and robust Sliver cell fabrication method which produces a high yield of highly efficient Sliver cells, and which is suitable for transfer to industry; and, handling, electrically interconnecting, and encapsulating billions of sliver cells at low cost. Sliver cells with efficiencies of 20% have been fabricated at ANU using a reliable, optimised processing sequence, while low-cost encapsulation methods have been demonstrated using a submodule technique.

  15. An efficient modularized sample-based method to estimate the first-order Sobol' index

    International Nuclear Information System (INIS)

    Li, Chenzhao; Mahadevan, Sankaran

    2016-01-01

    Sobol' index is a prominent methodology in global sensitivity analysis. This paper aims to directly estimate the Sobol' index based only on available input–output samples, even if the underlying model is unavailable. For this purpose, a new method to calculate the first-order Sobol' index is proposed. The innovation is that the conditional variance and mean in the formula of the first-order index are calculated at an unknown but existing location of model inputs, instead of an explicit user-defined location. The proposed method is modularized in two aspects: 1) index calculations for different model inputs are separate and use the same set of samples; and 2) model input sampling, model evaluation, and index calculation are separate. Due to this modularization, the proposed method is capable to compute the first-order index if only input–output samples are available but the underlying model is unavailable, and its computational cost is not proportional to the dimension of the model inputs. In addition, the proposed method can also estimate the first-order index with correlated model inputs. Considering that the first-order index is a desired metric to rank model inputs but current methods can only handle independent model inputs, the proposed method contributes to fill this gap. - Highlights: • An efficient method to estimate the first-order Sobol' index. • Estimate the index from input–output samples directly. • Computational cost is not proportional to the number of model inputs. • Handle both uncorrelated and correlated model inputs.
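
    The idea of estimating Var(E[Y|X_i])/Var(Y) directly from given input-output samples can be sketched with a generic binning estimator (an illustration of the sample-based principle, not necessarily the modularized estimator the paper proposes):

```python
import numpy as np

def first_order_sobol(x, y, n_bins=20):
    """Estimate first-order Sobol' indices from existing input-output
    samples: bin each input, take bin-wise conditional means of y, and
    compute their variance relative to the total variance. No extra
    model evaluations are needed, and each input reuses the same samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    var_y = y.var()
    indices = []
    for i in range(x.shape[1]):
        # Equal-count bins along input i approximate conditioning on X_i.
        order = np.argsort(x[:, i])
        bins = np.array_split(y[order], n_bins)
        cond_means = np.array([b.mean() for b in bins])
        weights = np.array([b.size for b in bins]) / y.size
        overall = np.sum(weights * cond_means)
        # Variance of the conditional means approximates Var(E[Y|X_i]).
        var_cond = np.sum(weights * (cond_means - overall) ** 2)
        indices.append(var_cond / var_y)
    return np.array(indices)

# Sanity check on a toy model Y = X1 + 0.1*X2 with independent inputs:
# analytically S1 ~ 0.99 and S2 ~ 0.01.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(20000, 2))
Y = X[:, 0] + 0.1 * X[:, 1]
S = first_order_sobol(X, Y)
```

    Note how the per-input loop uses the same sample set for every index, mirroring the modularity described in the abstract: sampling, model evaluation, and index calculation are fully decoupled.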

  16. Optimisation of efficiency and emissions in pellet burners

    International Nuclear Information System (INIS)

    Eskilsson, David; Roennbaeck, Marie; Samuelsson, Jessica; Tullin, Claes

    2004-01-01

    There is a trade-off between the emissions of nitrogen oxides (NOx) and those of unburnt hydrocarbons and carbon monoxide (OGC and CO). Decreasing the excess air results in lower NOx emissions but also increased emissions of unburnt species. The efficiency increases as the excess air is decreased, until the losses due to incomplete combustion become too high. The often-high NOx emissions of today's pellet burners can be significantly reduced using well-known techniques such as air staging. The development of chemical sensors is progressing rapidly, and sensors for CO and OGC have recently been introduced on the market. Together with a lambda sensor, these may provide efficient control for optimal performance with respect to emissions and efficiency. In this paper, results from an experimental parameter study in a modified commercial burner, followed by Chemkin simulations with relevant input data and experiments in a laboratory reactor and in a prototype burner, are summarised. Critical parameters for minimising NOx emissions from pellet burners are investigated in some detail. Results from tests of a new sensor for unburnt species are also reported. In conclusion, relatively simple design modifications can significantly decrease NOx emissions from today's pellet burners.
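
    The excess-air trade-off described above can be sketched numerically. The curve shapes and coefficients below are invented for illustration (not measured data): NOx is assumed to rise with the excess-air ratio λ while CO from incomplete combustion decays, so a combined objective has an interior minimum of the kind a sensor-based controller would seek.

```python
import numpy as np

# Toy emission curves (assumed shapes, not measurements).
lam = np.linspace(1.1, 2.0, 901)             # excess-air ratio (lambda)
nox = 50.0 + 120.0 * (lam - 1.1)             # NOx grows with excess air
co = 30.0 + 400.0 * np.exp(-6.0 * (lam - 1.1))  # unburnt CO decays
total = nox + co                              # equal weighting, an assumption
best_lam = lam[np.argmin(total)]              # candidate operating point
```

    A lambda sensor plus CO/OGC sensors would, in effect, track this minimum online as fuel quality and load change.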

  17. Isogeometric Analysis and Shape Optimisation

    DEFF Research Database (Denmark)

    Gravesen, Jens; Evgrafov, Anton; Gersborg, Allan Roulund

    of the whole domain. So in every optimisation cycle we need to extend a parametrisation of the boundary of a domain to the whole domain. It has to be fast in order not to slow the optimisation down, but it also has to be robust and give a parametrisation of high quality. These are conflicting requirements, so we will explain how the validity of a parametrisation can be checked and describe various ways to parametrise a domain. In particular, we study the Winslow functional, which turns out to have some desirable properties. Other problems we touch upon include the clustering of boundary control points (design...

  18. Reward-based spatial crowdsourcing with differential privacy preservation

    Science.gov (United States)

    Xiong, Ping; Zhang, Lefeng; Zhu, Tianqing

    2017-11-01

    In recent years, the popularity of mobile devices has transformed spatial crowdsourcing (SC) into a novel mode of performing complicated projects. Workers can perform tasks at specified locations in return for rewards offered by employers. Existing methods ensure the efficiency of their systems by submitting workers' exact locations to a centralised server for task assignment, which can lead to privacy violations. Thus, implementing crowdsourcing applications while preserving the privacy of workers' locations is a key issue that needs to be tackled. We propose a reward-based SC method that achieves acceptable utility, as measured by task-assignment success rates, while efficiently preserving privacy. A differential privacy model ensures a rigorous privacy guarantee, and Laplace noise is introduced to protect workers' exact locations. We then present a reward-allocation mechanism that adjusts each portion of a task's reward according to the distribution of the workers' locations. Experimental results demonstrate that this optimised-reward method is efficient for SC applications.
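
    The Laplace-noise idea can be illustrated in a few lines. This is a generic Laplace mechanism applied independently to each coordinate (an illustration of the principle, not the paper's exact mechanism; the function name and parameters are assumptions):

```python
import numpy as np

def laplace_perturb(lat, lon, epsilon, sensitivity=1.0, rng=None):
    """Add independent Laplace noise with scale sensitivity/epsilon to a
    worker's coordinates before reporting them to the central server.
    Smaller epsilon means stronger privacy but noisier locations."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return lat + rng.laplace(0.0, scale), lon + rng.laplace(0.0, scale)

rng = np.random.default_rng(42)
noisy_lat, noisy_lon = laplace_perturb(40.7128, -74.0060, epsilon=0.5, rng=rng)
```

    The server then assigns tasks using only the perturbed locations, which is why the reward-allocation step must compensate for the resulting uncertainty in worker-task distances.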

  19. Efficient methods of nanoimprint stamp cleaning based on imprint self-cleaning effect

    Energy Technology Data Exchange (ETDEWEB)

    Meng Fantao; Chu Jinkui [Key Laboratory for Micro/Nano Technology and System of Liaoning Province, Dalian University of Technology, 116024 Dalian (China); Luo Gang; Zhou Ye; Carlberg, Patrick; Heidari, Babak [Obducat AB, SE-20125 Malmoe (Sweden); Maximov, Ivan; Montelius, Lars; Xu, H Q [Division of Solid State Physics, Lund University, Box 118, S-22100 Lund (Sweden); Nilsson, Lars, E-mail: ivan.maximov@ftf.lth.se [Department of Food Technology, Engineering and Nutrition, Lund University, Box 117, S-22100 Lund (Sweden)

    2011-05-06

    Nanoimprint lithography (NIL) is a non-conventional lithographic technique that promises low-cost, high-throughput patterning of structures with sub-10 nm resolution. Contamination of nanoimprint stamps is one of the key obstacles to industrialising NIL technology. Here, we report two efficient approaches for removing typical particle and residual-resist contamination from stamps: thermal and ultraviolet (UV) imprint cleaning, both based on the self-cleaning effect of the imprinting process. The contaminated stamps were imprinted onto polymer substrates and, after demolding, were treated with an organic solvent. Images of the stamp before and after the cleaning processes show that the two approaches can effectively remove contamination without destroying the stamp structures. The contact angles of the stamp before and after cleaning indicate that the methods do not significantly degrade the anti-sticking layer. The cleaning processes reported in this work could also be used for substrate cleaning.

  20. An Efficient Explicit-time Description Method for Timed Model Checking

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

    Full Text Available Timed model checking, the method of formally verifying real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using the general model constructs found in standard untimed model checkers. Lamport proposed an explicit-time description method that uses a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables to model timing requirements. Two further methods, the Sync-based Explicit-time Description Method using rendezvous synchronisation steps and the Semaphore-based Explicit-time Description Method using only one global variable, were subsequently proposed; both achieve better modularity than Lamport's method in modelling real-time systems. In contrast to timed-automata-based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations, which is necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments time by one unit per tick; the state space therefore grows rapidly as the time parameters increase, a problem when the system's time period is relatively long. In this paper, we propose a more efficient method that enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high-performance computing environment show that the new method significantly reduces the state space and improves both time and memory efficiency.
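
    The state-space argument behind the leaping Tick can be illustrated outside any model checker. A minimal Python sketch (an illustration of the counting argument only, not the authors' actual model-checker encoding; the function and variable names are assumptions):

```python
def run_ticks(deadlines):
    """Advance a global clock by leaping straight to each pending
    deadline: one Tick transition per deadline instead of one per
    time unit, so the transition count is independent of how far
    apart the deadlines are."""
    clock = 0
    transitions = 0
    for d in sorted(deadlines):
        clock += d - clock   # leap multiple time units in one tick
        transitions += 1
    return clock, transitions

clock, transitions = run_ticks([5, 100, 2500])
# a unit-increment Tick would need 2500 transitions; leaping needs 3
```

    In a real model checker the leap size is bounded by the nearest enabled timer so that no deadline is skipped, which preserves the semantics while collapsing the intermediate states.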