WorldWideScience

Sample records for global optimisation methods

  1. Resolution of crystal structures by X-ray and neutron powder diffraction using global optimisation methods

    Palin, L

    2005-03-15

    We have shown in this work that powder X-ray diffraction is a powerful tool for analysing crystal structure. The purpose of this thesis is the resolution of crystal structures by X-ray and neutron powder diffraction using global optimisation methods. We have studied three different topics. The first is the order-disorder phenomena observed in some globular organic molecular solids. The second is the opiate family of neuropeptides. These neurotransmitters regulate sensory functions, including pain and control of respiration, in the central nervous system. The aim of our study was to determine the crystal structure of Leu-enkephalin and some of its sub-fragments. The determination of the crystal structures was carried out by performing Monte Carlo simulations. The third is the location of benzene in a sodium-X zeolite. The zeolite framework was already known, and the benzene has been localised by simulated annealing and by the use of maximum entropy maps.

  2. Optimisation of technical specifications using probabilistic methods

    Ericsson, G.; Knochenhauer, M.; Hultqvist, G.

    1986-01-01

    During the last few years the development of methods for modifying and optimising nuclear power plant Technical Specifications (TS) for plant operations has received increased attention. Probabilistic methods in general, and the plant and system models of probabilistic safety assessment (PSA) in particular, seem to provide the most powerful tools for optimisation. This paper first gives some general comments on optimisation, identifying important parameters, and then describes recent Swedish experience from the use of nuclear power plant PSA models and results for TS optimisation.

  3. Methods for Optimisation of the Laser Cutting Process

    Dragsted, Birgitte

    This thesis deals with the adaptation and implementation of various optimisation methods, in the field of experimental design, for the laser cutting process. The problem of optimising the laser cutting process has been defined, and a structure for a Decision Support System (DSS) for the optimisation of the laser cutting process has been suggested. The DSS consists of a database with the currently used and old parameter settings. One of the optimisation methods has also been implemented in the DSS in order to facilitate the optimisation procedure for the laser operator. The Simplex Method has been adapted in two versions: a qualitative one, which optimises the process by comparing the laser-cut items, and a quantitative one, which uses a weighted quality response in order to achieve a satisfactory quality and then maximises the cutting speed, thus increasing the productivity of the process...
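The quantitative Simplex variant mentioned above is essentially a Nelder-Mead search over process parameters. A bare-bones sketch, with an invented quadratic "weighted quality response" standing in for real cut-quality measurements:

```python
def nelder_mead(f, simplex, iters=200, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimal Nelder-Mead simplex minimisation over lists of floats."""
    n = len(simplex) - 1
    for _ in range(iters):
        simplex.sort(key=f)                          # best first, worst last
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        worst = simplex[-1]
        refl = [c + alpha * (c - w) for c, w in zip(centroid, worst)]
        if f(refl) < f(simplex[0]):                  # good reflection: try expanding
            exp = [c + gamma * (r - c) for c, r in zip(centroid, refl)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):               # acceptable reflection
            simplex[-1] = refl
        else:                                        # contract toward the centroid
            contr = [c + rho * (w - c) for c, w in zip(centroid, worst)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:                                    # shrink everything toward the best point
                best_pt = simplex[0]
                simplex = [best_pt] + [[b + sigma * (x - b) for b, x in zip(best_pt, p)]
                                       for p in simplex[1:]]
    return min(simplex, key=f)

# Hypothetical quality penalty over (power, speed) with a sweet spot at (3.0, 1.5)
quality = lambda p: (p[0] - 3.0) ** 2 + (p[1] - 1.5) ** 2
best = nelder_mead(quality, [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```

The qualitative variant would replace `f` with an operator's pairwise ranking of cut items rather than a numeric response.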

  4. An efficient optimisation method in groundwater resource ...

    DRINIE

    2003-10-04

    Oct 4, 2003 ... theories developed in the field of stochastic subsurface hydrology. In reality, many ... Recently, some researchers have applied the multi-stage ... Then a robust solution of the optimisation problem given by Eqs. (1) to (3) is as ...

  5. Milk bottom-up proteomics: method optimisation.

    Delphine Vincent

    2016-01-01

    Milk is a complex fluid whose proteome displays a diverse set of proteins of high abundance such as caseins and medium to low abundance whey proteins such as β-lactoglobulin, lactoferrin, immunoglobulins, glycoproteins, peptide hormones and enzymes. A sample preparation method that enables high reproducibility and throughput is key in reliably identifying proteins present or proteins responding to conditions such as a diet, health or genetics. Using skim milk samples from Jersey and Holstein-Friesian cows, we compared three extraction procedures which have not previously been applied to samples of cows’ milk. Method A (urea) involved a simple dilution of the milk in a urea-based buffer, method B (TCA/acetone) involved a trichloroacetic acid (TCA)/acetone precipitation and method C (methanol/chloroform) involved a tri-phasic partition method in chloroform/methanol solution. Protein assays, SDS-PAGE profiling, and trypsin digestion followed by nanoHPLC-electrospray ionisation-tandem mass spectrometry (nLC-ESI-MS/MS) analyses were performed to assess their efficiency. Replicates were used at each analytical step (extraction, digestion, injection) to assess reproducibility. Mass spectrometry (MS) data are available via ProteomeXchange with identifier PXD002529. Overall 186 unique accessions, major and minor proteins, were identified with a combination of methods. Method C (methanol/chloroform) yielded the best resolved SDS-patterns and highest protein recovery rates, method A (urea) yielded the greatest number of accessions, and, of the three procedures, method B (TCA/acetone) was the least compatible of all with a wide range of downstream analytical procedures. Our results also highlighted breed differences between the proteins in milk of Jersey and Holstein-Friesian cows.

  6. Method for optimising the energy of pumps

    Skovmose Kallesøe, Carsten; De Persis, Claudio

    2011-01-01

    The method involves determining whether pumps (pu1, pu5) are directly assigned to loads (v1, v3) as pilot pumps (pu2, pu3) and hydraulically connected upstream of the pilot pumps. The upstream pumps are controlled with variable speed for energy optimization. Energy optimization circuits are selected

  7. Navigating catastrophes: Local but not global optimisation allows for macro-economic navigation of crises

    Harré, Michael S.

    2013-02-01

    Two aspects of modern economic theory have dominated the recent discussion on the state of the global economy: Crashes in financial markets and whether or not traditional notions of economic equilibrium have any validity. We have all seen the consequences of market crashes: plummeting share prices, businesses collapsing and considerable uncertainty throughout the global economy. This seems contrary to what might be expected of a system in equilibrium where growth dominates the relatively minor fluctuations in prices. Recent work from within economics as well as by physicists, psychologists and computational scientists has significantly improved our understanding of the more complex aspects of these systems. With this interdisciplinary approach in mind, a behavioural economics model of local optimisation is introduced and three general properties are proven. The first is that under very specific conditions local optimisation leads to a conventional macro-economic notion of a global equilibrium. The second is that if both global optimisation and economic growth are required then under very mild assumptions market catastrophes are an unavoidable consequence. Third, if only local optimisation and economic growth are required then there is sufficient parametric freedom for macro-economic policy makers to steer an economy around catastrophes without overtly disrupting local optimisation.
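The contrast between local and global optimisation that the abstract builds on can be seen in a toy landscape (the double-well function below is invented for illustration): a local gradient method converges to whichever basin it starts in, while a global search over the whole range finds the lower minimum.

```python
# Invented double-well "utility landscape": local minimum near x ≈ 1.18,
# deeper global minimum near x ≈ -1.26.
f = lambda x: x ** 4 - 3 * x ** 2 + 0.5 * x
df = lambda x: 4 * x ** 3 - 6 * x + 0.5     # analytic derivative of f

def local_descent(x, lr=0.01, steps=2000):
    """Local optimisation: plain gradient descent from a starting point."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

def global_grid(lo, hi, n=10001):
    """Global optimisation by exhaustive sampling of the whole interval."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return min(xs, key=f)

x_local = local_descent(2.0)        # slides into the nearby, shallower basin
x_global = global_grid(-2.0, 2.0)   # finds the deeper basin on the other side
```

The point of the sketch is only the qualitative gap: the local optimiser is path-dependent, which is exactly the parametric freedom the paper argues policy makers can exploit.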

  8. Optimal Optimisation in Chemometrics

    Hageman, J.A.

    2004-01-01

    The use of global optimisation methods is not straightforward, especially for the more difficult optimisation problems. Solutions have to be found for items such as the evaluation function, representation, step function and meta-parameters, before any useful results can be obtained. This thesis aims

  9. Optimisation of test and maintenance based on probabilistic methods

    Cepin, M.

    2001-01-01

    This paper presents a method which, based on the models and results of probabilistic safety assessment, minimises nuclear power plant risk by optimising the arrangement of safety equipment outages. The test and maintenance activities of the safety equipment are arranged in time, so the classical static fault tree models are extended with time requirements to make them capable of modelling real plant states. A house event matrix is used, which enables modelling of the equipment arrangements through discrete points of time. The result of the method is the determination of the configuration of equipment outages that results in minimal risk, where risk is represented by system unavailability. (authors)
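At its core this is a combinatorial search over outage arrangements in discrete time. A brute-force sketch (the unavailability contributions are hypothetical numbers, not PSA results, and a real plant model would evaluate a time-extended fault tree rather than sum a table):

```python
from itertools import permutations

def min_risk_schedule(trains, slots, unavail):
    """Assign each safety train's outage to a distinct time slot and return the
    assignment with the lowest total unavailability contribution.
    unavail[train][slot] is hypothetical input data."""
    best = (float("inf"), None)
    for order in permutations(slots, len(trains)):
        risk = sum(unavail[tr][sl] for tr, sl in zip(trains, order))
        best = min(best, (risk, order))
    return best

# Toy example: two trains, two candidate outage slots
risk, order = min_risk_schedule(
    ["A", "B"], [1, 2],
    {"A": {1: 0.1, 2: 0.4}, "B": {1: 0.3, 2: 0.2}},
)
```

Exhaustive enumeration is only viable for small problems; the house-event-matrix formulation lets the same search reuse one fault tree model across all candidate time points.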

  10. Satellite Vibration Testing: Angle optimisation method to Reduce Overtesting

    Knight, Charly; Remedia, Marcello; Aglietti, Guglielmo S.; Richardson, Guy

    2018-06-01

    Spacecraft overtesting is a long-running problem, and most attempts to reduce it have focused on adjusting the base vibration input (i.e. notching). Instead, this paper examines testing alternatives for secondary structures (equipment) coupled to the main structure (satellite) when they are tested separately. Even if the vibration source is applied along one of the orthogonal axes at the base of the coupled system (satellite plus equipment), the dynamics of the system, and potentially the interface configuration, mean the vibration at the interface may not occur along a single axis, much less the axis of the base excitation. This paper proposes an alternative testing methodology in which a piece of equipment is tested at an offset angle. This Angle Optimisation method may involve multiple tests, each with an altered input direction, allowing for the best match between all specified equipment system responses and the coupled-system tests. An optimisation process compares the calculated equipment RMS values for a range of inputs with the maximum coupled-system RMS values and is used to find the optimal testing configuration for the given parameters. A case study was performed to find the best testing angles to match the acceleration responses of the centre of mass and the sum of interface forces for all three axes, as well as the von Mises stress at an element by a fastening point. The Angle Optimisation method resulted in RMS values and PSD responses that were much closer to those of the coupled system than traditional testing, and the optimum testing configuration gave an overall average error significantly smaller than the traditional method. Crucially, this case study shows that the optimum test campaign could be a single equipment-level test as opposed to the traditional three orthogonal-direction tests.
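The idea of choosing one offset input angle that best reproduces target per-axis responses can be sketched as a one-parameter search (a toy two-axis projection model, not the paper's coupled finite-element responses):

```python
import math

def rms_error(a, b):
    """Root-mean-square mismatch between two response vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def best_test_angle(target, base=1.0, n=3601):
    """Sweep the shaker input angle over 0..90 degrees and return the angle
    whose projected per-axis responses best match the target RMS values.
    The response model here is pure geometric projection of the base input."""
    best = (float("inf"), 0.0)
    for i in range(n):
        theta = (math.pi / 2) * i / (n - 1)
        resp = (base * math.cos(theta), base * math.sin(theta))
        best = min(best, (rms_error(resp, target), math.degrees(theta)))
    return best[1]
```

In the paper the "response" for each candidate angle comes from the equipment model, and the optimiser matches several quantities (accelerations, interface forces, stress) at once rather than a single pair.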

  11. Global performance enhancements via pedestal optimisation on ASDEX Upgrade

    Dunne, M.G.; Frassinetti, L.; Beurkens, M.N.A.; Cavedon, M.; Fietz, S.; Fischer, R.; Giannone, L.; Huijsmans, G.T.A.; Kurzan, B.; Laggner, F.; McCarhty, P.J.; McDermott, R.M.; Tardini, G.; Viezzer, E.; Willensdorfer, M.; Wolfrum, E.

    2017-01-01

    Results of experimental scans of heating power, plasma shape, and nitrogen content are presented, with a focus on global performance and pedestal alteration. In detailed scans at low triangularity, it is shown that the increase in stored energy due to nitrogen seeding stems from the pedestal. It is

  12. Combining local and global optimisation for virtual camera control

    Burelli, Paolo; Yannakakis, Georgios N.; 2010 IEEE Symposium on Computational Intelligence and Games

    2010-01-01

    Controlling a virtual camera in 3D computer games is a complex task. The camera is required to react to dynamically changing environments and produce high quality visual results and smooth animations. This paper proposes an approach that combines local and global search to solve the virtual camera control problem. The automatic camera control problem is described and it is decomposed into sub-problems; then a hierarchical architecture that solves each sub-problem using the most appropriate op...

  13. Optimisation-Based Solution Methods for Set Partitioning Models

    Rasmussen, Matias Sevel

    The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model, which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown...

  14. Optimisation of Oil Production in Two – Phase Flow Reservoir Using Simultaneous Method and Interior Point Optimiser

    Lerch, Dariusz Michal; Völcker, Carsten; Capolei, Andrea

    2012-01-01

    ... in the reservoir. A promising decrease of these remaining resources can be provided by smart wells applying water injections to sustain a satisfactory pressure level in the reservoir throughout the whole process of oil production. Basically, to enhance secondary recovery of the remaining oil after drilling, water is injected at the injection wells of the down-hole pipes. This sustains the pressure in the reservoir and drives oil towards production wells. There are, however, many factors contributing to the poor performance of conventional secondary recovery methods, e.g. strong surface tension and heterogeneity of the porous rock ... fields, or closed-loop optimisation, can be used for optimising the reservoir performance in terms of net present value of oil recovery or another economic objective. In order to solve an optimal control problem we use a direct collocation method, where we translate a continuous problem into a discrete...

  15. Beam position optimisation for IMRT

    Holloway, L.; Hoban, P.

    2001-01-01

    The introduction of IMRT has not generally resulted in the use of optimised beam positions, because finding the global solution of the problem requires a time-consuming stochastic optimisation method. Although a deterministic method may not achieve the global minimum, it should achieve a superior dose distribution compared with no optimisation. This study aimed to develop and test such a method. The beam optimisation method developed relies on an iterative process to reach the desired number of beams from a large initial number of beams. The number of beams is reduced in a 'weeding-out' process based on the total fluence which each beam delivers. The process is gradual, with only three beams removed each time (following a small number of iterations), ensuring that the reduction in beams does not dramatically affect the fluence maps of those remaining. A comparison was made between the dose distributions achieved when the beam positions were optimised in this fashion and when the beam positions were evenly distributed. The method has been shown to work effectively and efficiently. The Figure shows a comparison of dose distributions with optimised and non-optimised beam positions for 5 beams. It can be clearly seen that there is an improvement in the dose distribution delivered to the tumour and a reduction in the dose to the critical structure with beam position optimisation. A method for beam position optimisation for use in IMRT optimisations has been developed. Although not necessarily achieving the global minimum in beam position, the method still achieves a dramatic improvement compared with no beam position optimisation, and does so very efficiently. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine
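The 'weeding-out' process can be sketched as follows (the fluence values are invented and held fixed; in the actual method a few optimisation iterations rerun between removals, updating the fluence of the remaining beams):

```python
def weed_out_beams(fluence, target=5, drop=3):
    """Iteratively remove the lowest-total-fluence beams, a few per round.
    `fluence` maps beam angle (degrees) -> total delivered fluence; here the
    values are fixed inputs, whereas a planner would re-optimise each round."""
    beams = dict(fluence)
    while len(beams) > target:
        k = min(drop, len(beams) - target)       # never overshoot the target count
        for angle in sorted(beams, key=beams.get)[:k]:
            del beams[angle]
        # (a real implementation would rerun a few fluence-map optimisation
        #  iterations here before the next removal round)
    return sorted(beams)

# Hypothetical starting set: beams every 30 degrees with made-up fluences
kept = weed_out_beams({a: (a % 7) + a / 100 for a in range(0, 360, 30)})
```

Because the removals are staged rather than done in one cut, the surviving beams' fluence maps adapt gradually, which is the property the abstract emphasises.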

  16. Metal Removal Process Optimisation using Taguchi Method - Simplex Algorithm (TM-SA) with Case Study Applications

    Ajibade, Oluwaseyi A.; Agunsoye, Johnson O.; Oke, Sunday A.

    2018-01-01

    In the metal removal process industry, the current practice to optimise cutting parameters adopts a conventional method. It is based on trial and error, in which the machine operator uses experience, coupled with handbook guidelines, to determine optimal parametric values of choice. This method is not accurate, is time-consuming and costly. Therefore, there is a need for a method that is scientific, cost-effective and precise. Keeping this in mind, a different direction for process optimisation is ...

  17. A study of certain Monte Carlo search and optimisation methods

    Budd, C.

    1984-11-01

    Studies are described which might lead to the development of a search and optimisation facility for the Monte Carlo criticality code MONK. The facility envisaged could be used to maximise a function of k-effective with respect to certain parameters of the system or, alternatively, to find the system (in a given range of systems) for which that function takes a given value. (UK)
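A search facility of the kind envisaged reduces, in its simplest form, to plain Monte Carlo random search over system parameters; the sketch below maximises an invented smooth response rather than k-effective from MONK:

```python
import random

def random_search(f, bounds, samples=2000, seed=0):
    """Monte Carlo search: sample parameter vectors uniformly within bounds
    and keep the best. `f` stands in for a function of k-effective that a
    criticality code would evaluate; here it is a made-up smooth response."""
    rng = random.Random(seed)
    best_x, best_v = None, float("-inf")
    for _ in range(samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        v = f(x)
        if v > best_v:
            best_x, best_v = x, v
    return best_x, best_v

# Toy response peaked at (1.0, 2.0) in a hypothetical two-parameter system range
resp = lambda x: -((x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2)
x, v = random_search(resp, [(0.0, 3.0), (0.0, 3.0)])
```

Solving the inverse form mentioned in the abstract (finding a system where the function takes a given value) only changes the objective to minimising `abs(f(x) - target)`.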

  18. Global nuclear industry views: challenges arising from the evolution of the optimisation principle in radiological protection

    Saint-Pierre, S.

    2012-01-01

    Over the last few decades, the steady progress achieved in reducing planned exposures of both workers and the public has been admirable in the nuclear sector. However, the disproportionate focus on tiny public exposures and radioactive discharges associated with normal operations came at a high price, and the quasi-denial of a risk of major accident and related weaknesses in emergency preparedness and response came at an even higher price. Fukushima has unfortunately taught us that radiological protection (RP) for emergency and post-emergency situations can be much more than a simple evacuation that lasts 24–48 h, with people returning safely to their homes soon afterwards. On optimisation of emergency and post-emergency exposures, the only ‘show in town’ in terms of international RP policy improvements has been the issuance of the 2007 Recommendations of the International Commission on Radiological Protection (ICRP). However, no matter how genuine these improvements are, they have not been ‘road tested’ on the practical reality of severe accidents. Post-Fukushima, there is a compelling case to review the practical adequacy of key RP notions such as optimisation, evacuation, sheltering, and reference levels for workers and the public, and to amend these notions with a view to making the international RP system more useful in the event of a severe accident. On optimisation of planned exposures, the reality is that, nowadays, margins for further reductions of public doses in the nuclear sector are very small, and the smaller the dose, the greater the extra effort needed to reduce the dose further. If sufficient caution is not exercised in the use of RP notions such as dose constraints, there is a real risk of challenging nuclear power technologies beyond safety reasons. For nuclear new build, it is the optimisation of key operational parameters of nuclear power technologies (not RP) that is of paramount importance to improve their overall efficiency. In

  20. Optimising Job-Shop Functions Utilising the Score-Function Method

    Nielsen, Erland Hejn

    2000-01-01

    During the last 1-2 decades, simulation optimisation of discrete event dynamic systems (DEDS) has made considerable theoretical progress with respect to computational efficiency. The score-function (SF) method and the infinitesimal perturbation analysis (IPA) are two candidates belonging to this ... of a Job-Shop can be handled by the SF method.

  1. Analysis of optimisation method for a two-stroke piston ring using the Finite Element Method and the Simulated Annealing Method

    Kaliszewski, M.; Mazuro, P.

    2016-09-01

    The Simulated Annealing Method of optimisation for the sealing piston ring geometry is tested. The aim of the optimisation is to develop a ring geometry which would exert the demanded pressure on a cylinder while being bent to fit the cylinder. A method of FEM analysis of an arbitrary piston ring geometry is applied in ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the Simulated Annealing Method to a piston ring optimisation task is proposed and visualised. Difficulties leading to a possible lack of convergence of the optimisation are presented. An example of an unsuccessful optimisation performed in APDL is discussed. A possible line of further optimisation improvement is proposed.

  2. Optimisation and validation of methods to assess single nucleotide polymorphisms (SNPs) in archival histological material

    Andreassen, C N; Sørensen, Flemming Brandt; Overgaard

    2004-01-01

    ... only archival specimens are available. This study was conducted to validate protocols optimised for assessment of SNPs based on paraffin-embedded, formalin-fixed tissue samples. PATIENTS AND METHODS: In 137 breast cancer patients, three TGFB1 SNPs were assessed based on archival histological specimens ... precipitation). RESULTS: Assessment of SNPs based on archival histological material is encumbered by a number of obstacles and pitfalls. However, these can be widely overcome by careful optimisation of the methods used for sample selection, DNA extraction and PCR. Within 130 samples that fulfil the criteria...

  3. Patient and staff dose optimisation in nuclear medicine diagnosis methods

    Marta Wasilewska-Radwanska; Katarzyna Natkaniec

    2007-01-01

    ..., control of detector uniformity. The test for a rotating gamma camera additionally demands controlling the precision of rotation and the image system resolution. The radioisotope and chemical purity of the radiopharmaceuticals are controlled, too. The efficiency of 99mTc elution from the 99Mo generator is tested, and the content of the 99Mo radioisotope in the eluate is measured. Radioisotope diagnosis of brain, heart, thyroid, stomach, liver, kidney and bones, as well as lymphoscintigraphy, is performed. The procedure used for patient and staff dose optimisation consists of: 1) control dose measurements performed with a dosemeter on a tissue-like phantom containing the selected radiopharmaceutical at the same radioactivity as will be applied to the patient, 2) calculation of the patient dose rate, 3) calculation of the staff dose based on the results of personnel dosemeters (films or TLD), 4) preparation of the Quality Assurance instruction for the staff responsible for patient safety. Independently of the patient and staff dose optimisation, the Quality Control of gamma camera equipment, e.g. SPECT X-Ring Nucline (MEDISO), is checked for uniformity of the image from a radiopharmaceutical sample and for the centre of rotation according to the producer's manual instruction. In addition, special lectures and courses for staff are organised several times per year to ensure Continuous Professional Development (CPD) in the field of Quality Assurance and Quality Control.

  4. Integration of Monte-Carlo ray tracing with a stochastic optimisation method: application to the design of solar receiver geometry.

    Asselineau, Charles-Alexis; Zapata, Jose; Pye, John

    2015-06-01

    A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.

  5. Reference voltage calculation method based on zero-sequence component optimisation for a regional compensation DVR

    Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang

    2018-04-01

    This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults and the effect of the wiring mode of the transformer on these characteristics, the optimisation target of the reference voltage calculation is presented with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which can reduce the degree of swell in the phase-to-ground voltage after compensation to the maximum extent and can improve the symmetry degree of the output voltages of the DVR, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.

  6. A pilot investigation to optimise methods for a future satiety preload study

    Hobden, Mark R.; Guérin-Deremaux, Laetitia; Commane, Daniel M.; Rowland, Ian; Gibson, Glenn R.; Kennedy, Orla B.

    2017-01-01

    Background Preload studies are used to investigate the satiating effects of foods and food ingredients. However, the design of preload studies is complex, with many methodological considerations influencing appetite responses. The aim of this pilot investigation was to determine acceptability, and optimise methods, for a future satiety preload study. Specifically, we investigated the effects of altering (i) energy intake at a standardised breakfast (gender-specific or non-gender specific), an...

  7. Artificial Intelligence Mechanisms on Interactive Modified Simplex Method with Desirability Function for Optimising Surface Lapping Process

    Pongchanun Luangpaiboon

    2014-01-01

    A study has been made to optimise the influential parameters of the surface lapping process. Lapping time, lapping speed, downward pressure, and charging pressure were chosen from preliminary studies as the parameters that determine process performance in terms of material removal, lap width, and clamp force. Nominal-the-best desirability functions were used to combine the multiple responses into an overall desirability level, the D response. The conventional modified simplex (Nelder-Mead) method and the interactive desirability function are performed online to adjust the parameter levels in order to maximise the D response. In order to determine the lapping process parameters effectively, this research then applies two powerful artificial-intelligence optimisation mechanisms: the harmony search and firefly algorithms. The recommended condition of (lapping time, lapping speed, downward pressure, charging pressure) = (33, 35, 6.0, 5.0) has been verified by performing confirmation experiments. It showed that the D response level increased to 0.96. Compared with the current operating condition, there is a decrease in material removal and lap width, with improved process performance indices of 2.01 and 1.14, respectively. Similarly, there is an increase in clamp force, with an improved process performance index of 1.58.
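The nominal-the-best desirability function and the overall D response used above follow the standard construction (geometric mean of the individual desirabilities); a sketch with invented specification limits and responses:

```python
def desirability_nominal(y, low, target, high, s=1.0, t=1.0):
    """Nominal-the-best desirability: 1 at the target, falling to 0 at the limits.
    s and t shape how sharply desirability drops on each side of the target."""
    if y <= low or y >= high:
        return 0.0
    if y <= target:
        return ((y - low) / (target - low)) ** s
    return ((high - y) / (high - target)) ** t

def overall_D(ds):
    """Overall desirability: geometric mean of the individual desirabilities."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical measured responses and limits (low, target, high) for
# material removal, lap width, and clamp force
responses = [(4.2, (0.0, 5.0, 10.0)), (1.8, (1.0, 2.0, 3.0)), (55.0, (40.0, 60.0, 80.0))]
D = overall_D([desirability_nominal(y, *lims) for y, lims in responses])
```

Because the geometric mean is zero whenever any single desirability is zero, maximising D forces every response inside its limits simultaneously, which is why it suits this kind of multi-response compromise.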

  8. An improved Lobatto discrete variable representation by a phase optimisation and variable mapping method

    Yu, Dequan; Cong, Shu-Lin; Sun, Zhigang

    2015-01-01

    Highlights: • An optimised finite element discrete variable representation method is proposed. • The method is tested by solving one- and two-dimensional Schrödinger equations. • The method is quite efficient in solving the molecular Schrödinger equation. • It is very easy to generalise the method to multidimensional problems. - Abstract: The Lobatto discrete variable representation (LDVR) proposed by Manolopoulos and Wyatt (1988) has unique features but has not been widely applied in the field of chemical dynamics. Instead, it has found popular application in solving atomic physics problems, in combination with the finite element method (FE-DVR), owing to its inherent ability to treat the Coulomb singularity in spherical coordinates. In this work, an efficient phase optimisation and variable mapping procedure is proposed to improve the grid efficiency of the LDVR/FE-DVR method, which makes it not only competitive with popular DVR methods, such as the Sinc-DVR, but also preserves its advantages in treating the Coulomb singularity. The method is illustrated by calculations for a one-dimensional Coulomb potential and the vibrational states of a one-dimensional Morse potential, a two-dimensional Morse potential and a two-dimensional Hénon-Heiles potential, which prove the efficiency of the proposed scheme and promise more general applications of the LDVR/FE-DVR method

  9. An improved Lobatto discrete variable representation by a phase optimisation and variable mapping method

    Yu, Dequan [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Cong, Shu-Lin, E-mail: shlcong@dlut.edu.cn [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); Sun, Zhigang, E-mail: zsun@dicp.ac.cn [State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Center for Advanced Chemical Physics and 2011 Frontier Center for Quantum Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei 230026 (China)

    2015-09-08

    Highlights: • An optimised finite element discrete variable representation method is proposed. • The method is tested by solving one- and two-dimensional Schrödinger equations. • The method is quite efficient in solving the molecular Schrödinger equation. • It is very easy to generalise the method to multidimensional problems. - Abstract: The Lobatto discrete variable representation (LDVR) proposed by Manolopoulos and Wyatt (1988) has unique features but has not been widely applied in the field of chemical dynamics. Instead, it has found popular application in solving atomic physics problems, in combination with the finite element method (FE-DVR), due to its inherent ability to treat the Coulomb singularity in spherical coordinates. In this work, an efficient phase optimisation and variable mapping procedure is proposed to improve the grid efficiency of the LDVR/FE-DVR method, which not only makes it competitive with popular DVR methods, such as the sinc-DVR, but also retains its advantages in treating the Coulomb singularity. The method is illustrated by calculations for a one-dimensional Coulomb potential and for the vibrational states of a one-dimensional Morse potential, a two-dimensional Morse potential and a two-dimensional Hénon–Heiles potential, which prove the efficiency of the proposed scheme and promise more general applications of the LDVR/FE-DVR method.

  10. Optimisation of a double-centrifugation method for preparation of canine platelet-rich plasma.

    Shin, Hyeok-Soo; Woo, Heung-Myong; Kang, Byung-Jae

    2017-06-26

    Platelet-rich plasma (PRP) has attracted interest in regenerative medicine because of its growth factors. However, there is considerable variability in the recovery and yield of platelets and the concentration of growth factors in PRP preparations. The aim of this study was to identify the optimal relative centrifugal force and spin time for the preparation of PRP from canine blood using a double-centrifugation tube method. Whole blood samples were collected in citrate blood collection tubes from 12 healthy beagles. For the first centrifugation step, 10 different run conditions were compared to determine which condition produced optimal recovery of platelets. Once the optimal condition was identified, platelet-containing plasma prepared under that condition was subjected to a second centrifugation to pellet the platelets. For the second centrifugation, 12 different run conditions were compared to identify the centrifugal force and spin time that produced maximal pellet recovery and concentration increase. Growth factor levels were estimated by using ELISA to measure platelet-derived growth factor-BB (PDGF-BB) concentrations in optimised CaCl2-activated platelet fractions. The highest platelet recovery rate and yield were obtained by first centrifuging whole blood at 1000 g for 5 min and then centrifuging the recovered platelet-enriched plasma at 1500 g for 15 min. This protocol recovered 80% of platelets from whole blood, increased the platelet concentration six-fold, and produced the highest concentration of PDGF-BB in activated fractions. We have described an optimised double-centrifugation tube method for the preparation of PRP from canine blood. This optimised method does not require particularly expensive equipment or high technical ability and can readily be carried out in a veterinary clinical setting.

  11. Radiation dose to children in diagnostic radiology. Measurements and methods for clinical optimisation studies

    Almen, A.J.

    1995-09-01

    A method for estimating the mean absorbed dose to different organs and tissues was developed for paediatric patients undergoing X-ray investigations. The absorbed dose distribution in water was measured for the specific X-ray beam used. Clinical images were studied to determine X-ray beam positions and field sizes. The size and position of organs in the patient were estimated using ORNL phantoms and complementary clinical information. Conversion factors between the mean absorbed dose to various organs and the entrance surface dose for five different body sizes were calculated. Direct measurements on patients estimating entrance surface dose and energy imparted for common X-ray investigations were performed. The examination technique for a number of paediatric X-ray investigations used in 19 Swedish hospitals was studied. For a simulated pelvis investigation of a 1-year-old child the entrance surface dose was measured and image quality was estimated using a contrast-detail phantom. Mean absorbed doses to organs and tissues in urography, lung, pelvis, thoracic spine, lumbar spine and scoliosis investigations were calculated. Calculations of effective dose were supplemented with risk calculations for specific organs, e.g. the female breast. The work shows that the examination technique in paediatric radiology is not yet optimised, and that the non-optimised procedures contribute to a considerable variation in radiation dose. In order to optimise paediatric radiology there is a need for more standardised methods in patient dosimetry. It is especially important to relate measured quantities to the size of the patient, using e.g. the patient's weight and length. 91 refs, 17 figs, 8 tabs

  13. Optimisation by hierarchical search

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
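
    The grouped-variable idea above can be sketched with a toy block-coordinate search (the Ising-like cost function and all parameter values below are illustrative, not the authors' implementation): small blocks of binary variables are optimised exhaustively while the remaining variables stay fixed, and sweeps repeat until no block can be improved.

```python
import itertools

def block_descent(cost, n_vars, block_size=3, sweeps=20):
    """Greedy hierarchical search: exhaustively optimise small blocks of
    binary variables while all remaining variables are held fixed."""
    x = [0] * n_vars
    for _ in range(sweeps):
        improved = False
        for start in range(0, n_vars, block_size):
            block = list(range(start, min(start + block_size, n_vars)))
            best = cost(x)
            best_assign = [x[i] for i in block]
            for assign in itertools.product([0, 1], repeat=len(block)):
                for i, v in zip(block, assign):
                    x[i] = v
                c = cost(x)
                if c < best:
                    best, best_assign, improved = c, list(assign), True
            for i, v in zip(block, best_assign):  # restore best block found
                x[i] = v
        if not improved:  # converged: no block can be improved further
            break
    return x, cost(x)

# Hypothetical toy cost: hit a target sum while penalising disagreements
# between neighbouring variables (an Ising-like landscape).
def cost(x):
    return (sum(x) - 5) ** 2 + sum(x[i] != x[i + 1] for i in range(len(x) - 1))

solution, value = block_descent(cost, n_vars=9)
```

    Optimising one block at a time keeps each sub-problem exhaustively solvable while the sweeps propagate improvements across the whole variable set.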

  14. Artificial neural network optimisation for monthly average daily global solar radiation prediction

    Alsina, Emanuel Federico; Bortolini, Marco; Gamberi, Mauro; Regattieri, Alberto

    2016-01-01

    Highlights: • Prediction of the monthly average daily global solar radiation over Italy. • Multi-location Artificial Neural Network (ANN) model: 45 locations considered. • Optimal ANN configuration with 7 input climatologic/geographical parameters. • Statistical indicators: MAPE, NRMSE, MPBE. - Abstract: The availability of reliable climatologic data is essential for multiple purposes in a wide set of anthropic activities and operative sectors. Frequently, direct measures present spatial and temporal gaps, so that predictive approaches become of interest. This paper focuses on the prediction of the Monthly Average Daily Global Solar Radiation (MADGSR) over Italy using Artificial Neural Networks (ANNs). Data from 45 locations compose the multi-location ANN training and testing sets. For each location, 13 input parameters are considered, including the geographical coordinates and the monthly values of the most frequently adopted climatologic parameters. A subset of 17 locations is used for ANN training, while the testing step is against data from the remaining 28 locations. Furthermore, the Automatic Relevance Determination (ARD) method is used to point out the most relevant inputs for accurate MADGSR prediction. The best ANN configuration includes only 7 parameters, i.e. Top of Atmosphere (TOA) radiation, day length, number of rainy days, average rainfall, latitude and altitude. The correlation performances, expressed through statistical indicators such as the Mean Absolute Percentage Error (MAPE), range between 1.67% and 4.25%, depending on the number and type of the chosen inputs, representing a good solution compared to the current standards.
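
    As a rough illustration of the regression task (not the paper's 13-input, multi-location network), a minimal one-hidden-layer network with tanh units can be trained by batch gradient descent on a hypothetical seasonal curve; all data and hyperparameters below are invented for the sketch.

```python
import math, random

random.seed(1)

def mse(model, data):
    """Mean squared error of a model over (x, y) pairs."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

def train_mlp(data, hidden=8, lr=0.05, epochs=5000):
    """Minimal one-hidden-layer network (tanh units, linear output) trained
    by batch gradient descent -- a toy stand-in for the paper's ANN."""
    w1 = [random.uniform(-2, 2) for _ in range(hidden)]
    b1 = [random.uniform(-2, 2) for _ in range(hidden)]
    w2 = [random.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        g1 = [0.0] * hidden; gb1 = [0.0] * hidden
        g2 = [0.0] * hidden; gb2 = 0.0
        for x, y in data:
            h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
            err = sum(w2[j] * h[j] for j in range(hidden)) + b2 - y
            for j in range(hidden):
                g2[j] += err * h[j]
                d = err * w2[j] * (1 - h[j] * h[j])   # backprop through tanh
                g1[j] += d * x
                gb1[j] += d
            gb2 += err
        n = len(data)
        for j in range(hidden):
            w1[j] -= lr * g1[j] / n
            b1[j] -= lr * gb1[j] / n
            w2[j] -= lr * g2[j] / n
        b2 -= lr * gb2 / n
    return lambda x: sum(w2[j] * math.tanh(w1[j] * x + b1[j])
                         for j in range(hidden)) + b2

# Hypothetical smooth monthly curve standing in for measured radiation data.
data = [(m / 12.0, 5.0 + 3.0 * math.sin(2 * math.pi * m / 12.0)) for m in range(12)]
baseline = mse(lambda x: sum(y for _, y in data) / len(data), data)  # mean predictor
model = train_mlp(data)
final = mse(model, data)
```

    Comparing against the mean predictor mirrors how prediction quality is judged against a trivial baseline before computing indicators such as MAPE.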

  15. A Hybrid Method for the Modelling and Optimisation of Constrained Search Problems

    Sitek Pawel

    2014-08-01

    The paper presents a concept and the outline of the implementation of a hybrid approach to modelling and solving constrained problems. Two environments of mathematical programming (in particular, integer programming) and declarative programming (in particular, constraint logic programming) were integrated. The strengths of integer programming and constraint logic programming, in which constraints are treated in different ways and different methods are implemented, were combined to exploit the advantages of both. The hybrid method is no worse than either of its components used independently. The proposed approach is particularly important for decision models with an objective function and many discrete decision variables added up in multiple constraints. To validate the proposed approach, two illustrative examples are presented and solved. The first example is the authors' original model of cost optimisation in the supply chain with multimodal transportation. The second one is the two-echelon variant of the well-known capacitated vehicle routing problem.

  16. Computer Based Optimisation Routines

    Dragsted, Birgitte; Olsen, Flemmming Ove

    1996-01-01

    In this paper the need for optimisation methods for the laser cutting process has been identified as three different situations. Demands on the optimisation methods for these situations are presented, and one method for each situation is suggested. The adaptation and implementation of the methods...

  17. Flows method in global analysis

    Duong Minh Duc.

    1994-12-01

    We study the gradient flows method for W^{r,p}(M,N), where M and N are Riemannian manifolds and r may be less than m/p. We localise some global analysis problems by constructing gradient flows which only change the value of any u in W^{r,p}(M,N) in a local chart of M. (author). 24 refs

  18. The FRISBEE tool, a software for optimising the trade-off between food quality, energy use, and global warming impact of cold chains

    Gwanpua, S.G.; Verboven, P.; Leducq, D.; Brown, T.; Verlinden, B.E.; Bekele, E.; Aregawi, W.; Evans, J.; Foster, A.; Duret, S.; Hoang, H.M.; Sluis, S. van der; Wissink, E.; Hendriksen, L.J.A.M.; Taoukis, P.; Gogou, E.; Stahl, V.; El Jabri, M.; Le Page, J.F.; Claussen, I.; Indergård, E.; Nicolai, B.M.; Alvarez, G.; Geeraerd, A.H.

    2015-01-01

    Food quality (including safety) along the cold chain, energy use and global warming impact of refrigeration systems are three key aspects in assessing cold chain sustainability. In this paper, we present the framework of dedicated software, the FRISBEE tool, for optimising the quality of refrigerated

  19. Optimisation for offshore wind farm cable connection layout using adaptive particle swarm optimisation minimum spanning tree method

    Hou, Peng; Hu, Weihao; Chen, Zhe

    2016-01-01

    operational requirement and this constraint should be considered during the MST formulation process. Hence, the traditional MST algorithm cannot ensure a minimal cable investment layout. In this paper, a new method to optimise the offshore wind farm cable connection layout is presented. The algorithm...... with the optimised cable connection layout. The proposed method is compared with the MST and Dynamic MST (DMST) methods and simulation results show the effectiveness of the proposed method....
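
    The baseline structure that the paper builds on is the classical minimum spanning tree over turbine positions; a minimal Prim's-algorithm sketch (hypothetical coordinates, and without the cable-capacity constraint, which is precisely what the proposed method adds) looks like this:

```python
import heapq, math

def cable_mst(coords):
    """Prim's algorithm on Euclidean distances -- the classical MST that the
    paper extends with cable-capacity constraints (not modelled here)."""
    n = len(coords)
    dist = lambda a, b: math.dist(coords[a], coords[b])
    visited = {0}                      # grow the tree from node 0
    edges = []
    heap = [(dist(0, j), 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    total = 0.0
    while len(visited) < n:
        d, i, j = heapq.heappop(heap)  # cheapest edge leaving the tree
        if j in visited:
            continue
        visited.add(j)
        edges.append((i, j))
        total += d
        for k in range(n):
            if k not in visited:
                heapq.heappush(heap, (dist(j, k), j, k))
    return edges, total

# Hypothetical 2x3 turbine grid with 1 km spacing; node 0 is the substation.
layout = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
edges, length = cable_mst(layout)
```

    On this toy grid the MST uses five unit-length cables; the limitation the abstract points at is that nothing here caps how many turbines a single cable branch may carry.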

  20. Structural optimisation of a high speed Organic Rankine Cycle generator using a genetic algorithm and a finite element method

    Palko, S. [Machines Division, ABB industry Oy, Helsinki (Finland)

    1997-12-31

    The aim of this work is to design a 250 kW high-speed asynchronous generator for an Organic Rankine Cycle using a genetic algorithm and the finite element method. The characteristics of the induction motors are evaluated using the two-dimensional finite element method (FEM). The movement of the rotor and the non-linearity of the iron are included. In numerical field problems it is possible to find several local extrema for an optimisation problem, and therefore the algorithm has to be capable of determining relevant changes and of avoiding being trapped in a local minimum. In this work the electromagnetic (EM) losses at the rated point are minimised. The optimisation includes the air gap region. Parallel computing is applied to speed up the optimisation. (orig.) 2 refs.
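
    The optimisation loop can be sketched with a minimal real-coded genetic algorithm in which an analytic cost stands in for the FEM-evaluated electromagnetic losses (all parameter values and the cost surface below are illustrative only):

```python
import random

random.seed(5)

def genetic_algorithm(loss, bounds, pop_size=30, generations=80,
                      crossover_rate=0.9, mutation_sd=0.1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism. In the paper each loss evaluation
    would be a 2-D FEM solve; here it is a cheap analytic function."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = min(pop, key=loss)
        nxt = [elite]                                   # elitism: keep the best
        while len(nxt) < pop_size:
            a = min(random.sample(pop, 3), key=loss)    # tournament selection
            b = min(random.sample(pop, 3), key=loss)
            if random.random() < crossover_rate:
                w = random.random()
                child = [w * a[i] + (1 - w) * b[i] for i in range(dim)]
            else:
                child = list(a)
            child = [min(hi, max(lo, g + random.gauss(0, mutation_sd * (hi - lo))))
                     for g, (lo, hi) in zip(child, bounds)]
            nxt.append(child)
        pop = nxt
    best = min(pop, key=loss)
    return best, loss(best)

# Hypothetical loss surface over two design variables (e.g. slot width, air gap).
loss = lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2 + 0.1
best, value = genetic_algorithm(loss, bounds=[(0, 1), (0, 1)])
```

    The population-based search is what lets the method escape the local extrema mentioned in the abstract, and each individual's evaluation is independent, which is why parallel computing speeds it up so directly.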

  1. Optimisation of vertical trajectories using the harmony search method

    Ruby, Margaux

    …used all possible combinations. This exhaustive search provided the global optimum; the solution of our algorithm must therefore come as close as possible to that of the exhaustive search in order to prove that it gives results close to the global optimum. A second comparison was carried out between the results provided by the algorithm and those of the Flight Management System (FMS), an avionics system located in the aircraft cockpit that provides the route to follow in order to optimise the trajectory. The goal is to prove that the harmony search algorithm gives better results than the algorithm implemented in the FMS.
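
    A basic harmony search iteration can be sketched as follows (the parameter values and the toy cost function standing in for fuel burn along a vertical profile are illustrative, not those of the thesis):

```python
import random

random.seed(42)

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000):
    """Basic harmony search: each new harmony draws every variable either
    from the harmony memory (rate hmcr), possibly pitch-adjusted (rate par),
    or uniformly at random within its bounds; the worst stored harmony is
    replaced whenever the new one is better."""
    dim = len(bounds)
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(h) for h in memory]
    bw = [(hi - lo) * 0.05 for lo, hi in bounds]       # pitch bandwidth
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:
                v = random.choice(memory)[d]           # memory consideration
                if random.random() < par:
                    v += random.uniform(-bw[d], bw[d]) # pitch adjustment
            else:
                v = random.uniform(lo, hi)             # random selection
            new.append(min(hi, max(lo, v)))
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                          # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# Hypothetical cost standing in for fuel burn along a vertical profile.
cost = lambda x: sum((xi - 1.0) ** 2 for xi in x)
best, value = harmony_search(cost, bounds=[(-5, 5)] * 3)
```

    The memory-versus-random draw is the musical "improvisation" metaphor the method is named after; replacing only the worst harmony makes the best-so-far solution monotonically non-worsening.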

  2. Application of Three Existing Stope Boundary Optimisation Methods in an Operating Underground Mine

    Erdogan, Gamze; Yavuz, Mahmut

    2017-12-01

    The underground mine planning and design optimisation process has received little attention because of the complexity and variability of problems in underground mines. Although a number of optimisation studies and software tools are available, and some of them, in particular, have been implemented effectively to determine the ultimate pit limits in an open-pit mine, there is still a lack of studies on the optimisation of ultimate stope boundaries in underground mines. The proposed approaches for this purpose aim at maximising the economic profit by selecting the best possible layout under operational, technical and physical constraints. In this paper, three existing heuristic techniques, the Floating Stope Algorithm, the Maximum Value Algorithm and the Mineable Shape Optimiser (MSO), are examined for optimisation of the stope layout in a case study. Each technique is assessed in terms of applicability, algorithm capabilities and limitations, considering the underground mine planning challenges. Finally, the results are evaluated and compared.

  3. Optimisation Of Process Parameters In High Energy Mixing As A Method Of Cohesive Powder Flowability Improvement

    Leś Karolina

    2015-12-01

    Flowability of a fine, highly cohesive calcium carbonate powder was improved using high energy mixing (dry coating), a method consisting of coating CaCO3 particles with a small amount of Aerosil nanoparticles in a planetary ball mill. The angle of repose and the compressibility index were used as measures of flowability. The mixing speed, mixing time, amount of Aerosil and amount of isopropanol were chosen as process variables. To obtain optimal values of the process variables, Response Surface Methodology (RSM) based on a Central Composite Rotatable Design (CCRD) was applied. To match the RSM requirements it was necessary to perform a total of 31 experimental tests to complete the mathematical model equations. The equations, second-order response functions representing the angle of repose and the compressibility index, were expressed as functions of all the process variables. Predicted values of the responses were found to be in good agreement with experimental values. The models were presented as 3-D response surface plots, from which the optimal values of the process variables could be correctly assigned. The proposed mechanochemical method of powder treatment, coupled with response surface methodology, is a new, effective approach to improving the flowability of cohesive powders and optimising powder processing.
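
    The second-order response surface fit at the heart of RSM can be illustrated with a least-squares sketch; the design points and coefficients below are hypothetical, not the paper's measured data.

```python
def fit_quadratic_surface(points):
    """Least-squares fit of a second-order response surface
    y = a + b*x1 + c*x2 + d*x1^2 + e*x2^2 + f*x1*x2,
    solved via the normal equations with plain Gaussian elimination."""
    def features(x1, x2):
        return [1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2]
    k = 6
    A = [[0.0] * k for _ in range(k)]
    b = [0.0] * k
    for x1, x2, y in points:                    # accumulate normal equations
        phi = features(x1, x2)
        for i in range(k):
            b[i] += phi[i] * y
            for j in range(k):
                A[i][j] += phi[i] * phi[j]
    for col in range(k):                        # elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            m = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    coef = [0.0] * k
    for r in reversed(range(k)):                # back substitution
        s = b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))
        coef[r] = s / A[r][r]
    return coef

# Hypothetical design points (mixing speed, Aerosil amount) -> angle of repose.
true = lambda x1, x2: 40 - 2*x1 - 3*x2 + 0.5*x1*x1 + 0.25*x2*x2 + 0.1*x1*x2
pts = [(x1, x2, true(x1, x2)) for x1 in (-2, -1, 0, 1, 2) for x2 in (-2, -1, 0, 1, 2)]
coef = fit_quadratic_surface(pts)
```

    With noiseless data the fit recovers the generating coefficients exactly; in practice the residuals against replicated design points are what validate the model, as in the paper.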

  4. Optimised method to estimate octanol water distribution coefficient (logD) in a high throughput format.

    Low, Ying Wei Ivan; Blasco, Francesca; Vachaspati, Prakash

    2016-09-20

    Lipophilicity is one of the molecular properties assessed in early drug discovery. Direct measurement of the octanol-water distribution coefficient (logD) requires an analytical method with a large dynamic range or multistep dilutions, as the analyte's concentrations span several orders of magnitude. In addition, the water/buffer and octanol phases, which have very different polarities, could lead to matrix effects and affect the LC-MS response, leading to erroneous logD values. Most compound libraries use DMSO stocks, as this greatly reduces the sample requirement, but the presence of DMSO has been shown to underestimate the lipophilicity of the analyte. The present work describes the development of an optimised shake-flask logD method using a deep-well 96-well plate that addresses the issues related to matrix effects, DMSO concentration and incubation conditions and is also amenable to high throughput. Our results indicate that equilibrium can be achieved within 30 min by flipping the plate on its side, while even 0.5% DMSO is not tolerated in the assay. This study uses the matched-matrix concept to minimise the errors in analysing the two phases, namely buffer and octanol, in LC-MS. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Multidisciplinary Design Optimisation (MDO) Methods: Their Synergy with Computer Technology in the Design Process

    Sobieszczanski-Sobieski, Jaroslaw

    1999-01-01

    The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behaviour by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.

  6. Estimation of transversely isotropic material properties from magnetic resonance elastography using the optimised virtual fields method.

    Miller, Renee; Kolipaka, Arunark; Nash, Martyn P; Young, Alistair A

    2018-03-12

    Magnetic resonance elastography (MRE) has been used to estimate isotropic myocardial stiffness. However, anisotropic stiffness estimates may give insight into structural changes that occur in the myocardium as a result of pathologies such as diastolic heart failure. The virtual fields method (VFM) has been proposed for estimating material stiffness from image data. This study applied the optimised VFM to identify transversely isotropic material properties from both simulated harmonic displacements in a left ventricular (LV) model with a fibre field measured from histology as well as isotropic phantom MRE data. Two material model formulations were implemented, estimating either 3 or 5 material properties. The 3-parameter formulation writes the transversely isotropic constitutive relation in a way that dissociates the bulk modulus from other parameters. Accurate identification of transversely isotropic material properties in the LV model was shown to be dependent on the loading condition applied, amount of Gaussian noise in the signal, and frequency of excitation. Parameter sensitivity values showed that shear moduli are less sensitive to noise than the other parameters. This preliminary investigation showed the feasibility and limitations of using the VFM to identify transversely isotropic material properties from MRE images of a phantom as well as simulated harmonic displacements in an LV geometry. Copyright © 2018 John Wiley & Sons, Ltd.

  7. Development, optimisation, and application of ICP-SFMS methods for the measurement of isotope ratios

    Stuerup, S.

    2000-07-01

    The measurement of isotopic composition and isotope ratios in biological and environmental samples requires sensitive, precise, and accurate analytical techniques. The analytical techniques used are traditionally based on mass spectrometry, among these techniques is the ICP-SFMS technique, which became commercially available in the mid 1990s. This technique is characterised by high sensitivity, low background, and the ability to separate analyte signals from spectral interferences. These features are beneficial for the measurement of isotope ratios and enable the measurement of isotope ratios of elements, which it has not previously been possible to measure due to either spectral interferences or poor sensitivity. The overall purpose of the project was to investigate the potential of the single detector ICP-SFMS technique for the measurement of isotope ratios in biological and environmental samples. One part of the work has focused on the fundamental aspects of the ICP-SFMS technique with special emphasize on the features important to the measurement of isotope ratios, while another part has focused on the development, optimisation and application of specific methods for the measurement of isotope ratios of elements of nutritional interest and radionuclides. The fundamental aspects of the ICP-SFMS technique were investigated theoretically and experimentally by the measurement of isotope ratios applying different experimental conditions. It was demonstrated that isotope ratios could be measured reliably using ICP-SFMS by educated choice of acquisition parameters, scanning mode, mass discrimination correction, and by eliminating the influence of detector dead time. 
Applying the knowledge gained through the fundamental study, ICP-SFMS methods for the measurement of isotope ratios of calcium, zinc, molybdenum and iron in human samples and a method for the measurement of plutonium isotope ratios and ultratrace levels of plutonium and neptunium in environmental samples
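
    One concrete example of eliminating the influence of detector dead time, as mentioned above, is the standard non-paralysable correction; the dead time and count rates below are illustrative values, not measurements from the thesis.

```python
def dead_time_correct(observed_rate, tau):
    """Non-paralysable dead-time correction: the true count rate is
    n_true = n_obs / (1 - n_obs * tau), with tau the detector dead time
    in seconds. Uncorrected undercounting at high rates distorts isotope
    ratios whenever the two isotopes arrive at very different count rates."""
    return observed_rate / (1.0 - observed_rate * tau)

def corrected_ratio(rate_a, rate_b, tau):
    """Isotope ratio from two observed count rates, each corrected
    for detector dead time."""
    return dead_time_correct(rate_a, tau) / dead_time_correct(rate_b, tau)

tau = 20e-9                       # 20 ns dead time (assumed value)
raw_ratio = 1.0e6 / 1.0e4         # uncorrected ratio of observed rates
true_ratio = corrected_ratio(1.0e6, 1.0e4, tau)
```

    At a major-isotope rate of 10^6 counts/s the detector is blind for about 2% of the time, so the uncorrected ratio is biased low by roughly that amount, which is why dead-time handling matters for accurate isotope ratios.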

  8. Optimisation of a direct plating method for the detection and enumeration of Alicyclobacillus acidoterrestris spores.

    Henczka, Marek; Djas, Małgorzata; Filipek, Katarzyna

    2013-01-01

    A direct plating method for the detection and enumeration of Alicyclobacillus acidoterrestris spores has been optimised. The results of the application of four types of growth media (BAT agar, YSG agar, K agar and SK agar) regarding the recovery and enumeration of A. acidoterrestris spores were compared. The influence of the type of applied growth medium, heat shock conditions, incubation temperature, incubation time, plating technique and the presence of apple juice in the sample on the accuracy of the detection and enumeration of A. acidoterrestris spores was investigated. Among the investigated media, YSG agar was the most sensitive medium, and its application resulted in the highest recovery of A. acidoterrestris spores, while K agar and BAT agar were the least suitable media. The effect of the heat shock time on the recovery of spores was negligible. When there was a low concentration of spores in a sample, the membrane filtration method was superior to the spread plating method. The obtained results show that heat shock carried out at 80°C for 10 min and plating samples in combination with membrane filtration on YSG agar, followed by incubation at 46°C for 3 days provided the optimal conditions for the detection and enumeration of A. acidoterrestris spores. Application of the presented method allows highly efficient, fast and sensitive identification and enumeration of A. acidoterrestris spores in food products. This methodology will be useful for the fruit juice industry for identifying products contaminated with A. acidoterrestris spores, and its practical application may prevent economic losses for manufacturers. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. A method to derive fixed budget results from expected optimisation times

    Doerr, Benjamin; Jansen, Thomas; Witt, Carsten

    2013-01-01

    At last year's GECCO a novel perspective for theoretical performance analysis of evolutionary algorithms and other randomised search heuristics was introduced that concentrates on the expected function value after a pre-defined number of steps, called the budget. This is significantly different from...... the common perspective where the expected optimisation time is analysed. While there is a huge body of work and a large collection of tools for the analysis of the expected optimisation time, the new fixed budget perspective introduces new analytical challenges. Here it is shown how results on the expected...... optimisation time that are strengthened by deviation bounds can be systematically turned into fixed budget results. We demonstrate our approach by considering the (1+1) EA on LeadingOnes and significantly improving previous results. We prove that deviating from the expected time by an additive term of ω(n3...
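
    The (1+1) EA on LeadingOnes that the analysis concerns can be sketched directly; stopping after a fixed number of iterations rather than running until the optimum is found mirrors the fixed budget perspective (the budget values below are illustrative):

```python
import random

random.seed(7)

def leading_ones(bits):
    """LeadingOnes: the number of consecutive ones at the start of the string."""
    count = 0
    for b in bits:
        if b == 0:
            break
        count += 1
    return count

def one_plus_one_ea(n, budget):
    """(1+1) EA: flip each bit independently with probability 1/n and keep
    the offspring if it is at least as good -- stopped here after a fixed
    budget of iterations instead of waiting for the optimum."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = leading_ones(x)
    for _ in range(budget):
        y = [b ^ (random.random() < 1.0 / n) for b in x]
        fy = leading_ones(y)
        if fy >= fx:
            x, fx = y, fy
    return fx

n = 30
short_run = one_plus_one_ea(n, budget=50)     # small budget: partial progress
long_run = one_plus_one_ea(n, budget=5000)    # well above the Theta(n^2) expectation
```

    The expected optimisation time on LeadingOnes is Θ(n²), so the large budget here comfortably reaches the optimum, while the fixed-budget question asks what function value the small budget already guarantees.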

  10. A critical evaluation of deterministic methods in size optimisation of reliable and cost effective standalone hybrid renewable energy systems

    Maheri, Alireza

    2014-01-01

    Reliability of a hybrid renewable energy system (HRES) strongly depends on various uncertainties affecting the amount of power produced by the system. In the design of systems subject to uncertainties, both deterministic and nondeterministic design approaches can be adopted. In a deterministic design approach, the designer considers the presence of uncertainties and incorporates them indirectly into the design by applying safety factors. It is assumed that, by employing suitable safety factors and considering worst-case-scenarios, reliable systems can be designed. In fact, the multi-objective optimisation problem with two objectives of reliability and cost is reduced to a single-objective optimisation problem with the objective of cost only. In this paper the competence of deterministic design methods in size optimisation of reliable standalone wind–PV–battery, wind–PV–diesel and wind–PV–battery–diesel configurations is examined. For each configuration, first, using different values of safety factors, the optimal size of the system components which minimises the system cost is found deterministically. Then, for each case, using a Monte Carlo simulation, the effect of safety factors on the reliability and the cost are investigated. In performing reliability analysis, several reliability measures, namely, unmet load, blackout durations (total, maximum and average) and mean time between failures are considered. It is shown that the traditional methods of considering the effect of uncertainties in deterministic designs such as design for an autonomy period and employing safety factors have either little or unpredictable impact on the actual reliability of the designed wind–PV–battery configuration. In the case of wind–PV–diesel and wind–PV–battery–diesel configurations it is shown that, while using a high-enough margin of safety in sizing diesel generator leads to reliable systems, the optimum value for this margin of safety leading to a
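
    The role of a Monte Carlo simulation in checking a deterministically sized design can be sketched with a toy model; the daily-yield distribution, demand and component sizes below are invented for illustration and are not the configurations studied in the paper.

```python
import random

random.seed(3)

def unmet_load_fraction(pv_capacity, battery_kwh, days=10000):
    """Monte Carlo check of a deterministically sized PV-battery system:
    daily solar yield is uncertain (hypothetical Gaussian model), demand is
    fixed, and any shortfall the battery cannot cover counts as unmet load."""
    demand = 10.0                      # kWh per day (assumed)
    unmet = 0.0
    total = 0.0
    soc = battery_kwh                  # start with a full battery
    for _ in range(days):
        yield_kwh = max(0.0, random.gauss(pv_capacity * 4.0, pv_capacity * 1.5))
        balance = yield_kwh - demand
        if balance >= 0:
            soc = min(battery_kwh, soc + balance)   # charge with the surplus
        else:
            covered = min(soc, -balance)            # discharge the battery
            soc -= covered
            unmet += (-balance) - covered           # remaining shortfall
        total += demand
    return unmet / total

# A design sized with a larger margin of safety versus a tight one.
tight = unmet_load_fraction(pv_capacity=2.0, battery_kwh=5.0)
generous = unmet_load_fraction(pv_capacity=4.0, battery_kwh=25.0)
```

    This is exactly the kind of check the paper performs: the safety factor is applied at design time, and the simulation then reveals the reliability it actually buys.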

  11. Optimisation of the Laser Cutting Process

    Dragsted, Birgitte; Olsen, Flemmming Ove

    1996-01-01

    The problem in optimising the laser cutting process is outlined. Basic optimisation criteria and principles for adapting an optimisation method, the simplex method, are presented. The results of implementing a response function in the optimisation are discussed with respect to the quality as well...
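
    A minimal Nelder-Mead sketch conveys the kind of simplex optimisation referred to; the cost surface below is a hypothetical stand-in for a laser-cutting quality response, not the authors' response function.

```python
def nelder_mead(f, start, step=0.5, iters=200):
    """Minimal Nelder-Mead simplex method (reflection, expansion,
    contraction, shrink) for a low-dimensional cost surface."""
    n = len(start)
    simplex = [list(start)]
    for i in range(n):                 # initial simplex around the start point
        p = list(start)
        p[i] += step
        simplex.append(p)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [centroid[i] + (centroid[i] - worst[i]) for i in range(n)]
        if f(refl) < f(best):          # very good: try expanding further
            exp = [centroid[i] + 2 * (centroid[i] - worst[i]) for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):  # acceptable: keep the reflection
            simplex[-1] = refl
        else:                          # contract towards the centroid
            contr = [centroid[i] + 0.5 * (worst[i] - centroid[i]) for i in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:                      # shrink all vertices towards the best
                simplex = [best] + [[(p[i] + best[i]) / 2 for i in range(n)]
                                    for p in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0], f(simplex[0])

# Hypothetical cost: deviation from ideal cutting speed/power settings.
cost = lambda p: (p[0] - 3.0) ** 2 + (p[1] - 2.0) ** 2
best, value = nelder_mead(cost, start=[0.0, 0.0])
```

    The attraction of the simplex method for process optimisation is that it needs only cost evaluations, no derivatives, which suits quality measures obtained from actual cuts.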

  12. Suitability of monitoring methods for the optimisation of Radiological Protection in the case of internal exposure through inhalation

    Degrange, J.P.; Gibert, B.; Basire, D.

    2000-01-01

    The radiological protection system recommended by the International Commission on Radiological Protection (ICRP) for justified practices relies on the limitation and optimisation principles. The monitoring of internal exposure is most often based on the periodic assessment of individual exposure in order essentially to ensure simple compliance with the annual dose limits. Optimisation of protection implies a realistic, sensitive and analytical assessment of individual and collective exposures in order to allow the identification of the main sources of exposure (main sources of contamination, most exposed operators, work activities contributing the most to the exposure) and the selection of the optimal protection options. Therefore the monitoring methods must allow the realistic assessment of individual dose levels far lower than the annual limits, together with measurements as frequent as possible. The aim of this presentation is to discuss the ability of various monitoring methods (collective and individual air sampling, in vivo and in vitro bioassays) to fulfil those needs. This discussion is illustrated by the particular case of internal exposure to natural uranium compounds through inhalation. Firstly, the sensitivity and the degree to which each monitoring method is realistic are quantified and discussed on the basis of the application of the new ICRP dosimetric model, and their analytical capability for the optimisation of radiological protection is then indicated. Secondly, a case study is presented which shows the capability of individual air sampling techniques to analyse the exposure of the workers and the inadequacy of static air sampling to accurately estimate the exposures when contamination varies significantly over time and space in the workstations.
As far as exposure to natural uranium compounds through inhalation is concerned, the study for assessing the sensitivity, analytic ability and accuracy of the different measuring systems shows that

  13. Optimisation of production from an oil-reservoir using augmented Lagrangian methods

    Doublet, Daniel Christopher

    2007-07-01

This work studies the use of augmented Lagrangian methods for water flooding production optimisation from an oil reservoir. Commonly, water flooding is used as a means to enhance oil recovery, and due to heterogeneous rock properties, water flows with different velocities throughout the reservoir. Because of this, water breakthrough can occur while large regions of the reservoir are still unflooded, so that much of the oil may become 'trapped' in the reservoir. To avoid or reduce this problem, one can control the production so that the oil recovery rate is maximised, or alternatively the net present value (NPV) of the reservoir is maximised. We have considered water flooding using smart wells. Smart wells with down-hole valves give us the possibility to control the injection/production at each of the valve openings along the well, so that it is possible to control the flow regime. One can control the injection/production at all valve openings, and the setting of the valves may be changed during the production period, which gives us a great deal of control over the production; we want to control the injection/production so that the profit obtained from the reservoir is maximised. The problem is regarded as an optimal control problem, and it is formulated as an augmented Lagrangian saddle point problem. We develop a method for optimal control based on solving the Karush-Kuhn-Tucker conditions for the augmented Lagrangian functional, a method which, to my knowledge, has not been presented in the literature before. The advantage of this method is that we do not need to solve the forward problem for each new estimate of the control variables, which reduces the computational effort compared to other methods that require the solution of the forward problem every time a new estimate of the control variables is found, such as the adjoint method. We test this method on several examples, where it is compared to the adjoint method. Our numerical experiments show
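The augmented Lagrangian idea in this record can be sketched on a toy equality-constrained problem. The quadratic objective, the single constraint and all tuning constants below are illustrative stand-ins, not the reservoir model: the inner loop minimises the augmented Lagrangian in x, and the outer loop applies the first-order multiplier update λ ← λ + μ·h(x).

```python
def f(x):
    # Toy objective (stand-in for negative NPV): minimise x1^2 + x2^2
    return x[0]**2 + x[1]**2

def h(x):
    # Equality constraint h(x) = x1 + x2 - 1 = 0
    return x[0] + x[1] - 1.0

def grad_aug_lag(x, lam, mu):
    # Gradient of L_A(x) = f(x) + lam*h(x) + (mu/2)*h(x)^2
    c = lam + mu * h(x)
    return [2 * x[0] + c, 2 * x[1] + c]

def augmented_lagrangian(x, lam=0.0, mu=10.0, outer=20, inner=500, step=0.01):
    for _ in range(outer):
        for _ in range(inner):            # inner minimisation by gradient descent
            g = grad_aug_lag(x, lam, mu)
            x = [x[0] - step * g[0], x[1] - step * g[1]]
        lam += mu * h(x)                  # first-order multiplier update
    return x, lam

x, lam = augmented_lagrangian([0.0, 0.0])
print(x, lam)   # x ≈ [0.5, 0.5], lam ≈ -1.0
```

For min x₁² + x₂² subject to x₁ + x₂ = 1, the iterates approach the known solution (0.5, 0.5) with multiplier λ ≈ −1, without ever requiring an exactly feasible intermediate iterate.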

  14. Optimisation of warpage on plastic injection moulding part using response surface methodology (RSM) and genetic algorithm method (GA)

    Miza, A. T. N. A.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

In this study, Computer Aided Engineering was used for injection moulding simulation. A Design of Experiments (DOE) approach was applied using a Latin Square orthogonal array. The relationships between the injection moulding parameters and warpage were identified from the experimental data. Response Surface Methodology (RSM) was used to validate the model accuracy. The RSM and GA methods were then combined to find the optimum injection moulding process parameters. The optimisation of injection moulding is thereby largely improved, and the results showed increased accuracy and reliability. The proposed method of combining RSM and GA also helps to minimise the occurrence of warpage.
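The RSM-plus-GA combination can be illustrated with a minimal real-coded genetic algorithm searching a hypothetical fitted response surface. The quadratic "warpage" model, the parameter names (melt temperature, packing pressure) and the bounds below are invented for illustration, not taken from the paper:

```python
import random
random.seed(0)

def warpage(p):
    # Hypothetical RSM-fitted response surface: warpage as a quadratic in
    # melt temperature t and packing pressure q, minimised at t=230, q=80.
    t, q = p
    return 0.01 * (t - 230)**2 + 0.02 * (q - 80)**2 + 0.5

BOUNDS = [(200.0, 260.0), (60.0, 100.0)]

def ga_minimise(fitness, bounds, pop_size=40, gens=60):
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                    # rank: lower warpage is better
        parents = pop[:pop_size // 2]            # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            w = random.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            if random.random() < 0.2:                            # Gaussian mutation
                i = random.randrange(len(child))
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, 2)))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = ga_minimise(warpage, BOUNDS)
print(best)   # close to [230, 80], the surrogate's optimum
```

For this surrogate the analytic optimum is (230, 80); in the paper's workflow the surrogate would instead be the RSM model fitted to the DOE simulation runs.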

  15. Global Topology Optimisation

    2016-10-31

boundary Γ, but the boundary points are not equally spaced along Γ (recall Fig. 2). The idea is that a given boundary Γ has many possible discretisations. … [figure residue: mean curvature vs. radius R, 1/R, geometric finite difference perturbation] … perimeter of a curve. The setting is illustrated in Fig. 9. Recall the sensitivity is defined by (3). For a curve that is represented by a set of

  16. GLOBAL AND STRICT CURVE FITTING METHOD

    Nakajima, Y.; Mori, S.

    2004-01-01

To find a global and smooth curve fitting, the cubic B-Spline method and gathering-line methods are investigated. When segmenting and recognizing a contour curve of a character shape, some global method is required. If we want to connect contour curves around a singular point like crossing points,

  17. Global Convergence of a Modified LS Method

    Liu JinKui

    2012-01-01

The LS method is one of the effective conjugate gradient methods for solving unconstrained optimization problems. The paper presents a modified LS method on the basis of the famous LS method and proves strong global convergence for uniformly convex functions and global convergence for general functions under the strong Wolfe line search. The numerical experiments show that the modified LS method is very effective in practice.
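A nonlinear conjugate gradient iteration with the Liu-Storey (LS) formula β_k = g_{k+1}ᵀ(g_{k+1} − g_k) / (−d_kᵀ g_k) can be sketched as below. For brevity a backtracking Armijo search stands in for the strong Wolfe line search used in the paper, with a steepest-descent restart as a safeguard; the test function is an arbitrary convex quadratic:

```python
def grad(f, x, eps=1e-6):
    # Central-difference numerical gradient (keeps the sketch self-contained)
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return g

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def ls_cg(f, x, iters=200):
    g = grad(f, x)
    d = [-gi for gi in g]                 # initial steepest-descent direction
    for _ in range(iters):
        if dot(g, d) >= 0:                # safeguard: restart if not a descent direction
            d = [-gi for gi in g]
        t, fx = 1.0, f(x)
        for _ in range(50):               # backtracking Armijo line search
            if f([xi + t * di for xi, di in zip(x, d)]) <= fx + 1e-4 * t * dot(g, d):
                break
            t *= 0.5
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(f, x)
        y = [gn - go for gn, go in zip(g_new, g)]
        denom = -dot(d, g)
        beta = dot(g_new, y) / denom if denom > 1e-12 else 0.0   # Liu-Storey beta
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
        if dot(g, g) < 1e-12:
            break
    return x

quad = lambda x: (x[0] - 1)**2 + 2 * (x[1] + 2)**2
print(ls_cg(quad, [0.0, 0.0]))   # converges to the minimiser [1.0, -2.0]
```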

  18. A pilot investigation to optimise methods for a future satiety preload study.

    Hobden, Mark R; Guérin-Deremaux, Laetitia; Commane, Daniel M; Rowland, Ian; Gibson, Glenn R; Kennedy, Orla B

    2017-01-01

Preload studies are used to investigate the satiating effects of foods and food ingredients. However, the design of preload studies is complex, with many methodological considerations influencing appetite responses. The aim of this pilot investigation was to determine acceptability, and optimise methods, for a future satiety preload study. Specifically, we investigated the effects of altering (i) energy intake at a standardised breakfast (gender-specific or non-gender specific), and (ii) the duration between mid-morning preload and ad libitum lunch meal, on morning appetite scores and energy intake at lunch. Participants attended a single study visit. Female participants consumed a 214-kcal breakfast (n = 10) or 266-kcal breakfast (n = 10), equivalent to 10% of recommended daily energy intakes for females and males, respectively. Male participants (n = 20) consumed a 266-kcal breakfast. All participants received a 250-ml orange juice preload 2 h after breakfast. The impact of different study timings was evaluated in male participants, with 10 males following one protocol (protocol 1) and 10 males following another (protocol 2). The duration between preload and ad libitum lunch meal was 2 h (protocol 1) or 2.5 h (protocol 2), with the ad libitum lunch meal provided at 12.00 or 13.00, respectively. All female participants followed protocol 2. Visual analogue scale (VAS) questionnaires were used to assess appetite responses and food/drink palatability. Correlation between male and female appetite scores was higher with the provision of a gender-specific breakfast, compared to non-gender-specific breakfast (Pearson correlation of 0.747 and 0.479, respectively). No differences in subjective appetite or ad libitum energy intake were found between protocols 1 and 2. VAS mean ratings of liking, enjoyment, and palatability were all > 66 out of 100 mm for breakfast, preload, and lunch meals. The findings of this pilot study confirm the acceptability

  19. Characterisation and optimisation of a method for the detection and quantification of atmospherically relevant carbonyl compounds in aqueous medium

    Rodigast, M.; Mutzel, A.; Iinuma, Y.; Haferkorn, S.; Herrmann, H.

    2015-01-01

Carbonyl compounds are ubiquitous in the atmosphere and are either emitted primarily from anthropogenic and biogenic sources or produced secondarily from the oxidation of volatile organic compounds (VOC). Despite a number of studies on the quantification of carbonyl compounds, a comprehensive description of optimised methods for the quantification of atmospherically relevant carbonyl compounds is scarce. Thus, a method was systematically characterised and improved to quantify carbonyl compounds. Quantification with the present method can be carried out for each carbonyl compound sampled in the aqueous phase regardless of its source. The method optimisation was conducted for seven atmospherically relevant carbonyl compounds including acrolein, benzaldehyde, glyoxal, methyl glyoxal, methacrolein, methyl vinyl ketone and 2,3-butanedione. O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride (PFBHA) was used as the derivatisation reagent and the formed oximes were detected by gas chromatography/mass spectrometry (GC/MS). The main advantage of the improved method presented in this study is the low detection limit, in the range of 0.01 to 0.17 μmol L-1 depending on the carbonyl compound. Furthermore, the best results were found for extraction with dichloromethane for 30 min followed by derivatisation with PFBHA for 24 h with 0.43 mg mL-1 PFBHA at a pH value of 3. The optimised method was evaluated in the present study by the OH radical initiated oxidation of 3-methylbutanone in the aqueous phase. Methyl glyoxal and 2,3-butanedione were found to be oxidation products in the samples, with yields of 2% for methyl glyoxal and 14% for 2,3-butanedione.

  20. A systematic procedure to optimise dose and image quality for the measurement of inter-vertebral angles from lateral spinal projections using Cobb and superimposition methods.

    Al Qaroot, Bashar; Hogg, Peter; Twiste, Martin; Howard, David

    2014-01-01

Patients with vertebral column deformations are exposed to high risks associated with ionising radiation exposure. Risks are further increased by the serial X-ray images that are needed to measure and assess their spinal deformation using Cobb or superimposition methods. Therefore, optimising such X-ray practice, by reducing dose whilst maintaining image quality, is a necessity. With a specific focus on lateral thoraco-lumbar images for Cobb and superimposition measurements, this paper outlines a systematic procedure for the optimisation of X-ray practice. Optimisation was conducted on the basis of achieving suitable image quality at minimal dose. Image quality was appraised using a visual analogue rating scale, and Monte Carlo modelling was used for dose estimation. The optimised X-ray practice was identified by imaging healthy normal-weight male adult living human volunteers. The optimised practice consisted of: anode towards the head, broad focus, no OID or grid, 80 kVp, 32 mAs and 130 cm SID. Images of suitable quality for laterally assessing spinal conditions using Cobb or superimposition measurements were produced from an effective dose of 0.05 mSv, which is 83% less than the average effective dose used in the UK for lateral thoracic/lumbar exposures. This optimisation procedure can be adopted and used for the optimisation of other radiographic techniques.

  1. Optimisation of groundwater level monitoring networks using geostatistical modelling based on the Spartan family variogram and a genetic algorithm method

    Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2016-04-01

Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, development of tools, which can be used by regulators for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The network optimisation tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest, which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently-established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the
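The network-reduction idea can be sketched in one dimension. Here linear interpolation stands in for Ordinary Kriging, exhaustive subset search stands in for the integer genetic algorithm, and all well positions and levels are invented; as in the record, the error is the 2-norm of the difference between the full-network mapping and the reduced-network mapping:

```python
import itertools

# Hypothetical 1-D transect: 8 monitoring wells and their groundwater levels
wells = [0.0, 1.0, 2.5, 4.0, 5.5, 7.0, 8.5, 10.0]       # positions
levels = [12.0, 11.5, 11.1, 10.2, 9.8, 9.1, 8.6, 8.0]    # water levels
grid = [i * 0.1 for i in range(101)]                     # mapping grid

def interp(xs, ys, x):
    # Piecewise-linear interpolation with flat extrapolation
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

full_map = [interp(wells, levels, x) for x in grid]

def error(removed):
    # 2-norm between the full mapping and the reduced-network mapping
    keep = [i for i in range(len(wells)) if i not in removed]
    xs = [wells[i] for i in keep]
    ys = [levels[i] for i in keep]
    reduced = [interp(xs, ys, x) for x in grid]
    return sum((a - b) ** 2 for a, b in zip(full_map, reduced)) ** 0.5

# Exhaustive search over all pairs of wells to remove (GA stand-in)
best = min(itertools.combinations(range(8), 2), key=error)
print(best, round(error(best), 4))
```

The pair returned is the one adding the least information to the mapping; in the real tool a genetic algorithm searches the much larger subset space over 70 wells, with kriging providing the mapping.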

  2. Optimising social information by game theory and ant colony method to enhance routing protocol in opportunistic networks

    Chander Prabha

    2016-09-01

The data loss and disconnection of nodes are frequent in opportunistic networks. Social information plays an important role in reducing data loss because it depends on the connectivity of nodes. The appropriate selection of the next hop based on social information is critical for improving the performance of routing in opportunistic networks. The frequent disconnection problem is overcome by optimising the social information with the Ant Colony Optimization method, which depends on the topology of the opportunistic network. The proposed protocol is examined thoroughly via analysis and simulation in order to assess its performance in comparison with other social-based routing protocols in opportunistic networks under various parameter settings.
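A minimal ant colony optimisation loop on a toy graph illustrates the mechanism: ants build paths biased by pheromone, pheromone evaporates each iteration, and good paths receive deposits proportional to their quality. The graph and edge costs below are invented, not an opportunistic-network model:

```python
import random
random.seed(1)

# Toy graph: edge weights are hypothetical delivery costs
graph = {
    'A': {'B': 2.0, 'C': 5.0},
    'B': {'C': 1.0, 'D': 6.0},
    'C': {'D': 1.0},
    'D': {},
}
pher = {(u, v): 1.0 for u in graph for v in graph[u]}   # initial pheromone

def walk(src, dst):
    # One ant builds a path by pheromone- and cost-biased random choice
    path, node = [src], src
    while node != dst:
        nbrs = [v for v in graph[node] if v not in path]
        if not nbrs:
            return None
        weights = [pher[(node, v)] * (1.0 / graph[node][v]) for v in nbrs]
        node = random.choices(nbrs, weights=weights)[0]
        path.append(node)
    return path

def cost(path):
    return sum(graph[u][v] for u, v in zip(path, path[1:]))

best = None
for _ in range(100):                      # colony iterations
    paths = [p for p in (walk('A', 'D') for _ in range(10)) if p]
    for key in pher:
        pher[key] *= 0.9                  # evaporation
    for p in paths:
        for e in zip(p, p[1:]):
            pher[e] += 1.0 / cost(p)      # deposit proportional to path quality
        if best is None or cost(p) < cost(best):
            best = p
print(best, cost(best))   # shortest path A-B-C-D with cost 4.0
```

In the routing protocol the same feedback loop reinforces next-hop choices that historically led to successful delivery, with social connectivity playing the role of the heuristic edge weight.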

  3. A method for the calculation of collision strengths for complex atomic structures based on Slater parameter optimisation

    Fawcett, B.C.; Mason, H.E.

    1989-02-01

    This report presents details of a new method to enable the computation of collision strengths for complex ions which is adapted from long established optimisation techniques previously applied to the calculation of atomic structures and oscillator strengths. The procedure involves the adjustment of Slater parameters so that they determine improved energy levels and eigenvectors. They provide a basis for collision strength calculations in ions where ab initio computations break down or result in reducible errors. This application is demonstrated through modifications of the DISTORTED WAVE collision code and SUPERSTRUCTURE atomic-structure code which interface via a transformation code JAJOM which processes their output. (author)

  4. Rotational degree-of-freedom synthesis: An optimised finite difference method for non-exact data

    Gibbons, T. J.; Öztürk, E.; Sims, N. D.

    2018-01-01

    Measuring the rotational dynamic behaviour of a structure is important for many areas of dynamics such as passive vibration control, acoustics, and model updating. Specialist and dedicated equipment is often needed, unless the rotational degree-of-freedom is synthesised based upon translational data. However, this involves numerically differentiating the translational mode shapes to approximate the rotational modes, for example using a finite difference algorithm. A key challenge with this approach is choosing the measurement spacing between the data points, an issue which has often been overlooked in the published literature. The present contribution will for the first time prove that the use of a finite difference approach can be unstable when using non-exact measured data and a small measurement spacing, for beam-like structures. Then, a generalised analytical error analysis is used to propose an optimised measurement spacing, which balances the numerical error of the finite difference equation with the propagation error from the perturbed data. The approach is demonstrated using both numerical and experimental investigations. It is shown that by obtaining a small number of test measurements it is possible to optimise the measurement accuracy, without any further assumptions on the boundary conditions of the structure.
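The spacing trade-off described above can be demonstrated numerically. In this sketch the mode shape, noise amplitude and evaluation point are all invented: a central difference synthesises the rotation from noisy translational measurements, and the error is dominated by noise propagation (~ε/h) at small spacings and by truncation (~h²) at large ones:

```python
import math, random
random.seed(2)

# Hypothetical translational mode shape of a beam: u(x) = sin(pi*x),
# so the true rotational DOF (slope) is u'(x) = pi*cos(pi*x).
eps = 1e-4                                   # measurement noise amplitude

def measured(x):
    return math.sin(math.pi * x) + random.uniform(-eps, eps)

def rotation(x, h):
    # Central-difference synthesis of the rotation from translational data
    return (measured(x + h) - measured(x - h)) / (2 * h)

true_rot = math.pi * math.cos(math.pi * 0.3)
for h in [1e-5, 1e-3, 0.03, 0.3]:
    est = sum(rotation(0.3, h) for _ in range(200)) / 200   # average 200 noisy trials
    print(f"h={h:g}  error={abs(est - true_rot):.1e}")
# A very small spacing amplifies measurement noise (error ~ eps/h); a large
# spacing incurs truncation error (~ h^2); the optimum lies in between.
```

This is exactly the balance the paper formalises with its analytical error analysis to propose an optimised measurement spacing.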

  5. A review of patient dose and optimisation methods in adult and paediatric CT scanning

    Dougeni, E.; Faulkner, K.; Panayiotakis, G.

    2012-01-01

    Highlights: ► CT scanning frequency has grown with the development of new clinical applications. ► Up to 32-fold dose variation was observed for similar type of procedures. ► Scanning parameters should be optimised for patient size and clinical indication. ► Cancer risks knowledge amongst physicians of certain specialties was poor. ► A significant number of non-indicated CT scans could be eliminated. - Abstract: An increasing number of publications and international reports on computed tomography (CT) have addressed important issues on optimised imaging practice and patient dose. This is partially due to recent technological developments as well as to the striking rise in the number of CT scans being requested. CT imaging has extended its role to newer applications, such as cardiac CT, CT colonography, angiography and urology. The proportion of paediatric patients undergoing CT scans has also increased. The published scientific literature was reviewed to collect information regarding effective dose levels during the most common CT examinations in adults and paediatrics. Large dose variations were observed (up to 32-fold) with some individual sites exceeding the recommended dose reference levels, indicating a large potential to reduce dose. Current estimates on radiation-related cancer risks are alarming. CT doses account for about 70% of collective dose in the UK and are amongst the highest in diagnostic radiology, however the majority of physicians underestimate the risk, demonstrating a decreased level of awareness. Exposure parameters are not always adjusted appropriately to the clinical question or to patient size, especially for children. Dose reduction techniques, such as tube-current modulation, low-tube voltage protocols, prospective echocardiography-triggered coronary angiography and iterative reconstruction algorithms can substantially decrease doses. An overview of optimisation studies is provided. The justification principle is discussed along

  6. Simulation optimisation

    Anon

    2010-01-01

Over the past decade there has been a significant advance in flotation circuit optimisation through performance benchmarking using metallurgical modelling and steady-state computer simulation. This benchmarking includes traditional measures, such as grade and recovery, as well as new flotation measures, such as ore floatability, bubble surface area flux and froth recovery. To further this optimisation, Outotec has released its HSC Chemistry software with simulation modules. The flotation model developed by the AMIRA P9 Project, of which Outotec is a sponsor, is regarded by industry as the most suitable flotation model to use for circuit optimisation. This model incorporates ore floatability with flotation cell pulp and froth parameters, residence time, entrainment and water recovery. Outotec's HSC Sim enables mineral processes to be simulated at different levels, from comminution circuits with sizes and no composition, through flotation processes with minerals by size by floatability components, to full processes with true particles with MLA data.

  7. Development of an optimised 1:1 physiotherapy intervention post first-time lumbar discectomy: a mixed-methods study

    Rushton, A; White, L; Heap, A; Heneghan, N; Goodwin, P

    2016-01-01

    Objectives To develop an optimised 1:1 physiotherapy intervention that reflects best practice, with flexibility to tailor management to individual patients, thereby ensuring patient-centred practice. Design Mixed-methods combining evidence synthesis, expert review and focus groups. Setting Secondary care involving 5 UK specialist spinal centres. Participants A purposive panel of clinical experts from the 5 spinal centres, comprising spinal surgeons, inpatient and outpatient physiotherapists, provided expert review of the draft intervention. Purposive samples of patients (n=10) and physiotherapists (n=10) (inpatient/outpatient physiotherapists managing patients with lumbar discectomy) were invited to participate in the focus groups at 1 spinal centre. Methods A draft intervention developed from 2 systematic reviews; a survey of current practice and research related to stratified care was circulated to the panel of clinical experts. Lead physiotherapists collaborated with physiotherapy and surgeon colleagues to provide feedback that informed the intervention presented at 2 focus groups investigating acceptability to patients and physiotherapists. The focus groups were facilitated by an experienced facilitator, recorded in written and tape-recorded forms by an observer. Tape recordings were transcribed verbatim. Data analysis, conducted by 2 independent researchers, employed an iterative and constant comparative process of (1) initial descriptive coding to identify categories and subsequent themes, and (2) deeper, interpretive coding and thematic analysis enabling concepts to emerge and overarching pattern codes to be identified. Results The intervention reflected best available evidence and provided flexibility to ensure patient-centred care. The intervention comprised up to 8 sessions of 1:1 physiotherapy over 8 weeks, starting 4 weeks postsurgery. The intervention was acceptable to patients and physiotherapists. Conclusions A rigorous process informed an

  8. Hydroxyapatite, fluor-hydroxyapatite and fluorapatite produced via the sol-gel method. Optimisation, characterisation and rheology.

    Tredwin, Christopher J; Young, Anne M; Georgiou, George; Shin, Song-Hee; Kim, Hae-Won; Knowles, Jonathan C

    2013-02-01

Currently, most titanium implant coatings are made using hydroxyapatite and a plasma spraying technique. There are however limitations associated with plasma spraying processes, including poor adherence, high porosity and cost. An alternative method utilising the sol-gel technique offers many potential advantages but is currently lacking research data for this application. It was the objective of this study to characterise and optimise the production of hydroxyapatite (HA), fluorhydroxyapatite (FHA) and fluorapatite (FA) using a sol-gel technique and to assess the rheological properties of these materials. HA, FHA and FA were synthesised by a sol-gel method. Calcium nitrate and triethylphosphite were used as precursors under an ethanol-water based solution. Different amounts of ammonium fluoride (NH4F) were incorporated for the preparation of the sol-gel derived FHA and FA. Optimisation of the chemistry and subsequent characterisation of the sol-gel derived materials was carried out using X-ray Diffraction (XRD) and Differential Thermal Analysis (DTA). Rheology of the sol-gels was investigated using a viscometer and contact angle measurement. A protocol was established that allowed synthesis of HA, FHA and FA that were at least 99% phase pure. The more fluoride incorporated into the apatite structure, the lower the crystallisation temperature, the smaller the unit cell size (changes in the a-axis), and the higher the viscosity and contact angle of the sol-gel derived apatite. A technique has been developed for the production of HA, FHA and FA by the sol-gel technique. Increasing fluoride substitution in the apatite structure alters the potential coating properties. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  9. Multiobjective optimisation of bogie suspension to boost speed on curves

    Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor

    2016-01-01

To improve safety and maximum admissible speed on different operational scenarios, multiobjective optimisation of bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in the multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on track plane accelerations up to 1.5 m/s2. To reduce the number of design parameters for optimisation and improve the computational efficiency, a global sensitivity analysis is accomplished using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on a genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The bogie conventional secondary and primary suspension components are chosen as the design parameters in the first two steps, respectively. In the last step semi-active suspension is in focus. The input electrical current to magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and the respective effects on bogie dynamics are explored. The safety Pareto optimised results are compared with those associated with in-service values. The global sensitivity analysis and multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised values of the design parameters makes it possible to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.

  10. Global optimization methods for engineering design

    Arora, Jasbir S.

    1990-01-01

The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality. However, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, existence of a global minimum is guaranteed. However, in view of the fact that no global optimality conditions are available, a global solution can be found only by an exhaustive search to satisfy Inequality. The exhaustive search can be organized in such a way that the entire design space need not be searched for the solution. This way the computational burden is reduced somewhat. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods. More testing is needed; a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations; since the feasible set keeps shrinking, a good algorithm to find an initial feasible point is required. Such algorithms need to be developed and evaluated.
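The need to organise the search beyond a single local minimisation can be illustrated with a simple multistart scheme (a crude stand-in for the zooming strategy discussed; the 1-D test function, start range and step sizes are invented): each local descent finds only the nearest dip, and the global minimum emerges from comparing many starts.

```python
import math, random
random.seed(5)

def f(x):
    # Illustrative multimodal cost: convex bowl plus oscillation
    return 0.1 * x**2 + math.sin(3 * x)

def local_min(x, step=1e-3, iters=3000):
    # Plain gradient descent with a numerical derivative (a simple local minimizer)
    for _ in range(iters):
        g = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x -= step * g
    return x

starts = [random.uniform(-5, 5) for _ in range(60)]
best = min((local_min(s) for s in starts), key=f)
print(round(best, 2), round(f(best), 2))   # global minimum near x ≈ -0.51
```

A single descent from an unlucky start would return one of the shallower dips; the multistart comparison is the simplest way of "organising the exhaustive search" that the abstract alludes to.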

  11. Global methods for reinforced concrete slabs

    Hoffmann, A.; Lepareux, M.; Combescure, A.

    1985-08-01

    This paper develops the global method strategy to compute elastoplastic thin shells or beams. It is shown how this methodology can be applied to the case of reinforced concrete structures. Two cases of applications are presented: one static, the other dynamic. The numerical results are compared to experimental data

  12. Strategies and Methods for Optimisation of Protection against Internal Exposures of Workers from Industrial Natural Sources (SMOPIE)

    Van der Steen, J.; Timmermans, C.W.M.; Van Weers, A.W.; Degrange, J.P.; Lefaure, C.; Shaw, P.V.

    2004-01-01

    The report provides summaries on the Work Packages 1 and 2 (see Annex 1 and 2 below) and describes the work carried out in Work Packages 3, 4 and 5. In addition it provides a summary of the main achievements of the project. The objective of Work Package 3 was to try to categorise exposure situations described in the case studies in terms of a limited number of exposure parameters relevant to the implementation of ALARA. It became clear that the characterisation criteria considered for the many different exposure situations in the industrial cases led to an important practical conclusion, namely that the preferred choice of the air sampling method (i.e. to implement ALARA) will be the same in all the industries considered. The aim of work package 4 (Review and evaluation of monitoring strategies and methods) was to review the technical capabilities and limitations of different forms of internal radiation monitoring. This included a consideration of monitoring strategies, methods and equipment, as appropriate. The review considered which types of monitoring (if any) are the most effective in terms of contributing to the optimisation of internal exposures (from inhalation) and whether further developments are needed, especially in relation to existing monitoring equipment. One of the main conclusions is: personal air sampling (PAS) is the best method for assessing occupational doses from inhalation of aerosols. The first step in any monitoring strategy should be an assessment of worker doses using this technique. The Appendices 1-4 of Annex 3 provide the detailed supporting material for Work Package 4. Work Package 5 provides recommended strategies, methods and tools for optimisation of internal exposures in industrial work activities involving natural radionuclides. It is based on the case studies as described in Work Package 2 and the analysis of these studies in Work Package 3. It also takes into account the assessment of monitoring strategies, methods and tools

  13. Global Optimization Ensemble Model for Classification Methods

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier while solving a supervised learning problem, like the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data space. All these problems affect the accuracy of a classifier and are the reason that there is no globally optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity. PMID:24883382
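The general reason ensembles can outperform any single classifier is easy to simulate. This toy illustration (not the GMC model itself) combines three independent base classifiers, each correct with probability 0.7, by majority vote; the theoretical vote accuracy is p³ + 3p²(1−p) ≈ 0.784:

```python
import random
random.seed(6)

def base_predict(label):
    # A base classifier that is correct with probability 0.7 (independent errors)
    return label if random.random() < 0.7 else 1 - label

def majority(votes):
    return 1 if sum(votes) >= 2 else 0

trials = 10000
single = ensemble = 0
for _ in range(trials):
    label = random.randrange(2)
    votes = [base_predict(label) for _ in range(3)]
    single += votes[0] == label
    ensemble += majority(votes) == label
print(single / trials, ensemble / trials)   # ≈ 0.70 vs ≈ 0.78
```

Real base classifiers have correlated errors, which is why ensemble methods such as GMC put effort into selecting and weighting diverse members rather than assuming independence.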

  14. Global Optimization Ensemble Model for Classification Methods

    Hina Anwar

    2014-01-01

Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier while solving a supervised learning problem, like the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data space. All these problems affect the accuracy of a classifier and are the reason that there is no globally optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity.

  15. A constriction factor based particle swarm optimisation algorithm to solve the economic dispatch problem including losses

    Young, Steven; Montakhab, Mohammad; Nouri, Hassan

    2011-07-15

    Economic dispatch (ED) is one of the most important problems in power generation, as even fractional-percentage fuel reductions represent significant cost savings. ED optimises the power generated by each generating unit in a system so as to find the minimum operating cost at a required load demand, whilst ensuring that both equality and inequality constraints are met. For the optimisation, a model must be created for each generating unit. Particle swarm optimisation is an evolutionary computation technique and one of the most powerful methods for solving global optimisation problems. The aim of this paper is to add a constriction factor to the particle swarm optimisation algorithm (CFBPSO). Results show that the algorithm is very good at solving the ED problem; for CFBPSO to work in a practical environment, valve point effects and transmission losses should be included in future work.
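The constriction-factor velocity update is standard (Clerc's χ derived from φ = c1 + c2 > 4); the two-generator cost model and coefficients below are illustrative stand-ins, not the paper's, and valve point effects and losses are omitted as in the basic formulation:

```python
import math
import random

random.seed(1)

# Toy two-generator economic dispatch (illustrative coefficients):
# cost_i(P) = a_i * P**2 + b_i * P, subject to P1 + P2 = demand (penalised).
A = [0.010, 0.015]
B = [2.0, 1.8]
DEMAND = 100.0

def cost(p):
    fuel = sum(a * x * x + b * x for a, b, x in zip(A, B, p))
    balance_penalty = 1e3 * (sum(p) - DEMAND) ** 2
    return fuel + balance_penalty

# Constriction factor chi from phi = c1 + c2 > 4.
C1 = C2 = 2.05
PHI = C1 + C2
CHI = 2.0 / abs(2.0 - PHI - math.sqrt(PHI * PHI - 4.0 * PHI))  # ~0.7298

def cfbpso(n_particles=30, iters=200):
    dim = len(A)
    xs = [[random.uniform(0, DEMAND) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_f = [cost(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Constriction-factor update: chi scales the whole velocity.
                vs[i][d] = CHI * (vs[i][d]
                                  + C1 * r1 * (pbest[i][d] - xs[i][d])
                                  + C2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = cost(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), f
                if f < gbest_f:
                    gbest, gbest_f = list(xs[i]), f
    return gbest, gbest_f

best, best_f = cfbpso()
print(round(sum(best), 1))  # total generation, expected near the 100 MW demand
```

The penalty term stands in for the equality (power balance) constraint; inequality limits on each unit would be added the same way or by clamping positions.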

  16. A simple method for optimising transformation of non-parametric data: an illustration by reference to cortisol assays.

    Clark, James E; Osborne, Jason W; Gallagher, Peter; Watson, Stuart

    2016-07-01

    Neuroendocrine data are typically positively skewed and rarely conform to the expectations of a Gaussian distribution. This can be a problem when attempting to analyse results within the framework of the general linear model, which relies on the assumption that residuals in the data are normally distributed. One frequently used method for handling violations of this assumption is to transform variables to bring residuals into closer alignment with assumptions (as residuals are not directly manipulated). This is often attempted through ad hoc traditional transformations such as square root, log and inverse. However, Box and Cox observed that these are all special cases of power transformations and proposed a more flexible method of transformation with which researchers can optimise alignment with assumptions. The goal of this paper is to demonstrate the benefits of the infinitely flexible Box-Cox transformation on neuroendocrine data using syntax in SPSS. When applied to positively skewed data typical of neuroendocrine measures, the majority (~2/3) of cases were brought into strict alignment with the Gaussian distribution (i.e. a non-significant Shapiro-Wilk test). Those unable to meet this criterion showed substantial improvement in distributional properties. The biggest challenge was distributions with a high ratio of kurtosis to skewness. We discuss how these cases might be handled, and we highlight some of the broader issues associated with transformation. Copyright © 2016 John Wiley & Sons, Ltd.
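The paper works in SPSS syntax; as a stand-alone illustration, the Box-Cox family and the usual profile log-likelihood choice of λ can be sketched in a few lines of Python (the grid search and sample data here are illustrative, not the paper's procedure):

```python
import math

def boxcox(x, lam):
    """Box-Cox power transform for positive data: log(x) at lam = 0,
    (x**lam - 1) / lam otherwise."""
    if abs(lam) < 1e-12:
        return [math.log(v) for v in x]
    return [(v ** lam - 1.0) / lam for v in x]

def boxcox_loglik(x, lam):
    """Profile log-likelihood of lambda under normality of transformed data."""
    n = len(x)
    y = boxcox(x, lam)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / n
    return -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(v) for v in x)

def best_lambda(x, lo=-2.0, hi=2.0, steps=401):
    """Pick lambda on a grid by maximising the profile log-likelihood."""
    grid = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return max(grid, key=lambda lam: boxcox_loglik(x, lam))

# Right-skewed sample data (hypothetical, cortisol-like):
data = [math.exp(v) for v in [0.1, 0.3, 0.2, 0.9, 1.4, 0.5, 2.1, 0.4, 0.8, 1.7]]
lam = best_lambda(data)
print(round(lam, 2))
```

The square root, log and inverse transformations correspond to λ = 0.5, 0 and -1 respectively, which is the sense in which Box-Cox subsumes the ad hoc choices.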

  17. Method for a component-based economic optimisation in design of whole building renovation versus demolishing and rebuilding

    Morelli, Martin; Harrestrup, Maria; Svendsen, Svend

    2014-01-01

    Aim: This paper presents a two-fold evaluation method for determining whether to renovate an existing building or to demolish it and erect a new one. Scope: The method determines a combination of energy saving measures that has been optimised with regard to the future cost of energy. Subsequently, the method compares the cost of undertaking the retrofit measures with the cost of demolishing the existing building and erecting a new one. Several economically beneficial combinations of energy saving measures can be determined; all of them are a trade-off between investing in retrofit measures and buying renewable energy. The overall cost of the renovation considers the market value of the property, the investment in the renovation, and the operational and maintenance costs. A multi-family building is used as an example to illustrate the application of the method from macroeconomic and private financial perspectives. Conclusion: The example shows that the investment cost and future market value of the building are the dominant factors in deciding whether to renovate an existing building or to demolish it and erect a new one. Additionally, it is concluded in the example that multi-family buildings erected in the period 1850–1930 should be renovated. - Highlights: • Development of a method for evaluating renovation projects. • Determination of an economically optimal combination of various energy saving measures. • The method compares the renovation cost with that of demolishing and building new. • The decision was highly influenced by the investment cost and the building's market value. • The results indicate that the buildings should be renovated, not demolished.

  18. Remote care of a patient with stroke in rural Trinidad: use of telemedicine to optimise global neurological care.

    Reyes, Antonio Jose; Ramcharan, Kanterpersad

    2016-08-02

    We report a patient-driven home care system that successfully assisted 24/7 with the management of a 68-year-old woman after a stroke, a global illness. The patient's caregiver and physician used computer devices, smartphones and internet access for information exchange. Patient, caregiver, family and physician satisfaction, coupled with outcome and cost, were indicators of quality of care. The novelty of this basic model of teleneurology lies in implementing a patient/caregiver-driven system designed to improve access to cost-efficient neurological care, with potential for use at primary, secondary and tertiary levels of healthcare in rural and underserved regions of the world. We suggest involvement of healthcare stakeholders in teleneurology to address the global problem of limited access to neurological care. This model can facilitate the management of neurological diseases, impact on outcome, reduce the frequency of consultations and hospitalisations, facilitate the teaching of healthcare workers and promote research. 2016 BMJ Publishing Group Ltd.

  19. Defining spinal instability and methods of classification to optimise care for patients with malignant spinal cord compression: A systematic review

    Sheehan, C.

    2016-01-01

    The incidence of Malignant Spinal Cord Compression (MSCC) is thought to be increasing in the UK due to an aging population and improving cancer survivorship. Such a diagnosis requires emergency treatment. In 2008 the National Institute of Clinical Excellence produced guidelines on the management of MSCC, which include a recommendation to assess spinal instability. However, a lack of guidelines for assessing spinal instability in oncology patients is widely acknowledged, which can result in variations in the management of care for such patients. A spinal instability assessment can influence optimum patient care (bed rest or encouraged mobilisation) and inform the best definitive treatment modality (surgery or radiotherapy) for an individual patient. The aim of this systematic review is to identify a consensus definition of spinal instability and the methods by which it can be classified. - Highlights: • A lack of guidance on metastatic spinal instability results in variations in care. • Definitions and assessments of spinal instability are explored in this review. • A Spinal Instability Neoplastic Scoring (SINS) system has been identified. • SINS could potentially be adopted to optimise and standardise patient care.

  20. Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods.

    Arcos-García, Álvaro; Álvarez-García, Juan A; Soria-Morillo, Luis M

    2018-03-01

    This paper presents a Deep Learning approach for traffic sign recognition systems. Several classification experiments are conducted on publicly available traffic sign datasets from Germany and Belgium using a Deep Neural Network which comprises Convolutional layers and Spatial Transformer Networks. The trials are designed to measure the impact of diverse factors, with the end goal of designing a Convolutional Neural Network that can improve on the state of the art in traffic sign classification. First, different adaptive and non-adaptive stochastic gradient descent optimisation algorithms, such as SGD, SGD-Nesterov, RMSprop and Adam, are evaluated. Subsequently, multiple combinations of Spatial Transformer Networks placed at distinct positions within the main neural network are analysed. The proposed Convolutional Neural Network reports an accuracy of 99.71% on the German Traffic Sign Recognition Benchmark, outperforming previous state-of-the-art methods while also being more efficient in terms of memory requirements. Copyright © 2018 Elsevier Ltd. All rights reserved.
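The plain SGD and Nesterov-momentum update rules compared in such experiments can be sketched on a one-dimensional quadratic (the learning rate and momentum values here are arbitrary, not those of the paper):

```python
# Minimise f(w) = (w - 3)**2 with plain SGD and SGD with Nesterov momentum.
def grad(w):
    return 2.0 * (w - 3.0)

def sgd(w=0.0, lr=0.1, steps=50):
    """Plain gradient descent: w <- w - lr * grad(w)."""
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def sgd_nesterov(w=0.0, lr=0.1, mu=0.9, steps=50):
    """Nesterov momentum: the gradient is evaluated at the look-ahead
    point w + mu * v rather than at w itself."""
    v = 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(w + mu * v)
        w += v
    return w

print(round(sgd(), 4), round(sgd_nesterov(), 4))  # both approach the minimum at 3
```

Adaptive methods such as RMSprop and Adam replace the fixed `lr` with a per-parameter step size derived from running gradient statistics, but the loop structure is the same.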

  1. Optimisation of small-scale hydropower using quality assurance methods - Preliminary project; Vorprojekt: Optimierung von Kleinwasserkraftwerken durch Qualitaetssicherung. Programm Kleinwasserkraftwerke

    Hofer, S.; Staubli, T.

    2006-11-15

    This comprehensive final report for the Swiss Federal Office of Energy (SFOE) presents the results of a preliminary project that examined how quality assurance methods can be used in the optimisation of small-scale hydropower projects. The project's aim, to make use of existing know-how, experience and synergies, is examined. Discrepancies in quality and their effects on production prices were determined in interviews. The paper describes best-practice guidelines for the quality assurance of small-scale hydro schemes. A flow chart describes the various steps to be taken in the project and realisation work. Information collected from planners and from interviews with them is presented, along with further information obtained from the literature. The results of interviews concerning planning work, tendering and the construction stages of these hydro schemes are presented and commented on. Similarly, the operational phase of such power plants is examined, including questions on operation and guarantees. The aims of the follow-up main project, the definition of a tool and guidelines for ensuring quality, are briefly reviewed.

  2. Optimising the Design Process of the Injection Camshaft by Critical Path Method (CPM)

    Olga-Ioana Amariei

    2016-10-01

    This paper presents a series of advantages of the CPM method, focusing on optimising the design duration for an injection camshaft by cost criteria. The minimum duration for finalising the design of the injection camshaft is determined, as well as the total cost associated with the project, first under normal conditions and then under a crash regime. Finally, two types of sensitivity analysis are performed: meeting the desired completion time and meeting the desired budget cost.

  3. Optimised method for the routine determination of Technetium-99 in environmental samples by liquid scintillation counting

    Wigley, F.; Warwick, P.E.; Croudace, I.W.; Caborn, J.; Sanchez, A.L.

    1999-01-01

    A method has been developed for the routine determination of 99Tc in a range of environmental matrices using 99mTc (t1/2 = 6.06 h) as an internal yield monitor. Samples are ignited stepwise to 550 °C and the 99Tc is extracted from the ignited residue with 8 M nitric acid. Many contaminants are co-precipitated with Fe(OH)3, and the Tc in the supernatant is pre-concentrated and further purified using anion exchange chromatography. Final separation of Tc from Ru is achieved by extraction of Tc into 5% tri-n-octylamine in xylene from 2 M sulphuric acid. The xylene fraction is mixed directly with a commercial liquid scintillation cocktail. The chemical yield is determined through the measurement of 99mTc by gamma spectrometry, and the 99Tc activity is measured using liquid scintillation counting after a further two weeks to allow decay of the 99mTc activity. Typical recoveries for this method are in the order of 70-95%. The method has a detection limit of 1.7 Bq kg-1 based on a 2 h count time and a 10 g sample size. The chemical separation for 24 samples of sediment or marine biota can be completed by one analyst in a working week. A further week is required to allow the samples to decay before determination. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  4. Calculation and decomposition of spot price using interior point nonlinear optimisation methods

    Xie, K.; Song, Y.H.

    2004-01-01

    Optimal pricing of real and reactive power is a very important issue in a deregulated environment. This paper formulates the optimal pricing problem as an extended optimal power flow problem. Spot prices are then decomposed into components reflecting various ancillary services, and the derivation of the proposed decomposition model is described in detail. A primal-dual interior point method is applied to avoid the 'go'/'no go' gauge. In addition, the proposed approach can be extended to cater for other types of ancillary services. (author)

  5. Optimisation of the Method for the Quantitative Determination of Sulforaphane in Broccoli

    Yi Yi Myint; Antal Bognar; Tauscher, B.

    2002-02-01

    Consumption of vegetables, especially crucifers, reduces the risk of developing cancer. Sulforaphane [1-isothiocyanato-4-(methylsulfinyl)butane], a compound with the ability to inhibit carcinogenesis, is one of the degradation products of glucosinolates in cruciferous vegetables. Among the available extraction methods, autolysis at room temperature is the most effective for sulforaphane extraction, giving relatively higher purity and better yield. The research work undertaken at the Federal Research Centre for Nutrition, Institute of Biology and Chemistry, Karlsruhe, Germany was the isolation of sulforaphane from cruciferous vegetables such as broccoli (Brassica oleracea L. cv. italica) employing autolysis, the yield being higher. The purity and yield of the extracted sulforaphane were examined by gas chromatography. (author)

  6. Optimisation of the Method for the Quantitative Determination of Sulforaphane in Broccoli

    Myint, Yi Yi; Bognar, Antal; Tauscher, B

    2002-02-15

    Consumption of vegetables, especially crucifers, reduces the risk of developing cancer. Sulforaphane [1-isothiocyanato-4-(methylsulfinyl)butane], a compound with the ability to inhibit carcinogenesis, is one of the degradation products of glucosinolates in cruciferous vegetables. Among the available extraction methods, autolysis at room temperature is the most effective for sulforaphane extraction, giving relatively higher purity and better yield. The research work undertaken at the Federal Research Centre for Nutrition, Institute of Biology and Chemistry, Karlsruhe, Germany was the isolation of sulforaphane from cruciferous vegetables such as broccoli (Brassica oleracea L. cv. italica) employing autolysis, the yield being higher. The purity and yield of the extracted sulforaphane were examined by gas chromatography. (author)

  7. Methods of optimising ion beam induced charge collection of polycrystalline silicon photovoltaic cells

    Witham, L.C.G.; Jamieson, D.N.; Bardos, R.A.

    1998-01-01

    Ion Beam Induced Charge (IBIC) is a valuable method for mapping charge carrier transport and recombination in silicon solar cells. However, performing IBIC analysis of polycrystalline silicon solar cells is problematic in ways unlike previous uses of IBIC on silicon-based electronic devices. Typical solar cells have a surface area of several square centimetres and a p-n junction thickness of only a few microns. This means the cell has a large junction capacitance, in the range of many nanofarads, which leads to a large amount of noise on the preamplifier inputs that typically swamps the transient IBIC signal. The normal method of improving the signal-to-noise (S/N) ratio by biasing the junction is impractical for these cells, as the low-quality silicon used leads to a large leakage current across the device. We present several experimental techniques which improve the S/N ratio and which, when used together, should make IBIC analysis of many low-crystalline-quality devices a viable and reliable procedure. (authors)

  8. Simulation by the method of inverse cumulative distribution function applied in optimising of foundry plant production

    J. Szymszal

    2009-01-01

    The study discusses the application of computer simulation based on the method of the inverse cumulative distribution function. The simulation refers to an elementary static case, which can also be solved by physical experiment, consisting mainly of observations of foundry production in a selected foundry plant. For the simulation and forecasting of foundry production quality in a selected cast iron grade, the random number generator of an Excel spreadsheet was chosen. The very wide potential of this type of simulation when applied to the evaluation of foundry production quality was demonstrated, using a uniform-distribution number generator to generate a variable of an arbitrary distribution, especially of a preset empirical distribution, without any need to fit smooth theoretical distributions to this variable.
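The inverse-CDF method itself is easy to sketch without a spreadsheet: draw u uniformly on [0, 1) and map it back through the cumulative distribution. The empirical distribution below is hypothetical, not the foundry's data:

```python
import random
from bisect import bisect_left

random.seed(42)

# Hypothetical empirical distribution of a casting quality grade
# (values and their observed relative frequencies):
values = [1, 2, 3, 4]
freqs = [0.10, 0.40, 0.35, 0.15]

# Build the cumulative distribution once.
cdf = []
total = 0.0
for f in freqs:
    total += f
    cdf.append(total)

def sample():
    """Inverse-CDF method: map a uniform u in [0, 1) back through the CDF."""
    u = random.random()
    return values[bisect_left(cdf, u)]

draws = [sample() for _ in range(10000)]
share_of_2 = draws.count(2) / len(draws)
print(round(share_of_2, 2))  # should be near the preset frequency 0.40
```

This is exactly what a uniform generator plus a lookup against a cumulative-frequency column achieves in a spreadsheet, and it works for any preset empirical distribution without fitting a theoretical one.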

  9. A new Dobson Umkehr ozone profile retrieval method optimising information content and resolution

    Stone, K.; Tully, M. B.; Rhodes, S. K.; Schofield, R.

    2015-03-01

    The standard Dobson Umkehr methodology for retrieving coarse-resolution ozone profiles, used by the National Oceanographic and Atmospheric Administration, uses designated solar zenith angles (SZAs). However, some information may be lost if measurements lie outside the designated SZA range (between 60° and 90°) or do not conform to the fitting technique. Also, while Umkehr measurements can be taken using multiple wavelength pairs (A, C and D), past retrieval methods have focused on a single pair (C). Here we present an Umkehr inversion method that uses measurements at all SZAs (termed operational) and all wavelength pairs, although we caution against direct comparison with other algorithms. Information content for a Melbourne, Australia (38° S, 145° E) Umkehr measurement case study from 28 January 1994, with an SZA range similar to that designated in previous algorithms, is shown. When comparing the typical single wavelength pair with designated SZAs to the operational measurements, the total degrees of freedom (independent pieces of information) increase from 3.1 to 3.4, with the majority of the information gain originating from Umkehr layers 2+3 and 4 (10-20 km and 25-30 km respectively). In addition, using all available wavelength pairs increases the total degrees of freedom to 5.2, with the most significant increases in Umkehr layers 2+3 to 7 and 9+ (10-40 and 45-80 km). Investigating a case from 13 April 1970, where the measurements extend beyond 90° SZA, gives further information gain, with the total degrees of freedom reaching 6.5. Similar increases are seen in the information content. Comparing the retrieved Melbourne Umkehr time series with ozonesondes shows excellent agreement in layers 2+3 and 4 (10-20 and 25-30 km) for both the C and A+C+D pairs. Retrievals in layers 5 and 6 consistently show a lower ozone partial column compared to ozonesondes. This is likely due to stray light effects that are not accounted for in the

  10. Daylight Design of Office Buildings: Optimisation of External Solar Shadings by Using Combined Simulation Methods

    Javier González

    2015-05-01

    Integrating daylight and energy performance, with optimisation, into the design process has always been a challenge for designers. Most building environmental performance simulation tools require a considerable amount of time and many iterations to achieve accurate results. Moreover, the combination of daylight and energy performance has always been an issue, as different software packages are needed to perform detailed calculations. A simplified method to overcome both issues using recent advances in software integration is explored here. As a case study, the optimisation of external shadings in a typical office space in Australia is presented. Results are compared against common solutions adopted as industry standard practice. Visual comfort and energy efficiency are analysed in an integrated approach. The DIVA (Design, Iterate, Validate and Adapt) plug-in for Rhinoceros/Grasshopper is used as the main tool, given its ability to effectively calculate daylight metrics (using the Radiance/Daysim engine) and energy consumption (using the EnergyPlus engine). The optimisation process is carried out by parametrically controlling the shading geometries. Genetic Algorithms (GA) embedded in the evolutionary solver Galapagos are adopted in order to achieve close-to-optimum results by controlling iteration parameters. The optimised result, in comparison with conventional design techniques, reveals significant enhancement of comfort levels and energy efficiency. Benefits and drawbacks of the proposed strategy are then discussed.

  11. Detection of crack-like indications in digital radiography by global optimisation of a probabilistic estimation function

    Alekseychuk, O.

    2006-07-01

    A new algorithm for the detection of longitudinal crack-like indications in radiographic images is developed in this work. Conventional local detection techniques give unsatisfactory results for this task due to the low signal-to-noise ratio (SNR ∝ 1) of crack-like indications in radiographic images. The use of global features of crack-like indications provides the necessary noise resistance, but it entails prohibitive computational complexity of detection and difficulties in a formal description of the indication shape. Conventionally, the excessive computational complexity of the solution is reduced by the use of heuristics. The heuristics used are selected on a trial-and-error basis, are problem-dependent and do not guarantee the optimal solution. A distinctive feature of the algorithm developed here is that it does not follow this route. Instead, a global characteristic of a crack-like indication (the estimation function) is used, whose maximum in the space of all possible positions, lengths and shapes can be found exactly, i.e. without any heuristics. The proposed estimation function is defined as the sum of the a posteriori information gains about the hypothesis of indication presence at each point along the whole hypothetical indication. The gain in information about the hypothesis of indication presence results from the analysis of the underlying image in the local area. Such an estimation function is theoretically justified and exhibits the desired behaviour on changing signals. The developed algorithm is implemented in the C++ programming language and tested on synthetic as well as real images. It delivers good results (a high correct detection rate at a given false alarm rate) comparable to the performance of trained human inspectors.
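The paper's exact estimation function and search space are not given in the abstract, but the key idea, exact maximisation of an additive per-point score over all indication shapes, can be illustrated with a small dynamic program for near-vertical paths (a sketch of the principle, not the author's algorithm):

```python
def best_path_score(score):
    """score[r][c]: per-pixel evidence for 'indication here' (e.g. a local
    information gain). A path picks one pixel per row, moving at most one
    column left or right between rows; the maximal total score over ALL such
    paths is found exactly by dynamic programming, with no heuristics."""
    rows, cols = len(score), len(score[0])
    best = list(score[0])
    for r in range(1, rows):
        new = []
        for c in range(cols):
            # Best predecessor among the (up to) three reachable columns.
            prev = max(best[max(0, c - 1):min(cols, c + 2)])
            new.append(prev + score[r][c])
        best = new
    return max(best)

# A faint vertical feature in column 1 of a noisy 4x3 score map:
score_map = [
    [0.0, 0.9, 0.1],
    [0.2, 0.8, 0.0],
    [0.1, 0.7, 0.1],
    [0.0, 0.9, 0.2],
]
print(round(best_path_score(score_map), 2))  # 0.9 + 0.8 + 0.7 + 0.9 = 3.3
```

The point of the construction is the one stressed in the abstract: because the score is a sum over points, the global maximum over an exponential number of shapes is computed in time linear in the image size, so no trial-and-error heuristics are needed.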

  12. Detection of crack-like indications in digital radiography by global optimisation of a probabilistic estimation function

    Alekseychuk, O.

    2006-01-01

    A new algorithm for the detection of longitudinal crack-like indications in radiographic images is developed in this work. Conventional local detection techniques give unsatisfactory results for this task due to the low signal-to-noise ratio (SNR ∝ 1) of crack-like indications in radiographic images. The use of global features of crack-like indications provides the necessary noise resistance, but it entails prohibitive computational complexity of detection and difficulties in a formal description of the indication shape. Conventionally, the excessive computational complexity of the solution is reduced by the use of heuristics. The heuristics used are selected on a trial-and-error basis, are problem-dependent and do not guarantee the optimal solution. A distinctive feature of the algorithm developed here is that it does not follow this route. Instead, a global characteristic of a crack-like indication (the estimation function) is used, whose maximum in the space of all possible positions, lengths and shapes can be found exactly, i.e. without any heuristics. The proposed estimation function is defined as the sum of the a posteriori information gains about the hypothesis of indication presence at each point along the whole hypothetical indication. The gain in information about the hypothesis of indication presence results from the analysis of the underlying image in the local area. Such an estimation function is theoretically justified and exhibits the desired behaviour on changing signals. The developed algorithm is implemented in the C++ programming language and tested on synthetic as well as real images. It delivers good results (a high correct detection rate at a given false alarm rate) comparable to the performance of trained human inspectors.

  13. A practical method to standardise and optimise the Philips DoseRight 2.0 CT automatic exposure control system.

    Wood, T J; Moore, C S; Stephens, A; Saunderson, J R; Beavis, A W

    2015-09-01

    Given the increasing use of computed tomography (CT) in the UK over the last 30 years, it is essential to ensure that all imaging protocols are optimised to keep radiation doses as low as reasonably practicable, consistent with the intended clinical task. However, the complexity of modern CT equipment can make this task difficult to achieve in practice. Recent results of local patient dose audits have shown discrepancies between two Philips CT scanners that use the DoseRight 2.0 automatic exposure control (AEC) system in the 'automatic' mode of operation. The use of this system can result in drifting dose and image quality performance over time as it is designed to evolve based on operator technique. The purpose of this study was to develop a practical technique for configuring examination protocols on four CT scanners that use the DoseRight 2.0 AEC system in the 'manual' mode of operation. This method used a uniform phantom to generate reference images which form the basis for how the AEC system calculates exposure factors for any given patient. The results of this study have demonstrated excellent agreement in the configuration of the CT scanners in terms of average patient dose and image quality when using this technique. This work highlights the importance of CT protocol harmonisation in a modern Radiology department to ensure both consistent image quality and radiation dose. Following this study, the average radiation dose for a range of CT examinations has been reduced without any negative impact on clinical image quality.

  14. An Optimised Method to Determine PAHs in a Contaminated Soil; Metodo Optimizado para la Determinacion de PAHs en un Suelo Contaminado

    Garcia Alonso, S.; Perez Pastor, R. M.; Sevillano Castano, M. L.; Escolano Segovia, O.; Garcia Frutos, F. J.

    2007-07-20

    An analytical study is presented of an optimised method for determining selected polycyclic aromatic hydrocarbons (PAHs) by High Performance Liquid Chromatography (HPLC) with fluorescence detection. The work was focused on obtaining reliable measurements of PAHs in a gasworks-contaminated soil and was performed in the frame of the project 'Assessment of natural remediation technologies for PAHs in contaminated soils' (Spanish Plan Nacional I+D+i, CTM 2004-05832-CO2-01). The first assays evaluated an initially proposed sonication extraction procedure on the contaminated soil. Afterwards, to improve the efficiency and reduce the solvent use and time consumption of the extraction procedures, the most relevant parameters affecting the extraction step were investigated. A comparison between sonication and microwave procedures was made, and the influence of sample grinding was studied. In general, both extraction techniques led to comparable results, although the sonication procedure needs to be more carefully optimised. Finally, as an application of the optimised method, the effect of particle size on the relative distribution of selected PAHs in the contaminated soil was investigated. The relative abundance of the more volatile PAHs decreased with decreasing grain size, while that of the less volatile compounds increased at lower grain sizes. (Author) 10 refs.

  15. Particle swarm optimisation classical and quantum perspectives

    Sun, Jun; Wu, Xiao-Jun

    2016-01-01

    Contents: Introduction (Optimisation Problems and Optimisation Methods; Random Search Techniques; Metaheuristic Methods; Swarm Intelligence; Particle Swarm Optimisation). Particle Swarm Optimisation (Overview; Motivations; PSO Algorithm: Basic Concepts and the Procedure; Paradigm: How to Use PSO to Solve Optimisation Problems; Some Harder Examples). Some Variants of Particle Swarm Optimisation (Why Does the PSO Algorithm Need to Be Improved?; Inertia and Constriction-Acceleration Techniques for PSO; Local Best Model; Probabilistic Algorithms; Other Variants of PSO). Quantum-Behaved Particle Swarm Optimisation (Overview; Motivation: From Classical Dynamics to Quantum Mechanics; Quantum Model: Fundamentals of QPSO; QPSO Algorithm; Some Essential Applications; Some Variants of QPSO; Summary). Advanced Topics (Behaviour Analysis of Individual Particles; Convergence Analysis of the Algorithm; Time Complexity and Rate of Convergence; Parameter Selection and Performance; Summary). Industrial Applications (Inverse Problems for Partial Differential Equations; Inverse Problems for Non-Linear Dynamical Systems; Optimal De...

  16. An Optimisation Approach for Room Acoustics Design

    Holm-Jørgensen, Kristian; Kirkegaard, Poul Henning; Andersen, Lars

    2005-01-01

    This paper discusses, on a conceptual level, the value of optimisation techniques in architectural room acoustics design from a practical point of view. It is chosen to optimise a single objective room acoustics design criterion estimated from the sound field inside the room. The sound field is modelled using the boundary element method, with absorption incorporated. An example is given in which the geometry of a room is defined by four design modes and the room geometry is optimised to obtain a uniform sound pressure.

  17. Evolutionary programming for neutron instrument optimisation

    Bentley, Phillip M. [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany)]. E-mail: phillip.bentley@hmi.de; Pappas, Catherine [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany); Habicht, Klaus [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany); Lelievre-Berna, Eddy [Institut Laue-Langevin, 6 rue Jules Horowitz, BP 156, 38042 Grenoble Cedex 9 (France)

    2006-11-15

    Virtual instruments based on Monte-Carlo techniques are now an integral part of novel instrumentation development, and the existing codes (McSTAS and Vitess) are extensively used to define and optimise novel instrumental concepts. Neutron spectrometers, however, involve a large number of parameters, and their optimisation is often a complex and tedious procedure. Artificial intelligence algorithms are proving increasingly useful in such situations. Here, we present an automatic, reliable and scalable numerical optimisation concept based on the canonical genetic algorithm (GA). The algorithm was used to optimise the 3D magnetic field profile of the NSE spectrometer SPAN at the HMI. We discuss the potential of the GA, which, combined with the existing Monte-Carlo codes (Vitess, McSTAS, etc.), leads to a very powerful tool for automated global optimisation of a general neutron scattering instrument, avoiding locally optimal configurations.
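The canonical GA loop referred to here, selection, crossover and mutation over a population, can be sketched on a toy fitness function (the bitstring "one-max" problem; all parameters are illustrative, and the fitness would in practice be a simulated figure of merit from a Monte-Carlo run):

```python
import random

random.seed(7)

# Canonical GA on bitstrings: maximise the number of ones ("one-max"),
# a textbook stand-in for any scalar figure of merit.
def fitness(bits):
    return sum(bits)

def select(pop):
    """Binary tournament selection: the fitter of two random individuals."""
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    """Single-point crossover."""
    point = random.randrange(1, len(p1))
    return p1[:point] + p2[point:]

def mutate(bits, rate=0.02):
    """Flip each bit independently with a small probability."""
    return [1 - b if random.random() < rate else b for b in bits]

def ga(n_bits=30, pop_size=40, generations=60):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(select(pop), select(pop))) for _ in range(pop_size)]
    return max(pop, key=fitness)

best = ga()
print(fitness(best))  # close to the maximum of 30
```

The appeal for instrument design is the same as in the abstract: the GA only needs fitness evaluations, so any Monte-Carlo instrument simulation can be plugged in as the objective, and the population search resists getting trapped in locally optimal configurations.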

  18. Evolutionary programming for neutron instrument optimisation

    Bentley, Phillip M.; Pappas, Catherine; Habicht, Klaus; Lelievre-Berna, Eddy

    2006-01-01

    Virtual instruments based on Monte-Carlo techniques are now an integral part of novel instrumentation development, and the existing codes (McSTAS and Vitess) are extensively used to define and optimise novel instrumental concepts. Neutron spectrometers, however, involve a large number of parameters, and their optimisation is often a complex and tedious procedure. Artificial intelligence algorithms are proving increasingly useful in such situations. Here, we present an automatic, reliable and scalable numerical optimisation concept based on the canonical genetic algorithm (GA). The algorithm was used to optimise the 3D magnetic field profile of the NSE spectrometer SPAN at the HMI. We discuss the potential of the GA, which, combined with the existing Monte-Carlo codes (Vitess, McSTAS, etc.), leads to a very powerful tool for automated global optimisation of a general neutron scattering instrument, avoiding local optimum configurations.

  19. Characterisation and optimisation of a sample preparation method for the detection and quantification of atmospherically relevant carbonyl compounds in aqueous medium

    Rodigast, M.; Mutzel, A.; Iinuma, Y.; Haferkorn, S.; Herrmann, H.

    2015-06-01

    Carbonyl compounds are ubiquitous in the atmosphere; they are either emitted primarily from anthropogenic and biogenic sources or produced secondarily from the oxidation of volatile organic compounds. Despite a number of studies on the quantification of carbonyl compounds, comprehensive descriptions of optimised methods for the quantification of atmospherically relevant carbonyl compounds are scarce. The method optimisation was conducted for seven atmospherically relevant carbonyl compounds: acrolein, benzaldehyde, glyoxal, methyl glyoxal, methacrolein, methyl vinyl ketone and 2,3-butanedione. O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride (PFBHA) was used as the derivatisation reagent, and the formed oximes were detected by gas chromatography/mass spectrometry (GC/MS). With the present method, quantification can be carried out for each carbonyl compound originating from fog, cloud and rain or sampled from the gas and particle phase into water. Detection limits between 0.01 and 0.17 μmol L-1 were found, depending on the carbonyl compound. Furthermore, the best results were found for derivatisation with a PFBHA concentration of 0.43 mg mL-1 for 24 h, followed by extraction with dichloromethane for 30 min at pH = 1. The optimised method was evaluated in the present study by the OH radical initiated oxidation of 3-methylbutanone in the aqueous phase. Methyl glyoxal and 2,3-butanedione were found to be oxidation products in the samples, with a yield of 2% for methyl glyoxal and 14% for 2,3-butanedione after a reaction time of 5 h.

  20. Global warming and climate change: control methods

    Laal, M.; Aliramaie, A.

    2008-01-01

    This paper aimed at finding the causes of global warming and ways to bring it under control. Data were based on scientific opinion as given in synthesis reports, news, articles, web sites and books. Global warming is the observed and projected increase in the average temperature of Earth's atmosphere and oceans. Carbon dioxide and other air pollutants collect in the atmosphere like a thickening blanket, trapping the sun's heat and causing the planet to warm up. Pollution is one of the biggest man-made problems, and burning fossil fuels is its main source. As average temperature increases, habitats, species and people are threatened by drought, changes in rainfall, altered seasons, and more violent storms and floods. Indeed, the life cycle of nuclear power results in relatively little pollution. Energy efficiency, solar, wind and other renewable fuels are other weapons against global warming. Human activity, primarily the burning of fossil fuels, is the major driving factor in global warming. Curtailing the release of carbon dioxide into the atmosphere, by reducing the use of oil, gasoline and coal and employing alternative energy sources, is the tool for keeping global warming under control. Global warming can be slowed and stopped with practical actions that yield a cleaner, healthier atmosphere

  1. Multi-Optimisation Consensus Clustering

    Li, Jian; Swift, Stephen; Liu, Xiaohui

    Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
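MOCC's internals are not given in the record, but the generic consensus-clustering scheme it builds on can be illustrated. The sketch below is an assumption-laden toy, not the MOCC algorithm: it uses a plain co-association matrix and an average-linkage consensus cut (with scikit-learn and SciPy), which is merely the common starting point of CC-style methods:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans

def consensus_clustering(X, k, n_runs=20, seed=0):
    """Generic consensus clustering: accumulate a co-association matrix
    (fraction of base k-means runs in which two points share a cluster),
    then cut an average-linkage tree built on 1 - co-association."""
    n = len(X)
    coassoc = np.zeros((n, n))
    for r in range(n_runs):
        labels = KMeans(n_clusters=k, n_init=1, random_state=seed + r).fit_predict(X)
        coassoc += labels[:, None] == labels[None, :]
    coassoc /= n_runs
    dist = 1.0 - coassoc          # consensus distance between points
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=k, criterion="maxclust")
```

MOCC replaces the naive combination step with an optimised agreement-separation criterion inside a multi-optimisation framework; the co-association matrix above only illustrates the ensemble idea the abstract refers to.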

  2. An optimised method for the extraction of bacterial mRNA from plant roots infected with Escherichia coli O157:H7

    Ashleigh Holmes

    2014-06-01

    Analysis of microbial gene expression during host colonisation provides valuable information on the nature of the interaction, beneficial or pathogenic, and the adaptive processes involved. Isolation of bacterial mRNA for in planta analysis can be challenging where host nucleic acid may dominate the preparation, or inhibitory compounds affect downstream analysis, e.g. qPCR, microarray or RNA-seq. The goal of this work was to optimise the isolation of bacterial mRNA of food-borne pathogens from living plants. Reported methods for recovery of phytopathogen-infected plant material, using hot phenol extraction and a high concentration of bacterial inoculum or large amounts of infected tissue, were found to be inappropriate for plant roots inoculated with Escherichia coli O157:H7. The bacterial RNA yields were too low, and increased plant material resulted in a dominance of plant RNA in the sample. To improve the yield of bacterial RNA and reduce the number of plants required, an optimised method was developed which combines bead beating with directed bacterial lysis using SDS and lysozyme. Inhibitory plant compounds, such as phenolics and polysaccharides, were counteracted with the addition of HMW-PEG and CTAB. The new method increased the total yield of bacterial mRNA substantially and allowed assessment of gene expression by qPCR. This method can be applied to other bacterial species associated with plant roots, and also in the wider context of food safety.

  3. Ontology-based coupled optimisation design method using state-space analysis for the spindle box system of large ultra-precision optical grinding machine

    Wang, Qianren; Chen, Xing; Yin, Yuehong; Lu, Jian

    2017-08-01

    With the increasing complexity of mechatronic products, traditional empirical or step-by-step design methods are facing great challenges, with various factors and different stages having become inevitably coupled during the design process. Management of massive information or big data, as well as the efficient operation of information flow, is deeply involved in the process of coupled design. Designers have to address increasingly sophisticated situations when coupled optimisation is also engaged. Aiming at overcoming the difficulties involved in the design of the spindle box system of an ultra-precision optical grinding machine, this paper proposes a coupled optimisation design method based on state-space analysis, with the design knowledge represented by ontologies and their semantic networks. An electromechanical coupled model integrating the mechanical structure, control system and motor driving system is established, mainly concerning the stiffness matrices of the hydrostatic bearings, ball screw nut and rolling guide sliders. The effectiveness and precision of the method are validated by simulation results for the natural frequency and deformation of the spindle box when an impact force is applied to the grinding wheel.

  4. An improvement to the ligand optimisation method (LOM) for measuring the apparent dissociation constant and ligand purity in Ca2+ and Mg2+ buffer solutions.

    McGuigan, John A S; Kay, James W; Elder, Hugh Y

    2014-01-01

    In Ca(2+)/Mg(2+) buffers, the calculated ionised concentrations ([X(2+)]) can vary by up to a factor of seven. Since there are no defined standards, it is impossible to check calculated [X(2+)], making measurement essential. The ligand optimisation method (LOM) is an accurate way to measure [X(2+)] in Ca(2+)/Mg(2+) buffers; independent estimation of ligand purity extends the method to pK(/) buffers, to calculate electrode and buffer characteristics as a function of Σ. Ca(2+)-electrodes have a Σ buffers. These results demonstrated that it is pK(/) that is normally distributed. Until defined standards are available, [X(2+)] in Ca(2+)/Mg(2+) buffers has to be measured. The most appropriate method is to use Ca(2+)/Mg(2+) electrodes combined with the Excel programs SALE or AEC. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Travelling Methods: Tracing the Globalization of Qualitative Communication Research

    Bryan C. Taylor

    2016-05-01

    Existing discussion of the relationships between globalization, communication research, and qualitative methods emphasizes two images: the challenges posed by globalization to existing communication theory and research methods, and the impact of post-colonial politics and ethics on qualitative research. In this paper we draw on a third image – qualitative research methods as artifacts of globalization – to explore the globalization of qualitative communication research methods. Following a review of literature which tentatively models this process, we discuss two case studies of qualitative research in the disciplinary subfields of intercultural communication and media audience studies. These cases elaborate the forces which influence the articulation of national, disciplinary, and methodological identities which mediate the globalization of qualitative communication research methods.

  6. An effective approach to reducing strategy space for maintenance optimisation of multistate series–parallel systems

    Zhou, Yifan; Lin, Tian Ran; Sun, Yong; Bian, Yangqing; Ma, Lin

    2015-01-01

    Maintenance optimisation of series–parallel systems is a research topic of practical significance. Nevertheless, a cost-effective maintenance strategy is difficult to obtain due to the large strategy space for maintenance optimisation of such systems. Heuristic algorithms are often employed to deal with this problem. However, the solution obtained by a heuristic algorithm is not always the global optimum, and the algorithm itself can be very time-consuming. An alternative method based on linear programming is thus developed in this paper to overcome these difficulties by reducing the strategy space of maintenance optimisation. A theoretical proof is provided in the paper to verify that the proposed method is at least as effective as the existing methods for strategy space reduction. Numerical examples for maintenance optimisation of series–parallel systems having multistate components and considering both economic dependence among components and multiple-level imperfect maintenance are also presented. The simulation results confirm that the proposed method is more effective than the existing methods in removing inappropriate maintenance strategies of multistate series–parallel systems. - Highlights: • A new method using linear programming is developed to reduce the strategy space. • The effectiveness of the new method for strategy reduction is theoretically proved. • Imperfect maintenance and economic dependence are considered during optimisation

  7. Design of optimised backstepping controller for the synchronisation ...

    Ehsan Fouladi

    2017-12-18

    Dec 18, 2017 ... for the proposed optimised method compared to PSO optimised controller or any non-optimised backstepping controller. Keywords. Colpitts oscillator; backstepping controller; chaos synchronisation; shark smell algorithm; particle .... The velocity model is based on the gradient of the objective function, tilting ...

  8. Efficient topology optimisation of multiscale and multiphysics problems

    Alexandersen, Joe

    The aim of this Thesis is to present efficient methods for optimising high-resolution problems of a multiscale and multiphysics nature. The Thesis consists of two parts: one treating topology optimisation of microstructural details and the other treating topology optimisation of conjugate heat...

  9. The method of global learning in teaching foreign languages

    Tatjana Dragovič

    2001-12-01

    The authors describe the method of global learning of foreign languages, which is based on the principles of neurolinguistic programming (NLP). According to this theory, the educator should use the method of so-called periphery learning, where students learn relaxation techniques and at the same time »incidentally« or subconsciously learn a foreign language. The method of global learning imitates successful strategies of learning in early childhood and therefore creates a relaxed attitude towards learning. Global learning is also compared with standard methods.

  10. Time since discharge of 9mm cartridges by headspace analysis, part 1: Comprehensive optimisation and validation of a headspace sorptive extraction (HSSE) method.

    Gallidabino, M; Romolo, F S; Weyermann, C

    2017-03-01

    Estimating the time since discharge of spent cartridges can be a valuable tool in the forensic investigation of firearm-related crimes. To reach this aim, it was previously proposed that the decrease of volatile organic compounds released during discharge be monitored over time using non-destructive headspace extraction techniques. While promising results were obtained for large-calibre cartridges (e.g., shotgun shells), handgun calibres yielded unsatisfying results. In addition to the natural complexity of the specimen itself, these can also be attributed to some selective choices in method development. Thus, the present series of papers aims to more systematically evaluate the potential of headspace analysis to estimate the time since discharge of cartridges, through the use of more comprehensive analytical and interpretative techniques. Specifically, in this first part, a method based on headspace sorptive extraction (HSSE) was comprehensively optimised and validated, as the latter recently proved to be a more efficient alternative to previous approaches. For this purpose, 29 volatile organic compounds were preliminarily selected on the basis of previous works. A multivariate statistical approach based on design of experiments (DOE) was used to optimise variables potentially involved in interaction effects. Introduction of deuterated analogues in the sampling vials was also investigated as a strategy to account for analytical variations. Analysis was carried out in selected ion mode by gas chromatography coupled to mass spectrometry (GC-MS). Results showed good chromatographic resolution as well as good detection limits and peak area repeatability. Application to 9mm spent cartridges confirmed that the use of co-extracted internal standards allowed for improved reproducibility of the measured signals. The validated method will be applied in the second part of this work to estimate the time since discharge of 9mm spent cartridges using multivariate models.

  11. How to apply the Score-Function method to standard discrete event simulation tools in order to optimise a set of system parameters simultaneously: A Job-Shop example will be discussed

    Nielsen, Erland Hejn

    2000-01-01

    During the last 1-2 decades, simulation optimisation of discrete event dynamic systems (DEDS) has made considerable theoretical progress with respect to computational efficiency. The score-function (SF) method and the infinitesimal perturbation analysis (IPA) are two candidates belonging to this ...

  12. The Tunneling Method for Global Optimization in Multidimensional Scaling.

    Groenen, Patrick J. F.; Heiser, Willem J.

    1996-01-01

    A tunneling method for global minimization in multidimensional scaling is introduced and adjusted for multidimensional scaling with general Minkowski distances. The method alternates a local search step with a tunneling step in which a different configuration is sought with the same STRESS implementation. (SLD)
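The record only names the idea; as a toy illustration (1-D, with an invented function and parameters, nothing to do with the STRESS loss of multidimensional scaling), the alternation of a local-descent phase with a tunneling phase that looks for a *different* point of equal or lower function value can be sketched as:

```python
def tunneling_minimise(f, x0, step=0.01, iters=2000, span=(-10.0, 10.0), tol=1e-9):
    """Toy 1-D tunneling scheme: (1) run local gradient descent to a minimum,
    (2) 'tunnel' by scanning for a different point whose value is at least as
    low as the current minimum, then descend again from there."""
    def local_descent(x):
        for _ in range(iters):
            g = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6  # central-difference gradient
            x_new = x - step * g
            if abs(x_new - x) < tol:
                break
            x = x_new
        return x

    x_star = local_descent(x0)
    f_star = f(x_star)
    for _ in range(20):  # a few tunneling cycles
        found = None
        lo, hi = span
        n = 2000
        for i in range(n + 1):  # crude grid scan for a tunneling point
            x = lo + (hi - lo) * i / n
            if abs(x - x_star) > 0.5 and f(x) <= f_star:
                found = x
                break
        if found is None:  # no lower basin reachable: stop
            break
        x_star = local_descent(found)
        f_star = f(x_star)
    return x_star, f_star

# Two-well example: descent alone from x0 = 1.0 would stop in the local basin
# near x ≈ 1.35; the tunneling step lets the search reach the deeper basin
# near x ≈ -1.47.
x_min, f_min = tunneling_minimise(lambda x: x**4 - 4 * x**2 + x, x0=1.0)
```

A real multidimensional-scaling implementation would descend on the STRESS function over configuration coordinates and solve for the tunneling point analytically or with a dedicated root search rather than a grid scan; the sketch only conveys the alternation the abstract describes.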

  13. Global/local methods for probabilistic structural analysis

    Millwater, H. R.; Wu, Y.-T.

    1993-04-01

    A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations, with a more refined local model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc., and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models, which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program, with the finite element method used for the structural modeling. The results clearly indicate a significant computer savings with minimal loss in accuracy.

  14. Methodological principles for optimising functional MRI experiments

    Wuestenberg, T.; Giesel, F.L.; Strasburger, H.

    2005-01-01

    Functional magnetic resonance imaging (fMRI) is one of the most common methods for localising neuronal activity in the brain. Even though the sensitivity of fMRI is comparatively low, the optimisation of certain experimental parameters makes it possible to obtain reliable results. In this article, approaches for optimising the experimental design, imaging parameters and analytic strategies are discussed. Clinical neuroscientists and interested physicians will receive practical rules of thumb for improving the efficiency of brain imaging experiments. (orig.)

  15. Optimised Renormalisation Group Flows

    Litim, Daniel F

    2001-01-01

    Exact renormalisation group (ERG) flows interpolate between a microscopic or classical theory and the corresponding macroscopic or quantum effective theory. For most problems of physical interest, the efficiency of the ERG is constrained due to unavoidable approximations. Approximate solutions of ERG flows depend spuriously on the regularisation scheme which is determined by a regulator function. This is similar to the spurious dependence on the ultraviolet regularisation known from perturbative QCD. Providing a good control over approximated ERG flows is at the root for reliable physical predictions. We explain why the convergence of approximate solutions towards the physical theory is optimised by appropriate choices of the regulator. We study specific optimised regulators for bosonic and fermionic fields and compare the optimised ERG flows with generic ones. This is done up to second order in the derivative expansion at both vanishing and non-vanishing temperature. An optimised flow for a ``proper-time ren...
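For orientation (stated here from general knowledge of the ERG literature, not taken from this record): the regulator choice usually called "optimised" in this context is commonly written as

```latex
R_k(q^2) \;=\; \left(k^2 - q^2\right)\,\theta\!\left(k^2 - q^2\right),
```

where θ is the Heaviside step function. With this choice the regularised inverse propagator becomes flat, q² + R_k(q²) = k², for all modes with q² ≤ k², which is the property usually cited for the improved convergence of approximate flows discussed in the abstract.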

  16. Balanced the Trade-offs problem of ANFIS Using Particle Swarm Optimisation

    Dian Palupi Rini

    2013-11-01

    Improving the approximation accuracy and interpretability of fuzzy systems is an important issue both in fuzzy systems theory and in its applications. It is known that the simultaneous optimisation of both issues is a trade-off problem, but it improves the performance of the system and avoids overtraining of the data. Particle swarm optimisation (PSO) is an evolutionary algorithm that is a good candidate for finding multiple optimal solutions and for better global search of the space. This paper introduces an integration of PSO and ANFIS that optimises ANFIS learning, in particular the tuning of membership function parameters and the search for optimal rules for better classification. The proposed method has been tested on four standard datasets from the UCI machine learning repository: Iris Flower, Haberman's Survival, Balloon and Thyroid. The results show better classification using the proposed PSO-ANFIS, with the time complexity reduced accordingly.
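As context for readers unfamiliar with PSO (a generic textbook sketch, not the PSO-ANFIS coupling of the paper; all parameter values are conventional defaults, not taken from the record), the global-best update that gives PSO its search power looks like this:

```python
import random

def pso_minimise(f, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Bare-bones global-best PSO minimising f over the box `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in pos]          # personal best positions
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]  # swarm (global) best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            f_i = f(pos[i])
            if f_i < pbest_f[i]:
                pbest[i], pbest_f[i] = list(pos[i]), f_i
                if f_i < gbest_f:
                    gbest, gbest_f = list(pos[i]), f_i
    return gbest, gbest_f

# Toy usage: minimise a 2-D quadratic
best, best_val = pso_minimise(lambda v: sum(x * x for x in v), bounds=[(-5, 5)] * 2)
```

In the paper's setting the position vector would encode ANFIS membership-function parameters and the objective would be a classification error, but the update equations are the same.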

  17. Design of optimised backstepping controller for the synchronisation of chaotic Colpitts oscillator using shark smell algorithm

    Fouladi, Ehsan; Mojallali, Hamed

    2018-01-01

    In this paper, an adaptive backstepping controller has been tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of particle swarm optimisation (PSO) algorithm. Simulation results show better performance in terms of accuracy and convergence for the proposed optimised method compared to PSO optimised controller or any non-optimised backstepping controller.

  18. Numerical flow simulation methods and additive manufacturing methods for the development of a flow optimised design of a novel point-of-care diagnostic device

    Dethloff Manuel

    2017-09-01

    For the development of a novel, user-friendly and low-cost point-of-care diagnostic device for the detection of disease-specific biomarkers, a flow-optimised design of the test system has to be investigated. The resulting test system is characterised by a reduced execution period, a reduction of execution steps and an integrated waste management. Based on previous results, the current study focused on the design implementation of the fluidic requirements, e.g. tightness, inside the test device. With the help of fluid flow simulations (CFD, computational fluid dynamics), the flow behaviour inside the test device was analysed for different designs and arrangements. Prototypes generated by additive manufacturing technologies (PolyJet modeling) are used for validating the simulation results and for further experimental tests.

  19. Optimisation of load control

    Koponen, P.

    1998-01-01

    Electricity cannot be stored in large quantities. That is why the electricity supply and consumption are always almost equal in large power supply systems. If this balance were disturbed beyond stability, the system or a part of it would collapse until a new stable equilibrium is reached. The balance between supply and consumption is mainly maintained by controlling the power production, but the electricity consumption, or in other words the load, is also controlled. Controlling the load of the power supply system is important if easily controllable power production capacity is limited. Temporary shortage of capacity causes high peaks in the energy price in the electricity market. Load control either reduces the electricity consumption during peak consumption and peak price or moves electricity consumption to some other time. The project Optimisation of Load Control is a part of the EDISON research program for distribution automation. The following areas were studied: optimization of space heating and ventilation when the electricity price is time-variable; a load control model in power purchase optimization; optimization of direct load control sequences; the interaction between load control optimization and power purchase optimization; the literature on load control and optimization methods; and field tests and response models of direct load control and the effects of the electricity market deregulation on load control. An overview of the main results is given in this chapter

  20. Optimisation of load control

    Koponen, P [VTT Energy, Espoo (Finland)

    1998-08-01

    Electricity cannot be stored in large quantities. That is why the electricity supply and consumption are always almost equal in large power supply systems. If this balance were disturbed beyond stability, the system or a part of it would collapse until a new stable equilibrium is reached. The balance between supply and consumption is mainly maintained by controlling the power production, but the electricity consumption, or in other words the load, is also controlled. Controlling the load of the power supply system is important if easily controllable power production capacity is limited. Temporary shortage of capacity causes high peaks in the energy price in the electricity market. Load control either reduces the electricity consumption during peak consumption and peak price or moves electricity consumption to some other time. The project Optimisation of Load Control is a part of the EDISON research program for distribution automation. The following areas were studied: optimization of space heating and ventilation when the electricity price is time-variable; a load control model in power purchase optimization; optimization of direct load control sequences; the interaction between load control optimization and power purchase optimization; the literature on load control and optimization methods; and field tests and response models of direct load control and the effects of the electricity market deregulation on load control. An overview of the main results is given in this chapter

  1. Multicriteria Optimisation in Logistics Forwarder Activities

    Tanja Poletan Jugović

    2007-05-01

    The logistics forwarder, as organizer and planner of the coordination and integration of all elements of transport and logistics chains, uses adequate ways and methods in the process of planning and decision-making. One of these methods, analysed in this paper, which could be used in the optimisation of transport and logistics processes and activities of the logistics forwarder, is the multicriteria optimisation method. Using that method, this paper suggests a model of multicriteria optimisation of logistics forwarder activities. The suggested model of optimisation is justified in keeping with the principles of multicriteria optimization, which belongs to the operations research methods and represents the process of multicriteria optimization of variants. Among the many different processes of multicriteria optimization, PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations) and Promcalc & Gaia V. 3.2, a computer program for multicriteria programming based on the mentioned process, were used.
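For readers unfamiliar with PROMETHEE, the core computation can be conveyed with a minimal sketch. The version below uses only the simplest "usual" preference function (prefer a over b on a criterion if a is strictly better; real PROMETHEE applications, and Promcalc, support several preference-function shapes with indifference and preference thresholds), aggregated into PROMETHEE II net flows:

```python
import numpy as np

def promethee_ii(scores, weights, maximise=None):
    """Minimal PROMETHEE II with the 'usual' preference function.
    `scores` is an alternatives x criteria matrix; returns net outranking
    flows (higher = better)."""
    scores = np.asarray(scores, dtype=float)
    n, m = scores.shape
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalise criterion weights
    if maximise is None:
        maximise = [True] * m
    pi = np.zeros((n, n))                      # aggregated preference of a over b
    for j in range(m):
        col = scores[:, j] if maximise[j] else -scores[:, j]
        pref = (col[:, None] > col[None, :]).astype(float)  # usual criterion
        pi += weights[j] * pref
    phi_plus = pi.sum(axis=1) / (n - 1)        # leaving (positive) flow
    phi_minus = pi.sum(axis=0) / (n - 1)       # entering (negative) flow
    return phi_plus - phi_minus                # net flow (PROMETHEE II ranking)

# Three hypothetical forwarder variants scored on two criteria (higher = better)
phi = promethee_ii([[9, 8], [5, 5], [1, 2]], weights=[1, 1])
# the first variant dominates the others, so it receives the highest net flow
```

The complete ranking of variants is then obtained simply by sorting the alternatives by their net flow.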

  2. Methods of Optimisation of the Structure of the Dealing Bank with a Limited Base of Counter-Agents

    Novak Sergіy M.

    2013-11-01

    The article considers methods of assessing the optimal parameters of a dealing bank service with a limited base of counter-agents. The methods are based on a mathematical model of the microstructure of the inter-bank currency market. The key parameters of the infrastructure of the dealing service within the framework of the model are: the number of authorised traders, the contingent of counter-agents, the quotation policy, the main parameters of the currency market (spread and volatility of quotations) and the resulting indicators of efficiency of the dealing service (profit and probability of break-even operation). The methods allow identification of the optimal parameters of the infrastructure of the dealing bank service based on indicators of the dynamics of currency risks and the market environment of the bank. On the basis of the developed mathematical model, the article develops methods for planning calculations of the parameters of the infrastructure of the dealing bank service that are required for ensuring a necessary level of efficiency with given parameters of the currency market. Application of these methods makes it possible to assess indicators of the operation of the bank's front office depending on its scale.

  3. Optimisation: how to develop stake holder involvement

    Weiss, W.

    2003-01-01

    The Precautionary Principle is an internationally recognised approach for dealing with risk situations characterised by uncertainties and potentially irreversible damage. Since the late fifties, ICRP has adopted this prudent attitude because of the lack of scientific evidence concerning the existence of a threshold at low doses for stochastic effects. The 'linear, no-threshold' model and the 'optimisation of protection' principle have been developed as a pragmatic response for the management of the risk. The progress in epidemiology and radiobiology over the last decades has affirmed the initial assumption, and optimisation remains the appropriate response for the application of the precautionary principle in the context of radiological protection. The basic objective of optimisation is, for any source within the system of radiological protection, to maintain the level of exposure as low as reasonably achievable, taking into account social and economic factors. Methods, tools and procedures have been developed over the last two decades to put the optimisation principle into practice, with a central role given to cost-benefit analysis as a means to determine the optimised level of protection. However, with advances in the implementation of the principle, more emphasis was progressively given to good practice, as well as to the importance of controlling individual levels of exposure through the optimisation process. In the context of the revision of its present recommendations, the Commission is reinforcing the emphasis on protection of the individual with the adoption of an equity-based system that recognizes individual rights and a basic level of health protection. Another advancement is the role that is now recognised for 'stakeholder involvement' in the optimisation process as a means to improve the quality of the decision-aiding process for identifying and selecting protection actions considered as being accepted by all those involved. The paper

  4. Dose optimisation in single plane interstitial brachytherapy

    Tanderup, Kari; Hellebust, Taran Paulsen; Honoré, Henriette Benedicte

    2006-01-01

    BACKGROUND AND PURPOSE: Brachytherapy dose distributions can be optimised by modulation of source dwell times. In this study, dose optimisation in single planar interstitial implants was evaluated in order to quantify the potential benefit in patients. MATERIAL AND METHODS: In 14 patients, treated for recurrent rectal and cervical cancer, flexible catheters were sutured intra-operatively to the tumour bed in areas with compromised surgical margin. Both non-optimised, geometrically and graphically optimised CT-based dose plans were made. The overdose index ... on the regularity of the implant, such that the benefit of optimisation was larger for irregular implants. OI and HI correlated strongly with target volume, limiting the usability of these parameters for comparison of dose plans between patients. CONCLUSIONS: Dwell time optimisation significantly ...

  5. An Optimal Method for Developing Global Supply Chain Management System

    Hao-Chun Lu

    2013-01-01

    Owing to the transparency in supply chains, enhancing the competitiveness of industries becomes a vital factor. Therefore, many developing countries look for a possible method to save costs. From this point of view, this study deals with the complicated liberalization policies in the global supply chain management system and proposes a mathematical model, via flow-control constraints, which is utilized to cope with bonded warehouses for obtaining maximal profits. Numerical experiments illustrate that the proposed model can be effectively solved to obtain the optimal profits in the global supply chain environment.
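The record's model is not reproduced here, but the flavour of flow-control constraints in a profit-maximising supply-chain LP can be conveyed with a toy example (all numbers and the two-route structure are invented; SciPy's `linprog` is used as a generic LP solver):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical two-route problem: ship x1 units directly and x2 units via a
# bonded warehouse; meet a fixed demand while respecting warehouse capacity.
profit = np.array([4.0, 6.0])              # profit per unit on each route
res = linprog(
    c=-profit,                             # linprog minimises, so negate profit
    A_eq=[[1.0, 1.0]], b_eq=[100.0],       # flow-control: total flow = demand
    A_ub=[[0.0, 1.0]], b_ub=[60.0],        # bonded-warehouse capacity limit
    bounds=[(0, None), (0, None)],
)
x_direct, x_bonded = res.x
total_profit = -res.fun
```

The bonded route is more profitable per unit, so the solver fills the warehouse to capacity (60 units) and routes the remaining 40 units directly, for a total profit of 520; the actual model in the record extends this idea to the full global supply-chain network.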

  6. Optimal reactive power and voltage control in distribution networks with distributed generators by fuzzy adaptive hybrid particle swarm optimisation method

    Chen, Shuheng; Hu, Weihao; Su, Chi

    2015-01-01

    A new and efficient methodology for optimal reactive power and voltage control of distribution networks with distributed generators based on fuzzy adaptive hybrid PSO (FAHPSO) is proposed. The objective is to minimize comprehensive cost, consisting of power loss and operation cost of transformers … The algorithm is implemented in the VC++ 6.0 programming language and the corresponding numerical experiments are performed on the modified version of the IEEE 33-node distribution system with two newly installed distributed generators and eight newly installed capacitor banks. The numerical results prove … that the proposed method can search a more promising control schedule of all transformers, all capacitors and all distributed generators with less time consumption, compared with other listed artificial intelligence methods.
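
    The global-best particle swarm core underlying such methods can be sketched as follows; this is a generic PSO on a stand-in objective, not the fuzzy adaptive hybrid variant of the paper (the inertia weight and acceleration coefficients are typical textbook values):

```python
import numpy as np

rng = np.random.default_rng(3)

def pso(f, dim=4, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    x = rng.uniform(lo, hi, (n, dim))          # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    pbest = x.copy()                           # personal best positions
    pf = np.apply_along_axis(f, 1, x)          # personal best values
    g = pbest[pf.argmin()].copy()              # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pf
        pbest[improved], pf[improved] = x[improved], fx[improved]
        g = pbest[pf.argmin()].copy()
    return g, float(pf.min())

sphere = lambda p: float(np.sum(p ** 2))       # stand-in for the power-loss objective
g_best, f_best = pso(sphere)
```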

  7. Deconvolution of gamma energy spectra from NaI (Tl) detector using the Nelder-Mead zero order optimisation method

    RAVELONJATO, R.H.M.

    2010-01-01

    The aim of this work is to develop a method for gamma-ray spectrum deconvolution from a NaI(Tl) detector. Deconvolution programs written in Matlab 7.6 using the Nelder-Mead method were developed to determine multiplet shape parameters. The simulation parameters were: centroid distance/FWHM ratio, signal/continuum ratio and counting rate. The test spectrum was synthetic, built with 3σ uncertainty. The tests gave suitable results for centroid distance/FWHM ratio ≥ 2, signal/continuum ratio ≥ 2 and a counting level of 100 counts. The technique was applied to measure the activity of soil and rock samples from the Anosy region. The rock activity varies from (140±8) Bq.kg-1 to (190±17) Bq.kg-1 for potassium-40; from (343±7) Bq.kg-1 to (881±6) Bq.kg-1 for thorium-232 and from (100±3) Bq.kg-1 to (164±4) Bq.kg-1 for uranium-238. The soil activity varies from (148±1) Bq.kg-1 to (652±31) Bq.kg-1 for potassium-40; from (1100±11) Bq.kg-1 to (5700±40) Bq.kg-1 for thorium-232 and from (190±2) Bq.kg-1 to (779±15) Bq.kg-1 for uranium-238. Among 11 samples, the discrepancies in activity values compared to a high-resolution HPGe detector vary from 0.62% to 42.86%. The fitting residuals are between -20% and +20%. The Figure of Merit values are around 5%. These results show that the method developed is reliable for such an activity range and the convergence is good. Thus, a NaI(Tl) detector combined with the deconvolution method developed may replace an HPGe detector within an acceptable limit, if the identification of each nuclide in the radioactive series is not required [fr
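
    The deconvolution idea, fitting overlapping Gaussian peaks by minimising a least-squares cost with the derivative-free Nelder-Mead simplex, can be sketched in Python (the original work used Matlab; the peak parameters below are synthetic):

```python
import numpy as np
from scipy.optimize import minimize

def doublet(x, p):
    # Two overlapping Gaussian peaks: (amplitude, centroid, width) each
    a1, c1, s1, a2, c2, s2 = p
    return (a1 * np.exp(-0.5 * ((x - c1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - c2) / s2) ** 2))

x = np.arange(0.0, 100.0, 1.0)                 # channel axis
true = [500.0, 40.0, 5.0, 300.0, 60.0, 5.0]    # synthetic "unknown" multiplet
rng = np.random.default_rng(0)
spectrum = doublet(x, true) + rng.normal(0.0, 2.0, x.size)  # noisy spectrum

# Least-squares cost minimised by the zero-order (derivative-free) simplex
cost = lambda p: float(np.sum((spectrum - doublet(x, p)) ** 2))
res = minimize(cost, x0=[400, 38, 4, 250, 62, 4], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-6, "maxiter": 5000})
```

The fitted centroids `res.x[1]` and `res.x[4]` recover the true peak positions to well under a channel for this well-separated case.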

  8. A general first-order global sensitivity analysis method

    Xu Chonggang; Gertner, George Zdzislaw

    2008-01-01

    Fourier amplitude sensitivity test (FAST) is one of the most popular global sensitivity analysis techniques. The main mechanism of FAST is to assign each parameter with a characteristic frequency through a search function. Then, for a specific parameter, the variance contribution can be singled out of the model output by the characteristic frequency. Although FAST has been widely applied, there are two limitations: (1) the aliasing effect among parameters by using integer characteristic frequencies and (2) the suitability for only models with independent parameters. In this paper, we synthesize the improvement to overcome the aliasing effect limitation [Tarantola S, Gatelli D, Mara TA. Random balance designs for the estimation of first order global sensitivity indices. Reliab Eng Syst Safety 2006; 91(6):717-27] and the improvement to overcome the independence limitation [Xu C, Gertner G. Extending a global sensitivity analysis technique to models with correlated parameters. Comput Stat Data Anal 2007, accepted for publication]. In this way, FAST can be a general first-order global sensitivity analysis method for linear/nonlinear models with as many correlated/uncorrelated parameters as the user specifies. We apply the general FAST to four test cases with correlated parameters. The results show that the sensitivity indices derived by the general FAST are in good agreement with the sensitivity indices derived by the correlation ratio method, which is a non-parametric method for models with correlated parameters
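
    A minimal sketch of classic first-order FAST (the baseline method, without the two extensions discussed above) for a model with independent uniform parameters: each parameter is driven along a search curve at its own characteristic frequency, and the first-order index is read off the spectral power at that frequency and its harmonics. The frequency set and the linear test model are illustrative:

```python
import numpy as np

def fast_first_order(model, omegas, n=1000, m=4):
    s = 2 * np.pi * np.arange(n) / n
    # search curves: each parameter sweeps [0, 1] at its own frequency
    x = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi
    y = model(x)
    # Fourier coefficients of the model output along the search curve
    A = 2 / n * np.array([np.sum(y * np.cos(j * s)) for j in range(1, n // 2)])
    B = 2 / n * np.array([np.sum(y * np.sin(j * s)) for j in range(1, n // 2)])
    power = (A ** 2 + B ** 2) / 2
    total = power.sum()
    # first-order index: spectral power at each frequency and its first m harmonics
    return [sum(power[h * w - 1] for h in range(1, m + 1)) / total for w in omegas]

# Linear test model: analytic indices are a_i^2 / sum(a_j^2) = 16/21, 4/21, 1/21
model = lambda x: 4 * x[0] + 2 * x[1] + 1 * x[2]
S = fast_first_order(model, [11, 21, 29])
```

The frequencies 11, 21, 29 are chosen so that no harmonic up to order 4 collides with another parameter's harmonics, which is the interference condition the paper's first improvement relaxes.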

  9. Method for a component-based economic optimisation in design of whole building renovation versus demolishing and rebuilding

    Morelli, Martin; Harrestrup, Maria; Svendsen, Svend

    2014-01-01

    … investing in retrofit measures and buying renewable energy. The overall cost of the renovation considers the market value of the property, the investment in the renovation, and the operational and maintenance costs. A multi-family building is used as an example to clearly illustrate the application of the method from macroeconomic and private financial perspectives. Conclusion: The example shows that the investment cost and future market value of the building are the dominant factors in deciding whether to renovate an existing building or to demolish it and thereafter erect a new building. Additionally …

  10. Value of information methods to design a clinical trial in a small population to optimise a health economic utility function

    Michael Pearce

    2018-02-01

    Full Text Available Abstract Background Most confirmatory randomised controlled clinical trials (RCTs are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test with other information to balance the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population. Methods We considered the use of a decision-theoretic value of information (VOI method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application in an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future. Results The optimal sample size for the clinical trial decreases as the size of the population decreases. For non-zero cost of treating future patients, either monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored. Conclusions Decision-theoretic VOI methods offer a flexible approach with both type I error rate and power (or equivalently

  11. The Direct Lighting Computation in Global Illumination Methods

    Wang, Changyaw Allen

    1994-01-01

    Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem into an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; on Monte Carlo sampling methods; and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment, which for the first time makes ray tracing feasible for highly complex environments.
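
    The direct-lighting integral can be illustrated with a minimal Monte Carlo area-sampling estimator (a generic sketch, not the thesis's sample-generation method): the irradiance at a point directly below a Lambertian disk light has the closed form E = πL·R²/(R² + h²), which provides a check on the estimate:

```python
import numpy as np

rng = np.random.default_rng(4)

# Disk light of radius R and radiance L at height h above a receiving point
# on an upward-facing surface; both cosines equal h / r for this geometry.
L, R, h, n = 1.0, 1.0, 1.0, 200_000

# Uniform area sampling of the disk (sqrt trick gives uniform density in area)
d = R * np.sqrt(rng.random(n))        # radial distance of each light sample
r2 = h * h + d * d                    # squared distance point -> sample
area = np.pi * R * R
# Estimator: (A / N) * sum of L * cos(theta) * cos(theta') / r^2, cos = h/sqrt(r2)
E_mc = area * np.mean(L * (h * h) / (r2 * r2))

E_exact = np.pi * L * R * R / (R * R + h * h)   # closed-form irradiance
```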

  12. Mechatronic System Design Based On An Optimisation Approach

    Andersen, Torben Ole; Pedersen, Henrik Clemmensen; Hansen, Michael Rygaard

    The objective of this project is to extend the current state of the art regarding the design of complex mechatronic systems by utilizing an optimisation approach. We propose to investigate a novel framework for mechatronic system design, the novelty and originality being the use of optimisation techniques. The methods used to optimise/design within the classical disciplines will be identified and extended to mechatronic system design.

  13. Analysis and optimisation of heterogeneous real-time embedded systems

    Pop, Paul; Eles, Petru; Peng, Zebo

    2005-01-01

    … The success of such new design methods depends on the availability of analysis and optimisation techniques. Analysis and optimisation techniques for heterogeneous real-time embedded systems are presented in the paper. The authors address in more detail a particular class of such systems called multi… of application messages to frames. Optimisation heuristics for frame packing aimed at producing a schedulable system are presented. Extensive experiments and a real-life example show the efficiency of the frame-packing approach.

  14. Analysis and optimisation of heterogeneous real-time embedded systems

    Pop, Paul; Eles, Petru; Peng, Zebo

    2006-01-01

    … The success of such new design methods depends on the availability of analysis and optimisation techniques. Analysis and optimisation techniques for heterogeneous real-time embedded systems are presented in the paper. The authors address in more detail a particular class of such systems called multi… of application messages to frames. Optimisation heuristics for frame packing aimed at producing a schedulable system are presented. Extensive experiments and a real-life example show the efficiency of the frame-packing approach.

  15. Optimising Magnetostatic Assemblies

    Insinga, Andrea Roberto; Smith, Anders

    … theorem. This theorem formulates an energy equivalence principle with several implications concerning the optimisation of objective functionals that are linear with respect to the magnetic field. Linear functionals represent different optimisation goals, e.g. maximising a certain component of the field … approached employing a heuristic algorithm, which led to new design concepts. Some of the procedures developed for linear objective functionals have been extended to non-linear objectives, by employing iterative techniques. Even though most of the optimality results discussed in this work have been derived …

  16. Proposal of Evolutionary Simplex Method for Global Optimization Problem

    Shimizu, Yoshiaki

    To make agile decisions in a rational manner, the role of optimization engineering has received increasing attention under diversified customer demand. With this point of view, in this paper we propose a new evolutionary method serving as an optimization technique in the paradigm of optimization engineering. The developed method has prospects to solve globally various complicated problems appearing in real-world applications. It is evolved from the conventional method known as Nelder and Mead’s Simplex method by virtue of ideas borrowed from recent meta-heuristic methods such as PSO. Presenting an algorithm to handle linear inequality constraints effectively, we validate the effectiveness of the proposed method through comparison with other methods using several benchmark problems.

  17. Value of information methods to design a clinical trial in a small population to optimise a health economic utility function.

    Pearce, Michael; Hee, Siew Wan; Madan, Jason; Posch, Martin; Day, Simon; Miller, Frank; Zohar, Sarah; Stallard, Nigel

    2018-02-08

    Most confirmatory randomised controlled clinical trials (RCTs) are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test with other information to balance the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population. We considered the use of a decision-theoretic value of information (VOI) method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application in an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future. The optimal sample size for the clinical trial decreases as the size of the population decreases. For non-zero cost of treating future patients, either monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored. 
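
    The decision-theoretic idea can be sketched with a Monte Carlo toy model (the prior, approval rule, and costs below are illustrative assumptions, not the utility specified in the paper): the expected net benefit to society is estimated for each candidate sample size and maximised over a grid, and the optimum shrinks as the target population shrinks:

```python
import numpy as np

def expected_net_benefit(n, pop, mu0=0.2, tau0=0.5, sigma=1.0,
                         cost=1.0, n_sim=20_000, seed=1):
    rng = np.random.default_rng(seed)             # common random numbers across n
    theta = rng.normal(mu0, tau0, n_sim)          # prior draws of the true effect
    xbar = rng.normal(theta, sigma / np.sqrt(n))  # predictive trial outcome
    # conjugate normal update: posterior mean of the effect given the trial
    w = (n / sigma**2) / (n / sigma**2 + 1 / tau0**2)
    post_mean = w * xbar + (1 - w) * mu0
    approve = post_mean > 0.0                     # approve when the effect looks positive
    gain = pop * np.mean(theta * approve)         # utility accruing to future patients
    return gain - cost * n                        # minus the cost of running the trial

def optimal_sample_size(pop, grid=range(10, 510, 10)):
    return max(grid, key=lambda n: expected_net_benefit(n, pop))

n_small = optimal_sample_size(500)      # small target population
n_large = optimal_sample_size(20_000)   # large target population
```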
Decision-theoretic VOI methods offer a flexible approach with both type I error rate and power (or equivalently trial sample size) depending on the size of the future population for

  18. From the patient to the clinical mycology laboratory: how can we optimise microscopy and culture methods for mould identification?

    Vyzantiadis, Timoleon-Achilleas A; Johnson, Elizabeth M; Kibbler, Christopher C

    2012-06-01

    The identification of fungi relies mainly on morphological criteria. However, there is a need for robust and definitive phenotypic identification procedures in order to evaluate continuously evolving molecular methods. For the future, there is an emerging consensus that a combined (phenotypic and molecular) approach is more powerful for fungal identification, especially for moulds. Most of the procedures used for phenotypic identification are based on experience rather than comparative studies of effectiveness or performance and there is a need for standardisation among mycology laboratories. This review summarises and evaluates the evidence for the major existing phenotypic identification procedures for the predominant causes of opportunistic mould infection. We have concentrated mainly on Aspergillus, Fusarium and mucoraceous mould species, as these are the most important clinically and the ones for which there are the most molecular taxonomic data.

  19. Proceedings of the Workshop on Current and Emerging methods for Optimising Safety and Efficiency in Nuclear Decommissioning

    2017-02-01

    The workshop was organised by the Institute for Energy Technology (IFE) on behalf of the OECD Halden Reactor Project (OECD-HRP) and in collaboration with the International Atomic Energy Agency (IAEA) and the Nuclear Energy Agency (NEA). The workshop brought together more than 100 people (operators, regulators, scientists, consultants, and contractors) from 25 countries. Program: Day 1 - Successful application of R and D in decommissioning and future needs (Welcome and Opening Speeches; Session 1: Workshop Introductory Presentations; Session 2: Experience from starting, on-going and completed decommissioning projects). Day 2 - R and D and application of advanced technologies for decommissioning (Session 3: New technologies for decommissioning; Session 4: Advanced information technologies for decommissioning). Day 3 - Improving decommissioning management on project, national and international level (Session 5: Challenges and methods for improving decommissioning; Session 6: Workshop closing)

  20. Optimisation-based worst-case analysis and anti-windup synthesis for uncertain nonlinear systems

    Menon, Prathyush Purushothama

    This thesis describes the development and application of optimisation-based methods for worst-case analysis and anti-windup synthesis for uncertain nonlinear systems. The worst-case analysis methods developed in the thesis are applied to the problem of nonlinear flight control law clearance for highly augmented aircraft. Local, global and hybrid optimisation algorithms are employed to evaluate worst-case violations of a nonlinear response clearance criterion, for a highly realistic aircraft simulation model and flight control law. The reliability and computational overheads associated with different optimisation algorithms are compared, and the capability of optimisation-based approaches to clear flight control laws over continuous regions of the flight envelope is demonstrated. An optimisation-based method for computing worst-case pilot inputs is also developed, and compared with current industrial approaches for this problem. The importance of explicitly considering uncertainty in aircraft parameters when computing worst-case pilot demands is clearly demonstrated. Preliminary results on extending the proposed framework to the problems of limit-cycle analysis and robustness analysis in the presence of time-varying uncertainties are also included. A new method for the design of anti-windup compensators for nonlinear constrained systems controlled using nonlinear dynamics inversion control schemes is presented and successfully applied to some simple examples. An algorithm based on the use of global optimisation is proposed to design the anti-windup compensator. Some conclusions are drawn from the results of the research presented in the thesis, and directions for future work are identified.
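
    The worst-case analysis idea, searching an uncertainty box with a global optimiser for the largest violation of a clearance criterion, can be sketched on a toy second-order system (the 20% overshoot limit and the damping-ratio bounds are invented for illustration):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Clearance criterion: step-response overshoot of a second-order closed loop
# must stay below 20%. Overshoot depends only on the damping ratio zeta.
def overshoot(zeta):
    return np.exp(-np.pi * zeta / np.sqrt(1.0 - zeta ** 2))

# Worst case = maximum criterion violation over the uncertainty box.
def violation(p):
    zeta = p[0]
    return -(overshoot(zeta) - 0.20)   # negated: differential evolution minimises

result = differential_evolution(violation, bounds=[(0.3, 0.8)], seed=0, tol=1e-8)
worst_zeta = result.x[0]               # uncertainty combination giving worst response
worst_violation = -result.fun          # how badly the criterion is exceeded
```

Since overshoot decreases monotonically with damping, the worst case sits at the lower bound of the box; a global optimiser is only genuinely needed when the criterion is multi-modal in the uncertain parameters, which is the situation addressed in the thesis.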

  1. Recovery of microbial community structure and functioning after wildfire in semi-arid environments: optimising methods for monitoring and assessment

    Muñoz-Rojas, Miriam; Martini, Dylan; Erickson, Todd; Merritt, David; Dixon, Kingsley

    2015-04-01

    Introduction In semi-arid areas such as northern Western Australia, wildfires are a natural part of the environment and many ecosystems in these landscapes have evolved and developed a strong relationship with fire. Soil microbial communities play a crucial role in ecosystem processes by regulating the cycling of nutrients via decomposition, mineralization, and immobilization processes. Thus, the structure (e.g. soil microbial biomass) and functioning (e.g. soil microbial activity) of microbial communities, as well as their changes after ecosystem disturbance, can be useful indicators of soil quality and health recovery. In this research, we assess the impacts of fire on soil microbial communities and their recovery in a biodiverse semi-arid environment of Western Australia (Pilbara region). New methods for determining soil microbial respiration as an indicator of microbial activity and soil health are also tested. Methodology Soil samples were collected from 10 similar ecosystems in the Pilbara with analogous native vegetation, but differing levels of post-fire disturbance (i.e. 3 months, 1 year, 5, 7 and 14 years after wildfire). Soil microbial activity was measured with the Solvita test, which determines the soil microbial respiration rate based on the measurement of the CO2 burst of a dry soil after it is moistened. Soils were dried and re-wetted and a CO2 probe was inserted before incubation at constant conditions of 25°C for 24 h. Measurements were taken with a digital mini spectrometer. Microbial (bacteria and fungi) biomass and community composition were measured by phospholipid fatty acid analysis (PLFA). Results Immediately after the fire (i.e. 3 months), soil microbial activity and microbial biomass are similar to 14-year 'undisturbed' levels (53.18±3.68 ppm CO2-CO and 14.07±0.65 mg kg-1, respectively). However, after the first year post-fire, with larger plant productivity, microbial biomass and microbial activity increase rapidly, peaking after 5

  2. Optimising methods of red cell sedimentation from cord blood to maximise nucleated cell recovery prior to cryopreservation.

    Madkaikar, M; Gupta, M; Ghosh, K; Swaminathan, S; Sonawane, L; Mohanty, D

    2007-01-01

    Human cord blood is now an established source of stem cells for haematopoietic reconstitution. Red blood cell (RBC) depletion is required to reduce the cord blood unit volume for commercial banking. Red cell sedimentation using hydroxy ethyl starch (HES) is a standard procedure in most cord blood banks. However, while standardising the procedure for cord blood banking, a significant loss of nucleated cells (NC) may be encountered during standard HES sedimentation protocols. This study compares four procedures for cord blood processing to obtain optimal yield of nucleated cells. Gelatin, dextran, 6% HES and 6% HES with an equal volume of phosphate-buffered saline (PBS) were compared for RBC depletion and NC recovery. Dilution of the cord blood unit with an equal volume of PBS prior to sedimentation with HES resulted in maximum NC recovery (99.5 ± 1.3%). Although standard procedures using 6% HES are well established in Western countries, they may not be applicable in India, as a variety of factors that can affect RBC sedimentation (e.g., iron deficiency, hypoalbuminaemia, thalassaemia trait, etc.) may reduce RBC sedimentation and thus reduce NC recovery. While diluting cord blood with an equal volume of PBS is a simple method to improve the NC recovery, it does involve an additional processing step.

  3. Optimisation and validation of analytical methods for the simultaneous extraction of antioxidants: application to the analysis of tomato sauces.

    Motilva, Maria-José; Macià, Alba; Romero, Maria-Paz; Labrador, Agustín; Domínguez, Alba; Peiró, Lluís

    2014-11-15

    In the present study, simultaneous extraction of natural antioxidants (phenols and carotenoids) in complex matrices, such as tomato sauces, is presented. The tomato sauce antioxidant compounds studied were the phenolics hydroxytyrosol, from virgin olive oil, quercetin and its derivatives, from onions, and quercetin-rutinoside, as well as the carotenoid lycopene (cis and trans), from tomatoes. These antioxidant compounds were extracted simultaneously with n-hexane/acetone/ethanol (50/25/25, v/v/v). The phenolics were analysed by ultra-performance liquid chromatography coupled with tandem mass spectrometry (UPLC-MS/MS), and lycopene (cis- and trans-forms) was analysed using high-performance liquid chromatography coupled to a diode array detector (HPLC-DAD). After the parameters of these methods had been studied, the methods were applied to the analysis of virgin olive oil, fresh onion, tomato concentrate, tomato powder, and five commercial tomato sauces. Subsequently, the results obtained in our laboratory were compared with those from the Gallina Blanca Star Group laboratory. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. THE METHOD OF GLOBAL READING FROM AN INTERDISCIPLINARY PERSPECTIVE

    Jasmina Delcheva Dizdarevikj

    2018-04-01

    Full Text Available Primary literacy in Macedonian education is in decline. This assertion has been proved both by abstract theory and by concrete empirical data. Educational reforms in the national curriculum are on their way, and the implementation of the method of global reading is one of the main innovations. Misunderstanding of this method has led to its being criticized as a foreign import and as unnatural and incongruous with the specificities of the Macedonian language. We think that this argument is wrong. That is why this paper is going to extrapolate and explain the method of global reading and its basis in pedagogy, philosophy, psychology, anthropology and linguistics. The main premise of this paper is the relation of the part to the whole, understood from the different perspectives of philosophy, psychology, linguistics and anthropology. The theories of Kant, Cassirer, Bruner, Benveniste and Geertz are going to be considered in the context of the part-whole problem, by themselves, and also in their relation to the method of global reading.

  5. Modified cuckoo search: A new gradient free optimisation algorithm

    Walton, S.; Hassan, O.; Morgan, K.; Brown, M.R.

    2011-01-01

    Highlights: → Modified cuckoo search (MCS) is a new gradient free optimisation algorithm. → MCS shows a high convergence rate, able to outperform other optimisers. → MCS is particularly strong at high dimension objective functions. → MCS performs well when applied to engineering problems. - Abstract: A new robust optimisation algorithm, which can be regarded as a modification of the recently developed cuckoo search, is presented. The modification involves the addition of information exchange between the top eggs, or the best solutions. Standard optimisation benchmarking functions are used to test the effects of these modifications and it is demonstrated that, in most cases, the modified cuckoo search performs as well as, or better than, the standard cuckoo search, a particle swarm optimiser, and a differential evolution strategy. In particular the modified cuckoo search shows a high convergence rate to the true global minimum even at high numbers of dimensions.
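
    A compact sketch of a cuckoo-search-style optimiser that includes the paper's key modification, information exchange between the top solutions (modelled here as a simple midpoint exchange between the two best nests; step sizes, rates, and the test function are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def levy_step(dim, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim=5, n_nests=25, iters=300, pa=0.25, lo=-5.0, hi=5.0):
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(iters):
        best = nests[fit.argmin()].copy()
        # Levy flights around each nest, scaled by the distance to the best
        for i in range(n_nests):
            cand = np.clip(nests[i] + 0.01 * levy_step(dim) * (nests[i] - best), lo, hi)
            fc = f(cand)
            j = rng.integers(n_nests)          # compare against a random nest
            if fc < fit[j]:
                nests[j], fit[j] = cand, fc
        # abandon a fraction pa of the worst nests and re-seed them randomly
        worst = np.argsort(fit)[-int(pa * n_nests):]
        nests[worst] = rng.uniform(lo, hi, (len(worst), dim))
        fit[worst] = np.apply_along_axis(f, 1, nests[worst])
        # MCS-style information exchange between the two best eggs
        top = np.argsort(fit)[:2]
        mid = 0.5 * (nests[top[0]] + nests[top[1]])
        fm = f(mid)
        if fm < fit.max():
            k = fit.argmax()
            nests[k], fit[k] = mid, fm
    return nests[fit.argmin()], float(fit.min())

sphere = lambda p: float(np.sum(p ** 2))
x_best, f_best = cuckoo_search(sphere)
```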

  6. Optimisation Methods for Cam Mechanisms

    Claudia–Mari Popa

    2010-01-01

    Full Text Available In this paper we present the criteria which form the basis for optimizing cam mechanisms, and we also perform the calculations for several types of mechanisms. We study the influence of the constructive parameters, in the case of simple mechanisms with a rotation cam and a translating follower (flat or curved), on the curvature radius and on the transmission angle. Next, we present the optimization calculations for cam and flat rotation follower mechanisms, as well as the calculations for optimizing cam mechanisms with the help of circular-groove followers. For an easier interpretation of the results, we have visualized the obtained cam in AutoCAD according to the script files generated by a calculation program.

  7. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods; La methode du recuit simule pour la conception des circuits electroniques: adaptation et comparaison avec d`autres methodes d`optimisation

    Berthiau, G

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can eventually be specified. A similar problem consists in fitting component models. In this case, the optimization variables are the model parameters and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, originating from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted with analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with an open electrical simulator, SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. We propose, for high-dimensional problems, a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we have adapted three other methods coming from the combinatorial optimization domain: the threshold method, a genetic algorithm and the tabu search method. The tests have been performed on the same set of test functions and the results allow a first comparison between these methods applied to continuous optimization variables. (Abstract Truncated)
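
    The core of such a simulated annealing loop for continuous parameters can be sketched on a toy "circuit" objective (an RC low-pass filter cutoff target; the objective, bounds, and cooling schedule are invented for illustration, not taken from the thesis):

```python
import math
import random

random.seed(0)

# Toy objective: choose R (ohms) and C (farads) so that an RC low-pass filter
# hits a target cutoff frequency fc = 1/(2*pi*R*C) = 1 kHz, while keeping R
# close to 10 kOhm (a stand-in for a second, conflicting design criterion).
def cost(p):
    R, C = p
    fc = 1.0 / (2 * math.pi * R * C)
    return ((fc - 1000.0) / 1000.0) ** 2 + ((R - 10e3) / 10e3) ** 2

def anneal(cost, x0, lo, hi, t0=1.0, alpha=0.95, iters=4000):
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        # perturb one randomly chosen variable within its bounds
        i = random.randrange(len(x))
        cand = list(x)
        span = (hi[i] - lo[i]) * 0.1
        cand[i] = min(hi[i], max(lo[i], cand[i] + random.uniform(-span, span)))
        fcand = cost(cand)
        # Metropolis acceptance: always take improvements, sometimes take worse
        if fcand < fx or random.random() < math.exp(-(fcand - fx) / t):
            x, fx = cand, fcand
            if fx < fbest:
                best, fbest = list(x), fx
        t *= alpha  # geometric cooling schedule
    return best, fbest

p, f = anneal(cost, [5e3, 1e-7], lo=[1e3, 1e-9], hi=[100e3, 1e-6])
```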

  8. Optimisation of efficiency of axial fans

    Kruyt, Nicolaas P.; Pennings, P.C.; Faasen, R.

    2014-01-01

    A three-stage research project has been executed to develop ducted axial fans with increased efficiency. In the first stage, a design method was developed in which various conflicting design criteria can be incorporated. Based on this design method, an optimised design has been determined.

  9. Lattice Boltzmann methods for global linear instability analysis

    Pérez, José Miguel; Aguilar, Alfonso; Theofilis, Vassilis

    2017-12-01

    Modal global linear instability analysis is performed using, for the first time ever, the lattice Boltzmann method (LBM) to analyze incompressible flows with two and three inhomogeneous spatial directions. Four linearization models have been implemented in order to recover the linearized Navier-Stokes equations in the incompressible limit. Two of those models employ the single relaxation time and have been proposed previously in the literature as linearizations of the collision operator of the lattice Boltzmann equation. Two additional models are derived herein for the first time by linearizing the local equilibrium probability distribution function. Instability analysis results are obtained in three benchmark problems, two in closed geometries and one in open flow, namely the square and cubic lid-driven cavity flows and flow in the wake of the circular cylinder. Comparisons with results delivered by classic spectral element methods verify the accuracy of the proposed new methodologies and point out potential limitations particular to the LBM approach. The known issue of the appearance of numerical instabilities when the SRT model is used in direct numerical simulations employing the LBM is shown to be reflected in a spurious global eigenmode when the SRT model is used in the instability analysis. Although this mode is absent in the multiple relaxation times model, other spurious instabilities can also arise and are documented herein. Areas of potential improvement in order to make the proposed methodology competitive with established approaches for global instability analysis are discussed.

  10. Solving dynamic multi-objective problems with vector evaluated particle swarm optimisation

    Greeff, M

    2008-06-01

    Full Text Available Many optimisation problems are multi-objective and change dynamically. Many methods use a weighted average approach to the multiple objectives. This paper introduces the usage of the vector evaluated particle swarm optimiser (VEPSO) to solve dynamic...

  11. Global positioning method based on polarized light compass system

    Liu, Jun; Yang, Jiangtao; Wang, Yubo; Tang, Jun; Shen, Chong

    2018-05-01

    This paper presents a global positioning method based on a polarized light compass system. A main limitation of polarization-based positioning is the environment, such as weak or locally destroyed polarization conditions; the solution given in this paper is polarization image de-noising and segmentation, for which the pulse coupled neural network is employed to enhance positioning performance. The prominent advantages of the present positioning technique are as follows: (i) compared to existing positioning methods based on polarized light, better sun tracking accuracy can be achieved, and (ii) the robustness and accuracy of positioning under weak and locally destroyed polarization environments, such as cloud cover or building shielding, are improved significantly. Finally, field experiments are reported to demonstrate the effectiveness and applicability of the proposed global positioning technique. The experiments have shown that our proposed method outperforms the conventional polarization positioning method, delivering real-time longitude and latitude with accuracies up to 0.0461° and 0.0911°, respectively.

  12. Methods of Increasing the Performance of Radionuclide Generators Used in Nuclear Medicine: Daughter Nuclide Build-Up Optimisation, Elution-Purification-Concentration Integration, and Effective Control of Radionuclidic Purity

    Van So Le

    2014-06-01

    Full Text Available Methods of increasing the performance of radionuclide generators used in nuclear medicine radiotherapy and SPECT/PET imaging were developed and detailed for the 99Mo/99mTc and 68Ge/68Ga radionuclide generators as case studies. Optimisation methods for the daughter nuclide build-up versus stand-by time and/or specific activity, using mean progress functions, were developed for increasing the performance of radionuclide generators. As a result of this optimisation, the separation of the daughter nuclide from its parent should be performed at a defined optimal time, so as to avoid deterioration in the specific activity of the daughter nuclide and wasted stand-by time of the generator, while the daughter nuclide yield is maintained at a reasonably high level. A new characteristic parameter of the formation-decay kinetics of the parent/daughter nuclide system was found and effectively used in the practice of generator production and utilisation. A method of “early elution schedule” was also developed for increasing the daughter nuclide production yield and specific radioactivity, thus saving the cost of the generator and improving the quality of the daughter radionuclide solution. These newly developed optimisation methods, in combination with a recently developed integrated elution-purification-concentration system for radionuclide generators, offer the most suitable way to operate the generator effectively, on the basis of economic use and of purposely suitable quality and specific activity of the produced daughter radionuclides. All these features benefit the economic use of the generator, the improved quality of labelling/scan, and the lowered cost of nuclear medicine procedures. Besides, a new method of quality control protocol set-up for post-delivery testing of radionuclidic purity has been developed based on the relationship between the gamma-ray spectrometric detection limit, the required limit of impure radionuclide activity and its measurement
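The optimal separation time discussed here follows, for a simple parent/daughter pair, from differentiating the Bateman build-up equation; a sketch using textbook half-lives (the record's "mean progress function" formalism is not reproduced):

```python
import math

def optimal_buildup_time(t_half_parent, t_half_daughter):
    """Time at which daughter activity peaks in a parent/daughter pair.

    Setting d/dt of the Bateman solution to zero gives
    t* = ln(ld/lp) / (ld - lp), with lp, ld the parent and daughter
    decay constants (ln 2 / half-life)."""
    lp = math.log(2) / t_half_parent
    ld = math.log(2) / t_half_daughter
    return math.log(ld / lp) / (ld - lp)

# 99Mo (65.94 h) -> 99mTc (6.01 h): daughter activity peaks at roughly 23 h,
# consistent with the common practice of eluting 99Mo/99mTc generators daily.
t_star = optimal_buildup_time(65.94, 6.01)
```

Eluting much earlier sacrifices yield; eluting much later wastes stand-by time while the daughter merely tracks parent decay, which is the trade-off the record's optimisation formalises.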

  13. Risk based test interval and maintenance optimisation - Application and uses

    Sparre, E.

    1999-10-01

    The project is part of an IAEA Co-ordinated Research Project (CRP) on 'Development of Methodologies for Optimisation of Surveillance Testing and Maintenance of Safety Related Equipment at NPPs'. The purpose of the project is to investigate the sensitivity of the results obtained when performing risk-based optimisation of the technical specifications. Previous projects have shown that complete LPSA models can be created and that these models allow optimisation of technical specifications. However, these optimisations did not include any in-depth check of the sensitivity of the results with regard to methods, model completeness, etc. Four different test intervals have been investigated in this study. Aside from an original, nominal optimisation, a set of sensitivity analyses has been performed and the results from these analyses have been compared to the original optimisation. The analyses indicate that the result of an optimisation is rather stable. However, it is not possible to draw any firm conclusions without performing a number of sensitivity analyses. Significant differences in the optimisation result were discovered when analysing an alternative configuration. Deterministic uncertainties also seem to affect the result of an optimisation considerably. The sensitivity to failure data uncertainties is important to investigate in detail, since the methodology is based on the assumption that the unavailability of a component depends on the length of the test interval

  14. Application of ant colony optimisation in distribution transformer sizing

    This study proposes an optimisation method for transformer sizing in power systems using ant colony optimisation, with verification of the process in MATLAB software. The aim is to address the issue of transformer sizing, a major challenge affecting transformers' effective performance, longevity, huge capital cost and power ...

  15. Optimising of Steel Fiber Reinforced Concrete Mix Design | Beddar ...

    Optimising of Steel Fiber Reinforced Concrete Mix Design. ... as a result of the loss of mixture workability that will be translated into a difficult concrete casting in site. ... An experimental study of an optimisation method of fibres in reinforced ...

  16. Optimisation of radiation protection

    1988-01-01

    Optimisation of radiation protection is one of the key elements in the current radiation protection philosophy. The present system of dose limitation was issued in 1977 by the International Commission on Radiological Protection (ICRP) and includes, in addition to the requirements of justification of practices and limitation of individual doses, the requirement that all exposures be kept as low as is reasonably achievable, taking social and economic factors into account. This last principle is usually referred to as optimisation of radiation protection, or the ALARA principle. The NEA Committee on Radiation Protection and Public Health (CRPPH) organised an ad hoc meeting, in liaison with the NEA committees on the safety of nuclear installations and radioactive waste management. Separate abstracts were prepared for individual papers presented at the meeting

  17. Identification of metabolic system parameters using global optimization methods

    Gatzke Edward P

    2006-01-01

    Full Text Available Abstract Background The problem of estimating the parameters of dynamic models of complex biological systems from time series data is becoming increasingly important. Methods and results Particular consideration is given to metabolic systems that are formulated as Generalized Mass Action (GMA) models. The estimation problem is posed as a global optimization task, for which novel techniques can be applied to determine the best set of parameter values given the measured responses of the biological system. The challenge is that this task is nonconvex. Nonetheless, deterministic optimization techniques can be used to find a global solution that best reconciles the model parameters and measurements. Specifically, the paper employs branch-and-bound principles to identify the best set of model parameters from observed time course data and illustrates this method with an existing model of the fermentation pathway in Saccharomyces cerevisiae. This is a relatively simple yet representative system with five dependent states and a total of 19 unknown parameters whose values are to be determined. Conclusion The efficacy of the branch-and-reduce algorithm is illustrated by the S. cerevisiae example. The method described in this paper is likely to be widely applicable in the dynamic modeling of metabolic networks.
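The branch-and-bound principle the record relies on can be illustrated on a one-dimensional toy problem; here a simple Lipschitz bound stands in for the much tighter relaxation-based bounds that real branch-and-reduce solvers construct:

```python
import math

def branch_and_bound_min(f, lo, hi, lipschitz, tol=1e-4):
    """Deterministic global minimisation of a 1-D Lipschitz function.

    Each interval gets a lower bound f(mid) - L*(width/2); intervals whose
    bound cannot beat the incumbent are pruned, the rest are bisected."""
    best_x, best_val = lo, f(lo)
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        mid = 0.5 * (a + b)
        val = f(mid)
        if val < best_val:
            best_x, best_val = mid, val
        lower = val - lipschitz * (b - a) / 2   # valid lower bound on [a, b]
        if lower < best_val - tol:              # interval may still hold the optimum
            stack.append((a, mid))
            stack.append((mid, b))
    return best_x, best_val

# Multimodal test function on [0, 10]; |f'| = |cos x + 0.1| <= 1.1
x, v = branch_and_bound_min(lambda t: math.sin(t) + 0.1 * t, 0.0, 10.0, lipschitz=1.1)
```

Unlike a stochastic search, the returned value is guaranteed to be within `tol` of the true global minimum, which is the appeal of deterministic global optimization for parameter estimation.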

  18. A Global Network Alignment Method Using Discrete Particle Swarm Optimization.

    Huang, Jiaxiang; Gong, Maoguo; Ma, Lijia

    2016-10-19

    Molecular interaction data increase exponentially with the advance of biotechnology. This makes it possible and necessary to comparatively analyse such data at the network level. Global network alignment is an important network comparison approach to identify conserved subnetworks and gain insight into evolutionary relationships across species. Network alignment, which is analogous to subgraph isomorphism, is known to be an NP-hard problem. In this paper, we introduce a novel heuristic Particle-Swarm-Optimization based Network Aligner (PSONA), which optimizes a weighted global alignment model considering both protein sequence similarity and interaction conservation. The particle statuses and status updating rules are redefined in a discrete form by using permutations. A seed-and-extend strategy is employed to guide the search for a superior alignment. The proposed initialization method "seeds" matches with high sequence similarity into the alignment, which guarantees the functional coherence of the mapped nodes. A greedy local search method is designed as the "extension" procedure to iteratively optimize the edge conservation. PSONA is compared with several state-of-the-art methods on ten network pairs combining five species. The experimental results demonstrate that the proposed aligner can map proteins with high functional coherence and can be used as a booster to effectively refine well-studied aligners.

  19. Global regularization method for planar restricted three-body problem

    Sharaf M.A.

    2015-01-01

    Full Text Available In this paper, a global regularization method for the planar restricted three-body problem is proposed by using the transformation z = x + iy = ν cosⁿ(u + iv), where i = √−1, 0 < ν ≤ 1 and n is a positive integer. The method is developed analytically and computationally. For the analytical developments, analytical solutions in power series of the pseudotime τ are obtained for positions and velocities (u, v, u′, v′) and (x, y, ẋ, ẏ) in the regularized and physical planes respectively; the physical time t is also obtained as a power series in τ. Moreover, relations between the coefficients of the power series are obtained for two consecutive values of n. We also develop analytical solutions in power series form for the inverse problem of finding τ in terms of t. As typical examples, three symbolic expressions for the coefficients of the power series were developed in terms of initial values. As for the computational developments, the global regularized equations of motion are developed together with their initial values in forms suitable for digital computation using any differential-equation solver. Finally, for the numerical evaluation of the power series, an efficient method based on continued fraction theory is provided.

  20. Agent-Based Decision Control—How to Appreciate Multivariate Optimisation in Architecture

    Negendahl, Kristoffer; Perkov, Thomas Holmer; Kolarik, Jakub

    2015-01-01

    … in the early design stage. The main focus is to demonstrate the optimisation method, which is done in two ways. Firstly, the newly developed agent-based optimisation algorithm named Moth is tested on three different single-objective search spaces. Here Moth is compared to two evolutionary algorithms. Secondly, the method is applied to a multivariate optimisation problem. The aim is specifically to demonstrate optimisation for entire building energy consumption, daylight distribution and capital cost. Based on the demonstrations, Moth's ability to find local minima is discussed. It is concluded that agent-based optimisation algorithms like Moth open up new uses of optimisation in the early design stage. With Moth the final outcome is less dependent on pre- and post-processing, and Moth allows user intervention during optimisation. Therefore, agent-based models for optimisation such as Moth can be a powerful…

  1. Advanced optimisation - coal fired power plant operations

    Turney, D.M.; Mayes, I. [E.ON UK, Nottingham (United Kingdom)

    2005-03-01

    The purpose of this unit optimisation project is to develop an integrated approach to unit optimisation and an overall optimiser that is able to resolve any conflicts between the individual optimisers. The individual optimisers considered during this project are: an on-line thermal efficiency package, the GNOCIS boiler optimiser, the GNOCIS steam-side optimiser, ESP optimisation, and an intelligent sootblowing system. 6 refs., 7 figs., 3 tabs.

  2. Do Methods Matter in Global Leadership Development? Testing the Global Leadership Development Ecosystem Conceptual Model

    Walker, Jennie L.

    2018-01-01

    As world communication, technology, and trade become increasingly integrated through globalization, multinational corporations seek employees with global leadership skills. However, the demand for these skills currently outweighs the supply. Given the rarity of globally ready leaders, global competency development should be emphasized in business…

  3. Optimisation in radiotherapy II: Programmed and inversion optimisation algorithms

    Ebert, M.

    1997-01-01

    This is the second article in a three-part examination of optimisation in radiotherapy. The previous article established the bases of optimisation in radiotherapy and the formulation of the optimisation problem. This paper outlines several algorithms that have been used in radiotherapy for searching for the best irradiation strategy within the full set of possible strategies. Two principal classes of algorithm are considered: those associated with mathematical programming, which employ specific search techniques, linear-programming-type searches or artificial intelligence; and those which seek to perform a numerical inversion of the optimisation problem, finishing with deterministic iterative inversion. (author)

  4. Optimisation on processing parameters for minimising warpage on side arm using response surface methodology (RSM) and particle swarm optimisation (PSO)

    Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.

    2017-09-01

    This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was used to analyse the warpage. A design of experiments (DOE) for Response Surface Methodology (RSM) was constructed, and using the equation from RSM, Particle Swarm Optimisation (PSO) was applied. The optimisation yields processing parameters with minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters. Parameter selection was based on the most significant factors affecting warpage reported by previous researchers. The results show that warpage was improved by 28.16% for RSM and 28.17% for PSO; the improvement of PSO over RSM is only 0.01%. Thus, optimisation using RSM is already sufficient to give the best combination of parameters and an optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
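The RSM-then-PSO pipeline amounts to fitting a quadratic response surface to DOE results and then searching that surface; a sketch with two factors and synthetic responses (the factor names and all numbers are invented for illustration, and a coarse grid search stands in for the PSO step):

```python
import numpy as np

# Hypothetical two-factor DOE: packing pressure (x1) and melt temperature (x2),
# coded to [-1, 1]; the 'warpage' responses are synthetic, not from the study.
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
              [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]], dtype=float)
y = np.array([1.30, 1.10, 0.80, 0.70, 0.60, 0.95, 0.55, 0.85, 0.75])

# Full quadratic RSM model: b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def surface(x1, x2):
    """Predicted warpage from the fitted response surface."""
    return coef @ np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

# Search the fitted surface over the coded region (a PSO would do this step)
grid = np.linspace(-1, 1, 101)
best = min((surface(a, b), a, b) for a in grid for b in grid)
```

The point of the two-stage design is that the expensive simulations are only run at the DOE points; the optimiser then works on the cheap fitted polynomial.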

  5. Methods of Bank Valuation in the Age of Globalization

    Alexander Karminsky

    2015-01-01

    Full Text Available This paper reviews the theory of value-based management at the commercial bank and the main valuation methods in the age of globalization. The paper identifies five main factors that significantly influence valuation model selection and building: funding, liquidity, risks, exogenous factors and the capital cushion. It is shown that valuation models can be classified depending on the underlying cash flows. Particular attention is paid to models based on potentially available cash flows (discounted-cash-flow-oriented approaches, DCF) and models based on residual income flows (residual-income-oriented approaches). In addition, we consider an alternative approach based on comparison with same-sector banks (based on multiples). For bank valuation, the equity discounted cash flow method (Equity DCF) is recommended. Equity DCF values the equity of a bank directly by discounting cash flows to equity at the cost of equity (Capital Asset Pricing Model, CAPM), rather than at the weighted average cost of capital (WACC). For the purposes of operational management, residual-income-oriented approaches are recommended, because they are better aligned with the process of internal planning and forecasting in banks. For strategic management, residual-income-oriented methods are most useful when expected cash flows are negative throughout the forecast period. Discounted-cash-flow-oriented approaches are preferable when expected cash flows have positive values and model use is motivated by supporting investment decisions. The proposed classification can be developed in the interests of bank management tasks in the midterm in the age of globalization.

  6. Optimisation of monochrome images

    Potter, R.

    1983-01-01

    Gamma cameras with modern imaging systems usually digitize the signals to allow storage and processing of the image in a computer. Although such computer systems are widely used for the extraction of quantitative uptake estimates and the analysis of time-variant data, the vast majority of nuclear medicine images are still interpreted on the basis of an observer's visual assessment of a photographic hardcopy image. The optimisation of hardcopy devices is therefore vital, and factors such as resolution, uniformity, noise, grey scales and display matrices are discussed. Once optimum display parameters have been determined, routine procedures for quality control need to be established; suitable procedures are discussed. (U.K.)

  7. GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES

    Rogers, Adam; Fiege, Jason D.

    2012-01-01

    Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
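The GCV criterion used here to select the regularization parameter can be sketched for plain Tikhonov-regularised least squares (the blur matrix and data below are synthetic, not a lens model); the trace of the influence matrix also gives the effective number of degrees of freedom that the abstract mentions estimating:

```python
import numpy as np

def gcv_score(A, y, lam):
    """Generalised cross-validation score for Tikhonov regularisation.

    Influence matrix H(lam) = A (A^T A + lam I)^-1 A^T maps data to fit;
    GCV(lam) = n ||(I - H) y||^2 / trace(I - H)^2, and trace(H) is the
    effective number of degrees of freedom of the solution."""
    n, p = A.shape
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(p), A.T)
    resid = y - H @ y
    return n * (resid @ resid) / np.trace(np.eye(n) - H) ** 2

# Toy ill-posed problem: Gaussian blur matrix plus noisy observations
rng = np.random.default_rng(0)
n = 40
A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0) ** 2)
x_true = np.sin(np.linspace(0, np.pi, n))
y = A @ x_true + 0.01 * rng.standard_normal(n)

# Pick the regularization parameter by minimising GCV over a log-spaced grid
lams = np.logspace(-6, 2, 50)
lam_opt = min(lams, key=lambda lam: gcv_score(A, y, lam))
```

The UPRE function plays the same role but requires the noise variance to be known, whereas GCV does not; that trade-off is why both are reported in deconvolution work.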

  8. Measuring Globalization: Existing Methods and Their Implications for Teaching Global Studies and Forecasting

    Zinkina, Julia; Korotayev, Andrey; Andreev, Aleksey I.

    2013-01-01

    Purpose: The purpose of this paper is to encourage discussions regarding the existing approaches to globalization measurement (taking mainly the form of indices and rankings) and their shortcomings in terms of applicability to developing Global Studies curricula. Another aim is to propose an outline for the globalization measurement methodology…

  9. Reflector modelization in neutronic and optimization methods applied to fuel loading pattern; Modelisation du reflecteur en neutronique et methodes d`optimisation appliquees aux plans de rechargement

    Argaud, J P

    1995-12-01

    The physical description of a PWR nuclear core can be handled by a multigroup neutron diffusion model. We are interested in two problems, using the same approach for the optimisation aspect. To deal with differences between calculations and measurements, the question of their reduction is introduced. An identification of reflector parameters from core measurements is then proposed, the reflector being at present the least well known part of the core diffusion model. This approach leads to a study of the reflector model, in particular through an analysis of its transport origin, and finally to a new reflector model described by boundary operators using an integral formulation on the core/reflector interface. It is on this new model that a parameter-identification formulation of the reduction of calculation-measurement differences is given, using an adjoint-state formulation to minimise errors by a gradient method. Furthermore, nuclear fuel reloading of a PWR core needs an optimal distribution of fuel assemblies, namely a loading pattern. This combinatorial optimisation problem is expressed as a cost-function minimisation, the cost function describing the spatial power distribution. Various methods (linear programming, simulated annealing, ...) used to solve this problem are detailed, giving in particular a practical search example. A new approach is then proposed, using the gradient of the cost function to direct the search in the discrete space of patterns. Final results of complete pattern search trials are presented and compared to those obtained by other methods; in particular, the results are obtained very quickly. (author). 81 refs., 55 figs., 5 appends.

  10. Optimisation of logistics processes of energy grass collection

    Bányai, Tamás.

    2010-05-01

The objective function of the optimisation is the maximisation of profit, i.e. the maximisation of the difference between revenue and cost. The objective function trades off the income of the assigned transportation demands against the logistics costs. The constraints are the following: (1) the free capacity of the assigned transportation resource is no less than the requested capacity of the transportation demand; (2) the calculated arrival time of the transportation resource at the harvesting place is not later than its requested arrival time; (3) the calculated arrival time of the transportation demand at the processing and production facility is not later than the requested arrival time; (4) one transportation demand is assigned to one transportation resource, and one resource is assigned to one transportation demand. The decision variables of the optimisation problem are the set of scheduling variables and the assignment of resources to transportation demands. The evaluation parameters of the optimised system are the following: total cost of the collection process; utilisation of transportation resources and warehouses; efficiency of production and/or processing facilities. The multidimensional heuristic optimisation method is based on a genetic algorithm, while the routing sequence is optimised by an ant colony algorithm: the optimal routes are calculated by the ant colony algorithm as a subroutine of the global optimisation method, and the optimal assignment is given by the genetic algorithm. One important part of the mathematical method is the sensitivity analysis of the objective function, which shows the influence of the different input parameters. Acknowledgements This research was implemented within the frame of the project entitled "Development and operation of the Technology and Knowledge Transfer Centre of the University of Miskolc", with support by the European Union and co-funding of the European Social

  11. Exploration of Stellarator Configuration Space with Global Search Methods

    Mynick, H.E.; Pomphrey, N.; Ethier, S.

    2001-01-01

    An exploration of stellarator configuration space z for quasi-axisymmetric stellarator (QAS) designs is discussed, using methods which provide a more global view of that space. To this end, we have implemented a "differential evolution" (DE) search algorithm in an existing stellarator optimizer, which is much less prone to becoming trapped in local, suboptimal minima of the cost function chi than the local search methods used previously. This search algorithm is complemented by mapping studies of chi over z aimed at gaining insight into the results of the automated searches. We find that a wide range of the attractive QAS configurations previously found fall into a small number of classes, with each class corresponding to a basin of chi(z). We develop maps on which these earlier stellarators can be placed, the relations among them seen, and understanding gained of the physics differences between them. It is also found that, while still large, the region of z space containing practically realizable QAS configurations is much smaller than earlier supposed
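The differential evolution strategy the record adopts can be sketched in its simplest DE/rand/1/bin form (this is the textbook scheme, not the implementation inside the stellarator optimizer; the Rastrigin test function stands in for the cost function chi):

```python
import math
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=150):
    """Minimal DE/rand/1/bin: every trial vector is built from scaled
    differences of population members, which helps escape local minima."""
    random.seed(2)
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    vals = [f(p) for p in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            jrand = random.randrange(dim)   # at least one mutated coordinate
            trial = []
            for d in range(dim):
                if random.random() < CR or d == jrand:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])
                    v = min(max(v, bounds[d][0]), bounds[d][1])
                else:
                    v = pop[i][d]
                trial.append(v)
            tv = f(trial)
            if tv <= vals[i]:               # greedy one-to-one replacement
                pop[i], vals[i] = trial, tv
    best = min(range(pop_size), key=lambda i: vals[i])
    return pop[best], vals[best]

# Rastrigin function: many local minima, global minimum 0 at the origin
cost = lambda x: 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)
x_best, v_best = differential_evolution(cost, [(-5.12, 5.12)] * 2)
```

The one-to-one replacement rule is what distinguishes DE from a plain genetic algorithm and underlies its resistance to premature convergence on multimodal landscapes.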

  12. A comparison of forward planning and optimised inverse planning

    Oldham, Mark; Neal, Anthony; Webb, Steve

    1995-01-01

    A radiotherapy treatment plan optimisation algorithm has been applied to 48 prostate plans and the results compared with those of an experienced human planner. Twelve patients were used in the study, and 3-, 4-, 6- and 8-field plans (with standard coplanar beam angles for each plan type) were optimised by both the human planner and the optimisation algorithm. The human planner 'optimised' the plans by conventional forward planning techniques. The optimisation algorithm was based on fast simulated annealing. 'Importance factors' assigned to different regions of the patient provide a method for controlling the algorithm, and it was found that the same values gave good results for almost all plans. The plans were compared on the basis of dose statistics, normal tissue complication probability (NTCP) and tumour control probability (TCP). The results show that the optimisation algorithm yielded results that were at least as good as the human planner's for all plan types, and on the whole slightly better. A study of the beam weights chosen by the optimisation algorithm and the planner is presented. The optimisation algorithm showed greater variation in response to individual patient geometry. For simple (e.g. 3-field) plans it was found to consistently achieve slightly higher TCP and lower NTCP values. For more complicated (e.g. 8-field) plans the optimisation also achieved slightly better results, generally with fewer beams. The optimisation time was always ≤5 minutes, a factor of up to 20 times faster than the human planner
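A simulated annealing optimiser of the kind described accepts worse plans with a temperature-dependent probability; a generic annealing sketch on a toy three-beam weighting problem (the beam dose contributions and the 0.1 importance factor below are invented for illustration, and fast-SA variants differ mainly in their visiting distribution and cooling schedule):

```python
import math
import random

def anneal(cost, x0, step=0.1, t0=1.0, cooling=0.95, iters_per_t=50, t_min=1e-3):
    """Generic simulated annealing over non-negative weight vectors."""
    random.seed(3)
    x, cx = list(x0), cost(x0)
    best, cbest = list(x0), cx
    t = t0
    while t > t_min:
        for _ in range(iters_per_t):
            cand = [max(0.0, xi + random.uniform(-step, step)) for xi in x]
            cc = cost(cand)
            # Always accept improvements; accept worse moves with prob e^(-dC/t)
            if cc < cx or random.random() < math.exp((cx - cc) / t):
                x, cx = cand, cc
                if cx < cbest:
                    best, cbest = list(x), cx
        t *= cooling
    return best, cbest

# Toy 3-beam weighting: hypothetical per-unit-weight dose contributions to a
# tumour point and to a normal-tissue point; target tumour dose is 1.0.
tumour = [0.5, 0.4, 0.6]
normal = [0.2, 0.3, 0.1]

def cost(w):
    d_t = sum(wi * ti for wi, ti in zip(w, tumour))
    d_n = sum(wi * ni for wi, ni in zip(w, normal))
    return (d_t - 1.0) ** 2 + 0.1 * d_n ** 2   # importance factor 0.1 on normal tissue

w, c = anneal(cost, [0.5, 0.5, 0.5])
```

The importance factors weight the competing terms of the cost function, which is exactly the control mechanism the abstract describes for steering the algorithm between target coverage and normal-tissue sparing.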

  13. Active vibration reduction of a flexible structure bonded with optimised piezoelectric pairs using half and quarter chromosomes in genetic algorithms

    Daraji, A H; Hale, J M

    2012-01-01

    The optimal placement of sensors and actuators in active vibration control is limited by the number of candidates in the search space. The search space of a small structure discretized into one hundred elements, for optimising the location of ten actuators, gives 1.73 × 10¹³ possible solutions, one of which is the global optimum. In this work, a new quarter- and half-chromosome technique based on symmetry is developed, by which the search space for optimisation of sensor/actuator locations in active vibration control of flexible structures may be greatly reduced. The technique is applied to the optimisation for eight and ten actuators located on a 500 × 500 mm square plate, in which the search space is reduced by up to 99.99%. The technique also allows the genetic algorithm program to update natural frequencies and mode shapes in each generation, finding the global optimal solution in a greatly reduced number of generations. An isotropic plate with piezoelectric sensor/actuator pairs bonded to its surface was investigated using the finite element method and Hamilton's principle based on first-order shear deformation theory. The placement and feedback gains of ten and eight sensor/actuator pairs were optimised for a cantilever and a clamped-clamped plate to attenuate the first six modes of vibration, using minimisation of a linear quadratic index as the objective function.
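The scale of the combinatorial problem quoted in the abstract is easy to verify; a sketch (the quadrant restriction below is an illustrative reading of the symmetry idea, not the paper's exact quarter/half-chromosome encoding):

```python
import math

# Unconstrained search space: choose 10 actuator locations out of 100 elements
full = math.comb(100, 10)        # 17 310 309 456 440, i.e. ~1.73e13

# Illustrative symmetry reduction: if candidate sites are restricted to one
# 25-element quadrant of a doubly symmetric plate, the space shrinks to
reduced = math.comb(25, 10)      # 3 268 760

reduction = 1 - reduced / full   # fraction of the search space eliminated
```

Even this crude reading eliminates well over 99.99% of candidate placements, which is why symmetry-aware chromosome encodings make an otherwise intractable exhaustive search amenable to a genetic algorithm.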

  14. Optimisation of tungsten ore processing through a deep mineralogical characterisation and the study of the crushing process

    Bascompte Vaquero, Jordi

    2017-01-01

    The relentlessly increasing global demand for metals calls for the urgent development of more efficient extraction and processing methods in the mining industry. Comminution is responsible for nearly half of the energy consumption of the entire mining process, and in the majority of cases it is far from optimised. Within comminution, grinding is widely known to be less efficient than crushing; however, it is needed to reach liberation at ultrafine particle sizes. ...

  15. Hardware architecture design of a fast global motion estimation method

    Liang, Chaobing; Sang, Hongshi; Shen, Xubang

    2015-12-01

    VLSI implementation of gradient-based global motion estimation (GME) faces two main challenges: irregular data access and a high off-chip memory bandwidth requirement. We previously proposed a fast GME method that reduces computational complexity by choosing a certain number of small patches containing corners and using them in a gradient-based framework. A hardware architecture is designed to implement this method and further reduce the off-chip memory bandwidth requirement. On-chip memories are used to store the coordinates of the corners and the template patches, while the Gaussian pyramids of both the template and the reference frame are stored in off-chip SDRAMs. By performing the geometric transform only on the coordinates of the center pixel of a 3×3 patch in the template image, a 5×5 area containing the warped 3×3 patch in the reference image is extracted from the SDRAMs by burst read. Patch-based, burst-mode data access helps to keep the off-chip memory bandwidth requirement at a minimum. Although patch size varies between pyramid levels, all patches are processed in terms of 3×3 patches, so the utilization of the patch-processing circuit reaches 100%. FPGA implementation results show that the design utilizes 24,080 bits of on-chip memory, and for a sequence with a resolution of 352×288 at 60 Hz, the off-chip bandwidth requirement is only 3.96 Mbyte/s, compared with 243.84 Mbyte/s for the original gradient-based GME method. This design can be used in applications such as video codecs, video stabilization and super-resolution, where real-time GME is a necessity and a minimum memory bandwidth requirement is appreciated.

  16. Effective Energy Methods for Global Optimization for Biopolymer Structure Prediction

    Shalloway, David

    1998-01-01

    .... Its main strength is that it uncovers and exploits the intrinsic "hidden structures" of biopolymer energy landscapes to efficiently perform global minimization using a hierarchical search procedure...

  17. Optimisation of occupational exposure

    Webb, G.A.M.; Fleishman, A.B.

    1982-01-01

    The general concept of the optimisation of protection of the public is briefly described, together with some ideas being developed for extending the cost-benefit framework to include radiation workers with full implementation of the ALARA criterion. The role of cost-benefit analysis in radiological protection and the valuation of health detriment, including the derivation of monetary values and practical implications, are discussed. Cost-benefit analysis can lay out for inspection the doses, the associated health detriment costs and the costs of protection for alternative courses of action. However, it is emphasised that the cost-benefit process is an input to decisions on what is 'as low as reasonably achievable', not a prescription for making them. (U.K.)

  18. Standardised approach to optimisation

    Warren-Forward, Helen M.; Beckhaus, Ronald

    2004-01-01

Optimisation of radiographic images is said to have been obtained if the patient has received an acceptable level of dose and the image is of diagnostic value. In the near future, it will probably be recommended that radiographers measure patient doses and compare them to reference levels. The aim of this paper is to describe a standardised approach to optimisation of radiographic examinations in a diagnostic imaging department. A three-step approach is outlined with specific examples for some common examinations (chest, abdomen, pelvis and lumbar spine series). Step One: patient doses are calculated. Step Two: doses are compared to existing reference levels and the technique used is compared to image quality criteria. Step Three: appropriate action is taken if doses are above the reference level. Results: average entrance surface doses for two rooms were as follows: AP Abdomen (6.3mGy and 3.4mGy); AP Lumbar Spine (6.4mGy and 4.1mGy); AP Pelvis (4.8mGy and 2.6mGy); and PA chest (0.19mGy and 0.20mGy). Comparison with the Commission of the European Communities (CEC) recommended techniques identified large differences in the applied potential. The kVp values in this study were significantly lower (by up to 10 kVp) than the CEC recommendations. The results of this study indicate that there is a need to monitor the radiation doses received by patients undergoing diagnostic radiography examinations. Not only has the assessment allowed valuable comparison with International Diagnostic Reference Levels and Radiography Good Practice, but it has also demonstrated large variations in the mean doses delivered by different rooms of the same radiology department. Following the simple three-step approach advocated in this paper should either provide evidence that departments are practising the ALARA principle or assist in making suitable changes to current practice. Copyright (2004) Australian Institute of Radiography

  19. Techno-economic optimisation of energy systems

    Mansilla Pellen, Ch.

    2006-07-01

    The traditional approach currently used to assess the economic interest of energy systems is based on a defined flow-sheet. Some studies have shown that the flow-sheets corresponding to the best thermodynamic efficiencies do not necessarily lead to the best production costs. A method called techno-economic optimisation was proposed. This method aims at minimising the production cost of a given energy system, including both investment and operating costs. It was implemented using genetic algorithms. This approach was compared to the heat integration method on two different examples, thus validating its interest. Techno-economic optimisation was then applied to different energy systems dealing with hydrogen as well as electricity production. (author)

  20. APPLICATION OF A PRIMAL-DUAL INTERIOR POINT ALGORITHM USING EXACT SECOND ORDER INFORMATION WITH A NOVEL NON-MONOTONE LINE SEARCH METHOD TO GENERALLY CONSTRAINED MINIMAX OPTIMISATION PROBLEMS

    INTAN S. AHMAD

    2008-04-01

Full Text Available This work presents the application of a primal-dual interior point method to minimax optimisation problems. The algorithm differs significantly from previous approaches in that it involves a novel non-monotone line search procedure based on the use of standard penalty methods as the merit function. The crucial novel concept is the discretisation of the penalty parameter over a finite range of orders of magnitude and the provision of a memory list for each such order. An implementation within a logarithmic barrier algorithm for bounds handling is presented, with capabilities for large-scale application. The case studies presented demonstrate the capabilities of the proposed methodology, which relies on the reformulation of minimax models into standard nonlinear optimisation models. Some previously reported case studies from the open literature have been solved, with significantly better optimal solutions identified. We believe that the nature of the non-monotone line search scheme allows the search procedure to escape from local minima, hence the encouraging results obtained.
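The core non-monotone idea can be shown in isolation: instead of requiring each trial step to improve on the latest merit value, accept it if it improves on the maximum over the last few values, which tolerates temporary uphill moves. The sketch below is the generic non-monotone Armijo rule applied to a 1-D double-well function, not the paper's discretised-penalty memory-list scheme; the test function and all parameters are illustrative assumptions.

```python
# Generic non-monotone Armijo backtracking: accept a step if it improves on
# the *maximum* of the last `memory` merit values rather than the latest one,
# which lets the iterate climb temporarily and hop between basins. This is a
# textbook non-monotone rule, not the paper's penalty-based merit function.
def nonmonotone_descent(f, grad, x, iters=50, memory=5, beta=0.5, sigma=1e-4):
    history = [f(x)]
    for _ in range(iters):
        g = grad(x)
        t = 1.0
        ref = max(history[-memory:])       # non-monotone reference value
        # Backtrack until sufficient decrease relative to the reference.
        while f(x - t * g) > ref - sigma * t * g * g:
            t *= beta
            if t < 1e-12:
                break
        x = x - t * g
        history.append(f(x))
    return x

f = lambda x: (x * x - 1.0) ** 2          # double well: minima at x = +/-1
grad = lambda x: 4.0 * x * (x * x - 1.0)
x_star = nonmonotone_descent(f, grad, 2.0)
print(round(x_star, 4))
```

Because the reference value lags behind, early iterates are free to overshoot across the central barrier at x = 0, so the search may settle in either well; a strictly monotone Armijo rule started at x = 2 would stay in the right-hand basin.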

  1. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising the posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods were assessed: an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison with the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, and at only a quarter of the computational resources used by the least demanding GA configuration. The GA solution set showed more inconsistency if the number of iterations or the population size was small, and more so for a complex prior flux covariance matrix. If the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA might outperform the IO. The first scenario considered an established network, where the optimisation was required to add five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest...
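The IO routine described above is essentially a greedy strategy: grow the network one station at a time, always adding the candidate that most improves the objective. A minimal sketch, with a made-up set-coverage score standing in for the Bayesian posterior-uncertainty reduction (the station names, `COVERAGE` table, and scoring function are all illustrative assumptions, not the paper's model):

```python
# Greedy incremental optimisation (IO) versus brute force for picking k of
# n candidate monitoring stations. The "uncertainty reduction" here is a toy
# coverage count, not a Bayesian inverse-model posterior.
from itertools import combinations

# Hypothetical flux regions each candidate station observes.
COVERAGE = {
    "A": {1, 2}, "B": {2, 3}, "C": {3, 4}, "D": {4, 5}, "E": {1, 5},
}

def uncertainty_reduction(network):
    """Stand-in score: number of distinct flux regions covered."""
    covered = set()
    for station in network:
        covered |= COVERAGE[station]
    return len(covered)

def incremental_design(candidates, k):
    """Greedy IO: grow the network one best station at a time."""
    network = []
    remaining = list(candidates)
    for _ in range(k):
        best = max(remaining, key=lambda s: uncertainty_reduction(network + [s]))
        network.append(best)
        remaining.remove(best)
    return network

def exhaustive_design(candidates, k):
    """Global optimum by brute force, for comparison with IO."""
    return max(combinations(candidates, k), key=uncertainty_reduction)

greedy = incremental_design("ABCDE", 3)
optimal = exhaustive_design("ABCDE", 3)
print(greedy, uncertainty_reduction(greedy), uncertainty_reduction(optimal))
```

In this toy instance the greedy design happens to match the brute-force optimum; as the record notes, on the real problem the IO solution was fractionally worse than the GA's, but found at a fraction of the computational cost.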

  2. Methods for researching intercultural communication in globalized complex societies

    Jensen, Iben; Andreasen, Lars Birch

    2014-01-01

The field of intercultural communication research is challenged theoretically as well as methodologically by global changes such as migration, global mobility, mass media, tourism, etc. Given these changes, cultures can no longer be seen as national entities, and cultural identity can...

  3. Globalizing Education, Educating the Local: How Method Made Us Mad

    Stronach, Ian

    2011-01-01

This book offers a critical and deconstructive account of global discourses on education, arguing that these overblown "hypernarratives" are neither economically, technically nor philosophically defensible. Nor even sane. Their "mythic economic instrumentalism" mimics rather than meets the economic needs of global capitalism in…

  4. Optimisation of the parameters of a pump chamber for solid-state lasers with diode pumping by the optical boiler method

    Kiyko, V V; Kislov, V I; Ofitserov, E N; Suzdal' tsev, A G [A M Prokhorov General Physics Institute, Russian Academy of Sciences, Moscow (Russian Federation)

    2015-06-30

A pump chamber of the optical boiler type for solid-state lasers with transverse laser diode pumping is studied theoretically and experimentally. The pump chamber parameters are optimised using the geometrical optics approximation for the pump radiation. According to calculations, the integral absorption coefficient of the active element at a wavelength of 808 nm is 0.75 – 0.8 and the relative inhomogeneity of the pump radiation distribution over the active element volume is 17% – 19%. The developed pump chamber was used in a Nd:YAG laser. The maximum cw output power at a wavelength of 1064 nm was ∼480 W at an optical efficiency of up to 19.6%, which agrees with theoretical estimates. (lasers)

  5. A New Computational Technique for the Generation of Optimised Aircraft Trajectories

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto

    2017-12-01

    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or multiple performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
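The ε-constraint step of the method above can be illustrated in isolation: one performance index is kept as the cost while the other is demoted to a constraint f2 ≤ ε, and sweeping ε traces out the Pareto front. The sketch below does this over a toy list of (fuel, time) candidate trajectories rather than solving an optimal control problem, and it sweeps a fixed grid of ε values instead of bisecting adaptively; all names and numbers are illustrative assumptions.

```python
# Toy epsilon-constraint method: minimise fuel subject to time <= eps, then
# sweep eps to collect the Pareto-optimal (fuel, time) trade-offs.
CANDIDATES = [(10, 9), (12, 7), (15, 5), (9, 12), (20, 4)]  # (fuel, time)

def eps_constraint(candidates, eps):
    """Minimise fuel subject to the constraint time <= eps; None if infeasible."""
    feasible = [c for c in candidates if c[1] <= eps]
    return min(feasible) if feasible else None  # tuple order: fuel first

def pareto_front(candidates):
    """Sweep eps over the observed time values and collect the distinct optima."""
    front = []
    for eps in sorted({t for _, t in candidates}):
        best = eps_constraint(candidates, eps)
        if best is not None and best not in front:
            front.append(best)
    return front

print(pareto_front(CANDIDATES))
```

Each relaxation of ε admits cheaper (lower-fuel) but slower candidates, so the collected optima form the trade-off curve between the two objectives; the adaptive bisection in the paper serves to place the ε values efficiently rather than on a fixed grid.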

  6. Topology optimised wavelength dependent splitters

    Hede, K. K.; Burgos Leon, J.; Frandsen, Lars Hagedorn

A photonic crystal wavelength dependent splitter has been constructed by utilising topology optimisation [1]. The splitter has been fabricated in a silicon-on-insulator material (Fig. 1). The topology optimised wavelength dependent splitter demonstrates promising 3D FDTD simulation results... This complex photonic crystal structure is very sensitive to small fabrication variations from the expected topology optimised design. A wavelength dependent splitter is an important basic building block for high-performance nanophotonic circuits. [1] J. S. Jensen and O. Sigmund, App. Phys. Lett. 84, 2022...

  7. The dismantling of nuclear installations: The dismantling of nuclear installations at the CEA's Directorate for nuclear energy; The CEA's sanitation and dismantling works: example of one of the Marcoule UP1 program lots; Research and innovation in sanitation-dismantling; Global optimisation of the management of dismantling radioactive wastes

    Hauet, Jean-Pierre; Piketty, Laurence; Moitrier, Cyril; Blanchard, Samuel; Soulabaille, Yves; Georges, Christine; Dutzer, Michel; Legee, Frederic

    2016-01-01

This publication proposes a set of four articles which address issues related to the dismantling of nuclear installations in France, notably for the different actors involved, such as the CEA and ANDRA. The authors more particularly address the issue and the general strategy of dismantling within the Directorate for nuclear energy of the CEA; comment on the example of one of the Marcoule UP1 program lots to highlight the sanitation and dismantling works performed by the CEA; discuss current research and innovation activities within the CEA regarding sanitation and dismantling; and discuss how to globally optimise the management of the radioactive wastes produced by dismantling activities

  8. Mesh dependence in PDE-constrained optimisation an application in tidal turbine array layouts

    Schwedes, Tobias; Funke, Simon W; Piggott, Matthew D

    2017-01-01

    This book provides an introduction to PDE-constrained optimisation using finite elements and the adjoint approach. The practical impact of the mathematical insights presented here are demonstrated using the realistic scenario of the optimal placement of marine power turbines, thereby illustrating the real-world relevance of best-practice Hilbert space aware approaches to PDE-constrained optimisation problems. Many optimisation problems that arise in a real-world context are constrained by partial differential equations (PDEs). That is, the system whose configuration is to be optimised follows physical laws given by PDEs. This book describes general Hilbert space formulations of optimisation algorithms, thereby facilitating optimisations whose controls are functions of space. It demonstrates the importance of methods that respect the Hilbert space structure of the problem by analysing the mathematical drawbacks of failing to do so. The approaches considered are illustrated using the optimisation problem arisin...

  9. Turbulence optimisation in stellarator experiments

    Proll, Josefine H.E. [Max-Planck/Princeton Center for Plasma Physics (Germany); Max-Planck-Institut fuer Plasmaphysik, Wendelsteinstr. 1, 17491 Greifswald (Germany); Faber, Benjamin J. [HSX Plasma Laboratory, University of Wisconsin-Madison, Madison, WI 53706 (United States); Helander, Per; Xanthopoulos, Pavlos [Max-Planck/Princeton Center for Plasma Physics (Germany); Lazerson, Samuel A.; Mynick, Harry E. [Plasma Physics Laboratory, Princeton University, P.O. Box 451 Princeton, New Jersey 08543-0451 (United States)

    2015-05-01

    Stellarators, the twisted siblings of the axisymmetric fusion experiments called tokamaks, have historically suffered from confining the heat of the plasma insufficiently compared with tokamaks and were therefore considered to be less promising candidates for a fusion reactor. This has changed, however, with the advent of stellarators in which the laminar transport is reduced to levels below that of tokamaks by shaping the magnetic field accordingly. As in tokamaks, the turbulent transport remains as the now dominant transport channel. Recent analytical theory suggests that the large configuration space of stellarators allows for an additional optimisation of the magnetic field to also reduce the turbulent transport. In this talk, the idea behind the turbulence optimisation is explained. We also present how an optimised equilibrium is obtained and how it might differ from the equilibrium field of an already existing device, and we compare experimental turbulence measurements in different configurations of the HSX stellarator in order to test the optimisation procedure.

  10. SPS batch spacing optimisation

    Velotti, F M; Carlier, E; Goddard, B; Kain, V; Kotzian, G

    2017-01-01

Until 2015, the LHC filling schemes used the batch spacing as specified in the LHC design report. The maximum number of bunches injectable in the LHC directly depends on the batch spacing at injection in the SPS and hence on the MKP rise time. As part of the LHC Injectors Upgrade project for LHC heavy ions, a reduction of the batch spacing is needed. In this direction, studies to approach the MKP design rise time of 150 ns (2-98%) have been carried out. These measurements gave clear indications that such optimisation, and beyond, could also be done for higher injection momentum beams, where the additional slower MKP (MKP-L) is needed. After the successful results from the 2015 SPS batch spacing optimisation for the Pb-Pb run [1], the same concept was considered for proton beams as well. In fact, thanks to the SPS transverse feedback, it was already observed that a lower batch spacing than the design one (225 ns) could be achieved. For the 2016 p-Pb run, a batch spacing of 200 ns for the proton beam with 100 ns bunch spacing was reque...

  11. Sequential projection pursuit for optimised vibration-based damage detection in an experimental wind turbine blade

    Hoell, Simon; Omenzetter, Piotr

    2018-02-01

To advance the concept of smart structures in large systems, such as wind turbines (WTs), it is desirable to be able to detect structural damage early while using minimal instrumentation. Data-driven vibration-based damage detection methods can be competitive in that respect because global vibrational responses encompass the entire structure. Multivariate damage sensitive features (DSFs) extracted from acceleration responses make it possible to detect changes in a structure via statistical methods. However, even though such DSFs contain information about the structural state, they may not be optimised for the damage detection task. This paper addresses this shortcoming by exploring a DSF projection technique specialised for statistical structural damage detection. High-dimensional initial DSFs are projected onto a low-dimensional space for improved damage detection performance and a simultaneous reduction of the computational burden. The technique is based on sequential projection pursuit, where the projection vectors are optimised one by one using an advanced evolutionary strategy. The approach is applied to laboratory experiments with a small-scale WT blade under wind-like excitations. Autocorrelation function coefficients calculated from acceleration signals are employed as DSFs. The optimal numbers of projection vectors are identified with the help of a fast forward selection procedure. To benchmark the proposed method, selections of the original DSFs as well as principal component analysis scores derived from these features are additionally investigated. The optimised DSFs are tested for damage detection on previously unseen data from the healthy state and a wide range of damage scenarios. It is demonstrated that using selected subsets of the initial and transformed DSFs improves damage detectability compared with the full set of features. Furthermore, superior results can be achieved by projecting the autocorrelation coefficients onto just a single optimised projection vector.
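To make the projection idea concrete, the sketch below searches for a single direction w that maximises a Fisher-like separation between "healthy" and "damaged" feature sets. Plain random search over unit vectors replaces the paper's evolutionary strategy, and synthetic Gaussian features replace the autocorrelation-based DSFs; everything here (the data, the separation index, the seed) is an illustrative assumption.

```python
# Find one projection direction that makes a damage-sensitive statistic most
# discriminative: random search over unit vectors, scored by a Fisher-like
# squared-mean-gap over summed-variance index.
import math
import random

rng = random.Random(3)

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def project(X, w):
    return [sum(xi * wi for xi, wi in zip(row, w)) for row in X]

def separation(healthy, damaged, w):
    """Fisher-like index: squared mean gap over summed variances."""
    ph, pd = project(healthy, w), project(damaged, w)
    mh, md = sum(ph) / len(ph), sum(pd) / len(pd)
    vh = sum((x - mh) ** 2 for x in ph) / len(ph)
    vd = sum((x - md) ** 2 for x in pd) / len(pd)
    return (mh - md) ** 2 / (vh + vd + 1e-12)

def best_projection(healthy, damaged, dim, trials=2000):
    best_w, best_s = None, -1.0
    for _ in range(trials):
        w = unit([rng.gauss(0, 1) for _ in range(dim)])
        s = separation(healthy, damaged, w)
        if s > best_s:
            best_w, best_s = w, s
    return best_w, best_s

# Toy DSFs: damage shifts only the second of three feature dimensions.
healthy = [[rng.gauss(0, 1), rng.gauss(0, 0.2), rng.gauss(0, 1)] for _ in range(100)]
damaged = [[rng.gauss(0, 1), rng.gauss(1.0, 0.2), rng.gauss(0, 1)] for _ in range(100)]
w, s = best_projection(healthy, damaged, dim=3)
print([round(x, 2) for x in w], round(s, 2))
```

Because damage shifts only the second feature dimension, the best direction found is dominated by that axis, illustrating how a single optimised projection vector can concentrate the damage-relevant information from a higher-dimensional DSF.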

  12. Combining optimisation and simulation in an energy systems analysis of a Swedish iron foundry

    Mardan, Nawzad; Klahr, Roger

    2012-01-01

To face global competition, and also to reduce environmental and climate impact, industry-wide changes are needed, especially regarding energy use, which is closely related to global warming. Energy efficiency is therefore an essential task for the future, as it has a significant impact on both business profits and the environment. For the analysis of possible changes in industrial production processes, and to choose which changes should be made, various modelling tools can be used as decision support. This paper uses two types of energy analysis tool: Discrete Event Simulation (DES) and Energy Systems Optimisation (ESO). The aim of this study is to describe how a DES and an ESO tool can be combined. A comprehensive five-step approach is proposed for reducing system costs and making the production system more robust. A case study representing a new investment in part of a Swedish iron foundry is also included to illustrate the method's use. The method described in this paper is based on the use of the DES program QUEST and the ESO tool reMIND. The method combination itself is generic, i.e. other similar programs can be used as well, with some adjustments and adaptations. The results from the case study show that when different boundary conditions are used, the result obtained from the simulation tools is not optimal; in other words, it represents only a feasible solution and not the best way to run the factory. It is therefore important to use the optimisation tool in such cases in order to obtain the optimum operating strategy. By using the optimisation tool a substantial amount of resources can be saved. The results also show that the combination of optimisation and simulation tools is useful for providing very detailed information about how the system works and for predicting system behaviour, as well as for minimising the system cost. -- Highlights: ► This study describes how a simulation and an optimisation tool can be combined. ► A case study representing a new

  13. Globalization

    Tulio Rosembuj

    2006-12-01

Full Text Available There is no singular globalization, nor is it the result of an individual agent. We could start by saying that global action has different angles, and the subjects who perform it are different, as are its objectives. The global is an invisible invasion of materials and immediate effects.

  14. Globalization

    Tulio Rosembuj

    2006-01-01

There is no singular globalization, nor is it the result of an individual agent. We could start by saying that global action has different angles, and the subjects who perform it are different, as are its objectives. The global is an invisible invasion of materials and immediate effects.

  15. Global and Local Sensitivity Analysis Methods for a Physical System

    Morio, Jerome

    2011-01-01

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…

  16. Methods for global sensitivity analysis in life cycle assessment

    Groen, Evelyne A.; Bokkers, Eddy; Heijungs, Reinout; Boer, de Imke J.M.

    2017-01-01

    Purpose: Input parameters required to quantify environmental impact in life cycle assessment (LCA) can be uncertain due to e.g. temporal variability or unknowns about the true value of emission factors. Uncertainty of environmental impact can be analysed by means of a global sensitivity analysis to

  17. Vaccine strategies: Optimising outcomes.

    Hardt, Karin; Bonanni, Paolo; King, Susan; Santos, Jose Ignacio; El-Hodhod, Mostafa; Zimet, Gregory D; Preiss, Scott

    2016-12-20

factors that encourage success, which often include strong support from government and healthcare organisations, as well as tailored, culturally appropriate local approaches to optimise outcomes. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Exploration of automatic optimisation for CUDA programming

    Al-Mouhamed, Mayez

    2014-09-16

© 2014 Taylor & Francis. Writing optimised compute unified device architecture (CUDA) programs for graphics processing units (GPUs) is complex even for experts. We present a design methodology for a restructuring tool that converts C-loops into optimised CUDA kernels based on a three-step algorithm comprising loop tiling, coalesced memory access and resource optimisation. A method for finding possible loop tiling solutions with coalesced memory access is developed, and a simplified algorithm for restructuring C-loops into an efficient CUDA kernel is presented. In the evaluation, we implement matrix multiply (MM), matrix transpose (M-transpose), matrix scaling (M-scaling) and matrix vector multiply (MV) using the proposed algorithm. We present the analysis of the execution time and GPU throughput for the above applications, which compare favourably to other proposals. Evaluation is carried out while scaling the problem size and running under a variety of kernel configurations. The obtained speedup is about 28-35% for M-transpose compared to the NVIDIA Software Development Kit, 33% for MV compared to a general-purpose computation on graphics processing units compiler, and more than 80% for MM and M-scaling compared to CUDA-lite.

  19. Exploration of automatic optimisation for CUDA programming

    Al-Mouhamed, Mayez; Khan, Ayaz ul Hassan

    2014-01-01

© 2014 Taylor & Francis. Writing optimised compute unified device architecture (CUDA) programs for graphics processing units (GPUs) is complex even for experts. We present a design methodology for a restructuring tool that converts C-loops into optimised CUDA kernels based on a three-step algorithm comprising loop tiling, coalesced memory access and resource optimisation. A method for finding possible loop tiling solutions with coalesced memory access is developed, and a simplified algorithm for restructuring C-loops into an efficient CUDA kernel is presented. In the evaluation, we implement matrix multiply (MM), matrix transpose (M-transpose), matrix scaling (M-scaling) and matrix vector multiply (MV) using the proposed algorithm. We present the analysis of the execution time and GPU throughput for the above applications, which compare favourably to other proposals. Evaluation is carried out while scaling the problem size and running under a variety of kernel configurations. The obtained speedup is about 28-35% for M-transpose compared to the NVIDIA Software Development Kit, 33% for MV compared to a general-purpose computation on graphics processing units compiler, and more than 80% for MM and M-scaling compared to CUDA-lite.

  20. Optimisation and symmetry in experimental radiation physics

    Ghose, A.

    1988-01-01

The present monograph is concerned with the optimisation of geometric factors in radiation physics experiments. The discussions are essentially confined to those systems in which optimisation is equivalent to symmetrical configurations of the measurement systems. They include measurements of interaction cross sections of diverse types, determination of polarisations, development of detectors with almost ideal characteristics, production of radiations with continuously variable energies, and development of high-efficiency spectrometers, etc. The monograph is intended for use by experimental physicists investigating primary interactions of radiations with matter and associated technologies. We have illustrated the various optimisation procedures by considering the cases of the so-called '14 MeV' or d-t neutrons and gamma rays with energies less than 3 MeV. Developments in fusion technology are critically dependent on the availability of accurate cross sections of nuclei for fast neutrons of energies at least as high as those of d-t neutrons. In this monograph we have discussed various techniques which can be used to improve the accuracy of such measurements, and we have also presented a method for generating almost monoenergetic neutrons in the 8 MeV to 13 MeV energy range which can be used to measure cross sections in this sparsely investigated region

  1. Cultural-based particle swarm for dynamic optimisation problems

    Daneshyari, Moayed; Yen, Gary G.

    2012-07-01

Many practical optimisation problems involve uncertainties, among which a significant number belong to the dynamic optimisation problem (DOP) category, in which the fitness function changes through time. In this study, we propose a cultural-based particle swarm optimisation (PSO) to solve DOPs. A cultural framework is adopted that incorporates the required information from the PSO into five sections of the belief space, namely situational, temporal, domain, normative and spatial knowledge. The stored information is used to detect changes in the environment and assists the response to change through diversity-based repulsion among particles and migration among swarms in the population space; it also helps in selecting the leading particles at three different levels: personal, swarm and global. Comparison of the proposed heuristics over several difficult dynamic benchmark problems demonstrates better or equal performance with respect to most other selected state-of-the-art dynamic PSO heuristics.
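A minimal sketch of the two mechanisms highlighted above, change detection by re-evaluating stored bests and diversity restoration after a change, grafted onto a plain global-best PSO. The cultural belief-space bookkeeping is omitted, the re-scatter of half the swarm is a crude stand-in for the paper's repulsion and migration operators, and all parameters and the demo function are illustrative assumptions.

```python
import random

def dynamic_pso(fitness, steps, dim=2, swarm=20, w=0.7, c1=1.5, c2=1.5,
                seed=1, on_step=None):
    rng = random.Random(seed)
    rand_pos = lambda: [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    pos = [rand_pos() for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    for t in range(steps):
        if on_step:
            on_step(t)                       # lets the demo move the optimum
        # Change detection: re-evaluate stored bests; if the best worsened,
        # the landscape moved, so re-scatter half the swarm for diversity.
        new_vals = [fitness(p) for p in pbest]
        if min(new_vals) > min(pbest_val) + 1e-9:
            for i in range(swarm // 2):
                pos[i] = rand_pos()
                vel[i] = [0.0] * dim
                pbest[i] = pos[i][:]
                new_vals[i] = fitness(pos[i])
        pbest_val = new_vals
        g = min(range(swarm), key=lambda i: pbest_val[i])
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (pbest[g][d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = fitness(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
    g = min(range(swarm), key=lambda i: pbest_val[i])
    return pbest[g], pbest_val[g]

# Demo: the optimum jumps from (0, 0) to (2, 2) halfway through the run.
target = [0.0, 0.0]
sphere = lambda p: sum((x - t) ** 2 for x, t in zip(p, target))

def shift(t):
    if t == 50:
        target[:] = [2.0, 2.0]

best, val = dynamic_pso(sphere, 100, on_step=shift)
print([round(x, 2) for x in best], round(val, 4))
```

Without the re-evaluation and re-scatter, the converged swarm would have near-zero velocities and stale memories after the shift and would remain stuck near the old optimum; this loss of diversity is precisely the failure mode the paper's repulsion and migration mechanisms target.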

  2. Globalization

Andrușcă Maria Carmen

    2013-01-01

The field of globalization has highlighted an interdependence, implying a more harmonious understanding shaped by the daily interaction between nations through the inducement of peace and the management of the streamlining and effectiveness of the global economy. For globalization to function, the developing countries, which can be helped by the developed ones, must be involved. The international community can contribute to the institution of the development environment of the gl...

  3. Global/local methods research using a common structural analysis framework

    Knight, Norman F., Jr.; Ransom, Jonathan B.; Griffin, O. H., Jr.; Thompson, Danniella M.

    1991-01-01

    Methodologies for global/local stress analysis are described including both two- and three-dimensional analysis methods. These methods are being developed within a common structural analysis framework. Representative structural analysis problems are presented to demonstrate the global/local methodologies being developed.

  4. Getting agile methods to work for Cordys global software product development

    van Hillegersberg, Jos; Ligtenberg, Gerwin; Aydin, M.N.; Kotlarsky, J.; Willcocks, L.P.; Oshri, I.

    2011-01-01

    Getting agile methods to work in global software development is a potentially rewarding but challenging task. Agile methods are relatively young and still maturing. The application to globally distributed projects is in its early stages. Various guidelines on how to apply and sometimes adapt agile

  5. Conventional method for the calculation of the global energy cost of buildings; Methode conventionnelle de calcul du cout global energetique des batiments

    NONE

    2002-05-01

A working group driven by the Electricite de France (EdF), Chauffage Fioul and Gaz de France (GdF) companies has been set up, with the support of several building engineering companies, in order to clarify the use of the method of calculation of the global energy cost of buildings. This global cost is one economic decision-support criterion among others. This press kit first presents the content of the method (input data, calculation of annual expenses, calculation of the global energy cost, display of results and limitations of the method). Then it fully describes the method and the appendices necessary for its implementation: economic and financial context, general data of the project in progress, environmental data, occupation and comfort level, variants, investment cost of energy systems, investment cost for the structure linked with the energy system, investment cost for other invariant elements of the structure, calculation of consumptions (space heating, hot water, ventilation), maintenance costs (energy systems, structure), operation and exploitation costs, tariffs and consumption costs and taxes, discounted global cost, annualised global cost, comparison between variants. The method is applied to a council building of 23 flats taken as an example. (J.S.)
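The arithmetic such a method organises can be sketched directly: the discounted global cost sums the investment and the discounted annual expenses over the study period, and the annualised cost spreads that total back into a constant annual equivalent. The figures, discount rate and period below are illustrative assumptions, not values from the method itself.

```python
# Discounted ("global") cost and its constant-annuity equivalent for a
# building energy system, using standard present-value arithmetic.
def discounted_global_cost(investment, annual_cost, rate, years):
    """Investment plus annual expenses discounted over the study period."""
    return investment + sum(annual_cost / (1.0 + rate) ** t
                            for t in range(1, years + 1))

def annualised_cost(global_cost, rate, years):
    """Constant annuity with the same present value as the global cost."""
    annuity_factor = rate / (1.0 - (1.0 + rate) ** -years)
    return global_cost * annuity_factor

gc = discounted_global_cost(investment=100_000, annual_cost=8_000,
                            rate=0.04, years=20)
print(round(gc), round(annualised_cost(gc, 0.04, 20)))
```

The two quantities are interchangeable views of the same cost: discounting the annualised figure back over the same period recovers the global cost, which is what makes either one usable for comparing variants.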

  6. Isogeometric Analysis and Shape Optimisation

    Gravesen, Jens; Evgrafov, Anton; Gersborg, Allan Roulund

of the whole domain. So in every optimisation cycle we need to extend a parametrisation of the boundary of a domain to the whole domain. It has to be fast in order not to slow the optimisation down, but it also has to be robust and give a parametrisation of high quality. These are conflicting requirements, so we... will explain how the validity of a parametrisation can be checked and will describe various ways to parametrise a domain. We will in particular study the Winslow functional, which turns out to have some desirable properties. Other problems we touch upon include the clustering of boundary control points (design...

  7. Preconditioned stochastic gradient descent optimisation for monomodal image registration

    Klein, S.; Staring, M.; Andersson, J.P.; Pluim, J.P.W.; Fichtinger, G.; Martel, A.; Peters, T.

    2011-01-01

We present a stochastic optimisation method for intensity-based monomodal image registration. The method is based on a Robbins-Monro stochastic gradient descent method with adaptive step size estimation, and adds a preconditioning matrix. The derivation of the preconditioner is based on the
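The update rule being described is x_{k+1} = x_k - a_k P g_k, with a decaying Robbins-Monro gain a_k, a noisy gradient g_k, and a preconditioner P. The sketch below applies it to a deliberately ill-scaled quadratic with P chosen as the inverse of a known diagonal Hessian; the cost function, noise model and gain schedule are illustrative assumptions, not the registration problem or the preconditioner derived in the paper.

```python
# Preconditioned Robbins-Monro iteration on a badly scaled quadratic:
# the preconditioner equalises the axes so the decaying gain suits both.
import random

rng = random.Random(0)
H = [4.0, 0.05]          # diagonal "Hessian": one fast axis, one very slow axis
x_opt = [1.0, -2.0]      # minimiser of the quadratic cost

def noisy_grad(x):
    """Gradient of 0.5 * sum(H_i * (x_i - opt_i)^2), plus observation noise."""
    return [h * (xi - oi) + rng.gauss(0.0, 0.02)
            for h, xi, oi in zip(H, x, x_opt)]

def robbins_monro(x0, iters, precondition):
    x = list(x0)
    # P approximates the inverse Hessian; identity when preconditioning is off.
    P = [1.0 / h for h in H] if precondition else [1.0, 1.0]
    for k in range(iters):
        a_k = 1.0 / (k + 10.0) ** 0.6          # decaying Robbins-Monro gain
        g = noisy_grad(x)
        x = [xi - a_k * p * gi for xi, p, gi in zip(x, P, g)]
    return x

err = lambda x: sum((xi - oi) ** 2 for xi, oi in zip(x, x_opt)) ** 0.5
plain = robbins_monro([0.0, 0.0], 200, precondition=False)
pre = robbins_monro([0.0, 0.0], 200, precondition=True)
print(round(err(plain), 3), round(err(pre), 3))
```

With the identity step the gain schedule that keeps the fast axis stable is far too small for the slow axis, which barely moves in 200 iterations; the preconditioned run converges on both axes, which is the motivation for adding P to a plain Robbins-Monro scheme.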

  8. Heuristic method for searching global maximum of multimodal unknown function

    Kamei, K; Araki, Y; Inoue, K

    1983-06-01

    The method is composed of three kinds of searches. They are called g (grasping)-mode search, f (finding)-mode search and c (confirming)-mode search. In the g-mode search and the c-mode search, a heuristic method is used which was extracted from search behaviors of human subjects. In f-mode search, the simplex method is used which is well known as a search method for unimodal unknown function. Each mode search and its transitions are shown in the form of flowchart. The numerical results for one-dimensional through six-dimensional multimodal functions prove the proposed search method to be an effective one. 11 references.
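    The idea of pairing a global "grasping" phase with a local search for unimodal functions can be shown in miniature (a hedged sketch: a coarse scan plus a step-halving ascent stands in for the paper's g/f/c-mode searches and the simplex method, and the multimodal test function is invented):

```python
import math

def local_max(f, x, step=0.1, tol=1e-6):
    # Local ascent with a shrinking step: a stand-in for the paper's
    # f-mode (simplex) search on a unimodal neighbourhood.
    while step > tol:
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            step *= 0.5
    return x

def coarse_then_local_max(f, lo, hi, n=21):
    # Coarse global scan (loosely playing the role of the g-mode
    # search), then local refinement from every start point.
    starts = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return max((local_max(f, s) for s in starts), key=f)

# Multimodal test function: quadratic trend plus sinusoidal ripples,
# global maximum near x = 1.6.
f = lambda x: -(x - 2.0) ** 2 + math.sin(5 * x)
x_best = coarse_then_local_max(f, -5.0, 5.0)
```

    A purely local search started from a single point would be trapped by the first ripple it meets; the coarse scan supplies enough candidate basins that the best local maximum found is the global one.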

  9. Simulation and optimisation modelling approach for operation of the Hoa Binh Reservoir, Vietnam

    Ngo, Long le; Madsen, Henrik; Rosbjerg, Dan

    2007-01-01

    Hoa Binh, the largest reservoir in Vietnam, plays an important role in flood control for the Red River delta and in hydropower generation. Due to its multi-purpose character, conflicts and disputes in operating the reservoir have been ongoing since its construction, particularly in the flood season. This paper proposes to optimise the control strategies for the Hoa Binh reservoir operation by applying a combination of simulation and optimisation models. The control strategies are set up in the MIKE 11 simulation model to guide the releases of the reservoir system according to the current storage level, the hydro-meteorological conditions, and the time of the year. A heuristic global optimisation tool, the shuffled complex evolution (SCE) algorithm, is adopted for optimising the reservoir operation. The optimisation puts focus on the trade-off between flood control and hydropower generation for the Hoa...
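    The shuffled-complex-evolution idea can be sketched in a bare-bones form (this is not the full SCE-UA algorithm of the paper: the sub-complex sampling and evolution steps are reduced to a single reflect-or-contract move, and the two-term "flood risk versus lost hydropower" objective is invented for illustration):

```python
import random

def sce_minimise(f, dim, lo, hi, n_complexes=2, m=5, iters=100, seed=3):
    # Bare-bones sketch of shuffled complex evolution: sample a
    # population, partition it into complexes, improve each complex's
    # worst point by reflection through the centroid of the others
    # (contracting toward the centroid if reflection fails), then
    # shuffle the complexes back together and repeat.
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_complexes * m)]
    for _ in range(iters):
        pop.sort(key=f)
        complexes = [pop[i::n_complexes] for i in range(n_complexes)]
        for cx in complexes:
            worst = max(cx, key=f)
            others = [p for p in cx if p is not worst]
            centroid = [sum(p[j] for p in others) / len(others)
                        for j in range(dim)]
            reflected = [2.0 * centroid[j] - worst[j] for j in range(dim)]
            if f(reflected) < f(worst):
                worst[:] = reflected
            else:  # contract the worst point toward the centroid
                worst[:] = [(centroid[j] + worst[j]) / 2.0
                            for j in range(dim)]
        pop = [p for cx in complexes for p in cx]  # shuffle together
    return min(pop, key=f)

# Hypothetical release-decision trade-off: weighted sum of a "flood
# risk" and a "lost hydropower" term, both invented for illustration.
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2
best = sce_minimise(f, dim=2, lo=-5.0, hi=5.0)
```

    The shuffling step is what distinguishes SCE from running several simplex searches in parallel: information gathered by one complex is periodically shared with the others.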

  10. Similar estimates of temperature impacts on global wheat yield by three independent methods

    Liu, Bing; Asseng, Senthold; Müller, Christoph

    2016-01-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries... By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.

  11. Evaluation of perioperative nutritional status with subjective global assessment method in patients undergoing gastrointestinal cancer surgery.

    Erdim, Aylin; Aktan, Ahmet Özdemir

    2017-01-01

    This study was designed to evaluate the perioperative nutritional status of patients undergoing surgery for gastrointestinal cancer using the Subjective Global Assessment, and surgeon behaviour regarding nutritional support. We recruited 100 patients undergoing surgery for gastrointestinal cancer in one university and two state teaching hospitals. The Subjective Global Assessment was administered to evaluate preoperative and postoperative nutritional status. Fifty-two patients in the state hospitals (Group 1) and 48 in the university hospital (Group 2) were assessed. Anthropometric and biochemical measurements were performed. Changes between preoperative Subjective Global Assessment scores and scores at the time of discharge, and the types of nutritional support, were compared. Subjective Global Assessment-B was regarded as moderate and Subjective Global Assessment-C as severe malnutrition. In the preoperative assessment, 10 patients had Subjective Global Assessment-B and 29 had Subjective Global Assessment-C malnutrition in Group 1, and nine had Subjective Global Assessment-B and 31 had Subjective Global Assessment-C malnutrition in Group 2. The respective numbers in the postoperative assessment were 12 for Subjective Global Assessment-B and 30 for Subjective Global Assessment-C in Group 1, and 14 for Subjective Global Assessment-B and 26 for Subjective Global Assessment-C in Group 2. There was no difference between the two groups. Nutritional methods according to the Subjective Global Assessment evaluation in the pre- and postoperative periods did not differ between the groups. This study demonstrated that the malnutrition rate is high among patients scheduled for gastrointestinal cancer surgery and that the number of surgeons providing perioperative nutritional support was inadequate. Both university and state hospitals had similar shortcomings. The Subjective Global Assessment is an easy and reliable test and, if utilised, will be helpful in detecting patients requiring nutritional support.

  12. Cogeneration technologies, optimisation and implementation

    Frangopoulos, Christos A

    2017-01-01

    Cogeneration refers to the use of a power station to deliver two or more useful forms of energy, for example, to generate electricity and heat at the same time. This book provides an integrated treatment of cogeneration, including a tour of the available technologies and their features, and how these systems can be analysed and optimised.

  13. For Time-Continuous Optimisation

    Heinrich, Mary Katherine; Ayres, Phil

    2016-01-01

    Strategies for optimisation in design normatively assume an artefact end-point, disallowing continuous architecture that engages living systems, dynamic behaviour, and complex systems. In our Flora Robotica investigations of symbiotic plant-robot bio-hybrids, we require computational tools...

  14. Analysis of global multiscale finite element methods for wave equations with continuum spatial scales

    Jiang, Lijian; Efendiev, Yalchin; Ginting, Victor

    2010-01-01

    In this paper, we discuss a numerical multiscale approach for solving wave equations with heterogeneous coefficients. Our interest comes from geophysics applications and we assume that there is no scale separation with respect to spatial variables. To obtain the solution of these multiscale problems on a coarse grid, we compute global fields such that the solution smoothly depends on these fields. We present a Galerkin multiscale finite element method using the global information and provide a convergence analysis when applied to solve the wave equations. We investigate the relation between the smoothness of the global fields and convergence rates of the global Galerkin multiscale finite element method for the wave equations. Numerical examples demonstrate that the use of global information renders better accuracy for wave equations with heterogeneous coefficients than the local multiscale finite element method. © 2010 IMACS.

  15. Analysis of global multiscale finite element methods for wave equations with continuum spatial scales

    Jiang, Lijian

    2010-08-01

    In this paper, we discuss a numerical multiscale approach for solving wave equations with heterogeneous coefficients. Our interest comes from geophysics applications and we assume that there is no scale separation with respect to spatial variables. To obtain the solution of these multiscale problems on a coarse grid, we compute global fields such that the solution smoothly depends on these fields. We present a Galerkin multiscale finite element method using the global information and provide a convergence analysis when applied to solve the wave equations. We investigate the relation between the smoothness of the global fields and convergence rates of the global Galerkin multiscale finite element method for the wave equations. Numerical examples demonstrate that the use of global information renders better accuracy for wave equations with heterogeneous coefficients than the local multiscale finite element method. © 2010 IMACS.

  16. Optimising end of generation of Magnox reactors

    Hall, D.; Hopper, E.D.A.

    2014-01-01

    Designing, justifying and gaining regulatory approval for optimised terminal fuel cycles for the last 4 of the 13-strong Magnox fleet is described, covering: - constraints set by the plant owner's integrated closure plan and opportunities for innovative fuel cycles while preserving flexibility to respond to business changes; - methods of collectively determining the best options for each site; - selected strategies, including lower fuel element retention and inter-reactor transfer of fuel; - the required work scope, its technical, safety case and resource challenges, and how they were met; - achieving additional electricity generation worth in excess of £1bn from 4 sites (a total of 8 reactors); - the keys to success. (authors)

  17. Computational Experience with Globally Convergent Descent Methods for Large Sparse Systems of Nonlinear Equations

    Lukšan, Ladislav; Vlček, Jan

    1998-01-01

    Roč. 8, č. 3-4 (1998), s. 201-223 ISSN 1055-6788 R&D Projects: GA ČR GA201/96/0918 Keywords: nonlinear equations * Armijo-type descent methods * Newton-like methods * truncated methods * global convergence * nonsymmetric linear systems * conjugate gradient-type methods * residual smoothing * computational experiments Subject RIV: BB - Applied Statistics, Operational Research
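    The Armijo-type globalisation of Newton-like methods named in the keywords can be illustrated on a tiny dense system (a sketch only: the paper concerns large sparse systems and truncated variants, and the 2x2 example system below is invented):

```python
def solve_newton_armijo(F, J, x0, tol=1e-10, max_iter=50):
    # Damped Newton for F(x) = 0, globalised by an Armijo-type
    # backtracking line search on the merit function 0.5*||F(x)||^2.
    def merit(x):
        return 0.5 * sum(fi * fi for fi in F(x))
    x = list(x0)
    for _ in range(max_iter):
        if merit(x) < tol:
            break
        Fx = F(x)
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        # Newton direction dx = -J^{-1} F(x), 2x2 solve via Cramer.
        dx = [-(d * Fx[0] - b * Fx[1]) / det,
              -(a * Fx[1] - c * Fx[0]) / det]
        t = 1.0
        while (merit([x[0] + t * dx[0], x[1] + t * dx[1]])
               > (1.0 - 1e-4 * t) * merit(x) and t > 1e-12):
            t *= 0.5  # backtrack until sufficient decrease holds
        x = [x[0] + t * dx[0], x[1] + t * dx[1]]
    return x

# Example system: x^2 + y^2 = 4 and x*y = 1.
F = lambda x: [x[0] ** 2 + x[1] ** 2 - 4.0, x[0] * x[1] - 1.0]
J = lambda x: [[2.0 * x[0], 2.0 * x[1]], [x[1], x[0]]]
root = solve_newton_armijo(F, J, [2.0, 0.0])
```

    Far from a root the full Newton step may increase the residual; the line search damps such steps, which is the global-convergence mechanism the record's keywords refer to.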

  18. Global metabolite analysis of yeast: evaluation of sample preparation methods

    Villas-Bôas, Silas Granato; Højer-Pedersen, Jesper; Åkesson, Mats Fredrik

    2005-01-01

    Sample preparation is considered one of the limiting steps in microbial metabolome analysis. Eukaryotes and prokaryotes behave very differently during the several steps of classical sample preparation methods for analysis of metabolites. Even within the eukaryote kingdom there is a vast diversity...

  19. Similar estimates of temperature impacts on global wheat yield by three independent methods

    Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; Supit, Iwan; Wolf, Joost

    2016-01-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects,

  20. A New Method of Global Path Planning for AGV

    SHI En-xiu; HUANG Yu-mei

    2006-01-01

    Path planning is important in mobile robot (MR) research, and path-planning methods have been used in many applications. An automated guided vehicle (AGV), a kind of MR, is used in flexible manufacturing systems (FMS), and path planning for it is essential to improve FMS efficiency. A new method is proposed in this paper for an FMS with known obstacle space. The FMS is described by the Augmented Pos Matrix of a Machine (APMM) and the more compact Relative Pos Matrix of Machines (RPMM). The optimum path can be obtained according to the probability of each path and the maximal-probability path. Simulation results show that the suggested path-planning algorithm performs well: it is simple, time-saving and reliable.

  1. The Methodical Approaches to the Research of Informatization of the Global Economic Development

    Kazakova Nadezhda A.

    2018-03-01

    The article is aimed at identifying the informatization of global economic development. It considers the complex of issues connected with the development of informatization across the world's countries in the conditions of globalization. Informatization of the global economic space facilitates the opening of new markets for international trade enterprises, transnational corporations and other organizations, which not only provide exports but also create production capacities for local producers. A methodical approach is proposed that includes three stages, together with the formation of input information on the status of informatization of the global economic development of the world's countries.

  2. Globalization

    Plum, Maja

    Globalization is often referred to as external to education - a state of affairs facing the modern curriculum with numerous challenges. In this paper it is examined as internal to curriculum; analysed as a problematization in a Foucaultian sense, that is, as a complex of attentions, worries and ways of reasoning, producing curricular variables. The analysis is made through an example of early childhood curriculum in Danish pre-school, and the way the curricular variable of the pre-school child comes into being through globalization as a problematization, carried forth by the comparative practices of PISA...

  3. Globalization

    F. Gerard Adams

    2008-01-01

    The rapid globalization of the world economy is causing fundamental changes in patterns of trade and finance. Some economists have argued that globalization has arrived and that the world is "flat". While the geographic scope of markets has increased, the author argues that new patterns of trade and finance are a result of the discrepancies between "old" countries and "new". As the differences are gradually wiped out, particularly if knowledge and technology spread worldwide, the t...

  4. HVAC system optimisation-in-building section

    Lu, L.; Cai, W.; Xie, L.; Li, S.; Soh, Y.C. [School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore (Singapore)

    2004-07-01

    This paper presents a practical method to optimise the in-building section of centralised Heating, Ventilation and Air-Conditioning (HVAC) systems, which consists of indoor air loops and chilled water loops. First, through component characteristic analysis, mathematical models associated with cooling loads and energy consumption for heat exchangers and energy-consuming devices are established. By considering the variation of the cooling load of each end user, an adaptive neuro-fuzzy inference system (ANFIS) is employed to model the duct and pipe networks and obtain optimal differential pressure (DP) set points based on limited sensor information. A mixed-integer nonlinear constrained optimization of system energy is formulated and solved by a modified genetic algorithm. The main feature of our paper is a systematic approach to optimizing the overall system energy consumption rather than that of individual components. A simulation study for a typical centralised HVAC system is provided to compare the proposed optimisation method with traditional ones. The results show that the proposed method indeed improves the system performance significantly. (author)
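    A genetic algorithm for a mixed-integer problem of this kind can be sketched as follows (a toy stand-in for the paper's modified GA: tournament selection, uniform crossover and Gaussian mutation on an invented two-variable "energy" objective with a penalised constraint):

```python
import random

def genetic_minimise(obj, bounds, int_mask, pop_size=40, gens=60,
                     mut=0.2, seed=7):
    # Minimal GA for mixed-integer minimisation: integer genes are
    # kept integral by rounding at sampling and after mutation.
    rng = random.Random(seed)
    n = len(bounds)

    def rnd(i):
        v = rng.uniform(*bounds[i])
        return round(v) if int_mask[i] else v

    def clip(v, i):
        lo, hi = bounds[i]
        v = min(max(v, lo), hi)
        return round(v) if int_mask[i] else v

    pop = [[rnd(i) for i in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            a = min(rng.sample(pop, 3), key=obj)  # tournament select
            b = min(rng.sample(pop, 3), key=obj)
            child = [a[i] if rng.random() < 0.5 else b[i]
                     for i in range(n)]           # uniform crossover
            if rng.random() < mut:                # Gaussian mutation
                i = rng.randrange(n)
                child[i] = clip(child[i] + rng.gauss(0.0, 1.0), i)
            new.append(child)
        pop = new
    return min(pop, key=obj)

# Hypothetical "energy" objective: continuous setpoint x plus integer
# stage count n, with a quadratic penalty enforcing x >= n / 2.
def obj(v):
    x, n = v
    return ((x - 1.5) ** 2 + 0.1 * (n - 3) ** 2
            + 100.0 * max(0.0, n / 2.0 - x) ** 2)

best = genetic_minimise(obj, bounds=[(0.0, 5.0), (0, 6)],
                        int_mask=[False, True])
```

    The penalty term is the usual way to fold a constraint into a GA fitness function; the paper's formulation handles the DP set-point constraints in an analogous way.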

  5. Global Seabed Materials and Habitats Mapped: The Computational Methods

    Jenkins, C. J.

    2016-02-01

    What the seabed is made of has proven difficult to map on the scale of whole ocean basins. Direct sampling and observation can be augmented with proxy-parameter methods such as acoustics. Both avenues are essential to obtain enough detail and coverage, and also to validate the mapping methods. We focus on the direct observations such as samplings, photo and video, probes, diver and sub reports, and surveyed features. These are often in word-descriptive form: over 85% of the records for site materials are in this form, whether as sample/view descriptions or classifications, or described parameters such as consolidation, color, odor, structures and components. Descriptions are absolutely necessary for unusual materials and for processes - in other words, for research. The dbSEABED project not only has the largest collection of seafloor materials data worldwide, but it also uses advanced computational methods to obtain the best possible coverage and detail. Included in those techniques are linguistic text analysis (e.g., Natural Language Processing, NLP), fuzzy set theory (FST), and machine learning (ML, e.g., Random Forest). These techniques allow efficient and accurate import of huge datasets, thereby optimizing the data that exists. They merge quantitative and qualitative types of data for rich parameter sets, and extrapolate where the data are sparse for best map production. The dbSEABED data resources are now very widely used worldwide in oceanographic research, environmental management, the geosciences, engineering and survey.
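    The fuzzy-set treatment of word-descriptive records can be illustrated with a tiny vocabulary (the terms, classes and membership weights below are hypothetical, not dbSEABED's actual dictionaries):

```python
# Hypothetical membership weights: how strongly a descriptive term
# suggests each sediment class.
FUZZY = {
    "mud":    {"mud": 1.0, "sand": 0.0, "gravel": 0.0},
    "muddy":  {"mud": 0.7, "sand": 0.0, "gravel": 0.0},
    "sand":   {"mud": 0.0, "sand": 1.0, "gravel": 0.0},
    "sandy":  {"mud": 0.0, "sand": 0.7, "gravel": 0.0},
    "gravel": {"mud": 0.0, "sand": 0.0, "gravel": 1.0},
    "shelly": {"mud": 0.0, "sand": 0.2, "gravel": 0.5},
}

def fuzzy_memberships(description):
    # Turn a free-text seabed description into fuzzy class memberships
    # by max-combining per-term weights (a fuzzy union).
    scores = {"mud": 0.0, "sand": 0.0, "gravel": 0.0}
    for word in description.lower().replace(",", " ").split():
        for cls, w in FUZZY.get(word, {}).items():
            scores[cls] = max(scores[cls], w)
    return scores

m = fuzzy_memberships("Muddy SAND, shelly")
```

    Modifier forms ("muddy", "sandy") carry partial membership rather than forcing an either/or classification, which is what lets descriptive and quantitative records be merged into one parameter set.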

  6. Global Modeling of Microwave Three Terminal Active Devices Using the FDTD Method

    Mrabet, O. E; Essaaidi, M; Drissi, M'hamed

    2005-01-01

    This paper presents a new approach for the global electromagnetic analysis of the three-Terminal active linear and nonlinear microwave circuits using the Finite-Difference Time Domain (FDTD) Method...

  7. Global Convergence of Schubert’s Method for Solving Sparse Nonlinear Equations

    Huiping Cao

    2014-01-01

    Schubert’s method is an extension of Broyden’s method for solving sparse nonlinear equations, which can preserve the zero-nonzero structure defined by the sparse Jacobian matrix and can retain many good properties of Broyden’s method. In particular, Schubert’s method has been proved to be locally and q-superlinearly convergent. In this paper, we globalize Schubert’s method by using a nonmonotone line search. Under appropriate conditions, we show that the proposed algorithm converges globally and superlinearly. Some preliminary numerical experiments are presented, which demonstrate that our algorithm is effective for large-scale problems.
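    The flavour of this globalisation can be sketched with an ordinary Broyden update and a nonmonotone (worst-of-last-M) Armijo test; this is only an illustration on an invented dense 2x2 system, and it does not preserve the sparsity structure, which is the whole point of Schubert's variant:

```python
def broyden_nonmonotone(F, x0, B0, tol=1e-10, max_iter=100, M=5):
    # Broyden's rank-one quasi-Newton method globalised with a
    # nonmonotone Armijo test: the sufficient-decrease condition is
    # checked against the worst of the last M merit-function values.
    def merit(x):
        return 0.5 * sum(fi * fi for fi in F(x))
    x = list(x0)
    B = [row[:] for row in B0]
    hist = [merit(x)]
    for _ in range(max_iter):
        if hist[-1] < tol:
            break
        Fx = F(x)
        (a, b), (c, d) = B
        det = a * d - b * c
        s = [-(d * Fx[0] - b * Fx[1]) / det,   # quasi-Newton step
             -(a * Fx[1] - c * Fx[0]) / det]   # s = -B^{-1} F(x)
        t, ref = 1.0, max(hist[-M:])
        while (merit([x[0] + t * s[0], x[1] + t * s[1]])
               > (1.0 - 1e-4 * t) * ref and t > 1e-12):
            t *= 0.5
        s = [t * s[0], t * s[1]]
        x = [x[0] + s[0], x[1] + s[1]]
        y = [F(x)[i] - Fx[i] for i in range(2)]
        ss = s[0] * s[0] + s[1] * s[1]
        Bs = [B[i][0] * s[0] + B[i][1] * s[1] for i in range(2)]
        for i in range(2):       # B += (y - B s) s^T / (s^T s)
            for j in range(2):
                B[i][j] += (y[i] - Bs[i]) * s[j] / ss
        hist.append(merit(x))
    return x

# Example system: x + y = 3, x^2 + y^2 = 9, with B0 set to the exact
# Jacobian at the initial guess.
F = lambda x: [x[0] + x[1] - 3.0, x[0] ** 2 + x[1] ** 2 - 9.0]
root = broyden_nonmonotone(F, [1.0, 5.0], [[1.0, 1.0], [2.0, 10.0]])
```

    Comparing against the worst recent merit value instead of the current one allows occasional uphill steps, which is what makes the nonmonotone scheme less likely to stall than a strictly monotone line search.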

  8. Design, modeling and optimization of poly-air gap actuators with global coils: application to multi-rod linear structures; Conception, modelisation et optimisation des actionneurs polyentrefers a bobinages globaux: application aux structures lineaires multi-tiges

    Cavarec, P.E.

    2002-11-15

    The aim of this thesis is the study and design of split structures of global-coil synchronous machines for the maximization of specific torque or thrust. This machine concept, called multi-air gap, is applied more precisely to the development of a new linear multi-rod actuator, clearly connected to the context of direct-drive solutions. First, a classification of the different electromagnetic actuator families shows the particular place of multi-air gap actuators. Then a study based on geometrical parameter optimization underlines the interest of this kind of topology for reaching very high specific forces and mechanical dynamics, and a similitude law governing these actuators is extracted. A study of the mechanical behaviour, taking into account mechanical tolerances and normal (guidance) forces, is carried out. Methods for filtering the force ripple and decreasing the parasitic forces without affecting the useful force are then presented. This approach leads to multi-rod structures. A prototype is then tested, validating the feasibility of this kind of device and the accuracy of the magnetic models. This motor, having only eight rods for an active volume of one litre, reaches an electromagnetic force of 1000 N in static conditions. A method for estimating the optimal performance of multi-rod actuators under several mechanical stresses is presented. (author)

  9. Optimisation of expansion liquefaction processes using mixed refrigerant N2–CH4

    Ding, He; Sun, Heng; He, Ming

    2016-01-01

    Highlights: • A refrigerant composition matching method for N2–CH4 expansion processes. • Efficiency improvements for propane pre-cooled N2–CH4 expansion processes. • The process shows good adaptability to varying natural gas compositions. - Abstract: An expansion process with a pre-cooling system is simulated and optimised by Aspen HYSYS and MATLAB. Taking advantage of the higher specific refrigeration effect of methane and the easily reduced refrigeration temperature of nitrogen, the designed process adopts N2–CH4 as a mixed refrigerant. Based on the different thermodynamic properties and sensitivities of N2 and CH4 over the same heat transfer temperature range, this work proposes a novel method of matching refrigerant composition, aimed at single-stage or multi-stage series expansion liquefaction processes with pre-cooling systems. The method is applied successfully to a propane pre-cooled N2–CH4 expansion process, and the unit power consumption is reduced to 7.09 kWh/kmol, only 5.35% higher than the globally optimised solution obtained by a genetic algorithm. The method achieves low energy consumption and a high liquefaction rate, and thus narrows the gap in energy consumption between mixed refrigerant and expansion processes. Furthermore, the high exergy efficiency of the process indicates good adaptability to varying natural gas compositions.

  10. Aspects of approximate optimisation: overcoming the curse of dimensionality and design of experiments

    Trichon, Sophie; Bonte, M.H.A.; Ponthot, Jean-Philippe; van den Boogaard, Antonius H.

    2007-01-01

    Coupling optimisation algorithms to Finite Element Methods (FEM) is a very promising way to achieve optimal metal forming processes. However, many optimisation algorithms exist and it is not clear which of these algorithms to use. This paper investigates the sensitivity of a Sequential Approximate

  11. Optimisation of the PCR-invA primers for the detection of Salmonella ...

    A polymerase chain reaction (PCR)-based method for the detection of Salmonella species in water samples was optimised and evaluated for speed, specificity and sensitivity. Optimisation of the Mg2+ and primer concentrations and the cycling parameters increased the sensitivity and limit of detection of the PCR to 2.6 x 10^4 cfu/m.

  12. Robustness analysis of bogie suspension components Pareto optimised values

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    The bogie suspension system of high-speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of the bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of the bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with COV up to 0.1.
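    The lognormal perturbation scheme can be illustrated with a plain Monte Carlo check (a simple stand-in for the paper's maximum-entropy and multiplicative-dimensional-reduction machinery; the smooth "wear" response and the nominal values are invented):

```python
import math
import random

def lognormal_factor(rng, cov):
    # Multiplicative lognormal perturbation with mean 1 and the given
    # coefficient of variation (COV).
    sigma2 = math.log(1.0 + cov * cov)
    mu = -0.5 * sigma2
    return math.exp(rng.gauss(mu, math.sqrt(sigma2)))

def robustness(response, nominal, cov=0.1, n=2000, seed=11):
    # Perturb each design parameter lognormally about its nominal
    # (Pareto optimised) value and collect response statistics.
    rng = random.Random(seed)
    vals = []
    for _ in range(n):
        x = [p * lognormal_factor(rng, cov) for p in nominal]
        vals.append(response(x))
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, math.sqrt(var)

# Hypothetical smooth response of wear to two suspension stiffnesses,
# normalised so the nominal design gives a response of about 1.
response = lambda x: 1.0 + 0.3 * math.log(x[0]) + 0.1 * (x[1] - 1.0) ** 2
mean, std = robustness(response, nominal=[1.0, 1.0], cov=0.1)
```

    A small response standard deviation relative to its mean at COV = 0.1 is exactly the kind of evidence the paper reports for robustness of the Pareto optimised design.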

  13. Optimisation multidisciplinaire de lanceurs

    Balesdent , Mathieu

    2011-01-01

    Optimal design of launch vehicles is a complex multidisciplinary design optimization (MDO) problem that has the distinctive feature of integrating a trajectory optimization which is very constrained, difficult to solve and highly coupled with all the other disciplines involved in the design process (e.g. propulsion, aerodynamics, structure, etc.). This PhD thesis is focused on the methods which allow to adequately integrate the optimal control law calculation into the design variable optimiza...

  14. The Global Optimal Algorithm of Reliable Path Finding Problem Based on Backtracking Method

    Liang Shen

    2017-01-01

    There is growing interest in finding a global optimal path in transportation networks, particularly when the network suffers from unexpected disturbances. This paper studies the problem of finding a global optimal path that guarantees a given probability of arriving on time in a network with uncertainty, in which the travel time is stochastic rather than deterministic. Traditional path finding methods based on least expected travel time cannot capture network users' risk-taking behaviour in path finding. To overcome this limitation, reliable path finding algorithms have been proposed, but convergence to the global optimum is seldom addressed in the literature. This paper integrates the K-shortest path algorithm into a Backtracking method to propose a new path finding algorithm under uncertainty. The global optimality of the proposed method can be guaranteed. Numerical examples are conducted to demonstrate the correctness and efficiency of the proposed algorithm.

  15. Global Convergence of a Spectral Conjugate Gradient Method for Unconstrained Optimization

    Jinkui Liu

    2012-01-01

    A new nonlinear spectral conjugate descent method for solving unconstrained optimization problems is proposed on the basis of the CD method and the spectral conjugate gradient method. For any line search, the new method satisfies the sufficient descent condition g_k^T d_k <= -||g_k||^2. Moreover, we prove that the new method is globally convergent under the strong Wolfe line search. The numerical results show that the new method is more effective for the given test problems from the CUTE test problem library (Bongartz et al., 1995) than the famous CD method, FR method, and PRP method.
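    The CD-style conjugate gradient iteration and the role of the sufficient descent condition can be sketched as follows (a hedged illustration, not the authors' method: plain Armijo backtracking replaces the strong Wolfe line search, a restart enforces the descent condition, and the quadratic test problem is invented):

```python
def cg_minimise(f, grad, x0, tol=1e-8, max_iter=500):
    # Nonlinear conjugate gradient with the CD (conjugate descent)
    # beta; the direction is restarted to steepest descent whenever
    # the sufficient descent condition g^T d <= -||g||^2 fails.
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]
    n = len(x)
    for _ in range(max_iter):
        gg = sum(gi * gi for gi in g)
        if gg < tol:
            break
        gd = sum(g[i] * d[i] for i in range(n))
        if gd > -gg:                 # sufficient descent lost: restart
            d = [-gi for gi in g]
            gd = -gg
        t = 1.0                      # Armijo backtracking line search
        while (f([x[i] + t * d[i] for i in range(n)])
               > f(x) + 1e-4 * t * gd and t > 1e-15):
            t *= 0.5
        x = [x[i] + t * d[i] for i in range(n)]
        g_new = grad(x)
        beta = sum(gi * gi for gi in g_new) / (-gd)  # CD formula
        d = [-g_new[i] + beta * d[i] for i in range(n)]
        g = g_new
    return x

# Convex quadratic test problem with known minimiser (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + 5.0 * (x[1] + 2.0) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 10.0 * (x[1] + 2.0)]
x_min = cg_minimise(f, grad, [0.0, 0.0])
```

    The sufficient descent condition guarantees the line search always has a genuine descent direction to work with, which is the property the abstract highlights as holding for any line search.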

  16. Optimising Comprehensibility in Interlingual Translation

    Nisbeth Jensen, Matilde

    2015-01-01

    The increasing demand for citizen engagement in areas traditionally belonging exclusively to experts, such as health, law and technology, has given rise to the necessity of making expert knowledge available to the general public through genres such as instruction manuals for consumer goods, patient... the functional text type of Patient Information Leaflet. Finally, the usefulness of applying the principles of Plain Language and intralingual translation for optimising comprehensibility in interlingual translation is discussed.

  17. TEM turbulence optimisation in stellarators

    Proll, J. H. E.; Mynick, H. E.; Xanthopoulos, P.; Lazerson, S. A.; Faber, B. J.

    2016-01-01

    With the advent of neoclassically optimised stellarators, optimising stellarators for turbulent transport is an important next step. The reduction of ion-temperature-gradient-driven turbulence has been achieved via shaping of the magnetic field, and the reduction of trapped-electron mode (TEM) turbulence is addressed in the present paper. Recent analytical and numerical findings suggest TEMs are stabilised when a large fraction of trapped particles experiences favourable bounce-averaged curvature. This is the case for example in Wendelstein 7-X (Beidler et al 1990 Fusion Technol. 17 148) and other Helias-type stellarators. Using this knowledge, a proxy function was designed to estimate the TEM dynamics, allowing optimal configurations for TEM stability to be determined with the STELLOPT (Spong et al 2001 Nucl. Fusion 41 711) code without extensive turbulence simulations. A first proof-of-principle optimised equilibrium stemming from the TEM-dominated stellarator experiment HSX (Anderson et al 1995 Fusion Technol. 27 273) is presented for which a reduction of the linear growth rates is achieved over a broad range of the operational parameter space. As an important consequence of this property, the turbulent heat flux levels are reduced compared with the initial configuration.

  18. Alternatives for optimisation of rumen fermentation in ruminants

    T. Slavov

    2017-06-01

    Abstract. Proper knowledge of the variety of events occurring in the rumen makes possible their optimisation with respect to complete feed conversion and increasing the productive performance of ruminants. The inclusion of various dietary additives (supplements, biologically active substances, nutritional antibiotics, probiotics, enzymatic preparations, plant extracts, etc.) has an effect on the intensity and specific pathway of fermentation, and thus on overall digestion and systemic metabolism. The optimisation of rumen digestion is a method with substantial potential for improving the efficiency of ruminant husbandry, increasing the quality of their produce and maintaining health.

  19. MANAGEMENT OPTIMISATION OF MASS CUSTOMISATION MANUFACTURING USING COMPUTATIONAL INTELLIGENCE

    Louwrens Butler

    2018-05-01

    Computational intelligence paradigms can be used for advanced manufacturing system optimisation. A static simulation model of an advanced manufacturing system was developed in order to simulate a manufacturing system whose purpose is to mass-produce a customisable product range at a competitive cost. The aim of this study was to determine whether this new algorithm could outperform traditional optimisation methods. The algorithm produced a lower-cost plan than a simulated annealing algorithm, and had a lower impact on the workforce.

  20. Optimisation of BPMN Business Models via Model Checking

    Herbert, Luke Thomas; Sharp, Robin

    2013-01-01

    We present a framework for the optimisation of business processes modelled in the business process modelling language BPMN, which builds upon earlier work where we developed a model-checking-based method for the analysis of BPMN models. We define a structure for expressing optimisation goals for synthesized BPMN components, based on probabilistic computation tree logic and real-valued reward structures of the BPMN model, allowing for the specification of complex quantitative goals. We here present a simple algorithm, inspired by concepts from evolutionary algorithms, which iteratively generates...

  1. Similar Estimates of Temperature Impacts on Global Wheat Yield by Three Independent Methods

    Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.

    2016-01-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.
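    The statistical-regression approach mentioned in the abstract can be illustrated with ordinary least squares on synthetic data (the numbers below are invented to mimic the reported 4-6% decline per °C; they are not the study's data):

```python
# Synthetic (hypothetical) records: temperature anomaly (degrees C)
# and relative wheat yield change (%).
temps = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
yields = [0.0, -2.4, -5.1, -7.6, -10.2, -12.4, -15.1]

n = len(temps)
mean_t = sum(temps) / n
mean_y = sum(yields) / n
# Ordinary least squares slope: % yield change per degree of warming.
slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(temps, yields))
         / sum((t - mean_t) ** 2 for t in temps))
```

    The fitted slope of roughly -5% per °C sits inside the 4.1-6.4% decline range reported by the simulation-based methods, which is the kind of cross-method agreement the paper quantifies.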

  2. Similar estimates of temperature impacts on global wheat yield by three independent methods

    Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; Rosenzweig, Cynthia; Aggarwal, Pramod K.; Alderman, Phillip D.; Anothai, Jakarat; Basso, Bruno; Biernath, Christian; Cammarano, Davide; Challinor, Andy; Deryng, Delphine; Sanctis, Giacomo De; Doltra, Jordi; Fereres, Elias; Folberth, Christian; Garcia-Vila, Margarita; Gayler, Sebastian; Hoogenboom, Gerrit; Hunt, Leslie A.; Izaurralde, Roberto C.; Jabloun, Mohamed; Jones, Curtis D.; Kersebaum, Kurt C.; Kimball, Bruce A.; Koehler, Ann-Kristin; Kumar, Soora Naresh; Nendel, Claas; O'Leary, Garry J.; Olesen, Jørgen E.; Ottman, Michael J.; Palosuo, Taru; Prasad, P. V. Vara; Priesack, Eckart; Pugh, Thomas A. M.; Reynolds, Matthew; Rezaei, Ehsan E.; Rötter, Reimund P.; Schmid, Erwin; Semenov, Mikhail A.; Shcherbak, Iurii; Stehfest, Elke; Stöckle, Claudio O.; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Thorburn, Peter; Waha, Katharina; Wall, Gerard W.; Wang, Enli; White, Jeffrey W.; Wolf, Joost; Zhao, Zhigan; Zhu, Yan

    2016-12-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.

  3. Optimisation of Software-Defined Networks Performance Using a Hybrid Intelligent System

    Ann Sabih

    2017-06-01

    This paper proposes a novel intelligent technique designed to optimise the performance of Software-Defined Networks (SDN). The proposed hybrid intelligent system integrates intelligence-based optimisation approaches with an artificial neural network. The heuristic optimisation methods include Genetic Algorithms (GA) and Particle Swarm Optimisation (PSO). These methods were utilised separately in order to select the best inputs to maximise SDN performance. In order to identify SDN behaviour, a neural network model is trained and applied. The best optimisation approach was identified using an analytical approach that considered SDN performance and computational time as objective functions. Initially, the general neural network model was tested with unseen data before implementing the model with GA and PSO to determine the optimal performance of the SDN. The results showed that the SDN represented by an Artificial Neural Network (ANN), and optimised by PSO, generated a better configuration with regard to computational efficiency and performance index.
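The paper's GA/PSO-plus-ANN pipeline is not published as code, but the particle swarm loop at its core can be sketched as follows; the quadratic objective below stands in for the trained ANN performance model, and all names and parameter values are illustrative, not taken from the paper:

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=100, seed=1):
    """Minimise `objective` with a basic particle swarm (illustrative sketch)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest_pos, gbest_val = pbest[g][:], pbest_val[g]  # global best
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest_pos[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest_pos, gbest_val = pos[i][:], val
    return gbest_pos, gbest_val

# Stand-in for an ANN-predicted performance index: minimum at (0.5, 0.5).
best, val = pso(lambda x: sum((xi - 0.5) ** 2 for xi in x), dim=2, bounds=(0.0, 1.0))
```

In the paper's setting the objective would be the trained ANN's prediction of SDN performance rather than this toy function.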

  4. Geometrical exploration of a flux-optimised sodium receiver through multi-objective optimisation

    Asselineau, Charles-Alexis; Corsi, Clothilde; Coventry, Joe; Pye, John

    2017-06-01

    A stochastic multi-objective optimisation method is used to determine receiver geometries with maximum second law efficiency, minimal average temperature and minimal surface area. The method is able to identify a set of Pareto optimal candidates that show advantageous geometrical features, mainly in being able to maximise the intercepted flux within the geometrical boundaries set. Receivers with first law thermal efficiencies ranging from 87% to 91% are also evaluated using the second law of thermodynamics and found to have similar efficiencies of over 60%, highlighting the influence that the geometry can play in the maximisation of the work output of receivers by influencing the distribution of the flux from the concentrator.
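The Pareto-optimal candidate set described above can be recovered from any pool of evaluated receiver designs with a standard non-domination filter. The sketch below assumes all objectives are to be minimised (second-law efficiency negated) and uses hypothetical candidate values, not the authors' data:

```python
def dominates(a, b):
    """True if design a is at least as good as b in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Return the non-dominated subset of objective tuples."""
    return [d for d in designs if not any(dominates(o, d) for o in designs if o != d)]

# Hypothetical receivers: (-second-law efficiency, avg. temperature K, area m^2)
candidates = [(-0.62, 900, 1.2), (-0.60, 850, 1.0), (-0.58, 870, 1.1), (-0.63, 950, 1.5)]
front = pareto_front(candidates)
```

The third candidate is dominated (worse than the second in every objective) and is filtered out; the remaining three trade efficiency against temperature and surface area.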

  5. Optimisation of staff protection

    Faulkner, K.; Marshall, N.W.; Rawlings, D.J.

    1997-01-01

    It is important to minimize the radiation dose received by staff, but it is particularly important in interventional radiology. Staff doses may be reduced by minimizing the fluoroscopic screening time and number of images, compatible with the clinical objective of the procedure. Staff may also move to different positions in the room in an attempt to reduce doses. Finally, staff should wear appropriate protective clothing to reduce their occupational doses. This paper will concentrate on the optimization of personal shielding in interventional radiology. The effect of changing the lead equivalence of various protective devices on effective dose to staff has been studied by modeling the exposure of staff to realistic scattered radiation. Both overcouch x-ray tube/undercouch image intensifier and overcouch image intensifier/undercouch x-ray tube geometries were simulated. It was deduced from this simulation that increasing the lead apron thickness from 0.35 mm lead to 0.5 mm lead had only a small dose-reducing effect. By contrast, wearing a lead rubber thyroid shield or face mask is a superior means of reducing the effective dose to staff. Standing back from the couch when the x-ray tube is emitting radiation is another good method of reducing doses, being better than exchanging a 0.35 mm lead apron for a 0.5 mm apron. In summary, it is always preferable to shield more organs than to increase the thickness of the lead apron. (author)

  6. Optimised Pre-Analytical Methods Improve KRAS Mutation Detection in Circulating Tumour DNA (ctDNA) from Patients with Non-Small Cell Lung Cancer (NSCLC)

    Sherwood, James L.; Corcoran, Claire; Brown, Helen; Sharpe, Alan D.; Musilova, Milena; Kohlmann, Alexander

    2016-01-01

    Introduction: Non-invasive mutation testing using circulating tumour DNA (ctDNA) is an attractive premise. This could enable patients without available tumour sample to access more treatment options. Materials & Methods: Peripheral blood and matched tumours were analysed from 45 NSCLC patients. We investigated the impact of pre-analytical variables on DNA yield and/or KRAS mutation detection: sample collection tube type, incubation time, centrifugation steps, plasma input volume and DNA extraction kits. Results: 2 hr incubation time and double plasma centrifugation (2000 x g) reduced overall DNA yield, resulting in lowered levels of contaminating genomic DNA (gDNA). Reduced "contamination" and increased KRAS mutation detection were observed using cell-free DNA Blood Collection Tubes (cfDNA BCT) (Streck) after 72 hrs following blood draw, compared to EDTA tubes. Plasma input volume and use of different DNA extraction kits impacted DNA yield. Conclusion: This study demonstrated that successful ctDNA recovery for mutation detection in NSCLC is dependent on pre-analytical steps. Development of standardised methods for the detection of KRAS mutations from ctDNA specimens is recommended to minimise the impact of pre-analytical steps on mutation detection rates. Where rapid sample processing is not possible the use of cfDNA BCT tubes would be advantageous. PMID:26918901

  7. Formulation of self-compacting concretes: optimisation of the granular skeleton ...

    Formulation of self-compacting concretes: optimisation of the granular skeleton by the Dreux-Gorisse graphical method. Fatiha Boumaza-Zeraoulia* & Mourad Behim. Materials, Geo-Materials and Environment Laboratory, Department of Civil Engineering. Badji Mokhtar University, Annaba - BP 12, 23000 Annaba - ...

  8. Parameter Optimisation for the Behaviour of Elastic Models over Time

    Mosegaard, Jesper

    2004-01-01

    Optimisation of parameters for elastic models is essential for comparison or for finding equivalent behaviour of elastic models when parameters cannot simply be transferred or converted. This is the case with a large range of commonly used elastic models. In this paper we present a general method that...

  9. Plant-wide performance optimisation – The refrigeration system case

    Green, Torben; Razavi-Far, Roozbeh; Izadi-Zamanabadi, Roozbeh

    2012-01-01

    ...applications in the process industry. The paper addresses the fact that dynamic performance of the system is important to ensure optimal changes between different operating conditions. To enable optimisation of the dynamic controller behaviour, a method for designing the required excitation signal is presented...

  10. Optimisation of selective breeding program for Nile tilapia (Oreochromis niloticus)

    Trong, T.Q.

    2013-01-01

    The aim of this thesis was to optimise the selective breeding program for Nile tilapia in the Mekong Delta region of Vietnam. Two breeding schemes, the “classic” BLUP scheme following the GIFT method (with pair mating) and a rotational mating scheme with own performance selection and

  11. Optimisation of Heterogeneous Migration Paths to High Bandwidth Home Connections

    Phillipson, F.

    2017-01-01

    Operators are building architectures and systems for delivering voice, audio, and data services at the required speed for now and in the future. For fixed access networks, this means in many countries a shift from copper based to fibre based access networks. This paper proposes a method to optimise

  12. Regionalisation of a distributed method for flood quantiles estimation: Revaluation of local calibration hypothesis to enhance the spatial structure of the optimised parameter

    Odry, Jean; Arnaud, Patrick

    2016-04-01

    The SHYREG method (Aubert et al., 2014) associates a stochastic rainfall generator and a rainfall-runoff model to produce rainfall and flood quantiles on a 1 km2 mesh covering the whole French territory. The rainfall generator is based on the description of rainy events by descriptive variables following probability distributions and is characterised by a high stability. This stochastic generator is fully regionalised, and the rainfall-runoff transformation is calibrated with a single parameter. Thanks to the stability of the approach, calibration can be performed against only flood quantiles associated with observed frequencies, which can be extracted from relatively short time series. The aggregation of SHYREG flood quantiles to the catchment scale is performed using an areal reduction factor technique that is uniform over the whole territory. Past studies demonstrated the accuracy of SHYREG flood quantile estimation for catchments where flow data are available (Arnaud et al., 2015). Nevertheless, the parameter of the rainfall-runoff model is calibrated independently for each target catchment. As a consequence, this parameter plays a corrective role and compensates for approximations and modelling errors, which makes it difficult to identify its proper spatial pattern. It is an inherent objective of the SHYREG approach to be completely regionalised in order to provide a complete and accurate flood quantile database throughout France. Consequently, it appears necessary to identify the model configuration in which the calibrated parameter can be regionalised with acceptable performance. The revaluation of some of the method's hypotheses is a necessary step before regionalisation. In particular, the inclusion or modification of the spatial variability of imposed parameters (such as production and transfer reservoir size, base flow addition and the quantile aggregation function) should lead to more realistic values of the only calibrated parameter. The objective of the work presented...

  13. Yours, Mine and Ours: Theorizing the Global Articulation of Qualitative Research Methods and Academic Disciplines

    Bryan C. Taylor

    2016-02-01

    Two current forms of globalization are inherently interesting to academic qualitative researchers. The first is the globalization of qualitative research methods themselves. The second is the globalization of academic disciplines in which those methods are institutionalized as a valuable resource for professional practices of teaching and scholarly research. This essay argues that patterns in existing discussion of these two trends create an opportunity for innovative scholarship. That opportunity involves reflexively leveraging qualitative research methods to study the simultaneous negotiation by academic communities of both qualitative methods and their professional discipline. Five theories that serve to develop this opportunity are reviewed, focusing on their related benefits and limitations, and the specific research questions they yield. The essay concludes by synthesizing distinctive commitments of this proposed research program.

  14. GLOBAL CLASSIFICATION OF DERMATITIS DISEASE WITH K-MEANS CLUSTERING IMAGE SEGMENTATION METHODS

    Prafulla N. Aerkewar & Dr. G. H. Agrawal

    2018-01-01

    The objective of this paper is to present a global technique for the classification of different dermatitis disease lesions using the k-means clustering image segmentation method. The word global is used in the sense that all dermatitis diseases presenting skin lesions on the body are classified into four categories using k-means image segmentation and the nntool of Matlab. Through the image segmentation technique and nntool, one can analyse and study the segmentation properties of skin lesions occurring in...
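A minimal, deterministic version of the k-means intensity clustering underlying such a segmentation might look like this; the pixel values are hypothetical, and real use would run on 2-D images (e.g. in Matlab, as in the paper):

```python
def kmeans_1d(values, k=3, iters=50):
    """Cluster scalar pixel intensities into k groups (plain Lloyd iterations).
    Centroids are initialised evenly over the intensity range, so the result
    is deterministic."""
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each pixel to its nearest centroid.
            clusters[min(range(k), key=lambda i: abs(v - centroids[i]))].append(v)
        # Recompute centroids; keep the old one if a cluster empties.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Hypothetical grey levels from a lesion image: dark background, skin, bright lesion.
pixels = [12, 15, 11, 14, 120, 118, 125, 122, 240, 236, 244, 239]
centres = kmeans_1d(pixels, k=3)
```

The three centroids settle on the background, skin and lesion intensity levels, which is the grouping a segmentation step would then map back onto the image.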

  15. Global Expanded Nutrient Supply (GENuS) Model: A New Method for Estimating the Global Dietary Supply of Nutrients.

    Matthew R Smith

    Insufficient data exist for accurate estimation of global nutrient supplies. Commonly used global datasets contain key weaknesses: (1) data with global coverage, such as the FAO food balance sheets, lack specific information about many individual foods and carry no information on micronutrient supplies or on heterogeneity among subnational populations, while (2) household surveys provide a closer approximation of consumption, but are often not nationally representative, do not commonly capture many foods consumed outside of the home, and only provide adequate information for a few select populations. Here, we attempt to improve upon these datasets by constructing a new model, the Global Expanded Nutrient Supply (GENuS) model, to estimate nutrient availabilities for 23 individual nutrients across 225 food categories for thirty-four age-sex groups in nearly all countries. Furthermore, the model provides historical trends in dietary nutritional supplies at the national level using data from 1961-2011. We determine supplies of edible food by expanding the food balance sheet data using FAO production and trade data to increase food supply estimates from 98 to 221 food groups, and then estimate the proportion of major cereals being processed to flours to increase the total to 225. Next, we estimate intake among twenty-six demographic groups (ages 20+, both sexes) in each country by using data taken from the Global Dietary Database, which uses nationally representative surveys to relate national averages of food consumption to individual age- and sex-groups; for children and adolescents, where GDD data do not yet exist, average calorie-adjusted amounts are assumed. Finally, we match food supplies with nutrient densities from regional food composition tables to estimate nutrient supplies, running Monte Carlo simulations to find the range of potential nutrient supplies provided by the diet. To validate our new method, we compare the GENuS estimates of nutrient supplies against...
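The final Monte Carlo step, propagating food-composition uncertainty into a range of nutrient supplies, can be sketched as follows; the foods, supply amounts and iron-density ranges are invented for illustration and are not GENuS data:

```python
import random
import statistics

def nutrient_supply_mc(food_supply_g, density_ranges, n=10000, seed=42):
    """Monte Carlo range of one nutrient's supply (mg/person/day).

    food_supply_g: per-capita edible supply of each food (g/day).
    density_ranges: (low, high) nutrient density (mg per g of food) from
    composition tables, sampled uniformly here for illustration.
    Returns the 2.5th percentile, median, and 97.5th percentile."""
    rng = random.Random(seed)
    totals = sorted(
        sum(s * rng.uniform(lo, hi)
            for s, (lo, hi) in zip(food_supply_g, density_ranges))
        for _ in range(n))
    return totals[int(0.025 * n)], statistics.median(totals), totals[int(0.975 * n)]

# Hypothetical iron example: wheat flour, rice and lentils (all numbers illustrative).
supply_g = [250.0, 150.0, 30.0]
iron_mg_per_g = [(0.03, 0.04), (0.004, 0.006), (0.06, 0.09)]
low, mid, high = nutrient_supply_mc(supply_g, iron_mg_per_g)
```

The interval (low, high) is the kind of "range of potential nutrient supplies" the abstract refers to, here for a single nutrient and three foods.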

  16. Optimised Pre-Analytical Methods Improve KRAS Mutation Detection in Circulating Tumour DNA (ctDNA) from Patients with Non-Small Cell Lung Cancer (NSCLC).

    James L Sherwood

    Non-invasive mutation testing using circulating tumour DNA (ctDNA) is an attractive premise. This could enable patients without available tumour sample to access more treatment options. Peripheral blood and matched tumours were analysed from 45 NSCLC patients. We investigated the impact of pre-analytical variables on DNA yield and/or KRAS mutation detection: sample collection tube type, incubation time, centrifugation steps, plasma input volume and DNA extraction kits. A 2 hr incubation time and double plasma centrifugation (2000 x g) reduced overall DNA yield, resulting in lowered levels of contaminating genomic DNA (gDNA). Reduced "contamination" and increased KRAS mutation detection were observed using cell-free DNA Blood Collection Tubes (cfDNA BCT, Streck) after 72 hrs following blood draw, compared to EDTA tubes. Plasma input volume and use of different DNA extraction kits impacted DNA yield. This study demonstrated that successful ctDNA recovery for mutation detection in NSCLC is dependent on pre-analytical steps. Development of standardised methods for the detection of KRAS mutations from ctDNA specimens is recommended to minimise the impact of pre-analytical steps on mutation detection rates. Where rapid sample processing is not possible, the use of cfDNA BCT tubes would be advantageous.

  17. Optimising methods for community-based sea cucumber ranching: Experimental releases of cultured juvenile Holothuria scabra into seagrass meadows in Papua New Guinea

    Cathy Hair

    2016-05-01

    Hatchery-cultured juveniles of the commercial holothurian sandfish (Holothuria scabra) were used for release experiments in a variety of marine habitats under traditional marine tenure near Kavieng, Papua New Guinea (PNG). Juveniles of approximately 4 g mean weight were released inside 100 m2 sea pens installed within seagrass meadows near partner communities, under the care of local ‘wardens’. Within each sea pen, varying levels of protection (free release, 1-day cage and 7-day cage) were provided at release in order to determine if short-term predator exclusion improved survival. Ossicles of juvenile sandfish were tagged with different fluorochromes for each treatment, and sandfish survival and growth were recorded after release. A range of biophysical parameters was recorded at the four sites. Contrary to expectations, short-term cage protection did not lead to higher survival at three sites, while a fourth site, despite meeting all considered criteria for suitable release habitat, experienced total loss of juveniles. There were significant differences in mean weight of juveniles between sites after four months. Multivariate analysis of biophysical factors clearly separated the sea pen habitats, strongly differentiating the best-performing site from the others. However, further research is needed to elucidate which biophysical or human factors are most useful in predicting the quality of potential sea ranch sites. Methods developed or refined through these trials could be used to establish pilot test plots at potential ranching sites to assess site suitability and provide guidance on the level of animal husbandry required before commencing community sea ranching operations in New Ireland Province, PNG. Keywords: Holothuria scabra, Mariculture, Sea pens, Predator exclusion, Principal components analysis, Biophysical variables

  18. Blazar variability detected by the Fermi-LAT space telescope. Study of 3C 454.3 and development of an optimised light-curve generation method

    Escande, L.

    2012-01-01

    The Fermi Gamma-ray Space Telescope was launched on 2008 June 11, carrying the Large Area Telescope (LAT), sensitive to gamma-rays in the 20 MeV - 300 GeV energy range. The data collected since then have multiplied by a factor of 10 the number of Active Galactic Nuclei (AGN) detected in the GeV range. Gamma-rays observed in AGN come from energetic processes bringing into play very high energy charged particles. These particles are confined in a magnetized plasma jet arising in a region close to the supermassive black hole at the center of the host galaxy. This jet moves away with velocities as high as 0.9999 c, forming in many cases radio lobes on kilo-parsec or even mega-parsec scales. Among the AGN, those whose jet inclination angle to the line of sight is small are called blazars. The combination of this small inclination angle with relativistic ejection speeds leads to relativistic effects: apparent superluminal motions, amplification of the luminosity and modification of the time scales. Blazars are characterized by extreme variability at all wavelengths, on time scales from a few minutes to several months. A temporal and spectral study of the most luminous blazar detected by the LAT, 3C 454.3, was carried out so as to constrain emission models. A new method for generating adaptive-binning light curves is also proposed in this thesis. It allows the maximum of information to be extracted from the LAT data whatever the flux state of the source. (author)

  19. Optimisation of a bioindicator monitoring network using geostatistical methods and a geographic information system; Optimierung eines Bioindikator-Messnetzes mit Hilfe geostatischer Methoden und eines Geographischen Informationssystems

    Forster, E.M.

    1994-04-01

    Since 1981 the Bavarian State Authority for Environment Protection has used mosses (Hypnum cupressiforme) as bioindicators for determining atmospheric background pollution. Covering all of Bavaria, the monitoring network comprises more than 370 sampling sites, spaced 8 km apart in urban regions and otherwise 16 km. The extremely labour-intensive and costly operation of the network led to the idea of thinning it in such a way that the loss of information on regional background pollution would be kept as small as possible and remain within acceptable bounds. The geostatistical methods used for this purpose permit an areal estimation of the pollution load giving due consideration to local values, as well as a calculation of the forecast error. The present report is divided into two sections: the first deals with the determination of spatial dependences, while the second concerns the selection of sampling sites. (orig./KW)
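The "determination of spatial dependences" step in such a geostatistical analysis typically starts from an empirical semivariogram; a minimal sketch, with hypothetical sampling sites and pollutant loads (not the Bavarian moss data), might be:

```python
import math
from collections import defaultdict

def empirical_semivariogram(points, values, lag_width=1.0):
    """Empirical semivariogram: for each lag bin, gamma(h) is half the mean
    squared difference of values over point pairs separated by about h."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            h = math.dist(points[i], points[j])
            b = int(h / lag_width)          # lag bin index
            sums[b] += 0.5 * (values[i] - values[j]) ** 2
            counts[b] += 1
    return {b * lag_width: sums[b] / counts[b] for b in sorted(sums)}

# Hypothetical transect of sampling sites (km) and pollutant loads:
sites = [(0, 0), (1, 0), (2, 0), (4, 0), (8, 0)]
loads = [10.0, 10.5, 11.0, 12.0, 15.0]
gamma = empirical_semivariogram(sites, loads, lag_width=2.0)
```

Semivariance rising with lag distance, as here, indicates spatial correlation at short range; a variogram model fitted to these points would then drive kriging-based estimation and the forecast-error calculation the abstract mentions.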

  20. Optimisation of decontamination method and influence of culture media on the recovery of Mycobacterium avium subspecies paratuberculosis from spiked water sediments.

    Aboagye, G; Rowe, M T

    2018-07-01

    The recovery of Mycobacterium avium subspecies paratuberculosis (Map) from the environment can be a laborious process, owing to Map being fastidious, its low numbers, and the high numbers of other microbial populations in such settings. Protocols, i.e. filtration, decontamination and modified elution, were devised to recover Map from spiked water sediments. Three culture media, Herrold's Egg Yolk Media (HEYM), Middlebrook 7H10 (M-7H10) and Bactec 12B, were then employed to grow the organism following its elution. In the sterile sediment samples, the recovery of Map was significant between the times of exposure for each of HEYM and M-7H10, and insignificant between the two media (P < 0.05). However, in the non-sterile sediment samples, HEYM grew other background microflora including moulds at all times of exposure, whilst 4 h exposure followed by M-7H10 culture yielded Map colonies without any background microflora. Using sterile samples only for the Bactec 12B, the recovery of Map decreased as time of exposure increased. Based on these findings, M-7H10 should be considered for the recovery of Map from the natural environment including water sediments, where the recovery of diverse microbial species remains a challenge. Map is a robust pathogen that persists in the environment. In water treatment operations, Map associates with floccules and other particulate matter including sediments. It is also a fastidious organism, and its detection and recovery from the water environment is a laborious process that can be misleading amid the abundance of other mycobacterial species, owing to their close resemblance in phylogenetic traits. In the absence of a reliable recovery method, Map continues to pose public health risks through biofilm in household water tanks, hence the need for the development of a reliable recovery protocol to monitor the presence of Map in water systems in order to curtail its public health risks. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Energy Savings from Optimised In-Field Route Planning for Agricultural Machinery

    Efthymios Rodias

    2017-10-01

    Various types of sensor technologies, such as machine vision and the global positioning system (GPS), have been implemented in the navigation of agricultural vehicles. Automated navigation systems have demonstrated the potential for the execution of optimised route plans for field area coverage. This paper presents an assessment of the reduction in energy requirements derived from the implementation of optimised field area coverage planning. The assessment regards the analysis of the energy requirements and the comparison between the non-optimised and optimised plans for field area coverage in the whole sequence of operations required in two different cropping systems: Miscanthus and switchgrass production. An algorithmic approach for the simulation of the executed field operations by following both non-optimised and optimised field-work patterns was developed. As a result, the corresponding time requirements were estimated as the basis of the subsequent energy cost analysis. Based on the results, the optimised routes reduce the fuel energy consumption by up to 8%, the embodied energy consumption by up to 7%, and the total energy consumption by 3% to 8%.

  2. Layout Optimisation of Wave Energy Converter Arrays

    Ruiz, Pau Mercadé; Nava, Vincenzo; Topper, Mathew B. R.

    2017-01-01

    This paper proposes an optimisation strategy for the layout design of wave energy converter (WEC) arrays. Optimal layouts are sought so as to maximise the absorbed power given a minimum q-factor, the minimum distance between WECs, and an area of deployment. To guarantee an efficient optimisation, a four-parameter layout description is proposed. Three different optimisation algorithms are further compared in terms of performance and computational cost: the covariance matrix adaptation evolution strategy (CMA), a genetic algorithm (GA) and the glowworm swarm optimisation (GSO) algorithm...
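None of the three algorithms is reproduced here, but the evolutionary loop they share can be sketched with a toy fitness: minimum WEC spacing stands in for the paper's absorbed-power model, and the deployment area, population size and mutation scale are all illustrative:

```python
import random

def evolve_layout(n_wec=4, side=100.0, pop=30, gens=60, seed=3):
    """Toy evolutionary search over WEC layouts in a square deployment area.
    Fitness is the minimum pairwise spacing (a crude stand-in for power)."""
    rng = random.Random(seed)

    def fitness(layout):
        return min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                   for i, (ax, ay) in enumerate(layout)
                   for bx, by in layout[i + 1:])

    def clamp(v):
        return min(side, max(0.0, v))

    population = [[(rng.uniform(0, side), rng.uniform(0, side))
                   for _ in range(n_wec)] for _ in range(pop)]
    initial_best = max(fitness(p) for p in population)
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]          # elitist truncation selection
        children = [[(clamp(x + rng.gauss(0, 3.0)), clamp(y + rng.gauss(0, 3.0)))
                     for x, y in p] for p in parents]   # Gaussian mutation
        population = parents + children
    best = max(population, key=fitness)
    return best, fitness(best), initial_best

layout, spacing, initial_spacing = evolve_layout()
```

Because the parents survive each generation (elitism), the best fitness never decreases; the paper's CMA, GA and GSO variants differ mainly in how the next generation is proposed.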

  3. Optimisation of the LHCb detector

    Hierck, R H

    2003-01-01

    This thesis describes a comparison of the LHCb classic and LHCb light concepts from a tracking perspective. The comparison includes the detector occupancies, the various pattern recognition algorithms and the reconstruction performance. The final optimised LHCb setup is used to study the physics performance of LHCb for the Bs->DsK and Bs->DsPi decay channels. This includes both the event selection and a study of the sensitivity to the Bs oscillation frequency Δm_s, the Bs lifetime difference ΔΓ_s, and the CP parameter γ - 2δγ.

  4. Optimisation combinatoire Theorie et algorithmes

    Korte, Bernhard; Fonlupt, Jean

    2010-01-01

    This book is the French translation of the fourth and latest edition of Combinatorial Optimization: Theory and Algorithms, written by two eminent specialists of the field, Bernhard Korte and Jens Vygen of the University of Bonn in Germany. It emphasises the theoretical aspects of combinatorial optimisation as well as efficient and exact algorithms for problem solving, which distinguishes it from the simpler heuristic approaches often described elsewhere. The book contains numerous concise and elegant proofs of difficult results. Intended for students...
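As a taste of the exact, efficient algorithms the book emphasises, here is a minimal Kruskal minimum-spanning-tree implementation with union-find; the example graph is illustrative, not taken from the book:

```python
def kruskal(n, edges):
    """Minimum spanning tree of an n-vertex graph by Kruskal's algorithm.
    edges: list of (weight, u, v). Returns (tree edges, total weight)."""
    parent = list(range(n))          # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree, total = [], 0
    for w, u, v in sorted(edges):    # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                 # keep the edge only if it joins two components
            parent[ru] = rv
            tree.append((u, v))
            total += w
    return tree, total

edges = [(4, 0, 1), (1, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
mst, weight = kruskal(4, edges)
```

The greedy choice is provably optimal here, which is exactly the kind of exact-algorithm result, with proof, that the book develops.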

  5. Coil optimisation for transcranial magnetic stimulation in realistic head geometry.

    Koponen, Lari M; Nieminen, Jaakko O; Mutanen, Tuomas P; Stenroos, Matti; Ilmoniemi, Risto J

    Transcranial magnetic stimulation (TMS) allows focal, non-invasive stimulation of the cortex. A TMS pulse is inherently weakly coupled to the cortex; thus, magnetic stimulation requires both high current and high voltage to reach sufficient intensity. These requirements limit, for example, the maximum repetition rate and the maximum number of consecutive pulses with the same coil due to the rise of its temperature. To develop methods to optimise, design, and manufacture energy-efficient TMS coils in realistic head geometry with an arbitrary overall coil shape. We derive a semi-analytical integration scheme for computing the magnetic field energy of an arbitrary surface current distribution, compute the electric field induced by this distribution with a boundary element method, and optimise a TMS coil for focal stimulation. Additionally, we introduce a method for manufacturing such a coil by using Litz wire and a coil former machined from polyvinyl chloride. We designed, manufactured, and validated an optimised TMS coil and applied it to brain stimulation. Our simulations indicate that this coil requires less than half the power of a commercial figure-of-eight coil, with a 41% reduction due to the optimised winding geometry and a partial contribution due to our thinner coil former and reduced conductor height. With the optimised coil, the resting motor threshold of abductor pollicis brevis was reached with the capacitor voltage below 600 V and peak current below 3000 A. The described method allows designing practical TMS coils that have considerably higher efficiency than conventional figure-of-eight coils. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Algorithme intelligent d'optimisation d'un design structurel de grande envergure

    Dominique, Stephane

    ...genetic algorithm that prevents new individuals from being born too close to previously evaluated solutions. The restricted area becomes smaller or larger during the optimisation to allow global or local search when necessary. Also, a new search operator named the Substitution Operator is incorporated in GATE. This operator allows an ANN surrogate model to guide the algorithm toward the most promising areas of the design space. The suggested CBR approach and GATE were tested on several simple test problems, as well as on the industrial problem of designing a gas turbine engine rotor disc. These results are compared to results obtained for the same problems by many other popular optimisation algorithms, such as (depending on the problem) gradient algorithms, a binary genetic algorithm, a real-number genetic algorithm, a genetic algorithm using multiple-parent crossovers, a differential evolution genetic algorithm, the Hooke & Jeeves generalized pattern search method and POINTER from the software I-SIGHT 3.5. Results show that GATE is quite competitive, giving the best results for 5 of the 6 constrained optimisation problems. GATE also provided the best results of all on problems produced by a Maximum Set Gaussian landscape generator. Finally, GATE provided a disc 4.3% lighter than the best other tested algorithm (POINTER) for the gas turbine engine rotor disc problem. One drawback of GATE is a lesser efficiency on highly multimodal unconstrained problems, for which it gave quite poor results with respect to its implementation cost. To conclude, according to the preliminary results obtained during this thesis, the suggested CBR process, combined with GATE, seems to be a very good candidate to automate and accelerate the structural design of mechanical devices, potentially reducing significantly the cost of industrial preliminary design processes.
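The thesis' GATE implementation is not public, but its distinctive "restricted area" idea, refusing to evaluate candidates too close to previously evaluated solutions, can be sketched with a simple 1-D random search; the bounds, radius and objective are all illustrative stand-ins:

```python
import random

def restricted_search(objective, bounds, radius=0.2, budget=20, seed=7):
    """Random search with a GATE-style restriction: never evaluate a candidate
    within `radius` of any previously evaluated point (illustrative sketch,
    not the thesis implementation, where the radius also adapts over time)."""
    rng = random.Random(seed)
    lo, hi = bounds
    evaluated = []          # list of (point, value)
    best = None
    attempts = 0
    while len(evaluated) < budget and attempts < 10000:
        attempts += 1
        x = rng.uniform(lo, hi)
        if any(abs(x - p) < radius for p, _ in evaluated):
            continue        # too close to an already-explored solution: reject
        v = objective(x)
        evaluated.append((x, v))
        if best is None or v < best[1]:
            best = (x, v)
    return best, evaluated

# Toy objective with its minimum at x = 7.3.
best, evaluated = restricted_search(lambda x: (x - 7.3) ** 2, bounds=(0.0, 10.0))
```

The restriction forces the evaluation budget to spread over the design space instead of re-sampling the same basin, which is the diversity-preserving effect GATE relies on.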

  7. Optimisation of Inulinase Production by Kluyveromyces bulgaricus

    Darija Vranešić

    2002-01-01

    The present work examines the effects of pH and temperature of fermentation on the production of the microbial enzyme inulinase by Kluyveromyces marxianus var. bulgaricus. Inulinase hydrolyzes inulin, a polysaccharide that can be isolated from plants such as Jerusalem artichoke, chicory or dahlia, and transformed into pure fructose or fructooligosaccharides. Fructooligosaccharides have great potential in the food industry because they can be used as calorie-reduced compounds and noncariogenic sweeteners as well as soluble fibre and prebiotic compounds. Fructose formation from inulin is a single-step enzymatic reaction with fructose yields of up to 95 %. In contrast, conventional fructose production from starch needs at least three enzymatic steps and yields only 45 % fructose. The process of inulinase production was optimised using the experimental design method. The pH value of the cultivation medium proved to be the most significant variable and should be maintained at the optimum value of 3.6. The effect of temperature was slightly weaker, with optimal values between 30 and 33 °C. At a low pH of the cultivation medium the microorganism was not able to produce enough enzyme, and enzyme activities were low. A similar effect was caused by high temperature. The highest enzyme activities were achieved at optimal fermentation conditions: 100.16–124.36 IU/mL (with sucrose as substrate for determination of enzyme activity) or 8.6–11.6 IU/mL (with inulin as substrate), respectively. The method of factorial design and response surface analysis makes it possible to study several factors simultaneously, to quantify the individual effect of each factor, and to investigate their possible interactions. As a comparison to this method, optimisation of a physiological enzyme activity model depending on pH and temperature was also studied.
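
The factorial-design/response-surface step described above can be illustrated as follows: fit a full quadratic model of activity in pH and temperature by least squares, then solve for the stationary point. The data below are synthetic values constructed around the reported optimum (pH 3.6, about 31.5 °C), not the paper's measurements.

```python
import numpy as np

# Synthetic activity readings on a 3x3 factorial grid (illustrative values).
pH = np.array([3.0, 3.0, 3.6, 3.6, 4.2, 4.2, 3.6, 3.0, 4.2])
T  = np.array([28.0, 35.0, 28.0, 35.0, 28.0, 35.0, 31.5, 31.5, 31.5])
act = 120 - 40 * (pH - 3.6) ** 2 - 0.8 * (T - 31.5) ** 2  # noiseless for clarity

# Design matrix for the full quadratic response surface with interaction.
X = np.column_stack([np.ones_like(pH), pH, T, pH**2, T**2, pH * T])
beta, *_ = np.linalg.lstsq(X, act, rcond=None)

# Stationary point of the fitted surface: solve grad = 0.
b0, b1, b2, b3, b4, b5 = beta
A = np.array([[2 * b3, b5], [b5, 2 * b4]])
opt = np.linalg.solve(A, -np.array([b1, b2]))  # -> approximately [3.6, 31.5]
```

With real (noisy) measurements the same fit quantifies each factor's individual effect and the pH-temperature interaction through the coefficients of `beta`.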

  8. SINGLE FIXED CRANE OPTIMISATION WITHIN A DISTRIBUTION CENTRE

    J. Matthews

    2012-01-01


    This paper considers the optimisation of the movement of a fixed crane operating in a single aisle of a distribution centre. The crane must move pallets in inventory between docking bays, storage locations, and picking lines. Both a static and a dynamic approach to the problem are presented. The optimisation is performed by means of tabu search, ant colony metaheuristics, and hybrids of these two methods. All these solution approaches were tested on real-life data obtained from an operational distribution centre. Results indicate that the hybrid methods outperform the other approaches.


  9. A high-throughput and sensitive method to measure Global DNA Methylation: Application in Lung Cancer

    Mamaev Sergey

    2008-08-01

    Background: Genome-wide changes in DNA methylation are an epigenetic phenomenon that can lead to the development of disease. The study of global DNA methylation utilizes technology that requires both expensive equipment and highly specialized skill sets. Methods: We have designed and developed an assay, CpGlobal, which is easy to use and does not require PCR, radioactivity or expensive equipment. CpGlobal utilizes methyl-sensitive restriction enzymes, HRP-Neutravidin to detect the biotinylated nucleotides incorporated in an end-fill reaction, and a luminometer to measure the chemiluminescence. The assay shows high accuracy and reproducibility in measuring global DNA methylation. Furthermore, CpGlobal correlates significantly with High Performance Capillary Electrophoresis (HPCE), a gold-standard technology. We have applied the technology to understand the role of global DNA methylation in the natural history of lung cancer. Worldwide, it is the leading cause of death attributed to any cancer. The survival rate is 15% over 5 years due to the lack of any clinical symptoms until the disease has progressed to a stage where cure is limited. Results: Through the use of cell lines and paired normal/tumor samples from patients with non-small cell lung cancer (NSCLC), we show that global DNA hypomethylation is highly associated with the progression of the tumor. In addition, the results provide the first indication that the normal part of the lung from a cancer patient has already experienced a loss of methylation compared to a normal individual. Conclusion: By detecting these changes in global DNA methylation, CpGlobal may have a role as a barometer for the onset and development of lung cancer.

  10. Surface Polymerisation Methods for Optimised Adhesion

    Drews, Joanna Maria

    The work focused on ways to reinforce composite materials for high-technology applications, e.g. wind turbine blades. The research therefore centred on plasma polymerisation of maleic anhydride (MAH) and 1,2-methylenedioxybenzene, respectively, into thin films on model carbon substrates...

  11. Optimising resource management in neurorehabilitation.

    Wood, Richard M; Griffiths, Jeff D; Williams, Janet E; Brouwers, Jakko

    2014-01-01

    To date, little research has been published regarding the effective and efficient management of resources (beds and staff) in neurorehabilitation, despite it being an expensive service in limited supply. We demonstrate how mathematical modelling can be used to optimise service delivery, by way of a case study at a major 21-bed neurorehabilitation unit in the UK. An automated computer program for assigning weekly treatment sessions is developed. Queue modelling is used to construct a mathematical model of the hospital in terms of referral submissions to a waiting list, admission and treatment, and ultimately discharge. This is used to analyse the impact of hypothetical strategic decisions on a variety of performance measures and costs. The project culminates in a hybridised model of these two approaches, since a relationship is found between the number of therapy hours received each week (a scheduling output) and length of stay (a queuing model input). The introduction of the treatment scheduling program has substantially improved timetable quality (meaning a better and fairer service to patients) and has reduced employee time expended in its creation by approximately six hours each week (freeing up time for clinical work). The queuing model has been used to assess the effect of potential strategies, such as increasing the number of beds or employing more therapists. The use of mathematical modelling has not only optimised resources in the short term, but has also allowed the optimality of longer-term strategic decisions to be assessed.
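
A minimal sketch of the queue-modelling side: treating the unit as an M/M/c/c loss system, the Erlang B formula gives the probability that a referral arrives to find all beds occupied. The arrival rate and length of stay below are illustrative assumptions, not figures from the case study.

```python
def erlang_b(beds, offered_load):
    """Erlang loss formula for an M/M/c/c system, computed with the
    standard numerically stable recursion: probability that an arriving
    patient finds every bed occupied."""
    b = 1.0
    for k in range(1, beds + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Assumed figures: ~2 referrals per week, mean stay ~9 weeks,
# giving an offered load of 2 * 9 = 18 bed-equivalents.
blocking_21 = erlang_b(21, 18.0)   # current 21-bed unit
blocking_25 = erlang_b(25, 18.0)   # hypothetical expansion to 25 beds
```

Comparing `blocking_21` with `blocking_25` is the kind of what-if question the paper's queuing model answers: adding beds lowers the probability of turning a referral away, at a known cost.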

  12. A new method to estimate global mass transport and its implication for sea level rise

    Yi, S.; Heki, K.

    2017-12-01

    Estimates of changes in global land mass from GRACE observations can be obtained by two methods: a mascon method and a forward modeling method. However, results from these two methods show inconsistent secular trends. The sea level budget can be used to check the consistency among observations of sea level rise from altimetry, steric change from the Argo project, and mass change from GRACE. Comparing the mascon products from JPL, GSFC and CSR, we find that none of these three products achieves a reconciled sea level budget, whereas the problem can be solved by a new forward modeling method. We further investigate the origin of this difference and speculate that it is caused by signal leakage from the ocean mass. It is well recognized that land signals leak into the oceans, but leakage also happens the other way around, and we stress the importance of correcting for leakage from the ocean when estimating global land masses. Based on a reconciled sea level budget, we confirm that global sea level rise accelerated significantly over 2005-2015 as a result of the ongoing global temperature increase.

  13. VEHICLE DRIVING CYCLE OPTIMISATION ON THE HIGHWAY

    Zinoviy STOTSKO

    2016-06-01

    This paper is devoted to the problem of reducing vehicle energy consumption. The authors consider optimisation of the highway driving cycle as a way to use the kinetic energy of a car more effectively under various road conditions. A model of vehicle driving control on the highway was designed, consisting of elementary cycles such as acceleration, free rolling, and deceleration under external resistance forces. Braking, as an energy-dissipation regime, was not included. The influence of various longitudinal road profiles was taken into consideration and included in the model. Ways to use the results of monitoring road and traffic conditions are presented. The method of non-linear programming is used to design the optimal vehicle control function and phase trajectory. The results are presented as improved typical driving cycles that treat energy saving as a criterion of choice for a specified schedule.

  14. Optimisation algorithms for ECG data compression.

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
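
A toy version of the optimal sample-selection idea can be written as a dynamic program: choose m samples (keeping both endpoints) so that linear interpolation between the retained samples minimises the total squared reconstruction error. This is a simplified O(n²m)-state sketch, not the paper's network-model formulation.

```python
import numpy as np

def seg_cost(x, i, j):
    """Squared error of reconstructing x[i..j] by linear interpolation
    between the two retained samples x[i] and x[j]."""
    t = np.arange(i, j + 1)
    interp = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
    return float(np.sum((x[i:j + 1] - interp) ** 2))

def optimal_samples(x, m):
    """Select m sample indices (always keeping both endpoints) that
    minimise the total interpolation error, by dynamic programming."""
    n = len(x)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n)]
    prev = [[-1] * (m + 1) for _ in range(n)]
    cost[0][1] = 0.0
    for j in range(1, n):
        for k in range(2, m + 1):
            for i in range(j):
                if cost[i][k - 1] < INF:
                    c = cost[i][k - 1] + seg_cost(x, i, j)
                    if c < cost[j][k]:
                        cost[j][k], prev[j][k] = c, i
    # Backtrack the retained indices from the final state.
    idx, j, k = [n - 1], n - 1, m
    while k > 1:
        j, k = prev[j][k], k - 1
        idx.append(j)
    return idx[::-1], cost[n - 1][m]

# Piecewise-linear test signal with knots at indices 0, 4, 8, 10:
signal = np.array([0.0, 1, 2, 3, 4, 3, 2, 1, 0, 1, 2])
idx, err = optimal_samples(signal, 4)
```

Because the recursion examines every admissible predecessor, the returned subset is exactly optimal, which is the guarantee heuristic time-domain compressors lack.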

  15. Value Chain Optimisation of Biogas Production

    Jensen, Ida Græsted

    economically feasible. In this PhD thesis, the focus is to create models for investigating the profitability of biogas projects by: 1) including the whole value chain in a mathematical model and considering mass and energy changes in the upstream part of the chain; and 2) including profit allocation in a value......, the costs of the biogas plant have been included in the model using economies of scale. For the second point, a mathematical model considering profit allocation was developed, applying three allocation mechanisms. This mathematical model can be applied as a second step after the value chain optimisation. After...... in the energy systems model to find the optimal end use of each type of gas and fuel. The main contributions of this thesis are the methods developed at plant level. Both the mathematical model for the value chain and the profit allocation model can be generalised and used in other industries where mass...

  16. The bases for optimisation of scheduled repairs and tests of safety systems to improve the NPP productive efficiency

    Bilej, D.V.; Vasil'chenko, S.V.; Vlasenko, N.I.; Vasil'chenko, V.N.; Skalozubov, V.I.

    2004-01-01

    Within the framework of risk-informed approaches, this paper proposes theoretical bases for methods of optimising the scheduled repairs and tests of safety systems at nuclear power plants. The optimisation criterion is minimisation of the objective risk function, which depends on the periodicity of scheduled repairs/tests and on the allowed time to bring a system channel to a state of non-operability. The main direction of optimisation is to reduce repair time in order to enhance productive efficiency.

  17. Optimisation and validation of a HS-SPME-GC-IT/MS method for analysis of carbonyl volatile compounds as biomarkers in human urine: Application in a pilot study to discriminate individuals with smoking habits.

    Calejo, Isabel; Moreira, Nathalie; Araújo, Ana Margarida; Carvalho, Márcia; Bastos, Maria de Lourdes; de Pinho, Paula Guedes

    2016-02-01

    A new and simple analytical approach, consisting of an automated headspace solid-phase microextraction (HS-SPME) sampler coupled to gas chromatography-ion trap/mass spectrometry detection (GC-IT/MS), with a prior derivatization step with O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride (PFBHA), was developed to detect volatile carbonyl metabolites with low molecular weights in human urine. A central composite design (CCD) was used to optimise the PFBHA concentration and the extraction conditions that affect the efficiency of the SPME procedure. With a sample volume of 1 mL, optimal conditions were achieved by adding 300 mg/L of PFBHA, allowing the sample to equilibrate for 6 min at 62 °C, and then extracting for 51 min at the same temperature using a divinylbenzene/polydimethylsiloxane (DVB/PDMS) fibre. The method allowed the simultaneous identification and quantification of 44 carbonyl compounds comprising aldehydes, dialdehydes, heterocyclic aldehydes and ketones. The method was validated with regard to linearity, inter- and intra-day precision, and accuracy. The detection limits ranged from 0.009 to 0.942 ng/mL, except for 4-hydroxy-2-nonenal (15 ng/mL), and the quantification limits varied from 0.029 to 1.66 ng/mL, except for butanal (2.78 ng/mL), 2-butanone (2.67 ng/mL), 4-heptanone (3.14 ng/mL) and 4-hydroxy-2-nonenal (50.0 ng/mL). The method accuracy was satisfactory, with recoveries ranging from 90 to 107%. The applicability of the methodology was demonstrated in a pilot targeted analysis of urine samples obtained from 18 healthy smokers and 18 healthy non-smokers (control group). Chemometric supervised analysis was performed using the volatile patterns acquired for these samples and clearly showed the potential of the volatile carbonyl profiles to discriminate urine from smoker and non-smoker subjects. 5-Methyl-2-furfural (p<0.0001), 2-methylpropanal, nonanal and 2-methylbutanal (p<0.05) were identified as potentially useful

  18. Study on the evolutionary optimisation of the topology of network control systems

    Zhou, Zude; Chen, Benyuan; Wang, Hong; Fan, Zhun

    2010-08-01

    Computer networks have become very popular in enterprise applications. However, optimisation of network designs that allows networks to be used more efficiently in industrial environments and enterprise applications remains an interesting research topic. This article mainly discusses the topology optimisation theory and methods of network control systems based on switched Ethernet in an industrial context. Factors that affect the real-time performance of the industrial control network are presented in detail, and optimisation criteria with their internal relations are analysed. After the definition of performance parameters, normalised indices for the evaluation of the topology optimisation are proposed. The topology optimisation problem is formulated as a multi-objective optimisation problem and an evolutionary algorithm is applied to solve it. Special communication characteristics of the industrial control network are considered in the optimisation process. With respect to the evolutionary algorithm design, an improved arena algorithm is proposed for constructing the non-dominated set of the population. In addition, for the evaluation of individuals, the integrated use of the dominance relation method and the objective function combination method is described, which reduces the computational cost of the algorithm. Simulation tests show that the performance of the proposed algorithm is preferable and superior compared to other algorithms. The final solution greatly improves the following indices: traffic localisation, traffic balance, and utilisation rate balance of switches. In addition, a new performance index with its estimation process is proposed.

  19. Optimising mobile phase composition, its flow-rate and column temperature in HPLC using taboo search.

    Guillaume, Y C; Peyrin, E

    2000-03-06

    A chemometric methodology is proposed to study the separation of seven p-hydroxybenzoic esters in reversed-phase liquid chromatography (RPLC). Fifteen experiments were found to be necessary to find a mathematical model which linked a novel chromatographic response function (CRF) with the column temperature, the water fraction in the mobile phase and its flow rate. The CRF optimum was determined using a new algorithm based on Glover's taboo search (TS). A flow rate of 0.9 ml min⁻¹, a water fraction of 0.64 in the ACN-water mixture and a column temperature of 10 °C gave the most efficient separation conditions. The usefulness of TS was compared with the pure random search (PRS) and simplex search (SS). As demonstrated by calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimisation, this procedure is generally applicable, easy to implement, derivative free, conceptually simple and could be used in the future for much more complex optimisation problems.
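
The taboo-search procedure can be sketched on a discretised factor space: move to the best neighbouring condition that is not on a short-term taboo list, with an aspiration rule that overrides the list when a move would beat the best solution found. The response function below is a stand-in surrogate with its optimum placed at the reported conditions; it is not the paper's CRF model.

```python
# Discretised search space (illustrative ranges).
flows = [round(0.5 + 0.1 * i, 1) for i in range(11)]      # 0.5-1.5 ml/min
waters = [round(0.50 + 0.02 * i, 2) for i in range(16)]   # 0.50-0.80
temps = list(range(10, 41, 5))                            # 10-40 degrees C

def crf(p):
    """Stand-in response function (to maximise) peaking at (0.9, 0.64, 10)."""
    f, w, t = p
    return -((f - 0.9) ** 2 + 25 * (w - 0.64) ** 2 + 0.001 * (t - 10) ** 2)

def neighbours(p):
    """Conditions differing by one grid step in one factor."""
    out = []
    for axis, grid in ((0, flows), (1, waters), (2, temps)):
        i = grid.index(p[axis])
        for j in (i - 1, i + 1):
            if 0 <= j < len(grid):
                q = list(p); q[axis] = grid[j]; out.append(tuple(q))
    return out

def taboo_search(start, iters=200, tenure=8):
    current = best = start
    taboo = []
    for _ in range(iters):
        # Admissible moves: not taboo, unless they beat the best (aspiration).
        cands = [q for q in neighbours(current) if q not in taboo or crf(q) > crf(best)]
        if not cands:
            break
        current = max(cands, key=crf)   # best admissible move, even if worse
        taboo.append(current)
        if len(taboo) > tenure:
            taboo.pop(0)                # short-term memory expires
        if crf(current) > crf(best):
            best = current
    return best

best = taboo_search((1.5, 0.80, 40))
```

Accepting the best admissible move even when it worsens the response is what lets the search climb out of local optima, unlike a pure simplex descent.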

  20. Interactive overlays: a new method for generating global journal maps from Web-of-Science data

    Leydesdorff, L.; Rafols, I.

    2012-01-01

    Recent advances in methods and techniques enable us to develop interactive overlays to a global map of science based on aggregated citation relations among the 9162 journals contained in the Science Citation Index and Social Science Citation Index 2009. We first discuss the pros and cons of the

  1. A global calibration method for multiple vision sensors based on multiple targets

    Liu, Zhen; Zhang, Guangjun; Wei, Zhenzhong; Sun, Junhua

    2011-01-01

    The global calibration of multiple vision sensors (MVS) has been widely studied in the last two decades. In this paper, we present a global calibration method for MVS with non-overlapping fields of view (FOVs) using multiple targets (MT). The MT is constructed by fixing several targets, called sub-targets, together; the mutual coordinate transformations between sub-targets need not be known. The main procedures of the proposed method are as follows: one vision sensor is selected from the MVS to establish the global coordinate frame (GCF). The MT is placed in front of the vision sensors several times (at least four). Using the constraint that the relative positions of all sub-targets are invariant, the transformation matrix from the coordinate frame of each vision sensor to the GCF can be solved. Both synthetic and real experiments are carried out and good results are obtained. The proposed method has been applied to several real measurement systems and shown to be both flexible and accurate. It can serve as an attractive alternative to existing global calibration methods

  2. Global Learning in a Geography Course Using the Mystery Method as an Approach to Complex Issues

    Applis, Stefan

    2014-01-01

    The study on which this essay is based examines whether the complexity of global issues can be addressed at the level of teaching methodology. In this context, the first qualitative and constructive study was carried out to research the Mystery Method using the Thinking-Through-Geography approach (David Leat,…

  3. A method to determine the necessity for global signal regression in resting-state fMRI studies.

    Chen, Gang; Chen, Guangyu; Xie, Chunming; Ward, B Douglas; Li, Wenjun; Antuono, Piero; Li, Shi-Jiang

    2012-12-01

    In resting-state functional MRI studies, the global signal (operationally defined as the global average of resting-state functional MRI time courses) is often considered a nuisance effect and is commonly removed in preprocessing. This global signal regression method can introduce artifacts, such as false anticorrelated resting-state networks in functional connectivity analyses, so the efficacy of this technique as a correction tool remains questionable. In this article, we establish that the accuracy of the estimated global signal is determined by the level of global noise (i.e., non-neural noise that has a global effect on the resting-state functional MRI signal). When the global noise level is low, the global signal resembles the resting-state functional MRI time courses of the largest cluster, but not those of the global noise. Using real data, we demonstrate that the global signal is strongly correlated with the default mode network components and has biological significance. These results call into question whether global signal regression should be applied. We introduce a method to quantify global noise levels and show that a criterion for global signal regression can be derived from it. Using this criterion, one can determine whether to include or exclude global signal regression so as to minimize errors in functional connectivity measures.
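
The preprocessing step under discussion, global signal regression, amounts to projecting each voxel's time course onto the orthogonal complement of the global average. A minimal sketch on synthetic data (the dimensions and noise model are assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy resting-state data: 50 voxel time courses of 200 time points,
# each contaminated by a shared ("global") noise component.
n_t = 200
shared = rng.standard_normal(n_t)
voxels = rng.standard_normal((50, n_t)) + 0.8 * shared

# Global signal = average over voxels, centred and normalised.
g = voxels.mean(axis=0)
g = (g - g.mean()) / np.linalg.norm(g - g.mean())

# Regress it out: subtract each time course's projection onto g.
cleaned = voxels - np.outer(voxels @ g, g)
```

After this step every time course is orthogonal to the estimated global signal, which is precisely why anticorrelations can appear artificially: the projection forces the voxel-wise correlations with `g` to zero regardless of whether `g` captured noise or neural signal.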

  4. Optimised Design and Analysis of All-Optical Networks

    Glenstrup, Arne John

    2002-01-01

    This PhD thesis presents a suite of methods for optimising design and for analysing blocking probabilities of all-optical networks. It thus contributes methodical knowledge to the field of computer-assisted planning of optical networks. A two-stage greenfield optical network design optimiser is developed, based on shortest-path algorithms and a comparatively new metaheuristic called simulated allocation. It is able to handle design of all-optical mesh networks with optical cross-connects, considers duct as well as fibre and node costs, and can also design protected networks. The method is assessed through various experiments and is shown to produce good results and to be able to scale up to networks of realistic sizes. A novel method, subpath wavelength grouping, is proposed for routing connections in a multigranular all-optical network where several wavelengths can be grouped and switched at band and fibre...

  5. Hooke–Jeeves Method-used Local Search in a Hybrid Global Optimization Algorithm

    V. D. Sulimov

    2014-01-01

    Modern methods for the optimisation study of complex systems are based on developing and updating mathematical models of those systems by solving the appropriate inverse problems. The input data required for a solution are obtained from the analysis of experimentally determined characteristics of a system or a process. The sought causal characteristics include the coefficients of the equations of the object's mathematical model, boundary conditions, and so on. The optimisation approach is one of the main ways to solve inverse problems. In the general case it is necessary to find a global extremum of a criterion function that is not everywhere differentiable. Global optimization methods are widely used in identification and computational diagnosis problems, as well as in optimal control, computed tomography, image restoration, neural network training, and other intelligent technologies. The increasingly complicated systems to be optimised observed over the last decades lead to more complicated mathematical models, making the solution of the corresponding extreme problems significantly more difficult. In many practical applications the problem conditions restrict modelling; as a consequence, in inverse problems the criterion functions can be non-differentiable in places and noisy. The presence of noise makes calculating derivatives difficult and unreliable, which motivates optimization methods that do not require derivatives. The efficiency of deterministic global optimization algorithms is significantly restricted by their dependence on the dimension of the extreme problem; when the number of variables is large, stochastic global optimization algorithms are used instead. However, stochastic algorithms can be computationally expensive, and this drawback restricts their application. This motivates the development of hybrid algorithms that combine a stochastic algorithm for scanning the variable space with a deterministic local search.
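
As a concrete example of the deterministic local-search component named in the title, here is a minimal Hooke–Jeeves pattern search (exploratory moves plus pattern moves, shrinking the mesh when no move improves). Parameter values are illustrative.

```python
def hooke_jeeves(f, x0, step=0.5, eps=1e-6, shrink=0.5):
    """Derivative-free Hooke-Jeeves pattern search (minimisation)."""
    def explore(base, s):
        # Try +/- s along each coordinate, keeping any improvement.
        x = list(base)
        for i in range(len(x)):
            for d in (s, -s):
                trial = list(x)
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    x = list(x0)
    while step > eps:
        y = explore(x, step)
        if f(y) < f(x):
            # Pattern move: extrapolate along the successful direction,
            # then explore around the extrapolated point.
            pattern = [2 * yi - xi for xi, yi in zip(x, y)]
            z = explore(pattern, step)
            x = z if f(z) < f(y) else y
        else:
            step *= shrink   # no improvement: refine the mesh
    return x

xmin = hooke_jeeves(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
```

Since only function values are compared, the method tolerates the non-differentiable, noisy criterion functions described above, which is why it pairs well with a stochastic global scan.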

  6. Mixed multiscale finite element methods using approximate global information based on partial upscaling

    Jiang, Lijian

    2009-10-02

    The use of limited global information in multiscale simulations is needed when there is no scale separation. Previous approaches entail fine-scale simulations in the computation of the global information. The computation of the global information is expensive. In this paper, we propose the use of approximate global information based on partial upscaling. A requirement for partial homogenization is to capture long-range (non-local) effects present in the fine-scale solution, while homogenizing some of the smallest scales. The local information at these smallest scales is captured in the computation of basis functions. Thus, the proposed approach allows us to avoid the computations at the scales that can be homogenized. This results in coarser problems for the computation of global fields. We analyze the convergence of the proposed method. Mathematical formalism is introduced, which allows estimating the errors due to small scales that are homogenized. The proposed method is applied to simulate two-phase flows in heterogeneous porous media. Numerical results are presented for various permeability fields, including those generated using two-point correlation functions and channelized permeability fields from the SPE Comparative Project (Christie and Blunt, SPE Reserv Evalu Eng 4:308-317, 2001). We consider simple cases where one can identify the scales that can be homogenized. For more general cases, we suggest the use of upscaling on the coarse grid with the size smaller than the target coarse grid where multiscale basis functions are constructed. This intermediate coarse grid renders a partially upscaled solution that contains essential non-local information. Numerical examples demonstrate that the use of approximate global information provides better accuracy than purely local multiscale methods.

  7. Dose optimisation in computed radiography

    Schreiner-Karoussou, A.

    2005-01-01

    After the installation of computed radiography (CR) systems in three hospitals in Luxembourg a patient dose survey was carried out for three radiographic examinations, thorax, pelvis and lumbar spine. It was found that the patient doses had changed in comparison with the patient doses measured for conventional radiography in the same three hospitals. A close collaboration between the manufacturers of the X-ray installations, the CR imaging systems and the medical physicists led to the discovery that the speed class with which each radiographic examination was to be performed, had been ignored, during installation of the digital imaging systems. A number of procedures were carried out in order to calibrate and program the X-ray installations in conjunction with the CR systems. Following this optimisation procedure, a new patient dose survey was carried out for the three radiographic examinations. It was found that patient doses for the three hospitals were reduced. (authors)

  8. Optimising costs in WLCG operations

    Pradillo, Mar; Flix, Josep; Forti, Alessandra; Sciabà, Andrea

    2015-01-01

    The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse the 50 Petabytes of data annually generated by the LHC. The WLCG operations are coordinated by a distributed team of managers and experts and performed by people at all participating sites and from all the experiments. Several improvements in the WLCG infrastructure have been implemented during the first long LHC shutdown to prepare for the increasing needs of the experiments during Run2 and beyond. However, constraints in funding will affect not only the computing resources but also the available effort for operations. This paper presents the results of a detailed investigation on the allocation of the effort in the different areas of WLCG operations, identifies the most important sources of inefficiency and proposes viable strategies for optimising the operational cost, taking into account the current trends in the evolution of the computing infrastruc...

  9. The radiation protection optimisation in contrast X-ray diagnostic techniques

    Markovic, S.; Pavlovic, R.

    1995-01-01

    In the class of artificial sources, X-ray diagnostic techniques irradiate global population with more than 90 % share in total dose. At the same time this is the only area with high possibilities in collective dose reduction without important investments. Exposure of the medical team is mainly related to unnecessary irradiation. Eliminating this unnecessary irradiation quality of diagnostic information remains undisturbed. From the radiation protection point of view the most critical X-ray diagnostic method is angiography. This paper presents the radiation protection optimisation calculation of the protective lead thickness using the Cost - Benefit analysis technique. The obtained numerical results are based on calculated collective dose, the estimated prices of the lead and lead glass thickness and the adopted price for monetary value of the collective dose unit α. (author) 3 figs., 10 refs

  10. The radiation protection optimisation in contrast X-ray diagnostic techniques

    Markovic, S; Pavlovic, R [Inst. of Nuclear Science Vinca, Belgrade (Yugoslavia). Radiation and Environmental Protection Lab.; Boreli, F [Fac. of Electrical Engineering, Belgrade (Yugoslavia)

    1996-12-31

    In the class of artificial sources, X-ray diagnostic techniques irradiate global population with more than 90 % share in total dose. At the same time this is the only area with high possibilities in collective dose reduction without important investments. Exposure of the medical team is mainly related to unnecessary irradiation. Eliminating this unnecessary irradiation quality of diagnostic information remains undisturbed. From the radiation protection point of view the most critical X-ray diagnostic method is angiography. This paper presents the radiation protection optimisation calculation of the protective lead thickness using the Cost - Benefit analysis technique. The obtained numerical results are based on calculated collective dose, the estimated prices of the lead and lead glass thickness and the adopted price for monetary value of the collective dose unit α. (author) 3 figs., 10 refs.

  11. CONSTRUCTION THEORY AND NOISE ANALYSIS METHOD OF GLOBAL CGCS2000 COORDINATE FRAME

    Z. Jiang

    2018-04-01

    The definition, renewal and maintenance of geodetic datums has long been a hot international issue. In recent years, many countries have been studying and implementing the modernization and renewal of their local geodetic reference coordinate frames. Based on the precise results of 15 years of continuous observation from the state CORS (continuously operating reference station) network and the mainland GNSS (Global Navigation Satellite System) network between 1999 and 2007, this paper studies the construction of a mathematical model of the Global CGCS2000 frame, and mainly analyzes the theory and algorithm of the two-step method for Global CGCS2000 coordinate frame formulation. Finally, the noise characteristics of the coordinate time series are estimated quantitatively under the criterion of maximum likelihood estimation.

  12. A Globally Convergent Matrix-Free Method for Constrained Equations and Its Linear Convergence Rate

    Min Sun

    2014-01-01

    Full Text Available A matrix-free method for constrained equations is proposed, combining the well-known PRP (Polak-Ribière-Polyak) conjugate gradient method with the famous hyperplane projection method. The new method is not only derivative-free but also completely matrix-free, and consequently it can be applied to solve large-scale constrained equations. We obtain global convergence of the new method without any differentiability requirement on the constrained equations. Compared with existing gradient methods for solving such problems, the new method possesses a linear convergence rate under standard conditions, and a relaxation factor γ is attached to the update step to accelerate convergence. Preliminary numerical results show that it is promising in practice.
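A minimal sketch of the two ingredients named in the abstract — a PRP-type direction built only from residuals and a hyperplane projection step with a relaxation factor — applied to a toy monotone system with nonnegativity constraints. This is not the authors' exact algorithm; the line-search rule, safeguard, and parameter values are assumptions:

```python
import numpy as np

# Toy monotone system: solve F(x) = 0 subject to x >= 0, componentwise
F = lambda x: 2.0 * x + np.sin(x) - 1.0
project = lambda x: np.maximum(x, 0.0)   # projection onto the feasible set
gamma, sigma = 0.9, 1e-4                 # relaxation factor and line-search parameter (assumed)

x = np.full(5, 2.0)
Fx = F(x)
d = -Fx
for _ in range(500):
    if np.linalg.norm(Fx) < 1e-8:
        break
    # backtracking line search: find z = x + t*d with -F(z)^T d >= sigma * t * ||d||^2
    t = 1.0
    while -(F(x + t * d) @ d) < sigma * t * (d @ d):
        t *= 0.5
    z = x + t * d
    Fz = F(z)
    # hyperplane projection step, relaxed by gamma
    x_new = project(x - gamma * (Fz @ (x - z)) / (Fz @ Fz) * Fz)
    F_new = F(x_new)
    beta = F_new @ (F_new - Fx) / (Fx @ Fx)  # PRP-type coefficient from residuals only
    d = -F_new + beta * d
    if -(F_new @ d) < 1e-10:                 # safeguard: restart when d is not a descent direction
        d = -F_new
    x, Fx = x_new, F_new
```

Note that only evaluations of F appear: no Jacobian is formed or stored, which is what "matrix-free" buys for large-scale systems.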

  13. The optimization of treatment and management of schizophrenia in Europe (OPTiMiSE) trial

    Leucht, Stefan; Winter-van Rossum, Inge; Heres, Stephan

    2015-01-01

    Commission sponsored "Optimization of Treatment and Management of Schizophrenia in Europe" (OPTiMiSE) trial which aims to provide a treatment algorithm for patients with a first episode of schizophrenia. METHODS: We searched Pubmed (October 29, 2014) for randomized controlled trials (RCTs) that examined...... switching the drug in nonresponders to another antipsychotic. We described important methodological choices of the OPTiMiSE trial. RESULTS: We found 10 RCTs on switching antipsychotic drugs. No trial was conclusive and none was concerned with first-episode schizophrenia. In OPTiMiSE, 500 first episode...

  14. The global equilibrium method and its hybrid implementation for identifying heterogeneous elastic material parameters

    Lubineau, Gilles

    2011-04-01

    New identification strategies have to be developed in order to perform identification quickly and at very low cost. A popular class of approaches relies on full-field measurements obtained through digital image correlation. We propose here a global equilibrium approach. It is based on the virtual fields method when specific virtual fields are used, and it can also be seen as a generalization of the equilibrium gap method. This approach is easy to implement, and we show that it provides results better than or comparable to those of the constitutive equation gap method, which is known to be a very accurate reference. © 2010 Elsevier B.V.

  15. The Mechanism of Russian Nanoindustry Development Caused by Globalization: Methods and Tools

    Avtonomova Oksana Alekseevna

    2015-05-01

    Full Text Available Establishing an effective mechanism for the functioning of the Russian nanoindustry by means of the benefits of globalization is one of the most important factors in raising the competitiveness of the national economy during the transition to a new technological paradigm, of which nanotechnology is a key element. The mechanism of globalization-driven nanoindustrial development of the Russian Federation is characterized in the article as a way of task-oriented implementation and management of global nanotechnology industry development on the basis of cooperation between entities at different levels of the global economic system, using the appropriate methods, tools, resources, communication channels, factors and capitals. The mechanism aims at resolving the contradictions faced by Russian entities in their business activities in the production, consumption and promotion of nanotechnologies, nanogoods and nanoservices. Within the framework of a theoretical study, the author proposes a classification of methods and tools for developing the Russian nanoindustry through international cooperation, organized by economic function: planning, institution building, organization, management, investment, finance, information, analysis and control, all aimed at unifying the concepts and actions of collaborating entities in the sphere of nanotechnology. The developed methodology of international nanoindustrial interaction of Russian entities includes result-oriented, institutional, organizational, budgetary, investment, tax, informational and administrative methods, as well as analysis, audit, accounting and evaluation. In addition, the article demonstrates the feasibility of applying marketing tools to nanoindustrial cooperation aimed at developing a more efficient strategy for promoting products with nanofeatures to the global market.

  16. Techno-economic optimisation of energy systems; Contribution a l'optimisation technico-economique de systemes energetiques

    Mansilla Pellen, Ch

    2006-07-15

    The traditional approach currently used to assess the economic interest of energy systems is based on a defined flow-sheet. Some studies have shown that the flow-sheets corresponding to the best thermodynamic efficiencies do not necessarily lead to the best production costs. A method called techno-economic optimisation was therefore proposed. This method aims at minimising the production cost of a given energy system, including both investment and operating costs, and was implemented using genetic algorithms. The approach was compared with the heat integration method on two different examples, validating its interest. Techno-economic optimisation was then applied to several energy systems dealing with hydrogen as well as electricity production. (author)
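The genetic-algorithm side of this approach can be illustrated on a toy production-cost model (entirely hypothetical; the thesis couples the algorithm to real flow-sheet simulations):

```python
import numpy as np

rng = np.random.default_rng(0)

def production_cost(s):
    # hypothetical $/unit cost: capital charges vs operating inefficiency,
    # with a known minimum of 1.0 at plant size s = 2
    return (s - 2.0) ** 2 + 1.0

pop = rng.uniform(0.0, 10.0, size=40)   # initial population of candidate designs
for _ in range(60):
    parents = pop[np.argsort(production_cost(pop))][:20]      # keep the 20 cheapest
    a = rng.choice(parents, size=20)
    b = rng.choice(parents, size=20)
    children = 0.5 * (a + b) + rng.normal(0.0, 0.3, size=20)  # crossover + mutation
    pop = np.concatenate([parents, children])                 # elitist replacement
best = pop[np.argmin(production_cost(pop))]
```

In the real setting each call to `production_cost` would be a full flow-sheet evaluation, which is why population methods that need no derivatives are attractive here.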

  17. Methods for simultaneously identifying coherent local clusters with smooth global patterns in gene expression profiles

    Lee Yun-Shien

    2008-03-01

    Full Text Available Abstract Background The hierarchical clustering tree (HCT) with a dendrogram [1] and the singular value decomposition (SVD) with a dimension-reduced representative map [2] are popular methods for two-way sorting of the gene-by-array matrix map employed in gene expression profiling. While HCT dendrograms tend to optimize local coherent clustering patterns, SVD leading eigenvectors usually identify better global grouping and transitional structures. Results This study proposes a flipping mechanism for a conventional agglomerative HCT, using a rank-two ellipse (R2E) seriation, an improved SVD algorithm for sorting proposed by Chen [3], as an external reference. While HCTs always produce permutations with good local behaviour, the rank-two ellipse seriation gives the best global grouping patterns and smooth transitional trends. The resulting algorithm automatically integrates the desirable properties of each method, so that users have access to a clustering and visualization environment for gene expression profiles that preserves coherent local clusters and identifies global grouping trends. Conclusion We demonstrate, through four examples, that the proposed method not only possesses better numerical and statistical properties but also provides more meaningful biomedical insights than other sorting algorithms. We suggest that sorted proximity matrices for genes and arrays, in addition to the gene-by-array expression matrix, can greatly aid in the search for a comprehensive understanding of gene expression structures. Software for the proposed methods can be obtained at http://gap.stat.sinica.edu.tw/Software/GAP.

  18. A method for improving global pyranometer measurements by modeling responsivity functions

    Lester, A. [Smith College, Northampton, MA 01063 (United States); Myers, D.R. [National Renewable Energy Laboratory, 1617 Cole Blvd., Golden, CO 80401 (United States)

    2006-03-15

    Accurate global solar radiation measurements are crucial to climate change research and the development of solar energy technologies. Pyranometers produce an electrical signal proportional to global irradiance; the signal-to-irradiance ratio is the responsivity (RS) of the instrument (RS = signal/irradiance, in microvolts/(W/m{sup 2})). Most engineering measurements are made using a constant RS, yet RS is known to vary with day of year, zenith angle, and net infrared radiation. This study proposes a method for finding an RS function that models a pyranometer's changing RS. Using a reference irradiance calculated from direct and diffuse instruments, we found the instantaneous RS for two global pyranometers over 31 sunny days in a two-year period. We performed successive independent regressions of the error between the constant and instantaneous RS with respect to zenith angle, day of year, and net infrared to obtain an RS function. An alternative method replaced the infrared regression with an independently developed technique to account for thermal offset. Results show lower uncertainties with the function method than with the single calibration value. Lower uncertainties also occur when using a black-and-white (8-48), rather than an all-black (PSP), shaded pyranometer as the diffuse reference instrument. We conclude that the function method is extremely effective in reducing uncertainty in the irradiance measurements of global PSP pyranometers if they are calibrated at the deployment site. Furthermore, the function method was found to account for the pyranometer's thermal offset, rendering further corrections unnecessary. The improvements in irradiance data achieved in this study will serve to increase the accuracy of solar energy assessments and atmospheric research. (author)
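The core idea — regressing the error of a constant responsivity against an explanatory variable to obtain an RS function — can be sketched on synthetic data (the study's actual regressions also include day of year and net infrared):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "truth": responsivity drifts linearly with zenith angle (assumed)
zenith = rng.uniform(10.0, 80.0, 500)               # solar zenith angle, degrees
rs_true = 8.0 + 0.01 * (zenith - 45.0)              # uV per W/m^2
irradiance = 1000.0 * np.cos(np.radians(zenith))    # reference global irradiance, W/m^2
signal = rs_true * irradiance                       # pyranometer output, uV

rs_const = 8.0                                      # single calibration value
rs_inst = signal / irradiance                       # instantaneous responsivity
coeffs = np.polyfit(zenith, rs_inst - rs_const, 1)  # regress the RS error on zenith
rs_func = rs_const + np.polyval(coeffs, zenith)     # responsivity *function*

err_const = np.abs(signal / rs_const - irradiance).max()  # worst error, constant RS
err_func = np.abs(signal / rs_func - irradiance).max()    # worst error, RS function
```

Even this one-variable version shows the mechanism: dividing the signal by a fitted RS function rather than a constant removes the zenith-dependent bias.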

  19. Visual grading characteristics and ordinal regression analysis during optimisation of CT head examinations.

    Zarb, Francis; McEntee, Mark F; Rainford, Louise

    2015-06-01

    To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that the optimised protocols gave image quality similar to that of the current protocols. Ordinal logistic regression provided an in-depth, criterion-by-criterion evaluation, allowing selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24 % to 36 %. In the second centre a 29 % reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better-informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.

  20. PHYSICAL-MATHEMATICAL SCIENCE. MECHANICS. SIMULATION CHALLENGES IN OPTIMISING THEORETICAL METAL CUTTING TASKS

    Rasul V. Guseynov

    2017-01-01

    Full Text Available Abstract. Objectives The article addresses problems in optimising machining operations so as to produce end units of the required quality at minimum processing cost. Methods The effectiveness of experimental research was increased through the use of mathematical methods for planning experiments in optimising metal cutting tasks. A minimum-processing-cost model, in which the objective function is polynomial, is adopted as the criterion for selecting optimal parameters. Results Polynomial models of the influence of the angles φ, α and γ on the torque applied when cutting threads in various steels were constructed. Optimal values of the geometrical tool parameters were obtained using the criterion of minimum cutting forces during processing. Tools having optimal geometric parameters are shown to be highly stable, and the use of experimental planning methods is shown to allow the optimisation of cutting parameters. In optimising solutions to metal cutting problems, it is expedient to use multifactor experimental planning methods and to select the cutting force as the optimisation parameter when determining tool geometry. Conclusion The joint use of geometric programming and experiment planning methods to optimise cutting parameters significantly increases the efficiency of technological metal processing approaches.
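The workflow described above — a planned factorial experiment, a polynomial response-surface fit, and selection of the angles minimising the fitted torque — can be sketched as follows (the torque model and angle ranges are invented for illustration):

```python
import numpy as np

def torque(phi, gamma):
    # assumed true response surface (N*m); minimum at phi = 60, gamma = 10
    return 5.0 + 0.02 * (phi - 60.0) ** 2 + 0.05 * (gamma - 10.0) ** 2

phi_lv = np.array([40.0, 60.0, 80.0])
gam_lv = np.array([0.0, 10.0, 20.0])
P, G = np.meshgrid(phi_lv, gam_lv)          # 3 x 3 full factorial design
y = torque(P, G).ravel()                    # "measured" torques

# least-squares fit of a second-order polynomial model in the two angles
X = np.column_stack([np.ones(9), P.ravel(), G.ravel(),
                     P.ravel() ** 2, G.ravel() ** 2, (P * G).ravel()])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# minimise the fitted polynomial on a fine grid of angles
pp, gg = np.meshgrid(np.linspace(40.0, 80.0, 201), np.linspace(0.0, 20.0, 201))
pred = (b[0] + b[1] * pp + b[2] * gg
        + b[3] * pp ** 2 + b[4] * gg ** 2 + b[5] * pp * gg)
i = np.argmin(pred)
phi_opt, gam_opt = pp.ravel()[i], gg.ravel()[i]
```

Nine planned runs suffice to identify the six polynomial coefficients, which is the economy that experiment-planning methods provide over one-factor-at-a-time testing.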

  1. Numerical optimisation of an axial turbine; Numerische Optimierung einer Axialturbine

    Welzel, B.

    1998-12-31

    The author presents a method for the automatic shape optimisation of components with internal or external flow. The method couples a program for the numerical calculation of frictional turbulent flow with an optimisation algorithm, either a simplex search strategy or an evolution strategy. The shape of the component to be optimised is variable through shape parameters modified by the algorithm. For each shape a flow calculation is carried out, from which a functional value such as efficiency, loss, lift or drag force is computed. For validation, the optimisation method is first applied to simple examples with known solutions. It is then applied to the components of a slow-running axial turbine: components with accelerated and decelerated rotationally symmetric flow and 2D blade profiles are optimised. (orig.)
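One of the two optimisers named above, the evolution strategy, can be sketched in a few lines. Here the CFD flow calculation is replaced by a stand-in loss function of the shape parameters, and the step-size rule is an assumed (1+1)-ES with 1/5-success-style adaptation, not the author's exact setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def loss(shape):
    # stand-in for the flow solver: penalises distance from a known "best" shape
    return np.sum((shape - np.array([0.3, -1.2, 2.0])) ** 2)

shape = np.zeros(3)     # shape parameters of the component
step = 1.0              # mutation step size
best = loss(shape)
for _ in range(400):
    trial = shape + step * rng.normal(size=3)   # mutate the shape parameters
    f = loss(trial)
    if f < best:
        shape, best = trial, f
        step *= 1.22    # expand the step on success
    else:
        step *= 0.95    # contract on failure
```

Each iteration costs one objective evaluation, i.e. one flow calculation in the real application, which is why derivative-free strategies like this pair naturally with black-box CFD codes.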

  2. Research Synthesis Methods in an Age of Globalized Risks: Lessons from the Global Burden of Foodborne Disease Expert Elicitation

    Hald, Tine; Angulo, Fred; Bin Hamzah, Wan Mansor

    2016-01-01

    We live in an age that increasingly calls for national or regional management of global risks. This article discusses the contributions that expert elicitation can bring to efforts to manage global risks and identifies challenges faced in conducting expert elicitation at this scale. In doing so...... it draws on lessons learned from conducting an expert elicitation as part of the World Health Organizations (WHO) initiative to estimate the global burden of foodborne disease; a study commissioned by the Foodborne Disease Epidemiology Reference Group (FERG). Expert elicitation is designed to fill gaps...

  3. A new method to generate a high-resolution global distribution map of lake chlorophyll

    Sayers, Michael J; Grimm, Amanda G.; Shuchman, Robert A.; Deines, Andrew M.; Bunnell, David B.; Raymer, Zachary B; Rogers, Mark W.; Woelmer, Whitney; Bennion, David; Brooks, Colin N.; Whitley, Matthew A.; Warner, David M.; Mychek-Londer, Justin G.

    2015-01-01

    A new method was developed, evaluated, and applied to generate a global dataset of growing-season chlorophyll-a (chl) concentrations in 2011 for freshwater lakes. Chl observations from freshwater lakes are valuable for estimating lake productivity as well as for assessing the role that these lakes play in carbon budgets. The standard 4 km NASA OceanColor L3 chlorophyll concentration products generated from MODIS and MERIS sensor data are not sufficiently representative of global chl values because they can only resolve larger lakes, which generally have lower chl concentrations than lakes of smaller surface area. Our new methodology utilizes the 300 m-resolution MERIS full-resolution full-swath (FRS) global dataset as input and does not rely on the land mask used to generate standard NASA products, which masks many lakes that are otherwise resolvable in MERIS imagery. The new method produced chl concentration values for 78,938 and 1,074 lakes in the northern and southern hemispheres, respectively. For lakes visible in the MERIS composite, the mean chl was 19.2 ± 19.2 mg m−3, the median 13.3 mg m−3, and the interquartile range 3.90–28.6 mg m−3. The accuracy of the MERIS-derived values was assessed by comparison with temporally near-coincident and globally distributed in situ measurements from the literature (n = 185, RMSE = 9.39, R2 = 0.72). This represents the first global-scale dataset of satellite-derived chl estimates for medium to large lakes.

  4. The q-G method : A q-version of the Steepest Descent method for global optimization.

    Soterroni, Aline C; Galski, Roberto L; Scarabello, Marluce C; Ramos, Fernando M

    2015-01-01

    In this work, the q-Gradient (q-G) method, a q-version of the Steepest Descent method, is presented. The main idea behind the q-G method is to use the negative of the q-gradient vector of the objective function as the search direction. The q-gradient vector, or simply the q-gradient, is a generalization of the classical gradient vector based on the concept of Jackson's derivative from the q-calculus. Its use provides the algorithm with an effective mechanism for escaping from local minima. The q-G method reduces to the Steepest Descent method when the parameter q tends to 1. The algorithm has three free parameters, and it is implemented so that the search process gradually shifts from global exploration in the beginning to local exploitation in the end. We evaluated the q-G method on 34 test functions and compared its performance with that of 34 optimization algorithms, including derivative-free algorithms and the Steepest Descent method. Our results show that the q-G method is competitive and has great potential for solving multimodal optimization problems.
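A hedged sketch of the idea (the parameter schedule and the exact sampling of q are assumptions, not the authors' implementation): each component of the q-gradient is a Jackson derivative, and q is drawn from a distribution that contracts towards 1 so the search shifts from exploration to exploitation.

```python
import numpy as np

rng = np.random.default_rng(3)

f = lambda x: np.sum(x ** 2)   # toy objective

def q_gradient(f, x, q):
    # componentwise Jackson derivative: (f(..., q_i * x_i, ...) - f(x)) / ((q_i - 1) * x_i)
    g = np.zeros_like(x)
    for i in range(x.size):
        xq = x.copy()
        xq[i] = q[i] * x[i]
        denom = (q[i] - 1.0) * x[i]
        if abs(denom) > 1e-12:
            g[i] = (f(xq) - f(x)) / denom
    return g

x = np.array([4.0, -3.0])
sigma, step = 0.5, 0.05
for _ in range(300):
    q = rng.lognormal(mean=0.0, sigma=sigma, size=2)  # q clusters around 1 as sigma shrinks
    x = x - step * q_gradient(f, x, q)
    sigma *= 0.98                                     # exploration -> exploitation schedule
```

When sigma reaches zero, q = 1 and the Jackson derivative reduces to the ordinary derivative, recovering plain Steepest Descent, which is the limiting behaviour the abstract describes.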

  5. Feasibility of the use of optimisation techniques to calibrate the models used in a post-closure radiological assessment

    Laundy, R.S.

    1991-01-01

    This report addresses the feasibility of the use of optimisation techniques to calibrate the models developed for the impact assessment of a radioactive waste repository. The maximum likelihood method for improving parameter estimates is considered in detail, and non-linear optimisation techniques for finding solutions are reviewed. Applications are described for the calibration of groundwater flow, radionuclide transport and biosphere models. (author)

  6. Keyframes Global Map Establishing Method for Robot Localization through Content-Based Image Matching

    Tianyang Cao

    2017-01-01

    Full Text Available Self-localization and mapping are important for indoor mobile robots. We report a robust algorithm for map building and subsequent localization especially suited for indoor floor-cleaning robots. Common methods, for example SLAM, can easily be derailed by collisions (the "kidnapped robot" situation) or confused by similar-looking objects, so a keyframes-based global map establishing method is needed for robot localization across multiple rooms and corridors. Content-based image matching is the core of this method. It is designed for this situation by establishing keyframes containing both floor and distorted wall images. Image distortion caused by the robot's view angle and movement is analyzed and modelled, and an image matching solution is presented, consisting of the extraction of overlap regions between keyframes and the rebuilding of overlap regions through sub-block matching. To improve accuracy, ceiling-point detection and mismatched-sub-block checking are incorporated. This matching method can process environment video effectively. In experiments, fewer than 5% of frames are extracted as keyframes to build the global map; these keyframes are widely spaced yet overlap each other. With this method, the robot can localize itself by matching its real-time vision frames against the keyframes map. Even with many similar objects or backgrounds in the environment, or after being kidnapped, robot localization is achieved with a position RMSE < 0.5 m.

  7. Producing accurate wave propagation time histories using the global matrix method

    Obenchain, Matthew B; Cesnik, Carlos E S

    2013-01-01

    This paper presents a reliable method for producing accurate displacement time histories for wave propagation in laminated plates using the global matrix method. The existence of inward and outward propagating waves in the general solution is highlighted while examining the axisymmetric case of a circular actuator on an aluminum plate. Problems with previous attempts to isolate the outward wave for anisotropic laminates are shown. The updated method develops a correction signal that can be added to the original time history solution to cancel the inward wave and leave only the outward propagating wave. The paper demonstrates the effectiveness of the new method for circular and square actuators bonded to the surface of isotropic laminates, and these results are compared with exact solutions. Results for circular actuators on cross-ply laminates are also presented and compared with experimental results, showing the ability of the new method to successfully capture the displacement time histories for composite laminates. (paper)

  8. Comparison of several methods to calculate sunshine hours from global radiation; Vergelijking van diverse methodes voor de berekening van zonneschijnduur uit globale straling

    Schipper, J.

    2004-07-01

    non-scientific use of sunshine hour data. As the sensor can also be used to make accurate solar radiation measurements, scientific data can be collected at the same time. Further development resulted in an alteration of this algorithm in 1993 (Algorithm Bergman). In this study the results for 'sunshine duration' calculated by both algorithms are compared to measurements from the Campbell-Stokes recorder. The results of the comparison are discussed and concluding remarks are given. [Dutch] KNMI has stored the hourly values of sunshine duration since 1899 in its climatological database. These values are the basis for numerous climatological products. On 1 January 1993 all KNMI stations switched to a different method of determining sunshine duration. Since that date the old and trusted Campbell-Stokes (burn-trace) method is no longer used; it has been replaced by an indirect method based on global radiation. The reason for the change was new WMO guidelines: in September 1989 the WMO withdrew the Campbell-Stokes recorder as the standard instrument for determining sunshine duration because of its excessive measurement uncertainty and its climate sensitivity. KNMI therefore chose a more indirect method. To this end Slob developed a method in which sunshine duration is derived from measurements of global radiation, in particular from the 10-minute registrations of the mean, maximum and minimum global radiation, the so-called 'Slob algorithm'. With this algorithm the sunshine duration per 10-minute interval is determined from this information. The 10-minute values are the basis for calculating the hourly and daily totals of sunshine duration. The method was brought into operational use in the period 1991-1993. The new data-collection techniques made it possible, using a higher time resolution (10-minute basis), to obtain an accurate
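For illustration only (this is the generic WMO sunshine criterion, not the Slob algorithm, whose decision rules are more involved): sunshine is defined as direct normal irradiance above 120 W/m², so 10-minute records of global and diffuse irradiance plus the solar zenith angle suffice to classify each interval.

```python
import math

THRESHOLD = 120.0   # W/m^2, WMO criterion on direct normal irradiance

def sunny(global_irr, diffuse_irr, zenith_deg):
    mu = math.cos(math.radians(zenith_deg))
    if mu <= 0.0:
        return False                                  # sun below the horizon
    direct_normal = (global_irr - diffuse_irr) / mu   # beam component from global minus diffuse
    return direct_normal > THRESHOLD

# sunshine duration over three 10-minute records: (global, diffuse, zenith in degrees)
records = [(650.0, 90.0, 40.0), (180.0, 170.0, 40.0), (700.0, 100.0, 35.0)]
minutes = sum(10 for g, d, z in records if sunny(g, d, z))
```

The practical difficulty, which the Slob algorithm addresses, is that stations measuring only global radiation must infer the beam component indirectly, e.g. from the spread between the 10-minute minimum and maximum.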

  9. Combining simulation and multi-objective optimisation for equipment quantity optimisation in container terminals

    Lin, Zhougeng

    2013-01-01

    This thesis proposes a combination framework to integrate simulation and multi-objective optimisation (MOO) for container terminal equipment optimisation. It addresses how the strengths of simulation and multi-objective optimisation can be integrated to find high quality solutions for multiple objectives with low computational cost. Three structures for the combination framework are proposed respectively: pre-MOO structure, integrated MOO structure and post-MOO structure. The applications of ...

  10. Contribution to the optimisation of food process control; Contribution à l'optimisation de la conduite des procédés alimentaires

    Olmos-Perez , Alejandra

    2003-01-01

    The main objective of this work is the development of a methodology for calculating the optimal operating conditions applicable to food process control. In the first part of the work, we developed a two-stage optimization strategy: first, the optimisation problem is constructed; then, a feasible optimisation method is chosen to solve it. This choice is made through a decision diagram, which proposes a deterministic (sequential quadratic programming, SQP), a stochastic ...

  11. Layout Optimisation of Wave Energy Converter Arrays

    Pau Mercadé Ruiz

    2017-08-01

    Full Text Available This paper proposes an optimisation strategy for the layout design of wave energy converter (WEC) arrays. Optimal layouts are sought so as to maximise the absorbed power given a minimum q-factor, a minimum distance between WECs, and an area of deployment. To guarantee an efficient optimisation, a four-parameter layout description is proposed. Three different optimisation algorithms are further compared in terms of performance and computational cost: the covariance matrix adaptation evolution strategy (CMA), a genetic algorithm (GA) and the glowworm swarm optimisation (GSO) algorithm. The results show slightly higher performances for the latter two algorithms; however, the first turns out to be significantly less computationally demanding.

  12. Topology optimisation of natural convection problems

    Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe

    2014-01-01

    This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations...... coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences...... in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach...

  13. Topology Optimisation for Coupled Convection Problems

    Alexandersen, Joe

    This thesis deals with topology optimisation for coupled convection problems. The aim is to extend and apply topology optimisation to steady-state conjugate heat transfer problems, where the heat conduction equation governs the heat transfer in a solid and is coupled to thermal transport...... in a surrounding uid, governed by a convection-diffusion equation, where the convective velocity field is found from solving the isothermal incompressible steady-state Navier-Stokes equations. Topology optimisation is also applied to steady-state natural convection problems. The modelling is done using stabilised...... finite elements, the formulation and implementation of which was done partly during a special course as prepatory work for this thesis. The formulation is extended with a Brinkman friction term in order to facilitate the topology optimisation of fluid flow and convective cooling problems. The derived...

  14. Benchmarks for dynamic multi-objective optimisation

    Helbig, M

    2013-06-01

    Full Text Available When algorithms solve dynamic multi-objective optimisation problems (DMOOPs), benchmark functions should be used to determine whether the algorithm can overcome specific difficulties that can occur in real-world problems. However, for dynamic multi...

  15. Credit price optimisation within retail banking

    2014-02-14

    cost based pricing, where the price of a product or service is based on the ... function obtained from fitting a logistic regression model ... Note that the proposed optimisation approach below will allow us to also incorporate ...

  16. Probing the parameter space of HD 49933: A comparison between global and local methods

    Creevey, O L [Instituto de Astrofisica de Canarias (IAC), E-38200 La Laguna, Tenerife (Spain); Bazot, M, E-mail: orlagh@iac.es, E-mail: bazot@astro.up.pt [Centro de Astrofisica da Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal)

    2011-01-01

    We present two independent methods for studying the global stellar parameter space (mass M, age, chemical composition X{sub 0}, Z{sub 0}) of HD 49933 with seismic data. Using a local minimization and an MCMC algorithm, we obtain consistent results for the determination of the stellar properties: M {approx} 1.1-1.2 M{sub sun}, age {approx} 3.0 Gyr, Z{sub 0} {approx} 0.008. A description of the error ellipses can be defined using Singular Value Decomposition techniques, and this is validated by comparing the errors with those from the MCMC method.
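The MCMC side of such a comparison can be sketched on a synthetic one-parameter problem (the observable, noise level and prior below are invented, not the HD 49933 analysis):

```python
import numpy as np

rng = np.random.default_rng(4)

truth = 1.15                            # "true" stellar mass, Msun (assumed)
obs = truth + rng.normal(0.0, 0.02)     # one noisy seismic observable, sigma = 0.02

def log_post(m):
    # Gaussian likelihood, flat prior on [0.8, 1.6]
    return -0.5 * ((obs - m) / 0.02) ** 2 if 0.8 < m < 1.6 else -np.inf

chain, m = [], 1.0
for _ in range(20000):
    prop = m + rng.normal(0.0, 0.05)    # random-walk Metropolis proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(m):
        m = prop
    chain.append(m)
samples = np.array(chain[5000:])        # discard burn-in
```

Unlike a local minimizer, which returns a single point estimate, the posterior sample characterises the full error distribution, which is what makes the cross-check against SVD-based error ellipses possible.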

  17. User perspectives in public transport timetable optimisation

    Jensen, Jens Parbo; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2014-01-01

    The present paper deals with timetable optimisation from the perspective of minimising the waiting time experienced by passengers when transferring either to or from a bus. Due to its inherent complexity, this bi-level minimisation problem is extremely difficult to solve mathematically, since tim...... on the large-scale public transport network in Denmark. The timetable optimisation approach yielded a yearly reduction in weighted waiting time equivalent to approximately 45 million Danish kroner (9 million USD)....

  18. Reference methods and materials. A programme of support for regional and global marine pollution assessments

    1990-01-01

    This document describes a programme of comprehensive support for regional and global marine pollution assessments developed by the United Nations Environment Programme (UNEP) in cooperation with the International Atomic Energy Agency (IAEA) and the Intergovernmental Oceanographic Commission (IOC), and with the collaboration of a number of other United Nations specialized agencies, including the Food and Agriculture Organisation (FAO), the World Meteorological Organisation (WMO), the World Health Organisation (WHO) and the International Maritime Organisation (IMO). Two of the principal components of this programme, Reference Methods and Reference Materials, are given special attention in this document, and a full Reference Method catalogue is included, giving details of over 80 methods currently available or in an advanced stage of preparation and testing. It is important that these methods be seen as a functional component of a much wider strategy necessary for assuring good-quality and intercomparable data for regional and global pollution monitoring, and the user is encouraged to read this document carefully before employing Reference Methods and Reference Materials in his/her laboratory. 3 figs

  19. Global Monitoring of Water Supply and Sanitation: History, Methods and Future Challenges

    Bartram, Jamie; Brocklehurst, Clarissa; Fisher, Michael B.; Luyendijk, Rolf; Hossain, Rifat; Wardlaw, Tessa; Gordon, Bruce

    2014-01-01

    International monitoring of drinking water and sanitation shapes awareness of countries’ needs and informs policy, implementation and research efforts to extend and improve services. The Millennium Development Goals established global targets for drinking water and sanitation access; progress towards these targets, facilitated by international monitoring, has contributed to reducing the global disease burden and increasing quality of life. The experiences of the MDG period generated important lessons about the strengths and limitations of current approaches to defining and monitoring access to drinking water and sanitation. The methods by which the Joint Monitoring Programme (JMP) of WHO and UNICEF tracks access and progress are based on analysis of data from household surveys and linear regression modelling of these results over time. These methods provide nationally-representative and internationally-comparable insights into the drinking water and sanitation facilities used by populations worldwide, but also have substantial limitations: current methods do not address water quality, equity of access, or extra-household services. Improved statistical methods are needed to better model temporal trends. This article describes and critically reviews JMP methods in detail for the first time. It also explores the impact of, and future directions for, international monitoring of drinking water and sanitation. PMID:25116635

  20. A theoretical global optimization method for vapor-compression refrigeration systems based on entransy theory

    Xu, Yun-Chao; Chen, Qun

    2013-01-01

Vapor-compression refrigeration systems are among the essential energy conversion systems for humankind and consume huge amounts of energy. Many effective optimization methods address the energy efficiency of these systems, but they rely mainly on engineering experience and computer simulations rather than theoretical analysis, owing to the complexity of the underlying physics. We attempt to propose a theoretical global optimization method based on in-depth physical analysis of the involved processes, i.e. heat transfer analysis for the condenser and evaporator, through introducing the entransy theory, and thermodynamic analysis for the compressor and expansion valve. The integration of heat transfer and thermodynamic analyses forms the overall physical optimization model for the systems, describing the relation between all the unknown parameters and the known conditions, which makes theoretical global optimization possible. With the aid of mathematical conditional extremum solutions, an optimization equation group and the optimal configuration of all the unknown parameters are analytically obtained. Eventually, via the optimization of a typical vapor-compression refrigeration system under various working conditions to minimize the total heat transfer area of the heat exchangers, the validity and superiority of the newly proposed optimization method are demonstrated. - Highlights: • A global optimization method for vapor-compression systems is proposed. • Integrating heat transfer and thermodynamic analyses forms the optimization model. • A mathematical relation between design parameters and requirements is derived. • Entransy dissipation is introduced into heat transfer analysis. • The validity of the method is proved via optimization of practical cases
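The conditional-extremum (Lagrange) step described above can be illustrated with a deliberately simple toy, not the paper's refrigeration model: minimise the total heat-exchanger area A1 + A2 subject to a fixed total thermal resistance a/A1 + b/A2 = R (a = 1/U1, b = 1/U2, all values hypothetical). The Lagrange condition gives the closed form A1* = sqrt(a)(sqrt(a) + sqrt(b))/R, which a brute-force scan confirms:

```python
import numpy as np

# Toy sketch of the conditional-extremum (Lagrange) idea, NOT the paper's
# refrigeration model: minimise total heat-exchanger area A1 + A2 subject
# to a fixed total thermal resistance  a/A1 + b/A2 = R  (a = 1/U1, b = 1/U2).
# Setting the gradient of A1 + A2 - lambda*(a/A1 + b/A2 - R) to zero yields
#   A1* = sqrt(a) * (sqrt(a) + sqrt(b)) / R.

a, b, R = 1.0, 4.0, 2.0                        # hypothetical values

# analytic optimum from the Lagrange condition
A1_star = np.sqrt(a) * (np.sqrt(a) + np.sqrt(b)) / R

# numerical check: eliminate A2 through the constraint and scan A1
A1 = np.linspace(a / R + 0.01, 5.0, 200_000)   # constraint needs A1 > a/R
A2 = b / (R - a / A1)                          # A2 forced by the constraint
A1_num = A1[np.argmin(A1 + A2)]

print(A1_star, A1_num)                         # the two agree closely
```

The same elimination-and-scan check works for any one-constraint toy of this kind; the paper's actual equation group couples many more parameters.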

  1. Compatibility of global environmental assessment methods of buildings with an Egyptian energy code

    Amal Kamal Mohamed Shamseldin

    2017-04-01

Several environmental assessment methods of buildings have emerged over the world to set environmental classifications for buildings, such as the American method “Leadership in Energy and Environmental Design” (LEED), the most widespread one. Several countries decided to develop their own assessment methods to catch up with this orientation, such as Egypt. The main goal of the Egyptian method was to impose the voluntary local energy efficiency codes. Through a local survey, it was clearly noted that many of the construction stakeholders in Egypt do not even know the local method, and those interested in the environmental assessment of buildings seek to apply LEED rather than anything else. Therefore, several questions arise about the American method's compatibility with the Egyptian energy codes – which contain the most exact characteristics and requirements and give the most credible energy efficiency results for buildings in Egypt – and about the possibility of finding another global method that gives closer results to those of the Egyptian codes, especially given the great variety of energy efficiency measurement approaches used among the different assessment methods. Thus, the researcher examines the compatibility of non-local assessment methods with the local energy efficiency codes. If the results are not compatible, the Egyptian government should take several steps to increase the local building sector's awareness of the Egyptian method to benefit these codes, and it should begin to enforce it within the building permits after proper guidance and feedback.

  2. Linear least-squares method for global luminescent oil film skin friction field analysis

    Lee, Taekjin; Nonomura, Taku; Asai, Keisuke; Liu, Tianshu

    2018-06-01

    A data analysis method based on the linear least-squares (LLS) method was developed for the extraction of high-resolution skin friction fields from global luminescent oil film (GLOF) visualization images of a surface in an aerodynamic flow. In this method, the oil film thickness distribution and its spatiotemporal development are measured by detecting the luminescence intensity of the thin oil film. From the resulting set of GLOF images, the thin oil film equation is solved to obtain an ensemble-averaged (steady) skin friction field as an inverse problem. In this paper, the formulation of a discrete linear system of equations for the LLS method is described, and an error analysis is given to identify the main error sources and the relevant parameters. Simulations were conducted to evaluate the accuracy of the LLS method and the effects of the image patterns, image noise, and sample numbers on the results in comparison with the previous snapshot-solution-averaging (SSA) method. An experimental case is shown to enable the comparison of the results obtained using conventional oil flow visualization and those obtained using both the LLS and SSA methods. The overall results show that the LLS method is more reliable than the SSA method and the LLS method can yield a more detailed skin friction topology in an objective way.
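The paper's discrete GLOF system is not reproduced here, but the core LLS step — recovering parameters from an overdetermined noisy linear system — can be sketched generically with `np.linalg.lstsq` (all data synthetic):

```python
import numpy as np

# Generic linear least-squares sketch: recover x from an overdetermined
# noisy system  y = A @ x_true + noise. This stands in for the discrete
# linear system assembled from the thin oil film equation in the record.

rng = np.random.default_rng(0)
m, n = 500, 3
A = rng.normal(size=(m, n))             # design matrix (hypothetical)
x_true = np.array([1.0, -2.0, 0.5])     # "skin friction" parameters to recover
y = A @ x_true + 0.01 * rng.normal(size=m)   # noisy observations

x_hat, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(x_hat)                             # close to x_true
```

With many observations per unknown, the LLS estimate averages out the per-image noise, which is the reason the record finds LLS more reliable than averaging per-snapshot solutions.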

  3. Mode decomposition methods for flows in high-contrast porous media. Global-local approach

    Ghommem, Mehdi; Presho, Michael; Calo, Victor M.; Efendiev, Yalchin R.

    2013-01-01

    In this paper, we combine concepts of the generalized multiscale finite element method (GMsFEM) and mode decomposition methods to construct a robust global-local approach for model reduction of flows in high-contrast porous media. This is achieved by implementing Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD) techniques on a coarse grid computed using GMsFEM. The resulting reduced-order approach enables a significant reduction in the flow problem size while accurately capturing the behavior of fully-resolved solutions. We consider a variety of high-contrast coefficients and present the corresponding numerical results to illustrate the effectiveness of the proposed technique. This paper is a continuation of our work presented in Ghommem et al. (2013) [1] where we examine the applicability of POD and DMD to derive simplified and reliable representations of flows in high-contrast porous media on fully resolved models. In the current paper, we discuss how these global model reduction approaches can be combined with local techniques to speed-up the simulations. The speed-up is due to inexpensive, while sufficiently accurate, computations of global snapshots. © 2013 Elsevier Inc.
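The POD half of the approach can be sketched in a few lines via the SVD of a snapshot matrix; here a synthetic field built from two spatial modes is reproduced exactly by a rank-2 truncation (a minimal sketch, not the GMsFEM coarse-grid construction):

```python
import numpy as np

# POD sketch via the SVD: a snapshot matrix built from two spatial modes
# is exactly captured by a rank-2 truncation.

x = np.linspace(0.0, 1.0, 100)          # space
t = np.linspace(0.0, 10.0, 50)          # time
# snapshot matrix (columns = snapshots), rank 2 by construction
X = (np.outer(np.sin(np.pi * x), np.cos(t))
     + 0.5 * np.outer(np.sin(2 * np.pi * x), np.sin(2 * t)))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 2
X_r = U[:, :r] * s[:r] @ Vt[:r, :]      # rank-r reduced-order reconstruction

err = np.linalg.norm(X - X_r) / np.linalg.norm(X)
print(err)                               # essentially zero: 2 modes suffice
```

In practice one chooses r from the decay of the singular values s, which play the role of modal energies.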

  5. Risk Assessment Method for Offshore Structure Based on Global Sensitivity Analysis

    Zou Tao

    2012-01-01

Based on global sensitivity analysis (GSA), this paper proposes a new risk assessment method for offshore structure design. The method first quantifies the significance of all random variables and their parameters; by comparing their degrees of importance, minor factors can be neglected, which simplifies the global uncertainty analysis. Global uncertainty analysis (GUA) is an effective way to study the complexity and randomness of natural events. Since field measured data and statistical results often have inevitable errors and uncertainties which lead to inaccurate prediction and analysis, the risk in the design stage of offshore structures caused by uncertainties in environmental loads, sea level, and marine corrosion must be taken into account. In this paper, the multivariate compound extreme value distribution model (MCEVD) is applied to predict the extreme sea state of wave, current, and wind. The maximum structural stress and deformation of a jacket platform are analyzed and compared with different design standards. The calculation results sufficiently demonstrate the new risk assessment method's soundness and safety.

  6. Comparison of methods for quantification of global DNA methylation in human cells and tissues.

    Sofia Lisanti

DNA methylation is a key epigenetic modification which, in mammals, occurs mainly at CpG dinucleotides. Most of the CpG methylation in the genome is found in repetitive regions, rich in dormant transposons and endogenous retroviruses. Global DNA hypomethylation, which is a common feature of several conditions such as ageing and cancer, can cause the undesirable activation of dormant repeat elements and lead to altered expression of associated genes. DNA hypomethylation can cause genomic instability and may contribute to mutations and chromosomal recombinations. Various approaches for quantification of global DNA methylation are widely used. Several of these approaches measure a surrogate for total genomic methylcytosine, and there is uncertainty about the comparability of these methods. Here we have applied three different approaches (the luminometric methylation assay, and pyrosequencing of the methylation status of the Alu and LINE1 repeat elements) for estimating global DNA methylation in the same human cell and tissue samples, and have compared these estimates with the "gold standard" of methylcytosine quantification by HPLC. Next to HPLC, the LINE1 approach shows the smallest variation between samples, followed by Alu. Pearson correlations and Bland-Altman analyses confirmed that global DNA methylation estimates obtained via the LINE1 approach corresponded best with HPLC-based measurements. Although we did not find compelling evidence that the gold standard measurement by HPLC could be substituted with confidence by any of the surrogate assays investigated here, the LINE1 assay seems likely to be an acceptable surrogate in many cases.
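The two comparison statistics named in the record — Pearson correlation and Bland-Altman agreement — are easy to sketch on synthetic data standing in for a reference assay and a biased surrogate (all numbers hypothetical):

```python
import numpy as np

# Sketch of the comparison statistics used in the record: Pearson
# correlation plus Bland-Altman bias and limits of agreement between a
# "gold standard" and a surrogate assay. All data are synthetic.

rng = np.random.default_rng(1)
truth = rng.uniform(2.0, 6.0, size=40)                   # hypothetical % methylcytosine
gold = truth + rng.normal(0.0, 0.1, size=40)             # HPLC-like reference
surrogate = truth + 0.3 + rng.normal(0.0, 0.2, size=40)  # offset-biased surrogate

r = np.corrcoef(gold, surrogate)[0, 1]                   # Pearson correlation

diff = surrogate - gold
bias = diff.mean()                                       # Bland-Altman bias
loa = (bias - 1.96 * diff.std(ddof=1),
       bias + 1.96 * diff.std(ddof=1))                   # 95% limits of agreement
print(r, bias, loa)
```

High correlation with a nonzero bias is exactly the pattern Bland-Altman analysis is designed to expose: the surrogate tracks the reference but is systematically shifted.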

  7. Global chaos synchronization of three coupled nonlinear autonomous systems and a novel method of chaos encryption

    An Xinlei; Yu Jianning; Chu Yandong; Zhang Jiangang; Zhang Li

    2009-01-01

In this paper, we discuss the fixed points and their linear stability for a new nonlinear autonomous system introduced by J.C. Sprott. Based on the Lyapunov stability theorem, a global chaos synchronization scheme of three coupled identical systems is investigated. By choosing proper coupling parameters, the states of all three systems can be synchronized. This scheme is then applied to secure communication through chaotic masking: using three coupled identical systems, we propose a novel method of chaos encryption in which, after encryption in the previous two transmitters, the information signal can be recovered exactly at the receiver end. Simulation results show that the method can realize monotone synchronization. Furthermore, the information signal can be recovered undistorted when applying this method to secure communication.

  8. The Global Benchmarking as a Method of Countering the Intellectual Migration in Ukraine

Striy Lyubov A.

    2017-05-01

The publication studies global benchmarking as a method of countering intellectual migration in Ukraine. The article explores the process of intellectual migration in Ukraine; analyzes the current status of the country in light of the crisis and the problems that have arisen; provides statistical data on the migration process and determines a method of countering it; considers types of benchmarking; analyzes the benchmarking method as a way of achieving this objective; and identifies the benefits to be derived from this method, as well as «bottlenecks» in the State's process of regulating migratory flows, which call not only for attention but also for corrective action.

  9. Two Modified Three-Term Type Conjugate Gradient Methods and Their Global Convergence for Unconstrained Optimization

    Zhongbo Sun

    2014-01-01

Two modified three-term type conjugate gradient algorithms which satisfy both the descent condition and the Dai-Liao type conjugacy condition are presented for unconstrained optimization. The first algorithm is a modification of the Hager and Zhang type algorithm in such a way that the search direction is descent and satisfies Dai-Liao's type conjugacy condition. The second, simple three-term type conjugate gradient method can generate sufficient descent directions at every iteration; moreover, this property is independent of the steplength line search. The algorithms could also be considered as modifications of the MBFGS method, but with different zk. Under some mild conditions, the given methods are globally convergent for general functions, independently of the Wolfe line search. The numerical experiments show that the proposed methods are very robust and efficient.
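The record's specific three-term directions are not reproduced here, but the general shape of a nonlinear conjugate gradient iteration — gradient, line search, conjugacy-based direction update, descent safeguard — can be sketched with the classic Polak-Ribière+ rule on a convex quadratic test function:

```python
import numpy as np

# Generic nonlinear conjugate gradient sketch (Polak-Ribiere+ with a
# backtracking Armijo line search), standing in for the modified
# three-term schemes of the record, on a convex quadratic.

rng = np.random.default_rng(0)
M = rng.normal(size=(10, 10))
A = M.T @ M + np.eye(10)                  # SPD Hessian
b = rng.normal(size=10)

f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x = np.zeros(10)
g = grad(x)
d = -g                                     # start with steepest descent
for _ in range(500):
    if np.linalg.norm(g) < 1e-8:
        break
    t = 1.0                                # Armijo backtracking line search
    while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
        t *= 0.5
    x_new = x + t * d
    g_new = grad(x_new)
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ conjugacy rule
    d = -g_new + beta * d
    if g_new @ d >= 0:                     # safeguard: restart if not descent
        d = -g_new
    x, g = x_new, g_new

print(np.linalg.norm(grad(x)))             # near zero at the minimiser
```

The three-term methods of the record add an extra correction term to the direction update precisely so that the descent safeguard is never needed, independently of the line search.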

  10. Statistical analysis of global surface temperature and sea level using cointegration methods

    Schmidt, Torben; Johansen, Søren; Thejll, Peter

    2012-01-01

    Global sea levels are rising which is widely understood as a consequence of thermal expansion and melting of glaciers and land-based ice caps. Due to the lack of representation of ice-sheet dynamics in present-day physically-based climate models being unable to simulate observed sea level trends......, semi-empirical models have been applied as an alternative for projecting of future sea levels. There is in this, however, potential pitfalls due to the trending nature of the time series. We apply a statistical method called cointegration analysis to observed global sea level and land-ocean surface air...... temperature, capable of handling such peculiarities. We find a relationship between sea level and temperature and find that temperature causally depends on the sea level, which can be understood as a consequence of the large heat capacity of the ocean. We further find that the warming episode in the 1940s...
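A hedged Engle-Granger-style sketch conveys the two-step idea behind cointegration analysis (this is illustrative, not the authors' full procedure): two trending series share a common stochastic trend if the residual of a levels regression mean-reverts while each series itself behaves like a unit root:

```python
import numpy as np

# Engle-Granger-style sketch on synthetic data: x is a random walk (unit
# root), y is cointegrated with x by construction, so the residual of the
# levels regression mean-reverts even though both series trend.

rng = np.random.default_rng(2)
n = 2000
x = np.cumsum(rng.normal(size=n))        # random walk: "temperature"-like series
y = 2.0 * x + rng.normal(size=n)         # cointegrated with x by construction

slope, intercept = np.polyfit(x, y, 1)   # step 1: levels regression
resid = y - (slope * x + intercept)

def ar1(u):                              # lag-1 autoregression coefficient
    return float(np.linalg.lstsq(u[:-1, None], u[1:], rcond=None)[0][0])

rho_x = ar1(x)                           # near 1: x itself has a unit root
rho_resid = ar1(resid)                   # well below 1: residual mean-reverts
print(slope, rho_x, rho_resid)
```

Regressing two *non*-cointegrated random walks on each other instead gives residuals whose AR(1) coefficient stays near 1 — the spurious-regression pitfall the record warns about.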

  11. Statistical analysis of global surface air temperature and sea level using cointegration methods

    Schmith, Torben; Johansen, Søren; Thejll, Peter

    Global sea levels are rising which is widely understood as a consequence of thermal expansion and melting of glaciers and land-based ice caps. Due to physically-based models being unable to simulate observed sea level trends, semi-empirical models have been applied as an alternative for projecting...... of future sea levels. There is in this, however, potential pitfalls due to the trending nature of the time series. We apply a statistical method called cointegration analysis to observed global sea level and surface air temperature, capable of handling such peculiarities. We find a relationship between sea...... level and temperature and find that temperature causally depends on the sea level, which can be understood as a consequence of the large heat capacity of the ocean. We further find that the warming episode in the 1940s is exceptional in the sense that sea level and warming deviates from the expected...

  12. Global river flood hazard maps: hydraulic modelling methods and appropriate uses

    Townend, Samuel; Smith, Helen; Molloy, James

    2014-05-01

    Flood hazard is not well understood or documented in many parts of the world. Consequently, the (re-)insurance sector now needs to better understand where the potential for considerable river flooding aligns with significant exposure. For example, international manufacturing companies are often attracted to countries with emerging economies, meaning that events such as the 2011 Thailand floods have resulted in many multinational businesses with assets in these regions incurring large, unexpected losses. This contribution addresses and critically evaluates the hydraulic methods employed to develop a consistent global scale set of river flood hazard maps, used to fill the knowledge gap outlined above. The basis of the modelling approach is an innovative, bespoke 1D/2D hydraulic model (RFlow) which has been used to model a global river network of over 5.3 million kilometres. Estimated flood peaks at each of these model nodes are determined using an empirically based rainfall-runoff approach linking design rainfall to design river flood magnitudes. The hydraulic model is used to determine extents and depths of floodplain inundation following river bank overflow. From this, deterministic flood hazard maps are calculated for several design return periods between 20-years and 1,500-years. Firstly, we will discuss the rationale behind the appropriate hydraulic modelling methods and inputs chosen to produce a consistent global scaled river flood hazard map. This will highlight how a model designed to work with global datasets can be more favourable for hydraulic modelling at the global scale and why using innovative techniques customised for broad scale use are preferable to modifying existing hydraulic models. Similarly, the advantages and disadvantages of both 1D and 2D modelling will be explored and balanced against the time, computer and human resources available, particularly when using a Digital Surface Model at 30m resolution. Finally, we will suggest some

  13. Comparison of global sensitivity analysis methods – Application to fuel behavior modeling

    Ikonen, Timo, E-mail: timo.ikonen@vtt.fi

    2016-02-15

    Highlights: • Several global sensitivity analysis methods are compared. • The methods’ applicability to nuclear fuel performance simulations is assessed. • The implications of large input uncertainties and complex models are discussed. • Alternative strategies to perform sensitivity analyses are proposed. - Abstract: Fuel performance codes have two characteristics that make their sensitivity analysis challenging: large uncertainties in input parameters and complex, non-linear and non-additive structure of the models. The complex structure of the code leads to interactions between inputs that show as cross terms in the sensitivity analysis. Due to the large uncertainties of the inputs these interactions are significant, sometimes even dominating the sensitivity analysis. For the same reason, standard linearization techniques do not usually perform well in the analysis of fuel performance codes. More sophisticated methods are typically needed in the analysis. To this end, we compare the performance of several sensitivity analysis methods in the analysis of a steady state FRAPCON simulation. The comparison of importance rankings obtained with the various methods shows that even the simplest methods can be sufficient for the analysis of fuel maximum temperature. However, the analysis of the gap conductance requires more powerful methods that take into account the interactions of the inputs. In some cases, moment-independent methods are needed. We also investigate the computational cost of the various methods and present recommendations as to which methods to use in the analysis.
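One widely used variance-based method of the kind compared in the record is the Saltelli Monte Carlo estimator of first-order Sobol' indices. A sketch on a toy additive model with known answers (f = x1 + 2·x2 with uniform inputs gives S1 = 0.2, S2 = 0.8) — illustrative only, not the FRAPCON analysis:

```python
import numpy as np

# Saltelli-style Monte Carlo estimator of first-order Sobol' indices on a
# toy additive model with known answers: f = x1 + 2*x2, x ~ U(0,1), so
# Var = 1/12 + 4/12 and S1 = 0.2, S2 = 0.8.

def f(x):
    return x[:, 0] + 2.0 * x[:, 1]

rng = np.random.default_rng(0)
N, d = 50_000, 2
A = rng.uniform(size=(N, d))              # two independent sample matrices
B = rng.uniform(size=(N, d))
fA, fB = f(A), f(B)
var = fA.var()

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                   # resample only coordinate i
    S.append(np.mean(fB * (f(ABi) - fA)) / var)

print(S)                                   # approximately [0.2, 0.8]
```

For a model with interactions the first-order indices no longer sum to one, and the gap quantifies exactly the cross terms the record says dominate fuel performance codes.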

  14. A global optimization method for evaporative cooling systems based on the entransy theory

    Yuan, Fang; Chen, Qun

    2012-01-01

The evaporative cooling technique, one of the most widely used methods, is essential to both energy conservation and environmental protection. This contribution introduces a global optimization method for indirect evaporative cooling systems with coupled heat and mass transfer processes, based on the entransy theory, to improve their energy efficiency. First, we classify the irreversible processes in the system into the heat transfer process, the coupled heat and mass transfer process and the mixing process of waters in different branches, where the irreversibility is evaluated by the entransy dissipation. Then, through the total system entransy dissipation, we establish the theoretical relationship of the user demands with both the geometrical structures of each heat exchanger and the operating parameters of each fluid, and derive two optimization equation groups focusing on two typical optimization problems. Finally, an indirect evaporative cooling system is taken as an example to illustrate the applications of the newly proposed optimization method. It is concluded that there exists an optimal circulating water flow rate with the minimum total thermal conductance of the system. Furthermore, with different user demands and moist air inlet conditions, it is global optimization, rather than parametric analysis, that obtains the optimal performance of the system. -- Highlights: ► Introduce a global optimization method for evaporative cooling systems. ► Establish the direct relation between user demands and the design parameters. ► Obtain two groups of optimization equations for two typical optimization objectives. ► Solving the equations offers the optimal design parameters for the system. ► Provide the instruction for the design of coupled heat and mass transfer systems.

  15. Thermal performance monitoring and optimisation

Sunde, Svein; Berg, Oeyvind

    1998-01-01

Monitoring of the thermal efficiency of nuclear power plants is expected to become increasingly important as energy-market liberalisation exposes plants to increasing availability requirements and fiercer competition. The general goal in thermal performance monitoring is straightforward: to maximise the ratio of profit to cost under the constraints of safe operation. One may perceive this goal to be pursued in two ways, one oriented towards fault detection and cost-optimal predictive maintenance, and another aimed at optimising target values of parameters in response to any component degradation detected, changes in ambient conditions, or the like. Annual savings associated with effective thermal-performance monitoring are expected to be on the order of $ 100 000 for power plants of representative size. A literature review shows that a number of computer systems for thermal-performance monitoring exist, either as prototypes or commercially available. The characteristics and needs of power plants may vary widely, however, and decisions concerning the exact scope, content and configuration of a thermal-performance monitor may well follow a heuristic approach. Furthermore, re-use of existing software modules may be desirable. Therefore, we suggest here the design of a flexible workbench for easy assembly of an experimental thermal-performance monitor at the Halden Project. The suggested design draws heavily on our extended experience in implementing control-room systems characterised by high levels of customisation, flexibility in configuration and modularity in structure, and on a number of relevant adjoining activities. The design includes a multi-computer communication system and a graphical user interface, and aims at a system adaptable to any combination of in-house or end-user modules, as well as commercially available software. (author)

  16. From individual innovation to global impact: the Global Cooperation on Assistive Technology (GATE) innovation snapshot as a method for sharing and scaling.

    Layton, Natasha; Murphy, Caitlin; Bell, Diane

    2018-05-09

    Assistive technology (AT) is an essential facilitator of independence and participation, both for people living with the effects of disability and/or non-communicable disease, as well as people aging with resultant functional decline. The World Health Organization (WHO) recognizes the substantial gap between the need for and provision of AT and is leading change through the Global Cooperation on Assistive Technology (GATE) initiative. Showcasing innovations gathered from 92 global researchers, innovators, users and educators of AT through the WHO GREAT Summit, this article provides an analysis of ideas and actions on a range of dimensions in order to provide a global overview of AT innovation. The accessible method used to capture and showcase this data is presented and critiqued, concluding that "innovation snapshots" are a rapid and concise strategy to capture and showcase AT innovation and to foster global collaboration. Implications for Rehabilitation Focal tools such as ePosters with uniform data requirements enable the rapid sharing of information. A diversity of innovative practices are occurring globally in the areas of AT Products, Policy, Provision, People and Personnel. The method offered for Innovation Snapshots had substantial uptake and is a feasible means to capture data across a range of stakeholders. Meeting accessibility criteria is an emerging competency in the AT community. Substantial areas of common interest exist across regions and globally in the AT community, demonstrating the effectiveness of information sharing platforms such as GATE and supporting the idea of regional forums and networks.

  17. A New Filled Function Method with One Parameter for Global Optimization

    Fei Wei

    2013-01-01

The filled function method is an effective approach to find the global minimizer of multidimensional multimodal functions. Conventional filled functions are numerically unstable due to exponential or logarithmic terms and are sensitive to parameters. In this paper, a new filled function with only one parameter is proposed, which is continuously differentiable and proved to satisfy all conditions of the filled function definition. Moreover, this filled function is not sensitive to its parameter, and overflow cannot occur for this function. Based on these properties, a new filled function method is proposed which is numerically stable with respect to the initial point and the parameter value. The computer simulations indicate that the proposed filled function method is efficient and effective.

  18. A frequency domain global parameter estimation method for multiple reference frequency response measurements

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.
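The reason for the Forsythe orthogonal-polynomial step — reducing the ill-conditioning of the normal equations — can be illustrated numerically: on the same sample points, a degree-15 power-basis (Vandermonde) design matrix is vastly worse conditioned than its Chebyshev-basis counterpart (Chebyshev stands in here for the data-adapted Forsythe polynomials):

```python
import numpy as np

# Why orthogonal polynomial bases are used in parameter estimation: on the
# same points, the monomial (power-basis) design matrix is far worse
# conditioned than an orthogonal-polynomial one. Chebyshev polynomials
# stand in here for the data-adapted Forsythe polynomials of the record.

x = np.linspace(-1.0, 1.0, 200)
deg = 15

V_power = np.vander(x, deg + 1)                       # monomial basis 1, x, ..., x^15
V_cheb = np.polynomial.chebyshev.chebvander(x, deg)   # Chebyshev basis T_0..T_15

c_power = np.linalg.cond(V_power)
c_cheb = np.linalg.cond(V_cheb)
print(c_power, c_cheb)    # power basis is orders of magnitude worse
```

In a least-squares fit the normal matrix inherits the *square* of these condition numbers, which is why high-order rational-polynomial fits to frequency response functions are impractical in the raw monomial basis.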

19. Testing a statistical method of global mean paleotemperature estimation in a long climate simulation

    Zorita, E.; Gonzalez-Rouco, F. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik

    2001-07-01

Current statistical methods of reconstructing the climate of the last centuries are based on statistical models linking climate observations (temperature, sea-level pressure) and proxy-climate data (tree-ring chronologies, ice-core isotope concentrations, varved sediments, etc.). These models are calibrated in the instrumental period, and the longer time series of proxy data are then used to estimate the past evolution of the climate variables. Using such methods, the global mean temperature of the last 600 years has recently been estimated. In this work this method of reconstruction is tested using data from a very long simulation with a climate model. This testing allows the errors of the estimates to be quantified as a function of the number of proxy records and of the time scale at which the estimates are probably reliable. (orig.)

  20. Mixed multiscale finite element methods using approximate global information based on partial upscaling

    Jiang, Lijian; Efendiev, Yalchin; Mishev, IIya

    2009-01-01

    The use of limited global information in multiscale simulations is needed when there is no scale separation. Previous approaches entail fine-scale simulations in the computation of the global information. The computation of the global information

  1. Creating a spatially-explicit index: a method for assessing the global wildfire-water risk

    Robinne, François-Nicolas; Parisien, Marc-André; Flannigan, Mike; Miller, Carol; Bladon, Kevin D.

    2017-04-01

    The wildfire-water risk (WWR) has been defined as the potential for wildfires to adversely affect water resources that are important for downstream ecosystems and human water needs for adequate water quantity and quality, therefore compromising the security of their water supply. While tools and methods are numerous for watershed-scale risk analysis, the development of a toolbox for the large-scale evaluation of the wildfire risk to water security has only started recently. In order to provide managers and policy-makers with an adequate tool, we implemented a method for the spatial analysis of the global WWR based on the Driving forces-Pressures-States-Impacts-Responses (DPSIR) framework. This framework relies on the cause-and-effect relationships existing between the five categories of the DPSIR chain. As this approach heavily relies on data, we gathered an extensive set of spatial indicators relevant to fire-induced hydrological hazards and water consumption patterns by human and natural communities. When appropriate, we applied a hydrological routing function to our indicators in order to simulate downstream accumulation of potentially harmful material. Each indicator was then assigned a DPSIR category. We collapsed the information in each category using a principal component analysis in order to extract the most relevant pixel-based information provided by each spatial indicator. Finally, we compiled our five categories using an additive indexation process to produce a spatially-explicit index of the WWR. A thorough sensitivity analysis has been performed in order to understand the relationship between the final risk values and the spatial pattern of each category used during the indexation. For comparison purposes, we aggregated index scores by global hydrological regions, or hydrobelts, to get a sense of regional DPSIR specificities. This rather simple method does not necessitate the use of complex physical models and provides a scalable and efficient tool
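The PCA collapse-and-index step described above can be sketched generically (synthetic data, hypothetical indicator counts — not the authors' DPSIR dataset): standardise the indicators of one category, score each cell by the first principal component, and note how much variance that component explains:

```python
import numpy as np

# Sketch of the PCA indexation step: standardise the indicators of one
# category, extract the first principal component via the SVD, and use it
# as the category score per map cell. All data are synthetic.

rng = np.random.default_rng(3)
n_cells, n_ind = 500, 4
latent = rng.normal(size=n_cells)                  # shared "hazard" signal
X = np.column_stack([latent + 0.3 * rng.normal(size=n_cells)
                     for _ in range(n_ind)])       # correlated indicators

Z = (X - X.mean(axis=0)) / X.std(axis=0)           # standardise columns
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)                    # variance share per PC
pc1_score = Z @ Vt[0]                              # category score per cell

print(explained, pc1_score[:3])   # PC1 dominates: indicators share one signal
```

Summing such per-category scores, as in the additive indexation of the record, then yields the composite risk index.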

  2. Landslide susceptibility mapping on a global scale using the method of logistic regression

    L. Lin

    2017-08-01

    This paper proposes a statistical model for mapping global landslide susceptibility based on logistic regression. After investigating explanatory factors for landslides in the existing literature, five factors were selected for modelling landslide susceptibility: relative relief, extreme precipitation, lithology, ground motion and soil moisture. When building the model, 70 % of landslide and non-landslide points were randomly selected for logistic regression, and the remainder were used for model validation. To evaluate the accuracy of the predictive models, this paper adopts several criteria, including the receiver operating characteristic (ROC) curve method. Logistic regression experiments found all five factors to be significant in explaining landslide occurrence on a global scale. During the modelling process, the percentage correct in the confusion matrix of landslide classification was approximately 80 % and the area under the curve (AUC) was nearly 0.87. During the validation process, the corresponding statistics were about 81 % and 0.88, respectively. Such a result indicates that the model is robust and performs stably. This model found that, at a global scale, soil moisture can be dominant in the occurrence of landslides, while the topographic factor may be secondary.
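The modelling-and-validation loop described here can be sketched compactly. The following stand-alone example uses synthetic one-factor data, a hand-rolled gradient-descent fit, and a rank-based AUC; it is not the authors' code or data, only an illustration of the 70/30 split and ROC-style evaluation:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit w, b by gradient descent on the log-loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y
            gw += err * x / n
            gb += err / n
        w -= lr * gw
        b -= lr * gb
    return w, b

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
# One synthetic explanatory factor: landslide points (y=1) cluster high.
xs = [random.gauss(2.0, 1.0) for _ in range(100)] + \
     [random.gauss(-2.0, 1.0) for _ in range(100)]
ys = [1] * 100 + [0] * 100

# 70/30 split for fitting and validation, as in the study.
idx = list(range(len(xs)))
random.shuffle(idx)
split = int(0.7 * len(xs))
train, test = idx[:split], idx[split:]
w, b = fit_logistic([xs[i] for i in train], [ys[i] for i in train])
scores = [sigmoid(w * xs[i] + b) for i in test]
test_auc = auc(scores, [ys[i] for i in test])
```

With well-separated synthetic classes the validation AUC lands close to 1; on real, noisier landslide inventories an AUC near 0.88, as reported above, is a strong result.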

  3. Another method for a global fit of the Cabibbo-Kobayashi-Maskawa matrix

    Dita, Petre

    2005-01-01

    Recently we proposed a novel method for performing global fits on the entries of the Cabibbo-Kobayashi-Maskawa matrix. The new ingredients were a clear relationship between the entries of the CKM matrix and the experimental data, as well as the use of the necessary and sufficient condition the data have to satisfy in order to find a unitary matrix compatible with them. This condition reads -1 ≤ cos δ ≤ 1, where δ is the phase that accounts for CP violation. Numerical results are provided for the CKM matrix entries, the mixing angles between generations and all the angles of the standard unitarity triangle. (author)

  4. A global carbon assimilation system based on a dual optimization method

    Zheng, H.; Li, Y.; Chen, J. M.; Wang, T.; Huang, Q.; Huang, W. X.; Wang, L. H.; Li, S. M.; Yuan, W. P.; Zheng, X.; Zhang, S. P.; Chen, Z. Q.; Jiang, F.

    2015-02-01

    Ecological models are effective tools for simulating the distribution of global carbon sources and sinks. However, these models often suffer from substantial biases due to inaccurate simulations of complex ecological processes. We introduce a set of scaling factors (parameters) to an ecological model on the basis of plant functional type (PFT) and latitudes. A global carbon assimilation system (GCAS-DOM) is developed by employing a dual optimization method (DOM) to invert the time-dependent ecological model parameter state and the net carbon flux state simultaneously. We use GCAS-DOM to estimate the global distribution of the CO2 flux on 1° × 1° grid cells for the period from 2001 to 2007. Results show that land and ocean absorb -3.63 ± 0.50 and -1.82 ± 0.16 Pg C yr-1, respectively. North America, Europe and China contribute -0.98 ± 0.15, -0.42 ± 0.08 and -0.20 ± 0.29 Pg C yr-1, respectively. The uncertainties in the flux after optimization by GCAS-DOM have been remarkably reduced by more than 60%. Through parameter optimization, GCAS-DOM can provide improved estimates of the carbon flux for each PFT. Coniferous forest (-0.97 ± 0.27 Pg C yr-1) is the largest contributor to the global carbon sink. Fluxes of once-dominant deciduous forest generated by the Boreal Ecosystems Productivity Simulator (BEPS) are reduced to -0.78 ± 0.23 Pg C yr-1, the third largest carbon sink.

  5. Module detection in complex networks using integer optimisation

    Tsoka Sophia

    2010-11-01

    Background: The detection of modules or community structure is widely used to reveal the underlying properties of complex networks in biology, as well as in the physical and social sciences. Since the adoption of modularity as a measure of network topological properties, several methodologies for the discovery of community structure based on modularity maximisation have been developed. However, satisfactory partitions of large graphs with modest computational resources are particularly challenging due to the NP-hard nature of the related optimisation problem. Furthermore, it has been suggested that optimising the modularity metric can reach a resolution limit whereby the algorithm fails to detect communities smaller than a specific size in large networks. Results: We present a novel solution approach to identify community structure in large complex networks and address resolution limitations in module detection. The proposed algorithm employs modularity to express network community structure and is based on mixed integer optimisation models. The solution procedure is extended through an iterative procedure to diminish effects that tend to agglomerate smaller modules (resolution limitations). Conclusions: A comprehensive comparative analysis of methodologies for module detection based on modularity maximisation shows that our approach outperforms previously reported methods. Furthermore, in contrast to previous reports, we propose a strategy to handle resolution limitations in modularity maximisation. Overall, we illustrate ways to improve existing methodologies for community structure identification so as to increase their efficiency and applicability.
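For readers unfamiliar with the metric being maximised, the Newman-Girvan modularity Q of a given partition can be computed directly. The sketch below uses a toy graph of two triangles joined by a bridge edge (not an example from the paper); the natural two-community split scores higher than lumping all nodes together:

```python
# Modularity Q = sum over communities c of (e_c / m - (d_c / 2m)^2),
# where m is the edge count, e_c the number of intra-community edges,
# and d_c the total degree of community c. Toy graph, not from the paper.

def modularity(edges, partition):
    m = len(edges)
    community = {}
    for c, nodes in enumerate(partition):
        for v in nodes:
            community[v] = c
    e = [0] * len(partition)   # intra-community edge counts
    d = [0] * len(partition)   # total degree per community
    for u, v in edges:
        d[community[u]] += 1
        d[community[v]] += 1
        if community[u] == community[v]:
            e[community[u]] += 1
    return sum(e[c] / m - (d[c] / (2 * m)) ** 2
               for c in range(len(partition)))

# Two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = modularity(edges, [{0, 1, 2}, {3, 4, 5}])   # natural split
poor = modularity(edges, [{0, 1, 2, 3, 4, 5}])     # everything together
```

The mixed-integer models referenced above search over all partitions for the one maximising Q, which is what makes the exact problem NP-hard.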

  6. Statistical optimisation techniques in fatigue signal editing problem

    Nopiah, Z. M.; Osman, M. H. [Fundamental Engineering Studies Unit, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 UKM (Malaysia)]; Baharin, N.; Abdullah, S. [Department of Mechanical and Materials Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 UKM (Malaysia)]

    2015-01-01

    Success in fatigue signal editing is determined by the level of length reduction achieved without compromising statistical constraints. A large reduction rate can be achieved by removing small-amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting, which has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single-constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable-amplitude strain data into different segments, whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single-objective optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelled segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within an optimisation framework is effective and automatic, and that the GA is robust for constrained segment selection.
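The constrained selection idea can be illustrated with a stripped-down GA. The sketch below collapses the paper's three signal-property constraints into a single cumulative-damage constraint, uses invented segment data, and seeds the population with the unedited signal so a feasible solution always exists; it is a toy, not the authors' algorithm:

```python
import random

random.seed(1)
N = 20                                    # number of labelled segments
lengths = [random.randint(50, 200) for _ in range(N)]
damages = [random.random() for _ in range(N)]
total_damage = sum(damages)
DAMAGE_FLOOR = 0.95 * total_damage        # retain at least 95% of damage

def fitness(bits):
    """Edited-signal length to minimise, with a graded penalty that
    makes any damage deficit dwarf the possible length savings."""
    length = sum(l for l, b in zip(lengths, bits) if b)
    damage = sum(d for d, b in zip(damages, bits) if b)
    return length + 1e9 * max(0.0, DAMAGE_FLOOR - damage)

def evolve(pop_size=40, generations=200):
    # Seed with the unedited signal (all segments kept) for feasibility.
    pop = [[1] * N] + [[random.randint(0, 1) for _ in range(N)]
                       for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)
            child = a[:cut] + b[cut:]           # one-point crossover
            child[random.randrange(N)] ^= 1     # single-bit mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
best_length = sum(l for l, b in zip(lengths, best) if b)
best_damage = sum(d for d, b in zip(damages, best) if b)
```

Elitism guarantees the best fitness never worsens, so the final selection is never longer than the unedited signal while still honouring the damage-retention constraint.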

  8. Optimisation of battery operating life considering software tasks and their timing behaviour

    Lipskoch, Henrik

    2010-02-19

    optimisation variable, either global, for all tasks, or local, individually per task. It is shown that a global slowdown factor can be computed from the linear constraints together with a linear objective in a linear program, whereas local slowdown factors are shown to require a non-linear but still convex objective. Processor shutdown switches off the processor for a specific time; it blocks the processor from processing software. Duration and occurrence become optimisation parameters, and the linearised real-time feasibility test is used to decide whether a parameter setting is feasible or violates deadlines. In this thesis, two shutdown policies are introduced. Periodic shutdown switches off the processor periodically, thus, besides duration, the period is to be optimised. Task-dependent shutdown switches off the processor with an inherited occurrence behaviour from a certain task, thus, besides duration, the parent task needs to be chosen, as well as the number of instances of the parent task to fall between two consecutive shutdowns. Each of the presented methods for saving energy yields a different system configuration, out of which the best with respect to operating life is chosen with the help of a battery model. Discharge profiles serve as a coupling between information in the notion of event streams, task power consumptions, and the battery model. A battery model evaluation with measurements is presented in this thesis. [Translated from German:] Users of mobile embedded systems have an interest in a long battery operating life. The longer a system can operate without recharging or a battery change, the lower the maintenance costs and the fewer the outages caused by lack of energy. Energy-saving methods help to extend the operating life, but can reduce the available computing time. The systems may be subject to timing requirements, for instance to ensure reliable communication. Thus the use of
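As a hedged illustration of the global-slowdown computation mentioned above: for the special case of independent periodic tasks with implicit deadlines under EDF scheduling, the linear program collapses to a closed form, because feasibility reduces to the utilisation test. The task set below is invented; the thesis itself handles the general case via linear programming:

```python
# Running the processor at speed s (0 < s <= 1) stretches each worst-case
# execution time C to C / s, so the EDF feasibility test
#     sum((C / s) / T) <= 1
# gives s >= sum(C / T): the lowest feasible speed equals the
# full-speed utilisation.

def min_global_speed(tasks):
    """tasks: list of (C, T) worst-case execution time / period pairs."""
    utilisation = sum(c / t for c, t in tasks)
    if utilisation > 1.0:
        raise ValueError("task set infeasible even at full speed")
    return utilisation

tasks = [(1.0, 10.0), (2.0, 20.0), (3.0, 30.0)]   # U = 0.3
speed = min_global_speed(tasks)                    # run at 30% speed
```

Since dynamic power typically grows super-linearly with speed, running at the minimal feasible speed is exactly the energy-saving lever the thesis trades off against deadlines.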

  10. Shape optimisation and performance analysis of flapping wings

    Ghommem, Mehdi

    2012-09-04

    In this paper, shape optimisation of flapping wings in forward flight is considered. This analysis is performed by combining a local gradient-based optimizer with the unsteady vortex lattice method (UVLM). Although the UVLM applies only to incompressible, inviscid flows where the separation lines are known a priori, Persson et al. [1] showed through a detailed comparison between UVLM and higher-fidelity computational fluid dynamics methods for flapping flight that the UVLM schemes produce accurate results for attached flow cases and even remain trend-relevant in the presence of flow separation. As such, they recommended the use of an aerodynamic model based on UVLM to perform preliminary design studies of flapping wing vehicles. Unlike standard computational fluid dynamics schemes, this method requires meshing of the wing surface only and not of the whole flow domain [2]. From the design or optimisation perspective taken in our work, it is fairly common (and sometimes entirely necessary, as a result of the excessive computational cost of the highest fidelity tools such as Navier-Stokes solvers) to rely upon such a moderate level of modelling fidelity to traverse the design space in an economical manner. The objective of the work, described in this paper, is to identify a set of optimised shapes that maximise the propulsive efficiency, defined as the ratio of the propulsive power over the aerodynamic power, under lift, thrust, and area constraints. The shape of the wings is modelled using B-splines, a technology used in the computer-aided design (CAD) field for decades. This basis can be used to smoothly discretize wing shapes with few degrees of freedom, referred to as control points. The locations of the control points constitute the design variables. The results suggest that changing the shape yields significant improvement in the performance of the flapping wings. The optimisation pushes the design to "bird-like" shapes with substantial increase in the time

  11. Process and Economic Optimisation of a Milk Processing Plant with Solar Thermal Energy

    Bühler, Fabian; Nguyen, Tuong-Van; Elmegaard, Brian

    2016-01-01

    This work investigates the integration of solar thermal systems for process energy use. A shift from fossil fuels to renewable energy could be beneficial both from environmental and economic perspectives, after the process itself has been optimised and efficiency measures have been implemented. Based on the case study of a dairy factory, where first a heat integration is performed to optimise the system, a model for solar thermal process integration is developed. The detailed model is based on annual hourly global direct and diffuse solar radiation, from which the radiation on a defined surface is calculated. Based on hourly process stream data from the dairy factory, the optimal streams for solar thermal process integration are found, together with an optimal thermal storage tank volume. The last step consists of an economic optimisation of the problem to determine the optimal size...

  12. GOSSIP: a method for fast and accurate global alignment of protein structures.

    Kifer, I; Nussinov, R; Wolfson, H J

    2011-04-01

    The database of known protein structures (PDB) is increasing rapidly. This results in a growing need for methods that can cope with the vast amount of structural data. To analyze the accumulating data, it is important to have a fast tool for identifying similar structures and clustering them by structural resemblance. Several excellent tools have been developed for the comparison of protein structures. These usually address the task of local structure alignment, an important yet computationally intensive problem due to its complexity. It is difficult to use such tools for comparing a large number of structures to each other in a reasonable time. Here we present GOSSIP, a novel method for a global all-against-all alignment of any set of protein structures. The method detects similarities between structures down to a certain cutoff (a parameter of the program), hence allowing it to detect similar structures at a much higher speed than local structure alignment methods. GOSSIP compares many structures several orders of magnitude faster than well-known available structure alignment servers, and it is also faster than a database scanning method. We evaluate GOSSIP both on a dataset of short structural fragments and on two large sequence-diverse structural benchmarks. Our conclusions are that for a threshold of 0.6 and above, the speed of GOSSIP is obtained with no compromise of the accuracy of the alignments or of the number of detected global similarities. A server, as well as an executable for download, are available at http://bioinfo3d.cs.tau.ac.il/gossip/.

  13. An efficient computational method for global sensitivity analysis and its application to tree growth modelling

    Wu, Qiong-Li; Cournède, Paul-Henry; Mathieu, Amélie

    2012-01-01

    Global sensitivity analysis has a key role to play in the design and parameterisation of functional–structural plant growth models which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). We are particularly interested in this study in Sobol's method, which decomposes the variance of the output of interest into terms due to individual parameters but also to interactions between parameters. Such information is crucial for systems with potentially high levels of non-linearity and interactions between processes, like plant growth. However, the computation of Sobol's indices relies on Monte Carlo sampling and re-sampling, whose costs can be very high, especially when model evaluation is also expensive, as for tree models. In this paper, we thus propose a new method to compute Sobol's indices inspired by Homma–Saltelli, which slightly improves their use of model evaluations, and then derive, for this generic type of computational method, an estimator of the error of the sensitivity indices with respect to the sampling size. It allows detailed control of the balance between accuracy and computing time. Numerical tests on a simple non-linear model are convincing and the method is finally applied to a functional–structural model of tree growth, GreenLab, whose particularity is the strong level of interaction between plant functioning and organogenesis.
    Highlights:
    ► We study global sensitivity analysis in the context of functional–structural plant modelling.
    ► A new estimator based on the Homma–Saltelli method is proposed to compute Sobol indices, based on a more balanced re-sampling strategy.
    ► The estimation accuracy of sensitivity indices for a class of Sobol's estimators can be controlled by error analysis.
    ► The proposed algorithm is implemented efficiently to compute Sobol indices for a complex tree growth model.
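A baseline pick-and-freeze Monte Carlo estimator of first-order Sobol indices (the kind of estimator the paper improves upon, not the proposed method itself) can be sketched as follows. The test function f(x1, x2) = 4·x1 + 2·x2 with uniform inputs has known indices S1 = 0.8 and S2 = 0.2:

```python
import random

def sobol_first_order(f, dim, n, seed=0):
    """Saltelli-style first-order Sobol indices for f on [0, 1]^dim.

    Uses two independent sample matrices A and B; for each input i,
    A_B^i is A with column i replaced by the corresponding column of B.
    """
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(a) for a in A]
    fB = [f(b) for b in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(dim):
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        s_i = sum(yb * (yabi - ya)
                  for yb, yabi, ya in zip(fB, fABi, fA)) / n / var
        indices.append(s_i)
    return indices

def f(x):
    # Linear model with analytic indices S1 = 16/20 = 0.8, S2 = 4/20 = 0.2.
    return 4.0 * x[0] + 2.0 * x[1]

s1, s2 = sobol_first_order(f, dim=2, n=50000)
```

The cost of (dim + 2)·n model evaluations is exactly what makes such estimators expensive for tree growth models and motivates the paper's more balanced re-sampling strategy.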

  14. Optimising India's small hydro resources

    Kumar, A.

    1995-01-01

    A brief history is given of an initiation to develop small scale hydropower projects in the Himalayas. The experience of the Indian project managers in utilising international funds from the Global Environment Facility could serve as a model for other small remote communities in the rest of the world. Lessons learned are reported. (UK)

  15. Optimisation of Investment Resources at Small Enterprises

    Shvets Iryna B.

    2014-03-01

    The goal of the article is to study the process of optimising the structure of investment resources and to develop criteria and stages for optimising the volumes of investment resources of small enterprises by type of economic activity. The article characterises the process of transformation of investment resources into assets and liabilities on the balance sheets of small enterprises, and calculates the structure of the sources of formation of investment resources at small enterprises in Ukraine by type of economic activity in 2011. On the basis of this analysis of the structure of investment resources of small enterprises, the article forms the main groups of optimisation criteria in the context of individual small enterprises by type of economic activity. The article offers an algorithm and a step-by-step scheme for optimising investment resources at small enterprises, in the form of a multi-stage process of managing investment resources so as to increase their mobility and the rate of transformation of existing resources into investments. A prospect for further study in this direction is the development of a structural and logical scheme for optimising the volumes of investment resources at small enterprises.

  16. Pre-segmented 2-Step IMRT with subsequent direct machine parameter optimisation – a planning study

    Bratengeier, Klaus; Meyer, Jürgen; Flentje, Michael

    2008-01-01

    Modern intensity modulated radiotherapy (IMRT) mostly uses iterative optimisation methods. The integration of machine parameters into the optimisation process of step-and-shoot leaf positions has been shown to be successful. For IMRT segmentation algorithms based on the analysis of the geometrical structure of the planning target volumes (PTV) and the organs at risk (OAR), the potential of such procedures has not yet been fully explored. In this work, 2-Step IMRT was combined with subsequent direct machine parameter optimisation (DMPO; RaySearch Laboratories, Sweden) to investigate this potential. In a planning study, DMPO on a commercial planning system was compared with manual primary 2-Step IMRT segment generation followed by DMPO optimisation. 15 clinical cases and the ESTRO Quasimodo phantom were employed. Both the same number of optimisation steps and the same set of objective values were used. The plans were compared with a clinical DMPO reference plan and a traditional IMRT plan based on fluence optimisation and subsequent segmentation. The composite objective value (the weighted sum of quadratic deviations of the objective values and the related points in the dose volume histogram) was used as a measure of plan quality. Additionally, a more extended set of parameters was used for the breast cases to compare the plans. The plans with segments pre-defined with 2-Step IMRT were slightly superior to DMPO alone in the majority of cases. The composite objective value tended to be even lower for a smaller number of segments. The total number of monitor units was slightly higher than for the DMPO plans. Traditional IMRT fluence optimisation with subsequent segmentation could not compete. 2-Step IMRT segmentation is suitable as a starting point for further DMPO optimisation and, in general, results in less complex plans which are equal or superior to plans generated by DMPO alone.
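The plan-quality measure used above is easy to state in code. The sketch below uses a two-sided quadratic penalty with invented dose objectives and weights; clinical systems typically penalise only violations of one-sided objectives, so treat this as a simplified illustration of the idea, not the planning system's actual formula:

```python
# Composite objective value: the weighted sum of squared deviations
# between dose objectives and the corresponding achieved dose-volume
# histogram points. All numbers are invented for illustration.

def composite_objective(objectives):
    """objectives: list of (weight, target_dose, achieved_dose) tuples."""
    return sum(w * (achieved - target) ** 2
               for w, target, achieved in objectives)

plan = [
    (1.0, 60.0, 59.2),   # hypothetical PTV dose objective (Gy)
    (0.5, 20.0, 22.5),   # hypothetical OAR dose objective (Gy)
]
cov = composite_objective(plan)
```

A lower composite objective value means the achieved dose distribution sits closer to its objectives, which is why the study uses it to rank plans.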

  17. Optimisation of NMR dynamic models II. A new methodology for the dual optimisation of the model-free parameters and the Brownian rotational diffusion tensor

    D'Auvergne, Edward J.; Gooley, Paul R.

    2008-01-01

    Finding the dynamics of an entire macromolecule is a complex problem as the model-free parameter values are intricately linked to the Brownian rotational diffusion of the molecule, mathematically through the autocorrelation function of the motion and statistically through model selection. The solution to this problem was formulated using set theory as an element of the universal set U, the union of all model-free spaces (d'Auvergne EJ and Gooley PR (2007) Mol BioSyst 3(7), 483-494). The current procedure commonly used to find the universal solution is to initially estimate the diffusion tensor parameters, to optimise the model-free parameters of numerous models, and then to choose the best model via model selection. The global model is then optimised and the procedure repeated until convergence. In this paper a new methodology is presented which takes a different approach to this diffusion-seeded model-free paradigm. Rather than starting with the diffusion tensor, this iterative protocol begins by optimising the model-free parameters in the absence of any global model parameters, selecting between all the model-free models, and finally optimising the diffusion tensor. The new model-free optimisation protocol will be validated using synthetic data from Schurr JM et al. (1994) J Magn Reson B 105(3), 211-224 and the relaxation data of the bacteriorhodopsin (1-36)BR fragment from Orekhov VY (1999) J Biomol NMR 14(4), 345-356. To demonstrate the importance of this new procedure, the NMR relaxation data of the Olfactory Marker Protein (OMP) of Gitti R et al. (2005) Biochem 44(28), 9673-9679 is reanalysed. The result is that the dynamics of certain secondary structural elements are very different from those originally reported.

  18. Effective methods of protection of the intellectual activity results in infosphere of global telematics networks

    D. A. Lovtsov

    2016-01-01

    The purpose of this article is to improve the methodology of technological, organisational and legal protection of the results of intellectual activity, and of the related intellectual rights, in the information sphere of global telematics networks (such as «Internet», «Relkom», «Sitek», «Sedab», «Remart», and others). On the basis of an analysis of the peculiarities and possibilities of different technological, organisational and legal methods and means of protecting information objects, proposals for improving the corresponding organisational and legal safeguards are formulated. Effective protection is achieved through a rational combination of the technological, organisational and legal methods and means possible in a particular situation.

  19. Towards automatic global error control: Computable weak error expansion for the tau-leap method

    Karlsson, Peer Jesper; Tempone, Raul

    2011-01-01

    This work develops novel error expansions with computable leading order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method, or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie Algorithm or the Stochastic Simulation Algorithm. Two types of estimates are presented: an a priori estimate for the relative error that gives a comparison between the work for the two methods depending on the propensity regime, and an a posteriori estimate with computable leading order term. © de Gruyter 2011.
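As a hedged illustration of the tau-leap discretisation itself (not of the error expansions developed in the paper), the sketch below simulates a pure decay reaction X → 0 with propensity a(x) = c·x, firing a Poisson-distributed number of reactions per step of size tau, and compares the sample mean at t = 1 with the exact mean x0·e^(-ct). The reaction and parameters are illustrative:

```python
import math, random

def poisson(rng, lam):
    """Knuth's Poisson sampler; adequate for the small rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap(x0, c, tau, t_end, rng):
    """Tau-leap simulation of the decay reaction X -> 0, a(x) = c * x."""
    x, t = x0, 0.0
    while t < t_end and x > 0:
        fires = poisson(rng, c * x * tau)
        x = max(0, x - fires)   # clip so the population never goes negative
        t += tau
    return x

rng = random.Random(0)
runs = [tau_leap(x0=1000, c=1.0, tau=0.01, t_end=1.0, rng=rng)
        for _ in range(200)]
mean_final = sum(runs) / len(runs)
exact_mean = 1000 * math.exp(-1.0)   # mean of the exact jump process
```

The weak error visible here (the gap between the sample mean and the exact mean) is exactly the quantity whose leading-order expansion the paper makes computable, enabling adaptive choice of tau.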

  20. Improvement of training set structure in fusion data cleaning using Time-Domain Global Similarity method

    Liu, J.; Lan, T.; Qin, H.

    2017-01-01

    Traditional data cleaning identifies dirty data by classifying original data sequences, which is a class-imbalanced problem since the proportion of incorrect data is much less than the proportion of correct ones for most diagnostic systems in Magnetic Confinement Fusion (MCF) devices. When using machine learning algorithms to classify diagnostic data based on class-imbalanced training set, most classifiers are biased towards the major class and show very poor classification rates on the minor class. By transforming the direct classification problem about original data sequences into a classification problem about the physical similarity between data sequences, the class-balanced effect of Time-Domain Global Similarity (TDGS) method on training set structure is investigated in this paper. Meanwhile, the impact of improved training set structure on data cleaning performance of TDGS method is demonstrated with an application example in EAST POlarimetry-INTerferometry (POINT) system.

  1. Effects of efforts to optimise morbidity and mortality rounds to serve contemporary quality improvement and educational goals: a systematic review.

    Smaggus, Andrew; Mrkobrada, Marko; Marson, Alanna; Appleton, Andrew

    2018-01-01

    The quality and safety movement has reinvigorated interest in optimising morbidity and mortality (M&M) rounds. We performed a systematic review to identify effective means of updating M&M rounds to (1) identify and address quality and safety issues, and (2) address contemporary educational goals. Relevant databases (Medline, Embase, PubMed, Education Resource Information Centre, Cumulative Index to Nursing and Allied Health Literature, Healthstar, and Global Health) were searched to identify primary sources. Studies were included if they (1) investigated an intervention applied to M&M rounds, (2) reported outcomes relevant to the identification of quality and safety issues, or educational outcomes relevant to quality improvement (QI), patient safety or general medical education and (3) included a control group. Study quality was assessed using the Medical Education Research Study Quality Instrument and Newcastle-Ottawa Scale-Education instruments. Given the heterogeneity of interventions and outcome measures, results were analysed thematically. The final analysis included 19 studies. We identified multiple effective strategies (updating objectives, standardising elements of rounds and attaching rounds to a formal quality committee) to optimise M&M rounds for a QI/safety purpose. These efforts were associated with successful integration of quality and safety content into rounds, and increased implementation of QI interventions. Consistent effects on educational outcomes were difficult to identify, likely due to the use of methodologies ill-fitted for educational research. These results are encouraging for those seeking to optimise the quality and safety mission of M&M rounds. However, the inability to identify consistent educational effects suggests the investigation of M&M rounds could benefit from additional methodologies (qualitative, mixed methods) in order to understand the complex mechanisms driving learning at M&M rounds.

  2. Multiobjective optimisation of energy systems and building envelope retrofit in a residential community

    Wu, Raphael; Mavromatidis, Georgios; Orehounig, Kristina; Carmeliet, Jan

    2017-01-01

    Highlights: • Simultaneous optimisation of building envelope retrofit and energy systems. • Retrofit and energy system changes interact and should be considered simultaneously. • Case study quantifies cost-GHG emission tradeoffs for different retrofit options. - Abstract: In this paper, a method for a multi-objective and simultaneous optimisation of building energy systems and retrofit is presented. Tailored to be suitable for the diverse range of existing buildings in terms of age, size, and use, it combines dynamic energy demand simulation to explore individual retrofit scenarios with an energy hub optimisation. Implemented as an epsilon-constrained mixed integer linear program (MILP), the optimisation matches envelope retrofit with renewable and high efficiency energy supply technologies such as biomass boilers, heat pumps, photovoltaic and solar thermal panels to minimise life cycle cost and greenhouse gas (GHG) emissions. Due to its multi-objective, integrated assessment of building transformation options and its ability to capture both individual building characteristics and trends within a neighbourhood, this method aims to provide developers and neighbourhood and town policy makers with the information necessary to make adequate decisions. Our method is deployed in a case study of typical residential buildings in the Swiss village of Zernez, simulating energy demands in EnergyPlus and solving the optimisation problem with CPLEX. Although common trade-offs in energy system and retrofit choice can be observed, the optimisation results suggest that the diversity in building age and size leads to optimal retrofit and building-system strategies that are specific to different building categories. With this method, GHG emissions of the entire community can be reduced by up to 76% at a cost increase of 3% compared to the current emission levels, if an optimised solution is selected for each building category.

  3. Measuring global oil trade dependencies: An application of the point-wise mutual information method

    Kharrazi, Ali; Fath, Brian D.

    2016-01-01

    Oil trade is one of the most vital networks in the global economy. In this paper, we analyze the 1998–2012 oil trade networks using the point-wise mutual information (PMI) method and determine the pairwise trade preferences and dependencies. Using examples of the USA's trade partners, this research demonstrates the usefulness of the PMI method as an additional methodological tool to evaluate the outcomes of countries' decisions to engage with preferred trading partners. A positive PMI value indicates trade preference, where trade is larger than would be expected. For example, in 2012 the USA imported 2,548.7 kbpd of oil from Canada against an expected 358.5 kbpd. Conversely, a negative PMI value indicates trade dis-preference, where the amount of trade is smaller than would be expected. For example, the 15-year average of annual PMI between Saudi Arabia and the USA is −0.130 and between Russia and the USA −1.596. We suggest that discrepancies between actual trade and the neutral model can be attributed to three primary factors: position, price, and politics. The PMI can quantify the political success or failure of trade preferences and can more accurately account for temporal variation in interdependencies. - Highlights: • We analyzed global oil trade networks using the point-wise mutual information method. • We identified position, price, & politics as drivers of oil trade preference. • The PMI method is useful in research on complex trade networks and dependency theory. • A time-series analysis of PMI can track dependencies & evaluate policy decisions.
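    The PMI of a trade pair compares the observed flow share with the product of the marginal shares (the "neutral" expectation). A minimal sketch on an invented four-flow matrix (the flow figures are illustrative, not the paper's data):

```python
import math

# Toy oil-trade flows (arbitrary units); keys are (exporter, importer) pairs.
flows = {
    ("Canada", "USA"): 2548.7,
    ("Saudi Arabia", "USA"): 1361.0,
    ("Canada", "China"): 100.0,
    ("Saudi Arabia", "China"): 1400.0,
}
total = sum(flows.values())

def pmi(exporter, importer):
    """Point-wise mutual information of one trade pair.

    Positive => the pair trades more than the neutral expectation
    p(exporter) * p(importer); negative => less (dis-preference).
    """
    p_xy = flows[(exporter, importer)] / total
    p_x = sum(v for (e, _), v in flows.items() if e == exporter) / total
    p_y = sum(v for (_, i), v in flows.items() if i == importer) / total
    return math.log2(p_xy / (p_x * p_y))

for pair in flows:
    print(pair, round(pmi(*pair), 3))
```

    In this toy matrix the Canada-USA pair comes out with positive PMI (preference) and the Canada-China pair with negative PMI (dis-preference), mirroring how the paper reads the sign of the statistic.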

  4. Analysis and optimisation of a mixed fluid cascade (MFC) process

    Ding, He; Sun, Heng; Sun, Shoujun; Chen, Cheng

    2017-04-01

    A mixed fluid cascade (MFC) process comprising three refrigeration cycles has great capacity for large-scale LNG production, which consumes a great amount of energy. Therefore, any performance enhancement of the liquefaction process will significantly reduce the energy consumption. The MFC process is simulated and analysed using the proprietary software Aspen HYSYS. The effects of feed gas pressure, LNG storage pressure, water-cooler outlet temperature, different pre-cooling regimes, and liquefaction and sub-cooling refrigerant composition on MFC performance are investigated and presented. The excellent numerical calculation ability and user-friendly interface of MATLAB™ are combined with the powerful thermo-physical property package of Aspen HYSYS. A genetic algorithm is then invoked to optimise the MFC process globally. After optimisation, the unit power consumption can be reduced to 4.655 kW h/kmol, or 4.366 kW h/kmol, on condition that the compressor adiabatic efficiency is 80% or 85%, respectively. Additionally, to improve the process further with regard to its thermodynamic efficiency, configuration optimisation is conducted for the MFC process and several configurations are established. By analysing heat transfer and thermodynamic performance, the configuration entailing a pre-cooling cycle with three pressure levels, liquefaction, and a sub-cooling cycle with one pressure level is identified as the most efficient and thus optimal: its unit power consumption is 4.205 kW h/kmol. Additionally, the weak performance of the suggested liquefaction cycle configuration is attributed to the unbalanced distribution of cold energy in the liquefaction temperature range.
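    The genetic-algorithm loop used for the global search can be sketched in a few lines. The objective below is a smooth stand-in for unit power consumption as a function of two refrigerant mole fractions; it is purely illustrative, since the paper evaluates each candidate through an Aspen HYSYS flowsheet driven from MATLAB:

```python
import random

random.seed(42)

# Stand-in objective: surrogate "unit power consumption" over two mole
# fractions in [0, 1] (invented; the real objective is a process simulation).
def power(x):
    a, b = x
    return (a - 0.3) ** 2 + (b - 0.6) ** 2 + 4.2

def crossover(p1, p2):
    w = random.random()
    return [w * u + (1 - w) * v for u, v in zip(p1, p2)]

def mutate(x, rate=0.2, scale=0.1):
    return [min(1.0, max(0.0, v + random.gauss(0, scale))) if random.random() < rate else v
            for v in x]

pop = [[random.random(), random.random()] for _ in range(30)]
for _ in range(60):
    pop.sort(key=power)                 # rank by fitness (lower power is better)
    elite = pop[:10]                    # truncation selection keeps the best 10
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]

best = min(pop, key=power)
print(best, power(best))
```

    Elitism keeps the incumbent solution, so the best objective value is non-increasing across generations; the population converges towards the optimum of the surrogate.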

  5. Optimisation of key performance measures in air cargo demand management

    Alexander May; Adrian Anslow; Udechukwu Ojiako; Yue Wu; Alasdair Marshall; Maxwell Chipulu

    2014-01-01

    This article sought to facilitate the optimisation of key performance measures utilised for demand management in air cargo operations. The focus was on the Revenue Management team at Virgin Atlantic Cargo and a fuzzy group decision-making method was used. Utilising intelligent fuzzy multi-criteria methods, the authors generated a ranking order of ten key outcome-based performance indicators for Virgin Atlantic air cargo Revenue Management. The result of this industry-driven study showed that ...

  6. Optimisation of trawl energy efficiency under fishing effort constraint

    Priour, Daniel; Khaled, Ramez

    2009-01-01

    Trawl energy efficiency is greatly affected by the drag, as well as by the swept area. The drag results in an increase of the energy consumption and the sweeping influences the catch. Methods for optimisation of the trawl design have been developed in order to reduce the volume of fuel per kg of caught fish and consequently the drag per swept area of the trawl. Based on a finite element method model for flexible netting structures, the tool modifies step by step a reference design. For e...

  7. Noise aspects at aerodynamic blade optimisation projects

    Schepers, J.G.

    1997-06-01

    The Netherlands Energy Research Foundation (ECN) has often been involved in industrial projects, in which blade geometries are created automatically by means of numerical optimisation. Usually, these projects aim at the determination of the aerodynamically optimal wind turbine blade, i.e. the goal is to design a blade which is optimal with regard to energy yield. In other cases, blades have been designed which are optimal with regard to cost of generated energy. However, it is obvious that the wind turbine blade designs which result from these optimisations are not necessarily optimal with regard to noise emission. In this paper an example is shown of an aerodynamic blade optimisation, using the ECN program PVOPT. PVOPT calculates the optimal wind turbine blade geometry such that the maximum energy yield is obtained. Using the aerodynamically optimal blade design as a basis, the possibilities of noise reduction are investigated. 11 figs., 8 refs

  8. Topology Optimisation of Wireless Sensor Networks

    Thike Aye Min

    2016-01-01

    Wireless sensor networks are widely used in a variety of fields, including industrial environments. In the case of a clustered network, the location of the cluster head affects the reliability of the network operation. Finding the optimum location of the cluster head is therefore critical for the design of a network. This paper discusses an optimisation approach, based on the brute force algorithm, in the context of topology optimisation of a cluster-structure centralised wireless sensor network. Two examples are given to verify the approach and demonstrate the implementation of the brute force algorithm to find an optimum location of the cluster head.
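    A brute-force cluster-head search can be sketched as an exhaustive scan of candidate sites. The node coordinates and the objective (total node-to-head distance, a stand-in for a link-reliability metric) are invented here; the paper's actual network model may differ:

```python
import math
from itertools import product

nodes = [(1, 2), (3, 8), (7, 4), (5, 5), (2, 6)]  # hypothetical sensor coordinates

def total_distance(head):
    # Sum of Euclidean node-to-head distances: our stand-in cost to minimise.
    return sum(math.dist(head, n) for n in nodes)

# Brute force: evaluate every candidate site on an integer grid.
grid = [(x, y) for x, y in product(range(0, 11), repeat=2)]
best = min(grid, key=total_distance)
print("best cluster-head location:", best, "cost:", round(total_distance(best), 2))
```

    The exhaustive scan guarantees the grid optimum at the price of evaluating every candidate, which is exactly the trade-off a brute-force topology search accepts for small networks.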

  9. Integration of environmental aspects in modelling and optimisation of water supply chains.

    Koleva, Mariya N; Calderón, Andrés J; Zhang, Di; Styan, Craig A; Papageorgiou, Lazaros G

    2018-04-26

    Climate change is becoming increasingly relevant in the context of water systems planning. Tools are necessary to provide the most economic investment option considering the reliability of the infrastructure from technical and environmental perspectives. Accordingly, in this work, an optimisation approach, formulated as a spatially-explicit multi-period Mixed Integer Linear Programming (MILP) model, is proposed for the design of water supply chains at regional and national scales. The optimisation framework encompasses decisions such as installation of new purification plants, capacity expansion, and raw water trading schemes. The objective is to minimise the total cost incurred from capital and operating expenditures. Assessment of available resources for withdrawal is performed based on hydrological balances, governmental rules and sustainable limits. In the light of the increasing importance of reliability of water supply, a second objective, seeking to maximise the reliability of the supply chains, is introduced. The epsilon-constraint method is used as a solution procedure for the multi-objective formulation. The Nash bargaining approach is applied to investigate fair trade-offs between the two objectives and find the Pareto optimality. The models' capability is addressed through a case study based on Australia. The impact of variability in key input parameters is tackled through the implementation of a rigorous global sensitivity analysis (GSA). The findings suggest that variations in water demand can be more disruptive for the water supply chain than scenarios in which rainfalls are reduced. The frameworks can facilitate governmental multi-aspect decision making processes for adequate and strategic investments in regional water supply infrastructure.
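    The Nash bargaining step, which selects a "fair" compromise between the cost and reliability objectives, can be sketched as choosing the Pareto point that maximises the product of gains relative to the disagreement (worst-case) outcome. The front below is invented, not taken from the case study:

```python
# Hypothetical (cost, reliability) Pareto front from an epsilon-constraint sweep.
pareto = [(100, 0.90), (120, 0.95), (160, 0.99), (220, 0.995)]

worst_cost = max(c for c, _ in pareto)   # disagreement point: dearest design
worst_rel = min(r for _, r in pareto)    # disagreement point: least reliable design

def nash_product(point):
    # Product of gains over the disagreement point; the Nash bargaining
    # solution maximises this over the Pareto set.
    cost, rel = point
    return (worst_cost - cost) * (rel - worst_rel)

best = max(pareto, key=nash_product)
print("fair trade-off:", best)
```

    Points that achieve no gain in either objective score zero, so the bargaining solution always lands strictly inside the front whenever such an interior point exists.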

  10. Optimal design of CHP-based microgrids: Multiobjective optimisation and life cycle assessment

    Zhang, Di; Evangelisti, Sara; Lettieri, Paola; Papageorgiou, Lazaros G.

    2015-01-01

    As an alternative to current centralised energy generation systems, microgrids are adopted to provide local energy with lower energy expenses and gas emissions by utilising distributed energy resources (DER). Several micro combined heat and power technologies have been developed recently for applications at domestic scale. The optimal design of DERs within CHP-based microgrids plays an important role in promoting the penetration of microgrid systems. In this work, the optimal design of microgrids with CHP units is addressed by coupling environmental and economic sustainability in a multi-objective optimisation model which integrates the results of a life cycle assessment of the microgrids investigated. The results show that the installation of multiple CHP technologies has a lower cost with higher environmental saving compared with the case when only a single technology is installed in each site, meaning that the microgrid works in a more efficient way when multiple technologies are selected. In general, proton exchange membrane (PEM) fuel cells are chosen as the basic CHP technology for most solutions, which offer lower environmental impacts at low cost. However, internal combustion engines (ICE) and Stirling engines (SE) are preferred if the heat demand is high. - Highlights: • Optimal design of microgrids is addressed by coupling environmental and economic aspects. • An MILP model is formulated based on the ε-constraint method. • The model selects a combination of CHP technologies with different technical characteristics for optimum scenarios. • The global warming potential (GWP) and the acidification potential (AP) are determined. • The output of LCA is used as an input for the optimisation model

  11. A GPS-Based Pitot-Static Calibration Method Using Global Output-Error Optimization

    Foster, John V.; Cunningham, Kevin

    2010-01-01

    Pressure-based airspeed and altitude measurements for aircraft typically require calibration of the installed system to account for pressure sensing errors such as those due to local flow field effects. In some cases, calibration is used to meet requirements such as those specified in Federal Aviation Regulation Part 25. Several methods are used for in-flight pitot-static calibration including tower fly-by, pacer aircraft, and trailing cone methods. In the 1990s, the introduction of satellite-based positioning systems to the civilian market enabled new in-flight calibration methods based on accurate ground speed measurements provided by the Global Positioning System (GPS). Use of GPS for airspeed calibration has many advantages such as accuracy, ease of portability (e.g. hand-held) and the flexibility of operating in airspace without the limitations of test range boundaries or ground telemetry support. The current research was motivated by the need for a rapid and statistically accurate method for in-flight calibration of pitot-static systems for remotely piloted, dynamically-scaled research aircraft. Current calibration methods were deemed not practical for this application because of confined test range size and limited flight time available for each sortie. A method was developed that uses high data rate measurements of static and total pressure, and GPS-based ground speed measurements to compute the pressure errors over a range of airspeed. The novel application of this approach is the use of system identification methods that rapidly compute optimal pressure error models with defined confidence intervals in near-real time. This method has been demonstrated in flight tests and has shown 2-sigma bounds of approximately 0.2 kts with an order of magnitude reduction in test time over other methods. As part of this experiment, a unique database of wind measurements was acquired concurrently with the flight experiments, for the purpose of experimental validation of the

  12. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

    A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations of the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.
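    The separate least squares fit of the model weight can be illustrated in one dimension: for "wind-off" normal-force readings F_i taken at pitch angles theta_i, the single-coefficient model F = W cos(theta) has the closed-form estimator W_hat = sum(F_i cos theta_i) / sum(cos^2 theta_i). A sketch with simulated data (the weight, angles, and noise level are invented, and the real method fits multiple load components):

```python
import math
import random

random.seed(1)

# Simulated "wind-off" points: balance normal force due to model weight at
# different pitch angles (degrees). Model: F = W * cos(theta) + noise.
W_true = 52.0  # lbf, hypothetical model weight
thetas = [math.radians(t) for t in range(-20, 25, 5)]
forces = [W_true * math.cos(t) + random.gauss(0, 0.3) for t in thetas]

# Closed-form least squares estimator for the single coefficient W.
num = sum(f * math.cos(t) for f, t in zip(forces, thetas))
den = sum(math.cos(t) ** 2 for t in thetas)
W_hat = num / den

print(f"estimated weight: {W_hat:.2f} lbf (true {W_true})")
```

    Feeding an estimator of this kind into a second fit for the center of gravity coordinates is the two-stage structure the paper formalises.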

  13. Optimal correction and design parameter search by modern methods of rigorous global optimization

    Makino, K.; Berz, M.

    2011-01-01

    Frequently the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibit multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, frequently an abundance of optimization runs are carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and by using the underestimators to rigorously iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle
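    The branch-and-bound logic described above can be sketched in one dimension. The paper's rigorous underestimators come from interval and Taylor-model (Differential Algebraic) arithmetic; the sketch below substitutes a simple Lipschitz-constant bound, which plays the same role of rigorously eliminating regions that cannot contain the global minimum (function and constant chosen for illustration):

```python
import math

# Objective on [-3, 3]; |f'(x)| = |3*cos(3x) + x| <= 6 there, so L = 6 is a
# valid Lipschitz constant and f(m) - L*(b-a)/2 is a rigorous lower bound
# for f on [a, b] with midpoint m.
f = lambda x: math.sin(3 * x) + 0.5 * x * x
L = 6.0

best_x, best_f = 0.0, f(0.0)
stack = [(-3.0, 3.0)]
while stack:
    a, b = stack.pop()
    m = 0.5 * (a + b)
    fm = f(m)
    if fm < best_f:                      # update incumbent upper bound
        best_x, best_f = m, fm
    lower = fm - L * (b - a) / 2         # rigorous underestimate on [a, b]
    if lower > best_f - 1e-9 or b - a < 1e-4:
        continue                         # prune: no better minimum inside
    stack += [(a, m), (m, b)]            # branch: split the box

print(best_x, best_f)
```

    Because pruning only discards boxes whose certified lower bound exceeds the incumbent, the surviving boxes are guaranteed to bracket every global minimiser, which is the property the paper exploits with much sharper bounds.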

  14. Artificial Bee Colony Algorithm Combined with Grenade Explosion Method and Cauchy Operator for Global Optimization

    Jian-Guo Zheng

    2015-01-01

    The artificial bee colony (ABC) algorithm is a popular swarm intelligence technique inspired by the intelligent foraging behavior of honey bees. However, ABC is good at exploration but poor at exploitation, and its convergence speed is also an issue in some cases. To improve the performance of ABC, a novel ABC combined with the grenade explosion method (GEM) and the Cauchy operator, namely ABCGC, is proposed. GEM is embedded in the onlooker bees' phase to enhance the exploitation ability and accelerate convergence of ABCGC; meanwhile, the Cauchy operator is introduced into the scout bees' phase to help ABCGC escape from local optima and further enhance its exploration ability. Two sets of well-known benchmark functions are used to validate the better performance of ABCGC. The experiments confirm that ABCGC is significantly superior to ABC and other competitors; particularly, it converges to the global optimum faster in most cases. These results suggest that ABCGC usually achieves a good balance between exploitation and exploration and can effectively serve as an alternative for global optimization.
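    The reason a Cauchy operator helps escape local optima is its heavy tails: unlike Gaussian perturbations, Cauchy-distributed steps occasionally make very large jumps. A minimal sketch of the sampling (the scale and thresholds are arbitrary; the paper embeds this in the scout-bee phase of ABCGC):

```python
import math
import random

random.seed(7)

def cauchy_step(scale=0.1):
    # Inverse-CDF sampling of a zero-centred Cauchy variate:
    # scale * tan(pi * (u - 0.5)) for uniform u in (0, 1).
    return scale * math.tan(math.pi * (random.random() - 0.5))

# Count how often a step jumps more than 10x the scale parameter -
# for a Gaussian with sigma = 0.1 this would essentially never happen.
jumps = [abs(cauchy_step()) for _ in range(10000)]
big = sum(j > 1.0 for j in jumps)
print(f"{big} of 10000 Cauchy steps exceeded 10x the scale")
```

    Roughly 6% of draws exceed ten scale-lengths (the tail probability is (2/pi)·arctan(0.1) ≈ 0.063), which is what lets a scout bee abandoned at a local optimum occasionally relocate to a distant region of the search space.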

  15. A method of recovering the initial vectors of globally coupled map lattices based on symbolic dynamics

    Sun Li-Sha; Kang Xiao-Yun; Zhang Qiong; Lin Lan-Xin

    2011-01-01

    Based on symbolic dynamics, a novel computationally efficient algorithm is proposed to estimate the unknown initial vectors of globally coupled map lattices (CMLs). It is proved that not all inverse chaotic mapping functions are satisfied for contraction mapping. It is found that the values in phase space do not always converge on their initial values with respect to sufficient backward iteration of the symbolic vectors in terms of global convergence or divergence (CD). Both CD property and the coupling strength are directly related to the mapping function of the existing CML. Furthermore, the CD properties of Logistic, Bernoulli, and Tent chaotic mapping functions are investigated and compared. Various simulation results and the performances of the initial vector estimation with different signal-to-noise ratios (SNRs) are also provided to confirm the proposed algorithm. Finally, based on the spatiotemporal chaotic characteristics of the CML, the conditions of estimating the initial vectors using symbolic dynamics are discussed. The presented method provides both theoretical and experimental results for better understanding and characterizing the behaviours of spatiotemporal chaotic systems. (general)
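    The forward dynamics being inverted here are those of a globally coupled map lattice, x_{n+1}(i) = (1 − ε)·f(x_n(i)) + (ε/N)·Σ_j f(x_n(j)). A minimal sketch of these forward dynamics with a logistic map (parameter values and lattice size are illustrative; the paper's estimation algorithm runs this map backwards via symbolic dynamics):

```python
# Globally coupled logistic-map lattice:
# x_{n+1}(i) = (1 - eps) * f(x_n(i)) + (eps / N) * sum_j f(x_n(j))
def logistic(x, a=4.0):
    return a * x * (1 - x)

def cml_step(state, eps=0.1):
    fx = [logistic(x) for x in state]
    mean = sum(fx) / len(fx)
    # Each site mixes its own updated value with the lattice mean field.
    return [(1 - eps) * v + eps * mean for v in fx]

state = [0.2, 0.4, 0.6, 0.8]   # illustrative initial vector
for _ in range(50):
    state = cml_step(state)
print(state)
```

    Since every update is a convex combination of logistic-map values, the state stays in [0, 1]; recovering the initial vector from a late state requires inverting each (generally non-contracting) step, which is why the paper's convergence/divergence analysis of the backward iteration matters.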

  16. A method of recovering the initial vectors of globally coupled map lattices based on symbolic dynamics

    Sun, Li-Sha; Kang, Xiao-Yun; Zhang, Qiong; Lin, Lan-Xin

    2011-12-01

    Based on symbolic dynamics, a novel computationally efficient algorithm is proposed to estimate the unknown initial vectors of globally coupled map lattices (CMLs). It is proved that not all inverse chaotic mapping functions are satisfied for contraction mapping. It is found that the values in phase space do not always converge on their initial values with respect to sufficient backward iteration of the symbolic vectors in terms of global convergence or divergence (CD). Both CD property and the coupling strength are directly related to the mapping function of the existing CML. Furthermore, the CD properties of Logistic, Bernoulli, and Tent chaotic mapping functions are investigated and compared. Various simulation results and the performances of the initial vector estimation with different signal-to-noise ratios (SNRs) are also provided to confirm the proposed algorithm. Finally, based on the spatiotemporal chaotic characteristics of the CML, the conditions of estimating the initial vectors using symbolic dynamics are discussed. The presented method provides both theoretical and experimental results for better understanding and characterizing the behaviours of spatiotemporal chaotic systems.

  17. Radiation protection optimisation techniques and their application in industry

    Lefaure, C

    1997-12-31

    Since International Commission on Radiological Protection (ICRP) Publication 60, the optimisation principle has been at the core of the radiation protection system. In practice, applying it means implementing an approach that is both predictive and evolutionary, and that relies essentially on a prudent and responsible state of mind. The formal expression of this process, called the optimisation procedure, implies an indispensable tool for its implementation: the system of monetary values for the unit of collective dose. During the last few years, experience feedback has shown that applying the ALARA principle means that a global work management approach must be adopted, considering together all factors contributing to radiation dose. In the nuclear field, the ALARA approach appears to be most successful when implemented within a managerial approach through structured ALARA programmes. Outside the nuclear industry it is necessary to clearly define priorities through generic optimisation studies and ALARA audits. At the international level, much effort remains to be done to extend the ALARA process efficiently to internal exposure as well as to public exposure. (author) 2 graphs, 5 figs., 3 tabs.

  18. Radiation protection optimisation techniques and their application in industry

    Lefaure, C.

    1996-01-01

    Since International Commission on Radiological Protection (ICRP) Publication 60, the optimisation principle has been at the core of the radiation protection system. In practice, applying it means implementing an approach that is both predictive and evolutionary, and that relies essentially on a prudent and responsible state of mind. The formal expression of this process, called the optimisation procedure, implies an indispensable tool for its implementation: the system of monetary values for the unit of collective dose. During the last few years, experience feedback has shown that applying the ALARA principle means that a global work management approach must be adopted, considering together all factors contributing to radiation dose. In the nuclear field, the ALARA approach appears to be most successful when implemented within a managerial approach through structured ALARA programmes. Outside the nuclear industry it is necessary to clearly define priorities through generic optimisation studies and ALARA audits. At the international level, much effort remains to be done to extend the ALARA process efficiently to internal exposure as well as to public exposure. (author)

  19. Radiation protection optimisation techniques and their application in industry

    Lefaure, C

    1996-12-31

    Since International Commission on Radiological Protection (ICRP) Publication 60, the optimisation principle has been at the core of the radiation protection system. In practice, applying it means implementing an approach that is both predictive and evolutionary, and that relies essentially on a prudent and responsible state of mind. The formal expression of this process, called the optimisation procedure, implies an indispensable tool for its implementation: the system of monetary values for the unit of collective dose. During the last few years, experience feedback has shown that applying the ALARA principle means that a global work management approach must be adopted, considering together all factors contributing to radiation dose. In the nuclear field, the ALARA approach appears to be most successful when implemented within a managerial approach through structured ALARA programmes. Outside the nuclear industry it is necessary to clearly define priorities through generic optimisation studies and ALARA audits. At the international level, much effort remains to be done to extend the ALARA process efficiently to internal exposure as well as to public exposure. (author) 2 graphs, 5 figs., 3 tabs.

  20. Décomposition-coordination en optimisation déterministe et stochastique

    Carpentier, Pierre

    2017-01-01

    This book addresses the solution of large-scale optimisation problems. The idea is to split the global optimisation problem into smaller sub-problems that are easier to solve, each involving one of the subsystems (decomposition), without giving up the global optimum, which requires an iterative procedure (coordination). This topic was the subject of several books published in the 1970s in the context of deterministic optimisation. We present the essential principles and methods of decomposition-coordination through typical situations, and then propose a general framework for constructing correct algorithms and studying their convergence. This theory is presented in the context of both deterministic and stochastic optimisation. The material has been taught by the authors in various graduate courses and has also been applied in numerous industrial applications. Exercises ...
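    A price-coordination (dual decomposition) scheme of the kind developed in such books can be sketched for two quadratic subsystems sharing a resource: each subproblem is solved independently for a given price (decomposition), and a master loop adjusts the price by subgradient ascent until the coupling constraint holds (coordination). All numbers below are invented for illustration:

```python
# Two subsystems with costs f_i(x) = a_i * (x - c_i)^2 must share a resource:
# x1 + x2 = R. Dualising the constraint with price lam decouples the problem.
costs = [(1.0, 2.0), (2.0, 3.0)]   # (a_i, c_i) per subsystem
R = 10.0

def subproblem(a, c, lam):
    # Local problem: argmin_x a*(x - c)^2 + lam * x  =>  x = c - lam / (2a).
    return c - lam / (2 * a)

lam, step = 0.0, 0.4
for _ in range(200):
    xs = [subproblem(a, c, lam) for a, c in costs]
    lam += step * (sum(xs) - R)    # coordination: raise price if over-used

print([round(x, 3) for x in xs], round(lam, 3))
```

    At convergence the price settles where the decentralised responses exactly exhaust the resource (here lam → −20/3, x = (16/3, 14/3)), recovering the global optimum without ever solving the coupled problem directly.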

  1. Global Convergence of Arbitrary-Block Gradient Methods for Generalized Polyak-Łojasiewicz Functions

    Csiba, Dominik

    2017-09-09

    In this paper we introduce two novel generalizations of the theory for gradient descent type methods in the proximal setting. First, we introduce the proportion function, which we further use to analyze all known (and many new) block-selection rules for block coordinate descent methods under a single framework. This framework includes randomized methods with uniform, non-uniform or even adaptive sampling strategies, as well as deterministic methods with batch, greedy or cyclic selection rules. Second, the theory of strongly-convex optimization was recently generalized to a specific class of non-convex functions satisfying the so-called Polyak-Łojasiewicz condition. To mirror this generalization in the weakly convex case, we introduce the Weak Polyak-Łojasiewicz condition, using which we give global convergence guarantees for a class of non-convex functions previously not considered in theory. Additionally, we establish (necessarily somewhat weaker) convergence guarantees for an even larger class of non-convex functions satisfying a certain smoothness assumption only. By combining the two abovementioned generalizations we recover the state-of-the-art convergence guarantees for a large class of previously known methods and setups as special cases of our general framework. Moreover, our framework allows for the derivation of new guarantees for many new combinations of methods and setups, as well as a large class of novel non-convex objectives. The flexibility of our approach offers a lot of potential for future research, as a new block selection procedure will have a convergence guarantee for all objectives considered in our framework, while a new objective analyzed under our approach will have a whole fleet of block selection rules with convergence guarantees readily available.
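    In the scalar case, the Polyak-Łojasiewicz condition, 0.5·|f′(x)|² ≥ μ·(f(x) − f*), can be illustrated with the classic non-convex example f(x) = x² + 3 sin²(x): despite non-convexity, plain gradient descent still reaches the global minimum. This is only a one-dimensional sketch of the condition, not of the paper's block-coordinate methods:

```python
import math

# f(x) = x^2 + 3*sin(x)^2 is non-convex but satisfies the PL inequality
# 0.5 * |f'(x)|^2 >= mu * (f(x) - f*), with f* = 0 attained at x = 0.
f = lambda x: x * x + 3 * math.sin(x) ** 2
grad = lambda x: 2 * x + 3 * math.sin(2 * x)

# Plain gradient descent with a step below 2/L (here f'' <= 8, step 0.1).
x = 2.5
for _ in range(200):
    x -= 0.1 * grad(x)

print(x, f(x))
```

    Under the PL condition this iteration converges linearly in function value to the global minimum even though the function is not convex, which is exactly the behaviour the paper's weaker conditions aim to preserve for larger function classes.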

  2. Global Convergence of Arbitrary-Block Gradient Methods for Generalized Polyak-Łojasiewicz Functions

    Csiba, Dominik; Richtarik, Peter

    2017-01-01

    In this paper we introduce two novel generalizations of the theory for gradient descent type methods in the proximal setting. First, we introduce the proportion function, which we further use to analyze all known (and many new) block-selection rules for block coordinate descent methods under a single framework. This framework includes randomized methods with uniform, non-uniform or even adaptive sampling strategies, as well as deterministic methods with batch, greedy or cyclic selection rules. Second, the theory of strongly-convex optimization was recently generalized to a specific class of non-convex functions satisfying the so-called Polyak-Łojasiewicz condition. To mirror this generalization in the weakly convex case, we introduce the Weak Polyak-Łojasiewicz condition, using which we give global convergence guarantees for a class of non-convex functions previously not considered in theory. Additionally, we establish (necessarily somewhat weaker) convergence guarantees for an even larger class of non-convex functions satisfying a certain smoothness assumption only. By combining the two abovementioned generalizations we recover the state-of-the-art convergence guarantees for a large class of previously known methods and setups as special cases of our general framework. Moreover, our framework allows for the derivation of new guarantees for many new combinations of methods and setups, as well as a large class of novel non-convex objectives. The flexibility of our approach offers a lot of potential for future research, as a new block selection procedure will have a convergence guarantee for all objectives considered in our framework, while a new objective analyzed under our approach will have a whole fleet of block selection rules with convergence guarantees readily available.

  3. Application of Surpac and Whittle Software in Open Pit Optimisation ...

    Application of Surpac and Whittle Software in Open Pit Optimisation and Design. ... This paper studies the Surpac and Whittle software and their application in designing an optimised pit. ...

  4. (MBO) algorithm in multi-reservoir system optimisation

    A comparative study of marriage in honey bees optimisation (MBO) algorithm in ... A practical application of the marriage in honey bees optimisation (MBO) ... to those of other evolutionary algorithms, such as the genetic algorithm (GA), ant ...

  5. Methods and calculations for regional, continental, and global dose assessments from a hypothetical fuel reprocessing facility

    Schubert, J.F.; Kern, C.D.; Cooper, R.E.; Watts, J.R.

    1978-01-01

    The Savannah River Laboratory (SRL) is coordinating an interlaboratory effort to provide, test, and use state-of-the-art methods for calculating the environmental impact to an offsite population from the normal releases of radionuclides during the routine operation of a fuel-reprocessing plant. Results of this effort are the estimated doses to regional, continental, and global populations. Estimates are based upon operation of a hypothetical reprocessing plant at a site in the southeastern United States. The hypothetical plant will reprocess fuel used at a burn rate of 30 megawatts/metric ton and a burnup of 33,000 megawatt days/metric ton. All fuel will have been cooled for at least 365 days. The plant will have a 10 metric ton/day capacity and an assumed 3000 metric ton/year (82 percent online plant operation) output. Lifetime of the plant is assumed to be 40 years

  6. Global optimization methods for the aerodynamic shape design of transonic cascades

    Mengistu, T.; Ghaly, W.

    2003-01-01

    Two global optimization algorithms, namely Genetic Algorithm (GA) and Simulated Annealing (SA), have been applied to the aerodynamic shape optimization of transonic cascades; the objective being the redesign of an existing turbomachine airfoil to improve its performance by minimizing the total pressure loss while satisfying a number of constraints. This is accomplished by modifying the blade camber line; keeping the same blade thickness distribution, mass flow rate and the same flow turning. The objective is calculated based on an Euler solver and the blade camber line is represented with non-uniform rational B-splines (NURBS). The SA and GA methods were first assessed for known test functions and their performance in optimizing the blade shape for minimum loss is then demonstrated on a transonic turbine cascade where it is shown to produce a significant reduction in total pressure loss by eliminating the passage shock. (author)
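    As an illustration of the simulated annealing component, here is a minimal generic SA loop on a stand-in objective (the paper's objective is evaluated by an Euler solver over a NURBS camber line, which is not reproduced here; the function and parameter names below are invented):

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, alpha=0.95, iters=2000, seed=0):
    """Minimise f by simulated annealing: always accept improving moves,
    accept worsening moves with probability exp(-df/T), and cool T geometrically."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= alpha  # geometric cooling schedule
    return best, fbest

# Stand-in objective (sphere function); the paper minimises total pressure loss.
sphere = lambda v: sum(xi * xi for xi in v)
sol, val = simulated_annealing(sphere, [3.0, -2.0])
```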

  7. Anticipated future of Latvia and Russia during a global economic crisis: A mixed methods perspective

    Kolesovs Aleksandrs

    2014-01-01

    This cross-cultural study explored subjective predictors of a more positive evaluation of the future of the country during a global socioeconomic crisis. A sequential mixed-methods design was chosen for an exploration of students’ expectations in Russia and Latvia, countries contrasting in macro-contextual conditions. In 2009, Study 1 was designed as a thematic analysis of essays on the topic “The Future of Latvia/Russia”. The results demonstrated that the future of a country is anticipated by taking into account external influences, the present of the country, and its perceived power and stability. In 2011, Study 2 involved these themes as independent variables in a multiple regression model. The results demonstrated that positive evaluation of the present and higher perceived power of the country are individual-level predictors of a more positive evaluation of its future. The observed concordance of the models indicates the relatively high importance of the subjective view of the country in the changing world.

  8. An Equivalent Source Method for Modelling the Global Lithospheric Magnetic Field

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris

    2014-01-01

    We present a new technique for modelling the global lithospheric magnetic field at Earth's surface based on the estimation of equivalent potential field sources. As a demonstration we show an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010 when...... are also employed to minimize the influence of the ionospheric field. The model for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid. The corresponding source values are estimated using an iteratively reweighted least squares algorithm...... in the CHAOS-4 and MF7 models using more conventional spherical harmonic based approaches. Advantages of the equivalent source method include its local nature, allowing e.g. for regional grid refinement, and the ease of transforming to spherical harmonics when needed. Future applications will make use of Swarm...
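    The iteratively reweighted least squares step mentioned above can be sketched generically as follows (a minimal robust line fit with Huber-style weights; the actual model fits monopole source values to satellite data, and all names and numbers here are illustrative):

```python
import numpy as np

def irls(A, y, iters=20, eps=1.0):
    """Iteratively reweighted least squares: refit with weights
    w = min(1, eps/|r|) so that large residuals are downweighted."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]  # ordinary LS start
    for _ in range(iters):
        r = y - A @ x
        w = np.minimum(1.0, eps / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
    return x

# Line fit with one gross outlier; IRLS recovers the true coefficients.
t = np.arange(10.0)
y = 2.0 * t + 1.0
y[3] += 50.0  # outlier
A = np.c_[t, np.ones_like(t)]
coef = irls(A, y)
```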

  9. Comparison between the boundary layer and global resistivity methods for tearing modes in reversed field configurations

    Santiago, M.A.M.

    1987-01-01

    A review of the problem of growth rate calculations for tearing modes in field-reversed Θ-pinches is presented. It is shown that, for several sets of experimental data, the method that analyses the plasma with a global finite resistivity gives better quantitative agreement than the boundary layer analysis. A comparative study is made of the m = 1 resistive kink mode and the m = 2 mode, which is more dangerous for the onset of rotational instabilities of the plasma column. The imaginary component of the eigenfrequency, which determines the growth rate, agrees well with the experimental data, while the real component differs from the rotational frequency measured in some experiments. (author) [pt

  10. Cervical spine motion in manual versus Jackson table turning methods in a cadaveric global instability model.

    DiPaola, Matthew J; DiPaola, Christian P; Conrad, Bryan P; Horodyski, MaryBeth; Del Rossi, Gianluca; Sawers, Andrew; Bloch, David; Rechtine, Glenn R

    2008-06-01

    A study of spine biomechanics in a cadaver model. To quantify motion in multiple axes created by transfer methods from stretcher to operating table in the prone position in a cervical global instability model. Patients with an unstable cervical spine remain at high risk for further secondary injury until their spine is adequately surgically stabilized. Previous studies have revealed that collars have significant but limited benefit in preventing cervical motion when manually transferring patients. The literature proposes multiple methods of patient transfer, although no one method has been universally adopted. To date, no study has effectively evaluated the relationship between spine motion and various patient transfer methods to an operating room table for prone positioning. A global instability was surgically created at C5-6 in 4 fresh cadavers with no history of spine pathology. All cadavers were tested both with and without a rigid cervical collar in the intact and unstable state. Three headrest permutations were evaluated: Mayfield (SM USA Inc), Prone View (Dupaco, Oceanside, CA), and Foam Pillow (OSI, Union City, CA). A trained group of medical staff performed each of 2 transfer methods: the "manual" and the "Jackson table" transfer. The manual technique entailed performing a standard rotation of the supine patient on a stretcher to the prone position on the operating room table with in-line manual cervical stabilization. The "Jackson" technique involved sliding the supine patient to the Jackson table (OSI, Union City, CA) with manual in-line cervical stabilization, securing them to the table, then initiating the table's lock and turn mechanism and rotating them into a prone position. An electromagnetic tracking device captured angular motion between the C5 and C6 vertebral segments. Repeated measures statistical analysis was performed to evaluate the following conditions: collar use (2 levels), headrest (3 levels), and turning technique (2 levels). 
For all

  11. Optimisation of material discrimination using spectral CT

    Nik, S.J.; Meyer, J.; Watts, R.

    2010-01-01

    Spectral computed tomography (CT) using novel X-ray photon counting detectors (PCDs) with energy resolving capabilities is capable of providing energy-selective images. This extra energy information may allow materials such as iodine and calcium, or water and fat, to be distinguished. PCDs have energy thresholds, enabling the classification of photons into multiple energy bins. The information content of spectral CT images depends on how the photons are grouped together. In this work, a method is presented to optimise energy windows for maximum material discrimination. Given a combination of thicknesses, the reference number of expected photons in each energy bin is computed using the Beer-Lambert equation. A similar calculation is performed for an exhaustive range of thicknesses and the number of photons in each case is compared to the reference, allowing a statistical map of the uncertainty in thickness parameters to be constructed. The 63%-confidence region in the two-dimensional thickness space is a representation of how optimal the bins are for material separation. The model is demonstrated with 0.1 mm of iodine and 2.2 mm of calcium using two adjacent bins encompassing the entire energy range. Bins bordering at the iodine k-edge of 33.2 keV are found to be optimal. When compared to two abutted energy bins with equal incident counts as used in the literature (bordering at 54 keV), the thickness uncertainties are reduced from approximately 4% to less than 1% (see Figure). This approach has been developed for two materials and is expandable to an arbitrary number of materials and bins.
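    The reference-count computation described above follows the Beer-Lambert law. A minimal sketch, with a hypothetical incident spectrum and attenuation coefficients (the numbers are illustrative, not the paper's):

```python
import math

def expected_counts(n0, mu, thickness):
    """Beer-Lambert: photons surviving in each energy bin,
    N(E) = N0(E) * exp(-sum_m mu_m(E) * t_m) over the materials m."""
    out = {}
    for e, n in n0.items():
        atten = sum(mu[m][e] * thickness[m] for m in thickness)
        out[e] = n * math.exp(-atten)
    return out

# Hypothetical incident counts per bin (keV) and attenuation coefficients (1/mm);
# the iodine values jump across its 33.2 keV k-edge (illustrative numbers only).
n0 = {30: 1000.0, 40: 1000.0}
mu = {"iodine": {30: 2.5, 40: 6.0},
      "calcium": {30: 0.4, 40: 0.2}}
counts = expected_counts(n0, mu, {"iodine": 0.1, "calcium": 2.2})
```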

  12. Auto-optimisation for three-dimensional conformal radiotherapy of nasopharyngeal carcinoma

    Wu, V.W.C.; Kwong, D.W.L.; Sham, J.S.T.; Mui, A.W.L.

    2003-01-01

    Purpose: The purpose of this study was to evaluate the application of auto-optimisation in the treatment planning of three-dimensional conformal radiotherapy (3DCRT) of nasopharyngeal carcinoma (NPC). Methods: Twenty-nine NPC patients were planned by both forward planning and auto-optimisation methods. The forward plans, which consisted of three coplanar facial fields, were produced according to the routine planning criteria. The auto-optimised plans, which consisted of 5-15 (median 9) fields, were generated by the planning system after prescribing the dose requirements and the importance weightings of the planning target volume and organs at risk. Plans produced by the two planning methods were compared by the dose volume histogram, tumour control probability (TCP), conformity index and normal tissue complication probability (NTCP). Results: The auto-optimised plans reduced the average planner's time by over 35 min. They demonstrated better TCP and conformity index than the forward plans (P=0.03 and 0.04, respectively). In addition, the parotid gland and temporo-mandibular (TM) joint were better spared, with mean dose reductions of 31.8% and 17.7%, respectively. The slight trade-off was a mild dose increase in the spinal cord and brain stem, with their maximum doses remaining within the tolerance limits. Conclusions: The findings demonstrated the potential of auto-optimisation for improving target dose and parotid sparing in the 3DCRT of NPC with a saving of the planner's time

  13. Auto-optimisation for three-dimensional conformal radiotherapy of nasopharyngeal carcinoma

    Wu, V.W.C. E-mail: orvinwu@polyu.edu.hk; Kwong, D.W.L.; Sham, J.S.T.; Mui, A.W.L

    2003-08-01

    Purpose: The purpose of this study was to evaluate the application of auto-optimisation in the treatment planning of three-dimensional conformal radiotherapy (3DCRT) of nasopharyngeal carcinoma (NPC). Methods: Twenty-nine NPC patients were planned by both forward planning and auto-optimisation methods. The forward plans, which consisted of three coplanar facial fields, were produced according to the routine planning criteria. The auto-optimised plans, which consisted of 5-15 (median 9) fields, were generated by the planning system after prescribing the dose requirements and the importance weightings of the planning target volume and organs at risk. Plans produced by the two planning methods were compared by the dose volume histogram, tumour control probability (TCP), conformity index and normal tissue complication probability (NTCP). Results: The auto-optimised plans reduced the average planner's time by over 35 min. They demonstrated better TCP and conformity index than the forward plans (P=0.03 and 0.04, respectively). In addition, the parotid gland and temporo-mandibular (TM) joint were better spared, with mean dose reductions of 31.8% and 17.7%, respectively. The slight trade-off was a mild dose increase in the spinal cord and brain stem, with their maximum doses remaining within the tolerance limits. Conclusions: The findings demonstrated the potential of auto-optimisation for improving target dose and parotid sparing in the 3DCRT of NPC with a saving of the planner's time.

  14. Global retrieval of soil moisture and vegetation properties using data-driven methods

    Rodriguez-Fernandez, Nemesio; Richaume, Philippe; Kerr, Yann

    2017-04-01

    Data-driven methods such as neural networks (NNs) are a powerful tool to retrieve soil moisture from multi-wavelength remote sensing observations at global scale. In this presentation we will review a number of recent results regarding the retrieval of soil moisture with the Soil Moisture and Ocean Salinity (SMOS) satellite, either using SMOS brightness temperatures as input data for the retrieval or using SMOS soil moisture retrievals as the reference dataset for the training. The presentation will discuss several possibilities for both the input datasets and the datasets to be used as reference for the supervised learning phase. Regarding the input datasets, it will be shown that NNs take advantage of the synergy of SMOS data and data from other sensors such as the Advanced Scatterometer (ASCAT, active microwaves) and MODIS (visible and infra-red). NNs have also been successfully used to construct long time series of soil moisture from the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) and SMOS. A NN with input data from AMSR-E observations and SMOS soil moisture as reference for the training was used to construct a dataset sharing a similar climatology and without a significant bias with respect to SMOS soil moisture. Regarding the reference data to train the data-driven retrievals, we will show different possibilities depending on the application. Using actual in situ measurements is challenging at global scale due to the sparse distribution of sensors. In contrast, in situ measurements have been successfully used to retrieve SM at continental scale in North America, where the density of in situ measurement stations is high. Using global land surface models to train the NN constitutes an interesting alternative to implement new remote sensing surface datasets. In addition, these datasets can be used to perform data assimilation into the model used as reference for the training. 
This approach has recently been tested at the European Centre

  15. Setting Priorities in Global Child Health Research Investments: Guidelines for Implementation of the CHNRI Method

    Rudan, Igor; Gibson, Jennifer L.; Ameratunga, Shanthi; El Arifeen, Shams; Bhutta, Zulfiqar A.; Black, Maureen; Black, Robert E.; Brown, Kenneth H.; Campbell, Harry; Carneiro, Ilona; Chan, Kit Yee; Chandramohan, Daniel; Chopra, Mickey; Cousens, Simon; Darmstadt, Gary L.; Gardner, Julie Meeks; Hess, Sonja Y.; Hyder, Adnan A.; Kapiriri, Lydia; Kosek, Margaret; Lanata, Claudio F.; Lansang, Mary Ann; Lawn, Joy; Tomlinson, Mark; Tsai, Alexander C.; Webster, Jayne

    2008-01-01

    This article provides detailed guidelines for the implementation of a systematic method for setting priorities in health research investments that was recently developed by the Child Health and Nutrition Research Initiative (CHNRI). The target audience for the proposed method is international agencies, large research funding donors, and national governments and policy-makers. The process has the following steps: (i) selecting the managers of the process; (ii) specifying the context and risk management preferences; (iii) discussing criteria for setting health research priorities; (iv) choosing a limited set of the most useful and important criteria; (v) developing means to assess the likelihood that proposed health research options will satisfy the selected criteria; (vi) systematic listing of a large number of proposed health research options; (vii) pre-scoring check of all competing health research options; (viii) scoring of health research options using the chosen set of criteria; (ix) calculating intermediate scores for each health research option; (x) obtaining further input from the stakeholders; (xi) adjusting intermediate scores taking into account the values of stakeholders; (xii) calculating overall priority scores and assigning ranks; (xiii) performing an analysis of agreement between the scorers; (xiv) linking computed research priority scores with investment decisions; (xv) feedback and revision. The CHNRI method is a flexible process that enables prioritizing health research investments at any level: institutional, regional, national, international, or global. PMID:19090596
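    Steps (ix)-(xii) reduce to a weighted scoring and ranking, which can be sketched as follows (the criteria names, weights and scores are invented for illustration and are not the CHNRI criteria set):

```python
def chnri_rank(options, weights):
    """Overall priority score = weighted mean of per-criterion scores,
    then rank the options in descending score order."""
    scored = []
    for name, crit in options.items():
        total = sum(weights[c] * s for c, s in crit.items()) / sum(weights.values())
        scored.append((name, round(total, 3)))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical research options scored 0-1 on three illustrative criteria;
# the "equity" weight reflects a stakeholder-value adjustment (step xi).
weights = {"answerability": 1.0, "effectiveness": 1.0, "equity": 1.5}
options = {
    "vaccine trial": {"answerability": 0.9, "effectiveness": 0.8, "equity": 0.6},
    "health system study": {"answerability": 0.7, "effectiveness": 0.6, "equity": 0.9},
}
ranking = chnri_rank(options, weights)
```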

  16. Global mass conservation method for dual-continuum gas reservoir simulation

    Wang, Yi; Sun, Shuyu; Gong, Liang; Yu, Bo

    2018-01-01

    In this paper, we find that the numerical simulation of gas flow in dual-continuum porous media may generate unphysical or non-robust results using a regular finite difference method. The reason is the unphysical mass loss caused by the gas compressibility and the non-diagonal dominance of the discretized equations caused by the non-linear well term. The well term contains the product of density and pressure. For oil flow, density is independent of pressure so that the well term is linear. For gas flow, density is related to pressure by the gas law so that the well term is non-linear. To avoid these two problems, numerical methods are proposed using the mass balance relation and the local linearization of the non-linear source term to ensure the global mass conservation and the diagonal dominance of the discretized equations in the computation. The proposed numerical methods are successfully applied to dual-continuum gas reservoir simulation. Mass conservation is satisfied while the computation becomes robust. Numerical results show that the production efficiency is very sensitive to the location of the production well relative to the large-permeability region. It decreases markedly when the production well is moved from the large-permeability region to the small-permeability region, even though the well is very close to the interface of the two regions. The production well is suggested to be placed inside the large-permeability region regardless of the specific position.
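    The local linearization of the non-linear well term can be illustrated on a single-cell toy problem (all coefficients are invented; the paper's scheme operates on the full discretized dual-continuum system):

```python
def solve_scalar(c, b, p0=1.0, iters=50):
    """Solve a*p + q(p) = b for one toy cell, where the well term
    q(p) = c*p**2 is quadratic because gas density is proportional to
    pressure. Linearise q about the current iterate p_k,
        q(p) ~= q(p_k) + q'(p_k)*(p - p_k),
    and move the derivative onto the diagonal, which keeps the
    linearised equation diagonally dominant."""
    a = 2.0  # toy transmissibility coefficient (invented)
    p = p0
    for _ in range(iters):
        q, dq = c * p * p, 2.0 * c * p
        # (a + dq) * p_new = b - q + dq * p_k
        p = (b - q + dq * p) / (a + dq)
    return p

# 2*p + 0.5*p**2 = 3 has the positive root p = sqrt(10) - 2.
p = solve_scalar(0.5, 3.0)
```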

  17. Global mass conservation method for dual-continuum gas reservoir simulation

    Wang, Yi

    2018-03-17

    In this paper, we find that the numerical simulation of gas flow in dual-continuum porous media may generate unphysical or non-robust results using a regular finite difference method. The reason is the unphysical mass loss caused by the gas compressibility and the non-diagonal dominance of the discretized equations caused by the non-linear well term. The well term contains the product of density and pressure. For oil flow, density is independent of pressure so that the well term is linear. For gas flow, density is related to pressure by the gas law so that the well term is non-linear. To avoid these two problems, numerical methods are proposed using the mass balance relation and the local linearization of the non-linear source term to ensure the global mass conservation and the diagonal dominance of the discretized equations in the computation. The proposed numerical methods are successfully applied to dual-continuum gas reservoir simulation. Mass conservation is satisfied while the computation becomes robust. Numerical results show that the production efficiency is very sensitive to the location of the production well relative to the large-permeability region. It decreases markedly when the production well is moved from the large-permeability region to the small-permeability region, even though the well is very close to the interface of the two regions. The production well is suggested to be placed inside the large-permeability region regardless of the specific position.

  18. Experimental implementation of a low-frequency global sound equalization method based on free field propagation

    Santillan, Arturo Orozco; Pedersen, Christian Sejer; Lydolf, Morten

    2007-01-01

    An experimental implementation of a global sound equalization method in a rectangular room using active control is described in this paper. The main purpose of the work has been to provide experimental evidence that sound can be equalized in a continuous three-dimensional region, the listening zone......, which occupies a considerable part of the complete volume of the room. The equalization method, based on the simulation of a progressive plane wave, was implemented in a room with inner dimensions of 2.70 m x 2.74 m x 2.40 m. With this method, the sound was reproduced by a matrix of 4 x 5 loudspeakers...... in one of the walls. After traveling through the room, the sound wave was absorbed on the opposite wall, which had a similar arrangement of loudspeakers, by means of active control. A set of 40 digital FIR filters was used to modify the original input signal before it was fed to the loudspeakers, one...

  19. Extending Particle Swarm Optimisers with Self-Organized Criticality

    Løvbjerg, Morten; Krink, Thiemo

    2002-01-01

    Particle swarm optimisers (PSOs) show potential in function optimisation, but still have room for improvement. Self-organized criticality (SOC) can help control the PSO and add diversity. Extending the PSO with SOC seems promising, reaching faster convergence and better solutions.
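    For context, a minimal global-best PSO without the SOC extension looks like this (a generic sketch on a sphere objective; all parameter values are illustrative):

```python
import random

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Basic global-best particle swarm optimiser: each particle is pulled
    towards its personal best and the swarm's global best."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pb = [list(x) for x in xs]          # personal bests
    pbf = [f(x) for x in xs]
    g = list(pb[min(range(n), key=lambda i: pbf[i])])  # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pb[i][d] - xs[i][d])
                            + c2 * rng.random() * (g[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbf[i]:
                pb[i], pbf[i] = list(xs[i]), fx
        g = list(pb[min(range(n), key=lambda i: pbf[i])])
    return g, f(g)

best, val = pso(lambda v: sum(x * x for x in v))
```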

  20. Operational Radiological Protection and Aspects of Optimisation

    Lazo, E.; Lindvall, C.G.

    2005-01-01

    Since 1992, the Nuclear Energy Agency (NEA), along with the International Atomic Energy Agency (IAEA), has sponsored the Information System on Occupational Exposure (ISOE). ISOE collects and analyses occupational exposure data and experience from over 400 nuclear power plants around the world and is a forum for radiological protection experts from both nuclear power plants and regulatory authorities to share lessons learned and best practices in the management of worker radiation exposures. In connection to the ongoing work of the International Commission on Radiological Protection (ICRP) to develop new recommendations, the ISOE programme has been interested in how the new recommendations would affect operational radiological protection application at nuclear power plants. Bearing in mind that the ICRP is developing, in addition to new general recommendations, a new recommendation specifically on optimisation, the ISOE programme created a working group to study the operational aspects of optimisation, and to identify the key factors in optimisation that could usefully be reflected in ICRP recommendations. In addition, the Group identified areas where further ICRP clarification and guidance would be of assistance to practitioners, both at the plant and the regulatory authority. The specific objective of this ISOE work was to provide operational radiological protection input, based on practical experience, to the development of new ICRP recommendations, particularly in the area of optimisation. This will help assure that new recommendations will best serve the needs of those implementing radiation protection standards, for the public and for workers, at both national and international levels. (author)

  1. Optimisation of surgical care for rectal cancer

    Borstlap, W.A.A.

    2017-01-01

    Optimisation of surgical care means weighing the risk of treatment related morbidity against the patients’ potential benefits of a surgical intervention. The first part of this thesis focusses on the anaemic patient undergoing colorectal surgery. Hypothesizing that a more profound haemoglobin

  2. On optimal development and becoming an optimiser

    de Ruyter, D.J.

    2012-01-01

    The article aims to provide a justification for the claim that optimal development and becoming an optimiser are educational ideals that parents should pursue in raising their children. Optimal development is conceptualised as enabling children to grow into flourishing persons, that is persons who

  3. Particle Swarm Optimisation with Spatial Particle Extension

    Krink, Thiemo; Vesterstrøm, Jakob Svaneborg; Riget, Jacques

    2002-01-01

    In this paper, we introduce spatial extension to particles in the PSO model in order to overcome premature convergence in iterative optimisation. The standard PSO and the new model (SEPSO) are compared w.r.t. performance on well-studied benchmark problems. We show that the SEPSO indeed managed...

  4. OPTIMISATION OF COMPRESSIVE STRENGTH OF PERIWINKLE ...

    In this paper, a regression model is developed to predict and optimise the compressive strength of periwinkle shell aggregate concrete using Scheffe's regression theory. The results obtained from the derived regression model agreed favourably with the experimental data. The model was tested for adequacy using a student ...

  5. Water distribution systems design optimisation using metaheuristics ...

    The topic of multi-objective water distribution systems (WDS) design optimisation using metaheuristics is investigated, comparing numerous modern metaheuristics, including several multi-objective evolutionary algorithms, an estimation of distribution algorithm and a recent hyperheuristic named AMALGAM (an evolutionary ...

  6. Thermodynamic optimisation of a heat exchanger

    Cornelissen, Rene; Hirs, Gerard

    1999-01-01

    The objective of this paper is to show that for the optimal design of an energy system, where there is a trade-off between exergy saving during operation and exergy use during construction of the energy system, exergy analysis and life cycle analysis should be combined. An exergy optimisation of a

  7. Self-optimising control of sewer systems

    Mauricio Iglesias, Miguel; Montero-Castro, Ignacio; Mollerup, Ane Loft

    2013-01-01

    The definition of an optimal performance was carried out through a two-stage optimisation (stochastic and deterministic) to take into account both the overflow during the current rain event as well as the expected overflow, given the probability of a future rain event. The methodology is successfully applied...

  8. Optimising parallel R correlation matrix calculations on gene expression data using MapReduce.

    Wang, Shicai; Pandis, Ioannis; Johnson, David; Emam, Ibrahim; Guitton, Florian; Oehmichen, Axel; Guo, Yike

    2014-11-05

    High-throughput molecular profiling data has been used to improve clinical decision making by stratifying subjects based on their molecular profiles. Unsupervised clustering algorithms can be used for stratification purposes. However, the current speed of the clustering algorithms cannot meet the requirement of large-scale molecular data due to poor performance of the correlation matrix calculation. With high-throughput sequencing technologies promising to produce even larger datasets per subject, we expect the performance of the state-of-the-art statistical algorithms to be further impacted unless efforts towards optimisation are carried out. MapReduce is a widely used high performance parallel framework that can solve the problem. In this paper, we evaluate the current parallel modes for correlation calculation methods and introduce an efficient data distribution and parallel calculation algorithm based on MapReduce to optimise the correlation calculation. We studied the performance of our algorithm using two gene expression benchmarks. In the micro-benchmark, our implementation using MapReduce, based on the R package RHIPE, demonstrates a 3.26-5.83 fold increase compared to the default Snowfall and a 1.56-1.64 fold increase compared to the basic RHIPE in the Euclidean, Pearson and Spearman correlations. Though vanilla R and the optimised Snowfall outperform our optimised RHIPE in the micro-benchmark, they do not scale well with the macro-benchmark. In the macro-benchmark the optimised RHIPE performs 2.03-16.56 times faster than vanilla R. Benefiting from the 3.30-5.13 times faster data preparation, the optimised RHIPE performs 1.22-1.71 times faster than the optimised Snowfall. Both the optimised RHIPE and the optimised Snowfall successfully perform the Kendall correlation with the TCGA dataset within 7 hours. Both run more than 30 times faster than the estimated vanilla R. The performance evaluation found that the new MapReduce algorithm and its
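    The core idea of distributing the correlation matrix calculation can be sketched in plain NumPy, with row chunks standing in for map tasks (this is not the RHIPE/MapReduce implementation, just an illustration of the chunked computation):

```python
import numpy as np

def pearson_matrix(X, chunk=2):
    """All-pairs Pearson correlation of the rows of X, computed
    chunk-by-chunk; each chunk corresponds to one "map" task that a
    MapReduce framework would distribute across workers."""
    Z = X - X.mean(axis=1, keepdims=True)          # centre each row
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)  # normalise each row
    n = Z.shape[0]
    C = np.empty((n, n))
    for i in range((0), n, chunk):
        C[i:i + chunk] = Z[i:i + chunk] @ Z.T      # partial result per chunk
    return C

# Tiny example: row 1 is a positive scaling of row 0, row 2 is reversed.
X = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],
              [4.0, 3.0, 2.0, 1.0]])
C = pearson_matrix(X)
```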

  9. Stochastic Automata for Outdoor Semantic Mapping using Optimised Signal Quantisation

    Caponetti, Fabio; Blas, Morten Rufus; Blanke, Mogens

    2011-01-01

    Autonomous robots require many types of information to obtain intelligent and safe behaviours. For outdoor operations, semantic mapping is essential and this paper proposes a stochastic automaton to localise the robot within the semantic map. For correct modelling and classification under...... uncertainty, this paper suggests quantising robotic perceptual features, according to a probabilistic description, and then optimising the quantisation. The proposed method is compared with other state-of-the-art techniques that can assess the confidence of their classification. Data recorded on an autonomous...

  10. Optimising Shovel-Truck Fuel Consumption using Stochastic ...

    Optimising the fuel consumption and truck waiting time can result in significant fuel savings. The paper demonstrates that stochastic simulation is an effective tool for optimising the utilisation of fossil-based fuels in mining and related industries. Keywords: Stochastic, Simulation Modelling, Mining, Optimisation, Shovel-Truck ...

  11. Optimisation of Transmission Systems by use of Phase Shifting Transformers

    Verboomen, J

    2008-10-13

    In this thesis, transmission grids with PSTs (Phase Shifting Transformers) are investigated. In particular, the following goals are put forward: (a) The analysis and quantification of the impact of a PST on a meshed grid. This includes the development of models for the device; (b) The development of methods to obtain optimal coordination of several PSTs in a meshed grid. An objective function should be formulated, and an optimisation method must be adopted to solve the problem; and (c) The investigation of different strategies to use a PST. Chapter 2 gives a short overview of active power flow controlling devices. In chapter 3, a first step towards optimal PST coordination is taken. In chapter 4, metaheuristic optimisation methods are discussed. Chapter 5 introduces DC load flow approximations, leading to analytically closed equations that describe the relation between PST settings and active power flows. In chapter 6, some applications of the methods that are developed in earlier chapters are presented. Chapter 7 contains the conclusions of this thesis, as well as recommendations for future work.
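    The DC load flow relation between a PST setting and active power flows (chapter 5) can be illustrated on a three-bus ring, modelling the PST as an equivalent injection pair at the line's terminal buses (all network data below are invented):

```python
import numpy as np

# Three-bus ring: lines (0-1), (1-2), (0-2), all with reactance x = 0.1 p.u.
# Injections: +1.0 p.u. at bus 0, -1.0 p.u. at bus 2; bus 0 is the reference.
lines = [(0, 1), (1, 2), (0, 2)]
x = 0.1
P = np.array([1.0, 0.0, -1.0])

def dc_flows(shift_02=0.0):
    """DC load flow with an ideal PST of angle shift_02 (rad) in line 0-2:
    flow on that line becomes (theta_0 - theta_2 + shift)/x, which is
    equivalent to the injection pair (-shift/x, +shift/x) at its buses."""
    B = np.zeros((3, 3))
    for i, j in lines:
        B[i, i] += 1 / x; B[j, j] += 1 / x
        B[i, j] -= 1 / x; B[j, i] -= 1 / x
    Pinj = P + np.array([-shift_02 / x, 0.0, shift_02 / x])
    theta = np.zeros(3)
    theta[1:] = np.linalg.solve(B[1:, 1:], Pinj[1:])  # slack bus removed
    return {(i, j): (theta[i] - theta[j]) / x
                    + (shift_02 / x if (i, j) == (0, 2) else 0.0)
            for i, j in lines}

base = dc_flows(0.0)      # 2/3 of the power takes the direct line 0-2
shifted = dc_flows(0.02)  # a positive shift pushes more flow through it
```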

  12. Analysis and Optimisation of Carcass Production for Flexible Pipes

    Nielsen, Peter Søe

    Un-bonded flexible pipes are used in the offshore oil and gas industry worldwide, transporting hydrocarbons from the seafloor to floating production vessels topside. Flexible pipes are advantageous over rigid pipelines in dynamic applications and during installation as they are delivered in full length...... The focus of the thesis is the analysis and optimisation of the carcass manufacturing process by means of a fundamental investigation in the fields of formability, failure modes/mechanisms, Finite Element Analysis (FEA), simulative testing and tribology. A study of failure mechanisms in carcass production is performed by being present......-axial tension FLC points were attained. Analysis of weld fracture of duplex stainless steel EN 1.4162 is carried out, determining strains with the GOM ARAMIS automated strain measurement system, which shows that strain increases faster in the weld zone than the global strain of the parent material. Fracture......

  13. Optimisation of liquid scintillation counting conditions to determine low activity levels of tritium and radiostrontium in aqueous solution

    Rauret, Gemma; Mestres, J.S.; Ribera, Merce; Rajadel, Pilar (Barcelona Univ. (Spain). Dept. de Quimica Analitica)

    1990-08-01

    An optimisation of the counting conditions for the measurement of aqueous solutions of tritium or radiostrontium using Insta-Gel II as scintillator is presented. The variables optimised were the window, the ratio of mass of sample to mass of scintillator and the total volume of the counting mixture. An optimisation function which takes into account each of these variables, the background and also the efficiency is proposed. The conditions established allow the lowest possible detection limit to be reached. For tritium, this value was compared with that obtained when the standard method for water analysis was applied. (author).

  14. Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling

    Rastigejev, Y.

    2011-12-01

    Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties, associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. The computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts the pollutant mixing and transport dynamics at typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will only grow over the next decade with the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding finer levels of resolution in locations of fine-scale development and removing them in locations of smooth solution behavior. The algorithm is based on mathematically well-established wavelet theory, which allows us to provide error estimates of the solution that are used in conjunction with an appropriate threshold criterion to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested for a variety of benchmark problems
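The refinement criterion described above (wavelet amplitudes compared against a threshold) can be illustrated with a one-dimensional Haar-style sketch. This is my own simplification for illustration; the paper's WAMR scheme is multilevel and applied to 3-D chemical transport fields:

```python
# A Haar "detail" coefficient measures how badly a grid point is predicted
# from its coarse-grid neighbours; points with large details are flagged
# for refinement, points with small details can stay on the coarse grid.

def haar_details(values):
    """Detail coefficient at each odd index: value minus neighbour average."""
    return {i: values[i] - 0.5 * (values[i - 1] + values[i + 1])
            for i in range(1, len(values) - 1, 2)}

def refine_mask(values, threshold):
    """Indices whose detail exceeds the threshold -> keep fine resolution."""
    return sorted(i for i, d in haar_details(values).items()
                  if abs(d) > threshold)

smooth = [x * 0.1 for x in range(9)]   # linear profile: all details vanish
kinked = [0, 0, 0, 0, 5, 0, 0, 0, 0]   # sharp feature at index 4
```

On the smooth profile no points are flagged, while the sharp feature flags exactly the points adjacent to it; this is how the non-uniform grid tracks fine-scale development.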

  15. A non-linear branch and cut method for solving discrete minimum compliance problems to global optimality

    Stolpe, Mathias; Bendsøe, Martin P.

    2007-01-01

    This paper presents some initial results pertaining to a search for globally optimal solutions to a challenging benchmark example proposed by Zhou and Rozvany. This means that we are dealing with global optimization of the classical single load minimum compliance topology design problem with a fixed...... finite element discretization and with discrete design variables. Global optimality is achieved by the implementation of some specially constructed convergent nonlinear branch and cut methods, based on the use of natural relaxations and by applying strengthening constraints (linear valid inequalities...

  17. Method of steering the gain of a multiple antenna global positioning system receiver

    Evans, Alan G.; Hermann, Bruce R.

    1992-06-01

    A method for steering the gain of a multiple antenna Global Positioning System (GPS) receiver toward a plurality of GPS satellites simultaneously is provided. The GPS signals of a known wavelength are processed digitally for a particular instant in time. A range difference, or propagation delay, between each antenna for GPS signals received from each satellite is first resolved. The range difference consists of a fractional wavelength difference and an integer wavelength difference. The fractional wavelength difference is determined by each antenna's tracking loop. The integer wavelength difference is based upon the known wavelength and the separation between each antenna with respect to each satellite position. The range difference is then used to digitally delay the GPS signals at each antenna with respect to a reference antenna. The signal at the reference antenna is then summed with the digitally delayed signals to generate a composite antenna gain. The method searches for the correct number of integer wavelengths to maximize the composite gain. The range differences are also used to determine the attitude of the array.
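The integer-wavelength search can be sketched with a toy two-antenna, single-satellite model. This is an illustration of the idea only: the patent steers a full array using measured signals and known antenna separation, whereas here a two-tone signal (GPS L1 and L2 carrier frequencies) stands in for the finite-bandwidth GPS signal, so that only the correct integer delay aligns both tones at once, and the true range difference is an invented number:

```python
import math

C = 299792458.0               # speed of light, m/s
F1, F2 = 1.57542e9, 1.2276e9  # Hz, GPS L1 and L2 carrier frequencies
LAMBDA1 = C / F1              # m, about 0.19

def composite_gain(true_diff_m, candidate_m):
    """Summed-signal power (up to a constant) when the second antenna is
    delay-compensated by candidate_m instead of the true range difference.
    Each tone contributes cos^2(pi * f * timing_error)."""
    err_s = (true_diff_m - candidate_m) / C
    return (math.cos(math.pi * F1 * err_s) ** 2
            + math.cos(math.pi * F2 * err_s) ** 2)

def resolve_integer(true_diff_m, fractional_m, max_int=20):
    """Search integer wavelength counts for the candidate range difference
    (n * wavelength + fractional part) maximising the composite gain."""
    return max(range(-max_int, max_int + 1),
               key=lambda n: composite_gain(true_diff_m,
                                            n * LAMBDA1 + fractional_m))

true_diff = 0.75                # m, invented inter-antenna range difference
frac = true_diff % LAMBDA1      # what the carrier tracking loop would report
n = resolve_integer(true_diff, frac)
recovered = n * LAMBDA1 + frac  # equals true_diff at the gain maximum
```

A single pure carrier would make every integer candidate look equally good; the finite bandwidth (here, the second tone) is what makes the maximum unique, mirroring the patent's search for the integer count that maximises the composite gain.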

  18. The optimisation of a water distribution system using Bentley WaterGEMS software

    Świtnicka Karolina

    2017-01-01

    The proper maintenance of water distribution systems (WDSs) requires multiple actions from operators in order to ensure optimal functioning. Usually, all requirements should be adjusted simultaneously; therefore, the decision-making process is often supported by multi-criteria optimisation methods. Significant improvements in the exploitation conditions of WDSs can be achieved by connecting small water supply networks into group systems. Among the many potential tools supporting advanced maintenance and management of WDSs, particularly valuable are those that can find the optimal solution through implemented metaheuristic methods, such as the genetic algorithm. In this paper, an exemplary WDS functioning optimisation is presented for a group water supply system. The optimised parameters included: maximisation of water flow velocity, regulation of pressure head, minimisation of water retention time in the network (water age) and minimisation of pump energy consumption. All simulations were performed in Bentley WaterGEMS software.

  19. A Synchronous-Asynchronous Particle Swarm Optimisation Algorithm

    Ab Aziz, Nor Azlina; Mubin, Marizan; Mohamad, Mohd Saberi; Ab Aziz, Kamarulzaman

    2014-01-01

    In the original particle swarm optimisation (PSO) algorithm, the particles' velocities and positions are updated after the whole swarm performance is evaluated. This algorithm is also known as synchronous PSO (S-PSO). The strength of this update method is in the exploitation of the information. Asynchronous update PSO (A-PSO) has been proposed as an alternative to S-PSO. A particle in A-PSO updates its velocity and position as soon as its own performance has been evaluated. Hence, particles are updated using partial information, leading to stronger exploration. In this paper, we attempt to improve PSO by merging both update methods to utilise the strengths of both methods. The proposed synchronous-asynchronous PSO (SA-PSO) algorithm divides the particles into smaller groups. The best member of a group and the swarm's best are chosen to lead the search. Members within a group are updated synchronously, while the groups themselves are asynchronously updated. Five well-known unimodal functions, four multimodal functions, and a real world optimisation problem are used to study the performance of SA-PSO, which is compared with the performances of S-PSO and A-PSO. The results are statistically analysed and show that the proposed SA-PSO has performed consistently well. PMID:25121109
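The synchronous/asynchronous distinction is easy to state in code. Below is a minimal one-dimensional sketch of my own, minimising the sphere function f(x) = x², with both update schemes; the authors' SA-PSO additionally splits the swarm into groups (synchronous within a group, asynchronous between groups), which is omitted here:

```python
import random

def pso(mode, iters=60, n=10, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimise f(x) = x*x with either the S-PSO or A-PSO update scheme."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(n)]
    v = [0.0] * n
    pbest = list(x)                       # personal best positions
    gbest = min(pbest, key=lambda p: p * p)

    def move(i, leader):
        v[i] = (w * v[i]
                + c1 * rng.random() * (pbest[i] - x[i])
                + c2 * rng.random() * (leader - x[i]))
        x[i] += v[i]
        if x[i] * x[i] < pbest[i] * pbest[i]:
            pbest[i] = x[i]

    for _ in range(iters):
        if mode == "sync":                # S-PSO: whole swarm evaluated first,
            leader = gbest                # all particles see the same leader
            for i in range(n):
                move(i, leader)
            gbest = min(gbest, *pbest, key=lambda p: p * p)
        else:                             # A-PSO: gbest refreshed as soon as
            for i in range(n):            # each particle is evaluated
                move(i, gbest)
                if pbest[i] * pbest[i] < gbest * gbest:
                    gbest = pbest[i]
    return gbest

sync_best, async_best = pso("sync"), pso("async")
```

The only structural difference is where the global best is refreshed: once per iteration (exploitation of common information) versus once per particle (stronger exploration from partial information), exactly the trade-off the paper's SA-PSO merges.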

  20. Optimised performance of industrial high resolution computerised tomography

    Maangaard, M.

    2000-01-01

    The purpose of non-destructive evaluation (NDE) is to acquire knowledge of the investigated sample. Digital X-ray imaging techniques such as radiography or computerised tomography (CT) produce images of the interior of a sample. The obtained image quality determines the possibility of detecting sample-related features, e.g. details and flaws. This thesis presents a method of optimising the performance of industrial X-ray equipment for the imaging task at issue, in order to obtain images of high quality. CT produces maps of the X-ray linear attenuation of the sample's interior; it can produce two-dimensional cross-section images or three-dimensional images with volumetric information on the investigated sample. The image contrast and noise depend on both the investigated sample and the equipment and settings used (X-ray tube potential, X-ray filtration, exposure time, etc.). Hence, it is vital to find the optimal equipment settings in order to obtain images of high quality. To be able to mathematically optimise the image quality, it is necessary to have a model of the X-ray imaging system together with an appropriate measure of image quality. The optimisation is performed with a developed model for an X-ray image-intensifier-based radiography system. The model predicts the mean value and variance of the measured signal level in the collected radiographic images. The traditionally used measure of physical image quality is the signal-to-noise ratio (SNR). To calculate the signal-to-noise ratio, a well-defined detail (flaw) is required. It was found that maximising the SNR leads to ambiguities: the optimised settings found by maximising the SNR were dependent on the material in the detail. When CT is performed on irregularly shaped samples containing density and compositional variations, it is difficult to define which SNR to use for optimisation. This difficulty is solved by the measures of physical image quality proposed here, the ratios geometry

  1. Real-time optimisation of the Hoa Binh reservoir, Vietnam

    Richaud, Bertrand; Madsen, Henrik; Rosbjerg, Dan

    2011-01-01

    Multi-purpose reservoirs often have to be managed according to conflicting objectives, which requires efficient tools for trading off the objectives. This paper proposes a multi-objective simulation-optimisation approach that couples off-line rule curve optimisation with on-line real-time optimisation. First, the simulation-optimisation framework is applied for optimising reservoir operating rules. Secondly, real-time and forecast information is used for on-line optimisation that focuses on short-term goals, such as flood control or hydropower generation, without compromising the deviation...... in the downstream part of the Red River, and at the same time to increase hydropower generation and to save water for the dry season. The real-time optimisation procedure further improves the efficiency of the reservoir operation and enhances the flexibility for the decision-making. Finally, the quality......

  2. Approaches and challenges to optimising primary care teams’ electronic health record usage

    Nancy Pandhi

    2014-07-01

    Background: Although the presence of an electronic health record (EHR) alone does not ensure high-quality, efficient care, few studies have focused on the work of those charged with optimising use of existing EHR functionality. Objective: To examine the approaches used and challenges perceived by analysts supporting the optimisation of primary care teams' EHR use at a large U.S. academic health care system. Methods: A qualitative study was conducted. Optimisation analysts and their supervisor were interviewed and data were analysed for themes. Results: Analysts needed to reconcile the tension created by organisational mandates focused on the standardisation of EHR processes with the primary care teams' demand for EHR customisation. They gained an understanding of health information technology (HIT) leadership's and primary care teams' goals through attending meetings, reading meeting minutes and visiting with clinical teams. Within what was organisationally possible, EHR education could then be tailored to fit team needs. Major challenges were related to organisational attempts to standardise EHR use despite varied clinic contexts, personnel readiness and technical issues with the EHR platform. Forcing standardisation upon clinical needs that current EHR functionality could not satisfy was difficult. Conclusions: Dedicated optimisation analysts can add value to health systems by playing a mediating role between HIT leadership and care teams. Our findings imply that EHR optimisation should be performed with an in-depth understanding of the workflow, cognitive and interactional activities in primary care.

  3. Reduction environmental effects of civil aircraft through multi-objective flight plan optimisation

    Lee, D S; Gonzalez, L F; Walker, R; Periaux, J; Onate, E

    2010-01-01

    With rising environmental concern, the reduction of critical aircraft emissions, including carbon dioxide (CO2) and nitrogen oxides (NOx), is one of the most important aeronautical problems. Many approaches are possible, such as designing a new wing/aircraft shape or a new, more efficient engine. This paper instead provides a set of acceptable flight plans as a first step, without replacing current aircraft. The paper investigates a green aircraft design optimisation in terms of aircraft range, mission fuel weight (CO2) and NOx using advanced Evolutionary Algorithms coupled to flight optimisation system software. Two multi-objective design optimisations are conducted to find the best set of flight plans for current aircraft, considering discretised altitudes and Mach numbers without redesigning the aircraft shape or engine type. The objectives of the first optimisation are to maximise aircraft range while minimising NOx at constant mission fuel weight. The second optimisation considers minimisation of mission fuel weight and NOx with a fixed aircraft range. Numerical results show that the method is able to capture a set of useful trade-offs that reduce NOx and CO2 (minimum mission fuel weight).
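The "set of useful trade-offs" produced by such a multi-objective optimisation is a Pareto front. A minimal sketch of non-dominated filtering for two minimisation objectives (the candidate flight plans and their fuel/NOx numbers below are invented for illustration):

```python
# A plan dominates another if it is no worse in both objectives and
# strictly better in at least one; the Pareto front is what survives.

def dominates(a, b):
    """True if plan a dominates plan b (both objectives minimised)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(plans):
    """Keep only the non-dominated plans."""
    return [p for p in plans if not any(dominates(q, p) for q in plans)]

# Hypothetical (mission fuel weight, NOx) pairs for candidate flight plans.
plans = [(100, 9), (90, 12), (95, 10), (110, 8), (120, 8)]
front = pareto_front(plans)
```

Here (120, 8) is discarded because (110, 8) burns less fuel for the same NOx; the remaining plans are the trade-off set a decision-maker would choose from.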

  4. A Study on Efficiency Improvement of the Hybrid Monte Carlo/Deterministic Method for Global Transport Problems

    Kim, Jong Woo; Woo, Myeong Hyeon; Kim, Jae Hyun; Kim, Do Hyun; Shin, Chang Ho; Kim, Jong Kyung

    2017-01-01

    In this study, a hybrid Monte Carlo/deterministic method is explained for radiation transport analysis in a global system. The FW-CADIS methodology constructs the weight-window parameters and is useful for most global MC calculations. However, due to the assumption that a particle is scored at a tally, fewer particles are transported to the periphery of mesh tallies. To compensate for this space dependency, we modified a module in the ADVANTG code to add the proposed method. We solved a simple test problem for comparison with results from the FW-CADIS methodology, and it was confirmed that a uniform statistical error was secured as intended. In the future, more practical problems will be added. It might be useful to perform radiation transport analysis using the hybrid Monte Carlo/deterministic method in global transport problems.

  5. Training the next generation of global health advocates through experiential education: A mixed-methods case study evaluation.

    Hoffman, Steven J; Silverberg, Sarah L

    2015-10-15

    This case study evaluates a global health education experience aimed at training the next generation of global health advocates. Demand and interest in global health among Canadian students are well documented, despite the difficulty of integrating meaningful experiences into curricula. Global health advocacy was taught to 19 undergraduate students at McMaster University through an experiential education course, during which they developed a national advocacy campaign on global access to medicines. A quantitative survey and an analysis of social network dynamics were conducted, along with a qualitative analysis of written work and course evaluations. Data were interpreted through a thematic synthesis approach. Themes were identified related to students' learning outcomes, experience and class dynamics. The experiential education format helped students gain authentic, real-world experience in global health advocacy and leadership. The tangible implications of their course work were a key motivating factor. While experiential education is an effective tool for some learning outcomes, it is not suitable for all; moreover, group dynamics and evaluation methods affect the learning environment. Real-world global health issues, public health practice and advocacy approaches can be effectively taught through experiential education, alongside skills like communication and professionalism. Students developed a nuanced understanding of many strategies, challenges and barriers that exist in advocating for public health ideas. These experiences are potentially empowering and confidence-building despite the heavy time commitment they require. Attention should be given to how such experiences are designed, as course dynamics and grading structure significantly influence students' experience.

  6. Fractures in sport: Optimising their management and outcome

    Robertson, Greg AJ; Wood, Alexander M

    2015-01-01

    Fractures in sport are a specialised cohort of fracture injuries, occurring in a high functioning population, in which the goals are rapid restoration of function and return to play with the minimal symptom profile possible. While the general principles of fracture management, namely accurate fracture reduction, appropriate immobilisation and timely rehabilitation, guide the treatment of these injuries, management of fractures in athletic populations can differ significantly from those in the general population, due to the need to facilitate a rapid return to high demand activities. However, despite fractures comprising up to 10% of all of sporting injuries, dedicated research into the management and outcome of sport-related fractures is limited. In order to assess the optimal methods of treating such injuries, and so allow optimisation of their outcome, the evidence for the management of each specific sport-related fracture type requires assessment and analysis. We present and review the current evidence directing management of fractures in athletes with an aim to promote valid innovative methods and optimise the outcome of such injuries. From this, key recommendations are provided for the management of the common fracture types seen in the athlete. Six case reports are also presented to illustrate the management planning and application of sport-focussed fracture management in the clinical setting. PMID:26716081

  7. Optimisation of Storage for Concentrated Solar Power Plants

    Luigi Cirocco

    2014-12-01

    The proliferation of non-scheduled generation from renewable electrical energy sources such as concentrated solar power (CSP) presents a need for enabling scheduled generation by incorporating energy storage, either via directly coupled Thermal Energy Storage (TES) or Electrical Storage Systems (ESS) distributed within the electrical network or grid. The challenges for 100% renewable energy generation are to minimise capitalisation cost and to maximise energy dispatch capacity. The aims of this review article are twofold: to review storage technologies and to survey the optimisation techniques most appropriate for determining the optimal operation and size of storage for a system operating in the Australian National Energy Market (NEM). Storage technologies are reviewed to establish indicative characterisations of energy density, conversion efficiency, charge/discharge rates and costs. Optimisation techniques are partitioned by the time scales they suit best: from "whole of year", seasonal, monthly, weekly and daily averaging down to those matching the NEM's five-minute dispatch bidding, averaged on the half hour as the trading settlement spot price. Finally, a selection of the most promising research directions and methods to determine the optimal operation and sizing of storage for renewables in the grid is presented.

  8. A fault diagnosis system for PV power station based on global partitioned gradually approximation method

    Wang, S.; Zhang, X. N.; Gao, D. D.; Liu, H. X.; Ye, J.; Li, L. R.

    2016-08-01

    As solar photovoltaic (PV) power is applied extensively, more attention is being paid to the maintenance and fault diagnosis of PV power plants. Based on an analysis of the structure of a PV power station, the global partitioned gradually approximation method is proposed as a fault diagnosis algorithm to determine and locate faults in PV panels. The PV array is divided into 16x16 blocks and numbered. On the basis of this modular processing of the PV array, the current values of each block are analysed. The mean current value of each block is used to calculate the fault weight factor. A fault threshold is defined to determine the fault, and shading is considered to reduce the probability of misjudgements. A fault diagnosis system is designed and implemented with LabVIEW, with functions including real-time data display, online checks, statistics, real-time prediction and fault diagnosis. The algorithm is verified with data from PV plants. The results show that the fault diagnosis results are accurate and the system works well, confirming the validity and feasibility of the system. The developed system will benefit the maintenance and management of large-scale PV arrays.
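The block-mean screening idea can be sketched in a few lines. Note the assumptions: the abstract does not give the exact fault-weight formula, so the relative deviation of a block's mean current from the array-wide mean is used here as a stand-in, and the currents and threshold are invented:

```python
# Illustrative stand-in for the fault weight factor: how far each block's
# mean current deviates, relatively, from the mean over all blocks.

def fault_weights(block_currents):
    """Weight factor per block = |block mean - array mean| / array mean."""
    overall = sum(block_currents) / len(block_currents)
    return [abs(i - overall) / overall for i in block_currents]

def diagnose(block_currents, threshold=0.3):
    """Indices of blocks whose weight factor exceeds the fault threshold."""
    return [k for k, w in enumerate(fault_weights(block_currents))
            if w > threshold]

# Hypothetical block mean currents (A); block 3 is under-producing.
currents = [8.1, 8.0, 7.9, 3.5, 8.2, 8.0]
faulty = diagnose(currents)
```

A real system would, as the paper notes, also compare against irradiance to avoid flagging shaded blocks as faulty; that check is omitted in this sketch.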

  9. A TWO LEVEL ARCHITECTURE USING CONSENSUS METHOD FOR GLOBAL DECISION MAKING AGAINST DDoS ATTACKS

    S.Seetha

    2010-06-01

    Distributed denial of service is a major threat to the availability of Internet services. The distributed, large-scale nature of the Internet makes DDoS (Distributed Denial-of-Service) attacks stealthy and difficult to counter, and defense against them is one of the hardest security problems on the Internet. Recently these network attacks have been increasing, so more effective countermeasures are required. This requirement has motivated us to propose a novel mechanism against DDoS attacks. This paper presents the design details of a distributed defense mechanism in which the egress routers of the intermediate network coordinate with each other to provide the information necessary to detect and respond to an attack. A detection system based on a single site will have either a high false-positive or a high false-negative rate; unlike traditional IDSs (Intrusion Detection Systems), this method has the potential to achieve a high true-positive ratio. The detection systems exchange their information using consensus algorithms, so the overall detection time for global decision making is reduced.
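The information exchange between detection systems can be illustrated with a generic linear average-consensus iteration. This is a standard textbook scheme, not necessarily the paper's specific algorithm, and the router topology and local attack scores are invented:

```python
# Each detector repeatedly nudges its local attack-likelihood estimate
# toward its neighbours' estimates; all estimates converge to the global
# average without any central coordinator.

def consensus_step(values, neighbours, eps=0.3):
    """One linear consensus update: x_i += eps * sum_j (x_j - x_i)."""
    return [v + eps * sum(values[j] - v for j in neighbours[i])
            for i, v in enumerate(values)]

# Hypothetical ring of four egress-router detectors with local scores.
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
scores = [0.9, 0.1, 0.8, 0.2]
for _ in range(50):
    scores = consensus_step(scores, neighbours)
```

After the iterations every detector holds the network-wide average score (0.5 here), so a global decision threshold can be applied locally at each router.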

  10. “Contrapuntal Reading” as a Method, an Ethos, and a Metaphor for Global IR

    Bilgin, Pinar

    2016-01-01

    How to approach Global International Relations (IR)? This is a question asked by students of IR who recognize the limits of our field while expressing their concern that those who strive for a Global IR have been less-than-clear about the “how to?” question. In this article, I point to Edward W. ...

  11. Acoustic Resonator Optimisation for Airborne Particle Manipulation

    Devendran, Citsabehsan; Billson, Duncan R.; Hutchins, David A.; Alan, Tuncay; Neild, Adrian

    Advances in micro-electromechanical systems (MEMS) technology and biomedical research necessitate micro-machined manipulators to capture, handle and position delicate micron-sized particles. To this end, a parallel-plate acoustic resonator system has been investigated for the manipulation and entrapment of micron-sized particles in air. Numerical and finite element modelling was performed to optimise the design of the layered acoustic resonator. Obtaining an optimised resonator design requires careful consideration of the effects of layer thickness and material properties. Furthermore, the frequency-dependent acoustic attenuation is also considered within this study, leading to an optimum operational frequency range. Finally, experimental results demonstrated good levitation and capture of particles of various properties and sizes, down to as small as 14.8 μm.

  12. Pre-operative optimisation of lung function

    Naheed Azhar

    2015-01-01

    The anaesthetic management of patients with pre-existing pulmonary disease is a challenging task. It is associated with increased morbidity in the form of post-operative pulmonary complications. Pre-operative optimisation of lung function helps in reducing these complications. Patients are advised to stop smoking for a period of 4–6 weeks. This reduces airway reactivity, improves mucociliary function and decreases carboxyhaemoglobin. The widely used incentive spirometry may be useful only when combined with other respiratory muscle exercises; volume-based inspiratory devices have the best results. Pharmacotherapy of asthma and chronic obstructive pulmonary disease must be optimised before considering the patient for elective surgery. Beta-2 agonists, inhaled corticosteroids and systemic corticosteroids are the main drugs used for this, and several other drugs play an adjunctive role in medical therapy. A graded approach has been suggested to manage these patients for elective surgery, with the aim of achieving optimal pulmonary function.

  13. Global calculation of PWR reactor core using the two group energy solution by the response matrix method

    Conti, C.F.S.; Watson, F.V.

    1991-01-01

    A computational code to solve the two-energy-group neutron diffusion problem has been developed based on the Response Matrix Method. That method solves the global problem of a PWR core without using the cross-section homogenisation process, and is thus equivalent to a pointwise core calculation. The present version of the code calculates the response matrices by the first-order perturbative method and considers expansions in arbitrary-order Fourier series for the boundary and interior fluxes. (author)

  14. The Global Survey Method Applied to Ground-level Cosmic Ray Measurements

    Belov, A.; Eroshenko, E.; Yanke, V.; Oleneva, V.; Abunin, A.; Abunina, M.; Papaioannou, A.; Mavromichalaki, H.

    2018-04-01

    The global survey method (GSM) technique unites simultaneous ground-level observations of cosmic rays in different locations and allows us to obtain the main characteristics of cosmic-ray variations outside of the atmosphere and magnetosphere of Earth. This technique has been developed and applied in numerous studies over many years by the Institute of Terrestrial Magnetism, Ionosphere and Radiowave Propagation (IZMIRAN). We here describe the IZMIRAN version of the GSM in detail. With this technique, the hourly data of the world-wide neutron-monitor network from July 1957 until December 2016 were processed, and further processing is enabled upon the receipt of new data. The result is a database of homogeneous and continuous hourly characteristics of the density variations (an isotropic part of the intensity) and the 3D vector of the cosmic-ray anisotropy. It includes all of the effects that could be identified in galactic cosmic-ray variations that were caused by large-scale disturbances of the interplanetary medium in more than 50 years. These results in turn became the basis for a database on Forbush effects and interplanetary disturbances. This database allows correlating various space-environment parameters (the characteristics of the Sun, the solar wind, et cetera) with cosmic-ray parameters and studying their interrelations. We also present features of the coupling coefficients for different neutron monitors that enable us to make a connection from ground-level measurements to primary cosmic-ray variations outside the atmosphere and the magnetosphere. We discuss the strengths and weaknesses of the current version of the GSM as well as further possible developments and improvements. The method developed allows us to minimize the problems of the neutron-monitor network, which are typical for experimental physics, and to considerably enhance its advantages.

  15. Optimised dipper fine tunes shovel performance

    Fiscor, S.

    2005-06-01

    Joint efforts between mine operators, OEMs and researchers yield unexpected benefits: dippers for shovels used in coal, oil or hardrock mining can now be tailored to meet site-specific conditions. The article outlines a process being developed by CRCMining and P&H Mining Equipment to optimise the dipper, involving rapid prototyping and scale modelling of the dipper and the mine conditions. Scale models have been successfully field tested. 2 photos.

  16. Public transport optimisation emphasising passengers’ travel behaviour.

    Jensen, Jens Parbo; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

    Passengers in public transport complaining about their travel experiences are not uncommon. This might seem counterintuitive since several operators worldwide are presenting better key performance indicators year by year. The present PhD study focuses on developing optimisation algorithms to enhance the operations of public transport while explicitly emphasising passengers’ travel behaviour and preferences. Similar to economic theory, interactions between supply and demand are omnipresent in ...

  17. ATLAS software configuration and build tool optimisation

    Rybkin, Grigory; Atlas Collaboration

    2014-06-01

    ATLAS software code base is over 6 million lines organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers and used by more than 2500 physicists from over 200 universities and laboratories in 6 continents. To meet the challenge of configuration and building of this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build time performance that was optimised within several approaches: reduction of the number of reads of requirements files that are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of CMT commands used for build; introduction of package level build parallelism, i. e., parallelise the build of independent packages. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on CMT commands optimisation in general that made them approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CERN VMFS. 
The use of parallelism, caching and code optimisation significantly-by several times-reduced software build time, environment setup time, increased the efficiency of
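
    The package-level parallelism described above can be sketched generically: packages whose dependencies are already built form a "wave" that compiles concurrently. This is not CMT's implementation (CMT is a standalone build tool); it is a minimal illustration using Python's standard-library `graphlib` and a thread pool, with `build_one` standing in for the actual compile step.

```python
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

def parallel_build(deps, build_one, workers=4):
    """Build packages wave by wave: each wave holds packages whose
    dependencies are already built, so its members compile in parallel.

    deps maps each package to the set of packages it depends on.
    """
    ts = TopologicalSorter(deps)
    ts.prepare()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while ts.is_active():
            wave = ts.get_ready()            # packages with all deps built
            list(pool.map(build_one, wave))  # compile this wave concurrently
            for pkg in wave:
                ts.done(pkg)

built = []
parallel_build(
    {"app": {"libA", "libB"}, "libA": {"core"}, "libB": {"core"}, "core": set()},
    built.append,  # stand-in for an actual compile step
)
```

    With this dependency graph, `core` always builds first, `libA` and `libB` build concurrently in the second wave, and `app` builds last.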

  18. Inferring global upper-mantle shear attenuation structure by waveform tomography using the spectral element method

    Karaoǧlu, Haydar; Romanowicz, Barbara

    2018-06-01

    We present a global upper-mantle shear wave attenuation model that is built through a hybrid full-waveform inversion algorithm applied to long-period waveforms, using the spectral element method for wavefield computations. Our inversion strategy is based on an iterative approach that involves the inversion for successive updates in the attenuation parameter (δ Q^{-1}_μ) and elastic parameters (isotropic velocity VS, and radial anisotropy parameter ξ) through a Gauss-Newton-type optimization scheme that employs envelope- and waveform-type misfit functionals for the two steps, respectively. We also include source and receiver terms in the inversion steps for attenuation structure. We conducted a total of eight iterations (six for attenuation and two for elastic structure), and one inversion for updates to source parameters. The starting model included the elastic part of the relatively high-resolution 3-D whole-mantle seismic velocity model SEMUCB-WM1, which served to account for elastic focusing effects. The data set is a subset of the three-component surface waveform data set, filtered between 400 and 60 s, that contributed to the construction of the whole-mantle tomographic model SEMUCB-WM1. We applied strict selection criteria to this data set for the attenuation iteration steps, and investigated the effect of crustal attenuation structure on the retrieved mantle attenuation structure. Although a 1-D Qμ model with a constant value of 165 throughout the upper mantle was used as the starting model for the attenuation inversion, we were able to recover, in depth extent and strength, the high-attenuation zone present in the depth range 80-200 km. The final 3-D model, SEMUCB-UMQ, shows strong correlation with tectonic features down to 200-250 km depth, with low attenuation beneath the cratons, stable parts of continents and regions of old oceanic crust, and high attenuation along mid-ocean ridges and backarcs. Below 250 km, we observe strong attenuation in the

  19. Parametric studies and optimisation of pumped thermal electricity storage

    McTigue, Joshua D.; White, Alexander J.; Markides, Christos N.

    2015-01-01

    Highlights: • PTES is modelled by cycle analysis and a Schumann-style model of the thermal stores. • Optimised trade-off surfaces show a flat efficiency vs. energy density profile. • Overall roundtrip efficiencies of around 70% are not inconceivable. - Abstract: Several of the emerging technologies for electricity storage are based on some form of thermal energy storage (TES). Examples include liquid air energy storage, pumped heat energy storage and, at least in part, advanced adiabatic compressed air energy storage. Compared to other large-scale storage methods, TES benefits from relatively high energy densities, which should translate into a low cost per MW h of storage capacity and a small installation footprint. TES is also free from the geographic constraints that apply to hydro storage schemes. TES concepts for electricity storage rely on either a heat pump or refrigeration cycle during the charging phase to create a hot or a cold storage space (the thermal stores), or in some cases both. During discharge, the thermal stores are depleted by reversing the cycle such that it acts as a heat engine. The present paper is concerned with a form of TES that has both hot and cold packed-bed thermal stores, and for which the heat pump and heat engine are based on a reciprocating Joule cycle, with argon as the working fluid. A thermodynamic analysis is presented based on traditional cycle calculations coupled with a Schumann-style model of the packed beds. Particular attention is paid to the various loss-generating mechanisms and their effect on roundtrip efficiency and storage density. A parametric study is first presented that examines the sensitivity of results to assumed values of the various loss factors and demonstrates the rather complex influence of the numerous design variables. Results of an optimisation study are then given in the form of trade-off surfaces for roundtrip efficiency, energy density and power density. The optimised designs show a

  20. Natural Erosion of Sandstone as Shape Optimisation.

    Ostanin, Igor; Safonov, Alexander; Oseledets, Ivan

    2017-12-11

    Natural arches, pillars and other exotic sandstone formations have always attracted attention for their unusual shapes and amazing mechanical balance, which leave a strong impression of intelligent design rather than of a stochastic process. It has recently been demonstrated that these shapes could have been the result of the negative feedback between stress and erosion that originates in fundamental laws of friction between the rock's constituent particles. Here we present a deeper analysis of this idea and bridge it with the approaches utilized in shape and topology optimisation. It appears that the processes of natural erosion, driven by stochastic surface forces and the Mohr-Coulomb law of dry friction, can be viewed within the framework of local optimisation for minimum elastic strain energy. Our hypothesis is confirmed by numerical simulations of the erosion using the topological-shape optimisation model. Our work contributes to a better understanding of stochastic erosion and feasible landscape formations that could be found on Earth and beyond.

  1. Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation

    Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari

    2016-07-01

    In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiance levels. Due to the variations of the maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for a given number of PV modules, the optimal size and operating condition of a PV/EL system are achieved. The approach can be applied for different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that for the given location and PV system, the energy transfer efficiency of the PV/EL system can reach up to 97.83%.
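
    A minimal particle swarm optimiser in the spirit of the one used here can be sketched as follows. The objective below is an illustrative stand-in for the PV/EL transfer-efficiency model, not the paper's actual system equations; `w`, `c1` and `c2` are the usual inertia and cognitive/social coefficients.

```python
import random

def pso_maximise(objective, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimiser (maximisation); positions are
    clamped to the given per-dimension bounds."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = max(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)  # reproducibility
# Toy stand-in for the PV/EL efficiency surface, peaking at (2.0, 55.0)
best, val = pso_maximise(lambda x: -(x[0] - 2.0) ** 2 - ((x[1] - 55.0) / 10) ** 2,
                         bounds=[(0.5, 5.0), (20.0, 90.0)])
```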

  2. Assessment of grid optimisation measures for the German transmission grid using open source grid data

    Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.

    2018-02-01

    The expansion of capacities in the German transmission grid is a necessity for further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodical approach of the simulation model is presented and detailed descriptions of the grid model and the grid data used, which partly originate from open-source platforms, are provided. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures, and how the depreciation of economic costs can be determined considering construction costs. The developed simulations show that conventional grid expansion is more efficient and provides greater grid relief than the evaluated grid optimisation measures.

  3. Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology

    Kumar, Amit; Soota, Tarun; Kumar, Jitendra

    2018-03-01

    Wire electric discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with the Grey relation analysis method has been proposed and used to optimise the machining parameters of WEDM. A face-centred cubic design is used for conducting experiments on high speed steel (HSS) M2 grade workpiece material. The regression model of significant factors such as pulse-on time, pulse-off time, peak current and wire feed is considered for optimising the response variables: material removal rate (MRR), surface roughness and kerf width. The optimal condition of the machining parameters was obtained using the Grey relation grade. ANOVA is applied to determine the significance of the input parameters for optimising the Grey relation grade.
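
    The Grey relational grade used to rank experimental runs can be computed as follows: each response is normalised (larger-the-better for MRR, smaller-the-better for roughness and kerf width), and a run's grade is the average of its grey relational coefficients relative to the ideal sequence. This is a generic sketch; the response values below are illustrative, not the paper's data.

```python
def grey_relational_grades(responses, goals, zeta=0.5):
    """Grey relational analysis for multi-response optimisation.

    responses: one row per experiment, one column per response.
    goals: 'max' or 'min' per column.
    Returns one grey relational grade per experiment; higher is better.
    """
    cols = list(zip(*responses))
    norm_cols = []
    for col, goal in zip(cols, goals):
        lo, hi = min(col), max(col)
        if goal == 'max':
            norm_cols.append([(x - lo) / (hi - lo) for x in col])
        else:
            norm_cols.append([(hi - x) / (hi - lo) for x in col])
    grades = []
    for row in zip(*norm_cols):
        # Deviation from the ideal sequence (all ones); after normalisation
        # the global minimum/maximum deviations are 0 and 1, so the usual
        # coefficient (d_min + zeta*d_max) / (d + zeta*d_max) reduces to:
        coeffs = [zeta / (1.0 - v + zeta) for v in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Illustrative runs: (MRR, surface roughness, kerf width)
grades = grey_relational_grades(
    [[10.0, 2.0, 0.30], [8.0, 1.5, 0.25], [12.0, 2.5, 0.35]],
    ['max', 'min', 'min'])
```

    Here the second run ranks best: it sacrifices some MRR but achieves the lowest roughness and kerf width.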

  4. Optimisation of electrical system for offshore wind farms via genetic algorithm

    Chen, Zhe; Zhao, Menghua; Blaabjerg, Frede

    2009-01-01

    An optimisation platform based on a genetic algorithm (GA) is presented, where the main components of a wind farm and key technical specifications are used as input parameters and the electrical system design of the wind farm is optimised in terms of both production cost and system reliability. The power losses, wind power production, initial investment and maintenance costs are considered in the production cost. The availability of components and network redundancy are included in the reliability evaluation. The method of coding an electrical system to a binary string, which is processed by the GA, is developed. Different GA techniques are investigated based on a real example offshore wind farm. This optimisation platform has been demonstrated as a powerful tool for offshore wind farm design and evaluation.
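
    The coding of a design to a binary string processed by a GA can be illustrated with a minimal generational GA using tournament selection, one-point crossover, bit-flip mutation and elitism. The "count the set bits" fitness below is a toy stand-in for the paper's cost/reliability evaluation of a decoded electrical system design.

```python
import random

def ga_optimise(fitness, n_bits, pop_size=40, gens=60, cx=0.8, mut=0.02):
    """Minimal generational GA over fixed-length bit strings (maximisation),
    with tournament selection, one-point crossover, bit-flip mutation and
    two-individual elitism."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = sorted(pop, key=fitness, reverse=True)[:2]   # elitism
        while len(nxt) < pop_size:
            p1 = max(random.sample(pop, 3), key=fitness)   # tournament
            p2 = max(random.sample(pop, 3), key=fitness)
            if random.random() < cx:
                cut = random.randrange(1, n_bits)          # one-point crossover
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            nxt.append([b ^ (random.random() < mut) for b in child])  # mutation
        pop = nxt
    return max(pop, key=fitness)

random.seed(1)  # reproducibility
# Toy fitness: number of set bits (stand-in for cost/reliability scoring)
best = ga_optimise(sum, n_bits=20)
```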

  5. A novel low-power fluxgate sensor using a macroscale optimisation technique for space physics instrumentation

    Dekoulis, G.; Honary, F.

    2007-05-01

    This paper describes the design of a novel low-power single-axis fluxgate sensor. Several soft magnetic alloy materials have been considered, and the choice was based on the balance between maximum permeability and minimum saturation flux density values. The sensor has been modelled using the Finite Integration Theory (FIT) method. The sensor was subjected to a custom macroscale optimisation technique that reduced the power consumption significantly, by a factor of 16. The results of the sensor's optimisation technique will subsequently be used in the development of a cutting-edge ground-based magnetometer for the study of the complex solar wind-magnetospheric-ionospheric system.

  6. Easy-to-use procedure to optimise a chromatographic method. Application in the determination of bisphenol-A and phenol in toys by means of liquid chromatography with fluorescence detection.

    Arce, M M; Sanllorente, S; Ortiz, M C; Sarabia, L A

    2018-01-26

    Legal limits for phenol and bisphenol-A (BPA) in toys are 15 and 0.1 mg L-1, respectively. The latest studies show that in Europe the content of BPA reaching our bodies through different contact routes in no case exceeds the legal limits. However, the effects caused by continued intake of this analyte over a long time, and other possible processes that could increase its migration, are still under consideration by the responsible health agencies. This work proposes a multiresponse optimisation, using a D-optimal design, of two experimental factors (temperature and flow) at three levels and one (mobile phase composition) at four levels for determination by means of HPLC-FLD. The D-optimal design allows one to reduce the experimental effort from 36 to 11 experiments while guaranteeing the quality of the estimates. The fitted model is validated and, after the responses are estimated over the whole experimental domain, the experimental conditions that maximise peak areas and minimise retention times for both analytes are chosen by means of a Pareto front. In this way, the sensitivity and the time of analysis have been improved by this optimisation. The decision limit and capability of detection at these limits were 33.9 and 66.1 μg L-1 for phenol and 25.6 and 50.0 μg L-1 for BPA, respectively, when the probabilities of false negative and false positive were fixed at 0.05. The procedure has been successfully applied to determine phenol and BPA in different samples (toys, clinical serum bags and artificial tears). The simulants HCl 0.07 M and water were used for the analysis of toys. The quantity of phenol found in serum bags and in artificial tears ranged from 15 to 600 μg L-1. No BPA was found in the objects analysed. In addition, this work incorporates computer programmes which implement the procedure used (COOrdinates parallel plot and Pareto FROnt, COO-FRO) such that it can be used in any
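
    Selecting conditions that maximise peak area while minimising retention time amounts to extracting a Pareto front from the candidate conditions. A minimal sketch with illustrative (area, retention time) pairs, not the paper's data:

```python
def pareto_front(points):
    """Non-dominated (area, retention_time) pairs: maximise peak area,
    minimise retention time. A point is dominated if another point is at
    least as good in both objectives and differs from it."""
    def dominated(p, q):
        return q[0] >= p[0] and q[1] <= p[1] and q != p
    return [p for p in points if not any(dominated(p, q) for q in points)]

candidates = [(5.0, 2.0), (7.0, 2.5), (6.0, 3.0), (8.0, 4.0), (4.0, 1.5)]
front = pareto_front(candidates)
```

    Here (6.0, 3.0) is dominated by (7.0, 2.5), which has both a larger area and a shorter retention time; the other four points are mutually non-dominated trade-offs.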

  7. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among them a nearly linear increase per iteration of the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near-optimal.
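
    The decreasing-relaxation idea can be illustrated with the standard stochastic-approximation update: with relaxation factor 1/n, the relaxed power distribution is the running average of all Monte Carlo samples so far, which is why the number of histories is grown from iteration to iteration. This is a generic sketch of the update rule, not the authors' code.

```python
def relaxed_update(prev, sample, n):
    """Stochastic-approximation update after iteration n (1-based):
    S_n = S_{n-1} + alpha_n * (phi_n - S_{n-1}), with alpha_n = 1/n.
    With this choice S_n equals the running average of the samples
    phi_1 ... phi_n."""
    alpha = 1.0 / n
    return [s + alpha * (p - s) for s, p in zip(prev, sample)]

# Three illustrative Monte Carlo samples of a 2-node power distribution
power = [0.0, 0.0]
for n, phi in enumerate([[1.0, 4.0], [2.0, 2.0], [3.0, 0.0]], start=1):
    power = relaxed_update(power, phi, n)
# power converges to the sample mean, [2.0, 2.0]
```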

  8. Understanding Factors that Shape Gender Attitudes in Early Adolescence Globally: A Mixed-Methods Systematic Review

    Gibbs, Susannah; Blum, Robert Wm; Moreau, Caroline; Chandra-Mouli, Venkatraman; Herbert, Ann; Amin, Avni

    2016-01-01

    Background Early adolescence (ages 10–14) is a period of increased expectations for boys and girls to adhere to socially constructed and often stereotypical norms that perpetuate gender inequalities. The endorsement of such gender norms is closely linked to poor adolescent sexual and reproductive and other health-related outcomes yet little is known about the factors that influence young adolescents’ personal gender attitudes. Objectives To explore factors that shape gender attitudes in early adolescence across different cultural settings globally. Methods A mixed-methods systematic review was conducted of the peer-reviewed literature in 12 databases from 1984–2014. Four reviewers screened the titles and abstracts of articles and reviewed full text articles in duplicate. Data extraction and quality assessments were conducted using standardized templates by study design. Thematic analysis was used to synthesize quantitative and qualitative data organized by the social-ecological framework (individual, interpersonal and community/societal-level factors influencing gender attitudes). Results Eighty-two studies (46 quantitative, 31 qualitative, 5 mixed-methods) spanning 29 countries were included. Ninety percent of studies were from North America or Western Europe. The review findings indicate that young adolescents, across cultural settings, commonly express stereotypical or inequitable gender attitudes, and such attitudes appear to vary by individual sociodemographic characteristics (sex, race/ethnicity and immigration, social class, and age). Findings highlight that interpersonal influences (family and peers) are central influences on young adolescents’ construction of gender attitudes, and these gender socialization processes differ for boys and girls. The role of community factors (e.g. media) is less clear though there is some evidence that schools may reinforce stereotypical gender attitudes among young adolescents. Conclusions The findings from this

  9. Global Scale Exploration Seismics: Mapping Mantle Discontinuities with Inverse Scattering Methods and Millions of Seismograms

    van der Hilst, R. D.; de Hoop, M. V.; Shim, S. H.; Shang, X.; Wang, P.; Cao, Q.

    2012-04-01

    Over the past three decades, tremendous progress has been made with the mapping of mantle heterogeneity and with the understanding of these structures in terms of, for instance, the evolution of Earth's crust, continental lithosphere, and thermo-chemical mantle convection. Converted wave imaging (e.g., receiver functions) and reflection seismology (e.g. SS stacks) have helped constrain interfaces in crust and mantle; surface wave dispersion (from earthquake or ambient noise signals) characterizes wavespeed variations in continental and oceanic lithosphere, and body wave and multi-mode surface wave data have been used to map trajectories of mantle convection and delineate mantle regions of anomalous elastic properties. Collectively, these studies have revealed substantial ocean-continent differences and suggest that convective flow is strongly influenced by but permitted to cross the upper mantle transition zone. Many questions have remained unanswered, however, and further advances in understanding require more accurate depictions of Earth's heterogeneity at a wider range of length scales. To meet this challenge we need new observations—more, better, and different types of data—and methods that help us extract and interpret more information from the rapidly growing volumes of broadband data. The huge data volumes and the desire to extract more signal from them means that we have to go beyond 'business as usual' (that is, simplified theory, manual inspection of seismograms, …). Indeed, it inspires the development of automated full wave methods, both for tomographic delineation of smooth wavespeed variations and the imaging (for instance through inverse scattering) of medium contrasts. Adjoint tomography and reverse time migration, which are closely related wave equation methods, have begun to revolutionize seismic inversion of global and regional waveform data. In this presentation we will illustrate this development - and its promise - drawing from our work

  10. Optimization of fuel cycles: marginal loss values; Optimisation des cycles de combustibles: valeurs marginales des pertes

    Gaussens, J [Commissariat a l' Energie Atomique, 75 - Paris (France); Lasteyrie, B de; Doumerc, J [Compagnie pour l' Etude et la Realisation de Combustibles Atomiques, 75 - Paris (France)

    1965-07-01

    ... as definitively lost, whereas the rest could be recovered and recycled. The high cost of losses, whether recycled or not, which is all the higher as the uranium is more enriched, requires that they be taken into account in the overall optimisation of fuel cycles. It is therefore important to determine their most economically desirable level at the various stages of nuclear fuel fabrication. In France and other countries, however, the production of fissile materials is managed by the State, while fuel element fabrication is carried out by private industry. The optimisation criteria and the economic weight attached to losses therefore differ between the two parts of the fabrication chain. To nevertheless attempt to reach an optimum consistent with the collective interest, without interfering in the company's pricing policy, one can use the property that marginal costs are equal to one another at the optimum, for a given production volume. The level of losses can thus be adjusted so as to achieve this equality of marginal costs, whose calculation is easier to obtain from the firm than a justification of the prices themselves. It also turns out that, although focused essentially on losses, this global analysis can lead to better use of other production factors. A theoretical account and practical examples of this economic optimisation method are given in the context of the fabrication of fuel elements for natural-uranium reactors, graphite-moderated and cooled by carbon dioxide. (authors)

  11. Structural Optimisation Of Payload Fairings

    Santschi, Y.; Eaton, N.; Verheyden, S.; Michaud, V.

    2012-07-01

    RUAG Space are developing materials and processing technologies for manufacture of the Next Generation Launcher (NGL) payload fairing, together with the Laboratory of Polymer and Composite Technology at the EPFL, in a project running under the ESA Future Launchers Preparatory Program (FLPP). In this paper the general aims and scope of the project are described; details of the results obtained shall be presented at a later stage. RUAG Space design, develop and manufacture fairings for the European launch vehicles Ariane 5 and VEGA using well-proven composite materials and production methods which provide an adequate cost/performance ratio for these applications. However, the NGL shall make full use of innovations in materials and process technologies to achieve a gain in performance at a much reduced overall manufacturing cost. NGL is scheduled to become operational in 2025, with actual development beginning in 2014. In the current project the basic technology is being developed and validated, in readiness for application in the NGL. For this new application, an entirely new approach to fairing manufacture is evaluated.

  12. Optimisation Of Volvariella volvacea Spawn Production For High Mycelia Growth

    Nie, H.J.; Azhar Mohamad; Joyce, Y.S.M.; Muhamad Najmi Moin

    2016-01-01

    Volvariella volvacea is a tropical mushroom cultivated throughout the world, mostly in Asia and Europe, for its taste, nutrition, fast-growing nature and simple method of cultivation. It is also widely cultivated in Malaysia and known locally as the Paddy Straw Mushroom. Growers strive to obtain high-quality mushroom spawn to achieve high production. High-quality spawn with fast mycelial growth is achieved through strain selection and optimisation of seedling production conditions. This study aims to optimise spawn production through three studies along the main production stages. Firstly, comparisons were made between agar media. Mycelia grown on Malt Extract Agar (MEA) covered the agar plates 2 days faster than mycelia grown on Potato Dextrose Agar (PDA). The next stage is mother grain spawn production, in which a bag of grains is inoculated with mycelial agar plugs. Bags are then sealed with caps. The amount of aeration for grain spawn was studied using two types of caps: caps with holes allowed more aeration than caps without holes. The study determined that bags with more aeration colonised grains 2 days faster than bags with less aeration. For the final stage of producing planting spawn, growth on paddy straw substrate with and without kapok (10%) was evaluated. Results revealed that mycelia fully colonised the substrate bags without kapok 1-2 days faster than those with kapok. These findings contribute to achieving production of high-quality Volvariella volvacea spawn. (author)

  13. ICT for whole life optimisation of residential buildings

    Haekkinen, T.; Vares, S.; Huovila, P.; Vesikari, E.; Porkka, J. (VTT Technical Research Centre of Finland, Espoo (FI)); Nilsson, L.-O.; Togeroe, AA. (Lund University (SE)); Jonsson, C.; Suber, K. (Skanska Sverige AB (SE)); Andersson, R.; Larsson, R. (Cementa, Malmoe (SE)); Nuorkivi, I. (Skanska Oyj, Helsinki (FI))

    2007-08-15

    The research project 'ICT for whole life optimisation of residential buildings' (ICTWLORB) developed and tested whole-life design and optimisation methods for residential buildings. The objective of the ICTWLORB project was to develop and implement an ICT-based tool box for integrated life cycle design of residential buildings. ICTWLORB was performed in cooperation with Swedish and Finnish partners. The ICTWLORB project took as a premise that an industrialised building process is characterised by two main elements: a building concept based approach and efficient information management. A building concept based approach enables (1) product development of the end product, (2) repetition of the basic elements of the building from one project to others and (3) customisation of the end product considering the specific needs of the case and the client. Information management enables (1) consideration of a wide spectrum of aspects including building performance, environmental aspects, life cycle costs and service life, and (2) rapid adaptation of the design to the specific requirements of the case. (orig.)

  14. Infrastructure optimisation via MBR retrofit: a design guide.

    Bagg, W K

    2009-01-01

    Wastewater management is continually evolving with the development and implementation of new, more efficient technologies. One of these is the Membrane Bioreactor (MBR). Although a relatively new technology in Australia, MBR wastewater treatment has been widely used elsewhere for over 20 years, with thousands of MBRs now in operation worldwide. Over the past 5 years, MBR technology has been enthusiastically embraced in Australia as a potential treatment upgrade option, and via retrofit typically offers two major benefits: (1) more capacity using mostly existing facilities, and (2) very high quality treated effluent. However, infrastructure optimisation via MBR retrofit is not a simple or low-cost solution and there are many factors which should be carefully evaluated before deciding on this method of plant upgrade. The paper reviews a range of design parameters which should be carefully evaluated when considering an MBR retrofit solution. Several actual and conceptual case studies are considered to demonstrate both advantages and disadvantages. Whilst optimising existing facilities and production of high quality water for reuse are powerful drivers, it is suggested that MBRs are perhaps not always the most sustainable Whole-of-Life solution for a wastewater treatment plant upgrade, especially by way of a retrofit.

  15. Modelling and genetic algorithm based optimisation of inverse supply chain

    Bányai, T.

    2009-04-01

    The design and control of recycling systems for products with environmental risk have been discussed worldwide for a long time. The main reasons to address this subject are the following: reduction of waste volume, intensification of material recycling, closing the loop, use of fewer resources, and reduced environmental risk [1, 2]. The development of recycling systems is based on the integrated solution of technological and logistic resources and know-how [3]. The financial viability of recycling systems is partly based on the recovery, disassembly and remanufacturing options of the used products [4, 5, 6], but the investment and operation costs of recycling systems are characterised by high logistic costs, caused by a geographically wide collection system with multiple collection levels and a high number of operation points in the inverse supply chain. The reduction of these costs is a popular area of logistics research. This research includes the design and implementation of comprehensive environmental waste and recycling programs to suit business strategies (global system), the design and supply of all equipment for production-line collection (external system), and the design of logistics processes to suit economic and ecological requirements (external system) [7]. To the knowledge of the author, there has been no research work on supply chain design problems whose purpose is the logistics-oriented optimisation of the inverse supply chain in the case of a non-linear total cost function comprising not only operation costs but also environmental risk costs. 
The antecedent of this research is that the author has taken part in several research projects in the field of the closed-loop economy ("Closing the loop of electr(on)ic products and domestic appliances from product planning to end-of-life technologies"), environmentally friendly disassembly ("Concept for logistical and environmental disassembly technologies") and the design of recycling systems for household appliances

  16. Sensitivity of potential evapotranspiration estimation to the Thornthwaite and Penman-Monteith methods in the study of global drylands

    Yang, Qing; Ma, Zhuguo; Zheng, Ziyan; Duan, Yawen

    2017-12-01

    Drylands are among those regions most sensitive to climate and environmental changes and human-induced perturbations. The most widely accepted definition of the term dryland is a ratio, called the Surface Wetness Index (SWI), of annual precipitation to potential evapotranspiration (PET) being below 0.65. PET is commonly estimated using the Thornthwaite (PET Th) and Penman-Monteith equations (PET PM). The present study compared spatiotemporal characteristics of global drylands based on the SWI with PET Th and PET PM. Results showed vast differences between PET Th and PET PM; however, the SWI derived from the two kinds of PET showed broadly similar characteristics in the interdecadal variability of global and continental drylands, except in North America, with high correlation coefficients ranging from 0.58 to 0.89. It was found that, during 1901-2014, global hyper-arid and semi-arid regions expanded, arid and dry sub-humid regions contracted, and drylands underwent interdecadal fluctuation. This was because precipitation variations made major contributions, whereas PET changes contributed to a much lesser degree. However, distinct differences in the interdecadal variability of semi-arid and dry sub-humid regions were found. This indicated that the influence of PET changes was comparable to that of precipitation variations in the global dry-wet transition zone. Additionally, the contribution of PET changes to the variations in global and continental drylands gradually enhanced with global warming, and the Thornthwaite method was found to be increasingly less applicable under climate change.
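
    The SWI-based classification used above can be sketched as a simple threshold scheme; the cut-offs are the standard UNEP aridity classes, with 0.65 as the dryland definition cited in the abstract.

```python
def dryland_class(precip, pet):
    """Classify a location by the Surface Wetness Index SWI = P / PET
    (annual precipitation over potential evapotranspiration), using the
    standard UNEP aridity thresholds; SWI < 0.65 defines drylands."""
    swi = precip / pet
    if swi < 0.05:
        return 'hyper-arid'
    if swi < 0.20:
        return 'arid'
    if swi < 0.50:
        return 'semi-arid'
    if swi < 0.65:
        return 'dry sub-humid'
    return 'humid'

# e.g. 300 mm of annual precipitation against 1000 mm of PET gives
# SWI = 0.3, a semi-arid climate
```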

  17. Review article – Optimisation of exposure parameters for spinal curvature measurements in paediatric radiography

    de Haan, Seraphine; Reis, Cláudia; Ndlovu, Junior; Serrenho, Catarina; Akhtar, Ifrah; Garcia, José Antonio; Linde, Daniël; Thorskog, Martine; Franco, Loris; Hogg, Peter

    2015-01-01

    This review aims to identify strategies to optimise radiography practice using digital technologies for full-spine studies in paediatrics, focusing particularly on methods used to diagnose and measure the severity of spinal curvatures. The literature search was performed on different databases (PubMed,

  18. Optimisation of timetable-based, stochastic transit assignment models based on MSA

    Nielsen, Otto Anker; Frederiksen, Rasmus Dyhr

    2006-01-01

    (CRM), such a large-scale transit assignment model was developed and estimated. The Stochastic User Equilibrium problem was solved by the Method of Successive Averages (MSA). However, the model suffered from very large calculation times. The paper focuses on how to optimise transit assignment models...

  19. An Optimisation Approach Applied to Design the Hydraulic Power Supply for a Forklift Truck

    Pedersen, Henrik Clemmensen; Andersen, Torben Ole; Hansen, Michael Rygaard

    2004-01-01

    -level optimisation approach, and is in the current paper exemplified through the design of the hydraulic power supply for a forklift truck. The paper first describes the prerequisites for the method and then explains the different steps in the approach to design the hydraulic system. Finally the results...

  20. Optimising Service Delivery of AAC AT Devices and Compensating AT for Dyslexia.

    Roentgen, Uta R; Hagedoren, Edith A V; Horions, Katrien D L; Dalemans, Ruth J P

    2017-01-01

    To promote successful use of Assistive Technology (AT) supporting Augmentative and Alternative Communication (AAC) and compensating for dyslexia, the last steps of their provision (delivery and instruction, use, maintenance, and evaluation) were optimised. In co-creation with all stakeholders, an integral method and tools were developed based on a list of requirements.

  1. Sizing Combined Heat and Power Units and Domestic Building Energy Cost Optimisation

    Dongmin Yu

    2017-06-01

    Full Text Available Many combined heat and power (CHP) units have been installed in domestic buildings to increase energy efficiency and reduce energy costs. However, inappropriate sizing of a CHP unit may actually increase energy costs and reduce energy efficiency. Moreover, the high manufacturing cost of batteries makes battery storage less affordable. Therefore, this paper attempts to size the capacity of CHP and optimise daily energy costs for a domestic building with only CHP installed. In this paper, electricity and heat loads are first used as sizing criteria in finding the best capacities of different types of CHP with the help of the maximum rectangle (MR) method. Subsequently, the genetic algorithm (GA) is used to optimise the daily energy costs of the different cases. Then, heat and electricity loads are jointly considered for sizing different types of CHP and for optimising the daily energy costs through the GA method. The optimisation results show that the GA sizing method gives a higher average daily energy cost saving, a 13% reduction compared to a building without CHP. However, to achieve this, there is about a 3% reduction in energy efficiency and a 7% reduction in the ratio of input power to rated power compared to using the MR method and heat demand in sizing the CHP.
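
    The maximum rectangle (MR) rule mentioned above can be sketched as follows: sort the hourly demand into a load duration curve and choose the rated capacity whose rectangle (capacity x hours at or above it) covers the most energy. This is a minimal illustration of one common formulation of the MR method, not the paper's implementation; the demand profile is invented.

```python
def mr_size(hourly_load_kw):
    """Rated capacity by the maximum rectangle rule (one common form)."""
    curve = sorted(hourly_load_kw, reverse=True)  # load duration curve
    best_area, best_cap = 0.0, 0.0
    for hours, load in enumerate(curve, start=1):
        area = load * hours          # kWh if the unit ran at `load` for `hours`
        if area > best_area:
            best_area, best_cap = area, load
    return best_cap

demand = [2, 3, 8, 10, 9, 7, 4, 3]   # kW per hour, illustrative only
print(mr_size(demand))               # -> 7 (7 kW x 4 h beats 10 kW x 1 h)
```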

  2. Topology optimised photonic crystal waveguide intersections with high-transmittance and low crosstalk

    Ikeda, N; Sugimoto, Y; Watanabe, Y

    2006-01-01

    Numerical and experimental studies on the photonic crystal waveguide intersection based on the topology optimisation design method are reported and the effectiveness of this technique is shown by achieving high transmittance spectra with low crosstalk for the straightforward beam-propagation line...

  3. Environmental optimisation of waste combustion

    Schuster, Robert [AaF Energikonsult, Stockholm (Sweden); Berge, Niclas; Stroemberg, Birgitta [TPS Termiska Processer AB, Nykoeping (Sweden)

    2000-12-01

    The regulations concerning waste combustion evolve through R and D and a drive toward better, common regulations for the European countries. This study discusses whether today's rules concerning oxygen concentration, minimum temperature and residence time in the furnace, and the use of stand-by burners are needed, are possible to monitor, are optimal from an environmental point of view, or could be improved. No evidence from well-controlled laboratory experiments validates that 850 deg C at 6% oxygen content is in general the best lower limit. A lower excess-air level increases the temperature, which has a significant effect on the destruction of hydrocarbons, favourably increases the residence time, and increases the thermal efficiency and the efficiency of the precipitators. Low oxygen content is also necessary to achieve low NOx emissions. The conclusion is that the demands on the accuracy of the measurement devices and methods are too high if they are to be used inside the furnace to control the combustion process. The big problem, however, is to find representative locations to measure temperature, oxygen content and residence time in the furnace. Another major problem is that monitoring of the operating conditions today does not ensure good combustion; it can lead to a false sense of security. The reason is that it is very hard to find boilers without stratifications. These stratifications (stream lines) each have a different history of residence time, mixing time, oxygen and combustible gas levels, and temperature when they reach the convection area. The combustion result is the sum of all these different histories. Hydrocarbon emissions are in general not produced at a steady level: small clouds of unburnt hydrocarbons travel along the stream lines, showing up as peaks on a THC measurement device. High-amplitude peaks tend to contain a higher ratio of heavy hydrocarbons than lower peaks. The good correlation between some easily detected

  4. Global self-esteem and method effects: competing factor structures, longitudinal invariance, and response styles in adolescents.

    Urbán, Róbert; Szigeti, Réka; Kökönyei, Gyöngyi; Demetrovics, Zsolt

    2014-06-01

    The Rosenberg Self-Esteem Scale (RSES) is a widely used measure for assessing self-esteem, but its factor structure is debated. Our goals were to compare 10 alternative models for the RSES and to quantify and predict the method effects. This sample involves two waves (N = 2,513 9th-grade and 2,370 10th-grade students) from five waves of a school-based longitudinal study. The RSES was administered in each wave. The global self-esteem factor with two latent method factors yielded the best fit to the data. The global factor explained a large amount of the common variance (61% and 46%); however, a relatively large proportion of the common variance was attributed to the negative method factor (34% and 41%), and a small proportion of the common variance was explained by the positive method factor (5% and 13%). We conceptualized the method effect as a response style and found that being a girl and having a higher number of depressive symptoms were associated with both low self-esteem and negative response style, as measured by the negative method factor. Our study supported the one global self-esteem construct and quantified the method effects in adolescents.

  5. Global self-esteem and method effects: competing factor structures, longitudinal invariance and response styles in adolescents

    Urbán, Róbert; Szigeti, Réka; Kökönyei, Gyöngyi; Demetrovics, Zsolt

    2013-01-01

    The Rosenberg Self-Esteem Scale (RSES) is a widely used measure for assessing self-esteem, but its factor structure is debated. Our goals were to compare 10 alternative models for the RSES and to quantify and predict the method effects. This sample involves two waves (N = 2,513 ninth-grade and 2,370 tenth-grade students) from five waves of a school-based longitudinal study. The RSES was administered in each wave. The global self-esteem factor with two latent method factors yielded the best fit to the data. The global factor explained a large amount of the common variance (61% and 46%); however, a relatively large proportion of the common variance was attributed to the negative method factor (34% and 41%), and a small proportion of the common variance was explained by the positive method factor (5% and 13%). We conceptualized the method effect as a response style and found that being a girl and having a higher number of depressive symptoms were associated with both low self-esteem and negative response style, as measured by the negative method factor. Our study supported the one global self-esteem construct and quantified the method effects in adolescents. PMID:24061931

  6. The Impact of Globalization and Technology on Teaching Business Communication: Reframing and Enlarging World View, Methods, and Content

    Berry, Priscilla

    2013-01-01

    This paper explores the current paradigm shift in the use of technology in the classroom, which is occurring because of technology explosion in society, impact of globalization, necessary reframing, and enlarging of the world view, methods, and content to make business communication classes relevant. The question is whether the classroom should…

  7. Structural optimisation of energy systems; Optimisation structurelle des systemes energetiques

    Saloux, Etienne

    The development of renewable energies has grown over the last decade to face the environmental issues caused by the increase in world fossil fuel consumption. These energies are heavily used in houses and commercial buildings, and numerous systems have been proposed to meet their energy demand. Improving both the efficiency and the use of systems, i.e. improving energy management, therefore appears essential to limit the ecological footprint of humanity on the planet. However, system integration yields a very complex problem to solve, owing to the large number of units and their technologies, sizes, working conditions and interconnections. This situation highlights the lack of a systematic analysis for comparing integrated system performance and for correctly pointing out their potential. The objective of this thesis is thus to develop and present such a method, in other words the structural optimisation of energy systems. It helps choose the optimal equipment by identifying all possible system arrangements and comparing their performance. Combinations are subjected to environmental (climate), structural (available area) and economic constraints, while the assessment criteria consider energy, economic and ecological aspects. For that reason, in addition to energy and economic analyses, the exergy concept has also been applied to the equipment. Nevertheless, the high degree of complexity of integrated systems and the tedious numerical calculations make resolution with standard software very difficult. The whole optimisation project would clearly be considerable, and the aim is to develop models and optimisation tools. First of all, an exhaustive review of energy equipment, including photovoltaic panels, solar collectors, heat pumps and thermal energy storage systems, has been performed. Afterwards, energy and exergy models have been developed and tested for two specific energy scenarios: a) a solar assisted heat

  8. Understanding Factors that Shape Gender Attitudes in Early Adolescence Globally: A Mixed-Methods Systematic Review.

    Kågesten, Anna; Gibbs, Susannah; Blum, Robert Wm; Moreau, Caroline; Chandra-Mouli, Venkatraman; Herbert, Ann; Amin, Avni

    2016-01-01

    Early adolescence (ages 10-14) is a period of increased expectations for boys and girls to adhere to socially constructed and often stereotypical norms that perpetuate gender inequalities. The endorsement of such gender norms is closely linked to poor adolescent sexual and reproductive and other health-related outcomes, yet little is known about the factors that influence young adolescents' personal gender attitudes. To explore factors that shape gender attitudes in early adolescence across different cultural settings globally. A mixed-methods systematic review was conducted of the peer-reviewed literature in 12 databases from 1984-2014. Four reviewers screened the titles and abstracts of articles and reviewed full text articles in duplicate. Data extraction and quality assessments were conducted using standardized templates by study design. Thematic analysis was used to synthesize quantitative and qualitative data organized by the social-ecological framework (individual, interpersonal and community/societal-level factors influencing gender attitudes). Eighty-two studies (46 quantitative, 31 qualitative, 5 mixed-methods) spanning 29 countries were included. Ninety percent of studies were from North America or Western Europe. The review findings indicate that young adolescents, across cultural settings, commonly express stereotypical or inequitable gender attitudes, and such attitudes appear to vary by individual sociodemographic characteristics (sex, race/ethnicity and immigration, social class, and age). Findings highlight that interpersonal influences (family and peers) are central influences on young adolescents' construction of gender attitudes, and these gender socialization processes differ for boys and girls. The role of community factors (e.g. media) is less clear, though there is some evidence that schools may reinforce stereotypical gender attitudes among young adolescents.
The findings from this review suggest that young adolescents in different cultural

  9. Understanding Factors that Shape Gender Attitudes in Early Adolescence Globally: A Mixed-Methods Systematic Review.

    Anna Kågesten

    Full Text Available Early adolescence (ages 10-14) is a period of increased expectations for boys and girls to adhere to socially constructed and often stereotypical norms that perpetuate gender inequalities. The endorsement of such gender norms is closely linked to poor adolescent sexual and reproductive and other health-related outcomes, yet little is known about the factors that influence young adolescents' personal gender attitudes. To explore factors that shape gender attitudes in early adolescence across different cultural settings globally. A mixed-methods systematic review was conducted of the peer-reviewed literature in 12 databases from 1984-2014. Four reviewers screened the titles and abstracts of articles and reviewed full text articles in duplicate. Data extraction and quality assessments were conducted using standardized templates by study design. Thematic analysis was used to synthesize quantitative and qualitative data organized by the social-ecological framework (individual, interpersonal and community/societal-level factors influencing gender attitudes). Eighty-two studies (46 quantitative, 31 qualitative, 5 mixed-methods) spanning 29 countries were included. Ninety percent of studies were from North America or Western Europe. The review findings indicate that young adolescents, across cultural settings, commonly express stereotypical or inequitable gender attitudes, and such attitudes appear to vary by individual sociodemographic characteristics (sex, race/ethnicity and immigration, social class, and age). Findings highlight that interpersonal influences (family and peers) are central influences on young adolescents' construction of gender attitudes, and these gender socialization processes differ for boys and girls. The role of community factors (e.g. media) is less clear, though there is some evidence that schools may reinforce stereotypical gender attitudes among young adolescents. The findings from this review suggest that young adolescents in different

  10. A New Modified Three-Term Conjugate Gradient Method with Sufficient Descent Property and Its Global Convergence

    Bakhtawar Baluch

    2017-01-01

    Full Text Available A new modified three-term conjugate gradient (CG) method is shown for solving large-scale optimization problems. The idea relates to the famous Polak-Ribière-Polyak (PRP) formula. Although the PRP numerator plays a vital role in numerical results and does not suffer from the jamming issue, the PRP method is not globally convergent. So, for the new three-term CG method, the idea is to use the PRP numerator and combine it with the denominator of any good CG formula that performs well. The new modification of the three-term CG method possesses the sufficient descent condition independently of any line search. The novelty is that, by using the Wolfe-Powell line search, the new modification possesses global convergence properties for both convex and nonconvex functions. Numerical computation with the Wolfe-Powell line search on standard optimization test functions shows the efficiency and robustness of the new modification.
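
    The abstract does not reproduce the new three-term formula itself, but the role of the PRP numerator can be illustrated with a generic PRP+ conjugate gradient sketch (two-term, with an Armijo backtracking line search rather than Wolfe-Powell). This is a sketch of the family of methods, not the paper's algorithm.

```python
import numpy as np

def prp_cg(f, grad, x0, iters=200, tol=1e-8):
    """Generic PRP+ nonlinear conjugate gradient (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:               # safeguard: restart with steepest descent
            d = -g
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope:   # Armijo backtracking
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g                                 # gradient difference
        beta = max(0.0, (g_new @ y) / (g @ g))        # PRP numerator, clipped (PRP+)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Convex quadratic test: minimise 0.5 x^T A x - b^T x; solution is A^-1 b = [0.2, 0.4]
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
sol = prp_cg(lambda x: 0.5 * x @ A @ x - b @ x, lambda x: A @ x - b, [0.0, 0.0])
```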

  11. Optimisation of beryllium-7 gamma analysis following BCR sequential extraction

    Taylor, A.; Blake, W.H.; Keith-Roach, M.J.

    2012-01-01

    Graphical abstract: Showing the decrease in analytical uncertainty using the optimal (combined preconcentrated sample extract) method; nv (no value) where extract activities were … ► 7Be geochemical behaviour is required to support tracer studies. ► Sequential extraction with natural 7Be returns high analytical uncertainties. ► Preconcentrating extracts from a large sample mass improved analytical uncertainty. ► This optimised method can be readily employed in studies using low activity samples. - Abstract: The application of cosmogenic 7Be as a sediment tracer at the catchment scale requires an understanding of its geochemical associations in soil to underpin the assumption of irreversible adsorption. Sequential extractions offer a readily accessible means of determining the associations of 7Be with operationally defined soil phases. However, the subdivision of the low activity concentrations of fallout 7Be in soils into geochemical fractions can introduce high gamma-counting uncertainties. Extending the analysis time significantly is not always an option for batches of samples, owing to the on-going decay of 7Be (t1/2 = 53.3 days). Here, three different methods of preparing and quantifying 7Be extracted using the optimised BCR three-step scheme have been evaluated and compared, with a focus on reducing analytical uncertainties. The optimal method involved carrying out the BCR extraction in triplicate, sub-sampling each set of triplicates for stable Be analysis before combining each set and coprecipitating the 7Be with metal oxyhydroxides to produce a thin source for gamma analysis. This method was applied to BCR extractions of natural 7Be in four agricultural soils. The approach gave good counting statistics from a 24 h analysis period (∼10% (2σ) where extract activity >40% of total activity) and generated statistically useful sequential extraction profiles. Total recoveries of 7Be fell between 84 and 112%. The stable Be data demonstrated that the

  12. An Efficient Method for Mapping High-Resolution Global River Discharge Based on the Algorithms of Drainage Network Extraction

    Jiaye Li

    2018-04-01

    Full Text Available River discharge, which represents the accumulation of surface water flowing into rivers and ultimately into the ocean or other water bodies, may have great impacts on water quality and the living organisms in rivers. However, global knowledge of river discharge is still poor and worth exploring. This study proposes an efficient method for mapping high-resolution global river discharge based on the algorithms of drainage network extraction. Using the existing global runoff map and digital elevation model (DEM) data as inputs, this method consists of three steps. First, the pixels of the runoff map and the DEM data are resampled to the same resolution (i.e., 0.01-degree). Second, the flow direction of each pixel of the DEM data (identified by the optimal flow path method used in drainage network extraction) is determined and then applied to the corresponding pixel of the runoff map. Third, the river discharge of each pixel of the runoff map is calculated by summing the runoffs of all the pixels upstream of this pixel, similar to the upslope area accumulation step in drainage network extraction. Finally, a 0.01-degree global map of the mean annual river discharge is obtained. Moreover, a 0.5-degree global map of the mean annual river discharge is produced to display the results with a more intuitive perception. Compared against the existing global river discharge databases, the 0.01-degree map is of generally high accuracy for the selected river basins, especially for the Amazon River basin with the lowest relative error (RE) of 0.3% and the Yangtze River basin within the RE range of ±6.0%. However, it is noted that the results of the Congo and Zambezi River basins are not satisfactory, with RE values over 90%, and it is inferred that there may be some accuracy problems with the runoff map in these river basins.
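
    The accumulation step (step three) amounts to summing each pixel's runoff into every cell downstream of it. A toy sketch follows, assuming the flow directions have already been reduced to "index of the downstream cell" (-1 at an outlet); this is a simplification of the D8-style directions used in drainage network extraction.

```python
def accumulate_discharge(runoff, downstream):
    """runoff[i]: local runoff of cell i; downstream[i]: index of the cell
    that i drains into, or -1 if i is an outlet. Returns discharge per cell."""
    discharge = [0.0] * len(runoff)
    for i, r in enumerate(runoff):
        j = i
        while j != -1:           # walk the flow path from i to the outlet
            discharge[j] += r    # cell j collects the runoff generated at i
            j = downstream[j]
    return discharge

# Three cells draining 0 -> 1 -> 2 (outlet):
print(accumulate_discharge([1.0, 2.0, 3.0], [1, 2, -1]))  # -> [1.0, 3.0, 6.0]
```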

  13. Solar-like oscillations in red giants observed with Kepler: comparison of global oscillation parameters from different methods

    Hekker, Saskia; Elsworth, Yvonne; De Ridder, Joris

    2011-01-01

    investigate the differences in results for global oscillation parameters of G and K red-giant stars due to different methods and definitions. We also investigate uncertainties originating from the stochastic nature of the oscillations. Methods: For this investigation we use Kepler data obtained during...... obtain results for the frequency of maximum oscillation power (ν_max) and the mean large separation (⟨Δν⟩) from different methods for over one thousand red-giant stars. The results for these parameters agree within a few percent and seem therefore robust to the different analysis methods and definitions...

  14. Optimising agile development practices for the maintenance operation: nine heuristics

    Heeager, Lise Tordrup; Rose, Jeremy

    2014-01-01

    Agile methods are widely used and successful in many development situations and beginning to attract attention amongst the software maintenance community – both researchers and practitioners. However, it should not be assumed that implementing a well-known agile method for a maintenance department...... is therefore a trivial endeavour - the maintenance operation differs in some important respects from development work. Classical accounts of software maintenance emphasise more traditional software engineering processes, whereas recent research accounts of agile maintenance efforts uncritically focus...... on benefits. In an action research project at Aveva in Denmark we assisted with the optimisation of SCRUM, tailoring the standard process to the immediate needs of the developers. We draw on both theoretical and empirical learning to formulate nine heuristics for maintenance practitioners wishing to go agile....

  15. Optimisation of amplitude distribution of magnetic Barkhausen noise

    Pal'a, Jozef; Jančárik, Vladimír

    2017-09-01

    The magnetic Barkhausen noise (MBN) measurement method is a widely used non-destructive evaluation technique for the inspection of ferromagnetic materials. Among other influences, the excitation yoke lift-off is a significant issue for this method, deteriorating the measurement accuracy. In this paper, the lift-off effect is analysed mainly on grain-oriented Fe-3%Si steel subjected to various heat treatment conditions. Based on an investigation of the relationship between the amplitude distribution of MBN and lift-off, an approach to suppress the lift-off effect is proposed. The proposed approach utilizes digital feedback, optimising the measurement based on the amplitude distribution of MBN. The results demonstrate that the approach can strongly suppress the lift-off effect for lift-offs up to 2 mm.

  16. Global Optimization Based on the Hybridization of Harmony Search and Particle Swarm Optimization Methods

    A. P. Karpenko

    2014-01-01

    Full Text Available We consider a class of stochastic search algorithms for global optimization which in various publications are called behavioural, intellectual, metaheuristic, nature-inspired, swarm, multi-agent, population, etc. We use the last term. Experience in using population algorithms to solve challenging global optimization problems shows that the application of one such algorithm may not always be effective. Therefore great attention is now paid to the hybridization of population algorithms of global optimization. Hybrid algorithms unite various algorithms, or identical algorithms but with various values of free parameters. Thus the efficiency of one algorithm can compensate for the weakness of another. The purposes of the work are the development of a hybrid algorithm of global optimization based on the known algorithms of harmony search (HS) and particle swarm optimization (PSO), software implementation of the algorithm, and a study of its efficiency using a number of known benchmark problems and a problem of dimensional optimization of a truss structure. We state the global optimization problem, consider the basic HS and PSO algorithms, give a flow chart of the proposed hybrid algorithm called PSO-HS, present the results of computing experiments with the developed algorithm and software, and formulate the main results of the work and prospects for its development.
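
    The PSO half of such a hybrid can be sketched in a few lines. The harmony-search coupling of PSO-HS is not specified in the abstract, so only a bare-bones PSO is shown, with textbook coefficient values (w = 0.7, c1 = c2 = 1.5) rather than the paper's settings.

```python
import numpy as np

def pso(f, dim, n=30, iters=300, lb=-5.0, ub=5.0, seed=0):
    """Plain global-best particle swarm optimisation (illustrative)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        # inertia + pull toward personal best + pull toward global best
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(pbest_f.min())

best, val = pso(lambda z: float(np.sum(z ** 2)), dim=5)   # sphere benchmark
```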

  17. Optimisation of Protection as applicable to geological disposal: the ICRP view

    Weiss, W.

    2010-01-01

    Wolfgang Weiss (BfS), vice-chair of ICRP Committee 4, recalled that the role of optimisation is to select the best protection options under the prevailing circumstances, based on scientific considerations, societal concerns and ethical aspects, as well as considerations of transparency. An important role of the concept of optimisation of protection is to foster a 'safety culture' and thereby to engender a state of thinking in everyone responsible for the control of radiation exposures, such that they are continuously asking themselves the question, 'Have I done all that I reasonably can to avoid or reduce these doses?' Clearly, the answer to this question is a matter of judgement and necessitates co-operation between all parties involved, as a minimum the operating management and the regulatory agencies, but the dialogue would be more complete if other stakeholders were also involved. What kinds of checks and balances, or factors, would need to be considered for an 'optimal' system? Can indicators be identified? Quantitative methods may provide input to this dialogue, but they should never be the sole input. The ICRP considers that the parameters to take into account also include social considerations and values and environmental considerations, as well as technical and economic considerations. Wolfgang Weiss approached the question of the distinction to be made between system optimisation (in the sense of taking account of social and economic factors as well as of all types of hazards) and optimisation of radiological protection. The position of the ICRP is that the system of protection it proposes is based on both science (quantification of the health risk) and value judgement (what is an acceptable risk?), and optimisation is the recommended process to integrate both aspects. Indeed, there has been an evolution from the old system of intervention levels to the new system, whereby, even if the level of the dose or risk (which is called a constraint in ICRP-81) is met

  18. Flotation process control optimisation at Prominent Hill

    Lombardi, Josephine; Muhamad, Nur; Weidenbach, M.

    2012-01-01

    OZ Minerals' Prominent Hill copper-gold concentrator is located 130 km south-east of the town of Coober Pedy in the Gawler Craton of South Australia. The concentrator was built in 2008 and commenced commercial production in early 2009. The Prominent Hill concentrator comprises a conventional grinding and flotation processing plant with a 9.6 Mtpa ore throughput capacity. The flotation circuit includes six rougher cells, an IsaMill for regrinding the rougher concentrate, and a Jameson cell heading up the three-stage conventional cell cleaner circuit. In total there are four level controllers in the rougher train and ten level controllers in the cleaning circuit for 18 cells. Generic proportional-integral-derivative (PID) control used on the level controllers alone propagated disturbances downstream in the circuit that were generated from the grinding circuit, hoppers, between cells and interconnected banks of cells, having a negative impact on plant performance. To better control such disturbances, the FloatStar level stabiliser was selected for installation on the flotation circuit to account for the interaction between the cells. Multivariable control was also installed on the five concentrate hoppers to maintain consistent feed to the cells and to the IsaMill. An additional area identified for optimisation in the flotation circuit was the mass pull rate from the rougher cells. The FloatStar flow optimiser was selected to be installed subsequent to the FloatStar level stabiliser. This allowed for a unified, consistent and optimal approach to running the rougher circuit. This paper describes the improvement in circuit stabilisation achieved by the FloatStar level stabiliser, using the interaction matrix between cell level controllers, and the results and benefits of implementing the FloatStar flow optimiser on the rougher train.

  19. A Comparative Study on Recently-Introduced Nature-Based Global Optimization Methods in Complex Mechanical System Design

    Abdulbaset El Hadi Saad

    2017-10-01

    Full Text Available Advanced global optimization algorithms have been continuously introduced and improved to solve various complex design optimization problems for which the objective and constraint functions can only be evaluated through computation-intensive numerical analyses or simulations with a large number of design variables. The often implicit, multimodal, and ill-shaped objective and constraint functions in high-dimensional and “black-box” forms demand that the search be carried out using a low number of function evaluations with high search efficiency and good robustness. This work investigates the performance of six recently introduced, nature-inspired global optimization methods: Artificial Bee Colony (ABC), Firefly Algorithm (FFA), Cuckoo Search (CS), Bat Algorithm (BA), Flower Pollination Algorithm (FPA) and Grey Wolf Optimizer (GWO). These approaches are compared in terms of search efficiency and robustness in solving a set of representative benchmark problems in smooth-unimodal, non-smooth unimodal, smooth multimodal, and non-smooth multimodal function forms. In addition, four classic engineering optimization examples and a real-life complex mechanical system design optimization problem, floating offshore wind turbine design optimization, are used as additional test cases representing computationally expensive black-box global optimization problems. Results from this comparative study show that the ability of these global optimization methods to obtain a good solution diminishes as the dimension of the problem, or number of design variables, increases. Although none of these methods is universally capable, the study finds that GWO and ABC are more efficient on average than the other four in obtaining high-quality solutions efficiently and consistently, solving 86% and 80% of the tested benchmark problems, respectively. The research contributes to future improvements of global optimization methods.
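
    Of the six methods, the Grey Wolf Optimizer is compact enough to sketch: every wolf moves toward the three best solutions found so far (alpha, beta, delta), with an exploration parameter decaying from 2 to 0. Population size and iteration budget below are illustrative, not the study's settings.

```python
import numpy as np

def gwo(f, dim, n=20, iters=300, lb=-5.0, ub=5.0, seed=0):
    """Basic Grey Wolf Optimizer (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        alpha, beta, delta = X[np.argsort(fit)[:3]]   # three best wolves
        a = 2.0 * (1.0 - t / iters)                   # decays from 2 to 0
        moves = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random((2, n, dim))
            A = 2.0 * a * r1 - a
            C = 2.0 * r2
            moves.append(leader - A * np.abs(C * leader - X))
        X = np.clip(np.mean(moves, axis=0), lb, ub)   # average of the three pulls
    fit = np.apply_along_axis(f, 1, X)
    return X[np.argmin(fit)], float(fit.min())

best, val = gwo(lambda z: float(np.sum(z ** 2)), dim=5)   # sphere benchmark
```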

  20. Geometric methods of global attraction in systems of delay differential equations

    El-Morshedy, Hassan A.; Ruiz-Herrera, Alfonso

    2017-11-01

    In this paper we deduce criteria of global attraction in systems of delay differential equations. Our methodology is new and consists in "dominating" the nonlinear terms of the system by a scalar function and then studying some dynamical properties of that function. One of the crucial benefits of our approach is that we obtain delay-dependent results of global attraction that cover the best delay-independent conditions. We apply our results in a gene regulatory model and the classical Nicholson's blowfly equation with patch structure.

  1. Improving Vector Evaluated Particle Swarm Optimisation by incorporating nondominated solutions.

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.
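
    The improvement hinges on a nondominated (Pareto) archive guiding each swarm instead of a single best solution from another swarm. Below is a minimal dominance filter of the kind such an archive needs, assuming every objective is minimised; it is an illustration, not the paper's implementation.

```python
def dominates(a, b):
    """True if objective vector a is no worse than b in every objective
    and strictly better in at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Keep only the points no other point dominates (the Pareto front)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Two-objective example: (3, 3) and (4, 4) are dominated by (2, 2)
front = nondominated([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])
print(front)  # -> [(1, 5), (2, 2), (5, 1)]
```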

  2. Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions

    Kian Sheng Lim

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, which yields poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced that incorporates the nondominated solutions as the guidance for a swarm, rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm performs impressively compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.

  3. Advanced manufacturing: optimising the factories of tomorrow

    Philippon, Patrick

    2013-01-01

    Patrick Philippon - Les Defis du CEA no.179 - April 2013. Faced with competition from the emerging countries, the competitiveness of the industrialised nations depends on the ability of their industries to innovate. This strategy necessarily entails the reorganisation and optimisation of production systems. This is the whole challenge for 'advanced manufacturing', which relies on the new information and communication technologies. Interactive robotics, virtual reality and non-destructive testing are all technological building blocks developed by CEA, now brought together within a cross-cutting programme, to meet the needs of industry and together build the factories of tomorrow. (author)

  4. Biorefinery plant design, engineering and process optimisation

    Holm-Nielsen, Jens Bo; Ehimen, Ehiazesebhor Augustine

    2014-01-01

    Before new biorefinery systems can be implemented, or the modification of existing single-product biomass processing units into biorefineries can be carried out, proper planning of the intended biorefinery scheme must be performed. This chapter outlines design and synthesis approaches applicable to the planning and upgrading of intended biorefinery systems, and includes discussions on the operation of an existing lignocellulosic-based biorefinery platform. Furthermore, technical considerations and tools (i.e., process analytical tools) which could be applied to optimise the operations of existing and potential biorefinery plants are elucidated.

  5. Specification, Verification and Optimisation of Business Processes

    Herbert, Luke Thomas

    is extended with stochastic branching, message passing and reward annotations which allow for the modelling of resources consumed during the execution of a business process. Further, it is shown how this structure can be used to formalise the established business process modelling language Business Process … fault tree analysis and the automated optimisation of business processes by means of an evolutionary algorithm. This work is motivated by problems that stem from the healthcare sector, and examples encountered in this field are used to illustrate these developments.

  6. Topology Optimisation for Coupled Convection Problems

    Alexandersen, Joe; Andreasen, Casper Schousboe; Aage, Niels

    stabilised finite elements implemented in a parallel multiphysics analysis and optimisation framework, DFEM [1], developed and maintained in house. Focus is put on control of the temperature field within the solid structure, and the problems can therefore be seen as conjugate heat transfer problems, where heat conduction governs in the solid parts of the design domain and couples to convection-dominated heat transfer to a surrounding fluid. Both loosely coupled and tightly coupled problems are considered. The loosely coupled problems are convection-diffusion problems, based on an advective velocity field from …

  7. Cost optimisation studies of high power accelerators

    McAdams, R.; Nightingale, M.P.S.; Godden, D. [AEA Technology, Oxon (United Kingdom)]; and others

    1995-10-01

    Cost optimisation studies are carried out for an accelerator-based neutron source consisting of a series of linear accelerators. The characteristics of the lowest-cost design for a given beam current and energy, such as machine power and length, are found to depend on the lifetime envisaged for it. For a fixed neutron yield it is preferable to have a low-current, high-energy machine. The benefits of superconducting technology are also investigated. A Separated Orbit Cyclotron (SOC) has the potential to reduce capital and operating costs, and initial estimates for the transverse and longitudinal current limits of such machines are made.

  8. Automation of route identification and optimisation based on data-mining and chemical intuition.

    Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G

    2017-09-21

    Data-mining of Reaxys and network analysis of the combined literature and in-house reaction sets were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of the data provides a rich knowledge base for generating the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of continuous-flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and for generating environmental and other performance indicators, such as cost indicators. A further challenge identified is to automate model generation so as to evolve optimal multi-step chemical routes and optimal process configurations.
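
Route identification over a reaction network can be viewed as a shortest-path search. The sketch below is purely illustrative: the intermediates and edge costs are hypothetical (they do not reflect the actual limonene-to-paracetamol chemistry in the paper), and Dijkstra's algorithm stands in for whatever network analysis the authors applied.

```python
import heapq

def best_route(graph, start, target):
    # Dijkstra over a directed reaction network; an edge weight stands
    # in for the per-step cost (reagents, separations, yield loss).
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    seen = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        if node == target:
            route = [node]
            while node in prev:         # walk predecessors back to start
                node = prev[node]
                route.append(node)
            return d, route[::-1]
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    return float("inf"), []

# Toy network with hypothetical intermediates and costs.
network = {
    "limonene": [("intermediate_A", 2.5), ("intermediate_B", 1.0)],
    "intermediate_A": [("paracetamol", 1.0)],
    "intermediate_B": [("intermediate_C", 1.5)],
    "intermediate_C": [("paracetamol", 0.5)],
}
cost, route = best_route(network, "limonene", "paracetamol")
```

On this toy network the three-step route through intermediates B and C (total cost 3.0) beats the shorter two-step route through A (cost 3.5), which is exactly the kind of non-obvious trade-off a network analysis surfaces.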

  9. Biofuels carbon footprints: Whole-systems optimisation for GHG emissions reduction.

    Zamboni, Andrea; Murphy, Richard J; Woods, Jeremy; Bezzo, Fabrizio; Shah, Nilay

    2011-08-01

    A modelling approach for the strategic design of ethanol production systems combining life-cycle analysis (LCA) and supply chain optimisation (SCO) can significantly contribute to assessing their economic and environmental sustainability and to guiding decision makers towards a more conscious implementation of ad hoc farming and processing practices. Most model applications so far have been descriptive in nature; the model proposed in this work is "normative" in that it aims to guide actions towards optimal outcomes (e.g. optimising the nitrogen balance through the whole supply chain). The modelling framework was conceived to steer strategic policies through a geographically specific design process considering economic and environmental criteria. Results show how a crop management strategy devised from a whole-systems perspective can significantly contribute to mitigating global warming, even with first-generation technologies. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Fast and fuzzy multi-objective radiotherapy treatment plan generation for head and neck cancer patients with the lexicographic reference point method (LRPM)

    van Haveren, Rens; Ogryczak, Włodzimierz; Verduijn, Gerda M.; Keijzer, Marleen; Heijmen, Ben J. M.; Breedveld, Sebastiaan

    2017-06-01

    Previously, we have proposed Erasmus-iCycle, an algorithm for fully automated IMRT plan generation based on prioritised (lexicographic) multi-objective optimisation with the 2-phase ɛ-constraint (2pɛc) method. For each patient, the output of Erasmus-iCycle is a clinically favourable, Pareto-optimal plan. The 2pɛc method uses a list of objective functions that are consecutively optimised, following a strict, user-defined prioritisation. The novel lexicographic reference point method (LRPM) is capable of solving multi-objective problems in a single optimisation, using a fuzzy prioritisation of the objectives. Trade-offs are made globally, aiming for large favourable gains for lower-prioritised objectives at the cost of only slight degradations for higher-prioritised objectives, or vice versa. In this study, the LRPM is validated for 15 head and neck cancer patients receiving bilateral neck irradiation. The plans generated using the LRPM are compared with the plans resulting from the 2pɛc method. Both methods were capable of automatically generating clinically relevant treatment plans for all patients. For some patients, the LRPM allowed large favourable gains in some treatment plan objectives at the cost of only small degradations for the others. Moreover, because a single optimisation is applied instead of multiple optimisations, the LRPM reduced the average computation time from 209.2 to 9.5 min, a speed-up factor of 22 relative to the 2pɛc method.
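
The prioritised selection with slack that motivates both methods can be illustrated compactly. The sketch below is not the actual LRPM (which solves a single continuous optimisation problem); it mimics the sequential ɛ-constraint flavour on a finite candidate set, where each stage keeps only plans within a relative slack of the stage's best value so that lower-priority objectives can still trade off slightly against higher ones. The plan tuples and slack value are invented for illustration.

```python
def lexicographic_select(plans, priorities, slack=0.03):
    # Sequential epsilon-constraint-style selection: optimise objectives
    # in priority order; after each stage retain only plans within a
    # relative slack of that stage's best value (all objectives are
    # minimised here).
    feasible = list(plans)
    for obj in priorities:
        best = min(p[obj] for p in feasible)
        bound = best + slack * abs(best) + 1e-12
        feasible = [p for p in feasible if p[obj] <= bound]
    # Break remaining ties strictly lexicographically.
    return min(feasible, key=lambda p: tuple(p[o] for o in priorities))

# Hypothetical plans: (target coverage deficit, organ-at-risk dose,
# monitor units), highest priority first.
plans = [
    (0.00, 26.0, 900.0),
    (0.01, 20.0, 850.0),   # small coverage loss, lower organ dose
    (0.00, 25.5, 700.0),
    (0.10, 15.0, 600.0),   # unacceptable coverage loss
]
chosen = lexicographic_select(plans, priorities=[0, 1, 2])
```

With these numbers the selection first enforces the coverage objective, then allows a 3% slack on organ dose, and finally picks the plan with the fewest monitor units among what survives.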

  11. Conference on "State of the Art in Global Optimization : Computational Methods and Applications"

    Pardalos, P

    1996-01-01

    Optimization problems abound in most fields of science, engineering, and technology. In many of these problems it is necessary to compute the global optimum (or a good approximation) of a multivariable function. The variables that define the function to be optimized can be continuous and/or discrete and, in addition, many times satisfy certain constraints. Global optimization problems belong to the complexity class of NP-hard problems. Such problems are very difficult to solve. Traditional descent optimization algorithms based on local information are not adequate for solving these problems. In most cases of practical interest the number of local optima increases, on the average, exponentially with the size of the problem (number of variables). Furthermore, most of the traditional approaches fail to escape from a local optimum in order to continue the search for the global solution. Global optimization has received a lot of attention in the past ten years, due to the success of new algorithms for solving…
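
The failure mode described above, descent methods trapped in a local optimum, is exactly what stochastic global methods such as simulated annealing address: worse moves are accepted with probability exp(-Δ/T), so the search can climb out of a basin while the temperature is high. The double-well objective, schedule, and seeds below are our own illustrative choices.

```python
import math
import random

def anneal(f, x0, t0=5.0, cooling=0.995, steps=4000, seed=0):
    # Simulated annealing on a 1-D function: uphill moves are accepted
    # with probability exp(-delta / T), letting the walker escape local
    # optima that would trap pure descent.
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                      # geometric cooling schedule
    return best, fbest

# Double-well objective: local minimum near x = 2 (f ~ 2), global
# minimum near x = -2 (f ~ -2); descent started at x = 2 stays stuck.
f = lambda x: (x * x - 4.0) ** 2 + x
runs = [anneal(f, x0=2.0, seed=s) for s in range(5)]
x_star, f_star = min(runs, key=lambda r: r[1])
```

Starting from the inferior well at x = 2, the annealer routinely crosses the barrier at x = 0 and settles near the global minimum; a few independent restarts make the outcome robust.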

  12. A global method for calculating plant CSR ecological strategies applied across biomes world-wide

    Pierce, S.; Negreiros, D.; Cerabolini, B.E.L.; Kattge, J.; Díaz, S.; Kleyer, M.; Shipley, B.; Wright, S.J.; Soudzilovskaia, N.A.; Onipchenko, V.G.; van Bodegom, P.M.; Frenette-Dussault, C.; Weiher, E.; Pinho, B.X.; Cornelissen, J.H.C.; Grime, J.P.; Thompson, K.; Hunt, R.; Wilson, P.J.; Buffa, G.; Nyakunga, O.C.; Reich, P.B.; Caccianiga, M.; Mangili, F.; Ceriani, R.M.; Luzzaro, A.; Brusa, G.; Siefert, A.; Barbosa, N.P.U.; Chapin III, F.S.; Cornwell, W.K.; Fang, Jingyun; Wilson Fernandez, G.; Garnier, E.; Le Stradic, S.; Peñuelas, J.; Melo, F.P.L.; Slaviero, A.; Tabarrelli, M.; Tampucci, D.

    2017-01-01

    Competitor, stress-tolerator, ruderal (CSR) theory is a prominent plant functional strategy scheme previously applied to local floras. Globally, the wide geographic and phylogenetic coverage of available values of leaf area (LA), leaf dry matter content (LDMC) and specific leaf area (SLA)

  13. National Competitiveness in Global Economy: Evolution of Approaches and Methods of Assessment

    - Teng Delux

    2011-09-01

    The basic concept of national competitiveness and analytical models for estimating countries' competitiveness are considered, such as M. Porter's diamond model of the competitive advantages of national economies, C. Moon's generalized double diamond model of international competitiveness, S. Cho's 9-factors model, the Global Competitiveness Index (GCI) and the Knowledge Economy Index (KEI).

  14. Principles and methods of using wild animals in bioindication of global radioactive contamination

    Sokolov, V.E.; Krivolutskij, D.A.; Fedorov, E.A.; Pokarzhevskij, A.D.; Ryabtsev, I.A.; Usachev, V.L.

    1986-01-01

    Wild animals (mammals, birds, reptiles, amphibians, and land and soil invertebrates) in natural ecosystems accumulate easily detectable quantities of radionuclides. Techniques for estimating the radionuclide content in the organisms of animals are considered. It is suggested to use wild animals for studying the biogenic migration of radionuclides through food chains in ecosystems, in global monitoring of environmental contamination, and particularly in biosphere reserves.

  15. Structural-electrical coupling optimisation for radiating and scattering performances of active phased array antenna

    Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng

    2018-04-01

    It is well known that calculating and reducing the radar cross section (RCS) of an active phased array antenna (APAA) is both difficult and complicated. Balancing the radiating and scattering performance when the RCS is reduced remains an unresolved problem. Therefore, this paper develops a structure and scattering array factor coupling model of the APAA based on the phase errors of radiated elements generated by structural distortion and installation errors of the array. To obtain the optimal radiating and scattering performance, an integrated optimisation model is built to optimise the installation height of all the radiated elements in the normal direction of the array, in which the particle swarm optimisation method is adopted and the gain loss and scattering array factor are selected as the fitness function. The simulation indicates that the proposed coupling model and integrated optimisation method can effectively decrease the RCS while simultaneously guaranteeing the necessary radiating performance, which demonstrates important application value in the engineering design and structural evaluation of APAA.

  16. Group search optimiser-based optimal bidding strategies with no Karush-Kuhn-Tucker optimality conditions

    Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.

    2017-03-01

    The general strategic bidding procedure has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex, and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms. The problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem to a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on IEEE 14 as well as IEEE 30 bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.
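
The reformulation idea, folding constraints into a single minimisation objective so a population-based solver can be applied without KKT conditions, can be sketched with a penalty function. Everything below is illustrative: the toy profit curve and operating limit are invented, and a generic contract-around-the-best population search stands in for the actual group search optimiser.

```python
import random

def penalised(profit, constraints, penalty=1e3):
    # Fold inequality constraints g(x) <= 0 into one minimisation
    # objective: minimise -profit(x) plus a penalty for each violation,
    # so no KKT reformulation is needed.
    def objective(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return -profit(x) + penalty * violation
    return objective

def population_search(objective, lo, hi, n=30, iters=200, seed=5):
    # Generic population-based minimiser standing in for GSO: scatter
    # members over the range, then repeatedly resample around the best.
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(n)]
    best = min(pop, key=objective)
    for _ in range(iters):
        pop = [best + rng.gauss(0.0, 0.1 * (hi - lo)) for _ in range(n)]
        cand = min(pop, key=objective)
        if objective(cand) < objective(best):
            best = cand
    return best

# Toy bidding problem: concave profit in the offered quantity x, with
# operating limits 0 <= x <= 60 (numbers are illustrative only).
profit = lambda x: 40.0 * x - 0.4 * x * x          # maximum at x = 50
limits = [lambda x: x - 60.0, lambda x: -x]        # x <= 60, x >= 0
objective = penalised(profit, limits)
x_best = population_search(objective, 0.0, 100.0)
```

Because the unconstrained maximiser (x = 50) lies inside the operating limits here, the penalised search converges to it; tightening the limit below 50 would instead pin the solution at the constraint boundary.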

  17. Global analysis of cloud field coverage and radiative properties, using morphological methods and MODIS observations

    R. Z. Bar-Or

    2011-01-01

    The recently recognized continuous transition zone between detectable clouds and cloud-free atmosphere ("the twilight zone") is affected by undetectable clouds and humidified aerosol. In this study, we suggest to distinguish cloud fields (including the detectable clouds and the surrounding twilight zone) from cloud-free areas, which are not affected by clouds. For this classification, a robust and simple-to-implement cloud field masking algorithm, which uses only the spatial distribution of clouds, is presented in detail. A global analysis, estimating Earth's cloud field coverage (50° S–50° N) for 28 July 2008, using Moderate Resolution Imaging Spectroradiometer (MODIS) data, finds that while the declared cloud fraction is 51%, the global cloud field coverage reaches 88%. The results reveal the low likelihood of finding a cloud-free pixel and suggest that this likelihood may decrease as the pixel size becomes larger. A global latitudinal analysis of cloud fields finds that unlike oceans, which are more uniformly covered by cloud fields, land areas located under the subsidence zones of the Hadley cell (the desert belts) contain proper areas for investigating cloud-free atmosphere, as there is a 40–80% probability of detecting clear sky over them. Usually these golden pixels, with a higher likelihood of being free of clouds, are over deserts. An independent global statistical analysis, using MODIS aerosol and cloud products, reveals a sharp exponential decay of the global mean aerosol optical depth (AOD) as a function of the distance from the nearest detectable cloud, both above ocean and land. A similar statistical analysis finds an exponential growth of the mean aerosol fine-mode fraction (FMF) over oceans when the distance from the nearest cloud increases. A 30 km scale break clearly appears in several analyses here, suggesting this is a typical natural scale of cloud fields. This work shows different microphysical and optical properties of cloud fields
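
The exponential-decay fit underlying the AOD-versus-distance analysis is a standard log-linear regression: for v = a * exp(-d / L), log v is linear in d. The sketch below recovers a and the e-folding scale L from a synthetic profile; the numbers are illustrative, not the MODIS values from the paper.

```python
import math

def fit_exponential(distances, values):
    # Least-squares fit of v = a * exp(-d / L) via linear regression on
    # the logarithm: log v = log a - d / L.
    n = len(distances)
    logs = [math.log(v) for v in values]
    mean_d = sum(distances) / n
    mean_l = sum(logs) / n
    cov = sum((d - mean_d) * (l - mean_l) for d, l in zip(distances, logs))
    var = sum((d - mean_d) ** 2 for d in distances)
    slope = cov / var                  # slope = -1 / L
    a = math.exp(mean_l - slope * mean_d)
    scale = -1.0 / slope
    return a, scale

# Synthetic AOD-versus-distance profile with a 30 km e-folding scale.
dists = [0, 5, 10, 15, 20, 30, 40, 60]             # km from nearest cloud
aod = [0.35 * math.exp(-d / 30.0) for d in dists]
a, scale = fit_exponential(dists, aod)
```

On noise-free data the fit recovers the amplitude (0.35) and the characteristic scale (30 km) exactly; with real retrievals one would bin by distance first and fit the bin means.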

  18. Achieving a Sustainable Urban Form through Land Use Optimisation: Insights from Bekasi City’s Land-Use Plan (2010–2030)

    Rahmadya Trias Handayanto

    2017-02-01

    Cities worldwide have been trying to achieve a sustainable urban form to handle their rapid urban growth. Many sustainable urban forms have been studied, and two of them, the compact city and the eco city, were chosen in this study as urban form foundations. Based on these forms, four sustainable city criteria (compactness, compatibility, dependency, and suitability) were considered as necessary functions for land use optimisation. This study presents land use optimisation as a method for achieving a sustainable urban form. Three optimisation methods (particle swarm optimisation, genetic algorithms, and a local search method) were combined into a single hybrid optimisation method for land use in Bekasi city, Indonesia. It was also used for examining Bekasi city's land-use plan (2010–2030) after optimising current (2015) and future (2030) land use. After current land use optimisation, the score of the sustainable city criteria increased significantly. Three important centres of land use (commercial, industrial, and residential) were also created by clustering the results. These centres were slightly different from the centres of the city plan zones. Additional land uses in 2030 were predicted using a nonlinear autoregressive neural network with external input. Three scenarios were used for allocating these additional land uses: sustainable development, government policy, and business-as-usual. Future land use allocation in 2030 found that the sustainable development scenario showed better performance compared to the government policy and business-as-usual scenarios.
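
The local-search component of such a hybrid can be sketched as pairwise-swap hill climbing on a land-use grid: swapping two cells preserves the totals of each land use, and a swap is kept only if a compactness score does not decrease. The grid, labels and scoring function below are our own toy stand-ins for the paper's four-criterion objective.

```python
import random

def compactness(grid):
    # Count horizontally/vertically adjacent cell pairs that share a
    # land use: a simple stand-in for the compact-city criterion.
    rows, cols = len(grid), len(grid[0])
    score = 0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols and grid[r][c] == grid[r][c + 1]:
                score += 1
            if r + 1 < rows and grid[r][c] == grid[r + 1][c]:
                score += 1
    return score

def local_search(grid, iters=2000, seed=3):
    # Pairwise-swap hill climbing: accept a swap of two cells only if it
    # does not lower the objective, so land-use totals are preserved.
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    grid = [row[:] for row in grid]
    score = compactness(grid)
    for _ in range(iters):
        r1, c1 = rng.randrange(rows), rng.randrange(cols)
        r2, c2 = rng.randrange(rows), rng.randrange(cols)
        if grid[r1][c1] == grid[r2][c2]:
            continue
        grid[r1][c1], grid[r2][c2] = grid[r2][c2], grid[r1][c1]
        new = compactness(grid)
        if new >= score:
            score = new
        else:  # revert a worsening swap
            grid[r1][c1], grid[r2][c2] = grid[r2][c2], grid[r1][c1]
    return grid, score

# Scattered 4x4 allocation of residential (R), commercial (C) and
# industrial (I) parcels.
start = [["R", "C", "R", "I"],
         ["I", "R", "C", "R"],
         ["R", "I", "R", "C"],
         ["C", "R", "I", "R"]]
optimised, score = local_search(start)
```

In the full hybrid, PSO and the genetic algorithm would supply diverse candidate allocations and this local step would polish them; here it simply demonstrates that same-use parcels cluster while the parcel counts stay fixed.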

  19. Optimisation of milling parameters using neural network

    Lipski Jerzy

    2017-01-01

    The purpose of this study was to design and test intelligent computer software developed with the purpose of increasing the average productivity of milling without compromising the design features of the final product. The developed system generates optimal milling parameters based on the extent of tool wear. The introduced optimisation algorithm employs a multilayer model of a milling process developed in an artificial neural network. The input parameters for model training are the following: cutting speed vc, feed per tooth fz and the degree of tool wear measured by means of localised flank wear (VB3). The output parameter is the surface roughness of the machined surface, Ra. Since the model in the neural network exhibits good approximation of functional relationships, it was applied to determine optimal milling parameters under changing tool wear conditions (VB3) and to stabilise the surface roughness parameter Ra. Our solution enables constant control over surface roughness parameters and the productivity of the milling process after each assessment of tool condition. The recommended parameters, i.e. those which, when applied in milling, ensure the desired surface roughness and maximal productivity, are selected from all the parameters generated by the model. The developed software may constitute an expert system supporting a milling machine operator. In addition, the application may be installed on a mobile device (smartphone), connected to a tool wear diagnostics instrument and the machine tool controller, in order to supply updated optimal milling parameters. The presented solution facilitates tool life optimisation and decreases tool change costs, particularly during prolonged operation.
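
The selection step, picking the most productive parameters that still meet a roughness target for the current tool wear, can be sketched as a constrained grid search over a surrogate model. The linear `roughness_model` below is a hypothetical stand-in for the paper's trained neural network (its coefficients, the parameter grid and the Ra limit are all invented for illustration).

```python
def roughness_model(vc, fz, vb):
    # Hypothetical stand-in for the trained neural-network surrogate of
    # surface roughness Ra: grows with feed, speed and flank wear
    # (illustrative coefficients, not fitted to the paper's data).
    return 0.3 + 2.0 * fz + 0.001 * vc + 0.5 * vb

def optimal_parameters(vb, ra_limit=1.25):
    # Exhaustive search over a discrete parameter grid: maximise the
    # productivity proxy vc * fz subject to predicted Ra <= ra_limit.
    # Re-run whenever a new tool-wear measurement vb arrives.
    best = None
    for vc in range(100, 301, 10):        # cutting speed, m/min
        for fz_i in range(5, 26):         # feed per tooth, 0.05-0.25 mm
            fz = fz_i / 100.0
            if roughness_model(vc, fz, vb) <= ra_limit:
                prod = vc * fz
                if best is None or prod > best[0]:
                    best = (prod, vc, fz)
    return best

fresh = optimal_parameters(vb=0.1)   # lightly worn tool
worn = optimal_parameters(vb=0.4)    # heavily worn tool
```

As the flank wear grows, the feasible region shrinks and the recommended productivity drops, which is the behaviour the expert system exploits after each tool-condition assessment.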

  20. Noise aspects at aerodynamic blade optimisation projects

    Schepers, J.G. [Netherlands Energy Research Foundation, Petten (Netherlands)

    1997-12-31

    This paper shows an example of an aerodynamic blade optimisation, using the program PVOPT. PVOPT calculates the optimal wind turbine blade geometry such that the maximum energy yield is obtained. Using the aerodynamically optimal blade design as a basis, the possibilities for noise reduction are investigated. The aerodynamically optimised geometry from PVOPT is the 'real' optimum (up to the latest decimal). The most important conclusion from this study is that it is worthwhile to investigate the behaviour of the objective function (in the present case the energy yield) around the optimum: if the optimum is flat, there is a possibility to apply modifications to the optimum configuration with only a limited loss in energy yield. It is obvious that the modified configurations emit a different (and possibly lower) noise level. In the BLADOPT program (the successor of PVOPT) it will be possible to quantify the noise level and hence to assess the reduced noise emission more thoroughly. At present the most promising approaches for noise reduction are believed to be a reduction of the rotor speed (if at all possible), and a reduction of the tip angle by means of low-lift profiles or decreased twist at the outboard stations. These modifications were possible without a significant loss in energy yield. (LN)
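
Probing the flatness of the objective around the optimum, the key recommendation above, amounts to scanning a design parameter and collecting every modification whose yield loss stays within a tolerance. The quadratic yield curve and tip-angle numbers below are invented for illustration, not PVOPT output.

```python
def flat_region(yield_fn, x_opt, span=2.0, step=0.1, loss_tol=0.01):
    # Scan a design parameter around the optimiser's answer and keep
    # every modification whose energy-yield loss stays within loss_tol;
    # a noise-oriented redesign can then choose among these candidates.
    y_opt = yield_fn(x_opt)
    region = []
    n = int(round(2.0 * span / step)) + 1
    for i in range(n):
        x = x_opt - span + i * step
        if yield_fn(x) >= (1.0 - loss_tol) * y_opt:
            region.append(x)
    return region

# Illustrative yield curve with a flat top around a tip angle of 10
# degrees; within a 1% yield loss the angle may move by about +/- 1.4.
annual_yield = lambda angle: 1000.0 - 5.0 * (angle - 10.0) ** 2
region = flat_region(annual_yield, x_opt=10.0)
```

If the returned interval is wide, the optimum is flat and a lower-noise configuration (e.g. reduced tip angle) can be chosen from it at negligible energy cost; if it is narrow, every noise modification is expensive in yield.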