WorldWideScience

Sample records for optimise uptake parameters

  1. Optimising root system hydraulic architectures for water uptake

    Science.gov (United States)

    Meunier, Félicien; Couvreur, Valentin; Draye, Xavier; Javaux, Mathieu

    2015-04-01

    In this study we started from a local hydraulic analysis of idealised root systems to develop the mathematical framework necessary for understanding global root system behaviour. The underlying assumption was that the plant is naturally optimised for water uptake: the root system is thus a pipe network dedicated to the capture and transport of water. The main objective of the present research is to explain the fitness of the major types of root architecture to their environment. In a first step, we developed links between local hydraulic properties and macroscopic parameters of (un)branched roots. The outcomes of this approach were functions for the apparent conductance of the entire root system and for the uptake distribution along the roots. We compared our development with allometric scaling laws for root water uptake: under the same simplifying assumptions we obtained the same results and were even able to extend them to more physiological cases. Using empirical data of measured root conductance, we were also able to fit the data set extremely well with this model. In a second stage we used generic architecture parameters and an existing root growth model to generate various types of root systems (from fibrous to tap). We then combined both sides (hydraulics and architecture) to maximise, under a volume constraint, either the apparent conductance of the root system or the soil volume explored by active roots during the plant growth period. This approach identified the sensitive parameters behind the macroscopic behaviour (conductance and location of the water uptake) of each plant selected for this study. Scientific questions such as "What is the optimal sowing density of a given hydraulic architecture?" or "Which plant traits can we change to better explore the soil domain?" can also be addressed with this approach: some potential applications are illustrated. The next (and ultimate) phase will be to validate our conclusions with real architectures

  2. Parameter Optimisation for the Behaviour of Elastic Models over Time

    DEFF Research Database (Denmark)

    Mosegaard, Jesper

    2004-01-01

    Optimisation of parameters for elastic models is essential for comparison or for finding equivalent behaviour of elastic models when parameters cannot simply be transferred or converted. This is the case with a large range of commonly used elastic models. In this paper we present a general method … that will optimise parameters based on the behaviour of the elastic models over time …

  3. Optimisation study of a vehicle bumper subsystem with fuzzy parameters

    Science.gov (United States)

    Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.

    2012-10-01

    This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. Automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and regarding environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, thereby facilitating automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the failure possibility at system level is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of reliability-based design optimisation, which is used in a probabilistic context with statistically defined parameters (variabilities).

  4. Parameter Screening and Optimisation for ILP Using Designed Experiments

    Science.gov (United States)

    Srinivasan, Ashwin; Ramakrishnan, Ganesh

    Reports of experiments conducted with an Inductive Logic Programming system rarely describe how specific values of parameters of the system are arrived at when constructing models. Usually, no attempt is made to identify sensitive parameters, and those that are used are often given "factory-supplied" default values, or values obtained from some non-systematic exploratory analysis. The immediate consequence of this is, of course, that it is not clear if better models could have been obtained if some form of parameter selection and optimisation had been performed. Questions follow inevitably on the experiments themselves: specifically, are all algorithms being treated fairly, and is the exploratory phase sufficiently well-defined to allow the experiments to be replicated? In this paper, we investigate the use of parameter selection and optimisation techniques grouped under the study of experimental design. Screening and "response surface" methods determine, in turn, sensitive parameters and good values for these parameters. This combined use of parameter selection and response surface-driven optimisation has a long history of application in industrial engineering, and its role in ILP is investigated using two well-known benchmarks. The results suggest that computational overheads from this preliminary phase are not substantial, and that much can be gained, both on improving system performance and on enabling controlled experimentation, by adopting well-established procedures such as the ones proposed here.
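
    The screening-plus-response-surface procedure described above typically fits a second-order polynomial in the screened parameters; a generic form used in response surface methodology (not necessarily the exact model fitted by the authors) is

    y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^2 + \sum_{i<j} \beta_{ij} x_i x_j + \varepsilon

    where y is the system response (e.g. predictive accuracy), the x_i are the k sensitive parameters retained after screening, and good parameter values are read off from the stationary or best point of the fitted surface.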

  5. Fault diagnosis based on support vector machines with parameter optimisation by artificial immunisation algorithm

    Science.gov (United States)

    Yuan, Shengfa; Chu, Fulei

    2007-04-01

    Support vector machines (SVM) are a general machine-learning tool based on the structural risk minimisation principle that exhibits good generalisation when fault samples are few; they are especially suited to classification, forecasting and estimation in small-sample cases such as fault diagnosis. However, some parameters in SVM are usually selected based on the operator's experience, which has hampered its efficiency in practical applications. In this paper, an artificial immunisation algorithm (AIA) is used to optimise the parameters in SVM. The AIA is a new optimisation method based on the biological immune principles of humans and other living beings. It can effectively avoid premature convergence and guarantees the diversity of solutions. With the parameters optimised by AIA, the overall capability of the SVM classifier is improved. The fault diagnosis of a turbo pump rotor shows that the SVM optimised by AIA gives higher recognition accuracy than the normal SVM.
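
    As an illustration of the kind of parameter search described above, the following is a minimal clonal-selection-style sketch in Python using scikit-learn; it is not the authors' AIA implementation, and the dataset, population size, cloning rate and mutation schedule are illustrative assumptions. Each antibody encodes (C, gamma) in log10 space and its affinity is the cross-validated accuracy.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=200, n_features=10, random_state=0)  # stand-in data
    LO, HI = np.array([-2.0, -4.0]), np.array([3.0, 1.0])  # log10 bounds for (C, gamma)

    def affinity(antibody):
        log_c, log_g = antibody
        clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_g)
        return cross_val_score(clf, X, y, cv=5).mean()

    pop = rng.uniform(LO, HI, size=(16, 2))                  # initial antibody population
    for generation in range(20):
        fit = np.array([affinity(ab) for ab in pop])
        best = pop[np.argsort(fit)[::-1][:4]]                # select the highest-affinity antibodies
        clones = np.repeat(best, 2, axis=0)                  # clone the selected antibodies
        scale = np.repeat(np.linspace(0.05, 0.5, 4), 2)[:, None]  # better clones are mutated less
        clones = np.clip(clones + rng.normal(0.0, scale, clones.shape), LO, HI)  # hypermutation
        fresh = rng.uniform(LO, HI, size=(4, 2))             # receptor editing preserves diversity
        pop = np.vstack([best, clones, fresh])               # next generation (16 antibodies)

    fit = np.array([affinity(ab) for ab in pop])
    best_ab = pop[np.argmax(fit)]
    print("best C = %.3g, gamma = %.3g, CV accuracy = %.3f"
          % (10.0 ** best_ab[0], 10.0 ** best_ab[1], fit.max()))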

  6. Pre-segmented 2-Step IMRT with subsequent direct machine parameter optimisation – a planning study

    Directory of Open Access Journals (Sweden)

    Flentje Michael

    2008-11-01

    Background: Modern intensity modulated radiotherapy (IMRT) mostly uses iterative optimisation methods. The integration of machine parameters into the optimisation process of step-and-shoot leaf positions has been shown to be successful. For IMRT segmentation algorithms based on the analysis of the geometrical structure of the planning target volumes (PTV) and the organs at risk (OAR), the potential of such procedures has not yet been fully explored. In this work, 2-Step IMRT was combined with subsequent direct machine parameter optimisation (DMPO, RaySearch Laboratories, Sweden) to investigate this potential. Methods: In a planning study, DMPO on a commercial planning system was compared with manual primary 2-Step IMRT segment generation followed by DMPO optimisation. 15 clinical cases and the ESTRO Quasimodo phantom were employed. Both the same number of optimisation steps and the same set of objective values were used. The plans were compared with a clinical DMPO reference plan and a traditional IMRT plan based on fluence optimisation and subsequent segmentation. The composite objective value (the weighted sum of quadratic deviations of the objective values and the related points in the dose volume histogram) was used as a measure of plan quality. Additionally, a more extended set of parameters was used for the breast cases to compare the plans. Results: The plans with segments pre-defined with 2-Step IMRT were slightly superior to DMPO alone in the majority of cases. The composite objective value tended to be even lower for a smaller number of segments. The total number of monitor units was slightly higher than for the DMPO plans. Traditional IMRT fluence optimisation with subsequent segmentation could not compete. Conclusion: 2-Step IMRT segmentation is suitable as a starting point for further DMPO optimisation and, in general, results in less complex plans which are equal or superior to plans generated by DMPO alone.

  7. Multi-parameter building thermal analysis using the lattice method for global optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Saporito, A. [Fire and Environmental Modelling Centre, Building Research Establishment, Watford (United Kingdom); Day, A.R.; Karayiannis, T.G. [School of Engineering Systems and Design, South Bank University, London (United Kingdom); Parand, F. [Centre for Construction IT, Building Research Establishment, Watford (United Kingdom)

    2000-07-01

    The energy performance in buildings is a complex function of the building form and structure, heating system, occupancy pattern, operating schedules, and external climatic conditions. Computer simulations can help understand the dynamic interactions of these parameters. However, to carry out a multi-parameter analysis for the optimisation of the building energy performance, it is necessary to reduce the large number of tests resulting from all possible parameter combinations. In this paper, the lattice method for global optimisation (LMGO) for reducing the number of tests was used. A multi-parameter study was performed to investigate the heating energy use in office buildings using the thermal simulation code APACHE (IES-FACET). From the results of the sensitivity analysis it was possible to estimate the relative importance of various energy saving features. (author)

  8. Probabilistic Constraint Programming for Parameters Optimisation of Generative Models

    CERN Document Server

    Zanin, Massimiliano; Sousa, Pedro A C; Cruz, Jorge

    2015-01-01

    Complex networks theory has commonly been used for modelling and understanding the interactions taking place between the elements composing complex systems. More recently, the use of generative models has gained momentum, as they allow identifying which forces and mechanisms are responsible for the appearance of given structural properties. In spite of this interest, several problems remain open, one of the most important being the design of robust mechanisms for finding the optimal parameters of a generative model, given a set of real networks. In this contribution, we address this problem by means of Probabilistic Constraint Programming. By using as an example the reconstruction of networks representing brain dynamics, we show how this approach is superior to other solutions, in that it allows a better characterisation of the parameters space, while requiring a significantly lower computational cost.

  9. Optimisation of dispersion parameters of Gaussian plume model for CO₂ dispersion.

    Science.gov (United States)

    Liu, Xiong; Godbole, Ajit; Lu, Cheng; Michal, Guillaume; Venton, Philip

    2015-11-01

    The carbon capture and storage (CCS) and enhanced oil recovery (EOR) projects entail the possibility of accidental release of carbon dioxide (CO2) into the atmosphere. To quantify the spread of CO2 following such release, the 'Gaussian' dispersion model is often used to estimate the resulting CO2 concentration levels in the surroundings. The Gaussian model enables quick estimates of the concentration levels. However, the traditionally recommended values of the 'dispersion parameters' in the Gaussian model may not be directly applicable to CO2 dispersion. This paper presents an optimisation technique to obtain the dispersion parameters in order to achieve a quick estimation of CO2 concentration levels in the atmosphere following CO2 blowouts. The optimised dispersion parameters enable the Gaussian model to produce quick estimates of CO2 concentration levels, precluding the necessity to set up and run much more complicated models. Computational fluid dynamics (CFD) models were employed to produce reference CO2 dispersion profiles in various atmospheric stability classes (ASC), different 'source strengths' and degrees of ground roughness. The performance of the CFD models was validated against the 'Kit Fox' field measurements, involving dispersion over a flat horizontal terrain, both with low and high roughness regions. An optimisation model employing a genetic algorithm (GA) to determine the best dispersion parameters in the Gaussian plume model was set up. Optimum values of the dispersion parameters for different ASCs that can be used in the Gaussian plume model for predicting CO2 dispersion were obtained.
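
    For orientation, the dispersion parameters referred to above enter the standard Gaussian plume expression for a continuous point source with ground reflection (a textbook form; the exact formulation and parameterisation optimised in the study may differ):

    C(x, y, z) = \frac{Q}{2\pi u \,\sigma_y(x)\,\sigma_z(x)} \exp\!\left(-\frac{y^2}{2\sigma_y^2}\right) \left[\exp\!\left(-\frac{(z-H)^2}{2\sigma_z^2}\right) + \exp\!\left(-\frac{(z+H)^2}{2\sigma_z^2}\right)\right]

    where Q is the source strength, u the wind speed and H the effective release height. The horizontal and vertical dispersion parameters are commonly written as power laws of downwind distance, e.g. \sigma_y = a x^b and \sigma_z = c x^d, and it is coefficients of this kind, one set per atmospheric stability class, that a genetic algorithm can tune against the CFD reference profiles.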

  10. Optimisation of process parameters in friction stir welding based on residual stress analysis: a feasibility study

    DEFF Research Database (Denmark)

    Tutum, Cem Celal; Hattel, Jesper Henri

    2010-01-01

    The present paper considers the optimisation of process parameters in friction stir welding (FSW). More specifically, the choices of rotational speed and traverse welding speed have been investigated using genetic algorithms. The welding process is simulated in a transient, two-dimensional sequen…, and this is presented as a Pareto optimal front. Moreover, a higher welding speed for a fixed rotational speed results, in general, in slightly higher stress levels in the tension zone, whereas a higher rotational speed for a fixed welding speed yields somewhat lower peak residual stress, however, a wider tension zone …

  11. Optimisation of algorithm control parameters in cultural differential evolution applied to molecular crystallography

    Institute of Scientific and Technical Information of China (English)

    Maryjane TREMAYNE; Samantha Y. CHONG; Duncan BELL

    2009-01-01

    Evolutionary search and optimisation algorithms have been used successfully in many areas of materials science and chemistry. In recent years, these techniques have been applied to, and have revolutionised, the study of crystal structures from powder diffraction data. In this paper we present the application of a hybrid global optimisation technique, cultural differential evolution (CDE), to crystal structure determination from powder diffraction data. The combination of the principles of social evolution and biological evolution, through the pruning of the parameter search space, shows significant improvement in the efficiency of the calculations over the traditional dictates of biological evolution alone. Results are presented in which a range of algorithm control parameters, i.e., population size, mutation and recombination rates, and the extent of culture-based pruning, are used to assess the performance of this hybrid technique. The effects of these control parameters on the speed and efficiency of the optimisation calculations are discussed, and the potential advantages of the CDE approach are demonstrated through an average 40% improvement in speed of convergence of the calculations presented, and a maximum gain of 68% with larger population sizes.
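
    A minimal sketch of a cultural differential evolution loop is given below in Python; it is illustrative only (the objective function, population size and belief-space rule are assumptions, and the authors' figure of merit for powder diffraction data is not reproduced). The "cultural" element is a normative belief space, spanned by the current elite, that prunes the region in which trial vectors are accepted.

    import numpy as np

    rng = np.random.default_rng(1)

    def cost(x):
        # Placeholder objective standing in for the powder-diffraction figure of merit.
        return np.sum((x - 0.3) ** 2)

    dim, n_pop, F, CR = 6, 30, 0.7, 0.9
    lower, upper = np.zeros(dim), np.ones(dim)
    pop = rng.uniform(lower, upper, (n_pop, dim))
    fit = np.array([cost(x) for x in pop])

    for gen in range(100):
        # Belief space: normative bounds spanned by the best 20% of the population.
        elite = pop[np.argsort(fit)[: n_pop // 5]]
        lo = np.maximum(lower, elite.min(axis=0) - 0.05)
        hi = np.minimum(upper, elite.max(axis=0) + 0.05)
        for i in range(n_pop):
            a, b, c = pop[rng.choice(n_pop, 3, replace=False)]  # DE/rand/1 donors (simplified)
            mutant = a + F * (b - c)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                     # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            trial = np.clip(trial, lo, hi)                      # culture-based pruning of the search space
            f_trial = cost(trial)
            if f_trial <= fit[i]:                               # greedy selection
                pop[i], fit[i] = trial, f_trial

    print("best cost:", fit.min(), "at", pop[np.argmin(fit)])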

  12. Land surface parameter optimisation through data assimilation: the adJULES system

    Science.gov (United States)

    Raoult, Nina; Jupp, Tim; Cox, Peter

    2017-04-01

    Land-surface models (LSMs) are crucial components of the Earth system models (ESMs) that are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. JULES is also extensively used offline as a land-surface impacts tool, forced with climatologies into the future. In this study, JULES is automatically differentiated with respect to JULES parameters using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameters by calibrating against observations. We present adJULES in a data assimilation framework and demonstrate its ability to improve the model-data fit using eddy-covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 85% of the sites used in the study, at both the calibration and evaluation stages. The new improved parameters for JULES are presented along with the associated uncertainties for each parameter.
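
    Gradient-based calibration systems of this kind typically minimise a Bayesian cost function of the general form below, with the adjoint supplying the gradient; this is a generic statement of such a cost function, and the precise formulation used in adJULES may differ in its details:

    J(\mathbf{z}) = \tfrac{1}{2}\,(\mathbf{z}-\mathbf{z}_0)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{z}-\mathbf{z}_0) + \tfrac{1}{2}\sum_{t}\bigl(\mathbf{M}_t(\mathbf{z})-\mathbf{o}_t\bigr)^{\mathrm{T}}\mathbf{R}^{-1}\bigl(\mathbf{M}_t(\mathbf{z})-\mathbf{o}_t\bigr)

    where z is the vector of PFT parameters, z_0 the prior (default) values, M_t(z) the modelled GPP and LE at time t, o_t the corresponding eddy-covariance observations, and B and R the prior and observation error covariance matrices.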

  13. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience

    Science.gov (United States)

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B.; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases. PMID:27375471

  14. Exploring the triplet parameters space to optimise the final focus of the FCC-hh

    CERN Document Server

    Van Riesen-Haupt, Leon; Seryi, Andrei; Cruz Alaniz, Emilia

    2017-01-01

    One of the main challenges when designing final focus systems of particle accelerators is maximising the beam stay-clear in the strong quadrupole magnets of the inner triplet. Moreover, it is desirable to keep the quadrupoles in the triplet as short as possible, for space and cost reasons but also to reduce chromaticity and simplify correction schemes. An algorithm that explores the triplet parameter space to optimise both these aspects was written. It uses thin lenses as a first approximation and MADX for more precise calculations. In cooperation with radiation studies, this algorithm was then applied to design an alternative triplet for the final focus of the Future Circular Collider (FCC-hh).

  15. Optimisation of manufacturing parameters for a Ni-Ag fuel cell electrode

    Energy Technology Data Exchange (ETDEWEB)

    Pishbin, M.H. [School of Metallurgy and Materials Engineering, Faculty of Engineering, University of Tehran, Tehran (Iran); Mohammadi, A.R. [Department of Mechanical Engineering, University of British Columbia, Vancouver (Canada); Nasri, M. [Information Technology Company of Iran, Tehran (Iran)

    2007-08-15

    The aim of this research is to optimise the manufacturing parameters for a fuel cell electrode. A combination of nickel oxide, silver oxide and ammonium bicarbonate powders is used to produce the electrode; the main role of the silver is to increase the activity of the electrode. The Ni-Ag electrode can be used in fuel cells as either a positive or a negative electrode. All powders are mixed in benzene solution with a magnetic mixer and then compressed to form a green electrode. The pressure in this step is between 40 and 160 MPa. The green electrode is sintered in a hydrogen atmosphere in a tube furnace and then cooled to 200 °C under an argon atmosphere. The ranges of sintering temperature and time are 500-800 °C and 10-60 min, respectively. Also, the silver oxide and ammonium bicarbonate percentages are varied from 20 to 65% and 15 to 35%, respectively. All parameters, including composition, pressure, sintering temperature and time, are varied during electrode fabrication to achieve optimised electrode properties, so several tests are needed to measure porosity, surface area, density, weight loss, mechanical strength, shrinkage, exchange current density and metallography. The optimum conditions for electrode production resulting from this investigation are a compacting pressure of 60 MPa, sintering temperature of 560 °C, sintering time of 15 min, silver oxide percentage of 50% and ammonium bicarbonate percentage of 27%. (Abstract Copyright [2007], Wiley Periodicals, Inc.)

  16. OPTIMISATION OF PROCESS PARAMETER CONDITIONS FOR BIODIESEL PRODUCTION BY REACTIVE EXTRACTION OF JATROPHA SEEDS

    Directory of Open Access Journals (Sweden)

    MUHAMMAD DANI SUPARDAN

    2017-03-01

    Biodiesel can be produced by reactive extraction of jatropha seeds to reduce the cost and processing time associated with conventional methods. In this study, the relationship between the various parameters of reactive extraction of jatropha seeds is investigated. The effects of processing time, the moisture content of the jatropha seeds and the hexane-to-oil weight ratio are examined to determine the best performance in terms of biodiesel yield. Response surface methodology was used to statistically evaluate and optimise the process parameter conditions. An optimum biodiesel yield of 73.7% was achieved under the following conditions: processing time of 160 min, moisture content of the jatropha seeds of 1% and hexane-to-oil weight ratio of 7.2.

  17. Optimisation of design parameters for collimators and pin-holes of bolometer cameras

    Energy Technology Data Exchange (ETDEWEB)

    Meister, H. [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, D-85748 Garching (Germany); Kalvin, S. [Wigner Research Centre for Physics, Hungarian Academy of Sciences, Konkoly-Thege Miklós 29–33, H-1121 Budapest (Hungary)

    2014-12-15

    The total radiation emission profile of fusion experiments is usually determined using the bolometer diagnostic. In order to evaluate the spatially resolved profile, many line-integrated measurements are inverted using tomographic reconstruction techniques. Their success depends on a well-known and optimised definition of the viewing cone of every line of sight. To this aim, a set of equations has been derived and put in hierarchical order to define the design parameters for bolometer cameras in fusion experiments. In particular, previous considerations, which focussed on beam width overlap and light yield optimisation, are extended to explicitly take into account geometrical boundary conditions imposed by the experimental device, with an emphasis on small gaps through which viewing cones have to pass. The equations are derived for both camera types, collimator and pin-hole versions. The results obtained can be used to design bolometer cameras for any fusion device, and in particular also for ITER. An example of such an application is given and implications for the realisation of the optimal design are discussed.

  18. Implementation and comparative analysis of the optimisations produced by evolutionary algorithms for the parameter extraction of PSP MOSFET model

    Science.gov (United States)

    Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.

    2016-05-01

    The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal-oxide-semiconductor field-effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using the PSP surface-potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method inspired by bird flocking behaviour. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and the basic ABC algorithm for the parameter extraction of the MOSFET model; they also show that the implementation of the ABC algorithm is simpler than that of the PSO algorithm.
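
    The following is a compact, hypothetical PSO sketch in Python for the kind of extraction loop described above. The PSP model itself is far more complex and is not reproduced; a simple square-law drain-current expression and synthetic "measured" data stand in for it, and the swarm size, inertia and acceleration coefficients are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    Vgs = np.linspace(0.5, 1.8, 30)
    true_k, true_vth = 2.0e-4, 0.45
    I_meas = true_k * np.maximum(Vgs - true_vth, 0.0) ** 2      # synthetic "measurements"

    def rmse(params):
        k, vth = params
        I_model = k * np.maximum(Vgs - vth, 0.0) ** 2           # stand-in for the PSP model
        return np.sqrt(np.mean((I_model - I_meas) ** 2))

    n_part, dim = 20, 2
    lo, hi = np.array([1e-5, 0.1]), np.array([1e-3, 1.0])       # bounds on (k, Vth)
    x = rng.uniform(lo, hi, (n_part, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([rmse(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()

    w, c1, c2 = 0.7, 1.5, 1.5                                   # inertia and acceleration coefficients
    for it in range(200):
        r1, r2 = rng.random((n_part, dim)), rng.random((n_part, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([rmse(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]   # update personal bests
        gbest = pbest[np.argmin(pbest_f)].copy()                # update global best

    print("extracted k = %.3e, Vth = %.3f, RMSE = %.2e" % (gbest[0], gbest[1], pbest_f.min()))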

  19. Study of Rotor Spun Basofil/Cotton Blended Yarn Quality Characteristics during Optimisation of Processing Parameters

    Institute of Scientific and Technical Information of China (English)

    Mwasiagi J.I.; WANG Xin-hou; Tuigong D.R.; Wang J.

    2005-01-01

    Yarn quality characteristics are affected by processing parameters. A 36 tex, 50/50 Basofil/cotton (B/C) blended rotor spun yarn was spun, and the spinning process was optimised for rotor speed, opening roller speed and twist factor (TF). Selected yarn characteristics were studied during the optimisation process. Yarn elongation and hairiness reduced with increasing rotor speed, while tenacity increased with increasing rotor speed. An increase in TF caused tenacity and CV of count to increase up to a peak and then decrease with further increase of TF. While TF caused an increase in yarn hairiness, elongation decreased to a minimum level and then started to increase with further increase of TF. CV of count and hairiness increased with increasing opening roller speed, but tenacity and elongation decreased with increasing opening roller speed. The optimisation process yielded optimum levels for rotor speed, opening roller speed and twist factor of 45,000 rpm, 6,500 rpm and 450, respectively. As per Uster standards, the optimum yarn showed good results for CV of count, CV of tenacity and thin places/km.

  20. Constrained optimisation of the parameters for a simple isostatic Moho model

    Science.gov (United States)

    Lane, R. J.

    2010-12-01

    In a regional-scale integrated 3D crustal mapping project for the offshore Capel-Faust region, approximately 800 km east of the Australian east coast, gravity data were being used by the Geoscience Australia Remote Eastern Frontiers team to evaluate the viability of an interpretation of the upper crustal sequence that had been derived from a network of 2D seismic lines. A preliminary gravity forward modelling calculation for this sequence using mass density values derived from limited well log and seismic velocity information indicated a long wavelength misfit between this response and the observed data. Rather than draw upon a mathematical function to account for this component of the model response (e.g., low order polynomial), a solution that would lack geological significance, I chose to first investigate whether the gravity response stemming from the density contrast across the crust-mantle boundary (i.e., the Moho) could account for this misfit. The available direct observations to build the Moho surface in the 3D geological map were extremely sparse, however. The 2D seismic data failed to provide any information on the Moho. The only constraints on the depth to this interface within the project area were from 2 seismic refraction soundings. These soundings were in the middle of a set of 11 soundings forming a profile across the Lord Howe Rise. The use of relatively high resolution bathymetry data coupled with an Airy-Heiskanen isostatic model assumption was investigated as a means of defining the form of the Moho surface. The suitability of this isostatic assumption and associated simple model were investigated through optimisation of the model parameters. The Moho depths interpreted from the seismic refraction profile were used as the observations in this exercise. The output parameters were the average depth to the Moho (Tavg), upper crust density (RHOzero), and density contrast across the lower crust and upper mantle (RHOone). The model inputs were a grid

  1. Optimisation of Lime-Soda process parameters for reduction of hardness in aqua-hatchery practices using Taguchi methods.

    Science.gov (United States)

    Yavalkar, S P; Bhole, A G; Babu, P V Vijay; Prakash, Chandra

    2012-04-01

    This paper presents the optimisation of Lime-Soda process parameters for the reduction of hardness in aqua-hatchery practices in the context of M. rosenbergii. Fresh water used in the development of fisheries needs to be of suitable quality, and the lack of desirable quality in the available fresh water is generally the limiting constraint. On the Indian subcontinent, groundwater is the only source of raw water, has varying degrees of hardness and is thus unsuitable for fresh water prawn hatchery practices (M. rosenbergii). In order to make use of hard water in the context of aqua-hatchery, the Lime-Soda process has been recommended. The efficacy of the various process parameters, such as lime, soda ash and detention time, on the reduction of hardness needs to be examined. This paper proposes to determine the parameter settings for the CIFE well water, which is quite hard, by using the Taguchi experimental design method. Taguchi orthogonal arrays, the signal-to-noise ratio and analysis of variance (ANOVA) have been applied to determine the dosages and to analyse their effect on hardness reduction. Tests carried out with the optimal levels of the Lime-Soda process parameters confirmed the efficacy of the Taguchi optimisation method. Emphasis has been placed on optimising the chemical doses required to reduce the total hardness using the Taguchi method and ANOVA, to suit the available raw water quality for aqua-hatchery practices, especially for the fresh water prawn M. rosenbergii.
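
    For reference, the Taguchi signal-to-noise ratios commonly used with such designs are, for a larger-the-better response (e.g. percentage hardness reduction) and a smaller-the-better response (e.g. residual hardness), respectively:

    S/N = -10 \log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{y_i^{2}}\right), \qquad S/N = -10 \log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n} y_i^{2}\right)

    where y_i are the n replicate observations at a given combination of factor levels; which criterion applies depends on how the response is defined in the study.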

  2. Optimisation of laser welding parameters for welding of P92 material using Taguchi based grey relational analysis

    Directory of Open Access Journals (Sweden)

    Shanmugarajan B.

    2016-08-01

    Creep strength enhanced ferritic (CSEF) steels are used in advanced power plant systems for high temperature applications. P92 (Cr–W–Mo–V) steel, classified under CSEF steels, is a candidate material for piping, tubing, etc., in ultra-supercritical and advanced ultra-supercritical boiler applications. In the present work, the laser welding process has been optimised for P92 material by using Taguchi based grey relational analysis (GRA). Bead-on-plate (BOP) trials were carried out using a 3.5 kW diffusion-cooled slab CO2 laser by varying laser power, welding speed and focal position. The optimum parameters have been derived by considering the responses depth of penetration, weld width and heat affected zone (HAZ) width. Analysis of variance (ANOVA) has been used to analyse the effect of the different parameters on the responses. Based on ANOVA, a laser power of 3 kW, welding speed of 1 m/min and focal plane at −4 mm emerged as the optimised set of parameters. The responses for the optimised parameters obtained using the GRA have been verified experimentally and found to correlate closely with the predicted values.
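
    In grey relational analysis of this kind, each normalised response is converted to a grey relational coefficient and the responses are then averaged into a single grade that is maximised; the standard (unweighted) formulation is:

    \xi_i(k) = \frac{\Delta_{\min} + \zeta\,\Delta_{\max}}{\Delta_i(k) + \zeta\,\Delta_{\max}}, \qquad \gamma_i = \frac{1}{m}\sum_{k=1}^{m}\xi_i(k)

    where \Delta_i(k) is the deviation of the normalised k-th response of trial i from the ideal sequence, \Delta_{\min} and \Delta_{\max} are the global minimum and maximum deviations, \zeta (typically 0.5) is the distinguishing coefficient and m is the number of responses; weighted averages are also common, and the weighting used by the authors is not reproduced here.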

  3. Land-surface parameter optimisation using data assimilation techniques: the adJULES system V1.0

    Science.gov (United States)

    Raoult, Nina M.; Jupp, Tim E.; Cox, Peter M.; Luke, Catherine M.

    2016-08-01

    Land-surface models (LSMs) are crucial components of the Earth system models (ESMs) that are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. JULES is also extensively used offline as a land-surface impacts tool, forced with climatologies into the future. In this study, JULES is automatically differentiated with respect to JULES parameters using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameters by calibrating against observations. This paper describes adJULES in a data assimilation framework and demonstrates its ability to improve the model-data fit using eddy-covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 85 % of the sites used in the study, at both the calibration and evaluation stages. The new improved parameters for JULES are presented along with the associated uncertainties for each parameter.

  4. Towards a more representative parametrisation of hydrologic models via synthesizing the strengths of Particle Swarm Optimisation and Robust Parameter Estimation

    Directory of Open Access Journals (Sweden)

    T. Krauße

    2012-02-01

    … the good parameters is still based on an ineffective Monte Carlo approach. Therefore we developed another approach, called ROPE with Particle Swarm Optimisation (ROPE-PSO), that substitutes the Monte Carlo approach with a more effective and efficient approach based on particle swarm optimisation. Two case studies demonstrate the improvements of the developed algorithms when compared with the first ROPE approach and two other classical optimisation approaches, calibrating a process-oriented hydrologic model with an hourly time step. The focus of both case studies is on modelling flood events in a small catchment characterised by extreme process dynamics. The calibration problem was repeated with higher dimensionality, considering the uncertainty in the soil hydraulic parameters and another conceptual parameter of the soil module. We discuss the estimated results and propose further possibilities in order to apply ROPE as a well-founded parameter estimation and uncertainty analysis tool.

  5. Artificial immune system based on adaptive clonal selection for feature selection and parameters optimisation of support vector machines

    Science.gov (United States)

    Sadat Hashemipour, Maryam; Soleimani, Seyed Ali

    2016-01-01

    The artificial immune system (AIS) algorithm based on the clonal selection method can be defined as a soft computing method, inspired by the theoretical immune system, for solving science and engineering problems. The support vector machine (SVM) is a popular pattern classification method with many diverse applications. Kernel parameter setting in the SVM training procedure, along with feature selection, significantly impacts the classification accuracy rate. In this study, an AIS based on Adaptive Clonal Selection (AISACS) algorithm has been used to optimise the SVM parameters and the feature subset selection without degrading the SVM classification accuracy. Several public datasets of the University of California Irvine machine learning (UCI) repository are employed to calculate the classification accuracy rate in order to evaluate the AISACS approach, which was then compared with a grid search algorithm and a Genetic Algorithm (GA) approach. The experimental results show that the feature reduction rate and running time of the AISACS approach are better than those of the GA approach.

  6. Optimisation of shock absorber process parameters using failure mode and effect analysis and genetic algorithm

    Science.gov (United States)

    Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal

    2013-07-01

    The various process parameters affecting the quality characteristics of the shock absorber during manufacture were identified using the Ishikawa diagram and failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water and timing) and painting process parameters (flowability, coating thickness, pointage and temperature). In this paper, the painting and washing process parameters are optimised by the Taguchi method. Though the defects are reasonably minimised by the Taguchi method, a genetic algorithm technique is then applied to the parameters obtained by the Taguchi method in order to achieve zero defects during the processes.

  7. Optimisation of culture parameters for exopolysaccharides production by the microalga Rhodella violacea.

    Science.gov (United States)

    Villay, Aurore; Laroche, Céline; Roriz, Diane; El Alaoui, Hicham; Delbac, Frederic; Michaud, Philippe

    2013-10-01

    A unicellular rhodophyte was identified, by sequencing of its 18S rRNA-encoding gene, as belonging to the species Rhodella violacea. With the objective of optimising the production of biomass and exopolysaccharide by this strain, the effects of irradiance, pH and temperature on its photosynthetic activity were investigated. Subsequently, a stoichiometric study of the well-known f/2 medium led to its supplementation with N and P to increase biomass, and hence exopolysaccharide, yields when the strain was cultivated in photobioreactors. The use of optimal culture conditions (irradiance of 420 μE/m²/s, pH of 8.3 and temperature of 24 °C) and the supplemented f/2 medium led to significant increases in biomass and exopolysaccharide production. The structural characterisation of the produced exopolysaccharide revealed that it was sulphated and mainly composed of xylose. The different culture conditions and culture media tested had no significant impact on the structure of the produced exopolysaccharides.

  8. Simultaneous feature selection and parameter optimisation using an artificial ant colony: case study of melting point prediction

    Directory of Open Access Journals (Sweden)

    Nigsch Florian

    2008-10-01

    Background: We present a novel feature selection algorithm, Winnowing Artificial Ant Colony (WAAC), that performs simultaneous feature selection and model parameter optimisation for the development of predictive quantitative structure-property relationship (QSPR) models. The WAAC algorithm is an extension of the modified ant colony algorithm of Shen et al. (J Chem Inf Model 2005, 45: 1024–1029). We test the ability of the algorithm to develop a predictive partial least squares model for the Karthikeyan dataset (J Chem Inf Model 2005, 45: 581–590) of melting point values. We also test its ability to perform feature selection on a support vector machine model for the same dataset. Results: Starting from an initial set of 203 descriptors, the WAAC algorithm selected a PLS model with 68 descriptors which has an RMSE on an external test set of 46.6°C and an R2 of 0.51. The number of components chosen for the model was 49, which was close to optimal for this feature selection. The selected SVM model has 28 descriptors (cost of 5, ε of 0.21) and an RMSE of 45.1°C and R2 of 0.54. This model outperforms a kNN model (RMSE of 48.3°C, R2 of 0.47) for the same data and has similar performance to a Random Forest model (RMSE of 44.5°C, R2 of 0.55). However, it is much less prone to bias at the extremes of the range of melting points, as shown by the slope of the line through the residuals: -0.43 for WAAC/SVM versus -0.53 for Random Forest. Conclusion: With a careful choice of objective function, the WAAC algorithm can be used to optimise machine learning and regression models that suffer from overfitting. Where model parameters also need to be tuned, as is the case with support vector machine and partial least squares models, it can optimise these simultaneously. The moving probabilities used by the algorithm are easily interpreted in terms of the best and current models of the ants, and the winnowing procedure promotes the removal of irrelevant descriptors.

  9. Towards a more representative parametrisation of hydrological models via synthesizing the strengths of particle swarm optimisation and robust parameter estimation

    Directory of Open Access Journals (Sweden)

    T. Krauße

    2011-03-01

    The development of methods for estimating the parameters of hydrological models considering uncertainties has been of high interest in hydrological research over the last years. In particular, methods which understand the estimation of hydrological model parameters as a geometric search for a set of robustly performing parameter vectors by application of the concept of data depth have found growing research interest. Bárdossy and Singh (2008) presented a first proposal and applied it to the calibration of a conceptual rainfall-runoff model with a daily time step. Krauße and Cullmann (2011) further developed this method and applied it in a case study to calibrate a process-oriented hydrological model with an hourly time step, focussing on flood events in a fast-responding catchment. The results of both studies showed the potential of applying the principle of data depth. However, the weak point of the presented approach also became obvious. The algorithm identifies a set of model parameter vectors with high model performance and subsequently generates a set of parameter vectors with high data depth with respect to the first set. These two steps are repeated iteratively until a stopping criterion is met. In the first step, the estimation of the good parameter vectors is based on the Monte Carlo method. The major shortcoming of this method is that it depends strongly on a number of samples that grows exponentially with the dimensionality of the problem. In this paper we present another robust parameter estimation strategy which applies an approved search strategy for high-dimensional parameter spaces, particle swarm optimisation, in order to identify a set of good parameter vectors within given uncertainty bounds. The generation of deep parameters follows Krauße and Cullmann (2011). The method was compared to the Monte Carlo based robust parameter estimation algorithm on the example of a case study in Krauße and Cullmann (2011) to

  10. Determination of kinetic parameters for complex transesterification reaction by standard optimisation methods

    Directory of Open Access Journals (Sweden)

    Almagrbi Abdualnaser Muftah

    2014-01-01

    This article presents a methodology for kinetic parameter estimation based on standard optimisation methods. The parameter estimation procedure is applied to the example of modelling a non-catalytic transesterification reaction, based on laboratory experiments performed under elevated pressure. The kinetic model employed in this study consists of three consecutive and parallel reversible reactions of second order, with six kinetic constants. The influence of mass transfer effects was considered as well. The best results were obtained by the Genetic Algorithm method. The application of this method resulted in kinetic parameters with improved accuracy in predicting the concentrations of important reaction intermediates, i.e. diglycerides and monoglycerides. The activation energies of the kinetic parameters obtained by the Genetic Algorithm method are in very good agreement with theoretical values determined by molecular orbital calculations. [Project of the Ministry of Science of the Republic of Serbia, No. III-45019]
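
    The three consecutive, reversible second-order steps with six rate constants referred to above are conventionally written as TG + A ⇌ DG + E, DG + A ⇌ MG + E and MG + A ⇌ GL + E (TG, DG, MG: tri-, di- and monoglycerides; A: alcohol; E: ester; GL: glycerol), giving rate equations of the form (a standard scheme, shown here without the mass-transfer terms the authors also considered):

    \frac{d[\mathrm{TG}]}{dt} = -k_1[\mathrm{TG}][\mathrm{A}] + k_2[\mathrm{DG}][\mathrm{E}]

    \frac{d[\mathrm{DG}]}{dt} = k_1[\mathrm{TG}][\mathrm{A}] - k_2[\mathrm{DG}][\mathrm{E}] - k_3[\mathrm{DG}][\mathrm{A}] + k_4[\mathrm{MG}][\mathrm{E}]

    \frac{d[\mathrm{MG}]}{dt} = k_3[\mathrm{DG}][\mathrm{A}] - k_4[\mathrm{MG}][\mathrm{E}] - k_5[\mathrm{MG}][\mathrm{A}] + k_6[\mathrm{GL}][\mathrm{E}]

    with the glycerol, alcohol and ester balances following from stoichiometry, and temperature dependence through the Arrhenius law k_j = A_j \exp(-E_{a,j}/RT), from which the reported activation energies are obtained.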

  11. Optimal optimisation in chemometrics

    NARCIS (Netherlands)

    Hageman, Joseph Albert

    2004-01-01

    The use of global optimisation methods is not straightforward, especially for the more difficult optimisation problems. Solutions have to be found for items such as the evaluation function, representation, step function and meta-parameters, before any useful results can be obtained. This thesis aims

  12. Influence of regenerator matrix and working fluid on optimisation of design parameters of Stirling cryocoolers

    Science.gov (United States)

    Atrey, M. D.; Bapat, S. L.; Narayankhedkar, K. G.

    The performance of a Stirling cryocooler is governed by its principal design parameters. The optimum combination of these design parameters gives the maximum refrigeration effect for the minimum required effort. The performance of the cryocooler depends significantly on the functioning of the regenerator and on the working fluid. The mesh size of the regenerator affects dead space, pressure drop, regenerator effectiveness, etc. The working fluids differ in their thermal properties and therefore affect the performance significantly. The present paper aims to study the influence of the regenerator matrix and working fluids on these design parameters. The matrix material considered is phosphor bronze, while the working fluids considered are helium and hydrogen.

  13. Cross-entropy optimisation of importance sampling parameters for statistical model checking

    CERN Document Server

    Jégourel, Cyrille; Sedwards, Sean

    2012-01-01

    Statistical model checking avoids the exponential growth of states associated with probabilistic model checking by estimating properties from multiple executions of a system and by giving results within confidence bounds. Rare properties are often very important but pose a particular challenge for simulation-based approaches, hence a key objective under these circumstances is to reduce the number and length of simulations necessary to produce a given level of confidence. Importance sampling is a well-established technique that achieves this, however to maintain the advantages of statistical model checking it is necessary to find good importance sampling distributions without considering the entire state space. Motivated by the above, we present a simple algorithm that uses the notion of cross-entropy to find the optimal parameters for an importance sampling distribution. In contrast to previous work, our algorithm uses a low dimensional vector of parameters to define this distribution and thus avoids the ofte...
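
    The cross-entropy step referred to above can be summarised as follows (a standard statement of the method; the low-dimensional parameterisation used by the authors is not reproduced). Given a nominal density f(·; u), a rare event A and a current sampling parameter w, the updated importance sampling parameter is chosen as

    \lambda^{*} = \arg\max_{\lambda}\; \mathbb{E}_{w}\!\left[\mathbf{1}_{\{X \in A\}}\, \frac{f(X; u)}{f(X; w)}\, \log f(X; \lambda)\right]

    with the expectation replaced by its sample average over simulation runs, so that the importance sampling distribution is iteratively tilted towards the rare behaviour while the likelihood ratios keep the estimates unbiased.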

  14. An optimised method for calculating the O+-O collision parameter from aeronomical measurements

    Directory of Open Access Journals (Sweden)

    A. Aruliah

    A study has been made of the interaction between the thermosphere and the ionosphere at high latitudes, with particular regard to the value of the O+-O collision parameter. The European incoherent scatter radar (EISCAT) was used to make tristatic measurements of plasma parameters at F-region altitudes while simultaneous measurements of the neutral wind were made by a Fabry-Perot interferometer (FPI). The radar data were used to derive the meridional neutral winds in a way similar to that used by previous authors. The accuracy of this technique at high latitudes is reduced by the dynamic nature of the auroral ionosphere and the presence of significant vertical winds. The derived winds were compared with the meridional winds measured by the FPI. For each night, the value of the O+-O collision parameter which produced the best agreement between the two data sets was found. The precision of the collision frequency found in this way depends on the accuracy of the data. The statistical method was critically examined in an attempt to account for the variability in the data sets. This study revealed that systematic errors in the data, if unaccounted for by the analysis, have a tendency to increase the value of the derived collision frequency. Previous analyses did not weight each data set in order to account for the quality of the data; an improved method of analysis is suggested.

  15. Parameter optimisation of real-time control strategies for urban wastewater systems.

    Science.gov (United States)

    Schütze, M; Butler, D; Beck, M B

    2001-01-01

    Real-time control (RTC) of wastewater systems has been a topic of research and application for over two decades. Attempts so far have mainly focused on one part of the urban wastewater system: either the sewer system, the treatment plant or the river. Approaches that integrate these subsystems and consider them jointly for control purposes have been pursued only recently. Control of the system aims at pursuing one (or several concomitant) objectives, which are expressed, for example, in terms of overflow volumes, loads, effluent concentrations, receiving water quality or monetary costs, to name just a few. This paper provides a general and formal definition of the problem of defining a real-time control algorithm for a given urban wastewater system. A general mathematical optimisation problem is formulated which describes the task of finding an (in some sense) optimum control algorithm. Since this optimisation problem is, in the general case, highly non-linear, with only limited information available about the objective function itself, optimisation methods appropriate for this type of problem are identified. Here, the similarity between the problem of finding a control algorithm and the parameter estimation problem common in mathematical modelling becomes apparent. Hence, methods used in (and problems encountered in) parameter estimation can be transferred to the problem of determining optimum RTC algorithms. This parallelism is outlined in the paper. As an application of the parameterisation and optimisation of control strategies, integrated control of an urban wastewater system is discussed. Since the analysis of integrated control as just described poses certain requirements on a simulation engine, a novel modelling tool, called SYNOPSIS, is utilised here. This simulation tool, comprising modules that simulate water quantity and quality processes in all parts of the urban wastewater system, is embedded into a suite of optimisation procedures. An integrated RTC

  16. Effect of diet-induced obesity on kinetic parameters of amino acid uptake by rat erythrocytes.

    Science.gov (United States)

    Picó, C; Pons, A; Palou, A

    1992-11-01

    The effects of cafeteria diet-induced obesity on the in vitro uptake of L-Alanine, Glycine, L-Lysine, L-Glutamine, L-Glutamic acid, L-Phenylalanine and L-Leucine by isolated rat erythrocytes have been studied. The total Phe and Leu uptakes followed Michaelis-Menten kinetics, while Glu uptake was fitted to diffusion kinetics. The uptakes of Ala, Gly, Lys and Gln were best explained by a two-component transport: one saturable component and one diffusion component. Obesity increased the Km value for Ala, Gln and Leu, and the Vmax value for Ala, but decreased the Vmax for Lys. The kinetic parameters of Phe uptake were unaffected by obesity. In addition, the pseudo-first-order rate constant (Vmax/Km) for Ala, Gly, Gln, Lys and Leu uptake decreased as a result of cafeteria diet-induced obesity. The Kd value for Ala, Gly, Gln and Glu decreased and that for Lys increased as a result of obesity. These adaptations could, at least in part, explain alterations in amino acid distribution between blood cells and plasma related to overfeeding or obesity.
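
    The two-component transport referred to above corresponds to an uptake rate of the form (standard Michaelis-Menten kinetics plus a first-order diffusion term):

    v = \frac{V_{\max}\,[S]}{K_m + [S]} + K_d\,[S]

    so that Glu uptake reduces to the purely diffusive case v = K_d [S], and the pseudo-first-order rate constant of the saturable component at low substrate concentration is V_max / K_m, the quantity reported to decrease with diet-induced obesity.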

  17. Optimisation of glaciological parameters for ice core chronology by implementing counted layers between identified depth levels

    Science.gov (United States)

    Bazin, L.; Lemieux-Dudon, B.; Landais, A.; Guillevic, M.; Kindler, P.; Parrenin, F.; Martinerie, P.

    2014-08-01

    A recent coherent chronology has been built for four Antarctic ice cores and the NorthGRIP (NGRIP) Greenland ice core (Antarctic Ice Core Chronology 2012, AICC2012) using a Bayesian approach for ice core dating (Datice). When building the AICC2012 chronology, and in order to prevent any confusion with the official ice core chronologies, it was imposed that the AICC2012 chronology for NGRIP should respect exactly the GICC05 chronology based on layer counting. However, such a strong tuning did not satisfy the hypothesis of independence of background parameters and observations for the NGRIP core, as required by Datice. We present here the implementation in Datice of a new type of marker that is better suited to constraints deduced from layer counting: markers of age difference. Using this type of marker for NGRIP in a five-core dating exercise with Datice, we have performed several sensitivity tests and show that the new ice core chronologies obtained with these new markers do not differ by more than 400 years from AICC2012 for the Antarctic ice cores and by more than 130 years from GICC05 for NGRIP over the last 60 000 years. With this new parameterisation, the accumulation rate and lock-in depth associated with NGRIP are more coherent with independent estimates than those obtained in AICC2012. While these new chronologies should not yet be used as new ice core chronologies, the improved methodology presented here should be considered in the next coherent ice core dating exercise.

  18. Optimising Drug Solubilisation in Amorphous Polymer Dispersions: Rational Selection of Hot-melt Extrusion Processing Parameters.

    Science.gov (United States)

    Li, Shu; Tian, Yiwei; Jones, David S; Andrews, Gavin P

    2016-02-01

    The aim of this article was to construct a T-ϕ phase diagram for a model drug (FD) and an amorphous polymer (Eudragit® EPO) and to use this information to understand how the temperature-composition coordinates influenced the final properties of the extrudate. Defining process boundaries and understanding drug solubility in polymeric carriers is of utmost importance and will help in the successful manufacture of new delivery platforms for BCS class II drugs. Physically mixed felodipine (FD)-Eudragit® EPO (EPO) binary mixtures with pre-determined weight fractions were analysed using DSC to measure the endset of melting and the glass transition temperature. Extrudates of 10 wt% FD-EPO were processed using temperatures (110°C, 126°C, 140°C and 150°C) selected from the temperature-composition (T-ϕ) phase diagram and processing screw speeds of 20, 100 and 200 rpm. Extrudates were characterised using powder X-ray diffraction (PXRD) and optical, polarised light and Raman microscopy. To ensure the formation of a binary amorphous drug dispersion (ADD) at a specific composition, HME processing temperatures should at least be equal to, or exceed, the corresponding temperature value on the liquid-solid curve of the F-H T-ϕ phase diagram. If extruded between the spinodal and the liquid-solid curve, the lack of thermodynamic driving force to attain complete drug amorphisation may be compensated for through the use of an increased screw speed. Constructing F-H T-ϕ phase diagrams is valuable not only for understanding drug-polymer miscibility behaviour but also for rationalising the selection of important processing parameters for HME to ensure miscibility of drug and polymer.
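
    The liquid-solid (drug solubility) curve of such a Flory-Huggins T-ϕ diagram is commonly obtained from melting-point depression data via an expression of the following form (a widely used formulation; the exact equations applied by the authors may differ in detail):

    \frac{1}{T_m^{\mathrm{mix}}} - \frac{1}{T_m^{\mathrm{pure}}} = -\frac{R}{\Delta H_{\mathrm{fus}}}\left[\ln \phi_d + \left(1 - \frac{1}{m}\right)(1-\phi_d) + \chi\,(1-\phi_d)^2\right]

    where \phi_d is the drug volume fraction, m the ratio of polymer to drug molar volumes, \Delta H_fus the heat of fusion of the crystalline drug and \chi the Flory-Huggins interaction parameter; the spinodal curve follows from setting the second derivative of the free energy of mixing with respect to composition to zero.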

  19. Determination of kinetic parameters for 123-I thyroid uptake in healthy Japanese

    Science.gov (United States)

    Kusuhara, Hiroyuki; Maeda, Kazuya

    2017-09-01

    The purpose of this study was to compare the kinetic parameters of thyroid iodide accumulation in Japanese subjects today with previously reported values. We determined the thyroid uptake of 123-I at 24 hours after oral administration in healthy male Japanese subjects without any dietary restriction. The mean value was 16.1±5.4%, which was similar to or somewhat lower than values previously reported in Japan (1958-1972). A kinetic model analysis was conducted to obtain the clearance for thyroid uptake from the blood circulation. The thyroid uptake clearance of 123-I was 0.540±0.073 ml/min, which was similar to previously reported values. There has been no obvious change in the 24-hour thyroid uptake or the kinetic parameters in healthy Japanese subjects over these 50 years. The fraction distributed to the thyroid gland is lower than that of the ICRP reference man, and such differences must be taken into consideration when estimating radiation exposure from the Fukushima accident in Japan.

  20. Mass-based hygroscopicity parameter interaction model and measurement of atmospheric aerosol water uptake

    Directory of Open Access Journals (Sweden)

    E. Mikhailov

    2013-01-01

    In this study we derive and apply a mass-based hygroscopicity parameter interaction model for efficient description of concentration-dependent water uptake by atmospheric aerosol particles with complex chemical composition. The model approach builds on the single hygroscopicity parameter model of Petters and Kreidenweis (2007). We introduce an observable mass-based hygroscopicity parameter κm which can be deconvoluted into a dilute hygroscopicity parameter κm0 and additional self- and cross-interaction parameters describing non-ideal solution behavior and concentration dependencies of single- and multi-component systems.
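
    The single-parameter framework that KIM builds on can be sketched numerically as below. The volume-based growth-factor relation follows the Petters and Kreidenweis (2007) model cited above; the mass-based form and the example numbers are illustrative assumptions, not the KIM parameterisation itself.

```python
import numpy as np

def growth_factor(a_w, kappa):
    """Volume-based single-hygroscopicity-parameter relation (Petters & Kreidenweis, 2007):
    g^3 = 1 + kappa * a_w / (1 - a_w), neglecting the Kelvin term."""
    return (1.0 + kappa * a_w / (1.0 - a_w)) ** (1.0 / 3.0)

def kappa_m_from_water_uptake(m_water, m_dry, a_w):
    """Assumed mass-based analogue: 1/a_w = 1 + kappa_m * (m_dry / m_water),
    i.e. kappa_m inferred from a measured water-to-dry-mass ratio at water activity a_w."""
    return (1.0 / a_w - 1.0) * (m_water / m_dry)

# Example: ammonium-sulfate-like particle, kappa ~ 0.5 (typical literature value)
for rh in (0.80, 0.90, 0.95):
    print(f"water activity {rh:.2f}: diameter growth factor ~ {growth_factor(rh, 0.5):.2f}")

# Example: 1.2 ug of water taken up per 1.0 ug of dry material at a_w = 0.90
print(f"kappa_m ~ {kappa_m_from_water_uptake(1.2, 1.0, 0.90):.3f}")
```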

    For reference aerosol samples of sodium chloride and ammonium sulfate, the κm-interaction model (KIM) captures the experimentally observed concentration and humidity dependence of the hygroscopicity parameter and is in good agreement with an accurate reference model based on the Pitzer ion-interaction approach (Aerosol Inorganic Model, AIM). Experimental results for pure organic particles (malonic acid, levoglucosan) and for mixed organic-inorganic particles (malonic acid – ammonium sulfate) are also well reproduced by KIM, taking into account apparent or equilibrium solubilities for stepwise or gradual deliquescence and efflorescence transitions.

    The mixed organic-inorganic particles as well as atmospheric aerosol samples exhibit three distinctly different regimes of hygroscopicity: (I) a quasi-eutonic deliquescence & efflorescence regime at low humidity where substances are just partly dissolved and exist also in a non-dissolved phase; (II) a gradual deliquescence & efflorescence regime at intermediate humidity where different solutes undergo gradual dissolution or solidification in the aqueous phase; and (III) a dilute regime at high humidity where the solutes are fully dissolved, approaching their dilute hygroscopicity.

    For atmospheric aerosol samples

  1. Optimisation of GTAW parameters for the tensile strength of AISI 304 stainless steel welds using the Taguchi method

    Energy Technology Data Exchange (ETDEWEB)

    Ertan, Rukiye

    2012-07-01

    The influence of welding parameters, i.e. the welding current, root gap and shielding gas flow rate, on the tensile strength of AISI 304 austenitic stainless steel welded by gas tungsten arc welding was investigated. To determine the optimum levels of the parameters with respect to increased tensile strength, the Taguchi approach was used. The results were analysed using analysis of variance to establish which welding parameters significantly affect the response. Mathematical models were developed to describe the influence of the selected parameters on the tensile strength. The results were confirmed by experiments.
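
    As an illustration of the analysis step, a minimal Taguchi-style evaluation over an L9 orthogonal array is sketched below; the factor levels and tensile-strength values are hypothetical placeholders, not the experimental data of the study.

```python
import numpy as np

# L9 orthogonal array for three factors (welding current, root gap, gas flow rate) at three levels.
L9 = np.array([
    [1, 1, 1], [1, 2, 2], [1, 3, 3],
    [2, 1, 2], [2, 2, 3], [2, 3, 1],
    [3, 1, 3], [3, 2, 1], [3, 3, 2],
])

# Hypothetical tensile strengths (MPa) for the nine runs, two repetitions each.
y = np.array([
    [610, 615], [640, 638], [655, 650],
    [662, 660], [648, 652], [630, 635],
    [670, 668], [645, 642], [625, 628],
], dtype=float)

# Larger-the-better signal-to-noise ratio: S/N = -10 * log10( mean(1 / y^2) )
sn = -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))

factors = ["welding current", "root gap", "gas flow rate"]
for j, name in enumerate(factors):
    means = [sn[L9[:, j] == level].mean() for level in (1, 2, 3)]
    best = int(np.argmax(means)) + 1
    print(f"{name}: mean S/N per level = {np.round(means, 2)}, best level = {best}")
```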

  2. Researches regarding the reducing of burr size by optimising the cutting parameters on a CNC milling machine

    Directory of Open Access Journals (Sweden)

    Biriş Cristina

    2017-01-01

    This paper presents experimental research on reducing the dimensions of the burrs that appear after the milling process, together with an approach to reduce or eliminate them. In order to reduce burr dimensions, the milling process was executed with different cutting parameters and strategies, and the results were then evaluated.

  3. Enhancing Uranium Uptake by Amidoxime Adsorbent in Seawater: An investigation for optimum alkaline conditioning parameters

    Energy Technology Data Exchange (ETDEWEB)

    Das, S.; Tsouris, Constantinos; Zhang, C.; Kim, J.; Brown, S.; Oyola, Yatsandra; Janke, C.; Mayes, R. T.; Kuo, Li-Jung; Wood, Jordana R.; Gill, Gary A.; Dai, Sheng

    2016-04-20

    A high-surface-area polyethylene-fiber adsorbent (AF160-2) has been developed at the Oak Ridge National Laboratory (ORNL) by radiation-induced graft polymerization of acrylonitrile and itaconic acid. The grafted nitriles were converted to amidoxime groups by treating with hydroxylamine. The amidoximated adsorbents were then conditioned with potassium hydroxide (KOH) by varying different reaction parameters such as KOH concentration (0.2, 0.44, and 0.6 M), duration (1, 2, and 3 h), and temperature (60, 70, and 80 ºC). Adsorbent screening was then performed with simulated seawater solutions containing sodium chloride and sodium bicarbonate, at concentrations found in seawater, and uranium nitrate at a uranium concentration of ~ 7-8 ppm and pH 8. FTIR and solid state NMR indicated that a fraction of amidoxime groups was hydrolyzed to carboxylate during KOH conditioning. The uranium adsorption capacity in the simulated seawater screening solution gradually increased with conditioning time and temperature for all KOH concentrations. It was also observed that the adsorption capacity increased with an increase in concentration of KOH for all the conditioning times and temperatures. AF160-2 adsorbent samples were also tested with natural seawater using flow-through experiments to determine uranium adsorption capacity with varying KOH conditioning time and temperature. Based on uranium loading capacity values of several AF160-2 samples, it was observed that changing the KOH conditioning time from 3 to 1 h at 60, 70, and 80 ºC resulted in an increase in the uranium loading capacity in seawater, which did not follow the trend found in laboratory screening with simulated solutions. Longer KOH conditioning times led to significantly higher uptake of divalent metal ions, such as calcium and magnesium, which is a result of amidoxime conversion into less selective carboxylate. Scanning electron microscopy showed that long conditioning times may also lead to adsorbent degradation.

  4. Applying genetic algorithms in a parallel computing environment for optimising parameters of complex cellular automata models: the case of SCIDDICA S3hex

    Science.gov (United States)

    D'Ambrosio, D.; Iovine, G.

    2003-04-01

    Cellular Automata (CA) offer a valid alternative to the classic approach based on partial differential equations for simulating complex phenomena, when the latter can be described in terms of local interactions among their constituent parts. SCIDDICA S3hex is a two-dimensional hexagonal CA model developed for simulating debris flows: it has recently been applied to several real cases of landslides that occurred in Campania (Southern Italy). The release S3hex has been derived by progressively improving an initial simplified CA model, originally developed for simulating simple cases of flow-type landslides. The model requires information related to topography, thickness of the erodible regolith overlying the bedrock, and the location and extension of landslide sources. Performance depends on a set of global parameters which are utilised in the transition function of the model: their values affect the elementary processes of the transition function and thus the overall results. A fine calibration is therefore an essential phase in order to evaluate the reliability of the model for successive applications to debris-flow susceptibility zonation. The complexity of both the model and the phenomena to be simulated suggested employing an automated technique for determining the best set of global parameters. Genetic Algorithms (GA) are a powerful optimisation tool inspired by natural selection. In the last decades, in spite of their intrinsic simplicity, they have been successfully applied to a wide range of highly complex problems. The calibration of the model could therefore be performed through such an optimisation technique, by considering several real cases of study. Owing to the large number of simulations generally needed for performing GA experiments on complex phenomena, which imply long-lasting tests on sequential computational architectures, the adoption of a parallel computational environment seemed appropriate: the original source code
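
    A minimal sketch of such a GA-based calibration loop is given below. The number of global parameters, their bounds and the fitness function are placeholders: in a real calibration the fitness would be computed by running SCIDDICA S3hex and comparing the simulated with the mapped landslide area.

```python
import random

N_PARAMS = 5          # number of global parameters of the transition function (hypothetical)
BOUNDS = [(0.0, 1.0)] * N_PARAMS
POP_SIZE, GENERATIONS, MUT_RATE = 30, 50, 0.1

def fitness(params):
    """Placeholder: a real implementation would run the CA simulation and return,
    e.g., the overlap between simulated and observed landslide areas."""
    return -sum((p - 0.5) ** 2 for p in params)

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    cut = random.randrange(1, N_PARAMS)
    return a[:cut] + b[cut:]

def mutate(ind):
    return [random.uniform(lo, hi) if random.random() < MUT_RATE else p
            for p, (lo, hi) in zip(ind, BOUNDS)]

population = [random_individual() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[: POP_SIZE // 2]                      # truncation selection
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP_SIZE - len(elite))]
    population = elite + children

best = max(population, key=fitness)
print("best parameter set:", [round(p, 3) for p in best])
```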

  5. Using the Box-Behnken experimental design to optimise operating parameters in pulsed spray fluidised bed granulation.

    Science.gov (United States)

    Liu, Huolong; Wang, Ke; Schlindwein, Walkiria; Li, Mingzhong

    2013-05-20

    In this work, the influence of pulsed frequency, binder spray rate and atomisation pressure on a top-spray fluidised bed granulation process was studied using the Box-Behnken experimental design method. Different mathematical models were developed to predict the mean granule size, yield, relative width of the granule size distribution, Hausner ratio and final granule moisture content. The study supports the theory that granule size can be controlled through liquid feed pulsing. However, care has to be taken when the pulsed frequency is chosen for controlling the granule size, owing to the nonlinear quadratic relation in the regression model. The design space of the operating parameter ranges has been determined based on constraints on mean granule size and granule yield. The high degree of prediction obtained in validation experiments demonstrates the reliability and effectiveness of the Box-Behnken experimental design method for studying a fluidised bed granulation process.
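
    A minimal sketch of the design and model-fitting step is given below; the three coded factors stand for pulsed frequency (f), binder spray rate (r) and atomisation pressure (p), and the response values are hypothetical, not the measured granule sizes. The three-factor Box-Behnken layout itself (12 edge mid-points plus centre runs) is standard.

```python
import numpy as np

# Three-factor Box-Behnken design in coded units (-1, 0, +1):
# all +/-1 combinations of two factors with the third at its centre, plus three centre points.
edge = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
design = ([(a, b, 0) for a, b in edge] +
          [(a, 0, c) for a, c in edge] +
          [(0, b, c) for b, c in edge] +
          [(0, 0, 0)] * 3)
X_coded = np.array(design, dtype=float)

# Hypothetical responses, e.g. mean granule size (um), one per run.
y = np.array([310, 335, 290, 320, 305, 340, 280, 330,
              300, 345, 285, 325, 315, 312, 318], dtype=float)

# Full quadratic model: intercept, linear, two-factor interaction and squared terms.
f, r, p = X_coded.T
X_model = np.column_stack([np.ones(len(y)), f, r, p, f*r, f*p, r*p, f**2, r**2, p**2])
coef, *_ = np.linalg.lstsq(X_model, y, rcond=None)

names = ["1", "f", "r", "p", "f*r", "f*p", "r*p", "f^2", "r^2", "p^2"]
for n, v in zip(names, coef):
    print(f"{n:>4}: {v:7.2f}")
```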

  6. Comparison of FDG Uptake with Pathological Parameters in the Well-differentiated Thyroid Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Woo Hee; Chung, Yong An; Kim, Ki Jun; Park, Chang Suk; Jung, Hyun Suk; Sohn, Hyung Sun; Chung, Soo Kyo; Yoo, Chang Young [College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of)

    2009-02-15

    Differentiated thyroid cancer (DTC) has a variable degree of F-18 FDG avidity. The purpose of this study was to evaluate the relationship between F-18 FDG uptake and pathological or immunohistochemical features of DTC. DTC patients who underwent both a pre-operative F-18 FDG PET/CT scan and surgery were included in the study. Maximum standardized uptake values (SUVmax) of the primary tumor were calculated. If the primary tumor showed no perceptibly increased F-18 FDG uptake, the region of interest was drawn based on the findings of the CT portion of the PET/CT images. Pathological and immunohistochemical markers such as presence of lymph node (LN) metastasis and underlying thyroiditis, tumor size, Ki-67 labeling index, and expressions of EGFR, COX-2, and Galectin-3 were evaluated. A total of 106 patients were included (102 papillary carcinomas, 4 follicular carcinomas). The mean SUVmax of the large tumors (above 1 cm) was significantly higher than the mean SUVmax of small (equal to or less than 1 cm) ones (7.8±8.5 vs. 3.6±3.1, p=0.004). No significant difference in F-18 FDG uptake was found according to the presence or absence of LN metastasis and underlying thyroiditis, or the degree of Ki-67 labeling index, expression of EGFR, COX-2 and Galectin-3. In conclusion, the degree of F-18 FDG uptake in DTC was associated with the size of the primary tumor, but there seems to be no relationship between F-18 FDG uptake of DTC and expression of Ki-67, EGFR, COX-2 and Galectin-3.

  7. How do alternative root water uptake models affect the inverse estimation of soil hydraulic parameters and the prediction of evapotranspiration?

    Science.gov (United States)

    Gayler, Sebastian; Salima-Sultana, Daisy; Selle, Benny; Ingwersen, Joachim; Wizemann, Hans-Dieter; Högy, Petra; Streck, Thilo

    2016-04-01

    Soil water extraction by roots affects the dynamics and distribution of soil moisture and controls transpiration, which influences soil-vegetation-atmosphere feedback processes. Consequently, root water uptake requires close attention when predicting water fluxes across the land surface, e.g., in agricultural crop models or in land surface schemes of weather and climate models. The key parameters for a successful simultaneous simulation of soil moisture dynamics and evapotranspiration in Richards equation-based models are the soil hydraulic parameters, which describe the shapes of the soil water retention curve and the soil hydraulic conductivity curve. As measurements of these parameters are expensive and their estimation from basic soil data via pedotransfer functions is rather inaccurate, the values of the soil hydraulic parameters are frequently inversely estimated by fitting the model to measured time series of soil water content and evapotranspiration. It is common to simulate root water uptake and transpiration by simple stress functions, which describe from which soil layer water is absorbed by roots and predict when total crop transpiration is decreased in case of soil water limitations. As for most of the biogeophysical processes simulated in crop and land surface models, there exist several alternative functional relationships for simulating root water uptake and there is no clear reason for preferring one process representation over another. The error associated with alternative representations of root water uptake, however, contributes to structural model uncertainty and the choice of the root water uptake model may have a significant impact on the values of the soil hydraulic parameters estimated inversely. In this study, we use the agroecosystem model system Expert-N to simulate soil moisture dynamics and evapotranspiration at three agricultural field sites located in two contrasting regions in Southwest Germany (Kraichgau, Swabian Alb). The Richards
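
    As an illustration of the kind of stress function referred to above, the sketch below implements one common choice, a piecewise-linear Feddes-type reduction factor, with illustrative threshold values; it is not necessarily the formulation used in Expert-N or in the study.

```python
def feddes_alpha(h, h1=-0.1, h2=-0.25, h3=-4.0, h4=-80.0):
    """Piecewise-linear Feddes-type water stress reduction factor alpha(h) in [0, 1].
    h is the soil water pressure head (m, negative under suction); the thresholds
    h1 > h2 > h3 > h4 are illustrative values only."""
    if h >= h1 or h <= h4:
        return 0.0                               # anaerobic (too wet) or below wilting point
    if h1 > h >= h2:
        return (h1 - h) / (h1 - h2)              # ramp up near saturation
    if h2 > h >= h3:
        return 1.0                               # unstressed uptake
    return (h - h4) / (h3 - h4)                  # ramp down towards the wilting point

def actual_uptake(potential_transpiration, root_fraction, h_layers):
    """Distribute potential transpiration over soil layers weighted by the root fraction,
    reduced layer-wise by the stress factor (a common, simple closure)."""
    return [potential_transpiration * f * feddes_alpha(h)
            for f, h in zip(root_fraction, h_layers)]

print(actual_uptake(4.0, [0.5, 0.3, 0.2], [-0.3, -2.0, -20.0]))  # mm/day per layer
```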

  8. Robust optimisation of railway crossing geometry

    Science.gov (United States)

    Wan, Chang; Markine, Valeri; Dollevoet, Rolf

    2016-05-01

    This paper presents a methodology for improving the crossing (frog) geometry through the robust optimisation approach, wherein the variability of the design parameters within a prescribed tolerance is included in the optimisation problem. Here, the crossing geometry is defined by parameterising the B-spline-represented cross-sectional shape and the longitudinal height profile of the nose rail. The dynamic performance of the crossing is evaluated considering the variation of wheel profiles and track alignment. A multipoint approximation method (MAM) is applied to solve the optimisation problem of minimising the contact pressure during wheel-rail contact while constraining the location of the wheel transition at the crossing. To clarify the difference between the robust optimisation and the normal deterministic optimisation approaches, the optimisation problems are solved with both approaches. The results show that the deterministic optimum fails under slight changes of the design variables; the robust optimum, however, shows improved and robust performance.
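
    The difference between the two approaches can be illustrated with a toy one-dimensional problem in which the robust formulation optimises the worst case of the objective over the design tolerance rather than its nominal value. The objective function below is purely illustrative and only stands in for the crossing performance measure.

```python
import numpy as np

def contact_pressure(x):
    """Toy stand-in objective: a sharp global minimum near x = 0.3 and a broad,
    slightly higher minimum near x = 0.7."""
    return 1.0 - 0.8 * np.exp(-((x - 0.3) / 0.02) ** 2) - 0.5 * np.exp(-((x - 0.7) / 0.15) ** 2)

xs = np.linspace(0.0, 1.0, 2001)
tol = 0.05                                   # prescribed tolerance on the design variable
det_x = xs[np.argmin(contact_pressure(xs))]  # deterministic optimum (nominal value only)

def worst_case(x):
    dx = np.linspace(-tol, tol, 21)
    return contact_pressure(x + dx).max()    # worst objective within the tolerance band

rob_x = min(xs, key=worst_case)              # robust optimum
print(f"deterministic optimum x = {det_x:.3f}, worst case there = {worst_case(det_x):.3f}")
print(f"robust optimum        x = {rob_x:.3f}, worst case there = {worst_case(rob_x):.3f}")
```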

  9. Quantification of FDG PET studies using standardised uptake values in multi-centre trials : effects of image reconstruction, resolution and ROI definition parameters

    NARCIS (Netherlands)

    Westerterp, Marinke; Pruim, Jan; Oyen, Wim; Hoekstra, Otto; Paans, Anne; Visser, Eric; van Lanschot, Jan; Sloof, Gerrit; Boellaard, Ronald

    2007-01-01

    Purpose: Standardised uptake values (SUVs) depend on acquisition, reconstruction and region of interest (ROI) parameters. SUV quantification in multicentre trials therefore requires standardisation of acquisition and analysis protocols. However, standardisation is difficult owing to the use of diffe

  10. Systematic delay-driven power optimisation and power-driven delay optimisation of combinational circuits

    OpenAIRE

    Mehrotra, Rashmi

    2013-01-01

    With the proliferation of mobile wireless communication and embedded systems, energy efficiency becomes a major design constraint. The dissipated energy is often referred to as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry standard design flows integrate systematic methods of optimising either area or timing, while for power consumption optimisation o...

  11. Age dependency of oxygen uptake and related parameters in exercise testing: an expert opinion on reference values suitable for adults.

    Science.gov (United States)

    Schneider, Joachim

    2013-10-01

    Spiroergometry has been established to determine physical capacity. Reference values collected mostly in a younger population can be obtained from a number of studies and therefore may differ. Regression equations are complex and cannot be transferred easily to clinical practice. Our aim was to obtain reference values for spiroergometric parameters in cardiopulmonary exercise in healthy adult populations. Eighteen studies of healthy adults (>40 years) that assessed maximal oxygen uptake (VO2max), oxygen uptake in relation to body weight (VO2max/kg), and oxygen pulse (VO2max/heart rate) were included. After data processing, spiroergometric parameters were correlated to age. Regression analysis was performed separately for each study and also weighted with the number of participants. For all spiroergometric parameters, age dependency was detectable for both males and females. After performing regression analysis, the following linear regression equations were determined: VO2max: Males = -28 × age (years) + 4,000; females = -20 × age (years) + 2,700 (ml/min); VO2max/kg: Males = -0.42 × age (years) + 58; females = -0.35 × age (years) + 46 (ml/min/kg); VO2max/heart rate: Males = -0.10 × age (years) + 20.50; females = -0.05 × age (years) + 13 (ml/min/heart rate). The present study provides practicable reference values for the spiroergometric parameters of adult men and women.
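
    The regression equations quoted above can be applied directly; the helper below simply encodes them as given in the abstract (units: ml/min, ml/min/kg, and ml per heartbeat for the oxygen pulse). The example ages are arbitrary.

```python
def vo2max_reference(age_years, sex):
    """Reference values from the regression equations quoted in the abstract.
    Returns (VO2max in ml/min, VO2max/kg in ml/min/kg, O2 pulse in ml/beat)."""
    s = sex.lower()
    if s == "male":
        return (-28 * age_years + 4000,
                -0.42 * age_years + 58,
                -0.10 * age_years + 20.50)
    if s == "female":
        return (-20 * age_years + 2700,
                -0.35 * age_years + 46,
                -0.05 * age_years + 13)
    raise ValueError("sex must be 'male' or 'female'")

for age in (45, 60, 75):
    vo2, vo2kg, o2pulse = vo2max_reference(age, "male")
    print(f"male, {age} y: VO2max {vo2:.0f} ml/min, {vo2kg:.1f} ml/min/kg, O2 pulse {o2pulse:.1f} ml/beat")
```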

  12. [Parameters of oxygen uptake and carbon dioxide output ventilatory efficiency during exercise are indices of circulatory function in normal subjects].

    Science.gov (United States)

    Sun, Xingguo; Wang, Guizhi; Lyu, Jing; Tan, Xiaoyue; William, W Stringer; Karlman, Wasserman

    2014-12-01

    To observe oxygen uptake efficiency plateau (OUEP, i.e. highest V˙O2/V˙E) and carbon dioxide output efficiency (lowest V˙E/V˙CO2) parameter changes during exercise in normal subjects. Five healthy volunteers performed the symptom-limited maximal cardiopulmonary exercise test (CPET) at Harbor-UCLA Medical Center. V˙O2/V˙E and V˙E/V˙CO2 were determined by both arterial and central venous catheters. After blood gas analysis of arterial and venous samples taken during the last 30 seconds of every exercise stage and every minute of incremental loading, the continuous parameter changes of hemodynamics and pulmonary ventilation were monitored and the oxygen uptake ventilatory efficiency (V˙O2/V˙E and V˙E/V˙CO2) was calculated. During CPET, as the loading gradually increased, cardiac output, heart rate, mixed venous oxygen saturation, arteriovenous oxygen difference, minute ventilation, minute alveolar ventilation, tidal volume, alveolar ventilation and pulmonary ventilation-perfusion ratio increased near-linearly (P < 0.05) or did not change significantly (P > 0.05); stroke volume, respiratory rate, arterial partial pressure of carbon dioxide, arterial blood hydrogen ion concentration and dead space ventilation ratio changed significantly and non-linearly compared with the resting state (P < 0.05). The V˙O2/V˙E during exercise increased from 30.9 ± 3.3 at rest to the highest plateau of 46.0 ± 4.7 (P < 0.05). The V˙E/V˙CO2 during exercise decreased from the resting state (39.2 ± 6.5) to the minimum value (24.2 ± 2.4) a few minutes after AT (P > 0.05 vs. earlier stage), then gradually increased after the ventilatory compensation point (P < 0.05). Cardiac and lung function as well as metabolism change synchronously during CPET. In the absence of a pulmonary limit, appearing before and after the anaerobic threshold, OUEP and the lowest V˙E/V˙CO2 can be used as reliable parameters representing circulatory function.

  13. Evolutionary programming for neutron instrument optimisation

    Science.gov (United States)

    Bentley, Phillip M.; Pappas, Catherine; Habicht, Klaus; Lelièvre-Berna, Eddy

    2006-11-01

    Virtual instruments based on Monte-Carlo techniques are now an integral part of novel instrumentation development, and the existing codes (McSTAS and Vitess) are extensively used to define and optimise novel instrumental concepts. Neutron spectrometers, however, involve a large number of parameters and their optimisation is often a complex and tedious procedure. Artificial intelligence algorithms are proving increasingly useful in such situations. Here, we present an automatic, reliable and scalable numerical optimisation concept based on the canonical genetic algorithm (GA). The algorithm was used to optimise the 3D magnetic field profile of the NSE spectrometer SPAN at the HMI. We discuss the potential of the GA which, combined with the existing Monte-Carlo codes (Vitess, McSTAS, etc.), leads to a very powerful tool for automated global optimisation of a general neutron scattering instrument, avoiding local optimum configurations.

  14. Evolutionary programming for neutron instrument optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Bentley, Phillip M. [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany)]. E-mail: phillip.bentley@hmi.de; Pappas, Catherine [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany); Habicht, Klaus [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany); Lelievre-Berna, Eddy [Institut Laue-Langevin, 6 rue Jules Horowitz, BP 156, 38042 Grenoble Cedex 9 (France)

    2006-11-15

    Virtual instruments based on Monte-Carlo techniques are now an integral part of novel instrumentation development, and the existing codes (McSTAS and Vitess) are extensively used to define and optimise novel instrumental concepts. Neutron spectrometers, however, involve a large number of parameters and their optimisation is often a complex and tedious procedure. Artificial intelligence algorithms are proving increasingly useful in such situations. Here, we present an automatic, reliable and scalable numerical optimisation concept based on the canonical genetic algorithm (GA). The algorithm was used to optimise the 3D magnetic field profile of the NSE spectrometer SPAN at the HMI. We discuss the potential of the GA which, combined with the existing Monte-Carlo codes (Vitess, McSTAS, etc.), leads to a very powerful tool for automated global optimisation of a general neutron scattering instrument, avoiding local optimum configurations.

  15. Modern parameters of caesium-137 root uptake in natural and agricultural grass ecosystems of contaminated post-Chernobyl landscape, Russia

    Directory of Open Access Journals (Sweden)

    Tatiana Paramonova

    2015-01-01

    The estimation of modern parameters of 137Cs root uptake was conducted in natural meadow and agricultural ecosystems of the post-Chernobyl landscapes of the Tula region. The agrosystems with the main crops of the field rotation (barley, potatoes, rape, maize), occupying watersheds and slopes with arable chernozems, are contaminated at a level of 460-670 Bq/kg (4.7-6.0 Ci/km2); natural meadow ecosystems occupying the lower parts of slopes and floodplains are contaminated at a level of 620-710 Bq/kg (5.8-7.6 Ci/km2). In the arable soils 137Cs is uniformly distributed to the depth of the Ap horizon (20-30 cm thick), while in meadow soils 70-80% of the radionuclide is concentrated within the top Ad horizon (9-13 cm thick). This topsoil layer corresponds to the rhizosphere zone, where >80-90% of plant roots are concentrated and from which 137Cs is mostly consumed by vegetation. The total amount of 137Cs root uptake depends on the level of soil radioactive contamination (correlation coefficient 0.61). Thus, 137Cs activity in meadow vegetation (103-160 Bq/kg) is generally higher than that in agricultural vegetation (9-92 Bq/kg). The values of the 137Cs transfer factor in the studied ecosystems vary from 0.01 (rape) to 0.20 (wet meadow), which confirms the discrimination of the radionuclide's root uptake. The larger the volume of roots and their absorbing surface, the higher the values of the transfer factor from soil to plant (correlation coefficients 0.71 and 0.64, respectively). 137Cs translocation from roots to shoots is also determined by the biological features of plants. At the same level of soil contamination, the above-ground parts of meadow herbs accumulate more 137Cs than Gramineae species, and in agrosystems the above-ground parts of weeds concentrate more 137Cs than cultivated cereals. Thus, the level of soil radioactive pollution and the biological features of plants are determinants in the process of 137Cs root uptake and translocation and should be considered in land use policy.
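
    The transfer factors discussed above are simple specific-activity ratios; the trivial sketch below reproduces their order of magnitude using midpoint values taken from the abstract (the midpoints themselves are an illustrative simplification).

```python
def transfer_factor(activity_plant_bq_kg, activity_soil_bq_kg):
    """Soil-to-plant transfer factor: specific activity in plant / specific activity in soil."""
    return activity_plant_bq_kg / activity_soil_bq_kg

# Midpoint values estimated from the ranges quoted in the abstract (Bq/kg)
meadow_tf = transfer_factor(activity_plant_bq_kg=130, activity_soil_bq_kg=665)  # meadow vegetation
crop_tf = transfer_factor(activity_plant_bq_kg=50, activity_soil_bq_kg=565)     # field crops
print(f"meadow TF ~ {meadow_tf:.2f}, crop TF ~ {crop_tf:.2f}")
```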

  16. Optimising of design parameters of the TESLA vertex detector and search for events with isolated leptons and large missing transverse momentum with the ZEUS-experiment (HERA II)

    Energy Technology Data Exchange (ETDEWEB)

    Adler, V.

    2006-06-15

    In this thesis, a search for events with isolated leptons and large missing transverse momentum at HERA is presented. Data with an integrated luminosity of 40.76 pb⁻¹ of e⁺p collisions, collected with the ZEUS detector at a centre-of-mass energy of 318 GeV during the HERA II running period in the years 2003 and 2004, were used. Some extensions of the SM contain FCNC processes at tree level, which could lead to a significantly enhanced rate of singly produced t-quarks at HERA (e±p → e±tX). The signature of interest originates from the decay t → bW⁺ with a subsequent leptonic decay of the W-boson (W⁺ → e⁺νe, μ⁺νμ, τ⁺ντ). After the final selection, one event was found in data in the combined e- and μ-channels, where 1.27±0.15 were expected from SM predictions. The selection efficiency in these channels was 13.4 (+1.8/−0.8)% for a t-quark mass of 175 GeV. In combination with independent searches in HERA I data in both the leptonic and the hadronic channel, limits on the FCNC couplings through photon and Z-boson exchange were derived. The NLO limit κ_tuγ < 0.160 (+0.014/−0.012) at 95% CL for a t-quark mass of 175 GeV is the most stringent so far. Together with the most stringent limit on v_tuZ of 0.37, an upper cross-section limit of σ(single t) < 0.186 (+0.029/−0.012) pb was obtained. A limit on the cross section of single W-boson production of σ(single W) < 1.54 (+0.67/−0.41) pb was also obtained at 95% CL. This thesis also presents a simulation study to optimise design parameters of a MAPS-based vertex detector for a future ILC. The study was based on the TESLA TDR. In order to evaluate the effect of different design options for the vertex detector on the physics performance of the whole detector, the reconstruction of the t-quark mass from the signal process e⁺e

  17. Predictive significance of standardized uptake value parameters of FDG-PET in patients with non-small cell lung carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Duan, X-Y.; Wang, W.; Li, M.; Li, Y.; Guo, Y-M. [PET-CT Center, The First Affiliated Hospital of Xi' an, Jiaotong University, Xi' an, Shaanxi (China)

    2015-02-03

    ¹⁸F-fluoro-2-deoxyglucose (FDG) positron emission tomography (PET)/computed tomography (CT) is widely used to diagnose and stage non-small cell lung cancer (NSCLC). The aim of this retrospective study was to evaluate the predictive ability of different FDG standardized uptake values (SUVs) in 74 patients with newly diagnosed NSCLC. ¹⁸F-FDG PET/CT scans were performed, different SUV parameters (SUVmax, SUVavg, SUVT/L, and SUVT/A) were obtained, and their relationships with clinical characteristics were investigated. Meanwhile, correlation and multiple stepwise regression analyses were performed to determine the primary SUV predictor for NSCLC. Age, gender, and tumor size significantly affected the SUV parameters. The mean SUVs of squamous cell carcinoma were higher than those of adenocarcinoma. Poorly differentiated tumors exhibited higher SUVs than well-differentiated ones. Further analyses based on the pathologic type revealed that the SUVmax, SUVavg, and SUVT/L of poorly differentiated adenocarcinoma tumors were higher than those of moderately or well-differentiated tumors. Among these four SUV parameters, SUVT/L was the primary predictor for tumor differentiation. However, in adenocarcinoma, SUVmax was the determining factor for tumor differentiation. Our results showed that these four SUV parameters had predictive significance related to NSCLC tumor differentiation; SUVT/L appeared to be most useful overall, but SUVmax was the best index for adenocarcinoma tumor differentiation.

  18. Design Optimisation and Control of a Pilot Operated Seat Valve

    DEFF Research Database (Denmark)

    Nielsen, Brian; Andersen, Torben Ole; Hansen, Michael Rygaard

    2004-01-01

    The paper gives an approach for optimisation of the bandwidth of a pilot operated seat valve for mobile applications. Physical dimensions as well as parameters of the implemented control loop are optimised simultaneously. The frequency response of the valve varies as a function of the pressure drop across the valve, and it is found to be necessary to scale the controller parameters in the optimised design as a function of pressure drop.

  19. Clinical Safety and Parameters of Maximum Oxygen Uptake (VO₂Max) Testing in Pakistani Patients With Heart Failure.

    Science.gov (United States)

    Hussain, Sajjad; Kayani, Azhar Mahmood; Munir, Rubab

    2015-09-01

    To determine the parameters of maximum oxygen uptake (VO2max) in a Pakistani systolic heart failure cohort and its safety in a clinical setting. Descriptive study. Armed Forces Institute of Cardiology, National Institute of Heart Diseases, Rawalpindi, from June 2011 to January 2013. The maximum oxygen uptake test was performed in patients with severe heart failure who could perform the VO2max treadmill test. Age, Body Mass Index (BMI), ejection fraction, VO2max and respiratory exchange ratios and their correlations were determined. Out of 135 patients, 77% (n=104) were males, with a mean age of 45.9 ±15.7 years. Weight of patients ranged from 30 kg to 107 kg (mean 63.29 ±13.6 kg); mean BMI was 23.16 ±4.56 kg/m2. All patients presented with either NYHA class III (50.3%; n=68) or IV (49.7%; n=67); mean ejection fraction was 22.54 ±5.7% (10 - 35%, IQ: 20 - 25). The VO2max of the patients ranged from 3 to 32 ml/kg/minute (mean 12.85 ±4.49 ml/kg/minute). The respiratory exchange ratio was over 1 for all patients (1.12 - 1.96, mean = 1.36 ±0.187). There was a negative correlation with age (r = -0.204; p = 0.028), whereas a positive correlation was found with exercise time (r = 0.684; p = 0.000), hemoglobin (r = 0.190; p = 0.047) and ejection fraction (r = 0.187; p = 0.044). Cardiopulmonary exercise testing in a high-risk heart failure cohort is safe and provides information beyond the routine clinical evaluation of heart failure patients.

  20. A cancer research UK pharmacokinetic study of BPA-mannitol in patients with high grade glioma to optimise uptake parameters for clinical trials of BNCT

    Energy Technology Data Exchange (ETDEWEB)

    Cruickshank, G.S. [University of Birmingham and University Hospital Birmingham, Birmingham (United Kingdom)], E-mail: garth.cruickshank@uhb.nhs.uk; Ngoga, D.; Detta, A.; Green, S.; James, N.D.; Wojnecki, C.; Doran, J.; Hardie, J.; Chester, M.; Graham, N.; Ghani, Z. [University of Birmingham and University Hospital Birmingham, Birmingham (United Kingdom); Halbert, G.; Elliot, M.; Ford, S. [CR-UK Formulation Unit, University of Strathclyde, Glasgow (United Kingdom); Braithwaite, R.; Sheehan, T.M.T. [Regional Laboratory for Toxicology, Sandwell and West Birmingham Hospitals Trust, Birmingham (United Kingdom); Vickerman, J.; Lockyer, N. [Surface Analysis Research Centre, University of Manchester, Manchester (United Kingdom); Steinfeldt, H.; Croswell, G. [CR-UK Drug Development Office, London (United Kingdom)] (and others)

    2009-07-15

    This paper describes results to date from a human pharmacokinetic study which began recruitment in December 2007. Results are presented for a single patient recruited in December 2007. A second patient was recruited in July 2008 but detailed data are not available at the time of writing. The trial is an open-label, non-comparative, non-therapeutic study of BPA-mannitol in patients with high-grade glioma, who will be undergoing stereotactic brain biopsy as part of the diagnostic process before definitive treatment. The study investigates the route of infusion (intra-venous (IV) or intra-carotid artery) and in each case will assess the effect of administration of mannitol as a blood-brain barrier disrupter. All cohorts will receive a 2 h infusion of BPA-mannitol, and for some cohorts an additional mannitol bolus will be administered at the beginning of this infusion. Measurements of ¹⁰B concentration are made by inductively coupled plasma mass spectrometry (ICP-MS) in samples of blood, urine, extra-cellular fluid in normal brain (via a dialysis probe), brain tissue around tumour and tumour tissue. Additional analysis of the tumour tissue is performed using secondary ion mass spectrometry (SIMS). The first patient was part of the cohort having intra-venous infusion without mannitol bolus. No serious clinical problems were experienced and the assay results can be compared with available patient data from other BNCT centres. In particular we note that the peak ¹⁰B concentration in blood was 28.1 µg/ml for a total BPA administration of 350 mg/kg, which is very consistent with the previous experience with BPA-fructose reported by the Helsinki group.

  1. Magnetic resonance imaging for diagnostic evaluation of hernia of an intervertebral disk. Optimisation of imaging parameters by application of various magnetic field intensities

    Energy Technology Data Exchange (ETDEWEB)

    Beyer, H.K.; Oppel, G.; Bluemm, R.; Uhlenbrock, D.

    1988-02-01

    The article reports experience gained over three years with diagnostic NMR imaging of the lumbar spine. On the basis of results obtained from almost 500 examinations, an optimisation concept with regard to measuring sequences and orientation of sectional cuts is presented. Imaging of the spine in three planes, with sectional layer thicknesses between 3 and 5 mm, using a 1.5 Tesla system, seems to yield the diagnostic optimum and, in our opinion, is superior to invasive myelography and CT scanning. A prospective study indicated a hit rate of 97.2%, rising to 100% when the results obtained with the 1.5 Tesla system were evaluated together with the paraxial sections. The magnetic field intensity of 1.5 Tesla especially improves the quality of images of paraxial cuts as compared with the 0.5 Tesla field system, due to the better contrast-to-noise ratio and thinner sections.

  2. Computer Based Optimisation Routines

    DEFF Research Database (Denmark)

    Dragsted, Birgitte; Olsen, Flemming Ove

    1996-01-01

    In this paper the need for optimisation methods for the laser cutting process has been identified in three different situations. Demands on the optimisation methods for these situations are presented, and one method for each situation is suggested. The adaptation and implementation of the methods...

  4. An optimisation method for complex product design

    Science.gov (United States)

    Li, Ni; Yi, Wenqing; Bi, Zhuming; Kong, Haipeng; Gong, Guanghong

    2013-11-01

    Designing a complex product such as an aircraft usually requires both qualitative and quantitative data and reasoning. To assist the design process, a critical issue is how to represent qualitative data and utilise it in the optimisation. In this study, a new method is proposed for the optimal design of complex products: to make full use of available data, information and knowledge, qualitative reasoning is integrated into the optimisation process. The transformation and fusion of qualitative and quantitative data are achieved via fuzzy set theory and a cloud model. To shorten the design process, parallel computing is implemented to solve the formulated optimisation problems. A parallel adaptive hybrid algorithm (PAHA) has been proposed. The performance of the new algorithm has been verified through a comparison with the results of two other existing algorithms. Further, PAHA has been applied to determine the shape parameters of an aircraft model for aerodynamic optimisation purposes.

  5. Kinetics and conductivity parameters of uptake and transport of polychlorinated biphenyls in the Caco-2 intestinal cell line model

    NARCIS (Netherlands)

    Dulfer, W.J.; Govers, H.A.J.; Groten, J.P.

    1998-01-01

    Most of the accumulation of polychlorinated biphenyls (PCBs) over the food chain can be attributed to contaminant uptake from food. The effect of fatty acid absorption on net uptake and transport fluxes of a selection of 14 PCBs over the organismal gut epithelium has been determined in monolayers of

  7. Intracranial Pressure Monitoring as a Part of Multimodal Monitoring Management of Patients with Critical Polytrauma: Correlation between Optimised Intensive Therapy According to Intracranial Pressure Parameters and Clinical Picture

    Science.gov (United States)

    Luca, Loredana; Rogobete, Alexandru Florin; Bedreag, Ovidiu Horea; Sarandan, Mirela; Cradigati, Carmen Alina; Papurica, Marius; Gruneantu, Anelore; Patrut, Raluca; Vernic, Corina; Dumbuleu, Corina Maria; Sandesc, Dorel

    2015-01-01

    Objective: Trauma patients require complex therapeutic management because of multiple severe injuries or secondary complications. The most significant injury found in patients with trauma is head injury, which has the greatest impact on mortality. Intracranial pressure (ICP) monitoring is required in severe traumatic head injury because it allows treatment to be optimised based on ICP values and cerebral perfusion pressure (CPP). Methods: From a total of 64 patients admitted to the intensive care unit (ICU) ‘Casa Austria’ of the Polytraumatology Clinic of the Emergency County Hospital “Pius Brinzeu” Timisoara, Romania, between January 2014 and December 2014, only patients who underwent ICP monitoring (n=10) were analysed. The study population was divided into several categories depending on the time passed from trauma to the installation of ICP monitoring (24 h). Comparisons were made in terms of the number of days in the ICU and mortality between patients with head injury who benefited and those who did not benefit from ICP monitoring. Results: The results show a positive influence of ICP monitoring on the length of ICU stay: patients who benefited from ICP monitoring stayed on average 1.93 fewer days than those who did not undergo ICP monitoring. Conclusion: ICP monitoring and optimising therapy according to ICP and CPP have a significant influence on the rate of survival. ICP monitoring is necessary in all patients with head trauma injury according to recent guidelines. The main therapeutic goal in the management of the trauma patient with head injury is to minimise the destructive effects of the associated secondary injuries. PMID:27366538

  8. Isogeometric design optimisation

    NARCIS (Netherlands)

    Nagy, A.P.

    2011-01-01

    Design optimisation is of paramount importance in most engineering, e.g. aeronautical, automotive, or naval, disciplines. Its interdisciplinary character is manifested in the synthesis of geometric modelling, numerical analysis, mathematical programming, and computer sciences. The evolution of the f

  9. Methods for Optimisation of the Laser Cutting Process

    DEFF Research Database (Denmark)

    Dragsted, Birgitte

    This thesis deals with the adaptation and implementation of various optimisation methods, in the field of experimental design, for the laser cutting process. The problem of optimising the laser cutting process has been defined and a structure for a Decision Support System (DSS) for the optimisation of the laser cutting process has been suggested. The DSS consists of a database with the currently used and old parameter settings. One of the optimisation methods has also been implemented in the DSS in order to facilitate the optimisation procedure for the laser operator. The Simplex Method has been adapted in two versions: a qualitative one, which optimises the process by comparing the laser-cut items, and a quantitative one, which uses a weighted quality response in order to achieve a satisfactory quality and after that maximises the cutting speed, thus increasing the productivity of the process...
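
    The quantitative variant described above (maximise cutting speed while keeping a weighted quality response at a satisfactory level) can be sketched with a standard Nelder-Mead simplex routine; the quality model and the constraint level below are made-up placeholders, not the process model of the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def quality(power_kw, speed_m_min):
    """Placeholder weighted quality response (higher is better); a real DSS would
    evaluate cut quality (dross, roughness, kerf) from measurements or a process model."""
    return 1.0 - 0.4 * (power_kw - 3.0) ** 2 - 0.2 * (speed_m_min - 2.0 * power_kw) ** 2

def objective(x):
    power, speed = x
    q = quality(power, speed)
    penalty = 100.0 * max(0.0, 0.8 - q) ** 2     # require a satisfactory quality level (>= 0.8)
    return -speed + penalty                       # maximise cutting speed

result = minimize(objective, x0=[2.5, 3.0], method="Nelder-Mead")
print("optimised [power (kW), speed (m/min)]:", np.round(result.x, 2))
```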

  10. Optimisation of the LHCb detector

    CERN Document Server

    Hierck, R

    2003-01-01

    This thesis describes a comparison of the LHCb classic and LHCb light concept from a tracking perspective. The comparison includes the detector occupancies, the various pattern recognition algorithms and the reconstruction performance. The final optimised LHCb setup is used to study the physics performance of LHCb for the Bs->DsK and Bs->DsPi decay channels. This includes both the event selection and a study of the sensitivity for the Bs oscillation frequency, delta m_s, the Bs lifetime difference, DGamma_s, and the CP parameter gamma-2delta gamma.

  12. Luxury uptake of phosphorus by microalgae in full-scale waste stabilisation ponds.

    Science.gov (United States)

    Powell, N; Shilton, A; Pratt, S; Chisti, Y

    2011-01-01

    Biological phosphorus removal was studied in two full-scale waste stabilisation ponds (WSP). Luxury uptake by microalgae was confirmed to occur and in one pond the biomass contained almost four times the phosphorus required by microalgae for normal metabolism. However, the phosphorus content within the biomass was variable. This finding means that assumptions made in prior publications on modelling of phosphorus removal in WSP are questionable. While fluctuations in microalgal growth cause variation in many water quality parameters, this further variation in luxury uptake explains the high degree of variability in phosphorus removal commonly reported in the literature. To achieve effective biological phosphorus removal, high levels of both luxury uptake and microalgal concentration are needed. The findings of this work show that while high levels of these parameters did occur at times in the WSP monitored, they did not occur simultaneously. This is explained because accumulated phosphorus is subsequently consumed during rapid growth of biomass, resulting in a high biomass concentration with a low phosphorus content. Previous laboratory research has allowed a number of key considerations to be proposed to optimise both luxury uptake and biomass concentration. Now that it has been shown that high levels of biomass concentration and luxury uptake can occur in the field, it may be possible to redesign WSP to optimise these parameters.

  13. Uptake of indium-111 in the liver of mice following administration of indium-111-DTPA-labeled monoclonal antibodies: Influence of labeling parameters, physiologic parameters, and antibody dose

    Energy Technology Data Exchange (ETDEWEB)

    Schuhmacher, J.; Klivenyi, G.; Matys, R.; Kirchgebner, H.; Hauser, H.; Maier-Borst, W.; Matzku, S. (Institute of Radiology and Pathophysiology, Heidelberg (Germany, F.R.))

    1990-06-01

    Liver uptake of indium-111 (¹¹¹In) in mice was investigated following administration of ¹¹¹In-DTPA murine monoclonal antibodies (¹¹¹In-DTPA-MAbs) labeled by the cyclic anhydride method. Biodistribution of HPLC-purified ¹¹¹In-DTPA-MAb preparations was checked with a low (0.2 micrograms) and a high (8.0 micrograms) MAb dose. Using Bio Gel P-30 for desalting the MAb conjugates, ¹¹¹In uptake in the liver amounted to 8%-9% of the injected dose (ID) and was independent of the MAb dose, the DTPA-to-MAb molar ratio, tumor growth and biologic variability (different MAbs and different strains of mice). Using Sephadex G-25 for desalting, 0.2 micrograms doses from 7 out of 26 preparations showed increased liver accumulation of ¹¹¹In in non-tumor mice, ranging from 15%-25% of ID. Corresponding high doses led to a normal value of 8%-9%. Increased liver uptake of the low dose could not be reduced by coadministration of the unconjugated MAb, but was normal after reinjection of in vivo filtered material. An inverse intracellular distribution of ¹¹¹In activity between sediment and supernatant of liver homogenates, following the administration of the low and the high MAb dose, indicated an artifact of the labeling procedure rather than an inherent biological property of labeled MAbs.

  14. Regionalisation of a distributed method for flood quantiles estimation: Revaluation of local calibration hypothesis to enhance the spatial structure of the optimised parameter

    Science.gov (United States)

    Odry, Jean; Arnaud, Patrick

    2016-04-01

    The SHYREG method (Aubert et al., 2014) associates a stochastic rainfall generator and a rainfall-runoff model to produce rainfall and flood quantiles on a 1 km2 mesh covering the whole French territory. The rainfall generator is based on the description of rainy events by descriptive variables following probability distributions and is characterised by a high stability. This stochastic generator is fully regionalised, and the rainfall-runoff transformation is calibrated with a single parameter. Thanks to the stability of the approach, calibration can be performed against only flood quantiles associated with observed frequencies, which can be extracted from relatively short time series. The aggregation of SHYREG flood quantiles to the catchment scale is performed using an areal reduction factor technique unique on the whole territory. Past studies demonstrated the accuracy of SHYREG flood quantile estimation for catchments where flow data are available (Arnaud et al., 2015). Nevertheless, the parameter of the rainfall-runoff model is independently calibrated for each target catchment. As a consequence, this parameter plays a corrective role and compensates for approximations and modelling errors, which makes it difficult to identify its proper spatial pattern. It is an inherent objective of the SHYREG approach to be completely regionalised in order to provide a complete and accurate flood quantile database throughout France. Consequently, it appears necessary to identify the model configuration in which the calibrated parameter can be regionalised with acceptable performance. The re-evaluation of some of the method's hypotheses is a necessary step before the regionalisation. In particular, the inclusion or modification of the spatial variability of imposed parameters (like production and transfer reservoir size, base flow addition and the quantile aggregation function) should lead to more realistic values of the only calibrated parameter. The objective of the work presented

  15. Determination of operational parameters in waste incinerators as a prerequisite for further optimisation; Ermittlung von Betriebsparametern in Abfallverbrennungsanlagen als Voraussetzung fuer die weitere Optimierung

    Energy Technology Data Exchange (ETDEWEB)

    Horeni, M.; Beckmann, M. [Bauhaus-Univ. Weimar (Germany). Lehrstuhl fuer Verfahren und Umwelt; Fleischmann, H.; Barth, E. [AVA Abfallverwertung Augsburg GmbH, Augusburg (Germany)

    2007-07-01

    A comprehensive investigation of the current conditions in waste incinerators is of great importance for further optimization. Complex interactions arise between the main measured variables in these plants. Unknown operating parameters therefore have to be determined from the measured operating values, because the actual effects of a given optimization measure cannot always be determined directly. A substantial investigation potential exists with respect to corrosion in waste incineration plants. This potential can be exploited more fully if, in addition to the measured values describing the operating conditions, further balanced operating parameters are used. In this way, optimization measures can be derived. These optimization measures enable the overall optimization of the plant with regard to energy efficiency, an increase in throughput and a reduction in corrosion.

  16. Determination of Important Parameters in Affecting the Uptake of Reactive Black 5 by Chitosan Beads through Statistical Approach

    Directory of Open Access Journals (Sweden)

    Yi-Pin Phung

    2013-01-01

    Chitosan, which can be obtained from fishery waste, was studied as an alternative adsorbent to remove pollutants from wastewater. The adsorption of Reactive Black 5 (RB5) by chitosan was studied under batch experimental conditions to identify the optimum conditions under which the dye can be removed at a higher rate. The best-fit kinetics model was determined to be pseudo-second-order kinetics. From the isotherm study, the experimental results were better explained by the Freundlich isotherm. A Plackett-Burman design was employed to identify the influential variables affecting dye uptake. Response surface methodology (RSM) was used to determine the interactions between the factors and their optimum levels for maximum uptake of RB5. The optimum condition for the highest percentage uptake of RB5 dye was determined to be pH 4, an agitation rate of 200 rpm, a sorbent dosage of 1.0 g, a contact time of 300 minutes, and an initial dye concentration of 25 mg/L.
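
    The two fits mentioned above (pseudo-second-order kinetics and the Freundlich isotherm) are commonly performed on their linearised forms, as sketched below; the batch data used here are hypothetical, not the measurements of the study.

```python
import numpy as np

# Hypothetical batch kinetics data: time (min) and dye uptake q_t (mg/g)
t = np.array([10, 30, 60, 120, 180, 240, 300], dtype=float)
q = np.array([4.1, 9.8, 15.2, 20.5, 22.3, 23.1, 23.5])

# Pseudo-second-order model, linearised form: t/q = 1/(k2*qe^2) + t/qe
slope, intercept = np.polyfit(t, t / q, 1)
qe = 1.0 / slope
k2 = 1.0 / (intercept * qe**2)
print(f"pseudo-second-order fit: qe ~ {qe:.1f} mg/g, k2 ~ {k2:.5f} g/(mg*min)")

# Freundlich isotherm, linearised form: ln(qe) = ln(KF) + (1/n) * ln(Ce)
Ce = np.array([2.0, 5.0, 10.0, 15.0, 20.0])       # equilibrium dye concentration (mg/L)
qe_iso = np.array([8.0, 13.0, 18.5, 22.0, 25.0])  # equilibrium uptake (mg/g)
n_inv, lnKF = np.polyfit(np.log(Ce), np.log(qe_iso), 1)
print(f"Freundlich fit: KF ~ {np.exp(lnKF):.2f}, 1/n ~ {n_inv:.2f}")
```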

  17. Simulation versus Optimisation

    DEFF Research Database (Denmark)

    Lund, Henrik; Arler, Finn; Østergaard, Poul Alberg

    2017-01-01

    In recent years, several tools and models have been developed and used for the design and analysis of future national energy systems. Many of these models focus on the integration of various renewable energy resources and the transformation of existing fossil-based energy systems into future... On the one hand, the investment optimisation or optimal solutions approach; on the other hand, the analytical simulation or alternatives assessment approach. Awareness of the dissimilar theoretical assumptions behind the models clarifies differences between the models, explains dissimilarities in results, and provides a theoretical and methodological foundation for understanding and interpreting results from the two archetypes. Keywords: energy system analysis; investment optimisation models; simulation models; modelling theory; renewable energy

  18. Optimising AspectJ

    DEFF Research Database (Denmark)

    Avgustinov, Pavel; Christensen, Aske Simon; Hendren, Laurie

    2005-01-01

    AspectJ, an aspect-oriented extension of Java, is becoming increasingly popular. However, not much work has been directed at optimising compilers for AspectJ. Optimising AOP languages provides many new and interesting challenges for compiler writers, and this paper identifies and addresses three such challenges. ... We have implemented all of the techniques in this paper in abc, our AspectBench Compiler for AspectJ, and we demonstrate significant speedups with empirical results. Some of our techniques have already been integrated into the production AspectJ compiler, ajc 1.2.1.

  19. Comparison between FDG Uptake and Clinicopathologic and Immunohistochemical Parameters in Pre-operative PET/CT Scan of Primary Gastric Carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Han, Eun Ji; Choi, Woo Hee; Chung, Yong An; Kim, Ki Jun; Maeng, Lee So; Sohn, Kyung Myung; Jung, Hyun Suk; Sohn, Hyung Sun; Chung, Soo Kyo [College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of)

    2009-02-15

    The purpose of this study was to find out which clinicopathologic or immunohistochemical parameters may affect FDG uptake of the primary tumor in PET/CT scans of gastric carcinoma patients. Eighty-nine patients with stomach cancer who underwent pre-operative FDG PET/CT scans were included. In cases with perceptible FDG uptake in the primary tumor, the maximum standardized uptake value (SUVmax) was calculated. Clinicopathologic results such as depth of invasion (T stage), tumor size, lymph node metastasis, tumor differentiation and Lauren's classification, and immunohistochemical markers such as Ki-67 index and expression of p53, EGFR, Cathepsin D, c-erb-B2 and COX-2 were reviewed. Nineteen out of 89 gastric carcinomas showed imperceptible FDG uptake on PET/CT images. In cases with perceptible FDG uptake in the primary tumor, SUVmax was significantly higher in T2, T3 and T4 tumors than in T1 tumors (5.8±3.1 vs. 3.7±2.1, p=0.002). SUVmax of large tumors (above or equal to 3 cm) was also significantly higher than SUVmax of small ones (less than 3 cm) (5.7±3.2 vs. 3.7±2.0, p=0.002). The intestinal types of gastric carcinomas according to Lauren showed higher FDG uptake compared to the non-intestinal types (5.4±2.8 vs. 3.7±1.3, p=0.003). SUVmax differed significantly between the p53-positive and p53-negative groups (6.0±2.8 vs. 4.4±3.0, p=0.035). No significant difference was found with respect to presence of LN metastasis, tumor differentiation, Ki-67 index, and expression of EGFR, Cathepsin D, c-erb-B2 and COX-2. T stage of gastric carcinoma influenced the detectability of gastric cancer on FDG PET/CT scans. When gastric carcinoma was perceptible on PET/CT scan, T stage, size of the primary tumor, Lauren's classification and p53 expression were related to the degree of FDG uptake in the primary tumor.

  20. Open Pit Optimisation and Design: A Stepwise Approach*

    African Journals Online (AJOL)

    Michael

    2015-12-02

    ... holes were used for the analysis. ... retrieval and analysis, using Surpac software. ... economic and technical parameters were used to produce a set of nested pits. Fig. 4 depicts a summarised flow chart for the pit optimisation.

  1. Weight Optimisation of Steel Monopile Foundations for Offshore Windfarms

    DEFF Research Database (Denmark)

    Fog Gjersøe, Nils; Bouvin Pedersen, Erik; Kristensen, Brian;

    2015-01-01

    The potential for mass reduction of monopiles in offshore windfarms using current design practice is investigated. Optimisation by sensitivity analysis is carried out for the following important parameters: wall thickness distribution between tower and monopile, soil stiffness, damping ratio and ...

  2. Optimisation of the geometry of the drill bit and process parameters for cutting hybrid composite/metal structures in new aircrafts

    Science.gov (United States)

    Isbilir, Ozden

    Owing to their desirable strength-to-weight characteristics, carbon fibre reinforced polymer composites have been favourite materials for structural applications in different industries such as aerospace, transport, sports and energy. They provide a weight reduction in the whole structure and consequently decrease fuel consumption. The use of lightweight materials such as titanium and its alloys in modern aircraft has also increased significantly in the last couple of decades. Titanium and its alloys offer a high strength-to-weight ratio, high compressive and tensile strength at high temperatures, low density, excellent corrosion resistance, exceptional erosion resistance, superior fatigue resistance and a relatively low modulus of elasticity. Although composite/metal hybrid structures are increasingly used in airframes nowadays, the number of studies regarding drilling of composite/metal stacks is very limited. During drilling of multilayer materials, different problems may arise due to the very different attributes of these materials. Machining conditions when drilling such structures play an important role in tool wear, quality of holes and cost of machining. The research work in this thesis aims to investigate drilling of a CFRP/Ti6Al4V hybrid structure and to optimise process parameters and drill geometry. The research work consists of a complete experimental study, including drilling tests, in-situ and post measurements and related analyses, and finite element analysis including fully 3-D finite element models. The experimental investigations focused on drilling outputs such as thrust force, torque, delamination, burr formation, surface roughness and tool wear. An algorithm was developed to analyse drilling-induced delamination quantitatively based on the images. In the numerical analysis, novel 3-D finite element models of drilling of CFRP, Ti6Al4V and the CFRP/Ti6Al4V hybrid structure were developed with the use of 3-D complex drill geometries. A user defined subroutine was developed

  3. Optimising Magnetostatic Assemblies

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Smith, Anders

    the optimal remanence distribution with respect to a linear objective functional. Additionally, it is shown here that the same formalism can be applied to the optimisation of the geometry of magnetic systems. Specifically, the border separating the permanent magnet from regions occupied by air or soft...

  4. Using response surface methodology in optimisation of biodiesel production via alkali catalysed transesterification of waste cooking oil

    CSIR Research Space (South Africa)

    Naidoo, R

    2016-03-01

    Full Text Available The report focuses on optimisation of alkali catalysis as a process for producing biodiesel from waste cooking oils. Biodiesel production parameters that were optimised were methanol to oil ratio, catalyst concentration, reaction temperature...

  5. Additive effects due to biochar and endophyte application enable soybean to enhance nutrient uptake and modulate nutritional parameters* #

    Science.gov (United States)

    Waqas, Muhammad; Kim, Yoon-Ha; Khan, Abdul Latif; Shahzad, Raheem; Asaf, Sajjad; Hamayun, Muhammad; Kang, Sang-Mo; Khan, Muhammad Aaqil; Lee, In-Jung

    2017-01-01

    We studied the effects of hardwood-derived biochar (BC) and the phytohormone-producing endophyte Galactomyces geotrichum WLL1 in soybean (Glycine max (L.) Merr.) with respect to basic, macro- and micronutrient uptake and assimilation, and their subsequent effects on the regulation of functional amino acids, isoflavones, fatty acid composition, total sugar contents, total phenolic contents, and 1,1-diphenyl-2-picrylhydrazyl (DPPH)-scavenging activity. The assimilation of basic nutrients such as nitrogen was up-regulated, leaving carbon, oxygen, and hydrogen unaffected in BC+G. geotrichum-treated soybean plants. In comparison, the uptake of macro- and micronutrients fluctuated with the individual or co-application of BC and G. geotrichum in soybean plant organs and the rhizospheric substrate. Moreover, the same pattern was recorded for the regulation of functional amino acids, isoflavones, fatty acid composition, total sugar contents, total phenolic contents, and DPPH-scavenging activity. Collectively, these results showed that BC+G. geotrichum-treated soybean yielded better results than plants treated with the individual applications. It was concluded that BC is an additional nutrient source, that G. geotrichum acts as a plant-biostimulating agent, and that the effects of both are additive towards plant growth promotion. Strategies involving the incorporation of BC and endophytic symbiosis may help achieve eco-friendly agricultural production, thus reducing the excessive use of chemical agents. PMID:28124840

  6. Correlation of intra-tumor 18F-FDG uptake heterogeneity indices with perfusion CT derived parameters in colorectal cancer.

    Directory of Open Access Journals (Sweden)

    Florent Tixier

    Thirty patients with proven colorectal cancer prospectively underwent integrated 18F-FDG PET/DCE-CT to assess the metabolic-flow phenotype. Both CT blood flow parametric maps and PET images were analyzed. Correlations between PET heterogeneity and perfusion CT were assessed by Spearman's rank correlation analysis. Blood flow visualization provided by DCE-CT images was significantly correlated with 18F-FDG PET metabolically active tumor volume as well as with uptake heterogeneity for patients with stage III/IV tumors (|ρ|: 0.66 to 0.78; p-value<0.02). The positive correlation found with tumor blood flow indicates that intra-tumor heterogeneity of 18F-FDG PET accumulation reflects to some extent tracer distribution and consequently indicates that 18F-FDG PET intra-tumor heterogeneity may be associated with physiological processes such as tumor vascularization.

  7. A methodological approach to the design of optimising control strategies for sewer systems

    DEFF Research Database (Denmark)

    Mollerup, Ane Loft; Mikkelsen, Peter Steen; Sin, Gürkan

    2016-01-01

    This study focuses on designing an optimisation based control for a sewer system in a methodological way and linking it to a regulatory control. Optimisation based design is found to depend on proper choice of a model, formulation of objective function and tuning of optimisation parameters. Accordin......This study focuses on designing an optimisation based control for a sewer system in a methodological way and linking it to a regulatory control. Optimisation based design is found to depend on proper choice of a model, formulation of objective function and tuning of optimisation parameters...... control; a rule based expert system. On the other hand, compared with a regulatory control technique designed earlier in Mollerup et al. (2015), the optimisation showed similar performance with respect to minimising overflow volume. Hence for operation of small sewer systems, regulatory control strategies...... can offer promising potential and should be considered alongside more advanced strategies when identifying novel solutions....

  8. Optimisation by hierarchical search

    Science.gov (United States)

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
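
    A minimal sketch of the hierarchical idea described above, not the authors' algorithm: the variables are split into groups and each group is optimised in turn while the others are held fixed, here with a simple accept-if-better random search on a toy cost function. The grouping, the cost function and the search routine are all illustrative assumptions.

    ```python
    import random

    def cost(x):
        # Toy rugged cost function standing in for a hard optimisation landscape.
        return sum((xi - 1.0) ** 2 + 0.3 * abs(xi) for xi in x)

    def optimise_group(x, group, n_trials=200, step=0.5):
        """Randomly perturb only the variables in `group`, keeping improvements."""
        best = list(x)
        for _ in range(n_trials):
            cand = list(best)
            for i in group:
                cand[i] += random.uniform(-step, step)
            if cost(cand) < cost(best):
                best = cand
        return best

    def hierarchical_search(x, groups, sweeps=10):
        """Sweep over groups of variables, optimising one group at a time."""
        for _ in range(sweeps):
            for group in groups:
                x = optimise_group(x, group)
        return x

    x0 = [random.uniform(-5, 5) for _ in range(8)]
    groups = [[0, 1], [2, 3], [4, 5], [6, 7]]   # assumed grouping of the variables
    print(round(cost(hierarchical_search(x0, groups)), 4))
    ```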

  9. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian;

    2011-01-01

    of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set...

  10. Effect of the cigarette smoke component, 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), on physiological and molecular parameters of thiamin uptake by pancreatic acinar cells.

    Directory of Open Access Journals (Sweden)

    Padmanabhan Srinivasan

    Thiamin is indispensable for the normal function of pancreatic acinar cells. These cells take up thiamin via a specific carrier-mediated process that involves thiamin transporter-1 and -2 (THTR-1 and THTR-2; products of the SLC19A2 and SLC19A3 genes, respectively). In this study we examined the effect of chronic exposure of pancreatic acinar cells in vitro (pancreatic acinar 266-6 cells) and in vivo (wild-type and transgenic mice carrying the SLC19A2 and SLC19A3 promoters) to the cigarette smoke component 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) on physiological and molecular parameters of the thiamin uptake process. The results show that chronic exposure of 266-6 cells to NNK (3 µM, 24 h) leads to a significant inhibition in thiamin uptake. The inhibition was associated with a significant decrease in the level of expression of THTR-1 and -2 at the protein and mRNA levels as well as in the activity of the SLC19A2 and SLC19A3 promoters. Similarly, chronic exposure of mice to NNK (IP 10 mg/100 g body weight, three times/week for 2 weeks) leads to a significant inhibition in thiamin uptake by freshly isolated pancreatic acinar cells, as well as in the level of expression of THTR-1 and -2 protein and mRNA. Furthermore, the activity of the SLC19A2 and SLC19A3 promoters expressed in transgenic mice was significantly suppressed by chronic exposure to NNK. The effect of NNK on the activity of the SLC19A2 and SLC19A3 promoters was not mediated via changes in their methylation profile; rather, it appears to be exerted via SP1/GG and SP1/GC cis-regulatory elements in these promoters, respectively. These results demonstrate, for the first time, that chronic exposure of pancreatic acinar cells to NNK negatively impacts the physiological and molecular parameters of thiamin uptake by pancreatic acinar cells and that this effect is exerted, at least in part, at the level of transcription of the SLC19A2 and SLC19A3 genes.

  11. CO2 uptake and ecophysiological parameters of the grain crops of midcontinent North America: estimates from flux tower measurements

    Science.gov (United States)

    Gilmanov, Tagir; Wylie, Bruce; Tieszen, Larry; Meyers, Tilden P.; Baron, Vern S.; Bernacchi, Carl J.; Billesbach, David P.; Burba, George G.; Fischer, Marc L.; Glenn, Aaron J.; Hanan, Niall P.; Hatfield, Jerry L.; Heuer, Mark W.; Hollinger, Steven E.; Howard, Daniel M.; Matamala, Roser; Prueger, John H.; Tenuta, Mario; Young, David G.

    2013-01-01

    We analyzed net CO2 exchange data from 13 flux tower sites with 27 site-years of measurements over maize and wheat fields across midcontinent North America. A numerically robust “light-soil temperature-VPD”-based method was used to partition the data into photosynthetic assimilation and ecosystem respiration components. Year-round ecosystem-scale ecophysiological parameters of apparent quantum yield, photosynthetic capacity, convexity of the light response, respiration rate parameters, ecological light-use efficiency, and the curvature of the VPD-response of photosynthesis for maize and wheat crops were numerically identified and interpolated/extrapolated. This allowed us to gap-fill CO2 exchange components and calculate annual totals and budgets. VPD-limitation of photosynthesis was systematically observed in grain crops of the region (occurring from 20 to 120 days during the growing season, depending on site and year), determined by the VPD regime and the numerical value of the curvature parameter of the photosynthesis-VPD-response, σVPD. In 78% of the 27 site-years of observations, annual gross photosynthesis in these crops significantly exceeded ecosystem respiration, resulting in a net ecosystem production of up to 2100 g CO2 m−2 year−1. The measurement-based photosynthesis, respiration, and net ecosystem production data, as well as the estimates of the ecophysiological parameters, provide an empirical basis for parameterization and validation of mechanistic models of grain crop production in this economically and ecologically important region of North America.
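
    As a rough illustration of the kind of light-response partitioning mentioned above (the method used in the study also accounts for soil temperature and VPD, which are omitted here), daytime net CO2 exchange can be fitted with a rectangular-hyperbolic light response plus a constant respiration term; the synthetic data, units and parameter values below are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def nee_model(Q, alpha, Pmax, R):
        # NEE = -(gross photosynthesis) + ecosystem respiration,
        # with a rectangular-hyperbolic light response in PAR (Q).
        return -(alpha * Q * Pmax) / (alpha * Q + Pmax) + R

    # Synthetic half-hourly data: PAR (umol m-2 s-1) and NEE (mg CO2 m-2 s-1).
    rng = np.random.default_rng(0)
    Q = np.linspace(0.0, 2000.0, 200)
    nee_obs = nee_model(Q, alpha=0.002, Pmax=1.6, R=0.15) + rng.normal(0.0, 0.05, Q.size)

    (alpha, Pmax, R), _ = curve_fit(nee_model, Q, nee_obs, p0=[0.001, 1.0, 0.1])
    print(f"apparent quantum yield={alpha:.4f}, photosynthetic capacity={Pmax:.2f}, respiration={R:.3f}")
    ```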

  12. How to apply the Score-Function method to standard discrete event simulation tools in order to optimise a set of system parameters simultaneously: A Job-Shop example will be discussed

    DEFF Research Database (Denmark)

    Nielsen, Erland Hejn

    2000-01-01

    During the last 1-2 decades, simulation optimisation of discrete event dynamic systems (DEDS) has made considerable theoretical progress with respect to computational efficiency. The score-function (SF) method and the infinitesimal perturbation analysis (IPA) are two candidates belonging to this ......During the last 1-2 decades, simulation optimisation of discrete event dynamic systems (DEDS) has made considerable theoretical progress with respect to computational efficiency. The score-function (SF) method and the infinitesimal perturbation analysis (IPA) are two candidates belonging...... if the gradients are unbiased, the SA-algorithm will be known as a Robbins-Monro-algorithm. The present work will focus on the SF method and show how to migrate it to general types of discrete event simulation systems, in this case represented by SIMNET II, and discuss how the optimisation of the functioning...... of a Job-Shop can be handled by the SF method....
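
    The entry pairs the score-function (likelihood-ratio) gradient estimator with a Robbins-Monro stochastic-approximation update; the sketch below shows that combination on a stand-in stochastic response rather than on SIMNET II or the Job-Shop model, and the sampling distribution, step-size schedule and baseline are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(x):
        # Stand-in for a noisy discrete-event simulation response at setting x.
        return (x - 3.0) ** 2 + rng.normal(0.0, 1.0)

    def sf_gradient(theta, sigma=0.5, n=64):
        """Score-function estimate of d/dtheta E[L(X)] for X ~ N(theta, sigma^2).

        Uses the score d/dtheta log p(X) = (X - theta) / sigma^2 and a mean
        baseline for variance reduction.
        """
        x = rng.normal(theta, sigma, n)
        loss = np.array([simulate(xi) for xi in x])
        score = (x - theta) / sigma ** 2
        return np.mean((loss - loss.mean()) * score)

    theta = 0.0
    for k in range(500):
        a_k = 2.0 / (k + 10)      # Robbins-Monro steps: sum a_k diverges, sum a_k^2 converges
        theta -= a_k * sf_gradient(theta)
    print(round(theta, 2))        # should settle near the minimiser at 3.0
    ```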

  13. Multiobjective optimisation of bogie suspension to boost speed on curves

    Science.gov (United States)

    Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor

    2016-01-01

    To improve safety and maximum admissible speed in different operational scenarios, multiobjective optimisation of bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on the track plane accelerations up to 1.5 m/s2. To reduce the number of design parameters for optimisation and improve the computational efficiency, a global sensitivity analysis is accomplished using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The bogie conventional secondary and primary suspension components are chosen as the design parameters in the first two steps, respectively. In the last step semi-active suspension is in focus. The input electrical current to magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and the respective effects on bogie dynamics are explored. The safety Pareto optimised results are compared with those associated with in-service values. The global sensitivity analysis and multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised values of the design parameters makes it possible to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.

  14. Optimisation of the Sekwa blended-wing-Body research UAV

    CSIR Research Space (South Africa)

    Broughton, BA

    2008-10-01

    candidate design to bring the total mass up to the target total mass of 3.2 kg. The location of the ballast mass could be adjusted by the design code, which allowed the static margin to be used as a design variable. Finally, a series of checks were.... (Figure residue: overview of the optimisation process; an optimiser combining genetic algorithms and gradient-based methods, subject to natural FQ, geometric, control system and stall-behaviour constraints, taking design parameters as inputs and generating the design with the best cruise performance.)

  15. Comparison of two organic fertilizers along with Zn and B elements on concentration, uptake of nutrients and some growth parameters in millet (Panicum miliaceum L.)

    Directory of Open Access Journals (Sweden)

    T Nezhad hoseini

    2016-05-01

    A field experiment was conducted to study the effect of two organic fertilizers along with zinc and boron elements on some growth parameters and the concentration and uptake of nutrients in millet (Panicum miliaceum L.) using a factorial arrangement based on a randomized complete block design with three replications in the Qaen region, Iran. The main treatments were municipal solid waste compost and cow manure (each at 0 and 25 t.ha-1) and the sub-treatments were the elements Zn (0, 50 kg.ha-1) and B (0, 10 kg.ha-1) applied as their respective ZnSO4 and H3BO3 salts. Results showed that the treatment interactions had significant effects on total dry matter yield, number of tillers per plant and plant height of millet. The highest total dry matter production was achieved by the interaction of cow manure with the Zn and B elements. Concentrations of N, Fe, Zn, B and Cu in the plant were increased significantly by the treatment interaction effects compared to the control. The interaction effect of organic fertilizers with B (in the absence of Zn) enhanced plant B concentration significantly, whereas the interaction of organic fertilizers with Zn (in the absence of B) decreased the B concentration in the plant. The highest plant uptake of N, P, K, Zn, and B was observed in plots with cow manure and the Zn and B elements.

  16. How to apply the Score-Function method to standard discrete event simulation tools in order to optimise a set of system parameters simultaneously: A Job-Shop example will be discussed

    DEFF Research Database (Denmark)

    Nielsen, Erland Hejn

    2000-01-01

    if the gradients are unbiased, the SA-algorithm will be known as a Robbins-Monro-algorithm. The present work will focus on the SF method and show how to migrate it to general types of discrete event simulation systems, in this case represented by SIMNET II, and discuss how the optimisation of the functioning...

  17. Optimisation of Microstrip Antenna

    Directory of Open Access Journals (Sweden)

    H. El Hamchary

    1996-04-01

    When choosing the most appropriate microstrip antenna configuration for particular applications, the kind of excitation of the radiating element is an essential factor that requires careful consideration. For controlling the distribution of energy of the linear or planar array of elements and for coupling energy to the individual elements, a wide variety of feed mechanisms are available. In this paper, coaxial antenna feeding is assumed and the best (optimised) feeding is found. Then, antenna characteristics such as radiation pattern, return loss, input impedance, and VSWR are obtained.

  18. A Global Optimisation Toolbox for Massively Parallel Engineering Optimisation

    CERN Document Server

    Biscani, Francesco; Yam, Chit Hong

    2010-01-01

    A software platform for global optimisation, called PaGMO, has been developed within the Advanced Concepts Team (ACT) at the European Space Agency, and was recently released as an open-source project. PaGMO is built to tackle high-dimensional global optimisation problems, and it has been successfully used to find solutions to real-life engineering problems among which the preliminary design of interplanetary spacecraft trajectories - both chemical (including multiple flybys and deep-space maneuvers) and low-thrust (limited, at the moment, to single phase trajectories), the inverse design of nano-structured radiators and the design of non-reactive controllers for planetary rovers. Featuring an arsenal of global and local optimisation algorithms (including genetic algorithms, differential evolution, simulated annealing, particle swarm optimisation, compass search, improved harmony search, and various interfaces to libraries for local optimisation such as SNOPT, IPOPT, GSL and NLopt), PaGMO is at its core a C++ ...
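
    PaGMO's own interfaces are not reproduced here; as a hedged stand-in, the snippet below runs one of the algorithm families listed above (differential evolution) on a standard benchmark via SciPy, purely to illustrate the kind of box-constrained global-optimisation task such a toolbox targets.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def rosenbrock(x):
        # Classic benchmark with a narrow curved valley, often used to exercise global optimisers.
        return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

    bounds = [(-5.0, 5.0)] * 5                 # 5-dimensional search box (arbitrary choice)
    result = differential_evolution(rosenbrock, bounds, maxiter=1000, tol=1e-8, seed=42)
    print(result.x.round(3), round(result.fun, 6))
    ```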

  19. Simple Combinatorial Optimisation Cost Games

    NARCIS (Netherlands)

    van Velzen, S.

    2005-01-01

    In this paper we introduce the class of simple combinatorial optimisation cost games, which are games associated with {0, 1}-matrices. A coalitional value of a combinatorial optimisation game is determined by solving an integer program associated with this matrix and the characteristic vector of the

  20. Elliptical Antenna Array Synthesis Using Backtracking Search Optimisation Algorithm

    Directory of Open Access Journals (Sweden)

    Kerim Guney

    2016-04-01

    The design of elliptical antenna arrays is a relatively new research area in the antenna array community. The backtracking search optimisation algorithm (BSA) is employed for the synthesis of elliptical antenna arrays having different numbers of array elements. For this aim, BSA is used to calculate the optimum angular position and amplitude values of the array elements. BSA is a population-based iterative evolutionary algorithm. The remarkable properties of BSA are that it has good optimisation performance, a simple implementation structure, and few control parameters. The results of BSA are compared with those of the self-adaptive differential evolution algorithm, firefly algorithm, biogeography based optimisation algorithm, and genetic algorithm. The results show that BSA can reach better solutions than the compared optimisation algorithms. Iterative performances of BSA are also compared with those of the bacterial foraging algorithm and the differential search algorithm.

  1. Optimisation of the Nonlinear Suspension Characteristics of a Light Commercial Vehicle

    Directory of Open Access Journals (Sweden)

    Dinçer Özcan

    2013-01-01

    The optimum functional characteristics of suspension components, namely, linear/nonlinear spring and nonlinear damper characteristic functions, are determined using simple lumped parameter models. A quarter car model is used to represent the front independent suspension, and a half car model is used to represent the rear solid axle suspension of a light commercial vehicle. The functional shapes of the suspension characteristics used in the optimisation process are based on typical shapes supplied by a car manufacturer. The complexity of a nonlinear function optimisation problem is reduced by scaling it up or down from the aforementioned shape in the optimisation process. The nonlinear optimised suspension characteristics are first obtained using lower complexity lumped parameter models. Then, the performance of the optimised suspension units is verified using the higher fidelity and more realistic Carmaker model. An interactive software module is developed to ease the nonlinear suspension optimisation process using the Matlab Graphical User Interface tool.

  2. ATLAS software configuration and build tool optimisation

    Science.gov (United States)

    Rybkin, Grigory; Atlas Collaboration

    2014-06-01

    ATLAS software code base is over 6 million lines organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers and used by more than 2500 physicists from over 200 universities and laboratories in 6 continents. To meet the challenge of configuration and building of this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build time performance that was optimised within several approaches: reduction of the number of reads of requirements files that are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of CMT commands used for build; introduction of package level build parallelism, i.e., parallelise the build of independent packages. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on optimisation of CMT commands in general, which made them approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CERN VMFS. The use of parallelism, caching and code optimisation significantly (by several times) reduced software build time and environment setup time, and increased the efficiency of

  3. Topology optimised wavelength dependent splitters

    DEFF Research Database (Denmark)

    Hede, K. K.; Burgos Leon, J.; Frandsen, Lars Hagedorn

    A photonic crystal wavelength dependent splitter has been constructed by utilising topology optimisation1. The splitter has been fabricated in a silicon-on-insulator material (Fig. 1). The topology optimised wavelength dependent splitter demonstrates promising 3D FDTD simulation results....... This complex photonic crystal structure is very sensitive against small fabrication variations from the expected topology optimised design. A wavelength dependent splitter is an important basic building block for high-performance nanophotonic circuits. 1J. S. Jensen and O. Sigmund, App. Phys. Lett. 84, 2022...

  4. Engineering Optimisation by Cuckoo Search

    CERN Document Server

    Yang, Xin-She

    2010-01-01

    A new metaheuristic optimisation algorithm, called Cuckoo Search (CS), was developed recently by Yang and Deb (2009). This paper presents a more extensive comparison study using some standard test functions and newly designed stochastic test functions. We then apply the CS algorithm to solve engineering design optimisation problems, including the design of springs and welded beam structures. The optimal solutions obtained by CS are far better than the best solutions obtained by an efficient particle swarm optimiser. We will discuss the unique search features used in CS and the implications for further research.
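
    A compact sketch of the cuckoo-search idea summarised above: new candidate solutions are generated by Lévy flights around the current nests, better candidates replace worse ones, and a fraction of the worst nests is abandoned each generation. Parameter values and the test function are assumptions, and the original algorithm by Yang and Deb includes details omitted here.

    ```python
    import math
    import numpy as np

    rng = np.random.default_rng(0)

    def sphere(x):
        return float(np.sum(x ** 2))

    def levy_step(dim, beta=1.5):
        """Lévy-distributed step lengths via Mantegna's algorithm."""
        sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                 (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0.0, sigma, dim)
        v = rng.normal(0.0, 1.0, dim)
        return u / np.abs(v) ** (1 / beta)

    def cuckoo_search(f, dim=5, n_nests=15, pa=0.25, iters=300, lb=-5.0, ub=5.0):
        nests = rng.uniform(lb, ub, (n_nests, dim))
        fitness = np.apply_along_axis(f, 1, nests)
        for _ in range(iters):
            best = nests[np.argmin(fitness)]
            for i in range(n_nests):
                # New solution via a Lévy flight biased towards the current best nest.
                cand = np.clip(nests[i] + 0.01 * levy_step(dim) * (nests[i] - best), lb, ub)
                if f(cand) < fitness[i]:
                    nests[i], fitness[i] = cand, f(cand)
            # Abandon a fraction pa of the worst nests and rebuild them at random.
            n_abandon = max(1, int(pa * n_nests))
            worst = np.argsort(fitness)[-n_abandon:]
            nests[worst] = rng.uniform(lb, ub, (n_abandon, dim))
            fitness[worst] = np.apply_along_axis(f, 1, nests[worst])
        return nests[np.argmin(fitness)], float(fitness.min())

    x_best, f_best = cuckoo_search(sphere)
    print(x_best.round(3), round(f_best, 6))
    ```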

  5. Topology optimised wavelength dependent splitters

    DEFF Research Database (Denmark)

    Hede, K. K.; Burgos Leon, J.; Frandsen, Lars Hagedorn;

    A photonic crystal wavelength dependent splitter has been constructed by utilising topology optimisation1. The splitter has been fabricated in a silicon-on-insulator material (Fig. 1). The topology optimised wavelength dependent splitter demonstrates promising 3D FDTD simulation results....... This complex photonic crystal structure is very sensitive against small fabrication variations from the expected topology optimised design. A wavelength dependent splitter is an important basic building block for high-performance nanophotonic circuits. 1J. S. Jensen and O. Sigmund, App. Phys. Lett. 84, 2022...

  6. Optimisation of the Laser Cutting Process

    DEFF Research Database (Denmark)

    Dragsted, Birgitte; Olsen, Flemmming Ove

    1996-01-01

    The problem in optimising the laser cutting process is outlined. Basic optimisation criteria and principles for adapting an optimisation method, the simplex method, are presented. The results of implementing a response function in the optimisation are discussed with respect to the quality as well...
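
    The simplex method referred to above can be illustrated with a stand-in response function in place of a measured cut-quality criterion; the two process variables and the quadratic-plus-interaction response below are assumptions, not the paper's actual model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def cut_quality_penalty(p):
        """Hypothetical penalty as a function of (cutting speed, laser power)."""
        speed, power = p
        return (speed - 35.0) ** 2 / 50.0 + 4.0 * (power - 1.8) ** 2 + 0.1 * speed * abs(power - 1.8)

    x0 = np.array([20.0, 1.0])                     # initial guess for (speed, power)
    res = minimize(cut_quality_penalty, x0, method="Nelder-Mead",
                   options={"xatol": 1e-4, "fatol": 1e-4})
    print(res.x.round(3), round(res.fun, 5))
    ```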

  7. Optimisation of fertiliser rates in crop production against energy use indicators

    DEFF Research Database (Denmark)

    Rossner, Helis; Ritz, Christian; Astover, Alar

    2014-01-01

    Optimising mineral nitrogen (N) use in crop production is an inevitable target, as mineral fertilisers reflect one of the highest inputs both in terms of economy and energy. The aim of the study was to compare the relationship between the rate of N fertiliser application and different measures of energy.......05) optimisation. Both the new combined indices gave optimum N norms in between the rates of ER and EG. Composted cow manure background did not affect mineral N optimisation significantly. We suggest optimisation of mineral N according to bi-dimensional parameters as they capture important features of production...

  8. Isogeometric Analysis and Shape Optimisation

    DEFF Research Database (Denmark)

    Gravesen, Jens; Evgrafov, Anton; Gersborg, Allan Roulund

    obtained and also some of the problems we have encountered. One of these problems is that the geometry of the shape is given by the boundary alone. And, it is the parametrisation of the boundary which is changed by the optimisation procedure. But isogeometric analysis requires a parametrisation......One of the attractive features of isogeometric analysis is the exact representation of the geometry. The geometry is furthermore given by a relative low number of control points and this makes isogeometric analysis an ideal basis for shape optimisation. I will describe some of the results we have...... of the whole domain. So in every optimisation cycle we need to extend a parametrisation of the boundary of a domain to the whole domain. It has to be fast in order not to slow the optimisation down but it also has to be robust and give a parametrisation of high quality. These are conflicting requirements so we...

  9. Turbulence optimisation in stellarator experiments

    Energy Technology Data Exchange (ETDEWEB)

    Proll, Josefine H.E. [Max-Planck/Princeton Center for Plasma Physics (Germany); Max-Planck-Institut fuer Plasmaphysik, Wendelsteinstr. 1, 17491 Greifswald (Germany); Faber, Benjamin J. [HSX Plasma Laboratory, University of Wisconsin-Madison, Madison, WI 53706 (United States); Helander, Per; Xanthopoulos, Pavlos [Max-Planck/Princeton Center for Plasma Physics (Germany); Lazerson, Samuel A.; Mynick, Harry E. [Plasma Physics Laboratory, Princeton University, P.O. Box 451 Princeton, New Jersey 08543-0451 (United States)

    2015-05-01

    Stellarators, the twisted siblings of the axisymmetric fusion experiments called tokamaks, have historically suffered from confining the heat of the plasma insufficiently compared with tokamaks and were therefore considered to be less promising candidates for a fusion reactor. This has changed, however, with the advent of stellarators in which the laminar transport is reduced to levels below that of tokamaks by shaping the magnetic field accordingly. As in tokamaks, the turbulent transport remains as the now dominant transport channel. Recent analytical theory suggests that the large configuration space of stellarators allows for an additional optimisation of the magnetic field to also reduce the turbulent transport. In this talk, the idea behind the turbulence optimisation is explained. We also present how an optimised equilibrium is obtained and how it might differ from the equilibrium field of an already existing device, and we compare experimental turbulence measurements in different configurations of the HSX stellarator in order to test the optimisation procedure.

  10. Parameter Estimation

    DEFF Research Database (Denmark)

    2011-01-01

    of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set......In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application...... of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models....
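
    As a minimal illustration of estimating a rate constant from a reaction-system model (a stand-in for the chapter's examples; the first-order reaction, the data and the noise level are assumptions), nonlinear least squares can be applied directly to simulated concentration measurements:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def first_order(t, c0, k):
        # Concentration profile of A -> B with rate constant k.
        return c0 * np.exp(-k * t)

    # Synthetic "measured" concentrations with noise.
    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 10.0, 25)
    c_obs = first_order(t, c0=1.0, k=0.45) + rng.normal(0.0, 0.02, t.size)

    (c0_hat, k_hat), cov = curve_fit(first_order, t, c_obs, p0=[0.8, 0.1])
    k_se = float(np.sqrt(np.diag(cov))[1])
    print(f"k = {k_hat:.3f} +/- {k_se:.3f} (true value 0.45)")
    ```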

  11. Optimisation of electrical system for offshore wind farms via genetic algorithm

    DEFF Research Database (Denmark)

    Chen, Zhe; Zhao, Menghua; Blaabjerg, Frede

    2009-01-01

    An optimisation platform based on a genetic algorithm (GA) is presented, where the main components of a wind farm and key technical specifications are used as input parameters and the electrical system design of the wind farm is optimised in terms of both production cost and system reliability...

  12. Optimisation of load control

    Energy Technology Data Exchange (ETDEWEB)

    Koponen, P. [VTT Energy, Espoo (Finland)

    1998-08-01

    Electricity cannot be stored in large quantities. That is why the electricity supply and consumption are always almost equal in large power supply systems. If this balance were disturbed beyond stability, the system or a part of it would collapse until a new stable equilibrium is reached. The balance between supply and consumption is mainly maintained by controlling the power production, but also the electricity consumption or, in other words, the load is controlled. Controlling the load of the power supply system is important, if easily controllable power production capacity is limited. Temporary shortage of capacity causes high peaks in the energy price in the electricity market. Load control either reduces the electricity consumption during peak consumption and peak price or moves electricity consumption to some other time. The project Optimisation of Load Control is a part of the EDISON research program for distribution automation. The following areas were studied: optimization of space heating and ventilation when the electricity price is time variable; a load control model in power purchase optimization; optimization of direct load control sequences; the interaction between load control optimization and power purchase optimization; literature on load control, optimization methods and field tests; response models of direct load control; and the effects of the electricity market deregulation on load control. An overview of the main results is given in this chapter

  13. A conceptual optimisation strategy for radiography in a digital environment.

    Science.gov (United States)

    Båth, Magnus; Håkansson, Markus; Hansson, Jonny; Månsson, Lars Gunnar

    2005-01-01

    Using a completely digital environment for the entire imaging process leads to new possibilities for optimisation of radiography since many restrictions of screen/film systems, such as the small dynamic range and the lack of possibilities for image processing, do not apply any longer. However, at the same time these new possibilities lead to a more complicated optimisation process, since more freedom is given to alter parameters. This paper focuses on describing an optimisation strategy that concentrates on taking advantage of the conceptual differences between digital systems and screen/film systems. The strategy can be summarised as: (a) always include the anatomical background during the optimisation, (b) perform all comparisons at a constant effective dose and (c) separate the image display stage from the image collection stage. A three-step process is proposed where the optimal setting of the technique parameters is determined at first, followed by an optimisation of the image processing. In the final step, the optimal dose level, given the optimal settings of the image collection and image display stages, is determined.

  14. Optimising Microbial Growth with a Bench-Top Bioreactor

    Science.gov (United States)

    Baker, A. M. R.; Borin, S. L.; Chooi, K. P.; Huang, S. S.; Newgas, A. J. S.; Sodagar, D.; Ziegler, C. A.; Chan, G. H. T.; Walsh, K. A. P.

    2006-01-01

    The effects of impeller size, agitation and aeration on the rate of yeast growth were investigated using bench-top bioreactors. This exercise, carried out over a six-month period, served as an effective demonstration of the importance of different operating parameters on cell growth and provided a means of determining the optimisation conditions…

  15. Adaptive optimisation of a generalised beam shaping system

    DEFF Research Database (Denmark)

    Kenny, F.; Choi, F. S.; Glückstad, Jesper

    2015-01-01

    filter were generated by the SLM. This provided extra flexibility and control over the parameters of the system including the phase step magnitude, shape, radius and position of the filter. A feedback method for the on-line optimisation of these properties was also developed. Using feedback from images...

  16. Numerical optimisation of friction stir welding: review of future challenges

    DEFF Research Database (Denmark)

    Tutum, Cem Celal; Hattel, Jesper Henri

    2011-01-01

    During the last decade, the combination of increasingly more advanced numerical simulation software with high computational power has resulted in models for friction stir welding (FSW), which have improved the understanding of the determining physical phenomena behind the process substantially....... This has made optimisation of certain process parameters possible and has in turn led to better performing friction stir welded products, thus contributing to a general increase in the popularity of the process and its applications. However, most of these optimisation studies do not go well beyond manual...

  17. TEM turbulence optimisation in stellarators

    CERN Document Server

    Proll, J H E; Xanthopoulos, P; Lazerson, S A; Faber, B J

    2015-01-01

    With the advent of neoclassically optimised stellarators, optimising stellarators for turbulent transport is an important next step. The reduction of ion-temperature-gradient-driven turbulence has been achieved via shaping of the magnetic field, and the reduction of trapped-electron mode (TEM) turbulence is addressed in the present paper. Recent analytical and numerical findings suggest TEMs are stabilised when a large fraction of trapped particles experiences favourable bounce-averaged curvature. This is the case for example in Wendelstein 7-X [C.D. Beidler et al., Fusion Technology 17, 148 (1990)] and other Helias-type stellarators. Using this knowledge, a proxy function was designed to estimate the TEM dynamics, allowing optimal configurations for TEM stability to be determined with the STELLOPT [D.A. Spong et al., Nucl. Fusion 41, 711 (2001)] code without extensive turbulence simulations. A first proof-of-principle optimised equilibrium stemming from the TEM-dominated stella...

  18. Fabrication optimisation of carbon fiber electrode with Taguchi method.

    Science.gov (United States)

    Cheng, Ching-Ching; Young, Ming-Shing; Chuang, Chang-Lin; Chang, Ching-Chang

    2003-07-01

    In this study, we describe an optimised procedure for fabricating carbon fiber electrodes using the Taguchi quality engineering method (TQEM). The preliminary results show an S/N ratio improvement from 22 to 30 dB. The optimised parameters were tested using a glass micropipette (0.3 mm outer/2.5 mm inner length of carbon fiber) dipped into PBS solution under 2.9 V triangle-wave electrochemical processing for 15 s, followed by a coating treatment of the micropipette at 2.6 V DC for 45 s in 5% Nafion solution. It is thus shown that Taguchi process optimisation can improve the cost, manufacturing time and quality of carbon fiber electrodes.
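
    The signal-to-noise ratio behind the reported 22 to 30 dB improvement can be computed as below for a larger-the-better quality characteristic; the replicate measurements are made-up numbers, and a real Taguchi study would evaluate an orthogonal array of factor settings rather than two ad-hoc conditions.

    ```python
    import numpy as np

    def sn_larger_is_better(y):
        """Taguchi S/N ratio (dB) for a larger-the-better quality characteristic."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y ** 2))

    # Hypothetical replicate responses for two candidate fabrication settings.
    baseline = [11.0, 13.5, 12.2, 12.8]
    optimised = [30.5, 33.0, 31.2, 32.4]
    print(round(sn_larger_is_better(baseline), 1), round(sn_larger_is_better(optimised), 1))
    ```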

  19. Optimisation of sampling windows design for population pharmacokinetic experiments.

    Science.gov (United States)

    Ogungbenro, Kayode; Aarons, Leon

    2008-08-01

    This paper describes an approach for optimising sampling windows for population pharmacokinetic experiments. Sampling windows designs are more practical in late phase drug development where patients are enrolled in many centres and in out-patient clinic settings. Collection of samples under the uncontrolled environment at these centres at fixed times may be problematic and can result in uninformative data. Population pharmacokinetic sampling windows design provides an opportunity to control when samples are collected by allowing some flexibility and yet provide satisfactory parameter estimation. This approach uses information obtained from previous experiments about the model and parameter estimates to optimise sampling windows for population pharmacokinetic experiments within a space of admissible sampling windows sequences. The optimisation is based on a continuous design and in addition to sampling windows the structure of the population design in terms of the proportion of subjects in elementary designs, number of elementary designs in the population design and number of sampling windows per elementary design is also optimised. The results obtained showed that optimal sampling windows designs obtained using this approach are very efficient for estimating population PK parameters and provide greater flexibility in terms of when samples are collected. The results obtained also showed that the generalized equivalence theorem holds for this approach.

  20. Eyes Wide Open - Optimising Cosmological Surveys in a Crowded Market

    CERN Document Server

    Bassett, B A

    2004-01-01

    Optimising the major next-generation cosmological surveys (such as SNAP, KAOS etc...) is a key problem given our ignorance of the physics underlying cosmic acceleration and the plethora of surveys planned. We propose a Bayesian design framework which (1) maximises the discrimination power of a survey without assuming any underlying dark energy model, (2) finds the best niche survey geometry given current data and future competing experiments, (3) maximises the cross-section for serendipitous discoveries and (4) can be adapted to answer specific questions (such as 'is dark energy dynamical?'). Integrated Parameter Space Optimisation (IPSO) is a design framework that integrates projected parameter errors over an entire dark energy parameter space and then extremises a figure of merit (such as Shannon entropy gain which we show is stable to off-diagonal covariance matrix perturbations) as a function of survey parameters using analytical, grid or MCMC techniques. IPSO is thus a flexible, model-independent and scal...

  1. ATLAS software configuration and build tool optimisation

    CERN Document Server

    Rybkin, G; The ATLAS collaboration

    2014-01-01

    ATLAS software code base is over 6 million lines organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers and used by more than 2500 physicists from over 200 universities and laboratories in 6 continents. To meet the challenge of configuration and building of this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build time performance that was optimised within several approaches: reduction of the number of re...

  2. ATLAS software configuration and build tool optimisation

    CERN Document Server

    Rybkin, G; The ATLAS collaboration

    2013-01-01

    ATLAS software code base is over 6 million lines organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers and used by more than 2500 physicists from over 200 universities and laboratories in 6 continents. To meet the challenge of configuration and building of this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build time performance that was optimised within several approaches: reduction of the number of re...

  3. A code for optimising triplet layout

    CERN Document Server

    Van Riesen-Haupt, Leon; Abelleira, Jose; Cruz Alaniz, Emilia

    2017-01-01

    One of the main challenges when designing final focus systems of particle accelerators is maximising the beam stay-clear in the strong quadrupole magnets of the inner triplet. Moreover, it is desirable to keep the quadrupoles in the inner triplet as short as possible for space and cost reasons, but also to reduce chromaticity and simplify correction schemes. An algorithm that explores the triplet parameter space to optimise both these aspects was written. It uses thin lenses as a first approximation for a broad parameter scan and MADX for more precise calculations. The thin lens algorithm is significantly faster than a full scan using MADX and relatively precise at indicating the approximate area where the optimum solution lies.

  4. Cogeneration technologies, optimisation and implementation

    CERN Document Server

    Frangopoulos, Christos A

    2017-01-01

    Cogeneration refers to the use of a power station to deliver two or more useful forms of energy, for example, to generate electricity and heat at the same time. This book provides an integrated treatment of cogeneration, including a tour of the available technologies and their features, and how these systems can be analysed and optimised.

  5. Optimisation of solar synoptic observations

    Science.gov (United States)

    Klvaña, Miroslav; Sobotka, Michal; Švanda, Michal

    2012-09-01

    The development of instrumental and computer technologies is connected with steadily increasing needs for archiving of large data volumes. The current trend to meet this requirement includes data compression and the growth of storage capacities. This approach, however, has technical and practical limits. A further reduction of the archived data volume can be achieved by means of an optimisation of the archiving that consists of data selection without losing useful information. We describe a method of optimised archiving of solar images, based on the selection of images that contain new information. The new information content is evaluated by means of the analysis of changes detected in the images. We present characteristics of different kinds of image changes and divide them into fictitious changes with a disturbing effect and real changes that provide new information. In block diagrams describing the selection and archiving, we demonstrate the influence of clouds, the recording of images during an active event on the Sun, including a period before the event onset, and the archiving of the long-term history of solar activity. The described optimisation technique is not suitable for helioseismology, because it does not preserve a uniform time step in the archived sequence and removes the information about solar oscillations. In the case of long-term synoptic observations, the optimised archiving can save a large amount of storage capacity. The actual capacity saving will depend on the setting of the change-detection sensitivity and on the capability to exclude the fictitious changes.
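
    A very small sketch of the selection principle described above: a frame is archived only if it differs sufficiently from the last archived frame, so stretches with no new information are skipped. The difference metric and threshold are assumptions, and a real system would also have to reject fictitious changes such as clouds.

    ```python
    import numpy as np

    def select_for_archive(frames, threshold=0.05):
        """Keep a frame only if its mean absolute difference from the last kept frame exceeds threshold."""
        kept = [0]                                   # always archive the first frame
        reference = frames[0].astype(float)
        for i, frame in enumerate(frames[1:], start=1):
            change = np.mean(np.abs(frame.astype(float) - reference)) / 255.0
            if change > threshold:
                kept.append(i)
                reference = frame.astype(float)
        return kept

    # Synthetic 8-bit "solar images": mostly static, with an event brightening frame 5.
    rng = np.random.default_rng(7)
    frames = [np.full((64, 64), 120, dtype=np.uint8) + rng.integers(0, 3, (64, 64), dtype=np.uint8)
              for _ in range(10)]
    frames[5] = frames[5] + 40
    print(select_for_archive(frames))                # keeps the first frame plus the change and its reversal
    ```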

  6. For Time-Continuous Optimisation

    DEFF Research Database (Denmark)

    Heinrich, Mary Katherine; Ayres, Phil

    2016-01-01

    Strategies for optimisation in design normatively assume an artefact end-point, disallowing continuous architecture that engages living systems, dynamic behaviour, and complex systems. In our Flora Robotica investigations of symbiotic plant-robot bio-hybrids, we re- quire computational tools...

  7. For Time-Continuous Optimisation

    DEFF Research Database (Denmark)

    Heinrich, Mary Katherine; Ayres, Phil

    2016-01-01

    Strategies for optimisation in design normatively assume an artefact end-point, disallowing continuous architecture that engages living systems, dynamic behaviour, and complex systems. In our Flora Robotica investigations of symbiotic plant-robot bio-hybrids, we re- quire computational tools...

  8. PHYSICAL-MATHEMATICAL SCIENCE, MECHANICS: SIMULATION CHALLENGES IN OPTIMISING THEORETICAL METAL CUTTING TASKS

    Directory of Open Access Journals (Sweden)

    Rasul V. Guseynov

    2017-01-01

    Objectives: The article addresses problems in the optimisation of machining operations that provide end-unit production of the required quality at a minimum processing cost. Methods: The effectiveness of the experimental research was increased through the use of mathematical methods for planning experiments for optimising metal cutting tasks. The minimal processing cost model, in which the objective function is polynomial, is adopted as the criterion for the selection of optimal parameters. Results: Polynomial models of the influence of the angles φ, α and γ on the torque applied when cutting threads in various steels are constructed. Optimum values of the geometrical tool parameters were obtained using the criterion of minimum cutting forces during processing. Tools with these optimal geometric parameters are found to have high stability. It is shown that the use of experimental planning methods allows the optimisation of cutting parameters. In optimising solutions to metal cutting problems, it is found to be expedient to use multifactor experimental planning methods and to select the cutting force as the optimisation parameter when determining tool geometry. Conclusion: The joint use of geometric programming and experiment planning methods to optimise the cutting parameters significantly increases the efficiency of technological metal processing approaches.

  9. Optimising Antibiotic Usage to Treat Bacterial Infections

    Science.gov (United States)

    Paterson, Iona K.; Hoyle, Andy; Ochoa, Gabriela; Baker-Austin, Craig; Taylor, Nick G. H.

    2016-11-01

    The increase in antibiotic resistant bacteria poses a threat to the continued use of antibiotics to treat bacterial infections. The overuse and misuse of antibiotics has been identified as a significant driver in the emergence of resistance. Finding optimal treatment regimens is therefore critical in ensuring the prolonged effectiveness of these antibiotics. This study uses mathematical modelling to analyse the effect traditional treatment regimens have on the dynamics of a bacterial infection. Using a novel approach, a genetic algorithm, the study then identifies improved treatment regimens. Using a single antibiotic the genetic algorithm identifies regimens which minimise the amount of antibiotic used while maximising bacterial eradication. Although exact treatments are highly dependent on parameter values and initial bacterial load, a significant common trend is identified throughout the results. A treatment regimen consisting of a high initial dose followed by an extended tapering of doses is found to optimise the use of antibiotics. This consistently improves the success of eradicating infections, uses less antibiotic than traditional regimens and reduces the time to eradication. The use of genetic algorithms to optimise treatment regimens enables an extensive search of possible regimens, with previous regimens directing the search into regions of better performance.
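
    A toy version of the approach described above: a genetic algorithm searches over dosing regimens for a crude bacterial-growth-and-kill model, trading off the total antibiotic used against the surviving bacterial load. The pharmacodynamic model, the GA settings and the fitness weights are illustrative assumptions, not the study's model.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    N_DOSES, MAX_DOSE = 10, 5.0            # one dose per day for 10 days, arbitrary units

    def simulate_infection(doses, b0=1e6, growth=0.4, kill=0.9, decay=0.5, dt=0.1):
        """Crude model: bacteria grow, the antibiotic concentration kills them and decays."""
        b, c = b0, 0.0
        for day in range(N_DOSES):
            c += doses[day]
            for _ in range(int(1.0 / dt)):
                b = max(b + dt * (growth * b - kill * c * b), 0.0)
                c -= dt * decay * c
        return b

    def fitness(doses):
        # Lower is better: penalise surviving bacteria and total antibiotic used.
        return np.log10(simulate_infection(doses) + 1.0) + 0.2 * float(np.sum(doses))

    def genetic_algorithm(pop_size=40, generations=80, mut_sigma=0.3):
        pop = rng.uniform(0.0, MAX_DOSE, (pop_size, N_DOSES))
        for _ in range(generations):
            scores = np.array([fitness(ind) for ind in pop])
            new_pop = [pop[np.argmin(scores)].copy()]          # elitism: keep the best regimen
            while len(new_pop) < pop_size:
                # Tournament selection, uniform crossover, Gaussian mutation.
                cand_a = pop[rng.choice(pop_size, 2)]
                cand_b = pop[rng.choice(pop_size, 2)]
                p1 = cand_a[0] if fitness(cand_a[0]) < fitness(cand_a[1]) else cand_a[1]
                p2 = cand_b[0] if fitness(cand_b[0]) < fitness(cand_b[1]) else cand_b[1]
                mask = rng.random(N_DOSES) < 0.5
                child = np.where(mask, p1, p2) + rng.normal(0.0, mut_sigma, N_DOSES)
                new_pop.append(np.clip(child, 0.0, MAX_DOSE))
            pop = np.array(new_pop)
        return pop[np.argmin([fitness(ind) for ind in pop])]

    best = genetic_algorithm()
    print(best.round(2), round(fitness(best), 3))
    ```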

  10. Optimising Optimal Image Subtraction

    CERN Document Server

    Israel, H; Schuh, S; Israel, Holger; Hessman, Frederic V.; Schuh, Sonja

    2006-01-01

    Difference imaging is a technique for obtaining precise relative photometry of variable sources in crowded stellar fields and, as such, constitutes a crucial part of the data reduction pipeline in surveys for microlensing events or transiting extrasolar planets. The Optimal Image Subtraction (OIS) algorithm permits the accurate differencing of images by determining convolution kernels which, when applied to reference images of particularly good quality, provide excellent matches to the point-spread functions (PSF) in other images of the time series to be analysed. The convolution kernels are built as linear combinations of a set of basis functions, conventionally bivariate Gaussians modulated by polynomials. The kernel parameters must be supplied by the user and should ideally be matched to the PSF, pixel-sampling, and S/N of the data to be analysed. We have studied the outcome of the reduction as a function of the kernel parameters using our implementation of OIS within the TRIPP package. From the analysis o...
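
    A stripped-down illustration of the kernel model described above: the convolution kernel is written as a linear combination of Gaussian basis functions modulated by polynomials, and the coefficients are found by linear least squares so that the convolved reference matches the target frame. Spatially varying kernels, background terms and the TRIPP implementation details are omitted, and the sizes and parameters are assumptions.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def kernel_basis(half=7, sigmas=(1.0, 2.0), max_deg=2):
        """Gaussian basis functions modulated by polynomials u^i * v^j."""
        u, v = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
        basis = []
        for s in sigmas:
            g = np.exp(-(u ** 2 + v ** 2) / (2.0 * s ** 2))
            for i in range(max_deg + 1):
                for j in range(max_deg + 1 - i):
                    basis.append(g * u ** i * v ** j)
        return basis

    def fit_kernel(reference, target, basis):
        """Solve for coefficients a_n minimising || sum_n a_n (B_n * R) - T ||^2."""
        columns = [fftconvolve(reference, b, mode="same").ravel() for b in basis]
        A = np.stack(columns, axis=1)
        coeffs, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
        return sum(c * b for c, b in zip(coeffs, basis))

    # Synthetic test: the "target" is the reference blurred by a known, unit-sum PSF.
    rng = np.random.default_rng(5)
    reference = rng.normal(100.0, 5.0, (64, 64))
    r = np.arange(-7, 8)
    true_psf = np.exp(-(r[:, None] ** 2 + r[None, :] ** 2) / (2.0 * 1.5 ** 2))
    true_psf /= true_psf.sum()
    target = fftconvolve(reference, true_psf, mode="same")

    kernel = fit_kernel(reference, target, kernel_basis())
    print(round(float(kernel.sum()), 3))   # photometric scale of the fitted kernel, close to 1.0 here
    ```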

  11. Electrophysiological approach to determine kinetic parameters of sucrose uptake by single sieve elements or phloem parenchyma cells in intact Vicia faba plants

    Directory of Open Access Journals (Sweden)

    Jens B. Hafke

    2013-07-01

    Apart from a few using cut aphid stylets, no attempts have been made thus far to measure in vivo sucrose-uptake properties of sieve elements. We investigated the kinetics of sucrose uptake by single sieve elements and phloem parenchyma cells in Vicia faba plants. To this end, microelectrodes were inserted into free-lying phloem cells in the main vein of the youngest fully-expanded leaf, half-way along the stem, in the transition zone between the autotrophic and heterotrophic part of the stem, and in the root axis. A top-to-bottom membrane potential gradient of sieve elements was observed along the stem (-130 mV to -110 mV), while the membrane potential of the phloem parenchyma cells was stable (approx. -100 mV). In roots, the membrane potential of sieve elements dropped abruptly to -55 mV. Bathing solutions having various sucrose concentrations were administered and sucrose/H+-induced depolarisations were recorded. Data analysis by nonlinear least-square data fittings as well as by linear Eadie-Hofstee (EH) transformations pointed at biphasic Michaelis-Menten kinetics (2 MM, EH: Km1 1.2-1.8 mM, Km2 6.6-9.0 mM) of sucrose uptake by sieve elements. However, Akaike's Information Criterion (AIC) favoured single MM kinetics. Using single MM as the best-fitting model, Km values for sucrose uptake by sieve elements decreased along the plant axis from 1 to 7 mM. For phloem parenchyma cells, higher Km values (EH: Km1 10 mM, Km2 70 mM) as compared to sieve elements were found. In preliminary patch-clamp experiments with sieve-element protoplasts, small sucrose-coupled proton currents (-0.1 to -0.3 pA/pF) were detected in the whole-cell mode. In conclusion, (a) Km values for sucrose uptake measured by electrophysiology are similar to those obtained with heterologous systems, (b) electrophysiology provides a useful tool for in-situ determination of Km values, (c) as yet, it remains unclear if one or two uptake systems are involved in sucrose uptake by sieve
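
    The model comparison described above (single vs. biphasic Michaelis-Menten, ranked by Akaike's Information Criterion) can be reproduced in outline as follows; the synthetic depolarisation data and the least-squares AIC formula are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mm1(S, Vmax, Km):
        return Vmax * S / (Km + S)

    def mm2(S, V1, K1, V2, K2):
        return V1 * S / (K1 + S) + V2 * S / (K2 + S)

    def aic(y, y_fit, n_params):
        # AIC computed from the residual sum of squares of a least-squares fit.
        n = len(y)
        rss = float(np.sum((y - y_fit) ** 2))
        return n * np.log(rss / n) + 2 * n_params

    # Synthetic sucrose concentrations (mM) and induced depolarisations (mV).
    rng = np.random.default_rng(2)
    S = np.array([0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0, 25.0, 50.0])
    v = mm1(S, Vmax=25.0, Km=3.0) + rng.normal(0.0, 0.8, S.size)

    p1, _ = curve_fit(mm1, S, v, p0=[20.0, 2.0])
    p2, _ = curve_fit(mm2, S, v, p0=[10.0, 1.0, 10.0, 10.0], maxfev=10000)
    print("AIC, single MM  :", round(aic(v, mm1(S, *p1), 2), 2))
    print("AIC, biphasic MM:", round(aic(v, mm2(S, *p2), 4), 2))
    ```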

  12. Optimizing {sup 18}F-FDG PET/CT imaging of vessel wall inflammation: the impact of {sup 18}F-FDG circulation time, injected dose, uptake parameters, and fasting blood glucose levels

    Energy Technology Data Exchange (ETDEWEB)

    Bucerius, Jan [Icahn School of Medicine at Mount Sinai, Translational and Molecular Imaging Institute, One Gustave L. Levy Place, P.O. Box 1234, New York, NY (United States); Mount Sinai School of Medicine, Department of Radiology, New York, NY (United States); Maastricht University Medical Center, Department of Nuclear Medicine, Maastricht (Netherlands); Maastricht University Medical Center, Cardiovascular Research Institute Maastricht (CARIM), Maastricht (Netherlands); University Hospital, RWTH Aachen, Department of Nuclear Medicine, Aachen (Germany); Mani, Venkatesh; Fayad, Zahi A. [Icahn School of Medicine at Mount Sinai, Translational and Molecular Imaging Institute, One Gustave L. Levy Place, P.O. Box 1234, New York, NY (United States); Mount Sinai School of Medicine, Department of Radiology, New York, NY (United States); Mount Sinai School of Medicine, Department of Cardiology, Zena and Michael A. Weiner Cardiovascular Institute and Marie-Josee and Henry R. Kravis Cardiovascular Health Center, New York, NY (United States); Moncrieff, Colin [Icahn School of Medicine at Mount Sinai, Translational and Molecular Imaging Institute, One Gustave L. Levy Place, P.O. Box 1234, New York, NY (United States); Mount Sinai School of Medicine, Department of Radiology, New York, NY (United States); Machac, Josef [Mount Sinai School of Medicine, Division of Nuclear Medicine, Department of Radiology, New York, NY (United States); Fuster, Valentin [Mount Sinai School of Medicine, Department of Cardiology, Zena and Michael A. Weiner Cardiovascular Institute and Marie-Josee and Henry R. Kravis Cardiovascular Health Center, New York, NY (United States); The Centro Nacional de Investigaciones Cardiovasculares (CNIC), Madrid (Spain); Farkouh, Michael E. [Mount Sinai School of Medicine, Department of Cardiology, Zena and Michael A. Weiner Cardiovascular Institute and Marie-Josee and Henry R. Kravis Cardiovascular Health Center, New York, NY (United States); Mount Sinai School of Medicine, Cardiovascular Imaging Clinical Trials Unit, New York, NY (United States); Tawakol, Ahmed [Massachusetts General Hospital, Harvard University, Cardiac MR PET CT Program, Boston, MA (United States); Rudd, James H.F. [Cambridge University, Division of Cardiovascular Medicine, Cambridge (United Kingdom)

    2014-02-15

    {sup 18}F-FDG PET is increasingly used for imaging of vessel wall inflammation. However, limited data are available on the impact of methodological variables, i.e. prescan fasting glucose, FDG circulation time and injected FDG dose, and of different FDG uptake parameters, in vascular FDG PET imaging. Included in the study were 195 patients who underwent vascular FDG PET/CT of the aorta and the carotids. Arterial standardized uptake values ({sub mean}SUV{sub max}), target-to-background ratios ({sub mean}TBR{sub max}) and FDG blood-pool activity in the superior vena cava (SVC) and the jugular veins (JV) were quantified. Vascular FDG uptake values classified according to the tertiles of prescan fasting glucose levels, the FDG circulation time, and the injected FDG dose were compared using ANOVA. Multivariate regression analyses were performed to identify the potential impact of all variables described on the arterial and blood-pool FDG uptake. Tertile analyses revealed FDG circulation times of about 2.5 h and prescan glucose levels of less than 7.0 mmol/l, showing a favorable relationship between arterial and blood-pool FDG uptake. FDG circulation times showed negative associations with aortic{sub mean}SUV{sub max} values as well as SVC and JV FDG blood-pool activity, but positive correlations with aortic and carotid{sub mean}TBR{sub max} values. Prescan glucose levels were negatively associated with aortic and carotid{sub mean}TBR{sub max} and carotid{sub mean}SUV{sub max} values, but were positively correlated with SVC blood-pool uptake. The injected FDG dose failed to show any significant association with vascular FDG uptake. FDG circulation times and prescan blood glucose levels significantly affect FDG uptake in the aortic and carotid walls and may bias the results of image interpretation in patients undergoing vascular FDG PET/CT. The injected FDG dose was less critical. Therefore, circulation times of about 2.5 h and prescan glucose levels less than 7.0 mmol

  13. Wear/comfort Pareto optimisation of bogie suspension

    Science.gov (United States)

    Milad Mousavi Bideleh, Seyed; Berbyuk, Viktor; Persson, Rickard

    2016-08-01

    Pareto optimisation of bogie suspension components is considered for a 50 degrees of freedom railway vehicle model to reduce wheel/rail contact wear and improve passenger ride comfort. Several operational scenarios including tracks with different curve radii ranging from very small radii up to straight tracks are considered for the analysis. In each case, the maximum admissible speed is applied to the vehicle. Design parameters are categorised into two levels and the wear/comfort Pareto optimisation is accordingly accomplished in a multistep manner to improve the computational efficiency. The genetic algorithm (GA) is employed to perform the multi-objective optimisation. Two suspension system configurations are considered, a symmetric and an asymmetric in which the primary or secondary suspension elements on the right- and left-hand sides of the vehicle are not the same. It is shown that the vehicle performance on curves can be significantly improved using the asymmetric suspension configuration. The Pareto-optimised values of the design parameters achieved here guarantee wear reduction and comfort improvement for railway vehicles and can also be utilised in developing the reference vehicle models for design of bogie active suspension systems.
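    The core operation in any wear/comfort Pareto optimisation is the non-dominance test that extracts the trade-off front from a set of evaluated designs. A minimal sketch is given below with placeholder objective values; it illustrates the concept only and is not the multistep GA procedure of the study.

```python
import numpy as np

# Each row holds [wear index, ride-discomfort index] for one candidate bogie
# suspension design; both objectives are to be minimised (values illustrative).
F = np.array([[0.90, 0.40], [0.50, 0.70], [0.60, 0.50],
              [0.40, 0.90], [0.70, 0.30], [0.55, 0.55]])

def pareto_mask(F):
    """Boolean mask of the non-dominated designs (minimisation of all columns)."""
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        # design i is dominated if some design is no worse in every objective
        # and strictly better in at least one
        dominated = np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))
        keep[i] = not dominated
    return keep

print(F[pareto_mask(F)])    # the wear/comfort Pareto front among the candidates
```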

  14. Hierarchical optimisation on scissor seat suspension characteristic and structure

    Science.gov (United States)

    Wang, Chunlei; Zhang, Xinjie; Guo, Konghui; Lv, Jiming; Yang, Yi

    2016-11-01

    Scissor seat suspensions are widely used to attenuate cab vibrations in commercial vehicles, but their design generally requires a trade-off between seat acceleration and suspension travel, which creates a typical optimisation problem. A further complication is that the optimal dynamics parameters are difficult to obtain quickly and unequivocally. Hence, a hierarchical optimisation of the scissor seat suspension characteristic and structure is proposed, providing a top-down methodology that yields globally optimal, fast-converging solutions to these conflicting design requirements. In detail, a characteristic-oriented non-parametric dynamics model of the scissor seat suspension is first formulated from databases, describing its vertical dynamics accurately. The ideal vertical stiffness-damping characteristic is then cascaded via the characteristic-oriented model, and the structure parameters are optimised using a structure-oriented multi-body dynamics model of the scissor seat suspension. Finally, the seat effective amplitude transmissibility factor, the suspension travel and the CPU time for solving are evaluated. The results show that both the seat suspension performance and the convergence speed towards the globally optimal solutions are improved. The proposed hierarchical optimisation methodology for the characteristic and structure of the scissor seat suspension is therefore promising for its virtual development.

  15. Modelling and Optimising TinyTP over IrDA Stacks

    Directory of Open Access Journals (Sweden)

    Boucouvalas A. C.

    2005-01-01

    Full Text Available TinyTP is the IrDA transport layer protocol for indoor infrared communications. For the first time, this paper presents a mathematical model for TinyTP over the IrDA protocol stacks taking into account the presence of bit errors. Based on this model, we carry out a comprehensive optimisation study to improve system performance at the transport layer. Four major parameters are optimised for maximum throughput including TinyTP receiver window, IrLAP window and frame size, as well as IrLAP turnaround time. Equations are derived for the optimum IrLAP window and frame sizes. Numerical results show that the system throughput is significantly improved by implementing the optimised parameters. The major contribution of this work is the modelling of TinyTP including the low-layer protocols and optimisation of the overall throughput by appropriate parameter selection.

  16. Optimisation of beam-orientations in conformal radiotherapy treatment planning

    Energy Technology Data Exchange (ETDEWEB)

    Rowbottom, C.G

    1999-07-01

    Many synergistic advances have led to the beginnings of the routine use of conformal radiotherapy. These include advances in diagnostic imaging, in 3D treatment planning, in the technology for complex treatment delivery and in computer assessment of rival treatment plans. A conformal radiotherapy treatment plan more closely conforms the high-dose volume to the target volume, reducing the dose to normal healthy tissue. Traditionally, human planners have devised the treatment parameters used in radiotherapy treatment plans via a manually iterative process. Computer 'optimisation' algorithms have been shown to improve treatment plans as they can explore much more of the search space in a relatively short time. This thesis examines beam-orientation computer 'optimisation' in radiotherapy treatment planning and several new techniques were developed. Using these techniques a comparison was performed between treatment plans with 'standard', fixed beam-orientations and treatment plans with 'optimised' beam-orientations for patients with cancer of the prostate, oesophagus and brain. Plans were compared on the basis of dose-distributions and in some cases biological models for the probability of damage to the target volume and the major organs-at-risk (OARs) in each patient group. A cohort of patients was considered in each group to avoid bias from a specific patient geometry. In the case of the patient cohort with cancer of the prostate, a coplanar beam-orientation 'optimisation' scheme led to an average increase in the TCP of (5.7{+-}1.4)% compared to the standard plans after the dose to the isocentre had been scaled to produce a rectal NTCP of 1%. For the patient cohort with cancer of the oesophagus, the beam-orientation 'optimisation' scheme reduced the average lung NTCP by (0.7{+-}0.2)% at the expense of a modest increase in the average spinal cord NTCP of (0.1{+-}0.2)%. A non-coplanar beam-orientation 'optimisation

  17. Niobium Cavity Electropolishing Modelling and Optimisation

    CERN Document Server

    Ferreira, L M A; Forel, S; Shirra, J A

    2013-01-01

    It’s widely accepted that electropolishing (EP) is the most suitable surface finishing process to achieve high performance bulk Nb accelerating cavities. At CERN, in preparation for the processing of the 704 MHz high-beta Superconducting Proton Linac (SPL) cavities, a new vertical electropolishing facility has been assembled and a study is ongoing on the modelling of electropolishing of cavities with COMSOL® software. In a first phase, the electrochemical parameters were taken into account for a fixed process temperature and flow rate; these are presented in this poster together with the results obtained on a real SPL single-cell cavity. The procedure to acquire the data used as input for the simulation is presented. The modelling procedure adopted to optimise the cathode geometry, aimed at a uniform current density distribution in the cavity cell for the minimum working potential and total current, is explained. Some preliminary results on fluid dynamics are also briefly described.

  18. Optimisation of Multilayer Insulation an Engineering Approach

    CERN Document Server

    Chorowski, M; Parente, C; Riddone, G

    2001-01-01

    A mathematical model has been developed to describe the heat flux through multilayer insulation (MLI). The total heat flux between the layers is the result of three distinct heat transfer modes: radiation, residual gas conduction and solid spacer conduction. The model describes the MLI behaviour using a layer-to-layer approach and is based on an electrical analogy, in which the three heat transfer modes are treated as parallel thermal impedances. The contribution of each transfer mode varies from layer to layer, although the total heat flux remains constant across the whole MLI blanket. The model enables the optimisation of the insulation with regard to different MLI parameters, such as residual gas pressure, number of layers and boundary temperatures. The model has been tested against experimental measurements carried out at CERN and the results proved to be in good agreement, especially for insulation vacuum between 10^-5 Pa and 10^-3 Pa.
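    A numerical sketch of the layer-to-layer electrical analogy described above: for each gap the radiative, residual-gas and spacer-conduction contributions are summed as parallel paths, and the intermediate layer temperatures are solved for so that the flux is identical across every gap. All coefficients below are illustrative assumptions, not the values fitted to the CERN measurements.

```python
import numpy as np
from scipy.optimize import fsolve

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
EPS = 0.05           # effective layer emissivity (assumed)
K_GAS = 1e-3         # residual-gas conduction coefficient, W m^-2 K^-1 (assumed)
K_SPACER = 5e-4      # spacer conduction coefficient, W m^-2 K^-1 (assumed)

def gap_flux(Th, Tc):
    """Heat flux across one gap: radiation, residual-gas conduction and spacer
    conduction acting in parallel between two adjacent layers."""
    e_eff = EPS / (2.0 - EPS)                 # two parallel grey surfaces
    return (e_eff * SIGMA * (Th**4 - Tc**4)
            + K_GAS * (Th - Tc)
            + K_SPACER * (Th - Tc))

def mli_flux(n_layers=30, T_warm=300.0, T_cold=77.0):
    # unknowns: the temperatures of the n interior layers
    def residuals(T_inner):
        T = np.concatenate(([T_warm], T_inner, [T_cold]))
        q = gap_flux(T[:-1], T[1:])
        return q[:-1] - q[1:]                 # flux equal across every gap
    T0 = np.linspace(T_warm, T_cold, n_layers + 2)[1:-1]
    T_inner = fsolve(residuals, T0)
    return gap_flux(T_warm, T_inner[0])

print(f"steady-state heat flux through the blanket: {mli_flux():.3f} W/m^2")
```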

  19. Common mode chokes and optimisation aspects

    Science.gov (United States)

    Kut, T.; Lücken, A.; Dickmann, S.; Schulz, D.

    2014-11-01

    Due to the increasing electrification of modern aircraft, as a result of the More Electric Aircraft concept, new strategies and approaches are required to fulfil the strict EMC aircraft standards (DO-160/ED-14-Sec. 20). Common mode chokes are a key component of electromagnetic filters and are often oversized because of the unknown impedance of the surrounding power electronic system. This oversizing results in an increase of weight and volume, which has to be avoided as far as possible for mobile applications. In this context, an advanced method is presented to measure these impedances under operating conditions. Furthermore, the different parameters of the inductance design are explained and an optimisation for weight and volume is introduced.

  20. Optimising Code Generation with haggies

    OpenAIRE

    Reiter, Thomas

    2009-01-01

    This article describes haggies, a program for the generation of optimised programs for the efficient numerical evaluation of mathematical expressions. It uses a multivariate Horner scheme and Common Subexpression Elimination to reduce the overall number of operations. The package can serve as a back-end for virtually any general purpose computer algebra program. Built-in type inference allows it to deal with non-standard data types in strongly typed languages, and a very flexible, pattern-ba...
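    A small illustration of the multivariate Horner idea exploited by haggies (not its actual output): factoring the variables out of a polynomial reduces the multiplication count and exposes common subexpressions such as x*y.

```python
import math

# Naive form of p(x, y) = 3*x^2*y + 5*x^2*y^2 + 2*x*y: 9 multiplications.
def p_naive(x, y):
    return 3*x*x*y + 5*x*x*y*y + 2*x*y

# Multivariate Horner form p = x*y*(2 + x*(3 + 5*y)): 4 multiplications,
# and the common subexpression x*y appears only once.
def p_horner(x, y):
    return x * y * (2 + x * (3 + 5 * y))

assert math.isclose(p_naive(1.7, -2.3), p_horner(1.7, -2.3))
```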

  1. Geometrical optimisation of a biochip microchannel fluidic separator.

    Science.gov (United States)

    Xue, Xiangdong; Patel, Mayur K; Bailey, Chris; Desmulliez, Marc P Y

    2012-01-01

    This article reports on the geometric optimisation of a T-shaped biochip microchannel fluidic separator, aiming to maximise the separation efficiency of plasma from blood by improving the unbalanced separation performance among different channel bifurcations. For this purpose, an algebraic analysis is first implemented to identify the key parameters affecting fluid separation. A numerical optimisation is then carried out to tune these key parameters for improved separation performance of the biochip. Three parameters, the interval length between bifurcations, the main channel length from the outlet to the bifurcation region and the side channel geometry, are identified as the key characteristic sizes and defined as optimisation variables. A balanced flow rate ratio between the main and side channels, which is an indication of separation effectiveness, is defined as the objective. It is found that the degradation of the separation performance is caused by the unbalanced channel resistance ratio between the main and side channel routes from the bifurcations to the outlets. The effects of the three key parameters can be summarised as follows: (a) shortening the interval length between bifurcations moderately reduces the differences in the flow rate ratios; (b) extending the length of the main channel from the main outlet is effective for achieving a uniform flow rate ratio but ineffective in changing the velocity difference of the side channels and (c) decreasing the lengths of side channels from upstream to downstream is effective both for obtaining a uniform flow rate ratio and for reducing the differences in the flow velocities between the side branch channels. An optimisation process combining the three parameters is suggested, as this integrated approach leads to a fast convergent process and also offers flexible design options for satisfying different requirements.
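    The resistance-ratio argument above can be sketched with a lumped hydraulic model in which every channel segment acts as a resistor and the flow split at each bifurcation follows from Kirchhoff-type balance equations. The geometry, viscosity and pressure values below are illustrative assumptions, not the optimised design of the article.

```python
import numpy as np

def channel_resistance(length, width, height, mu=3.5e-3):
    """Hydraulic resistance of a rectangular microchannel (low-aspect-ratio
    approximation, height <= width); SI units, mu = blood viscosity (assumed)."""
    return 12.0 * mu * length / (width * height**3 * (1.0 - 0.63 * height / width))

# Illustrative geometry (metres): a main channel cut into three segments by two
# bifurcations, each bifurcation feeding a narrower plasma side channel.
R_main = [channel_resistance(2e-3, 100e-6, 50e-6),   # inlet to bifurcation 1
          channel_resistance(1e-3, 100e-6, 50e-6),   # bifurcation 1 to bifurcation 2
          channel_resistance(6e-3, 100e-6, 50e-6)]   # bifurcation 2 to main outlet
R_side = [channel_resistance(5e-3, 50e-6, 30e-6),    # side channel at bifurcation 1
          channel_resistance(4e-3, 50e-6, 30e-6)]    # side channel at bifurcation 2

# Kirchhoff current balance at the two bifurcation nodes (pressures p1, p2);
# inlet pressure 1 kPa, all outlets at the reference pressure 0.
p_in = 1000.0
G = lambda R: 1.0 / R
A = np.array([[G(R_main[0]) + G(R_main[1]) + G(R_side[0]), -G(R_main[1])],
              [-G(R_main[1]), G(R_main[1]) + G(R_main[2]) + G(R_side[1])]])
b = np.array([G(R_main[0]) * p_in, 0.0])
p1, p2 = np.linalg.solve(A, b)

Q_side = [p1 / R_side[0], p2 / R_side[1]]          # flow into each side channel
Q_main_out = p2 / R_main[2]                        # flow leaving the main outlet
print("side/main flow-rate ratios:", [round(q / Q_main_out, 3) for q in Q_side])
```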

  2. Optimisation of Kinematics for Tracked Vehicle Hydro Gas Suspension System

    Directory of Open Access Journals (Sweden)

    S. Sridhar

    2006-11-01

    Full Text Available The modern-day armoured fighting vehicles (AFVs) are basically tracked vehicles equipped with hydro gas suspensions, in lieu of conventional mechanical suspensions like torsion bar and coil spring bogie suspensions. The uniqueness of hydro gas suspension is that it offers a nonlinear spring rate, which is very much required for the cross-country moveability of a tracked vehicle. The AFVs have to negotiate different cross-country terrains like sandy, rocky, riverbed, etc. and the road irregularities provide enumerable problems during dynamic loadings to the design of hydro gas suspension system. Optimising various design parameters demands innovative design methodologies to achieve better ride performance. Hence, a comprehensive kinematic analysis is needed. In this study, a methodology has been derived to optimise the kinematics of the suspension by reorienting the cylinder axis and optimising the load-transferring leverage factor so that the side thrust on the cylinder is minimised to a greater extent. The optimisation ultimately increases the life of the high-pressure and high-temperature piston seals, resulting in enhanced system life for better dependability.

  3. ON ACO-BASED ENERGY-AWARE ROUTING PROTOCOL FOR MOBILE AD HOC NETWORKS AND ITS PARAMETERS OPTIMISATION

    Institute of Scientific and Technical Information of China (English)

    任敬安; 涂亚庆; 张敏; 蒋银华

    2012-01-01

    Ant colony optimisation (ACO) is a simulated evolutionary algorithm characterised by positive feedback, distributed computation and multi-agent synergy, and it shows many advantages in solving complicated optimisation problems. This paper puts forward an ACO-based energy-aware routing protocol (ABEAR) for mobile Ad Hoc networks. ABEAR sends out artificial ants to find paths to the destination node reactively and selects the next hop for forwarding data packets based on the pheromone density, the node energy and the link usage, so that paths whose channels are heavily occupied are avoided as far as possible; idle nodes can also be switched to a sleeping state to conserve energy according to their communication activity. Since the choice of the ACO parameters has a strong influence on the performance of the algorithm, the influence of three key parameters (the pheromone evaporation coefficient ρ, the pheromone weight factor α, and the weight factor β of the residual energy and link congestion metric) on ABEAR is analysed, simulation experiments are carried out on the NS2 platform, the effects of parameter optimisation are compared, and concrete steps for setting the parameter values are summarised.
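    A sketch of the kind of next-hop selection rule described above, with pheromone weighted by α and a heuristic term built from residual energy and link congestion weighted by β; the exact ABEAR formula is not given in the abstract, so this particular combination is an assumption.

```python
import random

def choose_next_hop(neighbours, alpha=1.0, beta=2.0):
    """neighbours maps node id -> (pheromone, residual_energy, congestion),
    with energy and congestion normalised to [0, 1]."""
    weights = {}
    for node, (tau, energy, congestion) in neighbours.items():
        eta = energy * (1.0 - congestion)    # prefer fresh nodes on quiet links
        weights[node] = (tau ** alpha) * (eta ** beta)
    total = sum(weights.values())
    r, acc = random.uniform(0.0, total), 0.0
    for node, w in weights.items():          # roulette-wheel selection
        acc += w
        if acc >= r:
            return node
    return node                              # numerical safety fallback

print(choose_next_hop({"B": (0.8, 0.9, 0.2),
                       "C": (0.5, 0.6, 0.7),
                       "D": (0.3, 0.95, 0.1)}))
```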

  4. An Optimisation Approach for Room Acoustics Design

    DEFF Research Database (Denmark)

    Holm-Jørgensen, Kristian; Kirkegaard, Poul Henning; Andersen, Lars

    2005-01-01

    This paper discusses, on a conceptual level, the value of optimisation techniques in architectural acoustics room design from a practical point of view. It is chosen to optimise one objective room acoustics design criterion estimated from the sound field inside the room. The sound field is modeled...... using the boundary element method where absorption is incorporated. An example is given where the geometry of a room is defined by four design modes. The room geometry is optimised to get a uniform sound pressure....

  5. Optimisation combinatoire Theorie et algorithmes

    CERN Document Server

    Korte, Bernhard; Fonlupt, Jean

    2010-01-01

    This book is the French translation of the fourth and latest edition of Combinatorial Optimization: Theory and Algorithms, written by two eminent specialists in the field: Bernhard Korte and Jens Vygen of the University of Bonn in Germany. It emphasises the theoretical aspects of combinatorial optimisation as well as efficient and exact problem-solving algorithms, which distinguishes it from the simpler heuristic approaches often described elsewhere. The book contains numerous concise and elegant proofs of difficult results. Intended for stude...

  6. Optimisation of brain SPET and portability of normal databases

    Energy Technology Data Exchange (ETDEWEB)

    Barnden, Leighton R.; Behin-Ain, Setayesh; Goble, Elizabeth A. [The Queen Elizabeth Hospital, Adelaide (Australia); Hatton, Rochelle L.; Hutton, Brian F. [Westmead Hospital, Sydney (Australia)

    2004-03-01

    Use of a normal database in quantitative regional analysis of brain single-photon emission tomography (SPET) facilitates the detection of functional defects in individual or group studies by accounting for inter-subject variability. Different reconstruction methods and suboptimal attenuation and scatter correction methods can introduce additional variance that will adversely affect such analysis. Similarly, processing differences across different instruments and/or institutions may invalidate the use of external normal databases. The object of this study was to minimise additional variance by comparing reconstructions of a physical phantom with its numerical template so as to optimise processing parameters. Age- and gender-matched normal scans acquired on two different systems were compared using SPM99 after processing with both standard and optimised parameters. For three SPET systems we have optimised parameters for attenuation correction, lower window scatter subtraction, reconstructed pixel size and fanbeam focal length for both filtered back-projection (FBP) and iterative (OSEM) reconstruction. Both attenuation and scatter correction improved accuracy for all systems. For single-iteration Chang attenuation correction the optimum attenuation coefficient (mu) was 0.45-0.85 of the narrow beam value (Nmu) before, and 0.75-0.85 Nmu after, scatter subtraction. For accurately modelled OSEM attenuation correction, optimum mu was 0.6-0.9 Nmu before and 0.9-1.1 Nmu after scatter subtraction. FBP appeared to change in-plane voxel dimensions by about 2% and this was confirmed by line phantom measurements. Improvement in accuracy with scatter subtraction was most marked for the highest spatial resolution system. Optimised processing reduced but did not remove highly significant regional differences between normal databases acquired on two different SPET systems. (orig.)

  7. Changes in kinetic parameters of phenanthrene uptake by wheat roots under different conditions

    Institute of Scientific and Technical Information of China (English)

    陆守昆; 杨青青; 王红菊; 李金凤; 沈羽; 占新华

    2016-01-01

    Polycyclic aromatic hydrocarbons (PAHs) are ubiquitous contaminants in the environment. Dietary intake of plant-based foods has become a major contribution to the total human exposure to PAHs. Because of the well-documented carcinogenicity, mutagenicity and toxicity of PAHs to humans, it is of great importance to establish a model of PAH uptake by crop roots for health risk assessments of human exposure to PAHs. However, little information is available regarding such models of PAH uptake by crop roots including active transport. Here, hydroponic experiments were employed to investigate the kinetic parameters of phenanthrene uptake by wheat roots under different temperatures, pH values, and wheat seedling ages. In the temperature range of 15~30 ℃, Km and Vmax increased with an increase in temperature, and the relationship between Km or Vmax and temperature was well fitted by an exponential function. When the temperature was higher than 35 ℃, Km and Vmax decreased. At pH 3.00~8.00, the affinity of phenanthrene to wheat root carrier proteins decreased with increasing pH, while Vmax was greatest at pH 5.50. The affinity of phenanthrene to wheat root carriers increased with seedling growth, but Vmax showed the opposite trend. There was no significant difference in uptake kinetics between in vivo and in vitro wheat roots. Therefore, it was concluded that temperature, pH and seedling age have remarkable effects on the kinetic parameters of phenanthrene uptake by wheat roots. Our results provide insights into the biochemical mechanisms underlying PAH uptake by plant roots and data for establishing a model of PAH uptake by crop roots.

  8. Optimising resource management in neurorehabilitation.

    Science.gov (United States)

    Wood, Richard M; Griffiths, Jeff D; Williams, Janet E; Brouwers, Jakko

    2014-01-01

    To date, little research has been published regarding the effective and efficient management of resources (beds and staff) in neurorehabilitation, despite being an expensive service in limited supply. To demonstrate how mathematical modelling can be used to optimise service delivery, by way of a case study at a major 21 bed neurorehabilitation unit in the UK. An automated computer program for assigning weekly treatment sessions is developed. Queue modelling is used to construct a mathematical model of the hospital in terms of referral submissions to a waiting list, admission and treatment, and ultimately discharge. This is used to analyse the impact of hypothetical strategic decisions on a variety of performance measures and costs. The project culminates in a hybridised model of these two approaches, since a relationship is found between the number of therapy hours received each week (scheduling output) and length of stay (queuing model input). The introduction of the treatment scheduling program has substantially improved timetable quality (meaning a better and fairer service to patients) and has reduced employee time expended in its creation by approximately six hours each week (freeing up time for clinical work). The queuing model has been used to assess the effect of potential strategies, such as increasing the number of beds or employing more therapists. The use of mathematical modelling has not only optimised resources in the short term, but has allowed the optimality of longer term strategic decisions to be assessed.
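    The flavour of the queueing analysis can be illustrated with a standard M/M/c (Erlang C) model of the bed stock, in which referrals arrive at rate λ and the mean length of stay is 1/μ. The numbers below are illustrative, not those of the 21-bed unit in the case study.

```python
import math

def erlang_c_wait_prob(lam, mu, c):
    """Probability that an arriving patient must wait for a bed (M/M/c queue)."""
    a = lam / mu                      # offered load (average number of occupied beds)
    rho = a / c
    if rho >= 1.0:                    # unstable: the waiting list grows without bound
        return 1.0
    summation = sum(a**k / math.factorial(k) for k in range(c))
    tail = a**c / (math.factorial(c) * (1.0 - rho))
    return tail / (summation + tail)

lam = 0.5          # referrals per day (assumed)
mu = 1.0 / 35.0    # reciprocal of the mean length of stay in days (assumed)
for beds in (19, 21, 23):
    p_wait = erlang_c_wait_prob(lam, mu, beds)
    mean_wait = p_wait / (beds * mu - lam)       # mean wait for admission, days
    print(f"{beds} beds: P(wait) = {p_wait:.2f}, mean wait = {mean_wait:.1f} days")
```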

  9. Dose optimisation in single plane interstitial brachytherapy

    DEFF Research Database (Denmark)

    Tanderup, Kari; Hellebust, Taran Paulsen; Honoré, Henriette Benedicte;

    2006-01-01

    BACKGROUND AND PURPOSE: Brachytherapy dose distributions can be optimised by modulation of source dwell times. In this study dose optimisation in single planar interstitial implants was evaluated in order to quantify the potential benefit in patients. MATERIAL AND METHODS: In 14...

  10. Self-optimising control of sewer systems

    DEFF Research Database (Denmark)

    Mauricio Iglesias, Miguel; Montero-Castro, Ignacio; Mollerup, Ane Loft;

    The design of sewer system control is a complex task given the large size of the sewer networks, the transient dynamics of the water flows and the stochastic nature of rainfall. This contribution presents a generic methodology for the design of a self-optimising controller in sewer systems...... to design an optimising control strategy for a subcatchment area in Copenhagen.

  11. An Optimisation Approach for Room Acoustics Design

    DEFF Research Database (Denmark)

    Holm-Jørgensen, Kristian; Kirkegaard, Poul Henning; Andersen, Lars

    2005-01-01

    This paper discusses, on a conceptual level, the value of optimisation techniques in architectural acoustics room design from a practical point of view. It is chosen to optimise one objective room acoustics design criterion estimated from the sound field inside the room. The sound field is modeled...

  12. Haemodynamic optimisation in lower limb arterial surgery

    DEFF Research Database (Denmark)

    Bisgaard, J; Gilsaa, T; Rønholm, E;

    2012-01-01

    index was optimised by administering 250 ml aliquots of colloid intraoperatively and during the first 6 h post-operatively. Following surgery, fluid optimisation was supplemented with dobutamine, if necessary, targeting an oxygen delivery index level ≥ 600 ml/min/m(2) in the intervention group...

  13. APPLICATION OF THE MULTIRESPONSE OPTIMISATION SIMPLEX METHOD TO THE BIODIESEL - B100 OBTAINING PROCESS

    Directory of Open Access Journals (Sweden)

    Julyane Karolyne Teixeira da Costa

    2016-03-01

    Full Text Available The process of obtaining B100 biodiesel from vegetable oil and animal fat mixtures by transesterification under basic conditions was optimised using the super-modified simplex method. For simultaneous optimisation, yield, cost, oxidative stability and Cold Filter Plugging Point (CFPP) were used as responses, and the limits were established according to the experimental data and the conformity parameters established by legislation. Based on the predictive equations obtained from the simplex-centroid design-coupled functions, the multi-response optimisation showed an optimal formulation containing 38.34 % soybean oil, 21.90 % beef tallow and 39.25 % poultry fat. The validation showed that there are no significant differences between the predicted and experimental values. The simplex-centroid mixture design and simplex optimisation methods were effective tools in obtaining biodiesel B100, using a mixture of different raw materials.

  14. Optimization of the scintillation parameters of the lead tungstate crystals for their application in high precision electromagnetic calorimetry; Optimisation des parametres de scintillation des cristaux de tungstate de plomb pour leur application dans la calorimetrie electromagnetique de haute precision

    Energy Technology Data Exchange (ETDEWEB)

    Drobychev, G

    2000-04-12

    In the framework of this dissertation, the scintillation properties of lead tungstate (PWO) crystals and the possibilities for their use were studied with a view to their application in electromagnetic calorimetry under the extreme radiation environment of new colliders. The results of this work can be summarised as follows. 1. A model of the origin of scintillation in lead tungstate crystals was developed, including the processes that influence the crystals' radiation hardness and the presence of slow scintillation components. 2. The influence of changes in the PWO scintillation properties on the parameters of the electromagnetic calorimeter was analysed. 3. Methods were studied for collecting light from large scintillation elements of complex shape made of a birefringent scintillation crystal with a high refractive index and low light yield, when the signal is registered by a photodetector whose sensitive surface is small compared with the output face of the scintillator. 4. Physical principles were developed for the methodology of certifying scintillation crystals during their mass production prior to their installation into an electromagnetic calorimeter. Correlations were found between the results of measurements of the PWO crystal parameters by different methods. (author)

  15. Comparison of cisplatin sensitivity and the 18F fluoro-2-deoxy 2 glucose uptake with proliferation parameters and gene expression in squamous cell carcinoma cell lines of the head and neck

    Directory of Open Access Journals (Sweden)

    Ohlsson Tomas

    2009-02-01

    Full Text Available Abstract Background The survival of patients with locally advanced head and neck cancer is still poor, with 5-year survival rates of 24–35%. The identification of prognostic and predictive markers at the molecular and cellular level could make it possible to find new therapeutic targets and provide "tailor made" treatments. Established cell lines of human squamous cell carcinoma (HNSCC) are valuable models for identifying such markers. The aim of this study was to establish and characterize a series of cell lines and to compare the cisplatin sensitivity and 18F-fluoro-2-deoxy-2-glucose (18F-FDG) uptake of these cell lines with other cellular characteristics, such as proliferation parameters and TP53 and CCND1 status. Methods Explant cultures of fresh tumour tissue were cultivated, and six new permanent cell lines were established from 18 HNSCC cases. Successfully grown cell lines were analysed regarding clinical parameters, histological grade, karyotype, DNA ploidy and index, and S-phase fraction (Spf). The cell lines were further characterized with regard to their uptake of 18F-FDG, their sensitivity to cisplatin, as measured by a viability test (crystal violet), and their TP53 and CCND1 status, by fluorescence in situ hybridization (FISH), polymerase chain reaction single-strand conformation polymorphism (PCR-SSCP) with DNA sequencing and, for cyclin D1, by immunohistochemistry. Results Patients with tumours that could be cultured in vitro had shorter disease-free periods and overall survival time than those whose tumours did not grow in vitro, when analysed with the Kaplan-Meier method and the log-rank test. Their tumours also showed more complex karyotypes than tumours from which cell lines could not be established. No correlation was found between TP53 or CCND1 status and 18F-FDG uptake or cisplatin sensitivity. However, there was an inverse correlation between tumour cell doubling time and 18F-FDG uptake. Conclusion In vitro growth of HNSCC

  16. Adaptive optimisation of a generalised phase contrast beam shaping system

    Science.gov (United States)

    Kenny, F.; Choi, F. S.; Glückstad, J.; Booth, M. J.

    2015-05-01

    The generalised phase contrast (GPC) method provides versatile and efficient light shaping for a range of applications. We have implemented a generalised phase contrast system that used two passes on a single spatial light modulator (SLM). Both the pupil phase distribution and the phase contrast filter were generated by the SLM. This provided extra flexibility and control over the parameters of the system including the phase step magnitude, shape, radius and position of the filter. A feedback method for the on-line optimisation of these properties was also developed. Using feedback from images of the generated light field, it was possible to dynamically adjust the phase filter parameters to provide optimum contrast.

  17. Optimising costs in WLCG operations

    CERN Document Server

    Pradillo, Mar; Flix, Josep; Forti, Alessandra; Sciabà, Andrea

    2015-01-01

    The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse the 50 Petabytes of data annually generated by the LHC. The WLCG operations are coordinated by a distributed team of managers and experts and performed by people at all participating sites and from all the experiments. Several improvements in the WLCG infrastructure have been implemented during the first long LHC shutdown to prepare for the increasing needs of the experiments during Run2 and beyond. However, constraints in funding will affect not only the computing resources but also the available effort for operations. This paper presents the results of a detailed investigation on the allocation of the effort in the different areas of WLCG operations, identifies the most important sources of inefficiency and proposes viable strategies for optimising the operational cost, taking into account the current trends in the evolution of the computing infrastruc...

  18. Optimising Comprehensibility in Interlingual Translation

    DEFF Research Database (Denmark)

    Nisbeth Jensen, Matilde

    2015-01-01

    ... information on medication and tax information. Such texts are often written by experts and received by lay people, and, in today’s globalised world, they are often translated as well. In these functional texts, the receiver is not a mere recipient of information, but s/he needs to be able to act upon it. It is argued that Plain Language writing is a type of intralingual translation as it involves rewriting or translating a complex monolingual text into comprehensible language. Based on Plain Language literature, a comprehensibility framework is elaborated, which is subsequently exemplified through the functional text type of Patient Information Leaflet. Finally, the usefulness of applying the principles of Plain Language and intralingual translation for optimising comprehensibility in interlingual translation is discussed.

  19. Thyroid Scan and Uptake

    Science.gov (United States)


  20. Thyroid Scan and Uptake

    Medline Plus


  1. Thyroid Scan and Uptake

    Science.gov (United States)


  2. MACHINING OPTIMISATION AND OPERATION ALLOCATION FOR NC LATHE MACHINES IN A JOB SHOP MANUFACTURING SYSTEM

    Directory of Open Access Journals (Sweden)

    MUSSA I. MGWATU

    2013-08-01

    Full Text Available Numerical control (NC) machines in a job shop may not be cost and time effective if the assignment of cutting operations and optimisation of machining parameters are overlooked. In order to justify better utilisation and higher productivity of invested NC machine tools, it is necessary to determine the optimum machining parameters and realize effective assignment of cutting operations on machines. This paper presents two mathematical models for optimising machining parameters and effectively allocating turning operations on NC lathe machines in a job shop manufacturing system. The models are developed as non-linear programming problems and solved using a commercial LINGO software package. The results show that the decisions of machining optimisation and operation allocation on NC lathe machines can be simultaneously made while minimising both production cost and cycle time. In addition, the results indicate that production cost and cycle time can be minimised while significantly reducing or totally eliminating idle times among machines.
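    A compact sketch of the kind of non-linear programme involved: cutting speed and feed are chosen to minimise a weighted sum of unit cost and cycle time, with tool wear represented by a classical extended Taylor tool-life relation. All coefficients are illustrative and the formulation is a simplification, not the paper's LINGO model.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative turning data (not the paper's model)
D, L = 0.05, 0.20                 # workpiece diameter and cut length (m)
C_T, n, m = 350.0, 0.25, 0.35     # extended Taylor tool-life constants (assumed)
cost_rate, tool_cost, t_change = 1.2, 8.0, 2.0   # $/min, $/edge, tool-change min

def machining_time(v, f):
    # v in m/min, f in mm/rev: time for one pass along the workpiece
    rpm = v / (np.pi * D)
    return L / (f * 1e-3 * rpm)

def tool_life(v, f):
    # extended Taylor equation v * T^n * f^m = C_T  ->  T = (C_T / (v * f^m))^(1/n)
    return (C_T / (v * f**m)) ** (1.0 / n)

def objective(x, w_cost=0.5, w_time=0.5):
    v, f = x
    t_m = machining_time(v, f)
    edges_used = t_m / tool_life(v, f)
    cost = cost_rate * t_m + edges_used * (tool_cost + cost_rate * t_change)
    cycle = t_m + edges_used * t_change
    return w_cost * cost + w_time * cycle        # weighted cost/time compromise

res = minimize(objective, x0=[150.0, 0.2],
               bounds=[(60.0, 300.0), (0.05, 0.5)])   # speed and feed limits
print("optimum speed %.0f m/min, feed %.2f mm/rev" % tuple(res.x))
```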

  3. Statistical optimisation techniques in fatigue signal editing problem

    Energy Technology Data Exchange (ETDEWEB)

    Nopiah, Z. M.; Osman, M. H. [Fundamental Engineering Studies Unit Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 UKM (Malaysia); Baharin, N.; Abdullah, S. [Department of Mechanical and Materials Engineering Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 UKM (Malaysia)

    2015-02-03

    Success in fatigue signal editing is determined by the level of length reduction without compromising statistical constraints. A great reduction rate can be achieved by removing small amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting. This has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable amplitude strain data into different segments whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelling segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.

  4. Numerical optimisation of an axial turbine; Numerische Optimierung einer Axialturbine

    Energy Technology Data Exchange (ETDEWEB)

    Welzel, B.

    1998-12-31

    The author presents a method for the automatic shape optimisation of components with internal or external flow. The method couples a program for the numerical calculation of frictional turbulent flow with an optimisation algorithm; the algorithms used are a simplex search strategy and an evolution strategy. The shape of the component to be optimised is made variable through shape parameters that are modified by the algorithm. For each shape, a flow calculation is carried out, on the basis of which a functional value such as efficiency, loss, lift or drag force is calculated. For validation, the optimisation method is first used on simple examples with known solutions. It is then applied to the individual components of a slow-running axial turbine: components with accelerated and decelerated rotationally symmetric flow and 2D blade profiles are optimised. (orig.)

  5. Optimisation of Fabric Reinforced Polymer Composites Using a Variant of Genetic Algorithm

    Science.gov (United States)

    Axinte, Andrei; Taranu, Nicolae; Bejan, Liliana; Hudisteanu, Iuliana

    2017-03-01

    Fabric reinforced polymeric composites are high performance materials with a rather complex fabric geometry. Therefore, modelling this type of material is a cumbersome task, especially when efficient use is targeted. One of the most important issues in the design process is the optimisation of the individual laminae and of the laminated structure as a whole. In order to do that, a parametric model of the material has been defined, emphasising the many geometric variables that need to be correlated in the complex process of optimisation. The input parameters involved in this work include the widths or heights of the tows and the laminate stacking sequence, which are discrete variables, while the gaps between adjacent tows and the height of the neat matrix are continuous variables. This work is one of the first attempts to use a Genetic Algorithm (GA) to optimise the geometrical parameters of satin reinforced multi-layer composites. Given the mixed type of the input parameters involved, an original software package called SOMGA (Satin Optimisation with a Modified Genetic Algorithm) has been conceived and utilised in this work. The main goal is to find the best possible solution to the problem of designing a composite material which is able to withstand a given set of external, in-plane loads. The optimisation process has been performed using a fitness function which can analyse and compare the mechanical behaviour of different fabric reinforced composites, the results being correlated with the ultimate strains, which demonstrates the efficiency of the composite structure.

  6. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available When algorithms solve dynamic multi-objective optimisation problems (DMOOPs), benchmark functions should be used to determine whether the algorithm can overcome specific difficulties that can occur in real-world problems. However, for dynamic multi...

  7. Topology optimisation for natural convection problems

    CERN Document Server

    Alexandersen, Joe; Andreasen, Casper Schousboe; Sigmund, Ole

    2014-01-01

    This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach for designing heat sink geometries cooled by natural convection and micropumps powered by natural convection.

  8. Topology Optimisation for Coupled Convection Problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe

    This thesis deals with topology optimisation for coupled convection problems. The aim is to extend and apply topology optimisation to steady-state conjugate heat transfer problems, where the heat conduction equation governs the heat transfer in a solid and is coupled to thermal transport...... in a surrounding fluid, governed by a convection-diffusion equation, where the convective velocity field is found from solving the isothermal incompressible steady-state Navier-Stokes equations. Topology optimisation is also applied to steady-state natural convection problems. The modelling is done using stabilised...... finite elements, the formulation and implementation of which was done partly during a special course as preparatory work for this thesis. The formulation is extended with a Brinkman friction term in order to facilitate the topology optimisation of fluid flow and convective cooling problems. The derived

  9. User perspectives in public transport timetable optimisation

    DEFF Research Database (Denmark)

    Jensen, Jens Parbo; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2014-01-01

    The present paper deals with timetable optimisation from the perspective of minimising the waiting time experienced by passengers when transferring either to or from a bus. Due to its inherent complexity, this bi-level minimisation problem is extremely difficult to solve mathematically, since...... on the large-scale public transport network in Denmark. The timetable optimisation approach yielded a yearly reduction in weighted waiting time equivalent to approximately 45 million Danish kroner (9 million USD)....

  10. Analysing bone regeneration using topological optimisation

    Directory of Open Access Journals (Sweden)

    Diego Alexander Garzón Alvarado

    2010-04-01

    Full Text Available The object of the present article is to present the mathematical foundations of topological optimisation aimed at carrying out a study of bone regeneration. Bone structure can be economically adapted to different mechanical demands, responding to topological optimisation models (having "minimum" mass and "high" resistance). Such analysis is essential for formulating physical therapy in patients needing partial or total strengthening of a particular bone's tissue structure. A mathematical model is formulated, as are the methods for solving it.

  11. Pre-Industry-Optimisation of the Laser Welding Process

    DEFF Research Database (Denmark)

    Gong, Hui

    This dissertation documents the investigations into on-line monitoring the CO2 laser welding process and optimising the process parameters for achieving high quality welds. The requirements for realisation of an on-line control system are, first of all, a clear understanding of the dynamic...... phenomena of the laser welding process including the behaviour of the keyhole and plume, and the correlation between the adjustable process parameters: laser power, welding speed, focal point position, gas parameters etc. and the characteristics describing the quality of the weld: seam depth and width......, porosity etc. Secondly, a reliable monitoring system for sensing the laser-induced plasma and plume emission and detecting weld defects and process parameter deviations from the optimum conditions. Finally, an efficient control system with a fast signal processor and a precise feed-back controller...

  12. Optimising costs in WLCG operations

    Science.gov (United States)

    Alandes Pradillo, María; Dimou, Maria; Flix, Josep; Forti, Alessandra; Sciabà, Andrea

    2015-12-01

    The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse the 50 Petabytes of data annually generated by the LHC. The WLCG operations are coordinated by a distributed team of managers and experts and performed by people at all participating sites and from all the experiments. Several improvements in the WLCG infrastructure have been implemented during the first long LHC shutdown to prepare for the increasing needs of the experiments during Run2 and beyond. However, constraints in funding will affect not only the computing resources but also the available effort for operations. This paper presents the results of a detailed investigation on the allocation of the effort in the different areas of WLCG operations, identifies the most important sources of inefficiency and proposes viable strategies for optimising the operational cost, taking into account the current trends in the evolution of the computing infrastructure and the computing models of the experiments.

  13. Optimisation of confinement in a fusion reactor using a nonlinear turbulence model

    CERN Document Server

    Highcock, E G; Barnes, M; Dorland, W

    2016-01-01

    The confinement of heat in the core of a magnetic fusion reactor is optimised using a multidimensional optimisation algorithm. For the first time in such a study, the loss of heat due to turbulence is modelled at every stage using first-principles nonlinear simulations which accurately capture the turbulent cascade and large-scale zonal flows. The simulations utilise a novel approach, with gyrofluid treatment of the small-scale drift waves and gyrokinetic treatment of the large-scale zonal flows. A simple near-circular equilibrium with standard parameters is chosen as the initial condition. The figure of merit, fusion power per unit volume, is calculated, and then two control parameters, the elongation and triangularity of the outer flux surface, are varied, with the algorithm seeking to optimise the chosen figure of merit. An optimal configuration is discovered at an elongation of 1.5 and a triangularity of 0.03.
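    The structure of such a search can be sketched as below: a derivative-free optimiser varies the elongation and triangularity and re-evaluates the figure of merit at each step. Here the expensive nonlinear turbulence simulations are replaced by a smooth stand-in objective, purely to show the optimisation loop; it is not the gyrofluid/gyrokinetic model of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def fusion_power_density(kappa, delta):
    """Stand-in for the figure of merit (fusion power per unit volume): a smooth
    placeholder with a maximum near kappa ~ 1.5, delta ~ 0. It is NOT the
    gyrofluid/gyrokinetic turbulence model used in the study."""
    return np.exp(-((kappa - 1.5) ** 2) / 0.5 - (delta ** 2) / 0.02)

def objective(x):
    kappa, delta = x
    return -fusion_power_density(kappa, delta)    # minimise the negative

res = minimize(objective, x0=[1.2, 0.2], method="Nelder-Mead",
               options={"xatol": 1e-3, "fatol": 1e-6})
print("optimal elongation %.2f, triangularity %.2f" % tuple(res.x))
```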

  14. Semiconductor lasers as integrated optical biosensors: sensitivity optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Coote, J; Sweeney, S J [Advanced Technology Institute, University of Surrey, Guildford, UK GU2 7XH (United Kingdom)

    2007-07-15

    Semiconductor lasers contain both a light source and waveguide, rendering them suitable for adaptation to evanescent field biosensing. One-dimensional simulations using the beam propagation method have been carried out for planar semiconductor waveguide structures, with a view to maximising sensitivity of the effective index to changes in the refractive index and thickness of a film on the waveguide surface. Various structural parameters are investigated and it is found that thinning the upper cladding layer maximises the sensitivity. Implications for laser operation are considered, and an optimised structure is proposed. Surface layer index and thickness resolutions of 0.2 and 2nm are predicted.

  15. An optimisation framework for determination of capacity in railway networks

    DEFF Research Database (Denmark)

    Jensen, Lars Wittrup

    2015-01-01

    Within the railway industry, high-quality estimates of railway capacity are crucial information that helps railway companies to utilise the expensive (infrastructure) resources as efficiently as possible. This paper therefore proposes an optimisation framework to estimate the capacity of a railway...... to the train types. This is done using a mathematical model which is solved with a heuristic. The developed approach is used on a case network to obtain the capacity of the given railway system. Furthermore, we test different parameters to explore computation time, precision and sensitivity to input...

  16. Transmitter antenna placement in indoor environments using particle swarm optimisation

    Science.gov (United States)

    Talepour, Zeinab; Tavakoli, Saeed; Ahmadi-Shokouh, Javad

    2013-07-01

    The aim of this article is to suitably locate the minimum number of transmitter antennas in a given indoor environment so as to achieve good propagation coverage. To calculate the electromagnetic field at various points of the environment, we develop a software engine, named the ray-tracing engine (RTE), in Matlab. To achieve realistic calculations, all parameters of the geometry and materials of the building are considered. Particle swarm optimisation is employed to determine good locations for the transmitters. Simulation results show that full coverage is obtained by suitably locating three transmitters.
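    A minimal sketch of particle swarm optimisation applied to transmitter placement: each particle encodes the (x, y) coordinates of the transmitters and the fitness is the fraction of receiver grid points covered. A simple log-distance path-loss surrogate stands in for the article's ray-tracing engine, and the room size, power levels and threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
ROOM = (20.0, 10.0)                  # room size in metres (assumed)
N_TX = 3                             # number of transmitters to place
grid = np.stack(np.meshgrid(np.linspace(0, ROOM[0], 40),
                            np.linspace(0, ROOM[1], 20)), axis=-1).reshape(-1, 2)

def coverage(positions):
    """Fraction of grid points whose best received power exceeds a threshold.
    A log-distance path-loss model stands in for the ray-tracing engine (RTE)."""
    tx = positions.reshape(N_TX, 2)
    d = np.linalg.norm(grid[:, None, :] - tx[None, :, :], axis=-1).clip(min=0.5)
    rx_power = -30.0 - 20.0 * np.log10(d)          # dBm (assumed transmit power)
    return np.mean(rx_power.max(axis=1) > -45.0)   # -45 dBm coverage threshold

# Standard global-best PSO over the 2*N_TX transmitter coordinates
n_particles, n_iter, dim = 30, 100, 2 * N_TX
x = rng.uniform(0.0, 1.0, (n_particles, dim)) * np.tile(ROOM, N_TX)
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([coverage(p) for p in x])
gbest = pbest[np.argmax(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.uniform(size=(2, n_particles, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, np.tile(ROOM, N_TX))
    vals = np.array([coverage(p) for p in x])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)].copy()

print("coverage %.2f with transmitters at" % pbest_val.max(),
      gbest.reshape(N_TX, 2).round(1))
```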

  17. Optimisation of Forming Parameters to Reach Near Net Shape

    DEFF Research Database (Denmark)

    Ravn, Bjarne Gottlieb; Wanheim, Tarras

    2000-01-01

    Demands to the manufacturing industry have always been that the overall production expenses for a given component should be decreased. In the metal forming area, one way to reduce the expenses is to increase the complexity of the component and to reduce the number of steps in the production...

  18. Optimisation of Forming Parameters to Reach Near Net Shape

    DEFF Research Database (Denmark)

    Ravn, Bjarne Gottlieb; Wanheim, Tarras

    2000-01-01

    Demands to the manufacturing industry have always been that the overall production expenses for a given component should be decreased. In the metal forming area, one way to reduce the expenses is to increase the complexity of the component and to reduce the number of steps in the production...... and thereby reduce the overall production time. This can be achieved by producing the component as close as possible to the final shape which means that subsequent working can be minimised or even unnecessary. This philosophy - Net Shape Forming - has been thoroughly investigated in a number of projects...

  19. Multidisciplinary design optimisation of a recurve bow based on applications of the autogenetic design theory and distributed computing

    Science.gov (United States)

    Fritzsche, Matthias; Kittel, Konstantin; Blankenburg, Alexander; Vajna, Sándor

    2012-08-01

    The focus of this paper is to present a method of multidisciplinary design optimisation based on the autogenetic design theory (ADT), which provides methods that are partially implemented in the optimisation software described here. The main thesis of the ADT is that biological evolution and the process of developing products are essentially similar, i.e. procedures from biological evolution can be transferred to product development. In order to fulfil requirements and boundary conditions of any kind (which may change at any time), both biological evolution and product development search a certain area for appropriate solution possibilities and try to optimise those that are actually promising by varying parameters and combinations of these solutions. As the time necessary for multidisciplinary design optimisations is a critical aspect of product development, distributing the optimisation process to make effective use of idle computing capacity can reduce the optimisation time drastically. Finally, a practical example shows how ADT methods and distributed optimisation are applied to improve a product.

  20. Energetic analysis and optimisation of an integrated coal gasification-combined cycle power plant

    NARCIS (Netherlands)

    Vlaswinkel, E.E.

    1992-01-01

    Methods are presented to analyse and optimise the energetic performance of integrated coal gasification-combined cycle (IGCC) power plants. The methods involve exergy analysis and pinch technology and can be used to identify key process parameters and to generate alternative design options for impro

  1. Optimisation Modelling of Efficiency of Enterprise Restructuring

    Directory of Open Access Journals (Sweden)

    Yefimova Hanna V.

    2014-03-01

    Full Text Available The article considers the optimisation of the resources directed at the restructuring of a shipbuilding enterprise, which is the main prerequisite of its efficiency. Restructuring is treated as a process of complex and interconnected change in the structure of assets, liabilities and enterprise functions, initiated by a dynamic environment, based on the strategic concept of the enterprise's development and aimed at increasing the efficiency of its activity, expressed in the growth of its value. Deciding whether to restructure a shipbuilding enterprise and selecting a specific restructuring project is an optimisation task of long-term planning. The enterprise resources allocated for restructuring serve as constraints of the mathematical model. The main optimisation criteria are maximisation of net discounted income or minimisation of expenditure on restructuring measures. The resulting optimisation model is designed to assess the volumes of own and borrowed funds to be attracted for restructuring, while a simulation model generates the cash flows. The task is solved with a complex of interrelated optimisation and simulation models and procedures for the formation, selection and co-ordination of managerial decisions.

  2. Optimisation of Investment Resources at Small Enterprises

    Directory of Open Access Journals (Sweden)

    Shvets Iryna B.

    2014-03-01

    Full Text Available The goal of the article is to study the optimisation of the structure of investment resources and to develop criteria and stages for optimising the volumes of investment resources of small enterprises by type of economic activity. The article characterises how investment resources are transformed into the assets and liabilities on the balance sheets of small enterprises, and calculates the structure of the sources of investment resources at small enterprises in Ukraine by type of economic activity in 2011. Based on this analysis of the structure of investment resources, the article forms the main groups of optimisation criteria for individual small enterprises by type of economic activity. It offers an algorithm and a step-by-step scheme for optimising investment resources at small enterprises as a multi-stage process of managing investment resources, aimed at increasing their mobility and the rate at which existing resources are transformed into investments. A prospect for further study is the development of a structural-logical scheme for optimising the volumes of investment resources at small enterprises.

  3. Optimisation of logistics processes of energy grass collection

    Science.gov (United States)

    Bányai, Tamás.

    2010-05-01

    The collection of energy grass is a logistics-intensive process [1]. The optimal design and control of transportation and collection subprocesses is a critical point of the supply chain. To avoid decisions based solely on experience and intuition, the scientifically sound way is to optimise and analyse collection processes using mathematical models and methods. Within the frame of this work, the author focuses on the optimisation possibilities of the collection processes, especially from the point of view of transportation and related warehousing operations. The optimisation methods developed in the literature [2] take into account the harvesting processes, county-specific yields, transportation distances, erosion constraints, machinery specifications, and other key variables, but the possibility of multiple collection points and multi-level collection were not taken into consideration. The possible uses of energy grass are very wide (energy production, biogas and bio-alcohol production, the paper and textile industries, industrial fibre material, fodder, biological soil protection [3], etc.), so not only a single-level but also a multi-level collection system with several collection and production facilities has to be taken into consideration. The input parameters of the optimisation problem are the following: total amount of energy grass to be harvested in each region; specific facility costs of collection, warehousing and production units; specific costs of transportation resources; pre-scheduling of the harvesting process; specific transportation and warehousing costs; and pre-scheduling of the processing of energy grass at each facility (exclusive warehousing). The model takes into consideration the following assumptions: (1) cooperative relations among processing and production facilities, (2) capacity constraints are not ignored, (3) the cost function of transportation is non-linear, (4) the drivers' conditions are ignored.

  4. Vertical transportation systems embedded on shuffled frog leaping algorithm for manufacturing optimisation problems in industries.

    Science.gov (United States)

    Aungkulanon, Pasura; Luangpaiboon, Pongchanun

    2016-01-01

    Response surface methods based on first- or second-order models are important in manufacturing processes. This study, however, proposes differently structured mechanisms of vertical transportation systems (VTS) embedded in a shuffled frog leaping-based approach. There are three VTS scenarios: a motion reaching a normal operating velocity, and motions that either reach or do not reach transitional motion. These variants were used to simultaneously inspect multiple responses affected by machining parameters in multi-pass turning processes. The numerical results of two machining optimisation problems demonstrated the high performance of the proposed methods when compared to other optimisation algorithms for an actual deep-cut design.

  5. Optimising Performance of a Cantilever-type Micro Accelerometer Sensor

    Directory of Open Access Journals (Sweden)

    B.P. Joshi

    2007-05-01

    Full Text Available A technique for optimising performance of cantilever-type micro acceleration sensor hasbeen developed. Performance of a sensor is judged mainly by its sensitivity and bandwidth.Maximising product of these two important parameters of inertial sensors helps to optimise thesensor performance. It is observed that placement of a lumped mass (add-mass on the sensor'sproof-mass helps to control both sensitivity and the first resonant frequency of the cantileverstructure to the designer's choice. Simulation and modelling of various dimensions of rectangularstructures for acceleration sensor with this novel add-mass technique are discussed. CoventorwareMEMSCAD has been used to model, simulate, and carry out FEM analysis. A simple analyticalmodel is discussed to elaborate the mechanics of cantilever-type micro accelerometer. Thecomparison of the results obtained from analytical model and the finite element simulations revealthese to be in good agreement. The advantages of this technique for choosing the two mostimportant sensor parameters (i.e., sensitivity and bandwidth of an inertial sensor are brought out.
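
    For orientation only, the trade-off mentioned above can be made explicit with the textbook lumped spring-mass picture of a cantilever accelerometer (a generic sketch, not the authors' analytical model). With stiffness $k$, effective cantilever mass $m_{\mathrm{eff}}$ and add-mass $m_{\mathrm{add}}$,

    $$ f_0 = \frac{1}{2\pi}\sqrt{\frac{k}{m_{\mathrm{eff}} + m_{\mathrm{add}}}}, \qquad S \equiv \frac{x}{a} = \frac{m_{\mathrm{eff}} + m_{\mathrm{add}}}{k} = \frac{1}{(2\pi f_0)^2}, $$

    so adding a lumped mass lowers the first resonant frequency $f_0$ (bandwidth) while raising the static sensitivity $S$, and the sensitivity-bandwidth product $S f_0 \propto 1/f_0$ can be steered by the choice of add-mass and stiffness.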

  6. Multicriteria Optimisation in Logistics Forwarder Activities

    Directory of Open Access Journals (Sweden)

    Tanja Poletan Jugović

    2007-05-01

    Full Text Available The logistics forwarder, as organiser and planner of the coordination and integration of all the elements of transport and logistics chains, uses adequate ways and methods in the process of planning and decision-making. One of these methods, analysed in this paper, which could be used in the optimisation of transport and logistics processes and activities of the logistics forwarder, is the multicriteria optimisation method. Using that method, this paper suggests a model of multicriteria optimisation of logistics forwarder activities. The suggested optimisation model is justified in keeping with the principles of multicriteria optimisation, which belongs to the operations research methods and represents the process of multicriteria optimisation over variants. Among the many different multicriteria optimisation procedures, PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations) and Promcalc & Gaia V. 3.2, a computer program for multicriteria programming based on that procedure, were used.

  7. Topology Optimisation for Coupled Convection Problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Andreasen, Casper Schousboe; Aage, Niels

    The work focuses on applying topology optimisation to forced and natural convection problems in fluid dynamics and conjugate (fluid-structure) heat transfer. To the authors' knowledge, topology optimisation has not yet been applied to natural convection flow problems in the published literature and the current work is thus seen as contributing new results to the field. In the literature, most works on the topology optimisation of weakly coupled convection-diffusion problems focus on the temperature distribution of the fluid, but a selection of notable exceptions also focusing on the temperature...... conduction governs in the solid parts of the design domain and couples to convection-dominated heat transfer to a surrounding fluid. Both loosely coupled and tightly coupled problems are considered. The loosely coupled problems are convection-diffusion problems, based on an advective velocity field from...

  8. An investigation into the Gustafsson limit for small planar antennas using optimisation

    CERN Document Server

    Shahpari, Morteza; Lewis, Andrew

    2013-01-01

    The fundamental limit for small antennas provides a guide to the effectiveness of designs. Gustafsson et al, Yaghjian et al, and Mohammadpour-Aghdam et al independently deduced a variation of the Chu-Harrington limit for planar antennas in different forms. Using a multi-parameter optimisation technique based on the ant colony algorithm, planar, meander dipole antenna designs were selected on the basis of lowest resonant frequency and maximum radiation efficiency. The optimal antenna designs across the spectrum from 570 to 1750 MHz occupying an area of $56mm \times 25mm$ were compared with these limits calculated using the polarizability tensor. The results were compared with Sievenpiper's comparison of published planar antenna properties. The optimised antennas have greater than 90% polarizability compared to the containing conductive box in the range $0.3\ldots$ [text missing] optimisation algorithm. The generalized absorption efficiency of the small meander line antennas is less than 50%, and resu...

  9. [Process optimisation: from theory to practical implementation].

    Science.gov (United States)

    Töpfer, Armin

    2010-01-01

    Today process optimisation is an indispensable approach to mastering the current challenges of modern health care management. The objective is to design business processes that are free of defects and waste, and to monitor and control them with meaningful test statistics. Based on the identification of essential key performance indicators, key success factors and value and cash generators, two basic approaches to process optimisation, which are well established and widely used in industry, are now being implemented in the health care sector as well: Lean Management and Six Sigma.

  10. Simulating stem growth using topological optimisation

    Directory of Open Access Journals (Sweden)

    Carlos Alberto Narváez

    2010-04-01

    Full Text Available Engineers are currently resorting to observations of nature for making new designs. Studying the functioning of the bodies of plants and animals has required them to be modelled and simulated; however, some models born from engineering problems could be used for such purposes. This article shows how topological optimisation (a mathematical model for optimising the design of structural elements) can be used for modelling and simulating the way a stem grows in terms of carrying out its function of providing support for the leaves and a plant's other upper organs.

  11. Bat Algorithm for Multi-objective Optimisation

    CERN Document Server

    Yang, Xin-She

    2012-01-01

    Engineering optimization is typically multiobjective and multidisciplinary with complex constraints, and the solution of such complex problems requires efficient optimization algorithms. Recently, Xin-She Yang proposed a bat-inspired algorithm for solving nonlinear, global optimisation problems. In this paper, we extend this algorithm to solve multiobjective optimisation problems. The proposed multiobjective bat algorithm (MOBA) is first validated against a subset of test functions, and then applied to solve multiobjective design problems such as welded beam design. Simulation results suggest that the proposed algorithm works efficiently.

  12. Topology Optimisation of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Thike Aye Min

    2016-01-01

    Full Text Available Wireless sensor networks are widely used in a variety of fields, including industrial environments. In the case of a clustered network, the location of the cluster head affects the reliability of the network operation. Finding the optimum location of the cluster head is therefore critical for the design of a network. This paper discusses an optimisation approach, based on the brute-force algorithm, in the context of topology optimisation of a cluster-structured, centralised wireless sensor network. Two examples are given to verify the approach and to demonstrate the implementation of the brute-force algorithm for finding an optimum location of the cluster head.

  13. Optimisation of interventional cardiology procedures; Optimisation des procedures en cardiologie interventionnelle

    Energy Technology Data Exchange (ETDEWEB)

    Bar, Olivier [SELARL, Cardiologie Interventionnelle Imagerie Cardiaque - CIIC, 8, place de la Cathedrale - 37042 Tours (France)

    2011-07-15

    Radiation-guided procedures in interventional cardiology include diagnostic and/or therapeutic procedures, primarily coronary catheterization and coronary angioplasty. Application of the principles of radiation protection and the use of optimised procedures are contributing to dose reduction while maintaining the radiological image quality necessary for performance of the procedures. The mandatory training in patient radiation protection and technical training in the use of radiology devices mean that implementing continuous optimisation of procedures is possible in practice. This optimisation approach is the basis of patient radiation protection; when associated with the wearing of protective equipment it also contributes to the radiation protection of the cardiologists. (author)

  14. Friction stir welding: multi-response optimisation using Taguchi-based GRA

    Directory of Open Access Journals (Sweden)

    Jitender Kundu

    2016-01-01

    Full Text Available In the present experimental work, friction stir welding of aluminium alloy 5083-H321 is performed to optimise the process parameters for maximum tensile strength. Taguchi's L9 orthogonal array has been used for three parameters – tool rotational speed (TRS), traverse speed (TS), and tool tilt angle (TTA) – each at three levels. Multi-response optimisation has been carried out through Taguchi-based grey relational analysis. The grey relational grade has been calculated from all three responses – ultimate tensile strength, percentage elongation, and micro-hardness. Analysis of variance applied to the grey relational grade is used to identify the significant process parameters. TRS and TS are the two most significant parameters, influencing most of the quality characteristics of the friction stir welded joint. Validation of the predicted values through confirmation experiments at the optimum setting shows good agreement with the experimental values.
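
    As a rough sketch of the grey relational analysis step described above, the fragment below normalises three responses, computes grey relational coefficients with the usual distinguishing coefficient of 0.5, and averages them into a grade per run. The response values are illustrative placeholders, not the study's measurements.

```python
# Sketch of Taguchi-based grey relational analysis (GRA) for three responses.
import numpy as np

# rows = L9 runs, columns = [UTS (MPa), elongation (%), micro-hardness (HV)] -- placeholder data
y = np.array([
    [280, 10.1, 85], [295, 11.4, 88], [310, 12.0, 90],
    [305, 11.8, 87], [315, 12.5, 92], [300, 11.0, 86],
    [290, 10.8, 84], [320, 13.0, 93], [308, 12.2, 89],
], dtype=float)

# Step 1: larger-the-better normalisation of each response to [0, 1].
z = (y - y.min(axis=0)) / (y.max(axis=0) - y.min(axis=0))

# Step 2: grey relational coefficient with distinguishing coefficient zeta = 0.5.
delta = 1.0 - z                     # deviation from the ideal sequence (all ones)
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 3: grey relational grade = mean coefficient per run; ANOVA on these grades
# is then used to identify the significant parameters.
grade = grc.mean(axis=1)
print("grey relational grades:", np.round(grade, 3))
print("best run (1-indexed):", int(grade.argmax()) + 1)
```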

  15. Extending Particle Swarm Optimisers with Self-Organized Criticality

    DEFF Research Database (Denmark)

    Løvbjerg, Morten; Krink, Thiemo

    2002-01-01

    Particle swarm optimisers (PSOs) show potential in function optimisation, but still have room for improvement. Self-organized criticality (SOC) can help control the PSO and add diversity. Extending the PSO with SOC seems promising, reaching faster convergence and better solutions.

  16. Particle Swarm Optimisation with Spatial Particle Extension

    DEFF Research Database (Denmark)

    Krink, Thiemo; Vesterstrøm, Jakob Svaneborg; Riget, Jacques

    2002-01-01

    In this paper, we introduce spatial extension to particles in the PSO model in order to overcome premature convergence in iterative optimisation. The standard PSO and the new model (SEPSO) are compared w.r.t. performance on well-studied benchmark problems. We show that the SEPSO indeed managed...

  17. Topology optimisation of natural convection problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe

    2014-01-01

    This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equation...

  18. Thermodynamic optimisation of a heat exchanger

    NARCIS (Netherlands)

    Cornelissen, R.L.; Hirs, G.G.

    1999-01-01

    The objective of this paper is to show that for the optimal design of an energy system, where there is a trade-off between exergy saving during operation and exergy use during construction of the energy system, exergy analysis and life cycle analysis should be combined. An exergy optimisation of a h

  19. Optimised Design of Transparent Optical Domains

    DEFF Research Database (Denmark)

    Hanik, N.; Caspar, C.; Schmidt, F.;

    2000-01-01

    Three different design concepts for transparent, dispersion-compensated, optical WDM transmission links are optimised numerically and experimentally for a 10 Gbit/s data rate per channel. It is shown that robust transparent domains of 1,500 km in diameter can be realised using simple design rules.

  20. Cell uptake survey of pegylated nanographene oxide

    Science.gov (United States)

    Vila, M.; Portolés, M. T.; Marques, P. A. A. P.; Feito, M. J.; Matesanz, M. C.; Ramírez-Santillán, C.; Gonçalves, G.; Cruz, S. M. A.; Nieto, A.; Vallet-Regi, M.

    2012-11-01

    Graphene and, more specifically, nanographene oxide (GO) has been proposed as a highly efficient antitumoral therapy agent. Nevertheless, its cell uptake kinetics, its influence on different types of cells and the possibility of controlling the timing of cellular internalization remain unexplored. Herein, different cell types have been cultured in vitro for several incubation periods in the presence of 0.075 mg ml-1 pegylated GO solutions. GO uptake kinetics revealed differences in the amount and speed of uptake as a function of the type of cell involved. GO uptake by osteoblast-like cells is higher and faster without resulting in greater cell membrane damage. Moreover, the nature of the commonly used PEG (number of branches) also influences viability and cell uptake speed. These facts play an important role in the future definition of timing parameters and selective cell uptake control in order to achieve an effective therapy.

  1. Track gauge optimisation of railway switches using a genetic algorithm

    Science.gov (United States)

    Pålsson, Björn A.; Nielsen, Jens C. O.

    2012-01-01

    A methodology for the optimisation of a prescribed track gauge variation (gauge widening) in the switch panel of a railway turnout (switch and crossing, S&C) is presented. The aim is to reduce rail profile degradation. A holistic approach is applied, where both routes and travel directions (moves) of traffic in the switch panel are considered simultaneously. The problem is formulated as a multi-objective minimisation problem which is solved using a genetic-type optimisation algorithm which provides a set of Pareto optimal solutions. The dynamic vehicle-turnout interaction is evaluated using a multi-body simulation tool and the energy dissipation in the wheel-rail contacts is used for the assessment of gauge parameters. Two different vehicle models are used, one freight car and one passenger train set, and a stochastic spread in wheel profile and wheel-rail friction coefficient is accounted for. It is found that gauge configurations with a large gauge-widening amplitude for the stock rail on the field side are optimal for both the through and diverging routes, while the results for the gauge side show a larger route dependence. The optimal gauge configurations are observed to be similar for both vehicle types.

  2. Optimisation of Sintering Factors of Titanium Foams Using Taguchi Method

    Directory of Open Access Journals (Sweden)

    S. Ahmad

    2010-06-01

    Full Text Available Metal foams have the potential to be used in the production of bipolar plates in polymer electrolyte membrane fuel cells (PEMFC). In this paper, pure titanium was used to prepare titanium foam using the slurry method. Electrical conductivity is the most important parameter to be considered in the production of good bipolar plates. To achieve a high conductivity of the titanium foam, the effects of various parameters including temperature, time profile and composition have to be characterised and optimised. This paper reports the use of the Taguchi method in optimising the processing parameters of pure titanium foams. The effects of four sintering factors, namely composition, sintering temperature, heating rate and soaking time, on the electrical conductivity have been studied. The titanium slurry was prepared by mixing titanium alloy powder, polyethylene glycol (PEG), methylcellulose and water. Polyurethane (PU) foams were then impregnated into the slurry and later dried at room temperature. These were next sintered in a high-temperature vacuum furnace. The various factors were assigned to an L9 orthogonal array. From the analysis of variance (ANOVA), the composition of titanium powder has the highest percentage contribution (24.51%) to the electrical conductivity, followed by the heating rate (10.29%). The optimum electrical conductivity was found to be 1336.227 ± 240.61 S/cm for this titanium foam. It was achieved with a 70% composition of titanium, a sintering temperature of 1200 °C, a heating rate of 0.5 °C/min and a 2 hour soaking time. Confirmatory experiments have produced results that lie within the 90% confidence interval.
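
    To make the Taguchi/ANOVA step concrete, the sketch below computes the larger-the-better signal-to-noise ratio for a conductivity response over a standard L9(3^4) array and derives each factor's percentage contribution from the level means. The array assignment and the conductivity numbers are illustrative placeholders, not the experimental data reported above.

```python
# Sketch of Taguchi larger-the-better S/N ratios and percentage contributions for an L9 array.
import numpy as np

# L9(3^4) orthogonal array (0-indexed levels):
# columns = [composition, temperature, heating rate, soaking time]
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
sigma = np.array([1100, 1180, 1250, 1230, 1336, 1190, 1150, 1290, 1210.0])  # S/cm, one run each

# Larger-the-better S/N ratio: -10 log10( mean(1/y^2) ); with one replicate this is 20 log10(y).
sn = -10 * np.log10(1.0 / sigma**2)

# Sum of squares per factor from the mean S/N at each level -> percentage contribution.
total_ss = np.sum((sn - sn.mean())**2)
for j, name in enumerate(["composition", "temperature", "heating rate", "soaking time"]):
    level_means = np.array([sn[L9[:, j] == lv].mean() for lv in range(3)])
    ss = 3 * np.sum((level_means - sn.mean())**2)
    print(f"{name:12s}  contribution = {100 * ss / total_ss:5.1f} %")
```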

  3. Towards 'smart lasers': self-optimisation of an ultrafast pulse source using a genetic algorithm

    CERN Document Server

    Woodward, R I

    2016-01-01

    Short-pulse fibre lasers are a complex dynamical system possessing a broad space of operating states that can be accessed through control of cavity parameters. Determination of target regimes is a multi-parameter global optimisation problem. Here, we report the implementation of a genetic algorithm to intelligently locate optimum parameters for stable single-pulse mode-locking in a Figure-8 fibre laser, and fully automate the system turn-on procedure. Stable ultrashort pulses are repeatably achieved by employing a compound fitness function that monitors both temporal and spectral output properties of the laser. Our method of encoding photonics expertise into an algorithm and applying machine-learning principles paves the way to self-optimising `smart' optical technologies.

  4. Minimisation of the wall shear stress gradients in bypass grafts anastomoses using meshless CFD and genetic algorithms optimisation.

    Science.gov (United States)

    El Zahab, Zaher; Divo, Eduardo; Kassab, Alain

    2010-02-01

    The wall shear stress (WSS) spatial and temporal gradients are two hemodynamics parameters correlated with endothelial damage. Those two gradients become well pronounced in a bypass graft anastomosis geometry where the blood flow patterns are quite disturbed. The WSS gradient minimisation on the host artery floor can be achieved by optimising the anastomosis shape and hence may lead to an improved long-term post-surgical performance of the graft. The anastomosis shape optimisation can be executed via an integrated computational tool comprised of a meshless computational fluid dynamics (CFD) solver and a genetic algorithm (GA) shape optimiser. The meshless CFD solver serves to evaluate the WSS gradients and the GA optimiser serves to search for the end-to-side distal anastomosis (ETSDA) optimal shape that best minimises those gradients. We utilise a meshless CFD method to resolve hemodynamics and a GA for the purpose of optimisation. We consider three different anastomotic models: the conventional ETSDA, the Miller Cuff ETSDA and the hood ETSDA. The results reported herein demonstrate that the graft calibre should always be maximised whether a conventional or Miller Cuff ETSDA model is utilised. Also, it was noted that the Miller Cuff height should be minimised. The choice of an optimal anastomotic angle should be optimised to achieve a compromise between the concurrent minimisations of both the spatial WSS gradient and the temporal WSS gradient.

  5. Optimisation Methods for Cam Mechanisms

    Directory of Open Access Journals (Sweden)

    Claudia–Mari Popa

    2010-01-01

    Full Text Available In this paper we present the criteria on which the optimisation of cam mechanisms is based, and we perform the calculations for several types of mechanisms. We study the influence of the constructive parameters of simple mechanisms with a rotating cam and a translating follower (flat or curved) on the curvature radius and on the transmission angle. We then present the optimisation calculations for mechanisms with a cam and a flat rotating follower, as well as the calculations for optimising cam mechanisms with the help of circular-groove followers. For an easier interpretation of the results, we have visualised the obtained cam in AutoCAD according to the script files generated by a calculation program.

  6. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available A thyroid scan is a type of nuclear medicine imaging. The radioactive iodine uptake test (RAIU) is a measurement of thyroid function, but does not involve imaging. Nuclear medicine is a branch of medical imaging that ...

  7. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... limitations of the Thyroid Scan and Uptake? What is a Thyroid Scan and Uptake? A thyroid scan ... tissues in your body. How is the procedure performed? Nuclear medicine imaging is usually ...

  9. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... of the Thyroid Scan and Uptake? What is a Thyroid Scan and Uptake? A thyroid scan is ...

  10. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... known as a thyroid uptake. It is a measurement of thyroid function, but does not involve imaging. ... eating can affect the accuracy of the uptake measurement. Jewelry and other metallic accessories should be left ...

  11. Mechanisms of Ocean Heat Uptake

    Science.gov (United States)

    Garuba, Oluwayemi

    An important parameter for the climate response to increased greenhouse gases or other radiative forcing is the speed at which heat anomalies propagate downward in the ocean. Ocean heat uptake occurs through passive advection/diffusion of surface heat anomalies and through the redistribution of existing temperature gradients due to circulation changes. The Atlantic meridional overturning circulation (AMOC) weakens in a warming climate and this should slow the downward heat advection (compared to a case in which the circulation is unchanged). However, weakening AMOC also causes a deep warming through the redistributive effect, thus increasing the downward rate of heat propagation compared to unchanging circulation. Total heat uptake depends on the combined effect of these two mechanisms. Passive tracers in perturbed CO2-quadrupling experiments are used to investigate the effect of passive advection and redistribution of temperature anomalies. A new passive tracer formulation is used to separate ocean heat uptake into contributions due to redistribution and passive advection-diffusion of surface heating during an ocean model experiment with an abrupt increase in surface temperature. The spatial pattern and mechanisms of each component are examined. With further experiments, the effects of surface wind, salinity and temperature changes on the changing circulation, and the resulting effect on redistribution in the individual basins, are isolated. Analysis of the passive advection and propagation path of the tracer shows that the Southern Ocean dominates heat uptake, largely through vertical and horizontal diffusion. Vertical diffusion transports the tracer across isopycnals down to about 1000 m in 100 years in the Southern Ocean. Advection is more important in the subtropical cells and in the Atlantic high latitudes, both with a short time scale of about 20 years. The shallow subtropical cells transport the tracer down to about 500 m along isopycnal surfaces, below this vertical

  12. Public transport optimisation emphasising passengers’ travel behaviour

    DEFF Research Database (Denmark)

    Jensen, Jens Parbo

    to enhance the operations of public transport while explicitly emphasising passengers’ travel behaviour and preferences. Similar to economic theory, interactions between supply and demand are omnipresent in the context of public transport operations. In public transport, the demand is represented...... the published performance measures and what passengers actually experience, a large academic contribution of the current PhD study is the explicit consideration of passengers’ travel behaviour in optimisation studies and in the performance assessment. Besides the explicit passenger focus in transit planning...... at as the motivator for delay-robust railway timetables. Interestingly, passenger oriented optimisation studies considering robustness in railway planning typically limit their emphasis on passengers to the consideration of transfer maintenance. Clearly, passengers’ travel behaviour is more complex and multifaceted...

  13. Topology optimised planar photonic crystal building blocks

    DEFF Research Database (Denmark)

    Frandsen, Lars Hagedorn; Hede, K. K.; Borel, Peter Ingo

    A photonic crystal waveguide (PhCW) 1x4 splitter has been constructed from PhCW 60° bends [1] and Y-splitters [2] that have been designed individually by utilising topology optimisation [3]. The splitter has been fabricated in a silicon-on-insulator material (Fig. 1) and exhibits a broadband splitting...... for the TE-polarisation with an average excess loss of 1.55±0.54 dB over a 110 nm bandwidth. The 1x4 splitter demonstrates that individual topology-optimised parts can be used as building blocks to realise high-performance nanophotonic circuits. [1] L. H. Frandsen et al., Opt. Express 12, 5916-5921 (2004) [2] P. I...

  14. Self-optimising control of sewer systems

    DEFF Research Database (Denmark)

    Mauricio Iglesias, Miguel; Montero-Castro, I.; Mollerup, Ane Loft

    2013-01-01

    Self-optimising control is a useful concept to select optimal controlled variables from a set of candidate measurements in a systematic manner. In this study, we use self-optimising control tools and apply them to the specific features of sewer systems, e.g. the continuously transient dynamics......, the availability of a large number of measurements, and the stochastic and unforeseeable character of the disturbances (rainfall). Using a subcatchment area in the Copenhagen sewer system as a case study, we demonstrate, step by step, the formulation of the self-optimising control problem. The final result...... is an improved control structure aimed at minimising the loss for a given control objective, here the minimisation of combined sewer overflows despite rainfall variations....

  15. Improved Squeaky Wheel Optimisation for Driver Scheduling

    CERN Document Server

    Aickelin, Uwe; Li, Jingpeng

    2008-01-01

    This paper presents a technique called Improved Squeaky Wheel Optimisation (ISWO) for driver scheduling problems. It improves the original Squeaky Wheel Optimisation's effectiveness and execution speed by incorporating two additional steps, Selection and Mutation, which implement evolution within a single solution. In the ISWO, a cycle of Analysis-Selection-Mutation-Prioritization-Construction continues until stopping conditions are reached. The Analysis step first computes the fitness of a current solution to identify troublesome components. The Selection step then discards these troublesome components probabilistically by using the fitness measure, and the Mutation step follows to further discard a small number of components at random. After the above steps, an input solution becomes partial and thus the resulting partial solution needs to be repaired. The repair is carried out by using the Prioritization step to first produce priorities that determine an order by which the following Construction step then schedul...
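
    The cycle described above can be sketched on a toy problem. Below, tasks are assigned to drivers so as to balance load; the per-task fitness, selection probability and acceptance rule are simplified placeholders intended only to show the Analysis-Selection-Mutation-Prioritisation-Construction loop, not the paper's driver-scheduling implementation.

```python
# Toy sketch of the ISWO cycle on a load-balancing assignment problem.
import random

random.seed(1)
durations = [random.randint(1, 9) for _ in range(30)]   # 30 toy tasks
n_drivers = 5

def construct(priority_order, partial):
    """Construction: greedily assign still-unassigned tasks, in priority order,
    to the currently least-loaded driver."""
    assign = dict(partial)
    loads = [0] * n_drivers
    for t, d in assign.items():
        loads[d] += durations[t]
    for t in priority_order:
        if t not in assign:
            d = loads.index(min(loads))
            assign[t] = d
            loads[d] += durations[t]
    return assign

def analyse(assign):
    """Analysis: per-task fitness (negative driver load, so crowded drivers score low)
    plus the overall makespan used as the global objective."""
    loads = [0] * n_drivers
    for t, d in assign.items():
        loads[d] += durations[t]
    return {t: -loads[d] for t, d in assign.items()}, max(loads)

solution = construct(range(len(durations)), {})

for _ in range(200):
    fit, makespan = analyse(solution)                                   # Analysis
    lo, hi = min(fit.values()), max(fit.values())
    norm = {t: (fit[t] - lo) / (hi - lo + 1e-9) for t in fit}           # 0 = most troublesome
    partial = {t: d for t, d in solution.items()
               if random.random() > 0.5 * (1.0 - norm[t])}              # Selection (probabilistic)
    for t in random.sample(sorted(partial), k=min(2, len(partial))):    # Mutation (random removals)
        partial.pop(t)
    priority = sorted(fit, key=fit.get)                                 # Prioritisation: worst first
    candidate = construct(priority, partial)                            # Construction (repair)
    if analyse(candidate)[1] <= makespan:                               # simple acceptance rule
        solution = candidate

print("final makespan:", analyse(solution)[1])
```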

  16. Buckling optimisation of sandwich cylindrical panels

    Science.gov (United States)

    Abouhamzeh, M.; Sadighi, M.

    2016-06-01

    In this paper, the buckling load optimisation is performed on sandwich cylindrical panels. A finite element program is developed in MATLAB to solve the governing differential equations of the global buckling of the structure. In order to find the optimal solution, the genetic algorithm Toolbox in MATLAB is implemented. Verifications are made for both the buckling finite element code and also the results from the genetic algorithm by comparisons to the results available in literature. Sandwich cylindrical panels are optimised for the buckling strength with isotropic or orthotropic cores with different boundary conditions. Results are presented in terms of stacking sequence of fibers in the face sheets and core to face sheet thickness ratio.

  17. Applying the Theory of Optimising Professional Life

    Directory of Open Access Journals (Sweden)

    Lesley Margaret Piko

    2014-12-01

    Full Text Available Glaser (2014) wrote that “the application of grounded theory (GT) is a relatively neglected topic” (p. 1) in the literature. Applying GT to purposely intervene and improve a situation is an important adjunct to our knowledge and understanding of GT. A recent workshop of family doctors and general practitioners provides a useful example. The theory of optimising professional life explains that doctors are concerned about sustainment in their career and, to resolve this concern, they implement solutions to optimise their personal situation. Sustainment is a new, overarching concept of three needs: the need for self-care to sustain well-being, the need for work interest to sustain motivation, and the need for income to sustain lifestyle. The objective of the workshop was to empower doctors to reinvent their careers using this theory. Working individually and in small groups, participants were able to analyse a problem and to identify potential solutions.

  18. Fermionic orbital optimisation in tensor network states

    CERN Document Server

    Krumnow, C; Eisert, J

    2015-01-01

    Tensor network states and specifically matrix-product states have proven to be a powerful tool for simulating ground states of strongly correlated spin models. Recently, they have also been applied to interacting fermionic problems, specifically in the context of quantum chemistry. A new freedom arising in such non-local fermionic systems is the choice of orbitals, it being far from clear what choice of fermionic orbitals to make. In this work, we propose a way to overcome this challenge. We suggest a method intertwining the optimisation over matrix product states with suitable fermionic Gaussian mode transformations, hence bringing the advantages of both approaches together. The described algorithm generalises basis changes in the spirit of the Hartree-Fock methods to matrix-product states, and provides a black box tool for basis optimisations in tensor network methods.

  19. Analysis and Optimisation of Pulse Dynamics for Magnetic Stimulation

    CERN Document Server

    Goetz, Stefan M; Gerhofer, Manuel G; Weyh, Thomas; Herzog, Hans-Georg

    2011-01-01

    Magnetic stimulation is a standard tool in brain research and many fields of neurology, as well as psychiatry. From a physical perspective, one key aspect of this method is the inefficiency of available setups. Whereas the spatial field properties have been studied rather intensively with coil designs, the dynamics have been neglected almost completely for a long time. Instead, the devices and their technology defined the waveform. Here, an analysis of the waveform space is performed. Based on these data, an appropriate optimisation approach is outlined which makes use of a modern nonlinear axon description of a mammalian motor nerve. The approach is based on a hybrid global-local method; different coordinate systems for describing the continuous waveforms in a limited parameter space are defined for sufficient stability. The results of the numeric setup suggest that there is plenty of room for waveforms with higher efficiency than the traditional shapes. One class of such pulses is analysed further. Although...

  20. Adaptive Java Optimisation using machine learning techniques

    OpenAIRE

    Long, Shun

    2004-01-01

    There is a continuing demand for higher performance, particularly in the area of scientific and engineering computation. In order to achieve high performance in the context of frequent hardware upgrading, software must be adaptable for portable performance. What is required is an optimising compiler that evolves and adapts itself to environmental change without sacrificing performance. Java has emerged as a dominant programming language widely used in a variety of application areas. Howeve...

  1. Optimised polarisation measurements on Bragg peaks

    Energy Technology Data Exchange (ETDEWEB)

    Lelievre-Berna, E. [Institut Laue Langevin, 6 rue Jules Horowitz, 38042 Grenoble Cedex 9 (France)]. E-mail: lelievre@ill.fr; Brown, P.J. [Institut Laue Langevin, 6 rue Jules Horowitz, 38042 Grenoble Cedex 9 (France); Tasset, F. [Institut Laue Langevin, 6 rue Jules Horowitz, 38042 Grenoble Cedex 9 (France)

    2007-07-15

    Experimentally the asymmetry A (or the flipping ratio R) is deduced from the two count rates observed for |+> and |-> neutron spin states. Since the count rates for the two spin states may be quite different and both require to be corrected for background, the optimum strategy for the measurement is important. We present here the theory for optimisation of the accuracy of measurement of A (or R) within the constraint of a fixed total measuring time.
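
    For reference, the quantities involved are related by the standard definitions below (with $N_+$ and $N_-$ the background-corrected counts in the two spin states); the paper's full optimisation additionally accounts for background and the fixed total measuring time, which is not reproduced here:

    $$ R = \frac{N_+}{N_-}, \qquad A = \frac{N_+ - N_-}{N_+ + N_-} = \frac{R-1}{R+1}, \qquad \sigma_A^2 \simeq \frac{4\,N_+ N_-}{(N_+ + N_-)^3}, \qquad \frac{\sigma_R}{R} \simeq \sqrt{\frac{1}{N_+} + \frac{1}{N_-}}, $$

    where the error estimates assume independent Poisson counting statistics for the two spin states.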

  2. Exploration of automatic optimisation for CUDA programming

    KAUST Repository

    Al-Mouhamed, Mayez

    2014-09-16

    © 2014 Taylor & Francis. Writing optimised compute unified device architecture (CUDA) program for graphic processing units (GPUs) is complex even for experts. We present a design methodology for a restructuring tool that converts C-loops into optimised CUDA kernels based on a three-step algorithm which are loop tiling, coalesced memory access and resource optimisation. A method for finding possible loop tiling solutions with coalesced memory access is developed and a simplified algorithm for restructuring C-loops into an efficient CUDA kernel is presented. In the evaluation, we implement matrix multiply (MM), matrix transpose (M-transpose), matrix scaling (M-scaling) and matrix vector multiply (MV) using the proposed algorithm. We present the analysis of the execution time and GPU throughput for the above applications, which favourably compare to other proposals. Evaluation is carried out while scaling the problem size and running under a variety of kernel configurations. The obtained speedup is about 28-35% for M-transpose compared to NVIDIA Software Development Kit, 33% speedup for MV compared to general purpose computation on graphics processing unit compiler, and more than 80% speedup for MM and M-scaling compared to CUDA-lite.

  3. Designing Lead Optimisation of MMP-12 Inhibitors

    Directory of Open Access Journals (Sweden)

    Matteo Borrotti

    2014-01-01

    Full Text Available The design of new molecules with desired properties is in general a very difficult problem, involving heavy experimentation with high investment of resources and possible negative impact on the environment. The standard approach consists of iteration among formulation, synthesis, and testing cycles, which is a very long and laborious process. In this paper we address the so-called lead optimisation process by developing a new strategy to design experiments and model data, namely, the evolutionary model-based design for optimisation (EDO). This approach is developed on a very small set of experimental points, which change in relation to the response of the experimentation according to the principle of evolution and insights gained through statistical models. This new procedure is validated on a data set provided as a test environment by Pickett et al. (2011), and the results are analysed and compared to the genetic algorithm optimisation (GAO) as a benchmark. The very good performance of the EDO approach is shown in its capacity to uncover the optimum value using a very limited set of experimental points, avoiding unnecessary experimentation.

  4. Procedure for Application-Oriented Optimisation of Marine Propellers

    Directory of Open Access Journals (Sweden)

    Florian Vesting

    2016-11-01

    Full Text Available The use of automated optimisation in engineering applications is emerging. In particular, nature-inspired algorithms are frequently used because of their variability and robust application in constrained and multi-objective optimisation problems. The purpose of this paper is the comparison of four different algorithms and several optimisation strategies on a set of seven test propellers in a realistic industrial design setting. The propellers are picked from real commercial projects and the manual final designs were delivered to customers. The different approaches are evaluated and final results of the automated optimisation toolbox are compared with designs generated in a manual design process. We identify a two-stage optimisation for marine propellers, where the geometry is first modified by parametrised geometry distribution curves to gather knowledge of the test case. Here we vary the optimisation strategy in terms of applied algorithms, constraints and objectives. A second supporting optimisation aims to improve the design by locally changing the geometry, based on the results of the first optimisation. The optimisation algorithms and strategies yield propeller designs that are comparable to the manually designed propeller blade geometries, thus being suitable as robust and advanced design support tools. The supporting optimisation, with local modification of the blade geometry and the proposed cavity shape constraints, features particularly good performance in modifying cavitation on the blade and is, with the AS NSGA-II (adaptive surrogate-assisted NSGA-II), superior in lead time.

  5. Uptake of copper and cerium by alfalfa, lettuce and cucumber exposed to nCeO2 and nCuO through the foliage or the roots: Impacts on food quality, physiological and agronomical parameters

    Science.gov (United States)

    Hong, Jie

    Nanotechnology is increasingly attracting attention not only for its variety of applications in modern life, but for the potential negative effects that nanomaterials (NMs) can cause in the environment and human health. Studies have shown varied effects of engineered nanoparticles (ENPs) on plants; however, most of these studies focused on the interaction of NPs with plants at root level. The increasing production and use of NPs have also increased the atmospheric amounts of NPs, which could be taken up by plants through their leaves. Cucumbers (Cucumis sativus L.) are broad leaf plants commonly grown both commercially and in home vegetable gardens that can be easily impacted by atmospheric NPs. However, there is limited information about the potential effects of these atmospheric NPs on cucumber. This research was aimed to determine (I) the possible uptake and translocation of cerium (Ce) by cucumber plants exposed to nCeO 2 (cerium dioxide nanoparticles, nanoceria) through the foliage, (II) the impacts of the NPs on physiological parameters of the plants and the effects on the nutritional value and quality of the fruits, and (III) the effects of seven copper compounds/nanoparticles applied to the growth medium of lettuce (Lactuca sativa) and alfalfa (Medicago sativa). For aim I, 15 day-old hydroponically grown cucumber plants were exposed to nCeO2, either as powder at 0.98 and 2.94 g/m3 or suspensions at 20, 40, 80, 160, 320 mg/l. Ce uptake was analyzed by using inductively coupled plasma-optical emission spectroscopy (ICP-OES) and transmission electron microscope (TEM). The activity of three stress enzymes was measured by UV/Vis. Ce was detected in all cucumber tissues and TEM images showed the presence of Ce in roots. Results suggested nCeO2 penetrated plants through leaves and moved to other plant parts. The biochemical assays showed nCeO2 also modified stress enzyme activities. For aim II, 15 day-old soil grown cucumber plants were foliar treated, separately

  6. Artificial Intelligence Mechanisms on Interactive Modified Simplex Method with Desirability Function for Optimising Surface Lapping Process

    Directory of Open Access Journals (Sweden)

    Pongchanun Luangpaiboon

    2014-01-01

    Full Text Available A study has been made to optimise the influential parameters of the surface lapping process. Lapping time, lapping speed, downward pressure, and charging pressure were chosen from the preliminary studies as parameters to determine process performance in terms of material removal, lap width, and clamp force. Nominal-the-best desirability functions were used to compromise the multiple responses into an overall desirability level, the D response. The conventional modified simplex (Nelder-Mead simplex) method and the interactive desirability function are used to optimise the parameter levels online in order to maximise the D response. In order to determine the lapping process parameters effectively, this research then applies two powerful artificial intelligence optimisation mechanisms, the harmony search and firefly algorithms. The recommended condition of (lapping time, lapping speed, downward pressure, charging pressure) at (33, 35, 6.0, 5.0) has been verified by performing confirmation experiments. It showed that the D response level increased to 0.96. When compared with the current operating condition, there is a decrease in the material removal and lap width, with improved process performance indices of 2.01 and 1.14, respectively. Similarly, there is an increase in the clamp force, with an improved process performance index of 1.58.
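
    For illustration, a nominal-the-best desirability of the Derringer-Suich form and its combination into the overall D response can be sketched as follows; the target values, bounds and response numbers are hypothetical and are not the study's specifications.

```python
# Sketch of nominal-the-best desirability functions combined into an overall D response.
import numpy as np

def d_nominal(y, low, target, high, s=1.0, t=1.0):
    """Desirability = 1 at the target, falling to 0 at the lower/upper bounds."""
    if y < low or y > high:
        return 0.0
    if y <= target:
        return ((y - low) / (target - low)) ** s
    return ((high - y) / (high - target)) ** t

# hypothetical specs: (low, target, high) for material removal, lap width, clamp force
specs = {"material_removal": (8.0, 10.0, 12.0),
         "lap_width":        (1.0, 1.2, 1.4),
         "clamp_force":      (45.0, 50.0, 55.0)}

# one candidate setting's measured responses (placeholder numbers)
responses = {"material_removal": 9.6, "lap_width": 1.25, "clamp_force": 51.0}

d = [d_nominal(responses[k], *specs[k]) for k in specs]
D = float(np.prod(d) ** (1.0 / len(d)))        # overall desirability = geometric mean
print("individual desirabilities:", np.round(d, 3), "overall D:", round(D, 3))
```

    A search method (modified simplex, harmony search or firefly algorithm, as in the study) would then adjust the four process parameters to drive D towards 1.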

  7. Optimising exercise training in prevention and treatment of diastolic heart failure (OptimEx-CLIN): rationale and design of a prospective, randomised, controlled trial.

    Science.gov (United States)

    Suchy, Christiane; Massen, Lilian; Rognmo, Oivind; Van Craenenbroeck, Emeline M; Beckers, Paul; Kraigher-Krainer, Elisabeth; Linke, Axel; Adams, Volker; Wisløff, Ulrik; Pieske, Burkert; Halle, Martin

    2014-11-01

    Heart failure with preserved left ventricular ejection fraction (HFpEF) currently affects more than seven million Europeans and is the only cardiovascular disease increasing in prevalence and incidence. No pharmacological agent has yet been shown to improve symptoms or prognosis. The most promising way to improve the pathophysiology and the impaired exercise tolerance of HFpEF patients seems to be exercise training, but the optimal approach and dose of exercise are still unknown. The major objective of the optimising exercise training in prevention and treatment of diastolic heart failure study (OptimEx-CLIN) is to define the optimal dose of exercise training in patients with HFpEF. In order to optimise adherence, supervision and economic aspects of exercise training, a novel telemedical approach will be introduced and investigated. In a prospective randomised multi-centre study, 180 patients with stable symptomatic HFpEF will be randomised (1:1:1) to moderate-intensity continuous training, high-intensity interval training, or a control group. The training intervention includes three months of supervised training followed by nine months of telemedically monitored home-based training. The primary endpoint is change in exercise capacity, defined as change in peak oxygen uptake (VO2peak) after three months, assessed by cardiopulmonary exercise testing. Secondary endpoints include diastolic filling pressure (E/e') and further echocardiographic and cardiopulmonary exercise testing (CPX) parameters, biomarkers, quality of life and endothelial function. Training sessions and physical activity will be monitored and documented throughout the study with accelerometers and heart rate monitors developed on a telemedical platform for the OptimEx-CLIN study. Inclusion of patients started in July 2014; first results are expected in 2017. © Authors 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  8. Railway vehicle performance optimisation using virtual homologation

    Science.gov (United States)

    Magalhães, H.; Madeira, J. F. A.; Ambrósio, J.; Pombo, J.

    2016-09-01

    Unlike regular automotive vehicles, which are designed to travel on different types of roads, railway vehicles travel mostly on the same route during their life cycle. To accept the operation of a railway vehicle in a particular network, a homologation process is required according to local standard regulations. In Europe, the standards EN 14363 and UIC 518, which are used for railway vehicle acceptance, require on-track tests and/or numerical simulations. An important advantage of using virtual homologation is the reduction of the high costs associated with on-track tests by studying the railway vehicle performance in different operation conditions. This work proposes a methodology for the improvement of railway vehicle design with the objective of its operation in selected railway tracks by using optimisation. The analyses required for the vehicle improvement are performed under the control of the optimisation method 'global and local optimisation using direct search'. To quantify the performance of the vehicle, a new objective function is proposed, which includes: a Dynamic Performance Index, defined as a weighted sum of the indices obtained from the virtual homologation process; the non-compensated acceleration, which is related to the operational velocity; and a penalty associated with cases where the vehicle presents an unacceptable dynamic behaviour according to the standards. Thus, the optimisation process intends not only to improve the quality of the vehicle in terms of running safety and ride quality, but also to increase the vehicle availability via the reduction of the time for a journey while ensuring its operational acceptance under the standards. The design variables include the suspension characteristics and the operational velocity of the vehicle, which are allowed to vary within an acceptable range. The results of the optimisation lead to a global minimum of the objective function in which the suspension characteristics of the vehicle are

  9. Optimised sensitivity to leptonic CP violation from spectral information: the LBNO case at 2300 km baseline

    CERN Document Server

    Agarwalla, S K; Aittola, M; Alekou, A; Andrieu, B; Antoniou, F; Asfandiyarov, R; Autiero, D; Bésida, O; Balik, A; Ballett, P; Bandac, I; Banerjee, D; Bartmann, W; Bay, F; Biskup, B; Blebea-Apostu, A M; Blondel, A; Bogomilov, M; Bolognesi, S; Borriello, E; Brancus, I; Bravar, A; Buizza-Avanzini, M; Caiulo, D; Calin, M; Calviani, M; Campanelli, M; Cantini, C; Cata-Danil, G; Chakraborty, S; Charitonidis, N; Chaussard, L; Chesneanu, D; Chipesiu, F; Crivelli, P; Dawson, J; De Bonis, I; Declais, Y; Sanchez, P Del Amo; Delbart, A; Di Luise, S; Duchesneau, D; Dumarchez, J; Efthymiopoulos, I; Eliseev, A; Emery, S; Enqvist, T; Enqvist, K; Epprecht, L; Erykalov, A N; Esanu, T; Franco, D; Friend, M; Galymov, V; Gavrilov, G; Gendotti, A; Giganti, C; Gilardoni, S; Goddard, B; Gomoiu, C M; Gornushkin, Y A; Gorodetzky, P; Haesler, A; Hasegawa, T; Horikawa, S; Huitu, K; Izmaylov, A; Jipa, A; Kainulainen, K; Karadzhov, Y; Khabibullin, M; Khotjantsev, A; Kopylov, A N; Korzenev, A; Kosyanenko, S; Kryn, D; Kudenko, Y; Kuusiniemi, P; Lazanu, I; Lazaridis, C; Levy, J -M; Loo, K; Maalampi, J; Margineanu, R M; Marteau, J; Martin-Mari, C; Matveev, V; Mazzucato, E; Mefodiev, A; Mineev, O; Mirizzi, A; Mitrica, B; Murphy, S; Nakadaira, T; Narita, S; Nesterenko, D A; Nguyen, K; Nikolics, K; Noah, E; Novikov, Yu; Oprima, A; Osborne, J; Ovsyannikova, T; Papaphilippou, Y; Pascoli, S; Patzak, T; Pectu, M; Pennacchio, E; Periale, L; Pessard, H; Popov, B; Ravonel, M; Rayner, M; Resnati, F; Ristea, O; Robert, A; Rubbia, A; Rummukainen, K; Saftoiu, A; Sakashita, K; Sanchez-Galan, F; Sarkamo, J; Saviano, N; Scantamburlo, E; Sergiampietri, F; Sgalaberna, D; Shaposhnikova, E; Slupecki, M; Smargianaki, D; Stanca, D; Steerenberg, R; Sterian, A R; Sterian, P; Stoica, S; Strabel, C; Suhonen, J; Suvorov, V; Toma, G; Tonazzo, A; Trzaska, W H; Tsenov, R; Tuominen, K; Valram, M; Vankova-Kirilova, G; Vannucci, F; Vasseur, G; Velotti, F; Velten, P; Venturi, V; Viant, T; Vihonen, S; Vincke, H; Vorobyev, A; Weber, A; Wu, S; Yershov, N; Zambelli, L; Zito, M

    2014-01-01

    One of the main goals of the Long Baseline Neutrino Observatory (LBNO) is to study the $L/E$ behaviour (spectral information) of the electron neutrino and antineutrino appearance probabilities, in order to determine the unknown CP-violation phase $\\delta_{CP}$ and discover CP-violation in the leptonic sector. The result is based on the measurement of the appearance probabilities in a broad range of energies, covering the 1st and 2nd oscillation maxima, at a very long baseline of 2300 km. The sensitivity of the experiment can be maximised by optimising the energy spectra of the neutrino and anti-neutrino fluxes. Such an optimisation requires exploring an extended range of parameters describing in detail the geometries and properties of the primary protons, the hadron target and the focusing elements in the neutrino beam line. In this paper we present a numerical solution that leads to optimised energy spectra and study its impact on the sensitivity of LBNO to discover leptonic CP violation. In the optimised flux ...

  10. Optimisation of the Population Monte Carlo algorithm: Application to constraining isocurvature models with cosmic microwave background data

    CERN Document Server

    Moodley, Darell

    2015-01-01

    We optimise the parameters of the Population Monte Carlo algorithm using numerical simulations. The optimisation is based on an efficiency statistic related to the number of samples evaluated prior to convergence, and is applied to a D-dimensional Gaussian distribution to derive optimal scaling laws for the algorithm parameters. More complex distributions such as the banana and bimodal distributions are also studied. We apply these results to a cosmological parameter estimation problem that uses CMB anisotropy data from the WMAP nine-year release to constrain a six parameter adiabatic model and a fifteen parameter admixture model, consisting of correlated adiabatic and isocurvature perturbations. In the case of the adiabatic model and the admixture model we find respective degradation factors of three and twenty, relative to the optimal Gaussian case, due to degeneracies in the underlying parameter space. The WMAP nine-year data constrain the admixture model to have an isocurvature fraction of at most $36.3 \\...
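
    As a rough illustration of the algorithm whose parameters are being tuned in this record, the sketch below implements a minimal Population Monte Carlo loop (adaptive importance sampling with resampling) for a toy D-dimensional Gaussian target. The population size, kernel width and the simple annealing rule are illustrative assumptions, not the optimal scaling laws derived in the paper.

        import numpy as np

        def log_target(x):
            """Toy target density: a standard D-dimensional Gaussian (constants dropped)."""
            return -0.5 * np.sum(x**2, axis=1)

        def pmc(dim=4, pop_size=500, n_iter=20, sigma=2.0, seed=0):
            rng = np.random.default_rng(seed)
            centres = rng.normal(0.0, 3.0, size=(pop_size, dim))     # initial parents
            for _ in range(n_iter):
                # Propose one child per parent from a Gaussian kernel of width sigma.
                children = centres + sigma * rng.standard_normal((pop_size, dim))
                # Importance weight of each child: target / proposal kernel.
                log_q = -0.5 * np.sum((children - centres) ** 2, axis=1) / sigma**2
                log_w = log_target(children) - log_q
                w = np.exp(log_w - log_w.max())
                w /= w.sum()
                ess = 1.0 / np.sum(w**2)                 # effective sample size
                # Resampling step: weighted children become the next generation of parents.
                centres = children[rng.choice(pop_size, size=pop_size, p=w)]
                # Simple kernel-width annealing; the paper instead derives optimal
                # scaling laws for such algorithm parameters.
                sigma = max(0.3, 0.95 * sigma)
            return children, w, ess

        samples, weights, ess = pmc()
        print(f"effective sample size in the final iteration: {ess:.1f}")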

  11. Identification of the mechanical behaviour of biopolymer composites using multistart optimisation technique

    KAUST Repository

    Brahim, Elhacen

    2013-10-01

    This paper aims at identifying the mechanical behaviour of starch-zein composites as a function of zein content using a novel optimisation technique. Starting from bending experiments, the force-deflection response is used to derive adequate mechanical parameters representing the elastic-plastic behaviour of the studied material. For this purpose, a finite element model is developed accounting for a simple hardening rule, namely an isotropic hardening model. A deterministic optimisation strategy is implemented to provide rapid matching between the parameters of the constitutive law and the observed behaviour. Results are discussed based on the robustness of the numerical approach and the predicted tendencies with regard to the role of zein content. © 2013 Elsevier Ltd.

  12. Cellular internalisation kinetics and cytotoxic properties of statistically designed and optimised neo-geometric copper nanocrystals.

    Science.gov (United States)

    Murugan, Karmani; Choonara, Yahya E; Kumar, Pradeep; du Toit, Lisa C; Pillay, Viness

    2017-09-01

    This study aimed to highlight a statistical design to precisely engineer homogenous geometric copper nanoparticles (CuNPs) for enhanced intracellular drug delivery as a function of geometrical structure. CuNPs with a dual functionality, comprising geometric attributes for enhanced cell uptake and cytotoxic activity on proliferating cells, were synthesized as a novel drug delivery system. This paper investigated the defined concentrations of two key surfactants used in the reaction to mutually control and manipulate nano-shape and to optimise the geometric nanosystems. A statistical experimental design comprising a full factorial model served as a refining factor to achieve homogenous geometric nanoparticles using a one-pot method for the systematic optimisation of the geometric CuNPs. The shapes of the nanoparticles were investigated to determine the result of the surfactant variation, and the zeta potential was studied to ensure the stability of the system and establish a nanosystem of low aggregation potential. After optimisation of the nano-shapes, extensive cellular internalisation studies were conducted to elucidate the effect of geometric CuNPs on uptake rates, in addition to the vital toxicity assays needed to further understand the cellular effect of geometric CuNPs as a drug delivery system. In addition to geometry, the volume, surface area, orientation to the cell membrane and colloidal stability are also addressed. The outcomes of the study demonstrated the success of homogenous geometric NP formation, in addition to a stable surface charge. The findings of the study can be utilized for the development of a drug delivery system for promoted cellular internalisation and effective drug delivery. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Automatic optimisation of gamma dose rate sensor networks: The DETECT Optimisation Tool

    DEFF Research Database (Denmark)

    Helle, K.B.; Müller, T.O.; Astrup, Poul;

    2014-01-01

    chosen using regular grids or according to administrative constraints. Nowadays, however, the choice can be based on more realistic risk assessment, as it is possible to simulate potential radioactive plumes. To support sensor planning, we developed the DETECT Optimisation Tool (DOT) within the scope...... monitoring network for early detection of radioactive plumes or for the creation of dose maps. The DOT is implemented as a stand-alone easy-to-use JAVA-based application with a graphical user interface and an R backend. Users can run evaluations and optimisations, and display, store and download the results...

  14. Real-time optimisation of the Hoa Binh reservoir, Vietnam

    DEFF Research Database (Denmark)

    Richaud, Bertrand; Madsen, Henrik; Rosbjerg, Dan

    2011-01-01

    -time optimisation. First, the simulation-optimisation framework is applied for optimising reservoir operating rules. Secondly, real-time and forecast information is used for on-line optimisation that focuses on short-term goals, such as flood control or hydropower generation, without compromising the deviation......Multi-purpose reservoirs often have to be managed according to conflicting objectives, which requires efficient tools for trading-off the objectives. This paper proposes a multi-objective simulation-optimisation approach that couples off-line rule curve optimisation with on-line real...... of the forecast is addressed. The results illustrate the importance of a sufficient forecast lead time to start pre-releasing water in flood situations....

  15. Optimisation of interdigitated back contacts solar cells by two-dimensional numerical simulation

    Energy Technology Data Exchange (ETDEWEB)

    Nichiporuk, O.; Kaminski, A.; Lemiti, M.; Fave, A. [Instituit National des Sciences Appliquees Lyon, Villeurbanne (France). Lab. de Physique de la Matiere; Skryshevsky, V. [National Taras Shevchenko Univ., Kiev (Ukraine). Radiophysics Dept.

    2005-04-01

    In this paper we present the results of the simulation of an interdigitated back contact solar cell on a thin-film (∼µm) silicon layer. The influence of several parameters (surface recombination rate, substrate thickness and type, diffusion length, device geometry, doping levels) on the device characteristics is simulated using the accurate two-dimensional numerical simulator DESSIS, which allows the cell design to be optimised. (Author)

  16. Using multi-objective optimisation to integrate alpine regions in groundwater flow models

    Directory of Open Access Journals (Sweden)

    V. Rojanschi

    2005-01-01

    Full Text Available Within the research project GLOWA Danube, a groundwater flow model was developed for the Upper Danube basin. This paper reports on a preliminary study to include the alpine part of the catchment in the model. A conceptual model structure was implemented and tested using multi-objective optimisation analysis. The performance of the model and the identifiability of the parameters were studied. A possible over-parameterisation of the model was also tested using principal component analysis.

  17. Mechatronic System Design Based On An Optimisation Approach

    DEFF Research Database (Denmark)

    Andersen, Torben Ole; Pedersen, Henrik Clemmensen; Hansen, Michael Rygaard

    The envisaged objective of this project is to extend the current state of the art regarding the design of complex mechatronic systems utilizing an optimisation approach. We propose to investigate a novel framework for mechatronic system design, the novelty and originality being the use...... of optimisation techniques. The methods used to optimise/design within the classical disciplines will be identified and extended to mechatronic system design....

  18. Analysis of Strength on Thick Plate Part using Genetic Algorithm Optimisation Method

    Directory of Open Access Journals (Sweden)

    Azlan S.M.

    2016-01-01

    Full Text Available This study focuses on the optimisation of the injection moulding parameters to maximise the strength of moulded parts using simulation software. The moulded parts were injected with Acrylonitrile-Butadiene-Styrene (ABS), whereas mould temperature, melt temperature, packing pressure and packing time were selected as the variable process parameters. The polynomial model obtained using Design of Experiments (DOE) was integrated with the Response Surface Methodology (RSM) and a Central Composite Design (CCD). The RSM was supported with a Genetic Algorithm (GA) to anticipate the optimum values of the processing parameters giving the highest strength. It was found that the strength of the parts can be improved by 2.2% using the methodology reported herein.

  19. Estimation of the Influence of Power System Mathematical Model Parameter Uncertainty on PSS2A System Stabilizers

    Directory of Open Access Journals (Sweden)

    Adrian Nocoń

    2015-09-01

    Full Text Available This paper presents an analysis of the influence of the uncertainty of power system mathematical model parameters on the optimised parameters of PSS2A system stabilizers. The optimisation of the power system stabilizer parameters was based on polyoptimisation (multi-criteria optimisation). The optimisation criteria were determined for disturbances occurring in a multi-machine power system, taking into account transient waveforms associated with electromechanical swings (instantaneous power, angular speed and terminal voltage waveforms of the generators). A genetic algorithm with floating-point encoding, tournament selection, mean crossover and perturbative mutations, modified for the needs of the investigations, was used for the optimisation. The impact of the uncertainties on the quality of operation of power system stabilizers with optimised parameters has been evaluated using various deformation factors.
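
    For readers unfamiliar with the ingredients named above, the following sketch shows a generic genetic algorithm with floating-point encoding, tournament selection, mean crossover and perturbative (Gaussian) mutation applied to a placeholder cost function; the real optimisation criteria are computed from multi-machine power-system transients, which are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)

        def objective(x):
            # Placeholder cost standing in for the multi-criteria stabilizer index
            # (lower is better); the real criteria come from power-system transients.
            return np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.abs(x))

        def genetic_algorithm(n_vars=6, bounds=(-1.0, 1.0), pop_size=40, n_gen=100,
                              tournament_k=3, mutation_sigma=0.05):
            lo, hi = bounds
            pop = rng.uniform(lo, hi, size=(pop_size, n_vars))   # floating-point encoding
            for _ in range(n_gen):
                fitness = np.array([objective(ind) for ind in pop])
                new_pop = [pop[fitness.argmin()].copy()]         # keep the elite individual
                while len(new_pop) < pop_size:
                    # Tournament selection of two parents.
                    parents = []
                    for _ in range(2):
                        contenders = rng.choice(pop_size, tournament_k, replace=False)
                        parents.append(pop[contenders[fitness[contenders].argmin()]])
                    # Mean crossover followed by perturbative (Gaussian) mutation.
                    child = 0.5 * (parents[0] + parents[1])
                    child += rng.normal(0.0, mutation_sigma, n_vars)
                    new_pop.append(np.clip(child, lo, hi))
                pop = np.array(new_pop)
            fitness = np.array([objective(ind) for ind in pop])
            return pop[fitness.argmin()], fitness.min()

        best_x, best_f = genetic_algorithm()
        print("best parameters:", np.round(best_x, 3), "cost:", round(best_f, 4))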

  20. Improving Vector Evaluated Particle Swarm Optimisation by incorporating nondominated solutions.

    Science.gov (United States)

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.
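
    A minimal sketch of the key modification described in this record, assuming a simple two-objective minimisation: the swarm guide is drawn from an archive of mutually nondominated solutions rather than from the best solution of the other swarm. Only the dominance test and archive update are shown; the full particle-update equations of VEPSO are omitted.

        import numpy as np

        def dominates(a, b):
            """True if objective vector a Pareto-dominates b (minimisation)."""
            return bool(np.all(a <= b) and np.any(a < b))

        def update_archive(archive, candidate):
            """Keep only mutually nondominated objective vectors."""
            if any(dominates(a, candidate) for a in archive):
                return archive                              # candidate is dominated
            archive = [a for a in archive if not dominates(candidate, a)]
            archive.append(candidate)
            return archive

        # Toy usage: objective vectors produced by the swarms feed a shared archive,
        # from which a guide would be drawn instead of the other swarm's best solution.
        rng = np.random.default_rng(0)
        archive = []
        for f in rng.random((50, 2)):
            archive = update_archive(archive, f)
        print(f"{len(archive)} nondominated guides available")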

  1. Robust Optimisation Approach for Vehicle Routing Problems with Uncertainty

    Directory of Open Access Journals (Sweden)

    Liang Sun

    2015-01-01

    Full Text Available We formulated a solution procedure for vehicle routing problems with uncertainty (VRPU for short) with regard to future demand and transportation cost. Unlike E-SDROA (expectation semideviation robust optimisation approach) for solving the proposed problem, the formulation focuses on robust optimisation considering situations possibly related to bidding and capital budgets. In addition, numerical experiments showed significant increments in the robustness of the solutions without much loss in solution quality. The differences and similarities of the robust optimisation model and existing robust optimisation approaches were also compared.

  2. GIS-based approach for optimised collection of household waste in Mostaganem city (Western Algeria).

    Science.gov (United States)

    Abdelli, I S; Abdelmalek, F; Djelloul, A; Mesghouni, K; Addou, A

    2016-05-01

    This work proposes an optimisation of municipal solid waste collection in terms of collection cost and polluting emissions (carbon oxides, carbon dioxides, nitrogen oxides and particulate matter). The method is based on a simultaneous optimisation of the vehicle routing (distance and time travelled) and of the routing system for household waste collection, based on the existing network of containers, the capacity of the vehicles and the quantities generated at every collection point. The vehicle routing optimisation involves a geographical information system. This optimisation has enabled a reduction of the travelled distances, collection time, fuel consumption and polluting emissions. Pertinent parameters affecting the fuel consumption have been taken into account, such as the state of the road, the vehicle speed on the different paths, the vehicle load and the collection frequencies. Several scenarios have been proposed. The results show the importance of the construction of a waste transfer station, which can reduce the cost of household waste collection and the emissions of waste transfer pollutants. Among the five proposed scenarios, the fourth scenario (constructing a waste transfer centre) performed best: the optimised travelled distance of the new routes was reduced by 71.81%, the fuel consumption by 72.05% and the total cost of the collection by 46.8%. For the polluting emissions, the reduction was 60.2% for carbon oxides, 67.9% for carbon dioxides, 74.2% for nitrogen oxides and 65% for particulate matter.

  3. Biorefinery plant design, engineering and process optimisation

    DEFF Research Database (Denmark)

    Holm-Nielsen, Jens Bo; Ehimen, Ehiazesebhor Augustine

    2014-01-01

    applicable for the planning and upgrading of intended biorefinery systems, and includes discussions on the operation of an existing lignocellulosic-based biorefinery platform. Furthermore, technical considerations and tools (i.e., process analytical tools) which could be applied to optimise the operations......Before new biorefinery systems can be implemented, or the modification of existing single product biomass processing units into biorefineries can be carried out, proper planning of the intended biorefinery scheme must be performed initially. This chapter outlines design and synthesis approaches...

  4. SIROCCO. Silent rotors by acoustic optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Schepers, J.G.; Curvers, A. [ECN Wind Energy, Petten (Netherlands); Oerlemans, S. [National Aerospace Laboratory NLR, Amsterdam (Netherlands); Braun, K.; Lutz, T.; Herrig, A.; Wuerz, W. [University of Stuttgart, Stuttgart (Germany); Matesanz, A.; Garcillan, L. [Gamesa Eolica, Madrid (Spain); Fisher, M.; Koegler, K.; Maeder, T. [GE Wind Energy/GE Global Research (United States)

    2007-07-15

    In this paper the results from the European 5th Framework project 'SIROCCO' are described. The project started in January 2003 and will end in August 2007. The main aim of the SIROCCO project is to reduce wind-turbine aerodynamic noise significantly while maintaining the aerodynamic performance. This is achieved by designing new acoustically and aerodynamically optimised airfoils for the outer part of the blade. The project focussed primarily on reducing trailing edge noise, which was broadly believed to be the dominant noise mechanism of modern wind turbines.

  5. An integrated and dynamic optimisation model for the multi-level emergency logistics network in anti-bioterrorism system

    Science.gov (United States)

    Liu, Ming; Zhao, Lindu

    2012-08-01

    Demand for emergency resources is usually uncertain and varies quickly in anti-bioterrorism system. Besides, emergency resources which had been allocated to the epidemic areas in the early rescue cycle will affect the demand later. In this article, an integrated and dynamic optimisation model with time-varying demand based on the epidemic diffusion rule is constructed. The heuristic algorithm coupled with the MATLAB mathematical programming solver is adopted to solve the optimisation model. In what follows, the application of the optimisation model as well as a short sensitivity analysis of the key parameters in the time-varying demand forecast model is presented. The results show that both the model and the solution algorithm are useful in practice, and both objectives of inventory level and emergency rescue cost can be controlled effectively. Thus, it can provide some guidelines for decision makers when coping with emergency rescue problem with uncertain demand, and offers an excellent reference when issues pertain to bioterrorism.

  6. Ant Colony Optimisation for Backward Production Scheduling

    Directory of Open Access Journals (Sweden)

    Leandro Pereira dos Santos

    2012-01-01

    Full Text Available The main objective of a production scheduling system is to assign tasks (orders or jobs) to resources and sequence them as efficiently and economically (i.e. optimised) as possible. Achieving this goal is a difficult task in complex environments where capacity is usually limited. In these scenarios, finding an optimal solution, if possible at all, demands a large amount of computer time. For this reason, in many cases, a good solution that is quickly found is preferred. In such situations, the use of metaheuristics is an appropriate strategy. In the last two decades, some off-the-shelf systems have been developed using such techniques. This paper presents and analyses the development of a shop-floor scheduling system that uses ant colony optimisation (ACO) in a backward scheduling problem in a manufacturing scenario with single-stage processing, parallel resources, and flexible routings. This scenario was found in a large food industry where the corresponding author worked as a consultant for more than a year. This work demonstrates the applicability of this artificial intelligence technique. In fact, ACO proved to be as efficient as branch-and-bound, while executing much faster.
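
    To make the ACO idea above concrete, here is a small self-contained sketch that sequences jobs on a single machine to minimise total tardiness; the instance data, pheromone update rule and parameter values are toy assumptions and do not reflect the industrial scheduling system described in the record.

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy single-machine instance: processing times and due dates for 8 jobs.
        proc = np.array([4, 2, 6, 3, 5, 2, 7, 3], dtype=float)
        due = np.array([10, 6, 22, 9, 18, 5, 30, 12], dtype=float)
        n = len(proc)

        def total_tardiness(seq):
            t, tard = 0.0, 0.0
            for j in seq:
                t += proc[j]
                tard += max(0.0, t - due[j])
            return tard

        def aco(n_ants=20, n_iter=100, alpha=1.0, beta=2.0, rho=0.1):
            tau = np.ones((n, n))                    # pheromone: job j placed at position i
            eta = 1.0 / (due + proc)                 # heuristic: favour early due dates
            best_seq, best_cost = None, np.inf
            for _ in range(n_iter):
                for _ in range(n_ants):
                    remaining, seq = list(range(n)), []
                    for pos in range(n):
                        weights = (tau[pos, remaining] ** alpha) * (eta[remaining] ** beta)
                        probs = weights / weights.sum()
                        seq.append(remaining.pop(rng.choice(len(remaining), p=probs)))
                    cost = total_tardiness(seq)
                    if cost < best_cost:
                        best_seq, best_cost = seq, cost
                tau *= (1.0 - rho)                   # pheromone evaporation
                for pos, job in enumerate(best_seq): # reinforce the best-so-far sequence
                    tau[pos, job] += 1.0 / (1.0 + best_cost)
            return best_seq, best_cost

        seq, cost = aco()
        print("best sequence:", seq, "total tardiness:", cost)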

  7. Noise aspects at aerodynamic blade optimisation projects

    Energy Technology Data Exchange (ETDEWEB)

    Schepers, J.G. [Netherlands Energy Research Foundation, Petten (Netherlands)

    1997-12-31

    This paper shows an example of an aerodynamic blade optimisation, using the program PVOPT. PVOPT calculates the optimal wind turbine blade geometry such that the maximum energy yield is obtained. Using the aerodynamically optimal blade design as a basis, the possibilities for noise reduction are investigated. The aerodynamically optimised geometry from PVOPT is the `real` optimum (up to the last decimal). The most important conclusion from this study is that it is worthwhile to investigate the behaviour of the objective function (in the present case the energy yield) around the optimum: if the optimum is flat, there is a possibility to apply modifications to the optimum configuration with only a limited loss in energy yield. It is obvious that the modified configurations emit a different (and possibly lower) noise level. In the BLADOPT program (the successor of PVOPT) it will be possible to quantify the noise level and hence to assess the reduced noise emission more thoroughly. At present the most promising approaches for noise reduction are believed to be a reduction of the rotor speed (if at all possible), and a reduction of the tip angle by means of low-lift profiles or decreased twist at the outboard stations. These modifications were possible without a significant loss in energy yield. (LN)

  8. Multi-objective evolutionary optimisation for product design and manufacturing

    CERN Document Server

    2011-01-01

    Presents state-of-the-art research in the area of multi-objective evolutionary optimisation for integrated product design and manufacturing; provides a comprehensive review of the literature; gives in-depth descriptions of recently developed innovative and novel methodologies, algorithms and systems in the area of modelling, simulation and optimisation.

  9. Optimisation of GnRH antagonist use in ART

    NARCIS (Netherlands)

    Hamdine, O.

    2014-01-01

    This thesis focuses on the optimisation of controlled ovarian stimulation for IVF using exogenous FSH and GnRH antagonist co-treatment, by studying the timing of the initiation of GnRH antagonist co-medication and the role of ovarian reserve markers in optimising ovarian response and reproductive ou

  10. Aerodynamic shape parameterisation and optimisation of novel configurations

    NARCIS (Netherlands)

    Straathof, M.H.; Van Tooren, M.J.L.; Voskuijl, M.; Koren, B.

    2008-01-01

    The Multi-Disciplinary Design Optimisation (MDO) process can be supported by partial automation of analysis and optimisation steps. Design and Engineering Engines (DEE) are useful concepts to structure this type of automation. Within the DEE, a product can be parametrically defined using Knowledge

  11. DACIA LOGAN LIVE AXLE OPTIMISATION USING COMPUTER GRAPHICS

    Directory of Open Access Journals (Sweden)

    KIRALY Andrei

    2017-05-01

    Full Text Available The paper presents some contributions to the calculation and optimisation of a live axle used on the Dacia Logan, using computer graphics software to create the model and afterwards using FEA evaluation to determine the effectiveness of the optimisation. Thus, using specialised computer software, a simulation was made and the results were compared with measurements on the real prototype.

  12. There, and Back Again Quantum Theory and Global Optimisation

    CERN Document Server

    Audenaert, K M R

    2004-01-01

    We consider a problem in quantum theory that can be formulated as an optimisation problem and present a global optimisation algorithm for solving it, the foundation of which relies in turn on a theorem from quantum theory. To wit, we consider the maximal output purity $\

  13. Kriging based robust optimisation algorithm for minimax problems in electromagnetics

    Directory of Open Access Journals (Sweden)

    Li Yinjiang

    2016-12-01

    Full Text Available The paper discusses some of the recent advances in kriging based worst-case design optimisation and proposes a new two-stage approach to solve practical problems. The efficiency of the infill points allocation is improved significantly by adding an extra layer of optimisation enhanced by a validation process.

  14. GAOS: Spatial optimisation of crop and nature within agricultural fields

    NARCIS (Netherlands)

    Bruin, de S.; Janssen, H.; Klompe, A.; Lerink, P.; Vanmeulebrouk, B.

    2010-01-01

    This paper proposes and demonstrates a spatial optimiser that allocates areas of inefficient machine manoeuvring to field margins thus improving the use of available space and supporting map-based Controlled Traffic Farming. A prototype web service (GAOS) allows farmers to optimise tracks within the

  15. Multi-criterion scantling optimisation of cruise ships

    OpenAIRE

    2010-01-01

    A numerical tool for the optimisation of the scantlings of a ship is extended by considering production cost, weight and moment of inertia in the objective function. A multi-criteria optimisation of a passenger ship is conducted to illustrate the analysis process. Pareto frontiers are obtained and results are verified with Bureau Veritas rules.

  16. Validation of optimised population synthesis through mock spectra and Galactic globular clusters

    CERN Document Server

    Barber, Christopher; Roediger, Joel; Schiavon, Ricardo

    2014-01-01

    Optimised population synthesis provides an empirical method to extract the relative mix of stellar evolutionary stages and the distribution of atmospheric parameters within unresolved stellar systems, yet a robust validation of this method is still lacking. We here provide a calibration of population synthesis via non-linear bound-constrained optimisation of stellar populations based upon optical spectra of mock stellar systems and observed Galactic Globular Clusters (GGCs). The MILES stellar library is used as a basis for mock spectra as well as templates for the synthesis of deep GGC spectra from Schiavon et al. (2005). Optimised population synthesis applied to mock spectra recovers mean light-weighted stellar atmospheric parameters to within a mean uncertainty of 240 K, 0.04 dex, and 0.03 dex for T_eff, log(g), and [Fe/H], respectively. Decompositions of both mock and GGC spectra confirm the method's ability to recover the expected mean light-weighted metallicity in dust-free conditions (E[B-V] < 0.15) ...

  17. Optimisation of the biological pretreatment of wheat straw with white-rot fungi for ethanol production.

    Science.gov (United States)

    López-Abelairas, M; Álvarez Pallín, M; Salvachúa, D; Lú-Chau, T; Martínez, M J; Lema, J M

    2013-09-01

    The biological pretreatment of lignocellulosic biomass for the production of bioethanol is an environmentally friendly alternative to the most frequently used process, steam explosion (SE). However, this pretreatment cannot yet be implemented industrially due to the long incubation times required. The main objective of this work was to test the viability of, and optimise, the biological pretreatment of lignocellulosic biomass, which uses ligninolytic fungi (Pleurotus eryngii and Irpex lacteus) in a solid-state fermentation of sterilised wheat straw complemented with a mild alkali treatment. In this study, the most important parameters of the mechanical and thermal substrate conditioning and of the fungal fermentation were optimised to improve sugar recovery. The largest digestibilities were achieved with fermentation with I. lacteus under optimised conditions, under which cellulose and hemicellulose digestibility increased after 21 days of pretreatment from 16 to 100 % and from 12 to 87 %, respectively. The maximum glucose yield (84 % of the cellulose available in the raw material) was obtained after only 14 days of pretreatment, with an overall ethanol yield of 74 % of the theoretical value, which is similar to that reached with SE.

  18. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... of the Thyroid Scan and Uptake? What is a Thyroid Scan and Uptake? A thyroid scan is ...

  19. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... of nuclear medicine imaging. The radioactive iodine uptake test (RAIU) is also known as a thyroid uptake. ...

  20. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... of the Thyroid Scan and Uptake? What is a Thyroid Scan and Uptake? A thyroid scan is ...

  1. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... of nuclear medicine imaging. The radioactive iodine uptake test (RAIU) is also known as a thyroid uptake. ...

  2. Optimising rigid motion compensation for small animal brain PET imaging

    Science.gov (United States)

    Spangler-Bickell, Matthew G.; Zhou, Lin; Kyme, Andre Z.; De Laat, Bart; Fulton, Roger R.; Nuyts, Johan

    2016-10-01

    Motion compensation (MC) in PET brain imaging of awake small animals is attracting increased attention in preclinical studies since it avoids the confounding effects of anaesthesia and enables behavioural tests during the scan. A popular MC technique is to use multiple external cameras to track the motion of the animal’s head, which is assumed to be represented by the motion of a marker attached to its forehead. In this study we have explored several methods to improve the experimental setup and the reconstruction procedures of this method: optimising the camera-marker separation; improving the temporal synchronisation between the motion tracker measurements and the list-mode stream; post-acquisition smoothing and interpolation of the motion data; and list-mode reconstruction with appropriately selected subsets. These techniques have been tested and verified on measurements of a moving resolution phantom and brain scans of an awake rat. The proposed techniques improved the reconstructed spatial resolution of the phantom by 27% and of the rat brain by 14%. We suggest a set of optimal parameter values to use for awake animal PET studies and discuss the relative significance of each parameter choice.
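
    The record above mentions temporal synchronisation, post-acquisition smoothing and interpolation of the tracker motion data. The sketch below shows one plausible way to do these three steps with NumPy/SciPy on simulated 6-DOF tracker samples; the filter choice, clock offset and sampling rates are assumptions for illustration, not the optimal values identified in the study.

        import numpy as np
        from scipy.ndimage import uniform_filter1d

        rng = np.random.default_rng(3)

        # Hypothetical optical-tracker stream: timestamps (s) and 6-DOF rigid-motion
        # parameters (3 translations, 3 rotations) sampled at ~60 Hz.
        t_track = np.linspace(0.0, 10.0, 600)
        pose = np.cumsum(rng.normal(0.0, 0.02, (600, 6)), axis=0)

        # 1) Post-acquisition smoothing of the tracker stream (a moving average here;
        #    the filter actually used in the study is not reproduced).
        pose_smooth = uniform_filter1d(pose, size=9, axis=0, mode="nearest")

        # 2) Temporal synchronisation: apply an estimated offset between the tracker
        #    clock and the list-mode clock (the value below is purely illustrative).
        clock_offset = 0.012
        t_sync = t_track + clock_offset

        # 3) Interpolate the motion parameters onto list-mode event times so that
        #    each coincidence can be transformed back to the reference position.
        t_events = np.sort(rng.uniform(0.0, 10.0, 5000))
        pose_at_events = np.column_stack(
            [np.interp(t_events, t_sync, pose_smooth[:, k]) for k in range(6)]
        )
        print(pose_at_events.shape)   # (5000, 6): one rigid transform per event time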

  3. On the Selection of MAC Optimised Routing Protocol for VANET

    Directory of Open Access Journals (Sweden)

    Kanu Priya

    2017-02-01

    Full Text Available In today's era of modernization, the concepts of smart vehicles, smart cities and automated vehicles are trending day by day. VANET (Vehicular Adhoc Network) has been emerging as a potential candidate to enable these smart applications. Though VANET is very similar to MANET (Mobile Adhoc Network), it faces more severe challenges due to hostile channel conditions and a high degree of mobility, so a lot of work related to the MAC and network layers needs attention from network designers. In this paper the MAC layer has been optimised in terms of queue size by using the QoS parameters Packet Collision Rate, Packet Drop Rate, Throughput Rate and Broadcast Rate. In doing so, simulative investigations have been performed to find the optimum queue size. For this purpose various routing protocols, namely DSDV, AODV, ADV and GOD, have been considered and the optimum queue length for each of these has been obtained. Further, the most efficient routing protocol has also been identified. Moreover, this paper also compares the performance of the most efficient routing protocols selected in terms of QoS parameters for different MAC interfaces.

  4. Optimisation and Characterisation of Glass RPC for India-based Neutrino Observatory Detectors

    CERN Document Server

    Kanishka, R; Indumathi, D

    2016-01-01

    The proposed magnetised Iron CALorimeter detector (ICAL) to be built in the India-based Neutrino Observatory (INO) laboratory aims to detect atmospheric muon neutrinos. In order to achieve improved physics results, the constituent components of the detector must be fully understood by proper characterisation and optimisation of various parameters. Resistive Plate Chambers (RPCs) are the active detector elements in the ICAL detector and can be made of glass or bakelite. The number of RPCs required for this detector is very large, so detailed R & D is necessary to establish the characterisation and optimisation of these RPCs. These detectors, once installed, will be taking data for 15-20 years. In this paper, we report the selection criteria for the glass from various Indian manufacturers such as Asahi, Saint Gobain and Modi. The choice is made based on factors, such as aging, that deteriorate the quality of the glass. The glass characterisation studies include UV-VIS transmission for optical properties, SEM...

  5. Modified cuckoo search: A new gradient free optimisation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Walton, S., E-mail: 512465@swansea.ac.uk [College of Engineering, Swansea University, Swansea SA2 8PP, Wales (United Kingdom); Hassan, O.; Morgan, K.; Brown, M.R. [College of Engineering, Swansea University, Swansea SA2 8PP, Wales (United Kingdom)

    2011-09-15

    Highlights: > Modified cuckoo search (MCS) is a new gradient free optimisation algorithm. > MCS shows a high convergence rate, able to outperform other optimisers. > MCS is particularly strong at high dimension objective functions. > MCS performs well when applied to engineering problems. - Abstract: A new robust optimisation algorithm, which can be regarded as a modification of the recently developed cuckoo search, is presented. The modification involves the addition of information exchange between the top eggs, or the best solutions. Standard optimisation benchmarking functions are used to test the effects of these modifications and it is demonstrated that, in most cases, the modified cuckoo search performs as well as, or better than, the standard cuckoo search, a particle swarm optimiser, and a differential evolution strategy. In particular the modified cuckoo search shows a high convergence rate to the true global minimum even at high numbers of dimensions.
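
    As background to the highlights above, the sketch below gives a simplified, self-contained version of a cuckoo-search-style optimiser with the MCS-style modification of information exchange between the top solutions; the Lévy-flight step, step-size schedule and parameter values are generic assumptions rather than the exact algorithm published by the authors.

        import numpy as np
        from math import gamma, sin, pi

        rng = np.random.default_rng(5)

        def sphere(x):
            """Benchmark objective to minimise."""
            return float(np.sum(x**2))

        def levy_step(dim, beta=1.5):
            """Levy-distributed step via Mantegna's algorithm."""
            sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
                       (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = rng.normal(0.0, sigma_u, dim)
            v = rng.normal(0.0, 1.0, dim)
            return u / np.abs(v) ** (1 / beta)

        def modified_cuckoo_search(dim=10, n_nests=25, n_iter=300, pa=0.25, top_frac=0.25):
            nests = rng.uniform(-5, 5, (n_nests, dim))
            fit = np.array([sphere(x) for x in nests])
            for it in range(n_iter):
                order = np.argsort(fit)
                n_top = max(2, int(top_frac * n_nests))
                step = 1.0 / np.sqrt(it + 1.0)          # shrinking Levy step size
                # Standard cuckoo move for the remaining nests, biased towards the best.
                for i in order[n_top:]:
                    trial = nests[i] + step * levy_step(dim) * (nests[i] - nests[order[0]])
                    if sphere(trial) < fit[i]:
                        nests[i], fit[i] = trial, sphere(trial)
                # MCS-style modification: information exchange between the top eggs.
                for i in order[:n_top]:
                    j = rng.choice(order[:n_top])
                    trial = nests[i] + rng.random() * (nests[j] - nests[i])
                    if sphere(trial) < fit[i]:
                        nests[i], fit[i] = trial, sphere(trial)
                # Abandon a fraction pa of the worst nests.
                for i in order[-int(pa * n_nests):]:
                    nests[i] = rng.uniform(-5, 5, dim)
                    fit[i] = sphere(nests[i])
            return nests[np.argmin(fit)], float(fit.min())

        best, val = modified_cuckoo_search()
        print("best objective value:", val)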

  6. Optimisation of a Crossdocking Distribution Centre Simulation Model

    CERN Document Server

    Adewunmi, Adrian

    2010-01-01

    This paper reports on continuing research into the modelling of an order picking process within a Crossdocking distribution centre using Simulation Optimisation. The aim of this project is to optimise a discrete event simulation model and to understand factors that affect finding its optimal performance. Our initial investigation revealed that the precision of the selected simulation output performance measure and the number of replications required for the evaluation of the optimisation objective function through simulation influences the ability of the optimisation technique. We experimented with Common Random Numbers, in order to improve the precision of our simulation output performance measure, and intended to use the number of replications utilised for this purpose as the initial number of replications for the optimisation of our Crossdocking distribution centre simulation model. Our results demonstrate that we can improve the precision of our selected simulation output performance measure value using C...

  7. Open Cut resource Optimisation as applied to coal

    Institute of Scientific and Technical Information of China (English)

    Martin L.Smith

    2007-01-01

    Pit optimisation is the earliest and most established application of its kind in the minerals industry, but this has been primarily driven by metal, not coal. Coal has the same financial drivers for resource optimisation as does the metalliferous industry, yet pit optimisation is not common practice. Why? The following discussion presents the basics of pit optimisation as they relate to coal and illustrates how a technology developed for massive deposits is not suitable for thin, multi-seam deposits where mine planning is often driven more by product quality than by value drivers such as Net Present Value. An alternative methodology is presented that takes advantage of the data structure of bedded deposits to optimise resource recovery in terms of a production schedule that meets constraints on coal quality.

  8. COMPLIANCE ANALYSIS, OPTIMISATION AND COMPARISON OF A NEW 3PUS-PU MECHANISM

    Directory of Open Access Journals (Sweden)

    B. Wei

    2013-06-01

    Full Text Available This paper investigates the compliance of a new 3PUS-PU hybrid mechanism with three degrees of freedom, comprising translation along the Z axis and rotations about the X and Y axes. Firstly, the kinematics of the mechanism are analysed and the compliance model of the mechanism is derived. Secondly, the effects of the geometric parameters and of the position and orientation parameters on the compliance of the mechanism in each direction are investigated, and a genetic algorithm is used to optimise the global compliance by simultaneously adjusting the design variables. Finally, the compliance of two similar kinds of 3PUS-PU mechanism in each direction is compared.

  9. Influenza-vaccination: an inventory of strategies to reach the target population and optimise vaccination uptake.

    NARCIS (Netherlands)

    Kroneman, M.; Paget, J.

    2002-01-01

    Background: Influenza continues to be a considerable health problem of the populations in Europe. Complications of influenza are especially present in elderly patients and patients with chronic conditions such as cardiovascular disorders and respiratory disorders. Vaccination is an effective

  10. Specification, Verification and Optimisation of Business Processes

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas

    to model checking. This allows for a rich set of both qualitative and quantitative properties of a business process to be precisely determined in an automated fashion directly from the model of the business process. A number of advanced applications of this framework are presented which allow for automated...... Model and Notation (BPMN). The automated analysis of business processes is done by means of quantitative probabilistic model checking which allows verification of validation and performance properties through use of an algorithm for the translation of business process models into a format amenable......This thesis develops a unified framework wherein to specify, verify and optimise stochastic business processes. This framework provides for the modelling of business processes via a mathematical structure which captures business processes as a series of connected activities. This structure...

  11. FEM Optimisation of Spark Plasma Sintering Furnace

    CERN Document Server

    Kellari, Demetrios Vasili

    2013-01-01

    Coupled electro-thermal FEM analysis has been carried out on a sintering furnace used to produce new materials for LHC collimators. The analysis showed that there are margins for improvement of the current process and equipment through minor changes. To optimise the design of the furnace, several design changes have been proposed, including: optimisation of the material selection using copper cooling plates; control of convection in the cooling plates by lowering the water flow rate; modification of the electrode shape using unsymmetrical electrodes; and upgrading of the thermal shielding to multilayer graphite shields. The results show a significant improvement in the temperature gradient on the plate, from 453 to 258 °C, and a reduction in the power requirement from 62 to 44 kW.

  12. Improving and optimising road pricing in Copenhagen

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker; Larsen, Marie Karen

    2008-01-01

    though quite a number of proposed charging systems have been examined only a few pricing strategies have been investigated. This paper deals with the optimisation of different designs for a road pricing system in the Greater Copenhagen area with respect to temporal and spatial differentiation......The question whether to introduce toll rings or road pricing in Copenhagen has been discussed intensively during the last 10 years. The main results of previous analyses are that none of the systems would make a positive contribution at present, when considered from a socio-economic view. Even...... of the pricing levels. A detailed transport model was used to describe the demand effects. The model was based on data from a real test of road pricing on 500 car drivers. The paper compares the price systems with regard to traffic effects and generalised costs for users and society. It is shown how important...

  13. Optimising Signalised Intersection Using Wireless Vehicle Detectors

    DEFF Research Database (Denmark)

    Adjin, Daniel Michael Okwabi; Torkudzor, Moses; Asare, Jack

    Traffic congestion on roads wastes travel times. In this paper, we developed a vehicular traffic model to optimise a signalised intersection in Accra, using wireless vehicle detectors. Traffic volume gathered was extrapolated to cover 2011 and 2016 and were analysed to obtain the peak hour traffic...... volume causing congestion. The intersection was modelled and simulated in Synchro7 as an actuated signalised model using results from the analysed data. The model for morning peak periods gave optimal cycle lengths of 100s and 150s with corresponding intersection delay of 48.9s and 90.6s in 2011 and 2016...... respectively while that for the evening was 55s giving delay of 14.2s and 16.3s respectively. It is shown that the model will improve traffic flow at the intersection....

  14. Simulation and optimisation of turbulence in stellarators

    Energy Technology Data Exchange (ETDEWEB)

    Xanthopoulos, Pavlos; Helander, Per; Turkin, Yuriy; Plunk, Gabriel G.; Bird, Thomas; Proll, Josefine H.E. [Max-Planck-Institut fuer Plasmaphysik, EURATOM Association, Wendelsteinstr. 1, 17491 Greifswald (Germany); Mynick, Harry [Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States); Jenko, Frank; Goerler, Tobias; Told, Daniel [Max-Planck-Institut fuer Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany)

    2014-07-01

    In tokamaks and stellarators - two leading types of devices used in fusion research - magnetic field lines trace out toroidal surfaces on which the plasma density and temperature are constant, but turbulent fluctuations carry energy across these surfaces to the wall, thus degrading the plasma confinement. Using petaflop-scale simulations, we calculate for the first time the pattern of turbulent structures forming on stellarator magnetic surfaces, and find striking differences relative to tokamaks. The observed sensitivity of the turbulence to the magnetic geometry suggests that there is room for further confinement improvement, in addition to measures already taken to minimise the laminar transport. With an eye towards fully optimised stellarators, we present a proof-of-principle configuration with substantially reduced turbulence compared to an existing design.

  15. Global optimisation methods for poroelastic material characterisation using a clamped sample in a Kundt tube setup

    Science.gov (United States)

    Vanhuyse, Johan; Deckers, Elke; Jonckheere, Stijn; Pluymers, Bert; Desmet, Wim

    2016-02-01

    The Biot theory is commonly used for the simulation of the vibro-acoustic behaviour of poroelastic materials. However, it relies on a number of material parameters. These can be hard to characterize and require dedicated measurement setups, yielding a time-consuming and costly characterisation. This paper presents a characterisation method which is able to identify all material parameters using only an impedance tube. The method relies on the assumption that the sample is clamped within the tube, that the shear wave is excited and that the acoustic field is no longer one-dimensional. This paper numerically shows the potential of the developed method. It therefore performs a sensitivity analysis of the quantification parameters, i.e. reflection coefficients and relative pressures, and a parameter estimation using global optimisation methods. A 3-step procedure is developed and validated. It is shown that even in the presence of numerically simulated noise this procedure leads to a robust parameter estimation.
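
    As a schematic of the kind of parameter-estimation step described above, the sketch below fits three placeholder material parameters to noisy synthetic impedance-tube observations with a global optimiser (differential evolution); the forward model is a deliberately simplified stand-in for the Biot/finite-element computation used in the paper, and the bounds and noise level are illustrative assumptions.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Placeholder forward model standing in for the clamped-sample impedance-tube
        # simulation: it maps three material parameters to a vector of "observed"
        # quantities over frequency. The real Biot/FE model is not reproduced here.
        freqs = np.linspace(200.0, 2000.0, 30)

        def forward_model(params):
            porosity, resistivity, tortuosity = params
            return (porosity * np.exp(-freqs / resistivity)
                    + (tortuosity - 1.0) * np.sqrt(freqs) / 50.0)

        true_params = np.array([0.95, 800.0, 1.4])
        rng = np.random.default_rng(6)
        measured = forward_model(true_params) + rng.normal(0.0, 0.005, freqs.size)

        def misfit(params):
            return float(np.sum((forward_model(params) - measured) ** 2))

        # Global search over physically plausible bounds (values are illustrative).
        bounds = [(0.5, 0.99), (100.0, 5000.0), (1.0, 3.0)]
        result = differential_evolution(misfit, bounds, seed=0, tol=1e-10)
        print("estimated parameters:", np.round(result.x, 3))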

  16. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... uptake are not performed on patients who are pregnant because of the risk of exposing the fetus to radiation. These tests are also not recommended for breastfeeding women. Nuclear medicine procedures can be time consuming. It ...

  17. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... is taken by mouth, in either liquid or capsule form, it is typically swallowed up to 24 ... I-123 or I-131) in liquid or capsule form to swallow. The thyroid uptake will begin ...

  18. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... which are encased in metal and plastic and most often shaped like a box, attached to a ... will I experience during and after the procedure? Most thyroid scan and thyroid uptake procedures are painless. ...

  19. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... scan and uptake uses small amounts of radioactive materials called radiotracers, a special camera and a computer ... last two months that used iodine-based contrast material. Your doctor will instruct you on how to ...

  20. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... eat for several hours before your exam because eating can affect the accuracy of the uptake measurement. ... often unattainable using other imaging procedures. For many diseases, nuclear medicine scans yield the most useful information ...

  1. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... RAIU) is also known as a thyroid uptake. It is a measurement of thyroid function, but does ... they offer the potential to identify disease in its earliest stages as well as a patient’s immediate ...

  2. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... time for the imaging to begin, you will sit in a chair facing a stationary probe positioned ... counter used for thyroid uptake exams. The patient sits with the camera directed at the neck for ...

  3. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... for several hours before your exam because eating can affect the accuracy of the uptake measurement. Jewelry ... small hand-held device resembling a microphone that can detect and measure the amount of the radiotracer ...

  4. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... information about your thyroid’s size, shape, position and function that is often unattainable using other imaging procedures. ... thyroid uptake. It is a measurement of thyroid function, but does not involve imaging. Nuclear medicine is ...

  5. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... A thyroid scan is a type of nuclear medicine imaging. The radioactive iodine uptake test (RAIU) is ... thyroid function, but does not involve imaging. Nuclear medicine is a branch of medical imaging that uses ...

  6. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... RAIU) is also known as a thyroid uptake. It is a measurement of thyroid function, but does ... they offer the potential to identify disease in its earliest stages as well as a patient’s immediate ...

  7. Profile control studies for JET optimised shear regime

    Energy Technology Data Exchange (ETDEWEB)

    Litaudon, X.; Becoulet, A.; Eriksson, L.G.; Fuchs, V.; Huysmans, G.; How, J.; Moreau, D.; Rochard, F.; Tresset, G.; Zwingmann, W. [Association Euratom-CEA, CEA/Cadarache, Dept. de Recherches sur la Fusion Controlee, DRFC, 13 - Saint-Paul-lez-Durance (France); Bayetti, P.; Joffrin, E.; Maget, P.; Mayorat, M.L.; Mazon, D.; Sarazin, Y. [JET Abingdon, Oxfordshire (United Kingdom); Voitsekhovitch, I. [Universite de Provence, LPIIM, Aix-Marseille 1, 13 (France)

    2000-03-01

    This report summarises the profile control studies, i.e. preparation and analysis of JET Optimised Shear plasmas, carried out during the year 1999 within the framework of the Task-Agreement (RF/CEA/02) between JET and the Association Euratom-CEA/Cadarache. We report on our participation in the preparation of the JET Optimised Shear experiments together with their comprehensive analyses and the modelling. Emphasis is put on the various aspects of pressure profile control (core and edge pressure) together with detailed studies of current profile control by non-inductive means, in the prospects of achieving steady, high performance, Optimised Shear plasmas. (authors)

  8. Application of optimisation techniques in groundwater quantity and quality management

    Indian Academy of Sciences (India)

    Amlan Das; Bithin Datta

    2001-08-01

    This paper presents the state of the art in the application of optimisation techniques to groundwater quality and quantity management. In order to solve optimisation-based groundwater management models, researchers have used various mathematical programming techniques such as linear programming (LP), nonlinear programming (NLP), mixed-integer programming (MIP), optimal control theory-based mathematical programming, differential dynamic programming (DDP), stochastic programming (SP), combinatorial optimisation (CO), and multiple objective programming for multipurpose management. Studies reported in the literature on the application of these methods are reviewed in this paper.
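
    Purely as an illustration of the simplest technique in the list above (linear programming), here is a toy groundwater-quantity management model solved with SciPy; the response matrix, drawdown limits and well capacities are invented numbers, not data from any of the reviewed studies.

        import numpy as np
        from scipy.optimize import linprog

        # Illustrative groundwater-quantity LP: choose pumping rates q1..q3 (m3/day)
        # at three wells to maximise total supply, subject to drawdown limits at two
        # observation points. The response matrix gives drawdown per unit pumping.
        A = np.array([[0.004, 0.002, 0.001],     # drawdown at observation point 1
                      [0.001, 0.003, 0.004]])    # drawdown at observation point 2
        max_drawdown = np.array([2.0, 2.5])      # allowable drawdown (m)
        well_capacity = [(0, 400), (0, 500), (0, 350)]

        # linprog minimises, so negate the objective to maximise total pumping.
        res = linprog(c=[-1, -1, -1], A_ub=A, b_ub=max_drawdown,
                      bounds=well_capacity, method="highs")
        print("optimal pumping rates:", np.round(res.x, 1),
              "total supply:", round(-res.fun, 1))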

  9. Genetic Algorithm Optimisation of a Ship Navigation System

    Directory of Open Access Journals (Sweden)

    E. Alfaro-Cid

    2001-01-01

    Full Text Available The optimisation of the PID controllers' gains for separate propulsion and heading control systems of CyberShip I, a scale model of an oil platform supply ship, using Genetic Algorithms is considered. During the initial design process both PID controllers have been manually tuned to improve their performance. However this tuning approach is a tedious and time consuming process. A solution to this problem is the use of optimisation techniques based on Genetic Algorithms to optimise the controllers' gain values. This investigation has been carried out through computer-generated simulations based on a non-linear hydrodynamic model of CyberShip I.
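
    The sketch below illustrates the kind of fitness evaluation such a GA would minimise: a closed-loop step-response simulation of a PID controller on a crude first-order plant, scored with an ITAE cost. The plant constants and cost choice are assumptions; the paper itself uses a non-linear hydrodynamic model of CyberShip I, which is not reproduced here.

        import numpy as np

        def itae_cost(gains, dt=0.05, t_end=60.0):
            """Closed-loop ITAE cost for PID gains on a crude first-order model.
            The plant constants are stand-ins, not the CyberShip I hydrodynamics."""
            kp, ki, kd = gains
            tau, k_plant = 8.0, 0.5              # illustrative time constant and gain
            y, integ, cost = 0.0, 0.0, 0.0
            setpoint = 1.0
            e_prev = setpoint - y
            for step in range(int(t_end / dt)):
                e = setpoint - y
                integ += e * dt
                deriv = (e - e_prev) / dt
                u = kp * e + ki * integ + kd * deriv      # PID control law
                # First-order plant tau*dy/dt + y = k_plant*u, explicit Euler step.
                y += dt * (k_plant * u - y) / tau
                cost += (step * dt) * abs(e) * dt         # integral of t*|e|
                e_prev = e
            return cost

        # A GA (or any other optimiser) would minimise this cost over the gain space.
        for gains in [(1.0, 0.0, 0.0), (4.0, 0.3, 6.0)]:
            print(gains, "-> ITAE", round(itae_cost(gains), 2))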

  10. The Cramer-Rao Bound and DMT Signal Optimisation for the Identification of a Wiener-Type Model

    Directory of Open Access Journals (Sweden)

    H. Koeppl

    2004-09-01

    Full Text Available In linear system identification, optimal excitation signals can be determined using the Cramer-Rao bound. This problem has not been thoroughly studied for the nonlinear case. In this work, the Cramer-Rao bound for a factorisable Volterra model is derived. The analytical result is supported with simulation examples. The bound is then used to find the optimal excitation signal out of the class of discrete multitone signals. As the model is nonlinear in the parameters, the bound depends on the model parameters themselves. On this basis, a three-step identification procedure is proposed. To illustrate the procedure, signal optimisation is explicitly performed for a third-order nonlinear model. Methods of nonlinear optimisation are applied for the parameter estimation of the model. As a baseline, the problem of optimal discrete multitone signals for linear FIR filter estimation is reviewed.
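
    To make the role of the Cramer-Rao bound concrete, the following sketch computes the bound numerically for a small Wiener-type model (FIR filter followed by a cubic nonlinearity) under additive white Gaussian noise and compares two candidate excitation signals. The model structure, parameter values and signals are simplified assumptions, not the factorisable Volterra model or the discrete multitone signal class analysed in the paper.

        import numpy as np

        # Minimal Wiener-type model: a 2-tap FIR filter followed by a cubic static
        # nonlinearity, theta = (h0, h1, c3).
        def model_output(theta, u):
            h0, h1, c3 = theta
            lin = h0 * u + h1 * np.concatenate(([0.0], u[:-1]))   # FIR part
            return lin + c3 * lin**3                               # static nonlinearity

        def crb(theta, u, sigma2):
            """Cramer-Rao bound for additive white Gaussian noise of variance sigma2:
            FIM = J^T J / sigma2 and CRB = diag(FIM^{-1}), with J found numerically."""
            eps = 1e-6
            y0 = model_output(theta, u)
            J = np.empty((y0.size, len(theta)))
            for k in range(len(theta)):
                tp = np.array(theta, dtype=float)
                tp[k] += eps
                J[:, k] = (model_output(tp, u) - y0) / eps
            fim = J.T @ J / sigma2
            return np.diag(np.linalg.inv(fim))

        rng = np.random.default_rng(7)
        theta = (0.8, 0.3, 0.1)
        u_random = rng.normal(0.0, 1.0, 512)                 # broadband excitation
        u_tone = np.cos(2 * np.pi * 0.05 * np.arange(512))   # single-tone excitation
        for name, u in [("random", u_random), ("single tone", u_tone)]:
            print(name, "CRB:", np.round(crb(theta, u, sigma2=0.01), 6))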

  11. Towards ‘smart lasers’: self-optimisation of an ultrafast pulse source using a genetic algorithm

    Science.gov (United States)

    Woodward, R. I.; Kelleher, E. J. R.

    2016-11-01

    Short-pulse fibre lasers are a complex dynamical system possessing a broad space of operating states that can be accessed through control of cavity parameters. Determination of target regimes is a multi-parameter global optimisation problem. Here, we report the implementation of a genetic algorithm to intelligently locate optimum parameters for stable single-pulse mode-locking in a Figure-8 fibre laser, and fully automate the system turn-on procedure. Stable ultrashort pulses are repeatably achieved by employing a compound fitness function that monitors both temporal and spectral output properties of the laser. Our method of encoding photonics expertise into an algorithm and applying machine-learning principles paves the way to self-optimising ‘smart’ optical technologies.

  12. Optimisation of phase ratio in the triple jump using computer simulation.

    Science.gov (United States)

    Allen, Sam J; King, Mark A; Yeadon, M R Fred

    2016-04-01

    The triple jump is an athletic event comprising three phases in which the optimal proportion of each phase to the total distance jumped, termed the phase ratio, is unknown. This study used a whole-body torque-driven computer simulation model of all three phases of the triple jump to investigate optimal technique. The technique of the simulation model was optimised by varying torque generator activation parameters using a Genetic Algorithm in order to maximise total jump distance, resulting in a hop-dominated technique (35.7%:30.8%:33.6%) and a distance of 14.05m. Optimisations were then run with penalties forcing the model to adopt hop and jump phases of 33%, 34%, 35%, 36%, and 37% of the optimised distance, resulting in total distances of: 13.79m, 13.87m, 13.95m, 14.05m, and 14.02m; and 14.01m, 14.02m, 13.97m, 13.84m, and 13.67m respectively. These results indicate that in this subject-specific case there is a plateau in optimum technique encompassing balanced and hop-dominated techniques, but that a jump-dominated technique is associated with a decrease in performance. Hop-dominated techniques are associated with higher forces than jump-dominated techniques; therefore optimal phase ratio may be related to a combination of strength and approach velocity.

  13. Quantifying the mechanical properties of human skin to optimise future microneedle device design.

    Science.gov (United States)

    Groves, R B; Coulman, S A; Birchall, J C; Evans, S L

    2012-01-01

    Microneedle devices are a promising minimally invasive means of delivering drugs/vaccines across or into the skin. However, there is currently a diversity of microneedle designs and application methods that have, primarily, been intuitively developed by the research community. To enable the rational design of optimised microneedle devices, a greater understanding of human skin biomechanics under small deformations is required. This study aims to develop a representative stratified model of human skin, informed by in vivo data. A multilayer finite element model incorporating the epidermis, dermis and hypodermis was established. This was correlated with a series of in-vivo indentation measurements, and the Ogden material coefficients were optimised using a material parameter extraction algorithm. The finite element simulation was subsequently used to model microneedle application to human skin before penetration and was validated by comparing these predictions with the in-vivo measurements. Our model has provided an excellent tool to predict micron-scale human skin deformation in vivo and is currently being used to inform optimised microneedle designs.

  14. Optimisation of sensing time and transmission time in cognitive radio-based smart grid networks

    Science.gov (United States)

    Yang, Chao; Fu, Yuli; Yang, Junjie

    2016-07-01

    Cognitive radio (CR)-based smart grid (SG) networks have been widely recognised as emerging communication paradigms in power grids. However, a sufficient spectrum resource and reliability are two major challenges for real-time applications in CR-based SG networks. In this article, we study the traffic data collection problem. Based on the two-stage power pricing model, the power price is associated with the efficient received traffic data in a metre data management system (MDMS). In order to minimise the system power price, a wideband hybrid access strategy is proposed and analysed, to share the spectrum between the SG nodes and CR networks. The sensing time and transmission time are jointly optimised, while both the interference to primary users and the spectrum opportunity loss of secondary users are considered. Two algorithms are proposed to solve the joint optimisation problem. Simulation results show that the proposed joint optimisation algorithms outperform the fixed parameters (sensing time and transmission time) algorithms, and the power cost is reduced efficiently.
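
    A toy numerical illustration of the joint optimisation discussed above: sensing time and transmission time are swept over a grid and the pair maximising an expected-throughput surrogate is selected. The energy-detector false-alarm expression follows the common sensing-throughput trade-off form, but the SNR, bandwidth, idle-time model and all numerical values are assumptions and are unrelated to the two-stage pricing model in the paper.

        import numpy as np
        from scipy.stats import norm

        fs = 6e6                          # sampling rate during spectrum sensing (Hz)
        snr = 10 ** (-15 / 10)            # primary-user SNR at the detector (-15 dB)
        pd_target = 0.9                   # required detection probability
        p_idle = 0.8                      # probability that the channel is idle
        mean_idle = 0.65                  # mean remaining idle time (s)
        capacity = 1.0                    # normalised throughput on a free channel

        def false_alarm(tau):
            """Energy-detector false-alarm probability at the target Pd."""
            return norm.sf(np.sqrt(2 * snr + 1) * norm.isf(pd_target)
                           + np.sqrt(tau * fs) * snr)

        def utility(tau, t_tx):
            """Expected useful throughput per frame of length tau + t_tx."""
            stay_idle = np.exp(-t_tx / mean_idle)    # chance the channel stays free
            return (t_tx / (tau + t_tx)) * p_idle * (1 - false_alarm(tau)) \
                   * stay_idle * capacity

        taus = np.linspace(0.5e-3, 20e-3, 100)       # candidate sensing times
        t_txs = np.linspace(10e-3, 400e-3, 100)      # candidate transmission times
        grid = np.array([[utility(tau, t) for t in t_txs] for tau in taus])
        i, j = np.unravel_index(grid.argmax(), grid.shape)
        print(f"optimal sensing time {taus[i]*1e3:.1f} ms, "
              f"transmission time {t_txs[j]*1e3:.0f} ms")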

  15. Intelligent Internet-based information system optimises diabetes mellitus management in communities.

    Science.gov (United States)

    Wei, Xuejuan; Wu, Hao; Cui, Shuqi; Ge, Caiying; Wang, Li; Jia, Hongyan; Liang, Wannian

    2017-01-01

    To evaluate the effect of an intelligent Internet-based information system upon optimising the management of patients diagnosed with type 2 diabetes mellitus (T2DM). In 2015, a T2DM information system was introduced to optimise the management of T2DM patients for 1 year in Fangzhuang community of Beijing, China. A total of 602 T2DM patients who were registered in the health service centre of Fangzhuang community were enrolled based on an isometric sampling technique. The data from 587 patients were used in the final analysis. The intervention effect was subsequently assessed by statistically comparing multiple parameters, such as the prevalence of glycaemic control, standard health management and annual outpatient consultation visits per person, before and after the implementation of the T2DM information system. In 2015, a total of 1668 T2DM patients were newly registered in Fangzhuang community. The glycaemic control rate was 37.65% in 2014 and rose significantly to 62.35% in 2015. Following the implementation of the information system, the rate of standard health management increased from 48.04% to 85.01%. Overall, the intelligent Internet-based information system optimised the management of T2DM patients in Fangzhuang community and decreased the outpatient numbers in both community and general hospitals, which played a positive role in assisting T2DM patients and their healthcare providers to better manage this chronic illness.

  16. Multi-objective optimisation and decision-making of space station logistics strategies

    Science.gov (United States)

    Zhu, Yue-he; Luo, Ya-zhong

    2016-10-01

    Space station logistics strategy optimisation is a complex engineering problem with multiple objectives. Finding a decision-maker-preferred compromise solution becomes more significant when solving such a problem. However, the designer-preferred solution is not easy to determine using the traditional method. Thus, a hybrid approach that combines the multi-objective evolutionary algorithm, physical programming, and differential evolution (DE) algorithm is proposed to deal with the optimisation and decision-making of space station logistics strategies. A multi-objective evolutionary algorithm is used to acquire a Pareto frontier and help determine the range parameters of the physical programming. Physical programming is employed to convert the four-objective problem into a single-objective problem, and a DE algorithm is applied to solve the resulting physical programming-based optimisation problem. Five kinds of objective preference are simulated and compared. The simulation results indicate that the proposed approach can produce good compromise solutions corresponding to different decision-makers' preferences.
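
    A sketch of the "many objectives, one scalar objective, differential evolution" pattern; a plain weighted sum stands in for physical programming, and the four toy objectives and weights are hypothetical:

      # Sketch only: the toy objectives below are not the space-station logistics
      # model, and the weighted sum is a stand-in for physical programming.
      import numpy as np
      from scipy.optimize import differential_evolution

      def objectives(x):
          cost   = (x[0] - 1.0) ** 2 + x[1] ** 2
          risk   = (x[1] - 0.5) ** 2
          delay  = abs(x[0] - x[1])
          upmass = x[0] ** 2
          return np.array([cost, risk, delay, upmass])

      weights = np.array([0.4, 0.3, 0.2, 0.1])     # decision-maker preference, assumed

      def scalarised(x):
          return float(weights @ objectives(x))

      res = differential_evolution(scalarised, bounds=[(-2, 2), (-2, 2)], seed=0)
      print("compromise solution:", res.x, "objectives:", objectives(res.x))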

  17. Novel genetically optimised high-displacement piezoelectric actuator with efficient use of active material

    Science.gov (United States)

    Poikselkä, Katja; Leinonen, Mikko; Palosaari, Jaakko; Vallivaara, Ilari; Röning, Juha; Juuti, Jari

    2017-09-01

    This paper introduces a new type of piezoelectric actuator, Mikbal. The Mikbal was developed from a Cymbal by adding steel structures around the steel cap to increase displacement and reduce the amount of piezoelectric material used. Here the parameters of the steel cap of Mikbal and Cymbal actuators were optimised by using genetic algorithms in combination with Comsol Multiphysics FEM modelling software. The blocking force of the actuator was maximised for different values of displacement by optimising the height and the top diameter of the end cap profile so that their effect on displacement, blocking force and stresses could be analysed. The optimisation process was done for five Mikbal- and two Cymbal-type actuators with different diameters varying between 15 and 40 mm. A Mikbal with a Ø 25 mm piezoceramic disc and a Ø 40 mm steel end cap was produced and the performances of unclamped measured and modelled cases were found to correspond within 2.8% accuracy. With a piezoelectric disc of Ø 25 mm, the Mikbal created 72% greater displacement while blocking force was decreased 57% compared with a Cymbal with the same size disc. Even with a Ø 20 mm piezoelectric disc, the Mikbal was able to generate ∼10% higher displacement than a Ø 25 mm Cymbal. Thus, the introduced Mikbal structure presents a way to extend the displacement capabilities of a conventional Cymbal actuator for low-to-moderate force applications.

  18. Optimisation of an idealised primitive equation ocean model using stochastic parameterization

    Science.gov (United States)

    Cooper, Fenwick C.

    2017-05-01

    Using a simple parameterization, an idealised low resolution (biharmonic viscosity coefficient of 5 × 10^12 m^4 s^-1, 128 × 128 grid) primitive equation baroclinic ocean gyre model is optimised to have a much more accurate climatological mean, variance and response to forcing, in all model variables, with respect to a high resolution (biharmonic viscosity coefficient of 8 × 10^10 m^4 s^-1, 512 × 512 grid) equivalent. For example, the change in the climatological mean due to a small change in the boundary conditions is more accurate in the model with parameterization. Both the low resolution and high resolution models are strongly chaotic. We also find that long timescales in the model temperature auto-correlation at depth are controlled by the vertical temperature diffusion parameter and time mean vertical advection and are caused by short timescale random forcing near the surface. This paper extends earlier work that considered a shallow water barotropic gyre. Here the analysis is extended to a more turbulent multi-layer primitive equation model that includes temperature as a prognostic variable. The parameterization consists of a constant forcing, applied to the velocity and temperature equations at each grid point, which is optimised to obtain a model with an accurate climatological mean, and a linear stochastic forcing, that is optimised to also obtain an accurate climatological variance and 5 day lag auto-covariance. A linear relaxation (nudging) is not used. Conservation of energy and momentum is discussed in an appendix.

  19. Multi-objective optimisation of wastewater treatment plant control to reduce greenhouse gas emissions.

    Science.gov (United States)

    Sweetapple, Christine; Fu, Guangtao; Butler, David

    2014-05-15

    This study investigates the potential of control strategy optimisation for the reduction of operational greenhouse gas emissions from wastewater treatment in a cost-effective manner, and demonstrates that significant improvements can be realised. A multi-objective evolutionary algorithm, NSGA-II, is used to derive sets of Pareto optimal operational and control parameter values for an activated sludge wastewater treatment plant, with objectives including minimisation of greenhouse gas emissions, operational costs and effluent pollutant concentrations, subject to legislative compliance. Different problem formulations are explored, to identify the most effective approach to emissions reduction, and the sets of optimal solutions enable identification of trade-offs between conflicting objectives. It is found that multi-objective optimisation can facilitate a significant reduction in greenhouse gas emissions without the need for plant redesign or modification of the control strategy layout, but there are trade-offs to consider: most importantly, if operational costs are not to be increased, reduction of greenhouse gas emissions is likely to incur an increase in effluent ammonia and total nitrogen concentrations. Design of control strategies for a high effluent quality and low costs alone is likely to result in an inadvertent increase in greenhouse gas emissions, so it is of key importance that effects on emissions are considered in control strategy development and optimisation.
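
    A minimal sketch of the Pareto-filtering idea behind such multi-objective searches; NSGA-II itself is not reproduced here, and the candidate (emissions, cost, ammonia) evaluations are random placeholders:

      # Extract the non-dominated set from candidate control settings, all
      # objectives to be minimised. Candidates are hypothetical evaluations.
      import numpy as np

      rng = np.random.default_rng(42)
      candidates = rng.random((200, 3))     # columns: GHG, operational cost, effluent NH4

      def pareto_front(points):
          keep = np.ones(len(points), dtype=bool)
          for i, p in enumerate(points):
              if keep[i]:
                  # a point is dominated if p is no worse everywhere and better somewhere
                  dominated = np.all(points >= p, axis=1) & np.any(points > p, axis=1)
                  keep[dominated] = False
          return points[keep]

      front = pareto_front(candidates)
      print(f"{len(front)} non-dominated trade-off solutions out of {len(candidates)}")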

  20. Optimising functional properties during preparation of cowpea protein concentrate.

    Science.gov (United States)

    Mune Mune, Martin Alain; Minka, Samuel René; Mbome, Israël Lape

    2014-07-01

    Response surface methodology (RSM) was used for modelling and optimisation of protein extraction parameters in order to obtain a protein concentrate with high functional properties. A central composite rotatable design of experiments was used to investigate the effects of two factors, namely pH and NaCl concentration, on six responses: water solubility index (WSI), water absorption capacity (WAC), oil holding capacity (OHC), emulsifying activity (EA), emulsifying stability (ES) and foam ability (FA). The results of analysis of variance (ANOVA) and correlation showed that the second-order polynomial model was appropriate to fit the experimental data. The optimum condition was pH 8.43 and NaCl concentration 0.25 M, and under this condition WSI was ⩾ 17.20%, WAC ⩾ 383.62%, OHC ⩾ 1.75 g/g, EA ⩾ 0.15, ES ⩾ 19.76 min and FA ⩾ 66.30%. The suitability of the model employed was confirmed by the agreement between the experimental and predicted values for the functional properties.
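
    A sketch of the RSM workflow, fitting a full second-order polynomial in pH and NaCl concentration to synthetic response data and locating its optimum (the data and bounds are placeholders, not the cowpea measurements):

      # Fit a quadratic response surface and locate its maximum within bounds.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(3)
      pH   = rng.uniform(6, 10, 20)
      NaCl = rng.uniform(0.0, 0.5, 20)
      wsi  = 17 - (pH - 8.4) ** 2 - 20 * (NaCl - 0.25) ** 2 + rng.normal(0, 0.2, 20)

      # design matrix for the full second-order polynomial model
      X = np.column_stack([np.ones_like(pH), pH, NaCl, pH * NaCl, pH ** 2, NaCl ** 2])
      beta, *_ = np.linalg.lstsq(X, wsi, rcond=None)

      def predicted(v):
          p, c = v
          return beta @ np.array([1.0, p, c, p * c, p ** 2, c ** 2])

      opt = minimize(lambda v: -predicted(v), x0=[8.0, 0.2],
                     bounds=[(6, 10), (0.0, 0.5)])
      print("optimum pH/NaCl:", opt.x, "predicted WSI:", predicted(opt.x))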

  1. Optimisation of Graft Copolymerisation of Fibres from Banana Trunk

    Directory of Open Access Journals (Sweden)

    Richard Mpon

    2012-01-01

    Sheets from banana trunks were opened out and dried for several weeks in air. Pulp was obtained by the nitric acid process with a yield of 37.7%, while fibres were obtained according to the modified standard Japanese method for cellulose in wood for pulp (JIS 8007) with a yield of 65% with respect to oven-dried plant material. Single fibres obtained by the JIS method had an average diameter of 11.0 μm and Young's modulus, tensile strength and strain at break of 7.05 GPa, 81.7 MPa and 5.2% respectively. Modification of the fibres was carried out by grafting ethyl acrylate in the presence of cerium(IV) ammonium nitrate. Optimisation of the copolymerisation reaction conditions was studied by measuring the rate of conversion, the rate of grafting and the grafting efficiency. The results showed that at a low ceric ion concentration (0.04 M), at ambient temperature, after three hours and at a concentration of 0.2 M ethyl acrylate, maximum values of the cited parameters were obtained.

  2. Optimising the turbocharging of large engines in the future

    Energy Technology Data Exchange (ETDEWEB)

    Codan, E. [ABB Turbo Systems, Ltd., R and D Turbocharging, Baden (Switzerland)

    1998-12-31

    The new ABB turbocharger generations TPL and TPS were developed to match the most advanced turbocharged engines over 500 kW in the coming years. High performance in terms of pressure ratio and turbocharging efficiency no longer guarantees efficient engine operation over the whole operating field. Therefore, matched turbocharger characteristics for different applications are increasingly important. This paper shows the influence of the turbocharging system characteristics on the steady state and transient behaviour of a turbocharged engine for different applications. The basis for the study is the well proven simulation system SiSy, which is widely used for the performance prediction of turbocharged engines. Some simple parameters were developed that numerically describe the correlation between the characteristics of the turbocharging system and the engine operation. The limits of the commonly used turbocharging systems are shown together with an overview of future possibilities, e.g. two-stage turbocharging and turbocompound. A joint optimisation of the turbocharging system and of the engine will be of paramount importance in the future, to exploit the improvement potential.

  3. Design optimisation of a flywheel hybrid vehicle

    Energy Technology Data Exchange (ETDEWEB)

    Kok, D.B.

    1999-11-04

    This thesis describes the design optimisation of a flywheel hybrid vehicle with respect to fuel consumption and exhaust gas emissions. The driveline of this passenger car uses two power sources: a small spark ignition internal combustion engine with three-way catalyst, and a high-speed flywheel system for kinetic energy storage. A custom-made continuously variable transmission (CVT) with so-called i^2 control transports energy between these power sources and the vehicle wheels. The driveline includes auxiliary systems for hydraulic, vacuum and electric purposes. In this fully mechanical driveline, parasitic energy losses determine the vehicle's fuel saving potential to a large extent. Practicable energy loss models have been derived to quantify friction losses in bearings, gearwheels, the CVT, clutches and dynamic seals. In addition, the aerodynamic drag in the flywheel system and the power consumption of auxiliaries are charted. With the energy loss models available, a calculation procedure is introduced to optimise the flywheel as a subsystem in which the rotor geometry, the safety containment, and the vacuum system are designed for minimum energy use within the context of automotive applications. A first prototype of the flywheel system was tested experimentally and subsequently redesigned to improve rotordynamics and safety aspects. Coast-down experiments with the improved version show that the energy losses have been lowered significantly. The use of a kinetic energy storage device enables the uncoupling of vehicle wheel power and engine power. Therefore, the engine can be smaller and it can be chosen to operate in its region of best efficiency in start-stop mode. On a test-rig, the measured engine fuel consumption was reduced by more than 30 percent when the engine is intermittently restarted with the aid of the flywheel system. Although the start-stop mode proves to be advantageous for fuel consumption, exhaust gas emissions increase temporarily

  4. Hypoxia optimises tumour growth by controlling nutrient import and acidic metabolite export.

    Science.gov (United States)

    Parks, Scott K; Cormerais, Yann; Marchiq, Ibtissam; Pouyssegur, Jacques

    2016-01-01

    In their quest for survival and successful growth, cancer cells optimise their cellular processes to enable them to outcompete normal cells in their microenvironment. In essence cancer cells: (i) enhance uptake of nutrients/metabolites, (ii) utilise nutrients more efficiently via metabolic alterations and (iii) deal with the metabolic waste products in a way that furthers their progression while hampering the survival of normal tissue. Hypoxia Inducible Factors (HIFs) act as essential drivers of these adaptations via the promotion of numerous membrane proteins including glucose transporters (GLUTs), monocarboxylate transporters (MCTs), amino-acid transporters (LAT1, xCT), and acid-base regulating carbonic anhydrases (CAs). In addition to a competitive growth advantage for tumour cells, these HIF-regulated proteins are implicated in metastasis, cancer 'stemness' and the immune response. Current research indicates that combined targeting of these HIF-regulated membrane proteins in tumour cells will provide promising therapeutic strategies in the future.

  5. Platinum uptake from chloride solutions using biosorbents

    Directory of Open Access Journals (Sweden)

    Mehmet Hakan Morcali

    2013-04-01

    The present work investigates platinum uptake from synthetically prepared, dilute platinum-bearing solutions using biomass residues, i.e. pistachio nut shell and rice husk, which are abundant in Turkey, and provides a comparison between these two biosorbents. The effects of the different uptake parameters (sorbent dosage, contact time, temperature and pH of solution) on platinum uptake (%) were studied in detail in batch sorption experiments. Before the pistachio nut shell was activated, platinum uptake (%) was poor compared to the rice husk. However, after the pistachio nut shell was activated at 1000 °C under an argon atmosphere, the platinum uptake (%) increased two-fold. The pistachio nut shell (original and activated) and rice husk were shown to be better than commercially available activated carbon in terms of adsorption capacity. These two sorbents have also been characterized by FTIR and SEM. The adsorption equilibrium data best complied with the Langmuir isotherm model. Maximum adsorption capacities, Qmax, at 25 °C were found to be 38.31 and 42.02 mg/g for the activated pistachio nut shell and rice husk, respectively. Thermodynamic calculations using the measured ∆H°, ∆S° and ∆G° values indicate that the uptake process was spontaneous and endothermic. The experimental data were shown to fit the pseudo-second-order kinetic model.
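
    A hedged sketch of fitting the Langmuir isotherm q = Qmax·b·Ce/(1 + b·Ce) to equilibrium data; the concentrations and uptakes below are illustrative, not the measured pistachio-shell or rice-husk values:

      import numpy as np
      from scipy.optimize import curve_fit

      Ce = np.array([5, 10, 20, 40, 80, 160.0])    # equilibrium concentration (mg/L), assumed
      qe = np.array([12, 20, 28, 34, 38, 40.0])    # uptake (mg/g), assumed

      def langmuir(Ce, Qmax, b):
          return Qmax * b * Ce / (1.0 + b * Ce)

      (Qmax, b), _ = curve_fit(langmuir, Ce, qe, p0=[40.0, 0.05])
      print(f"Qmax = {Qmax:.1f} mg/g, b = {b:.3f} L/mg")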

  6. Operational optimisation of water supply networks using a fuzzy ...

    African Journals Online (AJOL)

    Operational optimisation of water supply networks using a fuzzy system. ... This paper presents a fuzzy system to control the pressure in a water distribution network, by using valves and controlling the rotor speed of the ...

  7. Exploring RSSI Dependency on Height in UHF for throughput optimisation

    CSIR Research Space (South Africa)

    Maliwatu, R

    2016-11-01

    Presented at the International Conference on Advances in Computing & Communication Engineering (ICACCE), 28-29 November 2016, Durban, South Africa, by Richard Maliwatu, Albert Lysko, David Johnson...

  8. Efficient topology optimisation of multiscale and multiphysics problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe

    The aim of this Thesis is to present efficient methods for optimising high-resolution problems of a multiscale and multiphysics nature. The Thesis consists of two parts: one treating topology optimisation of microstructural details and the other treating topology optimisation of conjugate heat...... transfer problems. Part I begins with an introduction to the concept of microstructural details in the context of topology optimisation. Relevant literature is briefly reviewed and problems with existing methodologies are identified. The proposed methodology and its strengths are summarised. Details...... the computational cost of treating structures with fully-resolved microstructural details. The methodology is further applied to examples, where it is shown that it ensures connectivity of the microstructural details and that forced periodicity of the microstructural details can yield an implicit robustness to load...

  9. Share-of-Surplus Product Line Optimisation with Price Levels

    Directory of Open Access Journals (Sweden)

    X. G. Luo

    2014-01-01

    Kraus and Yano (2003) established the share-of-surplus product line optimisation model and developed a heuristic procedure for this nonlinear mixed-integer optimisation model. In their model, the price of a product is defined as a continuous decision variable. However, because product line optimisation is a planning process in the early stage of product development, pricing decisions usually are not very precise. In this research, a nonlinear integer programming share-of-surplus product line optimisation model that allows the selection of candidate price levels for products is established. The model is further transformed into an equivalent linear mixed-integer optimisation model by applying linearisation techniques. Experimental results in different market scenarios show that the computation time of the transformed model is much less than that of the original model.

  10. Optimisation of patient protection and image quality in diagnostic ...

    African Journals Online (AJOL)

    Optimisation of patient protection and image quality in diagnostic radiology. ... The study leads to the introduction of the concept of plan-do-check-act on QC results ... (QA) programme and continues to collect data for establishment of DRLs.

  11. A Comparison of Existing Optimisation Techniques with the ...

    African Journals Online (AJOL)

    ... Existing Optimisation Techniques with the Univariate Marginal Distribution Algorithm ... graph colouring, neural networks, genetic algorithms and tabu search. ... optimization techniques and we show how the proposed algorithm performs in ...

  12. Construction and optimisation of a cartridge filter for removing ...

    African Journals Online (AJOL)

    Construction and optimisation of a cartridge filter for removing fluoride in drinking water. ... It was found that the optimal conditions for the F- filter that gave the best results in removing F- from water with minimum ...

  13. optPBN: An Optimisation Toolbox for Probabilistic Boolean Networks

    Science.gov (United States)

    Trairatphisan, Panuwat; Mizera, Andrzej; Pang, Jun; Tantar, Alexandru Adrian; Sauter, Thomas

    2014-01-01

    Background There exist several computational tools which allow for the optimisation and inference of biological networks using a Boolean formalism. Nevertheless, the results from such tools yield only limited quantitative insights into the complexity of biological systems because of the inherent qualitative nature of Boolean networks. Results We introduce optPBN, a Matlab-based toolbox for the optimisation of probabilistic Boolean networks (PBN) which operates under the framework of the BN/PBN toolbox. optPBN offers easy generation of probabilistic Boolean networks from rule-based Boolean model specifications and it allows for flexible integration of measurement data from multiple experiments. Subsequently, optPBN generates integrated optimisation problems which can be solved by various optimisers. In terms of functionality, optPBN allows for the construction of a probabilistic Boolean network from a given set of potential constitutive Boolean networks by optimising the selection probabilities for these networks so that the resulting PBN fits experimental data. Furthermore, the optPBN pipeline can also be operated on large-scale computational platforms to solve complex optimisation problems. Apart from exemplary case studies in which we correctly inferred the original network, we also successfully applied optPBN to study a large-scale Boolean model of apoptosis, where it allowed us to quantitatively identify the inverse correlation between UVB irradiation, NFκB and Caspase 3 activation, and apoptosis in primary hepatocytes. The results from optPBN also help to elucidate the relevance of crosstalk interactions in the apoptotic network. Summary The optPBN toolbox provides a simple yet comprehensive pipeline for integrated optimisation problem generation in the PBN formalism that can readily be solved by various optimisers on local or grid-based computational platforms. optPBN can be further applied to various biological studies such as the inference of gene regulatory
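
    A toy version of the optPBN idea: tune the selection probability between two candidate Boolean rules so that the network reproduces a measured activation level (the rules, input statistics and the "measurement" are hypothetical):

      import numpy as np

      p_A = 0.8                 # probability that the input node A is active, assumed
      measured_B = 0.65         # observed steady-state activation of node B, assumed

      def predicted_B(p_rule1):
          # rule 1: B = A, rule 2: B = NOT A, chosen with probability p_rule1 / (1 - p_rule1)
          return p_rule1 * p_A + (1.0 - p_rule1) * (1.0 - p_A)

      grid = np.linspace(0, 1, 1001)
      errors = (predicted_B(grid) - measured_B) ** 2     # fit to the measurement
      best = grid[np.argmin(errors)]
      print(f"optimised selection probability for rule 1: {best:.2f}")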

  14. A supportive architecture for CFD-based design optimisation

    Science.gov (United States)

    Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong

    2014-03-01

    Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO requiring the analysis of fluid dynamics raises a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their application in various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, comparable to analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under the given constraints. However, due to the complexity of CFD, integrating CFD analysis with an intelligent optimisation algorithm is not straightforward. It is a challenge to solve a CFD-based design problem, which usually has high dimensionality and multiple objectives and constraints. It is desirable to have an integrated architecture for CFD-based design optimisation. However, our review of existing works has found that very few researchers have studied assistive tools to facilitate CFD-based design optimisation. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing techniques and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or the data level to fully utilise the capabilities of different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided, and the result has shown that the proposed architecture

  15. Optimisation as a process for understanding and managing river ecosystems

    OpenAIRE

    Barbour, EJ; Holz, L; G. Kuczera; Pollino, CA; Jakeman, AJ; Loucks, DP

    2016-01-01

    Optimisation can assist in the management of riverine ecosystems through the exploration of multiple alternative management strategies, and the evaluation of trade-offs between conflicting objectives. In addition, it can facilitate communication and learning about the system. However, the effectiveness of optimisation in aiding decision making for ecological management is currently limited by four major challenges: identification and quantification of ecosystem objectives; representation of e...

  16. Aerodynamic optimisation of an industrial axial fan blade

    OpenAIRE

    2006-01-01

    Numerical optimisation methods have successfully been used for a variety of aerodynamic design problems over quite a few years. However the application of these methods to the aerodynamic blade shape optimisation of industrial axial fans has received much less attention in the literature probably given the fact that the majority of resources available to develop these automated design approaches is to be found in the aerospace field. This work presents the develo...

  17. Spreadsheets In Function Of Optimisation Of Logistics Network

    OpenAIRE

    Drago Pupavac; Mimo Draskovic

    2007-01-01

    This paper discusses how spreadsheets can be used in logistics network optimisation. The working hypothesis on the efficacy of spreadsheets in designing logistics networks is demonstrated through a practical example, and in this way the given model can be applied to all logistics networks of similar problem size. The logistics network model built in spreadsheets represents the real world at the level needed for understanding the problem of optimisation of logistic...

  18. Optimised control of coal-fired boilers

    Energy Technology Data Exchange (ETDEWEB)

    Owens, D.H.; MacConnell, P.F.A.; Neuffer, D.; Dando, R. [University of Exeter, Exeter (United Kingdom). Centre for System and Control Engineering

    1997-07-01

    The objective of the project is to develop and specify a control methodology that will enable existing coal combustion plant to take maximum advantage of modern control techniques. The research is specifically aimed at chain-grate stoker plant (such as the test facility at the Coal Research Establishment, Cheltenham) on which little work has been done for thirty years yet which still represents a large proportion of industrial coal-fired plant in operation worldwide. In detail, the project: reviewed existing control strategies for moving grate stokers, highlighting their limitations and areas for improvement; carried out plant trials to identify system characteristics such as response time and input/output behaviour; developed a theoretical process model based on physical and chemical laws and backed up by trial data; specified control strategies for a single boiler; simulated and evaluated the control strategies using model simulations; developed an optimised control strategy for a single boiler; and assessed the applicability and effects of this control strategy on multiple boiler installations. 67 refs., 34 figs.

  19. Semantic Query Optimisation with Ontology Simulation

    CERN Document Server

    Gupta, Siddharth

    2010-01-01

    The Semantic Web is, without a doubt, gaining momentum in both industry and academia. The word "Semantic" refers to "meaning" - a semantic web is a web of meaning. In this fast changing and result oriented practical world, gone are the days where an individual had to struggle to find information on the Internet and knowledge management was the major issue. The semantic web has a vision of linking, integrating and analysing data from various data sources and forming a new information stream, hence a web of databases connected with each other and machines interacting with other machines to yield results which are user oriented and accurate. With the emergence of the Semantic Web framework the naïve approach of searching for information on the syntactic web is cliché. This paper proposes an optimised semantic search of keywords, exemplified by simulating an ontology of Indian universities with a proposed algorithm which ramifies the effective semantic retrieval of information which is easy to access and time sav...

  20. ENERGY OPTIMISATION SCHEMES FOR WIRELESS SENSOR NETWORK

    Directory of Open Access Journals (Sweden)

    Vivekanand Jha

    2012-05-01

    A sensor network is composed of a large number of sensor nodes, which are densely deployed either inside the phenomenon or very close to it. Sensor nodes have sensing, processing and transmitting capabilities. However, they have limited energy, and measures need to be taken to make optimum use of that energy and to relieve nodes of the task of only receiving and transmitting data without processing. Various techniques for optimising energy utilisation have been proposed; the major ones are clustering and relay node placement. In research related to relay node placement, it has been proposed to deploy relay nodes such that the sensors can transmit the sensed data to a nearby relay node, which in turn delivers the data to the base stations. In general, relay node placement problems aim to meet certain connectivity and/or survivability requirements of the network by deploying a minimum number of relay nodes. The other approach is grouping sensor nodes into clusters, with each cluster having a cluster head (CH). The CH nodes aggregate the data and transmit them to the base station (BS). These two approaches have been widely adopted by the research community to satisfy the scalability objective; they generally achieve high energy efficiency and prolong network lifetime in large-scale WSN environments, and hence are discussed here along with the single-hop and multi-hop characteristics of sensor nodes.
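
    A toy sketch of the clustering approach, picking cluster-head positions that minimise the summed squared sensor-to-head distance as a proxy for transmission energy (plain Lloyd iterations, not a real WSN protocol; field size and node counts are assumed):

      import numpy as np

      rng = np.random.default_rng(5)
      sensors = rng.uniform(0, 100, (60, 2))       # sensor coordinates in a 100 x 100 field
      k = 4                                        # number of cluster heads, assumed
      heads = sensors[rng.choice(len(sensors), k, replace=False)].copy()

      for _ in range(20):
          # assign each sensor to its nearest cluster head, then recentre the heads
          d2 = ((sensors[:, None, :] - heads[None, :, :]) ** 2).sum(axis=2)
          label = d2.argmin(axis=1)
          for c in range(k):
              if np.any(label == c):
                  heads[c] = sensors[label == c].mean(axis=0)

      d2 = ((sensors[:, None, :] - heads[None, :, :]) ** 2).sum(axis=2)
      print("cluster heads:\n", np.round(heads, 1))
      print("summed squared distance (energy proxy):", round(d2.min(axis=1).sum(), 1))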

  1. Topological optimisation of rod-stirring devices

    CERN Document Server

    Finn, Matthew D

    2011-01-01

    There are many industrial situations where rods are used to stir a fluid, or where rods repeatedly stretch a material such as bread dough or taffy. The goal in these applications is to stretch either material lines (in a fluid) or the material itself (for dough or taffy) as rapidly as possible. The growth rate of material lines is conveniently given by the topological entropy of the rod motion. We discuss the problem of optimising such rod devices from a topological viewpoint. We express rod motions in terms of generators of the braid group, and assign a cost based on the minimum number of generators needed to write the braid. We show that for one cost function -- the topological entropy per generator -- the optimal growth rate is the logarithm of the golden ratio. For a more realistic cost function, involving the topological entropy per operation where rods are allowed to move together, the optimal growth rate is the logarithm of the silver ratio, $1+\sqrt{2}$. We show how to construct devices that realise th...

  2. Optimising preterm nutrition: present and future

    LENUS (Irish Health Repository)

    Brennan, Ann-Marie

    2016-04-01

    The goal of preterm nutrition in achieving growth and body composition approximating that of the fetus of the same postmenstrual age is difficult to achieve. Current nutrition recommendations depend largely on expert opinion, due to lack of evidence, and are primarily birth weight based, with no consideration given to gestational age and/or need for catch-up growth. Assessment of growth is based predominately on anthropometry, which gives insufficient attention to the quality of growth. The present paper provides a review of the current literature on the nutritional management and assessment of growth in preterm infants. It explores several approaches that may be required to optimise nutrient intakes in preterm infants, such as personalising nutritional support, collection of nutrient intake data in real-time, and measurement of body composition. In clinical practice, the response to inappropriate nutrient intakes is delayed as the effects of under- or overnutrition are not immediate, and there is limited nutritional feedback at the cot-side. The accurate and non-invasive measurement of infant body composition, assessed by means of air displacement plethysmography, has been shown to be useful in assessing quality of growth. The development and implementation of personalised, responsive nutritional management of preterm infants, utilising real-time nutrient intake data collection, with ongoing nutritional assessments that include measurement of body composition is required to help meet the individual needs of preterm infants.

  3. Design and optimisation of a pulsed CO2 laser for laser ultrasonics

    CSIR Research Space (South Africa)

    Forbes, A

    2006-02-01

    Presentation slides (CSIR, 2006) on the design and optimisation of a pulsed CO2 laser for laser ultrasonics. Recoverable slide content: a contents listing (laser ultrasonics - what it is and how it works; optimising the parameters - choices and consequences; laser chemistry - a physicist's approach to chemistry; discharge design), a plot of He fraction in the gas mix versus laser chemistry (impact of gas mix, with the CO2 dissociation equilibrium 2 CO2 <-> 2 CO + O2), and a problem statement slide mentioning a 36 kV supply and heat exchanger.

  4. Production of gluconic acid using Micrococcus sp.: optimisation of carbon and nitrogen sources.

    Science.gov (United States)

    Joshi, V D; Sreekantiah, K R; Manjrekar, S P

    1996-01-01

    A process for the production of gluconic acid from glucose by a Micrococcus sp. is described. More than 400 bacterial cultures isolated from local soil were tested for gluconic acid production. Three isolates were selected on the basis of their ability to produce gluconic acid and high titratable acidity. These were identified as Micrococcus sp. and were named M 27, M 54 and M 81. Nutritional and other parameters for maximum production of gluconic acid by the selected isolates were optimised. It was found that Micrococcus sp. isolate M 27 gave the highest yield of 8.19 g gluconic acid from 9 g glucose utilised, giving 91% conversion efficiency.

  5. Optimising the anaerobic co-digestion of urban organic waste using dynamic bioconversion mathematical modelling

    DEFF Research Database (Denmark)

    Fitamo, Temesgen Mathewos; Boldrin, Alessio; Dorini, G.

    2016-01-01

    strategies for controlling and optimising the co-digestion process. The model parameters were maintained in the same way as the original dynamic bioconversion model, albeit with minor adjustments, to simulate the co-digestion of food and garden waste with mixed sludge from a wastewater treatment plant...... scenario analysis demonstrated that increasing the amount of mixed sludge in the co-substrate had a marginal effect on the reactor performance. In contrast, increasing the amount of food waste and garden waste resulted in improved performance....

  6. Source reconstruction using a bilevel optimisation method with a smooth weighted distance function

    CERN Document Server

    Brännström, Niklas

    2016-01-01

    We consider a bilevel optimisation method for inverse linear atmospheric dispersion problems where both linear and non-linear model parameters are to be determined. We propose that a smooth weighted Mahalanobis distance function is used and derive sufficient conditions for when the follower problem has local strict convexity. A few toy models are presented in which local strict convexity and ill-posedness of the inverse problem are explored; the smooth distance function is compared and contrasted with linear and piecewise linear ones. The bilevel optimisation method is then applied to sensor data collected in wind tunnel experiments of a neutral gas release in urban environments (MODITIC).
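
    A small sketch of a weighted Mahalanobis-type misfit of the kind such a follower problem minimises; the readings, weights and covariance below are placeholders, not the MODITIC data:

      import numpy as np

      measured  = np.array([1.2, 0.7, 0.1])          # sensor readings, assumed units
      predicted = np.array([1.0, 0.9, 0.05])          # dispersion-model output, assumed
      weights   = np.diag([1.0, 0.5, 2.0])            # per-sensor confidence weights, assumed
      cov       = np.array([[0.04, 0.01, 0.0],
                            [0.01, 0.09, 0.0],
                            [0.0,  0.0,  0.01]])      # measurement error covariance, assumed

      r = measured - predicted
      # quadratic form (W r)^T C^{-1} (W r), non-negative for diagonal W and SPD C
      dist2 = r @ weights @ np.linalg.solve(cov, weights @ r)
      print("weighted Mahalanobis distance:", np.sqrt(dist2))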

  7. Optimising production using the state-contingent approach versus the EV approach

    DEFF Research Database (Denmark)

    Rasmussen, Svend

    It is not clear whether the state-contingent approach to decision making under risk and uncertainty has the potential of providing better decisions than the well-known EV model based on an estimated stochastic production function and variance measures. The paper uses Monte Carlo simulation to analyse this question. Based on an artificially generated set of stochastic production data, parameters of both stochastic production functions and of state-contingent production functions are estimated. Using these estimated production functions, input decisions are afterwards optimised using three......

  8. A robust optimisation approach to the problem of supplier selection and allocation in outsourcing

    Science.gov (United States)

    Fu, Yelin; Keung Lai, Kin; Liang, Liang

    2016-03-01

    We formulate the supplier selection and allocation problem in outsourcing under an uncertain environment as a stochastic programming problem. Both the decision-maker's attitude towards risk and the penalty parameters for demand deviation are considered in the objective function. A service level agreement, upper bound for each selected supplier's allocation and the number of selected suppliers are considered as constraints. A novel robust optimisation approach is employed to solve this problem under different economic situations. Illustrative examples are presented with managerial implications highlighted to support decision-making.

  9. Mitochondrial calcium uptake.

    Science.gov (United States)

    Williams, George S B; Boyman, Liron; Chikando, Aristide C; Khairallah, Ramzi J; Lederer, W J

    2013-06-25

    Calcium (Ca(2+)) uptake into the mitochondrial matrix is critically important to cellular function. As a regulator of matrix Ca(2+) levels, this flux influences energy production and can initiate cell death. If large, this flux could potentially alter intracellular Ca(2+) ([Ca(2+)]i) signals. Despite years of study, fundamental disagreements on the extent and speed of mitochondrial Ca(2+) uptake still exist. Here, we review and quantitatively analyze mitochondrial Ca(2+) uptake fluxes from different tissues and interpret the results with respect to the recently proposed mitochondrial Ca(2+) uniporter (MCU) candidate. This quantitative analysis yields four clear results: (i) under physiological conditions, Ca(2+) influx into the mitochondria via the MCU is small relative to other cytosolic Ca(2+) extrusion pathways; (ii) single MCU conductance is ∼6-7 pS (105 mM [Ca(2+)]), and MCU flux appears to be modulated by [Ca(2+)]i, suggesting Ca(2+) regulation of MCU open probability (P(O)); (iii) in the heart, two features are clear: the number of MCU channels per mitochondrion can be calculated, and MCU probability is low under normal conditions; and (iv) in skeletal muscle and liver cells, uptake per mitochondrion varies in magnitude but total uptake per cell still appears to be modest. Based on our analysis of available quantitative data, we conclude that although Ca(2+) critically regulates mitochondrial function, the mitochondria do not act as a significant dynamic buffer of cytosolic Ca(2+) under physiological conditions. Nevertheless, with prolonged (superphysiological) elevations of [Ca(2+)]i, mitochondrial Ca(2+) uptake can increase 10- to 1,000-fold and begin to shape [Ca(2+)]i dynamics.

  10. Optimisation of the formulation of a bubble bath by a chemometric approach: market segmentation and optimisation.

    Science.gov (United States)

    Marengo, Emilio; Robotti, Elisa; Gennaro, Maria Carla; Bertetto, Mariella

    2003-03-01

    The optimisation of the formulation of a commercial bubble bath was performed by chemometric analysis of Panel Test results. A first Panel Test was performed to choose the best essence among four proposed to the consumers; the best essence chosen was used in the revised commercial bubble bath. Afterwards, the effect of changing the amount of four components of the bubble bath (the amount of primary surfactant, the essence, the hydrating agent and the colouring agent) was studied by a fractional factorial design. The segmentation of the bubble bath market was performed by a second Panel Test, in which the consumers were requested to evaluate the samples coming from the experimental design. The results were then treated by Principal Component Analysis. The market had two segments: people preferring a product with a rich formulation and people preferring a poor product. The final target, i.e. the optimisation of the formulation for each segment, was achieved by the calculation of regression models relating the subjective evaluations given by the Panel to the compositions of the samples. The regression models allowed identification of the best formulations for the two segments of the market.

  11. Methodological principles for optimising functional MRI experiments; Methodische Grundlagen der Optimierung funktioneller MR-Experimente

    Energy Technology Data Exchange (ETDEWEB)

    Wuestenberg, T. [Georg-August-Universitaet Goettingen, Abteilung fuer Medizinische Psychologie (Germany); Georg-August-Universitaet, Abteilung fuer Medizinische Psychologie, Goettingen (Germany); Giesel, F.L. [Deutsches Kebsforschungszentrum (DKFZ) Heidelberg, Abteilung fuer Radiologische Diagnostik (Germany); Strasburger, H. [Georg-August-Universitaet Goettingen, Abteilung fuer Medizinische Psychologie (Germany)

    2005-02-01

    Functional magnetic resonance imaging (fMRI) is one of the most common methods for localising neuronal activity in the brain. Even though the sensitivity of fMRI is comparatively low, the optimisation of certain experimental parameters allows reliable results to be obtained. In this article, approaches for optimising the experimental design, imaging parameters and analytic strategies are discussed. Clinical neuroscientists and interested physicians will receive practical rules of thumb for improving the efficiency of brain imaging experiments.

  12. Tapped density optimisation for four agricultural wastes - Part II: Performance analysis and Taguchi-Pareto

    Directory of Open Access Journals (Sweden)

    Ajibade Oluwaseyi Ayodele

    2016-01-01

    In this second part of the discussion on tapped density optimisation for four agricultural wastes (particles of coconut, periwinkle, palm kernel and egg shells), a performance analysis is made on a comparative basis. This paper pioneers a study direction in which optimisation of process variables is pursued using the Taguchi method integrated with the Pareto 80-20 rule. Negative percentage improvements resulted when the optimal tapped density was compared with the average tapped density. However, the performance analysis between the optimal tapped density and the peak tapped density values yielded positive percentage improvements for the four filler particles. The performance analysis results validate the effectiveness of using the Taguchi method in improving the tapped density properties of the filler particles. The application of the Pareto 80-20 rule to the table of parameters and levels produced revised tables of parameters and levels which helped to identify the factor-level position of each parameter that is economical to optimality. The Pareto 80-20 rule also produced revised S/N response tables which were used to identify the S/N ratios relevant to optimality.
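
    A sketch of the larger-the-better signal-to-noise ratio used in Taguchi analysis, applied to illustrative tapped-density replicates (the values are hypothetical):

      import numpy as np

      def sn_larger_the_better(y):
          # Taguchi larger-the-better S/N ratio: -10 log10( mean(1/y^2) )
          y = np.asarray(y, dtype=float)
          return -10.0 * np.log10(np.mean(1.0 / y ** 2))

      replicates = [0.62, 0.65, 0.63]      # hypothetical tapped densities (g/cm^3)
      print(f"S/N = {sn_larger_the_better(replicates):.2f} dB")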

  13. Correlation between PET-CT 18FDG uptake in primary lesions and clinicopathological parameters in esophageal carcinoma patients

    Institute of Scientific and Technical Information of China (English)

    冯瑞; 李明焕; 孔莉; 石芳; 杨国仁; 于金明

    2009-01-01

    Objective: To investigate the correlation between 18F-fluorodeoxyglucose (18FDG) uptake of primary lesions during PET-CT (positron emission tomography and computed tomography) examination and clinicopathological parameters such as tumor length, depth of invasion, differentiation of the primary lesions and lymph node metastasis status in patients with esophageal carcinoma. Methods: From June 2004 to November 2006, 68 operable esophageal squamous cell carcinoma patients were enrolled into this study, and all had a whole-body 18FDG PET-CT scan before operation. The maximum standardized uptake value (SUVmax) of the primary lesions was measured. The tumor length, depth of invasion, differentiation of the primary lesions and lymph node metastasis status were determined by postoperative pathological examination, and their correlation with SUVmax was analyzed. Results: The SUVmax of the primary lesions in the 68 patients was 10.7 ± 5.3. The differences in SUVmax among groups with different depth of invasion, differentiation and lymph node metastasis status were statistically significant (all P < 0.05). SUVmax of the primary lesion was positively correlated with lesion length, depth of invasion, differentiation and lymph node metastasis status (r = 0.512, P = 0.01; r = 0.860, P = 0.000; r = 0.781, P = 0.000; r = 0.852, P = 0.000). Conclusion: The SUVmax of primary esophageal carcinoma lesions is positively correlated with lesion length, depth of invasion and differentiation; SUVmax of the primary lesion was higher in patients with lymph node metastasis than in those without.

  14. Study on Characteristic Parameters of Oxygen Uptake Kinetics in Different Exercise Test Models

    Institute of Scientific and Technical Information of China (English)

    胡国鹏; 冯魏; 冯刚; 王振; 刘无逸; 孟妍

    2015-01-01

    Objective: The characteristics of, and the relationship between, two typical exercise test models were researched in this study. Method: 34 subjects were recruited to complete two exercise tests: an incremental ramp exhaustive test and a moderate-intensity constant-load test. The oxygen uptake data were fitted by linear and nonlinear function models. Results: The goodness of fit R2 between oxygen consumption (VO2) and power (W) was 0.932 ± 0.151 (P < 0.01) for a linear function in the ramp test. The goodness of fit R2 between ventilation (VE) and VO2 was 0.977 ± 0.012 (P < 0.01) for a nonlinear logarithmic function in the ramp test. In the moderate-intensity repeated test, the goodness of fit R2 was 0.96 ± 0.02 (P < 0.01), where VO2 changed with time as an exponential function. There were significant correlations (P < 0.01) between delta efficiency, maximal oxygen uptake and VT, with correlation coefficients from 0.600 to 0.757; there was no significant correlation between OUES, delta efficiency and τ. Conclusion: In the ramp test, VO2 changed with power in a typically linear fashion, while VE showed a typically logarithmic relationship with VO2, and there were significant differences in delta efficiency before and after VT; in the moderate-intensity constant-load test, VO2 showed a mono-exponential increase with time towards a steady state, and there were correlations at different levels between delta efficiency, OUES and VT, VO2max.
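
    A sketch of the mono-exponential on-kinetics fit VO2(t) = baseline + A·(1 - exp(-(t - delay)/tau)) on a synthetic trace (all values below are assumed, not the study's measurements):

      import numpy as np
      from scipy.optimize import curve_fit

      t = np.arange(0, 300, 5, dtype=float)      # time (s)

      def mono_exp(t, baseline, amplitude, delay, tau):
          # flat baseline before the delay, exponential rise afterwards
          resp = baseline + amplitude * (1.0 - np.exp(-(t - delay) / tau))
          return np.where(t < delay, baseline, resp)

      rng = np.random.default_rng(7)
      vo2 = mono_exp(t, 0.8, 1.2, 15.0, 30.0) + rng.normal(0, 0.03, t.size)  # L/min

      params, _ = curve_fit(mono_exp, t, vo2, p0=[0.8, 1.0, 10.0, 25.0])
      print("baseline, amplitude, delay, tau:", np.round(params, 2))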

  15. Thyroid Scan and Uptake

    Medline Plus

    Image gallery: photograph of a typical probe counter used for thyroid uptake exams. The patient sits with the camera directed at the neck for five minutes, and then the leg for ...

  16. The Uptake of GABA in Trypanosoma cruzi.

    Science.gov (United States)

    Galvez Rojas, Robert L; Ahn, Il-Young; Suárez Mantilla, Brian; Sant'Anna, Celso; Pral, Elizabeth Mieko Furusho; Silber, Ariel Mariano

    2015-01-01

    Gamma aminobutyric acid (GABA) is widely known as a neurotransmitter and signal transduction molecule found in vertebrates, plants, and some protozoan organisms. However, the presence of GABA and its role in trypanosomatids is unknown. Here, we report the presence of intracellular GABA and the biochemical characterization of its uptake in Trypanosoma cruzi, the etiological agent of Chagas' disease. Kinetic parameters indicated that GABA is taken up by a single transport system in both pathogenic and nonpathogenic forms. Temperature dependence assays showed a profile similar to that of glutamate transport, but the effect of the extracellular cations Na(+), K(+), and H(+) on GABA uptake differed, suggesting a different uptake mechanism. In contrast to reports for other amino acid transporters in T. cruzi, GABA uptake was Na(+) dependent and increased with pH, with maximum activity at pH 8.5. The sensitivity to oligomycin showed that GABA uptake is dependent on ATP synthesis. These data point to a secondary active Na(+)/GABA symporter energized by a Na(+)-exporting ATPase. Finally, we show that GABA occurs in the parasite's cytoplasm under normal culture conditions, indicating that it is regularly taken up from the culture medium or synthesized through a still undescribed metabolic pathway.

  17. Optimising location of unified power flow controllers by means of improved evolutionary programming

    Energy Technology Data Exchange (ETDEWEB)

    Hao, J.; Chen, C. [Shanghai Jiaotong Univ. (China). Dept. of Electrical Engineering; Shi, L.B. [Hong Kong Univ. (China). Dept. of Electrical and Electronic Engineering

    2004-11-01

    The unified power flow controller (UPFC) is one of the most promising Flexible AC Transmission Systems (FACTS) devices for load flow control. Simultaneous optimisation of the location and parameters of UPFCs is an important issue when a given number of UPFCs is applied to a power system with the purpose of increasing system loadability. This paper presents a mathematical model for the optimal location and parameters of UPFCs to maximise system loadability subject to the transmission line capacity limits and a specified voltage level. An improved computational intelligence approach, self-adaptive evolutionary programming (SAEP), is used to solve the nonlinear programming problem presented above for better accuracy. Case studies of the IEEE 30- and 118-bus systems using the proposed model and technique demonstrate that the proposed mathematical model is correct and efficient. Furthermore, the steady-state performance of the power system can be effectively enhanced due to the optimal location and parameters of UPFCs.
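
    A compact sketch of self-adaptive evolutionary programming, in which each individual carries its own mutation step sizes that are themselves perturbed each generation; the objective is a toy surrogate, not a power-flow model, and all settings are assumptions:

      import numpy as np

      rng = np.random.default_rng(0)

      def loadability(x):                      # hypothetical smooth surrogate to maximise
          return -np.sum((x - 0.3) ** 2)

      dim, pop_size, generations = 4, 30, 200
      pop   = rng.uniform(-1, 1, (pop_size, dim))
      sigma = np.full((pop_size, dim), 0.3)    # per-individual mutation step sizes
      tau   = 1.0 / np.sqrt(2.0 * np.sqrt(dim))

      for _ in range(generations):
          # self-adaptive mutation: perturb the step sizes, then the solutions
          child_sigma = sigma * np.exp(tau * rng.normal(size=sigma.shape))
          children    = pop + child_sigma * rng.normal(size=pop.shape)
          all_pop   = np.vstack([pop, children])
          all_sigma = np.vstack([sigma, child_sigma])
          fitness   = np.array([loadability(x) for x in all_pop])
          best      = np.argsort(fitness)[-pop_size:]      # keep the fittest half
          pop, sigma = all_pop[best], all_sigma[best]

      print("best solution:", pop[-1], "fitness:", loadability(pop[-1]))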

  18. Determination of optimum thermal debinding and sintering process parameters using Taguchi Method

    CSIR Research Space (South Africa)

    Seerane, M

    2015-07-01

    from a green body after injection moulding; failure to completely remove the binder components results in distortion, cracking, blisters and contamination at elevated temperatures. This study focuses on optimising thermal debinding process parameters...

  19. Optimisation of computed radiography systems for chest imaging

    Energy Technology Data Exchange (ETDEWEB)

    Alzimami, K. [Department of Physics, University of Surrey, Guildford, Surrey GU2 7XH (United Kingdom)], E-mail: k.alzimami@surrey.ac.uk; Sassi, S. [Royal Marsden NHS Foundation Trust, Sutton, Surrey SM2 5PT (United Kingdom); Alkhorayef, M. [Department of Physics, University of Surrey, Guildford, Surrey GU2 7XH (United Kingdom); Britten, A.J. [Department of Medical Physics, St George' s Hospital, London SW17 0QT (United Kingdom); Spyrou, N.M. [Department of Physics, University of Surrey, Guildford, Surrey GU2 7XH (United Kingdom)

    2009-03-01

    The main thrust of this study is to investigate methods of optimising the radiation dose-image quality relationship in computed radiography (CR) systems for chest imaging. Specifically, this study investigates the possibility of reducing the patient radiation exposure through an optimal selection of tube filtration, exposure parameters and air gap technique, in parallel with a study of the image quality, particularly low contrast detail detectability, signal-to-noise ratio (SNR) and scatter fraction (SF). The CDRAD phantom was used to assess the quality of the CR images. Tissue-equivalent polystyrene blocks were placed in front of the phantom as scattering material with thicknesses of 5 and 15 cm to simulate an adult chest and heart/diaphragm regions, respectively. A series of exposure techniques were used, including Cu filtration with various thicknesses of Cu in the presence and absence of an air gap, whilst the exposure was kept as constant as possible throughout. The estimated patient effective dose and skin entrance dose were calculated using the NRPB-SR262 X-ray dose calculation software. The results have shown that the low contrast-detail detectability in the lung and the heart/diaphragm regions improves when using an air gap and no Cu filtration, particularly at low kilovoltage (kVp). However, there is no significant difference in low contrast-detail detectability in the absence or presence of a 0.2 mm Cu filtration. SF values for the lung and heart regions decrease when using both the air gap technique and a 0.2 mm Cu filtration, particularly at low kVp. SNR values for the lung and heart regions improve when using a small Cu thickness. In conclusion, this investigation has shown that the quality of chest CR images could be improved by using an air gap technique and a 0.2 mm Cu filtration at low kVp, particularly at 99 kVp.

  20. CT dose optimisation and reduction in osteoarticular disease.

    Science.gov (United States)

    Gervaise, A; Teixeira, P; Villani, N; Lecocq, S; Louis, M; Blum, A

    2013-04-01

    With improvements in temporal and spatial resolution, computed tomography (CT) is indicated in the evaluation of a great many osteoarticular diseases. New exploration techniques such as dynamic CT and CT bone perfusion also provide new indications. However, CT is still an irradiating imaging technique and dose optimisation and reduction remain essential. In this paper, the authors first present the typical doses delivered during CT in osteoarticular disease. They then discuss the different ways to optimise and reduce these doses by distinguishing behavioural factors from technical factors. Among the latter, the optimisation of the milliamps and kilovoltage is indispensable and should be adapted to the type of exploration and the morphotype of each individual. These technical factors also benefit from recent technological evolutions with the spread of iterative reconstructions. In this way, the dose may be halved while providing an image of equal quality. With these dose optimisation and reduction techniques, it is now possible, while maintaining an excellent image quality, to obtain low-dose or even very low-dose acquisitions with a dose sometimes similar to that of a standard X-ray assessment. Nevertheless, although these technical factors provide a major reduction in the dose delivered, behavioural factors, such as compliance with the indications, remain fundamental. Finally, the authors describe how to optimise and reduce the dose with specific applications in musculoskeletal imaging such as dynamic CT, CT bone perfusion and dual-energy CT.

  1. Hybrid Genetic Algorithm with PSO Effect for Combinatorial Optimisation Problems

    Directory of Open Access Journals (Sweden)

    M. H. Mehta

    2012-12-01

    Full Text Available In the engineering field, many problems are hard to solve within a definite interval of time. These problems, known as “combinatorial optimisation problems”, belong to the class NP. They are easy to solve in polynomial time when the input size is small, but as the input size grows they become very hard to solve within a definite interval of time. Long-known conventional methods are not able to solve such problems, and proper heuristics are therefore necessary. Evolutionary algorithms based on the behaviours of different animals and species have been invented and studied for this purpose. The genetic algorithm is considered a powerful algorithm for solving combinatorial optimisation problems; it works on these problems by mimicking genetics and follows a “survival of the fittest” strategy. Particle swarm optimisation is a newer evolutionary approach that mimics the behaviour of swarms in nature. However, neither traditional genetic algorithms nor particle swarm optimisation alone has been completely successful in solving combinatorial optimisation problems. Here a hybrid algorithm is proposed in which the strengths of both algorithms are merged, and the performance of the proposed algorithm is compared with a simple genetic algorithm. Results show that the proposed algorithm performs definitely better than the simple genetic algorithm.
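
    The record describes the hybridisation only qualitatively. As a rough illustration of the general idea (not the authors' algorithm), the sketch below runs a plain genetic algorithm on a toy one-max problem and adds a PSO-style step that pulls offspring towards the current global best; the blending probability, rates and population sizes are arbitrary assumptions.

    ```python
    import random

    # Toy objective: maximise the number of 1-bits (one-max).
    def fitness(bits):
        return sum(bits)

    def tournament(pop, k=3):
        return max(random.sample(pop, k), key=fitness)

    def crossover(a, b):
        # Uniform crossover.
        return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

    def mutate(bits, rate=0.01):
        return [1 - b if random.random() < rate else b for b in bits]

    def pso_pull(bits, gbest, pull=0.2):
        # PSO-inspired step: each bit drifts towards the global best with
        # probability `pull` (a crude stand-in for the velocity update).
        return [gb if random.random() < pull else b for b, gb in zip(bits, gbest)]

    def hybrid_ga_pso(n_bits=40, pop_size=30, generations=100):
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        gbest = max(pop, key=fitness)
        for _ in range(generations):
            offspring = []
            for _ in range(pop_size):
                child = crossover(tournament(pop), tournament(pop))
                child = mutate(child)
                child = pso_pull(child, gbest)   # hybrid step
                offspring.append(child)
            pop = offspring
            gbest = max(pop + [gbest], key=fitness)
        return gbest, fitness(gbest)

    if __name__ == "__main__":
        best, score = hybrid_ga_pso()
        print(score)
    ```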

  2. Optimisation of battery operating life considering software tasks and their timing behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Lipskoch, Henrik

    2010-02-19

    Users of mobile embedded systems have an interest in a long battery operating life. The longer a system can operate without recharge or battery replacement, the lower the maintenance cost and the number of faults due to an insufficient power supply. Operating life is prolonged by saving energy, which may reduce the available processing time. Mobile embedded systems that communicate with other participants, such as other mobiles or radio stations, are subject to time guarantees ensuring reliable communication. Thus, methods that save energy by reducing processing time are constrained not only by the available processing time but also by the embedded system's time guarantees. By performing parameter optimisations offline, decisions can be taken early at design time, avoiding further computations at run-time. In particular, when processor shutdown durations are computed offline, no extra circuitry to monitor system behaviour and to wake up the processor needs to be designed, deployed, or powered: only a timer is required. In this work, software tasks sharing one processor are considered. The earliest-deadline-first scheduling algorithm is assumed, and a relative deadline is assumed per task. Tasks may be instantiated arbitrarily as long as this occurrence behaviour is described in the notion of event streams. Scaling of the processor's voltage and processor shutdown are taken into account as methods for saving energy. With given per-task worst-case execution times and the tasks' event streams, the real-time feasibility of the energy-optimised solutions is proven. The decision as to which energy-saving solution provides the longest operating life is made with the help of a battery model. The real-time feasibility test used has the advantage that it can be approximated: this yields an adjustable number of linear optimisation constraints. Reducing the processor's voltage reduces the processor frequency; therefore, execution times increase. The resulting slowdown becomes the
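
    The thesis works with event streams and an approximated feasibility test; as a much simpler sketch (assuming strictly periodic tasks with implicit deadlines, which is not the thesis's task model), the lowest admissible processor frequency under earliest deadline first can be read directly from the task-set utilisation:

    ```python
    # Minimal sketch (assumption: periodic tasks with implicit deadlines, not
    # the event-stream model of the thesis). Under EDF such a task set is
    # schedulable iff the utilisation sum(C_i / T_i) <= 1, so the lowest
    # admissible normalised frequency f equals the nominal utilisation.

    tasks = [            # (worst-case execution time at full speed, period) in ms
        (2.0, 10.0),
        (5.0, 40.0),
        (1.0, 8.0),
    ]

    def min_frequency(tasks):
        # Scaling the clock by f stretches execution times to C/f, so the
        # scaled utilisation is U/f; requiring U/f <= 1 gives f >= U.
        utilisation = sum(c / t for c, t in tasks)
        if utilisation > 1.0:
            raise ValueError("infeasible even at full speed")
        return utilisation

    if __name__ == "__main__":
        f = min_frequency(tasks)
        print(f"run at {f:.0%} of nominal frequency")   # ~45% for this task set
    ```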

  3. Transmit Power Optimisation in Wireless Network

    Directory of Open Access Journals (Sweden)

    Besnik Terziu

    2011-09-01

    Full Text Available Transmit power optimisation in wireless networks based on beamforming has emerged as a promising technique to enhance the spectrum efficiency of present and future wireless communication systems. The aim of this study is to minimise the access point power consumption in cellular networks while maintaining a targeted quality of service (QoS) for the mobile terminals. In this study, the targeted quality of service is delivered to a mobile station by providing a desired level of Signal to Interference and Noise Ratio (SINR). Base stations are coordinated across multiple cells in a multi-antenna beamforming system. This study focuses on a multi-cell multi-antenna downlink scenario where each mobile user is equipped with a single antenna, but where multiple mobile users may be active simultaneously in each cell and are separated via spatial multiplexing using beamforming. The design criterion is to minimise the total weighted transmit power across the base stations subject to SINR constraints at the mobile users. The main contribution of this study is to define an iterative algorithm that is capable of finding the joint optimal beamformers for all base stations, based on a correlation-based channel model, the full-correlation model. Among all correlated channel models, the one used in this study is the most accurate, giving the best performance in terms of power consumption. The environment in this study is chosen to be a Non-Line-of-Sight (NLOS) condition, where a signal from a wireless transmitter passes several obstructions before arriving at a wireless receiver. Moreover, there are many scatterers local to the mobile, and multiple reflections can occur among them before energy arrives at the mobile. The proposed algorithm is based on uplink-downlink duality using Lagrangian duality theory. Time-Division Duplex (TDD) is chosen as the platform for this study since it has been adopted in the latest technologies in Fourth
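
    The record gives no equations. The sketch below shows only the simplest related idea, a fixed-point power-control iteration that scales each transmit power until its SINR target is met; the beamforming, weighting and uplink-downlink duality machinery of the study are omitted, and the channel gains, noise level and targets are invented numbers.

    ```python
    import numpy as np

    # Minimal sketch of iterative power control to meet SINR targets
    # (Foschini-Miljanic style fixed point). Beamforming is omitted; the
    # gains and targets below are made-up values for illustration only.

    G = np.array([[1.0, 0.1, 0.2],     # G[i, j]: gain from transmitter j to user i
                  [0.2, 0.8, 0.1],
                  [0.1, 0.3, 0.9]])
    noise = 1e-3
    targets = np.array([2.0, 1.5, 2.0])   # required SINR per user

    def sinr(p):
        signal = np.diag(G) * p
        interference = G @ p - signal
        return signal / (interference + noise)

    p = np.ones(3) * 0.1
    for _ in range(200):
        p = targets / sinr(p) * p          # scale each power towards its target

    print("powers:", p)
    print("SINR:  ", sinr(p))
    ```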

  4. Multiobjective design optimisation of coronary stents.

    Science.gov (United States)

    Pant, Sanjay; Limbert, Georges; Curzen, Nick P; Bressloff, Neil W

    2011-11-01

    representative CYPHER stent are shown. The methodology and the results of this work could potentially be useful in further optimisation studies and development of a family of stents with increased resistance to in-stent restenosis and thrombosis.

  5. Image quality and dose optimisation for infant CT using a paediatric phantom

    Energy Technology Data Exchange (ETDEWEB)

    Lambert, Jack W.; Phelps, Andrew S.; Courtier, Jesse L.; Gould, Robert G.; MacKenzie, John D. [University of California, San Francisco, Department of Radiology and Biomedical Imaging, San Francisco, CA (United States)

    2016-05-15

    To optimise image quality and reduce radiation exposure for infant body CT imaging. An image quality CT phantom was created to model the infant body habitus. Image noise, spatial resolution, low contrast detectability and tube current modulation (TCM) were measured after adjusting CT protocol parameters. Reconstruction method (FBP, hybrid iterative and model-based iterative), image quality reference parameter, helical pitch and beam collimation were systematically investigated for their influence on image quality and radiation output. Both spatial and low contrast resolution were significantly improved with model-based iterative reconstruction (p < 0.05). A change in the helical pitch from 0.969 to 1.375 resulted in a 23 % reduction in total TCM, while a change in collimation from 20 to 40 mm resulted in a 46 % TCM reduction. Image noise and radiation output were both unaffected by changes in collimation, while an increase in pitch enabled a dose length product reduction of ≈6 % at equivalent noise. An optimised protocol with ≈30 % dose reduction was identified using model-based iterative reconstruction. CT technology continues to evolve and require protocol redesign. This work provides an example of how an infant-specific phantom is essential for leveraging this technology to maintain image quality while reducing radiation exposure. (orig.)

  6. Optimisation of Ionic Models to Fit Tissue Action Potentials: Application to 3D Atrial Modelling

    Directory of Open Access Journals (Sweden)

    Amr Al Abed

    2013-01-01

    Full Text Available A 3D model of atrial electrical activity has been developed with spatially heterogeneous electrophysiological properties. The atrial geometry, reconstructed from the male Visible Human dataset, included gross anatomical features such as the central and peripheral sinoatrial node (SAN), intra-atrial connections, pulmonary veins, inferior and superior vena cava, and the coronary sinus. Membrane potentials of myocytes from spontaneously active or electrically paced in vitro rabbit cardiac tissue preparations were recorded using intracellular glass microelectrodes. Action potentials of central and peripheral SAN, right and left atrial, and pulmonary vein myocytes were each fitted using a generic ionic model having three phenomenological ionic current components: one time-dependent inward, one time-dependent outward, and one leakage current. To bridge the gap between the single-cell ionic models and the gross electrical behaviour of the 3D whole-atrial model, a simplified 2D tissue disc with heterogeneous regions was optimised to arrive at parameters for each cell type under electrotonic load. Parameters were then incorporated into the 3D atrial model, which as a result exhibited a spontaneously active SAN able to rhythmically excite the atria. The tissue-based optimisation of ionic models and the modelling process outlined are generic and applicable to image-based computer reconstruction and simulation of excitable tissue.

  7. Using modified fruit fly optimisation algorithm to perform the function test and case studies

    Science.gov (United States)

    Pan, Wen-Tsao

    2013-06-01

    Evolutionary computation is a computing paradigm established by simulating natural evolutionary processes based on Darwinian theory, and it is a common research method. The main contribution of this paper is to strengthen the ability of the fruit fly optimisation algorithm (FOA) to search for the optimal solution, in order to avoid becoming trapped in local extrema. Evolutionary computation has grown to include the concepts of animal foraging behaviour and group behaviour. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimisation algorithm (MFOA). It further investigated their ability to compute the extreme values of three mathematical functions, as well as the algorithm execution speed and the forecasting ability of the forecasting model built using the optimised general regression neural network (GRNN) parameters. The findings indicated that there was no obvious difference between particle swarm optimisation and the MFOA with regard to the ability to compute extreme values; however, both were better than the artificial fish swarm algorithm and the FOA. In addition, the MFOA performed better than particle swarm optimisation with regard to execution speed, and the forecasting ability of the forecasting model built using the MFOA's GRNN parameters was better than that of the other three forecasting models.
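
    For readers unfamiliar with the basic fruit fly optimisation algorithm that the MFOA modifies, a minimal sketch is given below; the objective function, swarm size and step length are arbitrary assumptions, and none of the modifications introduced by the MFOA are reproduced.

    ```python
    import math
    import random

    # Minimal sketch of the basic fruit fly optimisation algorithm (FOA).
    # The toy objective and all constants are assumptions for illustration.

    def objective(s):
        # Toy 1-D objective to minimise: (s - 3)^2.
        return (s - 3.0) ** 2

    def foa(iterations=200, flies=20, step=1.0):
        x_axis, y_axis = random.uniform(0, 10), random.uniform(0, 10)
        best_val, best_s = float("inf"), None
        for _ in range(iterations):
            candidates = []
            for _ in range(flies):
                x = x_axis + random.uniform(-step, step)   # random flight direction
                y = y_axis + random.uniform(-step, step)
                dist = math.hypot(x, y)
                s = 1.0 / dist                             # smell concentration judgment value
                candidates.append((objective(s), x, y, s))
            val, x, y, s = min(candidates)                 # fly with the best smell
            if val < best_val:
                best_val, best_s = val, s
                x_axis, y_axis = x, y                      # the swarm flies towards it
        return best_s, best_val

    print(foa())
    ```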

  8. Assessment and optimisation of the image quality of chest-radiography systems.

    Science.gov (United States)

    Redlich, U; Hoeschen, C; Doehring, W

    2005-01-01

    A complete evaluation strategy has been developed for thoracic X-ray imaging. It has been validated by investigating five chest-radiography systems, two of them after optimisation of the image processing. The systems were a screen-film combination, a selenium drum, a conventional and a transparent imaging plate, and a CsI-based flat-panel detector (the latter two were optimised using different post-processing). First, all detectors were characterised using physical parameters such as DQE and MTF. Then all systems were evaluated in human observer studies using anatomy in clinical images (VGA, ICS) and added pathological structures in thoracic phantom images (ROC). The ranking of the image quality of the systems was nearly the same in all studies. There was a similar assessment of the main image quality parameters such as spatial resolution, dynamic range and MTF. The modification of the image post-processing changed the visibility of pathological structures more than the visualisation of the anatomical criteria. The assessment of clinical image quality therefore has to be performed on anatomical structures, and the recognition of pathological structures has to be evaluated in addition.

  9. Adaptive impedance control of a hydraulic suspension system using particle swarm optimisation

    Science.gov (United States)

    Fateh, Mohammad Mehdi; Moradi Zirkohi, Majid

    2011-12-01

    This paper presents a novel active control approach for a hydraulic suspension system subject to road disturbances. A novel impedance model is used as a model reference in a particular robust adaptive control which is applied for the first time to the hydraulic suspension system. A scheme is introduced for selecting the impedance parameters. The impedance model prescribes a desired behaviour of the active suspension system in a wide range of different road conditions. Moreover, performance of the control system is improved by applying a particle swarm optimisation algorithm for optimising control design parameters. Design of the control system consists of two interior loops. The inner loop is a force control of the hydraulic actuator, while the outer loop is a robust model reference adaptive control (MRAC). This type of MRAC has been applied for uncertain linear systems. As another novelty, despite nonlinearity of the hydraulic actuator, the suspension system and the force loop together are presented as an uncertain linear system to the MRAC. The proposed control method is simulated on a quarter-car model. Simulation results show effectiveness of the method.

  10. Synthesis Optimisation of Lysozyme Monolayer-Coated Silver Nanoparticles in Aqueous Solution

    Directory of Open Access Journals (Sweden)

    A. V. Yakovlev

    2014-01-01

    Full Text Available This paper presents an optimisation of the synthesis of silver nanoparticles encapsulated in a biological shell. The synthesis was carried out in an aqueous solution of silver nitrate. Sodium borohydride was used as a reducing agent. Lysozyme served as a bioactive coating agent. The samples produced were studied using dynamic light scattering, transmission electron microscopy, and UV-Vis spectroscopy. The dependence of the optical properties of the obtained sols on the reagent ratio is shown. Furthermore, the influence of the synthesis temperature, reactant ratio, and order of mixing on the particle size distribution parameters is shown. The optimal reagent mass ratio, NaBH4 : LYZ : AgNO3 = 0.22 : 0.77 : 1, is established. The resulting composition allows the synthesis of particles with a mean diameter of 18 nm and a bioshell thickness of ≈3.5 nm. Moreover, the necessity of optimising the synthesis and of precise parameter control is clearly demonstrated.

  11. Modelling soil water retention using support vector machines with genetic algorithm optimisation.

    Science.gov (United States)

    Lamorski, Krzysztof; Sławiński, Cezary; Moreno, Felix; Barna, Gyöngyi; Skierucha, Wojciech; Arrue, José L

    2014-01-01

    This work presents point pedotransfer function (PTF) models of the soil water retention curve. The developed models allow estimation of the soil water content for the specified soil water potentials of -0.98, -3.10, -9.81, -31.02, -491.66, and -1554.78 kPa, based on the following soil characteristics: soil granulometric composition, total porosity, and bulk density. Support Vector Machines (SVM) methodology was used for model development. A new methodology for the elaboration of retention function models is proposed. In contrast to previous attempts known from the literature, the ν-SVM method was used for model development, and the results were compared with the formerly used C-SVM method. Genetic algorithms were used as the optimisation framework for the search of the models' parameters. A new form of the aim function used for the parameter search is proposed, which allowed the development of models with better prediction capabilities. This new aim function avoids the overestimation of models that is typically encountered when the root mean squared error is used as an aim function. The elaborated models showed good agreement with measured soil water retention data, with coefficients of determination in the range 0.67-0.92. The study demonstrated the usability of the ν-SVM methodology together with genetic algorithm optimisation for retention modelling, which gave better-performing models than the other tested approaches.
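
    A minimal sketch of the general recipe (ν-SVM regression with a genetic search over its hyper-parameters) is shown below. It uses synthetic data, a plain cross-validated R² as the aim function and scikit-learn's NuSVR; the study's measured soil data and its custom aim function are not reproduced.

    ```python
    import numpy as np
    from sklearn.svm import NuSVR
    from sklearn.model_selection import cross_val_score

    # Sketch of a nu-SVM regression model with a small GA over (nu, C, gamma).
    # The synthetic data, GA settings and cross-validated R^2 aim function are
    # assumptions, not the study's data or custom aim function.

    rng = np.random.default_rng(0)
    X = rng.random((120, 4))                 # stand-ins for texture fractions, porosity, bulk density
    y = 0.45 - 0.3 * X[:, 0] + 0.2 * X[:, 2] + 0.02 * rng.standard_normal(120)

    def aim(params):
        nu, C, gamma = params
        model = NuSVR(nu=nu, C=C, gamma=gamma)
        return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

    def random_params():
        return (rng.uniform(0.05, 0.95), 10 ** rng.uniform(-1, 2), 10 ** rng.uniform(-2, 1))

    def mutate(p):
        nu, C, gamma = p
        return (float(np.clip(nu + rng.normal(0, 0.1), 0.05, 0.95)),
                C * 10 ** rng.normal(0, 0.2),
                gamma * 10 ** rng.normal(0, 0.2))

    population = [random_params() for _ in range(12)]
    for _ in range(10):                      # a few GA generations
        scored = sorted(population, key=aim, reverse=True)
        parents = scored[:4]                 # truncation selection
        population = parents + [mutate(parents[i % 4]) for i in range(8)]

    best = max(population, key=aim)
    print("best (nu, C, gamma):", best, "cross-validated R2:", aim(best))
    ```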

  12. Modelling Soil Water Retention Using Support Vector Machines with Genetic Algorithm Optimisation

    Directory of Open Access Journals (Sweden)

    Krzysztof Lamorski

    2014-01-01

    Full Text Available This work presents point pedotransfer function (PTF) models of the soil water retention curve. The developed models allow estimation of the soil water content for the specified soil water potentials of –0.98, –3.10, –9.81, –31.02, –491.66, and –1554.78 kPa, based on the following soil characteristics: soil granulometric composition, total porosity, and bulk density. Support Vector Machines (SVM) methodology was used for model development. A new methodology for the elaboration of retention function models is proposed. In contrast to previous attempts known from the literature, the ν-SVM method was used for model development, and the results were compared with the formerly used C-SVM method. Genetic algorithms were used as the optimisation framework for the search of the models’ parameters. A new form of the aim function used for the parameter search is proposed, which allowed the development of models with better prediction capabilities. This new aim function avoids the overestimation of models that is typically encountered when the root mean squared error is used as an aim function. The elaborated models showed good agreement with measured soil water retention data, with coefficients of determination in the range 0.67–0.92. The study demonstrated the usability of the ν-SVM methodology together with genetic algorithm optimisation for retention modelling, which gave better-performing models than the other tested approaches.

  13. Minimisation of the sound power radiated by a submarine through optimisation of its resonance changer

    Science.gov (United States)

    Merz, Sascha; Kessissoglou, Nicole; Kinns, Roger; Marburg, Steffen

    2010-04-01

    An important cause of sound radiation from a submarine in the low frequency range is fluctuating forces at the propeller. The forces are transmitted to the hull via the shaft and the fluid. Sound radiation occurs due to hull and propeller vibrations as well as dipole sound radiation caused by the operation of the propeller in a non-uniform wake. In order to minimise sound radiation caused by propeller forces, a hydraulic vibration attenuation device known as a resonance changer can be implemented in the propeller/shafting system. In this work, cost functions that represent the overall radiated sound power are investigated, where the virtual stiffness, damping and mass of the resonance changer were chosen as design parameters. The minima of the cost functions are found by applying gradient based optimisation techniques. The finite element and boundary element methods are used to model the structure and the fluid, respectively. The adjoint operator is employed to calculate the sensitivity of the cost function to the design parameters. The influence of sound radiation due to propeller vibration on the optimisation of the resonance changer as well as the influence of the reduction in amplitude for higher harmonics of the blade-passing frequency on the control performance is investigated.

  14. Topology Optimisation of Wideband Coaxial-to-Waveguide Transitions

    Science.gov (United States)

    Hassan, Emadeldeen; Noreland, Daniel; Wadbro, Eddie; Berggren, Martin

    2017-03-01

    To maximize the matching between a coaxial cable and rectangular waveguides, we present a computational topology optimisation approach that decides, for each point in a given domain, whether it should hold a good conductor or a good dielectric. The conductivity is determined by a gradient-based optimisation method that relies on finite-difference time-domain solutions to the 3D Maxwell’s equations. Unlike previously reported results in the literature for this kind of problem, our design algorithm can efficiently handle tens of thousands of design variables, which allows novel conceptual waveguide designs. We demonstrate the effectiveness of the approach by presenting optimised transitions with reflection coefficients lower than -15 dB over more than a 60% bandwidth, both for right-angle and end-launcher configurations. The performance of the proposed transitions is cross-verified with a commercial software package, and one design case is validated experimentally.

  15. Quasi-combinatorial energy landscapes for nanoalloy structure optimisation.

    Science.gov (United States)

    Schebarchov, D; Wales, D J

    2015-11-14

    We formulate nanoalloy structure prediction as a mixed-variable optimisation problem, where the homotops can be associated with an effective, quasi-combinatorial energy landscape in permutation space. We survey this effective landscape for a representative set of binary systems modelled by the Gupta potential. In segregating systems with small lattice mismatch, we find that homotops have a relatively straightforward landscape with few local optima - a scenario well-suited for local (combinatorial) optimisation techniques that scale quadratically with system size. Combining these techniques with multiple local-neighbourhood structures yields a search for multiminima, and we demonstrate that generalised basin-hopping with a metropolis acceptance criterion in the space of multiminima can then be effective for global optimisation of binary and ternary nanoalloys.
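
    A minimal sketch of a Metropolis search in homotop (permutation) space is given below: atom positions stay fixed and only the chemical ordering changes through pair swaps. The toy pair energies and random geometry are assumptions; the Gupta potential, the local combinatorial optimisation and the generalised basin-hopping of the study are not reproduced.

    ```python
    import math
    import random

    # Sketch of a Metropolis swap search over homotops of a binary cluster.
    # Positions are fixed; only the species labels (A/B) are permuted.

    random.seed(1)
    N = 20
    positions = [(random.random(), random.random(), random.random()) for _ in range(N)]
    species = ["A"] * 10 + ["B"] * 10          # fixed composition
    random.shuffle(species)

    PAIR = {("A", "A"): -1.0, ("B", "B"): -1.0, ("A", "B"): -1.3, ("B", "A"): -1.3}

    def energy(species):
        e = 0.0
        for i in range(N):
            for j in range(i + 1, N):
                dx, dy, dz = (positions[i][k] - positions[j][k] for k in range(3))
                r = math.sqrt(dx * dx + dy * dy + dz * dz)
                if r < 0.5:                     # only near neighbours interact (toy cutoff)
                    e += PAIR[(species[i], species[j])]
        return e

    def metropolis(species, steps=5000, T=0.2):
        e = energy(species)
        best, best_e = species[:], e
        for _ in range(steps):
            i, j = random.sample(range(N), 2)
            if species[i] == species[j]:
                continue
            trial = species[:]
            trial[i], trial[j] = trial[j], trial[i]   # swap two unlike atoms
            e_trial = energy(trial)
            if e_trial < e or random.random() < math.exp(-(e_trial - e) / T):
                species, e = trial, e_trial
                if e < best_e:
                    best, best_e = species[:], e
        return best, best_e

    print("lowest homotop energy found:", metropolis(species)[1])
    ```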

  16. Microfluidic converging/diverging channels optimised for homogeneous extensional deformation

    Science.gov (United States)

    Zografos, K.; Oliveira, M. S. N.

    2016-01-01

    In this work, we optimise microfluidic converging/diverging geometries in order to produce constant strain-rates along the centreline of the flow, for performing studies under homogeneous extension. The design is examined for both two-dimensional and three-dimensional flows where the effects of aspect ratio and dimensionless contraction length are investigated. Initially, pressure driven flows of Newtonian fluids under creeping flow conditions are considered, which is a reasonable approximation in microfluidics, and the limits of the applicability of the design in terms of Reynolds numbers are investigated. The optimised geometry is then used for studying the flow of viscoelastic fluids and the practical limitations in terms of Weissenberg number are reported. Furthermore, the optimisation strategy is also applied for electro-osmotic driven flows, where the development of a plug-like velocity profile allows for a wider region of homogeneous extensional deformation in the flow field. PMID:27478523

  17. Analysis and optimisation of heterogeneous real-time embedded systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2005-01-01

    An increasing number of real-time applications are today implemented using distributed heterogeneous architectures composed of interconnected networks of processors. The systems are heterogeneous, not only in terms of hardware components, but also in terms of communication protocols and scheduling ... The success of such new design methods depends on the availability of analysis and optimisation techniques. Analysis and optimisation techniques for heterogeneous real-time embedded systems are presented in the paper. The authors address in more detail a particular class of such systems called multi-clusters, composed of several networks interconnected via gateways. They present a schedulability analysis for safety-critical applications distributed on multi-cluster systems and briefly highlight characteristic design optimisation problems: the partitioning and mapping of functionality, and the packing ...

  18. Analysis and optimisation of heterogeneous real-time embedded systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2006-01-01

    An increasing number of real-time applications are today implemented using distributed heterogeneous architectures composed of interconnected networks of processors. The systems are heterogeneous, not only in terms of hardware components, but also in terms of communication protocols and scheduling ... The success of such new design methods depends on the availability of analysis and optimisation techniques. Analysis and optimisation techniques for heterogeneous real-time embedded systems are presented in the paper. The authors address in more detail a particular class of such systems called multi-clusters, composed of several networks interconnected via gateways. They present a schedulability analysis for safety-critical applications distributed on multi-cluster systems and briefly highlight characteristic design optimisation problems: the partitioning and mapping of functionality, and the packing ...

  19. Optimisation of an exemplar oculomotor model using multi-objective genetic algorithms executed on a GPU-CPU combination.

    Science.gov (United States)

    Avramidis, Eleftherios; Akman, Ozgur E

    2017-03-24

    Parameter optimisation is a critical step in the construction of computational biology models. In eye movement research, computational models are increasingly important to understanding the mechanistic basis of normal and abnormal behaviour. In this study, we considered an existing neurobiological model of fast eye movements (saccades), capable of generating realistic simulations of: (i) normal horizontal saccades; and (ii) infantile nystagmus - pathological ocular oscillations that can be subdivided into different waveform classes. By developing appropriate fitness functions, we optimised the model to existing experimental saccade and nystagmus data, using a well-established multi-objective genetic algorithm. This algorithm required the model to be numerically integrated for very large numbers of parameter combinations. To address this computational bottleneck, we implemented a master-slave parallelisation, in which the model integrations were distributed across the compute units of a GPU, under the control of a CPU. While previous nystagmus fitting has been based on reproducing qualitative waveform characteristics, our optimisation protocol enabled us to perform the first direct fits of a model to experimental recordings. The fits to normal eye movements showed that although saccades of different amplitudes can be accurately simulated by individual parameter sets, a single set capable of fitting all amplitudes simultaneously cannot be determined. The fits to nystagmus oscillations systematically identified the parameter regimes in which the model can reproduce a number of canonical nystagmus waveforms to a high accuracy, whilst also identifying some waveforms that the model cannot simulate. Using a GPU to perform the model integrations yielded a speedup of around 20 compared to a high-end CPU. The results of both optimisation problems enabled us to quantify the predictive capacity of the model, suggesting specific modifications that could expand its repertoire of
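
    The master-slave layout described above can be sketched independently of the oculomotor model itself. In the toy example below a CPU process pool stands in for the GPU, and a trivial placeholder function stands in for the numerical integration of the model; everything else (population size, mutation scheme) is an arbitrary assumption.

    ```python
    from multiprocessing import Pool
    import random

    # Sketch of master-slave fitness evaluation: the master proposes parameter
    # sets and the slaves evaluate them in parallel. A process pool stands in
    # for the GPU used in the study; the "model" is a trivial placeholder.

    def integrate_model(params):
        # Placeholder for a numerical integration of the model; returns a
        # made-up fitness (distance of the parameters from an arbitrary target).
        return sum((p - 0.5) ** 2 for p in params)

    def master(generations=20, pop_size=32, n_params=6):
        population = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
        with Pool() as slaves:
            for _ in range(generations):
                fitnesses = slaves.map(integrate_model, population)   # parallel evaluation
                ranked = [p for _, p in sorted(zip(fitnesses, population))]
                parents = ranked[: pop_size // 4]
                population = parents + [
                    [x + random.gauss(0, 0.05) for x in random.choice(parents)]
                    for _ in range(pop_size - len(parents))
                ]
        return min(population, key=integrate_model)

    if __name__ == "__main__":    # guard required for multiprocessing on some platforms
        print(master())
    ```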

  20. Alternatives for optimisation of rumen fermentation in ruminants

    Directory of Open Access Journals (Sweden)

    T. Slavov

    2017-06-01

    Full Text Available Abstract. Proper knowledge of the variety of events occurring in the rumen makes possible their optimisation with respect to complete feed conversion and increasing the productive performance of ruminants. The inclusion of various dietary additives (supplements, biologically active substances, nutritional antibiotics, probiotics, enzymatic preparations, plant extracts, etc.) has an effect on the intensity and specific pathway of fermentation, and thus on overall digestion and systemic metabolism. The optimisation of rumen digestion is a method with substantial potential for improving the efficiency of ruminant husbandry, increasing the quality of their produce and maintaining health.

  1. Separative power of an optimised concurrent gas centrifuge

    Energy Technology Data Exchange (ETDEWEB)

    Bogovalov, Sergey; Boman, Vladimir [National Research Nuclear University (MEPHI), Moscow (Russian Federation)

    2016-06-15

    The problem of separation of isotopes in a concurrent gas centrifuge is solved analytically for an arbitrary binary mixture of isotopes. The separative power of the optimised concurrent gas centrifuge for the uranium isotopes equals δU = 12.7 (V/700 m/s)²(300 K/T)(L/1 m) kg·SWU/yr, where L and V are the length and linear velocity of the rotor of the gas centrifuge and T is the temperature. This equation agrees well with the empirically determined separative power of optimised counter-current gas centrifuges.
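
    The quoted expression can be transcribed directly into a small helper; the example operating point is arbitrary.

    ```python
    # Direct transcription of the separative-power expression quoted in the
    # record: dU = 12.7 * (V / 700)^2 * (300 / T) * (L / 1) kg SWU/yr,
    # with V in m/s, T in K and L in m. The example numbers are arbitrary.

    def separative_power(V, T, L):
        return 12.7 * (V / 700.0) ** 2 * (300.0 / T) * (L / 1.0)

    print(separative_power(V=600.0, T=310.0, L=0.8))   # ~7.2 kg SWU/yr
    ```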

  2. Separative Power of an Optimised Concurrent Gas Centrifuge

    Directory of Open Access Journals (Sweden)

    Sergey Bogovalov

    2016-06-01

    Full Text Available The problem of separation of isotopes in a concurrent gas centrifuge is solved analytically for an arbitrary binary mixture of isotopes. The separative power of the optimised concurrent gas centrifuge for the uranium isotopes equals δU = 12.7 (V/700 m/s)²(300 K/T)(L/1 m) kg·SWU/yr, where L and V are the length and linear velocity of the rotor of the gas centrifuge and T is the temperature. This equation agrees well with the empirically determined separative power of optimised counter-current gas centrifuges.

  3. Multiscale Analysis and Optimisation of Photosynthetic Solar Energy Systems

    CERN Document Server

    Ringsmuth, Andrew K

    2014-01-01

    This work asks how light harvesting in photosynthetic systems can be optimised for economically scalable, sustainable energy production. Hierarchy theory is introduced as a system-analysis and optimisation tool better able to handle multiscale, multiprocess complexities in photosynthetic energetics compared with standard linear-process analysis. Within this framework, new insights are given into relationships between composition, structure and energetics at the scale of the thylakoid membrane, and also into how components at different scales cooperate under functional objectives of the whole photosynthetic system. Combining these reductionistic and holistic analyses creates a platform for modelling multiscale-optimal, idealised photosynthetic systems in silico.

  4. Optimisation of BPMN Business Models via Model Checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2013-01-01

    We present a framework for the optimisation of business processes modelled in the business process modelling language BPMN, which builds upon earlier work, where we developed a model checking based method for the analysis of BPMN models. We define a structure for expressing optimisation goals...... candidate improved processes based on the fittest of the previous generation. The evaluation of the fitness of each candidate in a generation is performed via model checking, detailed in previous work. At each iteration, this allows the determination of the precise numerical evaluation of the performance...

  5. Plant-wide performance optimisation – The refrigeration system case

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh; Green, Torben; Razavi-Far, Roozbeh

    2012-01-01

    This paper investigates the problem of plant-wide performance optimisation seen from an industrial perspective. The refrigeration system is used as a case study, because it has a distributed control architecture and operates in steady state conditions, which is common for many industrial applications ...

  6. Optimising a shaft's geometry by applying genetic algorithms

    Directory of Open Access Journals (Sweden)

    María Alejandra Guzmán

    2010-04-01

    Full Text Available Many engineering design tasks involve optimising several conflicting goals; these types of problem are known as Multiobjective Optimisation Problems (MOPs). Evolutionary techniques have proved to be an effective tool for finding solutions to these MOPs during the last decade, and variations on the basic genetic algorithm have in particular been proposed by different researchers for finding rapid optimal solutions to MOPs. The NSGA (Non-dominated Sorting Genetic Algorithm) has been implemented in this paper for finding an optimal design for a shaft subjected to cyclic loads, the conflicting goals being minimum weight and minimum lateral deflection.
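
    The conflicting goals can be illustrated with a toy model in which a hollow shaft's weight grows with its cross-sectional area while its lateral deflection under a fixed load falls with (d_o⁴ − d_i⁴); the constants below are invented, and a plain non-dominated filter stands in for the NSGA used in the paper.

    ```python
    import math

    # Sketch of the two conflicting objectives only: weight vs. deflection of a
    # hollow steel shaft. Load/length constants are arbitrary; a simple
    # non-dominated filter replaces the NSGA used in the paper.

    def objectives(do, di):
        weight = 7850.0 * math.pi * (do ** 2 - di ** 2) / 4.0   # kg per metre of steel
        deflection = 1.0e-6 / (do ** 4 - di ** 4)               # arbitrary load/length factor
        return weight, deflection

    designs = [(do / 1000.0, di / 1000.0)
               for do in range(30, 101, 10)
               for di in range(0, 81, 10) if di <= do - 10]

    evaluated = [(d, objectives(*d)) for d in designs]

    pareto = [(d, (w, f)) for d, (w, f) in evaluated
              if not any(w2 <= w and f2 <= f and (w2, f2) != (w, f)
                         for _, (w2, f2) in evaluated)]

    for (do, di), (w, f) in pareto:
        print(f"do={do:.3f} m di={di:.3f} m  weight={w:6.1f} kg/m  deflection={f:.2e}")
    ```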

  7. An Improved Lower Bound Limit State Optimisation Algorithm

    DEFF Research Database (Denmark)

    Frier, Christian; Damkilde, Lars

    2010-01-01

    Limit State analysis has been used in engineering practice for many years e.g. the yield-line method for concrete slabs and slip-line solutions in geotechnics. In recent years there has been an increased interest in numerical Limit State analysis, and today algorithms take into account the non-linear yield criteria. The aim of the paper is to refine an earlier presented effective method which reduces the number of optimisation variables considerably by eliminating the equilibrium equations a priori, and improvements are made on the interior point optimisation algorithm....

  8. Robust eigenstructure clustering by non-smooth optimisation

    Science.gov (United States)

    Dao, Minh Ngoc; Noll, Dominikus; Apkarian, Pierre

    2015-08-01

    We extend classical eigenstructure assignment to more realistic problems, where additional performance and robustness specifications arise. Our aim is to combine time-domain constraints, as reflected by pole location and eigenvector structure, with frequency-domain objectives such as the H2, H∞ or Hankel norms. Using pole clustering, we allow poles to move in polydisks of prescribed size around their nominal values, driven by optimisation. Eigenelements, that is poles and eigenvectors, are allowed to move simultaneously and serve as decision variables in a specialised non-smooth optimisation technique. Two aerospace applications illustrate the power of the new method.

  9. Optimisation of Oil Production in Two-Phase Flow Reservoir Using Simultaneous Method and Interior Point Optimiser

    DEFF Research Database (Denmark)

    2012-01-01

    in the reservoir. A promising decrease of these remaining resources can be provided by smart wells applying water injections to sustain a satisfactory pressure level in the reservoir throughout the whole process of oil production. Basically, to enhance secondary recovery of the remaining oil after drilling, water...... fields, or closed loop optimisation, can be used for optimising the reservoir performance in terms of the net present value of oil recovery or another economic objective. In order to solve an optimal control problem we use a direct collocation method where we translate a continuous problem into a discrete... for large scale nonlinear optimisation was applied. Because of its versatile compatibility with programming technologies, the C++ programming language in the Microsoft Visual Studio integrated development environment was used for modelling the optimal control problem. Thanks to object oriented features

  10. Solving dynamic multi-objective problems with vector evaluated particle swarm optimisation

    CSIR Research Space (South Africa)

    Greeff, M

    2008-06-01

    Full Text Available Many optimisation problems are multi-objective and change dynamically. Many methods use a weighted average approach to the multiple objectives. This paper introduces the usage of the vector evaluated particle swarm optimiser (VEPSO) to solve dynamic...

  11. Agent-Based Decision Control—How to Appreciate Multivariate Optimisation in Architecture

    DEFF Research Database (Denmark)

    Negendahl, Kristoffer; Perkov, Thomas Holmer; Kolarik, Jakub

    2015-01-01

    ... in the early design stage. The main focus is to demonstrate the optimisation method, which is done in two ways. Firstly, the newly developed agent-based optimisation algorithm named Moth is tested on three different single-objective search spaces. Here Moth is compared to two evolutionary algorithms. Secondly, the method is applied to a multivariate optimisation problem. The aim is specifically to demonstrate optimisation for entire building energy consumption, daylight distribution and capital cost. Based on the demonstrations, Moth’s ability to find local minima is discussed. It is concluded that agent-based optimisation algorithms like Moth open up new uses of optimisation in the early design stage. With Moth the final outcome is less dependent on pre- and post-processing, and Moth allows user intervention during optimisation. Therefore, agent-based models for optimisation such as Moth can be a powerful ...

  12. Production of biosolid fuels from municipal sewage sludge: Technical and economic optimisation.

    Science.gov (United States)

    Wzorek, Małgorzata; Tańczuk, Mariusz

    2015-08-01

    The article presents the technical and economic analysis of the production of fuels from municipal sewage sludge. The analysis involved the production of two types of fuel composition: sewage sludge with sawdust (PBT fuel) and sewage sludge with meat and bone meal (PBM fuel). The technology of the production line for these sewage fuels was proposed and analysed. The main objective of the study is to find the optimal production capacity. The optimisation analysis was performed for the adopted technical and economic parameters under Polish conditions. The objective function was set as the maximum of the net present value index, and the optimisation procedure was carried out for fuel production line input capacities from 0.5 to 3 t h(-1), using a search step of 0.5 t h(-1). On the basis of the technical and economic assumptions, economic efficiency indexes of the investment were determined for the case of optimal line productivity. The results of the optimisation analysis show that under appropriate conditions, such as the prices of components and the prices of the produced fuels, the production of fuels from sewage sludge can be profitable. In the case of PBT fuel, the calculated economic indexes show the best profitability for plant capacities above 1.5 t h(-1), while the production of PBM fuel is most beneficial at the maximum capacity searched, 3.0 t h(-1). Sensitivity analyses carried out during the investigation show that the influence of both the technical and economic assumptions on the location of the maximum of the objective function (net present value) is significant.
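
    The capacity search itself is easy to sketch: scan capacities from 0.5 to 3.0 t/h in 0.5 t/h steps and keep the one with the highest net present value. All cost and revenue figures below are invented placeholders rather than the study's data.

    ```python
    # Sketch of the capacity search: evaluate NPV on the 0.5-3.0 t/h grid with
    # a 0.5 t/h step. Every economic figure here is an invented placeholder.

    DISCOUNT = 0.08
    LIFETIME = 15            # years
    HOURS = 7000             # operating hours per year

    def npv(capacity_tph):
        capex = 1.2e6 * capacity_tph ** 0.7                   # economies of scale (assumed)
        annual_margin = (28.0 - 9.0) * capacity_tph * HOURS   # (price - cost) per tonne (assumed)
        return -capex + sum(annual_margin / (1 + DISCOUNT) ** (t + 1) for t in range(LIFETIME))

    capacities = [0.5 * i for i in range(1, 7)]               # 0.5 ... 3.0 t/h
    for c in capacities:
        print(f"{c:.1f} t/h -> NPV {npv(c):,.0f}")
    print("optimal capacity:", max(capacities, key=npv), "t/h")
    ```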

  13. Optimisation on pretreatment of rubber seed (Hevea brasiliensis) oil via esterification reaction in a hydrodynamic cavitation reactor.

    Science.gov (United States)

    Bokhari, Awais; Chuah, Lai Fatt; Yusup, Suzana; Klemeš, Jiří Jaromír; Kamil, Ruzaimah Nik M

    2016-01-01

    Pretreatment of high free fatty acid rubber seed oil (RSO) via an esterification reaction has been investigated using a pilot-scale hydrodynamic cavitation (HC) reactor. Four newly designed orifice plate geometries are studied. Cavities are induced by an assisted double-diaphragm pump in the range of 1-3.5 bar inlet pressure. An optimised plate with 21 holes of 1 mm diameter and an inlet pressure of 3 bar reduced the RSO acid value from 72.36 to 2.64 mg KOH/g within 30 min of reaction time. The reaction parameters have been optimised using response surface methodology and found to be a methanol-to-oil ratio of 6:1, a catalyst concentration of 8 wt%, a reaction time of 30 min and a reaction temperature of 55°C. The reaction time of HC was three-fold shorter, and its esterification efficiency four-fold higher, than that of mechanical stirring. This makes the HC process more environmentally friendly.

  14. HEPTopTagger optimisation studies in the context of a t anti t fully-hadronic resonance search

    Energy Technology Data Exchange (ETDEWEB)

    Sosa, David; Anders, Christoph; Kasieczka, Gregor; Schoening, Andre; Schaetzel, Sebastian [Physikalischens Institut, Heidelberg (Germany)

    2013-07-01

    The HEPTopTagger algorithm identifies boosted, hadronically decaying top quarks. It has already been validated using 2011 data taken with the ATLAS detector. The performance of the HEPTopTagger can be optimised by tuning internal parameters of the algorithm to improve the signal efficiency and the background rejection. Using the HEPTopTagger, a fully-hadronic resonance search has been conducted with the ATLAS detector on 2011 data. In order to improve the mass reach of the search, the full 2012 data set can be used. The HEPTopTagger is tested and re-optimised as the running conditions have changed. This optimisation of the HEPTopTagger in the context of a fully-hadronic resonance search is presented.

  15. Non-linear Total Energy Optimisation of a Fleet of Power Plants

    Science.gov (United States)

    Nolle, Lars; Biegler-König, Friedrich; Deeskow, Peter

    In order to optimise the energy production in a fleet of power plants, it is necessary to solve a mixed-integer optimisation problem. Traditionally, the continuous parts of the problem are linearized and a Simplex scheme is applied. Alternatively, heuristic "bionic" optimisation methods can be used without having to linearize the problem. We are going to demonstrate this approach by modelling power plant blocks with fast Neural Networks and optimising the operation of multi-block power plants over one day with Simulated Annealing.
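
    A minimal sketch of the approach on a toy dispatch problem is given below: simulated annealing chooses which blocks are on (the integer part) and how much each produces (the continuous part) so that a demand is met at minimum cost. The cost curves, demand and cooling schedule are invented, and the neural-network block models of the paper are not reproduced.

    ```python
    import math
    import random

    # Sketch of simulated annealing for a toy mixed-integer dispatch problem.
    # All numbers (cost curves, demand, cooling schedule) are assumptions.

    random.seed(0)
    BLOCKS = [                     # (min MW, max MW, a, b) with cost = a*P + b*P^2 when on
        (50, 200, 20.0, 0.020),
        (40, 150, 25.0, 0.015),
        (30, 120, 18.0, 0.030),
    ]
    DEMAND = 300.0

    def cost(state):
        on, power = state
        total = sum(p for o, p in zip(on, power) if o)
        penalty = 1e3 * abs(total - DEMAND)            # soft demand constraint
        running = sum(a * p + b * p * p
                      for o, p, (_, _, a, b) in zip(on, power, BLOCKS) if o)
        return running + penalty

    def neighbour(state):
        on, power = list(state[0]), list(state[1])
        i = random.randrange(len(BLOCKS))
        if random.random() < 0.2:
            on[i] = not on[i]                          # integer move: toggle a block
        lo, hi = BLOCKS[i][0], BLOCKS[i][1]
        power[i] = min(hi, max(lo, power[i] + random.gauss(0, 10)))   # continuous move
        return on, power

    state = ([True] * 3, [100.0, 100.0, 100.0])
    best, best_c = state, cost(state)
    T = 100.0
    for step in range(20000):
        cand = neighbour(state)
        d = cost(cand) - cost(state)
        if d < 0 or random.random() < math.exp(-d / T):
            state = cand
        if cost(state) < best_c:
            best, best_c = state, cost(state)
        T *= 0.9997                                    # geometric cooling

    print("on/off:", best[0], "dispatch:", [round(p, 1) for p in best[1]], "cost:", round(best_c, 1))
    ```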

  16. Cost Optimisation of an Instrument Suite at an Accelerator-Driven Spallation Source

    CERN Document Server

    Bentley, P M

    2016-01-01

    This article presents an optimisation of the performance and cost of neutron scattering instrumentation at the European Spallation Source. This is done by trading detailed cost functions against beam transmission functions in a multi-dimensional, yet simple, parameter space. On the one hand, the neutron guide cost increases as a power of the desired beam divergence, and inversely with the minimum wavelength, due to the supermirror coating needed. On the other hand, the more neutrons are transported to the instrument, the greater are the shielding costs to deal with the gamma rays that result from the eventual absorption of the neutrons. There are additional factors in that many of the parameters defining the neutron guide geometry are continuous variables, and the straightness of the guide increases the transmission of high-energy spallation products, which affect the specifications of particularly heavy hardware, such as heavy shutters and additional shielding, beam stops etc. Over the suite of 16 instruments, a ...

  17. Optimisation of optical properties of a long-wavelength GaInNAs quantum-well laser diode

    Energy Technology Data Exchange (ETDEWEB)

    Alias, M S; Maskuriy, F; Faiz, F; Mitani, S M [Advanced Physical Technologies Laboratory, Telekom Malaysia Research and Development (TMR and D), Lingkaran Teknokrat Timur, 63000 Cyberjaya, Selangor (Malaysia); AL-Omari, A N [Electronic Engineering Department, Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid 21163 (Jordan)

    2013-11-30

    We report an optimisation of the optical properties of a strained GaInNAs/GaAs quantum-well laser, taking into account many-body effect theory and the bowing parameter. The theoretical transition energies and the GaInNAs bowing parameter are fitted to the photoluminescence spectrum of the GaInNAs quantum well obtained in the experiment. The theoretical results for the photoluminescence spectrum and the laser characteristics (light, current and voltage) exhibit a high degree of agreement with the experimental results. (lasers)

  18. Optimisation of a novel trailing edge concept for a high lift device

    CSIR Research Space (South Africa)

    Botha, JDM

    2014-09-01

    Full Text Available A novel concept (referred to as the flap extension) is implemented on the leading edge of the flap of a three element high lift device. The concept is optimised using two optimisation approaches based on Genetic Algorithm optimisations. A zero order...

  19. Modelling and genetic algorithm based optimisation of inverse supply chain

    Science.gov (United States)

    Bányai, T.

    2009-04-01

    possible solution method. The problem cannot be solved with the aid of analytical methods, so a genetic algorithm based heuristic optimisation method was chosen to find the optimal solution. The input parameters of the optimisation are the following: the specific fixed, unit and environmental risk costs of the collection points of the inverse supply chain, the specific warehousing and transportation costs, and the environmental risk costs of transportation. The output parameters are the following: the number of objects in the different hierarchical levels of the collection system, the infrastructure costs, the logistics costs and the environmental risk costs arising from the used infrastructure, transportation and the number of products recycled out of time. The next step of the research work was the application of the above-mentioned method. The developed application makes it possible to define the input parameters of the real system, to view graphically the chosen optimal solution for the given input parameters and the cost structure of that solution, and to set the parameters of the algorithm (e.g. number of individuals, operators and termination conditions). The sensitivity analysis of the objective function and the test results showed that the structure of the inverse supply chain depends on the proportion of the specific costs. In particular, the proportion of the specific environmental risk costs influences the structure of the system and the number of objects at each hierarchical level of the collection system. The sensitivity analysis of the total cost function was performed in three cases. In the first case the effect of the proportion of specific infrastructure and logistics costs was analysed. If the infrastructure costs are significantly lower than the total costs of warehousing and transportation, then almost all objects of the first hierarchical level of the collection system (collection directly from the users) are set up. In the other case of the proportion of

  20. Exosomes: Mechanisms of Uptake

    Directory of Open Access Journals (Sweden)

    Kelly J. McKelvey

    2015-07-01

    Full Text Available Exosomes are 30–100 nm microvesicles which contain complex cellular signals of RNA, protein and lipids. Because of this, exosomes are implicated as having limitless therapeutic potential for the treatment of cancer, pregnancy complications, infections, and autoimmune diseases. To date we know a considerable amount about exosome biogenesis and secretion, but there is a paucity of data regarding the uptake of exosomes by immune and non-immune cell types (e.g., cancer cells) and the internal signalling pathways by which these exosomes elicit a cellular response. Answering these questions is of paramount importance.

  1. Exosomes: Mechanisms of Uptake

    Directory of Open Access Journals (Sweden)

    Kelly J. McKelvey

    2015-07-01

    Full Text Available Exosomes are 30–100 nm microvesicles which contain complex cellular signals of RNA, protein and lipids. Because of this, exosomes are implicated as having limitless therapeutic potential for the treatment of cancer, pregnancy complications, infections, and autoimmune diseases. To date we know a considerable amount about exosome biogenesis and secretion, but there is a paucity of data regarding the uptake of exosomes by immune and non-immune cell types (e.g., cancer cells) and the internal signalling pathways by which these exosomes elicit a cellular response. Answering these questions is of paramount importance.

  2. Mobile app and app store analysis, testing and optimisation

    OpenAIRE

    Harman, M.; Al-Subaihin, A.; Jia, Y.; Martin, W.; Sarro, F.; Zhang, Y.

    2016-01-01

    This talk presents results on analysis and testing of mobile apps and app stores, reviewing the work of the UCL App Analysis Group (UCLappA) on App Store Mining and Analysis. The talk also covers the work of the UCL CREST centre on Genetic Improvement, applicable to app improvement and optimisation.

  3. Plant-wide performance optimisation – The refrigeration system case

    DEFF Research Database (Denmark)

    Green, Torben; Razavi-Far, Roozbeh; Izadi-Zamanabadi, Roozbeh;

    2012-01-01

    This paper investigates the problem of plant-wide performance optimisation seen from an industrial perspective. The refrigeration system is used as a case study, because it has a distributed control architecture and operates in steady state conditions, which is common for many industrial applications ...

  4. Evaluation of a high throughput starch analysis optimised for wood.

    Directory of Open Access Journals (Sweden)

    Chandra Bellasio

    Full Text Available Starch is the most important long-term reserve in trees, and the analysis of starch is therefore a useful source of physiological information. Currently published protocols for wood starch analysis impose several limitations, such as long procedures and a neutralization step. The high-throughput standard protocols for starch analysis in food and feed represent a valuable alternative. However, they have not been optimised or tested with woody samples. These have particular chemical and structural characteristics, including the presence of interfering secondary metabolites, low reactivity of starch, and low starch content. In this study, a standard method for starch analysis used for food and feed (AOAC standard method 996.11) was optimised to improve precision and accuracy for the analysis of starch in wood. Key modifications were introduced in the digestion conditions and in the glucose assay. The optimised protocol was then evaluated through 430 starch analyses of standards at known starch content, matrix polysaccharides, and wood collected from three organs (roots, twigs, mature wood) of four species (coniferous and flowering plants). The optimised protocol proved to be remarkably precise and accurate (3%), suitable for a high throughput routine analysis (35 samples a day) of specimens with a starch content between 40 mg and 21 µg. Samples may include lignified organs of coniferous and flowering plants and non-lignified organs, such as leaves, fruits and rhizomes.

  5. Evaluation of a high throughput starch analysis optimised for wood.

    Science.gov (United States)

    Bellasio, Chandra; Fini, Alessio; Ferrini, Francesco

    2014-01-01

    Starch is the most important long-term reserve in trees, and the analysis of starch is therefore a useful source of physiological information. Currently published protocols for wood starch analysis impose several limitations, such as long procedures and a neutralization step. The high-throughput standard protocols for starch analysis in food and feed represent a valuable alternative. However, they have not been optimised or tested with woody samples. These have particular chemical and structural characteristics, including the presence of interfering secondary metabolites, low reactivity of starch, and low starch content. In this study, a standard method for starch analysis used for food and feed (AOAC standard method 996.11) was optimised to improve precision and accuracy for the analysis of starch in wood. Key modifications were introduced in the digestion conditions and in the glucose assay. The optimised protocol was then evaluated through 430 starch analyses of standards at known starch content, matrix polysaccharides, and wood collected from three organs (roots, twigs, mature wood) of four species (coniferous and flowering plants). The optimised protocol proved to be remarkably precise and accurate (3%), suitable for a high throughput routine analysis (35 samples a day) of specimens with a starch content between 40 mg and 21 µg. Samples may include lignified organs of coniferous and flowering plants and non-lignified organs, such as leaves, fruits and rhizomes.

  6. Evaluation of a High Throughput Starch Analysis Optimised for Wood

    Science.gov (United States)

    Bellasio, Chandra; Fini, Alessio; Ferrini, Francesco

    2014-01-01

    Starch is the most important long-term reserve in trees, and the analysis of starch is therefore a useful source of physiological information. Currently published protocols for wood starch analysis impose several limitations, such as long procedures and a neutralization step. The high-throughput standard protocols for starch analysis in food and feed represent a valuable alternative. However, they have not been optimised or tested with woody samples. These have particular chemical and structural characteristics, including the presence of interfering secondary metabolites, low reactivity of starch, and low starch content. In this study, a standard method for starch analysis used for food and feed (AOAC standard method 996.11) was optimised to improve precision and accuracy for the analysis of starch in wood. Key modifications were introduced in the digestion conditions and in the glucose assay. The optimised protocol was then evaluated through 430 starch analyses of standards at known starch content, matrix polysaccharides, and wood collected from three organs (roots, twigs, mature wood) of four species (coniferous and flowering plants). The optimised protocol proved to be remarkably precise and accurate (3%), suitable for a high throughput routine analysis (35 samples a day) of specimens with a starch content between 40 mg and 21 µg. Samples may include lignified organs of coniferous and flowering plants and non-lignified organs, such as leaves, fruits and rhizomes. PMID:24523863

  7. Cluster Optimisation using Cgroups at a Tier-2

    Science.gov (United States)

    Qin, G.; Roy, G.; Crooks, D.; Skipsey, S. C.; Stewart, G. P.; Britton, D.

    2016-10-01

    The Linux kernel feature Control Groups (cgroups) has been used to gather metrics on the resource usage of single and eight-core ATLAS workloads. It has been used to study the effects on performance of a reduction in the amount of physical memory. The results were used to optimise cluster performance, and consequently increase cluster throughput by up to 10%.

  8. A comparative study of marriage in honey bees optimisation (MBO ...

    African Journals Online (AJOL)

    2012-02-15

    Feb 15, 2012 ... complicate water management decision-making. ... evolutionary algorithms, such as the genetic algorithm (GA), ant colony optimisation for continuous ... biological properties. ... and proposed a new algorithm, called the 'artificial bee colony' ... as a set of transitions in a state–space (the environment), where.

  9. Smart optimisation and sensitivity analysis in water distribution systems

    CSIR Research Space (South Africa)

    Page, Philip R

    2015-12-01

    Full Text Available SMART OPTIMISATION...

  10. Computational optimisation of targeted DNA sequencing for cancer detection

    DEFF Research Database (Denmark)

    Martinez, Pierre; McGranahan, Nicholas; Birkbak, Nicolai Juul

    2013-01-01

    circulating tumour DNA (ctDNA) might represent a non-invasive method to detect mutations in patients, facilitating early detection. In this article, we define reduced gene panels from publicly available datasets as a first step to assess and optimise the potential of targeted ctDNA scans for early tumour...

  11. Application of Surpac and Whittle Software in Open Pit Optimisation ...

    African Journals Online (AJOL)

    Michael

    2015-06-01

    Jun 1, 2015 ... Optimisation and Design”, Ghana Mining Journal, Vol. 15, No. 1, pp. 35 - 43. Abstract ... reduction in metal price in the market. The work involved in open .... also aided in viewing the model in graphics. A constraint is a logical ...

  12. Deterministic and robust optimisation strategies for metal forming proceesses

    NARCIS (Netherlands)

    Bonte, M.H.A.; Boogaard, van den A.H.; Huetink, J.

    2007-01-01

    Product improvement and cost reduction have always been important goals in the metal forming industry. The rise of Finite Element simulations for metal forming processes has contributed to these goals in a major way. More recently, coupling FEM simulations to mathematical optimisation techniques has

  13. A Bayesian Approach for Sensor Optimisation in Impact Identification

    Directory of Open Access Journals (Sweden)

    Vincenzo Mallardo

    2016-11-01

    Full Text Available This paper presents a Bayesian approach for optimizing the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data has been represented by statistical distributions of the recorded signals. An optimisation strategy based on the genetic algorithm is proposed to find the best sensor combination aimed at locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and the robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both the uniform and non-uniform probability of impact occurrence.
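
    The record describes a genetic-algorithm search over sensor combinations scored by a Bayesian objective. The sketch below is a generic illustration of that idea only: the placement objective is a synthetic stand-in, not the authors' Bayesian meta-model, and all sizes are invented.

```python
# Illustrative sketch only: a plain genetic algorithm that searches for a k-sensor
# subset minimising a generic placement objective (a stand-in for the paper's score).
import random

def placement_error(subset):
    s = sorted(subset)
    return -min(b - a for a, b in zip(s, s[1:]))   # reward well-spread sensor positions

def ga_sensor_selection(n_candidates=20, k=4, pop_size=30, generations=50, seed=1):
    rng = random.Random(seed)
    population = [rng.sample(range(n_candidates), k) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=placement_error)            # lower error is better
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            genes = set(a[: k // 2]) | set(b[k // 2:])   # one-point crossover
            while len(genes) < k:                        # repair / mutation
                genes.add(rng.randrange(n_candidates))
            children.append(rng.sample(sorted(genes), k))
        population = parents + children
    return min(population, key=placement_error)

print(ga_sensor_selection())
```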

  14. Optimising a fall out dust monitoring sampling programme at a ...

    African Journals Online (AJOL)

    GREG

    The aim of this study at the specific cement manufacturing plant and open cast mine was ... Key words: Fall out dust monitoring, cement plant, optimising, air pollution sampling, ..... meters as this is in line with the height of a typical fall out dust.

  15. Optimisation of searches for Supersymmetry with the ATLAS detector

    Energy Technology Data Exchange (ETDEWEB)

    Zvolsky, Milan

    2012-01-15

    The ATLAS experiment is one of the four large experiments at the Large Hadron Collider which is specifically designed to search for the Higgs boson and physics beyond the Standard Model. The aim of this thesis is the optimisation of searches for Supersymmetry in decays with two leptons and missing transverse energy in the final state. Two different optimisation studies have been performed for two important analysis aspects: the final signal region selection and the choice of the trigger selection. In the first part of the analysis, a cut-based optimisation of signal regions is performed, maximising the signal for a minimal background contamination. By this, the signal yield can in part be more than doubled. The second approach is to introduce di-lepton triggers which allow the lepton transverse momentum threshold to be lowered, thus significantly enhancing the number of selected signal events. The signal region optimisation was considered for the choice of the final event selection in the ATLAS di-lepton analyses. The trigger study contributed to the incorporation of di-lepton triggers into the ATLAS trigger menu. (orig.)
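
    A cut-based signal-region optimisation of the kind described above can be illustrated with a toy scan over a single threshold, choosing the cut that maximises the common significance estimate s/sqrt(s+b). The yields and thresholds below are invented for illustration and are not taken from the thesis.

```python
# Toy cut scan: pick the missing-ET threshold that maximises s / sqrt(s + b).
# The (signal, background) yields per threshold are invented for illustration.
import math

yields = {100: (42.0, 900.0), 150: (30.0, 250.0), 200: (18.0, 60.0), 250: (9.0, 12.0)}

def significance(s: float, b: float) -> float:
    return s / math.sqrt(s + b) if s + b > 0 else 0.0

best_cut = max(yields, key=lambda cut: significance(*yields[cut]))
print(f"best MET cut: {best_cut} GeV, Z ~ {significance(*yields[best_cut]):.2f}")
```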

  16. Optimised prefactored compact schemes for linear wave propagation phenomena

    Science.gov (United States)

    Rona, A.; Spisso, I.; Hall, E.; Bernardini, M.; Pirozzoli, S.

    2017-01-01

    A family of space- and time-optimised prefactored compact schemes are developed that minimise the computational cost for given levels of numerical error in wave propagation phenomena, with special reference to aerodynamic sound. This work extends the approach of Pirozzoli [1] to the MacCormack type prefactored compact high-order schemes developed by Hixon [2], in which their shorter Padé stencil from the prefactorisation leads to a simpler enforcement of numerical boundary conditions. An explicit low-storage multi-step Runge-Kutta integration advances the states in time. Theoretical predictions for spatial and temporal error bounds are derived for the cost-optimised schemes and compared against benchmark schemes of current use in computational aeroacoustic applications in terms of computational cost for a given relative numerical error value. One- and two-dimensional test cases are presented to examine the effectiveness of the cost-optimised schemes for practical flow computations. An effectiveness up to about 50% higher than the standard schemes is verified for the linear one-dimensional advection solver, which is a popular baseline solver kernel for computational physics problems. A substantial error reduction for a given cost is also obtained in the more complex case of a two-dimensional acoustic pulse propagation, provided the optimised schemes are made to operate close to their nominal design points.

  17. Using break quantities for tactical optimisation in multistage distribution systems

    NARCIS (Netherlands)

    M.J. Kleijn (Marcel); R. Dekker (Rommert)

    1997-01-01

    textabstractIn this chapter we discuss a tactical optimisation problem that arises in a multistage distribution system where customer orders can be delivered from any stockpoint. A simple rule to allocate orders to locations is a break quantity rule, which routes large orders to higher-stage stockpo

  18. Multi-disciplinary design optimisation via dashboard portals

    NARCIS (Netherlands)

    Uijtenhaak, T.; Coenders, J.L.

    2012-01-01

    This paper will present the opportunities a dashboard-based system provides to gather information on alternatives and display this in such a way that it will help the user making decisions by making use of multi-disciplinary optimisation (MDO) technology. The multi-disciplinary set up of the program

  19. A metamodel based optimisation algorithm for metal forming processes

    NARCIS (Netherlands)

    Bonte, M.H.A.; Boogaard, van den A.H.; Huetink, J.; Banabic, Dorel

    2007-01-01

    Cost saving and product improvement have always been important goals in the metal forming industry. To achieve these goals, metal forming processes need to be optimised. During the last decades, simulation software based on the Finite Element Method (FEM) has significantly contributed to designing f

  20. Optimised cantilever biosensor with piezoresistive read-out

    DEFF Research Database (Denmark)

    Rasmussen, Peter; Thaysen, J.; Hansen, Ole

    2003-01-01

    We present a cantilever-based biochemical sensor with piezoresistive read-out which has been optimised for measuring surface stress. The resistors and the electrical wiring on the chip are encapsulated in low-pressure chemical vapor deposition (LPCVD) silicon nitride, so that the chip is well sui...

  1. Optimisation of selective breeding program for Nile tilapia (Oreochromis niloticus)

    NARCIS (Netherlands)

    Trong, T.Q.

    2013-01-01

      The aim of this thesis was to optimise the selective breeding program for Nile tilapia in the Mekong Delta region of Vietnam. Two breeding schemes, the “classic” BLUP scheme following the GIFT method (with pair mating) and a rotational mating scheme with own performance selection

  2. SINGLE FIXED CRANE OPTIMISATION WITHIN A DISTRIBUTION CENTRE

    Directory of Open Access Journals (Sweden)

    J. Matthews

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: This paper considers the optimisation of the movement of a fixed crane operating in a single aisle of a distribution centre. The crane must move pallets in inventory between docking bays, storage locations, and picking lines. Both a static and a dynamic approach to the problem are presented. The optimisation is performed by means of tabu search, ant colony metaheuristics, and hybrids of these two methods. All these solution approaches were tested on real-life data obtained from an operational distribution centre. Results indicate that the hybrid methods outperform the other approaches.

    AFRIKAANSE OPSOMMING (translated): The optimisation of the movement of a fixed crane in a single aisle of a distribution centre is considered in this article. The crane must transport pallets between docking bays, storage locations, and picking lines. Both a static and a dynamic approach to the problem are presented. The optimisation is carried out using tabu search, ant colony optimisation, and hybrids of these two methods. All the solution approaches were tested on real data obtained from an operational distribution centre. The results show that the hybrid methods deliver the best solutions.

  4. Analysing the performance of dynamic multi-objective optimisation algorithms

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available Presented at the Congress on Evolutionary Computation, 20-23 June 2013, Cancún, México.

  5. Self-organising sensor web using cell-fate optimisation

    CSIR Research Space (South Africa)

    Van Zyl, TL

    2009-07-01

    Full Text Available may be doing so both dynamically and stochastically. When presented with a dynamically and stochastically changing environment, such as a sensor resource unexpectedly going down, a self-adaptive system should exhibit robustness. Cell-fate optimisation and signal...

  6. Calculating NMR parameters in aluminophosphates: evaluation of dispersion correction schemes.

    Science.gov (United States)

    Sneddon, Scott; Dawson, Daniel M; Pickard, Chris J; Ashbrook, Sharon E

    2014-02-14

    Periodic density functional theory (DFT) calculations have recently emerged as a popular tool for assigning solid-state nuclear magnetic resonance (NMR) spectra. However, in order for the calculations to yield accurate results, accurate structural models are also required. In many cases the structural model (often derived from crystallographic diffraction) must be optimised (i.e., to an energy minimum) using DFT prior to the calculation of NMR parameters. However, DFT does not reproduce weak long-range "dispersion" interactions well, and optimisation using some functionals can expand the crystallographic unit cell, particularly when dispersion interactions are important in defining the structure. Recently, dispersion-corrected DFT (DFT-D) has been extended to periodic calculations, to compensate for these missing interactions. Here, we investigate whether dispersion corrections are important for aluminophosphate zeolites (AlPOs) by comparing the structures optimised by DFT and DFT-D (using the PBE functional). For as-made AlPOs (containing cationic structure-directing agents (SDAs) and framework-bound anions) dispersion interactions appear to be important, with significant changes between the DFT and DFT-D unit cells. However, for calcined AlPOs, where the SDA-anion pairs are removed, dispersion interactions appear much less important, and the DFT and DFT-D unit cells are similar. We show that, while the different optimisation strategies yield similar calculated NMR parameters (providing that the atomic positions are optimised), the DFT-D optimisations provide structures in better agreement with the experimental diffraction measurements. Therefore, it appears that DFT-D calculations can, and should, be used for the optimisation of calcined and as-made AlPOs, in order to provide the closest agreement with all experimental measurements.

  7. Optimisation of n-octyl oleate enzymatic synthesis over Rhizomucor miehei lipase.

    Science.gov (United States)

    Laudani, Chiara Giulia; Habulin, Maja; Primozic, Mateja; Knez, Zeljko; Della Porta, Giovanna; Reverchon, Ernesto

    2006-07-01

    Octyl oleate is a useful organic compound with several applications in the cosmetic, lubricant and pharmaceutical industries. At first, the enzymatic synthesis of n-octyl oleate by direct lipase-catalysed esterification of oleic acid and 1-octanol was investigated in a stirred batch reactor in a solvent-free system. A systematic screening and optimisation of the reaction parameters were performed to gain insight into the kinetics mechanism. In particular, enzyme concentration, reaction temperature, stirrer speed, water content, substrate concentration and molar ratio were optimised with respect to the final product concentration and reaction rate. The kinetics mechanism of the reaction was investigated. Finally, a comparison of the experimental results obtained in the solvent-free system with those obtained using two different solvents, supercritical carbon dioxide (SC-CO2) and n-hexane, was made. In SC-CO2, a higher concentration of the desired product was attained, with lower enzyme concentrations required to achieve comparable conversion of the free fatty acid into the fatty acid ester.

  8. Optimising electron holography in the presence of partial coherence and instrument instabilities

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Shery L.Y., E-mail: shery.chang@fz-juelich.de; Dwyer, Christian, E-mail: c.dwyer@fz-juelich.de; Boothroyd, Chris B.; Dunin-Borkowski, Rafal E.

    2015-04-15

    Off-axis electron holography provides a direct means of retrieving the phase of the wavefield in a transmission electron microscope, enabling measurement of electric and magnetic fields at length scales from microns to nanometers. To maximise the accuracy of the technique, it is important to acquire holograms using experimental conditions that optimise the phase resolution for a given spatial resolution. These conditions are determined by a number of competing parameters, especially the spatial coherence and the instrument instabilities. Here, we describe a simple, yet accurate, model for predicting the dose rate and exposure time that give the best phase resolution in a single hologram. Experimental studies were undertaken to verify the model of spatial coherence and instrument instabilities that are required for the optimisation. The model is applicable to electron holography in both standard mode and Lorentz mode, and it is relatively simple to apply. - Highlights: • We describe a simple, yet accurate, model for predicting the best phase resolution in off-axis electron holography. • Calibration of the model requires only two series of blank holograms; an intensity sequence and a time sequence. • The model can predict the optimum dose rate and exposure time for any given combination of biprism voltage and magnification. • The model is applicable in both standard mode and Lorentz mode, using either round or elliptical illumination.

  9. Self-optimising control for a class of continuous bioreactor via variable-structure feedback

    Science.gov (United States)

    Lara-Cisneros, Gerardo; Alvarez-Ramírez, José; Femat, Ricardo

    2016-04-01

    A self-optimising controller is designed for stabilisation of a class of bioreactors using sliding-mode techniques. The stability analysis for this class of bioreactor, in the open-loop configuration, suggests that the optimal behaviour, with respect to maximal biomass production, occurs in a structurally unstable region. In this contribution, a variable-structure controller is designed, exploiting the inhibitory effect of substrate concentration on the biomass growth rate, such that the closed-loop system reaches the optimal manifold where the effect induced by the growth rate gradient is compensated (favouring the maximum growth rate). The self-optimising scheme comprises an uncertainty estimator which computes the unknown terms, improving the robustness of the sliding-mode scheme. Numerical experiments illustrate the performance and execution of the control strategy considering different parameter values for the biomass growth rate. The robustness and fragility of the proposed controller are also discussed with respect to the modelling uncertainty and small changes in the controller gains, respectively.

  10. A novel automated bioreactor for scalable process optimisation of haematopoietic stem cell culture.

    Science.gov (United States)

    Ratcliffe, E; Glen, K E; Workman, V L; Stacey, A J; Thomas, R J

    2012-10-31

    Proliferation and differentiation of haematopoietic stem cells (HSCs) from umbilical cord blood at large scale will potentially underpin production of a number of therapeutic cellular products in development, including erythrocytes and platelets. However, to achieve production processes that are scalable and optimised for cost and quality, scaled down development platforms that can define process parameter tolerances and consequent manufacturing controls are essential. We have demonstrated the potential of a new, automated, 24×15 mL replicate suspension bioreactor system, with online monitoring and control, to develop an HSC proliferation and differentiation process for erythroid committed cells (CD71(+), CD235a(+)). Cell proliferation was relatively robust to cell density and oxygen levels and reached up to 6 population doublings over 10 days. The maximum suspension culture density for a 48 h total media exchange protocol was established to be in the order of 10(7)cells/mL. This system will be valuable for the further HSC suspension culture cost reduction and optimisation necessary before the application of conventional stirred tank technology to scaled manufacture of HSC derived products.
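
    The reported "up to 6 population doublings over 10 days" follows the standard relation between fold expansion and population doublings, PD = log2(N_final/N_initial); the cell counts in the worked example below are illustrative only and are not taken from the study.

```python
# Population doublings from a fold expansion: PD = log2(N_final / N_initial).
# Six doublings over ten days therefore corresponds to a 2**6 = 64-fold expansion.
import math

def population_doublings(n_initial: float, n_final: float) -> float:
    return math.log2(n_final / n_initial)

print(population_doublings(1e5, 6.4e6))   # -> 6.0 doublings (illustrative counts)
```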

  11. Analysis and optimisation of the convergence behaviour of the single channel digital tanlock loop

    Science.gov (United States)

    Al-Kharji Al-Ali, Omar; Anani, Nader; Al-Araji, Saleh; Al-Qutayri, Mahmoud

    2013-09-01

    The mathematical analysis of the convergence behaviour of the first-order single channel digital tanlock loop (SC-DTL) is presented. This article also describes a novel technique that allows controlling the convergence speed of the loop, i.e. the time taken by the phase-error to reach its steady-state value, by using a specialised controller unit. The controller is used to adjust the convergence speed so as to selectively optimise a given performance parameter of the loop. For instance, the controller may be used to speed up the convergence in order to increase the lock range and improve the acquisition speed. However, since increasing the lock range can degrade the noise immunity of the system, in a noisy environment the controller can slow down the convergence speed until locking is achieved. Once the system is in lock, the convergence speed can be increased to improve the acquisition speed. The performance of the SC-DTL system was assessed against similar arctan-based loops and the results demonstrate the success of the controller in optimising the performance of the SC-DTL loop. The results of the system testing using MATLAB/Simulink simulation are presented. A prototype of the proposed system was implemented using a field programmable gate array module and the practical results are in good agreement with those obtained by simulation.

  12. Induction Heating Process: 3D Modeling and Optimisation

    Science.gov (United States)

    Naar, R.; Bay, F.

    2011-05-01

    An increasing number of problems in mechanics and physics involves multiphysics coupled problems. Among these problems, we can often find electromagnetic coupled problems. Electromagnetic couplings may be involved through the use of direct or induced currents for thermal purposes (to generate heat inside a workpiece in order to obtain either a prescribed temperature field or some given mechanical or metallurgical properties through accurate control of the temperature evolution with respect to time), or for solid or fluid mechanics purposes (to create magnetic forces, such as electromagnetic stirring in fluid mechanics or magnetoforming in solid mechanics). Induction heat treatment processes are therefore quite difficult to control; trying for instance to minimise distortions generated by such a process is not easy. In order to achieve these objectives, we have developed a computational tool which includes an optimisation stage. A 3D finite element modelling tool for local quenching after induction heating processes has already been developed in our laboratory. The modelling of such a multiphysics coupled process needs to take into account electromagnetic, thermal, mechanical and metallurgical phenomena, as well as their mutual interactions during the whole process: heating and quenching. The model developed is based on the Maxwell equations, the heat transfer equation, mechanical equilibrium computations, and the Johnson-Mehl-Avrami and Koistinen-Marburger laws. All these equations and laws may be coupled, but some couplings may be neglected. In our study, we also focus on the induction heating process, aiming at optimising the Heat Affected Zone (HAZ). This problem is formalised as an optimisation problem: minimising a cost function which measures the difference between computed and optimal temperatures, along with some constraints on process parameters. The optimisation algorithms may be of two kinds: either zero-order or first-order algorithms. First
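
    The optimisation stage described above minimises a cost function measuring the difference between computed and target temperatures, optionally with a zero-order algorithm. The sketch below shows only that outer layer: the coupled finite-element model is replaced by a mock function and all numbers are invented.

```python
# Outer optimisation layer only: the coupled electromagnetic-thermal FE solver is
# replaced by a mock function, and the cost is the squared mismatch between the
# computed and target temperatures at a few probe points in the HAZ.
import numpy as np
from scipy.optimize import minimize

T_TARGET = np.array([850.0, 820.0, 780.0])        # target temperatures (degC), invented

def mock_thermal_model(params):
    """Stand-in for the FE model: coil frequency and current scale the heating."""
    frequency_hz, current_a = params
    depth = np.arange(3)                           # three probe depths
    return 500.0 + 0.02 * frequency_hz + 3.0 * current_a - 5e-4 * frequency_hz * current_a * depth

def cost(params):
    return float(np.sum((mock_thermal_model(params) - T_TARGET) ** 2))

result = minimize(cost, x0=[10_000.0, 50.0], method="Nelder-Mead")   # zero-order search
print(np.round(result.x, 1), round(result.fun, 2))
```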

  13. Optimised polychromatic field-mediated suppression of H-atom tunnelling in a coupled symmetric double well: two-dimensional malonaldehyde model

    Science.gov (United States)

    Ghosh, Subhasree; Talukder, Srijeeta; Sen, Shrabani; Chaudhury, Pinaki

    2015-12-01

    The cis-cis isomerisation motion of malonaldehyde can be modelled as a symmetric double well coupled with an asymmetric double well, which includes the effect of the cis-trans out-of-plane motion on the cis-cis motion. We present an effective method for controlling the tunnelling dynamics of the symmetric double well, coupled with the asymmetric double well, by monitoring the tunnelling splitting. When a suitable external field is allowed to interact with the system, the tunnelling splitting gets modified. Since the external time perturbation is periodic in nature, Floquet theory can be applied to calculate the quasi-energies of the perturbed system and hence the tunnelling splitting. The Floquet analysis is coupled with a stochastic optimiser in order to minimise the tunnelling splitting, which corresponds to a slowing down of the tunnelling process. The minimisation has been carried out with one of the stochastic optimisers, simulated annealing. Optimisation has been performed on the parameters which define the external polychromatic field, such as the intensities and frequencies of the components of the polychromatic field. With the optimised sets of parameters, we have followed the dynamics of the system and have found suppression of tunnelling, manifested in a much longer tunnelling time.
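
    The record couples a Floquet calculation with simulated annealing over the field parameters. The sketch below shows a generic simulated-annealing loop of that kind; the "splitting" function and all parameter values are synthetic stand-ins, not the authors' Floquet model.

```python
# Generic simulated-annealing loop over the intensities and frequencies of a two-colour
# field.  The "splitting" below is a synthetic black-box objective for illustration.
import math
import random

def splitting(params):
    i1, w1, i2, w2 = params
    return abs(math.sin(i1 * w1) + 0.5 * math.sin(i2 * w2)) + 0.01 * (i1 + i2)

def anneal(x0, steps=5000, t0=1.0, seed=0):
    rng = random.Random(seed)
    x, fx = list(x0), splitting(x0)
    best, fbest = list(x), fx
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-6                 # linear cooling schedule
        cand = [max(0.0, xi + rng.gauss(0.0, 0.1)) for xi in x]
        fc = splitting(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = list(cand), fc
    return best, fbest

print(anneal([1.0, 1.0, 1.0, 1.0]))
```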

  14. Performance Comparison between Optimised Camber and Span for a Morphing Wing

    Directory of Open Access Journals (Sweden)

    Christopher Simon Beaverstock

    2015-09-01

    Full Text Available Morphing technology offers a strategy to modify the wing geometry, and the wing planform and cross-sectional parameters can be optimised to the flight conditions. This paper presents an investigation into the effect of span and camber morphing on the mission performance of a 25-kg UAV, with a straight, rectangular, unswept wing. The wing is optimised over two velocities for various fixed wing and morphing wing strategies, where the objective is to maximise aerodynamic efficiency or range. The investigation analyses the effect of the low and high speed velocity selected, the weighting of the low and high velocity on the computation of the mission parameter, the maximum allowable span retraction and the weight penalty on the mission performance. Models that represent the adaptive aspect ratio (AdAR span morphing concept and the fish bone active camber (FishBAC camber morphing concept are used to investigate the effect on the wing parameters. The results indicate that generally morphing for both span and camber, the aerodynamic efficiency is maximised for a 30%–70% to 40%–60% weighting between the low and high speed flight conditions, respectively. The span morphing strategy with optimised fixed camber at the root can deliver up to 25% improvement in the aerodynamic efficiency over a fixed camber and span, for an allowable 50% retraction with a velocity range of 50–115 kph. Reducing the allowable retraction to 25% reduces the improvement to 8%–10% for a 50%–50% mission weighting. Camber morphing offers a maximum of 4.5% improvement approximately for a velocity range of 50–90 kph. Improvements in the efficiency achieved through camber morphing are more sensitive to the velocity range in the mission, generally decreasing rapidly by reducing or increasing the velocity range, where span morphing appears more robust for an increase in velocity range beyond the optimum. However, where span morphing requires considerable modification to the
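
    The mission objective described above is a weighted combination of the aerodynamic efficiency at the low- and high-speed conditions. The small sketch below spells out that weighting; the efficiency values and the 30%-70% split are illustrative, not results from the paper.

```python
# Weighted two-speed mission objective: J = w_low * E(V_low) + w_high * E(V_high).
def mission_objective(e_low, e_high, w_low=0.3, w_high=0.7):
    assert abs(w_low + w_high - 1.0) < 1e-9
    return w_low * e_low + w_high * e_high

print(mission_objective(14.0, 11.0))   # fixed geometry (illustrative L/D values)
print(mission_objective(16.5, 13.5))   # morphing geometry (illustrative L/D values)
```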

  15. Patient-size-dependent radiation dose optimisation technique for abdominal CT examinations.

    Science.gov (United States)

    Ngaile, J E; Msaki, P; Kazema, R

    2012-01-01

    Since patient doses from computed tomography (CT) are relatively high, risk-benefit analysis requires that dose to patients and image quality be optimised. The aim of this study was to develop a patient-dependent optimisation technique that uses patient diameter to select a combination of CT scanning parameters that minimise the dose delivered to patients undergoing abdominal CT examinations. The study was performed using cylindrical phantoms of diameters ranging from 16 to 40 cm in order to establish the relationship between image degradation, CT scanning techniques, patient dose and patient size for two CT scanners. These relationships were established by scanning the phantoms using the standard scanning technique followed by selected combinations of scanning parameters. The image noise in the phantom images was determined using region-of-interest software available in both scanners. The energy depositions to the X-ray detector through the phantoms were determined from measurements of the CT dose index in air corrected for attenuation of the phantom materials. The results demonstrate that exposure settings (milliampere seconds) could be reduced by up to 82 % for the smaller phantoms relative to standard milliampere seconds, while the detector signal could be reduced by up to 93 % for the smaller phantoms relative to the energy depositions required when scanning with standard protocols. It was further revealed that the use of object-specific scanning parameters in studies performed with phantoms of different diameters could reduce the incident radiation to small-sized objects by up to 86 % while obtaining the same image quality required for a standard adult-sized object. In view of this, substantial dose savings for small-sized adult and paediatric patients undergoing abdominal CT examinations could be achieved through optimal adjustment of the CT scanning technique based on the patient's transverse diameter.
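
    One common way to turn a patient (or phantom) diameter into an exposure setting is an exponential, attenuation-based scaling of the tube current-time product. The sketch below illustrates that idea with an assumed effective attenuation coefficient; it is not the authors' fitted relationship.

```python
# Exponential size-based exposure scaling: mAs(d) = mAs_ref * exp(mu * (d - d_ref)),
# keeping the detector signal roughly constant.  mu is an assumed effective value.
import math

def scaled_mas(diameter_cm, mas_ref=200.0, d_ref_cm=32.0, mu_per_cm=0.18):
    return mas_ref * math.exp(mu_per_cm * (diameter_cm - d_ref_cm))

for d in (16, 24, 32, 40):
    print(f"{d} cm phantom -> {scaled_mas(d):.0f} mAs")
```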

  16. Mesh dependence in PDE-constrained optimisation an application in tidal turbine array layouts

    CERN Document Server

    Schwedes, Tobias; Funke, Simon W; Piggott, Matthew D

    2017-01-01

    This book provides an introduction to PDE-constrained optimisation using finite elements and the adjoint approach. The practical impact of the mathematical insights presented here are demonstrated using the realistic scenario of the optimal placement of marine power turbines, thereby illustrating the real-world relevance of best-practice Hilbert space aware approaches to PDE-constrained optimisation problems. Many optimisation problems that arise in a real-world context are constrained by partial differential equations (PDEs). That is, the system whose configuration is to be optimised follows physical laws given by PDEs. This book describes general Hilbert space formulations of optimisation algorithms, thereby facilitating optimisations whose controls are functions of space. It demonstrates the importance of methods that respect the Hilbert space structure of the problem by analysing the mathematical drawbacks of failing to do so. The approaches considered are illustrated using the optimisation problem arisin...

  17. Application of statistical experimental design for optimisation of bioinsecticides production by sporeless Bacillus thuringiensis strain on cheap medium.

    Science.gov (United States)

    Ben Khedher, Saoussen; Jaoua, Samir; Zouari, Nabil

    2013-01-01

    In order to enhance bioinsecticide production by a sporeless Bacillus thuringiensis strain, an optimal composition of a cheap medium was defined using a response surface methodology. In a first step, a Plackett-Burman design was used to evaluate the effects of eight medium components on delta-endotoxin production and showed that starch, soya bean and sodium chloride exhibited significant effects on bioinsecticide production. In a second step, these parameters were selected for further optimisation by central composite design. The results revealed that the optimum culture medium for delta-endotoxin production consists of 30 g L(-1) starch, 30 g L(-1) soya bean and 9 g L(-1) sodium chloride. When compared to the basal production medium, an improvement in delta-endotoxin production of up to 50% was noted. Moreover, the relative toxin yield of sporeless Bacillus thuringiensis S22 was improved markedly by using the optimised cheap medium (148.5 mg delta-endotoxins per g starch) when compared to the yield obtained in the basal medium (94.46 mg delta-endotoxins per g starch). Therefore, the use of the optimised cheap culture medium appears to be a good alternative for low-cost production of sporeless Bacillus thuringiensis bioinsecticides at industrial scale, which is of great importance from a practical point of view.
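
    After screening, the central composite design step fits a quadratic response surface and reads off its optimum. The sketch below illustrates that step on synthetic coded design points and yields; none of the numbers are the paper's data, and only two of the three significant factors are shown.

```python
# Response-surface step on synthetic data: fit y = b0 + b1*x1 + b2*x2 + b11*x1^2
# + b22*x2^2 + b12*x1*x2 to central-composite points and locate the stationary point.
import numpy as np

# Coded levels for two factors (e.g. starch, soya bean) plus synthetic yields y.
x1 = np.array([-1, 1, -1, 1, 0, 0, 0, -1.41, 1.41, 0, 0])
x2 = np.array([-1, -1, 1, 1, 0, 0, 0, 0, 0, -1.41, 1.41])
y = np.array([90, 120, 110, 140, 150, 148, 151, 95, 130, 100, 125], dtype=float)

X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point: set the gradient of the fitted quadratic to zero and solve.
b1, b2, b11, b22, b12 = beta[1], beta[2], beta[3], beta[4], beta[5]
A = np.array([[2 * b11, b12], [b12, 2 * b22]])
x_opt = np.linalg.solve(A, -np.array([b1, b2]))
print("fitted coefficients:", np.round(beta, 2))
print("stationary point (coded units):", np.round(x_opt, 2))
```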

  18. Application of statistical experimental design for optimisation of bioinsecticides production by sporeless Bacillus thuringiensis strain on cheap medium

    Directory of Open Access Journals (Sweden)

    Saoussen Ben Khedher

    2013-09-01

    Full Text Available In order to enhance bioinsecticide production by a sporeless Bacillus thuringiensis strain, an optimal composition of a cheap medium was defined using a response surface methodology. In a first step, a Plackett-Burman design was used to evaluate the effects of eight medium components on delta-endotoxin production and showed that starch, soya bean and sodium chloride exhibited significant effects on bioinsecticide production. In a second step, these parameters were selected for further optimisation by central composite design. The results revealed that the optimum culture medium for delta-endotoxin production consists of 30 g L-1 starch, 30 g L-1 soya bean and 9 g L-1 sodium chloride. When compared to the basal production medium, an improvement in delta-endotoxin production of up to 50% was noted. Moreover, the relative toxin yield of sporeless Bacillus thuringiensis S22 was improved markedly by using the optimised cheap medium (148.5 mg delta-endotoxins per g starch) when compared to the yield obtained in the basal medium (94.46 mg delta-endotoxins per g starch). Therefore, the use of the optimised cheap culture medium appears to be a good alternative for low-cost production of sporeless Bacillus thuringiensis bioinsecticides at industrial scale, which is of great importance from a practical point of view.

  19. Calcium Uptake in Crude Tissue Preparation

    Science.gov (United States)

    Bidwell, Philip A.; Kranias, Evangelia G.

    2016-01-01

    SUMMARY The various isoforms of the sarco/endoplasmic reticulum Ca2+ ATPase (SERCA) are responsible for the Ca2+ uptake from the cytosol into the endoplasmic or sarcoplasmic reticulum (ER/SR). In some tissues, the activity of SERCA can be modulated by binding partners, such as phospholamban and sarcolipin. The activity of SERCA can be characterized by its apparent affinity for Ca2+ as well as maximal enzymatic velocity. Both parameters can be effectively determined by the protocol described here. Specifically, we describe the measurement of the rate of oxalate-facilitated 45Ca uptake into the SR of crude mouse ventricular homogenates. This protocol can easily be adapted for different tissues and animal models as well as cultured cells. PMID:26695031

  20. Optimising sulfuric acid hard coat anodising for an Al-Mg-Si wrought aluminium alloy

    Science.gov (United States)

    Bartolo, N.; Sinagra, E.; Mallia, B.

    2014-06-01

    This research evaluates the effects of sulfuric acid hard coat anodising parameters, such as acid concentration, electrolyte temperature, current density and time, on the hardness and thickness of the resultant anodised layers. A small-scale anodising facility was designed and set up to enable experimental investigation of the anodising parameters. An experimental design using the Taguchi method to optimise the parameters within an established operating window was performed. Qualitative and quantitative methods of characterisation of the resultant anodised layers were carried out. The anodised layers' thickness and morphology were determined using a light optical microscope (LOM) and a field emission gun scanning electron microscope (FEG-SEM). Hardness measurements were carried out using a nano hardness tester. Correlations between the various anodising parameters and their effect on the hardness and thickness of the anodised layers were established. Careful evaluation of these effects enabled optimum parameters to be determined using the Taguchi method, which were verified experimentally. Anodised layers with a hardness between 2.4 and 5.2 GPa and a thickness between 20 and 80 μm were produced. The Taguchi method was shown to be applicable to anodising. This finding could facilitate on-going and future research and development of anodising, which is attracting remarkable academic and industrial interest.

  1. Heat exchanger design based on economic optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Caputo, Antonio C.; Pelagagge, Pacifico M.; Salini, Paolo [University of L' Aquila, Engineering Faculty, Monteluco di Roio 67100, L' Aquila (Italy)

    2008-07-15

    Owing to the wide utilization of heat exchangers in industrial processes, their cost minimization is an important target for both designers and users. Traditional design approaches are based on iterative procedures which gradually change design parameters until a satisfactory solution, which meets the design specifications, is reached. However, such methods, besides being time consuming, do not guarantee that an economically optimal solution is reached. In this paper a procedure for optimal design of shell and tube heat exchangers is proposed, which utilizes a genetic algorithm to minimize the total cost of the equipment, including capital investment and the sum of discounted annual energy expenditures related to pumping. In order to verify the capability of the proposed method, three case studies are also presented showing that significant cost reductions are feasible with respect to traditionally designed exchangers. In particular, in the examined cases a reduction of total costs up to more than 50% was observed. (author)
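
    The procedure described above couples a total-cost model (capital investment plus discounted pumping-energy expenditure) with a genetic algorithm. The sketch below is a heavily simplified illustration with invented cost coefficients, using SciPy's differential evolution as a GA-like optimiser; it is not the paper's cost correlations.

```python
# Simplified total-cost model (invented coefficients): capital cost of the exchange area
# plus discounted pumping-energy cost, minimised with an evolutionary optimiser.
import numpy as np
from scipy.optimize import differential_evolution

HOURS_PER_YEAR, YEARS, DISCOUNT, ENERGY_PRICE = 7000, 10, 0.08, 0.12   # h, yr, -, EUR/kWh

def total_cost(x):
    area_m2, pump_power_kw = x
    capital = 8000.0 + 260.0 * area_m2 ** 0.91                 # assumed cost correlation
    annual_energy_cost = pump_power_kw * HOURS_PER_YEAR * ENERGY_PRICE
    discounted = sum(annual_energy_cost / (1 + DISCOUNT) ** k for k in range(1, YEARS + 1))
    # crude thermo-hydraulic coupling: a small area needs high velocities, i.e. more pumping
    penalty = 1e6 if area_m2 * pump_power_kw < 900.0 else 0.0
    return capital + discounted + penalty

result = differential_evolution(total_cost, bounds=[(10.0, 500.0), (0.5, 50.0)], seed=1)
print(np.round(result.x, 1), round(float(result.fun)))
```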

  2. Optimisation of ATP determination in drinking water

    DEFF Research Database (Denmark)

    Corfitzen, Charlotte B.; Albrechtsen, Hans-Jørgen

    Adenosine Triphosphate (ATP) can be used as a relative measure of cell activity, and is measured by the light output from the reaction between luciferin and ATP catalyzed by firefly luciferase. The measurement has potential as a monitoring and surveillance tool within drinking water distribution..., since the method is very sensitive (detects 0.5 ng ATP/L) and results are obtained within minutes. When calculating the ATP value, a number of parameters need to be considered. These were investigated using two different reagent kits (PCP-kit and Lumin(ATE)/Lumin(EX)-kit), an internal standard... and an Advance Coupe luminometer. The investigations showed a 60 times higher response of the PCP-kit, making it more suitable for measurement of samples with low ATP content. ATP-standard dilutions prepared in tap water were stable for at least 15 months when stored frozen at -80ºC, and storage of large...

  4. Design Optimisation of Parachute Sequencer Mechanism

    Directory of Open Access Journals (Sweden)

    C. M. Kulkarni

    1992-01-01

    Full Text Available Fragment hit density and hit probability of the warhead are the critical parameters in the selection of a preformed fragment-type missile warhead against ground targets. Hence these factors are to be maximised. The parametric studies of these factors have led to a new concept, the variable mass preformed fragmented (VMPF) warhead. A philosophy was evolved for the VMPF-type missile warheads. Computer software for generating the external configuration of the VMPF-type missile warhead was developed, and the basic algorithm is discussed in this paper. With this new design approach, the fragment hit density and hit probability were improved considerably at the shorter ranges, when compared to those of a uniform mass preformed fragmented warhead of conventional design.

  5. Optimised resource construction for verifiable quantum computation

    Science.gov (United States)

    Kashefi, Elham; Wallden, Petros

    2017-04-01

    Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computation. We present here a new construction that simplifies the required resources for any such verifiable protocol. We obtain an overhead that is linear in the size of the input (computation), while the security parameter remains independent of the size of the computation and can be made exponentially small (with a small extra cost). Furthermore our construction is generic and could be applied to any universal or non-universal scheme with a given underlying graph.

  6. Uptake of nuclides by plants

    Energy Technology Data Exchange (ETDEWEB)

    Greger, Maria [Stockholm Univ. (Sweden). Dept. of Botany

    2004-04-01

    This review on plant uptake of elements has been prepared to demonstrate how plants take up different elements. The work discusses the nutrient elements, as well as the general uptake and translocation in plants, both via roots and by foliar absorption. Knowledge of the uptake of the various elements within the periodic system is then reviewed. The work also discusses transfer factors (TF) as well as difficulties in using TF to understand the uptake by plants. The review also focuses on species differences. Knowledge necessary to understand and calculate plant influence on radionuclide recirculation in the environment is discussed, in which the plant uptake of a specific nuclide and the fate of that nuclide in the plant must be understood. Plants themselves determine the uptake, the soil/sediment determines the availability of the nuclides, and the nuclides themselves can interact with each other, which also influences the uptake. Consequently, it is not possible to predict the nuclide uptake in plants by only analysing the nuclide concentration of the soil/substrate.
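
    The transfer factors (TF) mentioned above are usually defined as the ratio of the concentration in the plant to that in the soil; the small example below spells out that ratio with illustrative values.

```python
# Soil-to-plant transfer factor: TF = C_plant / C_soil (activity concentrations,
# usually on a dry-mass basis).  The values below are illustrative only.
def transfer_factor(c_plant_bq_per_kg: float, c_soil_bq_per_kg: float) -> float:
    return c_plant_bq_per_kg / c_soil_bq_per_kg

print(transfer_factor(25.0, 500.0))   # 25 Bq/kg in plant, 500 Bq/kg in soil -> TF = 0.05
```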

  7. Optimising steel production schedules via a hierarchical genetic algorithm

    Directory of Open Access Journals (Sweden)

    Worapradya, Kiatkajohn

    2014-08-01

    Full Text Available This paper presents an effective scheduling approach for a steel-making continuous casting (SCC) plant. The main contribution of this paper is the formulation of a new optimisation model that more closely represents real-world situations, and a hierarchical genetic algorithm (HGA) tailored particularly for searching for an optimal SCC schedule. The optimisation model is developed by integrating the two main planning phases of traditional scheduling: (1) planning the cast sequence, and (2) scheduling of steel-making and timing of all jobs. A novel procedure is given for genetic algorithm (GA) chromosome coding that maps Gantt charts to hierarchical chromosomes. The performance of the proposed methodology is illustrated and compared with a two-phase traditional scheduling approach and a standard GA toolbox. Both qualitative and quantitative performance measures are investigated.

  8. CAE process to simulate and optimise engine noise and vibration

    Science.gov (United States)

    Junhong, Zhang; Jun, Han

    2006-08-01

    The vibratory and acoustic behaviour of the internal combustion engine is a highly complex one, consisting of many components that are subject to loads that vary greatly in magnitude and which operate over a wide range of speeds. The development of CAE tools will lead to a significant reduction in the duration of the engine development period as well as ensure a dramatic increase in product quality. This paper presents today's state-of-the-art CAE capabilities in the simulation of the dynamic and acoustic behaviour of the engine and focuses on the relative merits of modification and full-scale structural/acoustic optimisation of the engine, together with the creation of new low-noise designs. Modern CAE tools allow the analysis, assessment and acoustic optimisation of the engine.

  9. Optimisation of metal charge material for electric arc furnace

    Directory of Open Access Journals (Sweden)

    T. Lis

    2011-10-01

    Full Text Available The analysis of the changes in crude steel production volumes shows a gradual increase in production since the mid-20th century. This tendency has been slightly hampered by the economic depression. At the same time, market requirements enforce improvement of the quality of the manufactured products with simultaneous minimisation of the production costs. One of the tools applied to solve these problems is mathematical optimisation. The author of this paper has presented an example of application of the multi-criteria optimisation method to improvement of the efficiency of steel smelting in an electric arc furnace (EAF) through appropriate choice of the charge scrap. A measurable effect of applying such a methodology of choosing the metal charge is the ability to reduce the unit cost of steel smelting.
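
    A single-objective version of the charge-mix problem can be written as a small linear programme: minimise scrap cost subject to the heat mass and a tramp-element limit. The sketch below uses made-up prices and copper contents; the paper itself uses a multi-criteria formulation.

```python
# Illustrative single-objective charge-mix LP (invented prices and compositions):
# choose scrap tonnages that minimise cost, meet the heat mass and cap copper content.
import numpy as np
from scipy.optimize import linprog

prices = np.array([230.0, 260.0, 310.0])          # EUR/t for three scrap grades
cu_content = np.array([0.40, 0.25, 0.10])          # % Cu in each grade
heat_mass_t, cu_limit_pct = 100.0, 0.25

res = linprog(
    c=prices,
    A_ub=[cu_content - cu_limit_pct],              # sum(x_i * (Cu_i - limit)) <= 0
    b_ub=[0.0],
    A_eq=[[1.0, 1.0, 1.0]],
    b_eq=[heat_mass_t],
    bounds=[(0.0, heat_mass_t)] * 3,
)
print(np.round(res.x, 1), round(res.fun))
```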

  10. Compressed Sensing with Nonlinear Observations and Related Nonlinear Optimisation Problems

    CERN Document Server

    Blumensath, Thomas

    2012-01-01

    Non-convex constraints have recently proven a valuable tool in many optimisation problems. In particular, sparsity constraints have had a significant impact on sampling theory, where they are used in Compressed Sensing and allow structured signals to be sampled far below the rate traditionally prescribed. Nearly all of the theory developed for Compressed Sensing signal recovery assumes that samples are taken using linear measurements. In this paper we instead address the Compressed Sensing recovery problem in a setting where the observations are non-linear. We show that, under conditions similar to those required in the linear setting, the Iterative Hard Thresholding algorithm can be used to accurately recover sparse or structured signals from few non-linear observations. Similar ideas can also be developed in a more general non-linear optimisation framework. In the second part of this paper we therefore present related results that show how this can be done under sparsity and union-of-subspaces constraints, wh...
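
    For the linear case referred to above, the Iterative Hard Thresholding iteration is x_{k+1} = H_s(x_k + mu*A^T(y - A*x_k)), where H_s keeps the s largest-magnitude entries; the non-linear setting discussed in the paper replaces A with a general forward map and its derivative. The sketch below implements the linear iteration on synthetic data.

```python
# Iterative Hard Thresholding for the linear case on synthetic data.
import numpy as np

def hard_threshold(x, s):
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]      # indices of the s largest-magnitude entries
    out[keep] = x[keep]
    return out

def iht(A, y, s, mu=None, iters=200):
    m, n = A.shape
    mu = mu or 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
    x = np.zeros(n)
    for _ in range(iters):
        x = hard_threshold(x + mu * A.T @ (y - A @ x), s)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 27, 71]] = [1.5, -2.0, 0.8]
x_hat = iht(A, A @ x_true, s=3)
print(np.nonzero(x_hat)[0], np.round(x_hat[np.nonzero(x_hat)], 2))
```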

  11. Optimisation of VSC-HVDC Transmission for Wind Power Plants

    DEFF Research Database (Denmark)

    Silva, Rodrigo Da

    Connection of Wind Power Plants (WPP), typically offshore, using VSC-HVDC transmission is an emerging solution with many benefits compared to the traditional AC solution, especially concerning the impact on the control architecture of the wind farms and the grid. The VSC-HVDC solution is likely to meet... more stringent grid codes than a conventional AC transmission connection. The purpose of this project is to analyse how the HVDC solution, based on voltage-source converter technology, for grid connection of large wind power plants can be designed and optimised. By optimisation, the project... the requirements established by the operators in the multiterminal VSC-HVDC transmission system. Moreover, the possibility of minimising the overall transmission losses can be a solution for small grids, and the minimisation of the dispatch error is a new solution for power delivery maximisation. The second study...

  12. Comparing and Optimising Parallel Haskell Implementations for Multicore Machines

    DEFF Research Database (Denmark)

    Berthold, Jost; Marlow, Simon; Hammond, Kevin

    2009-01-01

    ...The GpH implementation investigated here uses a physically-shared heap, which should be well-suited to multicore architectures. In contrast, the Eden implementation adopts an approach that has been designed for use on distributed-memory parallel machines: a system of multiple, independent heaps (one per core), with inter-core communication handled by message-passing rather than through shared heap cells. We report two main results. Firstly, we report on the effect of a number of optimisations that we applied to the shared-memory GpH implementation in order to address some performance issues that were revealed by our testing: for example, we implemented a work-stealing approach to task allocation. Our optimisations improved the performance of the shared-heap GpH implementation by as much as 30% on eight cores. Secondly, the shared heap approach is, rather surprisingly, not superior to a distributed heap...

  13. Optimising stroke volume and oxygen delivery in abdominal aortic surgery

    DEFF Research Database (Denmark)

    Bisgaard, J; Gilsaa, T; Rønholm, E

    2012-01-01

    BACKGROUND: Post-operative complications after open elective abdominal aortic surgery are common, and individualised goal-directed therapy may improve outcome in high-risk surgery. We hypothesised that individualised goal-directed therapy, targeting stroke volume and oxygen delivery, can reduce... In the intervention group, stroke volume was optimised by 250 ml colloid boluses intraoperatively and for the first 6 h post-operatively. The optimisation aimed at an oxygen delivery of 600 ml/min/m(2) in the post-operative period. Haemodynamic data were collected at pre-defined time points, including baseline, intraoperatively and post-operatively. Patients were followed up for 30 days. RESULTS: Stroke volume index and oxygen delivery index were both higher in the post-operative period in the intervention group. In this group, 27 of 32 achieved the post-operative oxygen delivery index target vs. 18 of 32 in the control...

  14. Biomass supply chain optimisation for Organosolv-based biorefineries.

    Science.gov (United States)

    Giarola, Sara; Patel, Mayank; Shah, Nilay

    2014-05-01

    This work aims at providing a Mixed Integer Linear Programming modelling framework to help define planning strategies for the development of sustainable biorefineries. The up-scaling of an Organosolv biorefinery was addressed via optimisation of the whole system economics. Three real world case studies were addressed to show the high-level flexibility and wide applicability of the tool to model different biomass typologies (i.e. forest fellings, cereal residues and energy crops) and supply strategies. Model outcomes have revealed how supply chain optimisation techniques could help shed light on the development of sustainable biorefineries. Feedstock quality, quantity, temporal and geographical availability are crucial to determine biorefinery location and the cost-efficient way to supply the feedstock to the plant. Storage costs are relevant for biorefineries based on cereal stubble, while wood supply chains present dominant pretreatment operations costs.

  15. Optimised Calibration Method for Six-Port Junction

    Institute of Scientific and Technical Information of China (English)

    XIONG Xiang-zheng; LIAO Cheng; XIAO Hua-qing

    2008-01-01

    A dual-tone technique is used to produce multiple samples in the optimised calibration of a six-port junction. More accurate results are achieved by using the least-squares method and excluding those samples which may cause larger errors. A 0.80-1.10 GHz microwave integrated circuit (MIC) six-port reflectometer is constructed. Nine test samples are used in the measurement. With Engen's calibration procedure, the difference between the HP8510 and the six-port reflectometer is of the order of 0.20 dB/1.5° for most cases, and above 0.50 dB/5.0° at the boundary frequencies. With the optimised method, the difference is less than 0.10 dB/1.0° for most cases, and the biggest error is 0.42 dB/2.1° at the boundary frequencies.
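
    The least-squares step exploits the fact that the dual-tone technique yields more calibration samples than unknowns. The sketch below illustrates only the generic idea of solving such an overdetermined system; it is not the actual six-port calibration model, and all quantities are synthetic.

```python
# Generic illustration: with more calibration samples than unknowns, a least-squares
# solution of the overdetermined linear system averages out measurement error.
import numpy as np

rng = np.random.default_rng(2)
true_params = np.array([1.2, -0.7, 0.3])            # unknown calibration constants
samples = rng.uniform(-1.0, 1.0, size=(9, 3))        # nine test samples, three unknowns
readings = samples @ true_params + 0.01 * rng.standard_normal(9)   # noisy port readings

estimate, residuals, *_ = np.linalg.lstsq(samples, readings, rcond=None)
print(np.round(estimate, 3))                          # close to the true constants
```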

  16. An Experimental Approach for Optimising Mobile Agent Migrations

    CERN Document Server

    Gavalas, Damianos

    2010-01-01

    The field of mobile agent (MA) technology has been intensively researched during the past few years, resulting in the phenomenal proliferation of available MA platforms, all sharing several common design characteristics. Research projects have mainly focused on identifying applications where the employment of MAs is preferable compared to centralised or alternative distributed computing models. Very little work has been done on examining how the design of MA platforms can be optimised so that the network traffic and latency associated with MA transfers are minimised. The work presented in this paper addresses these issues by investigating the effect of several optimisation ideas applied to our MA platform prototype. Furthermore, we discuss the results of a set of timing experiments that offers a better understanding of the agent migration process, and recommend new techniques for reducing MA transfer delay.

  17. Topology Optimised Broadband Photonic Crystal Y-Splitter

    DEFF Research Database (Denmark)

    Borel, Peter Ingo; Frandsen, Lars Hagedorn; Harpøth, Anders

    2005-01-01

    A planar photonic crystal waveguide Y-splitter that exhibits large-bandwidth low-loss 3 dB splitting for TE-polarised light has been fabricated in silicon-on-insulator material. The high performance is achieved by utilising topology optimisation to design the Y-junction and by using topology...... optimised low-loss 60° bends. The average excess loss of the entire component is found to be 0.44±0.29 dB for a 100 nm bandwidth, and the excess loss due to the Y-junction is found to be 0.34±0.30 dB in a 175 nm bandwidth....

  18. Development and optimisation of electrode materials in solid oxide fuel cells

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A solid oxide fuel cell (SOFC) is an all-solid electrochemical device that converts fuels such as hydrogen and natural gas to electricity with high efficiency and very low greenhouse gas emissions compared to traditional thermal power generation plants. Moreover, the reliability and efficiency of an SOFC are critically dependent on the performance and stability of its components, including the anode, cathode and electrolyte. This in turn is largely dependent on the material selection and the fabrication processes. In this paper, specific examples are given to demonstrate the strategy and process in the development and optimisation of electrode materials such as Ni/Y2O3-ZrO2 cermet anodes and (LaSr)MnO3 based cathodes. The results also demonstrate the importance of the fabrication processes and that the understanding of the electrode process plays a very important role in the optimisation of electrode materials.

  19. Computed tomography dose optimisation in cystic fibrosis: A review.

    LENUS (Irish Health Repository)

    Ferris, Helena

    2016-04-28

    Cystic fibrosis (CF) is the most common autosomal recessive disease of the Caucasian population worldwide, with respiratory disease remaining the most relevant source of morbidity and mortality. Computed tomography (CT) is frequently used for monitoring disease complications and progression. Over the last fifteen years there has been a six-fold increase in the use of CT, which has led to growing concern about cumulative radiation exposure. The challenge to the medical profession is to identify dose reduction strategies that maintain acceptable image quality but still fulfil the requirements of a diagnostic-quality CT. Dose optimisation, particularly in CT, is essential as it reduces the chances of patients receiving cumulative radiation doses in excess of 100 mSv, a dose deemed significant by the United Nations Scientific Committee on the Effects of Atomic Radiation. This review article explores the current trends in imaging in CF with particular emphasis on new developments in dose optimisation.

  20. Optimisation extraction of chondroitin sulfate from fish bone by high intensity pulsed electric fields.

    Science.gov (United States)

    He, Guidan; Yin, Yongguang; Yan, Xiaoxia; Yu, Qingyu

    2014-12-01

    High intensity pulsed electric fields (PEF) was used to extract chondroitin sulphate (CS) from fish bone. Results show that PEF extraction speed is much faster, and the content of CS is much higher compared with traditional methods. Variation of PEF parameters and the content of CS were determined by single factor experiments. The processing conditions were optimised by quadratic general rotary unitised design experiments. The maximum yield of 6.92 g/L was achieved under the following conditions: material-liquid ratio of 1:15 g/mL, electric field intensity of 16.88 kV/cm, pulse number of 9, and NaOH concentration of 3.24%. The purity of CS was analysed by agarose gel electrophoresis. CS purity was high, and the extract did not contain any other glycosaminoglycans. PEF can be widely used to extract CS with non-thermal performance, high speed, and low pollution.

  1. Preliminary results on optimisation of gas flow rate for ICAL RPCs

    Energy Technology Data Exchange (ETDEWEB)

    Bhuyan, M.R.; Kalmani, S.D.; Mondal, N.K.; Pal, S., E-mail: sumanta@tifr.res.in; Samuel, D.; Satyanarayana, B., E-mail: bsn@tifr.res.in; Shinde, R.R.

    2014-02-01

    The India-based Neutrino Observatory (INO) collaboration is planning to build a magnetised Iron CALorimeter (ICAL) detector to study atmospheric neutrinos and to measure their oscillation parameters. The ICAL will use 50 kton iron as target mass and about 28,800 Resistive Plate Chambers (RPCs) of 2×2 m{sup 2} in area as active detector elements. As part of its R and D programme, a stack of 12 glass RPCs of 1×1 m{sup 2} in area has been set up to study and characterise the performance of the RPCs. In this paper, we describe our study on the optimisation of gas flow through the RPCs. We conclude that the refreshing frequency of gas can be reduced by a factor of 30 with leak free RPCs, without compromising performance of the RPCs.

  2. Optimisation of shimmy suppression device in an aircraft main landing gear

    Science.gov (United States)

    Li, Yuan; Jiang, Jason Zheng; Neild, Simon

    2016-09-01

    In earlier publications on landing gear shimmy analysis, efforts have concentrated on predicting the onset of shimmy instability and investigating how to stabilise shimmy-prone landing gears. Less attention has been given to improving the shimmy performance of a gear that is free from dynamic instability, which is the main interest of this work. We investigate the effectiveness of a linear passive mechanical device consisting of springs, dampers and inerters in suppressing landing gear shimmy oscillations. A linear model of a Fokker 100 main landing gear and two configurations of a candidate shimmy suppression device are presented. Considering the physical shimmy motions, a time-domain optimisation of the parameters of the shimmy suppression devices has been carried out, using the maximum amplitude of the gear torsional-yaw motion as the cost function. The performance advantage of a shimmy suppression device incorporating an inerter is demonstrated.
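
    The tuning idea in this record can be sketched numerically. The toy script below is not the authors' Fokker 100 model: the single-degree-of-freedom yaw dynamics, parameter values and bounds are invented for illustration. It minimises the peak torsional-yaw amplitude of a disturbed oscillator over the stiffness and damping of an added suppression device.

```python
"""Toy time-domain tuning of a shimmy-suppression spring/damper.

A minimal sketch of the cost-function idea only: the real study uses a
linear Fokker 100 gear model and devices that may include inerters.
All parameter values here are made up for illustration.
"""
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

I_YAW, K_TORS, C_TORS = 1.0, 80.0, 0.5   # toy gear inertia, stiffness, damping

def peak_yaw(params):
    k_dev, c_dev = params                 # candidate device stiffness/damping
    def rhs(t, y):
        theta, omega = y
        torque = -(K_TORS + k_dev) * theta - (C_TORS + c_dev) * omega
        return [omega, torque / I_YAW]
    sol = solve_ivp(rhs, (0.0, 5.0), [0.1, 0.0], max_step=0.01)  # 0.1 rad initial kick
    return np.max(np.abs(sol.y[0]))       # cost: maximum torsional-yaw amplitude

res = minimize(peak_yaw, x0=[10.0, 1.0], bounds=[(0.0, 200.0), (0.0, 50.0)],
               method="L-BFGS-B")
print("tuned (k_dev, c_dev):", res.x, "peak amplitude:", res.fun)
```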

  3. Review of local herbal compounds found in the Iranian traditional medicine known to optimise male fertility.

    Science.gov (United States)

    Nejatbakhsh, F; Shirbeigi, L; Rahimi, R; Abolhassani, H

    2016-10-01

    The male reproductive function can be influenced by many different factors, including genetic, environmental and socioeconomic parameters leading to a progressive decline. However, the cause of infertility cannot be found in a significant proportion of couples, and even with the presence of the sign of testicular dysfunction or obstructive azoospermia, the main aetiology is not identified. In the absence of knowledge about predisposing factor, targeted therapeutic modalities for male infertility may not be possible, and a wide variety of empiric drug approaches, even with low scientific evidence, have been utilised in current conventional medicine. According to the recently updated reports of the European Association of Urology guidelines on male infertility, the implication of previous recommendations and complementary alternative medicine based on the old literature has been suggested to improve a multifaceted integrative therapeutic approach for this disease. We have reviewed the potential herbal active compounds optimising male fertility, according to the principles of Iranian traditional medicine.

  4. Robust optimisation for self-scheduling and bidding strategies of hybrid CSP-fossil power plants

    DEFF Research Database (Denmark)

    Pousinho, H.M.I.; Contreras, J.; Pinson, P.

    2015-01-01

    This paper describes a profit-maximisation model for a hybrid concentrated solar power (CSP) producer participating in a day-ahead market with bilateral contracts, where there is no correlation between the electricity market price and the solar irradiation. Backup system coordination is included between the molten-salt thermal energy storage (TES) and a fossil-fuel backup to overcome solar irradiation insufficiency, but with emission allowances constrained in the backup system to mitigate carbon footprint. A robust optimisation-based approach is proposed to provide the day-ahead self-schedule under the worst-case realisation of uncertainties due to the electricity market prices and the thermal production from the solar field (SF). These uncertainties are modelled by asymmetric prediction intervals around average values. Additionally, a budget parameter is used to parameterise the degree...
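
    The budget-of-uncertainty idea mentioned above can be illustrated in a few lines of code. The sketch below uses invented numbers and evaluates a fixed candidate schedule rather than solving the authors' full CSP self-scheduling model: it computes the worst-case revenue when at most gamma hours of the day drop to the lower bound of their price interval.

```python
"""Budget-of-uncertainty sketch for day-ahead self-scheduling.

Illustrates only the robustness idea in the abstract: prices lie in
asymmetric intervals around a forecast, and a budget parameter gamma
caps how many hours may take their worst-case value. This is a toy
evaluation, not the authors' CSP + storage + backup optimisation model.
"""
import numpy as np

def worst_case_revenue(dispatch_mwh, price_forecast, down_dev, gamma):
    """Revenue when the gamma most damaging hours drop to their lower price bound."""
    nominal = dispatch_mwh * price_forecast
    impact = dispatch_mwh * down_dev            # revenue lost if that hour deviates
    worst_hours = np.argsort(impact)[::-1][:gamma]
    return nominal.sum() - impact[worst_hours].sum()

hours = 24
rng = np.random.default_rng(0)
dispatch = rng.uniform(0, 50, hours)            # candidate self-schedule [MWh]
forecast = rng.uniform(30, 70, hours)           # forecast prices [EUR/MWh]
down = rng.uniform(5, 20, hours)                # downward width of the price interval

for gamma in (0, 6, 12, 24):
    print(gamma, round(worst_case_revenue(dispatch, forecast, down, gamma), 1))
```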

  5. Use of ultrasonic baths for analytical applications: a new approach for optimisation conditions

    Directory of Open Access Journals (Sweden)

    Nascentes Clésia C.

    2001-01-01

    Optimisation conditions for obtaining maximum cavitation intensity in ultrasonic baths are proposed using a simple and fast method. Parameters such as water volume, temperature, detergent concentration, horizontal and vertical positions, number of tubes in the bath, sonication time and bath water substitution were studied. The results obtained for both baths studied (Neytech and Cole-Parmer) lead to the following conditions for maximum cavitation intensity: 1 L of water at room temperature, 0.2% (v/v) of detergent, central position on the bottom of the tank. Only one tube at a time should be used inside the bath during the ultrasound application. The cavitation intensity was linear with the sonication time up to 10 minutes, and water substitution during the sonication improved reproducibility. This system, using continuous water change, makes it possible to sonicate 6 consecutive samples without changes in the water volume.

  6. Ant colony optimisation for scheduling of flexible job shop with multi-resources requirements

    Directory of Open Access Journals (Sweden)

    Kalinowski Krzysztof

    2017-01-01

    The paper presents the application of an ant colony optimisation algorithm for scheduling multi-resource operations in flexible job shop production systems. Operations that require the participation of two or more resources are common in industrial practice, where planning involves not only machines but also additional resources (personnel, tools, etc.). The resource requirements of an operation are indicated indirectly by resource groups. The most important parameters of the resource model and of the resource groups are also described. The basic assumptions of the ant colony algorithm used for scheduling in the considered model with multi-resource operation requirements are discussed. The main result of the research is the schema of a metaheuristic that enables searching for best-scoring solutions in manufacturing systems satisfying the presented constraints.
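
    As a minimal illustration of the metaheuristic named in this record (not the paper's multi-resource model), the sketch below runs an ant-colony construction/update loop on a toy single-machine sequencing problem: ants build job orders guided by pheromone and a shortest-processing-time heuristic, and the best order found reinforces the trail.

```python
"""Minimal ant-colony construction/update loop for a toy sequencing problem."""
import random

random.seed(42)
proc = [4, 2, 7, 3, 5, 1]                     # toy processing times of 6 jobs
n, n_ants, n_iter, rho = len(proc), 10, 50, 0.1
tau = [[1.0] * n for _ in range(n)]           # pheromone tau[position][job]

def total_flow_time(seq):
    t = total = 0
    for j in seq:
        t += proc[j]
        total += t
    return total

best_seq, best_cost = None, float("inf")
for _ in range(n_iter):
    for _ in range(n_ants):
        remaining, seq = list(range(n)), []
        for pos in range(n):
            weights = [tau[pos][j] / proc[j] for j in remaining]  # pheromone x SPT heuristic
            j = random.choices(remaining, weights=weights)[0]
            seq.append(j)
            remaining.remove(j)
        cost = total_flow_time(seq)
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    for pos in range(n):                       # evaporation, then best-so-far reinforcement
        for j in range(n):
            tau[pos][j] *= 1.0 - rho
        tau[pos][best_seq[pos]] += 1.0 / best_cost

print("best sequence:", best_seq, "total flow time:", best_cost)
```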

  7. A TECHNICAL NOTE ON GRANULATION TECHNOLOGY: A WAY TO OPTIMISE GRANULES

    Directory of Open Access Journals (Sweden)

    Mahammed Athar A. Saikh

    2013-01-01

    This note provides an updated technical overview of granulation technology (GT), mostly of novel GT, to help researchers engaged in designing an efficient GT for obtaining granules with the desired features. Granules are the most widely used form in the production of pharmaceutical oral dosage forms. Advances in GT have revolutionised the field and resulted in the development of several processes, each with its own advantages, disadvantages and limitations. In-depth knowledge of GT is a prerequisite for obtaining the targeted granulation with the desired product parameters. To this end, updated literature was collected from databases, studied and presented for easy reference by scientists engaged in granule production, so that they can adopt an appropriate and suitable GT. This handy note will help researchers design a robust GT for obtaining optimised granules.

  8. Alginate microencapsulated hepatocytes optimised for transplantation in acute liver failure.

    Directory of Open Access Journals (Sweden)

    Suttiruk Jitraruch

    BACKGROUND AND AIM: Intraperitoneal transplantation of alginate-microencapsulated human hepatocytes is an attractive option for the management of acute liver failure (ALF), providing short-term support to allow native liver regeneration. The main aim of this study was to establish an optimised protocol for production of alginate-encapsulated human hepatocytes and evaluate their suitability for clinical use. METHODS: Human hepatocyte microbeads (HMBs) were prepared using sterile GMP grade materials. We determined physical stability, cell viability, and hepatocyte metabolic function of HMBs using different polymerisation times and cell densities. The immune activation of peripheral blood mononuclear cells (PBMCs) after co-culture with HMBs was studied. Rats with ALF induced by galactosamine were transplanted intraperitoneally with rat hepatocyte microbeads (RMBs) produced using a similar optimised protocol. Survival rate and biochemical profiles were determined. Retrieved microbeads were evaluated for morphology and functionality. RESULTS: The optimised HMBs were of uniform size (583.5±3.3 µm) and mechanically stable using a 15 min polymerisation time compared to 10 min and 20 min (p<0.001). 3D confocal microscopy images demonstrated that hepatocytes with similar cell viability were evenly distributed within HMBs. A cell density of 3.5×10⁶ cells/ml provided the highest viability. HMBs incubated in human ascitic fluid showed better cell viability and function than controls. There was no significant activation of PBMCs co-cultured with empty or hepatocyte microbeads, compared to PBMCs alone. Intraperitoneal transplantation of RMBs was safe and significantly improved the severity of liver damage compared to control groups (empty microbeads and medium alone; p<0.01). Retrieved RMBs were intact and free of immune cell adherence and contained viable hepatocytes with preserved function. CONCLUSION: An optimised protocol to produce GMP grade alginate

  9. Integrated planning tool for optimisation in municipal home care

    OpenAIRE

    Røhne, Mette; Sandåker, Torjus; Ausen, Dag; Grut, Lisbet

    2016-01-01

    Purpose: The objective is to improve collaboration and enhance the quality of care services in municipal home care services by implementing and developing an integrated planning tool making use of optimisation technology for better decision support. The project will, through piloting and action-based research, establish knowledge on changes in work processes to improve collaboration and efficiency. Context: A planning tool called Spider has been piloted in home care in Horten municipality since 201...

  10. Topology Optimisation for Energy Management in Underwater Sensor Networks

    Science.gov (United States)

    2015-01-01

    International Journal of Control, 2015, http://dx.doi.org/10.1080/00207179.2015.1017006. Only title-page text was captured for this record (author affiliations: Pennsylvania State University, University Park, PA; Naval Undersea Warfare Center, Newport, RI; United Technology Research Center, Cork, Ireland); no abstract is available.

  11. Optimising automation of a manual enzyme-linked immunosorbent assay

    OpenAIRE

    Corena de Beer; Monika Esser; Wolfgang Preiser

    2011-01-01

    Objective: Enzyme-linked immunosorbent assays (ELISAs) are widely used to quantify immunoglobulin levels induced by infection or vaccination. Compared to conventional manual assays, automated ELISA systems offer more accurate and reproducible results, faster turnaround times and cost effectiveness due to the use of multianalyte reagents. Design: The VaccZyme™ Human Anti-Haemophilus influenzae type B (Hib) kit (MK016) from The Binding Site Company was optimised to be used on an automated BioRad...

  12. Using Lean principles to optimise inpatient phlebotomy services.

    Science.gov (United States)

    Le, Rachel D; Melanson, Stacy E F; Santos, Katherine S; Paredes, Jose D; Baum, Jonathan M; Goonan, Ellen M; Torrence-Hill, Joi N; Gustafson, Michael L; Tanasijevic, Milenko J

    2014-08-01

    In the USA, inpatient phlebotomy services are under constant operational pressure to optimise workflow, improve timeliness of blood draws, and decrease error in the context of increasing patient volume and complexity of work. To date, the principles of Lean continuous process improvement have been rarely applied to inpatient phlebotomy. To optimise supply replenishment and cart standardisation, communication and workload management, blood draw process standardisation, and rounding schedules and assignments using Lean principles in inpatient phlebotomy services. We conducted four Lean process improvement events and implemented a number of interventions in inpatient phlebotomy over a 9-month period. We then assessed their impact using three primary metrics: (1) percentage of phlebotomists drawing their first patient by 05:30 for 05:00 rounds, (2) percentage of phlebotomists completing 08:00 rounds by 09:30, and (3) number of errors per 1000 draws. We saw marked increases in the percentage of phlebotomists drawing their first patient by 05:30, and the percentage of phlebotomists completing rounds by 09:30 postprocess improvement. A decrease in the number of errors per 1000 draws was also observed. This study illustrates how continuous process improvement through Lean can optimise workflow, improve timeliness, and decrease error in inpatient phlebotomy. We believe this manuscript adds to the field of clinical pathology as it can be used as a guide for other laboratories with similar goals of optimising workflow, improving timeliness, and decreasing error, providing examples of interventions and metrics that can be tailored to specific laboratories with particular services and resources.

  14. Optimising the Target and Capture Sections of the Neutrino Factory

    CERN Document Server

    Hansen, Ole Martin; Stapnes, Steinar

    The Neutrino Factory is designed to produce an intense high energy neutrino beam from stored muons. The majority of the muons are obtained from the decay of pions, produced by a proton beam impinging on a free-flowing mercury-jet target and captured by a high magnetic field. It is important to capture a large fraction of the produced pions to maximise the intensity of the neutrino beam. Various optimisation studies have been performed with the aim of maximising the muon influx to the accelerator and thus the neutrino beam intensity. The optimisation studies were performed with the use of Monte Carlo simulation tools. The production of secondary particles, by interactions between the incoming proton beam and the mercury target, was optimised by varying the proton beam impact position and impact angles on the target. The proton beam and target interaction region was studied and shown to be off the central axis of the capture section in the baseline configuration. The off-centred interaction region resulted in ...

  15. Direct and Indirect Gradient Control for Static Optimisation

    Institute of Scientific and Technical Information of China (English)

    Yi Cao

    2005-01-01

    Static "self-optimising" control is an important concept, which provides a link between static optimisation and control[1]. According to the concept, a dynamic control system could be configured in such a way that when a set of certain variables are maintained at their setpoints, the overall process operation is automatically optimal or near optimal at steadystate in the presence of disturbances. A novel approach using constrained gradient control to achieve "self-optimisation" has been proposed by Cao[2]. However, for most process plants, the information required to get the gradient measure may not be available in real-time. In such cases, controlled variable selection has to be carried out based on measurable candidates. In this work, the idea of direct gradient control has been extended to controlled variable selection based on gradient sensitivity analysis (indirect gradient control). New criteria, which indicate the sensitivity of the gradient function to disturbances and implementation errors, have been derived for selection. The particular case study shows that the controlled variables selected by gradient sensitivity measures are able to achieve near optimal performance.

  16. Optimisation of Lilla Edet Landslide GPS Monitoring Network

    Science.gov (United States)

    Alizadeh-Khameneh, M. A.; Eshagh, M.; Sjöberg, L. E.

    2015-06-01

    Since the year 2000, periodic investigations have been performed in the Lilla Edet region to monitor and possibly determine the landslide of the area with GPS measurements. The responsible consultant has conducted this project by setting up stable stations for GPS receivers in the risky areas of Lilla Edet and measuring the independent baselines amongst the stations according to their observation plan. Here, we optimise the existing surveying network and determine the optimal configuration of the observation plan based on different criteria. We aim to optimise the current network so that it is sensitive enough to detect possible displacements of 5 mm at each network point. The network quality criteria of precision, reliability and cost are used as objective functions to perform single-, bi- and multi-objective optimisation models. The results show that the single-objective model of reliability, which is constrained by the precision criterion, provides much higher precision than the defined criterion while preserving almost all of the observations. However, in this study, the multi-objective model can fulfil all the mentioned quality criteria of the network with 17% fewer measurements than the original observation plan, meaning a 17% saving of time, cost and effort in the project.

  17. Module detection in complex networks using integer optimisation

    Directory of Open Access Journals (Sweden)

    Tsoka Sophia

    2010-11-01

    Background: The detection of modules or community structure is widely used to reveal the underlying properties of complex networks in biology, as well as in the physical and social sciences. Since the adoption of modularity as a measure of network topological properties, several methodologies for the discovery of community structure based on modularity maximisation have been developed. However, satisfactory partitions of large graphs with modest computational resources are particularly challenging due to the NP-hard nature of the related optimisation problem. Furthermore, it has been suggested that optimising the modularity metric can reach a resolution limit whereby the algorithm fails to detect communities smaller than a specific size in large networks. Results: We present a novel solution approach to identify community structure in large complex networks and address resolution limitations in module detection. The proposed algorithm employs modularity to express network community structure and is based on mixed integer optimisation models. The solution procedure is extended through an iterative procedure to diminish effects that tend to agglomerate smaller modules (resolution limitations). Conclusions: A comprehensive comparative analysis of methodologies for module detection based on modularity maximisation shows that our approach outperforms previously reported methods. Furthermore, in contrast to previous reports, we propose a strategy to handle resolution limitations in modularity maximisation. Overall, we illustrate ways to improve existing methodologies for community structure identification so as to increase their efficiency and applicability.
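
    The paper formulates modularity maximisation as a mixed-integer programme; as a lightweight illustration of the underlying objective (not the authors' algorithm), the snippet below partitions a benchmark graph with networkx's greedy modularity heuristic and reports the modularity score Q of the resulting partition.

```python
"""Modularity-based community detection on a small benchmark graph."""
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()                      # classic benchmark network
parts = greedy_modularity_communities(G)        # heuristic stand-in for the paper's MIP
print("number of modules:", len(parts))
print("modularity Q:", round(modularity(G, parts), 3))
```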

  18. Customisable 3D printed microfluidics for integrated analysis and optimisation.

    Science.gov (United States)

    Monaghan, T; Harding, M J; Harris, R A; Friel, R J; Christie, S D R

    2016-08-16

    The formation of smart Lab-on-a-Chip (LOC) devices featuring integrated sensing optics is currently hindered by convoluted and expensive manufacturing procedures. In this work, a series of 3D-printed LOC devices were designed and manufactured via stereolithography (SL) in a matter of hours. The spectroscopic performance of a variety of optical fibre combinations was tested, and the optimum path length for performing ultraviolet-visible (UV-vis) spectroscopy was determined. The information gained in these trials was then used in a reaction optimisation for the formation of carvone semicarbazone. The production of high-resolution surface channels (100-500 μm) means that these devices are capable of handling a wide range of concentrations (9 μM-38 mM) and are ideally suited to both analyte detection and process optimisation. The ability to tailor the chip design and its integrated features to the reaction being assessed, at such a low time and cost penalty, greatly increases the user's ability to optimise both the device and the reaction. As a result of the information gained in this investigation, we are able to report the first instance of a 3D-printed LOC device with fully integrated, in-line monitoring capabilities via the use of embedded optical fibres capable of performing UV-vis spectroscopy directly inside microchannels.

  19. MOPTOP: a multi-colour optimised optical polarimeter

    Science.gov (United States)

    Jermak, Helen; Steele, Iain A.; Smith, Robert J.

    2016-08-01

    We present the design and science case for the Liverpool Telescope's fourth-generation polarimeter, MOPTOP: a Multicolour OPTimised Optical Polarimeter, which is optimised for sensitivity and bi-colour observations. We introduce an optimised polarimeter which is as far as possible limited only by the photon counting efficiency of the detectors. Using a combination of CMOS cameras, a continuously rotating half-wave plate and a wire grid polarising beamsplitter, we predict we can accurately measure the polarisation of sources to 1% at 19th magnitude in 10 minutes on a 2 metre telescope. For brighter sources we anticipate much lower systematics. The design also gives the ability to measure polarisation and photometric variability on timescales as short as a few seconds. Overall the instrument will allow accurate measurements of the intra-nightly variability of the polarisation of sources such as gamma-ray bursts and blazars (AGN orientated with the jet pointing toward the observer), allowing magnetic field models to be constrained and revealing more information about the formation, ejection and collimation of jets.

  20. Crystal structure optimisation using an auxiliary equation of state.

    Science.gov (United States)

    Jackson, Adam J; Skelton, Jonathan M; Hendon, Christopher H; Butler, Keith T; Walsh, Aron

    2015-11-14

    Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy-volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other "beyond" density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.
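
    The core idea of this record, predicting the equilibrium volume from a known equation of state fitted to a handful of single-point energies, can be sketched as follows. The (V, E) samples below are invented, and a third-order Birch-Murnaghan form is assumed rather than taken from the paper.

```python
"""Equation-of-state fit to energy-volume points (illustrative only)."""
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, Bp):
    # Third-order Birch-Murnaghan energy-volume relation
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * ((eta - 1.0) ** 3 * Bp
                                        + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta))

# Hypothetical energy-volume samples (arbitrary units), e.g. from single-point runs
V = np.array([36.0, 38.0, 40.0, 42.0, 44.0, 46.0])
E = np.array([-10.10, -10.22, -10.27, -10.26, -10.20, -10.11])

popt, _ = curve_fit(birch_murnaghan, V, E, p0=[E.min(), V[np.argmin(E)], 0.5, 4.0])
E0, V0, B0, Bp = popt
print(f"predicted equilibrium volume V0 = {V0:.2f}, bulk modulus B0 = {B0:.3f}")
```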

  1. Crystal structure optimisation using an auxiliary equation of state

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T. [Centre for Sustainable Chemical Technologies and Department of Chemistry, University of Bath, Claverton Down, Bath BA2 7AY (United Kingdom); Walsh, Aron, E-mail: a.walsh@bath.ac.uk [Centre for Sustainable Chemical Technologies and Department of Chemistry, University of Bath, Claverton Down, Bath BA2 7AY (United Kingdom); Global E³ Institute and Department of Materials Science and Engineering, Yonsei University, Seoul 120-749 (Korea, Republic of)]

    2015-11-14

    Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy–volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other “beyond” density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu₂ZnSnS₄ and the magnetic metal-organic framework HKUST-1.

  2. Honeybee economics: optimisation of foraging in a variable world.

    Science.gov (United States)

    Stabentheiner, Anton; Kovac, Helmut

    2016-06-20

    In honeybees, fast and efficient exploitation of nectar and pollen sources is achieved by persistent endothermy throughout the foraging cycle, which entails extremely high energy costs. The need for food promotes maximisation of the intake rate, and the high costs call for energetic optimisation. Experiments on how honeybees resolve this conflict have to consider that foraging takes place in a variable environment with respect to microclimate and food quality and availability. Here we report, from simultaneous measurements of energy costs, gains, intake rate and efficiency, how honeybee foragers manage this challenge in their highly variable environment. If possible, during unlimited sucrose flow, they follow an 'investment-guided' ('time is honey') economic strategy promising increased returns. They maximise the net intake rate by investing both their own heat production and solar heat to increase body temperature to a level which guarantees a high suction velocity. They switch to an 'economising' ('save the honey') optimisation of energetic efficiency if the intake rate is restricted by the food source, when an increased body temperature would not guarantee a high intake rate. With this flexible and graded change between economic strategies, honeybees can both maximise the colony intake rate and optimise foraging efficiency in reaction to environmental variation.

  3. Software Toolbox Development for Rapid Earthquake Source Optimisation Combining InSAR Data and Seismic Waveforms

    Science.gov (United States)

    Isken, Marius P.; Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Bathke, Hannes M.

    2017-04-01

    We present a modular open-source software framework (pyrocko, kite, grond; http://pyrocko.org) for rapid InSAR data post-processing and modelling of tectonic and volcanic displacement fields derived from satellite data. Our aim is to ease and streamline the joint optimisation of earthquake observations from InSAR and GPS data together with seismological waveforms for an improved estimation of the ruptures' parameters. Through this approach we can provide finite models of earthquake ruptures and therefore contribute to a timely and better understanding of earthquake kinematics. The new kite module enables fast processing of unwrapped InSAR scenes for source modelling: the spatial sub-sampling and data error/noise estimation for the interferogram are evaluated automatically and interactively. The rupture's near-field surface displacement data are then combined with seismic far-field waveforms and jointly modelled using the pyrocko.gf framework, which allows for fast forward modelling based on pre-calculated elastodynamic and elastostatic Green's functions. Lastly, the grond module supplies a bootstrap-based probabilistic (Monte Carlo) joint optimisation to estimate the parameters and uncertainties of a finite-source earthquake rupture model. We describe the developed and applied methods as an effort to establish a semi-automatic processing and modelling chain. The framework is applied to Sentinel-1 data from the 2016 Central Italy earthquake sequence, where we present the earthquake mechanism and rupture model from which we derive regions of increased Coulomb stress. The open-source software framework is developed at GFZ Potsdam and at the University of Kiel, Germany; it is written in the Python and C programming languages. The toolbox architecture is modular and independent, and can be utilised flexibly for a variety of geophysical problems. This work is conducted within the BridGeS project (http://www.bridges.uni-kiel.de) funded by the German Research Foundation DFG

  4. Optimisation of hydrocortisone production by Curvularia lunata.

    Science.gov (United States)

    Lu, Wenyu; Du, Lianxiang; Wang, Min; Jia, Xiaoqiang; Wen, Jianping; Huang, Yuping; Guo, Yawen; Gong, Wei; Bao, Huike; Yang, Jing; Sun, Bing

    2007-07-01

    A new method for breeding the hydrocortisone-overproducing strain Curvularia lunata by screening for ketoconazole-resistant mutants was developed. A hydrocortisone-overproducing mutant, C. lunata KA-91, carrying a ketoconazole-resistance marker was obtained from protoplasts treated with ultraviolet radiation. The hydrocortisone conversion rate of C. lunata KA-91 was increased by 42.1% compared to the original strain CL-114 at a substrate (17alpha-hydroxypregn-4-en-3,20-dione-21-acetate) addition concentration of 1.0 g/L. The by-products produced by KA-91 were fewer than those of the original strain. It was assumed that the higher cytochrome P450 content of the ketoconazole-resistant mutant resulted in the increased 11beta-hydroxylation capacity. The culture conditions for the biotransformation of 17alpha-hydroxypregn-4-en-3,20-dione-21-acetate to hydrocortisone were optimised by response surface methodology. A Plackett-Burman design was applied to elucidate the key factors affecting hydrocortisone production, and the results indicated that glucose, initial pH, and the glucose to total nitrogen sources ratio (omega) had significant effects on hydrocortisone production. A Box-Behnken design was employed to search for the optimal settings of those three key factors. According to the model, a confirmation trial at the optimal conditions showed a high hydrocortisone conversion rate of 82.67%.
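
    The response-surface step named in this record can be illustrated with a small script. The coded Box-Behnken design below follows the standard three-factor layout, but the response values are invented; it fits a full quadratic model by least squares and locates the predicted optimum inside the coded region.

```python
"""Response-surface sketch: quadratic model fit to a coded three-factor design.

The design points and conversion values below are invented; in the study the
real factors were glucose, initial pH and the glucose-to-nitrogen ratio.
"""
import itertools
import numpy as np

# Coded Box-Behnken design for 3 factors (plus centre point) and fake responses
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1], [0, 0, 0]], dtype=float)
y = np.array([61, 70, 64, 75, 63, 72, 66, 74, 65, 69, 67, 73, 80], dtype=float)

def quad_terms(x):
    a, b, c = x
    return [1, a, b, c, a*b, a*c, b*c, a*a, b*b, c*c]

A = np.array([quad_terms(row) for row in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)          # least-squares quadratic fit

# Locate the predicted optimum on a coarse grid inside the coded region [-1, 1]^3
grid = np.linspace(-1, 1, 21)
best = max(itertools.product(grid, repeat=3), key=lambda x: np.dot(quad_terms(x), coef))
print("predicted optimum (coded levels):", best,
      "predicted conversion:", round(float(np.dot(quad_terms(best), coef)), 1))
```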

  5. FEL gain optimisation and spontaneous radiation

    Energy Technology Data Exchange (ETDEWEB)

    Bali, L.M.; Srivastava, A.; Pandya, T.P. [Lucknow Univ. (India)] [and others]

    1995-12-31

    Colson has evaluated FEL gains for small deviations from perfect electron beam injection, with radiation of the same polarisation as that of the wiggler fields. We find that for optimum gain the polarisation of the optical field should be the same as that of the spontaneous emission under these conditions. With a helical wiggler, the axial oscillations resulting from small departures from perfect electron beam injection lead to injection-dependent unequal amplitudes and phases of the spontaneous radiation in the two transverse directions. Viewed along the axis, the spontaneous emission is therefore elliptically polarised. The azimuth of the ellipse varies with the phase difference of the two transverse components of spontaneous emission, but the eccentricity remains the same. With planar wigglers the spontaneous emission viewed in the axial direction is linearly polarised, again with an injection-dependent azimuth. For optimum coherent gain of a radiation field, its polarisation characteristics must be the same as those of the spontaneous radiation with both types of wiggler. Thus, with a helical wiggler and the data reported earlier, an increase of 10% in the FEL gain at the fundamental frequency and of 11% at the fifth harmonic has been calculated in the small-gain-per-pass limit. Larger enhancements in gain may result from more favourable values of the input parameters.

  6. hydroPSO: A Versatile Particle Swarm Optimisation R Package for Calibration of Environmental Models

    Science.gov (United States)

    Zambrano-Bigiarini, M.; Rojas, R.

    2012-04-01

    Optimisation (IPSO), Fully Informed Particle Swarm (FIPS), and weighted FIPS (wFIPS). Finally, an advanced sensitivity analysis using the Latin Hypercube One-At-a-Time (LH-OAT) method and user-friendly plotting summaries facilitate the interpretation and assessment of the calibration/optimisation results. We validate hydroPSO against the standard PSO algorithm (SPSO-2007) employing five test functions commonly used to assess the performance of optimisation algorithms. Additionally, we illustrate how the performance of the optimization/calibration engine is boosted by using several of the fine-tune options included in hydroPSO. Finally, we show how to interface SWAT-2005 with hydroPSO to calibrate a semi-distributed hydrological model for the Ega River basin in Spain, and how to interface MODFLOW-2000 and hydroPSO to calibrate a groundwater flow model for the regional aquifer of the Pampa del Tamarugal in Chile. We limit the applications of hydroPSO to study cases dealing with surface water and groundwater models as these two are the authors' areas of expertise. However, based on the flexibility of hydroPSO we believe this package can be implemented to any model code requiring some form of parameter estimation.
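
    For readers unfamiliar with the underlying algorithm, the sketch below shows a bare-bones particle swarm update loop in Python applied to a toy two-parameter calibration problem; hydroPSO itself is an R package, and none of its actual functions or options are reproduced here.

```python
"""Bare-bones particle swarm optimiser (illustrative, not the hydroPSO R API)."""
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))      # particle positions
    v = np.zeros_like(x)                                   # particle velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy "calibration": recover two parameters of a quadratic misfit surface
best, val = pso(lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2, [(-5, 5), (-5, 5)])
print("best parameters:", best, "misfit:", val)
```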

  7. Techno-economic optimisation of energy systems; Contribution a l'optimisation technico-economique de systemes energetiques

    Energy Technology Data Exchange (ETDEWEB)

    Mansilla Pellen, Ch

    2006-07-15

    The traditional approach currently used to assess the economic interest of energy systems is based on a defined flow-sheet. Some studies have shown that the flow-sheets corresponding to the best thermodynamic efficiencies do not necessarily lead to the best production costs. A method called techno-economic optimisation was therefore proposed. This method aims at minimising the production cost of a given energy system, including both investment and operating costs. It was implemented using genetic algorithms. This approach was compared to the heat integration method on two different examples, thus confirming its interest. Techno-economic optimisation was then applied to different energy systems dealing with hydrogen as well as electricity production. (author)
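
    The genetic-algorithm machinery referred to above can be sketched as follows; the cost model (a made-up sum of annualised investment and operating cost in two design variables) and all numerical settings are illustrative assumptions, not the author's flow-sheet model.

```python
"""Tiny real-coded genetic algorithm minimising a made-up production cost."""
import numpy as np

rng = np.random.default_rng(1)

def production_cost(x):
    size, temperature = x
    investment = 50.0 + 8.0 * size ** 0.7                  # economies of scale (fake)
    operating = 200.0 / size + 0.02 * (temperature - 350.0) ** 2
    return investment + operating

def ga(cost, bounds, pop_size=40, gens=80, mut=0.1):
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, (pop_size, len(bounds)))
    for _ in range(gens):
        fit = np.array([cost(ind) for ind in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]    # truncation selection
        kids = []
        while len(kids) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            alpha = rng.random()
            child = alpha * a + (1 - alpha) * b            # blend crossover
            child += rng.normal(0.0, mut * (hi - lo))      # Gaussian mutation
            kids.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, *kids])
    fit = np.array([cost(ind) for ind in pop])
    return pop[fit.argmin()], fit.min()

best, best_cost = ga(production_cost, [(1.0, 30.0), (300.0, 450.0)])
print("best design:", best, "cost:", round(best_cost, 2))
```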

  8. Optimising social information by game theory and ant colony method to enhance routing protocol in opportunistic networks

    Directory of Open Access Journals (Sweden)

    Chander Prabha

    2016-09-01

    Data loss and disconnection of nodes are frequent in opportunistic networks. Social information plays an important role in reducing data loss because it depends on the connectivity of nodes. The appropriate selection of the next hop based on social information is critical for improving routing performance in opportunistic networks. The frequent disconnection problem is overcome by optimising the social information with an Ant Colony Optimization method that depends on the topology of the opportunistic network. The proposed protocol is examined thoroughly via analysis and simulation in order to assess its performance in comparison with other social-based routing protocols in opportunistic networks under various parameter settings.

  9. Synthesis of minimal-size ZnO nanoparticles through sol-gel method: Taguchi design optimisation

    OpenAIRE

    2015-01-01

    Zinc oxide (ZnO) has excellent potential to be used in water and wastewater treatment, either as a photocatalyst or in membrane incorporation. In this work, the synthesis of smaller ZnO NPs through a sol-gel approach was enhanced by applying Taguchi design. Recent work on the synthesis of ZnO NPs was optimised to ensure relatively smaller sized particles were obtained. Several parameters of the synthesis process, such as molar ratio of starting materials, molar concentration and calcination t...

  10. Shape optimisation and performance analysis of flapping wings

    KAUST Repository

    Ghommem, Mehdi

    2012-09-04

    In this paper, shape optimisation of flapping wings in forward flight is considered. This analysis is performed by combining a local gradient-based optimizer with the unsteady vortex lattice method (UVLM). Although the UVLM applies only to incompressible, inviscid flows where the separation lines are known a priori, Persson et al. [1] showed through a detailed comparison between UVLM and higher-fidelity computational fluid dynamics methods for flapping flight that the UVLM schemes produce accurate results for attached flow cases and even remain trend-relevant in the presence of flow separation. As such, they recommended the use of an aerodynamic model based on UVLM to perform preliminary design studies of flapping wing vehicles. Unlike standard computational fluid dynamics schemes, this method requires meshing of the wing surface only and not of the whole flow domain [2]. From the design or optimisation perspective taken in our work, it is fairly common (and sometimes entirely necessary, as a result of the excessive computational cost of the highest fidelity tools such as Navier-Stokes solvers) to rely upon such a moderate level of modelling fidelity to traverse the design space in an economical manner. The objective of the work, described in this paper, is to identify a set of optimised shapes that maximise the propulsive efficiency, defined as the ratio of the propulsive power over the aerodynamic power, under lift, thrust, and area constraints. The shape of the wings is modelled using B-splines, a technology used in the computer-aided design (CAD) field for decades. This basis can be used to smoothly discretize wing shapes with few degrees of freedom, referred to as control points. The locations of the control points constitute the design variables. The results suggest that changing the shape yields significant improvement in the performance of the flapping wings. The optimisation pushes the design to "bird-like" shapes with substantial increase in the time
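
    The B-spline parameterisation described in this record can be illustrated with a short script. The sketch below uses arbitrary numbers and a one-dimensional chord distribution only (not the paper's wing model): it builds a clamped cubic B-spline from a few control points, which would serve as the design variables of the shape optimisation.

```python
"""B-spline parameterisation of a wing planform (sketch, not the paper's code)."""
import numpy as np
from scipy.interpolate import BSpline

degree = 3
control_points = np.array([1.00, 0.95, 0.80, 0.55, 0.25])   # chord at 5 control stations
n = len(control_points)

# Clamped knot vector so the curve starts/ends at the first/last control point
knots = np.concatenate([np.zeros(degree + 1),
                        np.linspace(0, 1, n - degree + 1)[1:-1],
                        np.ones(degree + 1)])
chord = BSpline(knots, control_points, degree)

span = np.linspace(0.0, 1.0, 11)                             # normalised spanwise stations
print("chord distribution:", np.round(chord(span), 3))
print("mean chord:", round(float(chord(span).mean()), 3))
```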

  11. The specific uptake size index for quantifying radiopharmaceutical uptake

    Energy Technology Data Exchange (ETDEWEB)

    Fleming, John S [Department of Medical Physics and Bioengineering, Southampton University Hospitals NHS Trust, Southampton (United Kingdom); Bolt, Livia [Department of Medical Physics and Bioengineering, Southampton University Hospitals NHS Trust, Southampton (United Kingdom); Stratford, Jennifer S [Department of Medical Physics and Bioengineering, Southampton University Hospitals NHS Trust, Southampton (United Kingdom); Kemp, Paul M [Department of Nuclear Medicine, Southampton University Hospitals NHS Trust, Southampton (United Kingdom)

    2004-07-21

    Quantitative indices of radionuclide uptake in an object of interest provide a useful adjunct to qualitative interpretation in the diagnostic application of radionuclide imaging. This note describes a new measure of the total uptake of an organ, the specific uptake size index (SUSI). It can either be related in absolute terms to the total activity injected or to the specific activity in a reference region. As it depends on the total activity in the object, the value obtained will not depend on the resolution of the imaging process, unlike some other similar quantitative indices. This has been demonstrated in an experiment using simulated images. The application of the index to the quantification of dopamine receptor SPECT imaging and parathyroid-thyroid subtraction planar scintigraphy is described. The index is considered to be of potential value in reducing variation in the quantitative assessment of uptake in objects, with applications in all areas of radionuclide imaging.

  12. Cellular uptake of metallated cobalamins

    DEFF Research Database (Denmark)

    Tran, Mai Thanh Quynh; Stürup, Stefan; Lambert, Ian Henry

    2016-01-01

    Cellular uptake of vitamin B12-cisplatin conjugates was estimated via detection of their metal constituents (Co, Pt, and Re) by inductively coupled plasma mass spectrometry (ICP-MS). Vitamin B12 (cyano-cob(iii)alamin) and aquo-cob(iii)alamin [Cbl-OH2](+), which differ in the β-axial ligands (CN(-) and H2O, respectively), were included as control samples. The results indicated that the B12 derivatives delivered cisplatin to both the cellular cytosol and the nuclei with an efficiency of one third of that of free cisplatin cis-[Pt(II)Cl2(NH3)2]. In addition, uptake of charged B12 derivatives...

  13. Optimisation of cultivation parameters in photobioreactors for microalgae cultivation using the A-stat technique

    NARCIS (Netherlands)

    Barbosa, M.J.; Hoogakker, J.; Wijffels, R.H.

    2003-01-01

    Light availability inside the reactor is often the bottleneck in microalgal cultivation and for this reason much attention is being given to light limited growth kinetics of microalgae, aiming at the increase of productivity in photobioreactors. Steady-state culture characteristics are commonly used

  14. Self-optimisation of admission control and handover parameters in LTE

    NARCIS (Netherlands)

    Sas, B.; Spaey, K.; Balan, I.; Zetterberg, K.; Litjens, R.

    2011-01-01

    In mobile cellular networks the handover (HO) algorithm is responsible for determining when calls of users that are moving from one cell to another are handed over from the former to the latter. The admission control (AC) algorithm, which is the algorithm that decides whether new (fresh or HO) calls

  15. Parameter Optimisation of Stress-strain Constitutive Equations Using Genetic Algorithms

    Institute of Scientific and Technical Information of China (English)

    Y. Y. Yang; M. Mahfouf; D.A.Linkens

    2003-01-01

    The accuracy of numerical simulations and many other material design calculations, such as the rolling force, rolling torque, etc., depends on the description of the stress-strain relationship of the deformed materials. One common method of describing the stres

  16. Software tools overview : process integration, modelling and optimisation for energy saving and pollution reduction

    OpenAIRE

    Lam, Hon Loong; Klemeš, Jiri; Kravanja, Zdravko; Varbanov, Petar

    2012-01-01

    This paper provides an overview of software tools based on long experience and applications in the area of process integration, modelling and optimisation. The first part reviews the current design practice and the development of supporting software tools. Those are categorised as: (1) process integration and retrofit analysis tools, (2) general mathematical modelling suites with optimisation libraries, (3) flowsheeting simulation and (4) graph-based process optimisation tools. The second part...

  17. Stacking sequence optimisation of composite panels subjected to slamming impact loads using a genetic algorithm

    OpenAIRE

    Khedmati,Mohammad Reza; Sangtabi,Mohammad Rezai; Fakoori,Mehdi

    2013-01-01

    Optimisation of stacking sequence for composite panels under slamming impact loads using a genetic algorithm method is studied in this paper. For this purpose, slamming load is assumed to have a uniform distribution with a triangular-pulse type of intensity function. In order to perform optimisation based on a genetic algorithm, a special code is written in MATLAB software environment. The optimiser is coupled with the commercial software ANSYS in order to analyse the composite panel under st...

  18. Water uptake and water supply

    NARCIS (Netherlands)

    Sonneveld, C.; Voogt, W.

    2009-01-01

    The water uptake and the water supply do not directly affect the mineral absorption of plants. However, many connections exist between the management of minerals and water. The most evident of these connections are the following

  19. Octreotide Uptake in Parathyroid Adenoma

    Science.gov (United States)

    Karaçavuş, Seyhan; Kula, Mustafa; Cihan Karaca, Züleyha; Ünlühızarcı, Kürşad; Tutuş, Ahmet; Bayram, Fahri; Çoban, Ganime

    2012-01-01

    The patient, with a history of bone pain and muscle weakness, was thought to have oncogenic osteomalacia as a result of biochemical investigations and was directed to the Nuclear Medicine Department for whole-body bone scintigraphy and 111In-octreotide scintigraphy. There was no focal pathologic tracer uptake, but a generalised marked increase in skeletal uptake on bone scintigraphy. Octreotide scintigraphy showed accumulation of octreotide in the region of the left lobe of the thyroid gland in the neck. Thereafter, parathyroid scintigraphy was performed with technetium-99m labeled methoxy-isobutyl-isonitrile (99mTc-MIBI), and the MIBI scan demonstrated radiotracer uptake at the same location as the octreotide scintigraphy. The patient underwent left inferior parathyroidectomy and histopathology confirmed a parathyroid adenoma. Somatostatin receptor-positive parathyroid adenomas may show octreotide uptake. Octreotide scintigraphy may therefore be promising and indicates the possibility of using somatostatin analogues for the medical treatment of somatostatin receptor-positive parathyroid adenomas.

  20. A novel kernel extreme learning machine algorithm based on self-adaptive artificial bee colony optimisation strategy

    Science.gov (United States)

    Ma, Chao; Ouyang, Jihong; Chen, Hui-Ling; Ji, Jin-Chao

    2016-04-01

    In this paper, we propose a novel learning algorithm, named SABC-MKELM, based on the kernel extreme learning machine (KELM) method for single-hidden-layer feedforward networks. In SABC-MKELM, a combination of Gaussian kernels is used as the activation function of KELM instead of simple fixed-kernel learning, and the kernel parameters and kernel weights are optimised simultaneously by a novel self-adaptive artificial bee colony (SABC) approach. SABC-MKELM outperforms six other state-of-the-art approaches in general, as SABC can effectively determine solution-updating strategies and suitable parameters to produce a flexible kernel function. Simulations have demonstrated that the proposed algorithm not only self-adaptively determines suitable parameters and solution-updating strategies by learning from previous experience, but also achieves better generalisation performance than several related methods; the results also show good stability of the proposed algorithm.
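
    To make the base learner concrete, the sketch below implements a plain kernel extreme learning machine with a single Gaussian kernel on synthetic data; the multi-kernel combination and the self-adaptive bee-colony tuning of the kernel parameters described in the record are omitted.

```python
"""Kernel extreme learning machine with a Gaussian kernel (plain version)."""
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, y, gamma=1.0, C=10.0):
    K = rbf_kernel(X, X, gamma)
    beta = np.linalg.solve(K + np.eye(len(X)) / C, y)   # output weights
    return beta

def kelm_predict(X_train, beta, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ beta

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (40, 1))                          # synthetic training inputs
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(40)     # noisy targets

beta = kelm_fit(X, y, gamma=0.5, C=100.0)
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print("predictions:", np.round(kelm_predict(X, beta, X_test, gamma=0.5), 3))
print("targets    :", np.round(np.sin(X_test[:, 0]), 3))
```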

  1. An Inverse Robust Optimisation Approach for a Class of Vehicle Routing Problems under Uncertainty

    Directory of Open Access Journals (Sweden)

    Liang Sun

    2016-01-01

    There is a trade-off between the total penalty paid to customers (TPC) and the total transportation cost (TTC) in depot for vehicle routing problems under uncertainty (VRPU). The trade-off refers to the fact that the TTC in depot inevitably increases when the TPC decreases and vice versa. With respect to this issue, the vehicle routing problem (VRP) with uncertain customer demand and travel time was studied to optimise the TPC and the TTC in depot. In addition, an inverse robust optimisation approach was proposed to solve this kind of VRPU by combining the ideas of inverse optimisation and robust optimisation so as to improve both the TPC and the TTC in depot. The method aimed to improve the corresponding TTC of the robust optimisation solution under the minimum TPC through minimising the adjustment of benchmark road transportation cost. According to the characteristics of the inverse robust optimisation model, a genetic algorithm (GA) and column generation algorithm are combined to solve the problem. Moreover, 39 test problems are solved by using an inverse robust optimisation approach: the results show that both the TPC and TTC obtained by using the inverse robust optimisation approach are less than those calculated using a robust optimisation approach.

  2. Three-dimensional modelling and numerical optimisation of the W7-X ICRH antenna

    Energy Technology Data Exchange (ETDEWEB)

    Louche, F., E-mail: fabrice.louche@rma.ac.be [Laboratoire de physique des plasmas de l’ERM, Laboratorium voor plasmafysica van de KMS (LPP-ERM/KMS), Ecole Royale Militaire, Koninklijke Militaire School, Brussels (Belgium); Křivská, A.; Messiaen, A.; Ongena, J. [Laboratoire de physique des plasmas de l’ERM, Laboratorium voor plasmafysica van de KMS (LPP-ERM/KMS), Ecole Royale Militaire, Koninklijke Militaire School, Brussels (Belgium); Borsuk, V. [Institute of Energy and Climate Research – Plasma Physics, Forschungszentrum Juelich (Germany); Durodié, F.; Schweer, B. [Laboratoire de physique des plasmas de l’ERM, Laboratorium voor plasmafysica van de KMS (LPP-ERM/KMS), Ecole Royale Militaire, Koninklijke Militaire School, Brussels (Belgium)

    2015-10-15

    Highlights:
    • A simplified version of the ICRF antenna for the stellarator W7-X has been modelled with the 3D electromagnetic software Microwave Studio. This antenna can be tuned between 25 and 38 MHz with the help of adjustable capacitors.
    • In previous modelling work the front of the antenna was modelled with 3D codes, while the capacitors were modelled as lumped elements with a given DC capacitance. As this approach does not take into account the effect of the internal inductance, an MWS model of these capacitors has been developed.
    • The initial geometry does not permit operation at 38 MHz. By modifying some geometrical parameters of the front face, it was possible to increase the frequency band of the antenna and to increase (by up to 25%) the maximum coupled power while accounting for the technical constraints on the capacitors.
    • The W7-X ICRH antenna must be operated at 25 and 38 MHz, and for various toroidal phasings of the strap RF currents. Given the considered duty cycle, it is shown that, thanks to a special procedure based on minimisation techniques, it is possible to define a satisfactory optimum geometry in agreement with the specifications of the capacitors.
    • The various steps of the optimisation are validated with TOPICA simulations. For a given density profile the expected coupled RF power can be precisely computed.
    Abstract: Ion Cyclotron Resonance Heating (ICRH) is a promising heating and wall conditioning method considered for the W7-X stellarator, and a dedicated ICRH antenna has been designed. This antenna must perform several tasks in a long-term physics programme: fast-particle generation, heating at high densities, current drive and ICRH physics studies. Various minority heating scenarios are considered and two frequency bands will be used. In the present work a design for the low frequency range (25–38 MHz) only is developed. The antenna is made of 2 straps with tap feeds and tuning capacitors with DC capacitance in

  3. Optimising the anaerobic co-digestion of urban organic waste using dynamic bioconversion mathematical modelling.

    Science.gov (United States)

    Fitamo, T; Boldrin, A; Dorini, G; Boe, K; Angelidaki, I; Scheutz, C

    2016-12-01

    Mathematical anaerobic bioconversion models are often used as a convenient way to simulate the conversion of organic materials to biogas. The aim of the study was to apply a mathematical model for simulating the anaerobic co-digestion of various types of urban organic waste, in order to develop strategies for controlling and optimising the co-digestion process. The model parameters were kept the same as in the original dynamic bioconversion model, albeit with minor adjustments, to simulate the co-digestion of food and garden waste with mixed sludge from a wastewater treatment plant in a continuously stirred tank reactor. The model's outputs were validated against experimental results obtained under thermophilic conditions, with mixed sludge as a single substrate and urban organic waste as a co-substrate at hydraulic retention times of 30, 20, 15 and 10 days. The predicted performance parameters (methane productivity and yield) and operational parameters (concentrations of ammonia and volatile fatty acids) were reasonable and displayed good correlation and accuracy. The model was later applied to identify optimal scenarios for an urban organic waste co-digestion process. The simulation scenario analysis demonstrated that increasing the amount of mixed sludge in the co-substrate had a marginal effect on the reactor performance. In contrast, increasing the amount of food waste and garden waste resulted in improved performance.

  4. Method development and validation for optimised separation of salicylic, acetyl salicylic and ascorbic acid in pharmaceutical formulations by hydrophilic interaction chromatography and response surface methodology.

    Science.gov (United States)

    Hatambeygi, Nader; Abedi, Ghazaleh; Talebi, Mohammad

    2011-09-02

    This paper introduces a design of experiments (DOE) approach for method optimisation in hydrophilic interaction chromatography (HILIC). An optimisation strategy for the separation of acetylsalicylic acid, its major impurity salicylic acid, and ascorbic acid in pharmaceutical formulations by HILIC is presented, with the aid of response surface methodology (RSM) and Derringer's desirability function. A Box-Behnken experimental design was used to build the mathematical models and then to choose the significant parameters for the optimisation, by simultaneously taking both resolution and retention time as the responses. The refined model had a satisfactory coefficient of determination (R² > 0.92, n = 27). The four independent variables studied simultaneously, each at three levels, were: acetonitrile content of the mobile phase, pH and concentration of the buffer, and column temperature. Of these, the concentration of the buffer and its cross-product with pH had a significant, positive influence on all studied responses. For the test compounds, the best separation conditions were: acetonitrile/22 mM ammonium acetate, pH 4.4 (82:18, v/v) as the mobile phase and a column temperature of 28°C. The methodology also captured the interactions between variables, which enabled exploration of the retention mechanism involved. It can be inferred that retention is governed by a compromise between hydrophilic partitioning and ionic interaction. The optimised method was further validated according to the ICH guidelines with respect to linearity and range, precision, accuracy, specificity and sensitivity. The robustness of the method was also determined and confirmed by overlaying contour plots of the responses derived from the experimental design used for method optimisation.
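
    The desirability step used in this record can be illustrated as follows. The sketch below combines a larger-is-better resolution response and a smaller-is-better run-time response into a single Derringer-type score; all limits and candidate points are invented, not the paper's values.

```python
"""Derringer-type desirability combination of two chromatographic responses."""
import numpy as np

def d_larger_is_better(y, low, high, s=1.0):
    """0 below `low`, 1 above `high`, power ramp in between."""
    return np.clip((y - low) / (high - low), 0.0, 1.0) ** s

def d_smaller_is_better(y, low, high, s=1.0):
    return np.clip((high - y) / (high - low), 0.0, 1.0) ** s

def overall_desirability(resolution, run_time):
    d1 = d_larger_is_better(resolution, low=1.5, high=4.0)
    d2 = d_smaller_is_better(run_time, low=5.0, high=20.0)
    return np.sqrt(d1 * d2)                 # geometric mean of the two desirabilities

candidates = [(2.1, 18.0), (3.2, 12.0), (3.9, 16.0)]   # (resolution, run time in min)
for rs, t in candidates:
    print(f"Rs={rs:>4}  t={t:>5} min  D={overall_desirability(rs, t):.3f}")
```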

  5. Optimisation and thermal control of a multi-layered structure for space electronic devices and thermal shielding of re-entry vehicles

    Science.gov (United States)

    Monti, Riccardo; Barboni, Renato; Gasbarri, Paolo; Chiwiacowsky, Leonardo D.

    2012-06-01

    Due to the Joule effect, all electronic devices dissipate heat when they are electrically powered. The accumulated heat reduces efficiency and performance, so thermal control is mandatory. In small electronic equipment, the difficulty or impossibility of using a cooling fluid for free or forced convection imposes cooling systems based on another operating principle, such as conduction. In this paper, thermal control via pyroelectric materials is presented. Furthermore, an optimisation of the geometric, thermal and mechanical parameters influencing the thermal dissipation is studied and presented. Pyroelectric materials are able to convert heat into electrical charge spontaneously and, due to this capability, such materials could represent a suitable choice to increase heat dissipation. The obtained electric charge or voltage could be used to charge a battery or to feed other equipment. In particular, a sequence of different materials such as Kovar®, molybdenum or copper-tungsten, used in a multi-layer pyroelectric wafer, together with their thicknesses, are design features to be optimised in order to obtain the optimal thermal dissipation. The optimisation process is performed by a hybrid approach in which a genetic algorithm (GA) is coupled with a local search procedure, in order to provide an appropriate balance between exploration and exploitation of the search space, which helps in the search for the optimal or quasi-optimal solution. Since the design variables used in the optimisation procedure are defined in different domains, discrete (e.g. the number of layers in the pyroelectric wafer) and continuous (e.g. the layer thicknesses), the genetic representation of the solution should take this into account. The chromosome used in the genetic algorithm will mix both integer and real values, which will also be reflected in the genetic operators used in the

  6. An approach to patient dose optimisation in interventional radiology at the Clermont-Ferrand Hospital Centre; Demarche d'optimisation de la dosimetrie des patients en radiologie interventionnelle au CHU de Clermont-Ferrand

    Energy Technology Data Exchange (ETDEWEB)

    Guersen, Joel; Chabrot, Pascal; Cassagnes, Lucie; Gabrillargues, Jean; Boyer [Centre Hospitalier Universitaire - CHU, Clermont-Ferrand (France)

    2011-07-15

    In late October 2009, a serious event occurred in the imaging unit of the Clermont-Ferrand university hospital: localised pruritic erythematous cutaneous lesions resembling radiation-induced damage appeared following a double pelvic arterial embolization which had saved the life of a young female patient. The imaging unit and the General Management of the University Hospital notified ASN of the event, and an on-site dosimetric appraisal carried out by IRSN confirmed that the cutaneous symptoms were very probably attributable to radiation. An internal inquiry concluded that there was a problem with the optimisation of the machine parameters in the angiography facility concerned. The imaging unit then initiated a patient dosimetry optimisation process for the 3 vascular radiology and vascular neuro-radiology facilities in the establishment, divided into 3 main phases dealing with: - the image acquisition rates; - the high-voltage settings of the facility concerned, following notification of the event to AFSSAPS, implicating the manufacturer; - the radioscopy and radiography image acquisition parameters, following intervention by the IRSN experts at the request of the imaging unit. On the facility concerned, the X-ray dose delivered to the patients was reduced successively by 30%, then 35% and finally 25%, corresponding to an overall reduction by a factor of about three. (authors)
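
    For clarity, if the three quoted reductions (30%, then 35%, then 25%) are assumed to apply successively to the dose level reached after each previous step, the combined effect is consistent with the stated factor of roughly three:

```latex
% Successive dose reductions compound multiplicatively (assumption: each
% percentage applies to the dose level reached after the previous step).
\[
(1-0.30)\,(1-0.35)\,(1-0.25) \;=\; 0.70 \times 0.65 \times 0.75 \;\approx\; 0.34 \;\approx\; \tfrac{1}{3}.
\]
```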

  7. Challenges of additive manufacturing technologies from an optimisation perspective

    Directory of Open Access Journals (Sweden)

    Guessasma Sofiane

    2015-01-01

    Three-dimensional printing offers varied design possibilities that can be bridged to optimisation tools. In this review paper, a critical opinion on optimal design is delivered to show the limits, benefits and ways of improvement in additive manufacturing. The review emphasises design constraints related to additive manufacturing and the differences that may appear between virtual and real designs. These differences are explored using 3D imaging techniques intended to reveal processing-related defects. Guidelines for the safe use of the term “optimal design” are derived from 3D structural information.

  8. Review of magnesium hydride-based materials: development and optimisation

    Science.gov (United States)

    Crivello, J.-C.; Dam, B.; Denys, R. V.; Dornheim, M.; Grant, D. M.; Huot, J.; Jensen, T. R.; de Jongh, P.; Latroche, M.; Milanese, C.; Milčius, D.; Walker, G. S.; Webb, C. J.; Zlotea, C.; Yartys, V. A.

    2016-02-01

    Magnesium hydride has been studied extensively for applications as a hydrogen storage material owing to its favourable cost and high gravimetric and volumetric hydrogen densities. However, its high enthalpy of decomposition necessitates high working temperatures for hydrogen desorption, while the slow rates of some processes, such as hydrogen diffusion through the bulk, create challenges for large-scale implementation. The present paper reviews the fundamentals of the Mg-H system and looks at recent advances in the optimisation of magnesium hydride as a hydrogen storage material through the use of catalytic additives, the incorporation of defects and an understanding of the rate-limiting processes during absorption and desorption.
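
    As an indicative illustration of why the high decomposition enthalpy forces high working temperatures, the van 't Hoff relation gives the temperature at which the equilibrium hydrogen pressure reaches 1 bar. The enthalpy and entropy figures below are typical literature values for MgH2, not values reported in this record, so the result should be read only as an order-of-magnitude estimate:

```latex
% Van 't Hoff condition for a 1 bar equilibrium pressure (\Delta G = 0).
% Assumed typical values: \Delta H_{des} \approx 75 kJ/mol H2, \Delta S_{des} \approx 130 J/(mol H2 K).
\[
T_{1\,\mathrm{bar}} \;\approx\; \frac{\Delta H_{\mathrm{des}}}{\Delta S_{\mathrm{des}}}
\;\approx\; \frac{75\,000\ \mathrm{J\,mol^{-1}}}{130\ \mathrm{J\,mol^{-1}\,K^{-1}}}
\;\approx\; 580\ \mathrm{K} \;\approx\; 300\,^{\circ}\mathrm{C}.
\]
```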

  9. Optimisation of biodiesel production by sunflower oil transesterification.

    Science.gov (United States)

    Antolín, G; Tinaut, F V; Briceño, Y; Castaño, V; Pérez, C; Ramírez, A I

    2002-06-01

    In this work the transesterification of sunflower oil to obtain biodiesel was studied. Taguchi's methodology was chosen for the optimisation of the most important variables (temperature conditions, reactant proportions and purification methods), with the purpose of obtaining a high-quality biodiesel that fulfils the European pre-legislation at the maximum process yield. Finally, the sunflower methyl esters were characterised to test their properties as fuels in diesel engines, such as viscosity, flash point, cold filter plugging point and acid value. Results showed that the biodiesel obtained under the optimum conditions is an excellent substitute for fossil fuels.
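
    To make the Taguchi step concrete, the sketch below computes the "larger is better" signal-to-noise ratio per factor level for a hypothetical L9-style data set; the factor levels, yields and array layout are invented placeholders, not the study's data.

```python
# Hypothetical Taguchi analysis: larger-is-better S/N ratios for process yield.
# The L9 orthogonal array layout and the yield replicates are invented examples.
import math

# Each run: (temperature level, molar-ratio level, purification level), replicate yields [%]
runs = [
    ((1, 1, 1), [88.1, 87.5]), ((1, 2, 2), [90.2, 89.8]), ((1, 3, 3), [91.0, 90.5]),
    ((2, 1, 2), [92.3, 91.9]), ((2, 2, 3), [94.1, 93.6]), ((2, 3, 1), [89.7, 90.0]),
    ((3, 1, 3), [93.0, 92.4]), ((3, 2, 1), [90.8, 91.2]), ((3, 3, 2), [95.2, 94.7]),
]

def sn_larger_is_better(ys):
    # S/N = -10 log10( mean(1/y^2) ); higher is better
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))

# Average S/N per factor and level to pick the best setting of each factor
factors = ["temperature", "molar ratio", "purification"]
for f in range(3):
    for level in (1, 2, 3):
        sn = [sn_larger_is_better(ys) for levels, ys in runs if levels[f] == level]
        print(f"{factors[f]:<13} level {level}: mean S/N = {sum(sn)/len(sn):.3f}")
```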

  10. Word Sense Disambiguation using Optimised Combinations of Knowledge Sources

    CERN Document Server

    Wilks, Yorick; Stevenson, Mark

    1998-01-01

    Word sense disambiguation algorithms, with few exceptions, have made use of only one lexical knowledge source. We describe a system which performs unrestricted word sense disambiguation (on all content words in free text) by combining different knowledge sources: semantic preferences, dictionary definitions and subject/domain codes along with part-of-speech tags. The usefulness of these sources is optimised by means of a learning algorithm. We also describe the creation of a new sense tagged corpus by combining existing resources. Tested accuracy of our approach on this corpus exceeds 92%, demonstrating the viability of all-word disambiguation rather than restricting oneself to a small sample.
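
    A minimal sketch of the underlying idea, weighting several knowledge sources and choosing the sense with the highest combined score, is given below. The sense inventory, the per-source scores and the crude grid search used to tune the weights are all hypothetical stand-ins, not the system described in the record.

```python
# Toy combination of knowledge-source scores for word sense disambiguation.
# Each source returns a score per candidate sense; the weights are tuned by a
# simple grid search on a tiny labelled sample (all data below is invented).
import itertools

# scores[source][sense] for one ambiguous occurrence, plus the gold sense
SAMPLE = [
    ({"pref":   {"bank/1": 0.7, "bank/2": 0.3},
      "defn":   {"bank/1": 0.4, "bank/2": 0.6},
      "domain": {"bank/1": 0.9, "bank/2": 0.1}}, "bank/1"),
    ({"pref":   {"bank/1": 0.2, "bank/2": 0.8},
      "defn":   {"bank/1": 0.5, "bank/2": 0.5},
      "domain": {"bank/1": 0.3, "bank/2": 0.7}}, "bank/2"),
]
SOURCES = ["pref", "defn", "domain"]

def disambiguate(scores, weights):
    senses = scores[SOURCES[0]].keys()
    return max(senses, key=lambda s: sum(w * scores[src][s] for src, w in zip(SOURCES, weights)))

def accuracy(weights):
    hits = sum(disambiguate(scores, weights) == gold for scores, gold in SAMPLE)
    return hits / len(SAMPLE)

# Crude optimisation of the source weights: exhaustive search over a coarse grid
best = max(itertools.product((0.0, 0.5, 1.0), repeat=len(SOURCES)), key=accuracy)
print("best weights:", dict(zip(SOURCES, best)), "accuracy:", accuracy(best))
```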

  11. Optimising performance in steady state for a supermarket refrigeration system

    DEFF Research Database (Denmark)

    Green, Torben; Kinnaert, Michel; Razavi-Far, Roozbeh

    2012-01-01

    Using a supermarket refrigeration system as an illustrative example, the paper postulates that, by appropriately utilising knowledge of plant operation, the plant-wide performance can be optimised based on a small set of variables. Focusing on steady-state operation, the total system performance is shown to be influenced predominantly by the suction pressure. Employing an appropriate performance function leads to conclusions on the choice of set-point for the suction pressure that are contrary to existing practice. Analysis of the resulting data leads to a simple method for finding optimal...
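
    The record's point that steady-state performance can be optimised over a single variable can be illustrated with a toy scan over candidate suction-pressure set-points. The power model below is an invented placeholder, not the paper's performance function, and the coefficients have no physical basis.

```python
# Toy illustration: pick the suction-pressure set-point minimising a made-up
# steady-state power model (compressor work falls as suction pressure rises,
# while a capacity/fan penalty grows). Neither the coefficients nor the
# functional form come from the cited work.

def total_power(p_suction_bar):
    compressor = 12.0 / p_suction_bar          # placeholder: lower pressure -> more compressor work
    evaporator_fans = 0.8 * p_suction_bar**2   # placeholder: higher pressure -> capacity/fan penalty
    return compressor + evaporator_fans

candidates = [1.0 + 0.05 * i for i in range(41)]          # 1.0 .. 3.0 bar
best = min(candidates, key=total_power)
print(f"best set-point approx. {best:.2f} bar, power approx. {total_power(best):.2f} kW (toy numbers)")
```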

  12. Optimisation is at the heart of the operation.

    Science.gov (United States)

    Jones, Darren

    2013-11-01

    In our other article based around operating theatres in this issue of HEJ (see pages 64-72), we examine how some of the latest technology is benefiting users, but in this article--with all areas of the NHS charged with reducing energy consumption and cutting carbon emissions--Darren Jones, MD at carbon and energy management specialist, Low Carbon Europe, takes a detailed look, with the help of a 'real-life' case study based on recent experience at London's Heart Hospital, at operating theatre optimisation and HTM 03-01 audits.

  13. Optimised quantum hacking of superconducting nanowire single-photon detectors

    CERN Document Server

    Tanner, Michael G; Hadfield, Robert H

    2013-01-01

    We explore optimised control of superconducting nanowire single-photon detectors (SNSPDs) through bright illumination. We consider the behaviour of the SNSPD in the shunted configuration (a practical measure to avoid latching) in long-running quantum key distribution experiments. We propose and demonstrate an effective bright-light attack on this realistic configuration, applying transient blinding illumination lasting for a fraction of a microsecond and producing several deterministic fake clicks during this time. We show that this attack does not lead to elevated timing jitter in the spoofed output pulse and hence does not introduce significant errors. Five different SNSPD chip designs were tested. We consider possible countermeasures to this attack.

  14. Operation, optimisation, and performance of the DELPHI RICH detectors

    CERN Document Server

    Albrecht, E; Augustinus, A; Baillon, Paul; Battaglia, Marco; Bloch, D; Boudinov, E; Brunet, J M; Carrié, P; Cavalli, P; Christophel, E; Davenport, M; Dracos, M; Eklund, L; Erzen, B; Fischer, P A; Fokitis, E; Fontanelli, F; Gracco, Valerio; Hallgren, A; Joram, C; Juillot, P; Kjaer, N J; Kluit, P M; Lenzen, G; Liko, D; Mahon, J R; Maltezos, S; Markou, A; Neufeld, N; Nielsen, B S; Petrolini, A; Podobnik, T; Polok, G; Sajot, G; Sannino, M; Schyns, E; Strub, R; Tegenfeldt, F; Thadome, J; Tristram, G; Ullaland, O; Vulpen, I V

    1999-01-01

    The Ring Imaging Cherenkov detectors of DELPHI represent a large-scale particle identification system which covers almost the full angular acceptance of DELPHI. The combination of liquid and gas radiators (C4F10, C5F12 and C6F14) provides particle identification over the whole secondary-particle momentum spectrum at LEP I and LEP II. Continuing optimisation at the hardware level as well as at the online and offline software level has resulted in stable operation of the complete detector system for more than five years at full physics performance.

  15. Optimising the Target and Capture Sections of the Neutrino Factory

    OpenAIRE

    Hansen, Ole Martin

    2016-01-01

    The Neutrino Factory is designed to produce an intense high energy neutrino beam from stored muons. The majority of the muons are obtained from the decay of pions, produced by a proton beam impinging on a free-flowing mercury-jet target and captured by a high magnetic field. It is important to capture a large fraction of the produced pions to maximize the intensity of the neutrino beam. Various optimisation studies have been performed with the aim of maximising the muon influx to the accel...

  16. Optimisation of battery operating life considering software tasks and their timing behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Lipskoch, Henrik

    2010-02-19

    Users of mobile embedded systems have an interest in long battery operating life. The longer a system can operate without recharging or battery replacement, the lower the maintenance cost and the number of faults due to insufficient power supply. Operating life is prolonged by saving energy, which may reduce the available processing time. Mobile embedded systems that communicate with other participants, such as other mobiles or radio stations, are subject to time guarantees ensuring reliable communication. Thus, methods that save energy by reducing processing time are constrained not only by the available processing time but also by the embedded system's time guarantees. By performing parameter optimisations offline, decisions can be taken early at design time, avoiding further computations at run-time. In particular, when processor shutdown durations are computed offline, no extra circuitry to monitor system behaviour and wake up the processor needs to be designed, deployed or power-supplied: only a timer is required. In this work, software tasks sharing one processor are considered. The earliest-deadline-first scheduling algorithm is assumed, with a relative deadline per task. Tasks may be instantiated arbitrarily as long as this occurrence behaviour is given in the form of event streams. Scaling of the processor voltage and processor shutdown are taken into account as methods for saving energy. Given the per-task worst-case execution times and the tasks' event streams, the real-time feasibility of the energy-optimised solutions is proven. The decision as to which energy-saving solution provides the longest operating life is made with the help of a battery model. The real-time feasibility test used has the advantage that it can be approximated, which yields an adjustable number of linear optimisation constraints. Reducing the processor voltage reduces the processor frequency and therefore increases execution times. The resulting slowdown becomes the
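
    As a simplified illustration of the interplay between frequency scaling and schedulability, the sketch below computes the lowest uniform processor speed that keeps an EDF task set feasible, together with the corresponding dynamic-energy saving under a common cubic power assumption. It covers only the special case of periodic tasks with deadlines equal to periods, not the general event-stream and relative-deadline analysis of the thesis, and the task parameters are invented.

```python
# Simplified sketch: minimal EDF-feasible speed for periodic tasks with implicit
# deadlines (utilisation test U <= s), plus the dynamic-energy ratio assuming
# dynamic power ~ s^3 and execution time ~ 1/s. Task data are hypothetical and
# the event-stream/relative-deadline analysis of the thesis is not reproduced.
tasks = [
    # (worst-case execution time at full speed [ms], period = deadline [ms])
    (2.0, 10.0),
    (5.0, 40.0),
    (1.0, 8.0),
]

utilisation = sum(c / t for c, t in tasks)      # U at full speed
s_min = max(utilisation, 1e-9)                  # slowest feasible uniform speed (0 < s <= 1)
assert s_min <= 1.0, "task set infeasible even at full speed"

# Energy per unit of work: P_dyn ~ s^3, execution time stretched by 1/s  =>  E ~ s^2
energy_ratio = s_min ** 2
print(f"utilisation U = {utilisation:.3f}")
print(f"minimal uniform speed s = {s_min:.3f}")
print(f"dynamic energy at s vs. full speed: {energy_ratio:.2%} (idle/shutdown energy ignored)")
```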

  17. Beyond Synthesis: Augmenting Systematic Review Procedures with Practical Principles to Optimise Impact and Uptake in Educational Policy and Practice

    Science.gov (United States)

    Green, Chris; Taylor, Celia; Buckley, Sharon; Hean, Sarah

    2016-01-01

    Whilst systematic reviews, meta-analyses and other forms of synthesis are considered amongst the most valuable forms of research evidence, their limited impact on educational policy and practice has been criticised. In this article, we analyse why systematic reviews do not benefit users of evidence more consistently and suggest how review teams…

  18. EVITEACH: a study exploring ways to optimise the uptake of evidence-based practice to undergraduate nurses.

    Science.gov (United States)

    Hickman, Louise D; Kelly, Helen; Phillips, Jane L

    2014-11-01

    EVITEACH aimed to increase undergraduate nursing students' engagement with evidence-based practice and enhance their knowledge utilisation and translation capabilities. Building students' capabilities to apply evidence in professional practice is a fundamental university role. Undergraduate nursing students need to actively engage with knowledge utilisation and translational skill development to narrow the evidence-practice gap in the clinical setting. A two-phase mixed-methods study was undertaken over a three-year period (2008-2010, inclusive) utilising a Plan-Do-Study-Act (PDSA) approach. Three undergraduate nursing cohorts (N = 188) enrolled in a compulsory knowledge translation and utilisation subject at one Australian university participated. Data collection comprised subject evaluation data and reflective statements. Preliminary investigations identified priority areas related to the subject: materials, resources, teaching and workload. These priority areas became the focus of action for two PDSA cycles. PDSA cycle 1 demonstrated significant improvement of the subject overall (p < 0.05), of the evaluation of the materials used (p < 0.001) and of the teaching sub-groups (p < 0.05). PDSA cycle 2 sustained the improvement of the subject overall (p > 0.05). Furthermore, reflective statements collected during PDSA cycle 2 identified four themes: (1) What engages undergraduate nurses in the learning process; (2) The undergraduate nurses' learning trajectory; (3) Undergraduate nurses' preconceptions of research and evidence-based practice; and (4) Appreciating the importance of research and evidence-based practice to nursing. There is little robust evidence to guide the most effective way to build knowledge utilisation and translational skills. Effectively engaging undergraduate nursing students in knowledge translation and utilisation subjects could have immediate and long-term benefits for nursing as a profession and for patient outcomes. Developing evidence-based practice capabilities is important in terms of improving patient outcomes, organisational efficiencies and creating satisfying work environments.

  19. Optimising stocking rate and grazing management to enhance environmental and production outcomes for native temperate grasslands

    Science.gov (United States)

    Badgery, Warwick; Zhang, Yingjun; Huang, Ding; Broadfoot, Kim; Kemp, David; Mitchell, David

    2015-04-01

    Stocking rate and grazing management can be altered to enhance the sustainable production of grasslands, but the relative influence of each has not often been determined for native temperate grasslands. Grazing management can range from seasonal rests through to intensive rotational grazing involving >30 paddocks. In large-scale grazing, it can be difficult to separate the influence of grazing pressure from the timing of utilisation. Moreover, relative grazing pressure can change between years as seasonal conditions influence grassland production relative to the fairly constant requirements of the animals. This paper reports on two studies in temperate native grasslands of northern China and south-eastern Australia that examined stocking rate and regionally relevant grazing management strategies. In China, the grazing experiment involved combinations of a rest, moderate or heavy grazing pressure of sheep in spring, followed by moderate or heavy grazing in summer and autumn. Moderate grazing pressure, at 50% of the current district average, gave the best balance between maintaining productive and diverse grasslands, a profitable livestock system, and mitigation of greenhouse gases through increased soil carbon, methane uptake by the soil and efficient methane emissions per unit of weight gain. Spring rests best maintained a desirable grassland composition, but had few other benefits and reduced livestock productivity because of the lower feed quality when grazing later in the season. In Australia, the grazing experiment compared continuous grazing with flexible 4- and 20-paddock rotational grazing systems with sheep. Stocking rates were adjusted between systems biannually based on the average herbage mass of the grassland. No treatment degraded the perennial pasture composition, but ground cover was maintained at higher levels in the 20-paddock system even though this treatment had a higher stocking rate. Overall there was little difference in livestock production (e.g. kg

  20. Uptake of mercury by thiol-grafted chitosan gel beads.

    Science.gov (United States)

    Merrifield, John D; Davids, William G; MacRae, Jean D; Amirbahman, Aria

    2004-07-01

    This study describes the synthesis and characterization of thiol-grafted chitosan beads for use as mercury (Hg) adsorbents. Chitosan flakes were dissolved and formed into spherical beads using a phase-inversion technique, then crosslinked to improve their porosity and chemical stability. Cysteine was grafted onto the beads in order to improve the adsorption affinity of Hg for the beads. The beads had an average diameter of 3.2 mm, a porosity of 0.9, a specific surface area of approximately 100 m2/g, an average pore size of approximately 120 angstroms, and a specific gravity of 2.0. Equilibrium and kinetic experiments were conducted to study the uptake of Hg by the beads. The adsorption capacity was approximately 8.0 mmol-Hg/g-dry beads at pH 7 and decreased with decreasing pH. Hg adsorption kinetics was modeled as radial pore diffusion into a spherical bead with nonlinear adsorption. Use of the nonlinear Freundlich isotherm in the diffusion equation allowed modeling of the uptake kinetics with a single tortuosity factor of 1.5 +/- 0.3 as the fitting parameter for all initial Hg concentrations, chitosan loadings and agitation rates. At agitation rates of 50 and 75 rpm, where the uptake rate was reduced significantly by the boundary-layer effect, the mass transfer coefficient at the outer boundary was also used as a fitting parameter to model the kinetic data. At agitation rates higher than 150 rpm, pore diffusion was the rate-limiting step: the beads exhibited a high initial uptake rate followed by a slower uptake rate, consistent with pore diffusion as the rate-determining step. The higher uptake rates observed in this study compared with those in a previous study of chitosan-based crab shells indicate that dissolution and gel formation increase the porosity and pore accessibility of chitosan.
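
    The kinetic model described above (radial pore diffusion into a sphere with local Freundlich equilibrium) can be sketched numerically as follows. The bead radius, porosity and tortuosity echo values quoted in the record, but the free diffusivity, Freundlich constants, bead density and bulk concentration are assumptions, and the bath is treated as infinite with no external film resistance, so this is an illustrative sketch rather than a reproduction of the study's fitted model.

```python
# Illustrative finite-difference solution of pore diffusion into a spherical bead
# with local Freundlich equilibrium: eps*dc/dt + rho_b*dq/dt = (eps*Dm/tau)*Lap(c),
# q = KF * c**(1/n). Bead radius, porosity and tortuosity follow the record;
# all other parameters are assumptions, and the bulk concentration is held constant.
import numpy as np

R, eps, tau = 1.6e-3, 0.9, 1.5   # bead radius [m], porosity, tortuosity (from the record)
Dm = 1.0e-9                      # free diffusivity of the Hg species [m^2/s] (assumed)
rho_b = 200.0                    # dry bead density [kg/m^3] (assumed)
KF, n = 2.0e-3, 2.0              # Freundlich constants (assumed; q in mol/kg, c in mol/m^3)
c_bulk = 0.5                     # constant bulk concentration [mol/m^3] (assumed)

Dp = eps * Dm / tau              # effective pore diffusivity
nr = 51
r = np.linspace(0.0, R, nr)
dr = r[1] - r[0]
c = np.zeros(nr)                 # pore-liquid concentration profile
c[-1] = c_bulk                   # surface in equilibrium with the (infinite) bath
dt = 0.2 * dr**2 * eps / Dp      # conservative explicit time step

def total_uptake(c):
    """Moles of Hg per bead: pore liquid plus adsorbed phase, integrated over shells."""
    q = KF * np.power(c, 1.0 / n)
    f = (eps * c + rho_b * q) * 4.0 * np.pi * r**2
    return float(np.sum(0.5 * (f[1:] + f[:-1])) * dr)   # trapezoidal rule

m_eq = total_uptake(np.full(nr, c_bulk))                 # uptake at full equilibrium
t, report = 0.0, [3600.0, 6 * 3600.0, 24 * 3600.0]       # report at 1 h, 6 h, 24 h
for t_end in report:
    while t < t_end:
        lap = np.zeros(nr)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dr**2 \
                    + (c[2:] - c[:-2]) / (r[1:-1] * dr)   # spherical Laplacian, interior nodes
        lap[0] = 6.0 * (c[1] - c[0]) / dr**2              # symmetry condition at the centre
        # local-equilibrium retardation: eps + rho_b * dq/dc (clamped to avoid c**negative at c = 0)
        retard = eps + rho_b * KF / n * np.maximum(c, 1e-9) ** (1.0 / n - 1.0)
        c[:-1] += dt * Dp * lap[:-1] / retard[:-1]        # surface node stays at c_bulk
        t += dt
    print(f"t = {t_end/3600:4.0f} h   fractional uptake = {total_uptake(c)/m_eq:.3f}")
```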