WorldWideScience

Sample records for automated parameter optimization

  1. Automated Optimization of Walking Parameters for the Nao Humanoid Robot

    NARCIS (Netherlands)

    N. Girardi; C. Kooijman; A.J. Wiggers; A. Visser

    2013-01-01

    This paper describes a framework for optimizing walking parameters for a Nao humanoid robot. In this case an omnidirectional walk is learned. The parameters are learned in simulation with an evolutionary approach. The best performance was obtained for a combination of a low mutation rate and a high crossover rate.

  2. Automated Optimization of Walking Parameters for the Nao Humanoid Robot

    OpenAIRE

    Girardi, N.; Kooijman, C.; Wiggers, A.J.; de Visser, A.

    2013-01-01

    This paper describes a framework for optimizing walking parameters for a Nao humanoid robot. In this case an omnidirectional walk is learned. The parameters are learned in simulation with an evolutionary approach. The best performance was obtained for a combination of a low mutation rate and a high crossover rate.

  3. apsis - Framework for Automated Optimization of Machine Learning Hyper Parameters

    OpenAIRE

    Diehl, Frederik; Jauch, Andreas

    2015-01-01

    The apsis toolkit presented in this paper provides a flexible framework for hyperparameter optimization and includes both random search and a Bayesian optimizer. It is implemented in Python and its architecture features adaptability to any desired machine learning code. It can easily be used with common Python ML frameworks such as scikit-learn. Published under the MIT License, other researchers are encouraged to check out the code, contribute, or raise suggestions. The code can be ...
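
    The record above does not show apsis' own API, so the sketch below illustrates the same idea (random-search hyperparameter optimization of a scikit-learn model) using scikit-learn's built-in RandomizedSearchCV; the dataset, estimator and search ranges are arbitrary choices for illustration.

```python
# Hypothetical illustration of random-search hyperparameter optimization with
# scikit-learn, the kind of workflow a toolkit such as apsis automates.
from scipy.stats import loguniform
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Search space: hyperparameters drawn at random from these distributions.
param_distributions = {
    "C": loguniform(1e-3, 1e3),       # regularization strength
    "gamma": loguniform(1e-4, 1e-1),  # RBF kernel width
}

search = RandomizedSearchCV(
    SVC(kernel="rbf"),
    param_distributions,
    n_iter=25,          # number of random configurations to evaluate
    cv=5,               # 5-fold cross-validation per configuration
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```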

  4. Automation of Optimized Gabor Filter Parameter Selection for Road Cracks Detection

    Directory of Open Access Journals (Sweden)

    Haris Ahmad Khan

    2016-03-01

    Automated systems for road crack detection are extremely important in road maintenance for vehicle safety and traveler's comfort. Emerging cracks in roads need to be detected and repaired as early as possible to avoid further damage, thus reducing rehabilitation cost. In this paper, a robust method for Gabor filter parameter optimization for automatic road crack detection is discussed. Gabor filters have been used in previous literature for similar applications. However, there is a need for automatic selection of optimized Gabor filter parameters due to variation in the texture of roads and cracks. The problem of change of background, which in fact is road texture, is addressed through a learning process by using synthetic road crack generation for Gabor filter parameter tuning. Tuned parameters are then tested on real cracks and a thorough quantitative analysis is performed for performance evaluation.
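
    A minimal sketch of the kind of parameter sweep the record above describes, not the paper's synthetic-crack learning procedure: OpenCV's cv2.getGaborKernel is applied over a small grid of parameters and scored against a labelled toy image. The scoring function, grid and synthetic image are all illustrative assumptions.

```python
# Minimal sketch (not the paper's method): sweep Gabor filter parameters with
# OpenCV and keep the set that best separates crack pixels from background on
# a labelled (here: synthetic) training image.
import itertools
import cv2
import numpy as np

def crack_response(image, ksize, sigma, theta, lambd, gamma):
    """Filter the image with one Gabor kernel and return the response map."""
    kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
    return cv2.filter2D(image, cv2.CV_32F, kernel)

def score(response, crack_mask):
    """Hypothetical fitness: contrast between crack and background responses."""
    return float(response[crack_mask].mean() - response[~crack_mask].mean())

# Synthetic training image: dark diagonal "crack" on a textured background.
rng = np.random.default_rng(0)
img = rng.normal(128, 10, (128, 128)).astype(np.float32)
mask = np.eye(128, dtype=bool)
img[mask] -= 60

best = max(
    itertools.product([11, 21], [2.0, 4.0],
                      np.linspace(0, np.pi, 4, endpoint=False),
                      [8.0, 12.0], [0.5]),
    key=lambda p: score(crack_response(img, *p), mask),
)
print("best (ksize, sigma, theta, lambda, gamma):", best)
```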

  5. Development of a neuro-fuzzy technique for automated parameter optimization of inverse treatment planning

    International Nuclear Information System (INIS)

    Parameter optimization in the process of inverse treatment planning for intensity modulated radiation therapy (IMRT) is mainly conducted by human planners in order to create a plan with the desired dose distribution. To automate this tedious process, an artificial intelligence (AI) guided system was developed and examined. The AI system can automatically accomplish the optimization process based on prior knowledge operated by several fuzzy inference systems (FIS). Prior knowledge, which was collected from human planners during their routine trial-and-error process of inverse planning, has first to be 'translated' to a set of 'if-then rules' for driving the FISs. To minimize subjective error which could be costly during this knowledge acquisition process, it is necessary to find a quantitative method to automatically accomplish this task. A well-developed machine learning technique, based on an adaptive neuro fuzzy inference system (ANFIS), was introduced in this study. Based on this approach, prior knowledge of a fuzzy inference system can be quickly collected from observation data (clinically used constraints). The learning capability and the accuracy of such a system were analyzed by generating multiple FIS from data collected from an AI system with known settings and rules. Multiple analyses showed good agreements of FIS and ANFIS according to rules (error of the output values of ANFIS based on the training data from FIS of 7.77 ± 0.02%) and membership functions (3.9%), thus suggesting that the 'behavior' of an FIS can be propagated to another, based on this process. The initial experimental results on a clinical case showed that ANFIS is an effective way to build FIS from practical data, and analysis of ANFIS and FIS with clinical cases showed good planning results provided by ANFIS. OAR volumes encompassed by characteristic percentages of isodoses were reduced by a mean of between 0 and 28%. The study demonstrated a feasible way

  6. Parameter Extraction for PSpice Models by means of an Automated Optimization Tool – An IGBT model Study Case

    DEFF Research Database (Denmark)

    Suárez, Carlos Gómez; Reigosa, Paula Diaz; Iannuzzo, Francesco;

    2016-01-01

    An original tool for parameter extraction of PSpice models has been released, enabling a simple parameter identification. A physics-based IGBT model is used to demonstrate that the optimization tool is capable of generating a set of parameters which predicts the steady-state and switching behavior ... of two IGBT modules rated at 1.7 kV / 1 kA and 1.7 kV / 1.4 kA.

  7. Optimization of hydrocyclone work parameters

    OpenAIRE

    Golomeova, Mirjana; Krstev, Boris; Golomeov, Blagoj

    2003-01-01

    The paper presents the procedure of optimization of laboratory hydrocyclone work by the application of dispersion analysis and planning with Greek-Latin square. The application of this method makes possible significant reduction of the number of tests and close optimization of the whole process. Tests were carried out by D-100 mm hydrocyclone. Optimization parameters are as follows: contents of solid in pulp, underflow diameter, overflow diameter and inlet pressure. The influence of optimi...

  8. Multivariate optimization of ILC parameters

    CERN Document Server

    Bazarov, Ivan V

    2005-01-01

    We present results of multiobjective optimization of the International Linear Collider (ILC) which seeks to maximize luminosity at each given total cost of the linac (capital and operating costs of cryomodules, refrigeration and RF). Evolutionary algorithms allow quick exploration of optimal sets of parameters in a complicated system such as ILC in the presence of realistic constraints as well as investigation of various what-if scenarios in potential performance. Among the parameters we varied there were accelerating gradient and Q of the cavities (in a coupled manner following a realistic Q vs. E curve), the number of particles per bunch, the bunch length, number of bunches in the train, etc. We find an optimum which decreases (relative to TDR baseline) the total linac cost by 22 %, capital cost by 25 % at the same luminosity of 3·10^38

  9. Optimization-based Method for Automated Road Network Extraction

    Energy Technology Data Exchange (ETDEWEB)

    Xiong, D

    2001-09-18

    Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper reviews the literature on the subject of road extraction and describes a study of an optimization-based method for automated road network extraction.

  10. Parameter optimization in S-system models

    Directory of Open Access Journals (Sweden)

    Vasconcelos Ana

    2008-04-01

    Background: The inverse problem of identifying the topology of biological networks from their time series responses is a cornerstone challenge in systems biology. We tackle this challenge here through the parameterization of S-system models. It was previously shown that parameter identification can be performed as an optimization based on the decoupling of the differential S-system equations, which results in a set of algebraic equations. Results: A novel parameterization solution is proposed for the identification of S-system models from time series when no information about the network topology is known. The method is based on eigenvector optimization of a matrix formed from multiple regression equations of the linearized decoupled S-system. Furthermore, the algorithm is extended to the optimization of network topologies with constraints on metabolites and fluxes. These constraints rejoin the system in cases where it had been fragmented by decoupling. We demonstrate with synthetic time series why the algorithm can be expected to converge in most cases. Conclusion: A procedure was developed that facilitates automated reverse engineering tasks for biological networks using S-systems. The proposed method of eigenvector optimization constitutes an advancement over S-system parameter identification from time series using a recent method called Alternating Regression. The proposed method overcomes convergence issues encountered in alternating regression by identifying nonlinear constraints that restrict the search space to computationally feasible solutions. Because the parameter identification is still performed for each metabolite separately, the modularity and linear time characteristics of the alternating regression method are preserved. Simulation studies illustrate how the proposed algorithm identifies the correct network topology out of a collection of models which all fit the dynamical time series essentially equally well.

  11. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]

    2013-12-23

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand" requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  12. Automated global optimization of commercial SAGD operations

    International Nuclear Information System (INIS)

    The economic optimization of steam assisted gravity drainage (SAGD) operations has been largely conducted through the use of simulations to identify optimal steam use approaches. In this study, the cumulative steam to oil ratio (CSOR) was optimized by altering the steam injection pressure throughout the evolution of the process in a detailed, 3-d reservoir model. A generic Athabasca simulation model was used along with a thermal reservoir simulator which used a corner point grid. A line heater was specified in the grid cells containing the well bores to mimic steam circulation. During heating, the injection and production locations were allowed to produce reservoir fluids from the reservoir to relieve pressure associated with the thermal expansion of oil sand. After steam circulation, the well bores were switched to an SAGD operation. At the producer well the operating constraint imposed a maximum temperature difference between the saturation temperature corresponding to the pressure of the fluids and the temperature in the wellbore equal to 5 degrees C. At the injection well, the steam injection pressure was specified according to the optimizer. A response surface was constructed by fitting the parameter sets and corresponding cost functions to a biquadratic function. After the minimum from the cost function was determined, a new set of parameters was selected to complete the iterations. Results indicated that optimization of SAGD is feasible with complex and detailed reservoir models by using parallel calculations. The general trend determined by the optimization algorithm developed in the research indicated that before the steam chamber contacts the overburden, the operating pressure should be relatively high. After contact is made, the injection pressure should be lowered to reduce heat losses. 17 refs., 1 tab., 5 figs
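
    A toy sketch of the response-surface step described above, under the assumption that a biquadratic surface is fitted to sampled cost values and its stationary point is taken as the next candidate; the stand-in cost function below replaces the reservoir simulation that would normally supply the CSOR, and all numbers are illustrative.

```python
# Sketch of a response-surface iteration: sample the cost function, fit a
# biquadratic surface by least squares, and take its minimizer as the next guess.
import numpy as np

def simulated_csor(p_early, p_late):
    """Hypothetical placeholder for the simulator-derived cumulative steam/oil ratio."""
    return 2.0 + 0.04 * (p_early - 3.5) ** 2 + 0.09 * (p_late - 2.0) ** 2

def quadratic_features(x, y):
    # Biquadratic response surface: 1, x, y, x^2, y^2, x*y
    return np.column_stack([np.ones_like(x), x, y, x * x, y * y, x * y])

# Evaluate the cost function on a small design of injection-pressure pairs (MPa).
rng = np.random.default_rng(1)
x = rng.uniform(2.0, 5.0, 15)   # early-phase injection pressure
y = rng.uniform(1.0, 3.0, 15)   # late-phase injection pressure
z = simulated_csor(x, y)

# Least-squares fit of the surface, then its stationary point (predicted minimum).
c = np.linalg.lstsq(quadratic_features(x, y), z, rcond=None)[0]
A = np.array([[2 * c[3], c[5]], [c[5], 2 * c[4]]])
b = -np.array([c[1], c[2]])
x_opt, y_opt = np.linalg.solve(A, b)
print(f"next candidate: early {x_opt:.2f} MPa, late {y_opt:.2f} MPa")
```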

  13. Optimization based automated curation of metabolic reconstructions

    Directory of Open Access Journals (Sweden)

    Maranas Costas D

    2007-06-01

    Background: Currently, there exist tens of different microbial and eukaryotic metabolic reconstructions (e.g., Escherichia coli, Saccharomyces cerevisiae, Bacillus subtilis) with many more under development. All of these reconstructions are inherently incomplete, with some functionalities missing due to the lack of experimental and/or homology information. A key challenge in the automated generation of genome-scale reconstructions is the elucidation of these gaps and the subsequent generation of hypotheses to bridge them. Results: In this work, an optimization-based procedure is proposed to identify and eliminate network gaps in these reconstructions. First we identify the metabolites in the metabolic network reconstruction which cannot be produced under any uptake conditions and subsequently we identify the reactions from a customized multi-organism database that restore the connectivity of these metabolites to the parent network. This connectivity restoration is hypothesized to take place through four mechanisms: (a) reversing the directionality of one or more reactions in the existing model, (b) adding reactions from another organism to provide functionality absent in the existing model, (c) adding external transport mechanisms to allow for importation of metabolites into the existing model, and (d) restoring flow by adding intracellular transport reactions in multi-compartment models. We demonstrate this procedure for the genome-scale reconstructions of Escherichia coli and Saccharomyces cerevisiae, wherein compartmentalization of intracellular reactions results in a more complex topology of the metabolic network. We determine that about 10% of metabolites in E. coli and 30% of metabolites in S. cerevisiae cannot carry any flux. Interestingly, the dominant flow restoration mechanism is directionality reversal of existing reactions in the respective models. Conclusion: We have proposed systematic methods to identify and fill gaps in genome-scale metabolic reconstructions.

  14. Novel system for modulated lidar parameter optimization

    Institute of Scientific and Technical Information of China (English)

    Bo Zhou; Yong Ma; Kun Liang; Zhiqiang Tu; Hongyuan Wang

    2011-01-01

    We present a novel system for parameter design and optimization of modulated lidar. The system is realized by combining software simulation with hardware circuit. This method is more reliable and flexible for lidar parameter optimization compared with theoretical computation or fiber-simulated system. Experiments confirm that the system is capable of optimizing parameters for modulated lidar. Key parameters are analyzed as well. The optimal filter bandwidth is 200 MHz and the optimal modulation depth is 0.5 under typical application environment.

  15. DRAM BASED PARAMETER DATABASE OPTIMIZATION

    OpenAIRE

    Marcinkevicius, Tadas

    2012-01-01

    This thesis suggests an improved parameter database implementation for one of Ericsson products. The parameter database is used during the initialization of the system as well as during the later operation. The database size is constantly growing because the parameter database is intended to be used with different hardware configurations. When a new technology platform is released, multiple revisions with additional features and functionalities are later created, resulting in introduction of ...

  16. Evaluation of GCC optimization parameters

    Directory of Open Access Journals (Sweden)

    Rodrigo D. Escobar

    2012-12-01

    Compile-time optimization of code can result in significant performance gains. The amount of these gains varies widely depending upon the code being optimized, the hardware being compiled for, the specific performance increase attempted (e.g. speed, throughput, memory utilization, etc.) and the compiler used. We used the latest version of the SPEC CPU 2006 benchmark suite to help gain an understanding of possible performance improvements using GCC (GNU Compiler Collection) options, focusing mainly on speed gains made possible by tuning the compiler with the standard compiler optimization levels as well as a specific compiler option for the hardware processor. We compared the best standardized tuning options obtained for a Core i7 processor to the same relative options used on a Pentium 4 to determine whether the GNU project has improved its performance tuning capabilities for specific hardware over time.
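
    A rough sketch of such a sweep, assuming gcc is on the PATH and "benchmark.c" stands in for the code under test; -O0 through -O3 and -march=native are the standard GCC levels and hardware-specific option the record refers to, but the harness itself is not from the paper.

```python
# Compile one benchmark at several GCC optimization levels (plus -march=native)
# and time each resulting binary. "benchmark.c" is a placeholder source file.
import subprocess
import time

FLAG_SETS = [["-O0"], ["-O1"], ["-O2"], ["-O3"], ["-O3", "-march=native"]]

for flags in FLAG_SETS:
    subprocess.run(["gcc", *flags, "-o", "bench", "benchmark.c"], check=True)
    start = time.perf_counter()
    subprocess.run(["./bench"], check=True)
    elapsed = time.perf_counter() - start
    print(f"{' '.join(flags):<20} {elapsed:.3f} s")
```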

  17. Parameters Optimization of Synergetic Recognition Approach

    Institute of Scientific and Technical Information of China (English)

    GAO Jun; DONG Huoming; SHAO Jing; ZHAO Jing

    2005-01-01

    Synergetic pattern recognition is a novel and effective pattern recognition method, and has some advantages in image recognition. Research has shown that the attention parameters λ and the parameters B, C directly influence the recognition results, but there is no general theory for controlling these parameters during the recognition process. We analyze these parameters abstractly in this paper, and propose a novel parameter optimization method based on the simulated annealing (SA) algorithm. The SA algorithm has good optimization performance and is used to search for the globally optimal values of these parameters. Theoretical analysis and experimental results both show that the proposed parameter optimization method is effective, improving the performance of the synergetic recognition approach, and that its implementation is simple and fast.
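
    The record gives no implementation details, so the following is a generic simulated-annealing sketch with a toy objective standing in for the recognition error as a function of the parameters (λ, B, C); the cooling schedule and step sizes are arbitrary assumptions.

```python
# Generic simulated-annealing sketch (not the paper's exact schedule): search for
# parameters (lambda, B, C) minimizing a toy stand-in for the recognition error.
import math
import random

def recognition_error(params):
    """Hypothetical objective; a real system would run the synergetic classifier."""
    lam, b, c = params
    return (lam - 0.4) ** 2 + (b - 1.2) ** 2 + (c - 0.8) ** 2

def anneal(objective, x0, t0=1.0, cooling=0.995, steps=5000, step_size=0.1):
    rng = random.Random(0)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(steps):
        # Propose a random perturbation of one parameter.
        cand = list(x)
        i = rng.randrange(len(cand))
        cand[i] += rng.gauss(0.0, step_size)
        fcand = objective(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fcand < fx or rng.random() < math.exp((fx - fcand) / t):
            x, fx = cand, fcand
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

print(anneal(recognition_error, [1.0, 0.0, 0.0]))
```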

  18. QUADRATIC OPTIMIZATION METHOD AND ITS APPLICATION ON OPTIMIZING MECHANISM PARAMETER

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yun; CHEN Jianneng; YU Yaxin; YU Gaohong; ZHU Jianping

    2006-01-01

    In order that the designed mechanism meets the kinematic requirements with optimal dynamic behavior, a quadratic optimization method is proposed based on the different characteristics of kinematic and dynamic optimization. This method includes two steps of optimization, namely kinematic and dynamic optimization, and it uses the results of the kinematic optimization as the constraint equations of the dynamic optimization. The method is applied, with notable effect, to the parameter optimization of the transplanting mechanism with elliptic planetary gears of a high-speed rice seedling transplanter. The parameter spectrum which meets the kinematic requirements is obtained through visualized human-computer interaction in the kinematic optimization, and the optimal parameters are obtained with an improved genetic algorithm in the dynamic optimization. In the dynamic optimization, the objective function is chosen as the optimal dynamic behavior and the constraint equations come from the results of the kinematic optimization. This method is suitable for multi-objective optimization when both the kinematic and dynamic performances act as objective functions.

  19. Optimal Parameters Multicomponent Mixtures Extruding

    Directory of Open Access Journals (Sweden)

    Ramil F. Sagitov

    2013-01-01

    Experimental research on the extruding of multicomponent mixtures from production wastes is presented, along with a unit for the production of composites from different types of waste. Having analyzed the dependence of the energy requirements of multicomponent mixture extruding on die length and component content at three values of the angular rate of screw rotation, we obtained the values of the energy requirements at the optimal die length, angular speed and percentage of binding additives.

  1. Automated firewall analytics design, configuration and optimization

    CERN Document Server

    Al-Shaer, Ehab

    2014-01-01

    This book provides a comprehensive and in-depth study of automated firewall policy analysis for designing, configuring and managing distributed firewalls in large-scale enterprise networks. It presents methodologies, techniques and tools for researchers as well as professionals to understand the challenges and improve the state-of-the-art of managing firewalls systematically in both research and application domains. Chapters explore set-theory, managing firewall configuration globally and consistently, access control lists with encryption, and authentication such as IPSec policies. The author

  2. Optimization of submerged vane parameters

    Indian Academy of Sciences (India)

    H SHARMA; B JAIN; Z AHMAD

    2016-03-01

    Submerged vanes are airfoils which are in general placed at a certain angle with respect to the flow direction in a channel to induce artificial circulations downstream. By virtue of these artificially generated circulations, submerged vanes have been utilized to protect river banks against erosion, to control the shifting of rivers, to avoid blocking of lateral intakes with sediment deposition, etc. Odgaard and his associates experimentally obtained the optimum vane sizes and recommended them for vane design. This paper is an attempt to review and validate the findings of Odgaard and his associates by utilizing computational fluid dynamics and experiments as tools, in which the vane-generated vorticity downstream was maximized in order to obtain optimum vane parameters for single and multiple vane arrays.

  3. Controller Design Automation for Aeroservoelastic Design Optimization of Wind Turbines

    NARCIS (Netherlands)

    Ashuri, T.; Van Bussel, G.J.W.; Zaayer, M.B.; Van Kuik, G.A.M.

    2010-01-01

    The purpose of this paper is to integrate the controller design of wind turbines with structure and aerodynamic analysis and use the final product in the design optimization process (DOP) of wind turbines. To do that, the controller design is automated and integrated with an aeroelastic simulation t

  4. Weigh in Motion Based on Parameters Optimization

    Institute of Scientific and Technical Information of China (English)

    ZHOU Zhi-feng; CAI Ping; CHEN Ri-xing

    2009-01-01

    Dynamic tire forces are the main factor affecting the measurement accuracy of the axle weight of a moving vehicle. This paper presents a novel method to reduce the influence of the dynamic tire forces on the weighing accuracy. On the basis of analyzing the characteristics of the dynamic tire forces, the objective optimization equation is constructed. An optimization algorithm is presented to obtain the optimal estimates of the objective parameters. According to the estimates of the parameters, the dynamic tire forces are separated from the axle weight signal. The results of simulation and field experiments prove the effectiveness of the proposed method.

  5. Cosmological parameter estimation using Particle Swarm Optimization

    Science.gov (United States)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and which make the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
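
    A minimal PSO sketch of the kind of search described above, with a toy negative log-likelihood in place of a real CMB likelihood code; swarm size, inertia and acceleration coefficients are conventional but arbitrary choices.

```python
# Minimal particle swarm optimization sketch. The objective here is a toy
# negative log-likelihood; a real analysis would call a CMB likelihood code.
import numpy as np

def neg_log_like(theta):
    """Hypothetical stand-in for -log L(theta | data)."""
    return np.sum((theta - np.array([0.3, 0.7, 0.96])) ** 2, axis=-1)

rng = np.random.default_rng(0)
n_particles, n_dim, n_iter = 30, 3, 200
w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients

pos = rng.uniform(0.0, 1.5, (n_particles, n_dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), neg_log_like(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = neg_log_like(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best-fit parameters:", gbest)
```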

  6. Automated inference procedure for the determination of cell growth parameters

    Science.gov (United States)

    Harris, Edouard A.; Koh, Eun Jee; Moffat, Jason; McMillen, David R.

    2016-01-01

    The growth rate and carrying capacity of a cell population are key to the characterization of the population's viability and to the quantification of its responses to perturbations such as drug treatments. Accurate estimation of these parameters necessitates careful analysis. Here, we present a rigorous mathematical approach for the robust analysis of cell count data, in which all the experimental stages of the cell counting process are investigated in detail with the machinery of Bayesian probability theory. We advance a flexible theoretical framework that permits accurate estimates of the growth parameters of cell populations and of the logical correlations between them. Moreover, our approach naturally produces an objective metric of avoidable experimental error, which may be tracked over time in a laboratory to detect instrumentation failures or lapses in protocol. We apply our method to the analysis of cell count data in the context of a logistic growth model by means of a user-friendly computer program that automates this analysis, and present some samples of its output. Finally, we note that a traditional least squares fit can provide misleading estimates of parameter values, because it ignores available information with regard to the way in which the data have actually been collected.
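
    The logistic growth model mentioned above, fitted here to synthetic counts by plain least squares (scipy's curve_fit). This is the naive baseline the authors argue can mislead, shown only to make the parameters (initial count, growth rate, carrying capacity) concrete; their Bayesian treatment of the counting process is not reproduced.

```python
# Logistic growth model fitted by ordinary least squares to synthetic counts.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, n0, r, k):
    """Logistic growth: n0 initial count, r growth rate, k carrying capacity."""
    return k / (1.0 + (k / n0 - 1.0) * np.exp(-r * t))

# Synthetic cell-count data with multiplicative noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 72, 13)                      # hours
true = logistic(t, 5e4, 0.12, 2e6)
counts = rng.normal(true, 0.05 * true)

popt, pcov = curve_fit(logistic, t, counts, p0=(1e5, 0.1, 1e6), maxfev=10000)
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(("n0", "r", "k"), popt, perr):
    print(f"{name} = {val:.3g} +/- {err:.2g}")
```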

  7. Towards Automated Design, Analysis and Optimization of Declarative Curation Workflows

    Directory of Open Access Journals (Sweden)

    Tianhong Song

    2014-10-01

    Data curation is increasingly important. Our previous work on a Kepler curation package has demonstrated advantages that come from automating data curation pipelines by using workflow systems. However, manually designed curation workflows can be error-prone and inefficient due to a lack of user understanding of the workflow system, misuse of actors, or human error. Correcting problematic workflows is often very time-consuming. A more proactive workflow system can help users avoid such pitfalls. For example, static analysis before execution can be used to detect the potential problems in a workflow and help the user to improve workflow design. In this paper, we propose a declarative workflow approach that supports semi-automated workflow design, analysis and optimization. We show how the workflow design engine helps users to construct data curation workflows, how the workflow analysis engine detects different design problems of workflows and how workflows can be optimized by exploiting parallelism.

  8. Mixed integer evolution strategies for parameter optimization.

    Science.gov (United States)

    Li, Rui; Emmerich, Michael T M; Eggermont, Jeroen; Bäck, Thomas; Schütz, M; Dijkstra, J; Reiber, J H C

    2013-01-01

    Evolution strategies (ESs) are powerful probabilistic search and optimization algorithms gleaned from biological evolution theory. They have been successfully applied to a wide range of real world applications. The modern ESs are mainly designed for solving continuous parameter optimization problems. Their ability to adapt the parameters of the multivariate normal distribution used for mutation during the optimization run makes them well suited for this domain. In this article we describe and study mixed integer evolution strategies (MIES), which are natural extensions of ES for mixed integer optimization problems. MIES can deal with parameter vectors consisting not only of continuous variables but also with nominal discrete and integer variables. Following the design principles of the canonical evolution strategies, they use specialized mutation operators tailored for the aforementioned mixed parameter classes. For each type of variable, the choice of mutation operators is governed by a natural metric for this variable type, maximal entropy, and symmetry considerations. All distributions used for mutation can be controlled in their shape by means of scaling parameters, allowing self-adaptation to be implemented. After introducing and motivating the conceptual design of the MIES, we study the optimality of the self-adaptation of step sizes and mutation rates on a generalized (weighted) sphere model. Moreover, we prove global convergence of the MIES on a very general class of problems. The remainder of the article is devoted to performance studies on artificial landscapes (barrier functions and mixed integer NK landscapes), and a case study in the optimization of medical image analysis systems. In addition, we show that with proper constraint handling techniques, MIES can also be applied to classical mixed integer nonlinear programming problems. PMID:22122384
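
    A much-simplified sketch of the mixed-variable mutation idea: each variable class gets its own operator, with log-normal self-adaptation of the continuous step size. The integer operator below is a rounded Gaussian and the nominal values are made up, so this illustrates the concept rather than the operators defined in the article.

```python
# Simplified mixed-integer ES mutation in the spirit of MIES: continuous, integer
# and nominal variables each get their own mutation operator.
import math
import random

rng = random.Random(0)

def mutate(ind):
    """ind = {'real': [...], 'int': [...], 'nominal': [...], 'sigma': float, 'p': float}."""
    child = {k: list(v) if isinstance(v, list) else v for k, v in ind.items()}

    # Self-adapt the continuous step size (log-normal rule), then mutate reals.
    tau = 1.0 / math.sqrt(2 * len(child["real"]))
    child["sigma"] = ind["sigma"] * math.exp(tau * rng.gauss(0, 1))
    child["real"] = [x + child["sigma"] * rng.gauss(0, 1) for x in ind["real"]]

    # Integer variables: symmetric discrete perturbation (rounded Gaussian here).
    child["int"] = [x + round(rng.gauss(0, 2)) for x in ind["int"]]

    # Nominal variables: uniform reset with self-adapted probability p.
    choices = ["linear", "rbf", "poly"]
    child["p"] = min(0.5, max(0.01, ind["p"] * math.exp(0.2 * rng.gauss(0, 1))))
    child["nominal"] = [rng.choice(choices) if rng.random() < child["p"] else v
                        for v in ind["nominal"]]
    return child

parent = {"real": [0.5, -1.2], "int": [8, 3], "nominal": ["rbf"], "sigma": 0.3, "p": 0.2}
print(mutate(parent))
```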

  9. An Optimization Model of Tunnel Support Parameters

    Directory of Open Access Journals (Sweden)

    Su Lijuan

    2015-05-01

    An optimization model was developed to obtain the ideal values of the primary support parameters of tunnels, which are wide-ranging in high-speed railway design codes when the surrounding rocks are at the III, IV, and V levels. First, several sets of experiments were designed and simulated using the FLAC3D software under an orthogonal experimental design. Six factors, namely, level of surrounding rock, buried depth of tunnel, lateral pressure coefficient, anchor spacing, anchor length, and shotcrete thickness, were considered. Second, a regression equation was generated by conducting a multiple linear regression analysis following the analysis of the simulation results. Finally, the optimization model of support parameters was obtained by solving the regression equation using the least squares method. In practical projects, the optimized values of support parameters could be obtained by integrating known parameters into the proposed model. In this work, the proposed model was verified on the basis of the Liuyang River Tunnel Project. Results show that the optimization model significantly reduces related costs. The proposed model can also be used as a reliable reference for other high-speed railway tunnels.
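
    A sketch of the regression step described above, using ordinary least squares on invented orthogonal-design results; only three of the six factors are included here and all numbers are illustrative.

```python
# Ordinary least squares on (made-up) orthogonal-design simulation results. The
# response could be, e.g., crown settlement from FLAC3D runs.
import numpy as np

# Columns: anchor spacing (m), anchor length (m), shotcrete thickness (cm).
X = np.array([
    [1.0, 3.0, 20],
    [1.2, 3.5, 22],
    [1.0, 4.0, 25],
    [1.5, 3.0, 28],
    [1.2, 4.5, 30],
    [1.5, 5.0, 32],
    [0.8, 2.5, 18],
    [0.8, 4.0, 24],
], dtype=float)
y = np.array([12.1, 14.4, 13.9, 17.3, 15.8, 18.2, 11.0, 13.2])   # response, mm

# Add an intercept column and solve the regression by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and factor coefficients:", np.round(coef, 3))
```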

  10. Automated Multivariate Optimization Tool for Energy Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, P. G.; Griffith, B. T.; Long, N.; Torcellini, P. A.; Crawley, D.

    2006-07-01

    Building energy simulations are often used for trial-and-error evaluation of "what-if" options in building design--a limited search for an optimal solution, or "optimization". Computerized searching has the potential to automate the input and output, evaluate many options, and perform enough simulations to account for the complex interactions among combinations of options. This paper describes ongoing efforts to develop such a tool. The optimization tool employs multiple modules, including a graphical user interface, a database, a preprocessor, the EnergyPlus simulation engine, an optimization engine, and a simulation run manager. Each module is described and the overall application architecture is summarized.

  11. A fully-automated software pipeline for integrating breast density and parenchymal texture analysis for digital mammograms: parameter optimization in a case-control breast cancer risk assessment study

    Science.gov (United States)

    Zheng, Yuanjie; Wang, Yan; Keller, Brad M.; Conant, Emily; Gee, James C.; Kontos, Despina

    2013-02-01

    Estimating a woman's risk of breast cancer is becoming increasingly important in clinical practice. Mammographic density, estimated as the percent of dense (PD) tissue area within the breast, has been shown to be a strong risk factor. Studies also support a relationship between mammographic texture and breast cancer risk. We have developed a fully-automated software pipeline for computerized analysis of digital mammography parenchymal patterns by quantitatively measuring both breast density and texture properties. Our pipeline combines advanced computer algorithms of pattern recognition, computer vision, and machine learning and offers a standardized tool for breast cancer risk assessment studies. Different from many existing methods performing parenchymal texture analysis within specific breast subregions, our pipeline extracts texture descriptors for points on a spatial regular lattice and from a surrounding window of each lattice point, to characterize the local mammographic appearance throughout the whole breast. To demonstrate the utility of our pipeline, and optimize its parameters, we perform a case-control study by retrospectively analyzing a total of 472 digital mammography studies. Specifically, we investigate the window size, which is a lattice-related parameter, and compare the performance of texture features to that of breast PD in classifying case-control status. Our results suggest that different window sizes may be optimal for raw (12.7 mm²) versus vendor post-processed images (6.3 mm²). We also show that the combination of PD and texture features outperforms PD alone. The improvement is significant (p=0.03) when raw images and a window size of 12.7 mm² are used, having an ROC AUC of 0.66. The combination of PD and our texture features computed from post-processed images with a window size of 6.3 mm² achieves an ROC AUC of 0.75.

  12. Automated process parameters tuning for an injection moulding machine with soft computing

    Institute of Scientific and Technical Information of China (English)

    Peng ZHAO; Jian-zhong FU; Hua-min ZHOU; Shu-biao CUI

    2011-01-01

    In injection moulding production, the tuning of the process parameters is a challenging job, which relies heavily on the experience of skilled operators. In this paper, taking into consideration operator assessment during moulding trials, a novel intelligent model for automated tuning of process parameters is proposed. It consists of case-based reasoning (CBR), empirical model (EM), and fuzzy logic (FL) methods. CBR and EM are used to imitate the recall and intuitive thinking of skilled operators, respectively, while FL is adopted to simulate their optimization reasoning. First, CBR is used to set up the initial process parameters. If CBR fails, EM is employed to calculate the initial parameters. Next, a moulding trial is performed using the initial parameters. Then FL is adopted to optimize these parameters and correct defects repeatedly until the moulded part is found to be satisfactory. Based on the above methodologies, intelligent software was developed and embedded in the controller of an injection moulding machine. Experimental results show that the intelligent software can be effectively used in practical production, and that it greatly reduces the dependence on the experience of the operators.

  13. Optimization of parameters of heat exchangers vehicles

    Directory of Open Access Journals (Sweden)

    Andrei MELEKHIN

    2014-09-01

    The relevance of the topic stems from the problem of saving resources in vehicle heating systems. To address this problem we have developed an integrated research method that allows the parameters of vehicle heat exchangers to be optimized. The method solves a multicriteria optimization problem using nonlinear optimization software fed with an array of temperatures obtained by thermography. The authors have developed a mathematical model of the heat exchange process on the heat-transfer surfaces of the apparatus, solved the multicriteria optimization problem and checked its adequacy against an experimental stand with visualization of the thermal fields, obtained an optimal range of the controlled parameters influencing the heat exchange process with minimal metal consumption and maximum heat output of the finned heat exchanger, derived generalizing dependencies for the temperature distribution on the heat-release surface of vehicle heat exchangers, and determined the convergence between the results calculated from theoretical dependencies and the solution of the mathematical model.

  14. SAE2.py: a python script to automate parameter studies using SCREAMER with application to magnetic switching on Z.

    Energy Technology Data Exchange (ETDEWEB)

    Orndorff-Plunkett, Franklin

    2011-05-01

    The SCREAMER simulation code is widely used at Sandia National Laboratories for designing and simulating pulsed power accelerator experiments on super power accelerators. A preliminary parameter study of Z with a magnetic switching retrofit illustrates the utility of the automating script for optimizing pulsed power designs. SCREAMER is a circuit based code commonly used in pulsed-power design and requires numerous iterations to find optimal configurations. System optimization using simulations like SCREAMER is by nature inefficient and incomplete when done manually. This is especially the case when the system has many interactive elements whose emergent effects may be unforeseeable and complicated. For increased completeness, efficiency and robustness, investigators should probe a suitably confined parameter space using deterministic, genetic, cultural, ant-colony algorithms or other computational intelligence methods. I have developed SAE2 - a user-friendly, deterministic script that automates the search for optima of pulsed-power designs with SCREAMER. This manual demonstrates how to make input decks for SAE2 and optimize any pulsed-power design that can be modeled using SCREAMER. Application of SAE2 to magnetic switching on model of a potential Z refurbishment illustrates the power of SAE2. With respect to the manual optimization, the automated optimization resulted in 5% greater peak current (10% greater energy) and a 25% increase in safety factor for the most highly stressed element.

  15. SAE2.py: a python script to automate parameter studies using SCREAMER with application to magnetic switching on Z

    International Nuclear Information System (INIS)

    The SCREAMER simulation code is widely used at Sandia National Laboratories for designing and simulating pulsed power accelerator experiments on super power accelerators. A preliminary parameter study of Z with a magnetic switching retrofit illustrates the utility of the automating script for optimizing pulsed power designs. SCREAMER is a circuit based code commonly used in pulsed-power design and requires numerous iterations to find optimal configurations. System optimization using simulations like SCREAMER is by nature inefficient and incomplete when done manually. This is especially the case when the system has many interactive elements whose emergent effects may be unforeseeable and complicated. For increased completeness, efficiency and robustness, investigators should probe a suitably confined parameter space using deterministic, genetic, cultural, ant-colony algorithms or other computational intelligence methods. I have developed SAE2 - a user-friendly, deterministic script that automates the search for optima of pulsed-power designs with SCREAMER. This manual demonstrates how to make input decks for SAE2 and optimize any pulsed-power design that can be modeled using SCREAMER. Application of SAE2 to magnetic switching on model of a potential Z refurbishment illustrates the power of SAE2. With respect to the manual optimization, the automated optimization resulted in 5% greater peak current (10% greater energy) and a 25% increase in safety factor for the most highly stressed element.

  16. An automated system for measuring parameters of nematode sinusoidal movement

    Directory of Open Access Journals (Sweden)

    Stirbl Robert C

    2005-02-01

    Background: Nematode sinusoidal movement has been used as a phenotype in many studies of C. elegans development, behavior and physiology. A thorough understanding of the ways in which genes control these aspects of biology depends, in part, on the accuracy of phenotypic analysis. While worms that move poorly are relatively easy to describe, description of hyperactive movement and movement modulation presents more of a challenge. An enhanced capability to analyze all the complexities of nematode movement will thus help our understanding of how genes control behavior. Results: We have developed a user-friendly system to analyze nematode movement in an automated and quantitative manner. In this system nematodes are automatically recognized and a computer-controlled microscope stage ensures that the nematode is kept within the camera field of view while video images from the camera are stored on videotape. In a second step, the images from the videotapes are processed to recognize the worm and to extract its changing position and posture over time. From this information, a variety of movement parameters are calculated. These parameters include the velocity of the worm's centroid, the velocity of the worm along its track, the extent and frequency of body bending, the amplitude and wavelength of the sinusoidal movement, and the propagation of the contraction wave along the body. The length of the worm is also determined and used to normalize the amplitude and wavelength measurements. To demonstrate the utility of this system, we report here a comparison of movement parameters for a small set of mutants affecting the Go/Gq mediated signaling network that controls acetylcholine release at the neuromuscular junction. The system allows comparison of distinct genotypes that affect movement similarly (activation of Gq-alpha versus loss of Go-alpha function), as well as of different mutant alleles at a single locus (null and dominant negative alleles).

  17. Autonomous space systems control incorporating automated maneuvers strategies in the presence of parameters uncertainties.

    Science.gov (United States)

    Mazinan, A H; Shakhesi, S

    2016-05-01

    The research deals with autonomous space systems incorporating new automated maneuver strategies in the presence of parameter uncertainties. The main aim of the investigation is to realize high-resolution, small-amplitude orbital maneuvers via the first control strategy and, subsequently, large-amplitude orbital maneuvers via the second control strategy. A trajectory optimization provides the three-axis reference commands that allow the overactuated autonomous space system to transfer from its initial orbit to the final one in a finite burn, while the uncertainties of key system parameters such as the thrust vector, the center of gravity and the moments of inertia are taken into consideration. The performance of the strategies is finally verified through a series of experiments and a number of benchmarks. PMID:26895709

  18. Particle Swarm Optimization for Calibrating and Optimizing Xinanjiang Model Parameters

    Directory of Open Access Journals (Sweden)

    Kuok King Kuok

    2012-09-01

    The Xinanjiang model, a conceptual hydrological model, is well known and has been widely used in China since the 1970s. Therefore, most of the parameters in the Xinanjiang model have been calibrated and pre-set according to the different climate, dryness, wetness, humidity and topography of various catchment areas in China. However, the Xinanjiang model has not yet been applied in Malaysia and the optimal parameters are not known. The calibration of Xinanjiang model parameters through the trial-and-error method requires much time and effort to obtain good results. Therefore, Particle Swarm Optimization (PSO) is adopted to calibrate the Xinanjiang model parameters automatically. In this paper, the PSO algorithm is used to find the best set of parameters for both daily and hourly models. The selected study area is the Bedup Basin, located at Samarahan Division, Sarawak, Malaysia. For the daily model, the input data used for model calibration were daily rainfall data of Year 2001, validated with data of Years 1990, 1992, 2000, 2002 and 2003. A single storm event dated 9th to 12th October 2003 was used to calibrate the hourly model, validated with 12 different storm events. The accuracy of the simulation results is measured using the Coefficient of Correlation (R) and the Nash-Sutcliffe Coefficient (E2). Results show that PSO is able to optimize the 12 parameters of the Xinanjiang model accurately. For the daily model, the best R and E2 for model calibration are found to be 0.775 and 0.715 respectively, with average R=0.622 and E2=0.579 for the validation set. Meanwhile, R=0.859 and E2=0.892 are yielded when calibrating the hourly model, and the average R and E2 obtained are 0.705 and 0.647 respectively for the validation set.

  19. Video Superresolution via Parameter-Optimized Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Yunyi Yan

    2014-01-01

    Video superresolution (VSR) aims to reconstruct a high-resolution video sequence from a low-resolution sequence. We propose a novel particle swarm optimization algorithm named parameter-optimized multiple swarms PSO (POMS-PSO). We assessed the optimization performance of POMS-PSO on four standard benchmark functions. To reconstruct high-resolution video, we build an imaging degradation model. In terms of optimization, VSR is converted to an optimization computation problem, and we take POMS-PSO as the optimization method to solve the VSR problem, which overcomes the poor effect, low accuracy, and large calculation cost of other VSR algorithms. The proposed VSR method does not require exact movement estimation and does not need the computation of movement vectors. In terms of peak signal-to-noise ratio (PSNR), sharpness, and entropy, the proposed POMS-PSO-based VSR method showed better objective performance. Besides the objective standard, experimental results also proved that the proposed method could reconstruct high-resolution video sequences with better subjective quality.

  20. Optimal deadlock avoidance Petri net supervisors for automated manufacturing systems

    Institute of Scientific and Technical Information of China (English)

    Keyi XING; Feng TIAN; Xiaojun YANG

    2007-01-01

    Deadlock avoidance problems are investigated for automated manufacturing systems with flexible routings. Based on the Petri net models of the systems, this paper proposes, for the first time, the concept of perfect maximal resource-transition circuits and their saturated states. The concept facilitates the development of system liveness characterization and deadlock avoidance Petri net supervisors. Deadlock is characterized as some perfect maximal resource-transition circuits reaching their saturated states. For a large class of manufacturing systems, which do not contain center resources, the optimal deadlock avoidance Petri net supervisors are presented. For a general manufacturing system, a method is proposed for reducing the system Petri net model so that the reduced model does not contain center resources and, hence, has an optimal deadlock avoidance Petri net supervisor. The controlled reduced Petri net model can then be used as the liveness supervisor of the system.

  1. Optimization of audio - ultrasonic plasma system parameters

    Science.gov (United States)

    Haleem, N. A.; Abdelrahman, M. M.; Ragheb, M. S.

    2016-10-01

    The present plasma is a special glow plasma type generated by an audio-ultrasonic discharge voltage. A definite discharge frequency using a gas at a narrow band of pressure creates and stabilizes this plasma type. The plasma cell produces a self-extracted ion beam; it is notable for its high output intensity and its small size. The influence of the plasma column length on the output beam, due to the variation of both the audio discharge frequency and the power applied to the plasma electrodes, is investigated. Consequently, the aim of the present work is to identify the parameters that influence the self-extracted collected ion beam and to optimize the conditions that enhance it. The experimental parameters studied are the nitrogen gas, the applied frequency from 10 to 100 kHz, the plasma length varied from 8 to 14 cm at a gas pressure of ≈ 0.25 Torr, and finally the discharge power from 50 to 500 W. A 5-micrometer polyethylene sheet covers the collector electrode in order to confirm how many ions from the beam can pass through the polymer and reach the collector. To diagnose the events occurring at the collector, the polymer used is analyzed by means of the FTIR and XRF techniques. Optimization of the plasma cell parameters succeeded in enhancing the beam and in identifying the parameters that influence the output ion beam, and proved that the particles reaching the collector are multi-energetic.

  2. An optimized method for automated analysis of algal pigments by HPLC

    NARCIS (Netherlands)

    van Leeuwe, M. A.; Villerius, L. A.; Roggeveld, J.; Visser, R. J. W.; Stefels, J.

    2006-01-01

    A recent development in algal pigment analysis by high-performance liquid chromatography (HPLC) is the application of automation. An optimization of a complete sampling and analysis protocol applied specifically in automation has not yet been performed. In this paper we show that automation can only

  3. The optimization of total laboratory automation by simulation of a pull-strategy.

    Science.gov (United States)

    Yang, Taho; Wang, Teng-Kuan; Li, Vincent C; Su, Chia-Lo

    2015-01-01

    Laboratory results are essential for physicians to diagnose medical conditions. Because of the critical role of medical laboratories, an increasing number of hospitals use total laboratory automation (TLA) to improve laboratory performance. Although the benefits of TLA are well documented, systems occasionally become congested, particularly when hospitals face peak demand. This study optimizes TLA operations. Firstly, value stream mapping (VSM) is used to identify the non-value-added time. Subsequently, batch processing control and parallel scheduling rules are devised and a pull mechanism that comprises a constant work-in-process (CONWIP) is proposed. Simulation optimization is then used to optimize the design parameters and to ensure a small inventory and a shorter average cycle time (CT). For empirical illustration, this approach is applied to a real case. The proposed methodology significantly improves the efficiency of laboratory work and leads to a reduction in patient waiting times and increased service level.

  4. GA based CNC turning center exploitation process parameters optimization

    Directory of Open Access Journals (Sweden)

    Z. Car

    2009-01-01

    This paper presents machining parameter (turning process) optimization based on the use of artificial intelligence. To obtain greater efficiency and productivity of the machine tool, optimal cutting parameters have to be obtained. In order to find optimal cutting parameters, the genetic algorithm (GA) has been used as an optimal solution finder. The optimization has to yield minimum machining time and minimum production cost, while considering technological and material constraints.
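
    A tiny GA sketch in the spirit of the record above: evolve (cutting speed, feed) to minimize a toy machining-time objective with a penalty standing in for the technological constraints. The objective, bounds and GA settings are invented for illustration and are not the paper's formulation.

```python
# Toy genetic algorithm over two cutting parameters with a simple penalty term.
import random

rng = random.Random(0)
BOUNDS = [(60.0, 300.0), (0.05, 0.5)]   # cutting speed (m/min), feed (mm/rev)

def machining_cost(ind):
    v, f = ind
    time_term = 1.0e3 / (v * f)                    # removal time falls with v*f
    penalty = max(0.0, v * f - 60.0) * 10.0        # hypothetical surface-finish limit
    return time_term + penalty

def random_ind():
    return [rng.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    return [(x + y) / 2.0 for x, y in zip(a, b)]   # simple arithmetic crossover

def mutate(ind, rate=0.2):
    return [min(hi, max(lo, x + rng.gauss(0, 0.05 * (hi - lo)))) if rng.random() < rate else x
            for x, (lo, hi) in zip(ind, BOUNDS)]

pop = [random_ind() for _ in range(40)]
for _ in range(100):
    pop.sort(key=machining_cost)
    parents = pop[:10]                             # truncation selection
    pop = parents + [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                     for _ in range(30)]

best = min(pop, key=machining_cost)
print("best speed/feed:", [round(x, 3) for x in best],
      "cost:", round(machining_cost(best), 3))
```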

  5. Parameter optimization model in electrical discharge machining process

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The electrical discharge machining (EDM) process is, at present, still an experience-driven process, wherein the selected parameters are often far from the optimum, and at the same time selecting optimization parameters is costly and time consuming. In this paper, an artificial neural network (ANN) and a genetic algorithm (GA) are used together to establish the parameter optimization model. An ANN model which adopts the Levenberg-Marquardt algorithm has been set up to represent the relationship between the material removal rate (MRR) and the input parameters, and the GA is used to optimize the parameters, so that optimization results are obtained. The model is shown to be effective, and MRR is improved using the optimized machining parameters.

  6. Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly

    Science.gov (United States)

    Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.

    2013-01-01

    Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…

  7. Optimizing wireless LAN for longwall coal mine automation

    Energy Technology Data Exchange (ETDEWEB)

    Hargrave, C.O.; Ralston, J.C.; Hainsworth, D.W. [Exploration & Mining Commonwealth Science & Industrial Research Organisation, Pullenvale, Qld. (Australia)

    2007-01-15

    A significant development in underground longwall coal mining automation has been achieved with the successful implementation of wireless LAN (WLAN) technology for communication on a longwall shearer. WIreless-FIdelity (Wi-Fi) was selected to meet the bandwidth requirements of the underground data network, and several configurations were installed on operating longwalls to evaluate their performance. Although these efforts demonstrated the feasibility of using WLAN technology in longwall operation, it was clear that new research and development was required in order to establish optimal full-face coverage. By undertaking an accurate characterization of the target environment, it has been possible to achieve great improvements in WLAN performance over a nominal Wi-Fi installation. This paper discusses the impact of Fresnel zone obstructions and multipath effects on radio frequency propagation and reports an optimal antenna and system configuration. Many of the lessons learned in the longwall case are immediately applicable to other underground mining operations, particularly wherever there is a high degree of obstruction from mining equipment.

  8. Automated Modal Parameter Estimation of Civil Engineering Structures

    DEFF Research Database (Denmark)

    Andersen, Palle; Brincker, Rune; Goursat, Maurice;

    In this paper the problems of automatic modal parameter extraction of ambient-excited civil engineering structures are considered. Two different approaches for obtaining the modal parameters automatically are presented: the Frequency Domain Decomposition (FDD) technique and a correlation...

  9. MOS PARAMETER EXTRACTION AND OPTIMIZATION WITH GENETIC ALGORITHM

    OpenAIRE

    BAŞAK, M.Emin; KUNTMAN, Ayten; Kuntman, Hakan

    2010-01-01

    Extracting an optimal set of parameter values for a MOS device is of great importance in contemporary technology and is a complex problem. Traditional methods of parameter extraction can produce far-from-optimal solutions because of the presence of local optima in the solution space. Genetic algorithms are well suited for finding near-optimal solutions in irregular parameter spaces. In this study*, we have applied a genetic algorithm to the problem of device model parameter extraction and are able to pr...

  10. Automated Portfolio Optimization Based on a New Test for Structural Breaks

    Directory of Open Access Journals (Sweden)

    Tobias Berens

    2014-04-01

    Full Text Available We present a completely automated optimization strategy which combines the classical Markowitz mean-variance portfolio theory with a recently proposed test for structural breaks in covariance matrices. With respect to equity portfolios, global minimum-variance optimizations, which are based solely on the covariance matrix, have yielded considerable results in previous studies. However, financial assets cannot be assumed to have a constant covariance matrix over longer periods of time. Hence, we estimate the covariance matrix of the assets by respecting potential change points. The resulting approach resolves the issue of determining a sample for parameter estimation. Moreover, we investigate whether this approach is also appropriate for timing the reoptimizations. Finally, we apply the approach to two datasets and compare the results to relevant benchmark techniques by means of an out-of-sample study. It is shown that the new approach outperforms equally weighted portfolios and plain minimum-variance portfolios on average.
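
    Ignoring the structural-break test itself, the plain global minimum-variance step mentioned above can be sketched as follows; once a break is detected, the covariance would simply be re-estimated on the post-break window. The return data and the break location below are simulated assumptions.

    ```python
    import numpy as np

    def gmv_weights(returns):
        """Global minimum-variance weights: w = S^-1 1 / (1' S^-1 1)."""
        cov = np.cov(returns, rowvar=False)
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()

    rng = np.random.default_rng(1)
    # Simulated returns for 4 assets with a covariance change after day 250.
    pre_break = rng.multivariate_normal([0.0] * 4, 0.0001 * np.eye(4), size=250)
    post_break = rng.multivariate_normal([0.0] * 4, 0.0004 * (np.eye(4) + 0.5), size=250)
    sample = np.vstack([pre_break, post_break])

    # Naive full-sample estimate versus an estimate restricted to the post-break
    # window (the break location is assumed known here, at t = 250).
    print("full-sample GMV weights:", np.round(gmv_weights(sample), 3))
    print("post-break GMV weights :", np.round(gmv_weights(sample[250:]), 3))
    ```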

  11. Coarse Estimation of Physical Parameters of Eclipsing Binaries by Means of Automative Scripting

    CERN Document Server

    Prsa, A

    2003-01-01

    Because of GAIA's estimated harvest of ca. 100,000 eclipsing binaries (Munari et al. 2001), automated procedures for extracting physical parameters from observations must be introduced. We present the preliminary results of automated scripting applied to 5 eclipsing binaries, for which photometric and radial velocity observations were taken from the literature. Although the results are encouraging, extensive testing on a wider sample has to be performed.

  12. A Discrete Particle Swarm Optimization to Estimate Parameters in Vision Tasks

    Directory of Open Access Journals (Sweden)

    Benchikhi Loubna

    2016-01-01

    Full Text Available The majority of manufacturers demand increasingly powerful vision systems for quality control. To obtain good outcomes, the installation requires an effort in vision system tuning, for both hardware and software. As time and accuracy are important, practitioners tend to automate parameter adjustment, at least in image processing. This paper suggests an approach based on discrete particle swarm optimization (DPSO) that automates software settings and provides optimal parameters for industrial vision applications. Novel update functions for our DPSO definition are suggested. The proposed method is applied to some real examples of quality control to validate its feasibility and efficiency, which shows that the new DPSO model furnishes promising results.

  13. Architecture of Automated Database Tuning Using SGA Parameters

    Directory of Open Access Journals (Sweden)

    Hitesh KUMAR SHARMA

    2012-05-01

    Full Text Available Business data always grow, from kilobytes to megabytes, gigabytes, terabytes, petabytes, and beyond. There is no way to avoid this increasing rate of data while a business is still running. Because of this, database tuning has become a critical part of an information system. Tuning a database in a cost-effective manner is a growing challenge. The total cost of ownership (TCO) of information technology needs to be significantly reduced by minimizing people costs. In fact, mistakes in the operation and administration of information systems are the single most common reason for system outage and unacceptable performance [3]. One way of addressing the challenge of total cost of ownership is by making information systems more self-managing. A particularly difficult piece of the ambitious vision of making database systems self-managing is the automation of database performance tuning. In this paper, we will explain the progress made thus far on this important problem. Specifically, we will propose the architecture and algorithm for this problem.

  14. Optimalization of selected RFID systems Parameters

    Directory of Open Access Journals (Sweden)

    Peter Vestenicky

    2004-01-01

    Full Text Available This paper describes a procedure for maximization of the RFID transponder read range. This is done by optimization of the magnetic field intensity at the transponder location and by optimization of the coupling factor between the antenna and transponder coils. The results of this paper can be used for RFID with an inductive loop, i.e. systems working in the near electromagnetic field.

  15. Optimalization of selected RFID systems Parameters

    OpenAIRE

    Peter Vestenicky

    2004-01-01

    This paper describes a procedure for maximization of the RFID transponder read range. This is done by optimization of the magnetic field intensity at the transponder location and by optimization of the coupling factor between the antenna and transponder coils. The results of this paper can be used for RFID with an inductive loop, i.e. systems working in the near electromagnetic field.

  16. A critical analysis of parameter adaptation in ant colony optimization

    OpenAIRE

    PELLEGRINI, Paola; Stützle, Thomas; Birattari, Mauro

    2012-01-01

    Applying parameter adaptation means operating on parameters of an algorithm while it is tackling an instance. For ant colony optimization, several parameter adaptation methods have been proposed. In the literature, these methods have been shown to improve the quality of the results achieved in some particular contexts. In particular, they proved to be successful when applied to novel ant colony optimization algorithms for tackling problems that are not a classical testbed for optimization alg...

  17. The structure of optimal parameters for image restoration problems

    OpenAIRE

    de los Reyes, J. C.; Schönlieb, C. B.; Valkonen, T.

    2015-01-01

    We study the qualitative properties of optimal regularisation parameters in variational models for image restoration. The parameters are solutions of bilevel optimisation problems with the image restoration problem as constraint. A general type of regulariser is considered, which encompasses total variation (TV), total generalized variation (TGV) and infimal-convolution total variation (ICTV). We prove that under certain conditions on the given data optimal parameters derived by bilevel optim...

  18. Optimal parameters uncoupling vibration modes of oscillators

    CERN Document Server

    Le, Khanh Chau

    2016-01-01

    A novel optimization concept for an oscillator with two degrees of freedom is proposed. By using specially defined motion ratios, we control the action of springs and dampers to each degree of freedom of the oscillator. If the potential action of the springs in one period of vibration, used as the payoff function for the conservative oscillator, is maximized, then the optimal motion ratios uncouple vibration modes. The same result holds true for the dissipative oscillator. The application to optimal design of vehicle suspension is discussed.

  19. A New Approach for Parameter Optimization in Land Surface Model

    Institute of Scientific and Technical Information of China (English)

    LI Hongqi; GUO Weidong; SUN Guodong; ZHANG Yaocun; FU Congbin

    2011-01-01

    In this study, a new parameter optimization method was used to investigate the expansion of conditional nonlinear optimal perturbation (CNOP) in a land surface model (LSM), using long-term enhanced field observations at Tongyu station in Jilin Province, China, combined with a sophisticated LSM (Common Land Model, CoLM). Tongyu station is a reference site of the international Coordinated Energy and Water Cycle Observations Project (CEOP) that has studied semiarid regions that have undergone desertification, salination, and degradation since the late 1960s. In this study, three key land-surface parameters, namely, soil color, proportion of sand or clay in soil, and leaf-area index, were chosen as the parameters to be optimized. Our study comprised three experiments: the first performed a single-parameter optimization, while the second and third performed triple- and six-parameter optimizations, respectively. Notable improvements in simulating sensible heat flux (SH), latent heat flux (LH), soil temperature (TS), and moisture (MS) at shallow layers were achieved using the optimized parameters. The multiple-parameter optimization experiments performed better than the single-parameter experiment. All results demonstrate that the CNOP method can be used to optimize expanded parameters in an LSM. Moreover, clear mathematical meaning, a simple design structure, and rapid computability give this method great potential for further application to parameter optimization in LSMs.

  20. Parameter Optimization Based on GA and HFSS

    Institute of Scientific and Technical Information of China (English)

    SUN Shu-hui; WANG Bing-zhong

    2005-01-01

    A new project based on a genetic algorithm (GA) and the high frequency simulation software (HFSS) is proposed to optimize microwave passive devices effectively. This project is realized with a general program, named the optimization program, which is compiled in Matlab and the macro language of HFSS and is a fast and effective way to accomplish such tasks. In the paper, two examples are used to show the project's feasibility.

  1. Selection of an optimal neural network architecture for computer-aided detection of microcalcifications - Comparison of automated optimization techniques

    International Nuclear Information System (INIS)

    Many computer-aided diagnosis (CAD) systems use neural networks (NNs) for either detection or classification of abnormalities. Currently, most NNs are 'optimized' by manual search in a very limited parameter space. In this work, we evaluated the use of automated optimization methods for selecting an optimal convolution neural network (CNN) architecture. Three automated methods, the steepest descent (SD), the simulated annealing (SA), and the genetic algorithm (GA), were compared. We used as an example the CNN that classifies true and false microcalcifications detected on digitized mammograms by a prescreening algorithm. Four parameters of the CNN architecture were considered for optimization, the numbers of node groups and the filter kernel sizes in the first and second hidden layers, resulting in a search space of 432 possible architectures. The area Az under the receiver operating characteristic (ROC) curve was used to design a cost function. The SA experiments were conducted with four different annealing schedules. Three different parent selection methods were compared for the GA experiments. An available data set was split into two groups with approximately equal number of samples. By using the two groups alternately for training and testing, two different cost surfaces were evaluated. For the first cost surface, the SD method was trapped in a local minimum 91% (392/432) of the time. The SA using the Boltzman schedule selected the best architecture after evaluating, on average, 167 architectures. The GA achieved its best performance with linearly scaled roulette-wheel parent selection; however, it evaluated 391 different architectures, on average, to find the best one. The second cost surface contained no local minimum. For this surface, a simple SD algorithm could quickly find the global minimum, but the SA with the very fast reannealing schedule was still the most efficient. The same SA scheme, however, was trapped in a local minimum on the first cost surface
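
    The simulated annealing search over a discrete architecture space can be sketched as below. Because training a CNN per evaluation is impractical here, the cost function is a synthetic stand-in for 1 - Az, and the candidate values for node groups and kernel sizes are assumptions rather than the 432-architecture space of the study.

    ```python
    import math
    import random

    # Discrete search space analogous to the one described above: numbers of node
    # groups and kernel sizes in two hidden layers.  In the real problem each
    # evaluation would require training and testing a CNN; here a synthetic cost
    # stands in for 1 - Az.
    GROUPS = [2, 4, 6, 8, 10, 12]
    KERNELS = [3, 5, 7, 9, 11, 13]

    def cost(arch):
        g1, k1, g2, k2 = arch
        return ((g1 - 8) ** 2 + (k1 - 7) ** 2 + (g2 - 4) ** 2 + (k2 - 9) ** 2) / 100.0

    def neighbour(arch):
        arch = list(arch)
        i = random.randrange(4)
        arch[i] = random.choice(GROUPS if i in (0, 2) else KERNELS)
        return tuple(arch)

    def simulated_annealing(t0=1.0, cooling=0.95, steps=500):
        current = (random.choice(GROUPS), random.choice(KERNELS),
                   random.choice(GROUPS), random.choice(KERNELS))
        best, t = current, t0
        for _ in range(steps):
            cand = neighbour(current)
            delta = cost(cand) - cost(current)
            if delta < 0 or random.random() < math.exp(-delta / t):   # Boltzmann acceptance
                current = cand
            if cost(current) < cost(best):
                best = current
            t *= cooling                                              # geometric cooling schedule
        return best

    print("selected architecture (g1, k1, g2, k2):", simulated_annealing())
    ```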

  2. An Automated Tool for Optimization of FMS Scheduling With Meta Heuristic Approach

    Directory of Open Access Journals (Sweden)

    A. V. S. Sreedhar Kumar

    2014-03-01

    Full Text Available The evolution of manufacturing systems has reflected the needs and requirements of the market, which vary from time to time. Flexible manufacturing systems (FMS) have contributed greatly to the development of efficient manufacturing processes and the production of a variety of customized, limited-volume products as per market demand based on customer needs. Scheduling of an FMS is a crucial operation for maximizing throughput, reducing waste and increasing the overall efficiency of the manufacturing process. The dynamic nature of flexible manufacturing systems makes them unique, and hence a generalized solution for scheduling is difficult to abstract. Any solution for optimizing the scheduling should take into account a multitude of parameters. The primary objective of the proposed research is to design a tool to automate the optimization of the scheduling process by searching for solutions in the search space using meta-heuristic approaches. The research also validates the use of reward as a means for optimizing the scheduling by including it as one of the parameters in the combined objective function.

  3. The reliability parameters definition in radioelectronic devices automated designing systems

    Directory of Open Access Journals (Sweden)

    Yu. F. Zinkovskiy

    2012-11-01

    Full Text Available The problems of calculating reliability parameters for radioelectronic devices, as determined by their thermal modes, are considered. It is shown that such calculations should be based on methods for determining the temperatures of the separate components of a radio engineering device (RED) electronic structure. Methods for calculating the thermal modes of electronic blocks, cells and microassemblies are considered. Analytical models may be used for the average temperatures of cells in a block; a system of heat exchange equations is proposed for estimating radio component temperatures on the cell plate; and an analytical solution is offered for microassembly temperature estimation. Analytical mathematical models for calculating the reliability indexes of radio components and of the whole RED are determined.

  4. The Robustness Optimization of Parameter Estimation in Chaotic Control Systems

    Directory of Open Access Journals (Sweden)

    Zhen Xu

    2014-10-01

    Full Text Available The standard particle swarm optimization algorithm suffers from poor adaptation and weak robustness in the parameter estimation model of chaotic control systems. In light of this, this paper puts forward a new estimation model based on an improved particle swarm optimization algorithm. It first constrains the search space of the population with a Tent and Logistic double mapping to regulate the initialized population, optimizes the fitness value by an evolutionary state identification strategy so as to avoid premature convergence, optimizes the inertia weight by a nonlinear decrease strategy to reach better global and local optimal solutions, and then optimizes the iteration of the particle swarm optimization algorithm with the hybridization concept from genetic algorithms. Finally, this paper applies it to the parameter estimation of chaotic control systems. Simulation results show that the proposed parameter estimation model achieves higher accuracy, better anti-noise ability and greater robustness compared with the model based on the standard particle swarm optimization algorithm.
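
    Of the several modifications listed above, the sketch below reproduces only the nonlinearly decreasing inertia weight, applied to a toy parameter-estimation problem for a chaotic logistic map; the Tent/Logistic initialization, evolutionary state identification and GA hybridization are omitted, and all constants are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy target: estimate the control parameter r and initial state x0 of a
    # logistic map from a short observed trajectory.
    TRUE_R, TRUE_X0, STEPS = 3.7, 0.30, 12

    def trajectory(r, x0, n=STEPS):
        xs, x = [], x0
        for _ in range(n):
            x = r * x * (1.0 - x)
            xs.append(x)
        return np.array(xs)

    observed = trajectory(TRUE_R, TRUE_X0)

    def fitness(p):                                   # sum of squared errors
        return np.sum((trajectory(p[0], p[1]) - observed) ** 2)

    lo, hi = np.array([3.5, 0.05]), np.array([4.0, 0.95])
    n_particles, iters = 40, 200
    pos = rng.uniform(lo, hi, (n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)]

    for t in range(iters):
        w = 0.4 + (0.9 - 0.4) * ((iters - t) / iters) ** 2     # nonlinear decrease of inertia
        r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
        vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]

    print("estimated r, x0:", np.round(gbest, 3), "true:", (TRUE_R, TRUE_X0))
    ```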

  5. ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, Scott W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Seth R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bevill, Aaron M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ibrahim, Ahmad M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Daily, Charles R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Evans, Thomas M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wagner, John C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Jeffrey O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Grove, Robert E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-08-01

    The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.

  6. Optimal control of nonsmooth distributed parameter systems

    CERN Document Server

    Tiba, Dan

    1990-01-01

    The book is devoted to the study of distributed control problems governed by various nonsmooth state systems. The main questions investigated include: existence of optimal pairs, first order optimality conditions, state-constrained systems, approximation and discretization, bang-bang and regularity properties for optimal control. In order to give the reader a better overview of the domain, several sections deal with topics that do not enter directly into the announced subject: boundary control, delay differential equations. In a subject still actively developing, the methods can be more important than the results and these include: adapted penalization techniques, the singular control systems approach, the variational inequality method, the Ekeland variational principle. Some prerequisites relating to convex analysis, nonlinear operators and partial differential equations are collected in the first chapter or are supplied appropriately in the text. The monograph is intended for graduate students and for resea...

  7. Aircraft wing structural design optimization based on automated finite element modelling and ground structure approach

    Science.gov (United States)

    Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan

    2016-01-01

    An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.

  8. Optimizing ELISAs for precision and robustness using laboratory automation and statistical design of experiments.

    Science.gov (United States)

    Joelsson, Daniel; Moravec, Phil; Troutman, Matthew; Pigeon, Joseph; DePhillips, Pete

    2008-08-20

    Transferring manual ELISAs to automated platforms requires optimizing the assays for each particular robotic platform. These optimization experiments are often time consuming and difficult to perform using a traditional one-factor-at-a-time strategy. In this manuscript we describe the development of an automated process using statistical design of experiments (DOE) to quickly optimize immunoassays for precision and robustness on the Tecan EVO liquid handler. By using fractional factorials and a split-plot design, five incubation time variables and four reagent concentration variables can be optimized in a short period of time.

  9. Simultaneous optimal experimental design for in vitro binding parameter estimation.

    Science.gov (United States)

    Ernest, C Steven; Karlsson, Mats O; Hooker, Andrew C

    2013-10-01

    The objective was the simultaneous optimization of in vitro ligand binding studies using an optimal design software package that can incorporate multiple design variables through non-linear mixed effect models and provide a general optimized design regardless of the binding site capacity and relative binding rates for a two-binding-site system. Experimental design optimization was employed with D- and ED-optimality using PopED 2.8, including commonly encountered factors during experimentation (residual error, between-experiment variability and non-specific binding) for in vitro ligand binding experiments: association, dissociation, equilibrium and non-specific binding experiments. Moreover, a method for optimizing several design parameters (ligand concentrations, measurement times and total number of samples) was examined. With changes in relative binding site density and relative binding rates, different measurement times and ligand concentrations were needed to provide precise estimation of binding parameters. However, using optimized design variables, significant reductions in the number of samples provided as good or better precision of the parameter estimates compared to the original extensive sampling design. Employing ED-optimality led to a general experimental design regardless of the relative binding site density and relative binding rates. Precision of the parameter estimates was as good as with the extensive sampling design for most parameters and better for the poorly estimated parameters. Optimized designs for in vitro ligand binding studies provided robust parameter estimation while allowing more efficient and cost-effective experimentation by reducing the measurement times and separate ligand concentrations required and, in some cases, the total number of samples. PMID:23943088

  10. Optimization of the main parameters of the subsoil irrigation systems

    OpenAIRE

    Elena Akytneva; Askar Akhmedov

    2014-01-01

    This article discusses the optimization of the basic parameters of subsoil irrigation systems with application of the second-order Regardera plan. The optimal parameters of subsoil irrigation systems obtained can be used for the design and construction of this method of irrigation.

  11. Parameters optimization on DHSVM model based on a genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    Changqing YAO; Zhifeng YANG

    2009-01-01

    Due to the multiplicity of factors including weather, the underlying surface and human activities, the complexity of parameter optimization for a distributed hydrological model of a watershed land surface goes far beyond the capability of traditional optimization methods. The genetic algorithm is a new attempt to find a solution to this problem. A genetic algorithm design for Distributed-Hydrology-Soil-Vegetation Model (DHSVM) parameter optimization is illustrated in this paper by defining the encoding method, designing the fitness function, devising the genetic operators, selecting the algorithm parameters and identifying the termination conditions. Finally, a case study of the optimization method is implemented on the Lushi Watershed of the Yellow River Basin and achieves satisfactory parameter estimation results. The results show that the genetic algorithm is feasible for optimizing the parameters of the DHSVM model.

  12. Parameters Investigation of Mathematical Model of Productivity for Automated Line with Availability by DMAIC Methodology

    Directory of Open Access Journals (Sweden)

    Tan Chan Sin

    2014-01-01

    Full Text Available Automated lines are widely applied in industry, especially for mass production with low product variety. Productivity is one of the important criteria for an automated line, as for industry in general, since it directly determines outputs and profits. Productivity must be forecast accurately in order to meet customer demand, and the forecast is calculated using a mathematical model. A mathematical model of productivity with availability for an automated line has been introduced to express productivity in terms of a single level of reliability for stations and mechanisms. Since this mathematical model of productivity with availability cannot achieve productivity close enough to the actual one, due to a lack of parameter consideration, an enhancement of the mathematical model is required to consider and add the loss parameters that are not considered in the current model. This paper presents the investigation of productivity-loss parameters by using the DMAIC (Define, Measure, Analyze, Improve, and Control) concept and a PACE Prioritization Matrix (Priority, Action, Consider, and Eliminate). The investigated parameters are important for further improvement of the mathematical model of productivity with availability in order to develop a robust mathematical model of productivity for automated lines.

  13. Modeling and performance optimization of automated antenna alignment for telecommunication transceivers

    Directory of Open Access Journals (Sweden)

    Md. Ahsanul Hoque

    2015-09-01

    Full Text Available Antenna alignment is very cumbersome in the telecommunication industry and it especially affects microwave (MW) links, due to environmental anomalies or physical degradation over time. While in recent years the more conventional approach of redundancy has been employed, novel automation techniques are needed to ensure LOS link stability. The basic principle is to capture the desired Received Signal Level (RSL) by means of an outdoor unit installed on the tower top and to analyze the RSL in an indoor unit by means of a GUI interface. We have proposed a new smart antenna system in which automation is initiated when the transceivers receive low signal strength and report the finding to a processing comparator unit. A series architecture is used that includes a loop antenna and RCX Robonics, with a LabVIEW interface coupled to a tunable external controller. Denavit-Hartenberg parameters are used in the analytical modeling, and numerous control techniques have been investigated to overcome imminent overshoot problems for the transport link. With this novel approach, a solution has been put forward for the communication industry whereby any antenna can achieve optimal directivity for the desired RSL with low overshoot and a fast steady-state response.

  14. Review of Automated Design and Optimization of MEMS

    DEFF Research Database (Denmark)

    Achiche, Sofiane; Fan, Zhun; Bolognini, Francesca

    2007-01-01

    In recent years MEMS have seen very rapid development. Although many advances have been made, due to the multiphysics nature of MEMS their design is still a difficult task, carried out mainly by hand calculation. In order to help overcome such difficulties, attempts to automate MEMS design have been carried out. This paper presents a review of these techniques. The design task for MEMS is usually divided into four main stages: the system level, device level, physical level and process level. The state of the art of automated MEMS design at each of these levels is investigated.

  15. Collective Tuning Initiative: automating and accelerating development and optimization of computing systems

    OpenAIRE

    Fursin, Grigori

    2009-01-01

    International audience Computing systems rarely deliver best possible performance due to ever increasing hardware and software complexity and limitations of the current optimization technology. Additional code and architecture optimizations are often required to improve execution time, size, power consumption, reliability and other important characteristics of computing systems. However, it is often a tedious, repetitive, isolated and time consuming process. In order to automate, simplify ...

  16. Toward an Integrated Framework for Automated Development and Optimization of Online Advertising Campaigns

    OpenAIRE

    Thomaidou, Stamatina; Vazirgiannis, Michalis; Liakopoulos, Kyriakos

    2012-01-01

    Creating and monitoring competitive and cost-effective pay-per-click advertisement campaigns through the web-search channel is a resource demanding task in terms of expertise and effort. Assisting or even automating the work of an advertising specialist will have an unrivaled commercial value. In this paper we propose a methodology, an architecture, and a fully functional framework for semi- and fully- automated creation, monitoring, and optimization of cost-efficient pay-per-click campaigns ...

  17. OPTIMIZATION OF PARAMETERS OF ELEMENTS COMPUTER SYSTEM

    Directory of Open Access Journals (Sweden)

    Nesterov G. D.

    2016-03-01

    Full Text Available The work is devoted to the topical issue of increasing computer performance and is experimental in character. It therefore offers a description of a number of tests carried out and an analysis of their results. The basic characteristics of the computer's modules in the regular mode of operation are first provided. The technique of regulating their parameters in the course of the experiment is then described, with special attention paid to maintaining the necessary thermal regime in order to avoid undesirable overheating of the central processor. The operability of the system under increased energy consumption is also checked. The most critical step is the regulation of the central processor; as a result of the test, its optimum voltage, frequency and memory read delays are found. An analysis of the stability of the RAM characteristics, in particular the condition of its buses during the experiment, is made. As the executed tests remained within the standard range of the modules' characteristics, and therefore the margin of safety built into the computer and the capacity of the system were not used, further experiments were made under extreme overclocking with air cooling. The results obtained are also given in the article.

  18. Optimizing parameters for magnetorheological finishing supersmooth surface

    Science.gov (United States)

    Cheng, Haobo; Feng, Zhijing; Wang, Yingwei

    2005-02-01

    This paper presents a reasonable approach to this issue, i.e., computer-controlled magnetorheological finishing (MRF). In MRF, a magnetically stiffened magnetorheological (MR) abrasive fluid flows through a preset converging gap formed by the workpiece surface and a moving rigid wall, to create precise material removal and polishing. Tsinghua University recently completed a project with MRF technology in which a 66 mm diameter, f/5 parabolic mirror was polished to a shape accuracy of λ/17 RMS (λ = 632.8 nm) and a surface roughness of 1.22 nm Ra. This was done on a home-made, novel, computer-controlled aspheric manufacturing system. It is a three-axis, self-rotating wheel machine; the polishing tool is driven by one motor through a belt. This paper presents the manufacturing and testing processes, including establishing the mathematical model of MRF optics on the basis of the Preston equation, profiler tests and the related coefficients, i.e., the pressure between workpiece and tool and the velocity of the MR fluid in the polishing spot; tolerance control of geometrical parameters such as radius of curvature and conic constant is also analyzed in the paper. Experiments were carried out on the features of MRF. The results indicated that the required convergence speed and surface roughness could be achieved with high efficiency.
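
    For reference, the Preston relation on which the removal model mentioned above is based takes the standard form (k_p is an empirical Preston coefficient, p the local pressure and v the relative velocity between MR fluid and workpiece):

    ```latex
    % Preston's equation: local material removal rate is proportional to the
    % local pressure times the relative velocity.
    \frac{\mathrm{d}z}{\mathrm{d}t} = k_p \, p(x, y) \, v(x, y)
    ```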

  19. Structural Parameter Optimization of Multilayer Conductors in HTS Cable

    Institute of Scientific and Technical Information of China (English)

    Yan Mao; Jie Qiu; Xin-Ying Liu; Zhi-Xuan Wang; Shu-Hong Wang; Jian-Guo Zhu; You-Guang Guo; Zhi-Wei Lin; Jian-Xun Jin

    2008-01-01

    In this paper, the design optimization of the structural parameters of multilayer conductors in high temperature superconducting (HTS) cable is reviewed. Various optimization methods, such as the particle swarm optimization (PSO), the genetic algorithm (GA), and a robust optimization method based on design for six sigma (DFSS), have been applied to realize uniform current distribution among the multilayer HTS conductors. The continuous and discrete variables, such as the winding angle, radius, and winding direction of each layer, are chosen as the design parameters. Under the constraints of the mechanical properties and critical current, PSO is proven to be a more powerful tool than GA for structural parameter optimization, and DFSS can not only achieve a uniform current distribution, but also significantly improve the reliability and robustness of the HTS cable quality.

  20. Optimal z-axis scanning parameters for gynecologic cytology specimens

    OpenAIRE

    Amber D Donnelly; Mukherjee, Maheswari S.; Lyden, Elizabeth R.; Bridge, Julia A.; Subodh M Lele; Najia Wright; Mary F McGaughey; Culberson, Alicia M.; Adam J. Horn; Whitney R Wedel; Stanley J Radio

    2013-01-01

    Background: The use of virtual microscopy (VM) in clinical cytology has been limited due to the inability to focus through three dimensional (3D) cell clusters with a single focal plane (2D images). Limited information exists regarding the optimal scanning parameters for 3D scanning. Aims: The purpose of this study was to determine the optimal number of the focal plane levels and the optimal scanning interval to digitize gynecological (GYN) specimens prepared on SurePath™ glass slides while m...

  1. Integral Optimization of Systematic Parameters of Flip-Flow Screens

    Institute of Scientific and Technical Information of China (English)

    翟宏新

    2004-01-01

    The synthetic index Ks for evaluating flip-flow screens is proposed and systematically optimized in view of the whole system. A series of optimized values of the relevant parameters are found and then compared with those of the current industrial specifications. The results show that the optimized value of Ks approaches that of the well-known flip-flow screens worldwide. Some new findings on geometric and kinematic parameters are useful for improving flip-flow screens with a low Ks value, which is helpful in developing clean coal technology.

  2. Genetic Algorithm Optimizes Q-LAW Control Parameters

    Science.gov (United States)

    Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard

    2008-01-01

    A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools. When the good initial solutions are used, the high-fidelity optimization tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performance of the Q-law control parameters is evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method, which assigns a better fitness value to solutions that are dominated by a smaller number of other solutions. With the ranking result, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions in the evolution process. The population of solutions converges to the Pareto front that is permitted within the Q-law control parameter space.
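
    The non-dominated sorting step described above can be illustrated in a few lines: each candidate is ranked by how many other candidates dominate it in the (flight time, propellant mass) plane, and candidates dominated by fewer others are treated as fitter. The candidate values below are invented.

    ```python
    from typing import List, Tuple

    def dominates(a: Tuple[float, float], b: Tuple[float, float]) -> bool:
        """a dominates b if it is no worse in both objectives and better in at least one."""
        return a[0] <= b[0] and a[1] <= b[1] and a != b

    def domination_counts(points: List[Tuple[float, float]]) -> List[int]:
        """Number of candidates dominating each point; fewer means fitter."""
        return [sum(dominates(q, p) for q in points) for p in points]

    # Invented (flight time [days], propellant mass [kg]) pairs for candidate Q-law settings.
    candidates = [(420, 310), (390, 350), (450, 280), (460, 320), (440, 340), (500, 260)]
    counts = domination_counts(candidates)
    pareto_front = [c for c, n in zip(candidates, counts) if n == 0]
    print("domination counts:", counts)
    print("non-dominated candidates:", pareto_front)
    ```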

  3. Screening of optimization parameters for mixing process via CFD

    International Nuclear Information System (INIS)

    In this study, numerical simulation of a mixing vessel agitated by a six-bladed Rushton turbine has been carried out to investigate the effects of the relevant parameters on the mixing process. The study is intended to screen the potential parameters which affect the optimization process and to provide detailed insight into the process. Three-dimensional, steady-state flow has been simulated using the fully predictive Multiple Reference Frame (MRF) technique for the impeller and tank geometry. Process optimization is always used to ensure that the optimum conditions are fulfilled to meet industry needs (for example, increased profit, lower cost, higher yields). In this study, the recommended speeds to accelerate optimization are 100, 150 and 200 rpm and the recommended clearances are 50, 75 and 100 mm for a dual Rushton impeller. Computational fluid dynamics (CFD) was thus introduced in order to screen the suitable parameters efficiently and to accelerate optimization. (Author)

  4. Optimizing hadoop parameter settings with gene expression programming guided PSO

    OpenAIRE

    Huang, Z; Li, M; Taylor, GA; Khan, M

    2016-01-01

    Hadoop MapReduce has become a major computing technology in support of big data analytics. The Hadoop framework has over 190 configuration parameters, and some of them can have a significant effect on the performance of a Hadoop job. Manually tuning the optimum or near optimum values of these parameters is a challenging task and also a time consuming process. This paper optimizes the performance of Hadoop by automatically tuning its configuration parameter settings. The proposed work first em...

  5. Bacterial growth on surfaces: Automated image analysis for quantification of growth rate-related parameters

    DEFF Research Database (Denmark)

    Møller, S.; Sternberg, Claus; Poulsen, L. K.;

    1995-01-01

    species-specific hybridizations with fluorescence-labelled ribosomal probes to estimate the single-cell concentration of RNA. By automated analysis of digitized images of stained cells, we determined four independent growth rate-related parameters: cellular RNA and DNA contents, cell volume, and the frequency of dividing cells in a cell population. These parameters were used to compare physiological states of liquid-suspended and surface-growing Pseudomonas putida KT2442 in chemostat cultures. The major finding is that the correlation between substrate availability and cellular growth rate found...

  6. Genetic algorithm parameter optimization: applied to sensor coverage

    Science.gov (United States)

    Sahin, Ferat; Abbate, Giuseppe

    2004-08-01

    Genetic algorithms are powerful tools which, when set upon a solution space, will search for the optimal answer. These algorithms, though, have some associated problems inherent to the method, such as premature convergence and lack of population diversity. These problems can be controlled with changes to certain parameters such as crossover, selection, and mutation. This paper attempts to tackle these problems in the GA by having another GA control these parameters. The values for the crossover parameter are: one point, two point, and uniform. The values for the selection parameter are: best, worst, roulette wheel, inside 50%, outside 50%. The values for the mutation parameter are: random and swap. The system includes a control GA whose population consists of different parameter settings. While this control GA is attempting to find the best parameters, it advances into the search space of the problem and refines the population. As the population changes due to the search, so do the optimal parameters. For every control GA generation, each of the individuals in the population is tested for fitness by being run through the problem GA with the assigned parameters. During these runs the population used in the next control generation is compiled. Thus, both the issue of finding the best parameters and the solution to the problem are attacked at the same time. The goal is to optimize the sensor coverage in a square field. The test case used was a 30 by 30 unit field with 100 sensor nodes, each with a coverage area of 3 by 3 units. The algorithm attempts to optimize the sensor coverage in the field by moving the nodes. The results show that the control GA provides better results when compared to a system with no parameter changes.

  7. Automatic parameter optimizer (APO) for multiple-point statistics

    Science.gov (United States)

    Bani Najar, Ehsanollah; Sharghi, Yousef; Mariethoz, Gregoire

    2016-04-01

    Multiple Point statistics (MPS) have gained popularity in recent years for generating stochastic realizations of complex natural processes. The main principle is that a training image (TI) is used to represent the spatial patterns to be modeled. One important feature of MPS is that the spatial model of the fields generated is made of 1) the chosen TI and 2) a set of algorithmic parameters that are specific to each MPS algorithm. While the choice of a training image can be guided by expert knowledge (e.g. for geological modeling) or by data acquisition methods (e.g. remote sensing) determining the algorithmic parameters can be more challenging. To date, only specific guidelines have been proposed for some simulation methods, and a general parameters inference methodology is still lacking, in particular for complex modeling settings such as when using multivariate training images. The common practice consists in carrying out an extensive parameters sensitivity analysis which can be cumbersome. An additional complexity is that the algorithmic parameters do influence CPU cost, and therefore finding optimal parameters is not only a modeling question, but also a computational challenge. To overcome these issues, we propose the automatic parameter optimizer (MPS-APO), a generic method based on stochastic optimization to rapidly determine acceptable parameters, in different settings and for any MPS method. The MPS automatic parameter optimizer proceeds in a 2-step approach. In the first step, it considers the set of input parameters of a given MPS algorithm and formulates an objective function that quantifies the reproduction of spatial patterns. The Simultaneous Perturbation Stochastic Approximation (SPSA) optimization method is used to minimize the objective function. SPSA is chosen because it is able to deal with the stochastic nature of the objective function and for its computational efficiency. At each iteration, small gaps are randomly placed in the input image
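
    The core SPSA update used by the optimizer described above is compact enough to sketch directly; it needs only two objective evaluations per iteration regardless of dimension, which is what makes it attractive when each evaluation is an expensive MPS run. The gain-sequence constants and the cheap noisy test objective below are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def spsa(objective, theta0, iters=300, a=0.2, c=0.1, A=30, alpha=0.602, gamma=0.101):
        """Simultaneous Perturbation Stochastic Approximation (minimization).

        Uses two objective evaluations per iteration, whatever the dimension of theta.
        """
        theta = np.asarray(theta0, dtype=float)
        for k in range(iters):
            ak = a / (k + 1 + A) ** alpha                          # step-size sequence
            ck = c / (k + 1) ** gamma                              # perturbation sequence
            delta = rng.choice([-1.0, 1.0], size=theta.shape)      # Rademacher perturbation
            g_hat = (objective(theta + ck * delta)
                     - objective(theta - ck * delta)) / (2 * ck * delta)
            theta = theta - ak * g_hat
        return theta

    # Cheap noisy stand-in for the pattern-mismatch objective of an MPS run.
    def noisy_quadratic(x):
        return np.sum((x - np.array([0.7, 0.3, 1.5])) ** 2) + rng.normal(0, 0.01)

    print("SPSA estimate:", np.round(spsa(noisy_quadratic, [0.0, 0.0, 0.0]), 3))
    ```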

  8. APPLICATION OF GENETIC ALGORITHMS FOR ROBUST PARAMETER OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    N. Belavendram

    2010-12-01

    Full Text Available Parameter optimization can be achieved by many methods such as Monte Carlo, full, and fractional factorial designs. Genetic algorithms (GA) are fairly recent in this respect but afford a novel method of parameter optimization. In a GA, there is an initial pool of individuals, each with its own specific phenotypic trait expressed as a 'genetic chromosome'. Different genes enable individuals with different fitness levels to reproduce according to natural reproductive gene theory. This reproduction is established in terms of selection, crossover and mutation of the reproducing genes. The resulting child generation of individuals has a better fitness level, akin to natural selection, namely evolution. Populations evolve towards the fittest individuals. Such a mechanism has a parallel application in parameter optimization. Factors in a parameter design can be expressed as a genetic analogue in a pool of sub-optimal random solutions. Allowing this pool of sub-optimal solutions to evolve over several generations produces fitter generations converging to a pre-defined engineering optimum. In this paper, a genetic algorithm is used to study a seven-factor non-linear equation for a Wheatstone bridge as the equation to be optimized. A comparison of the full factorial design against the GA method shows that the GA method is about 1200 times faster in finding a comparable solution.

  9. Optimization of Gas Metal Arc Welding Process Parameters

    Science.gov (United States)

    Kumar, Amit; Khurana, M. K.; Yadav, Pradeep K.

    2016-09-01

    This study presents the application of the Taguchi method combined with grey relational analysis to optimize the process parameters of gas metal arc welding (GMAW) of AISI 1020 carbon steel for multiple quality characteristics (bead width, bead height, weld penetration and heat affected zone). An L9 orthogonal array has been used for the fabrication of joints. The experiments were conducted according to combinations of voltage (V), current (A) and welding speed (Ws). The results revealed that welding speed is the most significant process parameter. By analyzing the grey relational grades, optimal parameters are obtained, and the significant factors are identified using ANOVA. The welding parameters, namely speed, welding current and voltage, have been optimized for AISI 1020 using the GMAW process. To fortify the robustness of the experimental design, a confirmation test was performed at the selected optimal process parameter setting. Observations from this method may be useful for automotive sub-assemblies, shipbuilding and vessel fabricators and operators to obtain optimal welding conditions.
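
    The grey relational part of the analysis can be sketched as follows: responses are normalized according to their desired direction, converted to grey relational coefficients, and averaged into a grade per run. The nine response rows below are invented, not the study's measurements.

    ```python
    import numpy as np

    # Invented responses for nine GMAW runs (L9 array); columns are bead width,
    # bead height, penetration and HAZ.  Width, height and HAZ are
    # smaller-the-better, penetration is larger-the-better.
    responses = np.array([
        [7.2, 2.9, 3.1, 5.0], [6.8, 2.7, 3.6, 4.6], [7.9, 3.2, 2.8, 5.4],
        [6.5, 2.5, 3.9, 4.2], [7.0, 2.8, 3.3, 4.8], [7.6, 3.0, 3.0, 5.1],
        [6.9, 2.6, 3.5, 4.5], [7.4, 3.1, 2.9, 5.2], [7.1, 2.8, 3.2, 4.9],
    ])
    larger_better = np.array([False, False, True, False])

    # Step 1: normalize each response to [0, 1] according to its desired direction.
    mins, maxs = responses.min(axis=0), responses.max(axis=0)
    norm = np.where(larger_better,
                    (responses - mins) / (maxs - mins),
                    (maxs - responses) / (maxs - mins))

    # Step 2: grey relational coefficients with distinguishing coefficient zeta = 0.5.
    delta = 1.0 - norm
    zeta = 0.5
    grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

    # Step 3: grey relational grade = mean coefficient per run; the highest grade
    # points to the best parameter combination in the orthogonal array.
    grade = grc.mean(axis=1)
    print("grey relational grades:", np.round(grade, 3))
    print("best run (1-indexed):", int(np.argmax(grade)) + 1)
    ```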

  10. Estimating cellular parameters through optimization procedures: elementary principles and applications

    Directory of Open Access Journals (Sweden)

    Akatsuki Kimura

    2015-03-01

    Full Text Available Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus obtain mechanistic insights into phenomena of interest.
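
    As a small worked example of the gradient approach mentioned above, the sketch below recovers the two parameters of an exponential-decay model from noisy data by minimizing the SSE with plain gradient descent on a numerical gradient; the model, data and step size are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy quantitative model: exponential decay y = A * exp(-k * t), as might
    # describe fluorescence loss in a cell.  A and k are recovered from noisy
    # observations by minimizing the sum of squared errors (SSE).
    t = np.linspace(0, 10, 40)
    y_obs = 2.0 * np.exp(-0.4 * t) + rng.normal(0, 0.02, t.size)

    def sse(params):
        A, k = params
        return np.sum((A * np.exp(-k * t) - y_obs) ** 2)

    def numerical_gradient(f, x, h=1e-6):
        g = np.zeros_like(x)
        for i in range(x.size):
            step = np.zeros_like(x)
            step[i] = h
            g[i] = (f(x + step) - f(x - step)) / (2 * h)
        return g

    params = np.array([1.0, 1.0])                 # initial guess for (A, k)
    for _ in range(5000):
        params -= 0.002 * numerical_gradient(sse, params)   # plain gradient descent

    print("estimated A, k:", np.round(params, 3), " SSE:", round(float(sse(params)), 4))
    ```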

  11. Automated Finite Element Modeling of Wing Structures for Shape Optimization

    Science.gov (United States)

    Harvey, Michael Stephen

    1993-01-01

    The displacement formulation of the finite element method is the most general and most widely used technique for structural analysis of airplane configurations. Modern structural synthesis techniques based on the finite element method have reached a certain maturity in recent years, and large airplane structures can now be optimized with respect to sizing-type design variables for many load cases, subject to a rich variety of constraints including stress, buckling, frequency, stiffness and aeroelastic constraints (Refs. 1-3). These structural synthesis capabilities use gradient-based nonlinear programming techniques to search for improved designs. For these techniques to be practical, a major improvement was required in the computational cost of finite element analyses (needed repeatedly in the optimization process). Thus, associated with the progress in structural optimization, a new perspective on structural analysis has emerged, namely, structural analysis specialized for design optimization applications, or what is known as "design oriented structural analysis" (Ref. 4). This discipline includes approximation concepts and methods for obtaining behavior sensitivity information (Ref. 1), all needed to make the optimization of large structural systems (modeled by thousands of degrees of freedom and thousands of design variables) practical and cost effective.

  12. Multiobjective Optimization Method Based on Adaptive Parameter Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    P. Sabarinath

    2015-01-01

    Full Text Available The present trend in industry is to improve the techniques currently used in the design and manufacture of products in order to meet the challenges of the competitive market. The crucial task nowadays is to find the optimal design and machining parameters so as to minimize production costs. Design optimization involves larger numbers of design variables with multiple and conflicting objectives, subject to complex nonlinear constraints. The complexity of the optimal design of machine elements creates the requirement for increasingly effective algorithms. Solving a nonlinear multiobjective optimization problem requires significant computing effort. From the literature it is evident that metaheuristic algorithms perform better in dealing with multiobjective optimization. In this paper, we extend the recently developed parameter adaptive harmony search algorithm to solve multiobjective design optimization problems using the weighted sum approach. To determine the best weightage set for this analysis, a performance index based on least average error is computed for each weightage set. The proposed approach is applied to solve a biobjective design optimization of a disc brake problem and a newly formulated biobjective design optimization of a helical spring problem. The results reveal that the proposed approach performs better than other algorithms.

  13. Automated evolutionary optimization of ion channel conductances and kinetics in models of young and aged rhesus monkey pyramidal neurons.

    Science.gov (United States)

    Rumbell, Timothy H; Draguljić, Danel; Yadav, Aniruddha; Hof, Patrick R; Luebke, Jennifer I; Weaver, Christina M

    2016-08-01

    Conductance-based compartment modeling requires tuning of many parameters to fit the neuron model to target electrophysiological data. Automated parameter optimization via evolutionary algorithms (EAs) is a common approach to accomplish this task, using error functions to quantify differences between model and target. We present a three-stage EA optimization protocol for tuning ion channel conductances and kinetics in a generic neuron model with minimal manual intervention. We use the technique of Latin hypercube sampling in a new way, to choose weights for error functions automatically so that each function influences the parameter search to a similar degree. This protocol requires no specialized physiological data collection and is applicable to commonly-collected current clamp data and either single- or multi-objective optimization. We applied the protocol to two representative pyramidal neurons from layer 3 of the prefrontal cortex of rhesus monkeys, in which action potential firing rates are significantly higher in aged compared to young animals. Using an idealized dendritic topology and models with either 4 or 8 ion channels (10 or 23 free parameters respectively), we produced populations of parameter combinations fitting the target datasets in less than 80 hours of optimization each. Passive parameter differences between young and aged models were consistent with our prior results using simpler models and hand tuning. We analyzed parameter values among fits to a single neuron to facilitate refinement of the underlying model, and across fits to multiple neurons to show how our protocol will lead to predictions of parameter differences with aging in these neurons. PMID:27106692
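
    The abstract describes using Latin hypercube sampling to balance error-function weights; one plausible reading of that step is sketched below (this is an interpretation, not the authors' code): parameter vectors are drawn with LHS, each error function is evaluated over the samples, and its weight is set to the inverse of its spread so that all functions contribute on a similar scale. The two error functions and bounds are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def latin_hypercube(n_samples, bounds):
        """One stratified sample per row: each dimension is split into n_samples
        equal strata, visited in an independent random order per dimension."""
        d = len(bounds)
        strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
        u = (strata + rng.random((n_samples, d))) / n_samples
        lo, hi = np.array(bounds).T
        return lo + u * (hi - lo)

    # Hypothetical error functions comparing a neuron model to target data; their
    # raw magnitudes differ by orders of magnitude, which is the imbalance the
    # weighting step is meant to remove.
    def spike_rate_error(p):
        # Hypothetical error term with a large raw magnitude (order of hundreds).
        return (p[0] * 50.0 - 20.0) ** 2

    def resting_potential_error(p):
        # Hypothetical error term with a small raw magnitude (order of 0.1).
        return (p[1] - 0.6) ** 2

    bounds = [(0.0, 1.0), (0.0, 1.0)]
    samples = latin_hypercube(200, bounds)
    errors = np.array([[spike_rate_error(p), resting_potential_error(p)] for p in samples])

    # Weight each error function by the inverse of its spread over the LHS samples,
    # so that each contributes on a similar scale to the combined EA objective.
    weights = 1.0 / errors.std(axis=0)

    def combined_error(p):
        return weights[0] * spike_rate_error(p) + weights[1] * resting_potential_error(p)

    print("weights:", np.round(weights, 4),
          " combined error at [0.5, 0.5]:", round(combined_error([0.5, 0.5]), 3))
    ```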

  14. Optimization of parameters for maximization of plateletpheresis and lymphocytapheresis yields on the Haemonetics Model V50.

    Science.gov (United States)

    AuBuchon, J P; Carter, C S; Adde, M A; Meyer, D R; Klein, H G

    1986-01-01

    Automated apheresis techniques afford the opportunity of tailoring collection parameters for each donor's hematologic profile. This study investigated the effect of various settings of the volume offset parameter as utilized in the Haemonetics Model V50 instrumentation during platelet- and lymphocytapheresis to optimize product yield, purity, and collection efficiency. In both types of procedures, increased product yield could be obtained by using an increased volume offset for donors having lower hematocrits. This improvement was related to an increase in collection efficiency. Platelet products also contained fewer contaminating lymphocytes with this approach. Adjustment of the volume offset parameter can be utilized to make the most efficient use of donors and provide higher-quality products.

  15. Automated Discovery of Elementary Chemical Reaction Steps Using Freezing String and Berny Optimization Methods

    OpenAIRE

    Suleimanov, Yury V.; Green, William H.

    2015-01-01

    We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps using single- and double-ended transition-state optimization algorithms in cooperation - the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several systems of combustion and atmospheric chemistry importance is investigated. The proposed algorithm allowed us to detect without any human intervention not on...

  16. Extended Range Guided Munition Parameter Optimization Based on Genetic Algorithms

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Many factors influencing the range of an extended range guided munition (ERGM) are analyzed. The definition domains of the three most important parameters are ascertained by preparatory mathematical simulation, an optimized mathematical model of ERGM maximum range with boundary conditions is created, and parameter optimization based on a genetic algorithm (GA) is adopted. In the GA design, three-point crossover is used and the best chromosome is kept so that convergence is rapid. Simulation results show that the GA is feasible, gives good results and can readily attain the global optimum, especially when the objective function is non-convex in the independent variables and the problem involves many parameters.
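    A minimal sketch of a genetic algorithm with three-point crossover and elitism of the kind described above; the fitness function, chromosome length, population size and mutation scheme are placeholders rather than the paper's ERGM range model.

        import numpy as np

        rng = np.random.default_rng(0)
        N_GENES, POP, GENS = 6, 40, 100        # illustrative sizes, not the paper's settings

        def fitness(x):
            # Placeholder for the simulated ERGM range as a function of the design parameters.
            return -np.sum((x - 0.7) ** 2)

        def three_point_crossover(a, b):
            pts = np.sort(rng.choice(np.arange(1, N_GENES), size=3, replace=False))
            child, take_b, prev = a.copy(), False, 0
            for p in list(pts) + [N_GENES]:
                if take_b:
                    child[prev:p] = b[prev:p]
                take_b, prev = not take_b, p
            return child

        pop = rng.random((POP, N_GENES))
        for _ in range(GENS):
            fit = np.array([fitness(ind) for ind in pop])
            new_pop = [pop[np.argmax(fit)].copy()]     # elitism: keep the best chromosome
            while len(new_pop) < POP:
                parents = []
                for _ in range(2):                     # binary tournament selection
                    i, j = rng.integers(POP, size=2)
                    parents.append(pop[i] if fit[i] > fit[j] else pop[j])
                child = three_point_crossover(*parents)
                mask = rng.random(N_GENES) < 0.1       # sparse mutation
                child = np.clip(child + mask * rng.normal(0.0, 0.05, N_GENES), 0.0, 1.0)
                new_pop.append(child)
            pop = np.array(new_pop)
        best = pop[np.argmax([fitness(ind) for ind in pop])]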

  17. Parameter optimization of pharmacokinetics based on artificial immune network

    Institute of Scientific and Technical Information of China (English)

    LIU Li; ZHOU Shao-dan; LU Hong-wen; XIE Fen; XU Wen-bo

    2008-01-01

    A new method for parameter optimization of pharmacokinetics based on an artificial immune network, named PKAIN, is proposed. To improve the local searching ability of the artificial immune network, a partition-based concurrent simplex mutation is developed. By means of evolution of network cells in the PKAIN artificial immune network, an optimal set of parameters of a given pharmacokinetic model is obtained. The Laplace transform is applied to the pharmacokinetic differential equations of remifentanil and its major metabolite, remifentanil acid. The PKAIN method is used to optimize parameters of the derived compartment models. Experimental results show that the two-compartment model is sufficient for the pharmacokinetic study of remifentanil acid for patients with a mild degree of renal impairment.

  18. Likelihood transform: making optimization and parameter estimation easier

    CERN Document Server

    Wang, Yan

    2014-01-01

    Parameterized optimization and parameter estimation are of great importance in almost every branch of modern science, technology and engineering. A practical issue is that when the parameter space is large and the available data are noisy, the geometry of the likelihood surface in the parameter space becomes complicated. This makes searching and optimization algorithms computationally expensive, sometimes even beyond reach. In this paper, we define a likelihood transform which can make the structure of the likelihood surface much simpler, hence reducing the intrinsic complexity and easing optimization significantly. We demonstrate the properties of the likelihood transform by applying it to a simplified gravitational wave chirp signal search. For a signal with a signal-to-noise ratio of 20, the likelihood transform has made a deterministic template-based search possible for the first time, which turns out to be 1000 times more efficient than an exhaustive grid-based search. The method in principle can be a...

  19. Damage localization using experimental modal parameters and topology optimization

    OpenAIRE

    Niemann, Hanno; Morlier, Joseph; Shahdin, Amir; Gourinat, Yves

    2010-01-01

    This work focuses on the development of a damage detection and localization tool using the Topology Optimization feature of MSC.Nastran. This approach is based on the correlation between a local stiffness loss and the change in modal parameters due to damage in structures. The loss in stiffness is accounted for by the Topology Optimization approach when updating undamaged numerical models towards similar models with embedded damages. Hereby, only a mass penalization and the changes in experimentally obta...

  20. Optimal sensor location for parameter identification in soft clay

    Science.gov (United States)

    Hölter, R.; Mahmoudi, E.; Schanz, T.

    2015-10-01

    Performing parameter identification for model calibration prior to numerical simulation is an essential task in geotechnical engineering. However, it has to be kept in mind that the accuracy of the obtained parameters is closely related to the chosen experimental set-up, such as the number of sensors as well as their location. A well-considered positioning of sensors can increase the quality of the measurement and reduce the number of monitoring points. This paper illustrates this concept by means of a loading device that is used to identify the stiffness and permeability factor of soft clays. With an initial set-up of the measurement devices, the pore water pressure and the vertical displacements are recorded and used to identify the aforementioned parameters. Starting from these identified parameters, the optimal measurement set-up is investigated with a method based on global sensitivity analysis. This method shows an optimal sensor location assuming three sensors for each measured quantity.

  1. Aerodynamic optimization by simultaneously updating flow variables and design parameters

    Science.gov (United States)

    Rizk, M. H.

    1990-01-01

    The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.

  2. Parameter optimization for tandem regenerative system based on critical path

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    For a tandem queue system, the regenerative path is constructed. In an inter-regeneration cycle, the sensitivity of the performance measure with respect to the adjustable parameter θ can be acquired from a fixed length of observation. Furthermore, a new parameter optimization algorithm for the tandem queue system is given, which requires less simulation and no analysis of the perturbation transmission, and gives a better estimate of the sensitivity.

  3. OPTIMIZATION OF ELECTROCHEMICAL MACHINING PROCESS PARAMETERS USING TAGUCHI APPROACH

    Directory of Open Access Journals (Sweden)

    R.Goswami

    2013-05-01

    Full Text Available In this research paper, the Taguchi method is applied to find optimum process parameters for electrochemical machining (ECM). The objective of the experimental investigation is to study the impact of machining parameters on the MRR and SR of aluminum and mild steel workpieces. The approach was based on Taguchi's method, analysis of variance and the signal-to-noise ratio (S/N ratio) to optimize the ECM process parameters for effective machining and to predict the optimal choice for each ECM parameter, such as voltage, tool feed and current. In this research, three levels of each parameter are considered for the experiment. An L9 orthogonal array is used, varying A, B and C respectively; for each combination three experiments were conducted, and with the help of the signal-to-noise ratio the optimum results for ECM are found. It was confirmed that the determined optimal combination of ECM process parameters satisfies the real need for machining of aluminum and mild steel in actual practice.
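    A minimal sketch of the signal-to-noise ratio calculations behind a Taguchi analysis such as the one above, using made-up replicate measurements; the larger-the-better form would apply to MRR and the smaller-the-better form to SR, but the data and settings here are not from the paper.

        import numpy as np

        # Hypothetical replicated measurements for one L9 trial (three repeats each).
        mrr = np.array([0.42, 0.45, 0.40])   # material removal rate: larger is better
        sr = np.array([2.1, 2.3, 2.0])       # surface roughness: smaller is better

        def sn_larger_the_better(y):
            return -10.0 * np.log10(np.mean(1.0 / y ** 2))

        def sn_smaller_the_better(y):
            return -10.0 * np.log10(np.mean(y ** 2))

        # The S/N ratios of all trials would be averaged per factor level to pick
        # the optimal voltage, tool feed and current settings.
        print(sn_larger_the_better(mrr), sn_smaller_the_better(sr))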

  4. Wrapped Progressive Sampling Search for Optimizing Learning Algorithm Parameters

    NARCIS (Netherlands)

    Bosch, Antal van den

    2005-01-01

    We present a heuristic meta-learning search method for finding a set of optimized algorithmic parameters for a range of machine learning algorithms. The method, wrapped progressive sampling, is a combination of classifier wrapping and progressive sampling of training data. A series of experiments

  5. Optimization of polyetherimide processing parameters for optical interconnect applications

    Science.gov (United States)

    Zhao, Wei; Johnson, Peter; Wall, Christopher

    2015-10-01

    ULTEM® polyetherimide (PEI) resins have been used in opto-electronic markets since the optical properties of these materials enable the design of critical components under tight tolerances. PEI resins are the material of choice for injection molded integrated lens applications due to good dimensional stability, near infrared (IR) optical transparency, low moisture uptake and high heat performance. In most applications, parts must be produced consistently with minimal deviations to ensure compatibility throughout the lifetime of the part. With the large number of lenses needed for this market, injection molding has been optimized to maximize the production rate. These optimized parameters for high throughput may or may not translate to optimized optical performance. In this paper, we evaluate and optimize PEI injection molding processes with a focus on optical property performance. A commonly used commercial grade was studied to determine factors and conditions which contribute to optical transparency, color, and birefringence. Melt temperature, mold temperature, injection speed and cycle time were varied to develop optimization trials and evaluate optical properties. These parameters could be optimized to reduce in-plane birefringence from 0.0148 to 0.0006 in this study. In addition, we have studied an optically smooth, sub-10nm roughness mold to re-evaluate material properties with minimal influence from mold quality and further refine resin and process effects for the best optical performance.

  6. EVALUATION OF ANAEMIA USING RED CELL AND RETICULOCYTE PARAMETERS USING AUTOMATED HAEMATOLOGY ANALYSER

    Directory of Open Access Journals (Sweden)

    Vidyadhar Rao

    2016-06-01

    Full Text Available The use of current models of automated haematology analysers helps in calculating the haemoglobin content of mature red cells and reticulocytes and the percentages of microcytic and hypochromic red cells. This has helped the clinician in reaching an early diagnosis and managing different haemopoietic disorders such as iron deficiency anaemia, thalassaemia and anaemia of chronic disease. AIM This study was conducted using an automated haematology analyser to evaluate anaemia using the red cell and reticulocyte parameters. Three types of anaemia were evaluated: iron deficiency anaemia, anaemia of long duration and anaemia associated with chronic disease and iron deficiency. MATERIALS AND METHODS Blood samples were collected from 287 adult patients with anaemia, differentiated depending upon their iron status, haemoglobinopathies and inflammatory activity: iron deficiency anaemia (n=132), anaemia of long duration (ACD) (n=97) and anaemia associated with chronic disease with iron deficiency (ACD Combi) (n=58). The percentages of microcytic and hypochromic red cells and the haemoglobin levels in reticulocytes and mature RBCs were calculated. The accuracy of the parameters was analysed using receiver operating characteristic analysis to differentiate between the types of anaemia. OBSERVATIONS AND RESULTS There was no difference in parameters between the iron deficiency group and the group with anaemia of chronic disease and iron deficiency. The hypochromic red cell percentage was the best parameter for differentiating anaemia of chronic disease with or without absolute iron deficiency, with a sensitivity of 72.7% and a specificity of 70.4%. CONCLUSIONS The red cell and reticulocyte parameters were reasonably good indicators for differentiating absolute iron deficiency anaemia from anaemia of chronic disease.

  7. Automation for pattern library creation and in-design optimization

    Science.gov (United States)

    Deng, Rock; Zou, Elain; Hong, Sid; Wang, Jinyan; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh; Ding, Hua; Huang, Jason

    2015-03-01

    contain remedies built in so that fixing happens either automatically or in a guided manner. Building a comprehensive library of patterns is a very difficult task especially when a new technology node is being developed or the process keeps changing. The main dilemma is not having enough representative layouts to use for model simulation where pattern locations can be marked and extracted. This paper will present an automatic pattern library creation flow by using a few known yield detractor patterns to systematically expand the pattern library and generate optimized patterns. We will also look at the specific fixing hints in terms of edge movements, additive, or subtractive changes needed during optimization. Optimization will be shown for both the digital physical implementation and custom design methods.

  8. Parameter variations in prediction skill optimization at ECMWF

    Science.gov (United States)

    Ollinaho, P.; Bechtold, P.; Leutbecher, M.; Laine, M.; Solonen, A.; Haario, H.; Järvinen, H.

    2013-11-01

    Algorithmic numerical weather prediction (NWP) skill optimization has been tested using the Integrated Forecasting System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF). We report the results of initial experimentation using importance sampling based on model parameter estimation methodology targeted for ensemble prediction systems, called the ensemble prediction and parameter estimation system (EPPES). The same methodology was earlier proven to be a viable concept in low-order ordinary differential equation systems, and in large-scale atmospheric general circulation models (ECHAM5). Here we show that prediction skill optimization is possible even in the context of a system that is (i) of very high dimensionality, and (ii) carefully tuned to very high skill. We concentrate on four closure parameters related to the parameterizations of sub-grid scale physical processes of convection and formation of convective precipitation. We launch standard ensembles of medium-range predictions such that each member uses different values of the four parameters, and make sequential statistical inferences about the parameter values. Our target criterion is the squared forecast error of the 500 hPa geopotential height at day three and day ten. The EPPES methodology is able to converge towards closure parameter values that optimize the target criterion. Therefore, we conclude that estimation and cost function-based tuning of low-dimensional static model parameters is possible despite the very high dimensional state space, as well as the presence of stochastic noise due to initial state and physical tendency perturbations. The remaining question before EPPES can be considered as a generally applicable tool in model development is the correct formulation of the target criterion. The one used here is, in our view, very selective. Considering the multi-faceted question of improving forecast model performance, a more general target criterion should be developed

  9. Parameters Optimization of Low Carbon Low Alloy Steel Annealing Process

    Institute of Scientific and Technical Information of China (English)

    Maoyu ZHAO; Qianwang CHEN

    2013-01-01

    A suitable match of annealing process parameters is critical for obtaining a fine microstructure in the material. Low carbon low alloy steel (20CrMnTi) was heated for various durations near the Ac temperature to obtain fine pearlite and ferrite grains. Annealing temperature and time were used as independent variables, and material property data were acquired by orthogonal experiment design under an intercritical process followed by a subcritical annealing process (IPSAP). The weights of the plasticity measures (hardness, yield strength, section shrinkage and elongation) of the annealed material were calculated by the analytic hierarchy process, and the process parameters were then optimized using grey system theory. SEM images show that the microstructure of the optimally annealed material consists of smaller lamellar pearlite (ferrite-cementite) and refined ferrite distributed uniformly. Morphologies of the tensile fracture surface of the optimally annealed material show more and finer dimples, indicating better toughness compared with the other annealed materials. Moreover, the tensile test shows that the yield strength of the optimally annealed material decreases noticeably. Thus, the new optimization strategy is accurate and feasible.

  10. Automated gamma knife radiosurgery treatment planning with image registration, data-mining, and Nelder-Mead simplex optimization

    International Nuclear Information System (INIS)

    Gamma knife treatments are usually planned manually, requiring much expertise and time. We describe a new, fully automatic method of treatment planning. The treatment volume to be planned is first compared with a database of past treatments to find volumes closely matching in size and shape. The treatment parameters of the closest matches are used as starting points for the new treatment plan. Further optimization is performed with the Nelder-Mead simplex method: the coordinates and weight of the isocenters are allowed to vary until a maximally conformal plan specific to the new treatment volume is found. The method was tested on a randomly selected set of 10 acoustic neuromas and 10 meningiomas. Typically, matching a new volume took under 30 seconds. The time for simplex optimization, on a 3 GHz Xeon processor, ranged from under a minute for small volumes to considerably longer for large volumes (30 000 cubic mm, >20 isocenters). In 8/10 acoustic neuromas and 8/10 meningiomas, the automatic method found plans with a conformation number equal to or better than that of the manual plan. In 4/10 acoustic neuromas and 5/10 meningiomas, both overtreatment and undertreatment ratios were equal to or better in automated plans. In conclusion, data-mining of past treatments can be used to derive starting parameters for treatment planning. These parameters can then be computer optimized to give good plans automatically.
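    A minimal sketch of the warm-start-then-simplex idea: a toy plan-cost function stands in for the clinical conformity objective, the warm start stands in for parameters mined from the closest past treatment, and SciPy's Nelder-Mead implementation is assumed; none of these details come from the paper.

        import numpy as np
        from scipy.optimize import minimize

        # Toy stand-in for plan quality: penalize isocenter offset from a spherical
        # target's centre and a weight far from unity; not the clinical conformity index.
        target_center = np.array([0.0, 0.0, 0.0])

        def plan_cost(params):
            xyz, weight = params[:3], params[3]
            return np.sum((xyz - target_center) ** 2) + (weight - 1.0) ** 2

        # "Data-mining" step: start from the isocenter parameters of the closest
        # matching past treatment volume (values here are made up).
        warm_start = np.array([1.5, -0.8, 0.3, 0.9])

        result = minimize(plan_cost, warm_start, method="Nelder-Mead",
                          options={"xatol": 1e-4, "fatol": 1e-4, "maxiter": 2000})
        print(result.x, result.fun)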

  11. Novel Approach to Nonlinear PID Parameter Optimization Using Ant Colony Optimization Algorithm

    Institute of Scientific and Technical Information of China (English)

    Duan Hai-bin; Wang Dao-bo; Yu Xiu-fen

    2006-01-01

    This paper presents an application of an Ant Colony Optimization (ACO) algorithm to optimize the parameters in the design of a type of nonlinear PID controller. The ACO algorithm is a novel heuristic bionic algorithm, based on the behaviour of real ants in nature searching for food. In order to optimize the parameters of the nonlinear PID controller using the ACO algorithm, an objective function based on position tracking error was constructed, and an elitist strategy was adopted in the improved ACO algorithm. Detailed simulation steps are presented. The nonlinear PID controller tuned by the ACO algorithm shows high control precision and quick response.

  12. Identification of optimal parameter combinations for the emergence of bistability

    Science.gov (United States)

    Májer, Imre; Hajihosseini, Amirhossein; Becskei, Attila

    2015-12-01

    Bistability underlies cellular memory and maintains alternative differentiation states. Bistability can emerge only if its parameter range is either physically realizable or can be enlarged to become realizable. We derived a general rule and showed that the bistable range of a reaction parameter is maximized by a pair of other parameters in any gene regulatory network provided they satisfy a general condition. The resulting analytical expressions revealed whether or not such reaction pairs are present in prototypical positive feedback loops. They are absent from the feedback loop enclosed by protein dimers but present in both the toggle-switch and the feedback circuit inhibited by sequestration. Sequestration can generate bistability even at narrow feedback expression range at which cooperative binding fails to do so, provided inhibition is set to an optimal value. These results help to design bistable circuits and cellular reprogramming and reveal whether bistability is possible in gene networks in the range of realistic parameter values.

  13. Cosmological parameter estimation using Particle Swarm Optimization (PSO)

    CERN Document Server

    Prasad, Jayanti

    2011-01-01

    Obtaining the set of cosmological parameters consistent with observational data is an important exercise in current cosmological research. It involves finding the global maximum of the likelihood function in the multi-dimensional parameter space. Currently, sampling-based methods, which are in general stochastic in nature, like Markov Chain Monte Carlo (MCMC), are commonly used for parameter estimation. The beauty of stochastic methods is that the computational cost grows, at most, linearly rather than exponentially (as in grid-based approaches) with the dimensionality of the search space. MCMC methods sample the full joint probability distribution (posterior), from which one- and two-dimensional probability distributions, best fit (average) values of parameters and then error bars can be computed. In the present work we demonstrate the application of another stochastic method, named Particle Swarm Optimization (PSO), that is widely used in the field of engineering and artificial intelligence, for cosmo...
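    A minimal global-best PSO sketch for maximizing a toy two-parameter log-likelihood; the likelihood surface, swarm size and coefficients are placeholders and do not reproduce the cosmological pipeline described above.

        import numpy as np

        rng = np.random.default_rng(1)

        def log_likelihood(theta):
            # Toy 2-parameter likelihood surface standing in for the cosmological one.
            return -0.5 * np.sum((theta - np.array([0.3, 70.0])) ** 2 / np.array([0.01, 4.0]))

        n_particles, n_iter = 30, 200
        w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration coefficients
        pos = rng.uniform([0.0, 50.0], [1.0, 90.0], size=(n_particles, 2))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([log_likelihood(p) for p in pos])
        gbest = pbest[np.argmax(pbest_val)]

        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, 2))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
            vals = np.array([log_likelihood(p) for p in pos])
            improved = vals > pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[np.argmax(pbest_val)]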

  14. NWP model forecast skill optimization via closure parameter variations

    Science.gov (United States)

    Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.

    2012-04-01

    We present results of a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of the NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.

  15. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    Science.gov (United States)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
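    A minimal sketch of a Monte Carlo parameter sweep of the kind described above: the parameter names, bounds, the stand-in simulation and the acceptance criterion are hypothetical and do not correspond to LifeModeler's actual interface or the study's values.

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical bounds for a few muscle parameters (max force in N, optimal fiber
        # length in m, tendon slack length in m); not LifeModeler's actual parameter names.
        bounds = {"f_max": (1500.0, 3500.0), "l_opt": (0.06, 0.12), "l_slack": (0.25, 0.40)}

        def simulate_activation(params):
            # Stand-in for a full musculoskeletal simulation returning peak activation
            # of rectus femoris during the squat.
            return 0.6 * params["f_max"] / 3500.0 * params["l_opt"] / 0.12

        n_trials = 5000
        samples = [{k: rng.uniform(*b) for k, b in bounds.items()} for _ in range(n_trials)]
        # Keep parameter sets whose predicted activation falls near the observed ~60% peak.
        accepted = [s for s in samples if abs(simulate_activation(s) - 0.60) < 0.05]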

  16. Study on optimization of parameters in a biological model

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    According to the data observed in a China-Japan Joint Investigation, the parameters of an ecosystem dynamics model (Qiao et al., 2000) were optimized. Values for eighteen model parameters were obtained, with nutrient half-saturation constants Kn = 1.4 μmol/dm3, Kp = 0.129 μmol/dm3 and Ks = 1.16 μmol/dm3 for the diatom and Kn = 0.345 μmol/dm3, Kp = 0.113 μmol/dm3 for the flagellate. Three proposals for setting up an objective function for this multiple objective problem were discussed in detail.

  17. Optimizing bowtie structure parameters for specific incident light

    Institute of Scientific and Technical Information of China (English)

    Wang Qiao; WU Shi-Fa; Li Xu-Feng; Wang Xiao-Gang

    2010-01-01

    We investigate the optical properties of a bowtie-shaped aperture using the finite difference time domain method to optimize its geometric parameters for specific incident light. The influence of the parameters on local field enhancement and resonant wavelength in the visible frequency range is numerically analysed. It is found that the major resonance of the spectrum depends exponentially on the bowtie angle but is independent of the overall aperture size. The simulation also demonstrates that increasing the aperture size raises the local field intensity on the exit plane due to an enlarged interaction area between the light and the metal medium, and that the near-field spot size is closely related to the gap. Based on these results, the design rules of the bowtie structure can be optimized for specific excitation wavelengths.

  18. Using string invariants for prediction searching for optimal parameters

    Science.gov (United States)

    Bundzel, Marek; Kasanický, Tomáš; Pinčák, Richard

    2016-02-01

    We have developed a novel prediction method based on string invariants. The method does not require learning, but a small set of parameters must be set to achieve optimal performance. We have implemented an evolutionary algorithm for the parametric optimization. We have tested the performance of the method on artificial and real-world data and compared the performance to statistical methods and to a number of artificial intelligence methods. We have used data and the results of a prediction competition as a benchmark. The results show that the method performs well in single-step prediction, but the method's performance for multiple-step prediction needs to be improved. The method works well for a wide range of parameters.

  19. The optimization of operating parameters on microalgae upscaling process planning.

    Science.gov (United States)

    Ma, Yu-An; Huang, Hsin-Fu; Yu, Chung-Chyi

    2016-03-01

    The upscaling process planning developed in this study primarily involved optimizing operating parameters, i.e., dilution ratios, during process designs. Minimal variable cost was used as an indicator for selecting the optimal combination of dilution ratios. The upper and lower mean confidence intervals obtained from the actual cultured cell density data were used as the final cell density stability indicator after the operating parameters or dilution ratios were selected. The process planning method and results were demonstrated through three case studies of batch culture simulation. They are (1) final objective cell densities were adjusted, (2) high and low light intensities were used for intermediate-scale cultures, and (3) the number of culture days was expressed as integers for the intermediate-scale culture.

  20. Optimization of E. coli Cultivation Model Parameters Using Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Olympia Roeva

    2012-04-01

    Full Text Available In this paper, a novel meta-heuristic algorithm, namely the Firefly Algorithm (FA), is adapted and applied for model parameter identification of an E. coli fed-batch cultivation process. A system of ordinary nonlinear differential equations is used to model the biomass growth and substrate utilization. Parameter optimization is performed using a real experimental data set from an E. coli MC4110 fed-batch cultivation process. The FA adjustments are done based on several pre-tests according to the optimization problem considered here. The simulation results indicate that the applied algorithm is effective and efficient. As a result, a model with a high degree of accuracy is obtained by applying the FA.
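    A minimal firefly-algorithm sketch of the kind adapted above: a placeholder sum-of-squares cost stands in for the mismatch between simulated and measured cultivation trajectories, and the swarm size, coefficients and bounds are illustrative assumptions, not the paper's tuned FA settings.

        import numpy as np

        rng = np.random.default_rng(2)

        def cost(p):
            # Placeholder for the sum-of-squares error between simulated and measured
            # biomass/substrate trajectories; the real model is a nonlinear ODE system.
            return np.sum((p - np.array([0.5, 0.02, 2.0])) ** 2)

        n_fireflies, n_iter, dim = 20, 100, 3
        alpha, beta0, gamma = 0.05, 1.0, 1.0           # randomization, attractiveness, absorption
        x = rng.uniform(0.0, 3.0, size=(n_fireflies, dim))

        for _ in range(n_iter):
            f = np.array([cost(p) for p in x])
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if f[j] < f[i]:                    # brighter (lower-cost) firefly attracts i
                        r2 = np.sum((x[i] - x[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
        best = x[np.argmin([cost(p) for p in x])]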

  1. Parameter Optimization for Laser Polishing of Niobium for SRF Applications

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Liang [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States) and William and Mary College, Williamsburg, VA (United States); Klopf, John Michael [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Reece, Charles E. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Kelley, Michael J. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States) and William and Mary College, Williamsburg, VA (United States)

    2013-06-01

    Surface smoothness is critical to the performance of SRF cavities. As laser technology has been widely applied to metal machining and surface treatment, we are encouraged to use it on niobium as an alternative to the traditional wet polishing process, in which aggressive chemicals are involved. In this study, we describe progress toward smoothing by optimizing laser parameters on BCP-treated niobium surfaces. Results show that micro-smoothing of the surface without ablation is achievable.

  2. Optimization of the drying parameters of a veneer roller dryer

    OpenAIRE

    Marttila, Heikki

    2014-01-01

    The objective of the master's thesis was to experimentally find the optimal drying parameters for spruce heartwood veneers in terms of veneer quality and drying capacity. Quality here refers to moisture content, moisture deviation, tensile strength (across the grain direction), surface roughness, wettability, waviness and other visual defects. The strength properties of plywood were excluded from the study. The mill experiments were conducted at the UPM Pellos 3 jet roller dryer in June ...

  3. Limiting Behaviour in Parameter Optimal Iterative Learning Control

    Institute of Scientific and Technical Information of China (English)

    David H. Owens; Maria Tomas-Rodriguez; Jari J. Hätönen

    2006-01-01

    This paper analyses the concept of a Limit Set in Parameter Optimal Iterative Learning Control (ILC). We investigate the existence of stable and unstable parts of the Limit Set and demonstrate that they will often exist in practice. This is illustrated via a two-dimensional example in which the convergence of the learning algorithm is analyzed from the error's dynamic behaviour. These ideas are extended to the N-dimensional case by analogy and example.

  4. PARAMETER ESTIMATION OF VALVE STICTION USING ANT COLONY OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    S. Kalaivani

    2012-07-01

    Full Text Available In this paper, a procedure for quantifying valve stiction in control loops based on ant colony optimization has been proposed. Pneumatic control valves are widely used in the process industry. The control valve contains non-linearities such as stiction, backlash, and deadband that in turn cause oscillations in the process output. Stiction is one of the long-standing problems and it is the most severe problem in control valves. Thus the measurement data from an oscillating control loop can be used as a possible diagnostic signal to provide an estimate of the stiction magnitude. Quantification of control valve stiction is still a challenging issue. Prior to doing stiction detection and quantification, it is necessary to choose a suitable model structure to describe control-valve stiction. To understand the stiction phenomenon, the Stenman model is used. Ant Colony Optimization (ACO), an intelligent swarm algorithm, proves effective in various fields. The ACO algorithm is inspired by the natural trail-following behaviour of ants. The parameters of the Stenman model are estimated using ant colony optimization, from the input-output data, by minimizing the error between the actual stiction model output and the simulated stiction model output. Using ant colony optimization, the Stenman model with known nonlinear structure and unknown parameters can be estimated.

  5. Optimizing spectral CT parameters for material classification tasks.

    Science.gov (United States)

    Rigie, D S; La Rivière, P J

    2016-06-21

    In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POC's) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POC's predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies. PMID:27227430

  6. Optimal construction parameters of electrosprayed trilayer organic photovoltaic devices

    International Nuclear Information System (INIS)

    A detailed investigation of the optimal set of parameters employed in multilayer device fabrication obtained through successive electrospray deposited layers is reported. In this scheme, the donor/acceptor (D/A) bulk heterojunction layer is sandwiched between two thin stacked layers of individual donor and acceptor materials. The stacked-layer geometry with optimal thicknesses plays a decisive role in improving operating characteristics. Among the parameters of the multilayer organic photovoltaic device, the D/A concentration ratio, blend thickness and stacking layer thicknesses are optimized. Other parameters, such as thermal annealing and the role of the top metal contacts, are also discussed. The internal photon-to-current efficiency is found to attain a strong response in the 500 nm optical region for the most efficient device architectures. Such an observation indicates a clear interplay between photon harvesting by the active layers and transport by the ancillary stacking layers, opening up the possibility to engineer both the material fine structure and the device architecture to obtain the best photovoltaic response from a complex organic heterostructure. (paper)

  7. Optimizing spectral CT parameters for material classification tasks

    Science.gov (United States)

    Rigie, D. S.; La Rivière, P. J.

    2016-06-01

    In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POC’s) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POC’s predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies.

  8. Optimization of laser butt welding parameters with multiple performance characteristics

    Science.gov (United States)

    Sathiya, P.; Abdul Jaleel, M. Y.; Katherasan, D.; Shanmugarajan, B.

    2011-04-01

    This paper presents a study carried out on 3.5 kW cooled slab laser welding of 904 L super austenitic stainless steel. The butt joints were welded with different shielding gases, namely argon, helium and nitrogen, at a constant flow rate. Super austenitic stainless steel (SASS) normally contains high amounts of Mo, Cr, Ni, N and Mn. The mechanical properties are controlled to obtain good welded joints. The quality of the joint is evaluated by studying the features of the weld bead geometry, such as bead width (BW) and depth of penetration (DOP). In this paper, the tensile strength and bead profiles (BW and DOP) of laser welded butt joints made of AISI 904 L SASS are investigated. The Taguchi approach is used as a statistical design of experiments (DOE) technique for optimizing the selected welding parameters. Grey relational analysis and the desirability approach are applied to optimize the input parameters by considering multiple output variables simultaneously. Confirmation experiments have also been conducted for both analyses to validate the optimized parameters.

  9. Damage localization using experimental modal parameters and topology optimization

    Science.gov (United States)

    Niemann, Hanno; Morlier, Joseph; Shahdin, Amir; Gourinat, Yves

    2010-04-01

    This work focuses on the development of a damage detection and localization tool using the topology optimization feature of MSC.Nastran. This approach is based on the correlation between a local stiffness loss and the change in modal parameters due to damage in structures. The loss in stiffness is accounted for by the topology optimization approach when updating undamaged numerical models towards similar models with embedded damage. Hereby, only a mass penalization and the changes in experimentally obtained modal parameters are used as objectives. The theoretical background for the implementation of this method is derived and programmed in a Nastran input file, and the general feasibility of the approach is validated numerically, as well as experimentally, by updating a model of an experimentally tested composite laminate specimen. The damage has been introduced to the specimen by controlled low-energy impacts, and high-quality vibration tests have been conducted on the specimen for different levels of damage. These supervised experiments allow testing of the numerical diagnosis tool by comparing the results with both NDT techniques and the results of previous works (concerning shifts in modal parameters due to damage). Good results have finally been achieved for the localization of the damage by the topology optimization.

  10. Optimizing casting parameters of steel ingot based on orthogonal method

    Institute of Scientific and Technical Information of China (English)

    张沛; 李学通; 臧新良; 杜凤山

    2008-01-01

    The influence and significance of casting parameters on the solidification process of a steel ingot were discussed based on finite element method (FEM) results using the orthogonal experiment method. Range analysis, analysis of variance (ANOVA) and an optimization project were used to investigate the FEM results. In order to decrease the ingot riser head and improve the utilization ratio of the ingot, the casting parameters, involving casting temperature, pouring velocity and interface heat transfer, were optimized to decrease shrinkage pores and microporosity. The results show that the heat transfer coefficient between the melt and the heated board is the more sensitive factor. Shrinkage pores and microporosity are reduced under conditions of low temperature, high pouring velocity and high heat transfer between melt and mold. If heat transfer in the ingot body is quicker than that in the riser, the position of shrinkage pores and microporosity will be closer to the riser top. The results of the optimization project show that, with rational parameters, little shrinkage porosity and microporosity reach into the ingot body, so the riser size can be reduced.

  11. Optimization of Parameter Selection for Partial Least Squares Model Development

    Science.gov (United States)

    Zhao, Na; Wu, Zhi-Sheng; Zhang, Qiao; Shi, Xin-Yuan; Ma, Qun; Qiao, Yan-Jiang

    2015-07-01

    In multivariate calibration using a spectral dataset, it is difficult to optimize nonsystematic parameters in a quantitative model, i.e., spectral pretreatment, latent factors and variable selection. In this study, we describe a novel and systematic approach that uses a processing trajectory to select three parameters: the spectral pretreatment, variable importance in the projection (VIP) for variable selection, and the number of latent factors in the Partial Least Squares (PLS) model. The root mean square error of calibration (RMSEC), the root mean square error of prediction (RMSEP), the ratio of standard error of prediction to standard deviation (RPD), and the determination coefficients of calibration (R²cal) and validation (R²pre) were simultaneously assessed to select the best modeling path. We used three different near-infrared (NIR) datasets, which illustrated that there is more than one modeling path that ensures good modeling. The PLS model optimizes modeling parameters step-by-step, but the robust model described here demonstrates better efficiency than other published approaches.
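    A minimal sketch of two of the parameter choices mentioned above, selecting the number of latent factors by cross-validation and applying a simple VIP > 1 variable-selection rule with scikit-learn's PLSRegression on synthetic spectra; the data, the particular VIP formulation and the thresholds are assumptions, not the authors' processing-trajectory method.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        X = rng.normal(size=(80, 200))                 # synthetic "spectra"
        y = X[:, 10] - 0.5 * X[:, 50] + rng.normal(scale=0.1, size=80)

        # Sweep the number of latent factors and keep the one with the best CV R^2.
        scores = {k: cross_val_score(PLSRegression(n_components=k), X, y, cv=5).mean()
                  for k in range(1, 11)}
        best_k = max(scores, key=scores.get)

        # VIP scores from the fitted model (one common formulation).
        pls = PLSRegression(n_components=best_k).fit(X, y)
        W, T, Q = pls.x_weights_, pls.x_scores_, pls.y_loadings_
        ssy = np.sum(T ** 2, axis=0) * Q.ravel() ** 2
        vip = np.sqrt(X.shape[1] * (W ** 2 / np.sum(W ** 2, axis=0)) @ ssy / ssy.sum())
        keep = vip > 1.0                               # typical VIP > 1 selection rule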

  12. Structural parameter optimization design for Halbach permanent maglev rail

    Energy Technology Data Exchange (ETDEWEB)

    Guo, F., E-mail: guofang19830119@163.co [R and D Center of Applied Superconductivity, Huazhong University of Science and Technology, Wuhan 430074 (China); Tang, Y.; Ren, L.; Li, J. [R and D Center of Applied Superconductivity, Huazhong University of Science and Technology, Wuhan 430074 (China)

    2010-11-01

    Maglev rail is an important part of the magnetic levitation launch system. Reducing the manufacturing cost of the magnetic levitation rail is a key problem for the development of the magnetic levitation launch system. The Halbach permanent array has the advantage that the fundamental spatial field is cancelled on one side of the array while the field on the other side is enhanced. Using this array in the design of a high temperature superconducting permanent maglev rail can therefore improve the surface magnetic field and the levitation force. In order to make the best use of Nd-Fe-B (NdFeB) material and reduce the cost of the maglev rail, the effects of the rail's structural parameters on the levitation force and the utilization rate of NdFeB material are analyzed. The optimal ranges of these structural parameters are obtained. The mutual impact of these parameters is also discussed. The optimization method for these structural parameters is proposed at the end of this paper.

  13. Determination of the Optimized Automation Rate considering Effects of Automation on Human Operators in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Automation refers to the use of a device or a system to perform a function previously performed by a human operator. It is introduced to reduce human errors and to enhance performance in various industrial fields, including the nuclear industry. However, these positive effects are not always achieved in complex systems such as nuclear power plants (NPPs). An excessive introduction of automation can generate new roles for human operators and change activities in unexpected ways. As more automation systems are accepted, the ability of human operators to detect automation failures and resume manual control is diminished. This disadvantage of automation is called the Out-of-the-Loop (OOTL) problem. The positive and negative effects of automation should be considered together when determining the appropriate level of automation to introduce. Thus, in this paper, we suggest an estimation method that considers the positive and negative effects of automation at the same time to determine the appropriate introduction of automation. This concept, however, is limited in that it does not consider the effects of automation on human operators; thus, a new estimation method for the automation rate was suggested to overcome this problem

  14. Mathematical Modelling and Parameter Optimization of Pulsating Heat Pipes

    CERN Document Server

    Yang, Xin-She; Luan, Tao; Koziel, Slawomir

    2014-01-01

    Proper heat transfer management is important to key electronic components in microelectronic applications. Pulsating heat pipes (PHP) can be an efficient solution to such heat transfer problems. However, mathematical modelling of a PHP system is still very challenging, due to the complexity and multiphysics nature of the system. In this work, we present a simplified, two-phase heat transfer model, and our analysis shows that it can make good predictions about startup characteristics. Furthermore, by considering parameter estimation as a nonlinear constrained optimization problem, we have used the firefly algorithm to find parameter estimates efficiently. We have also demonstrated that it is possible to obtain good estimates of key parameters using very limited experimental data.

  15. Optimization of Neutrino Oscillation Parameters Using Differential Evolution

    Institute of Scientific and Technical Information of China (English)

    Ghulam Mustafa; Faisal Akram; Bilal Masud

    2013-01-01

    We show how the traditional grid-based method for finding the neutrino oscillation parameters Δm² and tan²θ can be combined with an optimization technique, Differential Evolution (DE), to obtain a significant decrease in the computer processing time required to obtain the minimal chi-square (χ²) in four different regions of the parameter space. We demonstrate the efficiency for the two-neutrino case. For this, the χ² function for neutrino oscillations is evaluated on grids with different densities of points in the standard allowed regions of the parameter space of Δm² and tan²θ, using experimental and theoretical total event rates of the chlorine (Homestake), Gallex+GNO, SAGE, Superkamiokande, and SNO detectors. We find that using DE in combination with the grid-based method with a small density of points can produce results comparable with those obtained using a high-density grid, in much less computation time.
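    A minimal sketch of a Differential Evolution search using SciPy, with a toy two-parameter χ² surface standing in for the event-rate fit described above; the surface, bounds and settings are illustrative assumptions only.

        import numpy as np
        from scipy.optimize import differential_evolution

        def chi2(theta):
            # Toy chi-square surface in (log10(dm2), log10(tan2theta)); the real one
            # compares predicted and observed event rates for each detector.
            log_dm2, log_t2t = theta
            return (log_dm2 + 4.5) ** 2 / 0.1 + (log_t2t + 0.4) ** 2 / 0.05

        bounds = [(-12.0, -3.0), (-4.0, 1.0)]          # illustrative search region
        result = differential_evolution(chi2, bounds, seed=0, tol=1e-8)
        print(result.x, result.fun)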

  16. Considerations for parameter optimization and sensitivity in climate models

    Science.gov (United States)

    Neelin, J. David; Bracco, Annalisa; Luo, Hao; McWilliams, James C.; Meyerson, Joyce E.

    2010-01-01

    Climate models exhibit high sensitivity in some respects, such as for differences in predicted precipitation changes under global warming. Despite successful large-scale simulations, regional climatology features prove difficult to constrain toward observations, with challenges including high-dimensionality, computationally expensive simulations, and ambiguity in the choice of objective function. In an atmospheric General Circulation Model forced by observed sea surface temperature or coupled to a mixed-layer ocean, many climatic variables yield rms-error objective functions that vary smoothly through the feasible parameter range. This smoothness occurs despite nonlinearity strong enough to reverse the curvature of the objective function in some parameters, and to imply limitations on multimodel ensemble means as an estimator of global warming precipitation changes. Low-order polynomial fits to the model output spatial fields as a function of parameter (quadratic in model field, fourth-order in objective function) yield surprisingly successful metamodels for many quantities and facilitate a multiobjective optimization approach. Tradeoffs arise as optima for different variables occur at different parameter values, but with agreement in certain directions. Optima often occur at the limit of the feasible parameter range, identifying key parameterization aspects warranting attention—here the interaction of convection with free tropospheric water vapor. Analytic results for spatial fields of leading contributions to the optimization help to visualize tradeoffs at a regional level, e.g., how mismatches between sensitivity and error spatial fields yield regional error under minimization of global objective functions. The approach is sufficiently simple to guide parameter choices and to aid intercomparison of sensitivity properties among climate models. PMID:21115841

  17. Optimal Selection of Parameters for Nonuniform Embedding of Chaotic Time Series Using Ant Colony Optimization.

    Science.gov (United States)

    Shen, Meie; Chen, Wei-Neng; Zhang, Jun; Chung, Henry Shu-Hung; Kaynak, Okyay

    2013-04-01

    The optimal selection of parameters for time-delay embedding is crucial to the analysis and the forecasting of chaotic time series. Although various parameter selection techniques have been developed for conventional uniform embedding methods, the study of parameter selection for nonuniform embedding has progressed at a slow pace. In nonuniform embedding, which enables different dimensions to have different time delays, the selection of time delays for different dimensions presents a difficult optimization problem with combinatorial explosion. To solve this problem efficiently, this paper proposes an ant colony optimization (ACO) approach. Taking advantage of the characteristic of incremental solution construction of the ACO, the proposed ACO for nonuniform embedding (ACO-NE) divides the solution construction procedure into two phases, i.e., selection of embedding dimension and selection of time delays. In this way, both the embedding dimension and the time delays can be optimized, along with the search process of the algorithm. To accelerate search speed, we extract useful information from the original time series to define heuristics to guide the search direction of ants. Three geometry- or model-based criteria are used to test the performance of the algorithm. The optimal embeddings found by the algorithm are also applied in time-series forecasting. Experimental results show that the ACO-NE is able to yield good embedding solutions from both the viewpoints of optimization performance and prediction accuracy. PMID:23144038

  18. Process Parameters Optimization in Single Point Incremental Forming

    Science.gov (United States)

    Gulati, Vishal; Aryal, Ashmin; Katyal, Puneet; Goswami, Amitesh

    2016-04-01

    This work aims to optimize the formability and surface roughness of parts formed by the single-point incremental forming process for an Aluminium-6063 alloy. The tests are based on Taguchi's L18 orthogonal array selected on the basis of DOF. The tests have been carried out on a vertical machining center (DMC70V) using CAD/CAM software (SolidWorks V5/MasterCAM). Two levels of tool radius and three levels of sheet thickness, step size, tool rotational speed, feed rate and lubrication have been considered as the input process parameters. Wall angle and surface roughness have been considered as the process responses. The influential process parameters for formability and surface roughness have been identified with the help of statistical tools (response table, main effect plot and ANOVA). The parameter that has the utmost influence on formability and surface roughness is lubrication. In the case of formability, lubrication followed by tool rotational speed, feed rate, sheet thickness, step size and tool radius have influence in descending order. For surface roughness, lubrication followed by feed rate, step size, tool radius, sheet thickness and tool rotational speed have influence in descending order. The predicted optimal values for the wall angle and surface roughness are found to be 88.29° and 1.03225 µm. The confirmation experiments were conducted thrice, and the values of wall angle and surface roughness were found to be 85.76° and 1.15 µm respectively.

  19. Comparison and Application of Metaheuristic Population-Based Optimization Algorithms in Manufacturing Automation

    Directory of Open Access Journals (Sweden)

    Rhythm Suren Wadhwa

    2011-11-01

    Full Text Available The paper presents a comparison and application of metaheuristic population-based optimization algorithms to a flexible manufacturing automation scenario in a metalcasting foundry. It presents a novel application and comparison of the Bee Colony Algorithm (BCA) with variations of Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) for an object recognition problem in a robot material handling system. To enable robust pick-and-place activity of metalcast parts by a six-axis industrial robot manipulator, it is important that the correct orientation of the parts is input to the manipulator via the digital image captured by the vision system. This information is then used for orienting the robot gripper to grip the part from a moving conveyor belt. The objective is to find the reference templates of the manufactured parts in the target landscape picture, which may contain noise. The normalized cross-correlation (NCC) function is used as the objective function in the optimization procedure. The ultimate goal is to test improved algorithms that could prove useful in practical manufacturing automation scenarios.
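    A minimal sketch of an NCC objective of the kind referenced above, evaluating a candidate template placement inside an image; the encoding of a solution as a top-left (row, column) position and the boundary handling are illustrative assumptions, not the paper's exact formulation.

        import numpy as np

        def ncc(patch, template):
            # Normalized cross-correlation between an image patch and a template.
            p = patch - patch.mean()
            t = template - template.mean()
            denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
            return float((p * t).sum() / denom) if denom > 0 else 0.0

        def objective(pos, image, template):
            # A candidate solution encodes the top-left (row, col) of the template in
            # the image; the swarm/colony maximizes NCC over these positions.
            r, c = int(round(pos[0])), int(round(pos[1]))
            h, w = template.shape
            if r < 0 or c < 0 or r + h > image.shape[0] or c + w > image.shape[1]:
                return -1.0                            # infeasible placement
            return ncc(image[r:r + h, c:c + w], template)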

  20. Automated scheme to determine design parameters for a recoverable reentry vehicle

    International Nuclear Information System (INIS)

    The NRV (Nosetip Recovery Vehicle) program at Sandia Laboratories is designed to recover the nose section from a sphere cone reentry vehicle after it has flown a near ICBM reentry trajectory. Both mass jettison and parachutes are used to reduce the velocity of the RV near the end of the trajectory to a sufficiently low level that the vehicle may land intact. The design problem of determining mass jettison time and parachute deployment time in order to ensure that the vehicle does land intact is considered. The problem is formulated as a min-max optimization problem where the design parameters are to be selected to minimize the maximum possible deviation in the design criteria due to uncertainties in the system. The results of the study indicate that the optimal choice of the design parameters ensures that the maximum deviation in the design criteria is within acceptable bounds. This analytically ensures the feasibility of recovery for NRV

  1. Optimizing experimental parameters for tracking of diffusing particles

    Science.gov (United States)

    Vestergaard, Christian L.

    2016-08-01

    We describe how a single-particle tracking experiment should be designed in order for its recorded trajectories to contain the most information about a tracked particle's diffusion coefficient. The precision of estimators for the diffusion coefficient is affected by motion blur, limited photon statistics, and the length of recorded time series. We demonstrate for a particle undergoing free diffusion that precision is negligibly affected by motion blur in typical experiments, while optimizing photon counts and the number of recorded frames is the key to precision. Building on these results, we describe for a wide range of experimental scenarios how to choose experimental parameters in order to optimize the precision. Generally, one should choose quantity over quality: experiments should be designed to maximize the number of frames recorded in a time series, even if this means lower information content in individual frames.
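
    As an illustration of how a diffusion coefficient can be estimated from a recorded trajectory, the sketch below shows a simplified covariance-based estimator; the exact motion-blur correction discussed in this line of work is omitted, and the trajectory format and time step are assumptions.

    ```python
    import numpy as np

    def diffusion_coefficient_cve(x, dt):
        """Covariance-based estimate of D from a 1-D trajectory x sampled at interval dt.
        The second term approximately compensates for static localization noise; a full
        estimator would also include a motion-blur correction, omitted in this sketch."""
        dx = np.diff(np.asarray(x, dtype=float))
        return np.mean(dx ** 2) / (2.0 * dt) + np.mean(dx[:-1] * dx[1:]) / dt
    ```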

  2. Adaptive Estimation of Intravascular Shear Rate Based on Parameter Optimization

    Science.gov (United States)

    Nitta, Naotaka; Takeda, Naoto

    2008-05-01

    The relationships between the intravascular wall shear stress, controlled by flow dynamics, and the progression of arteriosclerotic plaque have been clarified by various studies. Since the shear stress is determined by the viscosity coefficient and the shear rate, both factors must be estimated accurately. In this paper, an adaptive method for improving the accuracy of quantitative shear rate estimation was investigated. First, the parameter dependence of the estimated shear rate was investigated in terms of the differential window width and the number of averaged velocity profiles, based on simulation and experimental data, and then the shear rate calculation was optimized. The results revealed that the proposed adaptive method of shear rate estimation was effective for improving the accuracy of the shear rate calculation.
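
    The two tuning parameters discussed here (differential window width and the number of averaged velocity profiles) can be illustrated with a minimal sketch; the central-difference scheme and variable names below are assumptions, not the authors' exact formulation.

    ```python
    import numpy as np

    def estimate_shear_rate(profiles, dr, n_average=8, window=3):
        """Shear rate from velocity profiles (n_frames x n_depths).
        n_average: number of consecutive profiles averaged before differentiation;
        window: width (in samples) of the finite-difference window; dr: sample spacing."""
        v = np.mean(np.asarray(profiles, dtype=float)[:n_average], axis=0)  # averaging suppresses noise
        half = max(window // 2, 1)
        rate = np.full_like(v, np.nan)
        for i in range(len(v)):
            lo, hi = max(i - half, 0), min(i + half, len(v) - 1)
            if hi > lo:
                rate[i] = (v[hi] - v[lo]) / ((hi - lo) * dr)  # wider windows smooth more
        return rate
    ```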

  3. Total energy control system autopilot design with constrained parameter optimization

    Science.gov (United States)

    Ly, Uy-Loi; Voth, Christopher

    1990-01-01

    A description is given of the application of a multivariable control design method (SANDY) based on constrained parameter optimization to the design of a multiloop aircraft flight control system. Specifically, the design method is applied to the direct synthesis of a multiloop AFCS inner-loop feedback control system based on total energy control system (TECS) principles. The design procedure offers a structured approach for the determination of a set of stabilizing controller design gains that meet design specifications in closed-loop stability, command tracking performance, disturbance rejection, and limits on control activities. The approach can be extended to a broader class of multiloop flight control systems. Direct tradeoffs between many real design goals are rendered systematic by proper formulation of the design objectives and constraints. Satisfactory designs are usually obtained in a few iterations. Performance characteristics of the optimized TECS design have been improved, particularly in the areas of closed-loop damping and control activity in the presence of turbulence.

  4. Standardless quantification by parameter optimization in electron probe microanalysis

    Energy Technology Data Exchange (ETDEWEB)

    Limandri, Silvina P. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Bonetto, Rita D. [Centro de Investigacion y Desarrollo en Ciencias Aplicadas Dr. Jorge Ronco (CINDECA), CONICET, 47 Street 257, (1900) La Plata (Argentina); Facultad de Ciencias Exactas, Universidad Nacional de La Plata, 1 and 47 Streets (1900) La Plata (Argentina); Josa, Victor Galvan; Carreras, Alejo C. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Trincavelli, Jorge C., E-mail: trincavelli@famaf.unc.edu.ar [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina)

    2012-11-15

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 in 66% of the cases for POEMA, GENESIS and DTSA, respectively. Highlights: A method for standardless quantification in EPMA is presented. It gives better results than the commercial software GENESIS Spectrum. It gives better results than the software DTSA. It allows the determination of the conductive coating thickness. It gives an estimation of the concentration uncertainties.
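
    The core idea, fitting an analytical spectrum model to the measured spectrum by least squares, can be sketched as follows; the toy model below (exponential background plus one Gaussian line) is a stand-in for POEMA's much richer physical description, and all names are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def model_spectrum(energy, p):
        """Toy analytical spectrum: exponential bremsstrahlung-like background
        plus a single Gaussian characteristic line."""
        bkg_a, bkg_b, area, center, sigma = p
        return bkg_a * np.exp(-bkg_b * energy) + \
               area * np.exp(-0.5 * ((energy - center) / sigma) ** 2)

    def fit_spectrum(energy, measured, p0):
        """Minimize the quadratic difference between measured and modeled spectra."""
        result = least_squares(lambda p: model_spectrum(energy, p) - measured, p0)
        return result.x, result.cost
    ```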

  5. Optimization of Parameters for Melt Crystallization of p-Cresol

    Institute of Scientific and Technical Information of China (English)

    丛山; 李鑫钢; 邬俊; 许长春

    2012-01-01

    Laboratory-scale experiments were carried out to evaluate the influences of operational parameters on the melt crystallization efficiency for p-cresol purification. The optimal crystallization conditions were determined: dynamic pulsed aeration at 90 L·h⁻¹ and a cooling rate of 0.6-0.8 °C·min⁻¹, followed by sweating at 0.2-0.3 °C·min⁻¹ for 40 min. Results also demonstrate that the melt crystallization efficiency is sensitive to the feed concentration, which highlights this technology for the separation and purification of high-purity products.

  6. Parameter optimization in AQM controller design to support TCP traffic

    Science.gov (United States)

    Yang, Wei; Yang, Oliver W.

    2004-09-01

    TCP congestion control mechanisms have been widely investigated and deployed on the Internet to prevent congestion collapse. We would like to employ modern control theory to specify quantitatively the control performance of the TCP communication system. In this paper, we make use of a commonly used performance index called the Integral of the Square of the Error (ISE), which is a quantitative measure to gauge the performance of a control system. By applying the ISE performance index to the Proportional-plus-Integral controller based on Pole Placement (PI_PP controller) for active queue management (AQM) in IP routers, we can further tune the parameters of the controller to achieve optimal control minimizing control errors. We have analyzed the dynamic model of the TCP congestion control under this ISE, and used the OPNET simulation tool to verify the derived optimized parameters of the controllers.
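
    A hedged sketch of how the ISE could be evaluated for candidate PI gains is shown below; the first-order plant is a toy stand-in for the linearized TCP/AQM queue dynamics, and all numerical values are assumptions.

    ```python
    import numpy as np

    def ise_for_gains(kp, ki, q_ref=100.0, dt=0.01, t_end=20.0):
        """Integral of the Squared Error for a PI-controlled first-order plant."""
        q, integral, ise = 0.0, 0.0, 0.0
        for _ in np.arange(0.0, t_end, dt):
            e = q_ref - q                         # queue-length tracking error
            integral += e * dt
            u = kp * e + ki * integral            # PI control action
            q += dt * (-0.5 * q + 2.0 * u)        # assumed first-order queue response
            ise += e * e * dt
        return ise

    # Example: coarse grid search for the gains minimizing the ISE
    best = min((ise_for_gains(kp, ki), kp, ki)
               for kp in (0.05, 0.1, 0.2) for ki in (0.01, 0.05, 0.1))
    ```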

  7. Optimal VLF Parameters for Pitch Angle Scattering of Trapped Electrons

    Science.gov (United States)

    Albert, J. M.; Inan, U. S.

    2001-12-01

    VLF waves are known to determine the lifetimes of energetic radiation belt electrons in the inner radiation belt and slot regions. Artificial injection of such waves from ground- or space-based transmitters may thus be used to affect the trapped electron population. In this paper, we seek to determine the optimal parameters (frequency and wave normal angle) of a quasi-monochromatic VLF wave using bounce-averaged quasi-linear theory. We consider the cumulative effects of all harmonic resonances and determine the diffusion rates of particles with selected energies on particular L-shells. We also compare the effects of the VLF wave to diffusion driven by other whistler-mode waves (plasmaspheric hiss, lightning, and VLF transmitters). With appropriate choice of the wave parameters, it may be possible to substantially reduce the lifetime of selected classes of particles.

  8. Automated Software Testing Using a Metaheuristic Technique Based on Ant Colony Optimization

    CERN Document Server

    Srivastava, Praveen Ranjan

    2011-01-01

    Software testing is an important and valuable part of the software development life cycle. Due to time, cost and other circumstances, exhaustive testing is not feasible, which is why there is a need to automate the software testing process. Testing effectiveness can be achieved by State Transition Testing (STT), which is commonly used in real-time, embedded and web-based software systems. The aim of the current paper is to present an algorithm, based on an ant colony optimization technique, for the generation of optimal and minimal test sequences for the behavior specification of software. The presented approach generates test sequences in order to obtain complete software coverage. The paper also discusses a comparison between two metaheuristic techniques (Genetic Algorithm and Ant Colony Optimization) for transition-based testing.

  9. GAUFRE: A tool for an automated determination of atmospheric parameters from spectroscopy

    Directory of Open Access Journals (Sweden)

    Fossati L.

    2013-03-01

    Full Text Available We present an automated tool for measuring atmospheric parameters (Teff, log g, [Fe/H]) for F-G-K dwarf and giant stars. The tool, called GAUFRE, is composed of several routines written in C++: GAUFRE-RV measures radial velocity from spectra via cross-correlation against a synthetic template, GAUFRE-EW measures atmospheric parameters through the classic line-by-line technique and GAUFRE-CHI2 performs a χ2 fitting to a library of synthetic spectra. A set of F-G-K stars extensively studied in the literature were used as a benchmark for the program: their high signal-to-noise and high resolution spectra were analyzed by using GAUFRE and the results were compared with those present in the literature. The tool is also implemented in order to perform the spectral analysis after fixing the surface gravity (log g) to the accurate value provided by asteroseismology. A set of CoRoT stars, belonging to the LRc01 and LRa01 fields, was used for first testing the performance and behavior of the program when using the seismic log g.

  10. GAUFRE: a tool for an automated determination of atmospheric parameters from spectroscopy

    CERN Document Server

    Valentini, Marica; Miglio, Andrea; Fossati, Luca; Munari, Ulisse

    2013-01-01

    We present an automated tool for measuring atmospheric parameters (T_eff, log(g), [Fe/H]) for F-G-K dwarf and giant stars. The tool, called GAUFRE, is written in C++ and composed of several routines: GAUFRE-RV measures radial velocity from spectra via cross-correlation against a synthetic template, GAUFRE-EW measures atmospheric parameters through the classic line-by-line technique and GAUFRE-CHI2 performs a chi^2 fitting to a library of synthetic spectra. A set of F-G-K stars extensively studied in the literature were used as a benchmark for the program: their high signal-to-noise and high resolution spectra were analysed by using GAUFRE and results were compared with those present in literature. The tool is also implemented in order to perform the spectral analysis after fixing the surface gravity (log(g)) to the accurate value provided by asteroseismology. A set of CoRoT stars, belonging to LRc01 and LRa01 fields was used for first testing the performances and the behaviour of the program when using the se...
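
    The GAUFRE-CHI2 step can be illustrated with a minimal chi-square match against a synthetic library; the library data structure and variable names below are assumptions, not the tool's actual interface.

    ```python
    import numpy as np

    def best_library_match(observed_flux, flux_error, library):
        """Return the (Teff, log g, [Fe/H]) label of the synthetic spectrum minimizing chi^2.
        `library` is assumed to be a list of (label_dict, flux_array) pairs resampled
        onto the same wavelength grid as the observation."""
        chi2 = np.array([np.sum(((observed_flux - flux) / flux_error) ** 2)
                         for _, flux in library])
        i = int(np.argmin(chi2))
        return library[i][0], chi2[i]
    ```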

  11. Biohydrogen Production from Simple Carbohydrates with Optimization of Operating Parameters.

    Science.gov (United States)

    Muri, Petra; Osojnik-Črnivec, Ilja Gasan; Djinovič, Petar; Pintar, Albin

    2016-01-01

    Hydrogen could be an alternative energy carrier in the future, as well as a source for chemical and fuel synthesis, due to its high energy content, environmentally friendly technology and zero carbon emissions. In particular, the conversion of organic substrates to hydrogen via the dark fermentation process is of great interest. The aim of this study was fermentative hydrogen production with an anaerobic mixed culture using different carbon sources (mono- and disaccharides) and further optimization by varying a number of operating parameters (pH value, temperature, organic loading, mixing intensity). Among all tested mono- and disaccharides, glucose was shown to be the preferred carbon source, exhibiting a hydrogen yield of 1.44 mol H(2)/mol glucose. Further evaluation of selected operating parameters showed that the highest hydrogen yield (1.55 mol H(2)/mol glucose) was obtained at an initial pH value of 6.4, T=37 °C and an organic loading of 5 g/L. The obtained results demonstrate that the lower hydrogen yield under all other conditions was associated with redirection of metabolic pathways from butyric and acetic acid production (accompanied by H(2) production) to lactic acid production (where simultaneous H(2) production is not mandatory). These results therefore represent an important foundation for the optimization and industrial-scale production of hydrogen from organic substrates. PMID:26970800

  12. Optimizing the Pulsed Current Gas Tungsten Arc Welding Parameters

    Institute of Scientific and Technical Information of China (English)

    M. Balasubramanian; V. Jayabalan; V. Balasubramanian

    2006-01-01

    The selection of process parameters in the gas tungsten arc (GTA) welding of a titanium alloy is presented for obtaining optimum grain size and hardness. Titanium alloy (Ti-6Al-4V) is one of the most important non-ferrous metals, offering great potential for application in the aerospace, biomedical and chemical industries because of its low density (4.5 g/cm3), excellent corrosion resistance, high strength, attractive fracture behaviour and high melting point (1678 °C). The preferred welding process for this titanium alloy is frequently GTA welding due to its comparatively easier applicability and better economy. In the case of single-pass GTA welding of thinner sections of this alloy, pulsed current has been found beneficial due to its advantages over the conventional continuous current process. Many considerations come into the picture, and one needs to carefully balance the various pulsed current parameters to reach an optimum combination. A four-factor, five-level, central composite, rotatable design matrix was used to optimize the required number of experimental conditions. Mathematical models were developed to predict the fusion zone grain size using analysis of variance (ANOVA) and regression analysis. The developed models were optimized using the traditional Hooke and Jeeves algorithm. Experimental results are provided to illustrate the proposed approach.
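
    The Hooke and Jeeves pattern search mentioned above is a classic derivative-free method; a generic minimal sketch follows (the fitted regression model for grain size would be supplied as the objective function f), and it is not the authors' code.

    ```python
    import numpy as np

    def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
        """Minimize f(x) by a simplified Hooke-Jeeves pattern search over a parameter vector."""
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        for _ in range(max_iter):
            xt, ft = x.copy(), fx
            for i in range(len(x)):                 # exploratory moves, one axis at a time
                for d in (step, -step):
                    cand = xt.copy()
                    cand[i] += d
                    fc = f(cand)
                    if fc < ft:
                        xt, ft = cand, fc
                        break
            if ft < fx:                             # pattern move along the improving direction
                pattern = xt + (xt - x)
                fp = f(pattern)
                x, fx = (pattern, fp) if fp < ft else (xt, ft)
            else:
                step *= shrink                      # no improvement: refine the step size
                if step < tol:
                    break
        return x, fx
    ```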

  13. Biohydrogen Production from Simple Carbohydrates with Optimization of Operating Parameters.

    Science.gov (United States)

    Muri, Petra; Osojnik-Črnivec, Ilja Gasan; Djinovič, Petar; Pintar, Albin

    2016-01-01

    Hydrogen could be an alternative energy carrier in the future, as well as a source for chemical and fuel synthesis, due to its high energy content, environmentally friendly technology and zero carbon emissions. In particular, the conversion of organic substrates to hydrogen via the dark fermentation process is of great interest. The aim of this study was fermentative hydrogen production with an anaerobic mixed culture using different carbon sources (mono- and disaccharides) and further optimization by varying a number of operating parameters (pH value, temperature, organic loading, mixing intensity). Among all tested mono- and disaccharides, glucose was shown to be the preferred carbon source, exhibiting a hydrogen yield of 1.44 mol H(2)/mol glucose. Further evaluation of selected operating parameters showed that the highest hydrogen yield (1.55 mol H(2)/mol glucose) was obtained at an initial pH value of 6.4, T=37 °C and an organic loading of 5 g/L. The obtained results demonstrate that the lower hydrogen yield under all other conditions was associated with redirection of metabolic pathways from butyric and acetic acid production (accompanied by H(2) production) to lactic acid production (where simultaneous H(2) production is not mandatory). These results therefore represent an important foundation for the optimization and industrial-scale production of hydrogen from organic substrates.

  14. RootGraph: a graphic optimization tool for automated image analysis of plant roots.

    Science.gov (United States)

    Cai, Jinhai; Zeng, Zhanghui; Connor, Jason N; Huang, Chun Yuan; Melino, Vanessa; Kumar, Pankaj; Miklavcic, Stanley J

    2015-11-01

    This paper outlines a numerical scheme for accurate, detailed, and high-throughput image analysis of plant roots. In contrast to existing root image analysis tools that focus on root system-average traits, a novel, fully automated and robust approach for the detailed characterization of root traits, based on a graph optimization process is presented. The scheme, firstly, distinguishes primary roots from lateral roots and, secondly, quantifies a broad spectrum of root traits for each identified primary and lateral root. Thirdly, it associates lateral roots and their properties with the specific primary root from which the laterals emerge. The performance of this approach was evaluated through comparisons with other automated and semi-automated software solutions as well as against results based on manual measurements. The comparisons and subsequent application of the algorithm to an array of experimental data demonstrate that this method outperforms existing methods in terms of accuracy, robustness, and the ability to process root images under high-throughput conditions.

  15. Parameter Estimation of Induction Motors Using Water Cycle Optimization

    Directory of Open Access Journals (Sweden)

    M. Yazdani-Asrami

    2013-12-01

    Full Text Available This paper presents the application of the recently introduced water cycle algorithm (WCA) to optimize the parameters of the exact and approximate induction motor models from nameplate data. Considering that induction motors are widely used in industrial applications, these parameters have a significant effect on the accuracy and efficiency of the motors and, ultimately, on the overall system performance. It is therefore essential to develop algorithms for the parameter estimation of the induction motor. The fundamental concepts and ideas underlying the proposed method are inspired by nature and based on the observation of the water cycle process and how rivers and streams flow to the sea in the real world. The objective function is defined as the minimization of the real values of the relative error between the measured and estimated torques of the machine at different slip points. The proposed WCA approach has been applied to two different sample motors. Results of the proposed method have been compared with those of other metaheuristic methods previously applied to the problem, which shows the feasibility and fast convergence of the proposed approach.
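
    Only the objective, the sum of relative torque errors at measured slip points, is sketched below using the standard approximate equivalent circuit; the WCA itself is not shown, and the machine ratings and the assumed 4-pole, 50 Hz supply are illustrative assumptions.

    ```python
    import numpy as np

    def torque_approx(params, slip, v_phase=230.0, w_sync=2 * np.pi * 25.0):
        """Electromagnetic torque of the approximate equivalent circuit.
        params = (R1, R2, X) with X the total leakage reactance; w_sync here
        corresponds to an assumed 4-pole, 50 Hz machine."""
        r1, r2, x = params
        return (3.0 * v_phase ** 2 * (r2 / slip)) / (
            w_sync * ((r1 + r2 / slip) ** 2 + x ** 2))

    def objective(params, slip_points, measured_torque):
        """Sum of absolute relative torque errors, the quantity a WCA (or any other
        metaheuristic) would minimize over the circuit parameters."""
        est = torque_approx(np.asarray(params, float), np.asarray(slip_points, float))
        return float(np.sum(np.abs((measured_torque - est) / measured_torque)))
    ```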

  16. Automated Discovery of Elementary Chemical Reaction Steps Using Freezing String and Berny Optimization Methods.

    Science.gov (United States)

    Suleimanov, Yury V; Green, William H

    2015-09-01

    We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps using in cooperation double- and single-ended transition-state optimization algorithms--the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several single-molecule systems of combustion and atmospheric chemistry importance is investigated. The proposed algorithm allowed us to detect without any human intervention not only "known" reaction pathways, manually detected in the previous studies, but also new, previously "unknown", reaction pathways which involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes. PMID:26575920

  17. Automated Discovery of Elementary Chemical Reaction Steps Using Freezing String and Berny Optimization Methods

    CERN Document Server

    Suleimanov, Yury V

    2015-01-01

    We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps using in cooperation single- and double-ended transition-state optimization algorithms - the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several systems of combustion and atmospheric chemistry importance is investigated. The proposed algorithm allowed us to detect without any human intervention not only "known" reaction pathways, manually detected in the previous studies, but also new, previously "unknown", reaction pathways which involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the possibility of discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes.

  18. Optimal Control and Coordination of Connected and Automated Vehicles at Urban Traffic Intersections

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yue J. [Boston University; Malikopoulos, Andreas [ORNL; Cassandras, Christos G. [Boston University

    2016-01-01

    We address the problem of coordinating online a continuous flow of connected and automated vehicles (CAVs) crossing two adjacent intersections in an urban area. We present a decentralized optimal control framework whose solution yields for each vehicle the optimal acceleration/deceleration at any time in the sense of minimizing fuel consumption. The solution, when it exists, allows the vehicles to cross the intersections without the use of traffic lights, without creating congestion on the connecting road, and under the hard safety constraint of collision avoidance. The effectiveness of the proposed solution is validated through simulation considering two intersections located in downtown Boston, and it is shown that coordination of CAVs can reduce significantly both fuel consumption and travel time.

  19. Automated Discovery of Elementary Chemical Reaction Steps Using Freezing String and Berny Optimization Methods.

    Science.gov (United States)

    Suleimanov, Yury V; Green, William H

    2015-09-01

    We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps using in cooperation double- and single-ended transition-state optimization algorithms--the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several single-molecule systems of combustion and atmospheric chemistry importance is investigated. The proposed algorithm allowed us to detect without any human intervention not only "known" reaction pathways, manually detected in the previous studies, but also new, previously "unknown", reaction pathways which involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes.

  20. Optimization-based particle filter for state and parameter estimation

    Institute of Scientific and Technical Information of China (English)

    Li Fu; Qi Fei; Shi Guangming; Zhang Li

    2009-01-01

    In recent years, the theory of the particle filter has been developed and widely used for state and parameter estimation in nonlinear/non-Gaussian systems. Choosing a good importance density is a critical issue in particle filter design. In order to improve the approximation of the posterior distribution, this paper provides an optimization-based algorithm (the steepest descent method) to generate the proposal distribution and then sample particles from that distribution. The algorithm is applied in a 1-D case, and the simulation results show that the proposed particle filter performs better than the extended Kalman filter (EKF), the standard particle filter (PF), the extended Kalman particle filter (PF-EKF) and the unscented particle filter (UPF), both in efficiency and in estimation precision.
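
    A hedged sketch of the proposal-generation idea, moving particles a few steepest-descent steps on the negative log-posterior before weighting, is given below; function names, the gradient interface and step sizes are assumptions, not the paper's exact algorithm.

    ```python
    import numpy as np

    def refine_proposal(particles, grad_neg_log_post, step=0.1, n_steps=5):
        """Move each particle a few steepest-descent steps down the negative log-posterior
        so that the proposal concentrates where the posterior mass is; the moved particles
        would then be weighted and resampled as in a standard particle filter."""
        x = np.asarray(particles, dtype=float).copy()
        for _ in range(n_steps):
            x -= step * np.array([grad_neg_log_post(p) for p in x])
        return x
    ```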

  1. Parameter optimization in differential geometry based solvation models.

    Science.gov (United States)

    Wang, Bao; Wei, G W

    2015-10-01

    Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty in parametrization, which must ensure the stability of the solution of strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules.

  2. Optimizing resistance spot welding parameters for vibration damping steel sheets

    Energy Technology Data Exchange (ETDEWEB)

    Oberle, H. [Centre de Recherches et Developpements Metallurgiques, Sollac (France); Commaret, C.; Minier, C. [Automobiles Citroeen PSA (France); Magnaud, R. [Direction des Methodes Carrosserie, Renault (France); Pradere, G. [Materials Engineering Dept., Renault (France)

    1998-01-01

    In order to meet the growing demand for functionality and comfort in vehicles, weight and quietness are major concerns for carmakers and materials suppliers. Noise reduction by damping vibrations can meet both aspects. Therefore, steelmakers have developed vibration damping steel sheets (VDSS), which are a three-layer composite material composed of two steel sheets sandwiching a viscoelastic resin core. Industrial use of VDSS in automobiles usually implies the product can be resistance welded. The intent of this investigation is to set up rules to optimize resistance spot welding of VDSS. Two phenomena are the focus of this research: the reduction of blistering and gas expulsion holes. Different aspects are studied, such as the effect of polymer presence and of electrode shape on welding domain and the evaluation of the influence of a welding schedule on blistering and expulsion holes. It appears that polymer presence has no effect on domain width, but does on its position. Higher frequency of expulsion holes with truncated electrodes can be explained with mechanical considerations. From the influence of short circuit voltage, current delay angle and welding schedule on the frequency of gas expulsion holes, a mechanism responsible for expulsion holes is proposed and optimal welding parameters are given.

  3. Robust integrated autopilot/autothrottle design using constrained parameter optimization

    Science.gov (United States)

    Ly, Uy-Loi; Voth, Christopher; Sanjay, Swamy

    1990-01-01

    A multivariable control design method based on constrained parameter optimization was applied to the design of a multiloop aircraft flight control system. Specifically, the design method is applied to the following: (1) direct synthesis of a multivariable 'inner-loop' feedback control system based on total energy control principles; (2) synthesis of speed/altitude-hold designs as 'outer-loop' feedback/feedforward control systems around the above inner loop; and (3) direct synthesis of a combined 'inner-loop' and 'outer-loop' multivariable control system. The design procedure offers a direct and structured approach for the determination of a set of controller gains that meet design specifications in closed-loop stability, command tracking performance, disturbance rejection, and limits on control activities. The presented approach may be applied to a broader class of multiloop flight control systems. Direct tradeoffs between many real design goals are rendered systematic by this method following careful problem formulation of the design objectives and constraints. Performance characteristics of the optimization design were improved over the current autopilot design on the B737-100 Transport Research Vehicle (TSRV) at the landing approach and cruise flight conditions; particularly in the areas of closed-loop damping, command responses, and control activity in the presence of turbulence.

  4. Production of Biodiesel from Thumba Oil: Optimization of Process Parameters

    Directory of Open Access Journals (Sweden)

    Ashish Karnwal

    2010-12-01

    Full Text Available Fast depletion of the world's petroleum reserves and increasing ecological concerns have created a great demand for environmentally benign renewable energy resources. Biodiesel has emerged as a sustainable alternative to petroleum-origin diesel, and its usage has been encouraged by many countries. The transesterification reaction is the most common process to produce biodiesel from a variety of vegetable oils and animal fats. The transesterification process depends upon a number of process parameters which must be optimized in order to maximize the biodiesel yield. Thumba oil is an underutilized non-edible vegetable oil, available in large quantities in Rajasthan, India, and its potential suitability as a biodiesel feedstock has not yet been evaluated comprehensively. In this research paper, the transesterification process for the production of Thumba oil methyl ester has been analyzed, and the various process variables such as temperature, catalyst concentration, amount of methanol and reaction time have been optimized with the objective of maximizing yield. The optimum conditions for transesterification of Thumba oil with methanol and KOH as catalyst were found to be a 60 °C reaction temperature, a 6:1 molar ratio of Thumba oil to methanol, 0.75% catalyst (w/w) and 1 hour reaction time.

  5. High Temperature Epoxy Foam: Optimization of Process Parameters

    Directory of Open Access Journals (Sweden)

    Samira El Gazzani

    2016-06-01

    Full Text Available For many years, reduction of fuel consumption has been a major aim in terms of both costs and environmental concerns. One option is to reduce the weight of fuel consumers. For this purpose, the use of a lightweight material based on rigid foams is a relevant choice. This paper deals with a new high-temperature epoxy expanded material as a substitute for phenolic resin, which is classified as potentially mutagenic by the European REACH directive. The optimization of a thermoset foam depends on two major parameters: the reticulation process and the expansion of the foaming agent. Controlling these two phenomena can lead to a fully expanded and cured material. The rheological behavior of the epoxy resin is studied and the gel time is determined at various temperatures. The expansion of the foaming agent is investigated by thermomechanical analysis. Results are correlated and compared with samples foamed under the same temperature conditions. The ideal foaming/gelation temperature is then determined. The second part of this research concerns the optimization of the curing cycle of a high-temperature trifunctional epoxy resin. A two-step curing cycle was defined by considering the influence of different curing schedules on the glass transition temperature of the material. The final foamed material has a glass transition temperature of 270 °C.

  6. Optimization of system parameters for a complete multispectral polarimeter

    International Nuclear Information System (INIS)

    We optimize a general class of complete multispectral polarimeters with respect to signal-to-noise ratio, stability against alignment errors, and the minimization of errors regarding a given set of polarization states. The class of polarimeters that are dealt with consists of at least four polarization optics each with a multispectral detector. A polarization optic is made of an azimuthal oriented wave plate and a polarizing filter. A general, but not unique, analytic solution that optimizes the signal-to-noise ratio is introduced for a polarimeter that incorporates four simultaneous measurements with four independent optics. The optics consist of four sufficient wave plates, where at least one is a quarter-wave plate. The solution is stable with respect to the retardance of the quarter-wave plate; therefore, it can be applied to real-world cases where the retardance deviates from λ/4. The solution is a set of seven rotational parameters that depends on the given retardances of the wave plates. It can be applied to a broad range of real world cases. A numerical method for the optimization of arbitrary polarimeters of the type discussed is also presented and applied for two cases. First, the class of polarimeters that were analytically dealt with are further optimized with respect to stability and error performance with respect to linear polarized states. Then a multispectral case for a polarimeter that consists of four optics with real achromatic wave plates is presented. This case was used as the theoretical background for the development of the Airborne Multi-Spectral Sunphoto- and Polarimeter (AMSSP), which is an instrument for the German research aircraft HALO.

  7. Optimal z-axis scanning parameters for gynecologic cytology specimens

    Directory of Open Access Journals (Sweden)

    Amber D Donnelly

    2013-01-01

    Full Text Available Background: The use of virtual microscopy (VM) in clinical cytology has been limited due to the inability to focus through three-dimensional (3D) cell clusters with a single focal plane (2D images). Limited information exists regarding the optimal scanning parameters for 3D scanning. Aims: The purpose of this study was to determine the optimal number of focal plane levels and the optimal scanning interval to digitize gynecological (GYN) specimens prepared on SurePath™ glass slides while maintaining a manageable file size. Subjects and Methods: The iScanCoreo Au scanner (Ventana, AZ, USA) was used to digitize 192 SurePath™ glass slides at three focal plane levels at a 1 µm interval. The digitized virtual images (VI) were annotated using BioImagene's Image Viewer. Five participants interpreted the VI and recorded the focal plane level at which they felt confident, and later interpreted the corresponding glass slide specimens using light microscopy (LM). The participants completed a survey about their experiences. Inter-rater agreement and concordance between the VI and the glass slide specimens were evaluated. Results: This study determined an overall high intra-rater diagnostic concordance between glass slides and VI (89-97%); however, the inter-rater agreement for all cases was higher for LM (94%) compared with VM (82%). Survey results indicate participants found low-grade dysplasia and koilocytes easy to diagnose using three focal plane levels, the image enhancement tool was useful and focusing through the cells helped with interpretation; however, the participants found VI with hyperchromatic crowded groups challenging to interpret. Participants reported they prefer using LM over VM. This study supports using three focal plane levels and a 1 µm interval to expand the use of VM in GYN cytology. Conclusion: Future improvements in technology and appropriate training should make this format a more preferable and practical option in clinical cytology.

  8. Optimization of the operating parameters of the LHCb muon system

    CERN Document Server

    Gruber, L; Schmidt, B

    2010-01-01

    LHCb is a B physics experiment at the Large Hadron Collider (LHC) at CERN. The LHCb muon detector has a total area of about 435 m2 and is divided into five stations with four regions of different granularity. The whole system is composed of 1380 Multi Wire Proportional Chambers (MWPCs). To obtain a good trigger performance, each chamber should have an efficiency of approximately 99%, which translates into a time resolution < 3.5 ns r.m.s. To achieve the required efficiency and time resolution, the muon chambers have to be operated at the lowest possible threshold. To minimize ageing effects at the same time, the chambers also have to be operated at the lowest possible voltage. In order to optimize the operating parameters of the muon system it is of particular importance to have a profound understanding of the effects that limit the performance. This work describes a detailed study of all the resolution limiting parameters of the MWPCs in order to find out how they contribute to time resolution and therefor...

  9. The mass movement routing tool r.randomwalk and its functionalities for parameter sensitivity analysis and optimization

    Science.gov (United States)

    Krenn, Julia; Mergili, Martin

    2016-04-01

    r.randomwalk is a GIS-based, multi-functional conceptual tool for mass movement routing. Starting from one to many release points or release areas, mass points are routed down through the digital elevation model until a defined break criterion is reached. Break criteria are defined by the user and may consist of an angle of reach or a related parameter (empirical-statistical relationships), the drop of the flow velocity to zero (two-parameter friction model), or the exceedance of a maximum runup height. Multiple break criteria may be combined. A constrained random walk approach is applied for the routing procedure, where the slope and the perpetuation of the flow direction determine the probability of the flow to move in a certain direction. r.randomwalk is implemented as a raster module of the GRASS GIS software and, as such, is open source. It can be obtained from http://www.mergili.at/randomwalk.html. Besides other innovative functionalities, r.randomwalk provides built-in functionality for the derivation of an impact indicator index (III) map with values in the range 0-1, as sketched below. The III is derived from multiple model runs with different combinations of input parameters varied in a random or controlled way. It represents the fraction of model runs predicting an impact at a given pixel and is evaluated against the observed impact area through an ROC plot. The related tool r.ranger facilitates the automated generation and evaluation of many III maps from a variety of sets of parameter combinations. We employ r.randomwalk and r.ranger for parameter optimization and sensitivity analysis. Thereby we do not focus on parameter values but, accounting for the uncertainty inherent in all parameters, on parameter ranges. In this sense, we demonstrate two strategies for parameter sensitivity analysis and optimization. We avoid (i) one-at-a-time parameter testing, which would fail to account for interdependencies of the parameters, and (ii) exploring all possible
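
    The impact indicator index and its ROC evaluation can be sketched as follows; the array layout and threshold grid are assumptions, and this is not the r.randomwalk/r.ranger code itself.

    ```python
    import numpy as np

    def impact_indicator_index(impact_stack):
        """III per pixel: fraction of model runs (stacked along axis 0) predicting an impact."""
        return np.asarray(impact_stack, dtype=float).mean(axis=0)

    def roc_curve_points(iii, observed_mask, thresholds=np.linspace(0.0, 1.0, 21)):
        """(false positive rate, true positive rate) of the III map against the observed
        impact area, one point per threshold, as used for an ROC plot evaluation."""
        obs = observed_mask.astype(bool)
        points = []
        for t in thresholds:
            pred = iii >= t
            tpr = (pred & obs).sum() / max(obs.sum(), 1)
            fpr = (pred & ~obs).sum() / max((~obs).sum(), 1)
            points.append((fpr, tpr))
        return points
    ```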

  10. Artificial Intelligence Based Selection of Optimal Cutting Tool and Process Parameters for Effective Turning and Milling Operations

    Science.gov (United States)

    Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta

    2016-06-01

    With the increasing trend toward automation in the modern manufacturing industry, human intervention in routine, repetitive and data-specific activities of manufacturing is greatly reduced. In this paper, an attempt has been made to reduce the human intervention in the selection of an optimal cutting tool and process parameters for metal cutting applications using Artificial Intelligence techniques. Generally, the selection of the appropriate cutting tool and parameters in metal cutting is carried out by an experienced technician or cutting tool expert based on his knowledge base or an extensive search of a huge cutting tool database. The proposed approach replaces the existing practice of physically searching for tools in data books and tool catalogues with an intelligent knowledge-based selection system. This system employs artificial intelligence based techniques such as artificial neural networks, fuzzy logic and genetic algorithms for decision making and optimization. This intelligence-based optimal tool selection strategy was developed and implemented using MathWorks MATLAB Version 7.11.0. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for the selection of the appropriate cutting tool and the optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life and tool cost.

  11. Computer controlled automated assay for comprehensive studies of enzyme kinetic parameters.

    Directory of Open Access Journals (Sweden)

    Felix Bonowski

    Full Text Available Stability and biological activity of proteins are highly dependent on their physicochemical environment. The development of realistic models of biological systems necessitates quantitative information on the response to changes of external conditions like pH, salinity and concentrations of substrates and allosteric modulators. Changes in just a few variable parameters rapidly lead to large numbers of experimental conditions, which go beyond the experimental capacity of most research groups. We implemented a computer-aided experimenting framework ("robot lab assistant") that allows us to parameterize abstract, human-readable descriptions of micro-plate based experiments with variable parameters and execute them on a conventional 8-channel liquid handling robot fitted with a sensitive plate reader. A set of newly developed R packages translates the instructions into machine commands, executes them, collects the data and processes it without user interaction. By combining script-driven experimental planning, execution and data analysis, our system can react to experimental outcomes autonomously, allowing outcome-based iterative experimental strategies. The framework was applied in a response-surface-model based iterative optimization of buffer conditions and an investigation of substrate, allosteric effector, pH and salt dependent activity profiles of pyruvate kinase (PYK). A diprotic model of enzyme kinetics was used to model the combined effects of changing pH and substrate concentrations. The 8 parameters of the model could be estimated from a single two-hour experiment using nonlinear least-squares regression. The model with the estimated parameters successfully predicted the pH and PEP dependence of initial reaction rates, while the PEP concentration dependent shift of optimal pH could only be reproduced with a set of manually tweaked parameters. Differences between model predictions and experimental observations at low pH suggest additional protonation
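
    A simplified sketch of fitting a diprotic pH/substrate rate model by nonlinear least squares is given below; this 4-parameter form is a reduction of the 8-parameter model mentioned in the record, and the data names are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def diprotic_rate(X, vmax, km, pka1, pka2):
        """Initial rate vs. substrate concentration and pH: a Michaelis-Menten term
        scaled by a diprotic (two-pKa) activity factor."""
        s, ph = X
        h = 10.0 ** (-ph)
        active = 1.0 / (1.0 + h / 10.0 ** (-pka1) + 10.0 ** (-pka2) / h)
        return vmax * s / (km + s) * active

    # Hypothetical measured initial rates on a (substrate, pH) grid:
    # popt, pcov = curve_fit(diprotic_rate, (s_data, ph_data), v_data,
    #                        p0=[1.0, 0.5, 6.0, 8.0])
    ```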

  12. Optimal training dataset composition for SVM-based, age-independent, automated epileptic seizure detection.

    Science.gov (United States)

    Bogaarts, J G; Gommer, E D; Hilkman, D M W; van Kranen-Mastenbroek, V H J M; Reulen, J P H

    2016-08-01

    Automated seizure detection is a valuable asset to health professionals, making adequate treatment possible in order to minimize brain damage. Most research focuses on two separate aspects of automated seizure detection: EEG feature computation and classification methods. Little research has been published regarding optimal training dataset composition for patient-independent seizure detection. This paper evaluates the performance of classifiers trained on different datasets in order to determine the optimal dataset for use in classifier training for automated, age-independent seizure detection. Three datasets are used to train a support vector machine (SVM) classifier: (1) EEG from neonatal patients, (2) EEG from adult patients and (3) EEG from both neonates and adults. To correct for baseline EEG feature differences among patients, feature normalization is essential. Usually, dedicated detection systems are developed for either neonatal or adult patients. Normalization might allow for the development of a single seizure detection system for patients irrespective of their age. Two classifier versions are trained on all three datasets: one with feature normalization and one without. This gives six different classifiers to evaluate using both the neonatal and adult test sets. As a performance measure, the area under the receiver operating characteristic curve (AUC) is used. With application of FBC, performance values of 0.90 and 0.93 were obtained for neonatal and adult seizure detection, respectively. For neonatal seizure detection, the classifier trained on EEG from adult patients performed significantly worse compared to both the classifier trained on EEG data from neonatal patients and the classifier trained on both neonatal and adult EEG data. For adult seizure detection, optimal performance was achieved by either the classifier trained on adult EEG data or the classifier trained on both neonatal and adult EEG data. Our results show that age
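
    A minimal sketch of the evaluation setup, training an SVM with or without feature normalization and scoring it by AUC, is shown below; scikit-learn's StandardScaler stands in for the normalization scheme described in the record, and the feature matrices and labels are assumed given.

    ```python
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.metrics import roc_auc_score

    def evaluate_detector(X_train, y_train, X_test, y_test, normalize=True):
        """Train an SVM seizure detector on a chosen training dataset and report the AUC
        on a held-out test set (e.g. neonatal or adult EEG features)."""
        if normalize:
            clf = make_pipeline(StandardScaler(), SVC(probability=True))
        else:
            clf = make_pipeline(SVC(probability=True))
        clf.fit(X_train, y_train)
        scores = clf.predict_proba(X_test)[:, 1]
        return roc_auc_score(y_test, scores)
    ```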

  13. Optimal Design of Variable Stiffness Composite Structures using Lamination Parameters

    NARCIS (Netherlands)

    IJsselmuiden, S.T.

    2011-01-01

    Fiber reinforced composite materials have gained widespread acceptance for a multitude of applications in the aerospace, automotive, maritime and wind-energy industries. Automated fiber placement technologies have developed rapidly over the past two decades, driven primarily by a need to reduce m

  14. An automated approach to magnetic divertor configuration design, using an efficient optimization methodology

    Energy Technology Data Exchange (ETDEWEB)

    Blommaert, Maarten; Reiter, Detlev [Institute of Energy and Climate Research (IEK-4), FZ Juelich GmbH, D-52425 Juelich (Germany); Heumann, Holger [Centre de Recherche INRIA Sophia Antipolis, BP 93 06902 Sophia Antipolis (France); Baelmans, Martine [KU Leuven, Department of Mechanical Engineering, 3001 Leuven (Belgium); Gauger, Nicolas Ralph [TU Kaiserslautern, Chair for Scientific Computing, 67663 Kaiserslautern (Germany)

    2015-05-01

    At present, several plasma boundary codes exist that attempt to describe the complex interactions in the divertor SOL (Scrape-Off Layer). The predictive capability of these edge codes is still very limited. Yet, in parallel to major efforts to mature edge codes, we face the design challenges for next step fusion devices. One of them is the design of the helium and heat exhaust system. In past automated design studies, results indicated large potential reductions in peak heat load by an increased magnetic flux divergence towards the target structures. In the present study, a free boundary magnetic equilibrium solver is included into the simulation chain to verify these tendencies. Additionally, we expanded the applicability of the automated design method by introducing advanced "adjoint" sensitivity computations. This method, inherited from airfoil shape optimization in aerodynamics, allows for a large number of design variables at no additional computational cost. Results are shown for a design application of the new WEST divertor.

  15. A parameter optimized approach for improving credit card fraud detection

    Directory of Open Access Journals (Sweden)

    Prakash

    2013-01-01

    Full Text Available The usage of credit cards has increased greatly due to rapid innovation in electronic commerce technology. Since the credit card has become the most popular mode of payment for both online and regular purchases, cases of fraud associated with it are also increasing. A normal Hidden Markov Model suffers from the problem that an optimal state sequence for the underlying Markov process cannot be found, and the observed sequence cannot be viewed as training a model to best fit the observed data. In this research, the main aim is to model the sequence of observations in credit card transaction processing using an Advanced Hidden Markov Model (AHMM) and show how it can be utilized for the detection of fraud. In this process an AHMM is initially trained with the normal behavior of a cardholder. If an incoming credit card transaction is not recognized by the trained AHMM with sufficiently high probability, it is believed to be fraudulent. The proposed work aims to adjust the model parameters to best fit the observations. The dimensions of the matrices (N and M) are fixed, but the elements of A, B and π are to be determined, subject to the row-stochastic condition. The ability to efficiently re-estimate the model itself is one of the more remarkable features of HMMs; this is referred to here as the AHMM.
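
    The acceptance test described here can be sketched with a scaled forward-algorithm likelihood; the model representation (pi, A, B), the observation symbol encoding and the threshold are assumptions, and this is not the AHMM training procedure itself.

    ```python
    import numpy as np

    def sequence_loglik(obs, pi, A, B):
        """Scaled forward-algorithm log-likelihood of an observation-symbol sequence
        under an HMM with initial distribution pi, transitions A (N x N) and emissions B (N x M)."""
        alpha = pi * B[:, obs[0]]
        c = alpha.sum()
        loglik, alpha = np.log(c), alpha / c
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            c = alpha.sum()
            loglik += np.log(c)
            alpha /= c
        return loglik

    def is_fraudulent(obs, model, threshold):
        """Flag a transaction sequence whose likelihood under the cardholder's trained
        model (pi, A, B) falls below a chosen acceptance threshold."""
        pi, A, B = model
        return sequence_loglik(obs, pi, A, B) < threshold
    ```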

  16. Inversion of generalized relaxation time distributions with optimized damping parameter

    Science.gov (United States)

    Florsch, Nicolas; Revil, André; Camerlynck, Christian

    2014-10-01

    Retrieving the Relaxation Time Distribution (RTD), the Grain Size Distribution (GSD) or the Pore Size Distribution (PSD) from low-frequency impedance spectra is a major goal in geophysics. The “Generalized RTD” generalizes parametric models like Cole-Cole and many others, but remains tricky to invert since this inverse problem is ill-posed. We propose to use generalized relaxation basis functions (for instance by decomposing the spectra on a basis of generalized Cole-Cole relaxation elements instead of the classical Debye basis) and to use the L-curve approach to optimize the damping parameter required to obtain smooth and realistic inverse solutions. We apply our algorithm to three examples, one synthetic and two real data sets, and the program includes the possibility of converting the RTD into a GSD or PSD by choosing the value of the constant connecting the relaxation time to the characteristic polarization size of interest. At high frequencies (typically above 1 kHz), a dielectric term is taken into account in the model. The code is provided as an open Matlab source as a supplementary file associated with this paper.
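
    A simplified sketch of damped (Tikhonov-style) inversion with an L-curve scan over the damping parameter is given below; the real RTD inversion additionally uses the generalized relaxation basis, smoothness operators and non-negativity constraints, all omitted here.

    ```python
    import numpy as np

    def damped_inverse(K, d, lam):
        """Damped least-squares solution of K m = d for damping parameter lam."""
        n = K.shape[1]
        return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ d)

    def l_curve(K, d, lambdas):
        """Residual norm vs. solution norm for each candidate damping parameter;
        the 'corner' of this curve is the usual L-curve choice of lam."""
        points = []
        for lam in lambdas:
            m = damped_inverse(K, d, lam)
            points.append((lam, np.linalg.norm(K @ m - d), np.linalg.norm(m)))
        return points
    ```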

  17. Application of RBF Neural Network in Optimizing Machining Parameters

    Institute of Scientific and Technical Information of China (English)

    朱喜林; 吴博达; 武星星

    2004-01-01

    In machining processes, errors of the rough workpiece in dimension, shape and location lead to changes in the processing quantity, and the material of a workpiece may not be uniform. For these reasons, the cutting force changes during machining, making the machining system deform. Consequently, errors in workpieces may occur. This is called the error reflection phenomenon. Generally, such errors can be reduced through repeated processing, using an appropriate processing quantity in each pass based on the operator's experience. According to the theory of error reflection, the error reflection coefficient indicates the extent to which errors of the rough workpiece influence errors of the finished workpiece. It is related to several factors such as the machining conditions, the hardness of the workpiece, etc. This non-linear relation cannot be worked out using any formula. An RBF neural network can approximate a non-linear function to any precision and can be trained quickly. In this paper, the non-linear mapping ability of a fuzzy-neural network is utilized to approximate this non-linear relation. After training the network with a sample collection obtained in experiments, an appropriate output can be obtained for a given input. In this way, one can obtain the required number of passes and the processing quantity for each pass from the machining conditions. Angular rigidity of the machining system, hardness of the workpiece, etc., can be input in the form of fuzzy values. The feasibility of solving error reflection and optimizing machining parameters with an RBF neural network is verified by a simulation test in MATLAB.
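
    A minimal sketch of a Gaussian RBF network fitted by least squares, mapping machining conditions to an error reflection coefficient, is shown below; the centers, widths and input variables are assumptions for illustration, not the authors' trained model.

    ```python
    import numpy as np

    def rbf_design(X, centers, width):
        """Gaussian RBF design matrix for inputs X (n_samples x n_features)."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    def fit_rbf(X, y, centers, width):
        """Least-squares output weights of an RBF network mapping machining conditions
        (e.g. system rigidity, workpiece hardness) to the error reflection coefficient."""
        Phi = rbf_design(X, centers, width)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return w

    def predict_rbf(Xq, centers, width, w):
        """Predicted error reflection coefficient for new machining conditions Xq."""
        return rbf_design(Xq, centers, width) @ w
    ```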

  18. Parameter Studies, time-dependent simulations and design with automated Cartesian methods

    Science.gov (United States)

    Aftosmis, Michael

    2005-01-01

    Over the past decade, NASA has made a substantial investment in developing adaptive Cartesian grid methods for aerodynamic simulation. Cartesian-based methods played a key role in both the Space Shuttle Accident Investigation and in NASA's return-to-flight activities. The talk will provide an overview of recent technological developments, focusing on the generation of large-scale aerodynamic databases, automated CAD-based design, and time-dependent simulations of bodies in relative motion. Automation, scalability and robustness underlie all of these applications, and research in each of these topics will be presented.

  19. Numerical and experimental analysis of a ducted propeller designed by a fully automated optimization process under open water condition

    Science.gov (United States)

    Yu, Long; Druckenbrod, Markus; Greve, Martin; Wang, Ke-qi; Abdel-Maksoud, Moustafa

    2015-10-01

    A fully automated optimization process is provided for the design of ducted propellers under open water conditions, including 3D geometry modeling, meshing, optimization algorithms and CFD analysis techniques. The developed process allows the direct integration of a RANSE solver in the design stage. A practical ducted propeller design case study is carried out for validation. Numerical simulations and open water tests were carried out and proved that the optimized ducted propeller improves hydrodynamic performance as predicted.

  20. Knowledge Network Driven Coordination and Robust Optimization to Support Concurrent and Collaborative Parameter Design

    OpenAIRE

    Hu, Jie; Peng, Yinghong; Xiong, Guangleng

    2007-01-01

    This study presents a parameter coordination and robust optimization approach based on knowledge network modeling. The method allows multidisciplinary designers to synthetically coordinate and optimize parameters considering multidisciplinary knowledge. First, a knowledge network model is established, including design knowledge from assembly, manufacture, performance, and simulation. Second, the parameter coordination method is presented to solve the knowledge network model,...

  1. Optimization of Laser Beam Transformation Hardening by One Single Parameter

    NARCIS (Netherlands)

    Meijer, J.; Sprang, van I.

    1991-01-01

    The process of laser beam transformation hardening is principally controlled by two independent parameters, the absorbed laser power on a given area and the interaction time. These parameters can be transformed into two functional parameters: the maximum surface temperature and the hardening depth.

  2. Optimization of operational aircraft parameters reducing noise emission

    OpenAIRE

    Abdallah, Lina; Khardi, Salah; Haddou, Mounir

    2010-01-01

    The objective of this paper is to develop a model and a minimization method to provide optimal flight paths reducing aircraft noise in the vicinity of airports. The optimization algorithm solves a complex optimal control problem and generates flight paths minimizing aircraft noise levels. Operational and safety constraints have been considered and their limits satisfied. Results are presented and discussed.

  3. Optimization of operational aircraft parameters Reducing Noise Emission

    CERN Document Server

    Abdallah, Lina; Khardi, Salah

    2008-01-01

    The objective of this paper is to develop a model and a minimization method that provide optimum flight paths reducing aircraft noise in the vicinity of airports. The optimization algorithm solves a complex optimal control problem and generates flight paths minimizing aircraft noise levels. Operational and safety constraints have been considered and their limits satisfied. Results are presented and discussed.

  4. Characterization and optimized control by means of multi-parameter controllers

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Carsten; Hoeg, S.; Thoegersen, A. (Dan-Ejendomme, Hellerup (Denmark)) (and others)

    2009-07-01

    Poorly functioning HVAC systems (Heating, Ventilation and Air Conditioning), as well as separate heating, ventilation and air conditioning systems, cost Danish society billions of kroner every year: partly because of increased energy consumption and high operational and maintenance costs, but mainly due to reduced productivity and illness-related absence caused by a poor indoor climate. Today, buildings and installations are typically operated with traditional building automation, which is characterised by 1) being based on static considerations, 2) each sensor being coupled with one actuator/valve, i.e. the sensor's signal is only used in one place in the system, 3) subsystems often being controlled independently of each other, and 4) the dynamics of building constructions and systems, which are very important for system and comfort regulation, not being considered. This, coupled with the widespread tendency to use large glass areas in facades without sufficient sun shading, makes it difficult to optimise comfort and energy consumption. The last 10-20 years have therefore seen a steady increase in complaints about the indoor climate in Danish buildings and, at the same time, new buildings often turn out to consume considerably more energy than expected. The purpose of the present project is to investigate which types of multi parameter sensors may be generated for buildings and, further, to carry out a preliminary evaluation of how such multi parameter controllers may be utilized for optimal control of buildings. The aim of the project is not to develop multi parameter controllers - this requires much more effort than is possible in the present project. The aim is to show the potential of using multi parameter sensors when controlling buildings. For this purpose a larger office building has been chosen - an office building with a high energy demand and complaints regarding the indoor climate. In order to

  5. Optimization of non-linear mass damper parameters for transient response

    DEFF Research Database (Denmark)

    Jensen, Jakob Søndergaard; Lazarov, Boyan Stefanov

    2008-01-01

    We optimize the parameters of multiple non-linear mass dampers based on numerical simulation of transient wave propagation through a linear mass-spring carrier structure. Topology optimization is used to obtain optimized distributions of damper mass ratio, natural frequency, damping ratio and nonlinear stiffness coefficient. Large improvements in performance are obtained with optimized parameters, and it is shown that nonlinear mass dampers can be more effective for wave attenuation than linear mass dampers.

  6. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    International Nuclear Information System (INIS)

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and thus are inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance the 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed image, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip limit. The goal of the optimization is to maximize the entropy of the processed image. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
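
    A rough sketch of this pipeline is shown below using scikit-image and SciPy. As a simplification, the interior-point optimization over the three parameters is replaced by a plain grid search that maximizes the entropy of the processed image; the parameter grids and smoothing weight are illustrative assumptions, not the values used in the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import data, exposure
from skimage.measure import shannon_entropy

def enhance(img, smooth_weight, kernel_size, clip_limit):
    """High-pass filtering (subtract a weighted Gaussian blur) followed by CLAHE."""
    img = img.astype(float)
    img = img / img.max()                          # normalize to [0, 1]
    highpass = img - smooth_weight * gaussian_filter(img, sigma=5)
    highpass = np.clip(highpass, 0.0, 1.0)
    return exposure.equalize_adapthist(highpass,
                                       kernel_size=kernel_size,
                                       clip_limit=clip_limit)

image = data.camera()  # stand-in for a 2D RT setup image
best = None
for w in (0.2, 0.4, 0.6):                          # Gaussian smoothing weighting factor
    for ks in (32, 64, 128):                       # CLAHE block size
        for cl in (0.01, 0.02, 0.05):              # CLAHE clip limit
            out = enhance(image, w, ks, cl)
            score = shannon_entropy(out)           # objective: entropy of the result
            if best is None or score > best[0]:
                best = (score, w, ks, cl)

print("best entropy %.3f with weight=%.1f, block=%d, clip=%.2f" % best)
```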

  7. Understanding Innovation Engines: Automated Creativity and Improved Stochastic Optimization via Deep Learning.

    Science.gov (United States)

    Nguyen, A; Yosinski, J; Clune, J

    2016-01-01

    The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search mitigates this problem by encouraging exploration in all interesting directions by replacing the performance objective with a reward for novel behaviors. This reward for novel behaviors has traditionally required a human-crafted, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and heroes instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a DNN-based novelty search in the image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g., churches, mosques, obelisks, etc.). Here, we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: for example, producing intelligent software, robot controllers, optimized physical components, and art. PMID:27367139

  8. Understanding Innovation Engines: Automated Creativity and Improved Stochastic Optimization via Deep Learning.

    Science.gov (United States)

    Nguyen, A; Yosinski, J; Clune, J

    2016-01-01

    The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search mitigates this problem by encouraging exploration in all interesting directions by replacing the performance objective with a reward for novel behaviors. This reward for novel behaviors has traditionally required a human-crafted, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and heroes instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a DNN-based novelty search in the image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g., churches, mosques, obelisks, etc.). Here, we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: for example, producing intelligent software, robot controllers, optimized physical components, and art.

  9. Optimization of structural parameters for spatial flexible redundant manipulators with maximum ratio of load to mass

    Institute of Scientific and Technical Information of China (English)

    ZHANG Xu-ping; YU Yue-qing

    2005-01-01

    Optimization of structural parameters aimed at improving the load carrying capacity of spatial flexible redundant manipulators is presented in this paper. In order to increase the ratio of load to mass of the robots, the cross-sectional parameters and the configurational parameters are first optimized separately, and then the cross-sectional and configurational parameters are optimized simultaneously. A numerical simulation of a 4R spatial manipulator is performed. The results show that the load capacity of the robots is greatly improved through the optimization strategies proposed in this paper.

  10. A Filled Function with Adjustable Parameters for Unconstrained Global Optimization

    Institute of Scientific and Technical Information of China (English)

    SHANG You-lin; LI Xiao-yan

    2004-01-01

    A filled function with adjustable parameters is suggested in this paper for finding a global minimum point of a general class of nonlinear programming problems with a bounded and closed domain. This function has two adjustable parameters. We discuss the properties of the proposed filled function and give conditions on the function and on the parameter values so that the constructed function has the desired properties of a traditional filled function.

  11. An automated optimization tool for high-dose-rate (HDR) prostate brachytherapy with divergent needle pattern

    Science.gov (United States)

    Borot de Battisti, M.; Maenhout, M.; de Senneville, B. Denis; Hautvast, G.; Binnekamp, D.; Lagendijk, J. J. W.; van Vulpen, M.; Moerland, M. A.

    2015-10-01

    Focal high-dose-rate (HDR) brachytherapy for prostate cancer has gained increasing interest as an alternative to whole gland therapy, as it may contribute to the reduction of treatment related toxicity. For focal treatment, optimal needle guidance and placement is warranted. This can be achieved under MR guidance. However, MR-guided needle placement is currently not possible due to space restrictions in the closed MR bore. To overcome this problem, an MR-compatible, single-divergent needle-implant robotic device is under development at the University Medical Centre, Utrecht: placed between the legs of the patient inside the MR bore, this robot will tap the needle in a divergent pattern from a single rotation point into the tissue. This rotation point is just beneath the perineal skin, giving access to the focal prostate tumor lesion. Currently, there is no commercially available treatment planning system which allows optimization of the dose distribution with such a needle arrangement. The aim of this work is to develop an automatic inverse dose planning optimization tool for focal HDR prostate brachytherapy with needle insertions in a divergent configuration. A complete optimizer workflow is proposed which includes the determination of (1) the position of the center of rotation, (2) the needle angulations and (3) the dwell times. Unlike most currently used optimizers, no prior selection or adjustment of input parameters such as minimum or maximum dose or weight coefficients for the treatment region and organs at risk is required. To test this optimizer, a planning study was performed on ten patients (treatment volumes ranged from 8.5 cm3 to 23.3 cm3) using 2-14 needle insertions. The total computation time of the optimizer workflow was below 20 min and a clinically acceptable plan was reached on average using only four needle insertions.

  12. Optimizing Soil Hydraulic Parameters in RZWQM2 Under Fallow Conditions

    Science.gov (United States)

    Effective estimation of soil hydraulic parameters is essential for predicting soil water dynamics and related biochemical processes in agricultural systems. However, high uncertainties in estimated parameter values limit a model’s skill for prediction and application. In this study, a global search ...

  13. Automated synthesis of both the topology and numerical parameters for seven patented optical lens systems using genetic programming

    Science.gov (United States)

    Jones, Lee W.; Al-Sakran, Sameer H.; Koza, John R.

    2005-08-01

    This paper describes how genetic programming was used as an automated invention machine to synthesize both the topology and numerical parameters for seven previously patented optical lens systems, including one aspherical system and one issued in the 21st-century. Two of the evolved optical lens systems infringe the claims of the patents and the others are novel solutions that satisfy the design goals stated in the patent. The automatic synthesis was done "from scratch"--that is, without starting from a pre-existing good design and without pre-specifying the number of lenses, the topological layout of the lenses, or the numerical parameters of the lenses. Genetic programming is a form of evolutionary computation used to automatically solve problems. It starts from a high-level statement of what needs to be done and progressively breeds a population of candidate individuals over many generations using the principle of Darwinian natural selection and genetic recombination. The paper describes how genetic programming created eyepieces that duplicated the functionality of seven previously patented lens systems. The seven designs were created in a substantially similar and routine way, suggesting that the use of genetic programming in the automated design of both the topology and numerical parameters for optical lens systems may have widespread utility.

  14. Optimal parameters for the FFA-Beddoes dynamic stall model

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerck, A.; Mert, M. [FFA, The Aeronautical Research Institute of Sweden, Bromma (Sweden); Madsen, H.A. [Risoe National Lab., Roskilde (Denmark)

    1999-03-01

    Unsteady aerodynamic effects, like dynamic stall, must be considered in the calculation of dynamic forces for wind turbines. Models incorporated in aero-elastic programs are of a semi-empirical nature, so the resulting aerodynamic forces depend on the values used for the semi-empirical parameters. In this paper a study of finding appropriate parameters to use with the Beddoes-Leishman model is discussed. Minimisation of the `tracking error` between results from 2D wind tunnel tests and simulation with the model is used to find optimum values for the parameters. The resulting optimum parameters show a large variation from case to case. Using these different sets of optimum parameters in the calculation of blade vibrations gives rise to quite different predictions of aerodynamic damping, which is discussed. (au)
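
    The parameter-fitting step described here (minimizing a tracking error between measured and simulated responses) can be sketched generically with SciPy's least-squares routine. The model below is a made-up first-order lag standing in for the Beddoes-Leishman model, and the data are synthetic; only the structure of the fit is meant to be illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(params, t, alpha):
    """Toy stand-in for a semi-empirical unsteady model:
    first-order lag of the quasi-steady lift with a gain and a time constant."""
    gain, tau = params
    cl = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        cl[i] = cl[i - 1] + dt / tau * (gain * alpha[i] - cl[i - 1])
    return cl

# Synthetic "wind tunnel" data generated with known parameters plus noise
t = np.linspace(0.0, 10.0, 400)
alpha = np.deg2rad(10.0 + 5.0 * np.sin(2.0 * np.pi * 0.5 * t))
rng = np.random.default_rng(1)
cl_measured = simulate((6.0, 0.8), t, alpha) + 0.02 * rng.standard_normal(t.size)

# Fit the model parameters by minimizing the tracking error
def residuals(params):
    return simulate(params, t, alpha) - cl_measured

fit = least_squares(residuals, x0=(4.0, 0.5), bounds=([0.1, 0.05], [20.0, 5.0]))
print("fitted gain and time constant:", fit.x)
```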

  15. SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dengwang; Wang, Jie [College of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China); Kapp, Daniel S.; Xing, Lei [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)

    2015-06-15

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation. The remaining 50 datasets were used as test images. In our study liver segmentation was formulated as an optimization process of an implicit function, in which the liver region was optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers from the panel data were segmented manually by physicians, and then we estimated the parameters of a GMM (Gaussian mixture model) and an MRF (Markov random field); a shape dictionary was built by utilizing the 3D liver shapes. 2) The outlines of chest and abdomen were located according to the rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization; furthermore, H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated within the local optimization and global optimization until it satisfied the stopping conditions (maximum iterations and changing rate). Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy on the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is

  16. SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization

    International Nuclear Information System (INIS)

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation. The remaining 50 datasets were used as test images. In our study liver segmentation was formulated as an optimization process of an implicit function, in which the liver region was optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers from the panel data were segmented manually by physicians, and then we estimated the parameters of a GMM (Gaussian mixture model) and an MRF (Markov random field); a shape dictionary was built by utilizing the 3D liver shapes. 2) The outlines of chest and abdomen were located according to the rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization; furthermore, H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated within the local optimization and global optimization until it satisfied the stopping conditions (maximum iterations and changing rate). Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy on the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is
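
    Step 2 of this pipeline (initializing the liver region from a Gaussian mixture model over intensities) can be illustrated with scikit-learn. The number of components and the synthetic intensity data below are assumptions made for the sketch; a real implementation would fit the GMM to intensities taken from the manually segmented panel data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic CT intensities (HU): background, liver-like soft tissue, bone-like tissue
rng = np.random.default_rng(0)
intensities = np.concatenate([
    rng.normal(-900, 40, 4000),   # air / background
    rng.normal(60, 15, 3000),     # soft tissue near typical liver values
    rng.normal(400, 80, 1000),    # bone
]).reshape(-1, 1)

# Fit a 3-component GMM to the intensity distribution
gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities)

# Label each pixel of a (fake) slice with its most likely component; the
# component whose mean is closest to typical liver HU initializes the liver mask
slice_hu = rng.normal(60, 30, size=(64, 64))
labels = gmm.predict(slice_hu.reshape(-1, 1)).reshape(slice_hu.shape)
liver_component = int(np.argmin(np.abs(gmm.means_.ravel() - 60.0)))
liver_mask = labels == liver_component
print("initial liver-mask fraction:", liver_mask.mean())
```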

  17. A Particle Swarm Optimization Algorithm for Optimal Operating Parameters of VMI Systems in a Two-Echelon Supply Chain

    Science.gov (United States)

    Sue-Ann, Goh; Ponnambalam, S. G.

    This paper focuses on the operational issues of a Two-echelon Single-Vendor-Multiple-Buyers Supply Chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. A mathematical model is formulated to determine the optimal sales quantity for each buyer in the TSVMBSC. From the optimal sales quantity, the optimal sales price can be obtained, which in turn determines the optimal channel profit and the contract price between the vendor and the buyers. All these parameters depend upon the understanding of the revenue sharing between the vendor and the buyers. A Particle Swarm Optimization (PSO) algorithm is proposed for this problem. Solutions obtained from PSO are compared with the best known results reported in the literature.
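
    For readers unfamiliar with PSO, the minimal NumPy sketch below shows the core update rule (velocities driven by personal-best and global-best attraction) on a generic continuous objective. The objective, swarm size and coefficients are arbitrary illustrative choices, not the supply-chain model from the paper.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a continuous minimization problem."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))           # positions
    v = np.zeros_like(x)                                        # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Example: minimize the sphere function in 5 dimensions
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)),
                     (np.full(5, -10.0), np.full(5, 10.0)))
print(best_x, best_f)
```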

  18. Optimization of multilayer neural network parameters for speaker recognition

    Science.gov (United States)

    Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka

    2016-05-01

    This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person in a known set of speakers, i.e. to determine whether the voice of an unknown speaker (the wanted person) belongs to a group of reference speakers from the voice database. One of the requirements was to develop a text-independent system, which means classifying the wanted person regardless of content and language. A multilayer neural network has been used for speaker identification in this research. An artificial neural network (ANN) requires setting parameters such as the activation function of the neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations and the number of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by these parameter settings, and different tasks require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings. The goal was to find parameters for the neural network with the highest precision and shortest validation time. The input data of the neural network are Mel-frequency cepstral coefficients (MFCC), which describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The data were split into training, testing and validation sets of 70, 15 and 15%. The result of the research described in this article is a parameter setting for the multilayer neural network for four speakers.
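
    A compressed illustration of this kind of parameter sweep is given below using scikit-learn's MLPClassifier. Synthetic feature vectors stand in for MFCC features, and the candidate settings (hidden-layer sizes, activation, learning rate) are arbitrary; the point is only how accuracy and fit time can be compared across settings.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic 13-dimensional "MFCC-like" features for 4 speakers
X, y = make_classification(n_samples=1200, n_features=13, n_informative=10,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

settings = [
    {"hidden_layer_sizes": (32,),    "activation": "logistic", "learning_rate_init": 0.01},
    {"hidden_layer_sizes": (64, 32), "activation": "relu",     "learning_rate_init": 0.001},
    {"hidden_layer_sizes": (128,),   "activation": "tanh",     "learning_rate_init": 0.005},
]

for cfg in settings:
    clf = MLPClassifier(max_iter=500, random_state=0, **cfg)
    t0 = time.time()
    clf.fit(X_train, y_train)
    acc = clf.score(X_test, y_test)
    print(cfg, "accuracy=%.3f" % acc, "fit time=%.2fs" % (time.time() - t0))
```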

  19. Computer-automated multi-disciplinary analysis and design optimization of internally cooled turbine blades

    Science.gov (United States)

    Martin, Thomas Joseph

    This dissertation presents the theoretical methodology, organizational strategy, conceptual demonstration and validation of a fully automated computer program for the multi-disciplinary analysis, inverse design and optimization of convectively cooled axial gas turbine blades and vanes. Parametric computer models of the three-dimensional cooled turbine blades and vanes were developed, including the automatic generation of discretized computational grids. Several new analysis programs were written and incorporated with existing computational tools to provide computer models of the engine cycle, aero-thermodynamics, heat conduction and thermofluid physics of the internally cooled turbine blades and vanes. A generalized information transfer protocol was developed to provide the automatic mapping of geometric and boundary condition data between the parametric design tool and the numerical analysis programs. A constrained hybrid optimization algorithm controlled the overall operation of the system and guided the multi-disciplinary internal turbine cooling design process towards the objectives and constraints of engine cycle performance, aerodynamic efficiency, cooling effectiveness and turbine blade and vane durability. Several boundary element computer programs were written to solve the steady-state non-linear heat conduction equation inside the internally cooled and thermal barrier-coated turbine blades and vanes. The boundary element method (BEM) did not require grid generation inside the internally cooled turbine blades and vanes, so the parametric model was very robust. Implicit differentiations of the BEM thermal and thermo-elastic analyses were done to compute design sensitivity derivatives faster and more accurately than via explicit finite differencing. A factor of three savings in computer processing time was realized for two-dimensional thermal optimization problems, and a factor of twenty was obtained for three-dimensional thermal optimization problems.

  20. Parameter design and optimization of tight-lattice rod bundles

    International Nuclear Information System (INIS)

    Thin rod bundles in a tight lattice are arranged in an equilateral triangular grid, so the proportion of fuel is large and the power density of the core is high. Based on an analysis of core performance, the ABV-6M reactor is taken as the example, and two objective functions, power density and coolant flow rate, are proposed for the optimization calculation. The rod diameter and pitch are optimized using a GA method. The results, which pass the safety checks, show that the tight lattice is effective for improving the power density and other performances of the reactor core. (author)

  1. Parameter Optimization of Linear Quadratic Controller Based on Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    LI Jimin; SHANG Chaoxuan; ZOU Minghu

    2007-01-01

    The selection of the weighting matrix in the design of the linear quadratic optimal controller is an important topic in control theory. In this paper, an approach based on a genetic algorithm is presented for selecting the weighting matrix for the optimal controller. The genetic algorithm is an adaptive heuristic search algorithm premised on the evolutionary ideas of natural selection and genetics. In this algorithm, the fitness function is used to evaluate individuals, and reproductive success varies with fitness. In the design of the linear quadratic optimal controller, the fitness function is related to the anticipated step response of the system. The controller designed by this approach not only meets the performance indexes of the linear quadratic controller, but also satisfies the anticipated step response of the closed-loop system. The method possesses a high calculating efficiency and provides technical support for the optimal controller in engineering applications. The simulation of a three-order single-input single-output (SISO) system has demonstrated the feasibility and validity of the approach.
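
    To make the idea concrete, here is a rough sketch (not the authors' implementation) of a small genetic algorithm choosing the diagonal of the LQ weighting matrix Q for an illustrative third-order plant. Fitness is a simple transient-response cost evaluated by forward-Euler simulation; the plant matrices, GA settings and cost weights are all assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative third-order SISO plant in state-space form
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])
R = np.array([[1.0]])

def response_cost(q_diag, t_end=10.0, dt=0.01):
    """LQR gain for a candidate diagonal Q, then score the closed-loop
    transient from a unit initial output error (regulation problem)."""
    P = solve_continuous_are(A, B, np.diag(q_diag), R)
    K = np.linalg.solve(R, B.T @ P)
    x = np.array([1.0, 0.0, 0.0])
    cost = 0.0
    for _ in range(int(t_end / dt)):
        u = float(-(K @ x)[0])
        x = x + dt * ((A @ x) + (B[:, 0] * u))        # forward-Euler simulation
        cost += dt * (float((C @ x)[0]) ** 2 + 1e-3 * u ** 2)
    return cost

# Small genetic algorithm over log10 of the Q diagonal entries
rng = np.random.default_rng(0)
pop = rng.uniform(-1.0, 3.0, size=(20, 3))            # log10(q_i) in [0.1, 1000]
for _ in range(30):
    fitness = np.array([response_cost(10.0 ** ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:10]]           # truncation selection
    children = parents[rng.integers(0, 10, 10)] + rng.normal(0.0, 0.2, (10, 3))
    pop = np.vstack([parents, children])              # keep elites, add mutated offspring

best = pop[np.argmin([response_cost(10.0 ** q) for q in pop])]
print("selected Q diagonal:", np.round(10.0 ** best, 2))
```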

  2. Air Compressor Driving with Synchronous Motors at Optimal Parameters

    Directory of Open Access Journals (Sweden)

    Iuliu Petrica

    2010-10-01

    In this paper a method is presented for optimal compensation of the reactive load by the synchronous motors driving the air compressors used in mining enterprises, taking into account that in this case the great majority of the equipment (compressors, pumps) generally works at a constant load.

  3. Certain optimal parameters of high-velocity Venturi ejection tubes

    Science.gov (United States)

    Stark, S. B.; Reznichenko, I. G.; Pavlenko, Y. P.

    1984-11-01

    The influence of the geometrical characteristics of centrifugal nozzles in high-velocity Venturi ejection tubes for atomizing liquid in gas cleaning plants is analyzed. An optimal value of the nozzle geometrical characteristic, which is a function of the degree of filling of the nozzle outlet opening by the liquid, is given, at which the throat length is independent of the water pressure before the nozzle.

  4. AN IMPROVED GENETIC ALGORITHM FOR SEARCHING OPTIMAL PARAMETERS IN n-DIMENSIONAL SPACE

    Institute of Scientific and Technical Information of China (English)

    Tang Bin; Hu Guangrui

    2002-01-01

    An improved genetic algorithm for searching optimal parameters in n-dimensional space is presented, which encodes movement direction and distance and searches from coarse to precise. The algorithm can realize global optimization and improve the search efficiency, and can be applied effectively in industrial optimization, data mining and pattern recognition.

  5. AN IMPROVED GENETIC ALGORITHM FOR SEARCHING OPTIMAL PARAMETERS IN n—DIMENSIONAL SPACE

    Institute of Scientific and Technical Information of China (English)

    Tang Bin; Hu Guangrui

    2002-01-01

    An improved genetic algorithm for searching optimal parameters in n-dimensional space is presented, which encodes movement direction and distance and searches from coarse to precise. The algorithm can realize global optimization and improve the search efficiency, and can be applied effectively in industrial optimization, data mining and pattern recognition.

  6. Parameter selection of support vector machine for function approximation based on chaos optimization

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The support vector machine (SVM) is a novel machine learning method, which has the ability to approximate nonlinear functions with arbitrary accuracy. Setting parameters well is very crucial for SVM learning results and generalization ability, and there is currently no systematic, general method for parameter selection. In this article, SVM parameter selection for function approximation is regarded as a compound optimization problem, and a mutative scale chaos optimization algorithm is employed to search for optimal parameter values. The chaos optimization algorithm is an effective way to find the global optimum, and the mutative scale chaos algorithm improves the search efficiency and accuracy. Several simulation examples show the sensitivity of the SVM parameters and demonstrate the superiority of the proposed method for nonlinear function approximation.
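
    As a loose analogue of this parameter search (using plain random search rather than the chaos optimization algorithm of the article), the sketch below tunes the C, epsilon and RBF-kernel gamma of a scikit-learn SVR for a one-dimensional function approximation task; the ranges and the target function are arbitrary.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Noisy samples of a nonlinear target function
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-3.0, 3.0, 200)).reshape(-1, 1)
y = np.sinc(X.ravel()) + 0.05 * rng.standard_normal(200)

best = None
for _ in range(100):  # random search over log-uniform parameter ranges
    params = {
        "C": 10.0 ** rng.uniform(-2, 3),
        "gamma": 10.0 ** rng.uniform(-3, 1),
        "epsilon": 10.0 ** rng.uniform(-3, -1),
    }
    score = cross_val_score(SVR(kernel="rbf", **params), X, y,
                            cv=5, scoring="neg_mean_squared_error").mean()
    if best is None or score > best[0]:
        best = (score, params)

print("best CV MSE: %.4f" % -best[0], "with", best[1])
```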

  7. Parameter Identification of Anaerobic Wastewater Treatment Bioprocesses Using Particle Swarm Optimization

    OpenAIRE

    Dorin Sendrescu

    2013-01-01

    This paper deals with the offline parameter identification for a class of wastewater treatment bioprocesses using particle swarm optimization (PSO) techniques. Particle swarm optimization is a relatively new heuristic method that has produced promising results for solving complex optimization problems. In this paper several variants of the PSO algorithm are used for parameter estimation of an anaerobic wastewater treatment process, which is a complex biotechnological system. The identification sc...

  8. Mechanical surface treatment of steel-Optimization parameters of regime

    Science.gov (United States)

    Laouar, L.; Hamadache, H.; Saad, S.; Bouchelaghem, A.; Mekhilef, S.

    2009-11-01

    Mechanical treatment by superficial plastic deformation is employed to finish mechanical part surfaces. It introduces structural modifications that give the base material new properties and a high physical and geometrical quality of the superficial layers. This study focuses on the application of a burnishing treatment (ball burnishing) on XC48 steel and the optimisation of the treatment regime parameters. Three important parameters were considered: burnishing force 'Py', burnishing feed 'f' and ball radius 'r'. An empirical model has been developed to illustrate the relationship between these parameters and the superficial layer characteristics, defined by surface roughness 'Ra' and superficial hardness 'Hv'. A program was developed to determine the optimum treatment regimes for each characteristic.

  9. Optimization design of resistance spot welding parameters of magnesium alloy

    Institute of Scientific and Technical Information of China (English)

    Lang Bo; Sun Daqian; Wu Qiong; Xuan Zhaozhi

    2008-01-01

    By means of a quadratic regression combination design process, regression equations for the nugget diameter and the tensile shear load of the spot welded joint were established, and the effects of the welding parameters on the nugget diameter and the tensile shear load were investigated. The results show that the effect of welding current on nugget diameter is the most evident, with higher welding current resulting in a bigger nugget diameter. Besides, the interaction effect of electrode force and welding current on tensile shear load is the most evident compared with others. The optimum welding parameters corresponding to the maximum tensile shear load were obtained by programming in Matlab: 4.7 kN electrode force, 28 kA welding current and 4 cycles welding time. Under the optimum welding parameters, a joint with no visible defects can be obtained, with a nugget diameter of 6.8 mm and a tensile shear load of 3256 N.
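
    The underlying procedure (fit a quadratic response surface to designed experiments, then search it for the parameter combination maximizing the response) can be sketched as below. The experimental data here are synthetic placeholders rather than the magnesium-alloy measurements, and the grid search stands in for the Matlab program mentioned in the abstract.

```python
import numpy as np
from itertools import product
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

def true_load(f, i, t):
    """Hidden 'true' response used only to generate placeholder experimental data."""
    return 2000 + 300*i - 5*i**2 + 150*f - 20*f**2 + 80*t - 10*t**2 + 4*f*i

# Placeholder designed experiments: [force kN, current kA, time cycles] -> shear load N
rng = np.random.default_rng(0)
X = np.array(list(product([3.5, 4.5, 5.5], [22, 26, 30], [3, 4, 5])), dtype=float)
y = np.array([true_load(*row) for row in X]) + rng.normal(0, 30, len(X))

# Quadratic (second-order) response surface
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

# Search a fine grid within the experimental ranges for the maximum predicted load
grid = np.array(list(product(np.linspace(3.5, 5.5, 21),
                             np.linspace(22, 30, 33),
                             np.linspace(3, 5, 9))))
pred = model.predict(poly.transform(grid))
best = grid[pred.argmax()]
print("predicted optimum (force, current, time):", np.round(best, 2),
      "load ~ %.0f N" % pred.max())
```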

  10. Application of optimal input synthesis to aircraft parameter identification

    Science.gov (United States)

    Gupta, N. K.; Hall, W. E., Jr.; Mehra, R. K.

    1976-01-01

    The Frequency Domain Input Synthesis procedure is used in identifying the stability and control derivatives of an aircraft. By using a frequency-domain approach, one can handle criteria that are not easily handled by the time-domain approaches. Numerical results are presented for optimal elevator deflections to estimate the longitudinal stability and control derivatives subject to root-mean square constraints on the input. The applicability of the steady state optimal inputs to finite duration flight testing is investigated. The steady state approximation of frequency-domain synthesis is good for data lengths greater than two time cycles for the short period mode of the aircraft longitudinal motions. Phase relationships between different frequency components become important for shorter data lengths. The frequency domain inputs are shown to be much better than the conventional doublet inputs.

  11. Optimization of electrical stimulation parameters for cardiac tissue engineering.

    Science.gov (United States)

    Tandon, Nina; Marsano, Anna; Maidhof, Robert; Wan, Leo; Park, Hyoungshin; Vunjak-Novakovic, Gordana

    2011-06-01

    In vitro application of pulsatile electrical stimulation to neonatal rat cardiomyocytes cultured on polymer scaffolds has been shown to improve the functional assembly of cells into contractile engineered cardiac tissues. However, to date, the conditions of electrical stimulation have not been optimized. We have systematically varied the electrode material, amplitude and frequency of stimulation to determine the conditions that are optimal for cardiac tissue engineering. Carbon electrodes, exhibiting the highest charge-injection capacity and producing cardiac tissues with the best structural and contractile properties, were thus used in tissue engineering studies. Engineered cardiac tissues stimulated at 3 V/cm amplitude and 3 Hz frequency had the highest tissue density, the highest concentrations of cardiac troponin-I and connexin-43 and the best-developed contractile behaviour. These findings contribute to defining bioreactor design specifications and electrical stimulation regime for cardiac tissue engineering.

  12. Optimization of control parameters for petroleum waste composting

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Composting is being widely employed in the treatment of petroleum waste. The purpose of this study was to find the optimum control parameters for petroleum waste in-vessel composting. Various physical and chemical parameters were monitored to evaluate their influence on the microbial communities present during composting. The CO2 evolution and the number of microorganisms were measured as indicators of composting activity. The results demonstrated that the optimum temperature, pH and moisture content were 56.5-59.5, 7.0-8.5 and 55%-60%, respectively. Under the optimum conditions, the removal efficiency of petroleum hydrocarbon reached 83.29% after 30 days of composting.

  13. Design of Digital Imaging System for Optimization of Control Parameters

    Institute of Scientific and Technical Information of China (English)

    SONG Yong; HAO Qun; YANG Guang; SUN Hong-wei

    2007-01-01

    The design of an experimental digital imaging system for the optimization of control parameters is discussed in detail. Signal processing in the digital CCD imaging system is first analyzed. Then real-time control of the CCD driver and the digital processing circuit, together with man-machine interaction, is achieved through the design of a digital CCD imaging module and a control module. Experimental results indicate that the image quality of the CCD experimental system responds well to changes in the control parameters. The system provides an important basis for improving image quality and the applicability of micro imaging systems in complex environments.

  14. Parameter estimation and optimal experimental design in flow reactors

    OpenAIRE

    Carraro, Thomas

    2005-01-01

    In this work we present numerical techniques, based on the finite element method, for the simulation of reactive flows in a chemical flow reactor as well as for the identification of the kinetics of the reactions using measurements of observable quantities. We present the case of a real experiment in which the reaction rate is estimated by means of concentration measurements. We introduce methods for optimal experimental design in the context of reactive flows modeled by par...

  15. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection.

    Science.gov (United States)

    Doshi, Jimit; Erus, Guray; Ou, Yangming; Resnick, Susan M; Gur, Ruben C; Gur, Raquel E; Satterthwaite, Theodore D; Furth, Susan; Davatzikos, Christos

    2016-02-15

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328

  16. GPU based Monte Carlo for PET image reconstruction: parameter optimization

    International Nuclear Information System (INIS)

    This paper presents the optimization of a fully Monte Carlo (MC) based iterative image reconstruction of Positron Emission Tomography (PET) measurements. With our MC reconstruction method all the physical effects in a PET system are taken into account, thus superior image quality is achieved in exchange for increased computational effort. The method is feasible because we utilize the enormous processing power of Graphical Processing Units (GPUs) to solve the inherently parallel problem of photon transport. The MC approach regards the simulated positron decays as samples in the mathematical sums required in the iterative reconstruction algorithm, so to complement the fast architecture, our optimization work focuses on the number of simulated positron decays required to obtain sufficient image quality. We have achieved significant results in determining the optimal number of samples for arbitrary measurement data, which allows the best image quality to be achieved with the least possible computational effort. Based on this research, recommendations can be given for effective partitioning of the computational effort across the iterations in limited-time reconstructions. (author)

  17. Design optimization on the front wheel orientation parameters of a vehicle

    Institute of Scientific and Technical Information of China (English)

    CHU Zhigang; DENG Zhaoxiang; HU Yumei; ZHU Ming

    2003-01-01

    A uniform optimization objective function for the front wheel orientation parameters of a vehicle is reported, which includes the tolerances between the practical and set values of the front wheel orientation parameters under full load, as well as the change of each parameter with front wheel fluctuation. A front suspension model is built for the optimization analysis based on multi-body dynamics (MD) theory. The original suspension is optimized with this model, and the variation law of each parameter with front wheel fluctuation is obtained. The results of a case study demonstrate that the front wheel orientation parameters of the optimized vehicle are reasonable under typical conditions and that the variation of each parameter stays within an ideal range as the wheel fluctuates within ±40 mm. In addition, the driving performance is greatly improved in road tests and practical use.

  18. A new ensemble algorithm of differential evolution and backtracking search optimization algorithm with adaptive control parameter for function optimization

    Directory of Open Access Journals (Sweden)

    Sukanta Nama

    2016-04-01

    Differential evolution (DE) is an effective and powerful approach that has been widely used in different environments. However, the performance of DE is sensitive to the choice of control parameters, so time-consuming parameter tuning is necessary to obtain optimal performance. The Backtracking Search Optimization Algorithm (BSA) is a new evolutionary algorithm (EA) for solving real-valued numerical optimization problems. An ensemble algorithm called E-BSADE is proposed which incorporates concepts from DE and BSA. The performance of E-BSADE is evaluated on several benchmark functions and is compared with basic DE, BSA and conventional DE mutation strategies.
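
    The sensitivity of DE to its control parameters can be demonstrated in a few lines with SciPy's built-in differential_evolution, here on the Rastrigin benchmark; the parameter values compared are arbitrary examples and the routine is SciPy's DE, not the E-BSADE ensemble proposed in the article.

```python
import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    """Classic multimodal benchmark; global minimum 0 at the origin."""
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 10

# Compare a few (mutation, recombination) control-parameter settings
for mutation, recombination in [(0.3, 0.2), (0.5, 0.7), (0.9, 0.9)]:
    result = differential_evolution(rastrigin, bounds,
                                    mutation=mutation,
                                    recombination=recombination,
                                    maxiter=300, seed=1, polish=False)
    print(f"F={mutation}, CR={recombination}: best value {result.fun:.3f}")
```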

  19. Optimal Parameter and Uncertainty Estimation of a Land Surface Model: Sensitivity to Parameter Ranges and Model Complexities

    Institute of Scientific and Technical Information of China (English)

    Youlong XIA; Zong-Liang YANG; Paul L. STOFFA; Mrinal K. SEN

    2005-01-01

    Most previous land-surface model calibration studies have defined global ranges for their parameters when searching for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges, as well as of model complexity, on the calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by applying Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic type structure. The BSI is an uncertainty estimation technique based on Bayes theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted and 50 000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes at most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of the simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. The choice of parameter ranges and model complexity has significant impacts on the frequency distribution of parameters, the marginal posterior probability density functions, and the estimates of uncertainty of the simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has the more significant impact on parameter and uncertainty estimation.

  20. Optimal Two Parameter Bounds for the Seiffert Mean

    Directory of Open Access Journals (Sweden)

    Hui Sun

    2013-01-01

    We obtain sharp bounds for the Seiffert mean in terms of a two-parameter family of means. Our results generalize and extend the recent bounds presented in the Journal of Inequalities and Applications (2012) and Abstract and Applied Analysis (2012).

  1. An Effect and Analysis of Parameter on Ant Colony Optimization for Solving Travelling Salesman Problem

    OpenAIRE

    Km. Shweta; Alka Singh

    2013-01-01

    Ant Colony Optimization (ACO) has proved suitable for solving a wide range of combinatorial (NP-hard) optimization problems such as the Travelling Salesman Problem (TSP). The first step of an ACO algorithm is to set the parameters that drive the algorithm, and these parameters have an important impact on the performance of the ant colony algorithm. The basic parameters used in ACO algorithms are: the relative importance (or weight) of pheromone, the relative importance of the heuristic value, the initial pheromone v...
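
    For orientation, a minimal ACO for a random TSP instance is sketched below, exposing the usual control parameters alpha (pheromone weight), beta (heuristic weight) and rho (evaporation rate); the instance, colony size and parameter values are arbitrary choices, not those studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
coords = rng.random((n, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=2) + np.eye(n)  # avoid /0

def tour_length(tour):
    return sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))

def aco(alpha=1.0, beta=3.0, rho=0.5, n_ants=20, n_iter=100):
    pheromone = np.ones((n, n))
    heuristic = 1.0 / dist
    best_tour, best_len = None, np.inf
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [int(rng.integers(n))]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                cur = tour[-1]
                cand = np.array(sorted(unvisited))
                w = pheromone[cur, cand] ** alpha * heuristic[cur, cand] ** beta
                tour.append(int(rng.choice(cand, p=w / w.sum())))
                unvisited.remove(tour[-1])
            tours.append(tour)
        pheromone *= (1.0 - rho)                      # evaporation
        for tour in tours:
            length = tour_length(tour)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(n):                        # deposit proportional to tour quality
                a, b = tour[i], tour[(i + 1) % n]
                pheromone[a, b] += 1.0 / length
                pheromone[b, a] += 1.0 / length
    return best_tour, best_len

for alpha, beta, rho in [(0.5, 2.0, 0.3), (1.0, 3.0, 0.5), (2.0, 1.0, 0.8)]:
    _, length = aco(alpha, beta, rho)
    print(f"alpha={alpha}, beta={beta}, rho={rho}: best length {length:.3f}")
```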

  2. The ESO-LV project - Automated parameter extraction for 16000 ESO/Uppsala galaxies

    NARCIS (Netherlands)

    Lauberts, Andris; Valentijn, Edwin A.

    1987-01-01

    A program to extract photometric and morphological parameters of the galaxies in the ESO/Uppsala survey (Lauberts and Valentijn, 1982) is discussed. The completeness and accuracy of the survey are evaluated and compared with other surveys. The parameters obtained in the program are listed.

  3. Air conditioning with methane: Efficiency and economics optimization parameters

    International Nuclear Information System (INIS)

    This paper presents a method for evaluating the efficiency and economics of methane-fired cooling systems. The focus is on direct-flame two-stage absorption systems and alternative engine-driven compressor sets. Comparisons are made with conventional vapour compression plants powered by electricity supplied by the national grid. A first and second law thermodynamic analysis is made in which fuel use coefficients and exergy yields are determined. The economic analysis establishes annual energy savings, unit cooling energy production costs, payback periods and economics/efficiency optimization curves useful for preliminary feasibility studies.

  4. Optimization of process parameters for osmotic dehydration of papaya cubes

    OpenAIRE

    S.K. Jain; R. C. Verma; Murdia, L. K.; Jain, H. K.; Sharma, G. P.

    2010-01-01

    Process temperature (30, 40 and 50 °C), syrup concentration (50, 60 and 70 °Brix) and process time (4, 5 and 6 h) for osmotic dehydration of papaya (Carica papaya) cubes were optimized for maximum water loss and optimum sugar gain by using response surface methodology. The peeled and pre-processed papaya cubes of 1 cm size were immersed in sugar syrup in a constant temperature water bath with a syrup to papaya cubes ratio of 4:1 (w/w). The cubes were removed from the bath at pre-decided times, r...

  5. Optimization of Process Parameters of Tool Wear in Turning Operation

    Directory of Open Access Journals (Sweden)

    Manik Barman

    2015-04-01

    Tool wear is of great concern in the machining industries since it affects the surface quality, dimensional accuracy and production cost of materials and components. In the present study twenty-seven experiments were conducted as per a 3-parameter, 3-level full factorial design for the turning of a mild steel specimen with a high speed steel (HSS) cutting tool. An experimental investigation of cutting tool wear and a mathematical model for tool wear estimation are reported in this paper, where the model was simulated by computer programming. It has been found that this model is capable of estimating the wear rate of the cutting tool and provides an optimum set of process parameters for minimum tool wear.

  6. Optimal Parameter Tuning in a Predictive Nonlinear Control Method for a Mobile Robot

    Directory of Open Access Journals (Sweden)

    D. Hazry

    2006-01-01

    This study contributes a new optimal parameter tuning in a predictive nonlinear control method for stable straight-line trajectory tracking with a non-holonomic mobile robot. In this method, the focus lies in finding the optimal parameter estimates and predicting the path that the mobile robot will follow for a stable straight-line trajectory tracking system. The stability control contains three parameters: 1) a deflection parameter for the traveling direction of the mobile robot, 2) a deflection parameter for the distance across the traveling direction of the mobile robot, and 3) a deflection parameter for the steering angle of the mobile robot. Two hundred and seventy-three experiments were performed and the results have been analyzed and described herewith. It is found that by using the new optimal parameter tuning in a predictive nonlinear control method derived from an extension of the kinematics model, the movement of the mobile robot is stabilized and adheres to the reference posture.

  7. On optimal detection and estimation of the FCN parameters

    Science.gov (United States)

    Yatskiv, Y.

    2009-09-01

    A statistical approach for the detection and estimation of parameters of short-term quasi-periodic processes was used in order to investigate the Free Core Nutation (FCN) signal in the Celestial Pole Offset (CPO). The results show that this signal is very unstable and that it disappeared in the year 2000. The amplitude of the oscillation with a period of about 435 days is larger for dX than for dY.

  8. Relationships among various parameters for decision tree optimization

    KAUST Repository

    Hussain, Shahid

    2014-01-14

    In this chapter, we study, in detail, the relationships between various pairs of cost functions and between uncertainty measures and cost functions for decision tree optimization. We provide new tools (algorithms) to compute relationship functions, as well as experimental results on decision tables acquired from the UCI ML Repository. The algorithms presented in this paper have already been implemented and are now a part of Dagger, a software system for the construction and optimization of decision trees and decision rules. The main results presented in this chapter deal with two types of algorithms for computing relationships. First, we discuss the case where we construct approximate decision trees and are interested in relationships between a certain cost function, such as the depth or number of nodes of a decision tree, and an uncertainty measure, such as the misclassification error (accuracy) of the decision tree. Secondly, relationships between two different cost functions are discussed, for example, the number of misclassifications of a decision tree versus the number of nodes in the decision tree. The results of experiments presented in the chapter provide further insight. © 2014 Springer International Publishing Switzerland.

  9. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 (United States); Chen, Ken Chung [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Stomatology, National Cheng Kung University Medical College and Hospital, Tainan, Taiwan 70403 (China); Shen, Steve G. F.; Yan, Jin [Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People' s Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Lee, Philip K. M.; Chow, Ben [Hong Kong Dental Implant and Maxillofacial Centre, Hong Kong, China 999077 (China); Liu, Nancy X. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China 100050 (China); Xia, James J. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People' s Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul, 136701 (Korea, Republic of)

    2014-04-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of the CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject, and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparison with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  10. Moving Toward an Optimal and Automated Geospatial Network for CCUS Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Hoover, Brendan Arthur [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-05

    Modifications in the global climate are being driven by the anthropogenic release of greenhouse gases (GHG) including carbon dioxide (CO2) (Middleton et al. 2014). CO2 emissions have, for example, been directly linked to an increase in total global temperature (Seneviratne et al. 2016). Strategies that limit CO2 emissions—like CO2 capture, utilization, and storage (CCUS) technology—can greatly reduce emissions by capturing CO2 before it is released to the atmosphere. However, to date CCUS technology has not been developed at a large commercial scale despite several promising high profile demonstration projects (Middleton et al. 2015). Current CCUS research has often focused on capturing CO2 emissions from coal-fired power plants, but recent research at Los Alamos National Laboratory (LANL) suggests focusing CCUS CO2 capture research upon industrial sources might better encourage CCUS deployment. To further promote industrial CCUS deployment, this project builds off current LANL research by continuing the development of a software tool called SimCCS, which estimates a regional system of transport to inject CO2 into sedimentary basins. The goal of SimCCS, which was first developed by Middleton and Bielicki (2009), is to output an automated and optimal geospatial industrial CCUS pipeline that accounts for industrial source and sink locations by estimating a Delaunay triangle network which also minimizes topographic and social costs (Middleton and Bielicki 2009). Current development of SimCCS is focused on creating a new version that accounts for spatial arrangements that were not available in the previous version. This project specifically addresses the issue of non-unique Delaunay triangles by adding additional triangles to the network, which can affect how the CCUS network is calculated.
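
    The SimCCS abstract gives no implementation details; as a rough illustration of the Delaunay-based candidate network it describes, the sketch below builds an edge set from source/sink coordinates with SciPy and weights each edge by straight-line length times an assumed unit cost. The real routing uses topographic and social cost surfaces, which are not modeled here, and the coordinates and cost value are made up.

```python
import numpy as np
from scipy.spatial import Delaunay

def candidate_network(coords, cost_per_km):
    """Candidate CCUS pipeline edges from source/sink coordinates: the edges of the
    Delaunay triangulation, weighted by straight-line length times a unit cost."""
    tri = Delaunay(coords)
    edges = set()
    for simplex in tri.simplices:                        # each simplex is a triangle (3 vertex ids)
        for i in range(3):
            a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
            edges.add((a, b))
    return {(a, b): cost_per_km * np.linalg.norm(coords[a] - coords[b]) for a, b in edges}

# usage sketch with made-up source/sink coordinates in km
# coords = np.array([[0.0, 0.0], [10.0, 2.0], [4.0, 8.0], [7.0, -3.0]])
# network = candidate_network(coords, cost_per_km=1.0e6)
```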

  11. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    International Nuclear Information System (INIS)

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  12. Optimization of Nano-Process Deposition Parameters Based on Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Norlina Mohd Sabri

    2016-06-01

    Full Text Available This research focuses on the radio frequency (RF) magnetron sputtering process, a physical vapor deposition technique which is widely used in thin film production. This process requires an optimized combination of deposition parameters in order to obtain the desirable thin film. The conventional method for optimizing the deposition parameters has been reported to be costly and time-consuming due to its trial-and-error nature. Thus, the gravitational search algorithm (GSA) technique was proposed to solve this nano-process parameter optimization problem. In this research, the optimized parameter combination was expected to produce the desirable electrical and optical properties of the thin film. The performance of GSA in this research was compared with that of Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Artificial Immune System (AIS) and Ant Colony Optimization (ACO). Based on the overall results, the GSA-optimized parameter combination generated the best electrical and acceptable optical properties of the thin film compared to the others. This computational experiment is expected to overcome the problem of having to conduct repetitive laboratory experiments to obtain the most optimized parameter combination. Based on this initial experiment, the adaptation of GSA to this problem could offer a more efficient and productive way of depositing quality thin film in the fabrication process.
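
    The abstract does not include the algorithm itself; the sketch below is a generic, minimal gravitational search algorithm for a box-bounded minimization problem (agent masses derived from fitness, a decaying gravitational constant, pairwise attractive forces). The sputtering objective, bounds and all settings are placeholders, and the Kbest reduction used in the original GSA paper is omitted for brevity.

```python
import numpy as np

def gsa(objective, bounds, n_agents=20, n_iter=100, g0=100.0, alpha=20.0, seed=0):
    """Minimal gravitational search algorithm for box-bounded minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_agents, len(lo)))
    v = np.zeros_like(x)
    best_x, best_f = None, np.inf
    for t in range(n_iter):
        f = np.array([objective(xi) for xi in x])
        if f.min() < best_f:
            best_f, best_x = f.min(), x[f.argmin()].copy()
        worst, best = f.max(), f.min()
        m = (f - worst) / (best - worst + 1e-12)          # best agent gets the largest mass
        m = m / (m.sum() + 1e-12)
        g = g0 * np.exp(-alpha * t / n_iter)              # decaying gravitational constant
        acc = np.zeros_like(x)
        for i in range(n_agents):
            for j in range(n_agents):
                if i != j:
                    diff = x[j] - x[i]
                    dist = np.linalg.norm(diff) + 1e-12
                    acc[i] += rng.random() * g * m[j] * diff / dist
        v = rng.random(x.shape) * v + acc
        x = np.clip(x + v, lo, hi)
    return best_x, best_f

# usage sketch: the deposition objective (deviation of predicted film properties from
# the target, as a function of power, pressure, temperature, ...) must be supplied
# best_params, best_cost = gsa(lambda p: np.sum(p ** 2), np.array([[-5.0, 5.0]] * 4))
```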

  13. The Study of the Optimal Parameter Settings in a Hospital Supply Chain System in Taiwan

    Directory of Open Access Journals (Sweden)

    Hung-Chang Liao

    2014-01-01

    Full Text Available This study proposed the optimal parameter settings for the hospital supply chain system (HSCS) when either the total system cost (TSC) or patient safety level (PSL) (or both simultaneously) was considered as the measure of the HSCS’s performance. Four parameters were considered in the HSCS: safety stock, maximum inventory level, transportation capacity, and the reliability of the HSCS. A full-factor experimental design was used to simulate an HSCS for the purpose of collecting data. The response surface method (RSM) was used to construct the regression model, and a genetic algorithm (GA) was applied to obtain the optimal parameter settings for the HSCS. The results show that the best method of obtaining the optimal parameter settings for the HSCS is the simultaneous consideration of both the TSC and the PSL to measure performance. Also, the results of sensitivity analysis based on the optimal parameter settings were used to derive adjustable strategies for the decision-makers.

  14. The study of the optimal parameter settings in a hospital supply chain system in Taiwan.

    Science.gov (United States)

    Liao, Hung-Chang; Chen, Meng-Hao; Wang, Ya-huei

    2014-01-01

    This study proposed the optimal parameter settings for the hospital supply chain system (HSCS) when either the total system cost (TSC) or patient safety level (PSL) (or both simultaneously) was considered as the measure of the HSCS's performance. Four parameters were considered in the HSCS: safety stock, maximum inventory level, transportation capacity, and the reliability of the HSCS. A full-factor experimental design was used to simulate an HSCS for the purpose of collecting data. The response surface method (RSM) was used to construct the regression model, and a genetic algorithm (GA) was applied to obtain the optimal parameter settings for the HSCS. The results show that the best method of obtaining the optimal parameter settings for the HSCS is the simultaneous consideration of both the TSC and the PSL to measure performance. Also, the results of sensitivity analysis based on the optimal parameter settings were used to derive adjustable strategies for the decision-makers. PMID:25250397

  15. Automated Modal Parameter Estimation for Operational Modal Analysis of Large Systems

    DEFF Research Database (Denmark)

    Andersen, Palle; Brincker, Rune; Goursat, Maurice;

    2007-01-01

    In this paper the problems of automatic modal parameter extraction and of handling the large amounts of data to be processed are considered. Two different approaches for obtaining the modal parameters automatically using OMA are presented: the Frequency Domain Decomposition (FDD) technique and a correlation-driven Stochastic Subspace Identification (SSI) technique. Special attention is given to the problem of data reduction, where many sensors are available. Finally, the techniques are demonstrated on real data.

  16. Optimal measurement locations for parameter estimation of nonlinear distributed parameter systems

    Directory of Open Access Journals (Sweden)

    J. E. Alaña

    2010-12-01

    Full Text Available A sensor placement approach for the purpose of accurately estimating unknown parameters of a distributed parameter system is discussed. The idea is to convert the sensor location problem into a classical experimental design. The technique consists of analysing the extrema of the sensitivity coefficients derived from the system and their corresponding spatial positions. This information is used to formulate an efficient computational optimum experiment design on discrete domains. The scheme studied is verified by a numerical example regarding the chemical reaction in a tubular reactor for two possible scenarios: stable and unstable operating conditions. The resulting approach is easy to implement and good estimates for the parameters of the system are obtained. This study shows that the measurement location plays an essential role in the parameter estimation procedure.

  17. Automated property optimization via ab initio O(N) elongation method: Application to (hyper-)polarizability in DNA.

    Science.gov (United States)

    Orimoto, Yuuichi; Aoki, Yuriko

    2016-07-14

    An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between "choose-maximum" (choose a base pair giving the maximum β for each step) and "choose-minimum" (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer aided material design method as long as the numerical limitation of the FF method is taken into account. PMID:27421397

  18. Automated property optimization via ab initio O(N) elongation method: Application to (hyper-)polarizability in DNA

    Science.gov (United States)

    Orimoto, Yuuichi; Aoki, Yuriko

    2016-07-01

    An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between "choose-maximum" (choose a base pair giving the maximum β for each step) and "choose-minimum" (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer aided material design method as long as the numerical limitation of the FF method is taken into account.

  19. Optimization of Experimental Model Parameter Identification for Energy Storage Systems

    Directory of Open Access Journals (Sweden)

    Rosario Morello

    2013-09-01

    Full Text Available The smart grid approach is envisioned to take advantage of all available modern technologies in transforming the current power system to provide benefits to all stakeholders in the fields of efficient energy utilisation and of wide integration of renewable sources. Energy storage systems could help to solve some issues that stem from renewable energy usage in terms of stabilizing the intermittent energy production, power quality and power peak mitigation. With the integration of energy storage systems into the smart grids, their accurate modeling becomes a necessity, in order to gain robust real-time control on the network, in terms of stability and energy supply forecasting. In this framework, this paper proposes a procedure to identify the values of the battery model parameters in order to best fit experimental data and integrate it, along with models of energy sources and electrical loads, in a complete framework which represents a real time smart grid management system. The proposed method is based on a hybrid optimisation technique, which makes combined use of a stochastic and a deterministic algorithm, with low computational burden and can therefore be repeated over time in order to account for parameter variations due to the battery’s age and usage.
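
    The battery model and the exact stochastic/deterministic pair are not specified in the abstract; the sketch below illustrates the hybrid idea on an assumed first-order RC discharge model, using coarse random sampling as the stochastic stage and SciPy's bounded least-squares as the deterministic refinement. Parameter names, bounds and the synthetic data are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical first-order RC discharge model (not the paper's model, which is not given):
# v(t) = ocv - I*r0 - I*r1*(1 - exp(-t/tau)) for a constant discharge current I.
def model(params, t, current):
    ocv, r0, r1, tau = params
    return ocv - current * r0 - current * r1 * (1.0 - np.exp(-t / tau))

def residuals(params, t, current, v_meas):
    return model(params, t, current) - v_meas

def identify(t, current, v_meas, lo, hi, n_random=500, seed=1):
    rng = np.random.default_rng(seed)
    # stochastic stage: coarse random sampling of the parameter box
    cand = rng.uniform(lo, hi, size=(n_random, len(lo)))
    sse = [np.sum(residuals(p, t, current, v_meas) ** 2) for p in cand]
    p0 = cand[int(np.argmin(sse))]
    # deterministic stage: local least-squares refinement from the best sample
    return least_squares(residuals, p0, bounds=(lo, hi), args=(t, current, v_meas)).x

# synthetic demo data
t = np.linspace(0.0, 600.0, 120)
true = np.array([3.7, 0.05, 0.02, 90.0])
v_meas = model(true, t, 2.0) + 0.002 * np.random.default_rng(0).standard_normal(t.size)
lo = np.array([3.0, 0.001, 0.001, 10.0])
hi = np.array([4.2, 0.200, 0.200, 300.0])
print(identify(t, 2.0, v_meas, lo, hi))
```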

  20. Hybridization of Response Surface Methodology and Genetic Algorithm optimization for CO2 laser cutting parameter on AA6061 material

    Directory of Open Access Journals (Sweden)

    A.Parthiban

    2014-03-01

    Full Text Available Investigation of laser cutting parameters on aluminium alloy (AA6061) is important due to its high reflectivity and thermal conductivity. Aluminium alloy is a widely used material in the aeronautical and automation industries owing to its inherent properties. The main problems during laser cutting are the formation of a recast layer and the influence of laser beam incidence on the cutting quality, characterized by the kerf dimensions. The relationships between the laser cutting parameters, such as laser power, cutting speed, gas pressure and focal position, and the kerf dimensions therefore play an important role in the laser cutting operation. This work uses response surface methodology (RSM) to establish an empirical relationship between the dependent and independent variables, and shows that laser power, cutting speed, gas pressure and focal position have significant effects on the kerf dimensions. The development of an empirical model and the selection of the best parameters are important for manufacturing industries; hence this work develops a statistical model with RSM and optimizes the cutting parameters with a genetic algorithm (GA).
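
    As a rough sketch of the RSM-plus-GA workflow described above (not the paper's actual model or data), the code below fits a second-order response surface to experimental kerf measurements with scikit-learn and then minimizes the fitted surface with a simple real-coded genetic algorithm; the array names, bounds and GA settings are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

def fit_response_surface(X, y):
    """Second-order RSM model: X holds the experimental settings
    [power, speed, pressure, focal_position]; y holds the measured kerf width."""
    poly = PolynomialFeatures(degree=2)
    reg = LinearRegression().fit(poly.fit_transform(X), y)
    return lambda x: float(reg.predict(poly.transform(np.atleast_2d(x)))[0])

def ga_minimize(surface, bounds, pop=40, gens=80, seed=0):
    """Simple real-coded GA (blend crossover + Gaussian mutation) over the fitted surface."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    P = rng.uniform(lo, hi, (pop, len(lo)))
    for _ in range(gens):
        fit = np.array([surface(p) for p in P])
        parents = P[np.argsort(fit)[: pop // 2]]          # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random(len(lo))
            child = w * a + (1.0 - w) * b                  # blend crossover
            child += rng.normal(0.0, 0.05 * (hi - lo))     # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        P = np.vstack([parents, children])
    fit = np.array([surface(p) for p in P])
    return P[fit.argmin()], fit.min()

# usage sketch (X_exp, y_kerf are assumed experimental arrays; bounds are illustrative)
# surface = fit_response_surface(X_exp, y_kerf)
# best_setting, predicted_kerf = ga_minimize(surface, np.array([[2.0, 4.0], [1.0, 5.0], [0.5, 2.5], [-2.0, 2.0]]))
```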

  1. Parameter Identification of Anaerobic Wastewater Treatment Bioprocesses Using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Dorin Sendrescu

    2013-01-01

    Full Text Available This paper deals with the offline parameter identification of a class of wastewater treatment bioprocesses using particle swarm optimization (PSO) techniques. Particle swarm optimization is a relatively new heuristic method that has produced promising results for solving complex optimization problems. In this paper, some variants of the PSO algorithm are used for parameter estimation of an anaerobic wastewater treatment process, which is a complex biotechnological system. The identification scheme is based on a high-dimensional multimodal numerical optimization problem. The performance of the method is analyzed by numerical simulations.
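
    A minimal global-best PSO for box-bounded parameter estimation, of the kind the abstract describes; the misfit function (model output versus measured data) is left to the user, and the inertia and acceleration coefficients are common textbook defaults rather than the paper's settings. The simulate() name in the usage comment is hypothetical.

```python
import numpy as np

def pso(cost, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO over a box-bounded parameter space (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# usage sketch: minimize the misfit between simulated and measured concentrations,
# where simulate() would run the anaerobic digestion model (not shown here)
# theta, err = pso(lambda p: np.sum((simulate(p) - measured) ** 2), np.array([[0.0, 1.0]] * 6))
```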

  2. Optimal choice of trapezoidal shaping parameters in digital nuclear spectrometer system

    International Nuclear Information System (INIS)

    The trapezoidal shaping method is widely applied to pulse amplitude extraction in digital nuclear spectrometer systems; optimal selection of the shaping parameters can improve the energy resolution and pulse counting rate. In this paper, the optimal selection of the trapezoidal shaping parameters is studied from the viewpoints of noise characteristics, ballistic deficit compensation and pulse pile-up. According to the theoretical analysis and experimental verification, the optimal trapezoid is close to a triangle, with a longer rise time and a shorter flat-top width. (authors)
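
    The abstract does not include the filter recursion; the sketch below is the widely used Jordanov-style recursive trapezoidal shaper for exponentially decaying pulses, with the rise time, flat-top width and decay constant exposed as the parameters one would tune when balancing noise, ballistic deficit and pile-up. The absolute gain is left to calibration.

```python
import numpy as np

def trapezoidal_shaper(v, rise, flat, decay_tau):
    """Recursive trapezoidal shaper for exponentially decaying pulses.

    v         -- sampled detector signal (1-D array)
    rise      -- rise time k, in samples
    flat      -- flat-top width, in samples
    decay_tau -- decay constant of the input pulses, in samples (pole-zero term)
    The output flat-top height is proportional to the pulse amplitude; the absolute
    gain should be calibrated against known peaks.
    """
    k, l = rise, rise + flat
    m = 1.0 / (np.exp(1.0 / decay_tau) - 1.0)
    v = np.asarray(v, dtype=float)
    out = np.zeros_like(v)
    p = acc = 0.0
    for n in range(v.size):
        d = (v[n]
             - (v[n - k] if n >= k else 0.0)
             - (v[n - l] if n >= l else 0.0)
             + (v[n - k - l] if n >= k + l else 0.0))
        p += d                    # first accumulator
        acc += p + m * d          # second accumulator with decay compensation
        out[n] = acc
    return out
```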

  3. Measuring Digital PCR Quality: Performance Parameters and Their Optimization.

    Science.gov (United States)

    Lievens, A; Jacchia, S; Kagkli, D; Savini, C; Querci, M

    2016-01-01

    Digital PCR is rapidly being adopted in the field of DNA-based food analysis. The direct, absolute quantification it offers makes it an attractive technology for routine analysis of food and feed samples for their composition, possible GMO content, and compliance with labelling requirements. However, assessing the performance of dPCR assays is not yet well established. This article introduces three straightforward parameters based on statistical principles that allow users to evaluate if their assays are robust. In addition, we present post-run evaluation criteria to check if quantification was accurate. Finally, we evaluate the usefulness of Poisson confidence intervals and present an alternative strategy to better capture the variability in the analytical chain.
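
    The article's specific performance parameters are not reproduced here; as background for the quantification step it discusses, the sketch below applies the standard Poisson relation lambda = -ln(1 - p) to partition counts and propagates a binomial (Wilson) confidence interval on the positive fraction using SciPy. The partition counts and volume in the example are made up.

```python
import numpy as np
from scipy import stats

def dpcr_copies_per_ul(positives, total, partition_volume_ul, conf=0.95):
    """Poisson estimate of target concentration from digital PCR partition counts.

    lambda = -ln(1 - p) is the mean number of copies per partition, where p is the
    fraction of positive partitions; a Wilson binomial CI on p is propagated to lambda.
    Saturated plates (positives == total) are not handled.
    """
    p_hat = positives / total
    lam = -np.log1p(-p_hat)
    ci = stats.binomtest(positives, total).proportion_ci(confidence_level=conf, method="wilson")
    lam_lo, lam_hi = -np.log1p(-ci.low), -np.log1p(-ci.high)
    scale = 1.0 / partition_volume_ul
    return lam * scale, (lam_lo * scale, lam_hi * scale)

# example: 4500 positive partitions out of 20000 at an assumed 0.85 nL per partition
# conc, (lo, hi) = dpcr_copies_per_ul(4500, 20000, 0.85e-3)
```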

  4. Measuring Digital PCR Quality: Performance Parameters and Their Optimization.

    Directory of Open Access Journals (Sweden)

    A Lievens

    Full Text Available Digital PCR is rapidly being adopted in the field of DNA-based food analysis. The direct, absolute quantification it offers makes it an attractive technology for routine analysis of food and feed samples for their composition, possible GMO content, and compliance with labelling requirements. However, assessing the performance of dPCR assays is not yet well established. This article introduces three straightforward parameters based on statistical principles that allow users to evaluate if their assays are robust. In addition, we present post-run evaluation criteria to check if quantification was accurate. Finally, we evaluate the usefulness of Poisson confidence intervals and present an alternative strategy to better capture the variability in the analytical chain.

  5. Measuring Digital PCR Quality: Performance Parameters and Their Optimization

    Science.gov (United States)

    Lievens, A.; Jacchia, S.; Kagkli, D.; Savini, C.; Querci, M.

    2016-01-01

    Digital PCR is rapidly being adopted in the field of DNA-based food analysis. The direct, absolute quantification it offers makes it an attractive technology for routine analysis of food and feed samples for their composition, possible GMO content, and compliance with labelling requirements. However, assessing the performance of dPCR assays is not yet well established. This article introduces three straightforward parameters based on statistical principles that allow users to evaluate if their assays are robust. In addition, we present post-run evaluation criteria to check if quantification was accurate. Finally, we evaluate the usefulness of Poisson confidence intervals and present an alternative strategy to better capture the variability in the analytical chain. PMID:27149415

  6. Measuring Digital PCR Quality: Performance Parameters and Their Optimization.

    Science.gov (United States)

    Lievens, A; Jacchia, S; Kagkli, D; Savini, C; Querci, M

    2016-01-01

    Digital PCR is rapidly being adopted in the field of DNA-based food analysis. The direct, absolute quantification it offers makes it an attractive technology for routine analysis of food and feed samples for their composition, possible GMO content, and compliance with labelling requirements. However, assessing the performance of dPCR assays is not yet well established. This article introduces three straightforward parameters based on statistical principles that allow users to evaluate if their assays are robust. In addition, we present post-run evaluation criteria to check if quantification was accurate. Finally, we evaluate the usefulness of Poisson confidence intervals and present an alternative strategy to better capture the variability in the analytical chain. PMID:27149415

  7. Optimal FES parameters based on mechanomyographic efficiency index.

    Science.gov (United States)

    Krueger-Beck, Eddy; Scheeren, Eduardo M; Nogueira-Neto, Guilherme N; Button, Vera Lucia S N; Nohama, Percy

    2010-01-01

    Functional electrical stimulation (FES) can artificially elicit movements in spinal cord injured (SCI) subjects. FES control strategies involve monitoring muscle features and setting FES profiles so as to postpone the onset of muscle fatigue or nerve cell adaptation. Mechanomyography (MMG) sensors register the lateral oscillations of contracting muscles. This paper presents an MMG efficiency index (EI) that may indicate the most efficient FES electrical parameters to control functional movements. Ten healthy and three SCI volunteers participated in the study. Four FES profiles with two FES sessions were applied, with a 15 min rest interval in between. MMG RMS and median frequency were inserted into the EI equation. EI increased along the test. The FES profile set to 1 kHz pulse frequency, 200 µs active pulse duration and 50 Hz burst frequency was the most efficient.

  8. Enhancing parameter precision of optimal quantum estimation by quantum screening

    Science.gov (United States)

    Jiang, Huang; You-Neng, Guo; Qin, Xie

    2016-02-01

    We propose a scheme of quantum screening to enhance the parameter-estimation precision in open quantum systems by means of the dynamics of quantum Fisher information. The principle of quantum screening is based on an auxiliary system to inhibit the decoherence processes and erase the excited state to the ground state. Compared with the case without quantum screening, the results show that the dynamics of quantum Fisher information with quantum screening has a larger value during the evolution processes. Project supported by the National Natural Science Foundation of China (Grant No. 11374096), the Natural Science Foundation of Guangdong Province, China (Grant No. 2015A030310354), and the Project of Enhancing School with Innovation of Guangdong Ocean University (Grant Nos. GDOU2014050251 and GDOU2014050252).

  9. 1000 MW ultra-supercritical turbine steam parameter optimization

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The 2 × 1000 MW ultra-supercritical steam turbine of the Shanghai Waigaoqiao Phase Ⅲ project, which uses grid frequency regulation and overload control through an overload valve, is manufactured by Shanghai Turbine Company using Siemens technology. Through optimization, the steam pressure is regarded as the criterion between constant-pressure and sliding-pressure operation. At high circulating water temperature, the turbine overload valve is kept closed when the unit load is lower than 1000 MW, while at other circulating water temperatures the turbine can run in sliding-pressure operation when the unit load is higher than 1000 MW and the pressure is lower than 27 MPa. This increases the unit operation efficiency. The 3D bending technology in the critical piping helps to reduce the project investment and minimize the reheat system pressure drop, which improves the unit operation efficiency and safety. By choosing a lower circulating water design temperature and by setting an individual boiler feedwater turbine condenser to reduce the exhaust steam flow and the heat load to the main condenser, the unit average back pressure and the terminal temperature difference are minimized. Therefore, the unit heat efficiency is increased.

  10. Critical parameters for optimal biomass refineries: the case of biohydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Koukios, Emmanuel; Koullas, Dimitrios; Koukios, Irene Daouti; Avgerinos, Emmanouil [National Technical University of Athens, Bioresource Technology Unit, School of Chemical Engineering, Athens (Greece)

    2010-04-15

    The object of this paper is to identify and assess the elements taken from agro-industries and fossil hydrocarbon refineries, especially with respect to biomass logistics, fractionation kinetics, and process energetics. Such critical information will be of immediate use by policy and decision makers, especially in the early phase of planning and designing the first generation of biorefineries. Concerning feedstock logistics, biorefineries have a lot to learn from food and wood supply chains. This learning could lead to the deployment of complex, decentralised, stage-wise biorefining systems, consisting of local agro-refineries, regional biorefineries, where the primary plant fractions are processed and upgraded to useful intermediates, and central bioconversion units for the generation of market-grade biofuels, such as biohydrogen and other high value-added vectors. The kinetic aspects of biorefineries are related to the physico-chemical nature of the macromolecules. Finally, to solve the problem of the non-optimal energy transformations a tailored-up bioenergy plan is proposed for each biorefinery. The example of a wheat bran-based biorefinery, aiming at the production of biohydrogen will be used to illustrate the way ahead. (orig.)

  11. Model parameters estimation of aero-engine based on hybrid optimization algorithm

    Institute of Scientific and Technical Information of China (English)

    LI Qiu-hong; LI Ye-bo; JIANG Dian-wen

    2011-01-01

    A hybrid optimization algorithm for the time-domain identification of a multivariable state space model for an aero-engine is presented in this paper. The optimization procedure runs particle swarm optimization (PSO) and least squares optimization (LSO) "in series". PSO starts from an initial population and searches for the optimum solution by updating generations. However, it can sometimes run into a suboptimal solution. LSO can then start from the suboptimal solution of PSO and obtain an optimum solution by a conjugate gradient algorithm. The algorithm is suitable for high-order multivariable systems which have many parameters to be estimated over wide ranges. The hybrid optimization algorithm is applied to estimate the parameters of a 4-input 4-output state variable model (SVM) for an aero-engine. The simulation results demonstrate the effectiveness of the proposed algorithm.

  12. Rethinking design parameters in the search for optimal dynamic seating.

    Science.gov (United States)

    Pynt, Jennifer

    2015-04-01

    Dynamic seating design purports to lessen damage incurred during sedentary occupations by increasing sitter movement while modifying muscle activity. Dynamic sitting is currently defined by O'Sullivan et al. ( 2013a) as relating to 'the increased motion in sitting which is facilitated by the use of specific chairs or equipment' (p. 628). Yet the evidence is conflicting that dynamic seating creates variation in the sitter's lumbar posture or muscle activity with the overall consensus being that current dynamic seating design fails to fulfill its goals. Research is needed to determine if a new generation of chairs requiring active sitter involvement fulfills the goals of dynamic seating and aids cardio/metabolic health. This paper summarises the pursuit of knowledge regarding optimal seated spinal posture and seating design. Four new forms of dynamic seating encouraging active sitting are discussed. These are 1) The Core-flex with a split seatpan to facilitate a walking action while seated 2) the Duo balans requiring body action to create rocking 3) the Back App and 4) Locus pedestal stools both using the sitter's legs to drive movement. Unsubstantiated claims made by the designers of these new forms of dynamic seating are outlined. Avenues of research are suggested to validate designer claims and investigate whether these designs fulfill the goals of dynamic seating and assist cardio/metabolic health. Should these claims be efficacious then a new definition of dynamic sitting is suggested; 'Sitting in which the action is provided by the sitter, while the dynamic mechanism of the chair accommodates that action'. PMID:25892386

  13. CH4 parameter estimation in CLM4.5bgc using surrogate global optimization

    Science.gov (United States)

    Müller, J.; Paudel, R.; Shoemaker, C. A.; Woodbury, J.; Wang, Y.; Mahowald, N.

    2015-10-01

    Over the anthropocene methane has increased dramatically. Wetlands are one of the major sources of methane to the atmosphere, but the role of changes in wetland emissions is not well understood. The Community Land Model (CLM) of the Community Earth System Models contains a module to estimate methane emissions from natural wetlands and rice paddies. Our comparison of CH4 emission observations at 16 sites around the planet reveals, however, that there are large discrepancies between the CLM predictions and the observations. The goal of our study is to adjust the model parameters in order to minimize the root mean squared error (RMSE) between model predictions and observations. These parameters have been selected based on a sensitivity analysis. Because of the cost associated with running the CLM simulation (15 to 30 min on the Yellowstone Supercomputing Facility), only relatively few simulations can be allowed in order to find a near-optimal solution within an acceptable time. Our results indicate that the parameter estimation problem has multiple local minima. Hence, we use a computationally efficient global optimization algorithm that uses a radial basis function (RBF) surrogate model to approximate the objective function. We use the information from the RBF to select parameter values that are most promising with respect to improving the objective function value. We show with pseudo data that our optimization algorithm is able to make excellent progress with respect to decreasing the RMSE. Using the true CH4 emission observations for optimizing the parameters, we are able to significantly reduce the overall RMSE between observations and model predictions by about 50 %. The methane emission predictions of the CLM using the optimized parameters agree better with the observed methane emission data in northern and tropical latitudes. With the optimized parameters, the methane emission predictions are higher in northern latitudes than when the default parameters are
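
    A much-simplified sketch of surrogate-based optimization in the spirit described above: fit an RBF to all expensive evaluations, score cheap candidate points on the surrogate, and spend the next expensive model run on the most promising candidate. The published algorithm balances the surrogate value against distance to already-evaluated points; this greedy version and all settings are assumptions, and the toy objective in the usage comment stands in for a CLM run.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def surrogate_minimize(expensive_cost, bounds, n_init=10, n_iter=30, n_cand=2000, seed=0):
    """Greedy RBF-surrogate optimization: one expensive evaluation per iteration."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, (n_init, len(lo)))
    y = np.array([expensive_cost(x) for x in X])
    for _ in range(n_iter):
        rbf = RBFInterpolator(X, y, kernel="thin_plate_spline")
        best = X[y.argmin()]
        # candidates: local perturbations around the incumbent plus global samples
        local = best + rng.normal(0.0, 0.05 * (hi - lo), (n_cand // 2, len(lo)))
        glob = rng.uniform(lo, hi, (n_cand // 2, len(lo)))
        cand = np.clip(np.vstack([local, glob]), lo, hi)
        x_next = cand[rbf(cand).argmin()]      # point the surrogate predicts to be best
        X = np.vstack([X, x_next])
        y = np.append(y, expensive_cost(x_next))
    return X[y.argmin()], y.min()

# toy usage (a cheap stand-in for the expensive land-model RMSE):
# x_best, f_best = surrogate_minimize(lambda x: np.sum((x - 0.3) ** 2), np.array([[0.0, 1.0]] * 4))
```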

  14. Optimal Control of Distributed Parameter Systems with Application to Transient Thermoelectric Cooling

    Directory of Open Access Journals (Sweden)

    KOTSUR, M.

    2015-05-01

    Full Text Available We give a solution of the optimal control problem for distributed parameter systems described by nonlinear partial differential equations with nonstandard boundary conditions. The variational method is used to obtain the general form of the necessary conditions of optimality. A suitable algorithm based on the numerical method of successive approximations has been constructed for computing the optimal control functions. The results are applied to the optimization of the transient thermoelectric cooling process. Optimal dependences of current on time have been calculated for the thermoelectric cooler power supply with the purpose of minimizing the cooling temperature within a preset time interval.

  15. User's manual for an aerodynamic optimization scheme that updates flow variables and design parameters simultaneously

    Science.gov (United States)

    Rizk, Magdi H.

    1988-01-01

    This user's manual is presented for an aerodynamic optimization program that updates flow variables and design parameters simultaneously. The program was developed for solving constrained optimization problems in which the objective function and the constraint function are dependent on the solution of the nonlinear flow equations. The program was tested by applying it to the problem of optimizing propeller designs. Some reference to this particular application is therefore made in the manual. However, the optimization scheme is suitable for application to general aerodynamic design problems. A description of the approach used in the optimization scheme is first presented, followed by a description of the use of the program.

  16. An Improved Bees Algorithm for Real Parameter Optimization

    Directory of Open Access Journals (Sweden)

    Wasim A. Hussein

    2015-10-01

    Full Text Available The Bees Algorithm (BA) is a bee swarm-based search algorithm inspired by the foraging behavior of a swarm of honeybees. BA can be divided into four parts: the parameter tuning part, the initialization part, the local search part, and the global search part. Recently, BA based on the Patch-Levy-based Initialization Algorithm (PLIA-BA) has been proposed. However, the initial stage remains an initial step, and its improvement is not enough for more challenging problem classes with different properties. The local and global search capabilities are also required to be enhanced to improve the quality of the final solution and the convergence speed of PLIA-BA on such problems. Consequently, in this paper, a new local search algorithm has been adopted based on Levy looping flights. Moreover, the mechanism of the global search has been enhanced to be closer to nature and based on the patch-Levy model adopted in the initialization algorithm (PLIA). The improvements in the local and global search parts are incorporated into PLIA-BA to devise a new version of BA that is called the Patch-Levy-based Bees Algorithm (PLBA). We investigate the performance of the proposed PLBA on a set of challenging benchmark functions. The results of the experiments indicate that PLBA significantly outperforms the other BA variants, including PLIA-BA, and can produce comparable results with other state-of-the-art algorithms.

  17. Parameter optimization of controllable local degree of freedom for reducing vibration of flexible manipulator

    Institute of Scientific and Technical Information of China (English)

    Bian Yushu; Gao Zhihui

    2013-01-01

    Parameter optimization of the controllable local degree of freedom is studied for reducing vibration of the flexible manipulator at the lowest possible cost. The controllable local degrees of freedom are suggested and introduced to the topological structure of the flexible manipulator, and used as an effective way to alleviate vibration through dynamic coupling. Parameters introduced by the controllable local degrees of freedom are analyzed and their influences on vibration reduction are investigated. A strategy to optimize these parameters is put forward and the corresponding optimization method is suggested based on Particle Swarm Optimization (PSO). Simulations are conducted and results of case studies confirm that the proposed optimization method is effective in reducing vibration of the flexible manipulator at the lowest possible cost.

  18. Automated estimation of stellar fundamental parameters from low resolution spectra: the PLS method

    Institute of Scientific and Technical Information of China (English)

    Jian-Nan Zhang; A-Li Luo; Yong-Heng Zhao

    2009-01-01

    PLS (Partial Least Squares regression) is introduced into the automatic estimation of fundamental stellar spectral parameters. It extracts the spectral component most correlated with the parameters (Teff, log g and [Fe/H]), and sets up a linear regression function from the spectra to the corresponding parameters. Considering the properties of stellar spectra and the PLS algorithm, we present a piecewise PLS regression method for the estimation of stellar parameters, which is composed of one PLS model for Teff, and seven PLS models for log g and [Fe/H] estimation. Its performance is investigated by large experiments on flux calibrated spectra and continuum normalized spectra at different signal-to-noise ratios (SNRs) and resolutions. The results show that the piecewise PLS method is robust for spectra at the medium resolution of 0.23 nm. For low resolution 0.5 nm and 1 nm spectra, it achieves competitive results at higher SNR. Experiments using ELODIE spectra of 0.23 nm resolution illustrate that our piecewise PLS models trained with MILES spectra are efficient for O ~ G stars: for flux calibrated spectra, the systematic offsets are 3.8%, 0.14 dex, and -0.09 dex for Teff, log g and [Fe/H], with error scatters of 5.2%, 0.44 dex and 0.38 dex, respectively; for continuum normalized spectra, the systematic offsets are 3.8%, 0.12 dex, and -0.13 dex for Teff, log g and [Fe/H], with error scatters of 5.2%, 0.49 dex and 0.41 dex, respectively. The PLS method is rapid, easy to use and does not rely as strongly on the tightness of a parameter grid of templates to reach high precision as Artificial Neural Networks or minimum distance methods do.
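
    A minimal illustration of the regression step using scikit-learn; it trains a single PLS model mapping spectra to (Teff, log g, [Fe/H]), whereas the paper uses a piecewise scheme with one model for Teff and seven for log g and [Fe/H]. The array names and the number of components are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# X_train: (n_spectra, n_pixels) flux vectors; Y_train: (n_spectra, 3) = [Teff, log g, [Fe/H]]
def train_pls(X_train, Y_train, n_components=15):
    return PLSRegression(n_components=n_components).fit(X_train, Y_train)

# usage sketch
# pls = train_pls(train_spectra, train_labels)
# teff, logg, feh = pls.predict(test_spectrum.reshape(1, -1))[0]
```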

  19. THE PARAMETER OPTIMIZATION MODEL OF INVESTMENT AND CONSTRUCTION PROJECTS AND MANAGERIAL FEASIBILITY OF THEIR BEHAVIOR

    Directory of Open Access Journals (Sweden)

    P. Ye. Uvarov

    2009-09-01

    Full Text Available In the article, the basic problem of substantiating the parameters of an optimization model of organizational and technological solutions for investment and construction projects in the project management system is considered.

  20. Automation of reverse engineering process in aircraft modeling and related optimization problems

    Science.gov (United States)

    Li, W.; Swetits, J.

    1994-01-01

    During the year of 1994, the engineering problems in aircraft modeling were studied. The initial concern was to obtain a surface model with desirable geometric characteristics. Much of the effort during the first half of the year was to find an efficient way of solving a computationally difficult optimization model. Since the smoothing technique in the proposal 'Surface Modeling and Optimization Studies of Aerodynamic Configurations' requires solutions of a sequence of large-scale quadratic programming problems, it is important to design algorithms that can solve each quadratic program in a few iterations. This research led to three papers by Dr. W. Li, which were submitted to SIAM Journal on Optimization and Mathematical Programming. Two of these papers have been accepted for publication. Even though significant progress has been made during this phase of research and computation time was reduced from 30 min. to 2 min. for a sample problem, it was not good enough for on-line processing of digitized data points. After discussion with Dr. Robert E. Smith Jr., it was decided not to enforce shape constraints in order to simplify the model. As a consequence, P. Dierckx's nonparametric spline fitting approach was adopted, where one has only one control parameter for the fitting process - the error tolerance. At the same time the surface modeling software developed by Imageware was tested. Research indicated a substantially improved fitting of digitized data points can be achieved if a proper parameterization of the spline surface is chosen. A winning strategy is to incorporate Dierckx's surface fitting with a natural parameterization for aircraft parts. The report consists of 4 chapters. Chapter 1 provides an overview of reverse engineering related to aircraft modeling and some preliminary findings of the effort in the second half of the year. Chapters 2-4 are the research results by Dr. W. Li on penalty functions and conjugate gradient methods for
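
    Dierckx's smoothing-spline routines are what SciPy's FITPACK wrappers expose, so a sketch of the single-control-parameter surface fit described above can be written as follows; the smoothing value s plays the role of the error tolerance, while the choice of parameterization for aircraft parts (the "winning strategy" above) is not addressed here. Array names and the smoothing value are assumptions.

```python
import numpy as np
from scipy.interpolate import bisplrep, bisplev

def fit_surface(x, y, z, smoothing):
    """Smoothing bicubic spline fit of scattered (x, y, z) points; 'smoothing' is the
    single error-tolerance control (FITPACK's s parameter)."""
    return bisplrep(x, y, z, s=smoothing)

# usage sketch on digitized points x, y, z (1-D arrays):
# tck = fit_surface(x, y, z, smoothing=0.01 * len(x))
# xg = np.linspace(x.min(), x.max(), 100)
# yg = np.linspace(y.min(), y.max(), 100)
# zg = bisplev(xg, yg, tck)            # fitted surface on a 100 x 100 grid
```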

  1. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    International Nuclear Information System (INIS)

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system, (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques named column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. Algorithm continues by pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provide an effective way to optimize simultaneously the large collection of station parameters and significantly improves

  2. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, M; Li, R; Xing, L [Stanford UniversitySchool of Medicine, Stanford, CA (United States); Ye, Y [Stanford Univ, Management Science and Engineering, Stanford, Ca (United States); Boyd, S [Stanford University, Electrical Engineering, Stanford, CA (United States)

    2014-06-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system, (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques named column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. Algorithm continues by pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provide an effective way to optimize simultaneously the large collection of station parameters and significantly improves

  3. Extended Application of the Conditional Nonlinear Optimal Parameter Perturbation Method in the Common Land Model

    Institute of Scientific and Technical Information of China (English)

    WANG Bo; HUO Zhenhua

    2013-01-01

    An extension of the conditional nonlinear optimal parameter perturbation (CNOP-P) method is applied to the parameter optimization of the Common Land Model (CoLM) for the North China Plain with the differential evolution (DE) method. Using National Meteorological Center (NMC) Reanalysis 6-hourly surface flux data and National Center for Environmental Prediction/Department of Energy (NCEP/DOE) Atmospheric Model Intercomparison Project II (AMIP-II) 6-hourly Reanalysis Gaussian Grid data, two experiments (I and II) were designed to investigate the impact of the percentages of sand and clay in the shallow soil in CoLM on its ability to simulate shallow soil moisture. A third experiment (III) was designed to study the shallow soil moisture and latent heat flux simultaneously. In all the three experiments, after the optimization stage, the percentages of sand and clay of the shallow soil were used to predict the shallow soil moisture in the following month. The results show that the optimal parameters can enable CoLM to better simulate shallow soil moisture, with the simulation results of CoLM after the double-parameter optimal experiment being better than the single-parameter optimal experiment in the optimization slot. Furthermore, the optimal parameters were able to significantly improve the prediction results of CoLM at the prediction stage. In addition, whether or not the atmospheric forcing and observational data are accurate can seriously affect the results of optimization, and the more accurate the data are, the more significant the results of optimization may be.
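
    A toy illustration of optimizing two soil-texture parameters with differential evolution; the land-model run is replaced by a trivial stand-in function and the observation series is synthetic, so only the optimization wiring (bounds on sand/clay percentages, an RMSE misfit, SciPy's DE driver) reflects the kind of experiment described above.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
observed = 0.32 + 0.01 * rng.standard_normal(30)        # toy "observed" soil moisture series

def toy_colm(sand_pct, clay_pct):
    """Stand-in for a land-model run: wetter with more clay, drier with more sand."""
    return np.full(30, 0.20 + 0.004 * clay_pct - 0.001 * sand_pct)

def misfit(params):
    sand, clay = params
    return np.sqrt(np.mean((toy_colm(sand, clay) - observed) ** 2))

result = differential_evolution(misfit, bounds=[(5.0, 70.0), (5.0, 60.0)], maxiter=50, seed=1)
print(result.x, result.fun)    # optimized sand/clay percentages and the remaining RMSE
```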

  4. An adaptive image denoising method based on local parameters optimization

    Indian Academy of Sciences (India)

    Hari Om; Mantosh Biswas

    2014-08-01

    In image denoising algorithms, the noise is handled by either modifying term-by-term, i.e., individual pixels, or block-by-block, i.e., groups of pixels, using a suitable shrinkage factor and threshold function. The shrinkage factor is generally a function of the threshold and some other characteristics of the neighbouring pixels of the pixel to be thresholded (denoised). The threshold is determined in terms of the noise variance present in the image and its size. The VisuShrink, SureShrink, and NeighShrink methods are important denoising methods that provide good results. The first two, i.e., the VisuShrink and SureShrink methods, follow the term-by-term approach, i.e., modify the individual pixel, and the third one, i.e., NeighShrink and its variants ModiNeighShrink, IIDMWD, and IAWDMBMC, follow the block-by-block approach, i.e., modify the pixels in groups, in order to remove the noise. The VisuShrink, SureShrink, and NeighShrink methods however do not give very good visual quality because they remove too many coefficients due to their high threshold values. In this paper, we propose an image denoising method that uses the local parameters of the neighbouring coefficients of the pixel to be denoised in the noisy image. In our method, we propose two new shrinkage factors and the threshold at each decomposition level, which lead to better visual quality. We also establish the relationship between both the shrinkage factors. We compare the performance of our method with that of VisuShrink and NeighShrink including various variants. Simulation results show that our proposed method has high peak signal-to-noise ratio and good visual quality of the image as compared to the traditional methods: the Wiener filter, VisuShrink, SureShrink, NeighBlock, NeighShrink, ModiNeighShrink, LAWML, IIDMWT, and IAWDMBNC methods.
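
    The paper's new shrinkage factors and level-dependent thresholds are not given in the abstract, so the sketch below shows only the common baseline it builds on: soft thresholding of wavelet detail coefficients with the universal (VisuShrink-style) threshold, implemented with PyWavelets. The wavelet choice and decomposition level are assumptions.

```python
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db8", level=2):
    """Soft thresholding of detail coefficients with the universal threshold
    (VisuShrink-style baseline); 'image' is a 2-D numpy array."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745     # noise estimate from the finest diagonal band
    thr = sigma * np.sqrt(2.0 * np.log(image.size))        # universal threshold
    denoised = [coeffs[0]]
    for detail_bands in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(band, thr, mode="soft") for band in detail_bands))
    return pywt.waverec2(denoised, wavelet)
```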

  5. Mechanism Analysis and Parameter Optimization of Mega-Sub-Isolation System

    OpenAIRE

    Xiangxiu Li; Ping Tan; Xiaojun Li; Aiwen Liu

    2016-01-01

    The equation of motion of mega-sub-isolation system is established. The working mechanism of the mega-sub-isolation system is obtained by systematically investigating its dynamic characteristics corresponding to various structural parameters. Considering the number and location of the isolated substructures, a procedure to optimally design the isolator parameters of the mega-sub-isolation system is put forward based on the genetic algorithm with base shear as the optimization objective. The i...

  6. Computer-aided method for automated selection of optimal imaging plane for measurement of total cerebral blood flow by MRI

    Science.gov (United States)

    Teng, Pang-yu; Bagci, Ahmet Murat; Alperin, Noam

    2009-02-01

    A computer-aided method for finding an optimal imaging plane for simultaneous measurement of the arterial blood inflow through the 4 vessels leading blood to the brain by phase contrast magnetic resonance imaging is presented. The method's performance is compared with manual selection by two observers. The skeletons of the 4 vessels, for which centerlines are generated, are first extracted. Then, a global direction of the relatively less curved internal carotid arteries is calculated to determine the main flow direction. This is then used as a reference direction to identify segments of the vertebral arteries that strongly deviate from the main flow direction. These segments are then used to identify anatomical landmarks for improved consistency of the imaging plane selection. An optimal imaging plane is then identified by finding a plane with the smallest error value, which is defined as the sum of the angles between the plane's normal and the vessel centerline's direction at the location of the intersections. Error values obtained using the automated and the manual methods were then compared using 9 magnetic resonance angiography (MRA) data sets. The automated method considerably outperformed the manual selection. The mean error value with the automated method was significantly lower than with the manual method, 0.09+/-0.07 vs. 0.53+/-0.45, respectively (p<.0001, Student's t-test). Reproducibility of repeated measurements was analyzed using Bland and Altman's test; the mean 95% limits of agreement for the automated and manual methods were 0.01~0.02 and 0.43~0.55, respectively.
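
    The error value described above is simple to compute once a candidate plane normal and the vessel centerline tangents at the plane intersections are known; a sketch with assumed array conventions follows (the plane search itself, skeletonization and landmark logic are not shown).

```python
import numpy as np

def plane_error(normal, tangents):
    """Sum of angles (radians) between a candidate plane's normal and the vessel
    centerline tangent directions at the plane intersections; the optimal plane
    minimizes this value, i.e., the vessels cross the plane as perpendicularly as possible."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    total = 0.0
    for t in np.atleast_2d(tangents):
        t = t / np.linalg.norm(t)
        total += np.arccos(np.clip(abs(np.dot(n, t)), 0.0, 1.0))
    return total

# usage sketch: keep the candidate plane whose normal gives the smallest error
# best_normal = min(candidate_normals, key=lambda n: plane_error(n, centerline_tangents))
```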

  7. Optimization of RNA Purification and Analysis for Automated, Pre-Symptomatic Disease Diagnostics

    Energy Technology Data Exchange (ETDEWEB)

    Vaidya, A; Nasarabadi, S; Milanovich, F

    2005-06-28

    When diagnosing disease, time is often a more formidable enemy than the pathogen itself. Current detection methods rely primarily on post-symptomatic protein production (i.e. antibodies), which does not occur in noticeable levels until several weeks after infection. As such, a major goal among researchers today is to expedite pre-symptomatic disease recognition and treatment. Since most pathogens are known to leave a unique signature on the genetic expression of the host, one potential diagnostic tool is host mRNA. In my experiments, I examined several methods of isolating RNA and reading its genetic sequence. I first used two types of reverse transcriptase polymerase chain reactions (using commercial RNA) and examined the resultant complementary DNA through gel electrophoresis. I then proceeded to isolate and purify whole RNA from actual human monocytes and THP-1 cells using several published methods, and examined gene expression on the RNA itself. I compared the two RT-PCR methods and concluded that a double step RT-PCR is superior to the single step method. I also compared the various techniques of RNA isolation by examining the yield and purity of the resultant RNA. Finally, I studied the level of cellular IL-8 and IL-1 gene expression, two genes involved in the human immune response, which can serve as a baseline for future genetic comparison with LPS-exposed cells. Based on the results, I have determined which conditions and procedures are optimal for RNA isolation, RT-PCR, and RNA yield assessment. The overall goal of my research is to develop a flow-through system of RNA analysis, whereby blood samples can be collected and analyzed for disease prior to the onset of symptoms. The Pathomics group hopes to automate this process by removing the human labor factor, thereby decreasing the procedure's cost and increasing its availability to the general population. Eventually, our aim is to have an autonomous diagnostic system based on RNA analysis that would

  8. Automated Method for Estimating Nutation Time Constant Model Parameters for Spacecraft Spinning on Axis

    Science.gov (United States)

    2008-01-01

    Calculating an accurate nutation time constant (NTC), or nutation rate of growth, for a spinning upper stage is important for ensuring mission success. Spacecraft nutation, or wobble, is caused by energy dissipation anywhere in the system. Propellant slosh in the spacecraft fuel tanks is the primary source for this dissipation and, if it is in a state of resonance, the NTC can become short enough to violate mission constraints. The Spinning Slosh Test Rig (SSTR) is a forced-motion spin table where fluid dynamic effects in full-scale fuel tanks can be tested in order to obtain key parameters used to calculate the NTC. We accomplish this by independently varying nutation frequency versus the spin rate and measuring force and torque responses on the tank. This method was used to predict parameters for the Genesis, Contour, and Stereo missions, whose tanks were mounted outboard from the spin axis. These parameters are incorporated into a mathematical model that uses mechanical analogs, such as pendulums and rotors, to simulate the force and torque resonances associated with fluid slosh.

  9. A comparison between gradient descent and stochastic approaches for parameter optimization of a sea ice model

    Science.gov (United States)

    Sumata, H.; Kauker, F.; Gerdes, R.; Köberle, C.; Karcher, M.

    2013-07-01

    Two types of optimization methods were applied to a parameter optimization problem in a coupled ocean-sea ice model of the Arctic, and applicability and efficiency of the respective methods were examined. One optimization utilizes a finite difference (FD) method based on a traditional gradient descent approach, while the other adopts a micro-genetic algorithm (μGA) as an example of a stochastic approach. The optimizations were performed by minimizing a cost function composed of model-data misfit of ice concentration, ice drift velocity and ice thickness. A series of optimizations were conducted that differ in the model formulation ("smoothed code" versus standard code) with respect to the FD method and in the population size and number of possibilities with respect to the μGA method. The FD method fails to estimate optimal parameters due to the ill-shaped nature of the cost function caused by the strong non-linearity of the system, whereas the genetic algorithms can effectively estimate near optimal parameters. The results of the study indicate that the sophisticated stochastic approach (μGA) is of practical use for parameter optimization of a coupled ocean-sea ice model with a medium-sized horizontal resolution of 50 km × 50 km as used in this study.
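
    As an illustration of the stochastic approach favoured above, the following Python sketch shows a minimal micro-genetic algorithm: a very small elitist population that is re-randomized whenever diversity collapses. The toy cost function, the parameter bounds and the default population size of 5 are illustrative assumptions and are not taken from the sea ice study.

      import random

      def micro_ga(cost, bounds, pop_size=5, generations=200, restart_spread=1e-3):
          """Minimize cost(params) with a tiny elitist GA that restarts on convergence."""
          dim = len(bounds)
          rand_ind = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
          pop = [rand_ind() for _ in range(pop_size)]
          best = min(pop, key=cost)
          for _ in range(generations):
              fitness = [cost(ind) for ind in pop]
              elite = pop[fitness.index(min(fitness))]
              if cost(elite) < cost(best):
                  best = elite[:]
              # uniform crossover of the elite with tournament-selected partners
              new_pop = [elite[:]]
              while len(new_pop) < pop_size:
                  a, b = random.sample(pop, 2)
                  partner = a if cost(a) < cost(b) else b
                  child = [e if random.random() < 0.5 else p for e, p in zip(elite, partner)]
                  new_pop.append(child)
              pop = new_pop
              # restart with fresh random individuals (keeping the elite) once diversity collapses
              spread = max(max(ind[i] for ind in pop) - min(ind[i] for ind in pop) for i in range(dim))
              if spread < restart_spread:
                  pop = [elite[:]] + [rand_ind() for _ in range(pop_size - 1)]
          return best

      # toy misfit: distance of three "ice parameters" from a reference value
      print(micro_ga(lambda p: sum((x - 0.3) ** 2 for x in p), [(0.0, 1.0)] * 3))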

  10. Adjoint Parameter Sensitivity Analysis for the Hydrodynamic Lattice Boltzmann Method with Applications to Design Optimization

    DEFF Research Database (Denmark)

    Pingen, Georg; Evgrafov, Anton; Maute, Kurt

    2009-01-01

    We present an adjoint parameter sensitivity analysis formulation and solution strategy for the lattice Boltzmann method (LBM). The focus is on design optimization applications, in particular topology optimization. The lattice Boltzmann method is briefly described with an in-depth discussion...

  11. Thermoeconomic optimization of triple pressure heat recovery steam generator operating parameters for combined cycle plants

    OpenAIRE

    Mohammd Mohammed S.; Petrović Milan V.

    2015-01-01

    The aim of this work is to develop a method for optimization of the operating parameters of a triple pressure heat recovery steam generator. Two types of optimization were performed: (a) thermodynamic and (b) thermoeconomic. The purpose of the thermodynamic optimization is to maximize the efficiency of the plant. The selected objective for this purpose is minimization of the exergy destruction in the heat recovery steam generator (HRSG). The purpose of the ther...

  12. Feasibility Preserving Constraint-Handling Strategies for Real Parameter Evolutionary Optimization

    OpenAIRE

    Padhye, Nikhil; Mittal, Pulkit; Deb, Kalyanmoy

    2015-01-01

    Evolutionary Algorithms (EAs) are being routinely applied for a variety of optimization tasks, and real-parameter optimization in the presence of constraints is one such important area. During constrained optimization EAs often create solutions that fall outside the feasible region; hence a viable constraint-handling strategy is needed. This paper focuses on the class of constraint-handling strategies that repair infeasible solutions by bringing them back into the search space and explicitly...
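
    As an illustration of the repair idea described above, the following Python sketch shows two common feasibility-preserving rules for box-bounded real parameters, clipping to the violated bound and reflecting the overshoot back inside; the specific repair operators analyzed in the paper may differ.

      def repair_to_bounds(x, bounds, mode="reflect"):
          """Bring an infeasible real-parameter solution back inside its box constraints."""
          repaired = []
          for xi, (lo, hi) in zip(x, bounds):
              if mode == "clip":                      # saturate at the violated bound
                  xi = min(max(xi, lo), hi)
              else:                                   # reflect the overshoot back inside
                  while xi < lo or xi > hi:
                      xi = lo + (lo - xi) if xi < lo else hi - (xi - hi)
              repaired.append(xi)
          return repaired

      print(repair_to_bounds([1.4, -0.2, 0.5], [(0.0, 1.0)] * 3))   # -> [0.6, 0.2, 0.5]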

  13. Optimal fidelity of teleportation with continuous variables using three tunable parameters in a realistic environment

    Science.gov (United States)

    Hu, Li-Yun; Liao, Zeyang; Ma, Shengli; Zubairy, M. Suhail

    2016-03-01

    We introduce three tunable parameters to optimize the fidelity of quantum teleportation with continuous variables in a nonideal scheme. By using the characteristic-function formalism, we present the condition that the teleportation fidelity is independent of the amplitude of input coherent states for any entangled resource. Then we investigate the effects of tunable parameters on the fidelity with or without the presence of the environment and imperfect measurements by analytically deriving the expression of fidelity for three different input coherent-state distributions. It is shown that, for the linear distribution, the optimization with three tunable parameters is the best one with respect to single- and two-parameter optimization. Our results reveal the usefulness of tunable parameters for improving the fidelity of teleportation and the ability against decoherence.

  14. Optimization of processing parameters for rheo-casting AZ91D magnesium alloy

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A design of experiment technique was used to optimize the microstructure of the AZ91D alloy produced by rheo-casting. The experimental design consists of four parameters (pouring temperature, shearing temperature, shearing time and shearing rate) with three levels. Grain size and shape factor measurements of primary α-Mg particles were conducted to characterize the microstructure. The contribution of each parameter shows that pouring temperature is the most significant parameter affecting the grain size, and the shape factor highly depends on the shearing temperature. The optimized rheo-casting processing parameters are 650 ℃ for pouring temperature, 585 ℃ for shearing temperature, 40 s for shearing time, and 600 r/min for shearing rate. Under the optimized processing parameters, the average grain size is 28.53 μm, and the shape factor is 0.591.

  15. Teaching-learning-based optimization algorithm for unconstrained and constrained real-parameter optimization problems

    Science.gov (United States)

    Rao, R. V.; Savsani, V. J.; Balic, J.

    2012-12-01

    An efficient optimization algorithm called teaching-learning-based optimization (TLBO) is proposed in this article to solve continuous unconstrained and constrained optimization problems. The proposed method is based on the effect of the influence of a teacher on the output of learners in a class. The basic philosophy of the method is explained in detail. The algorithm is tested on 25 different unconstrained benchmark functions and 35 constrained benchmark functions with different characteristics. For the constrained benchmark functions, TLBO is tested with different constraint handling techniques such as superiority of feasible solutions, self-adaptive penalty, ɛ-constraint, stochastic ranking and ensemble of constraints. The performance of the TLBO algorithm is compared with that of other optimization algorithms and the results show the better performance of the proposed algorithm.
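
    The following Python sketch illustrates the basic TLBO loop as it is commonly formulated (a teacher phase followed by a learner phase, each with greedy acceptance); the test function, bounds and class size are illustrative assumptions, not values from the article.

      import random

      def tlbo(cost, bounds, learners=20, iterations=100):
          """Teacher phase then learner phase, greedy acceptance, for box-bounded minimization."""
          dim = len(bounds)
          clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
          pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(learners)]
          for _ in range(iterations):
              teacher = min(pop, key=cost)
              mean = [sum(ind[j] for ind in pop) / learners for j in range(dim)]
              for i, x in enumerate(pop):
                  tf = random.choice((1, 2))                      # teaching factor
                  cand = clip([x[j] + random.random() * (teacher[j] - tf * mean[j]) for j in range(dim)])
                  if cost(cand) < cost(x):                        # teacher phase (greedy)
                      pop[i] = x = cand
                  partner = pop[random.randrange(learners)]
                  sign = 1 if cost(x) < cost(partner) else -1     # move away from a worse learner, toward a better one
                  cand = clip([x[j] + random.random() * sign * (x[j] - partner[j]) for j in range(dim)])
                  if cost(cand) < cost(x):                        # learner phase (greedy)
                      pop[i] = cand
          return min(pop, key=cost)

      print(tlbo(lambda p: sum(v * v for v in p), [(-5.0, 5.0)] * 4))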

  16. Network optimization including gas lift and network parameters under subsurface uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Schulze-Riegert, R.; Baffoe, J.; Pajonk, O. [SPT Group GmbH, Hamburg (Germany); Badalov, H.; Huseynov, S. [Technische Univ. Clausthal, Clausthal-Zellerfeld (Germany). ITE; Trick, M. [SPT Group, Calgary, AB (Canada)

    2013-08-01

    Optimization of oil and gas field production systems poses a great challenge to field development due to complex and multiple interactions between various operational design parameters and subsurface uncertainties. Conventional analytical methods are capable of finding local optima based on single deterministic models. They are less applicable for efficiently generating alternative design scenarios in a multi-objective context. Practical implementations of robust optimization workflows integrate the evaluation of alternative design scenarios and multiple realizations of subsurface uncertainty descriptions. Production or economic performance indicators such as NPV (Net Present Value) are linked to a risk-weighted objective function definition to guide the optimization processes. This work focuses on an integrated workflow using a reservoir-network simulator coupled to an optimization framework. The work will investigate the impact of design parameters while considering the physics of the reservoir, wells, and surface facilities. Subsurface uncertainties are described by well parameters such as inflow performance. Experimental design methods are used to investigate parameter sensitivities and interactions. Optimization methods are used to find optimal design parameter combinations which improve key performance indicators of the production network system. The proposed workflow will be applied to a representative oil reservoir coupled to a network which is modelled by an integrated reservoir-network simulator. Gas-lift will be included as an explicit measure to improve production. An objective function will be formulated for the net present value of the integrated system including production revenue and facility costs. Facility and gas lift design parameters are tuned to maximize NPV. Well inflow performance uncertainties are introduced with an impact on gas lift performance. Resulting variances on NPV are identified as a risk measure for the optimized system design. A

  17. Optimal Design of the Transverse Flux Machine Using a Fitted Genetic Algorithm with Real Parameters

    DEFF Research Database (Denmark)

    Argeseanu, Alin; Ritchie, Ewen; Leban, Krisztina Monika

    2012-01-01

    This paper applies a fitted genetic algorithm (GA) to the optimal design of the transverse flux machine (TFM). The main goal is to provide a tool for the optimal design of the TFM that is easy to use. The GA optimizes the analytic basic design of two TFM topologies: the C-core and the U-core. First, the GA was designed with real parameters. A further objective of the fitted GA is minimization of the computation time, related to the number of individuals, the number of generations and the types of operators and their specific parameters....

  18. Derivative-free optimization for parameter estimation in computational nuclear physics

    Science.gov (United States)

    Wild, Stefan M.; Sarich, Jason; Schunck, Nicolas

    2015-03-01

    We consider optimization problems that arise when estimating a set of unknown parameters from experimental data, particularly in the context of nuclear density functional theory. We examine the cost of not having derivatives of these functionals with respect to the parameters. We show that the POUNDERS code for local derivative-free optimization obtains consistent solutions on a variety of computationally expensive energy density functional calibration problems. We also provide a primer on the operation of the POUNDERS software in the Toolkit for Advanced Optimization.

  19. Derivative-free optimization for parameter estimation in computational nuclear physics

    CERN Document Server

    Wild, Stefan M; Schunck, Nicolas

    2014-01-01

    We consider optimization problems that arise when estimating a set of unknown parameters from experimental data, particularly in the context of nuclear density functional theory. We examine the cost of not having derivatives of these functionals with respect to the parameters. We show that the POUNDERS code for local derivative-free optimization obtains consistent solutions on a variety of computationally expensive energy density functional calibration problems. We also provide a primer on the operation of the POUNDERS software in the Toolkit for Advanced Optimization.

  20. Multi-Objective Optimization of Vehicle Suspension Parameters Considering Various Road Classes

    Directory of Open Access Journals (Sweden)

    Havelka Ferdinand

    2014-12-01

    Full Text Available Vehicle suspension optimization for various road classes travelled at different velocities is performed. Road excitation is modeled using a first order shaping filter. A half-car model is adopted to simulate the vehicle’s vertical dynamics. The excitation time delay between the rear and the front tire is modeled using a Padé approximation. Suspension parameters are optimized using a random search method with respect to “comfort” and “sporty driving”, subject to the design constraints of the suspension as well as road-holding and maximum suspension travel constraints. Optimal suspension parameters suitable for various road classes and vehicle velocities have been chosen.

  1. Sensitivity Analysis of the Optimal Parameter Settings of an LTE Packet Scheduler

    NARCIS (Netherlands)

    Fernandez Diaz, I.; Litjens, R.; Berg, J.L. van den; Dimitrova, D.C.; Spaey, K.

    2010-01-01

    Advanced packet scheduling schemes in 3G/3G+ mobile networks provide one or more parameters to optimise the trade-off between QoS and resource efficiency. In this paper we study the sensitivity of the optimal parameter setting for packet scheduling in LTE radio networks with respect to various traff

  2. Experimental Verification of Statistically Optimized Parameters for Low-Pressure Cold Spray Coating of Titanium

    Directory of Open Access Journals (Sweden)

    Damilola Isaac Adebiyi

    2016-06-01

    Full Text Available The cold spray coating process involves many process parameters which make the process very complex and highly sensitive to small changes in these parameters. This results in a small operational window of the parameters. Consequently, mathematical optimization of the process parameters is key, not only to achieving deposition but also to improving the coating quality. This study focuses on the mathematical identification and experimental justification of the optimum process parameters for cold spray coating of titanium alloy with silicon carbide (SiC). The continuity, momentum and energy equations governing the flow through the low-pressure cold spray nozzle were solved by introducing a constitutive equation to close the system. This was used to calculate the critical velocity for the deposition of SiC. In order to determine the input temperature that yields the calculated velocity, the distributions of velocity, temperature, and pressure in the cold spray nozzle were analyzed, and the exit values were predicted using the meshing tool of SolidWorks. Coatings fabricated using the optimized parameters and some non-optimized parameters are compared. The coating produced with the CFD-optimized parameters yielded lower porosity and higher hardness.

  3. Differential-Evolution Control Parameter Optimization for Unmanned Aerial Vehicle Path Planning.

    Science.gov (United States)

    Kok, Kai Yit; Rajendran, Parvathy

    2016-01-01

    The differential evolution algorithm has been widely applied to unmanned aerial vehicle (UAV) path planning. At present, four tuning parameters exist for the differential evolution algorithm, namely, population size, differential weight, crossover, and generation number. These tuning parameters are required, together with a user-defined weighting between path and computational cost. However, the optimum settings of these tuning parameters vary according to application. Instead of trial and error, this paper presents an optimization method of the differential evolution algorithm for tuning the parameters of UAV path planning. The parameters that this research focuses on are population size, differential weight, crossover, and generation number. The developed algorithm enables the user to simply define the desired weighting between the path and computational cost to converge with the minimum number of generations required based on user requirements. In conclusion, the proposed optimization of tuning parameters in the differential evolution algorithm for UAV path planning expedites and improves the final output path and computational cost.
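
    To make the four tuning parameters concrete, the following Python sketch of a standard DE/rand/1/bin loop exposes population size, differential weight, crossover rate and generation number as arguments; the default values and the toy path-cost surrogate are illustrative assumptions and do not reproduce the path planning method of the paper.

      import random

      def differential_evolution(cost, bounds, pop_size=20, weight=0.8, crossover=0.9, generations=100):
          """DE/rand/1/bin with the four tuning parameters named in the abstract exposed as arguments."""
          dim = len(bounds)
          clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
          pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
          for _ in range(generations):
              for i in range(pop_size):
                  a, b, c = random.sample([p for k, p in enumerate(pop) if k != i], 3)
                  mutant = clip([a[j] + weight * (b[j] - c[j]) for j in range(dim)])
                  j_rand = random.randrange(dim)
                  trial = [mutant[j] if (random.random() < crossover or j == j_rand) else pop[i][j]
                           for j in range(dim)]
                  if cost(trial) <= cost(pop[i]):          # greedy selection
                      pop[i] = trial
          return min(pop, key=cost)

      # toy "path cost": deviation of three waypoint parameters from a notional optimum
      print(differential_evolution(lambda p: sum(abs(v - 1.0) for v in p), [(0.0, 5.0)] * 3))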

  4. Differential-Evolution Control Parameter Optimization for Unmanned Aerial Vehicle Path Planning.

    Directory of Open Access Journals (Sweden)

    Kai Yit Kok

    Full Text Available The differential evolution algorithm has been widely applied to unmanned aerial vehicle (UAV) path planning. At present, four tuning parameters exist for the differential evolution algorithm, namely, population size, differential weight, crossover, and generation number. These tuning parameters are required, together with a user-defined weighting between path and computational cost. However, the optimum settings of these tuning parameters vary according to application. Instead of trial and error, this paper presents an optimization method of the differential evolution algorithm for tuning the parameters of UAV path planning. The parameters that this research focuses on are population size, differential weight, crossover, and generation number. The developed algorithm enables the user to simply define the desired weighting between the path and computational cost to converge with the minimum number of generations required based on user requirements. In conclusion, the proposed optimization of tuning parameters in the differential evolution algorithm for UAV path planning expedites and improves the final output path and computational cost.

  5. Combustion Model and Control Parameter Optimization Methods for Single Cylinder Diesel Engine

    Directory of Open Access Journals (Sweden)

    Bambang Wahono

    2014-01-01

    Full Text Available This research presents a method to construct a combustion model and a method to optimize some control parameters of a diesel engine in order to develop a model-based control system. The purpose of constructing the model is to appropriately manage the control parameters so as to obtain the values of fuel consumption and emission as the engine output objectives. A stepwise method considering multicollinearity was applied to construct the combustion model with a polynomial model. Using the experimental data of a single cylinder diesel engine, the model of power, BSFC, NOx, and soot on a multiple injection diesel engine was built. The proposed method successfully developed a model that describes the control parameters in relation to the engine outputs. Although many control devices can be mounted on a diesel engine, an optimization technique is required to utilize this method in finding optimal engine operating conditions efficiently, alongside the existing development of individual emission control methods. Particle swarm optimization (PSO) was used to calculate control parameters that optimize fuel consumption and emission based on the model. The proposed method is able to calculate control parameters efficiently to optimize the evaluation items based on the model. Finally, the model combined with PSO was compiled on a microcontroller.

  6. Automated mapping of linear dunefield morphometric parameters from remotely-sensed data

    Science.gov (United States)

    Telfer, M. W.; Fyfe, R. M.; Lewin, S.

    2015-12-01

    Linear dunes are among the world's most common desert dune types, and typically occur in dunefields arranged in remarkably organized patterns extending over hundreds of kilometers. The causes of the patterns, formed by dunes merging, bifurcating and terminating, are still poorly understood, although it is widely accepted that they are emergent properties of the complex system of interactions between the boundary layer and an often-vegetated erodible substrate. Where such dunefields are vegetated, they are typically used as extensive rangeland, yet it is evident that many currently stabilized dunefields have been reactivated repeatedly during the late Quaternary. It has been suggested that dunefield patterning and the temporal evolution of dunefields are related, and thus there is considerable interest in better understanding the boundary conditions controlling dune patterning, especially given the possibility of reactivation of currently-stabilized dunefields under 21st century climate change. However, the time-consuming process of manual dune mapping has hampered attempts at quantitative description of dunefield patterning. This study aims to develop and test methods for delineating linear dune trendlines automatically from freely-available remotely sensed datasets. The highest resolution free global topographic data presently available (Aster GDEM v2) proved to be of marginal use, as the topographic expression of the dunes is of the same order as the vertical precision of the dataset (∼10 m), but in regions with relatively simple patterning it defined dune trends adequately. Analysis of spectral data (panchromatic Landsat 8 data) proved more promising in five of the six test sites, and despite poor panchromatic signal/noise ratios for the sixth site, the reflectance in the deep blue/violet (Landsat 8 Band 1) offers an alternative method of delineating dune pattern. A new edge detection algorithm (LInear Dune Optimized edge detection; LIDO) is proposed, based on

  7. Prediction Model of Battery State of Charge and Control Parameter Optimization for Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Bambang Wahono

    2015-07-01

    Full Text Available This paper presents the construction of a battery state of charge (SOC) prediction model and an optimization method for that model to appropriately manage the control parameters with respect to the SOC as the battery output objective. The Research Centre for Electrical Power and Mechatronics, Indonesian Institute of Sciences has tested its electric vehicle research prototype on the road, monitoring its voltage, current, temperature, time, vehicle velocity, motor speed, and SOC during operation. Using this experimental data, the prediction model of battery SOC was built. A stepwise method considering multicollinearity was able to efficiently develop the battery prediction model that describes the multiple control parameters in relation to characteristic values such as the SOC. It was demonstrated that particle swarm optimization (PSO) successfully and efficiently calculated optimal control parameters to optimize evaluation items such as the SOC based on the model.

  8. Optimization of machining parameters of turning operations based on multi performance criteria

    Directory of Open Access Journals (Sweden)

    N.K.Mandal

    2013-01-01

    Full Text Available The selection of optimum machining parameters plays a significant role in ensuring product quality, reducing manufacturing cost and increasing productivity in computer controlled manufacturing processes. For many years, multi-objective optimization of turning, owing to the inherent complexity of the process, has been a competitive engineering issue. This study investigates multi-response optimization of the turning process for an optimal parametric combination yielding the minimum power consumption, surface roughness and frequency of tool vibration using Grey relational analysis (GRA). A confirmation test is conducted for the optimal machining parameters to validate the result. Various turning parameters, such as spindle speed, feed and depth of cut, are considered. Experiments are designed and conducted based on a full factorial design of experiment.
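
    As a hedged illustration of the grey relational analysis step, the following Python sketch normalizes the measured responses, computes grey relational coefficients against the ideal sequence and averages them into a grade per experiment; the response values, the equal weighting of responses and the distinguishing coefficient of 0.5 are illustrative assumptions, not data from the study.

      def grey_relational_grades(responses, smaller_is_better, zeta=0.5):
          """responses: one row per experiment, one column per response (e.g. power, Ra, vibration)."""
          cols = list(zip(*responses))
          norm_cols = []
          for col, minimize in zip(cols, smaller_is_better):
              lo, hi = min(col), max(col)
              norm_cols.append([(hi - v) / (hi - lo) if minimize else (v - lo) / (hi - lo) for v in col])
          norm = list(zip(*norm_cols))                       # normalized rows in [0, 1], 1 = ideal
          deltas = [[1.0 - v for v in row] for row in norm]  # deviation from the ideal sequence
          d_min = min(min(row) for row in deltas)
          d_max = max(max(row) for row in deltas)
          coeff = [[(d_min + zeta * d_max) / (d + zeta * d_max) for d in row] for row in deltas]
          return [sum(row) / len(row) for row in coeff]      # grey relational grade per experiment

      # three experiments, responses = (power, surface roughness, tool vibration), all smaller-the-better
      grades = grey_relational_grades([[1.2, 0.8, 3.1], [1.0, 1.1, 2.9], [1.5, 0.7, 3.4]],
                                      smaller_is_better=[True, True, True])
      print(grades)   # the highest grade marks the preferred parameter combination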

  9. Aluminum-zinc alloy squeeze casting technological parameters optimization based on PSO and ANN

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper presents a ZA27 squeeze casting process parameter optimization method using an artificial neural network (ANN) combined with the particle swarm optimizer (PSO). Taking the test data as samples, a neural network is used to create a nonlinear mapping model between the ZA27 squeeze casting process parameters and the mechanical properties. PSO then optimizes over the model to obtain the optimum values of the process parameters. The approach makes full use of the nonlinear mapping capability of the neural network and the global optimization capability of PSO. The network uses radial basis functions, with clustering and gradient methods used for network learning in order to enhance the generalization ability of the network. PSO uses dynamically changing inertia weights to accelerate convergence and avoid local minima.
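
    The following Python sketch shows a basic particle swarm optimizer with a linearly decreasing inertia weight, the mechanism mentioned above for accelerating convergence; the surrogate objective, bounds and coefficient values are illustrative assumptions and stand in for the neural network model used in the paper.

      import random

      def pso(cost, bounds, particles=20, iterations=100, w_start=0.9, w_end=0.4, c1=2.0, c2=2.0):
          """Particle swarm with a linearly decreasing (dynamic) inertia weight."""
          dim = len(bounds)
          clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
          pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(particles)]
          vel = [[0.0] * dim for _ in range(particles)]
          pbest = [p[:] for p in pos]
          gbest = min(pbest, key=cost)
          for it in range(iterations):
              w = w_start - (w_start - w_end) * it / max(iterations - 1, 1)   # dynamic inertia weight
              for i in range(particles):
                  for j in range(dim):
                      vel[i][j] = (w * vel[i][j]
                                   + c1 * random.random() * (pbest[i][j] - pos[i][j])
                                   + c2 * random.random() * (gbest[j] - pos[i][j]))
                  pos[i] = clip([pos[i][j] + vel[i][j] for j in range(dim)])
                  if cost(pos[i]) < cost(pbest[i]):
                      pbest[i] = pos[i][:]
                      if cost(pos[i]) < cost(gbest):
                          gbest = pos[i][:]
          return gbest

      # toy surrogate: four process parameters pulled toward a notional optimum
      print(pso(lambda p: sum((v - 0.5) ** 2 for v in p), [(0.0, 1.0)] * 4))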

  10. Function Optimization and Parameter Performance Analysis Based on Gravitation Search Algorithm

    Directory of Open Access Journals (Sweden)

    Jie-Sheng Wang

    2015-12-01

    Full Text Available The gravitational search algorithm (GSA) is a kind of swarm intelligence optimization algorithm based on the law of gravitation. The parameter initialization of any swarm intelligence optimization algorithm has an important influence on its global optimization ability. From the basic principle of GSA, the convergence rate of GSA is determined by the gravitational constant and the acceleration of the particles. The optimization performance on six typical test functions is verified by simulation experiments. The simulation results show that the convergence speed of the GSA algorithm is relatively sensitive to the setting of the algorithm parameters, and that the GSA parameters can be tuned flexibly to improve the algorithm’s convergence velocity and the accuracy of the solutions.
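
    As an illustration of how the gravitational constant and particle accelerations drive GSA, the following Python sketch implements a simplified gravitational search loop in which agent masses are derived from fitness and the gravitational constant decays exponentially over the iterations; g0, alpha and the test function are illustrative assumptions, not the settings examined in the paper.

      import math, random

      def gsa(cost, bounds, agents=20, iterations=100, g0=100.0, alpha=20.0):
          """Minimal gravitational search: masses from fitness, G decays exponentially over iterations."""
          dim = len(bounds)
          clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
          pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(agents)]
          vel = [[0.0] * dim for _ in range(agents)]
          best = min(pos, key=cost)
          for t in range(iterations):
              g = g0 * math.exp(-alpha * t / iterations)           # gravitational "constant" decays
              fit = [cost(p) for p in pos]
              worst, best_fit = max(fit), min(fit)
              raw = [(worst - f) / (worst - best_fit + 1e-12) for f in fit]
              mass = [m / (sum(raw) + 1e-12) for m in raw]          # normalized masses, better agents heavier
              for i in range(agents):
                  acc = [0.0] * dim
                  for k in range(agents):
                      if k == i:
                          continue
                      dist = math.dist(pos[i], pos[k]) + 1e-12
                      for j in range(dim):
                          acc[j] += random.random() * g * mass[k] * (pos[k][j] - pos[i][j]) / dist
                  vel[i] = [random.random() * vel[i][j] + acc[j] for j in range(dim)]
                  pos[i] = clip([pos[i][j] + vel[i][j] for j in range(dim)])
              cand = min(pos, key=cost)
              if cost(cand) < cost(best):
                  best = cand[:]
          return best

      print(gsa(lambda p: sum(v * v for v in p), [(-5.0, 5.0)] * 3))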

  11. A New Chaotic Parameters Disturbance Annealing Neural Network for Solving Global Optimization Problems

    Institute of Scientific and Technical Information of China (English)

    MA Wei; WANG Zheng-Ou

    2003-01-01

    Since there were few chaotic neural networks applicable to global optimization, in this paper we propose a new neural network model - the chaotic parameters disturbance annealing (CPDA) network, which is superior to other existing neural networks, genetic algorithms, and simulated annealing algorithms in global optimization. In the present CPDA network, we add some chaotic parameters to the energy function, which let the Hopfield neural network escape from the attraction of a local minimal solution; with the parameter p1 annealing, our model converges to the global optimal solutions quickly and steadily. The convergence ability and other characteristics are also analyzed in this paper. The benchmark examples show the present CPDA neural network's merits in nonlinear global optimization.

  12. Automated Large Scale Parameter Extraction of Road-Side Trees Sampled by a Laser Mobile Mapping System

    Science.gov (United States)

    Lindenbergh, R. C.; Berthold, D.; Sirmacek, B.; Herrero-Huerta, M.; Wang, J.; Ebersbach, D.

    2015-08-01

    In urbanized Western Europe trees are considered an important component of the built-up environment. This also means that there is an increasing demand for tree inventories. Laser mobile mapping systems provide an efficient and accurate way to sample the 3D road surrounding, including notable roadside trees. Indeed, at, say, 50 km/h such systems collect point clouds consisting of half a million points per 100 m. Methods exist that extract tree parameters from relatively small patches of such data, but a remaining challenge is to operationally extract roadside tree parameters at the regional level. For this purpose a workflow is presented as follows: the input point clouds are consecutively downsampled, retiled, classified, segmented into individual trees and upsampled to enable automated extraction of tree location, tree height, canopy diameter and trunk diameter at breast height (DBH). The workflow is implemented to work on a laser mobile mapping data set sampling 100 km of road in Sachsen, Germany, and is tested on a 7 km long stretch of road. Along this road, the method detected 315 trees that were considered well detected, plus 56 clusters of tree points where no individual trees could be identified. Using voxels, the data volume could be reduced by about 97 % in a default scenario. Processing the results of this scenario took ~2500 seconds, corresponding to about 10 km/h, which approaches but remains below the acquisition rate, estimated at 50 km/h.

  13. Automated detection of sleep apnea from electrocardiogram signals using nonlinear parameters

    International Nuclear Information System (INIS)

    Sleep apnoea is a very common sleep disorder which can cause symptoms such as daytime sleepiness, irritability and poor concentration. To monitor patients with this sleeping disorder we measured the electrical activity of the heart. The resulting electrocardiography (ECG) signals are both non-stationary and nonlinear. Therefore, we used nonlinear parameters such as approximate entropy, fractal dimension, correlation dimension, largest Lyapunov exponent and Hurst exponent to extract physiological information. This information was used to train an artificial neural network (ANN) classifier to categorize ECG signal segments into one of the following groups: apnoea, hypopnoea and normal breathing. ANN classification tests produced an average classification accuracy of 90%; specificity and sensitivity were 100% and 95%, respectively. We have also proposed unique recurrence plots for the normal, hypopnea and apnea classes. Detecting sleep apnea with this level of accuracy can potentially reduce the need of polysomnography (PSG). This brings advantages to patients, because the proposed system is less cumbersome when compared to PSG

  14. Automated optimization of measurement setups for the inspection of specular surfaces

    Science.gov (United States)

    Kammel, Soeren

    2002-02-01

    Specular surfaces are used in a wide variety of industrial and consumer products like varnished or chrome plated parts of car bodies, dies or molds. Defects of these parts reduce the quality regarding their visual appearance and/or their technical performance. Even defects that are only about 1 micrometer deep can lead to a rejection during quality control. Deflectometric techniques are an adequate approach to recognize and measure defects on specular surfaces, because the principle of measurement of these methods mimics the behavior of a human observer inspecting the surface. With these methods, the specular object is considered a part of the optical system. Not the object itself but the surrounding that is reflected by the specular surface is observed in order to obtain information about the object. This technique has proven sensitive for slope and topography measurement. As a consequence of this measurement principle, surface parts with high curvature in particular need a special illumination which surrounds the object under inspection to guarantee that light from any direction is reflected onto the sensor. Thus the design of a specific measurement setup requires a substantial engineering effort. To avoid the time consuming process of building, testing and redesigning the measurement setup, a system to simulate and automatically optimize the setup has been developed. Based on CAD data of the object under inspection and a model of the optical system, favorable realizations of the shape, the position and the pattern of the lighting device are determined. In addition, optimization of other system parameters, such as object position and distance relative to the camera, is performed. Finally, constraints are imposed to ascertain the feasibility of illumination system construction.

  15. Solar collector parameter identification from unsteady data by a discrete-gradient optimization algorithm

    Science.gov (United States)

    Hotchkiss, G. B.; Burmeister, L. C.; Bishop, K. A.

    1980-01-01

    A discrete-gradient optimization algorithm is used to identify the parameters in a one-node and a two-node capacitance model of a flat-plate collector. Collector parameters are first obtained by a linear-least-squares fit to steady state data. These parameters, together with the collector heat capacitances, are then determined from unsteady data by use of the discrete-gradient optimization algorithm with less than 10 percent deviation from the steady state determination. All data were obtained in the indoor solar simulator at the NASA Lewis Research Center.

  16. Optimization of parameters for the inline-injection system at Brookhaven Accelerator Test Facility

    Energy Technology Data Exchange (ETDEWEB)

    Parsa, Z. [Brookhaven National Lab., Upton, NY (United States); Ko, S.K. [Ulsan Univ. (Korea, Republic of)

    1995-10-01

    We present some of our parameter optimization results, utilizing the code PARMELA, for the ATF inline-injection system. The new solenoid-gun-solenoid-drift-linac scheme would improve the beam quality needed for FEL and other experiments at ATF as compared to the beam quality of the original design injection system. To optimize the gain in beam quality we have considered various parameters, including the accelerating field gradient on the photocathode, the solenoid field strengths, the separation between the gun and the entrance to the linac, as well as the type and size of the initial charge distributions. The effect of changes in the parameters on the beam emittance is also given.

  17. Metamodel based optimization of material parameters in a finite element simulation of tensile tests

    Science.gov (United States)

    Brown, Justin; McKay, Cavendish

    2010-04-01

    We determine the optimum set of parameters for simulating a tensile test of a sample of Zytel nylon resin in a finite element model. Using manufacturer supplied data and initial tensile measurements as starting data, we use a metamodel based optimization scheme to iteratively improve the choice of parameters. The commercial finite element solver LS-DYNA and optimization package LS-Opt are used to assess the quality of the material parameter choice. A map of the response surface is presented to illustrate some challenges with the metamodel based approach.

  18. Resonance parameters based analysis for metallic thickness optimization of a bimetallic plasmonic structure

    Science.gov (United States)

    Bera, Mahua; Banerjee, Jayeta; Ray, Mina

    2014-02-01

    Metallic film thickness optimization in mono- and bimetallic plasmonic structures has been carried out in order to determine the correct device parameters. Different resonance parameters, such as reflectivity, phase, field enhancement, and the complex amplitude reflectance Argand diagram (CARAD), have been investigated for the proposed optimization procedure. Comparison of mono- and bimetallic plasmonic structures has been carried out in the context of these resonance parameters with simultaneous angular and spectral interrogation. Differential phase analysis has also been performed and its application to sensing has been discussed along with a proposed interferometric set-up.

  19. A procedure for multi-objective optimization of tire design parameters

    Directory of Open Access Journals (Sweden)

    Nikola Korunović

    2015-04-01

    Full Text Available The identification of optimal tire design parameters for satisfying different requirements, i.e. tire performance characteristics, plays an essential role in tire design. In order to improve tire performance characteristics, a multi-objective optimization problem must be formulated and solved. This paper presents a multi-objective optimization procedure for determination of optimal tire design parameters for simultaneous minimization of strain energy density at two distinctive zones inside the tire. It consists of four main stages: pre-analysis, design of experiment, mathematical modeling and multi-objective optimization. The advantage of the proposed procedure is that the multi-objective optimization is based on the Pareto concept, which enables design engineers to obtain a complete set of optimization solutions and choose a suitable tire design. Furthermore, modeling of the relationships between tire design parameters and objective functions based on multiple regression analysis minimizes computational and modeling effort. The adequacy of the proposed tire design multi-objective optimization procedure has been validated by performing experimental trials based on the finite element method.

  20. Process Parameter Optimization of the Pulsed Current Argon Tungsten Arc Welding of Titanium Alloy

    Institute of Scientific and Technical Information of China (English)

    M.Balasubramanian; V.Jayabalan; V.Balasubramanian

    2008-01-01

    The selection of process parameters for obtaining optimal tensile properties in pulsed current gas tungsten arc welding is presented. The tensile properties include ultimate tensile strength, yield strength and notch tensile strength. All these characteristics are considered together in the selection of process parameters by a modified Taguchi method to analyse the effect of each welding process parameter on the tensile properties. Experimental results are furnished to illustrate the approach.

  1. Nonlinear Time Series Prediction Using LS-SVM with Chaotic Mutation Evolutionary Programming for Parameter Optimization

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Nonlinear time series prediction is studied by using an improved least squares support vector machine (LS-SVM) regression based on a chaotic mutation evolutionary programming (CMEP) approach for parameter optimization. We analyze how the prediction error varies with different parameters (σ, γ) in LS-SVM. In order to select appropriate parameters for the prediction model, we employ the CMEP algorithm. Finally, Nasdaq stock data are predicted by using this LS-SVM regression based on CMEP, and satisfactory results are obtained.

  2. A Multi-Criteria Framework with Voxel-Dependent Parameters for Radiotherapy Treatment Plan Optimization

    CERN Document Server

    Zarepisheh, Masoud; Li, Nan; Jia, Xun; Jiang, Steve B

    2012-01-01

    In a treatment plan optimization problem for radiotherapy, a clinically acceptable plan is usually generated by an optimization process with weighting factors or reference doses adjusted for organs. Recent discoveries indicate that adjusting parameters associated with each voxel may lead to better plan quality. However, the mathematical reasons behind this remain unclear. To answer questions related to this problem, we establish in this work a new mathematical framework equipped with two theorems. The new framework clarifies the different consequences of adjusting organ-dependent and voxel-dependent parameters for the treatment plan optimization of radiation therapy, as well as the different effects of adjusting weighting factors versus reference doses in the optimization process. The main discoveries are threefold: 1) While in the organ-based model the selection of the objective function has an impact on the quality of the optimized plans, this is no longer an issue for the voxel-based model sin...
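
    A minimal Python sketch of the kind of voxel-based objective discussed above is given here: a quadratic penalty with a weighting factor and a reference dose per voxel, so that adjusting either voxel-level parameter steers the optimizer locally. The functional form and the example numbers are illustrative assumptions; clinical systems typically use more elaborate one-sided and DVH-based terms.

      def voxel_weighted_objective(dose, reference, weights):
          """Quadratic penalty summed over voxels, with a weight and a reference dose per voxel."""
          return sum(w * (d - r) ** 2 for d, r, w in zip(dose, reference, weights))

      # three voxels: raising the weight (or lowering the reference) of voxel 3 steers the optimizer there
      print(voxel_weighted_objective(dose=[60.2, 59.8, 45.0], reference=[60.0, 60.0, 40.0], weights=[1.0, 1.0, 5.0]))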

  3. Multi-objective Optimization of Continuous Drive Friction Welding Process Parameters Using Response Surface Methodology with Intelligent Optimization Algorithm

    Institute of Scientific and Technical Information of China (English)

    P M AJITH; T MAFSAL HUSAIN; P SATHIYA; S ARAVINDAN

    2015-01-01

    The optimum friction welding (FW) parameters of a duplex stainless steel (DSS) UNS S32205 joint were determined. The experiment was carried out as a central composite array of 30 experiments. The selected input parameters were friction pressure (F), upset pressure (U), speed (S) and burn-off length (B), and the responses were hardness and ultimate tensile strength. To achieve the desired quality of the welded joint, the ultimate tensile strength and hardness were maximized, and response surface methodology (RSM) was applied to create separate regression equations for tensile strength and hardness. An intelligent optimization technique, the genetic algorithm, was used to predict the Pareto optimal solutions. Suitable welding parameters were then selected depending upon the application. It was inferred that the hardness and tensile strength of the friction welded joint were influenced by the upset pressure, friction pressure and speed of rotation.

  4. Optimization of Process Parameters in Turning of AISI 8620 Steel Using Taguchi and Grey Taguchi Analysis

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Sharma

    2014-03-01

    Full Text Available The aim of this research is to investigate the optimization of cutting parameters (cutting speed, feed rate and depth of cut) for surface roughness and metal removal rate in turning of AISI 8620 steel using a coated carbide insert. Experiments have been carried out based on a Taguchi L9 standard orthogonal array design with three process parameters, namely cutting speed, feed rate and depth of cut, for surface roughness and metal removal rate. The objective function has been chosen in relation to surface roughness and metal removal rate as the quality targets. The optimal parameter combination for the CNC turning operation was obtained via grey relational analysis. Analysis of variance is applied to identify the most significant factor. Experiments with the optimized parameter settings obtained from the analysis were conducted to validate the results.
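
    As a hedged illustration of the Taguchi analysis step, the following Python snippet computes the two signal-to-noise ratios relevant here, larger-the-better for metal removal rate and smaller-the-better for surface roughness; the replicate values are invented for the example and are not measurements from the study.

      import math

      def sn_larger_is_better(values):
          """Taguchi signal-to-noise ratio for responses to maximize (e.g. metal removal rate)."""
          return -10.0 * math.log10(sum(1.0 / (v * v) for v in values) / len(values))

      def sn_smaller_is_better(values):
          """Taguchi signal-to-noise ratio for responses to minimize (e.g. surface roughness)."""
          return -10.0 * math.log10(sum(v * v for v in values) / len(values))

      # replicate measurements from one hypothetical L9 run; higher S/N = better in both cases
      print(sn_smaller_is_better([1.62, 1.58, 1.66]))    # surface roughness
      print(sn_larger_is_better([118.0, 122.0, 120.0]))  # metal removal rate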

  5. Effect of experimental parameters on optimal reflection of light from opaque media

    CERN Document Server

    Anderson, Benjamin R; Eilers, Hergen

    2016-01-01

    Previously we considered the effect of experimental parameters on optimized transmission through opaque media using spatial light modulator (SLM)-based wavefront shaping. In this study we consider the opposite geometry, in which we optimize reflection from an opaque surface such that the backscattered light is focused onto a spot on an imaging detector. By systematically varying different experimental parameters (genetic algorithm iterations, bin size, SLM active area, target area, spot size, and sample angle with respect to the optical axis) and optimizing the reflected light we determine how each parameter affects the intensity enhancement. We find that the effects of the experimental parameters on the enhancement are similar to those measured for a transmissive geometry, but with the exact functional forms changed due to the different geometry and the use of a genetic algorithm instead of an iterative algorithm. Additionally, we find preliminary evidence of greater enhancements than predicted by random mat...

  6. Optimizing reliability, maintainability and testability parameters of equipment based on GSPN

    Institute of Scientific and Technical Information of China (English)

    Yongcheng Xu

    2015-01-01

    Reliability, maintainability and testability (RMT) are important properties of equipment, since they have an important influence on operational availability and life cycle costs (LCC). Therefore, weighting and optimizing the three properties are of great significance. A new approach for optimization of RMT parameters is proposed. First of all, the model for the equipment operation process is established based on generalized stochastic Petri nets (GSPN) theory. Then, by solving the GSPN model, the quantitative relationship between operational availability and the RMT parameters is obtained. Afterwards, taking history data of similar equipment and the operation process into consideration, a cost model of design, manufacture and maintenance is developed. Based on operational availability, the cost model and the parameter ranges, an optimization model of RMT parameters is built. Finally, the effectiveness and practicability of this approach are validated through an example.

  7. Guiding automated NMR structure determination using a global optimization metric, the NMR DP score

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Yuanpeng Janet, E-mail: yphuang@cabm.rutgers.edu; Mao, Binchen; Xu, Fei; Montelione, Gaetano T., E-mail: gtm@rutgers.edu [Rutgers, The State University of New Jersey, Department of Molecular Biology and Biochemistry, Center for Advanced Biotechnology and Medicine, and Northeast Structural Genomics Consortium (United States)

    2015-08-15

    ASDP is an automated NMR NOE assignment program. It uses a distinct bottom-up topology-constrained network anchoring approach for NOE interpretation, with 2D, 3D and/or 4D NOESY peak lists and resonance assignments as input, and generates unambiguous NOE constraints for iterative structure calculations. ASDP is designed to function interactively with various structure determination programs that use distance restraints to generate molecular models. In the CASD–NMR project, ASDP was tested and further developed using blinded NMR data, including resonance assignments, either raw or manually-curated (refined) NOESY peak list data, and in some cases 15N–1H residual dipolar coupling data. In these blinded tests, in which the reference structure was not available until after structures were generated, the fully-automated ASDP program performed very well on all targets using both the raw and refined NOESY peak list data. Improvements of ASDP relative to its predecessor program for automated NOESY peak assignments, AutoStructure, were driven by challenges provided by these CASD–NMR data. These algorithmic improvements include (1) using a global metric of structural accuracy, the discriminating power score, for guiding model selection during the iterative NOE interpretation process, and (2) identifying incorrect NOESY cross peak assignments caused by errors in the NMR resonance assignment list. These improvements provide a more robust automated NOESY analysis program, ASDP, with the unique capability of being utilized with alternative structure generation and refinement programs including CYANA, CNS, and/or Rosetta.

  8. Push-through direct injection NMR: an optimized automation method applied to metabolomics

    Science.gov (United States)

    There is a pressing need to increase the throughput of NMR analysis in fields such as metabolomics and drug discovery. Direct injection (DI) NMR automation is recognized to have the potential to meet this need due to its suitability for integration with the 96-well plate format. ...

  9. Global parameter optimization of Mather type plasma focus in the framework of the Gratton-Vargas two-dimensional snowplow model

    CERN Document Server

    Auluck, S K H

    2014-01-01

    Dense Plasma Focus (DPF) is known to produce highly energetic ions, electrons and plasma environment which can be used for breeding of short-lived isotopes, plasma nanotechnology and other material processing applications. Commercial utilization of DPF in such areas would need a design tool which can be deployed in an automatic search for the best possible device configuration for a given application. The recently revisited [S K H Auluck, Physics of Plasmas 20, 112501 (2013)] Gratton-Vargas (GV) two-dimensional analytical snowplow model of plasma focus provides a numerical formula for dynamic inductance of a Mather type plasma focus fitted to thousands of automated computations, which enables construction of such design tool. This inductance formula is utilized in the present work to explore global optimization, based on first-principles optimality criteria, in a 4-dimensional parameter-subspace of the zero-resistance GV model. The optimization process is shown to reproduce the empirically observed constancy ...

  10. Automated criterion-based analysis for Cole parameters assessment from cerebral neonatal electrical bioimpedance spectroscopy measurements

    International Nuclear Information System (INIS)

    Hypothermia has been proven as an effective rescue therapy for infants with moderate or severe neonatal hypoxic ischemic encephalopathy. Hypoxia-ischemia alters the electrical impedance characteristics of the brain in neonates; therefore, spectroscopic analysis of the cerebral bioimpedance of the neonate may be useful for the detection of candidate neonates eligible for hypothermia treatment. Currently, in addition to the lack of reference bioimpedance data obtained from healthy neonates, there is no standardized approach established for bioimpedance spectroscopy data analysis. In this work, cerebral bioimpedance measurements (12 h postpartum) in a cross-section of 84 term and near-term healthy neonates were performed at the bedside in the post-natal ward. To characterize the impedance spectra, Cole parameters (R0, R∞, fC and α) were extracted from the obtained measurements using an analysis process based on a best measurement and highest likelihood selection process. The results obtained in this study complement previously reported work and provide a standardized criterion-based method for data analysis. The availability of electrical bioimpedance spectroscopy reference data and the automatic criterion-based analysis method might support the development of a non-invasive method for prompt selection of neonates eligible for cerebral hypothermic rescue therapy. (paper)

  11. GENPLAT: an automated platform for biomass enzyme discovery and cocktail optimization.

    Science.gov (United States)

    Walton, Jonathan; Banerjee, Goutami; Car, Suzana

    2011-01-01

    The high cost of enzymes for biomass deconstruction is a major impediment to the economic conversion of lignocellulosic feedstocks to liquid transportation fuels such as ethanol. We have developed an integrated high throughput platform, called GENPLAT, for the discovery and development of novel enzymes and enzyme cocktails for the release of sugars from diverse pretreatment/biomass combinations. GENPLAT comprises four elements: individual pure enzymes, statistical design of experiments, robotic pipetting of biomass slurries and enzymes, and automated colorimetric determination of released Glc and Xyl. Individual enzymes are produced by expression in Pichia pastoris or Trichoderma reesei, or by chromatographic purification from commercial cocktails or from extracts of novel microorganisms. Simplex lattice (fractional factorial) mixture models are designed using commercial Design of Experiment statistical software. Enzyme mixtures of high complexity are constructed using robotic pipetting into a 96-well format. The measurement of released Glc and Xyl is automated using enzyme-linked colorimetric assays. Optimized enzyme mixtures containing as many as 16 components have been tested on a variety of feedstock and pretreatment combinations. GENPLAT is adaptable to mixtures of pure enzymes, mixtures of commercial products (e.g., Accellerase 1000 and Novozyme 188), extracts of novel microbes, or combinations thereof. To make and test mixtures of ~10 pure enzymes requires less than 100 μg of each protein and fewer than 100 total reactions, when operated at a final total loading of 15 mg protein/g glucan. We use enzymes from several sources. Enzymes can be purified from natural sources such as fungal cultures (e.g., Aspergillus niger, Cochliobolus carbonum, and Galerina marginata), or they can be made by expression of the encoding genes (obtained from the increasing number of microbial genome sequences) in hosts such as E. coli, Pichia pastoris, or a filamentous fungus such

  12. Automated Sperm Head Detection Using Intersecting Cortical Model Optimised by Particle Swarm Optimization

    Science.gov (United States)

    Tan, Weng Chun; Mat Isa, Nor Ashidi

    2016-01-01

    In human sperm motility analysis, sperm segmentation plays an important role to determine the location of multiple sperms. To ensure an improved segmentation result, the Laplacian of Gaussian filter is implemented as a kernel in a pre-processing step before applying the image segmentation process to automatically segment and detect human spermatozoa. This study proposes an intersecting cortical model (ICM), which was derived from several visual cortex models, to segment the sperm head region. However, the proposed method suffered from parameter selection; thus, the ICM network is optimised using particle swarm optimization where feature mutual information is introduced as the new fitness function. The final results showed that the proposed method is more accurate and robust than four state-of-the-art segmentation methods. The proposed method resulted in rates of 98.14%, 98.82%, 86.46% and 99.81% in accuracy, sensitivity, specificity and precision, respectively, after testing with 1200 sperms. The proposed algorithm is expected to be implemented in analysing sperm motility because of the robustness and capability of this algorithm. PMID:27632581

  13. Automated Sperm Head Detection Using Intersecting Cortical Model Optimised by Particle Swarm Optimization.

    Science.gov (United States)

    Tan, Weng Chun; Mat Isa, Nor Ashidi

    2016-01-01

    In human sperm motility analysis, sperm segmentation plays an important role to determine the location of multiple sperms. To ensure an improved segmentation result, the Laplacian of Gaussian filter is implemented as a kernel in a pre-processing step before applying the image segmentation process to automatically segment and detect human spermatozoa. This study proposes an intersecting cortical model (ICM), which was derived from several visual cortex models, to segment the sperm head region. However, the proposed method suffered from parameter selection; thus, the ICM network is optimised using particle swarm optimization where feature mutual information is introduced as the new fitness function. The final results showed that the proposed method is more accurate and robust than four state-of-the-art segmentation methods. The proposed method resulted in rates of 98.14%, 98.82%, 86.46% and 99.81% in accuracy, sensitivity, specificity and precision, respectively, after testing with 1200 sperms. The proposed algorithm is expected to be implemented in analysing sperm motility because of the robustness and capability of this algorithm. PMID:27632581

  14. Assessing FPAR Source and Parameter Optimization Scheme in Application of a Diagnostic Carbon Flux Model

    Energy Technology Data Exchange (ETDEWEB)

    Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A

    2009-02-26

    The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single site vs. cross-site parameter optimization were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.

  15. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal in a Fisherian sense, is given. The solution is investigated by a simulation study. It is shown that if the experimental length T1 is fixed it may be useful to sample the record at a high sampling rate, since more measurements from the system are then collected; no optimal sampling interval exists in that case. But if the total number of sample points N is fixed an optimal sampling interval exists. Then it is far worse to use a too large sampling interval than a too small one, since the information losses increase rapidly when the sampling interval increases from the optimal value.

  16. Analysis of Process Parameters for Optimization of Plastic Extrusion in Pipe Manufacturing

    Directory of Open Access Journals (Sweden)

    Mr. Sandip S. Gadekar

    2015-05-01

    Full Text Available The objective of this paper is to study the defects in plastic pipe and to optimize the plastic pipe manufacturing process. It is essential to understand the process parameters and the defects in the plastic pipe manufacturing process in order to optimize it. Taguchi techniques are used for the optimization in this paper. Shivraj HY-Tech Drip Irrigation Pipe Manufacturing Company was selected for the research work. This paper is specifically designed for the optimization of the current process. The experiment was analyzed using the commercial Minitab 16 software, interpretations were made, and optimized factor settings were chosen. After prediction of the results, the quality loss is calculated and compared with that before implementation of DOE. The research work has improved production and quality and optimized the process.

  17. Multi-response optimization of Micro-EDM process parameters on AISI304 steel using TOPSIS

    Energy Technology Data Exchange (ETDEWEB)

    Manivannan, R.; Kumar, M. Pradeep [CEG, Anna University, Chennai (India)

    2016-01-15

    The technique for order preference by similarity to ideal solution (TOPSIS) is used to analyze the process parameters of micro-electrical discharge machining (micro-EDM) of AISI 304 steel with multi-performance characteristics. A Taguchi L27 experimental design is performed to obtain the optimal settings of the input parameters, including feed rate, current, pulse on time, and gap voltage. Several output responses, such as the material removal rate, electrode wear rate, overcut, taper angle, and circularity at the entry and exit points, are analyzed to determine the optimal conditions. Among all the investigated parameters, feed rate exerts the greatest influence on hole quality. ANOVA is employed to identify the contribution of each factor. The optimal parameter setting is a feed rate of 4 μm/s, a current of 10 A, a pulse on time of 10 μs, and a gap voltage of 10 V. Scanning electron microscope analysis is conducted to examine the hole quality. The experimental results indicate that the overall performance of the micro-EDM process is improved at the optimal parameter setting identified through TOPSIS.
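
    TOPSIS ranks alternatives by their closeness to an ideal solution after vector-normalizing and weighting the decision matrix. The sketch below illustrates that ranking step on a made-up decision matrix; the response values, weights, and benefit/cost flags are assumptions for illustration, not the paper's data.

```python
import numpy as np

# Rows = experimental runs, columns = responses (e.g., MRR, electrode wear, overcut).
decision = np.array([
    [0.42, 0.011, 0.08],
    [0.55, 0.015, 0.06],
    [0.38, 0.009, 0.09],
    [0.61, 0.018, 0.05],
])
weights = np.array([0.4, 0.3, 0.3])        # assumed importance of each response
benefit = np.array([True, False, False])   # True: larger is better, False: smaller is better

# Vector normalization and weighting.
norm = decision / np.sqrt((decision ** 2).sum(axis=0))
weighted = norm * weights

# Ideal and anti-ideal solutions.
ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))

# Closeness coefficient: higher is better.
d_pos = np.linalg.norm(weighted - ideal, axis=1)
d_neg = np.linalg.norm(weighted - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
print(np.argsort(-closeness))  # runs ranked from best to worst
```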

  18. Parametric optimal bounded feedback control for smart parameter-controllable composite structures

    Science.gov (United States)

    Ying, Z. G.; Ni, Y. Q.; Duan, Y. F.

    2015-03-01

    Deterministic and stochastic parametric optimal bounded control problems are presented for smart composite structures such as magneto-rheological visco-elastomer based sandwich beam with controllable bounded parameters subjected to initial disturbances and stochastic excitations. The parametric controls by actively adjusting system parameters differ from the conventional additive controls by systemic external inputs. The dynamical programming equations for the optimal parametric controls are derived based on the deterministic and stochastic dynamical programming principles. The optimal bounded functions of controls are firstly obtained from the equations with the bounded control constraints based on the bang-bang control strategy. Then the optimal bounded parametric control laws are obtained by the inversion of the nonlinear functions. The stability of the optimally controlled systems is proved according to the Lyapunov method. Finally, the proposed optimal bounded parametric feedback control strategy is applied to single-degree-of-freedom and two-degree-of-freedom dynamic systems with nonlinear parametric bounded control terms under initial disturbances and earthquake excitations and then to a magneto-rheological visco-elastomer based sandwich beam system with nonlinear parametric bounded control terms under stochastic excitations. The effective vibration suppression is illustrated with numerical results. The proposed optimal parametric control strategy is applicable to other smart composite structures with nonlinear controllable parameters.

  19. Multi-parameter Optimization of a Thermoelectric Power Generator and Its Working Conditions

    Science.gov (United States)

    Zhang, T.

    2016-09-01

    The global optimal working conditions and optimal couple design for thermoelectric (TE) generators with realistic thermal coupling between the heat reservoirs and the TE couple were studied in the current work. The heat fluxes enforced by the heat reservoirs at the hot and the cold junctions of the TE couple were used in combination with parameter normalization to obtain a single cubic algebraic equation relating the temperature differences between the TE couple junctions and between the heat reservoirs, through the electric load resistance ratio, the reservoir thermal conductance ratio, the reservoir thermal conductance to the TE couple thermal conductance ratio, the Thomson to Seebeck coefficient ratio, and the figure of merit (Z) of the material, based on the linear TE transport equations and their solutions. A broad reservoir thermal conductance ranging between 0.01 W/K and 100 W/K and TE element length ranging from 10^-7 m to 10^-3 m were explored to find the global optimal systems. The global optimal parameters related to the working conditions, i.e., reservoir thermal conductance ratio and electric load resistance ratio, and the optimal design parameter related to the TE couple were determined for a given TE material. These results demonstrated that the internal and external electric resistance, the thermal resistance between the reservoirs, the thermal resistance between the reservoir and the TE couple, and the optimal thermoelement length have to be well coordinated to obtain optimal power production.

  20. Optimization of hydrological parameters of a distributed runoff model based on multiple flood events

    Science.gov (United States)

    Miyamoto, Mamoru; Matsumoto, Kazuhiro; Tsuda, Morimasa; Yamakage, Yuzuru; Iwami, Yoichi; Anai, Hirokazu

    2015-04-01

    The error sources of flood forecasting by a runoff model commonly include input data, model structures, and parameter settings. This study focused on a calibration procedure to minimize errors due to parameter settings. Although many studies have been done on hydrological parameter optimization, they are mostly about individual optimization cases applying a specific optimization technique to a specific flood. Consequently, it is difficult to determine the most appropriate parameter set to make forecasts on future floods, because optimized parameter sets vary by flood type. Thus, this study aimed to develop a comprehensive method for optimizing hydrological parameters of a distributed runoff model for future flood forecasting. A distributed runoff model, PWRI-DHM, was applied to the Gokase River basin of 1,820km2 in Japan in this study. The model with gridded two-layer tanks for the entire target river basin includes hydrological parameters, such as hydraulic conductivity, surface roughness and runoff coefficient, which are set according to land-use and soil-type distributions. Global data sets, e.g., Global Map and DSMW (Digital Soil Map of the World), were employed as input data such as elevation, land use and soil type. Thirteen optimization algorithms such as GA, PSO and DEA were carefully selected from seventy-four open-source algorithms available for public use. These algorithms were used with three error assessment functions to calibrate the parameters of the model to each of fifteen past floods in the predetermined search range. Fifteen optimized parameter sets corresponding to the fifteen past floods were determined by selecting the best sets from the calibration results in terms of reproducible accuracy. This process helped eliminate bias due to type of optimization algorithms. Although the calibration results of each parameter were widely distributed in the search range, statistical significance was found in comparisons between the optimized parameters
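
    The core of the procedure is to calibrate a parameter set against each historical flood and then evaluate every candidate set across all floods to find the one that generalizes best. A schematic version of that cross-evaluation is sketched below; the stand-in runoff model, error metric, and synthetic events are placeholders, not PWRI-DHM or the Gokase River data.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(params, rainfall):
    # Placeholder for a distributed runoff model run (PWRI-DHM is far more complex).
    runoff_coeff, lag = params
    kernel = np.ones(int(lag)) / int(lag)
    return runoff_coeff * np.convolve(rainfall, kernel, mode="same")

def rmse(params, rainfall, observed):
    return np.sqrt(np.mean((simulate(params, rainfall) - observed) ** 2))

# Synthetic stand-ins for 15 flood events and the parameter set calibrated to each one.
flood_events = []
candidate_sets = []
for _ in range(15):
    rain = rng.gamma(2.0, 5.0, 120)
    event_params = (rng.uniform(0.4, 0.8), rng.integers(2, 6))
    flood_events.append((rain, simulate(event_params, rain) + rng.normal(0, 1.0, 120)))
    candidate_sets.append(event_params)  # pretend each was optimized on its own event

# Cross-evaluate every candidate set on every event and keep the most robust one.
scores = np.array([[rmse(p, r, q) for (r, q) in flood_events] for p in candidate_sets])
best = candidate_sets[int(np.argmin(scores.mean(axis=1)))]
print(best)
```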

  1. Microprocessor-based integration of microfluidic control for the implementation of automated sensor monitoring and multithreaded optimization algorithms.

    Science.gov (United States)

    Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov

    2015-08-01

    Microfluidic applications range from combinatorial synthesis to high throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, utilizing the MS Gadgeteer open framework that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibrating of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that require different computational challenges including: prioritized data structures in a genetic algorithm, distributed computational efforts in multiple-hill climbing searches and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits. PMID:26227212

  2. Microprocessor-based integration of microfluidic control for the implementation of automated sensor monitoring and multithreaded optimization algorithms.

    Science.gov (United States)

    Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov

    2015-08-01

    Microfluidic applications range from combinatorial synthesis to high throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, utilizing the MS Gadgeteer open framework that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibrating of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that require different computational challenges including: prioritized data structures in a genetic algorithm, distributed computational efforts in multiple-hill climbing searches and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits.
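
    Of the optimization protocols mentioned, simulated annealing is the simplest to sketch: a candidate is perturbed, and worse candidates are accepted with a probability that decays with a temperature schedule. The snippet below is a generic illustration with an arbitrary objective and cooling schedule, not the authors' firmware implementation.

```python
import math
import random

def objective(x):
    # Arbitrary 1-D test function standing in for, e.g., a measured sensor response.
    return (x - 1.3) ** 2 + 0.1 * math.sin(20 * x)

def simulated_annealing(x0, n_iter=5000, temp0=1.0, cooling=0.999):
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    temp = temp0
    for _ in range(n_iter):
        candidate = x + random.gauss(0, 0.1)
        fc = objective(candidate)
        # Accept improvements always, and worse moves with Boltzmann probability.
        if fc < fx or random.random() < math.exp(-(fc - fx) / temp):
            x, fx = candidate, fc
            if fx < best_f:
                best_x, best_f = x, fx
        temp *= cooling
    return best_x, best_f

print(simulated_annealing(0.0))
```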

  3. Optimal parameters for the Green-Ampt infiltration model under rainfall conditions

    Directory of Open Access Journals (Sweden)

    Chen Li

    2015-06-01

    Full Text Available The Green-Ampt (GA) model is widely used in hydrologic studies as a simple, physically-based method to estimate infiltration processes. The accuracy of the model for applications under rainfall conditions (as opposed to initially ponded situations) has not been studied extensively. We compared calculated rainfall infiltration results for various soils obtained using existing GA parameterizations with those obtained by solving the Richards equation for variably saturated flow. Results provided an overview of GA model performance, evaluated by means of a root-mean-square-error-based objective function across a large region in GA parameter space as compared to the Richards equation, which showed a need for seeking optimal GA parameters. Subsequent analysis enabled the identification of optimal GA parameters that provided a close fit with the Richards equation. The optimal parameters were found to substantially outperform the standard theoretical parameters, thus improving the utility and accuracy of the GA model for infiltration simulations under rainfall conditions. A sensitivity analysis indicated that the optimal parameters may change for some rainfall scenarios, but are relatively stable for high-intensity rainfall events.
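
    For reference, the Green-Ampt infiltration capacity is f = Ks(1 + ψΔθ/F), where F is cumulative infiltration. The sketch below simply time-steps that relation under a constant rainfall rate; the soil parameter values are illustrative, and the paper's optimal parameters would replace the standard ones in exactly this role.

```python
import numpy as np

def green_ampt_rainfall(Ks, psi, d_theta, rain_rate, dt=0.001, t_end=2.0):
    """Cumulative infiltration under constant rainfall (simple explicit stepping)."""
    F, t, series = 1e-6, 0.0, []
    while t < t_end:
        capacity = Ks * (1.0 + psi * d_theta / F)   # infiltration capacity f(F)
        rate = min(rain_rate, capacity)             # supply-limited before ponding
        F += rate * dt
        t += dt
        series.append((t, F))
    return np.array(series)

# Illustrative parameter values (cm/h and cm), not fitted to any specific soil.
result = green_ampt_rainfall(Ks=1.0, psi=11.0, d_theta=0.3, rain_rate=5.0)
print(result[-1])   # time and cumulative infiltration at the end of the event
```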

  4. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    Science.gov (United States)

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images, and the accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, a method for determining optimal segmentation parameters for object-oriented image segmentation and high-resolution image information extraction was developed in this study through the following steps. First, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were then segmented at multiple scales with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert judgment with reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.

  5. Application of an Evolutionary Algorithm for Parameter Optimization in a Gully Erosion Model

    Energy Technology Data Exchange (ETDEWEB)

    Rengers, Francis; Lunacek, Monte; Tucker, Gregory

    2016-06-01

    Herein we demonstrate how to use model optimization to determine a set of best-fit parameters for a landform model simulating gully incision and headcut retreat. To achieve this result we employed the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an iterative process in which samples are created based on a distribution of parameter values that evolve over time to better fit an objective function. CMA-ES efficiently finds optimal parameters, even with high-dimensional objective functions that are non-convex, multimodal, and non-separable. We ran model instances in parallel on a high-performance cluster, and from hundreds of model runs we obtained the best parameter choices. This method is far superior to brute-force search algorithms, and has great potential for many applications in earth science modeling. We found that parameters representing boundary conditions tended to converge toward an optimal single value, whereas parameters controlling geomorphic processes are defined by a range of optimal values.
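
    In practice this kind of calibration can be driven by an off-the-shelf CMA-ES implementation; the sketch below assumes the third-party `cma` package is available and uses a cheap synthetic objective in place of the landform model, whose runs would normally be expensive and dispatched in parallel. The parameter names are illustrative.

```python
import cma  # third-party package providing CMA-ES (assumed installed)

def misfit(params):
    # Stand-in for an expensive landform-model run compared against observed topography.
    erodibility, threshold, diffusivity = params
    return (erodibility - 0.8) ** 2 + (threshold - 0.2) ** 2 + 5 * (diffusivity - 0.05) ** 2

x0 = [0.5, 0.5, 0.5]   # initial guess for the three parameters
sigma0 = 0.3           # initial step size

res = cma.fmin(misfit, x0, sigma0, options={"maxfevals": 2000, "verbose": -9})
print(res[0])          # best-fit parameter vector found
```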

  6. Thermoeconomic optimization of triple pressure heat recovery steam generator operating parameters for combined cycle plants

    Directory of Open Access Journals (Sweden)

    Mohammd Mohammed S.

    2015-01-01

    Full Text Available The aim of this work is to develop a method for optimization of the operating parameters of a triple pressure heat recovery steam generator. Two types of optimization, (a) thermodynamic and (b) thermoeconomic, were performed. The purpose of the thermodynamic optimization is to maximize the efficiency of the plant. The selected objective for this purpose is minimization of the exergy destruction in the heat recovery steam generator (HRSG). The purpose of the thermoeconomic optimization is to decrease the production cost of electricity. Here, the total annual cost of the HRSG, defined as the sum of the annual values of the capital costs and the cost of exergy destruction, is selected as the objective function. The optimal values of the most influential variables are obtained by minimizing the objective function while satisfying a group of constraints. The optimization algorithm is developed and tested on a case of a CCGT plant with a complex configuration. Six operating parameters were the subject of optimization: the pressures and pinch point temperatures of each of the three (high, intermediate and low pressure) steam streams in the HRSG. The influence of these variables on the objective function and production cost is investigated in detail. The differences between the results of the thermodynamic and the thermoeconomic optimization are discussed.

  7. Automated optimal glycaemic control using a physiology based pharmacokinetic, pharmacodynamic model

    OpenAIRE

    Schaller, Stephan

    2015-01-01

    After decades of research, Automated Glucose Control (AGC) is still out of reach for everyday control of blood glucose. The inter- and intra-individual variability of glucose dynamics largely arising from variability in insulin absorption, distribution, and action, and related physiological lag-times remain a core problem in the development of suitable control algorithms. Over the years, model predictive control (MPC) has established itself as the gold standard in AGC systems in research. Mod...

  8. D-optimal Bayesian Interrogation for Parameter and Noise Identification of Recurrent Neural Networks

    CERN Document Server

    Poczos, Barnabas

    2008-01-01

    We introduce a novel online Bayesian method for the identification of a family of noisy recurrent neural networks (RNNs). We develop a Bayesian active learning technique in order to optimize the interrogating stimuli given past experiences. In particular, we consider the unknown parameters as stochastic variables and use the D-optimality principle, also known as the 'infomax method', to choose optimal stimuli. We apply a greedy technique to maximize the information gain concerning network parameters at each time step. We also derive the D-optimal estimation of the additive noise that perturbs the dynamical system of the RNN. Our analytical results are approximation-free. The analytic derivation gives rise to attractive quadratic update rules.

  9. Parameter estimation of fractional-order chaotic systems by using quantum parallel particle swarm optimization algorithm.

    Directory of Open Access Journals (Sweden)

    Yu Huang

    Full Text Available Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and can be formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation problem for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO; this characteristic increases the number of candidate evaluations per generation exponentially. The behavior of particles in quantum space is governed by the quantum evolution equation, which consists of the current rotation angle, the individual optimal quantum rotation angle, and the global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.
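
    Parameter estimation with PSO treats the unknown system parameters as particle positions and the prediction error against the measured trajectory as fitness. The minimal, plain (non-quantum) PSO below estimates the growth rate of a logistic map from a simulated trajectory; it illustrates the general idea only, not the quantum-parallel variant proposed in the paper, and the map, ranges and coefficients are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def trajectory(r, x0=0.2, n=12):
    # Short trajectory of the logistic map; short horizons keep the error surface tractable.
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

observed = trajectory(3.7)          # "measured" data generated with known r = 3.7

def fitness(r):
    return np.sum((trajectory(r) - observed) ** 2)

# Plain global-best PSO in one dimension.
n_particles, n_iter = 20, 100
pos = rng.uniform(2.5, 4.0, n_particles)
vel = np.zeros(n_particles)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(n_iter):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 2.5, 4.0)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print(gbest)   # estimated growth rate; typically lands close to the true value 3.7
```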

  10. Optimization of design parameters for bulk micromachined silicon membranes for piezoresistive pressure sensing application

    Science.gov (United States)

    Belwanshi, Vinod; Topkar, Anita

    2016-05-01

    Finite element analysis study has been carried out to optimize the design parameters for bulk micro-machined silicon membranes for piezoresistive pressure sensing applications. The design is targeted for measurement of pressure up to 200 bar for nuclear reactor applications. The mechanical behavior of bulk micro-machined silicon membranes in terms of deflection and stress generation has been simulated. Based on the simulation results, optimization of the membrane design parameters in terms of length, width and thickness has been carried out. Subsequent to optimization of membrane geometrical parameters, the dimensions and location of the high stress concentration region for implantation of piezoresistors have been obtained for sensing of pressure using piezoresistive sensing technique.

  11. Optimization of TRPO Process Parameters for Americium Extraction from High Level Waste

    Institute of Scientific and Technical Information of China (English)

    CHEN Jing; WANG Jianchen; SONG Chongli

    2001-01-01

    The numerical calculations for Am multistage fractional extraction by trialkyl phosphine oxide (TRPO) were verified by a hot test. 1750 L/t-U high level waste (HLW) was used as the feed to the TRPO process. The analysis used the simple objective function to minimize the total waste content in the TRPO process streams. Some process parameters were optimized after other parameters were selected. The optimal process parameters for Am extraction by TRPO are: 10 stages for extraction and 2 stages for scrubbing; a flow rate ratio of 0.931 for extraction and 4.42 for scrubbing; nitric acid concentration of 1.35 mol/L for the feed and 0.5 mol/L for the scrubbing solution. Finally, the nitric acid and Am concentration profiles in the optimal TRPO extraction process are given.

  12. The Parkinsonian Gait Spatiotemporal Parameters Quantified by a Single Inertial Sensor before and after Automated Mechanical Peripheral Stimulation Treatment

    Directory of Open Access Journals (Sweden)

    Ana Kleiner

    2015-01-01

    Full Text Available This study aims to evaluate the change in gait spatiotemporal parameters in subjects with Parkinson’s disease (PD) before and after Automated Mechanical Peripheral Stimulation (AMPS) treatment. Thirty-five subjects with PD and 35 healthy age-matched subjects took part in this study. A dedicated medical device (Gondola) was used to administer the AMPS. All patients with PD were treated in the off-levodopa phase and their gait performance was evaluated by an inertial measurement system before and after the intervention. One-way ANOVA for repeated measures was performed to assess the differences between pre- and post-AMPS and one-way ANOVA to assess the differences between PD patients and the control group. Spearman’s correlations assessed the associations between the patients' PD clinical status (H&Y) and the percentage of improvement of the gait variables after AMPS (α<0.05 for all tests). The PD group had an improvement of 14.85% in the stride length, 14.77% in the gait velocity, and 29.91% in the gait propulsion. The correlation results showed that the higher the H&Y classification, the higher the stride length percentage of improvement. The treatment based on AMPS intervention seems to induce a better performance in the gait pattern of PD patients, mainly in intermediate and advanced stages of the condition.

  13. EXERGOECONOMIC OPTIMIZATION OF GAS TURBINE POWER PLANTS OPERATING PARAMETERS USING GENETIC ALGORITHMS: A CASE STUDY

    OpenAIRE

    Mofid Gorji-Bandpy; Hamed Goodarzian

    2011-01-01

    Exergoeconomic analysis helps designers to find ways to improve the performance of a system in a cost effective way. This can play a vital role in the analysis, design and optimization of thermal systems. Thermoeconomic optimization is a powerful and effective tool in finding the best solutions between the two competing objectives, minimizing economic costs and maximizing exergetic efficiency. In this paper, operating parameters of a gas turbine power plant that produce 140MW of electricity w...

  14. Bio-inspired optimization algorithms for optical parameter extraction of dielectric materials: A comparative study

    Science.gov (United States)

    Ghulam Saber, Md; Arif Shahriar, Kh; Ahmed, Ashik; Hasan Sagor, Rakibul

    2016-10-01

    Particle swarm optimization (PSO) and invasive weed optimization (IWO) algorithms are used for extracting the modeling parameters of materials useful for optics and photonics research community. These two bio-inspired algorithms are used here for the first time in this particular field to the best of our knowledge. The algorithms are used for modeling graphene oxide and the performances of the two are compared. Two objective functions are used for different boundary values. Root mean square (RMS) deviation is determined and compared.

  15. OPTIMIZATION OF CUTTING PARAMETERS ON THE BASIS OF SEMANTIC NETWORK USAGE

    Directory of Open Access Journals (Sweden)

    V. M. Pashkevich

    2011-01-01

    Full Text Available The paper considers problems of ensuring machining accuracy when cutting machine components with edge tools. An approach based on artificial intelligence technologies, in particular functional semantic networks, is proposed. The paper analyzes the possibility of applying functional semantic networks to the optimization of cutting parameters. An intelligent system intended for the solution of applied problems is described; its structure is presented, and an example of setting the optimal cutting speed is given.

  16. Parameters identification of unknown delayed genetic regulatory networks by a switching particle swarm optimization algorithm

    OpenAIRE

    Tang, Y.; Wang, Z; J. Fang

    2011-01-01

    This paper presents a novel particle swarm optimization (PSO) algorithm based on Markov chains and a competitive penalized method. Such an algorithm is developed to solve global optimization problems with applications in identifying unknown parameters of a class of genetic regulatory networks (GRNs). By using an evolutionary factor, a new switching PSO (SPSO) algorithm is first proposed and analyzed, where the velocity updating e...

  17. PI Stabilization for Congestion Control of AQM Routers with Tuning Parameter Optimization

    Directory of Open Access Journals (Sweden)

    S. Chebli

    2016-09-01

    Full Text Available In this paper, we consider the problem of stabilizing a network using a new proportional-integral (PI) based congestion controller in an active queue management (AQM) router. With an appropriate model approximation as a first-order delay system, we seek a stability region of the controller by using the Hermite-Biehler theorem, which is applicable to quasi-polynomials. A genetic algorithm technique is employed to derive optimal or near-optimal PI controller parameters.

  18. Determination of the Johnson-Cook Constitutive Model Parameters of Materials by Cluster Global Optimization Algorithm

    Science.gov (United States)

    Huang, Zhipeng; Gao, Lihong; Wang, Yangwei; Wang, Fuchi

    2016-06-01

    The Johnson-Cook (J-C) constitutive model is widely used in finite element simulation, as this model shows the relationship between stress and strain in a simple way. In this paper, a cluster global optimization algorithm is proposed to determine the J-C constitutive model parameters of materials. A set of assumed parameters is used for the accuracy verification of the procedure. The parameters of two materials (401 steel and 823 steel) are determined. Results show that the procedure is reliable and effective. The relative error between the optimized and assumed parameters is no more than 4.02%, and the relative error between the optimized and assumed stress is 0.2 × 10^-5%. The J-C constitutive parameters can be determined more precisely and quickly than with the traditional manual procedure. Furthermore, all the parameters can be simultaneously determined using several curves under different experimental conditions. A strategy is also proposed to accurately determine the constitutive parameters.
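
    The J-C flow stress has the form σ = (A + B·ε^n)(1 + C·ln(ε̇*))(1 − T*^m); given measured stress-strain curves, the constants are found by minimizing the misfit, which is the role of the global optimizer in the paper. The sketch below fits only the quasi-static strain-hardening terms A, B and n with a standard local least-squares routine on synthetic data, as a simplified illustration rather than the authors' procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def jc_quasistatic(strain, A, B, n):
    # Johnson-Cook strain-hardening term at the reference strain rate and temperature.
    return A + B * strain ** n

# Synthetic "experimental" curve generated with known constants plus noise.
strain = np.linspace(0.01, 0.5, 50)
rng = np.random.default_rng(0)
stress = jc_quasistatic(strain, 800.0, 500.0, 0.3) + rng.normal(0, 5.0, strain.size)

popt, _ = curve_fit(jc_quasistatic, strain, stress, p0=[500.0, 300.0, 0.5])
print(popt)   # recovered estimates of A, B (MPa) and n (dimensionless)
```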

  19. Determination of the Johnson-Cook Constitutive Model Parameters of Materials by Cluster Global Optimization Algorithm

    Science.gov (United States)

    Huang, Zhipeng; Gao, Lihong; Wang, Yangwei; Wang, Fuchi

    2016-09-01

    The Johnson-Cook (J-C) constitutive model is widely used in finite element simulation, as this model shows the relationship between stress and strain in a simple way. In this paper, a cluster global optimization algorithm is proposed to determine the J-C constitutive model parameters of materials. A set of assumed parameters is used for the accuracy verification of the procedure. The parameters of two materials (401 steel and 823 steel) are determined. Results show that the procedure is reliable and effective. The relative error between the optimized and assumed parameters is no more than 4.02%, and the relative error between the optimized and assumed stress is 0.2 × 10^-5%. The J-C constitutive parameters can be determined more precisely and quickly than with the traditional manual procedure. Furthermore, all the parameters can be simultaneously determined using several curves under different experimental conditions. A strategy is also proposed to accurately determine the constitutive parameters.

  20. Optimal Step-wise Parameter Optimization of a FOREX Trading Strategy

    OpenAIRE

    Alberto De Santis; Umberto Dellepiane; Stefano Lucidi; Stefania Renzi

    2014-01-01

    The goal of trading simply consists in gaining profit by buying/selling a security: the difference between the entry and the exit price in a position determines the profit or loss of that trade. A trading strategy is used to identify proper conditions to trade a security. The role of optimization consists in finding the best conditions to start a trading maximizing the profit. In this general scenario, the strategy is trained on a chosen batch of data (training set) and applied on the next ba...

  1. A multicriteria framework with voxel-dependent parameters for radiotherapy treatment plan optimization

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, Masoud; Uribe-Sanchez, Andres F.; Li, Nan; Jia, Xun; Jiang, Steve B., E-mail: Steve.Jiang@UTSouthwestern.edu [Center for Advanced Radiotherapy Technologies and Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92037-0843 (United States)

    2014-04-15

    Purpose: To establish a new mathematical framework for radiotherapy treatment optimization with voxel-dependent optimization parameters. Methods: In the treatment plan optimization problem for radiotherapy, a clinically acceptable plan is usually generated by an optimization process with weighting factors or reference doses adjusted for a set of the objective functions associated to the organs. Recent discoveries indicate that adjusting parameters associated with each voxel may lead to better plan quality. However, it is still unclear regarding the mathematical reasons behind it. Furthermore, questions about the objective function selection and parameter adjustment to assure Pareto optimality as well as the relationship between the optimal solutions obtained from the organ-based and voxel-based models remain unanswered. To answer these questions, the authors establish in this work a new mathematical framework equipped with two theorems. Results: The new framework clarifies the different consequences of adjusting organ-dependent and voxel-dependent parameters for the treatment plan optimization of radiation therapy, as well as the impact of using different objective functions on plan qualities and Pareto surfaces. The main discoveries are threefold: (1) While in the organ-based model the selection of the objective function has an impact on the quality of the optimized plans, this is no longer an issue for the voxel-based model since the Pareto surface is independent of the objective function selection and the entire Pareto surface could be generated as long as the objective function satisfies certain mathematical conditions; (2) All Pareto solutions generated by the organ-based model with different objective functions are parts of a unique Pareto surface generated by the voxel-based model with any appropriate objective function; (3) A much larger Pareto surface is explored by adjusting voxel-dependent parameters than by adjusting organ-dependent parameters, possibly

  2. Parameter Optimization of Single-Diode Model of Photovoltaic Cell Using Memetic Algorithm

    Directory of Open Access Journals (Sweden)

    Yourim Yoon

    2015-01-01

    Full Text Available This study proposes a memetic approach for optimally determining the parameter values of single-diode-equivalent solar cell model. The memetic algorithm, which combines metaheuristic and gradient-based techniques, has the merit of good performance in both global and local searches. First, 10 single algorithms were considered including genetic algorithm, simulated annealing, particle swarm optimization, harmony search, differential evolution, cuckoo search, least squares method, and pattern search; then their final solutions were used as initial vectors for generalized reduced gradient technique. From this memetic approach, we could further improve the accuracy of the estimated solar cell parameters when compared with single algorithm approaches.
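
    The single-diode model ties the five parameters, commonly denoted Iph, I0, n, Rs and Rsh, together through the implicit equation I = Iph − I0·(exp((V + I·Rs)/(n·Vt)) − 1) − (V + I·Rs)/Rsh. The sketch below evaluates that equation by a simple fixed-point iteration for assumed parameter values; a metaheuristic plus gradient refinement, as in the paper, would wrap an error function around exactly this kind of evaluation. The numbers are illustrative, not extracted module parameters.

```python
import numpy as np

def single_diode_current(V, Iph, I0, n, Rs, Rsh, Vt=0.0257 * 36, iters=200):
    """Solve I = Iph - I0*(exp((V+I*Rs)/(n*Vt)) - 1) - (V+I*Rs)/Rsh by fixed-point iteration.

    Vt defaults to the thermal voltage of a 36-cell module at about 25 degrees C (assumed).
    """
    I = np.full_like(V, Iph, dtype=float)
    for _ in range(iters):
        I = Iph - I0 * np.expm1((V + I * Rs) / (n * Vt)) - (V + I * Rs) / Rsh
    return I

# Assumed module-level parameter values for illustration only.
V = np.linspace(0.0, 21.0, 50)
I = single_diode_current(V, Iph=8.2, I0=2e-9, n=1.3, Rs=0.35, Rsh=300.0)
print(I[0], I[-1])   # currents at the low- and high-voltage ends of the sweep
```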

  3. Identification of Dynamic Parameters Based on Pseudo-Parallel Ant Colony Optimization Algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHAO Feng-yao; MA Zhen-yue; ZHANG Yun-liang

    2007-01-01

    For the parameter identification of dynamic problems, a pseudo-parallel ant colony optimization (PPACO) algorithm based on the graph-based ant system (AS) was introduced. On the platform of ANSYS dynamic analysis, the PPACO algorithm was successfully applied to the identification of dynamic parameters. Using simulated force and displacement data, the elastic modulus E and damping ratio ξ were identified for a designed 3D finite element model, and the detailed identification steps were given. A mathematical example and a simulation example show that the proposed method has higher precision, faster convergence and stronger anti-noise ability compared with the standard genetic algorithm and ant colony optimization (ACO) algorithms.

  4. Optimization of the parameters of a virtual-cathode oscillator with an inhomogeneous magnetic field

    Science.gov (United States)

    Kurkin, S. A.; Koronovskii, A. A.; Khramov, A. E.; Kuraev, A. A.; Kolosov, S. V.

    2013-10-01

    A two-dimensional numerical model is used to study the generation of powerful microwave radiation in a vircator with an inhomogeneous magnetic field applied to focus a beam. The characteristics of the external inhomogeneous magnetic field are found to strongly affect the vircator generation characteristics. Mathematical optimization is used to search for the optimum parameters of the magnetic periodic focusing system of the oscillator in order to achieve the maximum power of the output microwave radiation. The dependences of the output vircator power on the characteristics of the external inhomogeneous magnetic field are studied near the optimum control parameters. The physical processes that occur in optimized virtual cathode oscillators are investigated.

  5. A Minimum Delta V Orbit Maintenance Strategy for Low-Altitude Missions Using Burn Parameter Optimization

    Science.gov (United States)

    Brown, Aaron J.

    2011-01-01

    Orbit maintenance is the series of burns performed during a mission to ensure the orbit satisfies mission constraints. Low-altitude missions often require non-trivial orbit maintenance Delta V due to sizable orbital perturbations and minimum altitude thresholds. A strategy is presented for minimizing this Delta V using impulsive burn parameter optimization. An initial estimate for the burn parameters is generated by considering a feasible solution to the orbit maintenance problem. A low-lunar-orbit example demonstrates the Delta V savings from the feasible solution to the optimal solution. The strategy's extensibility to more complex missions is discussed, as well as the limitations of its use.

  6. CH4 parameter estimation in CLM4.5bgc using surrogate global optimization

    Directory of Open Access Journals (Sweden)

    J. Müller

    2015-01-01

    Full Text Available Over the Anthropocene, methane has increased dramatically. Wetlands are one of the major sources of methane to the atmosphere, but the role of changes in wetland emissions is not well understood. The Community Land Model (CLM) of the Community Earth System Models contains a module to estimate methane emissions from natural wetlands and rice paddies. Our comparison of CH4 emission observations at 16 sites around the planet reveals, however, that there are large discrepancies between the CLM predictions and the observations. The goal of our study is to adjust the model parameters in order to minimize the root mean squared error (RMSE) between model predictions and observations. These parameters have been selected based on a sensitivity analysis. Because of the cost associated with running the CLM simulation (15 to 30 min on the Yellowstone Supercomputing Facility), only relatively few simulations can be allowed in order to find a near optimal solution within an acceptable time. Our results indicate that the parameter estimation problem has multiple local minima. Hence, we use a computationally efficient global optimization algorithm that uses a radial basis function (RBF) surrogate model to approximate the objective function. We use the information from the RBF to select parameter values that are most promising with respect to improving the objective function value. We show with pseudo data that our optimization algorithm is able to make excellent progress with respect to decreasing the RMSE. Using the true CH4 emission observations for optimizing the parameters, we are able to significantly reduce the overall RMSE between observations and model predictions by about 50%. The CLM predictions with the optimized parameters agree more closely with the observed data for northern and tropical latitudes than when using the default parameters, and the emission predictions are higher than with default settings in northern latitudes and lower than default settings in the
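
    The surrogate strategy fits a radial basis function to all evaluated (parameter, RMSE) pairs and uses the cheap surrogate to pick the next expensive CLM run. Below is a stripped-down illustration of one such loop on a cheap analytic objective; the Gaussian RBF, the candidate-sampling rule, and the stand-in objective are simplifications and assumptions, not the actual algorithm used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_objective(x):
    # Stand-in for a 30-minute CLM run returning an RMSE against CH4 observations.
    return np.sum((x - 0.3) ** 2) + 0.05 * np.sin(10 * x).sum()

def rbf_predict(X, y, query, eps=2.0):
    # Gaussian RBF interpolant fitted to all points evaluated so far.
    def kernel(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return np.exp(-(eps * d) ** 2)
    weights = np.linalg.solve(kernel(X, X) + 1e-8 * np.eye(len(X)), y)
    return kernel(query, X) @ weights

dim, n_init, budget = 2, 6, 30
X = rng.uniform(0, 1, (n_init, dim))
y = np.array([expensive_objective(x) for x in X])

for _ in range(budget - n_init):
    candidates = rng.uniform(0, 1, (500, dim))    # cheap candidate pool
    preds = rbf_predict(X, y, candidates)
    x_next = candidates[np.argmin(preds)]          # most promising candidate on the surrogate
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_objective(x_next))

print(X[np.argmin(y)], y.min())
```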

  7. OPTIMIZATION OF MACHINING PARAMETERS IN TURNING PROCESS USING GENETIC ALGORITHM AND PARTICLE SWARM OPTIMIZATION WITH EXPERIMENTAL VERIFICATION

    Directory of Open Access Journals (Sweden)

    K.RAMESH KUMAR

    2011-02-01

    Full Text Available Optimization of cutting parameters is one of the most important elements in any process planning of metal parts, and the economy of the machining operation plays a key role in market competitiveness. CNC machines produce finished components from cylindrical bar stock. Finished profiles consist of straight turning, facing, taper and circular machining, and a finished profile is produced from a cylindrical bar in two stages: rough machining and finish machining. A number of passes are required for rough machining and a single pass for the finishing pass. The machining parameters in multipass turning are depth of cut, cutting speed and feed, and machining performance is measured by the minimum production time. In this paper the optimal machining parameters for continuous profile machining are determined with respect to the minimum production time, subject to a set of practical constraints: cutting force, power, dimensional accuracy and surface finish. Due to the complexity of this machining optimization problem, a genetic algorithm (GA) and particle swarm optimization (PSO) are applied to resolve the problem and the results obtained from GA and PSO are compared.

  8. Cellular scanning strategy for selective laser melting: Generating reliable, optimized scanning paths and processing parameters

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2015-01-01

    ... gradients that occur during the process. While process monitoring and control of selective laser melting is an active area of research, establishing the reliability and robustness of the process still remains a challenge. In this paper, a methodology for generating reliable, optimized scanning paths ... to generate optimized cellular scanning strategies and processing parameters, with an objective of reducing thermal asymmetries and mechanical deformations. The optimized scanning strategies are used for selective laser melting of the standard samples, and experimental and numerical results are compared.

  9. Design Optimization of RFI Parameters by Manufacturing T-shaped Composite Panel

    Institute of Scientific and Technical Information of China (English)

    ZHANG Guo-li; HUANG Gu

    2005-01-01

    The aim of this project is to develop a novel approach for optimizing resin film infusion (RFI) processing parameters at the design stage by manufacturing a T-shaped composite panel. Dimensional accuracy was selected as the objective function. By investigating the rheological properties of the resin film, the compaction behavior of the fiber preform and the characteristics of the RFI process, an optimization model was established. It was found that the numerical results obtained from the RFICOMP program package are in good agreement with the experimental results, and this optimization procedure can be applied to other composite manufacturing processes.

  10. Parameter identification of a distributed runoff model by the optimization software Colleo

    Science.gov (United States)

    Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Anai, Hirokazu; Iwami, Yoichi

    2015-04-01

    The introduction of Colleo (Collection of Optimization software) is presented and case studies of parameter identification for a distributed runoff model are illustrated. In order to calculate river discharge accurately, distributed runoff models have become widely used to take into account various land-use, soil-type and rainfall distributions. A feasibility study of parameter optimization should be done in two steps. The first step is to survey which optimization algorithms are suitable for the problems of interest. The second step is to investigate the performance of the specific optimization algorithm. Most previous studies seem to focus on the second step; this study focuses on the first step and complements them. Many optimization algorithms have been proposed in the computational science field, and a large number of optimization software packages have been developed and released to the public with practically applicable performance and quality. It is well known that using algorithms suited to the problem is important for obtaining good optimization results efficiently. To make such algorithm comparisons straightforward, optimization software is needed with which the performance of many algorithms can be compared and which can be connected to various simulation software. Colleo was developed to satisfy these needs. Colleo provides a unified user interface to several optimization packages such as pyOpt, NLopt, inspyred and R and helps investigate the suitability of optimization algorithms. 74 different implementations of optimization algorithms, including Nelder-Mead, Particle Swarm Optimization and Genetic Algorithm, are available with Colleo. The effectiveness of Colleo was demonstrated with flood events of the Gokase River basin in Japan (1,820 km2). From 2002 to 2010, there were 15 flood events in which the discharge exceeded 1,000 m3/s. The discharge was calculated with the PWRI distributed hydrological model developed by ICHARM. The target

  11. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation

    OpenAIRE

    Jun Wang; Bihua Zhou; Shudao Zhou

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to establish the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, the orthogonal design and simulated annealing operation are incorporated in the CS algorithm to enhance the exploitation search ability. Then the proposed algorithm is used to establish parameters of the Lorenz chaotic system and Chen chaotic system under the noiseless and noise condition, resp...

  12. Parameters extraction of photovoltaic module for long-term prediction using Artifical Bee Colony optimization

    OpenAIRE

    Garoudja, Elyes; Kara, Kamel; Chouder, Aissa; Silvestre Bergés, Santiago

    2015-01-01

    In this paper, a heuristic optimization approach based on Artificial Bee Colony (ABC) algorithm is applied to the extraction of the five electrical parameters of a photovoltaic (PV) module. The proposed approach has several interesting features such as no prior knowledge of the physical system and its convergence is not dependent on the initial conditions. The extracted parameters have been tested against several static IV characteristics of different PV modules from diff...

  13. Optimization of regularization parameter of inversion in particle sizing using light extinction method

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In particle sizing by light extinction method, the regularization parameter plays an important role in applying regularization to find the solution to ill-posed inverse problems. We combine the generalized cross-validation (GCV) and L-curve criteria with the Twomey-NNLS algorithm in parameter optimization. Numerical simulation and experimental validation show that the resistance of the newly developed algorithms to measurement errors can be improved leading to stable inversion results for unimodal particle size distribution.

  14. OPTIMALITY CRITERIA FOR MEASUREMENT POSES SELECTION IN CALIBRATION OF ROBOT STIFFNESS PARAMETERS

    OpenAIRE

    Wu, Yier; Klimchik, Alexandr; Pashkevich, Anatol; Caro, Stéphane; Furet, Benoît

    2012-01-01

    The paper focuses on the accuracy improvement of industrial robots by means of elasto-static parameter calibration. It proposes a new optimality criterion for measurement pose selection in the calibration of robot stiffness parameters. This criterion is based on the concept of the manipulator test pose that is defined by the user via the joint angles and the external force. The proposed approach essentially differs from the traditional ones and ensures the best complia...

  15. Optimization of Torque Sensor Input Parameters and Determination of Sensor Errors and Uncertainties

    OpenAIRE

    2006-01-01

    This paper introduces basic knowledge about a magnetoelastic torque sensor designed for non-contact measurements. The paper presents results of the institutional project “The development and realization of a torque sensor with appropriate equipment.” The optimization of the torque sensor working conditions and sensor parameters is presented. The metrological parameters determined by testing showed the sensor's suitability for use in outdoor applications. Another aim of the paper is evaluation of measureme...

  16. Optimization of torque sensor input parameters and determination of sensor errors and uncertainties

    OpenAIRE

    Jozef Vojtko

    2006-01-01

    This paper introduces basic knowledge about a magnetoelastic torque sensor designed for non-contact measurements. The paper presents results of the institutional project “The development and realization of a torque sensor with appropriate equipment.” The optimization of the torque sensor working conditions and sensor parameters is presented. The metrological parameters determined by testing showed the sensor's suitability for use in outdoor applications. Another aim of the paper is evaluation of measureme...

  17. Optimizing advanced propeller designs by simultaneously updating flow variables and design parameters

    Science.gov (United States)

    Rizk, Magdi H.

    1988-01-01

    A scheme is developed for solving constrained optimization problems in which the objective function and the constraint function are dependent on the solution of the nonlinear flow equations. The scheme updates the design parameter iterative solutions and the flow variable iterative solutions simultaneously. It is applied to an advanced propeller design problem with the Euler equations used as the flow governing equations. The scheme's accuracy, efficiency and sensitivity to the computational parameters are tested.

  18. Longitudinal parameter identification of a small unmanned aerial vehicle based on modified particle swarm optimization

    OpenAIRE

    Jiang Tieying; Li Jie; Huang Kewei

    2015-01-01

    This paper describes a longitudinal parameter identification procedure for a small unmanned aerial vehicle (UAV) through modified particle swarm optimization (PSO). The procedure is demonstrated using a small UAV equipped with only a micro-electro-mechanical systems (MEMS) inertial measuring element and a global positioning system (GPS) receiver to provide test information. A small UAV longitudinal parameter mathematical model is derived and the modified method is proposed based on PSO with s...

  19. Aerodynamic optimization by simultaneously updating flow variables and design parameters with application to advanced propeller designs

    Science.gov (United States)

    Rizk, Magdi H.

    1988-01-01

    A scheme is developed for solving constrained optimization problems in which the objective function and the constraint function are dependent on the solution of the nonlinear flow equations. The scheme updates the design parameter iterative solutions and the flow variable iterative solutions simultaneously. It is applied to an advanced propeller design problem with the Euler equations used as the flow governing equations. The scheme's accuracy, efficiency and sensitivity to the computational parameters are tested.

  20. Optimization of Soil Hydraulic Model Parameters Using Synthetic Aperture Radar Data: An Integrated Multidisciplinary Approach

    DEFF Research Database (Denmark)

    Pauwels, Valentijn; Balenzano, Anna; Satalino, Giuseppe;

    2009-01-01

    ... been focused on the retrieval of land and biogeophysical parameters (e.g., soil moisture contents). One relatively unexplored issue consists of the optimization of soil hydraulic model parameters, such as, for example, hydraulic conductivity values, through remote sensing. This is due to the fact ... that no direct relationships between the remote-sensing observations, more specifically radar backscatter values, and the parameter values can be derived. However, land surface models can provide these relationships. The objective of this paper is to retrieve a number of soil physical model parameters through ... model is, thus, used to determine the relationship between the soil physical parameters and the remote-sensing data. An analysis is then performed, relating the retrieved soil parameters to the soil texture data available over the study area. The results of the study show that there is a potential...

  1. A novel optimization method of camera parameters used for vision measurement

    Science.gov (United States)

    Zhou, Fuqiang; Cui, Yi; Peng, Bin; Wang, Yexin

    2012-09-01

    Camera calibration plays an important role in the field of machine vision applications. During the process of camera calibration, nonlinear optimization technique is crucial to obtain the best performance of camera parameters. Currently, the existing optimization method aims at minimizing the distance error between the detected image point and the calculated back-projected image point, based on 2D image pixels coordinate. However, the vision measurement process is conducted in 3D space while the optimization method generally adopted is carried out in 2D image plane. Moreover, the error criterion with respect to optimization and measurement is different. In other words, the equal pixel distance error in 2D image plane leads to diverse 3D metric distance error at different position before the camera. All the reasons mentioned above will cause accuracy decrease for 3D vision measurement. To solve the problem, a novel optimization method of camera parameters used for vision measurement is proposed. The presented method is devoted to minimizing the metric distance error between the calculated point and the real point in 3D measurement coordinate system. Comparatively, the initial camera parameters acquired through linear calibration are optimized through two different methods: one is the conventional method and the other is the novel method presented by this paper. Also, the calibration accuracy and measurement accuracy of the parameters obtained by the two methods are thoroughly analyzed and the choice of a suitable accuracy evaluation method is discussed. Simulative and real experiments to estimate the performance of the proposed method on test data are reported, and the results show that the proposed 3D optimization method is quite efficient to improve measurement accuracy compared with traditional method. It can meet the practical requirement of high precision in 3D vision metrology engineering.

  2. Parameter identification theory of a complex model based on global optimization method

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    With the development of computer technology and numerical simulation technology, computer aided engineering (CAE) technology has been widely applied to many fields. One of the main obstacles that hinder the further application of CAE technology is how to successfully identify the parameters of the selected model. An elementary framework for parameter identification of a complex model is provided in this paper. The framework includes the construction of the objective function, the design of the optimization method and the evaluation of the identified results. The parameter identification process is described in this framework, taking the parameter identification of the superplastic constitutive model considering grain growth for Ti-6Al-4V at 927℃ as an example. The objective function is the weighted quadratic sum of the differences between the experimental and computational data for the stress-strain relationship and the grain growth relationship; the designed optimization method is a hybrid global optimization method, which is based on the features of the objective function and incorporates the strengths of the genetic algorithm (GA), the Levenberg-Marquardt algorithm and the augmented Gauss-Newton algorithm. The reliability of the parameter identification result is evaluated through the comparison between the calculated and experimental results and between the theoretical values of the parameters and the identified ones.

  3. Quantifying dynamic sensitivity of optimization algorithm parameters to improve hydrological model calibration

    Science.gov (United States)

    Qi, Wei; Zhang, Chi; Fu, Guangtao; Zhou, Huicheng

    2016-02-01

    It is widely recognized that optimization algorithm parameters have significant impacts on algorithm performance, but quantifying the influence is very complex and difficult due to high computational demands and the dynamic nature of search parameters. The overall aim of this paper is to develop a global sensitivity analysis based framework to dynamically quantify the individual and interactive influence of algorithm parameters on algorithm performance. A variance decomposition sensitivity analysis method, Analysis of Variance (ANOVA), is used for sensitivity quantification, because it is capable of handling small samples and is more computationally efficient compared with other approaches. The Shuffled Complex Evolution algorithm developed at the University of Arizona (SCE-UA) is selected as the optimization algorithm for investigation, and two criteria, i.e., convergence speed and success rate, are used to measure the performance of SCE-UA. Results show the proposed framework can effectively reveal the dynamic sensitivity of algorithm parameters in the search processes, including individual influences of parameters and their interactive impacts. Interactions between algorithm parameters have significant impacts on SCE-UA performance, which has not been reported in previous research. The proposed framework provides a means to understand the dynamics of algorithm parameter influence, and highlights the significance of considering interactive parameter influence to improve algorithm performance in the search processes.

  4. Optimization of the dressing parameters in cylindrical grinding based on a generalized utility function

    Science.gov (United States)

    Aleksandrova, Irina

    2016-01-01

    Existing studies concerning the dressing process focus on the major influence of the dressing conditions on the grinding response variables. However, the choice of dressing conditions is often made based on the experience of qualified staff or using data from reference books. Optimal dressing parameters, which are only valid for the particular methods and dressing and grinding conditions, are also used. The paper presents a methodology for optimization of the dressing parameters in cylindrical grinding. A generalized utility function has been chosen as the optimization parameter. It is a complex indicator determining the economic, dynamic and manufacturing characteristics of the grinding process. The developed methodology is implemented for the dressing of aluminium oxide grinding wheels by using experimental diamond roller dressers with different grit sizes made of medium- and high-strength synthetic diamonds of types ??32 and ??80. To solve the optimization problem, a model of the generalized utility function is created which reflects the complex impact of the dressing parameters. The model is built based on the results from the conducted complex study and modeling of the grinding wheel lifetime, cutting ability, production rate and cutting forces during grinding. These are closely related to the dressing conditions (dressing speed ratio, radial in-feed of the diamond roller dresser and dress-out time), the diamond roller dresser grit size/grinding wheel grit size ratio, the type of synthetic diamonds and the direction of dressing. Dressing parameters are determined for which the generalized utility function has a maximum and which guarantee an optimum combination of the following: the lifetime and cutting ability of the abrasive wheels, the tangential cutting force magnitude and the production rate of the grinding process. The results obtained prove the possibility of control and optimization of grinding by selecting particular dressing

  5. Optimization of Friction Welding Process Parameters for Joining Carbon Steel and Stainless Steel

    Institute of Scientific and Technical Information of China (English)

    R Paventhan; P R Lakshminarayanan; V Balasubramanian

    2012-01-01

    Friction welding is a solid state joining process used extensively at present owing to its advantages such as low heat input, high production efficiency, ease of manufacture, and environmental friendliness. Materials difficult to weld by fusion welding processes can be successfully joined by friction welding. An attempt was made to develop an empirical relationship to predict the tensile strength of friction welded joints of AISI 1040 grade medium carbon steel and AISI 304 austenitic stainless steel, incorporating process parameters such as friction pressure, forging pressure, friction time and forging time, which have a great influence on the strength of the joints. Response surface methodology was applied to optimize the friction welding process parameters to attain maximum tensile strength of the joint. A maximum tensile strength of 543 MPa could be obtained for joints fabricated under welding conditions of a friction pressure of 90 MPa, forging pressure of 90 MPa, friction time of 6 s and forging time of 6 s.

  6. Parameter Estimation in Rainfall-Runoff Modelling Using Distributed Versions of Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Michala Jakubcová

    2015-01-01

    Full Text Available The paper provides an analysis of selected versions of the particle swarm optimization (PSO) algorithm. The tested versions of PSO were combined with a shuffling mechanism, which splits the model population into complexes and performs distributed PSO optimization. One of them is a newly proposed PSO modification, APartW, which enhances global exploration and local exploitation in the parameter space during the optimization process through a new updating mechanism applied to the PSO inertia weight. The performance of four selected PSO methods was tested on 11 benchmark optimization problems, which were prepared for the special session on single-objective real-parameter optimization at CEC 2005. The results confirm that the tested new APartW PSO variant is comparable with other existing distributed PSO versions, AdaptW and LinTimeVarW. The distributed PSO versions were developed for finding the solution of inverse problems related to the estimation of parameters of the hydrological model Bilan. The results of the case study, made on a selected set of 30 catchments obtained from the MOPEX database, show that the tested distributed PSO versions provide suitable estimates of Bilan model parameters and thus can be used for solving related inverse problems during the calibration process of the studied water balance hydrological model.
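    A minimal global-best PSO with a linearly time-varying inertia weight (the LinTimeVarW idea mentioned above) is sketched below; the shuffling-into-complexes step and the APartW update rule of the paper are not reproduced, and the benchmark function and all settings are assumptions.

```python
import numpy as np

def pso(fun, bounds, n_particles=30, n_iter=200, w_start=0.9, w_end=0.4,
        c1=2.0, c2=2.0, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))              # positions
    v = np.zeros_like(x)                                      # velocities
    pbest, pbest_f = x.copy(), np.array([fun(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for it in range(n_iter):
        w = w_start - (w_start - w_end) * it / (n_iter - 1)   # linear inertia schedule
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fun(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# example on the Rosenbrock function (a CEC-style benchmark stand-in)
rosen = lambda p: sum(100 * (p[1:] - p[:-1]**2)**2 + (1 - p[:-1])**2)
best_x, best_f = pso(rosen, bounds=[(-5, 5)] * 5)
print("best objective:", round(best_f, 6))
```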

  7. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Directory of Open Access Journals (Sweden)

    Vasanthan Maruthapillai

    Full Text Available In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.

  8. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Science.gov (United States)

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network. PMID:26859884

  9. Optimization of injection molding parameters for poly(styrene-isobutylene-styrene) block copolymer

    Science.gov (United States)

    Fittipaldi, Mauro; Garcia, Carla; Rodriguez, Luis A.; Grace, Landon R.

    2016-03-01

    Poly(styrene-isobutylene-styrene) (SIBS) is a widely used thermoplastic elastomer in bioimplantable devices due to its inherent stability in vivo. However, the properties of the material are highly dependent on the fabrication conditions, molecular weight, and styrene content. An optimization method for injection molding is herein proposed which can be applied to varying SIBS formulations in order to maximize ultimate tensile strength, which is critical to certain load-bearing implantable applications. The number of injection molded samples required to ascertain the optimum conditions for maximum ultimate tensile strength is limited in order to minimize experimental time and effort. Injection molding parameters including nozzle temperature (three levels: 218, 246, and 274 °C), mold temperature (three levels: 50, 85, and 120 °C), injection speed (three levels: slow, medium and fast) and holding pressure time (three levels: 2, 6, and 10 seconds) were varied to fabricate dumbbell specimens for tensile testing. A three-level L9 Taguchi method utilizing orthogonal arrays was used in order to rank the importance of the different injection molding parameters and to find an optimal parameter setting to maximize the ultimate tensile strength of the thermoplastic elastomer. Based on the Taguchi design results, a Response Surface Methodology (RSM) was applied in order to build a model to predict the tensile strength of the material at different injection parameters. Finally, the model was optimized to find the injection molding parameters providing maximum ultimate tensile strength. Subsequently, the theoretically-optimum injection molding parameters were used to fabricate additional dumbbell specimens. The experimentally-determined ultimate tensile strength of these samples was found to be in close agreement (1.2%) with the theoretical results, successfully demonstrating the suitability of the Taguchi Method and RSM for optimizing injection molding parameters of SIBS.
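    The ranking step of such an L9 Taguchi analysis can be sketched as follows. The L9(3^4) orthogonal array is the standard one; the factor names mirror those above, but the tensile-strength values are made-up placeholders, not the study's data, and the real analysis would continue with the RSM fit described in the abstract.

```python
import numpy as np

# Taguchi L9 sketch: four factors at three levels, larger-the-better S/N ratio
# computed on ultimate tensile strength (single replicate per run).
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])   # levels coded 0, 1, 2
factors = ["nozzle T", "mold T", "injection speed", "hold time"]
uts = np.array([16.1, 17.3, 15.8, 18.2, 17.9, 16.5, 17.0, 18.6, 17.4])  # MPa, dummy

# larger-the-better S/N; for a single replicate this reduces to 10*log10(y^2)
sn = 10 * np.log10(uts ** 2)
for j, name in enumerate(factors):
    level_means = [sn[L9[:, j] == lvl].mean() for lvl in range(3)]
    best = int(np.argmax(level_means))
    print(f"{name:16s} mean S/N per level: {np.round(level_means, 2)} -> pick level {best}")
```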

  10. Research of Optimization Method of Swabbing Parameters of All Rods Pumping Wells in the Entire Oilfield

    Directory of Open Access Journals (Sweden)

    Zhang Xishun

    2013-03-01

    Full Text Available To overcome the drawbacks of existing optimization and design methods and to meet the practical production goal of minimum energy consumption, a new theory is proposed in which the gas released from the layer contributes energy in the lifting process in two parts: dissolved-gas expansion energy and free-gas expansion energy. The motor's input power for the rod pumping system is divided into hydraulic horsepower, gas expansion power, surface mechanical loss power and subsurface loss power. Using the theory of energy conservation, a simulation model of free-gas expansion power has been established, the simulation models of the motor's input power based on the energy method have been improved, and the simulation precision of system efficiency has been enhanced. Entire-oilfield optimization design models have been set up in which the single-well output is taken as the design variable, the planned production of all oil wells in the overall oilfield as the constraint condition, and the least input power of the overall oilfield as the objective. Synthesizing the optimization design results of the single wells and the entire oilfield, the optimal output and the optimal swabbing parameters of all wells can be obtained. Actual optimization examples show that the total power consumption designed by the entire-oilfield optimization method is 12.95% less than that designed by the single-well optimization method.

  11. Fast reactor parameter optimization taking into account changes in fuel charge type during reactor operation time

    International Nuclear Information System (INIS)

    The formulation and solution of an optimization problem for the parameters determining the layout of the central part of a sodium cooled power reactor, taking into account possible changes in fuel charge type during the reactor operation time, are presented. The losses under a change of fuel composition type are estimated for two reactor modifications providing minimum doubling time for oxide and carbide fuels, respectively

  12. On Optimization Control Parameters in an Adaptive Error-Control Scheme in Satellite Networks

    Directory of Open Access Journals (Sweden)

    Ranko Vojinović

    2011-09-01

    Full Text Available This paper presents a method for the optimization of control parameters of an adaptive GBN scheme in an error-prone satellite channel. The method is based on a three-state channel model in which the channel has a variable noise level.

  13. Adaptive Importance Sampling for Performance Evaluation and Parameter Optimization of Communication Systems

    NARCIS (Netherlands)

    Remondo, David; Srinivasan, Rajan; Nicola, Victor F.; Etten, van Wim C.; Tattje, Henk E.P.

    2000-01-01

    We present new adaptive importance sampling techniques based on stochastic Newton recursions. Their applicability to the performance evaluation of communication systems is studied. Besides bit-error rate (BER) estimation, the techniques are used for system parameter optimization. Two system models t

  14. TO THE QUESTION OF SOLVING OF THE PROBLEM OF OPTIMIZING PARAMETERS OF TRAFFIC FLOW COORDINATED CONTROL

    OpenAIRE

    L. Abramova; Chernobaev, N.

    2007-01-01

    A short review of the main methods of traffic flow control is presented, with particular attention paid to methods of coordinated control and quality characteristics of traffic control. The problem of parameter optimization of coordinated traffic control on the basis of minimizing vehicle delay at highway intersections has been defined.

  15. Optimization of the Process Parameters for Controlling Residual Stress and Distortion in Friction Stir Welding

    DEFF Research Database (Denmark)

    Tutum, Cem Celal; Schmidt, Henrik Nikolaj Blicher; Hattel, Jesper Henri

    2008-01-01

    In the present paper, numerical optimization of the process parameters, i.e. tool rotation speed and traverse speed, aiming minimization of the two conflicting objectives, i.e. the residual stresses and welding time, subjected to process-specific thermal constraints in friction stir welding, is i...

  16. Optimization of Temperature Schedule Parameters on Heat Supply in Power-and-Heat Supply Systems

    Directory of Open Access Journals (Sweden)

    V. A. Sednin

    2009-01-01

    Full Text Available The paper considers problems concerning optimization of the temperature schedule in district heating systems with steam-turbine thermal power stations having average initial steam parameters. It is shown in the paper that maintaining an optimum network water temperature increases the energy efficiency of heat supply due to additional systematic fuel savings.

  17. Optimization of Polishing Parameters with Taguchi Method for LBO Crystal in CMP

    Institute of Scientific and Technical Information of China (English)

    Jun Li; Yongwei Zhu; Dunwen Zuo; Yong Zhu; Chuangtian Chen

    2009-01-01

    Chemical mechanical polishing (CMP) was used to polish lithium triborate (LiB_3O_5 or LBO) crystal. The Taguchi method was applied for optimization of the polishing parameters. Material removal rate (MRR) and surface roughness are considered as criteria for the optimization. The polishing pressure, the abrasive concentration and the table velocity are important parameters which influence MRR and surface roughness in CMP of LBO crystal. Experimental results indicate that for MRR the polishing pressure is the most significant polishing parameter, followed by table velocity, while for the surface roughness the abrasive concentration is the most important one. For high MRR in CMP of LBO crystal the optimal conditions are: pressure 620 g/cm², concentration 5.0 wt pct, and velocity 60 r/min, respectively. For the best surface roughness the optimal conditions are: pressure 416 g/cm², concentration 5.0 wt pct, and velocity 40 r/min, respectively. The contributions of the individual parameters to MRR and surface roughness were obtained.

  18. Optimization of WEDM process parameters using deep cryo-treated Inconel 718 as work material

    Directory of Open Access Journals (Sweden)

    Bijaya Bijeta Nayak

    2016-03-01

    Full Text Available The present work proposes an experimental investigation and optimization of various process parameters during taper cutting of deep cryo-treated Inconel 718 in the wire electrical discharge machining process. Taguchi's design of experiments is used to gather information regarding the process with a smaller number of experimental runs, considering six input parameters such as part thickness, taper angle, pulse duration, discharge current, wire speed and wire tension. Since the traditional Taguchi method fails to optimize multiple performance characteristics, maximum deviation theory is applied to convert multiple performance characteristics into an equivalent single performance characteristic. Due to the complexity and non-linearity involved in this process, a good functional relationship with reasonable accuracy between performance characteristics and process parameters is difficult to obtain. To address this issue, the present study proposes an artificial neural network (ANN) model to determine the relationship between input parameters and performance characteristics. Finally, the process model is optimized to obtain the best parametric combination by a new meta-heuristic approach known as the bat algorithm. The results of the proposed algorithm show that the proposed method is an effective tool for simultaneous optimization of performance characteristics during taper cutting in the WEDM process.

  19. Selecting Optimal Parameters of Random Linear Network Coding for Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Heide, Janus; Zhang, Qi; Fitzek, Frank

    2013-01-01

    This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant...

  20. High-resolution MRI of the labyrinth. Optimization of scan parameters with 3D-FSE

    International Nuclear Information System (INIS)

    The aim of our study was to optimize the parameters of high-resolution MRI of the labyrinth with a 3D fast spin-echo (3D-FSE) sequence. We investigated repetition time (TR), echo time (TE), Matrix, field of view (FOV), and coil selection in terms of CNR (contrast-to-noise ratio) and SNR (signal-to-noise ratio) by comparing axial images and/or three-dimensional images. The optimal 3D-FSE sequence parameters were as follows: 1.5 Tesla MR unit (Signa LX, GE Medical Systems), 3D-FSE sequence, dual 3-inch surface coil, acquisition time=12.08 min, TR=5000 msec, TE=300 msec, 3 number of excitations (NEX), FOV=12 cm, matrix=256 x 256, slice thickness=0.5 mm/0.0 sp, echo train=64, bandwidth=±31.5 kHz. High-resolution MRI of the labyrinth using the optimized 3D-FSE sequence parameters permits visualization of important anatomic details (such as scala tympani and scala vestibuli), making it possible to determine inner ear anomalies and the patency of cochlear turns. To obtain excellent heavily T2-weighted axial and three-dimensional images in the labyrinth, high CNR, SNR, and spatial resolution are significant factors at the present time. Furthermore, it is important not only to optimize the scan parameters of 3D-FSE but also to select an appropriate coil for high-resolution MRI of the labyrinth. (author)

  1. Optimal Design of Measurement Programs for the Parameter Identification of Dynamic Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

    1991-01-01

    The design of a measurement program devoted to parameter identification of structural dynamic systems is considered. The design problem is formulated as an optimization problem to minimize the total expected cost, i.e. the cost of failure and the cost of the measurement program. All the calculati...

  2. Cellular Neural Networks: A genetic algorithm for parameters optimization in artificial vision applications

    International Nuclear Information System (INIS)

    An optimization method for some of the CNN's (Cellular Neural Network) parameters, based on evolutionary strategies, is proposed. The new class of feedback templates found is more effective in extracting features from the images that an autonomous vehicle acquires than those previously reported in the CNN literature

  3. Multi Objective Optimization of Weld Parameters of Boiler Steel Using Fuzzy Based Desirability Function

    Directory of Open Access Journals (Sweden)

    M. Satheesh

    2014-01-01

    Full Text Available The high pressure differential across the wall of pressure vessels is potentially dangerous and has caused many fatal accidents in the history of their development and operation. For this reason the structural integrity of weldments is critical to the performance of pressure vessels. In recent years much research has been conducted on the effect of variations in welding parameters and consumables on the mechanical properties of pressure vessel steel weldments, to optimize weld integrity and ensure pressure vessels are safe. The quality of the weld is a very important working aspect for the manufacturing and construction industries. Because of its high quality and reliability, submerged arc welding (SAW) is one of the chief metal joining processes employed in industry. This paper addresses the application of the desirability function approach combined with fuzzy logic analysis to optimize the multiple quality characteristics (bead reinforcement, bead width, bead penetration and dilution) of the submerged arc welding process parameters of SA 516 Grade 70 steel (boiler steel). Experiments were conducted using Taguchi's L27 orthogonal array, varying the weld parameters of welding current, arc voltage, welding speed and electrode stick-out. By analyzing the response table and response graph of the fuzzy reasoning grade, optimal parameters were obtained. Solutions from this method can be useful for pressure vessel manufacturers and operators to search for an optimal welding condition.
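    The desirability part of such a multi-response optimization can be sketched as below. The individual desirability functions follow the usual larger-the-better / smaller-the-better forms; the response values, limits and the equal weighting are illustrative assumptions, not the SA 516 Grade 70 data, and the fuzzy-logic aggregation of the paper is not reproduced.

```python
import numpy as np

def d_larger(y, lo, hi, s=1.0):
    """Larger-the-better desirability (e.g. penetration): 0 below lo, 1 above hi."""
    return np.clip((y - lo) / (hi - lo), 0, 1) ** s

def d_smaller(y, lo, hi, s=1.0):
    """Smaller-the-better desirability (e.g. dilution): 1 below lo, 0 above hi."""
    return np.clip((hi - y) / (hi - lo), 0, 1) ** s

# one trial's measured responses with assumed acceptance limits
responses = {"penetration":   (4.2, d_larger,  (3.0, 5.0)),
             "dilution":      (28.0, d_smaller, (20.0, 40.0)),
             "bead width":    (14.0, d_smaller, (10.0, 18.0)),
             "reinforcement": (2.4, d_larger,  (1.5, 3.5))}

d = np.array([f(y, *lim) for y, f, lim in responses.values()])
overall = np.prod(d) ** (1 / len(d))     # geometric mean = composite desirability
print("individual desirabilities:", np.round(d, 3))
print("overall desirability:", round(float(overall), 3))
```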

  4. Optimization of EDM Process Parameters on Titanium Super Alloys Based on the Grey Relational Analysis

    Directory of Open Access Journals (Sweden)

    J. Laxman, Dr. K. Guru Raj

    2014-05-01

    Full Text Available Electrical discharge machining (EDM) is an unconventional machining process used for machining complex shapes and hard materials that are difficult to machine by conventional processes. This paper deals with the optimization of EDM process parameters using grey relational analysis (GRA) based on an orthogonal array for a multi-response process. The experiments are conducted on titanium super alloys with a copper electrode based on the Taguchi L27 orthogonal array design of experiments, choosing various parameters such as peak current, pulse on time, pulse off time and tool lift time for the EDM process to obtain multiple process responses, namely metal removal rate (MRR) and tool wear rate (TWR). The combination of the Taguchi method with GRA enables the determination of the optimal parameters for a multiple response process. Grey relational analysis is used to obtain a performance index called the grey relational grade to optimize the EDM process with higher MRR and lower TWR, and it is found that the performance of the EDM process is greatly increased by optimizing the responses. The influence of the individual machining parameters is also investigated by using analysis of variance of the grey relational grade.
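    The grey relational grade computation can be sketched compactly. The MRR/TWR numbers below are illustrative stand-ins for the measured responses, equal weights are assumed, and the deviation extremes are taken per response, which is a common simplification of the textbook GRA procedure.

```python
import numpy as np

mrr = np.array([6.1, 7.4, 5.2, 8.8, 7.9, 6.5])       # mm^3/min, dummy values
twr = np.array([0.42, 0.55, 0.31, 0.62, 0.44, 0.38])  # dummy values

def normalize(x, larger_is_better=True):
    if larger_is_better:
        return (x - x.min()) / (x.max() - x.min())
    return (x.max() - x) / (x.max() - x.min())

def grey_coefficient(z, zeta=0.5):
    delta = np.abs(1.0 - z)                            # deviation from the ideal sequence
    return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

grades = (grey_coefficient(normalize(mrr, True)) +
          grey_coefficient(normalize(twr, False))) / 2  # equal weights assumed
print("grey relational grades:", np.round(grades, 3))
print("best experimental run :", int(grades.argmax()) + 1)
```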

  5. Joint state and parameter estimation in particle filtering and stochastic optimization

    Institute of Scientific and Technical Information of China (English)

    Xiaojun YANG; Keyi XING; Kunlin SHI; Quan PAN

    2008-01-01

    In this paper, an adaptive estimation algorithm is proposed for non-linear dynamic systems with unknown static parameters, based on a combination of particle filtering and the Simultaneous Perturbation Stochastic Approximation (SPSA) technique. The estimates of the parameters are obtained by maximum-likelihood estimation and sampling within the particle filtering framework, and SPSA is used for stochastic optimization and to approximate the gradient of the cost function. The proposed algorithm achieves combined estimation of the dynamic state and static parameters of nonlinear systems. Simulation results demonstrate the feasibility and efficiency of the proposed algorithm.
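    The SPSA step itself is compact enough to sketch. The quadratic noisy cost below stands in for the likelihood-based cost of the paper; the gain-sequence constants are the commonly recommended defaults and are assumptions here.

```python
import numpy as np

# SPSA sketch: the simultaneous-perturbation gradient estimate needs only two
# cost evaluations per iteration, regardless of the parameter dimension.
def spsa(cost, theta0, n_iter=500, a=0.2, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k**alpha                                   # step-size gain sequence
        ck = c / k**gamma                                   # perturbation gain sequence
        delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Bernoulli +/-1 perturbation
        g_hat = (cost(theta + ck * delta) - cost(theta - ck * delta)) / (2 * ck * delta)
        theta -= ak * g_hat
    return theta

target = np.array([1.0, -2.0, 0.5])
cost = lambda th: np.sum((th - target) ** 2) + np.random.normal(0, 0.01)  # noisy cost
print("estimated parameters:", np.round(spsa(cost, np.zeros(3)), 3))
```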

  6. Optimization of control parameters of a hot cold controller by means of Simplex type methods.

    Science.gov (United States)

    Porte, C; Caron-Poussin, M; Carot, S; Couriol, C; Moreno, M M; Delacroix, A

    1997-01-01

    This paper describes a hot/cold controller for regulating crystallization operations. The system was identified with a common method (the Broida method) and the parameters were obtained by the Ziegler-Nichols method. The paper shows that this empirical method only allows a qualitative approach to regulation and that, in some instances, the parameters obtained are unreliable and therefore cannot be used to cancel variations between the set point and the actual values. Optimization methods were used to determine the regulation parameters and solve this identification problem. It was found that the weighted centroid method was the best one. PMID:18924791
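    Replacing empirically tuned settings by a Simplex-type search can be sketched as follows. This uses the Nelder-Mead simplex (related to, but not necessarily identical with, the weighted centroid variant of the paper), a first-order process model and an ITAE-type cost; all numerical values are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def itae_cost(gains, dt=0.5, t_end=400.0, tau=60.0, k_proc=1.2):
    """Simulate a PI loop around a first-order process for a unit setpoint step."""
    kp, ki = gains
    y, integ, cost = 0.0, 0.0, 0.0
    for i in range(int(t_end / dt)):
        err = 1.0 - y                          # setpoint = 1
        integ += err * dt
        u = kp * err + ki * integ              # PI control action
        y += dt * (-y + k_proc * u) / tau      # Euler step of the process
        if abs(y) > 1e6:
            return 1e9                         # penalize unstable settings
        cost += (i * dt) * abs(err) * dt       # ITAE criterion
    return cost

res = minimize(itae_cost, x0=[1.0, 0.01], method="Nelder-Mead")
print("optimized (Kp, Ki):", np.round(res.x, 4), "  ITAE:", round(res.fun, 2))
```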

  7. Optimization of Cutting Parameters for Face Milling Titanium Alloy Using MQL

    Institute of Scientific and Technical Information of China (English)

    AHMED Hassan; YAO Zhen-qiang

    2005-01-01

    When using MQL as a cooling technique, many parameters have to be adjusted. The Taguchi method was used in this study to investigate the cutting characteristics of face milling of titanium alloys using PVD-coated inserts. To find the optimal volume removed and surface roughness, an orthogonal array, the signal-to-noise (S/N) ratio and the analysis of variance (ANOVA) were employed. The optimum cutting parameters were obtained. Throughout this study, it was found that the feed rate is the most influential cutting parameter in the face milling of titanium alloys.

  8. On the optimal experimental design for heat and moisture parameter estimation

    CERN Document Server

    Berger, Julien; Mendes, Nathan

    2016-01-01

    In the context of estimating material properties of porous walls based on in-situ measurements and an identification method, this paper presents the concept of Optimal Experiment Design (OED). It aims at searching for the best experimental conditions in terms of the quantity and position of sensors and the boundary conditions imposed on the material. These optimal conditions ensure the maximum accuracy of the identification method and thus of the estimated parameters. The search for the OED is done by using the Fisher information matrix and a priori knowledge of the parameters. The methodology is applied to two case studies. The first one deals with purely conductive heat transfer. The concept of optimal experiment design is detailed and verified with 100 inverse problems for different experiment designs. The second case study combines a strong coupling between heat and moisture transfer through a porous building material. The methodology presented is based on a scientific formalism for efficient planning of experim...
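    The Fisher-information comparison of candidate designs can be sketched with a toy model. Everything below (the analytic response, the prior parameter values, the noise level and the candidate sensor positions) is an assumption chosen only to show how the determinant of the information matrix ranks designs.

```python
import numpy as np

def temperature(x, t, params):
    """Toy analytic response: decaying spatial mode plus a surface-driven term (a, h)."""
    a, h = params
    return np.exp(-a * np.pi**2 * t) * np.cos(np.pi * x) + h * (1 - np.exp(-t))

def fisher_info(x_sensor, times, params, sigma=0.1, eps=1e-6):
    n_p = len(params)
    S = np.zeros((len(times), n_p))                      # sensitivity matrix dT/dp
    for j in range(n_p):
        dp = np.zeros(n_p)
        dp[j] = eps * max(abs(params[j]), 1.0)
        S[:, j] = (temperature(x_sensor, times, params + dp)
                   - temperature(x_sensor, times, params - dp)) / (2 * dp[j])
    return S.T @ S / sigma**2

params_prior = np.array([0.05, 0.8])                     # a priori parameter values
times = np.linspace(0.1, 10.0, 50)
for x in [0.0, 0.25, 0.5]:                               # candidate sensor positions
    F = fisher_info(x, times, params_prior)
    print(f"sensor at x={x:.2f}: det(F) = {np.linalg.det(F):.3e}")   # D-optimality
```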

  9. Optimization of FIR Digital Filters Using a Real Parameter Parallel Genetic Algorithm and Implementations.

    Science.gov (United States)

    Xu, Dexiang

    This dissertation presents a novel method of designing finite word length Finite Impulse Response (FIR) digital filters using a Real Parameter Parallel Genetic Algorithm (RPPGA). This algorithm is derived from basic Genetic Algorithms which are inspired by natural genetics principles. Both experimental results and theoretical studies in this work reveal that the RPPGA is a suitable method for determining the optimal or near optimal discrete coefficients of finite word length FIR digital filters. Performance of RPPGA is evaluated by comparing specifications of filters designed by other methods with filters designed by RPPGA. The parallel and spatial structures of the algorithm result in faster and more robust optimization than basic genetic algorithms. A filter designed by RPPGA is implemented in hardware to attenuate high frequency noise in a data acquisition system for collecting seismic signals. These studies may lead to more applications of the Real Parameter Parallel Genetic Algorithms in Electrical Engineering.

  10. Optimization of process parameters for production of volatile fatty acid, biohydrogen and methane from anaerobic digestion.

    Science.gov (United States)

    Khan, M A; Ngo, H H; Guo, W S; Liu, Y; Nghiem, L D; Hai, F I; Deng, L J; Wang, J; Wu, Y

    2016-11-01

    The anaerobic digestion process has primarily been utilized for production of methane-containing biogas over the past few years. However, the digestion process could also be optimized for producing volatile fatty acids (VFAs) and biohydrogen. This is the first review article that combines the optimization approaches for all three possible products of anaerobic digestion. In this review study, the types and configurations of the bioreactor are discussed for each type of product. This is followed by a review on optimization of common process parameters (e.g. temperature, pH, retention time and organic loading rate) separately for the production of VFA, biohydrogen and methane. This review also includes additional parameters, treatment methods and special additives that have a significant, positive effect on production rate and these products' yield.

  11. Optimization of the blade trailing edge geometric parameters for a small scale ORC turbine

    International Nuclear Information System (INIS)

    In general, the method proposed by Whitfield and Baines is adopted for turbine preliminary design. In this design procedure for the turbine blade trailing edge geometry, two assumptions (ideal gas and zero discharge swirl) and two experience values (WR and γ) are used to obtain the three blade trailing edge geometric parameters: the relative exit flow angle β6, the exit tip radius R6t and the hub radius R6h, for the purpose of maximizing the rotor total-to-static isentropic efficiency. The method above is established based on experience and test results obtained using air as the working fluid, so it neither provides a mathematically optimal solution to guide the optimization of the geometric parameters nor considers the real gas effects of the organic working fluid, which must be taken into account in the ORC turbine design procedure. In this paper, a new preliminary design and optimization method is established for the purpose of reducing the exit kinetic energy loss to improve the turbine efficiency ηts, and the blade trailing edge geometric parameters of a small scale ORC turbine with working fluid R123 are optimized based on this method. The mathematically optimal solution minimizing the exit kinetic energy is deduced, which can be used to design and optimize the exit shroud/hub radius and exit blade angle. The influence of the blade trailing edge geometric parameters on turbine efficiency ηts is then analysed, and the optimal working ranges of these parameters are recommended for working fluid R123. This method is used to reduce the exit kinetic energy loss of an existing ORC turbine from 11.7% to 7%, which indicates the effectiveness of the method. However, the internal passage loss increases from 7.9% to 9.4%, so the only way to consider the influence of geometric parameters on internal passage loss is to give empirical ranges for these parameters, such as the recommended ranges that the value of γ is at 0.3 to 0.4, and the value

  12. Automated measurement of parameters related to the deformities of lower limbs based on x-rays images.

    Science.gov (United States)

    Wojciechowski, Wadim; Molka, Adrian; Tabor, Zbisław

    2016-03-01

    Measurement of the deformation of the lower limbs in current standard full-limb X-ray images presents significant challenges to radiologists and orthopedists. The precision of these measurements is degraded by inexact positioning of the leg during image acquisition, problems with selecting reliable anatomical landmarks in projective X-ray images, and inevitable errors of manual measurements. The influence of the random errors resulting from the last two factors on the precision of the measurement can be reduced if an automated measurement method is used instead of a manual one. In the paper a framework for automated measurement of various metric and angular quantities used in the description of lower extremity deformation in full-limb frontal X-ray images is described. The results of automated measurements are compared with manual measurements. These results demonstrate that an automated method can be a valuable alternative to manual measurements.

  13. Improving flash flood forecasting with distributed hydrological model by parameter optimization

    Science.gov (United States)

    Chen, Yangbo

    2016-04-01

    In China, a flash flood is usually regarded as a flood occurring in a small or medium sized watershed with a drainage area of less than 200 km2, mainly induced by heavy rain and occurring where hydrological observations are lacking. Flash floods are widely observed in China and are the floods causing the most casualties nowadays in the country. Due to hydrological data scarcity, lumped hydrological models, which require a lot of observed hydrological data to calibrate model parameters, are difficult to employ for flash flood forecasting. A physically based distributed hydrological model discretizes the terrain of the whole watershed into a number of grid cells at fine resolution, assimilates different terrain data and precipitation to different cells, and derives model parameters from the terrain properties, thus having the potential to be used in flash flood forecasting and to improve flash flood prediction capability. In this study, the Liuxihe Model, a physically based distributed hydrological model mainly proposed for watershed flood forecasting, is employed to simulate flash floods in the Ganzhou area in southeast China, and models have been set up in 5 watersheds. Model parameters have been derived from the terrain properties including the DEM, the soil type and the land use type, but the results show that the flood simulation uncertainty is high, which may be caused by parameter uncertainty, and some kind of uncertainty control is needed before the model can be used in real-time flash flood forecasting. Considering that currently many small and medium sized watersheds in China have set up hydrological observation networks, and a few flood events could be collected, these may be used for model parameter optimization. For this reason, an automatic model parameter optimization algorithm using Particle Swarm Optimization (PSO) is developed to optimize the model parameters, and it has been found that model parameters optimized with even only one observed flood event could largely reduce the flood

  14. Automated Identification of the Heart Wall Throughout the Entire Cardiac Cycle Using Optimal Cardiac Phase for Extracted Features

    Science.gov (United States)

    Takahashi, Hiroki; Hasegawa, Hideyuki; Kanai, Hiroshi

    2011-07-01

    In most methods for evaluation of cardiac function based on echocardiography, the heart wall is currently identified manually by an operator. However, this task is very time-consuming and suffers from inter- and intraobserver variability. The present paper proposes a method that uses multiple features of ultrasonic echo signals for automated identification of the heart wall region throughout an entire cardiac cycle. In addition, the optimal cardiac phase to select a frame of interest, i.e., the frame for the initiation of tracking, was determined. The heart wall region at the frame of interest in this cardiac phase was identified by the expectation-maximization (EM) algorithm, and heart wall regions in the following frames were identified by tracking each point classified in the initial frame as the heart wall region using the phased tracking method. The results for two subjects indicate the feasibility of the proposed method in the longitudinal axis view of the heart.

  15. Estimating stellar effective temperatures and detected angular parameters using stochastic particle swarm optimization

    CERN Document Server

    Zhang, Chuan-Xin; Zhang, Hao-Wei; Shuai, Yong; Tan, He-Ping

    2016-01-01

    Considering features of stellar spectral radiation and survey explorers, we established a computational model for stellar effective temperatures, detected angular parameters, and gray rates. Using known stellar flux data in some band, we estimated stellar effective temperatures and detected angular parameters using stochastic particle swarm optimization (SPSO). We first verified the reliability of SPSO, and then determined reasonable parameters that produced highly accurate estimates under certain gray deviation levels. Finally, we calculated 177,860 stellar effective temperatures and detected angular parameters using the Midcourse Space Experiment (MSX) catalog data. These derived stellar effective temperatures were accurate when we compared them to known values from the literature. This research made full use of catalog data and presented an original technique for studying stellar characteristics. It proposed a novel method for calculating stellar effective temperatures and detected angular parameters, and pro...

  16. Estimating stellar effective temperatures and detected angular parameters using stochastic particle swarm optimization

    Science.gov (United States)

    Zhang, Chuan-Xin; Yuan, Yuan; Zhang, Hao-Wei; Shuai, Yong; Tan, He-Ping

    2016-09-01

    Considering features of stellar spectral radiation and sky surveys, we established a computational model for stellar effective temperatures, detected angular parameters and gray rates. Using known stellar flux data in some bands, we estimated stellar effective temperatures and detected angular parameters using stochastic particle swarm optimization (SPSO). We first verified the reliability of SPSO, and then determined reasonable parameters that produced highly accurate estimates under certain gray deviation levels. Finally, we calculated 177 860 stellar effective temperatures and detected angular parameters using data from the Midcourse Space Experiment (MSX) catalog. These derived stellar effective temperatures were accurate when we compared them to known values from the literature. This research makes full use of catalog data and presents an original technique for studying stellar characteristics. It proposes a novel method for calculating stellar effective temperatures and detecting angular parameters, and provides theoretical and practical data for finding information about radiation in any band.

  17. THE IMPLEMENTATION OF TAGUCHI METHODOLOGY FOR OPTIMIZATION OF END MILLING PROCESS PARAMETER OF MILD STEEL

    Directory of Open Access Journals (Sweden)

    ANIL CHOUBEY

    2012-07-01

    Full Text Available In this paper the Taguchi method is applied to find the optimum process parameters for end milling of mild steel. An L9 orthogonal array, the Taguchi method and analysis of variance (ANOVA) are used to formulate the experimental layout, to analyse the effect of each parameter on the machining characteristics, and to predict the optimal choice for each end milling parameter, such as spindle speed, feed rate, depth of cut and width of cut; the effect of these parameters on the material removal rate (MRR) and surface roughness (SR) is also analysed. The results obtained by the Taguchi method match those from ANOVA, and cutting speed is the most influential parameter. The analysis of the Taguchi method reveals that, in general, the spindle speed significantly affects the SR, while the feed mainly affects the MRR. Experimental results are provided to verify this approach.

  18. Computer-Assisted Optimization of Electrodeposited Hydroxyapatite Coating Parameters on Medical Alloys

    Science.gov (United States)

    Coşkun, M. İbrahim; Karahan, İsmail H.; Yücel, Yasin; Golden, Teresa D.

    2016-04-01

    CoCrMo bio-metallic alloys were coated with a hydroxyapatite (HA) film by electrodeposition using various electrochemical parameters. Response surface methodology and central composite design were used to optimize deposition parameters such as electrolyte pH, deposition potential, and deposition time. The effects of the coating parameters were evaluated within the limits of solution pH (3.66 to 5.34), deposition potential (-1.13 to -1.97 V), and deposition time (6.36 to 73.64 minutes). A 5-level-3-factor experimental plan was used to determine ideal deposition parameters. Optimum conditions for the deposition parameters of the HA coating with high in vitro corrosion performance were determined as electrolyte pH of 5.00, deposition potential of -1.8 V, and deposition time of 20 minutes.
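    The design-and-fit part of such a response-surface study can be sketched as follows. The rotatable three-factor central composite design is standard, but the "corrosion performance" response below is a synthetic stand-in for the measured data, and the coded-to-physical mapping of pH, potential and time is omitted.

```python
import numpy as np
from itertools import product

alpha = 1.682                                            # rotatable CCD axial distance
factorial = np.array(list(product([-1, 1], repeat=3)), dtype=float)
axial = np.vstack([alpha * np.eye(3), -alpha * np.eye(3)])
center = np.zeros((6, 3))
X = np.vstack([factorial, axial, center])                # coded pH, potential, time

def true_response(x):                                    # synthetic stand-in response
    return (80 - 4 * x[:, 0]**2 - 3 * x[:, 1]**2 - 2 * x[:, 2]**2
            + 2 * x[:, 0] + 1.5 * x[:, 1] * x[:, 2])

rng = np.random.default_rng(3)
y = true_response(X) + rng.normal(0, 0.5, len(X))

def design_matrix(x):                                    # full quadratic model terms
    cols = [np.ones(len(x))] + [x[:, i] for i in range(3)] \
         + [x[:, i] * x[:, j] for i in range(3) for j in range(i, 3)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# locate the predicted optimum on a coarse grid of coded settings
grid = np.array(list(product(np.linspace(-alpha, alpha, 21), repeat=3)))
pred = design_matrix(grid) @ beta
best = grid[pred.argmax()]
print("predicted optimum (coded pH, potential, time):", np.round(best, 2))
print("predicted response at optimum:", round(float(pred.max()), 2))
```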

  19. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation.

    Science.gov (United States)

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to establish the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, the orthogonal design and simulated annealing operation are incorporated in the CS algorithm to enhance the exploitation search ability. Then the proposed algorithm is used to establish parameters of the Lorenz chaotic system and Chen chaotic system under the noiseless and noise condition, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, genetic algorithm, and particle swarm optimization algorithm, and the compared results demonstrate the method is energy-efficient and superior. PMID:26880874
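    A minimal sketch of the basic cuckoo search with Lévy flights, applied to estimating the three Lorenz parameters from a short noiseless trajectory, is given below; the orthogonal-design and simulated-annealing refinements of the ICS and the noise experiments of the paper are not reproduced, and all settings (trajectory length, bounds, step scale) are assumptions.

```python
import math
import numpy as np

def lorenz_traj(p, x0=(1.0, 1.0, 1.0), dt=0.01, n=200):
    s, r, b = p
    x = np.array(x0, dtype=float)
    out = np.empty((n, 3))
    for i in range(n):                                    # simple Euler integration
        dx = np.array([s * (x[1] - x[0]),
                       x[0] * (r - x[2]) - x[1],
                       x[0] * x[1] - b * x[2]])
        x = x + dt * dx
        out[i] = x
    return out

target = lorenz_traj([10.0, 28.0, 8.0 / 3.0])             # "observed" trajectory
cost = lambda p: np.mean((lorenz_traj(p) - target) ** 2)

def levy(size, rng, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return sigma * rng.normal(0, 1, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

def cuckoo_search(cost, lo, hi, n_nests=15, n_iter=100, pa=0.25, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    nests = rng.uniform(lo, hi, (n_nests, len(lo)))
    fit = np.array([cost(n) for n in nests])
    for _ in range(n_iter):
        best = nests[fit.argmin()]
        for i in range(n_nests):                          # Levy-flight move toward best
            new = np.clip(nests[i] + 0.01 * levy(len(lo), rng) * (nests[i] - best), lo, hi)
            f_new = cost(new)
            if f_new < fit[i]:
                nests[i], fit[i] = new, f_new
        abandon = rng.random(n_nests) < pa                # rebuild a fraction of nests
        nests[abandon] = rng.uniform(lo, hi, (int(abandon.sum()), len(lo)))
        fit[abandon] = [cost(n) for n in nests[abandon]]
    return nests[fit.argmin()], fit.min()

best_p, best_f = cuckoo_search(cost, lo=[5, 20, 1], hi=[15, 35, 5])
print("estimated (sigma, rho, beta):", np.round(best_p, 3), "  cost:", round(best_f, 4))
```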

  20. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Jun Wang

    2016-01-01

    Full Text Available This paper proposes an improved cuckoo search (ICS algorithm to establish the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS algorithm, the orthogonal design and simulated annealing operation are incorporated in the CS algorithm to enhance the exploitation search ability. Then the proposed algorithm is used to establish parameters of the Lorenz chaotic system and Chen chaotic system under the noiseless and noise condition, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, genetic algorithm, and particle swarm optimization algorithm, and the compared results demonstrate the method is energy-efficient and superior.

  2. Process parameters optimization for friction stir welding of RDE-40 aluminium alloy using Taguchi technique

    Institute of Scientific and Technical Information of China (English)

    A.K.LAKSHMINARAYANAN; V.BALASUBRAMANIAN

    2008-01-01

    The Taguchi approach was applied to determine the most influential control factors that yield better tensile strength of the joints of friction stir welded RDE-40 aluminium alloy. In order to evaluate the effect of process parameters such as tool rotational speed, traverse speed and axial force on the tensile strength of friction stir welded RDE-40 aluminium alloy, the Taguchi parametric design and optimization approach was used. Through the Taguchi parametric design approach, the optimum levels of the process parameters were determined. The results indicate that the rotational speed, welding speed and axial force are the significant parameters in deciding the tensile strength of the joint. The predicted optimal value of the tensile strength of friction stir welded RDE-40 aluminium alloy is 303 MPa. The results were confirmed by further experiments.

  3. Impact of Parameter Variations and Optimization on DG-PNIN Tunnel FET

    Directory of Open Access Journals (Sweden)

    Priya Jhalani

    2014-04-01

    Full Text Available The downscaling of conventional MOSFETs has reached its fundamental limits. TFETs are very attractive devices for low power applications because of their low off-current and potential for a smaller subthreshold slope. In this paper, the impact of various parameter variations on the performance of a DG-PNIN tunnel field effect transistor is investigated. In this work, variations in gate oxide material, source doping, channel doping, drain doping, pocket doping and body thickness are studied, and all these parameters are optimized as performance boosters to give better current characteristics. After optimization with all these performance boosters, the device shows improved performance with increased on-current and reduced threshold voltage, and the Ion/Ioff ratio is > 10^6.

  4. Optimizing Friction Stir Welding via Statistical Design of Tool Geometry and Process Parameters

    Science.gov (United States)

    Blignault, C.; Hattingh, D. G.; James, M. N.

    2012-06-01

    This article considers optimization procedures for friction stir welding (FSW) in 5083-H321 aluminum alloy, via control of weld process parameters and tool design modifications. It demonstrates the potential utility of the "force footprint" (FF) diagram in providing a real-time graphical user interface (GUI) for process optimization of FSW. Multiple force, torque, and temperature responses were recorded during FS welding using 24 different tool pin geometries, and these data were statistically analyzed to determine the relative influence of a number of combinations of important process and tool geometry parameters on tensile strength. Desirability profile charts are presented, which show the influence of seven key combinations of weld process variables on tensile strength. The model developed in this study allows the weld tensile strength to be predicted for other combinations of tool geometry and process parameters, to within an average error of 13%. General guidelines for tool profile selection and the likelihood of influencing weld tensile strength are also provided.

  5. Optimization of process parameters in CNC turning of aluminium alloy using hybrid RSM cum TLBO approach

    Science.gov (United States)

    Rudrapati, R.; Sahoo, P.; Bandyopadhyay, A.

    2016-09-01

    The main aim of the present work is to analyse the significance of turning parameters on surface roughness in computer numerically controlled (CNC) turning while machining an aluminium alloy. Spindle speed, feed rate and depth of cut have been considered as machining parameters. Experimental runs have been conducted as per the Box-Behnken design method. After experimentation, surface roughness is measured using a stylus profilometer. Factor effects have been studied through analysis of variance. Mathematical modelling has been done by response surface methodology to establish relationships between the input parameters and the output response. Finally, process optimization has been carried out with the teaching-learning-based optimization (TLBO) algorithm. The predicted turning condition has been validated through a confirmatory experiment.
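    A minimal TLBO sketch is shown below, minimizing a toy quadratic stand-in for the fitted surface-roughness model; the actual RSM model and parameter ranges of the study are not reproduced, and the coded-variable roughness function is an assumption.

```python
import numpy as np

def tlbo(fun, lo, hi, n_pop=20, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    X = rng.uniform(lo, hi, (n_pop, len(lo)))
    f = np.array([fun(x) for x in X])
    for _ in range(n_iter):
        # teacher phase: move the class toward the best solution
        teacher, mean = X[f.argmin()], X.mean(axis=0)
        Tf = rng.integers(1, 3)                           # teaching factor, 1 or 2
        Xnew = np.clip(X + rng.random(X.shape) * (teacher - Tf * mean), lo, hi)
        fnew = np.array([fun(x) for x in Xnew])
        improve = fnew < f
        X[improve], f[improve] = Xnew[improve], fnew[improve]
        # learner phase: each learner interacts with a random partner
        for i in range(n_pop):
            j = rng.integers(n_pop)
            if j == i:
                continue
            direction = X[i] - X[j] if f[i] < f[j] else X[j] - X[i]
            xi = np.clip(X[i] + rng.random(len(lo)) * direction, lo, hi)
            fi = fun(xi)
            if fi < f[i]:
                X[i], f[i] = xi, fi
    return X[f.argmin()], f.min()

# coded speed, feed, depth of cut in [-1, 1]; toy roughness model (assumed)
roughness = lambda x: 1.8 + 0.4 * x[1] - 0.2 * x[0] + 0.3 * x[1] ** 2 + 0.1 * x[0] * x[2]
best_x, best_f = tlbo(roughness, lo=[-1, -1, -1], hi=[1, 1, 1])
print("optimal coded parameters:", np.round(best_x, 3), "  predicted Ra:", round(best_f, 3))
```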

  6. Connecting parameters optimization on unsymmetrical twin-tower structure linked by sky-bridge

    Institute of Scientific and Technical Information of China (English)

    孙黄胜; 刘默涵; 朱宏平

    2014-01-01

    Based on a simplified 3-DOF model of a twin-tower structure linked by a sky-bridge, the frequency response functions, the displacement power spectral density (PSD) functions, and the time-averaged total vibration energy were derived, assuming white noise as the earthquake excitation. The effects of connecting parameters, such as the linking stiffness ratio and linking damping ratio, on the structural vibration responses were then studied, and the optimal connecting parameters were obtained to minimize the vibration energy of either the individual tower or the integral structure. The influence of the sky-bridge elevation position on the optimal connecting parameters was also discussed. Finally, the distribution characteristics of the top displacement PSD and the structural responses, excited by El Centro, Taft and artificial waves, were compared in both the frequency and time domains. It is found that the connecting parameters at either end of the connection interactively affect the responses of the towers. The optimal connecting parameters can greatly improve the seismic reduction effectiveness of the damping connections, but are unable to reduce the seismic responses of both towers to the best extent simultaneously. It is also indicated that the optimal connecting parameters derived from the simplified 3-DOF model are applicable for two multi-story structures linked by a sky-bridge with dampers. The seismic reduction effectiveness obtained varies from 0.3 to 1.0 for different sky-bridge mass ratios. The displacement responses of the example structures are reduced by approximately 22% with sky-bridge connections.

  7. Statistical Learning in Automated Troubleshooting: Application to LTE Interference Mitigation

    CERN Document Server

    Tiwana, Moazzam Islam; Altman, Zwi

    2010-01-01

    This paper presents a method for automated healing as part of off-line automated troubleshooting. The method combines statistical learning with constraint optimization. The automated healing aims at locally optimizing radio resource management (RRM) or system parameters of cells with poor performance in an iterative manner. The statistical learning processes the data using Logistic Regression (LR) to extract closed form (functional) relations between Key Performance Indicators (KPIs) and Radio Resource Management (RRM) parameters. These functional relations are then processed by an optimization engine which proposes new parameter values. The advantage of the proposed formulation is the small number of iterations required by the automated healing method to converge, making it suitable for off-line implementation. The proposed method is applied to heal an Inter-Cell Interference Coordination (ICIC) process in a 3G Long Term Evolution (LTE) network which is based on soft-frequency reuse scheme. Numerical simulat...

  8. GPRS and Bluetooth Based Devices/Mobile Connectivity Shifting From Manual To Automation For Performance Optimization

    Directory of Open Access Journals (Sweden)

    Nazia Bibi

    2011-09-01

    Full Text Available Many companies and organizations are trying to move towards automation and to provide their workers with internet access on their mobile phones so that routine tasks can be carried out while saving time and resources. The proposed system, based on GPRS technology, aims to solve the problems faced in carrying out routine tasks under mobility. The system is designed so that workers and field staff receive updates on their mobile phones regarding the tasks at hand. The system is beneficial in that it saves resources in terms of time and human effort and cuts down on paperwork. The proposed system has been developed in view of a research study conducted in the software development and telecom industry and provides a high-end solution to customers and fieldworkers who use GPRS technology for transaction updates of databases.

  9. Coastal aquifer management under parameter uncertainty: Ensemble surrogate modeling based simulation-optimization

    Science.gov (United States)

    Janardhanan, S.; Datta, B.

    2011-12-01

    Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence based models are most often used for this purpose where they are trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain. Under these circumstances the application of such approximation surrogates becomes limited. In our study we develop a surrogate model based coupled simulation optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and the aquifer recharge are considered as uncertain values. Three dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters to generate input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives, viz, maximizing total pumping from beneficial wells and minimizing the total pumping from barrier wells for hydraulic control of

  10. Dynamic artificial bee colony algorithm for multi-parameters optimization of support vector machine-based soft-margin classifier

    Science.gov (United States)

    Yan, Yiming; Zhang, Ye; Gao, Fengjiao

    2012-12-01

    This article proposes a `dynamic' artificial bee colony (D-ABC) algorithm for solving optimization problems. It overcomes the poor performance of the artificial bee colony (ABC) algorithm when applied to multi-parameter optimization. A dynamic `activity' factor is introduced into the D-ABC algorithm to speed up convergence and improve the quality of the solution. This D-ABC algorithm is employed for multi-parameter optimization of a support vector machine (SVM)-based soft-margin classifier. Parameter optimization is significant for improving the classification performance of an SVM-based classifier. Classification accuracy is defined as the objective function, and the many parameters, including the `kernel parameter', `cost factor', etc., form a solution vector to be optimized. Experiments demonstrate that the D-ABC algorithm has better performance than traditional methods for this optimization problem, and better parameters of the SVM are obtained which lead to higher classification accuracy.

  11. An Automated, Adaptive Framework for Optimizing Preprocessing Pipelines in Task-Based Functional MRI.

    Science.gov (United States)

    Churchill, Nathan W; Spring, Robyn; Afshin-Pour, Babak; Dong, Fan; Strother, Stephen C

    2015-01-01

    BOLD fMRI is sensitive to blood-oxygenation changes correlated with brain function; however, it is limited by relatively weak signal and significant noise confounds. Many preprocessing algorithms have been developed to control noise and improve signal detection in fMRI. Although the chosen set of preprocessing and analysis steps (the "pipeline") significantly affects signal detection, pipelines are rarely quantitatively validated in the neuroimaging literature, due to complex preprocessing interactions. This paper outlines and validates an adaptive resampling framework for evaluating and optimizing preprocessing choices by optimizing data-driven metrics of task prediction and spatial reproducibility. Compared to standard "fixed" preprocessing pipelines, this optimization approach significantly improves independent validation measures of within-subject test-retest reliability, between-subject activation overlap, and behavioural prediction accuracy. We demonstrate that preprocessing choices function as implicit model regularizers, and that improvements due to pipeline optimization generalize across a range of simple to complex experimental tasks and analysis models. Results are shown for brief scanning sessions (<3 minutes each), demonstrating that with pipeline optimization, it is possible to obtain reliable results and brain-behaviour correlations in relatively small datasets.

  12. An Automated, Adaptive Framework for Optimizing Preprocessing Pipelines in Task-Based Functional MRI.

    Directory of Open Access Journals (Sweden)

    Nathan W Churchill

    Full Text Available BOLD fMRI is sensitive to blood-oxygenation changes correlated with brain function; however, it is limited by relatively weak signal and significant noise confounds. Many preprocessing algorithms have been developed to control noise and improve signal detection in fMRI. Although the chosen set of preprocessing and analysis steps (the "pipeline") significantly affects signal detection, pipelines are rarely quantitatively validated in the neuroimaging literature, due to complex preprocessing interactions. This paper outlines and validates an adaptive resampling framework for evaluating and optimizing preprocessing choices by optimizing data-driven metrics of task prediction and spatial reproducibility. Compared to standard "fixed" preprocessing pipelines, this optimization approach significantly improves independent validation measures of within-subject test-retest reliability, between-subject activation overlap, and behavioural prediction accuracy. We demonstrate that preprocessing choices function as implicit model regularizers, and that improvements due to pipeline optimization generalize across a range of simple to complex experimental tasks and analysis models. Results are shown for brief scanning sessions (<3 minutes each), demonstrating that with pipeline optimization, it is possible to obtain reliable results and brain-behaviour correlations in relatively small datasets.

  13. A Parallel Genetic Algorithm Based Feature Selection and Parameter Optimization for Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Zhi Chen

    2016-01-01

    Full Text Available The extensive applications of support vector machines (SVMs) require an efficient method of constructing an SVM classifier with high classification ability. The performance of an SVM crucially depends on whether the optimal feature subset and SVM parameters can be efficiently obtained. In this paper, a coarse-grained parallel genetic algorithm (CGPGA) is used to simultaneously optimize the feature subset and parameters for the SVM. The distributed topology and migration policy of the CGPGA help find the optimal feature subset and parameters in significantly less time and increase the quality of the solutions found. In addition, a new fitness function, which combines the classification accuracy obtained from the bootstrap method, the number of chosen features, and the number of support vectors, is proposed to guide the search of the CGPGA toward the optimal generalization error. Experimental results on 12 benchmark datasets show that our proposed approach outperforms the genetic algorithm (GA) based method and the grid search method in terms of classification accuracy, number of chosen features, number of support vectors, and running time.
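
    As a sketch of the kind of composite fitness described above, the snippet below scores one candidate solution (a feature mask plus C and gamma) by combining bootstrap classification accuracy, the number of chosen features and the number of support vectors; it is not the parallel CGPGA itself, and the weights and dataset are illustrative assumptions.

```python
# Sketch of a composite fitness for one chromosome (feature mask + C + gamma);
# weights w1..w3 and the number of bootstrap rounds are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.utils import resample

X, y = load_breast_cancer(return_X_y=True)

def fitness(mask, C, gamma, w1=0.95, w2=0.03, w3=0.02, n_boot=5):
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0
    accs, n_sv = [], []
    for b in range(n_boot):
        Xb, yb = resample(X[:, cols], y, random_state=b)      # bootstrap sample
        Xtr, Xte, ytr, yte = train_test_split(Xb, yb, test_size=0.3,
                                              random_state=b)
        clf = SVC(C=C, gamma=gamma).fit(Xtr, ytr)
        accs.append(clf.score(Xte, yte))
        n_sv.append(clf.support_vectors_.shape[0])
    # High accuracy, few features, few support vectors -> high fitness.
    return (w1 * np.mean(accs)
            + w2 * (1.0 - cols.size / X.shape[1])
            + w3 * (1.0 - np.mean(n_sv) / X.shape[0]))

mask = np.random.default_rng(0).integers(0, 2, size=X.shape[1])
print("fitness of a random chromosome:", fitness(mask, C=10.0, gamma=0.01))
```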

  14. Development of optimization model for sputtering process parameter based on gravitational search algorithm

    Science.gov (United States)

    Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.

    2016-07-01

    In the RF magnetron sputtering process, the desirable layer properties are largely influenced by the process parameters and conditions. If the quality of the thin film has not reached its intended level, the experiments have to be repeated until the desired quality is achieved. This research proposes the Gravitational Search Algorithm (GSA) as an optimization model to reduce the time and cost spent on thin film fabrication. The optimization model's engine has been developed in Java. The model is based on the GSA concept, which is inspired by the Newtonian laws of gravity and motion. In this research, the model is expected to optimize four deposition parameters: RF power, deposition time, oxygen flow rate and substrate temperature. The results are promising, and it can be concluded that the performance of the model is satisfactory for this parameter optimization problem. Future work could compare GSA with other nature-inspired algorithms and test them on various sets of data.
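
    A compact sketch of the core Gravitational Search Algorithm update (fitness-derived masses, pairwise attraction, velocity and position updates) is shown below; the "deposition quality" objective is a placeholder rather than a sputtering model, and all constants are illustrative.

```python
# Compact sketch of the core GSA update; the "quality" objective is a
# placeholder for a real deposition-quality model, and every constant
# (G0, decay rate, bounds) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
dim, n_agents, iters = 4, 20, 100        # scaled RF power, time, O2 flow, temperature
lo, hi = np.zeros(dim), np.ones(dim)

def quality(x):                           # placeholder objective (to be minimized)
    return np.sum((x - 0.6) ** 2, axis=-1)

X = rng.uniform(lo, hi, size=(n_agents, dim))
V = np.zeros_like(X)

for t in range(iters):
    f = quality(X)
    G = 100.0 * np.exp(-20.0 * t / iters)                 # decaying gravitational constant
    m = (f.max() - f) / (f.max() - f.min() + 1e-12)       # better fitness -> larger mass
    M = m / (m.sum() + 1e-12)
    diff = X[None, :, :] - X[:, None, :]                  # diff[i, j] = x_j - x_i
    dist = np.linalg.norm(diff, axis=-1) + 1e-12
    # Acceleration a_i = sum_j rand * G * M_j * (x_j - x_i) / dist_ij  (M_i cancels).
    acc = (rng.uniform(size=(n_agents, n_agents, 1))
           * G * M[None, :, None] * diff / dist[:, :, None]).sum(axis=1)
    V = rng.uniform(size=(n_agents, 1)) * V + acc
    X = np.clip(X + V, lo, hi)

best = X[np.argmin(quality(X))]
print("best scaled deposition parameters:", best)
```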

  15. Optimization of Nd:YAG laser welding parameters for sealing small titanium tube ends

    International Nuclear Information System (INIS)

    The purpose of the present study is to optimize Nd:YAG laser welding parameters to seal an iodine-125 radioisotope seed into a titanium capsule. If the end of a small titanium tube is irradiated with a Nd:YAG laser beam and melted down to an adequate length, it coalesces and seals. Accurate control of the melted length of the tube end is the most important factor in obtaining a soundly sealed state. The effects of the laser welding parameters on the melted length were analyzed and optimized by the Taguchi and regression analysis methods. Among the welding parameters, the laser pulse width and focal position had the greatest effects on the S/N ratio of the melted length. Optimal welding conditions were obtained at a pulse width of 0.86 ms and a focal position of 3.18-3.35 mm within the scope of the experiments. Confirmation experiments conducted at the optimal welding conditions showed that both of the titanium tube ends were sealed soundly.
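
    For reference, the sketch below shows how a Taguchi nominal-the-best signal-to-noise ratio could be computed for a response such as melted length over replicated runs of an orthogonal array; the numbers are illustrative, not data from the study.

```python
# Sketch of the Taguchi signal-to-noise calculation for a "nominal-the-best"
# response such as melted length; all measurement values are illustrative.
import numpy as np

# Each row: replicated melted-length measurements (mm) for one run
# of an orthogonal array over (pulse width, focal position, ...).
melted_length = np.array([
    [0.52, 0.55, 0.53],
    [0.61, 0.58, 0.60],
    [0.48, 0.50, 0.49],
])

mean = melted_length.mean(axis=1)
var = melted_length.var(axis=1, ddof=1)
sn_nominal = 10.0 * np.log10(mean ** 2 / var)   # nominal-the-best S/N (dB)

for run, sn in enumerate(sn_nominal, start=1):
    print(f"run {run}: mean = {mean[run - 1]:.3f} mm, S/N = {sn:.1f} dB")
```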

  16. [Simulation of vegetation indices optimizing under retrieval of vegetation biochemical parameters based on PROSPECT + SAIL model].

    Science.gov (United States)

    Wu, Ling; Liu, Xiang-Nan; Zhou, Bo-Tian; Liu, Chuan-Hao; Li, Lu-Feng

    2012-12-01

    This study analyzed the sensitivities of three vegetation biochemical parameters [chlorophyll content (Cab), leaf water content (Cw), and leaf area index (LAI)] to changes in canopy reflectance, considering the effects of each parameter on the wavelength regions of canopy reflectance, and selected three vegetation indices as the optimization comparison targets of the cost function. Then, Cab, Cw, and LAI were estimated based on the particle swarm optimization algorithm and the PROSPECT + SAIL model. The results showed that retrieval with vegetation indices as the optimization comparison targets of the cost function performed better than retrieval with all spectral reflectance. The correlation coefficients (R2) between the measured and estimated values of Cab, Cw, and LAI were 90.8%, 95.7%, and 99.7%, and the root mean square errors of Cab, Cw, and LAI were 4.73 microg x cm(-2), 0.001 g x cm(-2), and 0.08, respectively. It is suggested that adopting vegetation indices as the optimization comparison targets of the cost function can effectively improve the efficiency and precision of the retrieval of biochemical parameters based on the PROSPECT + SAIL model.

  17. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis

    Directory of Open Access Journals (Sweden)

    Tashkova Katerina

    2011-10-01

    Full Text Available Abstract Background We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. Results We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA, particle-swarm optimization (PSO, and differential evolution (DE, as well as a local-search derivative-based algorithm 717 (A717 to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Conclusions Overall, the global meta-heuristic methods (DASA, PSO, and DE clearly and significantly outperform the local derivative-based method (A717. Among the three meta-heuristics, differential evolution (DE performs best in terms of the objective function, i.e., reconstructing the output, and in terms of
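
    The sketch below shows the general shape of such an ODE parameter estimation task using differential evolution from SciPy; a two-state toy model stands in for the Rab5/Rab7 endosome-maturation model, and the bounds and noise level are illustrative assumptions.

```python
# Sketch: differential evolution fitting parameters of a small ODE system
# to noisy "measurements"; the 2-state toy model stands in for the
# Rab5/Rab7 endosome-maturation dynamics.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

t_obs = np.linspace(0.0, 10.0, 40)

def rhs(t, x, k1, k2):
    r5, r7 = x
    return [-k1 * r5, k1 * r5 - k2 * r7]

def simulate(params):
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], t_eval=t_obs, args=tuple(params))
    return sol.y

true = (0.8, 0.3)                                   # "unknown" parameters
rng = np.random.default_rng(0)
data = simulate(true) + rng.normal(scale=0.02, size=(2, t_obs.size))

def cost(params):
    # Sum of squared residuals between model output and noisy data.
    return np.sum((simulate(params) - data) ** 2)

result = differential_evolution(cost, bounds=[(0.01, 2.0), (0.01, 2.0)], seed=0)
print("estimated (k1, k2):", result.x)
```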

  18. Optimization of thread partitioning parameters in speculative multithreading based on artificial immune algorithm

    Institute of Scientific and Technical Information of China (English)

    Yu-xiang LI; Yin-liang ZHAO; Bin LIU; Shuo JI

    2015-01-01

    Thread partition plays an important role in speculative multithreading (SpMT) for automatic parallelization of irregular programs. Using unified values of partition parameters to partition different applications means that no application can have its own optimal partition scheme. In this paper, five parameters affecting thread partition are extracted from heuristic rules. They are the dependence threshold (DT), lower limit of thread size (TSL), upper limit of thread size (TSU), lower limit of spawning distance (SDL), and upper limit of spawning distance (SDU). Their ranges are determined in accordance with heuristic rules, and their step sizes are set empirically. With speedup as the objective function, all combinations of the five threshold values form the solution space, and our aim is to search for the best combination to obtain the best thread granularity, thread dependence, and spawning distance, so that every application has its best partition scheme. The issue can be treated as a single-objective optimization problem. We use the artificial immune algorithm (AIA) to search for the optimal solution. On Prophet, a generic SpMT processor used to evaluate the performance of multithreaded programs, the Olden benchmarks are used to implement the process. Experiments show that we can obtain the optimal parameter values for every benchmark, and Olden benchmarks partitioned with the optimized parameter values deliver a performance improvement of 3.00% on a 4-core platform compared with a machine-learning-based approach, and 8.92% compared with a heuristics-based approach.

  19. Extracellular voltage threshold settings can be tuned for optimal encoding of movement and stimulus parameters

    Science.gov (United States)

    Oby, Emily R.; Perel, Sagi; Sadtler, Patrick T.; Ruff, Douglas A.; Mischel, Jessica L.; Montez, David F.; Cohen, Marlene R.; Batista, Aaron P.; Chase, Steven M.

    2016-06-01

    Objective. A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain–computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). Approach. We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. Main Results. The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. Significance. How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent
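
    A sketch of the threshold-sweep idea is given below on purely synthetic data: for each candidate voltage threshold, threshold crossings are counted per trial and scored by how well the counts track a trial parameter; a simple correlation stands in for the information measures used in the study.

```python
# Sketch of a voltage-threshold sweep: for each candidate threshold, count
# crossings per trial and score how well the counts predict a trial parameter
# (here a synthetic "speed" value). All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 2000
speed = rng.uniform(0.0, 1.0, size=n_trials)

# Synthetic extracellular traces: spike-like negative deflections whose
# rate grows with speed, buried in Gaussian noise.
traces = rng.normal(0.0, 1.0, size=(n_trials, n_samples))
for i, s in enumerate(speed):
    idx = rng.choice(n_samples, size=int(5 + 40 * s), replace=False)
    traces[i, idx] -= 4.0

def crossing_counts(traces, thr):
    below = traces < thr
    # Count downward crossings (samples that newly fall below threshold).
    return np.count_nonzero(below[:, 1:] & ~below[:, :-1], axis=1)

for thr in np.arange(-5.0, -1.0, 0.5):
    counts = crossing_counts(traces, thr)
    r = np.corrcoef(counts, speed)[0, 1]
    print(f"threshold {thr:4.1f} sigma: |r| with speed = {abs(r):.2f}")
```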

  20. EXPERIMENTAL TWO-FACTOR OPTIMIZATION OF PARAMETERS OF THE UNIVERSAL SOIL-PROCESSING INSTRUMENT

    Directory of Open Access Journals (Sweden)

    Popov I. V.

    2015-10-01

    Full Text Available The article considers the problem of reforestation on of processing stations, such as coupe, gully and mountain slopes. To improve the efficiency of the planting work proposed a construction of universal soil-processing instrument (USPI, is able to form discrete planting spot in the form of spot mounding in conditions temporarily humid soil or spot area (with removal of the top layer on drained soils with simultaneous formation of planting cup for planting of forest plantation. For assessing effectiveness of his work, there was developed an experimental sample of the USPI and conducted its field trials. During the two-factor solution of the problem of optimization of the performance of the USPI there were selected optimization criteria, namely performance, quality and economic feasibility of work instrument, as well as the varied parameters exerting the most influence. To detection the analytical dependences between these parameters, we have performed a series of nine experiments, performed the approximation of functions by polynomials of second order. The result was obtained analytical formulas characterizing the influence of the varied parameters of the USPI on the quality of his work. Also, we have found graphical surfaces response and performed a visual analysis , which allowed determining the optimal values of the varied parameters of the USPI

  1. Effect of experimental parameters on optimal reflection of light from opaque media

    Science.gov (United States)

    Anderson, Benjamin R.; Gunawidjaja, Ray; Eilers, Hergen

    2016-01-01

    Previously we considered the effect of experimental parameters on optimized transmission through opaque media using spatial light modulator (SLM)-based wavefront shaping. In this study we consider the opposite geometry, in which we optimize reflection from an opaque surface such that the backscattered light is focused onto a spot on an imaging detector. By systematically varying different experimental parameters (genetic algorithm iterations, bin size, SLM active area, target area, spot size, and sample angle with respect to the optical axis) and optimizing the reflected light we determine how each parameter affects the intensity enhancement. We find that the effects of the experimental parameters on the enhancement are similar to those measured for a transmissive geometry, but with the exact functional forms changed due to the different geometry and the use of a genetic algorithm instead of an iterative algorithm. Additionally, we find preliminary evidence of greater enhancements than predicted by random matrix theory, suggesting a possibly new physical mechanism to be investigated in future work.

  2. Fully automated molecular biology routines on a plasmid-based functional proteomic workcell: Evaluation and Characterization of Yeast Strains Optimized for Growth on Xylose Expressing "Stealth" Insecticidal Peptides.

    Science.gov (United States)

    Optimization of genes important to production of fuel ethanol from hemicellulosic biomass for use in developing improved commercial yeast strains is necessary to meet the rapidly expanding need for ethanol. The United States Department of Agriculture has developed a fully automated platform for mol...

  3. Prediction and optimization of friction welding parameters for super duplex stainless steel (UNS S32760) joints

    International Nuclear Information System (INIS)

    Highlights: • Corrosion resistance and impact strength – predicted by response surface methodology. • Burn off length has highest significance on corrosion resistance. • Friction force is a strong determinant in changing impact strength. • Pareto front points generated by genetic algorithm aid to fix input control variable. • Pareto front will be a trade-off between corrosion resistance and impact strength. - Abstract: Friction welding finds widespread industrial use as a mass production process for joining materials. Friction welding process allows welding of several materials that are extremely difficult to fusion weld. Friction welding process parameters play a significant role in making good quality joints. To produce a good quality joint it is important to set up proper welding process parameters. This can be done by employing optimization techniques. This paper presents a multi objective optimization method for optimizing the process parameters during friction welding process. The proposed method combines the response surface methodology (RSM) with an intelligent optimization algorithm, i.e. genetic algorithm (GA). Corrosion resistance and impact strength of friction welded super duplex stainless steel (SDSS) (UNS S32760) joints were investigated considering three process parameters: friction force (F), upset force (U) and burn off length (B). Mathematical models were developed and the responses were adequately predicted. Direct and interaction effects of process parameters on responses were studied by plotting graphs. Burn off length has high significance on corrosion current followed by upset force and friction force. In the case of impact strength, friction force has high significance followed by upset force and burn off length. Multi objective optimization for maximizing the impact strength and minimizing the corrosion current (maximizing corrosion resistance) was carried out using GA with the RSM model. The optimization procedure resulted in
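
    As a sketch of the RSM-plus-optimization workflow described above, the snippet below fits second-order response surfaces for the two responses and screens a candidate grid for non-dominated points; the regression data are synthetic and the brute-force Pareto screen stands in for the genetic algorithm.

```python
# Sketch: quadratic response surfaces for corrosion current and impact
# strength over (friction force, upset force, burn-off length), then a
# brute-force Pareto screen of the design box. All data are illustrative.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform([20, 30, 2], [60, 90, 6], size=(30, 3))       # F, U, B (coded units)
i_corr = (1e-6 * (2.0 + 0.02 * (X[:, 2] - 4.0) ** 2 + 0.01 * X[:, 1])
          + rng.normal(scale=2e-8, size=30))
impact = (60.0 - 0.02 * (X[:, 0] - 45.0) ** 2 + 0.1 * X[:, 1]
          + rng.normal(scale=0.5, size=30))

rsm_corr = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X, i_corr)
rsm_imp = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X, impact)

# Candidate grid, then keep non-dominated points
# (minimize corrosion current, maximize impact strength).
cand = rng.uniform([20, 30, 2], [60, 90, 6], size=(5000, 3))
f1, f2 = rsm_corr.predict(cand), rsm_imp.predict(cand)
pareto = []
for i in range(len(cand)):
    dominated = np.any((f1 <= f1[i]) & (f2 >= f2[i])
                       & ((f1 < f1[i]) | (f2 > f2[i])))
    if not dominated:
        pareto.append(i)
print(f"{len(pareto)} non-dominated candidates out of {len(cand)}")
```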

  4. The same number of optimized parameters scheme for determining intermolecular interaction energies

    DEFF Research Database (Denmark)

    Kristensen, Kasper; Ettenhuber, Patrick; Eriksen, Janus Juul;

    2015-01-01

    We propose the Same Number Of Optimized Parameters (SNOOP) scheme as an alternative to the counterpoise method for treating basis set superposition errors in calculations of intermolecular interaction energies. The key point of the SNOOP scheme is to enforce that the number of optimized wave...... as numerically. Numerical results for second-order Møller-Plesset perturbation theory (MP2) and coupled-cluster with single, double, and approximate triple excitations (CCSD(T)) show that the SNOOP scheme in general outperforms the uncorrected and counterpoise approaches. Furthermore, we show that SNOOP...

  5. Optimal Design of Measurement Programs for the Parameter Identification of Dynamic Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

    1993-01-01

    The design of a measurement program devoted to parameter identification of structural dynamic systems is considered. The design problem is formulated as an optimization problem to minimize the total expected cost, that is, the cost of failure and the cost of the measurement program. All the...... calculations are based on a priori knowledge and engineering judgement. One of the contributions of the approach is that the optimal number of sensors can be estimated. This is shown in a numerical example where the proposed approach is demonstrated. The example is concerned with design of a measurement

  6. Optimal Input Design for Aircraft Parameter Estimation using Dynamic Programming Principles

    Science.gov (United States)

    Morelli, Eugene A.; Klein, Vladislav

    1990-01-01

    A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.

  7. Multi-parameter optimization of a nanomagnetic system for spintronic applications

    Energy Technology Data Exchange (ETDEWEB)

    Morales Meza, Mishel [Centro de Investigación en Materiales Avanzados, S.C. (CIMAV), Chihuahua/Monterrey, 120 Avenida Miguel de Cervantes, 31109 Chihuahua (Mexico); Zubieta Rico, Pablo F. [Centro de Investigación en Materiales Avanzados, S.C. (CIMAV), Chihuahua/Monterrey, 120 Avenida Miguel de Cervantes, 31109 Chihuahua (Mexico); Centro de Investigación y de Estudios Avanzados del IPN (CINVESTAV) Querétaro, Libramiento Norponiente 2000, Fracc. Real de Juriquilla, 76230 Querétaro (Mexico); Horley, Paul P., E-mail: paul.horley@cimav.edu.mx [Centro de Investigación en Materiales Avanzados, S.C. (CIMAV), Chihuahua/Monterrey, 120 Avenida Miguel de Cervantes, 31109 Chihuahua (Mexico); Sukhov, Alexander [Institut für Physik, Martin-Luther Universität Halle-Wittenberg, 06120 Halle (Saale) (Germany); Vieira, Vítor R. [Centro de Física das Interacções Fundamentais (CFIF), Instituto Superior Técnico, Universidade Técnica de Lisboa, Avenida Rovisco Pais, 1049-001 Lisbon (Portugal)

    2014-11-15

    Magnetic properties of nano-particles feature many interesting physical phenomena that are essentially important for the creation of a new generation of spin-electronic devices. The magnetic stability of the nano-particles can be improved by the formation of ordered particle arrays, which should be optimized over several parameters. Here we report a successful optimization with respect to inter-particle distance and applied field frequency, allowing a roughly three-fold reduction in the coercivity of a particle array compared with that of a single particle, which opens new perspectives for the development of new spintronic devices.

  8. Parameter estimation and uncertainty quantification in a biogeochemical model using optimal experimental design methods

    Science.gov (United States)

    Reimer, Joscha; Piwonski, Jaroslaw; Slawig, Thomas

    2016-04-01

    The statistical significance of any model-data comparison strongly depends on the quality of the used data and the criterion used to measure the model-to-data misfit. The statistical properties (such as mean values, variances and covariances) of the data should be taken into account by choosing a criterion such as, e.g., ordinary, weighted or generalized least squares. Moreover, the criterion can be restricted to regions or model quantities which are of special interest. This choice influences the quality of the model output (also for quantities that are not measured) and the results of a parameter estimation or optimization process. We have estimated the parameters of a three-dimensional and time-dependent marine biogeochemical model describing the phosphorus cycle in the ocean. For this purpose, we have developed a statistical model for measurements of phosphate and dissolved organic phosphorus. This statistical model includes variances and correlations varying with time and location of the measurements. We compared the obtained estimations of model output and parameters for different criteria. Another question is whether (and which) further measurements would increase the model's quality at all. Using experimental design criteria, the information content of measurements can be quantified. This may refer to the uncertainty in unknown model parameters as well as the uncertainty regarding which model is closer to reality. By (another) optimization, optimal measurement properties such as locations, time instants and quantities to be measured can be identified. We have optimized such properties for additional measurements for the parameter estimation of the marine biogeochemical model. For this purpose, we have quantified the uncertainty in the optimal model parameters and the model output itself regarding the uncertainty in the measurement data using the (Fisher) information matrix. Furthermore, we have calculated the uncertainty reduction by additional measurements depending on time
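
    The sketch below illustrates one way such parameter uncertainty could be quantified with a Fisher information matrix, using finite-difference sensitivities of a toy model at the measurement points; the model, noise level and parameter values are illustrative assumptions, not the biogeochemical simulation itself.

```python
# Sketch: Fisher-information-based uncertainty for a parameter estimate.
# Sensitivities are taken by finite differences of a toy model standing in
# for the simulator; the noise covariance is assumed diagonal here.
import numpy as np

t = np.linspace(0.0, 1.0, 25)                 # "measurement" times/locations

def model(p):                                  # toy stand-in for the simulator
    a, b = p
    return a * np.exp(-b * t)

p_hat = np.array([1.2, 0.7])                   # estimated parameters
sigma = 0.05                                   # measurement std (assumed known)

# Jacobian of model output w.r.t. parameters by central differences.
eps = 1e-6
J = np.zeros((t.size, p_hat.size))
for j in range(p_hat.size):
    dp = np.zeros_like(p_hat)
    dp[j] = eps
    J[:, j] = (model(p_hat + dp) - model(p_hat - dp)) / (2 * eps)

F = J.T @ J / sigma ** 2                       # Fisher information matrix
cov = np.linalg.inv(F)                         # Cramer-Rao lower bound on covariance
print("parameter std estimates:", np.sqrt(np.diag(cov)))
```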

  9. Extraction of Cole parameters from the electrical bioimpedance spectrum using stochastic optimization algorithms.

    Science.gov (United States)

    Gholami-Boroujeny, Shiva; Bolic, Miodrag

    2016-04-01

    Fitting measured bioimpedance spectroscopy (BIS) data to the Cole model and then extracting the Cole parameters is a common practice in BIS applications. The extracted Cole parameters can then be analysed as descriptors of tissue electrical properties. For a better evaluation of the physiological or pathological properties of biological tissue, accurate extraction of the Cole parameters is of great importance. This paper proposes an improved Cole parameter extraction based on the bacterial foraging optimization (BFO) algorithm. We employed simulated datasets to test the performance of the BFO fitting method with regard to parameter extraction accuracy and noise sensitivity, and we compared the results with those of a least squares (LS) fitting method. The BFO method showed better robustness to noise and higher accuracy in terms of the extracted parameters. In addition, we applied our method to experimental data where bioimpedance measurements were obtained from the forearm in three different arm positions. The goal of the experiment was to explore how robust the Cole parameters are in classifying the position of the arm for different people and at different measurement times. The extracted Cole parameters obtained by the LS and BFO methods were applied to different classifiers. Two other evolutionary algorithms, GA and PSO, were also used for comparison purposes. We showed that when the classifiers are fed with the feature sets extracted by the BFO fitting method, higher accuracy is obtained on both training and test data. PMID:26215520
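
    For orientation, the Cole model is Z(w) = R_inf + (R0 - R_inf) / (1 + (j*w*tau)^alpha), and the sketch below fits its four parameters to synthetic spectra with a local least-squares solver; in the setting of the record above, a stochastic optimizer such as BFO, GA or PSO would take the place of the local fit. The frequency range, true values and noise level are illustrative.

```python
# Sketch: fit Cole parameters (R0, Rinf, tau, alpha) to synthetic BIS data.
# A local least-squares fit is used here; a stochastic optimizer (BFO, GA,
# PSO) would replace `least_squares` in the paper's setting.
import numpy as np
from scipy.optimize import least_squares

def cole(params, w):
    r0, rinf, tau, alpha = params
    return rinf + (r0 - rinf) / (1.0 + (1j * w * tau) ** alpha)

w = 2 * np.pi * np.logspace(3, 6, 50)          # 1 kHz - 1 MHz
true = (500.0, 100.0, 1e-5, 0.8)
rng = np.random.default_rng(0)
z_meas = (cole(true, w)
          + rng.normal(scale=1.0, size=w.size)
          + 1j * rng.normal(scale=1.0, size=w.size))

def residuals(params):
    z = cole(params, w)
    # Stack real and imaginary residuals into one real-valued vector.
    return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

fit = least_squares(residuals, x0=[400.0, 150.0, 5e-6, 0.7],
                    bounds=([0, 0, 1e-8, 0.1], [2000, 2000, 1e-3, 1.0]))
print("estimated (R0, Rinf, tau, alpha):", fit.x)
```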

  10. An Optimized Clustering Approach for Automated Detection of White Matter Lesions in MRI Brain Images

    Directory of Open Access Journals (Sweden)

    M. Anitha

    2012-04-01

    Full Text Available White Matter lesions (WMLs) are small areas of dead cells found in parts of the brain. In general, it is difficult for medical experts to accurately quantify WMLs due to the decreased contrast between White Matter (WM) and Grey Matter (GM). The aim of this paper is to automatically detect the White Matter Lesions present in the brains of elderly people. The WML detection process includes the following stages: 1. image preprocessing, 2. clustering (Fuzzy c-means clustering (FCM), Geostatistical Possibilistic clustering (GPC) and Geostatistical Fuzzy clustering (GFCM)) and 3. optimization using Particle Swarm Optimization (PSO). The proposed system is tested on a database of 208 MRI images. GFCM yields a high sensitivity of 89%, specificity of 94% and overall accuracy of 93% over FCM and GPC. The clustered brain images are then subjected to Particle Swarm Optimization (PSO). The optimized result obtained from GFCM-PSO provides a sensitivity of 90%, specificity of 94% and accuracy of 95%. The detection results reveal that GFCM and GFCM-PSO better localize the large regions of lesions and give a lower false positive rate compared to GPC and GPC-PSO, which capture the largest loads of WMLs only in the upper ventral horns of the brain.

  11. SWANS: A Prototypic SCALE Criticality Sequence for Automated Optimization Using the SWAN Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Greenspan, E.

    2001-01-11

    SWANS is a new prototypic analysis sequence that provides an intelligent, semi-automatic search for the maximum k_eff of a given amount of specified fissile material, or of the minimum critical mass. It combines the optimization strategy of the SWAN code with the composition-dependent resonance self-shielded cross sections of the SCALE package. For a given system composition arrived at during the iterative optimization process, the value of k_eff is as accurate and reliable as obtained using the CSAS1X Sequence of SCALE-4.4. This report describes how SWAN is integrated within the SCALE system to form the new prototypic optimization sequence, describes the optimization procedure, provides a user guide for SWANS, and illustrates its application to five different types of problems. In addition, the report illustrates that resonance self-shielding might have a significant effect on the maximum k_eff value a given fissile material mass can have.

  12. Automated Response Surface Methodology for Stochastic Optimization Models with Unknown Variance

    NARCIS (Netherlands)

    R.P. Nicolai (Robin); R. Dekker (Rommert)

    2005-01-01

    Response Surface Methodology (RSM) is a tool that was introduced in the early 1950s by Box and Wilson (1951). It is a collection of mathematical and statistical techniques useful for the approximation and optimization of stochastic models. Applications of RSM can be found in e.g. chemical

  13. Kernel Function and Parameters Optimization in KICA for Rolling Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Lingli Jiang

    2013-08-01

    Full Text Available Kernel independent component analysis (KICA) is a blind signal separation method that works well for non-linear signals. Because kernel techniques are introduced, the choice of the kernel function and its kernel parameter has a great influence on the analysis results. A method for optimizing the kernel function and its parameters is proposed on the basis of the similarity between the source fault signals and the kernel independent components. A similarity parameter is proposed to assess the merits or shortcomings of KICA when different kernel functions and parameters are used. Simulation studies are carried out, and their conclusions are verified by an actual diagnostic case. These results provide guidance for the application of the KICA method in mechanical fault diagnosis.

  14. IDENTIFICATION OF OPTIMAL PARAMETERS OF REINFORCED CONCRETE STRUCTURES WITH ACCOUNT FOR THE PROBABILITY OF FAILURE

    Directory of Open Access Journals (Sweden)

    Filimonova Ekaterina Aleksandrovna

    2012-10-01

    The author suggests splitting the aforementioned parameters into two groups, namely, natural parameters and value-related parameters, the latter introduced to assess the costs of development, transportation, construction and operation of a structure, as well as the costs of its potential failure. The author proposes a new improved methodology for the identification of the above parameters that ensures optimal solutions to non-linear objective functions accompanied by non-linear restrictions that are critical to the design of reinforced concrete structures. Any structural failure may be interpreted as an excursion of a random process, associated with the surplus bearing capacity, into the negative domain. Monte Carlo numerical methods make it possible to assess these excursions into the unacceptable domain.

  15. Application of Powell's optimization method to surge arrester circuit models' parameters

    Energy Technology Data Exchange (ETDEWEB)

    Christodoulou, C.A.; Stathopulos, I.A. [National Technical University of Athens, School of Electrical and Computer Engineering, 9 Iroon Politechniou St., Zografou Campus, 157 80 Athens (Greece); Vita, V.; Ekonomou, L.; Chatzarakis, G.E. [A.S.PE.T.E. - School of Pedagogical and Technological Education, Department of Electrical Engineering Educators, N. Heraklion, 141 21 Athens (Greece)

    2010-08-15

    Powell's optimization method has been used for the evaluation of surge arrester model parameters. The proper modelling of metal-oxide surge arresters and the right selection of equivalent circuit parameters are very significant issues, since the quality and reliability of lightning performance studies can be improved with a more efficient representation of the arresters' dynamic behavior. The proposed approach selects optimum arrester model equivalent circuit parameter values by minimizing the error between the simulated peak residual voltage value and that given by the manufacturer. Application of the method is performed on a 120 kV metal oxide arrester. The use of the obtained optimum parameter values significantly reduces the relative error between the simulated and the manufacturer's peak residual voltage values, demonstrating the effectiveness of the method. (author)
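
    The sketch below shows the shape of such a Powell-method fit with SciPy: circuit parameters are tuned to minimize the relative error between a simulated and a datasheet peak residual voltage. The simulate_peak_voltage function and all numbers are placeholders, not an electromagnetic-transient arrester model.

```python
# Sketch: tune arrester-model circuit parameters with Powell's method by
# minimizing the error between simulated and manufacturer peak residual
# voltage. The "simulate_peak_voltage" function is a placeholder, not an
# electromagnetic-transient simulation.
import numpy as np
from scipy.optimize import minimize

V_MANUFACTURER = 285.0   # kV, datasheet peak residual voltage (illustrative)

def simulate_peak_voltage(params):
    # Placeholder response: a smooth function of (L0, L1, R0) in per-unit.
    L0, L1, R0 = params
    return 250.0 + 40.0 * L0 + 15.0 * np.tanh(L1) + 5.0 / (1.0 + R0)

def relative_error(params):
    v = simulate_peak_voltage(params)
    return abs(v - V_MANUFACTURER) / V_MANUFACTURER

result = minimize(relative_error, x0=[0.5, 0.5, 1.0], method="Powell")
print("optimal parameters:", result.x, "relative error:", result.fun)
```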

  16. OPTIMIZATION OF PROCESS PARAMETERS TO MINIMIZE ANGULAR DISTORTION IN GAS TUNGSTEN ARC WELDED STAINLESS STEEL 202 GRADE PLATES USING PARTICLE SWARM OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    R. SUDHAKARAN

    2012-04-01

    Full Text Available This paper presents a study on the optimization of process parameters using particle swarm optimization to minimize angular distortion in 202 grade stainless steel gas tungsten arc welded plates. Angular distortion is a major problem, and the most pronounced among the different types of distortion in butt welded plates. The process control parameters chosen for the study are welding gun angle, welding speed, plate length, welding current and gas flow rate. The experiments were conducted using a design-of-experiments technique with a five-factor, five-level central composite rotatable design with full replication. A mathematical model was developed correlating the process parameters with angular distortion. Source code was developed in MATLAB 7.6 to perform the optimization. The optimal process parameters gave a value of 0.0305° for angular distortion, which demonstrates the accuracy of the model developed. The results indicate that the optimized values of the process parameters are capable of producing welds with minimum distortion.
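
    A sketch of the PSO stage is given below: a particle swarm searches the five process parameters to minimize a regression model of angular distortion. The quadratic surface, bounds and PSO settings are illustrative stand-ins, not the fitted equation or values from the study.

```python
# Sketch: particle swarm search over five welding parameters to minimize a
# regression model of angular distortion. The quadratic model below is an
# illustrative stand-in, not the fitted equation from the study.
import numpy as np

rng = np.random.default_rng(0)
lo = np.array([60.0, 100.0, 200.0, 80.0, 8.0])     # gun angle, speed, length, current, gas flow
hi = np.array([90.0, 140.0, 400.0, 120.0, 16.0])

def distortion(x):
    z = (x - lo) / (hi - lo)                        # scale to [0, 1]
    return 0.3 + np.sum((z - 0.45) ** 2, axis=-1)   # toy quadratic surface

n, iters, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
X = rng.uniform(lo, hi, size=(n, 5))
V = np.zeros_like(X)
pbest, pbest_f = X.copy(), distortion(X)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n, 1))
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X = np.clip(X + V, lo, hi)
    f = distortion(X)
    better = f < pbest_f
    pbest[better], pbest_f[better] = X[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("optimal parameters:", gbest, "predicted distortion:", pbest_f.min())
```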

  17. A comparison between gradient descent and stochastic approaches for parameter optimization of a coupled ocean-sea ice model

    Science.gov (United States)

    Sumata, H.; Kauker, F.; Gerdes, R.; Köberle, C.; Karcher, M.

    2012-11-01

    Two types of optimization methods were applied to a parameter optimization problem in a coupled ocean-sea ice model, and applicability and efficiency of the respective methods were examined. One is a finite difference method based on a traditional gradient descent approach, while the other adopts genetic algorithms as an example of stochastic approaches. Several series of parameter optimization experiments were performed by minimizing a cost function composed of model-data misfit of ice concentration, ice drift velocity and ice thickness. The finite difference method fails to estimate optimal parameters due to an ill-shaped nature of the cost function, whereas the genetic algorithms can effectively estimate near optimal parameters with a practical number of iterations. The results of the study indicate that a sophisticated stochastic approach is of practical use to a parameter optimization of a coupled ocean-sea ice model.

  18. Laboratory automation in clinical bacteriology: what system to choose?

    Science.gov (United States)

    Croxatto, A; Prod'hom, G; Faverjon, F; Rochais, Y; Greub, G

    2016-03-01

    Automation was introduced many years ago in several diagnostic disciplines such as chemistry, haematology and molecular biology. The first laboratory automation system for clinical bacteriology was released in 2006, and it rapidly proved its value by increasing productivity, allowing a continuous increase in sample volumes despite limited budgets and personnel shortages. Today, two major manufacturers, BD Kiestra and Copan, are commercializing partial or complete laboratory automation systems for bacteriology. The laboratory automation systems are rapidly evolving to provide improved hardware and software solutions to optimize laboratory efficiency. However, the complex parameters of the laboratory and automation systems must be considered to determine the best system for each given laboratory. We address several topics on laboratory automation that may help clinical bacteriologists to understand the particularities and operative modalities of the different systems. We present (a) a comparison of the engineering and technical features of the various elements composing the two different automated systems currently available, (b) the system workflows of partial and complete laboratory automation, which define the basis for laboratory reorganization required to optimize system efficiency, (c) the concept of digital imaging and telebacteriology, (d) the connectivity of laboratory automation to the laboratory information system, (e) the general advantages and disadvantages as well as the expected impacts provided by laboratory automation and (f) the laboratory data required to conduct a workflow assessment to determine the best configuration of an automated system for the laboratory activities and specificities. PMID:26806135

  19. Deployable reflector antenna performance optimization using automated surface correction and array-feed compensation

    Science.gov (United States)

    Schroeder, Lyle C.; Bailey, M. C.; Mitchell, John L.

    1992-01-01

    Methods for increasing the electromagnetic (EM) performance of reflectors with rough surfaces were tested and evaluated. First, one quadrant of the 15-meter hoop-column antenna was retrofitted with computer-driven and controlled motors to allow automated adjustment of the reflector surface. The surface errors, measured with metric photogrammetry, were used in a previously verified computer code to calculate control motor adjustments. With this system, a rough antenna surface (rms of approximately 0.180 inch) was corrected in two iterations to approximately the structural surface smoothness limit of 0.060 inch rms. The antenna pattern and gain improved significantly as a result of these surface adjustments. The EM performance was evaluated with a computer program for distorted reflector antennas which had been previously verified with experimental data. Next, the effects of the surface distortions were compensated for in computer simulations by superimposing excitation from an array feed to maximize antenna performance relative to an undistorted reflector. Results showed that a 61-element array could produce EM performance improvements equal to surface adjustments. When both mechanical surface adjustment and feed compensation techniques were applied, the equivalent operating frequency increased from approximately 6 to 18 GHz.

  20. Process optimization and biocompatibility of cell carriers suitable for automated magnetic manipulation.

    Science.gov (United States)

    Krejci, I; Piana, C; Howitz, S; Wegener, T; Fiedler, S; Zwanzig, M; Schmitt, D; Daum, N; Meier, K; Lehr, C M; Batista, U; Zemljic, S; Messerschmidt, J; Franzke, J; Wirth, M; Gabor, F

    2012-03-01

    There is increasing demand for automated cell reprogramming in the fields of cell biology, biotechnology and the biomedical sciences. Microfluidic-based platforms that provide unattended manipulation of adherent cells promise to be an appropriate basis for cell manipulation. In this study we developed a magnetically driven cell carrier to serve as a vehicle within an in vitro environment. To elucidate the impact of the carrier on cells, biocompatibility was estimated using the human adenocarcinoma cell line Caco-2. Besides evaluation of the quality of the magnetic carriers by field emission scanning electron microscopy, the rate of adherence, proliferation and differentiation of Caco-2 cells grown on the carriers was quantified. Moreover, the morphology of the cells was monitored by immunofluorescent staining. Early generations of the cell carrier suffered from release of cytotoxic nickel from the magnetic cushion. Biocompatibility was achieved by complete encapsulation of the nickel bulk within galvanic gold. The insulation process had to be developed stepwise and was controlled by parallel monitoring of the cell viability. The final carrier generation proved to be a proper support for cell manipulation, allowing proliferation of Caco-2 cells equal to that on glass or polystyrene as a reference for up to 10 days. Functional differentiation was enhanced by more than 30% compared with the reference. A flat, ferromagnetic and fully biocompatible carrier for cell manipulation was developed for application in microfluidic systems. Beyond that, this study offers advice for the development of magnetic cell carriers and the estimation of their biocompatibility.

  1. Novel Gauss-Hermite integration based Bayesian inference on optimal wavelet parameters for bearing fault diagnosis

    Science.gov (United States)

    Wang, Dong; Tsui, Kwok-Leung; Zhou, Qiang

    2016-05-01

    Rolling element bearings are commonly used in machines to provide support for rotating shafts. Bearing failures may cause unexpected machine breakdowns and increase economic cost. To prevent machine breakdowns and reduce unnecessary economic loss, bearing faults should be detected as early as possible. Because wavelet transform can be used to highlight impulses caused by localized bearing faults, wavelet transform has been widely investigated and proven to be one of the most effective and efficient methods for bearing fault diagnosis. In this paper, a new Gauss-Hermite integration based Bayesian inference method is proposed to estimate the posterior distribution of wavelet parameters. The innovations of this paper are illustrated as follows. Firstly, a non-linear state space model of wavelet parameters is constructed to describe the relationship between wavelet parameters and hypothetical measurements. Secondly, the joint posterior probability density function of wavelet parameters and hypothetical measurements is assumed to follow a joint Gaussian distribution so as to generate Gaussian perturbations for the state space model. Thirdly, Gauss-Hermite integration is introduced to analytically predict and update moments of the joint Gaussian distribution, from which optimal wavelet parameters are derived. At last, an optimal wavelet filtering is conducted to extract bearing fault features and thus identify localized bearing faults. Two instances are investigated to illustrate how the proposed method works. Two comparisons with the fast kurtogram are used to demonstrate that the proposed method can achieve better visual inspection performances than the fast kurtogram.

  2. Optimizing parameters of a technical system using quality function deployment method

    Science.gov (United States)

    Baczkowicz, M.; Gwiazda, A.

    2015-11-01

    The article shows the practical use of Quality Function Deployment (QFD) with the example of a mechanized mining support. First, it gives a short description of this method and shows what the design process looks like from the designer's point of view. The proposed method allows construction parameters to be optimized and compared, as well as adapted to customer requirements. QFD helps to determine the full set of crucial construction parameters, and then their importance and the difficulty of realizing them. Second, the article describes the chosen technical system and presents its construction, with figures of the existing model and of the future optimized model. The construction parameters were selected from the designer's point of view. The method helps to specify a complete set of construction parameters from the point of view of the designed technical system and customer requirements. The QFD matrix can be adjusted depending on design needs, and not every part of it has to be considered. Designers can choose which parts are the most important. Because of this, QFD can be a very flexible tool. What is most important is to define the relationships occurring between parameters; that part cannot be eliminated from the analysis.

  3. Parameter Estimation for Coupled Hydromechanical Simulation of Dynamic Compaction Based on Pareto Multiobjective Optimization

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2015-01-01

    Full Text Available This paper presents a parameter estimation method based on a coupled hydromechanical model of dynamic compaction and the Pareto multiobjective optimization technique. The hydromechanical model of dynamic compaction is established in the FEM program LS-DYNA. The multiobjective optimization algorithm, the Non-dominated Sorting Genetic Algorithm (NSGA-II), is integrated with the numerical model to identify soil parameters using multiple sources of field data. A field case study is used to demonstrate the capability of the proposed method. The observed pore water pressure and crater depth at the early blows of dynamic compaction are used simultaneously to estimate the soil parameters. The robustness of the back-estimated parameters is further illustrated by a forward prediction. Results show that the back-analyzed soil parameters can reasonably predict lateral displacements and give generally acceptable predictions of dynamic compaction for an adjacent location. In addition, for prediction of the ground response at continuous blows of dynamic compaction, the prediction based on the second blow is more accurate than that based on the first blow, due to the hardening and strengthening of the soil during continuous compaction.

  4. TECHNICAL AND ENERGY PARAMETERS IMPROVEMENT OF DIESEL LOCOMOTIVES THROUGH THE INTRODUCTION OF AUTOMATED CONTROL SYSTEMS OF A DIESEL

    Directory of Open Access Journals (Sweden)

    M. I. Kapitsa

    2015-04-01

    Full Text Available Purpose. Today the issue of diesel traction remains relevant for the majority of industrial enterprises and for Ukrainian railways, and the diesel engine continues to be the subject of extensive research and improvement. Despite the intensive electrification of the Railway Transport of Ukraine over the last few years, diesel traction continues to play an important role both in main-line and in industrial railway rolling stock. In any case, all kinds of shunting and auxiliary work fall to diesel locomotives, and they are improved and upgraded relentlessly. This paper is focused on finding opportunities to improve the technical and energy parameters of diesel engines through the development of a modern control method for the fuel equipment of the diesel engine. Methodology. The proposed method increases the power of locomotive diesel engines over the whole range of crankshaft speeds (from idling to maximum). It is based on advancing the mixture ignition timing relative to the top dead centre of the piston position. Findings. The paper provides a brief historical background of research on the operating cycle of the internal combustion engine (ICE). The factors affecting the mixing process and its quality are analyzed. The requirements for the fuel feed system of the cylinder and the «weak points» of the process are presented. A modification of the fuel pump drive is proposed, which allows the regulation of the fuel feed system to be approached differently and improved. A variant embodiment of the complete system is presented, with details of its mechanical features and control circuits. The algorithm of the system's operation is presented, together with its impact on diesel performance. Originality. The fuel supply angle regulating system allows the process of adjusting the fuel injection advance angle into the cylinder to be automated. Practical value. At implementation the angle regulating system of fuel supply

  5. The Optimal Extraction Parameters and Anti-Diabetic Activity of Flavonoids from Ipomoea Batatas Leaf

    OpenAIRE

    Li, Fenglin; Li, Qingwang; Gao, Dawei; Peng, Yong

    2009-01-01

    Extraction parameters of flavonoids from Ipomoea batatas leaf (FIBL) and anti-diabetic activity of FIBL on alloxan induced diabetic mice were studied. The optimal extraction parameters of FIBL were obtained by single factor test and orthogonal test, as follows: ethanol concentration 60 %, ratio of solvent to raw material 30, extraction temperature 75 ° and extraction time 1.5 h, while extraction yield of FIBL was 5.94 %. FIBL treatment (50, 100, and 150 mg/ kg body weight) for 28 days resulte...

  6. Parameter Optimization of a 9 × 9 Polymer Arrayed Waveguide Grating Multiplexer

    Institute of Scientific and Technical Information of China (English)

    郭文滨; 马春生; 陈维友; 张大明; 陈开鑫; 崔战臣; 赵禹; 刘式墉

    2002-01-01

    Some important parameters are optimized for a 9 × 9 polymer arrayed waveguide grating multiplexer around the central wavelength of 1.55μm with the wavelength spacing of 1.6nm. These parameters include the thickness and width of the guide core, diffraction order, pitch of adjacent waveguides, path length difference of adjacent arrayed waveguides, focal length of slab waveguides, free spectral range, the number of input/output channels and the number of arrayed waveguides. Finally, a schematic waveguide layout of this device is presented, which contains 2 slabs, 9 input and 9 output channels, and 91 arrayed waveguides.

  7. Optimization of Squeeze Casting Parameters for 2017 A Wrought Al Alloy Using Taguchi Method

    Directory of Open Access Journals (Sweden)

    Najib Souissi

    2014-04-01

    Full Text Available This study applies the Taguchi method to investigate the relationship between the ultimate tensile strength, hardness and process variables in squeeze casting of a 2017 A wrought aluminium alloy. The effects of various casting parameters, including squeeze pressure, melt temperature and die temperature, were studied. The objectives of using the Taguchi method for the squeeze casting process are to establish the optimal combination of process parameters and to reduce the variation in quality using only a few experiments. The experimental results show that the squeeze pressure significantly affects the microstructure and the mechanical properties of the 2017 A Al alloy.

  8. Data set of optimal parameters for colorimetric red assay of epoxide hydrolase activity.

    Science.gov (United States)

    de Oliveira, Gabriel Stephani; Adriani, Patricia Pereira; Borges, Flavia Garcia; Lopes, Adriana Rios; Campana, Patricia T; Chambergo, Felipe S

    2016-09-01

    The data presented in this article are related to the research article entitled "Epoxide hydrolase of Trichoderma reesei: Biochemical properties and conformational characterization" [1]. Epoxide hydrolases (EHs) are enzymes that catalyze the hydrolysis of epoxides to the corresponding vicinal diols. This article describes the optimal parameters for the colorimetric red assay used to determine the enzymatic activity, with an emphasis on the characterization of the kinetic parameters, pH optimum and thermal stability of this enzyme. Reagents that are not resistant to oxidation by sodium periodate can generate false positives and interfere with the final results of the red assay. PMID:27366781

  9. Slot Parameter Optimization for Multiband Antenna Performance Improvement Using Intelligent Systems

    Directory of Open Access Journals (Sweden)

    Erdem Demircioglu

    2015-01-01

    Full Text Available This paper discusses bandwidth enhancement for multiband microstrip patch antennas (MMPAs) using symmetrical rectangular/square slots etched on the patch and the substrate properties. The slot parameters of the MMPA are modeled using the soft computing technique of artificial neural networks (ANN). To achieve the best ANN performance, Particle Swarm Optimization (PSO) and Differential Evolution (DE) are applied together with the ANN's conventional training algorithm to optimize the modeling performance. In this study, the slot parameters are taken as the slot distance to the radiating patch edge, the slot width, and the slot length. Bandwidth enhancement is applied to a previously designed MMPA fed by a microstrip transmission line attached to the center pin of a 50 ohm SMA connector. The simulated antennas are fabricated and measured. The measurement results are used to train the artificial intelligence models. The ANN provides 98% model accuracy for rectangular slots and 97% for square slots; however, ANFIS offers 90% accuracy and lacks resonance frequency tracking.

  10. Optimization of Bending Process Parameters for Seamless Tubes Using Taguchi Method and Finite Element Method

    Directory of Open Access Journals (Sweden)

    Jui-Chang Lin

    2015-01-01

    Full Text Available The three-dimensional tube (or pipe) is manufactured by a CNC tube bending machine. The key techniques are determined by the tube diameter, wall thickness, material, and bending radius. Techniques obtained through experience and trial and error are unreliable. Finite element method (FEM) simulation of the tube bending process before production can avoid wasting manpower and raw materials. The computer-aided engineering (CAE) software ABAQUS 6.12 is applied to simulate the bending characteristics and to explore the maximum stress and strain conditions. The Taguchi method is used to find the optimal bending parameters. A confirmation experiment is performed according to the optimal parameters. Results indicate that the strain error between the CAE simulation and the bending experiments is within 6.39%.

  11. Multi-criteria optimization of chassis parameters of Nissan 200 SX for drifting competitions

    Science.gov (United States)

    Maniowski, M.

    2016-09-01

    The objective of this work is to increase the performance of a Nissan 200sx S13 prepared for a quasi-static state of drifting on a circular path with a given constant radius (R=15 m) and tyre-road friction coefficient (μ = 0.9). First, a high-fidelity "miMA" multibody model of the vehicle is formulated. Then, a multicriteria optimization problem is solved, with one of the goals being to maximize the stable drift angle (β) of the vehicle. The decision variables contain 11 parameters of the vehicle chassis (describing the wheel suspension stiffness and geometry) and 2 parameters responsible for the driver's steering and accelerator actions, which control this extreme closed-loop manoeuvre. The optimized chassis setup results in a drift angle increase of 14%, from 35 to 40 deg.

  12. Optimal Experiment Design for Quantum State and Process Tomography and Hamiltonian Parameter Estimation

    CERN Document Server

    Kosut, R L; Rabitz, H; Kosut, Robert; Walmsley, Ian A.; Rabitz, Herschel

    2004-01-01

    A number of problems in quantum state and system identification are addressed. Specifically, it is shown that the maximum likelihood estimation (MLE) approach, already known to apply to quantum state tomography, is also applicable to quantum process tomography (estimating the Kraus operator sum representation (OSR)), Hamiltonian parameter estimation, and the related problems of state and process (OSR) distribution estimation. Except for Hamiltonian parameter estimation, the other MLE problems are formally of the same type of convex optimization problem and therefore can be solved very efficiently to within any desired accuracy. Associated with each of these estimation problems, and the focus of the paper, is an optimal experiment design (OED) problem invoked by the Cramer-Rao Inequality: find the number of experiments to be performed in a particular system configuration to maximize estimation accuracy; a configuration being any number of combinations of sample times, hardware settings, prepared initial states...

  13. Parameters estimation online for Lorenz system by a novel quantum-behaved particle swarm optimization

    Institute of Scientific and Technical Information of China (English)

    Gao Fei; Li Zhuo-Qiu; Tong Heng-Qing

    2008-01-01

    This paper proposes a novel quantum-behaved particle swarm optimization (NQPSO) for the estimation of the unknown parameters of chaotic systems by transforming the task into a nonlinear function optimization problem. By means of three techniques, namely contracting the search space self-adaptively, a boundary restriction strategy, and substituting a convex combination of the particles for their centre of mass, this paper achieves a quite effective search mechanism with a fine equilibrium between exploitation and exploration. Details of applying the proposed method and other methods to Lorenz systems are given, and the experiments show that NQPSO has better adaptability, dependability and robustness. It is a successful approach to online estimation of unknown parameters, especially in the presence of white noise.
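
    As a rough illustration of the underlying idea (a plain particle swarm optimizer rather than the paper's NQPSO, and a synthetic trajectory in place of measured data), the sketch below recovers the Lorenz parameters by minimizing the mismatch between observed and simulated trajectories.

        # Minimal sketch: a plain PSO estimating Lorenz parameters (sigma, rho, beta)
        # from a synthetic "observed" trajectory; not the paper's NQPSO variant.
        import numpy as np

        def lorenz_traj(p, x0=(1.0, 1.0, 1.0), dt=0.01, n=300):
            sigma, rho, beta = p
            x = np.array(x0, dtype=float)
            out = np.empty((n, 3))
            for i in range(n):
                dx = np.array([sigma * (x[1] - x[0]),
                               x[0] * (rho - x[2]) - x[1],
                               x[0] * x[1] - beta * x[2]])
                x = x + dt * dx                      # simple Euler integration
                out[i] = x
            return out

        observed = lorenz_traj((10.0, 28.0, 8.0 / 3.0))   # synthetic "measurements"
        cost = lambda p: np.mean((lorenz_traj(p) - observed) ** 2)

        rng = np.random.default_rng(0)
        lo, hi = np.array([5.0, 20.0, 1.0]), np.array([15.0, 35.0, 5.0])
        pos = rng.uniform(lo, hi, size=(30, 3))
        vel = np.zeros_like(pos)
        pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
        gbest = pbest[pbest_cost.argmin()].copy()

        for _ in range(100):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            c = np.array([cost(p) for p in pos])
            improved = c < pbest_cost
            pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
            gbest = pbest[pbest_cost.argmin()].copy()

        print("estimated (sigma, rho, beta):", np.round(gbest, 3))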

  14. Physiochemical parameters optimization for enhanced nisin production by Lactococcus lactis (MTCC 440

    Directory of Open Access Journals (Sweden)

    Puspadhwaja Mall

    2010-02-01

    Full Text Available The influence of various physiochemical parameters on the growth of Lactococcus lactis subsp. lactis MTCC 440 was studied at shake flask level for 20 h. Media optimization (MRS broth) was studied to achieve enhanced growth of the organism and also nisin production. Bioassay of nisin was done with the agar diffusion method using Streptococcus agalactiae NCIM 2401 as the indicator strain. MRS broth (6%, w/v) with 0.15 μg/ml of nisin, supplemented with 0.5% (v/v) skimmed milk, was found to be the best for nisin production as well as for the growth of L. lactis. The production of nisin was strongly influenced by the presence of skimmed milk and nisin in the MRS broth. The production of nisin was also affected by the physical parameters: maximum nisin production was at 30 °C, while the optimal temperature for biomass production was 37 °C.

  15. Optimization of the Geometrical Parameters of a Solar Bubble Pump for Absorption-Diffusion Cooling Systems

    Directory of Open Access Journals (Sweden)

    N. Dammak

    2010-01-01

    Full Text Available Problem statement: The objective of this study was to optimize the geometrical parameters of a bubble pump integrated in a solar flat plate collector. Approach: This solar bubble pump was part of an ammonia/water/helium (NH3/H2O/He) absorption-diffusion cooling system. Results: An empirical model was developed on the basis of momentum, mass and material equations and energy balances. The mathematical model was solved using the simulation tool “Engineering Equation Solver (EES)”. Conclusion/Recommendations: Using meteorological data from Gabes (Tunisia), the various geometrical parameters were optimized for maximum bubble pump efficiency, which was highest for a bubble pump tube diameter of 6 mm, a tube length of 1.5 m, an inclination of the solar flat plate collector to the horizontal between 30 and 50°, and a submergence ratio between 0.2 and 0.3.

  16. A Class of Parameter Estimation Methods for Nonlinear Muskingum Model Using Hybrid Invasive Weed Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Aijia Ouyang

    2015-01-01

    Full Text Available Nonlinear Muskingum models are important tools in hydrological forecasting. In this paper, we propose a class of new discretization schemes, including a parameter θ, to approximate the nonlinear Muskingum model based on general trapezoid formulas. The accuracy of these schemes is second order if θ ≠ 1/3, but interestingly, when θ = 1/3, the accuracy of the presented scheme improves to third order. The parameter estimation problem is then transformed into an unconstrained optimization problem, which can be solved by a hybrid invasive weed optimization (HIWO) algorithm. Finally, a numerical example is provided to illustrate the effectiveness of the presented methods. The numerical results substantiate the fact that the presented methods have better precision in estimating the parameters of nonlinear Muskingum models.
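
    A minimal sketch of the parameter-estimation step is shown below; SciPy's differential_evolution stands in for the paper's HIWO algorithm, the inflow hydrograph and the "true" parameters are illustrative, and the initial outflow is assumed equal to the initial inflow.

        # Minimal sketch of nonlinear Muskingum parameter estimation; a generic global
        # optimizer replaces the paper's HIWO algorithm and the data are illustrative.
        import numpy as np
        from scipy.optimize import differential_evolution

        inflow = np.array([22, 23, 35, 71, 103, 111, 109, 100, 86, 71,
                           59, 47, 39, 32, 28, 24, 22, 21], dtype=float)
        dt = 6.0  # routing time step (h), assumed

        def route(params, inflow, dt):
            """Route inflow through storage S = K*(x*I + (1-x)*O)**m, with O[0] = I[0]."""
            K, x, m = params
            outflow = np.empty_like(inflow)
            outflow[0] = inflow[0]
            S = K * (x * inflow[0] + (1.0 - x) * outflow[0]) ** m
            for t in range(1, len(inflow)):
                S = max(S + dt * (inflow[t - 1] - outflow[t - 1]), 1e-6)
                outflow[t] = max(((S / K) ** (1.0 / m) - x * inflow[t]) / (1.0 - x), 0.0)
            return outflow

        observed = route((0.8, 0.3, 1.8), inflow, dt)     # synthetic "observed" outflow
        sse = lambda p: np.sum((route(p, inflow, dt) - observed) ** 2)

        result = differential_evolution(sse, bounds=[(0.01, 5.0), (0.0, 0.5), (1.0, 3.0)],
                                        seed=1, tol=1e-10)
        print("estimated (K, x, m):", np.round(result.x, 3), "SSE:", round(result.fun, 4))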

  17. Optimization of Cylindrical Grinding Process Parameters on C40E Steel Using Taguchi Technique

    Directory of Open Access Journals (Sweden)

    Naresh Kumar

    2015-01-01

    Full Text Available Surface finish and dimensional accuracy play a vital role in today’s engineering industry. Several methods are used to achieve a good surface finish, such as burnishing, honing, lapping, and grinding. Grinding is one of these methods and improves the surface finish and dimensional accuracy simultaneously. C40E steel has wide industrial application in the manufacturing of shafts, axles, spindles, studs, etc. In the present work the cylindrical grinding of C40E steel is carried out to optimize the grinding process parameters. In this experimental work the input process parameters, i.e. speed, feed and depth of cut, are optimized using a Taguchi L9 orthogonal array. Analysis of variance (ANOVA) concluded that surface roughness is minimum at 210 rpm, a feed of 0.11 mm/rev, and a depth of cut of 0.04 mm.
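
    The Taguchi analysis described above boils down to computing a signal-to-noise ratio per run and averaging it by factor level. The sketch below uses a standard L9(3^3) array with hypothetical roughness values and factor levels, not the study's measurements.

        # Illustrative sketch: Taguchi L9 analysis with a smaller-the-better
        # signal-to-noise ratio; the roughness values and levels are hypothetical.
        import numpy as np

        # L9(3^3) orthogonal array: columns are levels (0, 1, 2) of speed, feed, depth
        L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
                       [1, 0, 1], [1, 1, 2], [1, 2, 0],
                       [2, 0, 2], [2, 1, 0], [2, 2, 1]])
        levels = {"speed (rpm)": [140, 210, 280],
                  "feed (mm/rev)": [0.11, 0.16, 0.21],
                  "depth (mm)": [0.02, 0.04, 0.06]}
        Ra = np.array([1.42, 1.65, 1.88, 1.21, 1.55, 1.37, 1.49, 1.26, 1.61])  # hypothetical

        sn = -10.0 * np.log10(Ra ** 2)   # smaller-the-better S/N (single replicate)

        for col, (name, vals) in enumerate(levels.items()):
            means = [sn[L9[:, col] == lvl].mean() for lvl in range(3)]
            best = int(np.argmax(means))          # higher S/N is better
            print(f"{name}: mean S/N by level {np.round(means, 2)} -> best = {vals[best]}")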

  18. Performance Evaluation and Parameter Optimization of SoftCast Wireless Video Broadcast

    Directory of Open Access Journals (Sweden)

    Dongxue Yang

    2015-08-01

    Full Text Available Wireless video broadcast plays an important role in multimedia communication with the emergence of mobile video applications. However, conventional video broadcast designs suffer from a cliff effect due to separated source and channel encoding. The newly proposed SoftCast scheme employs a cross-layer design, whose reconstructed video quality is proportional to the channel condition. In this paper, we provide the performance evaluation and the parameter optimization of the SoftCast system. Optimization principles on parameter selection are suggested to obtain a better video quality, occupy less bandwidth and/or utilize lower complexity. In addition, we compare SoftCast with H.264 in the LTE EPA scenario. The simulation results show that SoftCast provides a better performance in the scalability to channel conditions and the robustness to packet losses.

  19. High strength Al–Al2O3p composites: Optimization of extrusion parameters

    DEFF Research Database (Denmark)

    Luan, B.F.; Hansen, Niels; Godfrey, A.;

    2011-01-01

    Composite aluminium alloys reinforced with Al2O3p particles have been produced by squeeze casting followed by hot extrusion and a precipitation hardening treatment. Good mechanical properties can be achieved, and in this paper we describe an optimization of the key processing parameters. The...... investigation of their mechanical properties and microstructure, as well as on the surface quality of the extruded samples. The evaluation shows that material with good strength, though with limited ductility, can be reliably obtained using a production route of squeeze casting, followed by hot extrusion and a...... precipitation hardening treatment. For the extrusion step optimized processing parameters have been determined as: (i) extrusion temperature=500°C–560°C; (ii) extrusion rate=5mm/s; (iii) extrusion ratio=10:1....

  20. Experimental and numerical analysis for optimal design parameters of a falling film evaporator

    Indian Academy of Sciences (India)

    RAJNEESH KAUSHAL; RAJ KUMAR; GAURAV VATS

    2016-06-01

    The present study presents an experimental examination of the mass transfer coefficient and evaporative effectiveness of a falling film evaporator. Further, a statistical model is developed in order to obtain the optimal controlling parameters, viz. non-dimensional enthalpy potential, film Reynolds number of the cooling water, Reynolds number of air and relative humidity of the up-streaming air. The models not only give an optimal solution but also help in establishing a correlation among the controlling parameters. In this context, response surface methodology is employed with the aid of a design-of-experiments approach. Later, the response surface curves are studied using ANOVA. Finally, the relations established are confirmed experimentally to validate the models. The relations thus established are useful for the design of evaporators. Additionally, the present study is among the first attempts to reveal the effect of humidity on the performance of a falling film evaporator.

  1. Parameters Optimization of Curtain Grouting Reinforcement Cycle in Yonglian Tunnel and Its Application

    OpenAIRE

    Qingsong Zhang; Peng Li; Gang Wang; Shucai Li; Xiao Zhang; Qianqing Zhang; Qian Wang; Jianguo Liu

    2015-01-01

    For practical purposes, the curtain grouting method is an effective method to treat geological disasters and can be used to improve the strength and permeability resistance of surrounding rock. Selection of the optimal parameters of the grouting reinforcement cycle, especially the reinforcement cycle thickness, is one of the most interesting areas of research in curtain grouting design. Based on the fluid-structure interaction theory and the orthogonal analysis method, the influence of reinforcement cycle...

  2. HIGH HYDROSTATIC PRESSURE EXTRACTION OF ANTIOXIDANTS FROM MORINDA CITRIFOLIA FRUIT – PROCESS PARAMETERS OPTIMIZATION

    OpenAIRE

    PRAVEEN KUMAR; CHRIS CHU; DUDUKU KRISHNAIAH; AWANG BONO

    2006-01-01

    A modified version of high hydrostatic pressure extraction has been performed for the extraction of antioxidants from M. citrifolia fruit at 5, 15 and 25 bar, temperatures of 30° to 70°C, and durations of 1, 2, 4 and 6 hours. The antioxidant activity of the extracts was determined by the diphenylpicrylhydrazyl (DPPH) radical scavenging method. The process parameters were optimized for antioxidant activity by the central composite design method of response surface methodology using the statistical package, design e...

  3. Adaptive Importance Sampling for Performance Evaluation and Parameter Optimization of Communication Systems

    OpenAIRE

    Remondo, David; Srinivasan, Rajan; Nicola, Victor F.; van Etten, W.C. (Wim); Tattje, Henk E.P.

    2000-01-01

    We present new adaptive importance sampling techniques based on stochastic Newton recursions. Their applicability to the performance evaluation of communication systems is studied. Besides bit-error rate (BER) estimation, the techniques are used for system parameter optimization. Two system models that are analytically tractable are employed to demonstrate the validity of the techniques. As an application to situations that are analytically intractable and numerically intensive, the influence...

  4. The Distribution Population-based Genetic Algorithm for Parameter Optimization PID Controller

    Institute of Scientific and Technical Information of China (English)

    CHEN Qing-Geng; WANG Ning; HUANG Shao-Feng

    2005-01-01

    Inspired by the distribution of creatures in natural ecological environments, the distribution population-based genetic algorithm (DPGA) is presented in this paper. The searching capability of the algorithm is improved by competition between distribution populations, which reduces the search zone. This method is applied to the design of optimal PID controller parameters with examples, and the simulation results show that satisfactory performances are obtained.
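
    A minimal sketch of the idea follows, using a plain real-coded GA (not the paper's DPGA) to tune PID gains on an assumed first-order plant, with the integral of squared error of the unit step response as the fitness.

        # Minimal sketch: a plain real-coded genetic algorithm (not the paper's DPGA)
        # tuning PID gains on an assumed plant G(s) = 1/(2s + 1), simulated with Euler
        # integration, minimizing the integral of squared error (ISE) of a unit step.
        import numpy as np

        rng = np.random.default_rng(0)
        dt, steps = 0.01, 1000

        def ise(gains):
            kp, ki, kd = gains
            y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
            for _ in range(steps):
                err = 1.0 - y                      # unit step setpoint
                integ += err * dt
                deriv = (err - prev_err) / dt
                u = kp * err + ki * integ + kd * deriv
                y += dt * (u - y) / 2.0            # plant: 2*dy/dt + y = u
                prev_err = err
                cost += err * err * dt
            return cost

        bounds = np.array([[0.0, 10.0], [0.0, 5.0], [0.0, 1.0]])
        pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 3))

        for _ in range(60):
            fit = np.array([ise(ind) for ind in pop])
            parents = pop[np.argsort(fit)[:20]]    # truncation selection
            a = parents[rng.integers(0, 20, 40)]
            b = parents[rng.integers(0, 20, 40)]
            w = rng.random((40, 1))
            children = w * a + (1 - w) * b + rng.normal(0, 0.05, (40, 3))
            pop = np.clip(children, bounds[:, 0], bounds[:, 1])
            pop[0] = parents[0]                    # elitism: keep best so far

        best = pop[np.argmin([ise(ind) for ind in pop])]
        print("best (Kp, Ki, Kd):", np.round(best, 3))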

  5. A Genetic Algorithm Approach to Optimize Parameters in Infrared Guidance System

    Institute of Scientific and Technical Information of China (English)

    周德俊

    2001-01-01

    In the infrared guidance system, the gray level threshold is key for target recognition. After thresholding, a target in the binary image is distinguished from the complex background by three recognition features. Using a genetic algorithm, this paper seeks to find the optimal parameters, which vary with different sub-images, to compute the adaptive segmentation threshold. The experimental results reveal that the GA paradigm is an efficient and effective method of search.

  6. Spot Welding Parameter Optimization to Improve Weld Characteristics for Dissimilar Metals

    OpenAIRE

    Aravinthan Arumugam; MohdAmizi Nor

    2015-01-01

    Resistance spot welding is a process which is widely used in the automotive industry to join steel parts of various thicknesses and types. The current practice in the automotive industry for determining the welding schedule to be used in the welding process is based on welding tables or experience. This, however, may not be the optimum welding schedule that will give the best spot weld quality. This work concentrates on parameter optimization when spot welding steels with di...

  7. Optimization Of Blasting Design Parameters On Open Pit Bench A Case Study Of Nchanga Open Pits

    OpenAIRE

    Victor Mwango Bowa

    2015-01-01

    In hard rock mining, blasting is the most productive excavation technique applied to fragment in-situ rock to the required size for efficient loading and crushing. In order to blast the in-situ rock to the desired fragment size, blast design parameters such as bench height, hole diameter, spacing, burden, hole length, bottom charge, specific charge and rock factor are considered. The research was carried out as a practical method on the Nchanga Open Pits (NOP) ore bench to optimize the blasting desig...

  8. Optimization of the Effective Parameters on Hydraulic Fracturing Designing in an Iranian Sand Stone Reservoir

    OpenAIRE

    Reza Masoomi; Iniko Bassey; Dolgow Sergie Viktorovich

    2015-01-01

    Hydraulic fracturing is one of the key technologies for stimulating oil and gas wells in sandstone reservoirs. Field data relating to the hydraulic fracturing operation are mostly available as pressure-time curves. The optimization of the hydraulic fracturing parameters is not possible with only this information, so designing and controlling the development process of hydraulic fracturing is possible only by relying on complex mathematical and numerical models. The aim of...

  9. Structural Damage Detection Based on Modal Parameters Using Continuous Ant Colony Optimization

    OpenAIRE

    Aditi Majumdar; Bharadwaj Nanda; Dipak Kumar Maiti; Damodar Maity

    2014-01-01

    A method is presented to detect and quantify structural damages from changes in modal parameters (such as natural frequencies and mode shapes). An inverse problem is formulated to minimize the objective function, defined in terms of the discrepancy between the vibration data identified by modal testing and those computed from the analytical model, which is then solved to locate and assess the structural damage using a continuous ant colony optimization algorithm. The damage is formulated as stiffness redu...

  10. Characterization of PV panel and global optimization of its model parameters using genetic algorithm

    International Nuclear Information System (INIS)

    Highlights: • The optimization ability of a genetic algorithm has been utilized to extract the parameters of a PV panel model. • The effect of solar radiation and temperature variations was taken into account in the fitness function evaluation. • We used Matlab-Simulink to simulate operation of the PV panel to validate results. • Different cases were analyzed to ascertain which of them gives more accurate results. • The accuracy and applicability of this approach as a valuable tool for PV modeling were clearly validated. - Abstract: This paper details an improved modeling technique for a photovoltaic (PV) module, utilizing the optimization ability of a genetic algorithm, with different parameters of the PV module being computed via this approach. The accurate modeling of any PV module depends upon the values of these parameters, which is imperative in the context of any further studies concerning different PV applications. Simulation, optimization and the design of hybrid systems that include PV are examples of these applications. The global optimization of the parameters and the applicability for the entire range of solar radiation and a wide range of temperatures are achievable via this approach. The manufacturer’s data sheet information is used as the basis for the parameter optimization, with an average absolute error fitness function formulated, and a numerical iterative method used to solve the voltage-current relation of the PV module. The results of single-diode and two-diode models are evaluated in order to ascertain which of them is more accurate. Other cases are also analyzed in this paper for the purpose of comparison. The Matlab–Simulink environment is used to simulate the operation of the PV module, depending on the extracted parameters. The results of the simulation are compared with the data sheet information, which is obtained via experimentation, in order to validate the reliability of the approach. Three types of PV modules
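
    A much simplified sketch of the approach is given below: a single-diode model with series and shunt resistances ignored is fitted to three hypothetical data-sheet points, and SciPy's differential_evolution stands in for the genetic algorithm; the module constants are assumptions.

        # Minimal sketch, not the paper's procedure: fit a simplified single-diode model
        # I = Iph - I0*(exp(V/(n*Ns*Vt)) - 1) (series/shunt resistances ignored) to a few
        # hypothetical data-sheet points using a generic global optimizer.
        import numpy as np
        from scipy.optimize import differential_evolution

        k, q, T, Ns = 1.380649e-23, 1.602176634e-19, 298.15, 36   # 36-cell module assumed
        Vt = k * T / q

        # Hypothetical data-sheet points (V, I): short circuit, maximum power, open circuit
        V = np.array([0.0, 17.2, 21.6])
        I = np.array([3.80, 3.50, 0.00])

        def model(V, iph, log10_i0, n):
            return iph - 10.0 ** log10_i0 * (np.exp(V / (n * Ns * Vt)) - 1.0)

        def err(p):
            return np.mean(np.abs(model(V, *p) - I))      # average absolute error fitness

        res = differential_evolution(err, bounds=[(3.5, 4.5), (-12, -5), (0.8, 2.0)], seed=0)
        iph, log10_i0, n = res.x
        print(f"Iph={iph:.3f} A, I0={10**log10_i0:.2e} A, n={n:.3f}, fitness={res.fun:.4f}")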

  11. Response of a quarter car model with optimal magnetorheological damper parameters

    Science.gov (United States)

    Prabakar, R. S.; Sujatha, C.; Narayanan, S.

    2013-04-01

    In this paper, the control of the stationary response of a quarter car model to random road excitation with a Magnetorheological (MR) damper as a semi-active suspension device is considered. The MR damper is a hypothetical analytical damper whose parameters are determined optimally using a multi-objective optimization technique Non-dominated Sorting Genetic Algorithm II (NSGA II). The hysteretic behaviour of the MR damper is characterized using Bingham and modified Bouc-Wen models. The multi-objective optimization problem is solved by minimizing the difference between the root mean square (rms) sprung mass acceleration, suspension stroke and the road holding responses of the quarter car model with the MR damper and those of the active suspension system based on linear quadratic regulator (LQR) control with the constraint that the MR damper control force lies between ±5 percent of the LQR control force. It is observed that the MR damper suspension systems with optimal parameters perform an order of magnitude better than the passive suspension and perform as well as active suspensions with limited state feedback and closer to the performance of fully active suspensions.

  12. Parameters Optimization of Plasma Hardening Process Using Genetic Algorithm and Neural Network

    Institute of Scientific and Technical Information of China (English)

    LIU Gu; WANG Liu-ying; CHEN Gui-ming; HUA Shao-chun

    2011-01-01

    Plasma surface hardening was performed to improve the performance of AISI 1045 carbon steel. Experiments were carried out to characterize the hardening qualities. A predicting and optimizing model using a genetic algorithm-back propagation neural network (GA-BP) was developed based on the experimental results. The non-linear relationship between the properties of the hardened layers and the process parameters was established. The results show that the GA-BP predicting model is reliable, since the prediction results are in rather good agreement with the measured results. The optimal properties of the hardened layer were deduced from the GA. Through multiple optimizations, the optimum comprehensive performance of the hardened layer was obtained as follows: plasma arc current 90 A, hardening speed 2.2 m/min, plasma gas flow rate 6.0 L/min and hardening distance 4.3 mm. It is concluded that the GA-BP model developed in this study provides a promising method for plasma hardening parameter prediction and optimization.

  13. Control parameter optimal tuning method based on annealing-genetic algorithm for complex electromechanical system

    Institute of Scientific and Technical Information of China (English)

    贺建军; 喻寿益; 钟掘

    2003-01-01

    A new search algorithm named the annealing-genetic algorithm (AGA) was proposed by skillfully merging GA with SAA. It draws on the merits of both GA and SAA and offsets their shortcomings. The difference from GA is that AGA takes the objective function directly as the fitness function, so it cuts down unnecessary time expense due to the floating-point calculation of the function conversion. The difference from SAA is that AGA need not execute a very long Markov chain iteration at each temperature, so it speeds up the convergence of the solution; it also makes no assumption about the search space, so it is simple and easy to implement. It can be applied to a wide class of problems. The optimizing principle and the implementing steps of AGA were expounded. The example of the parameter optimization of a typical complex electromechanical system, a temper mill, shows that AGA is effective and superior to the conventional GA and SAA. The control system of the temper mill optimized by AGA has the optimal performance in the adjustable ranges of its parameters.
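
    The hybrid idea can be sketched as follows. This is an illustrative GA/simulated-annealing mix (not the authors' exact AGA): offspring replace their parent either when better or, when worse, with Metropolis probability under a temperature that is lowered each generation; the Rastrigin function stands in for the temper mill objective.

        # Illustrative GA/simulated-annealing hybrid (not the authors' exact AGA).
        import numpy as np

        rng = np.random.default_rng(2)

        def rastrigin(x):                     # stand-in objective (minimize)
            return 10 * x.size + np.sum(x * x - 10 * np.cos(2 * np.pi * x))

        dim, pop_size, T, cooling = 5, 30, 5.0, 0.95
        pop = rng.uniform(-5.12, 5.12, (pop_size, dim))
        cost = np.array([rastrigin(x) for x in pop])

        for _ in range(300):
            for i in range(pop_size):
                j = rng.integers(pop_size)                       # random mate
                w = rng.random(dim)
                child = w * pop[i] + (1 - w) * pop[j]            # blend crossover
                child += rng.normal(0, 0.1, dim)                 # Gaussian mutation
                child = np.clip(child, -5.12, 5.12)
                delta = rastrigin(child) - cost[i]
                if delta < 0 or rng.random() < np.exp(-delta / T):
                    pop[i], cost[i] = child, cost[i] + delta     # Metropolis acceptance
            T *= cooling                                         # annealing schedule

        best = np.argmin(cost)
        print("best cost:", round(float(cost[best]), 4), "at", np.round(pop[best], 3))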

  14. Evaluation of Anaerobic Biofilm Reactor Kinetic Parameters Using Ant Colony Optimization.

    Science.gov (United States)

    Satya, Eswari Jujjavarapu; Venkateswarlu, Chimmiri

    2013-09-01

    Fixed bed reactors with naturally attached biofilms are increasingly used for the anaerobic treatment of industrial wastewaters due to their effective treatment performance. The complex nature of the biological reactions in biofilm processes often poses difficulty in analyzing them experimentally, and mathematical models can be very useful for their design and analysis. However, effective application of biofilm reactor models to practical problems suffers from the lack of accurate kinetic models and uncertainty in model parameters. In this work, an inverse modeling approach based on ant colony optimization is proposed and applied to estimate the kinetic and film thickness model parameters of the wastewater treatment process in an anaerobic fixed bed biofilm reactor. Experimental data from a pharmaceutical industry wastewater treatment process are used to determine the model parameters as a consequence of the solution of the rigorous mathematical models of the process. Results were evaluated for different modeling configurations derived from combinations of mathematical models, kinetic expressions and optimization algorithms. Analysis of the results showed that the two-dimensional mathematical model with Haldane kinetics better represents the pharmaceutical wastewater treatment in the biofilm reactor. The mathematical and kinetic modeling of this work forms a useful basis for the design and optimization of biofilm reactors treating industrial wastewater. PMID:24065871

  15. Optimal Parameter Exploration for Online Change-Point Detection in Activity Monitoring Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Naveed Khan

    2016-10-01

    Full Text Available In recent years, smart phones with inbuilt sensors have become popular devices to facilitate activity recognition. The sensors capture a large amount of data, containing meaningful events, in a short period of time. The change points in this data are used to specify transitions to distinct events and can be used in various scenarios such as identifying change in a patient’s vital signs in the medical domain or requesting activity labels for generating real-world labeled activity datasets. Our work focuses on change-point detection to identify a transition from one activity to another. Within this paper, we extend our previous work on multivariate exponentially weighted moving average (MEWMA algorithm by using a genetic algorithm (GA to identify the optimal set of parameters for online change-point detection. The proposed technique finds the maximum accuracy and F_measure by optimizing the different parameters of the MEWMA, which subsequently identifies the exact location of the change point from an existing activity to a new one. Optimal parameter selection facilitates an algorithm to detect accurate change points and minimize false alarms. Results have been evaluated based on two real datasets of accelerometer data collected from a set of different activities from two users, with a high degree of accuracy from 99.4% to 99.8% and F_measure of up to 66.7%.
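
    The MEWMA statistic at the core of this approach is straightforward to compute online, as in the sketch below; the smoothing weight and threshold are fixed here (they are the quantities the paper's GA would tune), and the data are synthetic stand-ins for accelerometer features.

        # Minimal sketch of the MEWMA statistic for online change-point detection.
        # lam and h are fixed here; in the paper a GA searches for their optimal values.
        import numpy as np

        rng = np.random.default_rng(1)
        # 200 samples of one "activity", then a mean shift simulating a new activity
        data = np.vstack([rng.normal(0.0, 1.0, (200, 3)),
                          rng.normal(1.5, 1.0, (100, 3))])

        lam, h = 0.2, 20.0
        mean0 = data[:50].mean(axis=0)                 # in-control estimates
        cov0 = np.cov(data[:50].T)
        cov_z = (lam / (2.0 - lam)) * cov0             # asymptotic covariance of Z
        cov_z_inv = np.linalg.inv(cov_z)

        z = np.zeros(3)
        for i, x in enumerate(data):
            z = lam * (x - mean0) + (1.0 - lam) * z
            t2 = z @ cov_z_inv @ z
            if t2 > h:
                print(f"change point flagged at sample {i}, T^2 = {t2:.1f}")
                break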

  16. Mechanism and optimization of fuel injection parameters on combustion noise of DI diesel engine

    Institute of Scientific and Technical Information of China (English)

    张庆辉; 郝志勇; 郑旭; 杨文英; 毛杰

    2016-01-01

    Combustion noise accounts for a large proportion of diesel engine noise, and studies of its influencing factors play an important role in noise reduction. Engine noise and cylinder pressure measurement experiments were carried out, and improved attenuation curves were obtained, by which the engine noise was predicted. The effect of the fuel injection parameters on combustion noise was investigated during the combustion process. Finally, a method combining single-variable optimization and multivariate combination was introduced to optimize the combustion noise online. The results show that the injection parameters affect the cylinder pressure rise rate and heat release rate, and consequently affect the cylinder pressure load and pressure oscillation, thereby influencing the combustion noise. Among these parameters, the main injection advance angle has the greatest influence on the combustion noise, the pilot injection interval time takes second place, and the pilot injection quantity has minimal impact. After the optimal design of the combustion noise, the average sound pressure level of the engine is reduced by 1.0 dB(A). Meanwhile, the power, emission and economy performances are ensured.

  17. Optimal Parameter Design of Coarse Alignment for Fiber Optic Gyro Inertial Navigation System.

    Science.gov (United States)

    Lu, Baofeng; Wang, Qiuying; Yu, Chunmei; Gao, Wei

    2015-01-01

    Two different coarse alignment algorithms for Fiber Optic Gyro (FOG) Inertial Navigation System (INS) based on inertial reference frame are discussed in this paper. Both of them are based on gravity vector integration, therefore, the performance of these algorithms is determined by integration time. In previous works, integration time is selected by experience. In order to give a criterion for the selection process, and make the selection of the integration time more accurate, optimal parameter design of these algorithms for FOG INS is performed in this paper. The design process is accomplished based on the analysis of the error characteristics of these two coarse alignment algorithms. Moreover, this analysis and optimal parameter design allow us to make an adequate selection of the most accurate algorithm for FOG INS according to the actual operational conditions. The analysis and simulation results show that the parameter provided by this work is the optimal value, and indicate that in different operational conditions, the coarse alignment algorithms adopted for FOG INS are different in order to achieve better performance. Lastly, the experiment results validate the effectiveness of the proposed algorithm. PMID:26121614

  18. Optimization of parameter settings in cine-MR imaging for diagnosis of swallowing.

    Science.gov (United States)

    Ohkubo, Mai; Higaki, Takuo; Nishikawa, Keiichi; Otonari-Yamamoto, Mika; Sugiyama, Tetsuya; Ishida, Ryo; Wako, Mamoru; Sano, Tsukasa

    2014-01-01

    Videofluorography is frequently used to evaluate swallowing and is considered the "gold standard" among imaging modalities. This modality, however, has several disadvantages, including radiation exposure and limitations in the detection of soft tissues. Conversely, magnetic resonance imaging (MRI) offers excellent contrast resolution in soft tissue without radiation exposure. A major drawback of MRI in evaluating swallowing, however, is its poor temporal resolution. The aim of this study was to investigate a new cine-MRI modality. Imaging parameters were optimized and the efficacy of this new technique is discussed. Three techniques for speeding up MRI were combined: true fast imaging with steady state precession, generalized auto-calibrating partially parallel acquisition, and key-hole imaging. The effects of the receiver coils used, receiving bandwidth, slice thickness, and flip angle on each image were determined. The optimal imaging parameters obtained comprised a reduction factor of 2, a receiving bandwidth of 1,000 Hz/pixel (repetition time of 151.7 milliseconds and echo time of 1.4 milliseconds), a flip angle of 50°, and a slice thickness of 6 mm. Neck and spine coils were used. Under these conditions, the new cine-MR imaging technique showed a temporal resolution of 0.1 sec/slice (10 frames/sec). Even with optimized parameter settings, this technique fell well short of a true temporal resolution of 30 frames/sec. Motion artifacts persisted. Further study is needed on how to speed up this technique. PMID:25212558

  19. Application of HGSO to security based optimal placement and parameter setting of UPFC

    International Nuclear Information System (INIS)

    Highlights: • A new method for solving the security-based UPFC placement and parameter setting problem is proposed. • The proposed method is a global method for all mixed-integer problems. • The proposed method has the ability to search in parallel in binary and continuous space. • By using the proposed method, most of the problems due to line contingencies are solved. • Comparison studies are done to compare the performance of the proposed method. - Abstract: This paper presents a novel method to solve the security-based optimal placement and parameter setting of unified power flow controller (UPFC) problem based on the hybrid group search optimization (HGSO) technique. Firstly, HGSO is introduced in order to solve mixed-integer type problems. Afterwards, the proposed method is applied to the security-based optimal placement and parameter setting of UPFC problem. The focus of the paper is to enhance power system security by eliminating or minimizing overloaded lines and bus voltage limit violations under single line contingencies. Simulation studies are carried out on the IEEE 6-bus, IEEE 14-bus and IEEE 30-bus systems in order to verify the accuracy and robustness of the proposed method. The results indicate that by using the proposed method, the power system remains secure under single line contingencies.

  20. μMORE: A microfluidic magnetic oscillation reactor for accelerated parameter optimization in biocatalysis.

    Science.gov (United States)

    Jussen, Daniel; Soltner, Helmut; Stute, Birgit; Wiechert, Wolfgang; von Lieres, Eric; Pohl, Martina

    2016-08-10

    Enzymatic parameter determination is an essential step in biocatalytic process development, and higher throughput in miniaturized devices is therefore urgently needed. An ideal microfluidic device should combine easy immobilization and retention of a minimal amount of biocatalyst with a well-mixed reaction volume. Together, these criteria are hardly met by current tools. Here we describe a microfluidic reactor (μMORE) which employs magnetic particles for both enzyme immobilization and efficient mixing, using two permanent magnets placed in rotating cylinders next to the glass chip reactor. The chip geometry and agitation speed were optimized by investigating the mixing and retention characteristics using simulation and dye distribution analysis. Subsequently, the μMORE was successfully applied to determine critical biocatalytic process parameters in a parallelized manner for the carboligation of benzaldehyde and acetaldehyde to (S)-2-hydroxy-1-phenylpropan-1-one with less than 5 μg of benzoylformate decarboxylase from Pseudomonas putida immobilized on magnetic beads. Here, one run of the device in six parallelized glass reactors took only 2-3 h for an immobilized enzyme with very low activity (∼2 U/mg). The optimized parameter set was finally tested in a 10 mL enzyme membrane reactor, demonstrating that the μMORE provides a solid data base for biocatalytic process optimization. PMID:27288595

  1. Waste Characterization Using Gamma Ray Spectrometry with Automated Efficiency Optimization - 13404

    Energy Technology Data Exchange (ETDEWEB)

    Bosko, A.; Venkataraman, R.; Bronson, F.L.; Ilie, G.; Russ, W.R. [Canberra Industries, 800 Research Parkway, Meriden, CT 06450 (United States)

    2013-07-01

    Gamma ray spectrometry using High Purity Germanium (HPGe) detectors is commonly employed in assaying radioactive waste streams from a variety of sources: nuclear power plants, Department of Energy (DOE) laboratories, medical facilities, decontamination and decommissioning activities, etc. The radioactive material is typically packaged in boxes or drums (e.g., B-25 boxes or 208 liter drums) and assayed to identify and quantify radionuclides. Depending on the origin of the waste stream, the radionuclides could be special nuclear materials (SNM), fission products, or activation products. Efficiency calibration of the measurement geometry is a critical step in achieving accurate quantification of radionuclide content. Due to the large size of the waste items, it is impractical and expensive to manufacture gamma ray standard sources for performing a measurement-based calibration. For well over a decade, mathematical efficiency methods such as those in Canberra's In Situ Object Counting System (ISOCS) have been successfully employed in the efficiency calibration of gamma-based waste assay systems. In the traditional ISOCS-based calibrations, the user provides input data such as the dimensions of the waste item, the average density and fill height of the matrix, and the matrix composition. As in measurement-based calibrations, the user typically defines a homogeneous matrix with a uniform distribution of radioactivity. Actual waste containers can be quite nonuniform, however. Such simplifying assumptions in the efficiency calibration could lead to a large Total Measurement Uncertainty (TMU), thus limiting the amount of waste that can be disposed of as intermediate or low activity level waste. To improve the accuracy of radionuclide quantification and reduce the TMU, Canberra has developed the capability to optimize the efficiency calibration using the ISOCS method. The optimization is based on benchmarking the efficiency shape and magnitude to the data available

  2. Global parameter optimization for maximizing radioisotope detection probabilities at fixed false alarm rates

    Science.gov (United States)

    Portnoy, David; Feuerbach, Robert; Heimberg, Jennifer

    2011-10-01

    Today there is a tremendous amount of interest in systems that can detect radiological or nuclear threats. Many of these systems operate in extremely high throughput situations where delays caused by false alarms can have a significant negative impact. Thus, calculating the tradeoff between detection rates and false alarm rates is critical for their successful operation. Receiver operating characteristic (ROC) curves have long been used to depict this tradeoff. The methodology was first developed in the field of signal detection. In recent years it has been used increasingly in machine learning and data mining applications. It follows that this methodology could be applied to radiological/nuclear threat detection systems. However, many of these systems do not fit into the classic principles of statistical detection theory because they tend to lack tractable likelihood functions and have many parameters, which, in general, do not have a one-to-one correspondence with the detection classes. This work proposes a strategy to overcome these problems by empirically finding parameter values that maximize the probability of detection for a selected number of probabilities of false alarm. To find these parameter values, a statistical global optimization technique that seeks to estimate portions of a ROC curve is proposed. The optimization combines elements of simulated annealing with elements of genetic algorithms. Genetic algorithms were chosen because they can reduce the risk of getting stuck in local minima. However, classic genetic algorithms operate on arrays of Boolean values or bit strings, so simulated annealing is employed to perform mutation in the genetic algorithm. The presented initial results were generated using an isotope identification algorithm developed at the Johns Hopkins University Applied Physics Laboratory. The algorithm has 12 parameters: 4 real-valued and 8 Boolean. A simulated dataset was used for the optimization study; the "threat" set of spectra

  3. Estimation of musculotendon parameters for scaled and subject specific musculoskeletal models using an optimization technique.

    Science.gov (United States)

    Modenese, Luca; Ceseracciu, Elena; Reggiani, Monica; Lloyd, David G

    2016-01-25

    A challenging aspect of subject specific musculoskeletal modeling is the estimation of muscle parameters, especially optimal fiber length and tendon slack length. In this study, the method for scaling musculotendon parameters published by Winby et al. (2008), J. Biomech. 41, 1682-1688, has been reformulated, generalized and applied to two cases of practical interest: 1) the adjustment of muscle parameters in the entire lower limb following linear scaling of a generic model and 2) their estimation "from scratch" in a subject specific model of the hip joint created from medical images. In the first case, the procedure maintained the muscles' operating range between models with mean errors below 2.3% of the reference model normalized fiber length value. In the second case, a subject specific model of the hip joint was created using segmented bone geometries and muscle volumes publicly available for a cadaveric specimen from the Living Human Digital Library (LHDL). Estimated optimal fiber lengths were found to be consistent with those of a previously published dataset for all 27 considered muscle bundles except gracilis. However, computed tendon slack lengths differed from tendon lengths measured in the LHDL cadaver, suggesting that tendon slack length should be determined via optimization in subject-specific applications. Overall, the presented methodology could adjust the parameters of a scaled model and enabled the estimation of muscle parameters in newly created subject specific models. All data used in the analyses are of public domain and a tool implementing the algorithm is available at https://simtk.org/home/opt_muscle_par. PMID:26776930

  4. Parameters Identification of Fluxgate Magnetic Core Adopting the Biogeography-Based Optimization Algorithm.

    Science.gov (United States)

    Jiang, Wenjuan; Shi, Yunbo; Zhao, Wenjie; Wang, Xiangxin

    2016-01-01

    The main part of the magnetic fluxgate sensor is the magnetic core, the hysteresis characteristic of which affects the performance of the sensor. When the fluxgate sensors are modelled for design purposes, an accurate model of hysteresis characteristic of the cores is necessary to achieve good agreement between modelled and experimental data. The Jiles-Atherton model is simple and can reflect the hysteresis properties of the magnetic material precisely, which makes it widely used in hysteresis modelling and simulation of ferromagnetic materials. However, in practice, it is difficult to determine the parameters accurately owing to the sensitivity of the parameters. In this paper, the Biogeography-Based Optimization (BBO) algorithm is applied to identify the Jiles-Atherton model parameters. To enhance the performances of the BBO algorithm such as global search capability, search accuracy and convergence rate, an improved Biogeography-Based Optimization (IBBO) algorithm is put forward by using Arnold map and mutation strategy of Differential Evolution (DE) algorithm. Simulation results show that IBBO algorithm is superior to Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Differential Evolution algorithm and BBO algorithm in identification accuracy and convergence rate. The IBBO algorithm is applied to identify Jiles-Atherton model parameters of selected permalloy. The simulation hysteresis loop is in high agreement with experimental data. Using permalloy as core of fluxgate probe, the simulation output is consistent with experimental output. The IBBO algorithm can identify the parameters of Jiles-Atherton model accurately, which provides a basis for the precise analysis and design of instruments and equipment with magnetic core. PMID:27347974

  5. Long-term evaluation of TiO2-based 68Ge/68Ga generators and optimized automation of [68Ga]DOTATOC radiosynthesis.

    Science.gov (United States)

    Lin, Mai; Ranganathan, David; Mori, Tetsuya; Hagooly, Aviv; Rossin, Raffaella; Welch, Michael J; Lapi, Suzanne E

    2012-10-01

    Interest in using (68)Ga is rapidly increasing for clinical PET applications due to its favorable imaging characteristics and increased accessibility. The focus of this study was to provide our long-term evaluations of the two TiO(2)-based (68)Ge/(68)Ga generators and develop an optimized automation strategy to synthesize [(68)Ga]DOTATOC by using HEPES as a buffer system. This data will be useful in standardizing the evaluation of (68)Ge/(68)Ga generators and automation strategies to comply with regulatory issues for clinical use. PMID:22897970

  6. Optimized Energy Management of a Single-House Residential Micro-Grid With Automated Demand Response

    DEFF Research Database (Denmark)

    Anvari-Moghaddam, Amjad; Monsef, Hassan; Rahimi-Kian, Ashkan;

    2015-01-01

    In this paper, an intelligent multi-objective energy management system (MOEMS) is proposed for applications in residential LVAC micro-grids where households are equipped with smart appliances, such as washing machine, dishwasher, tumble dryer and electric heating and they have the capability to t...... to reduce residential energy use and improve the user’s satisfaction degree by optimal management of demand/generation sides....... to take part in demand response (DR) programs. The superior performance and efficiency of the proposed system is studied through several scenarios and case studies and validated in comparison with the conventional models. The simulation results demonstrate that the proposed MOEMS has the capability...

  7. Optimization of Process Parameters of Stamping Forming of the Automotive Lower Floor Board

    Directory of Open Access Journals (Sweden)

    Guoying Ma

    2014-01-01

    Full Text Available Many process parameters have a great effect on the forming quality of parts during the automobile panel stamping process. This paper took the automotive lower floor board as the research object; the forming process was analyzed by finite element simulation using Dynaform. The influences of four main process parameters, including BHF (blank holder force), die corner radius, friction coefficient, and die clearance, on the maximum thinning rate and the maximum thickening rate were researched based on an orthogonal experiment. The results show that the influences of the various factors on the targets are not identical. On this basis, the optimization of the four parameters was carried out, a high quality product was obtained, and the maximum thinning rate and maximum thickening rate were effectively controlled. The results also show that the simulation analysis provides the basis for the optimization of the forming process parameters, and it can greatly shorten die manufacturing cycles, reduce production costs, and improve production efficiency.

  8. Fast and Efficient Black Box Optimization Using the Parameter-less Population Pyramid.

    Science.gov (United States)

    Goldman, B W; Punch, W F

    2015-01-01

    The parameter-less population pyramid (P3) is a recently introduced method for performing evolutionary optimization without requiring any user-specified parameters. P3's primary innovation is to replace the generational model with a pyramid of multiple populations that are iteratively created and expanded. In combination with local search and advanced crossover, P3 scales to problem difficulty, exploiting previously learned information before adding more diversity. Across seven problems, each tested using on average 18 problem sizes, P3 outperformed all five advanced comparison algorithms. This improvement includes requiring fewer evaluations to find the global optimum and better fitness when using the same number of evaluations. Using both algorithm analysis and comparison, we find P3's effectiveness is due to its ability to properly maintain, add, and exploit diversity. Unlike the best comparison algorithms, P3 was able to achieve this quality without any problem-specific tuning. Thus, unlike previous parameter-less methods, P3 does not sacrifice quality for applicability. Therefore we conclude that P3 is an efficient, general, parameter-less approach to black box optimization which is more effective than existing state-of-the-art techniques.

  9. Multiobjective Optimization of Turning Cutting Parameters for J-Steel Material

    Directory of Open Access Journals (Sweden)

    Adel T. Abbas

    2016-01-01

    Full Text Available This paper presents a multiobjective optimization study of cutting parameters in the turning operation of a heat-treated alloy steel material (J-Steel) with Vickers hardness in the range of HV 365–395, using uncoated, unlubricated Tungsten-Carbide tools. The primary aim is to identify proper settings of the cutting parameters (cutting speed, feed rate, and depth of cut) that lead to reasonable compromises between good surface quality and a high material removal rate. A thorough exploration of the range of cutting parameters was conducted via a five-level full-factorial experimental matrix of samples, and the Pareto trade-off frontier was identified. The trade-off among the objectives was observed to have a “knee” shape, in which certain settings for the cutting parameters can achieve both good surface quality and a high material removal rate within certain limits. However, improving one of the objectives beyond these limits can only happen at the expense of a large compromise in the other objective. An alternative approach for identifying the trade-off frontier was also tested via a multiobjective implementation of the Efficient Global Optimization (m-EGO) algorithm. The m-EGO algorithm was successful in identifying two points within the good range of the trade-off frontier with 36% fewer experimental samples.
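
    Extracting the Pareto trade-off frontier from the sampled cutting conditions reduces to a non-domination check, as sketched below with hypothetical roughness/material-removal-rate pairs.

        # Minimal sketch of extracting the Pareto (non-dominated) set from sampled
        # cutting conditions; the roughness/MRR values are hypothetical.
        import numpy as np

        # Columns: surface roughness Ra (um, minimize), material removal rate (cm^3/min, maximize)
        samples = np.array([[0.8, 12.0], [1.1, 25.0], [0.9, 18.0], [1.6, 40.0],
                            [2.4, 55.0], [1.5, 22.0], [3.1, 60.0], [0.7, 10.0],
                            [1.2, 20.0], [2.0, 35.0]])

        def dominates(a, b):
            """True if a is at least as good as b in both objectives and better in one."""
            return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

        pareto = [p for p in samples
                  if not any(dominates(q, p) for q in samples if not np.array_equal(q, p))]
        print("non-dominated (Ra, MRR):")
        for p in sorted(pareto, key=lambda p: p[0]):
            print(f"  Ra = {p[0]:.1f} um, MRR = {p[1]:.1f} cm^3/min")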

  10. Optimization of burnishing parameters and determination of select surface characteristics in engineering materials

    Indian Academy of Sciences (India)

    P Ravindra Babu; K Ankamma; T Siva Prasad; A V S Raju; N Eswara Prasad

    2012-08-01

    The present study is aimed at filling the gaps in the scientific understanding of the burnishing process, and also at arriving at technological solutions for surface modification based on burnishing of some commonly employed engineering materials. The effects of various burnishing parameters on the surface characteristics, surface microstructure and microhardness are evaluated, reported and discussed for EN series steels (EN 8, EN 24 and EN 31), an aluminium alloy (AA6061) and alpha-beta brass. The burnishing parameters considered are principally burnishing speed, burnishing force, burnishing feed and number of passes. The Taguchi technique is employed in the present investigation to identify the parameters that most influence surface roughness. An effort is also made to identify the optimal burnishing parameters and to provide a scientific basis for such optimization. Finally, a brief attempt is made to construct burnishing maps with respect to strength level (in this case, the average microhardness of the unburnished material).

  11. Statistical optimization of process parameters on biohydrogen production from glucose by Clostridium sp. Fanp2.

    Science.gov (United States)

    Pan, C M; Fan, Y T; Xing, Y; Hou, H W; Zhang, M L

    2008-05-01

    Statistically based experimental designs were applied to optimize the process parameters for hydrogen production from glucose by Clostridium sp. Fanp2, which was isolated from the effluent sludge of an anaerobic hydrogen-producing bioreactor. The important factors influencing hydrogen production, identified by the Plackett-Burman initial screening method, were glucose, phosphate buffer and vitamin solution. The path of steepest ascent was undertaken to approach the optimal region of the three significant factors. A Box-Behnken design and response surface analysis were adopted to further investigate the mutual interactions between the variables and identify the optimal values that bring maximum hydrogen production. Experimental results showed that glucose, vitamin solution and phosphate buffer concentration each had a significant individual influence on the specific hydrogen production potential (Ps). Simultaneously, glucose and vitamin solution, and glucose and phosphate buffer, were interdependent. The optimal conditions for the maximal Ps were: glucose 23.75 g/l, phosphate buffer 0.159 M and vitamin solution 13.3 ml/l. Using this statistical optimization method, the hydrogen production from glucose was increased from 2248.5 to 4165.9 ml H2/l.
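
    The response-surface step can be sketched as fitting a full quadratic model in the coded factors and locating its constrained maximum; the design points and responses below are synthetic, not the study's data.

        # Minimal sketch of the response-surface step: fit a full quadratic model in
        # three coded factors and locate the maximizing settings; data are synthetic.
        import numpy as np
        from itertools import combinations
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        X = rng.uniform(-1, 1, (20, 3))                        # coded factor settings
        true = lambda x: 4000 - 300*(x[:, 0]-0.3)**2 - 200*(x[:, 1]+0.2)**2 - 250*(x[:, 2]-0.1)**2
        y = true(X) + rng.normal(0, 20, 20)                    # synthetic responses (ml H2/l)

        def quad_terms(X):
            cols = [np.ones(len(X))] + [X[:, i] for i in range(3)]
            cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
            cols += [X[:, i] ** 2 for i in range(3)]
            return np.column_stack(cols)

        beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)
        predict = lambda x: float(quad_terms(np.atleast_2d(x)) @ beta)

        res = minimize(lambda x: -predict(x), x0=np.zeros(3), bounds=[(-1, 1)] * 3)
        print("optimal coded settings:", np.round(res.x, 2), "predicted max:", round(-res.fun, 1))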

  12. Thermodynamic Optimization of the Operative Parameters for the Heat Recovery in Combined Power Plants

    Directory of Open Access Journals (Sweden)

    Alessandro Franco

    2001-03-01

    Full Text Available

    For combined power plants, the optimization of the heat recovery steam generator (HRSG) is of particular interest in order to improve the efficiency of heat recovery from the turbine exhaust gas and to maximize the power production in the steam cycle. The thermodynamic optimization is the first step of a power plant design optimization process. The aim of this paper is to provide thermodynamic tools for the optimal selection of the operative parameters of the HRSG, starting from which a detailed optimization of its design variables can be carried out. For the thermodynamic analysis, the selected objective is the minimization of thermal exergy losses, taking into account only the irreversibility due to the temperature difference between the hot and cold streams. Various HRSG configurations have been analyzed, from the simplest, a single evaporator, to the common configuration of a two-pressure steam generator with five different sections.

    •  This paper was presented at the ECOS'00 Conference in Enschede, July 5-7, 2000

  13. The Use of Response Surface Methodology to Optimize Parameter Adjustments in CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Shao-Hsien Chen

    2014-01-01

    Full Text Available This paper mainly covers research intended to improve the circular accuracy of CNC machine tools through the adjustment and analysis of the main controller parameters. In this study, controller analysis software was used to detect the adjustment status of the servo parameters of the feed axis. According to the FANUC parameter manual, the parameter address, frequency, response measurements, and the one-fourth corner acceleration and deceleration measurements of the machine tools were adjusted. A design of experiments (DOE) approach was adopted in this study for taking circular measurements and for the planning and selection of important parameter data. The Minitab R15 software was adopted for the analysis of the experimental data, while the semi-normal probability plot, Pareto chart, and analysis of variance (ANOVA) were used to determine the impacts of the significant parameter factors and the interactions among them. Additionally, based on the response surface map and contour plot, the optimal values were obtained. In addition, comparison and verification were conducted using the Taguchi method and regression analysis to improve machining accuracy and efficiency. The unadjusted error was 7.8 μm; through the regression analysis method, the error was 5.8 μm, and through the Taguchi analysis method, the error was 6.4 μm.

  14. Optimization Of Blasting Design Parameters On Open Pit Bench A Case Study Of Nchanga Open Pits

    Directory of Open Access Journals (Sweden)

    Victor Mwango Bowa

    2015-08-01

    Full Text Available In hard rock mining, blasting is the most productive excavation technique applied to fragment in-situ rock to the required size for efficient loading and crushing. In order to blast the in-situ rock to the desired fragment size, blast design parameters such as bench height, hole diameter, spacing, burden, hole length, bottom charge, specific charge and rock factor are considered. The research was carried out as a practical method on the Nchanga Open Pits (NOP) ore bench to optimize the blasting design parameters that can yield the required fragmentation size, thereby reducing shovel loading times and maximizing the efficiency of the subsequent mining unit operations such as hauling and crushing. Fragmentation characteristics such as the mean fragment size were measured by means of a digital measuring tape and predicted using the Kuznetsov equation, and the rock factor value of the ore bench was calculated using the Lilly (1986) equations by means of rock characteristics. Traditional blasting design parameters were acquired for NOP and modified using the Langefors and Sharma P.A. approaches. Several blast operations were conducted using both the traditional and the modified blasting design parameters on the same ore bench under the same geological conditions. Shovel loading times and fragment sizes were obtained after the blasts from the ore bench where the traditional and modified blasting design parameters were applied. Results show that the mean fragment size and loading times were reduced from 51 cm and 12 minutes with the traditional blasting design parameters to 22 cm and 3 minutes with the modified parameters.
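
    A sketch of the Kuznetsov mean-fragment-size prediction is given below, using one common form of the equation as applied in Kuz-Ram type analyses; the bench geometry, rock factor and charge mass are hypothetical, not the NOP values.

        # Illustrative sketch of a Kuznetsov mean-fragment-size prediction (one common
        # form of the equation); the bench geometry and charge figures are hypothetical.
        def kuznetsov_x50(rock_factor, burden, spacing, bench_height, charge_kg,
                          rel_weight_strength=100.0):
            """Mean fragment size X50 in cm."""
            volume_per_hole = burden * spacing * bench_height      # m^3 of rock per hole
            return (rock_factor
                    * (volume_per_hole / charge_kg) ** 0.8
                    * charge_kg ** (1.0 / 6.0)
                    * (115.0 / rel_weight_strength) ** (19.0 / 20.0))

        # Hypothetical bench: A = 9, burden 3 m, spacing 3.5 m, bench 10 m, 120 kg ANFO per hole
        x50 = kuznetsov_x50(9.0, 3.0, 3.5, 10.0, 120.0)
        print(f"predicted mean fragment size: {x50:.1f} cm")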

  15. A Fully Automated Trial Selection Method for Optimization of Motor Imagery Based Brain-Computer Interface.

    Science.gov (United States)

    Zhou, Bangyan; Wu, Xiaopei; Lv, Zhao; Zhang, Lei; Guo, Xiaojin

    2016-01-01

    Independent component analysis (ICA) as a promising spatial filtering method can separate motor-related independent components (MRICs) from the multichannel electroencephalogram (EEG) signals. However, the unpredictable burst interferences may significantly degrade the performance of ICA-based brain-computer interface (BCI) system. In this study, we proposed a new algorithm frame to address this issue by combining the single-trial-based ICA filter with zero-training classifier. We developed a two-round data selection method to identify automatically the badly corrupted EEG trials in the training set. The "high quality" training trials were utilized to optimize the ICA filter. In addition, we proposed an accuracy-matrix method to locate the artifact data segments within a single trial and investigated which types of artifacts can influence the performance of the ICA-based MIBCIs. Twenty-six EEG datasets of three-class motor imagery were used to validate the proposed methods, and the classification accuracies were compared with that obtained by frequently used common spatial pattern (CSP) spatial filtering algorithm. The experimental results demonstrated that the proposed optimizing strategy could effectively improve the stability, practicality and classification performance of ICA-based MIBCI. The study revealed that rational use of ICA method may be crucial in building a practical ICA-based MIBCI system. PMID:27631789

  16. Optimal part and module selection for synthetic gene circuit design automation.

    Science.gov (United States)

    Huynh, Linh; Tagkopoulos, Ilias

    2014-08-15

    An integral challenge in synthetic circuit design is the selection of optimal parts to populate a given circuit topology, so that the resulting circuit behavior best approximates the desired one. In some cases, it is also possible to reuse multipart constructs or modules that have already been built and experimentally characterized. Efficient part and module selection algorithms are essential to systematically search the solution space, and their significance will only increase in the following years due to the projected explosion in part libraries and circuit complexity. Here, we address this problem by introducing a structured abstraction methodology and a dynamic programming-based algorithm that guarantees optimal part selection. In addition, we provide three extensions, based on symmetry checking, information look-ahead and branch-and-bound techniques, to reduce the running time and space requirements. We have evaluated the proposed methodology on a benchmark of 11 circuits, a database of 73 parts and 304 experimentally constructed modules, with encouraging results. This work represents a fundamental departure from traditional heuristic-based methods for part and module selection and is a step toward maximizing efficiency in synthetic circuit design and construction. PMID:24933033
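
    The dynamic-programming idea can be illustrated, in a heavily simplified form, by the sketch below: for a linear cascade of circuit positions it recursively memoizes the best-scoring part choice subject to a compatibility rule, which is the optimal-substructure property such an algorithm exploits. The part names, scores and compatibility check are invented and do not correspond to the paper's database of 73 parts.

```python
# Toy dynamic program for picking one part per position in a linear cascade.
# Part names, scores, and the compatibility rule are hypothetical illustrations.
from functools import lru_cache

# score[pos][part]: how well the part approximates the desired behaviour at that position
LIBRARY = {
    0: {"pTet": 0.9, "pLac": 0.7},
    1: {"rbsA": 0.8, "rbsB": 0.6},
    2: {"gfp": 0.95, "rfp": 0.85},
}

def compatible(prev_part, part):
    # Stand-in constraint, e.g. forbidding reuse of the same regulator twice in a row
    return prev_part != part

@lru_cache(maxsize=None)
def best(pos, prev_part):
    """Best total score and assignment for positions pos..end, given the previous part."""
    if pos == len(LIBRARY):
        return 0.0, ()
    candidates = []
    for part, score in LIBRARY[pos].items():
        if compatible(prev_part, part):
            tail_score, tail_parts = best(pos + 1, part)
            candidates.append((score + tail_score, (part,) + tail_parts))
    return max(candidates)   # optimal substructure: best local choice plus best tail

total, assignment = best(0, None)
print(f"Optimal assignment {assignment} with score {total:.2f}")
```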

  17. Experiments for practical education in process parameter optimization for selective laser sintering to increase workpiece quality

    Science.gov (United States)

    Reutterer, Bernd; Traxler, Lukas; Bayer, Natascha; Drauschke, Andreas

    2016-04-01

    Selective Laser Sintering (SLS) is considered one of the most important additive manufacturing processes due to component stability and its broad range of usable materials. However, the influence of the different process parameters on mechanical workpiece properties is still poorly studied, so further optimization is necessary to increase workpiece quality. In order to investigate the impact of various process parameters, laboratory experiments were implemented to improve the understanding of the limitations and advantages of SLS at an educational level. The experiments are based on two different workstations used to teach students the fundamentals of SLS. First, a 50 W CO2 laser workstation is used to investigate the interaction of the laser beam with the material under varied process parameters, analyzed on a single-layered test piece. Second, the FORMIGA P110 laser sintering system from EOS is used to print different 3D test pieces as a function of various process parameters. Finally, quality attributes including warpage, dimensional accuracy and tensile strength are tested. For dimension measurements and evaluation of the surface structure, a telecentric lens in combination with a camera is used. A tensile test machine allows testing of the tensile strength and interpretation of stress-strain curves. The developed laboratory experiments are suitable for teaching students the influence of the processing parameters. In this context, students learn to optimize the input parameters depending on the component to be manufactured and to increase the overall quality of the final workpiece.
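
    One quantity commonly used to relate SLS process parameters of this kind is the laser energy density per unit area or volume of powder; the sketch below evaluates the usual formulas E_A = P/(v*h) and E_V = P/(v*h*t). The formula choice and the parameter values are illustrative assumptions, not settings from the course workstations.

```python
# Sketch: energy-density figures often used to compare laser-sintering parameter sets.
# Parameter values are hypothetical examples, not the course's actual settings.
def areal_energy_density(power_w, scan_speed_mm_s, hatch_mm):
    """E_A = P / (v * h)   [J/mm^2]"""
    return power_w / (scan_speed_mm_s * hatch_mm)

def volumetric_energy_density(power_w, scan_speed_mm_s, hatch_mm, layer_mm):
    """E_V = P / (v * h * t)   [J/mm^3]"""
    return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

if __name__ == "__main__":
    p, v, h, t = 25.0, 2500.0, 0.25, 0.10   # W, mm/s, mm, mm (illustrative)
    print(f"Areal energy density:      {areal_energy_density(p, v, h):.4f} J/mm^2")
    print(f"Volumetric energy density: {volumetric_energy_density(p, v, h, t):.3f} J/mm^3")
```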

  18. Optimization of tribological parameters in abrasive wear mode of carbon-epoxy hybrid composites

    International Nuclear Information System (INIS)

    Highlights: • Optimization of factors affecting abrasive wear of hybrid composite. • Experimental studies integrated with Taguchi-based grey analysis and ANOVA. • Abrasive wear resistance improved with the addition of filler. • Wear rate depends on filler loading, grit of abrasive paper and type of filler. - Abstract: The abrasive wear performance of fabric-reinforced composites filled with functional fillers is influenced by the properties of the constituents. This work is focused on identifying how factors such as filler type, filler loading, grit size of the SiC paper, normal applied load and sliding distance affect the two-body abrasive wear behaviour of the hybrid composites. Abrasive wear tests were carried out on carbon fabric reinforced epoxy composite (C-E) filled separately with alumina (Al2O3) and molybdenum disulphide (MoS2) fillers in different proportions, using a pin-on-disc apparatus. The experiments were planned according to a Taguchi L18 orthogonal array considering five factors, one at two levels and the remaining at three levels, affecting the abrasion process. Grey relational analysis (GRA) was employed to optimize the tribological parameters with multiple responses. Analysis of variance (ANOVA) was employed to determine the significance of the factors influencing wear. The comparative specific wear rates of all the composites under dry sliding and two-body abrasive wear are also discussed. The analysis showed that the filler loading, grit size and filler type are the most significant factors in controlling the specific wear rate of the C-E composite. The optimal combination of the process parameters for the multiple performance characteristics of the composite under study is the set with MoS2 as the filler type, a filler loading of 10 wt.%, a grit size of 320, a load of 15 N and a sliding distance of 30 m. Further, the optimal parameter settings for minimum specific wear rate, minimum coefficient of friction and maximum hardness were corroborated with the help of scanning electron micrographs.
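
    Grey relational analysis, used above to collapse the multiple wear responses into a single ranking, can be sketched as follows: each response is normalised, a grey relational coefficient is computed per run, and the runs are ranked by the mean coefficient (the grey relational grade). The run data below are invented for illustration and are not the paper's L18 results.

```python
# Sketch of grey relational analysis (GRA) for two smaller-is-better responses.
# The runs and response values are invented; the paper's L18 data are not reproduced.
import numpy as np

# Rows = experimental runs; columns = [specific wear rate, coefficient of friction]
responses = np.array([
    [4.2, 0.62],
    [3.1, 0.55],
    [2.4, 0.48],
    [3.8, 0.60],
    [2.9, 0.50],
])

# 1) Normalise each column for smaller-the-better characteristics
col_max, col_min = responses.max(axis=0), responses.min(axis=0)
norm = (col_max - responses) / (col_max - col_min)

# 2) Deviation sequence and grey relational coefficient (distinguishing coefficient zeta = 0.5)
delta = 1.0 - norm
zeta = 0.5
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# 3) Grey relational grade = mean coefficient per run; the best run has the highest grade
grade = coeff.mean(axis=1)
print("Grades:", np.round(grade, 3), "-> best run index:", int(np.argmax(grade)))
```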

  19. Optimization of processing parameters for the preparation of phytosterol microemulsions by the solvent displacement method.

    Science.gov (United States)

    Leong, Wai Fun; Che Man, Yaakob B; Lai, Oi Ming; Long, Kamariah; Misran, Misni; Tan, Chin Ping

    2009-09-23

    The purpose of this study was to optimize the parameters involved in the production of water-soluble phytosterol microemulsions for use in the food industry. In this study, response surface methodology (RSM) was employed to model and optimize four of the processing parameters, namely, the number of cycles of high-pressure homogenization (1-9 cycles), the pressure used for high-pressure homogenization (100-500 bar), the evaporation temperature (30-70 degrees C), and the concentration ratio of the microemulsions (1-5). All responses, namely particle size (PS), polydispersity index (PDI), and percent ethanol residual (%ER), were well fitted by a reduced cubic model obtained by multiple regression after manual elimination. The coefficient of determination (R(2)) and absolute average deviation (AAD) values for PS, PDI, and %ER were 0.9628 and 0.5398%, 0.9953 and 0.7077%, and 0.9989 and 1.0457%, respectively. The optimized processing parameters were 4.88 (approximately 5) cycles of high-pressure homogenization, a homogenization pressure of 400 bar, an evaporation temperature of 44.5 degrees C, and a microemulsion concentration ratio of 2.34 (approximately 2). The corresponding responses under the optimized preparation conditions were a minimal particle size of 328 nm, a minimal polydispersity index of 0.159, and <0.1% ethanol residual. The chi-square test verified the model, whereby the experimental values of PS, PDI, and %ER agreed with the predicted values at a 0.05 level of significance. PMID:19694442
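
    The response-surface workflow reported above (fit a polynomial model to the design points, then search it for the optimum) can be sketched as follows for a single response; the two factors, design points and quadratic model are illustrative stand-ins for the paper's four-factor reduced cubic models.

```python
# Sketch of a response-surface fit and numerical optimisation for one response (particle size).
# The factors, design points, and responses below are hypothetical illustrations.
import numpy as np
from scipy.optimize import minimize

# Factors: x1 = homogenization cycles, x2 = pressure (bar); response: particle size (nm)
X = np.array([[1, 100], [1, 500], [5, 300], [9, 100], [9, 500],
              [5, 100], [5, 500], [3, 300], [7, 300]], dtype=float)
y = np.array([620, 540, 360, 450, 400, 410, 340, 420, 380], dtype=float)

def design_matrix(x1, x2):
    # Full quadratic model in two factors: 1, x1, x2, x1*x2, x1^2, x2^2
    return np.c_[np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2]

beta, *_ = np.linalg.lstsq(design_matrix(X[:, 0], X[:, 1]), y, rcond=None)

def predicted_size(v):
    x1, x2 = v
    row = design_matrix(np.array([x1]), np.array([x2]))
    return (row @ beta)[0]

# Search the fitted surface for the factor setting that minimises particle size
res = minimize(predicted_size, x0=[5.0, 300.0], bounds=[(1, 9), (100, 500)])
print("Fitted coefficients:", np.round(beta, 3))
print("Predicted optimum (cycles, bar):", np.round(res.x, 2), "-> size", round(float(res.fun), 1), "nm")
```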

  20. A hybrid systems strategy for automated spacecraft tour design and optimization

    Science.gov (United States)

    Stuart, Jeffrey R.

    As the number of operational spacecraft increases, autonomous operation is rapidly becoming a critical necessity. Additionally, the capability to rapidly generate baseline trajectories greatly expands the range of options available to analysts as they explore the design space to meet mission demands. Thus, a general strategy is developed, one that is suitable for the construction of flight plans for both Earth-based and interplanetary spacecraft that encounter multiple objects, where these multiple encounters comprise a "tour". The proposed scheme is flexible in implementation and can readily be adjusted to a variety of mission architectures. Heuristic algorithms that autonomously generate baseline tour trajectories and, when appropriate, adjust reference solutions in the presence of rapidly changing environments are investigated. Furthermore, relative priorities for ranking the targets are explicitly accommodated during the construction of potential tour sequences. As a consequence, a priori, as well as newly acquired, knowledge concerning the target objects enhances the potential value of the ultimate encounter sequences. A variety of transfer options are incorporated, from rendezvous arcs enabled by low-thrust engines to more conventional impulsive orbit adjustments via chemical propulsion technologies. When advantageous, trajectories are optimized in terms of propellant consumption via a combination of indirect and direct methods; such a combination of available technologies is an example of hybrid optimization. Additionally, elements of hybrid systems theory, i.e., the blending of dynamical states, some discrete and some continuous, are integrated into the high-level tour generation scheme. For a preliminary investigation, this strategy is applied to mission design scenarios for a Sun-Jupiter Trojan asteroid tour as well as orbital debris removal for near-Earth applications.
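
    As a very rough illustration of the heuristic, priority-aware tour construction described above (not the dissertation's actual algorithm), the sketch below greedily builds an encounter sequence by scoring candidate targets with a weighted combination of priority and estimated delta-v cost under a total budget; the target names, costs and weights are invented.

```python
# Toy greedy heuristic for building an encounter "tour" from prioritised targets.
# Target names, priorities, delta-v estimates, and scoring weights are all
# illustrative assumptions, not results from the referenced work.
TARGETS = {                 # name: (science priority, rough delta-v cost for the leg, km/s)
    "debris_A": (0.9, 0.35),
    "debris_B": (0.6, 0.20),
    "debris_C": (0.8, 0.50),
    "debris_D": (0.4, 0.15),
}
DV_BUDGET = 0.9             # km/s available for the whole tour
W_PRIORITY, W_COST = 1.0, 1.5

def greedy_tour(targets, budget):
    remaining, tour, used = dict(targets), [], 0.0
    while remaining:
        # Score each candidate: reward for priority, penalty for propellant cost
        name, (_prio, dv) = max(remaining.items(),
                                key=lambda kv: W_PRIORITY * kv[1][0] - W_COST * kv[1][1])
        remaining.pop(name)
        if used + dv <= budget:          # accept the leg only if the budget allows it
            tour.append(name)
            used += dv
    return tour, used

tour, dv_used = greedy_tour(TARGETS, DV_BUDGET)
print(f"Tour: {tour}, delta-v used: {dv_used:.2f} km/s")
```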