Automated Optimization of Walking Parameters for the Nao Humanoid Robot
Girardi, N.; Kooijman, C.; Wiggers, A.J.; Visser, A.
2013-01-01
This paper describes a framework for optimizing walking parameters for a Nao humanoid robot. In this case an omnidirectional walk is learned. The parameters are learned in simulation with an evolutionary approach. The best performance was obtained for a combination of a low mutation rate and a high crossover rate.
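The evolutionary approach described above can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: the fitness function (distance to a fixed target vector) stands in for the walking performance measured in simulation, and the population size, selection scheme, and operator settings are invented for the example. It does show the two rates the paper varies, mutation rate and crossover rate.

```python
import random

# Toy evolutionary optimizer over a small parameter vector.
# fitness() is a stand-in for the simulated walking objective.
def fitness(params, target=(0.3, 0.05, 0.2)):
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(pop_size=20, dims=3, generations=50,
           mutation_rate=0.05, crossover_rate=0.9, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 1) for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            if rng.random() < crossover_rate:      # uniform crossover
                child = [ai if rng.random() < 0.5 else bi
                         for ai, bi in zip(a, b)]
            else:
                child = list(a)
            child = [c + rng.gauss(0, 0.05) if rng.random() < mutation_rate
                     else c for c in child]        # per-gene Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Rerunning `evolve` with different `mutation_rate`/`crossover_rate` pairs is the kind of grid the paper explores; the reported best combination was a low mutation rate with a high crossover rate.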
apsis - Framework for Automated Optimization of Machine Learning Hyper Parameters
Diehl, Frederik; Jauch, Andreas
2015-01-01
The apsis toolkit presented in this paper provides a flexible framework for hyperparameter optimization and includes both random search and a Bayesian optimizer. It is implemented in Python and its architecture features adaptability to any desired machine learning code. It can easily be used with common Python ML frameworks such as scikit-learn. Published under the MIT License, other researchers are strongly encouraged to check out the code, contribute or raise any suggestions. The code can be ...
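Random search, the simpler of the two strategies mentioned above, can be sketched in a few lines. The `random_search` helper and its parameter names below are hypothetical, not the apsis API; the objective is a toy stand-in for a cross-validated model score.

```python
import random

# Random search over a box-constrained hyperparameter space:
# sample n_trials points uniformly and keep the best-scoring one.
def random_search(objective, space, n_trials=100, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective standing in for cross-validated accuracy.
space = {"learning_rate": (1e-4, 1e-1), "reg": (0.0, 1.0)}
params, score = random_search(
    lambda p: -(p["learning_rate"] - 0.01) ** 2 - (p["reg"] - 0.3) ** 2,
    space)
```

A Bayesian optimizer replaces the uniform sampling with a model of the objective (typically a Gaussian process) that proposes promising points, which usually needs far fewer trials.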
Hutter, Frank; Bartz-Beielstein, Thomas; Hoos, Holger H.; Leyton-Brown, Kevin; Murphy, Kevin P.
This work experimentally investigates model-based approaches for optimizing the performance of parameterized randomized algorithms. Such approaches build a response surface model and use this model for finding good parameter settings of the given algorithm. We evaluated two methods from the literature that are based on Gaussian process models: sequential parameter optimization (SPO) (Bartz-Beielstein et al. 2005) and sequential Kriging optimization (SKO) (Huang et al. 2006). SPO performed better "out-of-the-box," whereas SKO was competitive when response values were log transformed. We then investigated key design decisions within the SPO paradigm, characterizing the performance consequences of each. Based on these findings, we propose a new version of SPO, dubbed SPO+, which extends SPO with a novel intensification procedure and a log-transformed objective function. In a domain for which performance results for other (model-free) parameter optimization approaches are available, we demonstrate that SPO+ achieves state-of-the-art performance. Finally, we compare this automated parameter tuning approach to an interactive, manual process that makes use of classical
Parameter optimization in the process of inverse treatment planning for intensity modulated radiation therapy (IMRT) is mainly conducted by human planners in order to create a plan with the desired dose distribution. To automate this tedious process, an artificial intelligence (AI) guided system was developed and examined. The AI system can automatically accomplish the optimization process based on prior knowledge operated by several fuzzy inference systems (FIS). Prior knowledge, which was collected from human planners during their routine trial-and-error process of inverse planning, has first to be 'translated' to a set of 'if-then rules' for driving the FISs. To minimize subjective error which could be costly during this knowledge acquisition process, it is necessary to find a quantitative method to automatically accomplish this task. A well-developed machine learning technique, based on an adaptive neuro fuzzy inference system (ANFIS), was introduced in this study. Based on this approach, prior knowledge of a fuzzy inference system can be quickly collected from observation data (clinically used constraints). The learning capability and the accuracy of such a system were analyzed by generating multiple FIS from data collected from an AI system with known settings and rules. Multiple analyses showed good agreements of FIS and ANFIS according to rules (error of the output values of ANFIS based on the training data from FIS of 7.77 ± 0.02%) and membership functions (3.9%), thus suggesting that the 'behavior' of an FIS can be propagated to another, based on this process. The initial experimental results on a clinical case showed that ANFIS is an effective way to build FIS from practical data, and analysis of ANFIS and FIS with clinical cases showed good planning results provided by ANFIS. OAR volumes encompassed by characteristic percentages of isodoses were reduced by a mean of between 0 and 28%. The study demonstrated a feasible way
Suárez, Carlos Gómez; Reigosa, Paula Diaz; Iannuzzo, Francesco
2016-01-01
An original tool for parameter extraction of PSpice models has been released, enabling simple parameter identification. A physics-based IGBT model is used to demonstrate that the optimization tool is capable of generating a set of parameters which predicts the steady-state and switching behavior of two IGBT modules rated at 1.7 kV / 1 kA and 1.7 kV / 1.4 kA.
Verrelst, J.; Rivera, J. P.; Leonenko, G.; Alonso, L.; Moreno, J.
2012-04-01
Radiative transfer (RT) modeling plays a key role for earth observation (EO) because it is needed to design EO instruments and to develop and test inversion algorithms. The inversion of an RT model is considered a successful approach for the retrieval of biophysical parameters because it is physically based and generally applicable. However, to the broader community this approach is considered laborious because of its many processing steps, and expert knowledge is required to realize precise model parameterization. We have recently developed a radiative transfer toolbox, ARTMO (Automated Radiative Transfer Models Operator), with the purpose of providing in a graphical user interface (GUI) essential models and tools required for terrestrial EO applications such as model inversion. In short, the toolbox allows the user: i) to choose between various plant leaf and canopy RT models (e.g. models from the PROSPECT and SAIL family, FLIGHT), ii) to choose between spectral band settings of various air- and space-borne sensors or to define their own sensor settings, iii) to simulate a massive amount of spectra based on a look-up table (LUT) approach and store them in a relational database, iv) to plot spectra of multiple models and compare them with measured spectra, and finally, v) to run model inversion against optical imagery given several cost options and accuracy estimates. In this work ARTMO was used to tackle some well-known problems related to model inversion. According to the Hadamard conditions, mathematical models of physical phenomena are mathematically invertible if the solution of the inverse problem exists, is unique and depends continuously on the data. This assumption is not always met because of the large number of unknowns, and different strategies have been proposed to overcome this problem. Several of these strategies have been implemented in ARTMO and were analyzed here to optimize the inversion performance. Data came from the SPARC-2003 dataset
Optimization of hydrocyclone work parameters
Golomeova, Mirjana; Krstev, Boris; Golomeov, Blagoj
2003-01-01
The paper presents the procedure of optimization of laboratory hydrocyclone work by the application of dispersion analysis and planning with a Greek-Latin square. The application of this method makes possible a significant reduction in the number of tests and close optimization of the whole process. Tests were carried out with a D-100 mm hydrocyclone. The optimization parameters are as follows: content of solids in the pulp, underflow diameter, overflow diameter and inlet pressure. The influence of optimi...
Multivariate optimization of ILC parameters
Bazarov, Ivan V
2005-01-01
We present results of multiobjective optimization of the International Linear Collider (ILC) which seeks to maximize luminosity at each given total cost of the linac (capital and operating costs of cryomodules, refrigeration and RF). Evolutionary algorithms allow quick exploration of optimal sets of parameters in a complicated system such as the ILC in the presence of realistic constraints, as well as investigation of various what-if scenarios in potential performance. Among the parameters we varied were the accelerating gradient and Q of the cavities (in a coupled manner following a realistic Q vs. E curve), the number of particles per bunch, the bunch length, the number of bunches in the train, etc. We find an optimum which decreases (relative to the TDR baseline) the total linac cost by 22 % and the capital cost by 25 % at the same luminosity of 3·10^38
Optimization-based Method for Automated Road Network Extraction
Xiong, D
2001-09-18
Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper reviews the literature on road extraction and describes a study of an optimization-based method for automated road network extraction.
Automated global optimization of commercial SAGD operations
The economic optimization of steam assisted gravity drainage (SAGD) operations has been largely conducted through the use of simulations to identify optimal steam use approaches. In this study, the cumulative steam to oil ratio (CSOR) was optimized by altering the steam injection pressure throughout the evolution of the process in a detailed, 3-d reservoir model. A generic Athabasca simulation model was used along with a thermal reservoir simulator which used a corner point grid. A line heater was specified in the grid cells containing the well bores to mimic steam circulation. During heating, the injection and production locations were allowed to produce reservoir fluids from the reservoir to relieve pressure associated with the thermal expansion of oil sand. After steam circulation, the well bores were switched to an SAGD operation. At the producer well the operating constraint imposed a maximum temperature difference between the saturation temperature corresponding to the pressure of the fluids and the temperature in the wellbore equal to 5 degrees C. At the injection well, the steam injection pressure was specified according to the optimizer. A response surface was constructed by fitting the parameter sets and corresponding cost functions to a biquadratic function. After the minimum from the cost function was determined, a new set of parameters was selected to complete the iterations. Results indicated that optimization of SAGD is feasible with complex and detailed reservoir models by using parallel calculations. The general trend determined by the optimization algorithm developed in the research indicated that before the steam chamber contacts the overburden, the operating pressure should be relatively high. After contact is made, the injection pressure should be lowered to reduce heat losses. 17 refs., 1 tab., 5 figs
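The response-surface step described above can be illustrated with a one-parameter toy version: fit a quadratic to sampled (parameter, cost) pairs by least squares and take the vertex of the fitted curve as the next trial point. The pressure range, cost function, and sample count below are invented for illustration; the study fits a biquadratic surface over several parameters and iterates with full reservoir simulations supplying the costs.

```python
import numpy as np

# Sampled parameter sets and a noiseless toy cost (a stand-in for CSOR
# values returned by the reservoir simulator at each injection pressure).
rng = np.random.default_rng(0)
pressure = rng.uniform(1.0, 5.0, size=12)
cost = 2.0 * (pressure - 3.2) ** 2 + 1.5

# Least-squares fit of cost ~ a*p**2 + b*p + c.
A = np.column_stack([pressure ** 2, pressure, np.ones_like(pressure)])
a, b, c = np.linalg.lstsq(A, cost, rcond=None)[0]

# Vertex of the fitted parabola: the candidate for the next iteration.
p_min = -b / (2 * a)
```

With noiseless quadratic data the fit recovers the minimum exactly; with simulator noise, the vertex is only an estimate and the surrounding region is re-sampled on the next iteration.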
Applications of Intelligent Evolutionary Algorithms in Optimal Automation System Design
Tung-Kuan Liu; Jyh-Horng Chou
2011-01-01
This paper proposes an intelligent evolutionary algorithm that can be applied in the design of optimal automation systems, and employs a multimodal six-bar mechanism optimization design, job shop production scheduling for the fishing equipment industry, and dynamic real-time production scheduling system design cases to show how the technique developed in this paper is highly effective at resolving optimal automation system design problems. Major breakthroughs in artificial intelligence contin...
Automated Cache Performance Analysis And Optimization
Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]
2013-12-23
While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters
Spray automated balancing of rotors - How process parameters influence performance
Smalley, A. J.; Baldwin, R. M.; Fleming, D. P.; Yuhas, J. S.
1989-01-01
This paper addresses the application of spray-automated balancing of rotors, and the influence that various operating parameters will have on balancing performance. Spray-automated balancing uses the fuel-air repetitive explosion process to imbed short, discrete bursts of high velocity, high temperature powder into a rotating part at an angle selected to reduce unbalance of the part. The shortness of the burst, the delay in firing of the gun, the speed of the disk and the variability in speed all influence the accuracy and effectiveness of the automated balancing process. The paper evaluates this influence by developing an analytical framework and supplementing the analysis with empirical data obtained while firing the gun at a rotating disk. Encouraging results are obtained, and it is shown that the process should perform satisfactorily over a wide range of operating parameters. Further experimental results demonstrate the ability of the method to reduce vibration levels induced by mass unbalance in a rotating disk.
DRAM BASED PARAMETER DATABASE OPTIMIZATION
Marcinkevicius, Tadas
2012-01-01
This thesis suggests an improved parameter database implementation for one of Ericsson products. The parameter database is used during the initialization of the system as well as during the later operation. The database size is constantly growing because the parameter database is intended to be used with different hardware configurations. When a new technology platform is released, multiple revisions with additional features and functionalities are later created, resulting in introduction of ...
Evaluation of GCC optimization parameters
Rodrigo D. Escobar
2012-12-01
Compile-time optimization of code can result in significant performance gains. The amount of these gains varies widely depending upon the code being optimized, the hardware being compiled for, the specific performance increase attempted (e.g. speed, throughput, memory utilization, etc.) and the compiler used. We used the latest version of the SPEC CPU 2006 benchmark suite to help gain an understanding of possible performance improvements using GCC (GNU Compiler Collection) options, focusing mainly on speed gains made possible by tuning the compiler with the standard compiler optimization levels as well as a specific compiler option for the hardware processor. We compared the best standardized tuning options obtained for a Core i7 processor to the same relative options used on a Pentium 4 to determine whether the GNU project has improved its performance tuning capabilities for specific hardware over time.
Parameters Optimization of Synergetic Recognition Approach
Gao Jun; Dong Huoming; Shao Jing; Zhao Jing
2005-01-01
Synergetic pattern recognition is a novel and effective pattern recognition method with advantages in image recognition. Research has shown that the attention parameters λ and the parameters B, C directly influence the recognition results, but there is no general theory for controlling these parameters during recognition. We analyze these parameters abstractly in this paper, and propose a novel parameter optimization method based on the simulated annealing (SA) algorithm. The SA algorithm has good optimization performance and is used to search for the globally optimal values of these parameters. Theoretical analysis and experimental results both show that the proposed parameter optimization method is effective: it can fully improve the performance of the synergetic recognition approach, and its realization is simple and fast.
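The simulated-annealing search described above follows a standard pattern: propose a small random perturbation, always accept improvements, and accept worsening moves with a Boltzmann probability that shrinks as the temperature cools. The sketch below anneals a small parameter vector (standing in for λ, B, C) on a toy cost surface; the cost function, step size, and cooling schedule are illustrative, not from the paper.

```python
import math
import random

# Toy cost surface: minimum at (0.7, 0.7, 0.7).
def cost(x):
    return sum((xi - 0.7) ** 2 for xi in x)

def anneal(dims=3, steps=2000, t0=1.0, cooling=0.995, seed=4):
    rng = random.Random(seed)
    x = [rng.uniform(-1, 1) for _ in range(dims)]
    best, best_cost, t = list(x), cost(x), t0
    for _ in range(steps):
        cand = [xi + rng.gauss(0, 0.1) for xi in x]   # random perturbation
        delta = cost(cand) - cost(x)
        # Accept downhill moves always; uphill moves with prob exp(-delta/t).
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
            if cost(x) < best_cost:
                best, best_cost = list(x), cost(x)
        t *= cooling                                   # geometric cooling
    return best, best_cost

best, best_cost = anneal()
```

The early high-temperature phase lets the search escape local minima, which is the property that motivates SA over plain hill climbing for parameter surfaces with multiple basins.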
Video Superresolution via Parameter-Optimized Particle Swarm Optimization
2014-01-01
Video superresolution (VSR) aims to reconstruct a high-resolution video sequence from a low-resolution sequence. We propose a novel particle swarm optimization algorithm named parameter-optimized multiple swarms PSO (POMS-PSO). We assessed the optimization performance of POMS-PSO with four standard benchmark functions. To reconstruct high-resolution video, we build an imaging degradation model. In view of optimization, VSR is converted to an optimization computation problem. And we take POMS...
QUADRATIC OPTIMIZATION METHOD AND ITS APPLICATION ON OPTIMIZING MECHANISM PARAMETER
ZHAO Yun; CHEN Jianneng; YU Yaxin; YU Gaohong; ZHU Jianping
2006-01-01
In order that the designed mechanism meets the kinematic requirements with optimal dynamic behavior, a quadratic optimization method is proposed based on the different characteristics of kinematic and dynamic optimization. This method includes two optimization steps, kinematic and dynamic, and uses the results of the kinematic optimization as the constraint equations of the dynamic optimization. The method was applied, with remarkable results, to the parameter optimization of a transplanting mechanism with elliptic planetary gears for a high-speed rice seedling transplanter. The parameter spectrum that meets the kinematic requirements is obtained through visualized human-computer interaction in the kinematic optimization, and the optimal parameters are obtained with an improved genetic algorithm in the dynamic optimization. In the dynamic optimization, the objective function is the optimal dynamic behavior and the constraint equations come from the results of the kinematic optimization. This method is suitable for multi-objective optimization where both the kinematic and dynamic performances act as objective functions.
Optimal Parameters Multicomponent Mixtures Extruding
Ramil F. Sagitov
2013-01-01
Experimental research on the extruding of multicomponent mixtures from production wastes is carried out, and a unit for the production of composites from different types of waste is presented. Having analyzed the dependence of the energy requirements of multicomponent mixture extruding on die length and component content at three values of the angular rate of screw rotation, we obtained the values of the energy requirements at the optimal die length, angular speed and percentage of binding additives.
Optimization Tools For Automated Vehicle Systems
Shiller, Zvi
1995-01-01
This work focuses on computing time-optimal maneuvers which might be used to develop strategies for emergency maneuvers and establishing the vehicle' s performance envelope. The problem of emergency maneuvers is addressed in the context of time optimal control. Time optimal trajectories are computed along specified paths for a nonlinear vehicle model, which considers both lateral and longitudinal motions.
Optimization of submerged vane parameters
H SHARMA; B JAIN; Z AHMAD
2016-03-01
Submerged vanes are airfoils which are in general placed at a certain angle with respect to the flow direction in a channel to induce artificial circulations downstream. By virtue of these artificially generated circulations, submerged vanes have been utilized to protect river banks against erosion, to control the shifting of rivers, to avoid blocking of lateral intakes with sediment deposition, etc. Odgaard and his associates experimentally obtained the optimum vane sizes and recommended them for vane design. This paper is an attempt to review and validate the findings of Odgaard and his associates by utilizing computational fluid dynamics and experiments as tools, in which the vane-generated vorticity in the downstream was maximized in order to obtain optimum vane parameters for single and multiple vane arrays.
Automated firewall analytics design, configuration and optimization
Al-Shaer, Ehab
2014-01-01
This book provides a comprehensive and in-depth study of automated firewall policy analysis for designing, configuring and managing distributed firewalls in large-scale enterprise networks. It presents methodologies, techniques and tools for researchers as well as professionals to understand the challenges and improve the state-of-the-art of managing firewalls systematically in both research and application domains. Chapters explore set-theory, managing firewall configuration globally and consistently, access control lists with encryption, and authentication such as IPSec policies. The author
Automated beam steering using optimal control
We present a steering algorithm which, with the aid of a model, allows the user to specify beam behavior throughout a beamline, rather than just at specified beam position monitor (BPM) locations. The model is used primarily to compute the values of the beam phase vectors from BPM measurements, and to define cost functions that describe the steering objectives. The steering problem is formulated as a constrained optimization problem; however, by applying optimal control theory we can reduce it to an unconstrained optimization whose dimension is the number of control signals.
Cosmological parameter estimation using Particle Swarm Optimization
Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and which make the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
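A minimal PSO update loop looks like the following. Each particle is a candidate parameter vector that moves under inertia plus random attraction toward its personal best and the swarm's global best. The sphere function here is only a stand-in for the likelihood surface, and the swarm size, iteration count, and coefficients (`w`, `c1`, `c2`) are conventional illustrative choices, not the paper's settings.

```python
import random

def pso(dims=4, n_particles=15, iters=200, w=0.7, c1=1.5, c2=1.5, seed=2):
    rng = random.Random(seed)
    f = lambda x: sum(xi ** 2 for xi in x)   # toy objective to minimize
    pos = [[rng.uniform(-5, 5) for _ in range(dims)]
           for _ in range(n_particles)]
    vel = [[0.0] * dims for _ in range(n_particles)]
    pbest = [list(p) for p in pos]           # personal bests
    gbest = list(min(pbest, key=f))          # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dims):
                # Inertia + cognitive (pbest) + social (gbest) pulls.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = list(pos[i])
                if f(pbest[i]) < f(gbest):
                    gbest = list(pbest[i])
    return gbest, f(gbest)

gbest, gcost = pso()
```

Unlike MCMC, PSO only seeks the maximum (here, minimum) rather than sampling the surface, which is why it can remain effective when the likelihood is rugged or high-dimensional.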
Mixed integer evolution strategies for parameter optimization.
Li, Rui; Emmerich, Michael T M; Eggermont, Jeroen; Bäck, Thomas; Schütz, M; Dijkstra, J; Reiber, J H C
2013-01-01
Evolution strategies (ESs) are powerful probabilistic search and optimization algorithms gleaned from biological evolution theory. They have been successfully applied to a wide range of real world applications. The modern ESs are mainly designed for solving continuous parameter optimization problems. Their ability to adapt the parameters of the multivariate normal distribution used for mutation during the optimization run makes them well suited for this domain. In this article we describe and study mixed integer evolution strategies (MIES), which are natural extensions of ES for mixed integer optimization problems. MIES can deal with parameter vectors consisting not only of continuous variables but also with nominal discrete and integer variables. Following the design principles of the canonical evolution strategies, they use specialized mutation operators tailored for the aforementioned mixed parameter classes. For each type of variable, the choice of mutation operators is governed by a natural metric for this variable type, maximal entropy, and symmetry considerations. All distributions used for mutation can be controlled in their shape by means of scaling parameters, allowing self-adaptation to be implemented. After introducing and motivating the conceptual design of the MIES, we study the optimality of the self-adaptation of step sizes and mutation rates on a generalized (weighted) sphere model. Moreover, we prove global convergence of the MIES on a very general class of problems. The remainder of the article is devoted to performance studies on artificial landscapes (barrier functions and mixed integer NK landscapes), and a case study in the optimization of medical image analysis systems. In addition, we show that with proper constraint handling techniques, MIES can also be applied to classical mixed integer nonlinear programming problems. PMID:22122384
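The core idea above, one mutation operator per variable class, can be sketched as follows. Gaussian perturbation handles continuous genes, symmetric integer steps handle integer genes, and uniform resampling handles nominal discrete genes. The step sizes and rates here are fixed illustrative constants, not the self-adaptive scheme the article develops.

```python
import random

# Mixed-integer mutation: each variable class gets its own operator.
def mutate(continuous, integers, nominals, nominal_domain,
           sigma=0.1, int_step=2, p_nom=0.2, rng=None):
    rng = rng or random.Random()
    # Continuous genes: Gaussian perturbation.
    new_cont = [x + rng.gauss(0, sigma) for x in continuous]
    # Integer genes: random-signed integer step of bounded size.
    new_int = [z + rng.choice([-1, 1]) * rng.randint(0, int_step)
               for z in integers]
    # Nominal genes: resample uniformly from the domain with prob p_nom.
    new_nom = [rng.choice(nominal_domain) if rng.random() < p_nom else v
               for v in nominals]
    return new_cont, new_int, new_nom

c, z, n = mutate([0.5, 1.2], [3, 7], ["red", "blue"],
                 ["red", "green", "blue"], rng=random.Random(0))
```

In the self-adaptive version, `sigma`, `int_step`, and `p_nom` are themselves encoded in the individual and mutated along with it, which is what the article's convergence analysis studies.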
Towards Automated Design, Analysis and Optimization of Declarative Curation Workflows
Tianhong Song
2014-10-01
Data curation is increasingly important. Our previous work on a Kepler curation package has demonstrated advantages that come from automating data curation pipelines by using workflow systems. However, manually designed curation workflows can be error-prone and inefficient due to a lack of user understanding of the workflow system, misuse of actors, or human error. Correcting problematic workflows is often very time-consuming. A more proactive workflow system can help users avoid such pitfalls. For example, static analysis before execution can be used to detect potential problems in a workflow and help the user to improve the workflow design. In this paper, we propose a declarative workflow approach that supports semi-automated workflow design, analysis and optimization. We show how the workflow design engine helps users to construct data curation workflows, how the workflow analysis engine detects different design problems of workflows, and how workflows can be optimized by exploiting parallelism.
Automated inference procedure for the determination of cell growth parameters
Harris, Edouard A.; Koh, Eun Jee; Moffat, Jason; McMillen, David R.
2016-01-01
The growth rate and carrying capacity of a cell population are key to the characterization of the population's viability and to the quantification of its responses to perturbations such as drug treatments. Accurate estimation of these parameters necessitates careful analysis. Here, we present a rigorous mathematical approach for the robust analysis of cell count data, in which all the experimental stages of the cell counting process are investigated in detail with the machinery of Bayesian probability theory. We advance a flexible theoretical framework that permits accurate estimates of the growth parameters of cell populations and of the logical correlations between them. Moreover, our approach naturally produces an objective metric of avoidable experimental error, which may be tracked over time in a laboratory to detect instrumentation failures or lapses in protocol. We apply our method to the analysis of cell count data in the context of a logistic growth model by means of a user-friendly computer program that automates this analysis, and present some samples of its output. Finally, we note that a traditional least squares fit can provide misleading estimates of parameter values, because it ignores available information with regard to the way in which the data have actually been collected.
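The "traditional least squares fit" the authors caution about can be sketched for the logistic model N(t) = K / (1 + (K/N0 - 1)·exp(-r·t)). The sketch below fits growth rate r and carrying capacity K by a coarse grid search over the sum of squared errors, on synthetic noiseless data with N0 assumed known; all numbers are invented for illustration, and the point is that this baseline yields point estimates with no account of counting error, unlike the Bayesian treatment described above.

```python
import numpy as np

# Logistic growth curve.
def logistic(t, n0, r, k):
    return k / (1 + (k / n0 - 1) * np.exp(-r * t))

# Synthetic noiseless "cell count" data from known parameters.
t = np.linspace(0, 10, 25)
true_n0, true_r, true_k = 5.0, 0.9, 100.0
counts = logistic(t, true_n0, true_r, true_k)

# Coarse grid search minimizing the sum of squared errors over (r, K),
# with n0 treated as known.
rs = np.linspace(0.1, 2.0, 60)
ks = np.linspace(50.0, 150.0, 60)
sse = [(r, k, float(np.sum((logistic(t, true_n0, r, k) - counts) ** 2)))
       for r in rs for k in ks]
r_hat, k_hat, _ = min(sse, key=lambda x: x[2])
```

On clean data the grid minimum lands near the true (r, K); on real count data, ignoring how the counts were obtained is exactly what the abstract warns can make such estimates misleading.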
Optimization of parameters of heat exchangers vehicles
Andrei MELEKHIN; Aleksandr MELEKHIN
2014-09-01
The relevance of the topic is due to the problem of economizing resources in the heating systems of vehicles. To solve this problem we have developed an integrated research method which allows optimization of the parameters of vehicle heat exchangers. This method solves a multicriteria optimization problem with a nonlinear optimization program, using an array of temperatures obtained by thermography. The authors have developed a mathematical model of the heat exchange process on the heat exchange surfaces of the apparatus, solved the multicriteria optimization problem and checked its adequacy against an experimental stand with visualization of thermal fields. The results include an optimal range of the controlled parameters influencing the heat exchange process, with minimal metal consumption and maximum heat output of the finned heat exchanger; regularities of the heat exchange process with generalizing dependencies for the temperature distribution on the heat-release surface of vehicle heat exchangers; and convergence between calculations based on the theoretical dependencies and the solved mathematical model.
Automated Multivariate Optimization Tool for Energy Analysis: Preprint
Ellis, P. G.; Griffith, B. T.; Long, N.; Torcellini, P. A.; Crawley, D.
2006-07-01
Building energy simulations are often used for trial-and-error evaluation of "what-if" options in building design: a limited search for an optimal solution, or "optimization". Computerized searching has the potential to automate the input and output, evaluate many options, and perform enough simulations to account for the complex interactions among combinations of options. This paper describes ongoing efforts to develop such a tool. The optimization tool employs multiple modules, including a graphical user interface, a database, a preprocessor, the EnergyPlus simulation engine, an optimization engine, and a simulation run manager. Each module is described and the overall application architecture is summarized.
Analysis, Model Parameter Extraction and Optimization of Planar Inductors Using MATLAB
Gadjeva, Elissaveta; Durev, Vladislav; Hristov, Marin
2010-01-01
The extended possibilities of the general-purpose software MATLAB for modeling, simulation and optimization can be successfully used in RF microelectronic circuit design. Based on description of the device models, various optimization problems can be solved. Automated model parameter extraction procedure for on-chip wide-band spiral inductor model has been developed and realized in the MATLAB environment. The obtained results for the simulated two-port Y- and S-parameters of the spiral induct...
Zheng, Yuanjie; Wang, Yan; Keller, Brad M.; Conant, Emily; Gee, James C.; Kontos, Despina
2013-02-01
Estimating a woman's risk of breast cancer is becoming increasingly important in clinical practice. Mammographic density, estimated as the percent of dense (PD) tissue area within the breast, has been shown to be a strong risk factor. Studies also support a relationship between mammographic texture and breast cancer risk. We have developed a fully automated software pipeline for computerized analysis of digital mammography parenchymal patterns by quantitatively measuring both breast density and texture properties. Our pipeline combines advanced computer algorithms of pattern recognition, computer vision, and machine learning and offers a standardized tool for breast cancer risk assessment studies. Different from many existing methods performing parenchymal texture analysis within specific breast subregions, our pipeline extracts texture descriptors for points on a spatial regular lattice and from a surrounding window of each lattice point, to characterize the local mammographic appearance throughout the whole breast. To demonstrate the utility of our pipeline, and optimize its parameters, we perform a case-control study by retrospectively analyzing a total of 472 digital mammography studies. Specifically, we investigate the window size, which is a lattice related parameter, and compare the performance of texture features to that of breast PD in classifying case-control status. Our results suggest that different window sizes may be optimal for raw (12.7 mm²) versus vendor post-processed images (6.3 mm²). We also show that the combination of PD and texture features outperforms PD alone. The improvement is significant (p=0.03) when raw images and a window size of 12.7 mm² are used, having an ROC AUC of 0.66. The combination of PD and our texture features computed from post-processed images with a window size of 6.3 mm² achieves an ROC AUC of 0.75.
Rohe Peter
2012-10-01
Abstract Background High-throughput methods are widely used for strain screening, effectively resulting in binary information regarding high or low productivity. Nevertheless, achieving quantitative and scalable parameters for fast bioprocess development is much more challenging, especially for heterologous protein production. Here, the nature of the foreign protein makes it impossible to predict, e.g., the best expression construct, secretion signal peptide, inducer concentration, induction time, temperature, and substrate feed rate in fed-batch operation, to name only a few. Therefore, a high number of systematic experiments is necessary to elucidate the best conditions for heterologous expression of each new protein of interest. Results To increase the throughput in bioprocess development, we used a microtiter plate based cultivation system (Biolector) which was fully integrated into a liquid-handling platform enclosed in laminar airflow housing. This automated cultivation platform was used for optimization of the secretory production of a cutinase from Fusarium solani pisi with Corynebacterium glutamicum. The online monitoring of biomass, dissolved oxygen, and pH in each of the microtiter plate wells makes it possible to trigger sampling or dosing events with the pipetting robot, enabling a reliable selection of the best performing cutinase producers. In addition, further automated methods like media optimization and induction profiling were developed and validated. All biological and bioprocess parameters were optimized exclusively at microtiter plate scale and scaled well to 1 L and 20 L stirred tank bioreactor scale. Conclusions The optimization of heterologous protein expression in microbial systems currently requires extensive testing of biological and bioprocess engineering parameters. This can be efficiently boosted by using a microtiter plate cultivation setup embedded into a liquid-handling system, providing more throughput
Automated process parameters tuning for an injection moulding machine with soft computing
Peng ZHAO; Jian-zhong FU; Hua-min ZHOU; Shu-biao CUI
2011-01-01
In injection moulding production, the tuning of the process parameters is a challenging job, which relies heavily on the experience of skilled operators. In this paper, taking into consideration operator assessment during moulding trials, a novel intelligent model for automated tuning of process parameters is proposed. This consists of case based reasoning (CBR), empirical model (EM), and fuzzy logic (FL) methods. CBR and EM are used to imitate the recall and intuitive thoughts of skilled operators, respectively, while FL is adopted to simulate the skilled operator's optimization thoughts. First, CBR is used to set up the initial process parameters. If CBR fails, EM is employed to calculate the initial parameters. Next, a moulding trial is performed using the initial parameters. Then FL is adopted to optimize these parameters and correct defects repeatedly until the moulded part is found to be satisfactory. Based on the above methodologies, intelligent software was developed and embedded in the controller of an injection moulding machine. Experimental results show that the intelligent software can be effectively used in practical production, and it greatly reduces the dependence on the experience of the operators.
Video Superresolution via Parameter-Optimized Particle Swarm Optimization
Yunyi Yan
2014-01-01
Video superresolution (VSR) aims to reconstruct a high-resolution video sequence from a low-resolution sequence. We propose a novel particle swarm optimization algorithm named parameter-optimized multiple swarms PSO (POMS-PSO). We assessed the optimization performance of POMS-PSO on four standard benchmark functions. To reconstruct high-resolution video, we build an imaging degradation model. VSR is then converted into an optimization computation problem, and we take POMS-PSO as the optimization method to solve it, overcoming the poor effect, low accuracy, and large calculation cost of other VSR algorithms. The proposed VSR method does not require exact movement estimation and does not need the computation of movement vectors. In terms of peak signal-to-noise ratio (PSNR), sharpness, and entropy, the proposed POMS-PSO-based VSR method showed better objective performance. Beyond objective measures, experimental results also showed that the proposed method can reconstruct high-resolution video sequences with better subjective quality.
Orndorff-Plunkett, Franklin
2011-05-01
The SCREAMER simulation code is widely used at Sandia National Laboratories for designing and simulating pulsed power accelerator experiments on super power accelerators. A preliminary parameter study of Z with a magnetic switching retrofit illustrates the utility of the automation script for optimizing pulsed power designs. SCREAMER is a circuit-based code commonly used in pulsed-power design and requires numerous iterations to find optimal configurations. System optimization using simulations like SCREAMER is by nature inefficient and incomplete when done manually. This is especially the case when the system has many interactive elements whose emergent effects may be unforeseeable and complicated. For increased completeness, efficiency and robustness, investigators should probe a suitably confined parameter space using deterministic, genetic, cultural, ant-colony or other computational intelligence methods. I have developed SAE2 - a user-friendly, deterministic script that automates the search for optima of pulsed-power designs with SCREAMER. This manual demonstrates how to make input decks for SAE2 and optimize any pulsed-power design that can be modeled using SCREAMER. Application of SAE2 to magnetic switching on a model of a potential Z refurbishment illustrates the power of SAE2. With respect to the manual optimization, the automated optimization resulted in 5% greater peak current (10% greater energy) and a 25% increase in safety factor for the most highly stressed element.
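As a concrete illustration of the deterministic search style that SAE2 automates, the sketch below refines a coarse grid around the best point of a toy response surface. The `peak_current` function is a hypothetical stand-in for a SCREAMER run, not the actual circuit model; the bounds, step counts, and parameter names are made-up values.

```python
# Hedged sketch: a deterministic grid-refinement search of the kind an
# automation script can drive. A toy objective stands in for a circuit
# simulation; SAE2 itself would invoke SCREAMER here.

def peak_current(inductance_nH, switch_time_ns):
    # Hypothetical response surface with a single peak at (42, 15).
    return -((inductance_nH - 42.0) ** 2) - 0.5 * (switch_time_ns - 15.0) ** 2

def grid_refine(objective, bounds, steps=9, rounds=4):
    """Evaluate a coarse grid over the confined parameter space, then
    shrink the bounds around the best point and repeat."""
    for _ in range(rounds):
        (lo1, hi1), (lo2, hi2) = bounds
        grid1 = [lo1 + i * (hi1 - lo1) / (steps - 1) for i in range(steps)]
        grid2 = [lo2 + i * (hi2 - lo2) / (steps - 1) for i in range(steps)]
        best = max((objective(a, b), a, b) for a in grid1 for b in grid2)
        _, a, b = best
        w1, w2 = (hi1 - lo1) / steps, (hi2 - lo2) / steps
        bounds = [(a - w1, a + w1), (b - w2, b + w2)]  # zoom in
    return best

score, L, t = grid_refine(peak_current, [(10.0, 100.0), (5.0, 30.0)])
```

Each round narrows the search window by roughly a factor of the grid density, so a handful of rounds already locates the optimum far more systematically than manual iteration.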
An automated system for measuring parameters of nematode sinusoidal movement
Stirbl Robert C
2005-02-01
Abstract Background Nematode sinusoidal movement has been used as a phenotype in many studies of C. elegans development, behavior and physiology. A thorough understanding of the ways in which genes control these aspects of biology depends, in part, on the accuracy of phenotypic analysis. While worms that move poorly are relatively easy to describe, description of hyperactive movement and movement modulation presents more of a challenge. An enhanced capability to analyze all the complexities of nematode movement will thus help our understanding of how genes control behavior. Results We have developed a user-friendly system to analyze nematode movement in an automated and quantitative manner. In this system nematodes are automatically recognized and a computer-controlled microscope stage ensures that the nematode is kept within the camera field of view while video images from the camera are stored on videotape. In a second step, the images from the videotapes are processed to recognize the worm and to extract its changing position and posture over time. From this information, a variety of movement parameters are calculated. These parameters include the velocity of the worm's centroid, the velocity of the worm along its track, the extent and frequency of body bending, the amplitude and wavelength of the sinusoidal movement, and the propagation of the contraction wave along the body. The length of the worm is also determined and used to normalize the amplitude and wavelength measurements. To demonstrate the utility of this system, we report here a comparison of movement parameters for a small set of mutants affecting the Go/Gq mediated signaling network that controls acetylcholine release at the neuromuscular junction. The system allows comparison of distinct genotypes that affect movement similarly (activation of Gq-alpha versus loss of Go-alpha function), as well as of different mutant alleles at a single locus (null and dominant negative alleles)
Mazinan, A H; Shakhesi, S
2016-05-01
This research addresses autonomous space systems incorporating new automated maneuver strategies in the presence of parameter uncertainties. The main aim of the investigation is to realize high-resolution small-amplitude orbital maneuvers via the first control strategy, and subsequently large-amplitude orbital maneuvers via the second control strategy. A trajectory optimization provides the three-axis reference commands that enable the overactuated autonomous space system to transfer from its initial orbit to the final one in finite burn, while uncertainties in key system parameters such as the thrust vector, the center of gravity, and the moments of inertia are taken into account. The performance of the strategies is finally verified through a series of experiments and a number of benchmarks. PMID:26895709
RTLS entry load relief parameter optimization
Crull, T. J.
1975-01-01
Results are presented from a study of a candidate load relief control law for use during the pullup phase of Return-to-Launch-Site (RTLS) abort entries. The control law parameters and cycle time which optimized performance of the normal load factor limiting phase (load relief phase) of an RTLS entry are examined. A set of control law gains, a smoothing parameter, and a normal force coefficient curve fit are established which resulted in good load relief performance considering the possible aerodynamic coefficient uncertainties defined. Also, the examination of various guidance cycle times revealed improved load relief performance with decreasing cycle time. A 0.5-second cycle provided smooth and adequate load relief in the presence of all the aerodynamic uncertainties examined.
Optimal deadlock avoidance Petri net supervisors for automated manufacturing systems
Keyi XING; Feng TIAN; Xiaojun YANG
2007-01-01
Deadlock avoidance problems are investigated for automated manufacturing systems with flexible routings. Based on the Petri net models of the systems, this paper proposes, for the first time, the concept of perfect maximal resource-transition circuits and their saturated states. The concept facilitates the development of system liveness characterization and deadlock avoidance Petri net supervisors. Deadlock is characterized as some perfect maximal resource-transition circuits reaching their saturated states. For a large class of manufacturing systems, which do not contain center resources, optimal deadlock avoidance Petri net supervisors are presented. For a general manufacturing system, a method is proposed for reducing the system Petri net model so that the reduced model does not contain center resources and, hence, has an optimal deadlock avoidance Petri net supervisor. The controlled reduced Petri net model can then be used as the liveness supervisor of the system.
Automated assay optimization with integrated statistics and smart robotics.
Taylor, P B; Stewart, F P; Dunnington, D J; Quinn, S T; Schulz, C K; Vaidya, K S; Kurali, E; Lane, T R; Xiong, W C; Sherrill, T P; Snider, J S; Terpstra, N D; Hertzberg, R P
2000-08-01
The transition from manual to robotic high throughput screening (HTS) in the last few years has made it feasible to screen hundreds of thousands of chemical entities against a biological target in less than a month. This rate of HTS has increased the visibility of bottlenecks, one of which is assay optimization. In many organizations, experimental methods are generated by therapeutic teams associated with specific targets and passed on to the HTS group. The resulting assays frequently need to be further optimized to withstand the rigors and time frames inherent in robotic handling. Issues such as protein aggregation, ligand instability, and cellular viability are common variables in the optimization process. The availability of robotics capable of performing rapid random access tasks has made it possible to design optimization experiments that would be either very difficult or impossible for a person to carry out. Our approach to reducing the assay optimization bottleneck has been to unify the highly specific fields of statistics, biochemistry, and robotics. The product of these endeavors is a process we have named automated assay optimization (AAO). This has enabled us to determine final optimized assay conditions, which are often a composite of variables that we would not have arrived at by examining each variable independently. We have applied this approach to both radioligand binding and enzymatic assays and have realized benefits in both time and performance that we would not have predicted a priori. The fully developed AAO process encompasses the ability to download information to a robot and have liquid handling methods automatically created. This evolution in smart robotics has proven to be an invaluable tool for maintaining high-quality data in the context of increasing HTS demands. PMID:10992042
GA based CNC turning center exploitation process parameters optimization
Z. Car; Barisic, B.; M. Ikonic
2009-01-01
This paper presents machining parameter (turning process) optimization based on the use of artificial intelligence. To obtain greater efficiency and productivity of the machine tool, optimal cutting parameters have to be obtained. In order to find optimal cutting parameters, the genetic algorithm (GA) has been used as an optimal solution finder. The optimization has to yield minimum machining time and minimum production cost, while considering technological and material constraints.
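The GA loop this abstract describes can be sketched as follows. The cost model is a toy stand-in (machining time falling with cutting speed and feed, tool-wear cost rising with them), not the paper's actual turning model; the bounds, rates, and averaging crossover are illustrative assumptions.

```python
import random

random.seed(0)

# Hedged sketch: a minimal real-coded GA minimizing a toy machining-cost
# model. Variables: cutting speed v [m/min] and feed f [mm/rev]; the
# coefficients are invented for illustration only.
def cost(v, f):
    machining_time = 1000.0 / (v * f)        # toy: time falls with v and f
    tool_wear_cost = 0.002 * v ** 1.5 * f    # toy: wear rises with v and f
    return machining_time + tool_wear_cost

def ga(pop_size=40, gens=60, bounds=((50.0, 400.0), (0.05, 0.8))):
    pop = [[random.uniform(*b) for b in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: cost(*ind))
        elite = pop[: pop_size // 2]          # selection: keep best half
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]   # crossover
            for i, (lo, hi) in enumerate(bounds):           # mutation
                if random.random() < 0.2:
                    child[i] += random.gauss(0, 0.05 * (hi - lo))
                child[i] = min(max(child[i], lo), hi)       # clamp to bounds
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda ind: cost(*ind))

v_opt, f_opt = ga()
```

Technological constraints (surface finish, machine power) would enter either as hard clamps on the bounds, as here, or as penalty terms added to the cost.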
Parameter optimization model in electrical discharge machining process
Anonymous
2008-01-01
Electrical discharge machining (EDM) is at present still an experience-driven process, wherein the selected parameters are often far from the optimum, while selecting optimized parameters is costly and time consuming. In this paper, an artificial neural network (ANN) and a genetic algorithm (GA) are used together to establish the parameter optimization model. An ANN model adopting the Levenberg-Marquardt algorithm has been set up to represent the relationship between material removal rate (MRR) and the input parameters, and the GA is used to optimize the parameters, so that optimization results are obtained. The model is shown to be effective, and MRR is improved using the optimized machining parameters.
Applications of the Automated SMAC Modal Parameter Extraction Package
An algorithm known as SMAC (Synthesize Modes And Correlate), based on principles of modal filtering, has been in development for a few years. The new capabilities of the automated version are demonstrated on test data from a complex shell/payload system. Examples of extractions from impact and shaker data are shown. The automated algorithm extracts 30 to 50 modes in the bandwidth from each column of the frequency response function matrix. Examples of the synthesized Mode Indicator Functions (MIFs) compared with the actual MIFs show the accuracy of the technique. A data set for one input and 170 accelerometer outputs can typically be reduced in an hour. Application to a test with some complex modes is also demonstrated
Optimizing wireless LAN for longwall coal mine automation
Hargrave, C.O.; Ralston, J.C.; Hainsworth, D.W. [Exploration & Mining Commonwealth Science & Industrial Research Organisation, Pullenvale, Qld. (Australia)
2007-01-15
A significant development in underground longwall coal mining automation has been achieved with the successful implementation of wireless LAN (WLAN) technology for communication on a longwall shearer. Wireless Fidelity (Wi-Fi) was selected to meet the bandwidth requirements of the underground data network, and several configurations were installed on operating longwalls to evaluate their performance. Although these efforts demonstrated the feasibility of using WLAN technology in longwall operation, it was clear that new research and development was required in order to establish optimal full-face coverage. By undertaking an accurate characterization of the target environment, it has been possible to achieve great improvements in WLAN performance over a nominal Wi-Fi installation. This paper discusses the impact of Fresnel zone obstructions and multipath effects on radio frequency propagation and reports an optimal antenna and system configuration. Many of the lessons learned in the longwall case are immediately applicable to other underground mining operations, particularly wherever there is a high degree of obstruction from mining equipment.
MOS PARAMETER EXTRACTION AND OPTIMIZATION WITH GENETIC ALGORITHM
Başak, M. Emin; Kuntman, Ayten; Kuntman, Hakan
2010-01-01
Extracting an optimal set of parameter values for a MOS device, of great importance in contemporary technology, is a complex problem. Traditional methods of parameter extraction can produce far-from-optimal solutions because of the presence of local optima in the solution space. Genetic algorithms are well suited for finding near-optimal solutions in irregular parameter spaces. In this study*, we have applied a genetic algorithm to the problem of device model parameter extraction and are able to pr...
Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.
2013-01-01
Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
High dimensional real parameter optimization with teaching learning based optimization
Anima Naik; Suresh Chandra Satapathy; K. Parvathi
2012-01-01
In this paper, a new optimization technique known as Teaching-Learning-Based Optimization (TLBO) is implemented for solving high dimensional function optimization problems. Even though there are several other approaches to address this issue, the cost of computation is greater when handling high dimensional problems. In this work we simulate TLBO for high dimensional benchmark function optimizations and compare its results with the widely used alternative techniques Differential Evolution (DE) and Particle Swarm Optimization (PSO). Results clearly reveal that TLBO is able to address the computational cost issue for all simulated functions up to large dimensions compared to the other two techniques.
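A minimal sketch of the two TLBO phases, assuming the standard formulation (a teacher phase moving learners toward the best solution and away from the population mean, and a learner phase of pairwise learning), applied to the sphere benchmark; the population size and iteration count are arbitrary choices, not the paper's settings.

```python
import random

random.seed(3)

# Sphere benchmark: global minimum 0 at the origin.
def sphere(x):
    return sum(v * v for v in x)

def tlbo(dim=10, pop_size=20, iters=200, lo=-5.0, hi=5.0):
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        # Teacher phase: move each learner toward the best solution and
        # away from TF times the population mean (TF = teaching factor).
        teacher = min(pop, key=sphere)
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i, x in enumerate(pop):
            TF = random.choice((1, 2))
            cand = [xd + random.random() * (td - TF * md)
                    for xd, td, md in zip(x, teacher, mean)]
            cand = [min(max(v, lo), hi) for v in cand]
            if sphere(cand) < sphere(x):      # greedy acceptance
                pop[i] = cand
        # Learner phase: learn from a randomly chosen peer.
        for i, x in enumerate(pop):
            j = random.randrange(pop_size)
            if j == i:
                continue
            sign = 1.0 if sphere(x) < sphere(pop[j]) else -1.0
            cand = [xd + sign * random.random() * (xd - yd)
                    for xd, yd in zip(x, pop[j])]
            cand = [min(max(v, lo), hi) for v in cand]
            if sphere(cand) < sphere(x):
                pop[i] = cand
    return min(pop, key=sphere)

best = tlbo()
```

Note that TLBO has no algorithm-specific tuning parameters beyond population size and iteration count, which is one of its selling points against DE and PSO.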
Automated Portfolio Optimization Based on a New Test for Structural Breaks
Tobias Berens
2014-04-01
We present a completely automated optimization strategy which combines the classical Markowitz mean-variance portfolio theory with a recently proposed test for structural breaks in covariance matrices. With respect to equity portfolios, global minimum-variance optimizations, which are based solely on the covariance matrix, yielded considerable results in previous studies. However, financial assets cannot be assumed to have a constant covariance matrix over longer periods of time. Hence, we estimate the covariance matrix of the assets by respecting potential change points. The resulting approach resolves the issue of determining a sample for parameter estimation. Moreover, we investigate if this approach is also appropriate for timing the reoptimizations. Finally, we apply the approach to two datasets and compare the results to relevant benchmark techniques by means of an out-of-sample study. It is shown that the new approach outperforms equally weighted portfolios and plain minimum-variance portfolios on average.
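The plain global minimum-variance (GMV) step that the automated strategy builds on can be sketched as follows. The structural-break test itself is omitted here; the estimation window is simply assumed to be break-free. The weights follow the standard closed form w = S⁻¹1 / (1ᵀ S⁻¹ 1), and the return data below are synthetic toy values.

```python
import numpy as np

# Hedged sketch: plain GMV weights from a covariance estimate S, which
# in the paper's approach would come from a sample window containing no
# detected change point.
def gmv_weights(returns):
    S = np.cov(returns, rowvar=False)   # sample covariance (assets in columns)
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)        # S^{-1} 1 without forming the inverse
    return w / w.sum()                  # normalize so weights sum to one

# Toy data: three independent assets with rising volatility.
rng = np.random.default_rng(1)
returns = rng.normal(0.0, [0.01, 0.02, 0.03], size=(250, 3))
w = gmv_weights(returns)
```

With independent assets the GMV weights are roughly proportional to inverse variances, so the lowest-volatility asset receives the largest allocation; the paper's contribution is choosing *which* sample feeds `np.cov`, not the weight formula itself.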
A Discrete Particle Swarm Optimization to Estimate Parameters in Vision Tasks
Benchikhi Loubna
2016-01-01
The majority of manufacturers demand increasingly powerful vision systems for quality control. To obtain good outcomes, the installation requires an effort in vision system tuning, for both hardware and software. As time and accuracy are important, practitioners are motivated to automate parameter adjustment, at least in image processing. This paper suggests an approach based on discrete particle swarm optimization (DPSO) that automates software setting and provides optimal parameters for industrial vision applications. Novel update functions for our DPSO definition are suggested. The proposed method is applied to some real examples of quality control to validate its feasibility and efficiency, which shows that the new DPSO model furnishes promising results.
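One way to make "discrete PSO over vision parameters" concrete is to round a classical PSO step onto an integer grid, as sketched below. This is a generic DPSO variant, not the paper's novel update functions, and the `error` function is a hypothetical stand-in for a vision-pipeline quality score.

```python
import random

random.seed(7)

# Hypothetical quality score for two integer vision parameters
# (e.g. a filter kernel size and a binarization threshold).
def error(kernel, thresh):
    return (kernel - 5) ** 2 + (thresh - 128) ** 2

def dpso(n=15, iters=80, bounds=((1, 15), (0, 255))):
    dim = len(bounds)
    pos = [[random.randint(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=lambda p: error(*p))[:]
    for _ in range(iters):
        for i in range(n):
            for d, (lo, hi) in enumerate(bounds):
                r1, r2 = random.random(), random.random()
                # Classical velocity update (inertia + cognitive + social)...
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                # ...then round the move back onto the integer grid.
                pos[i][d] = min(max(int(round(pos[i][d] + vel[i][d])), lo), hi)
            if error(*pos[i]) < error(*pbest[i]):
                pbest[i] = pos[i][:]
            if error(*pos[i]) < error(*gbest):
                gbest = pos[i][:]
    return gbest

kernel, thresh = dpso()
```

In a real setup `error` would run the vision pipeline on labeled reference images and return a defect-classification error rate, which is why automating the search pays off.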
Optimization of selected RFID system parameters
Peter Vestenicky
2004-01-01
This paper describes a procedure for maximization of RFID transponder read range. This is done by optimization of the magnetic field intensity at the transponder location and by optimization of the coupling factor between the antenna and transponder coils. The results of this paper can be used for RFID systems with an inductive loop, i.e., systems working in the near electromagnetic field.
Optimizing RF gun cavity geometry within an automated injector design system
Alicia Hofler; Pavel Evtushenko
2011-03-28
RF guns play an integral role in the success of several light sources around the world, and properly designed and optimized cw superconducting RF (SRF) guns can provide a path to higher average brightness. As the need for these guns grows, it is important to have automated optimization software tools that vary the geometry of the gun cavity as part of the injector design process. This will allow designers to improve existing designs for present installations, extend the utility of these guns to other applications, and develop new designs. An evolutionary algorithm (EA) based system can provide this capability because EAs can search in parallel a large parameter space (often non-linear) and in a relatively short time identify promising regions of the space for more careful consideration. The injector designer can then evaluate more cavity design parameters during the injector optimization process against the beam performance requirements of the injector. This paper will describe an extension to the APISA software that allows the cavity geometry to be modified as part of the injector optimization and provide examples of its application to existing RF and SRF gun designs.
The structure of optimal parameters for image restoration problems
de los Reyes, J. C.; Schönlieb, C. B.; Valkonen, T.
2015-01-01
We study the qualitative properties of optimal regularisation parameters in variational models for image restoration. The parameters are solutions of bilevel optimisation problems with the image restoration problem as constraint. A general type of regulariser is considered, which encompasses total variation (TV), total generalized variation (TGV) and infimal-convolution total variation (ICTV). We prove that under certain conditions on the given data optimal parameters derived by bilevel optim...
A critical analysis of parameter adaptation in ant colony optimization
Pellegrini, Paola; Stützle, Thomas; Birattari, Mauro
2012-01-01
Applying parameter adaptation means operating on parameters of an algorithm while it is tackling an instance. For ant colony optimization, several parameter adaptation methods have been proposed. In the literature, these methods have been shown to improve the quality of the results achieved in some particular contexts. In particular, they proved to be successful when applied to novel ant colony optimization algorithms for tackling problems that are not a classical testbed for optimization alg...
Automated Modal Parameter Estimation of Civil Engineering Structures
Andersen, Palle; Brincker, Rune; Goursat, Maurice; Mevel, Laurent
2007-01-01
In this paper the problem of automatic modal parameter extraction for ambient-excited civil engineering structures is considered. Two different approaches for obtaining the modal parameters automatically are presented: the Frequency Domain Decomposition (FDD) technique and a correlation-driven Stochastic Subspace Identification (SSI) technique. Finally, the techniques are demonstrated on real data
Development of An Optimization Method for Determining Automation Rate in Nuclear Power Plants
Since automation was introduced in various industrial fields, it has been known that automation provides positive effects, such as greater efficiency and fewer human errors, and a negative effect defined as out-of-the-loop (OOTL). Thus, before introducing automation in the nuclear field, the positive and negative effects of automation on human operators should be estimated. In this paper, by focusing on CPS, an optimization method to find an appropriate proportion of automation is suggested by integrating the proposed cognitive automation rate and the concept of the level of ostracism. The cognitive automation rate estimation method was suggested to express the reduction in human cognitive load, and the level of ostracism was suggested to express the difficulty in obtaining information from the automation system and the increased uncertainty of human operators' diagnoses. The maximum proportion of automation that maintains a high level of attention for monitoring the situation is derived by an experiment, and the automation rate is estimated by the suggested estimation method. This is expected to yield an appropriate proportion of automation that avoids the OOTL problem while having maximum efficacy.
A New Approach for Parameter Optimization in Land Surface Model
LI Hongqi; GUO Weidong; SUN Guodong; ZHANG Yaocun; FU Congbin
2011-01-01
In this study, a new parameter optimization method was used to investigate the expansion of conditional nonlinear optimal perturbation (CNOP) in a land surface model (LSM) using long-term enhanced field observations at Tongyu station in Jilin Province, China, combined with a sophisticated LSM (common land model, CoLM). Tongyu station is a reference site of the international Coordinated Energy and Water Cycle Observations Project (CEOP) that has studied semiarid regions that have undergone desertification, salination, and degradation since the late 1960s. In this study, three key land-surface parameters, namely, soil color, proportion of sand or clay in soil, and leaf-area index, were chosen as the parameters to be optimized. Our study comprised three experiments: the first performed a single-parameter optimization, while the second and third performed triple- and six-parameter optimizations, respectively. Notable improvements in simulating sensible heat flux (SH), latent heat flux (LH), soil temperature (TS), and moisture (MS) at shallow layers were achieved using the optimized parameters. The multiple-parameter optimization experiments performed better than the single-parameter experiment. All results demonstrate that the CNOP method can be used to optimize expanded parameters in an LSM. Moreover, clear mathematical meaning, simple design structure, and rapid computability give this method great potential for further application to parameter optimization in LSMs.
Optimization of the parameters of the maintenance system of transport aircraft
І.І. Ліннік
2006-01-01
The algorithm of unconditional and conditional optimization of Markov models of maintenance systems of transport airplanes, used in improving their programs of technical operation, is considered.
Parameter Optimization Based on GA and HFSS
SUN Shu-hui; WANG Bing-zhong
2005-01-01
A new approach based on a genetic algorithm (GA) and high-frequency simulation software (HFSS) is proposed to optimize microwave passive devices effectively. The approach is realized as a general program, referred to as the optimization program. The program is written in Matlab and the macro language of HFSS, which is a fast and effective way to accomplish such tasks. In the paper, two examples are used to show the approach's feasibility.
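As a generic illustration of the GA-driven optimization loop such a program implements, the following sketch maximizes a smooth stand-in objective. In the setting of the abstract, the fitness call would wrap an HFSS simulation; all names, constants and the objective here are illustrative assumptions, not the authors' code.

```python
import random

def ga_maximize(fitness, bounds, pop_size=30, gens=80, pc=0.9, pm=0.1):
    # minimal real-coded GA: elitism, truncation selection,
    # per-gene blend crossover, Gaussian mutation
    random.seed(2)
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        nxt = [pop[0][:], pop[1][:]]                        # keep the two best designs
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)  # parents from the top half
            child = [0.5 * (a + b) if random.random() < pc else a
                     for a, b in zip(p1, p2)]               # blend crossover
            for d, (lo, hi) in enumerate(bounds):
                if random.random() < pm:                    # Gaussian mutation, clamped
                    child[d] += random.gauss(0, 0.05 * (hi - lo))
                    child[d] = min(hi, max(lo, child[d]))
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# hypothetical smooth objective standing in for an HFSS S-parameter evaluation
def fitness(x):
    return -((x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2)

best = ga_maximize(fitness, [(-5, 5), (-5, 5)])
```

The optimum of this stand-in objective is at (1, -2), which the GA approaches within a few dozen generations.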
The Robustness Optimization of Parameter Estimation in Chaotic Control Systems
Zhen Xu
2014-10-01
The standard particle swarm optimization algorithm suffers from poor adaptation and weak robustness in the parameter estimation of chaotic control systems. In light of this, this paper puts forward a new estimation model based on an improved particle swarm optimization algorithm. It first constrains the search space of the population with Tent and Logistic double mapping to regulate the initialized population, optimizes the fitness evaluation with an evolutionary state identification strategy so as to avoid premature convergence, applies a nonlinear decrease strategy to the inertia weight to reach better global and local optima, and then augments the iteration of the particle swarm algorithm with the hybridization concept from genetic algorithms. Finally, the model is applied to parameter estimation for the control of chaotic systems. Simulation results show that the proposed estimation model achieves higher accuracy, better noise immunity, and greater robustness than a model based on standard particle swarm optimization.
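Two of the ingredients named in this abstract, Tent-map initialization and a nonlinearly decreasing inertia weight, can be sketched as follows, applied to estimating the growth parameter of a logistic map as an illustrative stand-in for a chaotic system. This is a minimal sketch under those assumptions; the evolutionary-state identification and hybridization steps are omitted.

```python
import random

def tent_map_init(n, dim, lo, hi, seed=0.37):
    # Tent-map chaotic sequence spreads the initial swarm across the search space
    x, pop = seed, []
    for _ in range(n):
        point = []
        for _ in range(dim):
            x = 2 * x if x < 0.5 else 2 * (1 - x)
            point.append(lo + x * (hi - lo))
        pop.append(point)
    return pop

def pso(cost, dim, lo, hi, n=30, iters=200, w_max=0.9, w_min=0.4):
    random.seed(1)
    pos = tent_map_init(n, dim, lo, hi)
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for t in range(iters):
        # nonlinear (quadratic) decrease of inertia weight: explore early, exploit late
        w = w_min + (w_max - w_min) * (1 - t / iters) ** 2
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * random.random() * (pbest[i][d] - pos[i][d])
                             + 2.0 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# synthetic data from a logistic map with known parameter r = 3.7
true_r, xs = 3.7, [0.2]
for _ in range(50):
    xs.append(true_r * xs[-1] * (1 - xs[-1]))

def cost(p):
    # one-step-ahead prediction error, quadratic in the unknown r
    r, err, x = p[0], 0.0, xs[0]
    for target in xs[1:]:
        err += (r * x * (1 - x) - target) ** 2
        x = target
    return err

best, best_cost = pso(cost, dim=1, lo=3.0, hi=4.0)
```

The one-step-ahead cost keeps the surface smooth despite the chaotic dynamics, so the swarm recovers r close to 3.7.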
Automated Estimation of the Orbital Parameters of Jupiter's Moons
Western, Emma; Ruch, Gerald T.
2016-01-01
Every semester, the Physics Department at the University of St. Thomas has its Physics 104 class complete a Jupiter lab. This involves taking around twenty images of Jupiter and its moons with the telescope at the University of St. Thomas Observatory over the course of a few nights. The students then take each image, find the distance from each moon to Jupiter, and plot the distances versus the elapsed time for the corresponding image. Students use the plot to fit sinusoidal curves for the four moons of Jupiter. I created a script that automates this process for the professor. It takes the list of images and creates a region file used by the students to measure the distance from the moons to Jupiter, a PNG image that graphs all the data points and the fitted curves of the four moons, and a CSV file that contains the list of images, the date and time each image was taken, the elapsed time since the first image, and the distances to Jupiter for Io, Europa, Ganymede, and Callisto. This is important because it lets the professor spend more time working with the students and answering questions, as opposed to spending time fitting the curves of the moons on the graph, which can be time consuming.
Architecture of Automated Database Tuning Using SGA Parameters
Hitesh KUMAR SHARMA
2012-05-01
Business data always grows, from kilobytes to megabytes, gigabytes, terabytes, petabytes, and beyond. There is no way to avoid this increasing rate of data growth while a business is still running. For this reason, database tuning is a critical part of any information system. Tuning a database in a cost-effective manner is a growing challenge. The total cost of ownership (TCO) of information technology needs to be significantly reduced by minimizing people costs. In fact, mistakes in the operation and administration of information systems are the single largest cause of system outages and unacceptable performance [3]. One way of addressing the challenge of total cost of ownership is to make information systems more self-managing. A particularly difficult piece of the ambitious vision of self-managing database systems is the automation of database performance tuning. In this paper, we explain the progress made thus far on this important problem. Specifically, we propose an architecture and algorithm for this problem.
Many computer-aided diagnosis (CAD) systems use neural networks (NNs) for either detection or classification of abnormalities. Currently, most NNs are 'optimized' by manual search in a very limited parameter space. In this work, we evaluated the use of automated optimization methods for selecting an optimal convolution neural network (CNN) architecture. Three automated methods, steepest descent (SD), simulated annealing (SA), and the genetic algorithm (GA), were compared. We used as an example the CNN that classifies true and false microcalcifications detected on digitized mammograms by a prescreening algorithm. Four parameters of the CNN architecture were considered for optimization, the numbers of node groups and the filter kernel sizes in the first and second hidden layers, resulting in a search space of 432 possible architectures. The area Az under the receiver operating characteristic (ROC) curve was used to design a cost function. The SA experiments were conducted with four different annealing schedules. Three different parent selection methods were compared for the GA experiments. An available data set was split into two groups with approximately equal numbers of samples. By using the two groups alternately for training and testing, two different cost surfaces were evaluated. For the first cost surface, the SD method was trapped in a local minimum 91% (392/432) of the time. The SA using the Boltzmann schedule selected the best architecture after evaluating, on average, 167 architectures. The GA achieved its best performance with linearly scaled roulette-wheel parent selection; however, it evaluated, on average, 391 different architectures to find the best one. The second cost surface contained no local minimum. For this surface, a simple SD algorithm could quickly find the global minimum, but the SA with the very fast reannealing schedule was still the most efficient. The same SA scheme, however, was trapped in a local minimum on the first cost surface.
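A simulated annealing search over a small discrete architecture grid, of the kind this abstract describes, can be sketched as follows. The search space mirrors the four CNN parameters mentioned, but the cost function is a hypothetical stand-in: a real run would train each CNN and use 1 - Az from the ROC analysis.

```python
import math
import random

def simulated_annealing(cost, space, iters=500, t0=1.0, alpha=0.99):
    # minimize cost over a discrete grid; `space` maps each parameter
    # name to its list of allowed values
    random.seed(0)
    keys = list(space)
    cur = {k: random.choice(space[k]) for k in keys}
    cur_c = cost(cur)
    best, best_c = dict(cur), cur_c
    t = t0
    for _ in range(iters):
        # perturb one randomly chosen parameter to a neighbouring grid value
        k = random.choice(keys)
        vals = space[k]
        i = vals.index(cur[k])
        j = max(0, min(len(vals) - 1, i + random.choice((-1, 1))))
        cand = dict(cur)
        cand[k] = vals[j]
        c = cost(cand)
        # Metropolis acceptance: take improvements, sometimes accept worse moves
        if c < cur_c or random.random() < math.exp((cur_c - c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = dict(cand), c
        t *= alpha  # geometric cooling schedule
    return best, best_c

# hypothetical search space mirroring the four CNN architecture parameters
space = {
    "groups1": [2, 4, 8],
    "groups2": [2, 4, 8],
    "kernel1": [3, 5, 7, 9],
    "kernel2": [3, 5, 7, 9],
}

def cost(arch):
    # stand-in for 1 - Az, with an assumed optimum at (4, 8, 5, 7)
    return (abs(arch["groups1"] - 4) + abs(arch["groups2"] - 8)
            + abs(arch["kernel1"] - 5) + abs(arch["kernel2"] - 7)) / 20.0

best, best_c = simulated_annealing(cost, space)
```

With geometric cooling the late iterations become effectively greedy, so on this separable stand-in cost the search settles into the assumed optimum.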
Simultaneous optimal experimental design for in vitro binding parameter estimation.
Ernest, C Steven; Karlsson, Mats O; Hooker, Andrew C
2013-10-01
We performed simultaneous optimization of in vitro ligand binding studies using an optimal design software package that can incorporate multiple design variables through non-linear mixed effect models and provide a generally optimized design regardless of the binding site capacity and relative binding rates for a two-binding-site system. Experimental design optimization was employed with D- and ED-optimality using PopED 2.8, including commonly encountered factors during experimentation (residual error, between-experiment variability and non-specific binding) for in vitro ligand binding experiments: association, dissociation, equilibrium and non-specific binding experiments. Moreover, a method for optimizing several design parameters (ligand concentrations, measurement times and total number of samples) was examined. With changes in relative binding site density and relative binding rates, different measurement times and ligand concentrations were needed to provide precise estimation of the binding parameters. However, using optimized design variables, significant reductions in the number of samples provided as good or better precision of the parameter estimates compared with the original extensive sampling design. Employing ED-optimality led to a general experimental design regardless of the relative binding site density and relative binding rates. Precision of the parameter estimates was as good as the extensive sampling design for most parameters and better for the poorly estimated parameters. Optimized designs for in vitro ligand binding studies provided robust parameter estimation while allowing more efficient and cost-effective experimentation by reducing the measurement times and separate ligand concentrations required and, in some cases, the total number of samples. PMID:23943088
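The flavor of D-optimal design selection described here can be illustrated with a toy exhaustive search. This is a sketch only: the saturation-binding model y = Bmax*x/(Kd + x), the parameter values and the candidate concentrations below are hypothetical stand-ins, and PopED itself is not involved.

```python
import itertools

def sensitivities(x, bmax, kd):
    # partial derivatives of the saturation-binding model y = Bmax*x/(Kd + x)
    return x / (kd + x), -bmax * x / (kd + x) ** 2

def d_criterion(design, bmax=10.0, kd=2.0):
    # determinant of the 2x2 Fisher information J'J (unit error variance);
    # D-optimality picks the design maximizing this determinant
    a = b = c = 0.0
    for x in design:
        s1, s2 = sensitivities(x, bmax, kd)
        a += s1 * s1
        b += s1 * s2
        c += s2 * s2
    return a * c - b * b

# exhaustive search over all 3-concentration designs from a candidate grid
candidates = (0.5, 1, 2, 4, 8, 16, 32)
best = max(itertools.combinations(candidates, 3), key=d_criterion)
```

Characteristically, the winning design pairs the highest available concentration (pinning down Bmax) with points near Kd, where the Kd sensitivity is largest.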
Optimization of the main parameters of the subsoil irrigation systems
Elena Akytneva; Askar Akhmedov
2014-01-01
This article discusses the optimization of the basic parameters of subsoil irrigation systems using a second-order Regardera plan. The optimal parameters obtained for subsoil irrigation systems can be used in the design and construction of this method of irrigation.
An Automated Tool for Optimization of FMS Scheduling With Meta Heuristic Approach
A. V. S. Sreedhar Kumar
2014-03-01
The evolution of manufacturing systems has reflected the needs and requirements of the market, which vary from time to time. Flexible manufacturing systems (FMS) have contributed greatly to the development of efficient manufacturing processes and to the production of a variety of customized limited-volume products according to market demand and customer needs. Scheduling of an FMS is a crucial operation for maximizing throughput, reducing wastage and increasing the overall efficiency of the manufacturing process. The dynamic nature of flexible manufacturing systems makes them unique, and hence a generalized solution for scheduling is difficult to abstract. Any solution for optimizing scheduling should take into account a multitude of parameters. The primary objective of the proposed research is to design a tool that automates the optimization of the scheduling process by searching the solution space using meta-heuristic approaches. The research also validates the use of reward as a means of optimizing the scheduling by including it as one of the parameters in the combined objective function.
Adaptive Parameters for a Modified Comprehensive Learning Particle Swarm Optimizer
Yu-Jun Zheng; Hai-Feng Ling; Qiu Guan
2012-01-01
Particle swarm optimization (PSO) is a stochastic optimization method sensitive to parameter settings. The paper presents a modification on the comprehensive learning particle swarm optimizer (CLPSO), which is one of the best performing PSO algorithms. The proposed method introduces a self-adaptive mechanism that dynamically changes the values of key parameters including inertia weight and acceleration coefficient based on evolutionary information of individual particles and the swarm during ...
Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan
2016-01-01
An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.
Identifying Model Parameters of Semiconductor Devices Using Optimization Techniques
Hruškovič, Lubomir; Grabner, Martin; Dobeš, Josef
2007-01-01
The optimization is an indispensable tool for extracting the parameters of any complicated models. Hence, advanced optimization techniques are also necessary for identifying the model parameters of semiconductor devices because their current models are very sophisticated (especially the BJT and MOSFET ones). The equations of such models contain typically one hundred parameters. Therefore, the measurement and particularly identification of the full set of the model para...
The reliability parameters definition in radioelectronic devices automated designing systems
Yu. F. Zinkovskiy
2012-11-01
Problems of calculating the reliability parameters of radioelectronic devices, as determined by their thermal modes, are considered. It is shown that such calculations should be based on methods for determining the temperatures of the separate components of a radio engineering device's (RED) electronic structure. Methods of calculating the thermal modes of electronic blocks, cells, and microassemblies are considered. Analytical models may be used for the average temperatures of cells in a block; a system of heat exchange equations is proposed for estimating radio component temperatures on the cell plate; and an analytical solution is offered for microassembly temperature estimation. Analytical mathematical models for calculating the reliability indexes of radio components and of the whole RED are determined.
ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1
Mosher, Scott W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Seth R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bevill, Aaron M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ibrahim, Ahmad M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Daily, Charles R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Evans, Thomas M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wagner, John C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Jeffrey O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Grove, Robert E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-08-01
The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.
Automated design and optimization of flexible booster autopilots via linear programming, volume 1
Hauser, F. D.
1972-01-01
A nonlinear programming technique was developed for the automated design and optimization of autopilots for large flexible launch vehicles. This technique, which resulted in the COEBRA program, uses the iterative application of linear programming. The method deals directly with the three main requirements of booster autopilot design: to provide (1) good response to guidance commands; (2) response to external disturbances (e.g. wind) to minimize structural bending moment loads and trajectory dispersions; and (3) stability with specified tolerances on the vehicle and flight control system parameters. The method is applicable to very high order systems (30th and greater per flight condition). Examples are provided that demonstrate the successful application of the employed algorithm to the design of autopilots for both single and multiple flight conditions.
Temporal Parameter Optimization in Four-Dimensional Flash Trajectory Imaging
In four-dimensional flash trajectory imaging, the temporal parameters include time delay, laser pulse width, gate time, pulse-pair repetition frequency and the frame rate of the CCD, which directly impact the acquisition of target trajectories over time. We propose a method of optimizing the temporal parameters of flash trajectory imaging. All the temporal parameters can be estimated from the spatial parameters of the volume of interest, the target scale and velocity, and the target sample number. Formulae for optimizing the temporal parameters are derived, and the method is demonstrated in an experiment with a ball oscillating as a pendulum.
Axdahl, Erik L.
2015-01-01
Removing human interaction from design processes by using automation may lead to gains in both productivity and design precision. This memorandum describes efforts to incorporate high-fidelity numerical analysis tools into an automated framework and to apply that framework to applications of practical interest. The purpose of this effort was to integrate VULCAN-CFD into an automated, DAKOTA-enabled framework, with a proof-of-concept application being the optimization of supersonic test facility nozzles. It was shown that the optimization framework could be deployed on a high performance computing cluster with the flow of information handled effectively to guide the optimization process. Furthermore, the application of the framework to supersonic test facility nozzle flowpath design and optimization was demonstrated using multiple optimization algorithms.
Tan Chan Sin
2014-01-01
Automated lines are widely applied in industry, especially for mass production with low product variety. Productivity is one of the most important criteria for an automated line, as for industry generally, since it directly reflects outputs and profits. Industry must forecast productivity accurately in order to meet customer demand, and the forecast is calculated using a mathematical model. A mathematical model of productivity with availability for automated lines has been introduced to express productivity in terms of a single level of reliability for stations and mechanisms. Since this mathematical model cannot achieve close enough agreement with actual productivity, owing to the limited set of parameters it considers, it must be enhanced by adding the loss parameters not considered in the current model. This paper presents the productivity loss parameters investigated using the DMAIC (Define, Measure, Analyze, Improve, Control) concept and the PACE Prioritization Matrix (Priority, Action, Consider, Eliminate). The investigated parameters are important for further improvement of the mathematical model of productivity with availability, towards a robust mathematical model of productivity for automated lines.
Md. Ahsanul Hoque
2015-09-01
Antenna alignment is very cumbersome in the telecommunication industry, and it especially affects MW links through environmental anomalies or physical degradation over time. While in recent years the more conventional approach of redundancy has been employed, novel automation techniques are needed to ensure LOS link stability. The basic principle is to capture the desired Received Signal Level (RSL) by means of an outdoor unit installed at the tower top and to analyze the RSL in an indoor unit through a GUI interface. We propose a new smart antenna system in which automation is initiated when the transceivers receive low signal strength and report the finding to a processing comparator unit. A series architecture is used that includes a loop antenna and RCX Robonics, with a LabVIEW interface coupled to a tunable external controller. Denavit-Hartenberg parameters are used in the analytical modeling, and numerous control techniques have been investigated to overcome imminent overshoot problems for the transport link. With this novel approach, a solution is put forward for the communication industry whereby any antenna can achieve optimal directivity for the desired RSL with low overshoot and fast steady-state response.
Review of Automated Design and Optimization of MEMS
Achiche, Sofiane; Fan, Zhun; Bolognini, Francesca
2007-01-01
In recent years MEMS have seen very rapid development. Although many advances have been made, due to the multiphysics nature of MEMS their design is still a difficult task, carried out mainly by hand calculation. To help overcome such difficulties, attempts to automate MEMS design have been carried out. This paper presents a review of these techniques. The design task of MEMS is usually divided into four main stages: System Level, Device Level, Physical Level and Process Level. The state of the art of automated MEMS design at each of these levels is investigated.
Structural Parameter Optimization of Multilayer Conductors in HTS Cable
Yan Mao; Jie Qiu; Xin-Ying Liu; Zhi-Xuan Wang; Shu-Hong Wang; Jian-Guo Zhu; You-Guang Guo; Zhi-Wei Lin; Jian-Xun Jin
2008-01-01
In this paper, the design optimization of the structural parameters of multilayer conductors in high temperature superconducting (HTS) cable is reviewed. Various optimization methods, such as particle swarm optimization (PSO), the genetic algorithm (GA), and a robust optimization method based on design for six sigma (DFSS), have been applied to realize uniform current distribution among the multilayer HTS conductors. The continuous and discrete variables, such as the winding angle, radius, and winding direction of each layer, are chosen as the design parameters. Under the constraints of the mechanical properties and critical current, PSO proves to be a more powerful tool than GA for structural parameter optimization, and DFSS can not only achieve a uniform current distribution, but also significantly improve the reliability and robustness of the HTS cable quality.
Fursin, Grigori
2009-01-01
Computing systems rarely deliver the best possible performance due to ever-increasing hardware and software complexity and the limitations of current optimization technology. Additional code and architecture optimizations are often required to improve execution time, size, power consumption, reliability and other important characteristics of computing systems. However, this is often a tedious, repetitive, isolated and time consuming process. In order to automate, simplify ...
OPTIMIZATION OF PARAMETERS OF ELEMENTS COMPUTER SYSTEM
Nesterov G. D.
2016-03-01
The work is devoted to the topical issue of increasing computer performance and is experimental in character; a description of a number of the tests carried out and an analysis of their results are therefore offered. The article first provides the basic characteristics of the computer's modules in the regular mode of functioning, and then describes the technique of regulating their parameters in the course of the experiment. Special attention is paid to maintaining the necessary thermal mode in order to avoid undesirable overheating of the central processor, and the operability of the system under increased energy consumption is checked. The most critical step is regulating the central processor; as a result of the test, its optimal voltage, frequency and memory read delays are found. The stability of the characteristics of the RAM, in particular the state of its buses during the experiment, is analyzed. As the executed tests stayed within the standard range of the modules' characteristics, and therefore the built-in margin of safety and the capacity of the system were not exhausted, further experiments were made at extreme overclocking under air cooling. The results obtained are also given in the article.
Setting of the Optimal Parameters of Melted Glass
Luptáková, Natália; Matejíčka, L.; Krečmer, N.
2015-01-01
Vol. 10, No. 1 (2015), pp. 73-79. ISSN 1802-2308. Institutional support: RVO:68081723. Keywords: striae; glass; glass melting; regression; optimal parameters. Subject RIV: JH - Ceramics, Fire-Resistant Materials and Glass
Optimal z-axis scanning parameters for gynecologic cytology specimens
Amber D Donnelly; Mukherjee, Maheswari S.; Lyden, Elizabeth R.; Bridge, Julia A.; Subodh M Lele; Najia Wright; Mary F McGaughey; Culberson, Alicia M.; Adam J. Horn; Whitney R Wedel; Stanley J Radio
2013-01-01
Background: The use of virtual microscopy (VM) in clinical cytology has been limited due to the inability to focus through three dimensional (3D) cell clusters with a single focal plane (2D images). Limited information exists regarding the optimal scanning parameters for 3D scanning. Aims: The purpose of this study was to determine the optimal number of the focal plane levels and the optimal scanning interval to digitize gynecological (GYN) specimens prepared on SurePath™ glass slides while m...
Integral Optimization of Systematic Parameters of Flip-Flow Screens
翟宏新
2004-01-01
A synthetic index, Ks, for evaluating flip-flow screens is proposed and systematically optimized with respect to the whole system. A series of optimized values of the relevant parameters are found and then compared with current industrial specifications. The results show that the optimized Ks value approaches that of the well-known flip-flow screens in the world. Some new findings on geometric and kinematic parameters are useful for improving flip-flow screens with a low Ks value, which is helpful in developing clean coal technology.
Towards Automated Design, Analysis and Optimization of Declarative Curation Workflows
Tianhong Song; Sven Köhler; Bertram Ludäscher; James Hanken; Maureen Kelly; David Lowery; Macklin, James A.; Morris, Paul J.; Morris, Robert A.
2014-01-01
Data curation is increasingly important. Our previous work on a Kepler curation package has demonstrated advantages that come from automating data curation pipelines by using workflow systems. However, manually designed curation workflows can be error-prone and inefficient due to a lack of user understanding of the workflow system, misuse of actors, or human error. Correcting problematic workflows is often very time-consuming. A more proactive workflow system can help users avoid such pitfal...
Xing Wu; Peihuang Lou; Dunbing Tang
2011-01-01
This paper presents a multi-objective genetic algorithm (MOGA) with Pareto optimality and elitist tactics for the control system design of an automated guided vehicle (AGV). The MOGA is used to identify the AGV driving system model and then to optimize its servo control system sequentially. In system identification, the model identified by the least squares method is adopted as an evolution tutor, which selects the individuals having balanced performance in all objectives as elitists. In controller optimization, t...
Genetic Algorithm Optimizes Q-LAW Control Parameters
Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard
2008-01-01
A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools; with good initial solutions, the high-fidelity optimization tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performance of the Q-law control parameters is evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method, which assigns a better fitness value to solutions that are dominated by fewer other solutions. With this ranking, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions through evolution. The population of solutions converges to the Pareto front that is attainable within the Q-law control parameter space.
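The non-dominated sorting that drives the ranking described in this abstract can be sketched as follows, with hypothetical (flight time, propellant mass) pairs standing in for evaluated Q-law parameter sets; both objectives are minimized.

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective
    # and strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    # return a Pareto rank per point: 0 for the first front, 1 for the next, ...
    n = len(points)
    rank = [None] * n
    remaining = set(range(n))
    r = 0
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)}
        for i in front:
            rank[i] = r
        remaining -= front
        r += 1
    return rank

# hypothetical (flight time, propellant mass) pairs for four parameter sets
pts = [(100, 5.0), (120, 4.0), (110, 6.0), (130, 4.5)]
ranks = non_dominated_sort(pts)
```

Here the first two points trade flight time against propellant mass, so both sit on the Pareto front; the other two are each dominated by a front member and fall to the second rank.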
Screening of optimization parameters for mixing process via CFD
In this study, a numerical simulation of a mixing vessel agitated by a six-bladed Rushton turbine has been carried out to investigate the effects of the relevant parameters on the mixing process. The study is intended to screen the potential parameters that affect the optimization process and to provide detailed insight into the process. Three-dimensional, steady-state flow has been simulated using the fully predictive Multiple Reference Frame (MRF) technique for the impeller and tank geometry. Process optimization is always used to ensure that optimum conditions are fulfilled to meet industry needs (for example, increased profit, low cost, or higher yields). In this study, the recommended speeds to accelerate optimization are 100, 150 and 200 rpm, and the recommended clearances are 50, 75 and 100 mm for the dual Rushton impeller. Computational fluid dynamics (CFD) was thus introduced in order to screen the suitable parameters efficiently and to accelerate optimization.
Optimal filtering, parameter tracking, and control of nonlinear nuclear reactors
This paper presents a new formulation of a class of nonlinear optimal control problems in which the system's signals are noisy and some system parameters are changing arbitrarily with time. The methodology is validated with an application to a nonlinear nuclear reactor model. A variational technique based on Pontryagin's maximum principle is used to filter the noisy signals, estimate the time-varying parameters, and calculate the optimal controls. The reformulation of the variational technique as an initial value problem allows this microprocessor-based algorithm to perform on-line filtering, parameter tracking, and control
Optimizing hadoop parameter settings with gene expression programming guided PSO
Huang, Z; Li, M; Taylor, GA; Khan, M
2016-01-01
Hadoop MapReduce has become a major computing technology in support of big data analytics. The Hadoop framework has over 190 configuration parameters, and some of them can have a significant effect on the performance of a Hadoop job. Manually tuning the optimum or near optimum values of these parameters is a challenging task and also a time consuming process. This paper optimizes the performance of Hadoop by automatically tuning its configuration parameter settings. The proposed work first em...
Optimizing chirped laser pulse parameters for electron acceleration in vacuum
Akhyani, Mina; Jahangiri, Fazel; Niknam, Ali Reza; Massudi, Reza, E-mail: r-massudi@sbu.ac.ir [Laser and Plasma Research Institute, Shahid Beheshti University, Tehran 1983969411 (Iran, Islamic Republic of)
2015-11-14
Electron dynamics in the field of a chirped linearly polarized laser pulse is investigated. Variations of electron energy gain versus chirp parameter, time duration, and initial phase of laser pulse are studied. Based on maximizing laser pulse asymmetry, a numerical optimization procedure is presented, which leads to the elimination of rapid fluctuations of gain versus the chirp parameter. Instead, a smooth variation is observed that considerably reduces the accuracy required for experimentally adjusting the chirp parameter.
Samarin Oleg Dmitrievich
Currently, the successful development of the construction industry depends on the improved energy performance of buildings, structures and facilities, as well as on quality assurance of the indoor climate. In view of the above, the design and operation of buildings should be aimed at the best (optimal) solution of the following objective: to ensure the set-point values of indoor climate maintained by automated climate control systems, against minimal energy consumption. This paper describes a study of the relationship between the individual parameters of indoor thermal stability and the regulatory impact of automatic control systems (ACS). We analyzed the effect of structural room characteristics on the total energy consumption of the airflow processing unit in order to ensure energy saving. The final result is illustrated by numerical simulation using a purpose-built computer program and graphic examples. The proposed method is based on the assumption that the total thermal stability of the «room-ACVS-ACS» system is defined by the heat absorption index of the room and the ACS control operation. This follows directly from the back-to-back connection of the units corresponding to the room and the ACVS in the scheme of automatic indoor climate control. Further study allowed the authors to trace the influence of the structural characteristics of a room on the total energy consumption needed for air intake treatment, by applying values of the main walling area. Based on the developed algorithm, the authors made calculations using a computer program written in Fortran. As a result, fragments of the program are presented: calculations of the parameter values included in the expressions and of the total specific energy consumption for heating the intake air during the heating season under varying room geometry, as well as graphic illustration of the obtained relationships.
Genetic algorithm parameter optimization: applied to sensor coverage
Sahin, Ferat; Abbate, Giuseppe
2004-08-01
Genetic algorithms (GAs) are powerful tools which, when set upon a solution space, will search for the optimal answer. These algorithms have some problems inherent to the method, such as premature convergence and lack of population diversity. These problems can be controlled with changes to certain parameters such as crossover, selection, and mutation. This paper attempts to tackle these problems in the GA by having another GA control these parameters. The values for the crossover parameter are: one point, two point, and uniform. The values for the selection parameter are: best, worst, roulette wheel, inside 50%, outside 50%. The values for the mutation parameter are: random and swap. The system includes a control GA whose population consists of different parameter settings. While this GA is attempting to find the best parameters, it is advancing into the search space of the problem and refining the population. As the population changes due to the search, so will the optimal parameters. For every control GA generation, each individual in the population is tested for fitness by being run through the problem GA with the assigned parameters. During these runs the population used in the next control generation is compiled. Thus, both the issue of finding the best parameters and the solution to the problem are attacked at the same time. The goal is to optimize sensor coverage in a square field. The test case used was a 30 by 30 unit field with 100 sensor nodes, each with a coverage area of 3 by 3 units. The algorithm attempts to optimize the sensor coverage in the field by moving the nodes. The results show that the control GA provides better results when compared to a system with no parameter changes.
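The control-GA wrapper described in this abstract can be sketched in a few lines. The sketch below is illustrative, not the authors' code: it varies only the crossover and mutation operators, uses a toy one-max objective in place of sensor coverage, and, with only four operator combinations, evaluates them exhaustively where the paper uses a full control GA.

```python
import random

def one_max(bits):
    """Toy fitness for the problem GA (stand-in for sensor coverage)."""
    return sum(bits)

def run_problem_ga(crossover, mutation, generations=20, pop_size=20, n=30):
    """Run the inner GA with the given operator choices; return best fitness."""
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=one_max, reverse=True)
        parents = pop[:pop_size // 2]          # truncation ("best") selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            if crossover == "one_point":
                cut = random.randrange(1, n)
                child = a[:cut] + b[cut:]
            else:                               # "uniform" crossover
                child = [random.choice(pair) for pair in zip(a, b)]
            if mutation == "random":            # flip one random bit
                i = random.randrange(n)
                child[i] ^= 1
            else:                               # "swap" two positions
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = children
    return max(one_max(ind) for ind in pop)

def control_search(trials=2):
    """Score every operator combination; return the best (score, ops) found."""
    settings = [(c, m) for c in ("one_point", "uniform")
                       for m in ("random", "swap")]
    scored = [(sum(run_problem_ga(c, m) for _ in range(trials)) / trials, c, m)
              for c, m in settings]
    return max(scored)

best = control_search()
print(best)
```

In the paper the outer search is itself a GA over a much larger parameter space (including the five selection schemes); the exhaustive loop here simply shows the wrapper structure of scoring inner-GA runs.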
Bartz-Beielstein, Thomas
2010-01-01
The sequential parameter optimization (SPOT) package for R is a toolbox for tuning and understanding simulation and optimization algorithms. Model-based investigations are common approaches in simulation and optimization. Sequential parameter optimization has been developed because there is a strong need for sound statistical analysis of simulation and optimization algorithms. SPOT includes methods for tuning based on classical regression and analysis of variance techniques; tree-based models such as CART and random forest; Gaussian process models (Kriging); and combinations of different meta-modeling approaches. This article exemplifies how SPOT can be used for automatic and interactive tuning.
Automatic parameter optimizer (APO) for multiple-point statistics
Bani Najar, Ehsanollah; Sharghi, Yousef; Mariethoz, Gregoire
2016-04-01
Multiple-point statistics (MPS) have gained popularity in recent years for generating stochastic realizations of complex natural processes. The main principle is that a training image (TI) is used to represent the spatial patterns to be modeled. One important feature of MPS is that the spatial model of the generated fields is made of 1) the chosen TI and 2) a set of algorithmic parameters that are specific to each MPS algorithm. While the choice of a training image can be guided by expert knowledge (e.g. for geological modeling) or by data acquisition methods (e.g. remote sensing), determining the algorithmic parameters can be more challenging. To date, only specific guidelines have been proposed for some simulation methods, and a general parameter inference methodology is still lacking, in particular for complex modeling settings such as when using multivariate training images. The common practice consists of carrying out an extensive parameter sensitivity analysis, which can be cumbersome. An additional complexity is that the algorithmic parameters do influence CPU cost, and therefore finding optimal parameters is not only a modeling question but also a computational challenge. To overcome these issues, we propose the automatic parameter optimizer (MPS-APO), a generic method based on stochastic optimization to rapidly determine acceptable parameters, in different settings and for any MPS method. The MPS automatic parameter optimizer proceeds in a two-step approach. In the first step, it considers the set of input parameters of a given MPS algorithm and formulates an objective function that quantifies the reproduction of spatial patterns. The Simultaneous Perturbation Stochastic Approximation (SPSA) optimization method is used to minimize the objective function. SPSA is chosen because it is able to deal with the stochastic nature of the objective function and for its computational efficiency. At each iteration, small gaps are randomly placed in the input image
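SPSA, the optimizer at the core of MPS-APO, is attractive because it estimates a gradient from only two objective evaluations per iteration, regardless of the number of parameters. A minimal sketch (not the MPS-APO implementation): a noisy quadratic stands in for the stochastic pattern-reproduction error, and the gain sequences follow the common a/k^0.602 and c/k^0.101 schedules.

```python
import random

random.seed(0)  # for reproducibility of this sketch

def noisy_objective(theta):
    """Stand-in for the stochastic pattern-reproduction error (optimum at 3)."""
    return sum((t - 3.0) ** 2 for t in theta) + random.gauss(0, 0.01)

def spsa(theta, iters=200, a=0.1, c=0.1, alpha=0.602, gamma=0.101):
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** alpha                 # step-size gain
        ck = c / k ** gamma                 # perturbation gain
        delta = [random.choice((-1.0, 1.0)) for _ in theta]  # Bernoulli +/-1
        plus  = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = noisy_objective(plus) - noisy_objective(minus)
        # one simultaneous gradient estimate from just two evaluations
        theta = [t - ak * diff / (2.0 * ck * d) for t, d in zip(theta, delta)]
    return theta

result = spsa([0.0, 0.0])
print(result)   # both components should approach 3.0
```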
APPLICATION OF GENETIC ALGORITHMS FOR ROBUST PARAMETER OPTIMIZATION
N. Belavendram
2010-12-01
Parameter optimization can be achieved by many methods, such as Monte-Carlo, full, and fractional factorial designs. Genetic algorithms (GAs) are fairly recent in this respect but afford a novel method of parameter optimization. In a GA, there is an initial pool of individuals, each with its own specific phenotypic trait expressed as a ‘genetic chromosome’. Different genes enable individuals with different fitness levels to reproduce according to natural reproductive gene theory. This reproduction is established in terms of selection, crossover and mutation of reproducing genes. The resulting child generation of individuals has a better fitness level, akin to natural selection, namely evolution. Populations evolve towards the fittest individuals. Such a mechanism has a parallel application in parameter optimization. Factors in a parameter design can be expressed as a genetic analogue in a pool of sub-optimal random solutions. Allowing this pool of sub-optimal solutions to evolve over several generations produces fitter generations converging to a pre-defined engineering optimum. In this paper, a genetic algorithm is used to study a seven-factor non-linear equation for a Wheatstone bridge as the equation to be optimized. A comparison of the full factorial design against the GA method shows that the GA method is about 1200 times faster in finding a comparable solution.
An Automated Tool for Optimizing Waste Transportation Routing and Scheduling
An automated software tool has been developed and implemented to increase the efficiency and overall life-cycle productivity of site cleanup by scheduling vehicle and container movement between waste generators and disposal sites on the Department of Energy's Oak Ridge Reservation. The software tool identifies the best routes or accepts specifically requested routes and transit times, looks at fleet availability, selects the most cost effective route for each waste stream, and creates a transportation schedule in advance of waste movement. This tool was accepted by the customer and has been implemented. (authors)
Akatsuki Kimura
2015-03-01
Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus obtain mechanistic insights into phenomena of interest.
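The SSE-minimization-by-gradient approach mentioned above can be shown in a few lines. This sketch (illustrative, not from the article) fits a hypothetical exponential-decay model y = a·exp(-b·t) to synthetic data; the gradient is taken by central finite differences, so the same code works for any model function.

```python
import math

t_data = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
y_data = [2.0 * math.exp(-0.8 * t) for t in t_data]   # noise-free target

def sse(params):
    """Sum of squared errors between model prediction and data."""
    a, b = params
    return sum((a * math.exp(-b * t) - y) ** 2 for t, y in zip(t_data, y_data))

def grad(f, params, h=1e-6):
    """Central finite-difference gradient of f at params."""
    g = []
    for i in range(len(params)):
        hi, lo = params[:], params[:]
        hi[i] += h
        lo[i] -= h
        g.append((f(hi) - f(lo)) / (2 * h))
    return g

def gradient_descent(f, params, lr=0.05, steps=5000):
    params = list(params)
    for _ in range(steps):
        g = grad(f, params)
        params = [p - lr * gi for p, gi in zip(params, g)]
    return params

a_fit, b_fit = gradient_descent(sse, [1.0, 0.1])
print(a_fit, b_fit)   # should approach the true values a=2.0, b=0.8
```

As the abstract notes, a plain gradient method like this finds only a local minimum; adding a stochastic process (restarts, simulated annealing, sampling) is what guards against that.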
Parameter estimation for chaotic systems by particle swarm optimization
Parameter estimation for chaotic systems is an important issue in nonlinear science and has attracted increasing interest from various research fields; it can essentially be formulated as a multi-dimensional optimization problem. As a novel evolutionary computation technique, particle swarm optimization (PSO) has attracted much attention and wide application owing to its simple concept, easy implementation and quick convergence. However, to the best of our knowledge, there is no published work on PSO for estimating parameters of chaotic systems. In this paper, a PSO approach is applied to estimate the parameters of the Lorenz system. Numerical simulation and comparisons demonstrate the effectiveness and robustness of PSO. Moreover, the effect of population size on optimization performance is investigated as well.
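A minimal PSO parameter-estimation sketch in the spirit of this abstract. To keep it short it recovers the growth rate r of the logistic map (another standard chaotic system) rather than the three Lorenz parameters; the cost function is the sum of squared deviations between simulated and observed trajectories, and all constants are illustrative.

```python
import random

random.seed(1)  # deterministic sketch

def trajectory(r, x0=0.3, steps=6):
    """Iterate the logistic map x <- r*x*(1-x) from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

TRUE_R = 3.9
observed = trajectory(TRUE_R)

def cost(r):
    """Squared deviation between simulated and observed states."""
    return sum((a - b) ** 2 for a, b in zip(trajectory(r), observed))

def pso(lo=3.7, hi=4.0, particles=30, iters=60, w=0.7, c1=1.5, c2=1.5):
    pos = [random.uniform(lo, hi) for _ in range(particles)]
    vel = [0.0] * particles
    pbest = pos[:]                      # each particle's best-known position
    gbest = min(pos, key=cost)          # swarm-wide best position
    for _ in range(iters):
        for i in range(particles):
            vel[i] = (w * vel[i]
                      + c1 * random.random() * (pbest[i] - pos[i])
                      + c2 * random.random() * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i]
            if cost(pos[i]) < cost(gbest):
                gbest = pos[i]
    return gbest

estimate = pso()
print(estimate)   # estimated growth rate, close to the true value 3.9
```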
Parameter optimization of pharmacokinetics based on artificial immune network
LIU Li; ZHOU Shao-dan; LU Hong-wen; XIE Fen; XU Wen-bo
2008-01-01
A new method for parameter optimization of pharmacokinetics based on an artificial immune network, named PKAIN, is proposed. To improve the local searching ability of the artificial immune network, a partition-based concurrent simplex mutation is developed. By means of evolution of network cells in the PKAIN artificial immune network, an optimal set of parameters of a given pharmacokinetic model is obtained. The Laplace transform is applied to the pharmacokinetic differential equations of remifentanil and its major metabolite, remifentanil acid. The PKAIN method is used to optimize parameters of the derived compartment models. Experimental results show that the two-compartment model is sufficient for the pharmacokinetic study of remifentanil acid for patients with a mild degree of renal impairment.
Complicated problem solution techniques in optimal parameter searching
An algorithm is presented for a global search for the numerical solution of multidimensional, multiextremal, multicriteria optimization problems with complicated constraints. Bounded changes of the object's characteristics are assumed under restricted changes of its parameters (Lipschitz condition). The algorithm was realized as a computer code. The program was used in practice to solve various applied optimization problems. 10 refs.; 3 figs
Damage localization using experimental modal parameters and topology optimization
Niemann, Hanno; Morlier, Joseph; Shahdin, Amir; Gourinat, Yves
2010-01-01
This work focuses on the development of a damage detection and localization tool using the Topology Optimization feature of MSC.Nastran. This approach is based on the correlation of a local stiffness loss and the change in modal parameters due to damages in structures. The loss in stiffness is accounted for by the Topology Optimization approach for updating undamaged numerical models towards similar models with embedded damages. Hereby, only a mass penalization and the changes in experimentally obta...
Aerodynamic optimization by simultaneously updating flow variables and design parameters
Rizk, M. H.
1990-01-01
The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
Automated Finite Element Modeling of Wing Structures for Shape Optimization
Harvey, Michael Stephen
1993-01-01
The displacement formulation of the finite element method is the most general and most widely used technique for structural analysis of airplane configurations. Modern structural synthesis techniques based on the finite element method have reached a certain maturity in recent years, and large airplane structures can now be optimized with respect to sizing-type design variables for many load cases subject to a rich variety of constraints including stress, buckling, frequency, stiffness and aeroelastic constraints (Refs. 1-3). These structural synthesis capabilities use gradient-based nonlinear programming techniques to search for improved designs. For these techniques to be practical, a major improvement was required in the computational cost of finite element analyses (needed repeatedly in the optimization process). Thus, associated with the progress in structural optimization, a new perspective of structural analysis has emerged, namely, structural analysis specialized for design optimization application, or what is known as "design oriented structural analysis" (Ref. 4). This discipline includes approximation concepts and methods for obtaining behavior sensitivity information (Ref. 1), all needed to make the optimization of large structural systems (modeled by thousands of degrees of freedom and thousands of design variables) practical and cost effective.
Rumbell, Timothy H; Draguljić, Danel; Yadav, Aniruddha; Hof, Patrick R; Luebke, Jennifer I; Weaver, Christina M
2016-08-01
Conductance-based compartment modeling requires tuning of many parameters to fit the neuron model to target electrophysiological data. Automated parameter optimization via evolutionary algorithms (EAs) is a common approach to accomplish this task, using error functions to quantify differences between model and target. We present a three-stage EA optimization protocol for tuning ion channel conductances and kinetics in a generic neuron model with minimal manual intervention. We use the technique of Latin hypercube sampling in a new way, to choose weights for error functions automatically so that each function influences the parameter search to a similar degree. This protocol requires no specialized physiological data collection and is applicable to commonly-collected current clamp data and either single- or multi-objective optimization. We applied the protocol to two representative pyramidal neurons from layer 3 of the prefrontal cortex of rhesus monkeys, in which action potential firing rates are significantly higher in aged compared to young animals. Using an idealized dendritic topology and models with either 4 or 8 ion channels (10 or 23 free parameters respectively), we produced populations of parameter combinations fitting the target datasets in less than 80 hours of optimization each. Passive parameter differences between young and aged models were consistent with our prior results using simpler models and hand tuning. We analyzed parameter values among fits to a single neuron to facilitate refinement of the underlying model, and across fits to multiple neurons to show how our protocol will lead to predictions of parameter differences with aging in these neurons. PMID:27106692
Concurrently adjusting interrelated control parameters to achieve optimal engine performance
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-12-01
Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.
Suleimanov, Yury V.; Green, William H.
2015-01-01
We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps using single- and double-ended transition-state optimization algorithms in cooperation - the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several systems of importance in combustion and atmospheric chemistry is investigated. The proposed algorithm allowed us to detect without any human intervention not on...
OPTIMIZATION OF ELECTROCHEMICAL MACHINING PROCESS PARAMETERS USING TAGUCHI APPROACH
R.Goswami
2013-05-01
In this research paper, the Taguchi method is applied to find optimum process parameters for electrochemical machining (ECM). The objective of the experimental investigation is to study the impact of machining parameters on the MRR and SR of workpieces of aluminum and mild steel. The approach was based on Taguchi's method, analysis of variance and the signal-to-noise ratio (S/N ratio) to optimize the electrochemical machining process parameters for effective machining and to predict the optimal choice for each ECM parameter, such as voltage, tool feed and current. In this research, three levels of each parameter are considered for the experiments. An L9 orthogonal array is used, varying factors A, B and C; for each combination three experiments were conducted, and with the help of the signal-to-noise ratio the optimum results for ECM were found. It was confirmed that the determined optimal combination of ECM process parameters satisfies the real need for machining of aluminum and mild steel in actual practice.
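The S/N-ratio analysis over an L9 array described above can be sketched as follows. The factor assignment is a standard L9(3^3) orthogonal array, but the MRR readings are hypothetical, not the paper's measurements. For a response like MRR where larger is better, the Taguchi S/N ratio is -10·log10(mean(1/y²)), and the optimal level of each factor is the one with the highest mean S/N over its runs.

```python
import math

L9 = [  # levels (0, 1, 2) for the three factors: voltage, feed, current
    (0, 0, 0), (0, 1, 1), (0, 2, 2),
    (1, 0, 1), (1, 1, 2), (1, 2, 0),
    (2, 0, 2), (2, 1, 0), (2, 2, 1),
]
# three replicate MRR readings per run (hypothetical illustrative values)
mrr = [
    (0.21, 0.20, 0.22), (0.25, 0.24, 0.26), (0.30, 0.29, 0.31),
    (0.28, 0.27, 0.29), (0.35, 0.36, 0.34), (0.24, 0.23, 0.25),
    (0.33, 0.32, 0.34), (0.27, 0.26, 0.28), (0.38, 0.37, 0.39),
]

def sn_larger_is_better(ys):
    """Taguchi S/N ratio (dB) for a larger-is-better response."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

sn = [sn_larger_is_better(ys) for ys in mrr]

def best_level(factor):
    """Level of `factor` with the highest mean S/N across its runs."""
    means = []
    for level in range(3):
        rows = [sn[i] for i, run in enumerate(L9) if run[factor] == level]
        means.append(sum(rows) / len(rows))
    return means.index(max(means))

optimum = [best_level(f) for f in range(3)]
print(optimum)   # optimal level index for each of the three factors
```

With these hypothetical readings every factor favors its highest level; with real ECM data the per-factor picks need not coincide with any single row of the L9 array, which is exactly why the confirmation run mentioned in the abstract is performed.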
Adeel H. Suhail
2010-01-01
Problem statement: In machining operation, the quality of surface finish is an important requirement for many turned workpieces. Thus, the choice of optimized cutting parameters is very important for controlling the required surface quality. Approach: The focus of present experimental study is to optimize the cutting parameters using two performance measures, workpiece surface temperature and surface roughness. Optimal cutting parameters for each performance measure were obtained employing Taguchi techniques. The orthogonal array, signal to noise ratio and analysis of variance were employed to study the performance characteristics in turning operation. Results: The experimental results showed that the workpiece surface temperature can be sensed and used effectively as an indicator to control the cutting performance and improves the optimization process. Conclusion: Thus, it is possible to increase machine utilization and decrease production cost in an automated manufacturing environment.
Parameter-search methods are problem-sensitive. All methods depend on some meta-parameters of their own, which must be determined experimentally in advance. A better choice of these intrinsic parameters for a certain parameter-search method may improve its performance. Moreover, there are various implementations of the same method, which may also affect its performance. The choice of the matching (error) function has a great impact on the search process in terms of finding the optimal parameter set and minimizing the computational cost. An initial assessment of the matching function ability to distinguish between good and bad models is recommended, before launching exhaustive computations. However, different runs of a parameter search method may result in the same optimal parameter set or in different parameter sets (the model is insufficiently constrained to accurately characterize the real system). Robustness of the parameter set is expressed by the extent to which small perturbations in the parameter values are not affecting the best solution. A parameter set that is not robust is unlikely to be physiologically relevant. Robustness can also be defined as the stability of the optimal parameter set to small variations of the inputs. When trying to estimate things like the minimum, or the least-squares optimal parameters of a nonlinear system, the existence of multiple local minima can cause problems with the determination of the global optimum. Techniques such as Newton's method, the Simplex method and Least-squares Linear Taylor Differential correction technique can be useful provided that one is lucky enough to start sufficiently close to the global minimum. All these methods suffer from the inability to distinguish a local minimum from a global one because they follow the local gradients towards the minimum, even if some methods are resetting the search direction when it is likely to get stuck in presumably a local minimum. Deterministic methods based on
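One standard remedy for the local-minima problem described above is multi-start local search: run a gradient descent from many random initial points and keep the best result, so that a single unlucky start near a local minimum does not decide the answer. A minimal sketch on a 1-D objective chosen for illustration, with a local minimum at x = 0 and the global minimum near x ≈ 1.64:

```python
import random

random.seed(2)  # deterministic sketch

def f(x):
    """Objective with a local minimum at 0 and global minimum near 1.64."""
    return x ** 4 - 3 * x ** 3 + 2 * x ** 2

def fprime(x):
    return 4 * x ** 3 - 9 * x ** 2 + 4 * x

def descend(x, lr=0.01, steps=500):
    """Plain gradient descent: follows the local gradient to a minimum."""
    for _ in range(steps):
        x -= lr * fprime(x)
    return x

starts = [random.uniform(-1.0, 3.0) for _ in range(10)]
results = [descend(x) for x in starts]
best = min(results, key=f)     # keep the best of all local minima found
print(best)                    # near the global minimum x ≈ 1.64
```

Starts below the ridge at x ≈ 0.61 fall into the local minimum at 0; the multi-start wrapper survives this because at least some of the random starts land in the global basin. Robustness in the abstract's sense can then be probed by checking that small perturbations of `best` do not change the outcome.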
Wrapped Progressive Sampling Search for Optimizing Learning Algorithm Parameters
Bosch, Antal van den
2005-01-01
We present a heuristic meta-learning search method for finding a set of optimized algorithmic parameters for a range of machine learning algorithms. The method, wrapped progressive sampling, is a combination of classifier wrapping and progressive sampling of training data. A series of experiments
Introduction of IMRT in Macedonia: optimizing the MLC parameters
Intensity modulated radiotherapy (IMRT) for the Varian Eclipse Treatment Planning System (TPS) requires optimization of the values of two parameters of the Multi Leaf Collimator (MLC) – the transmission of the MLC and the so-called Dosimetric Leaf Gap (DLG). This paper describes the optimization of those parameters for one of the linear accelerators at the University Clinic for Radiotherapy and Oncology in Skopje. The starting values for the MLC parameters were determined by dose measurements with ionization chambers. Those measured values were introduced in the TPS and an IMRT test plan was created. The acquired test plan was used for irradiation of the two-dimensional chamber array 'MatriXX', and for comparison of the measured results with the corresponding results calculated by the TPS. By iteratively changing the two MLC parameters we optimized their values, so that the calculation corresponds to the measurement as closely as possible. The final results of the optimization were introduced in the TPS, enabling calculation of IMRT plans and the move towards clinical introduction of this radiotherapy technique. (Author)
Alex D Herbert
Accurate and reproducible quantification of the accumulation of proteins into foci in cells is essential for data interpretation and for biological inferences. To improve reproducibility, much emphasis has been placed on the preparation of samples, but less attention has been given to reporting and standardizing the quantification of foci. The current standard to quantitate foci in open-source software is to manually determine a range of parameters based on the outcome of one or a few representative images and then apply the parameter combination to the analysis of a larger dataset. Here, we demonstrate the power and utility of using machine learning to train a new algorithm (FindFoci) to determine optimal parameters. FindFoci closely matches human assignments and allows rapid automated exploration of parameter space. Thus, individuals can train the algorithm to mirror their own assignments and then automate focus counting using the same parameters across a large number of images. Using the training algorithm to match human assignments of foci, we demonstrate that applying an optimal parameter combination from a single image is not broadly applicable to analysis of other images scored by the same experimenter or by other experimenters. Our analysis thus reveals wide variation in human assignment of foci and their quantification. To overcome this, we developed training on multiple images, which reduces the inconsistency of using a single or a few images to set parameters for focus detection. FindFoci is provided as an open-source plugin for ImageJ.
Optimization of polyetherimide processing parameters for optical interconnect applications
Zhao, Wei; Johnson, Peter; Wall, Christopher
2015-10-01
ULTEM® polyetherimide (PEI) resins have been used in opto-electronic markets since the optical properties of these materials enable the design of critical components under tight tolerances. PEI resins are the material of choice for injection molded integrated lens applications due to good dimensional stability, near infrared (IR) optical transparency, low moisture uptake and high heat performance. In most applications, parts must be produced consistently with minimal deviations to ensure compatibility throughout the lifetime of the part. With the large number of lenses needed for this market, injection molding has been optimized to maximize the production rate. These optimized parameters for high throughput may or may not translate to optimized optical performance. In this paper, we evaluate and optimize PEI injection molding processes with a focus on optical property performance. A commonly used commercial grade was studied to determine factors and conditions which contribute to optical transparency, color, and birefringence. Melt temperature, mold temperature, injection speed and cycle time were varied to develop optimization trials and evaluate optical properties. These parameters could be optimized to reduce in-plane birefringence from 0.0148 to 0.0006 in this study. In addition, we have studied an optically smooth, sub-10nm roughness mold to re-evaluate material properties with minimal influence from mold quality and further refine resin and process effects for the best optical performance.
AMMOS: Automated Molecular Mechanics Optimization tool for in silico Screening
Pajeva Ilza
2008-10-01
Background: Virtual or in silico ligand screening combined with other computational methods is one of the most promising ways to search for new lead compounds, thereby greatly assisting the drug discovery process. Despite considerable progress made in virtual screening methodologies, available computer programs do not easily address problems such as: structural optimization of compounds in a screening library, receptor flexibility/induced fit, and accurate prediction of protein-ligand interactions. It has been shown that structural optimization of chemical compounds and post-docking optimization in multi-step structure-based virtual screening approaches help to further improve the overall efficiency of the methods. To address some of these points, we developed the program AMMOS for refining both the 3D structures of the small molecules present in chemical libraries and the predicted receptor-ligand complexes, allowing partial to full atom flexibility through molecular mechanics optimization. Results: The program AMMOS carries out an automatic procedure that allows for the structural refinement of compound collections and energy minimization of protein-ligand complexes using the open source program AMMP. The performance of our package was evaluated by comparing the structures of small chemical entities minimized by AMMOS with those minimized with the Tripos and MMFF94s force fields. Next, AMMOS was used for fully flexible minimization of protein-ligand complexes obtained from a multi-step virtual screening. Enrichment studies of the selected pre-docked complexes containing 60% of the initially added inhibitors were carried out with or without final AMMOS minimization on two protein targets having different binding pocket properties. AMMOS was able to improve the enrichment after the pre-docking stage with 40 to 60% of the initially added active compounds found in the top 3% to 5% of the entire compound collection
Parameters Optimization of Low Carbon Low Alloy Steel Annealing Process
Maoyu ZHAO; Qianwang CHEN
2013-01-01
A suitable match of annealing process parameters is critical for obtaining a fine material microstructure. Low carbon low alloy steel (20CrMnTi) was heated for various durations near the Ac temperature to obtain fine pearlite and ferrite grains. Annealing temperature and time were used as independent variables, and material property data were acquired by orthogonal experiment design under an intercritical process followed by a subcritical annealing process (IPSAP). The weights of plasticity (hardness, yield strength, section shrinkage and elongation) of the annealed material were calculated by the analytic hierarchy process, and the process parameters were then optimized by grey system theory. SEM images show that the microstructure of the optimally annealed material consists of smaller lamellar pearlites (ferrite-cementite) and refined ferrites which are distributed uniformly. Morphologies of the tension fracture surface of the optimally annealed material show markedly finer toughness dimples compared with the other annealed materials. Moreover, the yield strength of the optimally annealed material decreases appreciably in the tensile test. Thus, the new optimization strategy is accurate and feasible.
Novel Approach to Nonlinear PID Parameter Optimization Using Ant Colony Optimization Algorithm
Duan Hai-bin; Wang Dao-bo; Yu Xiu-fen
2006-01-01
This paper presents an application of an Ant Colony Optimization (ACO) algorithm to optimize the parameters in the design of a type of nonlinear PID controller. The ACO algorithm is a novel heuristic bionic algorithm based on the foraging behaviour of real ants in nature. To optimize the parameters of the nonlinear PID controller using the ACO algorithm, an objective function based on position tracking error was constructed, and an elitist strategy was adopted in the improved ACO algorithm. Detailed simulation steps are presented. The nonlinear PID controller tuned by the ACO algorithm achieves high control precision and quick response.
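The abstract gives neither the plant nor the elitist-ACO details. As a rough, hypothetical stand-in, the sketch below tunes PID gains against an assumed first-order plant by keeping the best (elite) gain triple found by plain random search over a tracking-error objective; the plant model, gain ranges, and simulation horizon are all illustrative assumptions, not values from the paper.

```python
import math
import random

def step_response_cost(kp, ki, kd, n=200, dt=0.05):
    """Integrated squared tracking error for a PID controller driving a
    hypothetical first-order plant y' = -y + u toward a unit step setpoint."""
    y = integ = cost = 0.0
    prev_err = 1.0
    for _ in range(n):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)                # forward-Euler plant update
        cost += err * err * dt
    return cost

def tune_pid(iters=2000, seed=0):
    """Stand-in for the paper's elitist ACO: random search that keeps the
    best (elite) gain triple found so far; unstable candidates whose cost
    is not finite are discarded."""
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        gains = (rng.uniform(0, 10), rng.uniform(0, 5), rng.uniform(0, 1))
        c = step_response_cost(*gains)
        if math.isfinite(c) and (best is None or c < best[0]):
            best = (c,) + gains
    return best
```

With all gains at zero the plant never moves and the cost equals the simulated horizon (10.0), so any sensible search result should be far below that baseline.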
Automation for pattern library creation and in-design optimization
Deng, Rock; Zou, Elain; Hong, Sid; Wang, Jinyan; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh; Ding, Hua; Huang, Jason
2015-03-01
contain remedies built in so that fixing happens either automatically or in a guided manner. Building a comprehensive library of patterns is a very difficult task especially when a new technology node is being developed or the process keeps changing. The main dilemma is not having enough representative layouts to use for model simulation where pattern locations can be marked and extracted. This paper will present an automatic pattern library creation flow by using a few known yield detractor patterns to systematically expand the pattern library and generate optimized patterns. We will also look at the specific fixing hints in terms of edge movements, additive, or subtractive changes needed during optimization. Optimization will be shown for both the digital physical implementation and custom design methods.
An automatic and effective parameter optimization method for model tuning
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-11-01
Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Unlike traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial values for the sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
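The screen-then-simplex idea can be sketched in miniature (step 2, choosing good initial values, is collapsed here). The metric below is a toy stand-in for the paper's model-evaluation metric, only two of four "parameters" being sensitive, and the simplex is a compact textbook Nelder-Mead, not the authors' code.

```python
def metric(p):
    """Toy stand-in for the model-evaluation metric: only the first two
    of the four parameters actually influence the score."""
    return (p[0] - 0.3) ** 2 + 2.0 * (p[1] - 1.5) ** 2 + 1.0

def sensitive_indices(f, p0, eps=1e-2, tol=1e-6):
    """Step 1: one-at-a-time screening; keep only parameters whose
    perturbation changes the metric noticeably."""
    base = f(p0)
    idx = []
    for i in range(len(p0)):
        q = list(p0)
        q[i] += eps
        if abs(f(q) - base) > tol:
            idx.append(i)
    return idx

def nelder_mead(f, x0, step=0.5, iters=200):
    """A compact downhill simplex (reflect, expand, contract, shrink)."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        cen = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [2.0 * cen[i] - worst[i] for i in range(n)]
        if f(refl) < f(best):
            expa = [3.0 * cen[i] - 2.0 * worst[i] for i in range(n)]
            simplex[-1] = expa if f(expa) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            cont = [0.5 * (cen[i] + worst[i]) for i in range(n)]
            if f(cont) < f(worst):
                simplex[-1] = cont
            else:       # shrink everything toward the current best point
                simplex = [best] + [[0.5 * (p[i] + best[i]) for i in range(n)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

p0 = [0.0, 0.0, 5.0, -2.0]
idx = sensitive_indices(metric, p0)      # screening keeps parameters 0 and 1

def reduced(x):
    """Step 3 objective: vary only the sensitive parameters."""
    q = list(p0)
    for k, i in enumerate(idx):
        q[i] = x[k]
    return metric(q)

opt = nelder_mead(reduced, [p0[i] for i in idx])
```

Restricting the simplex to the screened subset is exactly what cuts the dimensionality and speeds convergence in the paper's scheme.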
Identification of optimal parameter combinations for the emergence of bistability
Májer, Imre; Hajihosseini, Amirhossein; Becskei, Attila
2015-12-01
Bistability underlies cellular memory and maintains alternative differentiation states. Bistability can emerge only if its parameter range is either physically realizable or can be enlarged to become realizable. We derived a general rule and showed that the bistable range of a reaction parameter is maximized by a pair of other parameters in any gene regulatory network, provided they satisfy a general condition. The resulting analytical expressions reveal whether or not such reaction pairs are present in prototypical positive feedback loops. They are absent from the feedback loop enclosed by protein dimers but present in both the toggle switch and the feedback circuit inhibited by sequestration. Sequestration can generate bistability even over a narrow feedback expression range where cooperative binding fails to do so, provided inhibition is set to an optimal value. These results help in designing bistable circuits and cellular reprogramming, and reveal whether bistability is possible in gene networks in the range of realistic parameter values.
Cosmological parameter estimation using Particle Swarm Optimization (PSO)
Prasad, Jayanti
2011-01-01
Obtaining the set of cosmological parameters consistent with observational data is an important exercise in current cosmological research. It involves finding the global maximum of the likelihood function in the multi-dimensional parameter space. Currently, sampling-based methods, which are in general stochastic in nature, like Markov Chain Monte Carlo (MCMC), are commonly used for parameter estimation. The beauty of stochastic methods is that the computational cost grows at most linearly, rather than exponentially (as in grid-based approaches), with the dimensionality of the search space. MCMC methods sample the full joint probability distribution (posterior), from which one- and two-dimensional probability distributions, best-fit (average) parameter values, and error bars can be computed. In the present work we demonstrate the application of another stochastic method, named Particle Swarm Optimization (PSO), that is widely used in the fields of engineering and artificial intelligence, for cosmo...
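A minimal particle swarm, with the usual inertia and cognitive/social terms, can be sketched on a toy chi-square surface standing in for the cosmological likelihood; the swarm size and constants below are conventional defaults, not values taken from the paper.

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm: velocities mix inertia, a pull toward each
    particle's personal best, and a pull toward the swarm's global best."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy chi-square surface standing in for the likelihood peak:
best, val = pso(lambda p: (p[0] - 0.7) ** 2 + (p[1] - 0.3) ** 2,
                [(0.0, 1.0), (0.0, 1.0)])
```

Unlike MCMC, the swarm only locates the maximum; it does not sample the posterior, which is the trade-off the abstract alludes to.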
Identification of metabolic system parameters using global optimization methods
Gatzke, Edward P.
2006-01-01
Background: The problem of estimating the parameters of dynamic models of complex biological systems from time series data is becoming increasingly important. Methods and results: Particular consideration is given to metabolic systems that are formulated as Generalized Mass Action (GMA) models. The estimation problem is posed as a global optimization task, for which novel techniques can be applied to determine the best set of parameter values given the measured responses of the biological system. The challenge is that this task is nonconvex. Nonetheless, deterministic optimization techniques can be used to find a global solution that best reconciles the model parameters and measurements. Specifically, the paper employs branch-and-bound principles to identify the best set of model parameters from observed time course data and illustrates this method with an existing model of the fermentation pathway in Saccharomyces cerevisiae. This is a relatively simple yet representative system with five dependent states and a total of 19 unknown parameters whose values are to be determined. Conclusion: The efficacy of the branch-and-reduce algorithm is illustrated by the S. cerevisiae example. The method described in this paper is likely to be widely applicable in the dynamic modeling of metabolic networks.
Using Neural Networks to Tune Heuristic Parameters in Evolutionary Optimization
Holeňa, Martin
Athens : WSEAS Press, 2006 - (Espi, P.; Giron-Sierra, J.; Drigas, A.), s. 1-6 ISBN 960-8457-41-6. [AIKED'06. WSEAS International Conference on Artificial Intelligence, Knowledge Engineering and Data Bases. Madrid (ES), 15.02.2006-17.02.2006] R&D Projects: GA ČR(CZ) GA201/05/0325 Institutional research plan: CEZ:AV0Z10300504 Keywords: evolutionary optimization * genetic algorithms * heuristic parameters * parameter tuning * artificial neural networks * convergence speed * population diversity Subject RIV: IN - Informatics, Computer Science
Study on optimization of parameters in a biological model
Anonymous
2001-01-01
According to the data observed in a China-Japan Joint Investigation, the parameters of an ecosystem dynamics model (Qiao et al., 2000) were optimized. The values of eighteen parameters for the model were obtained, with nutrient half-saturation constants Kn = 1.4 μmol/dm3, Kp = 0.129 μmol/dm3 and Ks = 1.16 μmol/dm3 for the diatom and Kn = 0.345 μmol/dm3, Kp = 0.113 μmol/dm3 for the flagellate. Three proposals for setting up an objective function for this multiple-objective problem are discussed in detail.
A Parameter Optimization for a National SASE FEL Facility
The parameter optimization for a national SASE FEL facility was studied. The Turkish State Planning Organization (DPT) gave financial support, as an inter-university project, to begin technical design studies and a test facility for the National Accelerator Complex starting from 2006. In addition to a particle factory, the complex will contain a linac-based free electron laser, positron-ring-based synchrotron radiation facilities and a proton accelerator. In this paper, we give some results for the main parameters of the SASE FEL facility based on a 130 MeV linac and its application potential in basic and applied research
Vidyadhar Rao
2016-06-01
Current models of Automated Haematology Analysers help in calculating the haemoglobin content of mature red cells and reticulocytes and the percentages of microcytic and hypochromic red cells. This has helped clinicians reach early diagnosis and management of different haemopoietic disorders like iron deficiency anaemia, thalassaemia and anaemia of chronic disease. AIM: This study uses an Automated Haematology Analyser to evaluate anaemia using red cell and reticulocyte parameters. Three types of anaemia were evaluated: iron deficiency anaemia, anaemia of long duration and anaemia associated with chronic disease and iron deficiency. MATERIALS AND METHODS: Blood samples were collected from 287 adult patients with anaemia, differentiated depending upon their iron status, haemoglobinopathies and inflammatory activity: iron deficiency anaemia (n=132), anaemia of long duration (ACD) (n=97) and anaemia associated with chronic disease with iron deficiency (ACD Combi) (n=58). The percentages of microcytic and hypochromic red cells and the haemoglobin levels in reticulocytes and mature RBCs were calculated. The accuracy of the parameters in differentiating between the types of anaemia was analysed using receiver operating characteristic analysis. OBSERVATIONS AND RESULTS: There was no difference in parameters between the iron deficiency group and the group with anaemia associated with chronic disease and iron deficiency. The hypochromic red cell percentage was the best parameter for differentiating anaemia of chronic disease with or without absolute iron deficiency, with a sensitivity of 72.7% and a specificity of 70.4%. CONCLUSIONS: Red cell and reticulocyte parameters are reasonably good indicators for differentiating absolute iron deficiency anaemia from anaemia of chronic disease.
Using string invariants for prediction searching for optimal parameters
Bundzel, Marek; Kasanický, Tomáš; Pinčák, Richard
2016-02-01
We have developed a novel prediction method based on string invariants. The method does not require learning, but a small set of parameters must be set to achieve optimal performance. We have implemented an evolutionary algorithm for this parametric optimization. We have tested the performance of the method on artificial and real-world data and compared it to statistical methods and to a number of artificial intelligence methods, using the data and results of a prediction competition as a benchmark. The results show that the method performs well in single-step prediction, but its performance for multi-step prediction needs to be improved. The method works well for a wide range of parameters.
Optimization of E. coli Cultivation Model Parameters Using Firefly Algorithm
Olympia Roeva
2012-04-01
In this paper, a novel meta-heuristic algorithm, namely the Firefly Algorithm (FA), is adapted and applied to model parameter identification of an E. coli fed-batch cultivation process. A system of ordinary nonlinear differential equations is used to model the biomass growth and substrate utilization. Parameter optimization is performed using a real experimental data set from an E. coli MC4110 fed-batch cultivation process. The FA is adjusted based on several pre-tests for the optimization problem considered here. The simulation results indicate that the applied algorithm is effective and efficient, yielding a model with a high degree of accuracy.
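The FA's core update (each firefly moves toward every brighter one, with distance-dependent attraction beta0*exp(-gamma*r^2) and a shrinking random step) can be sketched as below. The two-parameter least-squares objective is a hypothetical stand-in for the E. coli model fit, and the FA constants are generic defaults, not the paper's tuned values.

```python
import math
import random

def firefly(f, bounds, n=25, iters=150, beta0=1.0, gamma=1.0, alpha=0.2, seed=3):
    """Minimal firefly algorithm: every firefly moves toward each brighter
    (lower-cost) one, plus a random step that decays over the iterations."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    cost = [f(x) for x in xs]
    for t in range(iters):
        a = alpha * (1.0 - t / iters)          # decaying random-step size
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:          # j is brighter: move i toward j
                    r2 = sum((xs[i][d] - xs[j][d]) ** 2 for d in range(dim))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d in range(dim):
                        lo, hi = bounds[d]
                        step = beta * (xs[j][d] - xs[i][d]) + a * (rng.random() - 0.5)
                        xs[i][d] = min(hi, max(lo, xs[i][d] + step))
                    cost[i] = f(xs[i])
    k = min(range(n), key=lambda i: cost[i])
    return xs[k], cost[k]

# Hypothetical two-parameter least-squares fit standing in for the model:
best, err = firefly(lambda p: (p[0] - 0.5) ** 2 + (p[1] - 0.1) ** 2,
                    [(0.0, 1.0), (0.0, 1.0)])
```

In a real identification run, `f` would integrate the ODE system for a candidate parameter set and return the squared error against the measured biomass and substrate trajectories.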
Multidimensional optimization of signal space distance parameters in WLAN positioning.
Brković, Milenko; Simić, Mirjana
2014-01-01
Accurate indoor localization of mobile users is one of the challenging problems of the last decade. Besides delivering high speed Internet, Wireless Local Area Network (WLAN) can be used as an effective indoor positioning system, being competitive both in terms of accuracy and cost. Among the localization algorithms, nearest neighbor fingerprinting algorithms based on Received Signal Strength (RSS) parameter have been extensively studied as an inexpensive solution for delivering indoor Location Based Services (LBS). In this paper, we propose the optimization of the signal space distance parameters in order to improve precision of WLAN indoor positioning, based on nearest neighbor fingerprinting algorithms. Experiments in a real WLAN environment indicate that proposed optimization leads to substantial improvements of the localization accuracy. Our approach is conceptually simple, is easy to implement, and does not require any additional hardware. PMID:24757443
Optimization of the drying parameters of a veneer roller dryer
Marttila, Heikki
2014-01-01
The objective of this master's thesis was to experimentally find the optimal drying parameters for spruce heartwood veneers in terms of veneer quality and drying capacity. Quality was assessed in terms of moisture content, moisture deviation, tensile strength (across the grain direction), surface roughness, wettability, waviness and other visual defects. The strength properties of plywood were excluded from the study. The mill experiments were conducted at the UPM Pellos 3 jet roller dryer in June ...
Kinetic parameter estimation from TGA: Optimal design of TGA experiments
Dirion, Jean-Louis; Reverte, Cédric; Cabassud, Michel
2008-01-01
This work presents a general methodology for determining kinetic models of solid thermal decomposition with thermogravimetric analysis (TGA) instruments. The goal is to determine a simple and robust kinetic model for a given solid with a minimum of TGA experiments. From this point of view, this work can be seen as an attempt to find the optimal design of TGA experiments for kinetic modelling. Two computation tools were developed. The first is a nonlinear parameter estimation procedure for...
Optimal upper bounds for non-negative parameters
Tkachov, Fyodor V.
2009-01-01
Using the techniques of [arXiv:0911.4271], upper bounds for a given confidence level are modified in an optimal fashion to incorporate the a priori information that the parameter being estimated is non-negative. A paradox with different confidence intervals for the same confidence level is clarified. The "lossy compression" nature of the device of confidence intervals is discussed and a "lossless" option to present results is pointed out.
OPTIMIZATION OF PARAMETER FOR METAL MATRIX COMPOSITE IN WIRE EDM
Nagaraja, R.; Chandrasekaran, K.; Shenbhgaraj, S.
2015-01-01
The bronze alumina (Al2O3) alloy is a Metal Matrix Composite (MMC) of interest in several applications like bearing sleeves, pistons and cylinder liners. The reinforcement used in this MMC makes it difficult to machine using traditional techniques. Wire Electric Discharge Machining (WEDM) seems to be a viable option. This paper presents an investigation of the optimization of machining parameters in WEDM of bronze-alumina MMC. The main objective is to find the optimum ...
Limiting Behaviour in Parameter Optimal Iterative Learning Control
David H. Owens; Maria Tomas-Rodriguez; Jari J. Hätönen
2006-01-01
This paper analyses the concept of a limit set in Parameter Optimal Iterative Learning Control (ILC). We investigate the existence of stable and unstable parts of the limit set and demonstrate that they will often exist in practice. This is illustrated via a two-dimensional example in which the convergence of the learning algorithm is analysed from the error's dynamic behaviour. These ideas are extended to N-dimensional cases by analogy and example.
Gamma knife treatments are usually planned manually, requiring much expertise and time. We describe a new, fully automatic method of treatment planning. The treatment volume to be planned is first compared with a database of past treatments to find volumes closely matching in size and shape. The treatment parameters of the closest matches are used as starting points for the new treatment plan. Further optimization is performed with the Nelder-Mead simplex method: the coordinates and weights of the isocenters are allowed to vary until a maximally conformal plan specific to the new treatment volume is found. The method was tested on a randomly selected set of 10 acoustic neuromas and 10 meningiomas. Typically, matching a new volume took under 30 seconds. The time for simplex optimization, on a 3 GHz Xeon processor, ranged from under a minute for small volumes to considerably longer for large volumes (30 000 cubic mm, >20 isocenters). In 8/10 acoustic neuromas and 8/10 meningiomas, the automatic method found plans with a conformation number equal to or better than that of the manual plan. In 4/10 acoustic neuromas and 5/10 meningiomas, both overtreatment and undertreatment ratios were equal or better in the automated plans. In conclusion, data mining of past treatments can be used to derive starting parameters for treatment planning. These parameters can then be computer-optimized to give good plans automatically.
PARAMETER ESTIMATION OF VALVE STICTION USING ANT COLONY OPTIMIZATION
S. Kalaivani
2012-07-01
In this paper, a procedure for quantifying valve stiction in control loops based on ant colony optimization is proposed. Pneumatic control valves are widely used in the process industry. A control valve contains nonlinearities such as stiction, backlash, and deadband that in turn cause oscillations in the process output. Stiction is a long-standing problem and the most severe one in control valves. Thus the measurement data from an oscillating control loop can be used as a diagnostic signal to provide an estimate of the stiction magnitude. Quantification of control valve stiction is still a challenging issue. Prior to stiction detection and quantification, it is necessary to choose a suitable model structure to describe control-valve stiction; to capture the stiction phenomenon, the Stenman model is used. Ant Colony Optimization (ACO), an intelligent swarm algorithm, has proved effective in various fields. The ACO algorithm is inspired by the natural trail-following behaviour of ants. The parameters of the Stenman model are estimated using ant colony optimization, from the input-output data, by minimizing the error between the actual stiction model output and the simulated stiction model output. Using ant colony optimization, a Stenman model with known nonlinear structure and unknown parameters can be estimated.
Analysis of optimization parameters in chest radiographs procedures
The risks associated with ionizing radiation became evident soon after the discovery of X radiation. Therefore, any medical practice that makes use of any type of ionizing radiation should be subject to the basic principles of radiological protection: justification, optimization of protection and application of dose limits. In diagnostic radiology, this means seeking the lowest dose reasonably practicable without compromising image quality. The purpose of this project was to evaluate optimization parameters, specifically image quality, exposure levels and radiograph rejection rates, in radiological chest examinations. The image quality evaluation was performed using two forms, one for adults and one for children, based on European standards. From the results, we conclude that the evaluated sector does not comply with the principle of optimization, and this reality is not different from that of most health institutions. The entrance surface air kerma (Ka,e) results were below the national diagnostic reference levels. However, several image quality parameters received insufficient ratings and the film rejection rates were high. The lack of optimization generates poor quality images, causing inaccurate diagnostic reports and increasing operating costs. Therefore, this research highlights the urgency of implementing Quality Control Assurance Programs in all radiology services in the country. (author)
OPTIMIZATION OF PARAMETER FOR METAL MATRIX COMPOSITE IN WIRE EDM
Nagaraja, R.
2015-02-01
The bronze alumina (Al2O3) alloy is a Metal Matrix Composite (MMC) of interest in several applications like bearing sleeves, pistons and cylinder liners. The reinforcement used in this MMC makes it difficult to machine using traditional techniques. Wire Electric Discharge Machining (WEDM) seems to be a viable option. This paper presents an investigation of the optimization of machining parameters in WEDM of bronze-alumina MMC. The main objective is to find the optimum cutting parameters to achieve a low value of surface roughness and a high value of material removal rate (MRR). The cutting parameters considered in this experimental study are pulse on time (Ton), pulse off time (Toff) and wire feed rate. The settings of the cutting parameters were determined by using the Taguchi experimental design method; an L9 orthogonal array was chosen. The signal-to-noise (S/N) ratio and analysis of variance (ANOVA) were used to analyze the effect of the parameters on surface roughness and to identify the optimum cutting parameters. The contribution of each cutting parameter to surface roughness and MRR is also identified. The study shows that the Taguchi method is suitable for solving the stated problem with a minimum number of trials compared with a full factorial design.
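The Taguchi analysis described here (L9 array, S/N ratios, level means) is standard and easy to sketch. The factor-to-column assignment and the roughness readings below are made up for illustration; the smaller-is-better S/N criterion is the conventional one for surface roughness.

```python
import math

# Hypothetical L9 orthogonal array: three factors (Ton, Toff, wire feed),
# three levels each; one made-up surface roughness reading per trial (um).
L9 = [(1, 1, 1), (1, 2, 2), (1, 3, 3),
      (2, 1, 2), (2, 2, 3), (2, 3, 1),
      (3, 1, 3), (3, 2, 1), (3, 3, 2)]
Ra = [2.0, 2.3, 2.6, 2.6, 2.9, 2.9, 3.2, 3.2, 3.5]

def sn_smaller_is_better(y):
    """Taguchi S/N ratio for a single replicate, smaller-is-better."""
    return -10.0 * math.log10(y * y)

sn = [sn_smaller_is_better(y) for y in Ra]

def main_effects(col):
    """Mean S/N per level of one factor; the optimum level maximizes S/N."""
    out = {}
    for lvl in (1, 2, 3):
        vals = [sn[i] for i, row in enumerate(L9) if row[col] == lvl]
        out[lvl] = sum(vals) / len(vals)
    return out

best_levels = []
for c in range(3):
    eff = main_effects(c)
    best_levels.append(max(eff, key=eff.get))
# With these made-up readings, level 1 of every factor maximizes mean S/N.
```

A larger-is-better criterion for MRR would use -10*log10(mean(1/y^2)) instead, and ANOVA on the same level means would apportion each factor's contribution.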
Optimizing casting parameters of steel ingot based on orthogonal method
张沛; 李学通; 臧新良; 杜凤山
2008-01-01
The influence and significance of casting parameters on the solidification process of steel ingots were discussed based on finite element method (FEM) results, using the orthogonal experiment method. Range analysis, analysis of variance (ANOVA) and an optimization project were used to investigate the FEM results. In order to reduce the ingot riser head and improve the utilization ratio of the ingot, the casting parameters, comprising casting temperature, pouring velocity and interface heat transfer, were optimized to decrease shrinkage pores and microporosity. The results show that the heat transfer coefficient between melt and heated board is the most sensitive factor. Low temperature, high pouring velocity and high heat transfer between melt and mold are favorable for decreasing shrinkage pores and microporosity. If heat transfer in the ingot body is quicker than in the riser, the shrinkage pores and microporosity will be located closer to the riser top. The optimization results show that, with rational parameters, few shrinkage pores and microporosity reach into the ingot body, so the riser size can be reduced.
Damage localization using experimental modal parameters and topology optimization
Niemann, Hanno; Morlier, Joseph; Shahdin, Amir; Gourinat, Yves
2010-04-01
This work focuses on the development of a damage detection and localization tool using the topology optimization feature of MSC.Nastran. The approach is based on the correlation between a local stiffness loss and the change in modal parameters due to damage in structures. The loss in stiffness is accounted for by the topology optimization approach, which updates undamaged numerical models towards similar models with embedded damage. Only a mass penalization and the changes in experimentally obtained modal parameters are used as objectives. The theoretical background for the implementation of this method is derived and programmed in a Nastran input file, and the general feasibility of the approach is validated numerically as well as experimentally, by updating a model of an experimentally tested composite laminate specimen. The damage was introduced to the specimen by controlled low-energy impacts, and high quality vibration tests were conducted on the specimen for different levels of damage. These supervised experiments allow the numerical diagnosis tool to be tested by comparing its results with both NDT techniques and the results of previous works (concerning shifts in modal parameters due to damage). Good results were finally achieved for the localization of the damage by topology optimization.
Optimization of laser butt welding parameters with multiple performance characteristics
Sathiya, P.; Abdul Jaleel, M. Y.; Katherasan, D.; Shanmugarajan, B.
2011-04-01
This paper presents a study of 3.5 kW cooled slab laser welding of 904 L super austenitic stainless steel. The joints were butt welded with different shielding gases, namely argon, helium and nitrogen, at a constant flow rate. Super austenitic stainless steel (SASS) normally contains high amounts of Mo, Cr, Ni, N and Mn. The mechanical properties are controlled to obtain good welded joints. The quality of the joint is evaluated by studying the features of the weld bead geometry, such as bead width (BW) and depth of penetration (DOP). In this paper, the tensile strength and bead profiles (BW and DOP) of laser-welded butt joints made of AISI 904 L SASS are investigated. The Taguchi approach is used as a statistical design of experiments (DOE) technique for optimizing the selected welding parameters. Grey relational analysis and the desirability approach are applied to optimize the input parameters by considering multiple output variables simultaneously. Confirmation experiments have also been conducted for both analyses to validate the optimized parameters.
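Grey relational analysis, used here to fold multiple weld responses into a single grade, can be sketched as follows; the trial responses are hypothetical, and equal response weights with the customary distinguishing coefficient of 0.5 are assumed.

```python
# Hypothetical responses for four welding trials: tensile strength (MPa,
# larger-is-better) and bead width (mm, smaller-is-better).
trials = {"A1": (520.0, 1.9), "A2": (545.0, 1.6),
          "A3": (560.0, 1.8), "A4": (535.0, 1.5)}
names = list(trials)

def normalize(values, larger_is_better):
    """Map each response onto [0, 1] with 1 as the ideal."""
    lo, hi = min(values), max(values)
    if larger_is_better:
        return [(v - lo) / (hi - lo) for v in values]
    return [(hi - v) / (hi - lo) for v in values]

cols = [normalize([trials[k][0] for k in names], True),
        normalize([trials[k][1] for k in names], False)]

zeta = 0.5                                    # distinguishing coefficient
deltas = [[1.0 - x for x in col] for col in cols]
dmin = min(d for col in deltas for d in col)  # 0.0 after normalization
dmax = max(d for col in deltas for d in col)  # 1.0 after normalization
coeff = [[(dmin + zeta * dmax) / (d + zeta * dmax) for d in col]
         for col in deltas]

# Grey relational grade: equal-weight average of the coefficients.
grade = {k: (coeff[0][i] + coeff[1][i]) / 2 for i, k in enumerate(names)}
best = max(grade, key=grade.get)
```

The trial with the highest grade is the best compromise across both responses; with these made-up numbers that is A4, which trades a little strength for the narrowest bead.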
Optimizing spectral CT parameters for material classification tasks
Rigie, D. S.; La Rivière, P. J.
2016-06-01
In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POCs) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POCs predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies.
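For a two-class task with shared covariance, the Hotelling-observer separability metric reduces to SNR^2 = dmu^T S^-1 dmu with template w = S^-1 dmu. A small sketch with hypothetical two-channel class statistics (the numbers are illustrative, not from the paper):

```python
# Hypothetical two-channel (e.g. low/high-kVp) class statistics:
# mean-difference vector dmu and shared covariance matrix S.
dmu = [1.0, -0.5]
S = [[0.5, 0.1],
     [0.1, 0.4]]

def solve2(S, b):
    """Solve the 2x2 linear system S x = b by Cramer's rule."""
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return [(b[0] * S[1][1] - b[1] * S[0][1]) / det,
            (S[0][0] * b[1] - S[1][0] * b[0]) / det]

template = solve2(S, dmu)                          # w = S^-1 dmu
snr2 = sum(w * d for w, d in zip(template, dmu))   # SNR^2 = dmu^T S^-1 dmu
```

Sweeping such an SNR^2 over candidate system settings (kVp pair, filtration, dose split) is what traces out a parameter optimization curve in the paper's sense.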
Optimal construction parameters of electrosprayed trilayer organic photovoltaic devices
A detailed investigation of the optimal set of parameters employed in multilayer device fabrication through successive electrospray-deposited layers is reported. In this scheme, the donor/acceptor (D/A) bulk heterojunction layer is sandwiched between two thin stacked layers of the individual donor and acceptor materials. The stacked-layer geometry with optimal thicknesses plays a decisive role in improving operating characteristics. Among the parameters of the multilayer organic photovoltaic device, the D/A concentration ratio, blend thickness and stacking layer thicknesses are optimized. Other parameters, such as thermal annealing and the role of the top metal contacts, are also discussed. Internal photon-to-current efficiency is found to show a strong response in the 500 nm optical region for the most efficient device architectures. This observation indicates a clear interplay between photon harvesting in the active layers and transport by the ancillary stacking layers, opening up the possibility of engineering both the material fine structure and the device architecture to obtain the best photovoltaic response from a complex organic heterostructure. (paper)
Mathematical Modelling and Parameter Optimization of Pulsating Heat Pipes
Yang, Xin-She; Luan, Tao; Koziel, Slawomir
2014-01-01
Proper heat transfer management is important to key electronic components in microelectronic applications. Pulsating heat pipes (PHP) can be an efficient solution to such heat transfer problems. However, mathematical modelling of a PHP system is still very challenging, due to the complexity and multiphysics nature of the system. In this work, we present a simplified, two-phase heat transfer model, and our analysis shows that it can make good predictions about startup characteristics. Furthermore, by considering parameter estimation as a nonlinear constrained optimization problem, we have used the firefly algorithm to find parameter estimates efficiently. We have also demonstrated that it is possible to obtain good estimates of key parameters using very limited experimental data.
Optimization of Neutrino Oscillation Parameters Using Differential Evolution
Ghulam Mustafa; Faisal Akram; Bilal Masud
2013-01-01
We show how the traditional grid-based method for finding the neutrino oscillation parameters Δm² and tan²θ can be combined with an optimization technique, Differential Evolution (DE), to obtain a significant decrease in the computer processing time required to reach the minimal chi-square (χ²) in four different regions of the parameter space. We demonstrate the efficiency for the two-neutrino case. For this, the χ² function for neutrino oscillations is evaluated on grids with different densities of points in the standard allowed regions of the parameter space of Δm² and tan²θ, using experimental and theoretical total event rates of the chlorine (Homestake), Gallex+GNO, SAGE, Super-Kamiokande, and SNO detectors. We find that using DE in combination with the grid-based method with a small density of points can produce results comparable to those obtained using a high-density grid, in much less computation time.
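A minimal DE/rand/1/bin loop on a toy χ² surface can illustrate the optimizer; the two parameters merely play the roles of Δm² and tan²θ, and the bounds and control constants are illustrative assumptions, not the paper's settings.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.6, CR=0.9,
                           gens=150, seed=7):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    binomial crossover, greedy one-to-one selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([k for k in range(pop_size) if k != i], 3)
            jrand = rng.randrange(dim)        # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(hi, max(lo, v))   # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = f(trial)
            if tc <= cost[i]:
                pop[i], cost[i] = trial, tc
    k = min(range(pop_size), key=lambda i: cost[i])
    return pop[k], cost[k]

# Toy chi-square surface; the parameters stand in for dm^2 and tan^2(theta):
best, val = differential_evolution(
    lambda p: (p[0] - 2.4) ** 2 + 10.0 * (p[1] - 0.4) ** 2,
    [(0.0, 10.0), (0.0, 1.0)])
```

In the paper's hybrid scheme, a coarse grid would first bracket each allowed region and DE would then refine the minimum inside it, which is where the time saving comes from.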
Yelk, Joseph; Sukharev, Maxim; Seideman, Tamar
2008-01-01
An optimal control approach based on multiple parameter genetic algorithms is applied to the design of plasmonic nanoconstructs with pre-determined optical properties and functionalities. We first develop nanoscale metallic lenses that focus an incident plane wave onto a pre-specified, spatially confined spot. Our results illustrate the role of symmetry breaking and unravel the principles that favor dimeric constructs for optimal light localization. Next we design a periodic array of silver p...
Dose-painting IMRT optimization using biological parameters
Kim, Yusung (Dept. of Radiation Oncology, Univ. of Iowa, Iowa City (United States)); Tome, Wolfgang A. (Dept. of Human Oncology Univ. of Wisconsin, Madison (United States)), E-mail: tome@humonc.wisc.edu
2010-11-15
Purpose. Our work on dose-painting based on the possible risk characteristics for local recurrence in tumor subvolumes and the optimization of treatment plans using biological objective functions that are region-specific are reviewed. Materials and methods. A series of intensity modulated dose-painting techniques are compared to their corresponding intensity modulated plans in which the entire PTV is treated to a single dose level, delivering the same equivalent uniform dose (EUD) to the entire PTV. Iso-TCP and iso-NTCP maps are introduced as a tool to aid the planner in the evaluation of the resulting non-uniform dose distributions. Iso-TCP and iso-NTCP maps are akin to iso-dose maps in 3D conformal radiotherapy. The impact of the currently limited diagnostic accuracy of functional imaging on a series of dose-painting techniques is also discussed. Results. Utilizing biological parameters (risk-adaptive optimization) in the generation of dose-painting plans results in an increase in the therapeutic ratio as compared to conventional dose-painting plans in which optimization techniques based on physical dose are employed. Conclusion. Dose-painting employing biological parameters appears to be a promising approach for individualized patient- and disease-specific radiotherapy
Shen, Meie; Chen, Wei-Neng; Zhang, Jun; Chung, Henry Shu-Hung; Kaynak, Okyay
2013-04-01
The optimal selection of parameters for time-delay embedding is crucial to the analysis and the forecasting of chaotic time series. Although various parameter selection techniques have been developed for conventional uniform embedding methods, the study of parameter selection for nonuniform embedding is progressed at a slow pace. In nonuniform embedding, which enables different dimensions to have different time delays, the selection of time delays for different dimensions presents a difficult optimization problem with combinatorial explosion. To solve this problem efficiently, this paper proposes an ant colony optimization (ACO) approach. Taking advantage of the characteristic of incremental solution construction of the ACO, the proposed ACO for nonuniform embedding (ACO-NE) divides the solution construction procedure into two phases, i.e., selection of embedding dimension and selection of time delays. In this way, both the embedding dimension and the time delays can be optimized, along with the search process of the algorithm. To accelerate search speed, we extract useful information from the original time series to define heuristics to guide the search direction of ants. Three geometry- or model-based criteria are used to test the performance of the algorithm. The optimal embeddings found by the algorithm are also applied in time-series forecasting. Experimental results show that the ACO-NE is able to yield good embedding solutions from both the viewpoints of optimization performance and prediction accuracy. PMID:23144038
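As a minimal illustration of the object being optimized (not the authors' ACO-NE implementation), a nonuniform delay embedding can be sketched as follows; the lag list per coordinate is exactly the combinatorial search space the ant colony explores:

```python
import math

def nonuniform_embed(series, delays):
    """Nonuniform time-delay embedding: coordinate i of each delay vector
    uses its own lag delays[i], instead of one uniform lag tau repeated."""
    max_lag = max(delays)
    n = len(series) - max_lag
    # Row t is (x[t+max_lag-d] for each lag d), so every index stays valid.
    return [[series[t + max_lag - d] for d in delays] for t in range(n)]

x = [math.sin(0.1 * k) for k in range(200)]
emb = nonuniform_embed(x, [0, 3, 7])   # 3-D embedding with lags 0, 3, 7
print(len(emb), len(emb[0]))  # 193 3
```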
A Novel Optimization Tool for Automated Design of Integrated Circuits based on MOGSA
Maryam Dehbashian
2011-11-01
In this paper a novel optimization method based on the Multi-Objective Gravitational Search Algorithm (MOGSA) is presented for the automated design of analog integrated circuits. The recommended method first simulates a selected circuit using a simulator; the simulated results are then optimized by the MOGSA algorithm, and this process continues until the optimum result is met. The main programs of the proposed method have been implemented in MATLAB, while the analog circuits are simulated with HSPICE. To show the capability of this method, its proficiency is examined in the optimization of analog integrated circuit design. An analog circuit sizing scheme, the optimum automated design of a temperature-independent differential op-amp using a Widlar current source, is illustrated as a case study. The computer results obtained from implementing this method indicate that the design specifications are closely met. Moreover, according to various design criteria, this tool can give designers more options by proposing a varied set of answers from which to choose a desirable scheme. MOGSA, the proposed algorithm, introduces a novel method for multi-objective optimization on the basis of the Gravitational Search Algorithm, in which the concept of Pareto optimality is used to determine non-dominated positions, along with an external repository to keep these positions. To ensure the accuracy of MOGSA's performance, the algorithm is validated using several standard test functions from the specialized literature. Final results indicate that the method is highly competitive with current multi-objective optimization algorithms.
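The Pareto machinery the abstract relies on (keeping non-dominated positions in an external repository) can be sketched independently of the gravitational search itself. This is a generic minimization example, not the MOGSA code:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Filter objective vectors down to the Pareto front, mimicking the
    external repository of non-dominated positions that MOGSA maintains."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1, 5), (2, 2), (3, 1), (4, 4)]
print(non_dominated(pts))  # [(1, 5), (2, 2), (3, 1)]
```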
Process Parameters Optimization in Single Point Incremental Forming
Gulati, Vishal; Aryal, Ashmin; Katyal, Puneet; Goswami, Amitesh
2016-04-01
This work aims to optimize the formability and surface roughness of parts formed by the single-point incremental forming process for an Aluminium-6063 alloy. The tests are based on Taguchi's L18 orthogonal array, selected on the basis of degrees of freedom, and have been carried out on a vertical machining center (DMC70V) using CAD/CAM software (SolidWorks V5/MasterCAM). Two levels of tool radius and three levels of sheet thickness, step size, tool rotational speed, feed rate and lubrication have been considered as the input process parameters. Wall angle and surface roughness have been considered as the process responses. The influential process parameters for formability and surface roughness have been identified with the help of statistical tools (response table, main effect plot and ANOVA). The parameter with the utmost influence on both formability and surface roughness is lubrication. In the case of formability, lubrication followed by tool rotational speed, feed rate, sheet thickness, step size and tool radius have influence in descending order, whereas for surface roughness, lubrication followed by feed rate, step size, tool radius, sheet thickness and tool rotational speed have influence in descending order. The predicted optimal values for the wall angle and surface roughness are found to be 88.29° and 1.03225 µm. The confirmation experiments were conducted thrice, and the wall angle and surface roughness were found to be 85.76° and 1.15 µm, respectively.
Optimization of surface roughness parameters in dry turning
R.A. Mahdavinejad
2009-12-01
Purpose: The precision of machine tools on one hand and the input setup parameters on the other strongly influence the main machining outputs, such as stock removal, tool wear ratio and surface roughness. Design/methodology/approach: Many input parameters affect the variations of these output parameters. In CNC machines, optimizing the machining process in order to predict surface roughness is very important. Findings: From this point of view, an adaptive neuro-fuzzy inference system is used to predict the roughness of dry-machined surfaces in the turning process. Research limitations/implications: There are some limitations in the properties of the various kinds of lubricants; the influence of some undesirable factors in the experiments is another limitation of this research. Practical implications: Samples are machined with various input parameters, and the experimental data are then used to create fuzzy rules and process them via neural networks, so that the prediction model is first built from experimental data and its results are then compared with the measured surface roughness. Originality/value: When the cutting speed is increased, the machined surface quality improves; the quality of the machined surface decreases with the feed rate and the depth of cut. The error of the model is much less than the error of using ordinary equations. The comparison results show that this model is more effective than theoretical calculation methods.
Optimizing experimental parameters for tracking of diffusing particles
Vestergaard, Christian L.
2016-08-01
We describe how a single-particle tracking experiment should be designed in order for its recorded trajectories to contain the most information about a tracked particle's diffusion coefficient. The precision of estimators for the diffusion coefficient is affected by motion blur, limited photon statistics, and the length of recorded time series. We demonstrate for a particle undergoing free diffusion that precision is negligibly affected by motion blur in typical experiments, while optimizing photon counts and the number of recorded frames is the key to precision. Building on these results, we describe for a wide range of experimental scenarios how to choose experimental parameters in order to optimize the precision. Generally, one should choose quantity over quality: experiments should be designed to maximize the number of frames recorded in a time series, even if this means lower information content in individual frames.
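As a sketch of the quantity whose precision the paper optimizes, here is a deliberately naive diffusion-coefficient estimate from recorded displacements; it ignores the motion-blur and photon-noise corrections that the paper treats rigorously, and the simulated trajectory is illustrative:

```python
import random

def estimate_D(x, dt):
    """Naive 1-D diffusion-coefficient estimator from single-frame
    displacements: D ≈ <Δx²> / (2 Δt).  Localization noise and motion
    blur, which bias this estimator in real experiments, are ignored."""
    dx2 = [(b - a) ** 2 for a, b in zip(x, x[1:])]
    return sum(dx2) / (2 * dt * len(dx2))

# Simulate free diffusion with D = 1.0, dt = 0.01: steps ~ N(0, sqrt(2 D dt)).
random.seed(0)
dt, D = 0.01, 1.0
x = [0.0]
for _ in range(20000):
    x.append(x[-1] + random.gauss(0.0, (2 * D * dt) ** 0.5))
print(estimate_D(x, dt))  # ≈ 1.0
```

Consistent with the paper's "quantity over quality" message, the precision here is limited mainly by the number of recorded frames.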
Input Parameters Optimization in Swarm DS-CDMA Multiuser Detectors
Abrão, Taufik; Angelico, Bruno A; Jeszensky, Paul Jean E
2010-01-01
In this paper, the uplink direct-sequence code division multiple access (DS-CDMA) multiuser detection (MuD) problem is studied from a heuristic perspective, namely particle swarm optimization (PSO). Considering different system improvements for future technologies, such as high-order modulation and diversity exploitation, a complete parameter optimization procedure for the PSO applied to the MuD problem is provided, which represents the major contribution of this paper. Furthermore, the performance of the PSO-MuD is briefly analyzed via Monte Carlo simulations. Simulation results show that, after convergence, the performance reached by the PSO-MuD is much better than that of the conventional detector, and somewhat close to the single-user bound (SuB). A Rayleigh flat channel is initially considered, but the results are further extended to diversity (time and spatial) channels.
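A generic global-best PSO loop (not the PSO-MuD detector itself) makes concrete the input parameters such a procedure tunes: inertia weight w, acceleration coefficients c1 and c2, swarm size, and iteration budget. The sphere objective below is only a placeholder for the detection cost function:

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rnd = random.uniform
    X = [[rnd(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                P[i], pbest[i] = X[i][:], fx
                if fx < gbest:
                    G, gbest = X[i][:], fx
    return G, gbest

random.seed(1)
best, val = pso(lambda x: sum(v * v for v in x), dim=3)
print(val)  # close to 0 for the sphere function
```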
Total energy control system autopilot design with constrained parameter optimization
Ly, Uy-Loi; Voth, Christopher
1990-01-01
A description is given of the application of a multivariable control design method (SANDY) based on constrained parameter optimization to the design of a multiloop aircraft flight control system. Specifically, the design method is applied to the direct synthesis of a multiloop AFCS inner-loop feedback control system based on total energy control system (TECS) principles. The design procedure offers a structured approach for the determination of a set of stabilizing controller design gains that meet design specifications in closed-loop stability, command tracking performance, disturbance rejection, and limits on control activities. The approach can be extended to a broader class of multiloop flight control systems. Direct tradeoffs between many real design goals are rendered systematic by proper formulation of the design objectives and constraints. Satisfactory designs are usually obtained in few iterations. Performance characteristics of the optimized TECS design have been improved, particularly in the areas of closed-loop damping and control activity in the presence of turbulence.
Automation refers to the use of a device or a system to perform a function previously performed by a human operator. It is introduced to reduce human errors and to enhance performance in various industrial fields, including the nuclear industry. However, these positive effects are not always achieved in complex systems such as nuclear power plants (NPPs). An excessive introduction of automation can generate new roles for human operators and change activities in unexpected ways. As more automation systems are accepted, the ability of human operators to detect automation failures and resume manual control is diminished. This disadvantage of automation is called the Out-of-the-Loop (OOTL) problem. The positive and negative effects of automation should be considered together to determine the appropriate level of automation. Thus, in this paper, we suggest an estimation method that considers the positive and negative effects of automation at the same time. Because the conventional automation-rate concept is limited in that it does not consider the effects of automation on human operators, a new estimation method for the automation rate is suggested to overcome this problem.
Highlights: • We propose an estimation method of the automation rate by taking the advantages of automation as the estimation measures. • We conduct the experiments to examine the validity of the suggested method. • The higher the cognitive automation rate is, the greater the decreased rate of the working time will be. • The usefulness of the suggested estimation method is proved by statistical analyses. - Abstract: Since automation was introduced in various industrial fields, the concept of the automation rate has been used to indicate the inclusion proportion of automation among all work processes or facilities. Expressions of the inclusion proportion of automation are predictable, as is the ability to express the degree of the enhancement of human performance. However, many researchers have found that a high automation rate does not guarantee high performance. Therefore, to reflect the effects of automation on human performance, this paper proposes a new estimation method of the automation rate that considers the effects of automation on human operators in nuclear power plants (NPPs). Automation in NPPs can be divided into two types: system automation and cognitive automation. Some general descriptions and characteristics of each type of automation are provided, and the advantages of automation are investigated. The advantages of each type of automation are used as measures of the estimation method of the automation rate. One advantage was found to be a reduction in the number of tasks, and another was a reduction in human cognitive task loads. The system and the cognitive automation rate were proposed as quantitative measures by taking advantage of the aforementioned benefits. To quantify the required human cognitive task loads and thus suggest the cognitive automation rate, Conant’s information-theory-based model was applied. The validity of the suggested method, especially as regards the cognitive automation rate, was proven by conducting
Standardless quantification by parameter optimization in electron probe microanalysis
Limandri, Silvina P. (IFEG, CONICET and Universidad Nacional de Cordoba, Argentina); Bonetto, Rita D. (CINDECA, CONICET and Universidad Nacional de La Plata, Argentina); Josa, Victor Galvan; Carreras, Alejo C. (IFEG, CONICET and Universidad Nacional de Cordoba, Argentina); Trincavelli, Jorge C. (IFEG, CONICET and Universidad Nacional de Cordoba, Argentina), E-mail: trincavelli@famaf.unc.edu.ar
2012-11-15
A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 in 66% of the cases for POEMA, GENESIS and DTSA, respectively. Highlights: • A method for standardless quantification in EPMA is presented. • It gives better results than the commercial software GENESIS Spectrum. • It gives better results than the software DTSA. • It allows the determination of the conductive coating thickness. • It gives an estimation of the concentration uncertainties.
Optimization of Parameters for Melt Crystallization of p-Cresol
丛山; 李鑫钢; 邬俊; 许长春
2012-01-01
Laboratory-scale experiments were carried out to evaluate the influences of operational parameters on the melt crystallization efficiency for p-cresol purification. The optimal crystallization conditions were determined: dynamic pulsed aeration at 90 L·h⁻¹ and a cooling rate of 0.6-0.8 °C·min⁻¹, followed by sweating at 0.2-0.3 °C·min⁻¹ for 40 min. Results also demonstrate that the melt crystallization efficiency is sensitive to feed concentration, which highlights this technology for the separation and purification of high-purity products.
Rapoport, Daniel H; Becker, Tim; Madany Mamlouk, Amir; Schicktanz, Simone; Kruse, Charli
2011-01-01
Automated microscopy is currently the only method to non-invasively and label-free observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automatize this process resulted in ever improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they missed validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea to automatically inspect the tracking results and accept only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. The resulting set of complete paths can be used to automatically extract important biological parameters with high
Parameter optimization in AQM controller design to support TCP traffic
Yang, Wei; Yang, Oliver W.
2004-09-01
The TCP congestion control mechanism has been widely investigated and deployed on the Internet to prevent congestion collapse. We would like to employ modern control theory to specify quantitatively the control performance of the TCP communication system. In this paper, we make use of a commonly used performance index called the Integral of the Square of the Error (ISE), a quantitative measure to gauge the performance of a control system. By applying the ISE performance index to the Proportional-plus-Integral controller based on Pole Placement (PI_PP controller) for active queue management (AQM) in IP routers, we can further tune the controller parameters to achieve optimum control that minimizes control error. We have analyzed the dynamic model of TCP congestion control under this ISE, and used the OPNET simulation tool to verify the derived optimized controller parameters.
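The ISE index itself is simple to state; a discrete-time approximation can be sketched as follows. The decaying error signal is illustrative only, not taken from the paper's TCP model:

```python
import math

def ise(error, dt):
    """Integral of the Square of the Error, ISE = ∫ e(t)² dt,
    approximated here by a simple Riemann sum over sampled error values."""
    return sum(e * e for e in error) * dt

# Illustrative signal: e(t) = exp(-t) sampled on [0, 20).  The exact ISE on
# [0, ∞) is 1/2, and the Riemann sum lands close to it.
dt = 0.001
err = [math.exp(-k * dt) for k in range(20000)]
print(ise(err, dt))  # ≈ 0.5
```

Tuning a controller against ISE then means choosing gains that minimize this scalar over the closed-loop error response.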
Optimal VLF Parameters for Pitch Angle Scattering of Trapped Electrons
Albert, J. M.; Inan, U. S.
2001-12-01
VLF waves are known to determine the lifetimes of energetic radiation belt electrons in the inner radiation belt and slot regions. Artificial injection of such waves from ground- or space-based transmitters may thus be used to affect the trapped electron population. In this paper, we seek to determine the optimal parameters (frequency and wave normal angle) of a quasi-monochromatic VLF wave using bounce-averaged quasi-linear theory. We consider the cumulative effects of all harmonic resonances and determine the diffusion rates of particles with selected energies on particular L-shells. We also compare the effects of the VLF wave to diffusion driven by other whistler-mode waves (plasmaspheric hiss, lightning, and VLF transmitters). With appropriate choice of the wave parameters, it may be possible to substantially reduce the lifetime of selected classes of particles.
Parameter optimization in molecular dynamics simulations using a genetic algorithm
In this work, we introduce a genetic algorithm for the parameterization of the reactive force field developed by Kieffer. This potential includes directional covalent bonds and dispersion terms. Important features of this force field for simulating systems that undergo significant structural reorganization are (i) the ability to account for the redistribution of electron density upon ionization, formation, or breaking of bonds, through a charge transfer term, and (ii) the fact that the angular constraints dynamically adjust when a change in the coordination number of an atom occurs. In this paper, we present the implementation of the genetic algorithm into the existing code, as well as the algorithm's efficiency and preliminary results on Si-Si force field optimization. The parameters obtained by this method will be compared to existing parameter sets obtained by a trial-and-error process.
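A minimal real-coded genetic algorithm conveys the structure of such a parameterization loop. This is a generic sketch under a toy least-squares objective, not the potential-specific code described in the abstract:

```python
import random

def ga_minimize(f, dim, pop=40, gens=100, mut=0.1, lo=-2.0, hi=2.0):
    """Minimal real-coded GA: keep the best half (elitism), breed children
    by uniform crossover of elites, and apply occasional Gaussian mutation."""
    P = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(P, key=f)[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
            if random.random() < mut:          # occasional mutation
                i = random.randrange(dim)
                child[i] += random.gauss(0.0, 0.1)
            children.append(child)
        P = elite + children
    return min(P, key=f)

# Toy objective: recover target parameters (0.5, -1.0) by least squares.
random.seed(2)
best = ga_minimize(lambda p: (p[0] - 0.5) ** 2 + (p[1] + 1.0) ** 2, dim=2)
print(best)  # close to [0.5, -1.0]
```

In the force-field setting, f would score a candidate parameter set by the mismatch between simulated and reference properties.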
Rhythm Suren Wadhwa
2011-11-01
The paper presents a comparison and application of metaheuristic population-based optimization algorithms in a flexible manufacturing automation scenario in a metalcasting foundry. It presents a novel application and comparison of the Bee Colony Algorithm (BCA) with variations of Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) for the object recognition problem in a robotic material handling system. To enable robust pick-and-place handling of metal-cast parts by a six-axis industrial robot manipulator, the correct orientation of the parts must be input to the manipulator via the digital image captured by the vision system. This information is then used to orient the robot gripper to grip the part from a moving conveyor belt. The objective is to find the reference templates on the manufactured parts in the target landscape picture, which may contain noise. The normalized cross-correlation (NCC) function is used as the objective function in the optimization procedure. The ultimate goal is to test improved algorithms that could prove useful in practical manufacturing automation scenarios.
Ruan, Cong; Sun, Xiao-Min; Song, Yi-Xu
In this paper, we propose a method to optimize etching yield parameters. By defining a fitness function between the actual etching profile and the simulated profile, the problem of solving for the etching yield parameters is transformed into an optimization problem. The problem is nonlinear and high-dimensional, and each simulation is computationally expensive; solving it requires searching for a better solution in a multidimensional space. A hybrid algorithm combining ordinal optimization and tabu search is introduced to solve this complex problem, ensuring a good-enough solution in an acceptable time. The experimental results illustrate that the simulated profile obtained by this method is very similar to the actual etching profile in surface topography, and prove that the proposed method is feasible and valid.
Optimizing the Pulsed Current Gas Tungsten Arc Welding Parameters
M. Balasubramanian; V. Jayabalan; V. Balasubramanian
2006-01-01
The selection of process parameters in the gas tungsten arc (GTA) welding of a titanium alloy is presented for obtaining optimum grain size and hardness. The titanium alloy (Ti-6Al-4V) is one of the most important non-ferrous metals, offering great potential for application in the aerospace, biomedical and chemical industries because of its low density (4.5 g/cm3), excellent corrosion resistance, high strength, attractive fracture behaviour and high melting point (1678 °C). The preferred welding process for this titanium alloy is frequently GTA welding, due to its comparatively easy applicability and better economy. In the case of single-pass GTA welding of thinner sections of this alloy, pulsed current has been found beneficial due to its advantages over the conventional continuous-current process. Many considerations come into the picture, and one needs to carefully balance the various pulsed-current parameters to reach an optimum combination. A four-factor, five-level, central composite rotatable design matrix was used to optimize the required number of experimental conditions. Mathematical models were developed to predict the fusion-zone grain size using analysis of variance (ANOVA) and regression analysis. The developed models were optimized using the traditional Hooke and Jeeves algorithm. Experimental results are provided to illustrate the proposed approach.
Biohydrogen Production from Simple Carbohydrates with Optimization of Operating Parameters.
Muri, Petra; Osojnik-Črnivec, Ilja Gasan; Djinovič, Petar; Pintar, Albin
2016-01-01
Hydrogen could be an alternative energy carrier in the future, as well as a source for chemical and fuel synthesis, owing to its high energy content, environmentally friendly technology and zero carbon emissions. In particular, the conversion of organic substrates to hydrogen via the dark fermentation process is of great interest. The aim of this study was fermentative hydrogen production by an anaerobic mixed culture using different carbon sources (mono- and disaccharides), and further optimization by varying a number of operating parameters (pH value, temperature, organic loading, mixing intensity). Among all tested mono- and disaccharides, glucose was shown to be the preferred carbon source, exhibiting a hydrogen yield of 1.44 mol H2/mol glucose. Further evaluation of selected operating parameters showed that the highest hydrogen yield (1.55 mol H2/mol glucose) was obtained at an initial pH value of 6.4, T=37 °C and an organic loading of 5 g/L. The obtained results demonstrate that the lower hydrogen yield under all other conditions was associated with redirection of metabolic pathways from butyric and acetic acid production (accompanied by H2 production) to lactic acid production (where simultaneous H2 production is not mandatory). These results therefore represent an important foundation for the optimization and industrial-scale production of hydrogen from organic substrates. PMID:26970800
Process parameter optimization for fly ash brick by Taguchi method
Prabir Kumar Chaulia
2008-06-01
This paper presents the results of an experimental investigation carried out to optimize the mix proportions of fly ash brick by the Taguchi method of parameter design. The experiments were designed using an L9 orthogonal array with four factors at three levels each. A small quantity of cement was mixed in as binding material. Both the cement and the fly ash used act as binding material, and the water/binder ratio is considered one of the control factors. The effects of water/binder ratio, fly ash, coarse sand, and stone dust on the performance characteristic are analyzed using signal-to-noise ratios and mean response data. According to the results, the water/binder ratio and stone dust play the most significant role in the compressive strength of the brick. Furthermore, the estimated optimum values of the process parameters correspond to a water/binder ratio of 0.4, fly ash of 39%, coarse sand of 24%, and stone dust of 30%. The mean optimal strength is predicted as 166.22 kg·cm⁻² with a tolerance of ±10.97 kg·cm⁻². The confirmatory experimental result obtained for the optimum conditions is 160.17 kg·cm⁻².
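The Taguchi signal-to-noise ratio underlying this analysis can be sketched for a larger-the-better response such as compressive strength. The repeat values below are illustrative, not the paper's data:

```python
import math

def sn_larger_is_better(ys):
    """Taguchi signal-to-noise ratio for a larger-the-better response
    (e.g. compressive strength): S/N = -10 * log10( mean(1/y^2) ).
    The factor levels maximizing mean S/N across trials are taken as optimal."""
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in ys) / len(ys))

# Three repeats of one hypothetical L9 trial, in kg·cm⁻²:
print(sn_larger_is_better([158.0, 162.0, 166.0]))  # ≈ 44.2 dB
```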
Automated scheme to determine design parameters for a recoverable reentry vehicle
The NRV (Nosetip Recovery Vehicle) program at Sandia Laboratories is designed to recover the nose section from a sphere cone reentry vehicle after it has flown a near ICBM reentry trajectory. Both mass jettison and parachutes are used to reduce the velocity of the RV near the end of the trajectory to a sufficiently low level that the vehicle may land intact. The design problem of determining mass jettison time and parachute deployment time in order to ensure that the vehicle does land intact is considered. The problem is formulated as a min-max optimization problem where the design parameters are to be selected to minimize the maximum possible deviation in the design criteria due to uncertainties in the system. The results of the study indicate that the optimal choice of the design parameters ensures that the maximum deviation in the design criteria is within acceptable bounds. This analytically ensures the feasibility of recovery for NRV
Automated Software Testing Using a Metaheuristic Technique Based on Ant Colony Optimization
Srivastava, Praveen Ranjan
2011-01-01
Software testing is an important and valuable part of the software development life cycle. Because of time, cost and other circumstances, exhaustive testing is not feasible, which is why there is a need to automate the software testing process. Testing effectiveness can be achieved by State Transition Testing (STT), which is commonly used in real-time, embedded and web-based software systems. The aim of the current paper is to present an algorithm that applies an ant colony optimization technique to generate optimal and minimal test sequences from the behavior specification of software. The presented approach generates test sequences so as to obtain complete software coverage. The paper also compares two metaheuristic techniques (Genetic Algorithm and Ant Colony Optimization) for transition-based testing.
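A minimal sketch of the general idea, not the paper's algorithm: ants walk a hypothetical state-transition graph, pheromone accumulates on transitions covered by short walks, and the shortest fully covering test sequence found is kept. All states and transitions below are made up.

```python
import random

# Hypothetical state-transition graph of a small system under test.
GRAPH = {"S0": ["S1", "S2"], "S1": ["S2"], "S2": ["S0", "S1"]}
TRANSITIONS = [(u, v) for u, vs in GRAPH.items() for v in vs]

def ant_walk(pher, rng, start="S0", max_steps=20):
    """One ant builds a test sequence, choosing each next transition with
    probability proportional to its pheromone level."""
    seq, covered, state = [start], set(), start
    for _ in range(max_steps):
        nxt = rng.choices(GRAPH[state],
                          weights=[pher[(state, v)] for v in GRAPH[state]])[0]
        covered.add((state, nxt))
        seq.append(nxt)
        state = nxt
        if covered == set(TRANSITIONS):
            break  # full transition coverage reached
    return seq, covered

def aco_test_sequence(iters=200, evaporation=0.1, seed=1):
    rng = random.Random(seed)
    pher = {t: 1.0 for t in TRANSITIONS}
    best = None
    for _ in range(iters):
        seq, covered = ant_walk(pher, rng)
        if covered == set(TRANSITIONS) and (best is None or len(seq) < len(best)):
            best = seq
        for t in pher:                 # evaporation
            pher[t] *= 1.0 - evaporation
        for t in covered:              # deposit: shorter sequences deposit more
            pher[t] += 1.0 / len(seq)
    return best

print(aco_test_sequence())
```

The returned sequence exercises every transition of the model, which is the coverage criterion STT aims at.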
Acoustical characterization and parameter optimization of polymeric noise control materials
Homsi, Emile N.
2003-10-01
The sound transmission loss (STL) characteristics of polymer-based materials are considered. Analytical models that predict, characterize and optimize the STL of polymeric materials, with respect to the physical parameters that affect performance, are developed for a single-layer panel configuration and adapted for layered panel construction with a homogeneous core. An optimum set of material parameters is selected and translated into practical applications for validation. Sound-attenuating thermoplastic materials designed to be used as barrier systems in the automotive and consumer industries have certain acoustical characteristics that vary as a function of the stiffness and density of the selected material. The validity and applicability of existing theory are explored, and since STL is influenced by factors such as the surface mass density of the panel's material, a method is modified to improve STL performance and optimize load-bearing attributes. An experimentally derived function is applied to the model for better correlation. In-phase and out-of-phase motion of the top and bottom layers is considered. It was found that a layered construction of the co-injection type exhibits fused planes at the interface and moves in phase. The model for the single-layer case is adapted to the layered case, where the panel behaves as a single layer. The primary physical parameters that affect STL are identified and manipulated. The theoretical analysis is linked to the resin's matrix attributes. A high-STL material with representative characteristics is evaluated against standard resins. It was found that high STL could be achieved by altering the material's matrix and by integrating design solutions in the low-frequency range. A numerical approach is suggested for STL evaluation of simple and complex geometries. In practice, validation on actual vehicle systems proved the adequacy of the acoustical characterization process.
Automated Tuning for Parameter Identification in Multi-Scale Coronary Simulations
Tran, Justin; Schiavazzi, Daniele; Ramachandra, Abhay; Kahn, Andrew; Marsden, Alison
2015-11-01
Computational simulations of coronary flow can provide non-invasive information on hemodynamics that can aid in disease research. In this study, patient-specific geometries are constructed and combined with finite element flow simulations using the open source software SimVascular. Lumped parameter networks (LPN), consisting of circuit representations of hemodynamic behavior, can be used as coupled boundary conditions for the flow solver. The parameters of the LPN are tuned so the outputs match a patient's clinical data. However, the parameters are usually manually tuned, which is time consuming and does not account for uncertainty in the measurements. We thus propose a Bayesian approach to parameter tuning that provides optimal parameter statistics through sampling from their posterior distribution and is particularly well suited for models characterized by a large number of parameters and scarce data. We also show that analysis of the local and global identifiability play an important role for dimensionality reduction in the estimation. We present the results of applying the proposed approach to a cohort of patients, and demonstrate the ability to match high priority targets. After identifying the LPN parameters for each patient, we demonstrate their use in 3D simulations.
Parameter Estimation of Induction Motors Using Water Cycle Optimization
M. Yazdani-Asrami
2013-12-01
This paper presents the application of the recently introduced water cycle algorithm (WCA) to optimize the parameters of exact and approximate induction motor models from nameplate data. Considering that induction motors are widely used in industrial applications, these parameters have a significant effect on the accuracy and efficiency of the motors and, ultimately, on overall system performance. Therefore, it is essential to develop algorithms for parameter estimation of the induction motor. The fundamental concepts and ideas underlying the proposed method are inspired by nature, based on observation of the water cycle and how rivers and streams flow to the sea in the real world. The objective function is defined as the minimization of the real values of the relative error between the measured and estimated torques of the machine at different slip points. The proposed WCA approach has been applied to two different sample motors. Results of the proposed method have been compared with other previously applied metaheuristic methods on the problem, which shows the feasibility and fast convergence of the proposed approach.
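A heavily simplified sketch of the approach (not the paper's implementation): synthetic "measured" torques are generated from a toy two-parameter torque curve, and a stripped-down water-cycle-style search, with streams repeatedly flowing toward the best solution (the "sea") and evaporation/raining restarts, minimizes the squared torque error over the slip points.

```python
import random

def model_torque(r, x, s):
    # Simplified steady-state torque curve of an induction machine
    # (constant factors dropped): T(s) proportional to (r/s) / ((r/s)^2 + x^2).
    return (r / s) / ((r / s) ** 2 + x ** 2)

SLIPS = [0.02, 0.05, 0.2, 1.0]
MEASURED = [model_torque(0.5, 1.2, s) for s in SLIPS]  # synthetic "measurements"

def torque_error(p):
    r, x = p
    return sum((model_torque(r, x, s) - m) ** 2 for s, m in zip(SLIPS, MEASURED))

def wca(obj, n=30, iters=200, d_max=1e-3, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(0.1, 2.0), rng.uniform(0.1, 2.0)] for _ in range(n)]
    pop.sort(key=obj)
    for _ in range(iters):
        sea = pop[0]                       # best solution so far
        for i in range(1, n):              # streams flow toward the sea
            for j in range(2):
                pop[i][j] += rng.uniform(0.0, 2.0) * (sea[j] - pop[i][j])
            # evaporation + raining: restart streams that reach the sea
            if sum((a - b) ** 2 for a, b in zip(pop[i], sea)) < d_max ** 2:
                pop[i] = [rng.uniform(0.1, 2.0), rng.uniform(0.1, 2.0)]
        pop.sort(key=obj)
    return pop[0]

best = wca(torque_error)
print(best, torque_error(best))
```

The real WCA also maintains intermediate "rivers" between streams and sea; this sketch keeps only the flow-toward-best and raining mechanisms that give the method its global search behavior.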
GAUFRE: a tool for an automated determination of atmospheric parameters from spectroscopy
Valentini, Marica; Miglio, Andrea; Fossati, Luca; Munari, Ulisse
2013-01-01
We present an automated tool for measuring atmospheric parameters (T_eff, log(g), [Fe/H]) for F-G-K dwarf and giant stars. The tool, called GAUFRE, is written in C++ and composed of several routines: GAUFRE-RV measures radial velocity from spectra via cross-correlation against a synthetic template, GAUFRE-EW measures atmospheric parameters through the classic line-by-line technique and GAUFRE-CHI2 performs a chi^2 fitting to a library of synthetic spectra. A set of F-G-K stars extensively studied in the literature were used as a benchmark for the program: their high signal-to-noise and high resolution spectra were analysed by using GAUFRE and results were compared with those present in literature. The tool is also implemented in order to perform the spectral analysis after fixing the surface gravity (log(g)) to the accurate value provided by asteroseismology. A set of CoRoT stars, belonging to LRc01 and LRa01 fields was used for first testing the performances and the behaviour of the program when using the se...
Optimization-based particle filter for state and parameter estimation
Li Fu; Qi Fei; Shi Guangming; Zhang Li
2009-01-01
In recent years, the theory of the particle filter has been developed and widely used for state and parameter estimation in nonlinear/non-Gaussian systems. Choosing a good importance density is a critical issue in particle filter design. In order to improve the approximation of the posterior distribution, this paper provides an optimization-based algorithm (the steepest descent method) to generate the proposal distribution and then sample particles from that distribution. The algorithm is applied to a 1-D case, and the simulation results show that the proposed particle filter performs better than the extended Kalman filter (EKF), the standard particle filter (PF), the extended Kalman particle filter (PF-EKF) and the unscented particle filter (UPF), both in efficiency and in estimation precision.
Optimal z-axis scanning parameters for gynecologic cytology specimens
Amber D Donnelly
2013-01-01
Background: The use of virtual microscopy (VM) in clinical cytology has been limited by the inability to focus through three-dimensional (3D) cell clusters with a single focal plane (2D images). Limited information exists regarding the optimal scanning parameters for 3D scanning. Aims: The purpose of this study was to determine the optimal number of focal plane levels and the optimal scanning interval to digitize gynecological (GYN) specimens prepared on SurePath™ glass slides while maintaining a manageable file size. Subjects and Methods: The iScanCoreo Au scanner (Ventana, AZ, USA) was used to digitize 192 SurePath™ glass slides at three focal plane levels at a 1 μm interval. The digitized virtual images (VI) were annotated using BioImagene's Image Viewer. Five participants interpreted the VI and recorded the focal plane level at which they felt confident, and later interpreted the corresponding glass slide specimens using light microscopy (LM). The participants completed a survey about their experiences. Inter-rater agreement and concordance between the VI and the glass slide specimens were evaluated. Results: This study determined an overall high intra-rater diagnostic concordance between glass and VI (89-97%); however, the inter-rater agreement for all cases was higher for LM (94%) compared with VM (82%). Survey results indicate participants found low-grade dysplasia and koilocytes easy to diagnose using three focal plane levels, the image enhancement tool useful, and focusing through the cells helpful for interpretation; however, participants found VI with hyperchromatic crowded groups challenging to interpret. Participants reported that they prefer LM over VM. This study supports using three focal plane levels and a 1 μm interval to expand the use of VM in GYN cytology. Conclusion: Future improvements in technology and appropriate training should make this format a more preferable and practical option in clinical cytology.
Optimization of system parameters for a complete multispectral polarimeter
We optimize a general class of complete multispectral polarimeters with respect to signal-to-noise ratio, stability against alignment errors, and the minimization of errors with respect to a given set of polarization states. The class of polarimeters dealt with consists of at least four polarization optics, each with a multispectral detector. A polarization optic is made of an azimuthally oriented wave plate and a polarizing filter. A general, but not unique, analytic solution that optimizes the signal-to-noise ratio is introduced for a polarimeter that incorporates four simultaneous measurements with four independent optics. The optics consist of four sufficient wave plates, where at least one is a quarter-wave plate. The solution is stable with respect to the retardance of the quarter-wave plate; therefore, it can be applied to real-world cases where the retardance deviates from λ/4. The solution is a set of seven rotational parameters that depends on the given retardances of the wave plates. It can be applied to a broad range of real-world cases. A numerical method for the optimization of arbitrary polarimeters of the type discussed is also presented and applied to two cases. First, the class of polarimeters that was dealt with analytically is further optimized with respect to stability and error performance for linearly polarized states. Then a multispectral case for a polarimeter consisting of four optics with real achromatic wave plates is presented. This case was used as the theoretical background for the development of the Airborne Multi-Spectral Sunphoto- and Polarimeter (AMSSP), an instrument for the German research aircraft HALO.
Parameter optimization in differential geometry based solvation models.
Wang, Bao; Wei, G W
2015-10-01
Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty in parametrization, which must ensure the stability of the solution of strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules.
Robust integrated autopilot/autothrottle design using constrained parameter optimization
Ly, Uy-Loi; Voth, Christopher; Sanjay, Swamy
1990-01-01
A multivariable control design method based on constrained parameter optimization was applied to the design of a multiloop aircraft flight control system. Specifically, the design method is applied to the following: (1) direct synthesis of a multivariable 'inner-loop' feedback control system based on total energy control principles; (2) synthesis of speed/altitude-hold designs as 'outer-loop' feedback/feedforward control systems around the above inner loop; and (3) direct synthesis of a combined 'inner-loop' and 'outer-loop' multivariable control system. The design procedure offers a direct and structured approach for the determination of a set of controller gains that meet design specifications in closed-loop stability, command tracking performance, disturbance rejection, and limits on control activities. The presented approach may be applied to a broader class of multiloop flight control systems. Direct tradeoffs between many real design goals are rendered systematic by this method following careful problem formulation of the design objectives and constraints. Performance characteristics of the optimization design were improved over the current autopilot design on the B737-100 Transport Research Vehicle (TSRV) at the landing approach and cruise flight conditions; particularly in the areas of closed-loop damping, command responses, and control activity in the presence of turbulence.
High Temperature Epoxy Foam: Optimization of Process Parameters
Samira El Gazzani
2016-06-01
For many years, reduction of fuel consumption has been a major aim, in terms of both cost and environmental concerns. One option is to reduce the weight of fuel consumers. For this purpose, the use of lightweight materials based on rigid foams is a relevant choice. This paper deals with a new high-temperature epoxy expanded material as a substitute for phenolic resin, which is classified as potentially mutagenic under the European REACH regulation. The optimization of a thermoset foam depends on two major parameters: the reticulation process and the expansion of the foaming agent. Controlling these two phenomena can lead to a fully expanded and cured material. The rheological behavior of the epoxy resin is studied and the gel time is determined at various temperatures. The expansion of the foaming agent is investigated by thermomechanical analysis. Results are correlated and compared with samples foamed under the same temperature conditions. The ideal foaming/gelation temperature is then determined. The second part of this research concerns the optimization of the curing cycle of a high-temperature trifunctional epoxy resin. A two-step curing cycle was defined by considering the influence of different curing schedules on the glass transition temperature of the material. The final foamed material has a glass transition temperature of 270 °C.
Optimizing resistance spot welding parameters for vibration damping steel sheets
Oberle, H. [Centre de Recherches et Developpements Metallurgiques, Sollac (France); Commaret, C.; Minier, C. [Automobiles Citroën PSA (France); Magnaud, R. [Direction des Methodes Carrosserie, Renault (France); Pradere, G. [Materials Engineering Dept., Renault (France)
1998-01-01
In order to meet the growing demand for functionality and comfort in vehicles, weight and quietness are major concerns for carmakers and materials suppliers. Noise reduction by damping vibrations can meet both aspects. Therefore, steelmakers have developed vibration damping steel sheets (VDSS), which are a three-layer composite material composed of two steel sheets sandwiching a viscoelastic resin core. Industrial use of VDSS in automobiles usually implies the product can be resistance welded. The intent of this investigation is to set up rules to optimize resistance spot welding of VDSS. Two phenomena are the focus of this research: the reduction of blistering and gas expulsion holes. Different aspects are studied, such as the effect of polymer presence and of electrode shape on welding domain and the evaluation of the influence of a welding schedule on blistering and expulsion holes. It appears that polymer presence has no effect on domain width, but does on its position. Higher frequency of expulsion holes with truncated electrodes can be explained with mechanical considerations. From the influence of short circuit voltage, current delay angle and welding schedule on the frequency of gas expulsion holes, a mechanism responsible for expulsion holes is proposed and optimal welding parameters are given.
Gao, Hao
2016-04-01
For the treatment planning during intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT), beam fluence maps can be first optimized via fluence map optimization (FMO) under the given dose prescriptions and constraints to conformally deliver the radiation dose to the targets while sparing the organs-at-risk, and then segmented into deliverable MLC apertures via leaf or arc sequencing algorithms. This work is to develop an efficient algorithm for FMO based on alternating direction method of multipliers (ADMM). Here we consider FMO with the least-square cost function and non-negative fluence constraints, and its solution algorithm is based on ADMM, which is efficient and simple-to-implement. In addition, an empirical method for optimizing the ADMM parameter is developed to improve the robustness of the ADMM algorithm. The ADMM based FMO solver was benchmarked with the quadratic programming method based on the interior-point (IP) method using the CORT dataset. The comparison results suggested the ADMM solver had a similar plan quality with slightly smaller total objective function value than IP. A simple-to-implement ADMM based FMO solver with empirical parameter optimization is proposed for IMRT or VMAT.
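The FMO subproblem described, a least-squares cost under non-negative fluence constraints, has the structure of non-negative least squares, for which the ADMM iteration is short to write down. A generic sketch (not the paper's solver; the matrix A, target b and problem sizes are made up):

```python
import numpy as np

def admm_nnls(A, b, rho=1.0, iters=200):
    """Non-negative least squares via ADMM:
        minimize 0.5 * ||Ax - b||^2   subject to  x >= 0.
    The x-update carries the quadratic term, z the non-negativity."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse
    x = z = u = np.zeros(n)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update
        z = np.maximum(0.0, x + u)                         # projection onto x >= 0
        u = u + x - z                                      # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))          # made-up stand-in for the dose matrix
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.5])
b = A @ x_true                            # consistent synthetic "prescription"
x_hat = admm_nnls(A, b)
print(np.round(x_hat, 3))
```

The penalty parameter rho plays the role of the ADMM parameter the paper tunes empirically: the iteration converges for any rho > 0, but its speed depends strongly on the choice.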
Optimized drying parameters of water hyacinths (Eichhornia crassipes L.)
Edgardo V. Casas
2012-12-01
The study investigated the optimum drying conditions of water hyacinth to contribute to the improvement of present drying processes. The effects of the independent parameters (drying temperature, airflow rate, and number of passes) on the responses were determined using Response Surface Methodology. The response parameters were (1) final moisture content, (2) moisture ratio, (3) drying rate, (4) tensile strength, and (5) browning index. A Box-Behnken experimental design was used, resulting in 15 drying runs. Statistical analysis evaluated the treatment effects. Drying temperature significantly affected the drying rate, moisture ratio, and browning index. Airflow rate had a significant effect only on the drying rate, while the number of passes significantly affected both the drying rate and browning index. The optimized conditions for drying water hyacinth were a drying temperature of 90 °C, an airflow rate of 0.044 m³/s, and five passes. The best model that characterizes the drying of water hyacinth is a rational function expressed as:
Automated system for calibration and control of the CHSPP-800 multichannel γ detector parameters
An automated system for the adjustment, calibration and control of a total-absorption Cherenkov spectrometer is described. The system comprises a mechanical platform capable of moving in two mutually perpendicular directions; movement detectors and limit switches; a power unit; and an automation unit with a remote control board. The automated system can operate either in a manual control regime, with coordinate control via a digital indicator, or under computer control according to special programs. The platform mounting accuracy is ±0.1 mm. Application of the automated system has increased the rate of counter adjustment work by a factor of 3-5.
Optimal Control and Coordination of Connected and Automated Vehicles at Urban Traffic Intersections
Zhang, Yue J. [Boston University; Malikopoulos, Andreas [ORNL; Cassandras, Christos G. [Boston University
2016-01-01
We address the problem of coordinating online a continuous flow of connected and automated vehicles (CAVs) crossing two adjacent intersections in an urban area. We present a decentralized optimal control framework whose solution yields for each vehicle the optimal acceleration/deceleration at any time in the sense of minimizing fuel consumption. The solution, when it exists, allows the vehicles to cross the intersections without the use of traffic lights, without creating congestion on the connecting road, and under the hard safety constraint of collision avoidance. The effectiveness of the proposed solution is validated through simulation considering two intersections located in downtown Boston, and it is shown that coordination of CAVs can significantly reduce both fuel consumption and travel time.
Suleimanov, Yury V; Green, William H
2015-01-01
We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps using in cooperation single- and double-ended transition-state optimization algorithms - the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several systems of combustion and atmospheric chemistry importance is investigated. The proposed algorithm allowed us to detect without any human intervention not only "known" reaction pathways, manually detected in the previous studies, but also new, previously "unknown", reaction pathways which involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the possibility of discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes.
Armand, J. P.
1972-01-01
An extension of classical methods of optimal control theory for systems described by ordinary differential equations to distributed-parameter systems described by partial differential equations is presented. An application is given involving the minimum-mass design of a simply supported shear plate with a fixed fundamental frequency of vibration. An optimal plate thickness distribution in analytical form is found. The case of a minimum-mass design of an elastic sandwich plate whose fundamental frequency of free vibration is fixed is also considered. Under the most general conditions, the optimization problem reduces to the solution of two simultaneous partial differential equations involving the optimal thickness distribution and the modal displacement. One equation is the uniform energy distribution expression which was found by Ashley and McIntosh for the optimal design of one-dimensional structures with frequency constraints, and by Prager and Taylor for various design criteria in one and two dimensions. The second equation requires dynamic equilibrium at the preassigned vibration frequency.
Krenn, Julia; Mergili, Martin
2016-04-01
r.randomwalk is a GIS-based, multi-functional conceptual tool for mass movement routing. Starting from one to many release points or release areas, mass points are routed down through the digital elevation model until a defined break criterion is reached. Break criteria are defined by the user and may consist of an angle of reach or a related parameter (empirical-statistical relationships), the drop of the flow velocity to zero (two-parameter friction model), or the exceedance of a maximum runup height. Multiple break criteria may be combined. A constrained random walk approach is applied for the routing procedure, where the slope and the perpetuation of the flow direction determine the probability of the flow moving in a certain direction. r.randomwalk is implemented as a raster module of the GRASS GIS software and, as such, is open source. It can be obtained from http://www.mergili.at/randomwalk.html. Besides other innovative functionalities, r.randomwalk provides built-in functionality for the derivation of an impact indicator index (III) map with values in the range 0-1. The III is derived from multiple model runs with different combinations of input parameters varied in a random or controlled way. It represents the fraction of model runs predicting an impact at a given pixel and is evaluated against the observed impact area through an ROC plot. The related tool r.ranger facilitates the automated generation and evaluation of many III maps from a variety of sets of parameter combinations. We employ r.randomwalk and r.ranger for parameter optimization and sensitivity analysis. Thereby we do not focus on parameter values but, accounting for the uncertainty inherent in all parameters, on parameter ranges. In this sense, we demonstrate two strategies for parameter sensitivity analysis and optimization. We avoid (i) one-at-a-time parameter testing, which would fail to account for interdependencies of the parameters, and (ii) exploring all possible
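The constrained random walk with an angle-of-reach break criterion can be sketched on a toy DEM (the grid, cell size, slope weighting and angle below are assumptions for illustration, not r.randomwalk's actual code):

```python
import math, random

# Tiny synthetic DEM (elevations in m); assumed square cells of 10 m.
DEM = [[50, 48, 46, 44],
       [48, 45, 42, 40],
       [46, 42, 38, 35],
       [44, 40, 35, 30]]
CELL = 10.0

def random_walk(start, reach_angle_deg=10.0, seed=7):
    """Route one mass point downslope. The next cell is chosen randomly
    among lower neighbours, weighted by local drop; the walk stops when
    the average slope line back to the release point falls below the
    angle of reach, or when no lower neighbour exists."""
    rng = random.Random(seed)
    r, c = start
    z0 = DEM[r][c]
    path, dist = [start], 0.0
    while True:
        cands = [((r + dr, c + dc), DEM[r][c] - DEM[r + dr][c + dc])
                 for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= r + dr < len(DEM) and 0 <= c + dc < len(DEM[0])
                 and DEM[r + dr][c + dc] < DEM[r][c]]
        if not cands:
            break  # local sink: nowhere lower to go
        (r, c) = rng.choices([p for p, _ in cands],
                             weights=[w for _, w in cands])[0]
        dist += CELL
        path.append((r, c))
        # break criterion: angle of reach (average slope from release point)
        if math.degrees(math.atan2(z0 - DEM[r][c], dist)) < reach_angle_deg:
            break
    return path

print(random_walk((0, 0)))
```

Running many such walks with varied parameters and counting, per pixel, the fraction of runs that impact it is exactly how the impact indicator index map described above is built.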
Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta
2016-06-01
With the increased trend in automation of modern manufacturing industry, the human intervention in routine, repetitive and data specific activities of manufacturing is greatly reduced. In this paper, an attempt has been made to reduce the human intervention in selection of optimal cutting tool and process parameters for metal cutting applications, using Artificial Intelligence techniques. Generally, the selection of appropriate cutting tool and parameters in metal cutting is carried out by experienced technician/cutting tool expert based on his knowledge base or extensive search from huge cutting tool database. The present proposed approach replaces the existing practice of physical search for tools from the databooks/tool catalogues with intelligent knowledge-based selection system. This system employs artificial intelligence based techniques such as artificial neural networks, fuzzy logic and genetic algorithm for decision making and optimization. This intelligence based optimal tool selection strategy is developed using Mathworks Matlab Version 7.11.0 and implemented. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail, the methodology and strategies employed for selection of appropriate cutting tool and optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life and tool cost.
Smith, Jo R; Smith, Katherine F; Brainard, Benjamin M
2014-09-01
The mean platelet component (MPC) is a proprietary algorithm of an automated laser-based hematology analyzer system that measures the refractive index of platelets. The MPC is related linearly to platelet density and is an indirect index of platelet activation status. Previous investigations of canine inflammatory conditions and models of endotoxemia demonstrated a significant decrease in the MPC, consistent with platelet activation. The purpose of this study was to evaluate the MPC and other platelet parameters in dogs with different diseases to determine if they could show differential platelet activation with different pathologies. The hypothesis was that the MPC would decrease in clinical conditions associated with systemic inflammation or platelet activation. Complete blood counts run on the analyzer from dogs with different inflammatory conditions (primary immune-mediated hemolytic anemia (IMHA) or thrombocytopenia (ITP), pituitary-dependent hyperadrenocorticism, intra-abdominal sepsis, pancreatitis, intravascular thrombus or thromboembolus, and hemangiosarcoma) were reviewed retrospectively and compared with those of control dogs presenting for orthopedic evaluation. Dogs with ITP had a decreased plateletcrit and MPC, with an increased platelet volume and number of large platelets. Dogs with IMHA had an increased plateletcrit and mass, and more numerous large platelets (P < 0.001). With the exception of the ITP group, there was no difference in MPC in the diseased groups when compared with the controls. The results of this study suggest the MPC does not change in certain canine diseases associated with systemic inflammation.
Optimal Design of Variable Stiffness Composite Structures using Lamination Parameters
IJsselmuiden, S.T.
2011-01-01
Fiber reinforced composite materials have gained widespread acceptance for a multitude of applications in the aerospace, automotive, maritime and wind-energy industries. Automated fiber placement technologies have developed rapidly over the past two decades, driven primarily by a need to reduce m
Xing Wu
2011-07-01
This paper presents a multi-objective genetic algorithm (MOGA) with Pareto optimality and elitist tactics for the control system design of an automated guided vehicle (AGV). The MOGA is used to identify the AGV driving system model and then to optimize its servo control system. In system identification, the model identified by the least squares method is adopted as an evolution tutor that selects the individuals with balanced performance in all objectives as elitists. In controller optimization, the velocity regulating capability required by AGV path tracking is employed as the decision-making preference that selects Pareto optimal solutions as elitists. According to different objectives and elitist tactics, several sub-populations are constructed and evolve concurrently by using independent reproduction, neighborhood mutation and heuristic crossover. The lossless finite precision method and the multi-objective normalized increment distance are proposed to keep the population diversity at a low computational complexity. Experiment results show that the cascaded MOGA can make the system model consistent with the AGV driving system in both amplitude and phase, and can make its servo control system satisfy the requirements on dynamic performance and steady-state accuracy in AGV path tracking.
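The selection, crossover and mutation machinery that this and the head abstract rely on can be illustrated with a deliberately simplified single-objective sketch. This is not the cascaded MOGA of the paper: the population size, the crossover/mutation rates, elitism scheme and the sphere test function are all illustrative assumptions.

```python
import random

def evolve(fitness, n_dim=4, pop_size=40, n_gen=60,
           p_cross=0.9, p_mut=0.05, seed=1):
    """Minimize `fitness` with a real-coded GA: elitism, tournament
    selection, arithmetic crossover and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(n_dim)]
           for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = sorted(pop, key=fitness)
        elite = scored[0]                        # elitist tactic: best survives
        nxt = [elite[:]]
        while len(nxt) < pop_size:
            # tournament selection of two parents (tournament size 3)
            p1 = min(rng.sample(scored, 3), key=fitness)
            p2 = min(rng.sample(scored, 3), key=fitness)
            child = p1[:]
            if rng.random() < p_cross:           # arithmetic crossover
                a = rng.random()
                child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            child = [x + rng.gauss(0, 0.1) if rng.random() < p_mut else x
                     for x in child]             # Gaussian mutation per gene
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# toy objective: sphere function, optimum at the origin
best = evolve(lambda v: sum(x * x for x in v))
```

A high crossover rate with a low mutation rate, the combination the head abstract reports as best for the walking-parameter task, corresponds here to large `p_cross` and small `p_mut`.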
Markov V. N.
2014-11-01
The article considers discrete automatic machines with memory, designed to search for the parameter values that optimize a figure of merit of a system described by some criterion function. The multi-parameter optimization problem is cast as a discrete optimization problem by representing the values of the optimized system's parameters as sets of discrete values with a specified digitization step.
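The reduction the article describes, digitizing each parameter range with a fixed step and searching the resulting grid against a criterion function, can be sketched as follows; the toy criterion and ranges are invented for illustration, and exhaustive enumeration stands in for whatever search the automata actually perform:

```python
from itertools import product

def discrete_search(criterion, ranges, step):
    """Exhaustively minimize `criterion` over a discretized grid:
    each parameter runs from lo to hi with the given digitization step."""
    axes = []
    for lo, hi in ranges:
        n = int(round((hi - lo) / step))
        axes.append([lo + i * step for i in range(n + 1)])
    return min(product(*axes), key=criterion)

# toy criterion with its optimum at (1.0, -2.0), both on the grid
best = discrete_search(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                       ranges=[(-3.0, 3.0), (-3.0, 3.0)], step=0.5)
print(best)  # (1.0, -2.0)
```

The step size trades resolution against grid size: halving the step in d dimensions multiplies the number of criterion evaluations by roughly 2^d.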
Yelk, Joseph; Seideman, Tamar
2008-01-01
An optimal control approach based on multiple parameter genetic algorithms is applied to the design of plasmonic nanoconstructs with pre-determined optical properties and functionalities. We first develop nanoscale metallic lenses that focus an incident plane wave onto a pre-specified, spatially confined spot. Our results illustrate the role of symmetry breaking and unravel the principles that favor dimeric constructs for optimal light localization. Next we design a periodic array of silver particles to modify the polarization of an incident, linearly-polarized plane wave in a desired fashion while localizing the light in space. The results provide insight into the structural features that determine the birefringence properties of metal nanoparticles and their arrays. Of the variety of potential applications that may be envisioned, we note the design of nanoscale light sources with controllable coherence and polarization properties that could serve for coherent control of molecular or electronic dynamics in t...
Parameter estimation of nonlinear econometric models using particle swarm optimization
Wachowiak, Mark P.; Smolíková-Wachowiak, Renáta; Smolík, Dušan
2010-01-01
Global optimization is an essential component of econometric modeling. Optimization in econometrics is often difficult due to irregular cost functions characterized by multiple local optima. The goal of this paper is to apply a relatively new stochastic global technique, particle swarm optimization, to the well-known but difficult disequilibrium problem. Because of its co-operative nature and balance of local and global search, particle swarm is successful in optimizing the disequ...
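A minimal global-best particle swarm of the kind applied in this abstract (and in several later ones in this list) can be sketched as follows. The inertia and acceleration coefficients are common textbook values, and a simple quadratic cost stands in for the disequilibrium objective, which is not reproduced here:

```python
import random

def pso(cost, n_dim=2, n_particles=30, n_iter=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `cost` with a basic global-best particle swarm:
    each particle is pulled toward its personal best and the swarm best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(n_dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(n_dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(lambda v: sum(x * x for x in v))   # optimum at the origin
```

The balance of local and global search the abstract mentions is exactly the balance between the cognitive term (c1) and the social term (c2).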
Bogaarts, J G; Gommer, E D; Hilkman, D M W; van Kranen-Mastenbroek, V H J M; Reulen, J P H
2016-08-01
Automated seizure detection is a valuable asset to health professionals, as it makes adequate treatment possible in order to minimize brain damage. Most research focuses on two separate aspects of automated seizure detection: EEG feature computation and classification methods. Little research has been published regarding optimal training dataset composition for patient-independent seizure detection. This paper evaluates the performance of classifiers trained on different datasets in order to determine the optimal dataset for use in classifier training for automated, age-independent, seizure detection. Three datasets are used to train a support vector machine (SVM) classifier: (1) EEG from neonatal patients, (2) EEG from adult patients and (3) EEG from both neonates and adults. To correct for baseline EEG feature differences among patients, feature normalization is essential. Usually, dedicated detection systems are developed for either neonatal or adult patients; normalization might allow for the development of a single seizure detection system for patients irrespective of their age. Two classifier versions are trained on all three datasets: one with feature normalization and one without. This gives six different classifiers to evaluate using both the neonatal and adult test sets. As a performance measure, the area under the receiver operating characteristics curve (AUC) is used. Application of FBC resulted in performance values of 0.90 and 0.93 for neonatal and adult seizure detection, respectively. For neonatal seizure detection, the classifier trained on EEG from adult patients performed significantly worse than both the classifier trained on EEG data from neonatal patients and the classifier trained on both neonatal and adult EEG data. For adult seizure detection, optimal performance was achieved by either the classifier trained on adult EEG data or the classifier trained on both neonatal and adult EEG data. Our results show that age
Blommaert, Maarten; Reiter, Detlev [Institute of Energy and Climate Research (IEK-4), FZ Juelich GmbH, D-52425 Juelich (Germany); Heumann, Holger [Centre de Recherche INRIA Sophia Antipolis, BP 93 06902 Sophia Antipolis (France); Baelmans, Martine [KU Leuven, Department of Mechanical Engineering, 3001 Leuven (Belgium); Gauger, Nicolas Ralph [TU Kaiserslautern, Chair for Scientific Computing, 67663 Kaiserslautern (Germany)
2015-05-01
At present, several plasma boundary codes exist that attempt to describe the complex interactions in the divertor SOL (Scrape-Off Layer). The predictive capability of these edge codes is still very limited. Yet, in parallel to major efforts to mature edge codes, we face the design challenges for next-step fusion devices. One of them is the design of the helium and heat exhaust system. In past automated design studies, results indicated large potential reductions in peak heat load by an increased magnetic flux divergence towards the target structures. In the present study, a free boundary magnetic equilibrium solver is included into the simulation chain to verify these tendencies. Additionally, we expanded the applicability of the automated design method by introducing advanced "adjoint" sensitivity computations. This method, inherited from airfoil shape optimization in aerodynamics, allows for a large number of design variables at no additional computational cost. Results are shown for a design application of the new WEST divertor.
Choi, Kihwan; Ng, Alphonsus H C; Fobel, Ryan; Chang-Yen, David A; Yarnell, Lyle E; Pearson, Elroy L; Oleksak, Carl M; Fischer, Andrew T; Luoma, Robert P; Robinson, John M; Audet, Julie; Wheeler, Aaron R
2013-10-15
We introduce an automated digital microfluidic (DMF) platform capable of performing immunoassays from sample to analysis with minimal manual intervention. This platform features (a) a 90 Pogo pin interface for digital microfluidic control, (b) an integrated (and motorized) photomultiplier tube for chemiluminescent detection, and (c) a magnetic lens assembly which focuses magnetic fields into a narrow region on the surface of the DMF device, facilitating up to eight simultaneous digital microfluidic magnetic separations. The new platform was used to implement a three-level full factorial design of experiments (DOE) optimization for thyroid-stimulating hormone immunoassays, varying (1) the analyte concentration, (2) the sample incubation time, and (3) the sample volume, resulting in an optimized protocol that reduced the detection limit and sample incubation time by up to 5-fold and 2-fold, respectively, relative to those from previous work. To our knowledge, this is the first report of a DOE optimization for immunoassays in a microfluidic system of any format. We propose that this new platform paves the way for a benchtop tool that is useful for implementing immunoassays in near-patient settings, including community hospitals, physicians' offices, and small clinical laboratories. PMID:23978190
Automated procedure for selection of optimal refueling policies for light water reactors
An automated procedure determining a minimum cost refueling policy has been developed for light water reactors. The procedure is an extension of the equilibrium core approach previously devised for pressurized water reactors (PWRs). Use of 1 1/2-group theory has improved the accuracy of the nuclear model and eliminated tedious fitting of albedos. A simple heuristic algorithm for locating a good starting policy has materially reduced PWR computing time. Inclusion of void effects and use of the Haling principle for axial flux calculations extended the nuclear model to boiling water reactors (BWRs). A good initial estimate of the refueling policy is obtained by recognizing that a nearly uniform distribution of reactivity provides low-power peaking. The initial estimate is improved upon by interchanging groups of four assemblies and is subsequently refined by interchanging individual assemblies. The method yields very favorable results, is simpler than previously proposed BWR fuel optimization schemes, and retains power cost as the objective function.
Hu, Jie; Peng, Yinghong; Xiong, Guangleng
2007-01-01
This study presents a parameter coordination and robust optimization approach based on knowledge network modeling. The method allows multidisciplinary designers to synthetically coordinate and optimize parameters in light of multidisciplinary knowledge. First, a knowledge network model is established, including design knowledge from assembly, manufacture, performance, and simulation. Second, the parameter coordination method is presented to solve the knowledge network model,...
Shu Elvis N; Nubila Imelda N; Ukaejiofo Ernest O; Nubila Thomas; Ike Samuel O; Ezema Ifeyinwa
2010-01-01
Background This study was designed to determine the correlation between haematological parameters obtained by the Sysmex KX-21N automated hematology analyzer and those obtained by manual methods. Method Sixty (60) subjects were randomly selected from both apparently healthy subjects and subjects with different blood disorders at the University of Nigeria Teaching Hospital (UNTH), Ituku-Ozalla, Enugu, Enugu State, Nigeria. Three (3) mls of venous blood sample was collected aseptically from each subject into tri-pota...
Pappas, D.; Jeevarajan, A.; Anderson, M. M.
2004-01-01
Compact and automated sensors are desired for assessing the health of cell cultures in biotechnology experiments in microgravity. Measurement of cell culture medium allows for the optimization of culture conditions on orbit to maximize cell growth and minimize unnecessary exchange of medium. While several discrete sensors exist to measure culture health, a multi-parameter sensor would simplify the experimental apparatus. One such sensor, the Paratrend 7, consists of three optical fibers for measuring pH, dissolved oxygen (pO2) and dissolved carbon dioxide (pCO2), and a thermocouple to measure temperature. The sensor bundle was designed for intra-arterial placement in clinical patients, and potentially can be used in the bioreactors of NASA's Space Shuttle and International Space Station biotechnology program. Methods: A Paratrend 7 sensor was placed at the outlet of a rotating-wall perfused vessel bioreactor system inoculated with BHK-21 (baby hamster kidney) cells. Cell culture medium (GTSF-2, composed of 40% minimum essential medium and 60% L-15 Leibovitz medium) was manually measured using a benchtop blood gas analyzer (BGA, Ciba-Corning). Results: A Paratrend 7 sensor was used over a long-term (>120 day) cell culture experiment. The sensor was able to track changes in cell medium pH, pO2, and pCO2 due to the consumption of nutrients by the BHK-21 cells. When compared to manually obtained BGA measurements, the sensor had good agreement for pH, pO2, and pCO2, with bias [and precision] of 0.02 [0.15], 1 mm Hg [18 mm Hg], and -4.0 mm Hg [8.0 mm Hg], respectively. The Paratrend oxygen sensor was recalibrated (offset) periodically due to drift. The bias for the raw (no offset or recalibration) oxygen measurements was 42 mm Hg [38 mm Hg]. The measured response (rise) times of the sensor were 20 +/- 4 s for pH, 81 +/- 53 s for pCO2, and 51 +/- 20 s for pO2. For long-term cell culture measurements, these response times are more than adequate. Based on these findings, the Paratrend sensor could
Optimization of operational aircraft parameters reducing noise emission
Abdallah, Lina; Khardi, Salah; Haddou, Mounir
2010-01-01
The objective of this paper is to develop a model and a minimization method to provide optimal flight paths that reduce aircraft noise in the vicinity of airports. The optimization algorithm solves a complex optimal control problem and generates flight paths minimizing aircraft noise levels. Operational and safety constraints have been considered and their limits satisfied. Results are presented and discussed.
Optimization of operational aircraft parameters reducing noise emission
Abdallah, Lina; Khardi, Salah
2008-01-01
The objective of this paper is to develop a model and a minimization method to provide optimal flight paths that reduce aircraft noise in the vicinity of airports. The optimization algorithm solves a complex optimal control problem and generates flight paths minimizing aircraft noise levels. Operational and safety constraints have been considered and their limits satisfied. Results are presented and discussed.
Optimization of the ARIES-CS compact stellarator reactor parameters
optimum reactor size are the minimum distance between coils, neutron and radiative power flux to the wall, and the beta limit. A reactor systems/optimization code is used to optimize the reactor parameters for minimum cost of electricity subject to a large number of physics, engineering, materials, and reactor component constraints. Different transport models, reactor component models, and costing algorithms are used to test sensitivities to different models and assumptions. A 1-D power balance code is used to study the path to ignition and the effect of different plasma and confinement assumptions including density and temperature profiles, impurity density levels and peaking near the outside, confinement scaling, beta limits, alpha particle losses, etc. for each plasma and coil configuration. Variations on two different magnetic configurations were analyzed in detail: a three-field-period (M = 3) NCSX-based plasma with coils modified to allow a larger plasma-coil spacing, and an M = 2 plasma with coils that are closer to the plasma on the outboard side with less toroidal excursion. The reactors have major radii R in the 7-9 m range with an improved blanket and shield concept and an advanced superconducting coil approach. The results show that compact stellarator reactors should be cost competitive with tokamak reactors. (author)
Yu, Long; Druckenbrod, Markus; Greve, Martin; Wang, Ke-qi; Abdel-Maksoud, Moustafa
2015-10-01
A fully automated optimization process is provided for the design of ducted propellers under open-water conditions, including 3D geometry modeling, meshing, optimization algorithms and CFD analysis techniques. The developed process allows the direct integration of a RANSE solver in the design stage. A practical ducted propeller design case study is carried out for validation. Numerical simulations and open-water tests were performed and proved that the optimized ducted propeller improves hydrodynamic performance as predicted.
Optimization of non-linear mass damper parameters for transient response
Jensen, Jakob Søndergaard; Lazarov, Boyan Stefanov
2008-01-01
We optimize the parameters of multiple non-linear mass dampers based on numerical simulation of transient wave propagation through a linear mass-spring carrier structure. Topology optimization is used to obtain optimized distributions of damper mass ratio, natural frequency, damping ratio and nonlinear stiffness coefficient. Large improvements in performance are obtained with optimized parameters, and it is shown that nonlinear mass dampers can be more effective for wave attenuation than linear mass dampers.
Characterization and optimized control by means of multi-parameter controllers
Nielsen, Carsten; Hoeg, S.; Thoegersen, A. (Dan-Ejendomme, Hellerup (Denmark)) (and others)
2009-07-01
Poorly functioning HVAC systems (Heating, Ventilation and Air Conditioning), but also separate heating, ventilation and air conditioning systems, are costing Danish society billions of kroner every year: partly because of increased energy consumption and high operational and maintenance costs, but mainly due to reduced productivity and absence due to illness caused by a poor indoor climate. Typically, the operation of buildings and installations today relies on traditional building automation, which is characterised by 1) being based on static considerations, 2) each sensor being coupled with one actuator/valve, i.e. the sensor's signal is only used in one place in the system, 3) subsystems often being controlled independently of each other, and 4) the dynamics of building constructions and systems, which is very important for system and comfort regulation, not being considered. This, coupled with the widespread tendency to use large glass areas in facades without sufficient sun shading, makes it difficult to optimise comfort and energy consumption. Therefore, the last 10-20 years have seen a steady increase in complaints about the indoor climate in Danish buildings and, at the same time, new buildings often turn out to consume considerably more energy than expected. The purpose of the present project is to investigate what type of multi-parameter sensors may be devised for buildings and to carry out a preliminary evaluation of how such multi-parameter controllers may be used for optimal control of buildings. The aim of the project is not to develop multi-parameter controllers - this requires much more effort than is possible in the present project. The aim is to show the potential of using multi-parameter sensors when controlling buildings. For this purpose a larger office building has been chosen - an office building with a high energy demand and complaints regarding the indoor climate. In order to
Purpose: In 2D radiotherapy (RT) patient setup images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed image, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip-limiting parameters. The goal of the optimization is to maximize the entropy of the processed image. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by basic window-level adjustment, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
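The entropy-maximizing parameter selection described above can be illustrated with a much-reduced sketch: global (untiled) contrast-limited histogram equalization, with the clip limit chosen by grid search over the entropy of the output. This is a stand-in under stated assumptions, not the paper's tiled CLAHE plus interior-point optimizer; the candidate clip limits and the synthetic low-contrast image are invented:

```python
import numpy as np

def entropy(img, bins=64):
    """Shannon entropy (bits) of the grey-level histogram of a [0, 1] image."""
    p, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def clipped_equalize(img, clip):
    """Global histogram equalization with a CLAHE-style clip limit
    (contrast limiting only, no tiling; clipped excess is discarded)."""
    hist, edges = np.histogram(img, bins=256, range=(0.0, 1.0))
    hist = np.minimum(hist, clip * hist.mean())   # cap each bin
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    return np.interp(img, edges[:-1], cdf)        # map grey levels via CDF

def best_clip(img, candidates=(1.5, 2.0, 3.0, 5.0, 10.0)):
    """Mimic the entropy-driven parameter choice with a simple grid
    search instead of interior-point optimization."""
    return max(candidates, key=lambda c: entropy(clipped_equalize(img, c)))

rng = np.random.default_rng(0)
img = np.clip(rng.normal(0.5, 0.05, (64, 64)), 0.0, 1.0)  # low-contrast image
out = clipped_equalize(img, best_clip(img))
```

On such a concentrated histogram, the equalization stretches the occupied grey range toward [0, 1], so the output entropy exceeds the input entropy, which is exactly the objective the paper maximizes.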
DEFINITION OF THE OPTIMAL PARAMETERS OF SUPERSONIC EXTRACTION AT STACHYS NODULES EXTRACTION
Zakharova, L.; Dyatlov, A.
2013-01-01
The research determined the amount of extractive substances in stachys nodules and established the optimal parameters of supersonic extraction for obtaining water-based extracts from stachys nodules.
ZHANG Xu-ping; YU Yue-qing
2005-01-01
Optimization of structural parameters aimed at improving the load-carrying capacity of spatial flexible redundant manipulators is presented in this paper. In order to increase the load-to-mass ratio of the robots, the cross-sectional parameters and configurational parameters are first optimized separately, and then optimized simultaneously. A numerical simulation of a 4R spatial manipulator is performed. The results show that the load capacity of the robots is greatly improved by the optimization strategies proposed in this paper.
Parameter Studies, time-dependent simulations and design with automated Cartesian methods
Aftosmis, Michael
2005-01-01
Over the past decade, NASA has made a substantial investment in developing adaptive Cartesian grid methods for aerodynamic simulation. Cartesian-based methods played a key role in both the Space Shuttle Accident Investigation and in NASA's return-to-flight activities. The talk will provide an overview of recent technological developments, focusing on the generation of large-scale aerodynamic databases, automated CAD-based design, and time-dependent simulations of bodies in relative motion. Automation, scalability and robustness underlie all of these applications, and research in each of these topics will be presented.
Design And Modeling An Automated Digsilent Power System For Optimal New Load Locations
Mohamed Saad
2015-08-01
The electric power utilities seek to take advantage of novel approaches to meet growing energy demand. Utilities are under pressure to evolve their classical topologies to increase the usage of distributed generation. Currently, electrical power engineers in many regions of the world use manual methods to measure power consumption for further assessment of voltage violations; such a process has proved to be time consuming, costly and inaccurate. Demand response is a grid management technique where retail or wholesale customers are requested, either electronically or manually, to reduce their load. Therefore this paper aims to design and model an automated power system for optimal new load locations using DPL (DIgSILENT Programming Language). This study is a diagnostic approach that informs the system operator about any voltage violation cases that would occur when adding a new load to the grid. The process of identifying the optimal bus bar location involves a complicated calculation of the power consumption at each load bus. The DPL program therefore considers all the IEEE 30-bus internal network data and executes a load flow simulation to add the new load, starting with the first bus in the network. The developed model simulates the new load at each available bus bar in the network and generates three analytical reports for each case, capturing the over/under voltage and the loading of elements in the grid.
Direct Multiple Shooting Optimization with Variable Problem Parameters
Whitley, Ryan J.; Ocampo, Cesar A.
2009-01-01
Taking advantage of a novel approach to the design of the orbital transfer optimization problem and advanced non-linear programming algorithms, several optimal transfer trajectories are found for problems with and without known analytic solutions. This method treats the fixed known gravitational constants as optimization variables in order to reduce the need for an advanced initial guess. Complex periodic orbits are targeted with very simple guesses and the ability to find optimal transfers in spite of these bad guesses is successfully demonstrated. Impulsive transfers are considered for orbits in both the 2-body frame as well as the circular restricted three-body problem (CRTBP). The results with this new approach demonstrate the potential for increasing robustness for all types of orbit transfer problems.
Parameter estimation for time-delay chaotic system by particle swarm optimization
The knowledge about time delays and parameters is very important for control and synchronization of time-delay chaotic system. In this paper, parameter estimation for time-delay chaotic system is given by treating the time delay as an additional parameter. The parameter estimation is converted to an optimization problem, which finds a best parameter combination such that an objective function is minimized. Particle swarm optimization (PSO) is used to optimize the objective function through particles' cooperation and evolution. Two illustrative examples are given to show the validity of the proposed method.
Optimizing Soil Hydraulic Parameters in RZWQM2 Under Fallow Conditions
Effective estimation of soil hydraulic parameters is essential for predicting soil water dynamics and related biochemical processes in agricultural systems. However, high uncertainties in estimated parameter values limit a model’s skill for prediction and application. In this study, a global search ...
Parameter optimization method for the water quality dynamic model based on data-driven theory.
Liang, Shuxiu; Han, Songlin; Sun, Zhaochen
2015-09-15
Parameter optimization is important for developing a water quality dynamic model. In this study, we applied data-driven method to select and optimize parameters for a complex three-dimensional water quality model. First, a data-driven model was developed to train the response relationship between phytoplankton and environmental factors based on the measured data. Second, an eight-variable water quality dynamic model was established and coupled to a physical model. Parameter sensitivity analysis was investigated by changing parameter values individually in an assigned range. The above results served as guidelines for the control parameter selection and the simulated result verification. Finally, using the data-driven model to approximate the computational water quality model, we employed the Particle Swarm Optimization (PSO) algorithm to optimize the control parameters. The optimization routines and results were analyzed and discussed based on the establishment of the water quality model in Xiangshan Bay (XSB). PMID:26277602
Optimal parameters for the FFA-Beddoes dynamic stall model
Bjoerck, A.; Mert, M. [FFA, The Aeronautical Research Institute of Sweden, Bromma (Sweden); Madsen, H.A. [Risoe National Lab., Roskilde (Denmark)
1999-03-01
Unsteady aerodynamic effects, like dynamic stall, must be considered in the calculation of dynamic forces for wind turbines. Models incorporated in aero-elastic programs are of a semi-empirical nature, so the resulting aerodynamic forces depend on the values used for the semi-empirical parameters. This paper discusses a study of finding appropriate parameters to use with the Beddoes-Leishman model. Minimisation of the 'tracking error' between results from 2D wind tunnel tests and simulation with the model is used to find optimum values for the parameters. The resulting optimum parameters show a large variation from case to case. Using these different sets of optimum parameters in the calculation of blade vibrations gives rise to quite different predictions of aerodynamic damping, which is discussed. (au)
Sue-Ann, Goh; Ponnambalam, S. G.
This paper focuses on the operational issues of a two-echelon single-vendor multiple-buyers supply chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. To determine the optimal sales quantity for each buyer in the TSVMBSC, a mathematical model is formulated. From the optimal sales quantity, the optimal sales price can be obtained, which in turn determines the optimal channel profit and the contract price between the vendor and each buyer. All these parameters depend on the understanding of the revenue sharing between the vendor and buyers. A particle swarm optimization (PSO) algorithm is proposed for this problem. Solutions obtained from PSO are compared with the best known results reported in the literature.
Optimization of Russian roulette parameters for the KENO computer code
Hoffman, T.J.
1982-10-01
Proper specification of the (statistical) weight standards for Monte Carlo calculations can lead to a substantial reduction in computer time. Frequently these weights are set intuitively. When optimization is performed, it is usually based on a simplified model (to enable mathematical analysis) and involves minimization of the sample variance. In this report, weight standards are optimized through consideration of the actual implementation of Russian roulette in the KENO computer code. The goal is minimization of computer time rather than minimization of sample variance. Verification of the development and assumptions is obtained from Monte Carlo simulations. The results indicate that the current default weight standards are appropriate for most problems in which thermal neutron transport is not a major consumer of computer time. For thermal systems, the optimization technique described in this report should be used.
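The weight-standard mechanics being optimized can be illustrated with a generic weight-window Russian roulette game; the thresholds `w_low` and `w_avg` below are illustrative assumptions and do not reflect KENO's actual defaults or its implementation:

```python
import random

def russian_roulette(weight, w_low=0.1, w_avg=0.5, rng=random):
    """Play Russian roulette on a particle whose statistical weight has
    fallen below w_low: survive with probability weight / w_avg (weight
    reset to w_avg), otherwise terminate. The game is unbiased: the
    expected post-game weight equals the input weight."""
    if weight >= w_low:
        return weight            # no game played
    if rng.random() < weight / w_avg:
        return w_avg             # survivor carries the average weight
    return 0.0                   # particle killed, history truncated

# unbiasedness check: mean post-game weight approaches the input weight
rng = random.Random(42)
w = 0.02
mean = sum(russian_roulette(w, rng=rng) for _ in range(200_000)) / 200_000
```

Raising `w_low` kills more low-weight particles and saves computer time at the cost of higher variance, which is exactly the trade-off the report optimizes against computer time rather than sample variance.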
Parameter design and optimization of tight-lattice rod bundles
Thin rod bundles with a tight lattice are arranged on an equilateral-triangle grid, so the proportion of fuel is large and the core power density is high. Based on an analysis of core performance, the ABV-6M reactor is taken as the example, and two objective functions, power density and coolant flow rate, are proposed for the optimization calculation. The rod diameter and pitch are optimized separately using a GA method. The results, which pass the safety checks, show that a tight lattice is effective for improving the power density and other performance measures of the reactor core. (author)
Borot de Battisti, M.; Maenhout, M.; de Senneville, B. Denis; Hautvast, G.; Binnekamp, D.; Lagendijk, J. J. W.; van Vulpen, M.; Moerland, M. A.
2015-10-01
Focal high-dose-rate (HDR) for prostate cancer has gained increasing interest as an alternative to whole gland therapy as it may contribute to the reduction of treatment-related toxicity. For focal treatment, optimal needle guidance and placement is warranted. This can be achieved under MR guidance. However, MR-guided needle placement is currently not possible due to space restrictions in the closed MR bore. To overcome this problem, a MR-compatible, single-divergent needle-implant robotic device is under development at the University Medical Centre, Utrecht: placed between the legs of the patient inside the MR bore, this robot will tap the needle in a divergent pattern from a single rotation point into the tissue. This rotation point is just beneath the perineal skin to have access to the focal prostate tumor lesion. Currently, there is no treatment planning system commercially available which allows optimization of the dose distribution with such needle arrangement. The aim of this work is to develop an automatic inverse dose planning optimization tool for focal HDR prostate brachytherapy with needle insertions in a divergent configuration. A complete optimizer workflow is proposed which includes the determination of (1) the position of the center of rotation, (2) the needle angulations and (3) the dwell times. Unlike most currently used optimizers, no prior selection or adjustment of input parameters such as minimum or maximum dose or weight coefficients for treatment region and organs at risk is required. To test this optimizer, a planning study was performed on ten patients (treatment volumes ranged from 8.5 cm3 to 23.3 cm3) by using 2-14 needle insertions. The total computation time of the optimizer workflow was below 20 min and a clinically acceptable plan was reached on average using only four needle insertions.
Individual Parameter Selection Strategy for Particle Swarm Optimization
Cai, Xingjuan; Cui, Zhihua; Zeng, Jianchao; Tan, Ying
2009-01-01
This chapter proposes a new model that incorporates the characteristic differences of each particle; individual selection strategies for the inertia weight, cognitive learning factor and social learning factor are discussed, respectively. Simulation results show that the individual selection strategy maintains a fast search speed and remains robust. Further research is needed on the individual structure for particle swarm optimization.
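A minimal sketch of the idea, per-particle ("individual") parameter selection in PSO: each particle gets its own inertia weight and learning factors. The fixed random draws and ranges below are assumptions for illustration; the chapter's actual selection rules are not reproduced here.

```python
import random

def pso_individual(f, dim, n_particles=20, iters=200, seed=1):
    """Global-best PSO where w, c1, c2 are chosen per particle
    (the 'individual selection strategy' is sketched, not quoted)."""
    rng = random.Random(seed)
    # per-particle parameters, drawn once per particle (assumed ranges)
    w = [rng.uniform(0.4, 0.9) for _ in range(n_particles)]
    c1 = [rng.uniform(1.0, 2.0) for _ in range(n_particles)]
    c2 = [rng.uniform(1.0, 2.0) for _ in range(n_particles)]
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w[i] * v[i][d]
                           + c1[i] * rng.random() * (pbest[i][d] - x[i][d])
                           + c2[i] * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pval[i]:
                pval[i], pbest[i] = val, x[i][:]
                if val < gval:
                    gval, gbest = val, x[i][:]
    return gbest, gval

# demo on the sphere function
best, val = pso_individual(lambda p: sum(t * t for t in p), dim=3)
```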
Air Compressor Driving with Synchronous Motors at Optimal Parameters
Iuliu Petrica
2010-10-01
In this paper, a method of optimal compensation of the reactive load by synchronous motors driving the air compressors used in mining enterprises is presented, taking into account that in this case the great majority of the equipment (compressors, pumps) generally works at a constant load.
Problems of two-parameter optimization of single-beam gamma absorption concentration meters over an assigned measurement range are considered. It is shown that the maximum absolute and relative statistical measurement errors occur at the boundaries of the measurement range for any values of the variable parameters. Optimization of single-beam gamma absorption concentration meter parameters is performed for a number of binary solutions.
SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization
Li, Dengwang; Wang, Jie [College of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China); Kapp, Daniel S.; Xing, Lei [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)
2015-06-15
Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation. The remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as the optimization of an implicit function. The liver region was optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers in the panel data were segmented manually by physicians, and we then estimated the parameters of a GMM (Gaussian mixture model) and an MRF (Markov random field). A shape dictionary was built from the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization. Furthermore, H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated between the local optimization and global optimization until the stopping conditions were satisfied (maximum iterations and changing rate). Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy on the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is
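The alternating local/global refinement loop of steps 3-5 can be skeletonized as follows. The refine functions here are trivial scalar stand-ins (assumptions for illustration), not the actual MRF or sparse-shape operators; only the stopping logic (maximum iterations and change rate) follows the abstract.

```python
def alternate(shape, local_refine, global_refine, max_iters=50, tol=1e-3):
    """Repeat local then global refinement until the relative change
    ('changing rate') drops below tol or max_iters is reached."""
    for it in range(max_iters):
        new = global_refine(local_refine(shape))
        change = abs(new - shape) / (abs(shape) or 1.0)  # relative change
        shape = new
        if change < tol:
            break
    return shape, it + 1

# toy stand-ins: each refine step pulls a scalar "shape" halfway toward 1.0
val, iters = alternate(5.0, lambda s: (s + 1.0) / 2, lambda s: (s + 1.0) / 2)
```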
AN IMPROVED GENETIC ALGORITHM FOR SEARCHING OPTIMAL PARAMETERS IN n-DIMENSIONAL SPACE
Tang Bin; Hu Guangrui
2002-01-01
An improved genetic algorithm for searching optimal parameters in n-dimensional space is presented, which encodes movement direction and distance and searches from coarse to precise. The algorithm can realize global optimization and improve the search efficiency, and can be applied effectively in industrial optimization, data mining and pattern recognition.
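A sketch of a GA in this spirit: offspring are produced by moving a parent in a random direction by a distance that shrinks over generations (coarse to precise). The encoding, elitism and shrink schedule below are assumptions based only on the abstract.

```python
import math
import random

def directional_ga(f, dim, pop=30, gens=150, seed=2):
    """GA sketch: each offspring is its parent moved in a random
    direction by a generation-dependent distance (coarse -> precise)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for g in range(gens):
        step = 5.0 * (0.97 ** g)              # search radius shrinks each gen
        elite = sorted(X, key=f)[: pop // 2]  # keep the better half
        X = elite[:]
        while len(X) < pop:
            parent = rng.choice(elite)
            d = [rng.gauss(0.0, 1.0) for _ in range(dim)]  # random direction
            norm = math.sqrt(sum(t * t for t in d)) or 1.0
            X.append([p + step * rng.random() * t / norm
                      for p, t in zip(parent, d)])
    return min(X, key=f)

# demo on a 2-D sphere function
best = directional_ga(lambda p: sum(t * t for t in p), dim=2)
```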
Optimizing human-system interface automation design based on a skill-rule-knowledge framework
This study considers the technological change that has occurred in complex systems within the past 30 years. The role of human operators in controlling and interacting with complex systems following the technological change was also investigated. Modernization of instrumentation and control systems and components leads to a new issue of human-automation interaction, in which human operational performance must be considered in automated systems. The human-automation interaction can differ in its types and levels. A system design issue commonly arises: given these technical capabilities, which system functions should be automated and to what extent? A good automation design can be achieved by making an appropriate human-automation function allocation. To our knowledge, only a few studies have been published on how to achieve appropriate automation design with a systematic procedure. Further, there is a surprising lack of information on examining and validating the influences of levels of automation (LOAs) on instrumentation and control systems in the advanced control room (ACR). The study we present in this paper proposes a systematic framework to help in making an appropriate decision on types of automation (TOA) and LOAs based on a 'Skill-Rule-Knowledge' (SRK) model. The evaluation results showed that the use of either automatic mode or semiautomatic mode alone is insufficient to prevent human errors. To prevent human errors and ensure safety in the ACR, the proposed framework can be valuable for making decisions in human-automation allocation.
Problems of optimization of the accelerating system parameters of induction linear accelerators
Optimization of the accelerating system of an induction linear accelerator (ILAC) is discussed. For computerized optimization of ILAC parameters, analytical dependences are required which relate accelerator elements to its operating conditions. In deriving the objective function, the degree of importance of the optimized parameters is taken into account via weight factors whose values can vary from 0 to 1. Because the objective function has a single minimum, a simple optimization algorithm, the gradient method, could be employed. In the optimization it is assumed that the most common criteria for evaluating the ILAC are the efficiency, the relative cost, and the specific energy capacity.
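The combination described, a weighted-sum objective with a single minimum optimized by the gradient method, can be sketched as follows. The two quadratic component terms (standing in for, e.g., efficiency and relative cost) and the weights are hypothetical.

```python
def grad_desc(grad, x0, lr=0.1, iters=500):
    """Plain gradient descent; adequate here because the objective is
    assumed to have a single minimum, as stated for the ILAC case."""
    x = list(x0)
    for _ in range(iters):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

# J(x, y) = w1*(x - 2)^2 + w2*(y + 1)^2 : two hypothetical criteria
# blended by weight factors in [0, 1]
w1, w2 = 0.7, 0.3

def grad(p):
    # analytic gradient of J
    return [2.0 * w1 * (p[0] - 2.0), 2.0 * w2 * (p[1] + 1.0)]

x_opt = grad_desc(grad, [0.0, 0.0])
```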
Dorin Sendrescu
2013-01-01
This paper deals with offline parameter identification for a class of wastewater treatment bioprocesses using particle swarm optimization (PSO) techniques. Particle swarm optimization is a relatively new heuristic method that has produced promising results for solving complex optimization problems. In this paper, several variants of the PSO algorithm are used for parameter estimation of an anaerobic wastewater treatment process, a complex biotechnological system. The identification sc...
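A hedged sketch of PSO-based parameter identification: a global-best PSO fits the two parameters of a toy first-order decay model to synthetic noise-free measurements. The model y = a·exp(-b·t), the true values (a=2.0, b=0.5), the bounds and the PSO constants are all illustrative assumptions, not the paper's bioprocess.

```python
import math
import random

# synthetic "measurements" from a hypothetical first-order decay model
T = [0.5 * k for k in range(10)]
Y = [2.0 * math.exp(-0.5 * t) for t in T]

def sse(p):
    """Sum of squared errors between model and measurements."""
    a, b = p
    return sum((a * math.exp(-b * t) - y) ** 2 for t, y in zip(T, Y))

def pso(f, lo, hi, n=25, iters=300, seed=3):
    """Minimal global-best PSO with fixed constants w=0.7, c1=c2=1.5."""
    rng = random.Random(seed)
    dim = len(lo)
    x = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pb = [xi[:] for xi in x]
    pv = [f(xi) for xi in x]
    gi = min(range(n), key=lambda i: pv[i])
    gb, gv = pb[gi][:], pv[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                v[i][d] = (0.7 * v[i][d]
                           + 1.5 * rng.random() * (pb[i][d] - x[i][d])
                           + 1.5 * rng.random() * (gb[d] - x[i][d]))
                x[i][d] += v[i][d]
            fv = f(x[i])
            if fv < pv[i]:
                pv[i], pb[i] = fv, x[i][:]
                if fv < gv:
                    gv, gb = fv, x[i][:]
    return gb

a_hat, b_hat = pso(sse, lo=[0.0, 0.0], hi=[5.0, 2.0])
```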
Martin, Thomas Joseph
This dissertation presents the theoretical methodology, organizational strategy, conceptual demonstration and validation of a fully automated computer program for the multi-disciplinary analysis, inverse design and optimization of convectively cooled axial gas turbine blades and vanes. Parametric computer models of the three-dimensional cooled turbine blades and vanes were developed, including the automatic generation of discretized computational grids. Several new analysis programs were written and incorporated with existing computational tools to provide computer models of the engine cycle, aero-thermodynamics, heat conduction and thermofluid physics of the internally cooled turbine blades and vanes. A generalized information transfer protocol was developed to provide the automatic mapping of geometric and boundary condition data between the parametric design tool and the numerical analysis programs. A constrained hybrid optimization algorithm controlled the overall operation of the system and guided the multi-disciplinary internal turbine cooling design process towards the objectives and constraints of engine cycle performance, aerodynamic efficiency, cooling effectiveness and turbine blade and vane durability. Several boundary element computer programs were written to solve the steady-state non-linear heat conduction equation inside the internally cooled and thermal barrier-coated turbine blades and vanes. The boundary element method (BEM) did not require grid generation inside the internally cooled turbine blades and vanes, so the parametric model was very robust. Implicit differentiations of the BEM thermal and thermo-elastic analyses were done to compute design sensitivity derivatives faster and more accurately than via explicit finite differencing. A factor of three savings of computer processing time was realized for two-dimensional thermal optimization problems, and a factor of twenty was obtained for three-dimensional thermal optimization problems
Application of optimal input synthesis to aircraft parameter identification
Gupta, N. K.; Hall, W. E., Jr.; Mehra, R. K.
1976-01-01
The Frequency Domain Input Synthesis procedure is used in identifying the stability and control derivatives of an aircraft. By using a frequency-domain approach, one can handle criteria that are not easily handled by time-domain approaches. Numerical results are presented for optimal elevator deflections to estimate the longitudinal stability and control derivatives subject to root-mean-square constraints on the input. The applicability of the steady-state optimal inputs to finite-duration flight testing is investigated. The steady-state approximation of frequency-domain synthesis is good for data lengths greater than two time cycles for the short-period mode of the aircraft longitudinal motions. Phase relationships between different frequency components become important for shorter data lengths. The frequency-domain inputs are shown to be much better than the conventional doublet inputs.
Mechanical surface treatment of steel-Optimization parameters of regime
Laouar, L.; Hamadache, H.; Saad, S.; Bouchelaghem, A.; Mekhilef, S.
2009-11-01
Mechanical surface treatment by superficial plastic deformation is employed to finish machined part surfaces. It introduces structural modifications that give the base material new properties, yielding high physical and geometrical quality in the superficial layers. This study focuses on the application of a burnishing treatment (ball burnishing) to XC48 steel and on optimizing the parameters of the treatment regime. Three important parameters were considered: burnishing force 'Py', burnishing feed 'f' and ball radius 'r'. An empirical model was developed to illustrate the relationship between these parameters and the superficial-layer characteristics, defined by surface roughness 'Ra' and superficial hardness 'Hv'. A program was developed to determine the optimum treatment regime for each characteristic.
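An empirical model of this kind can be fitted by ordinary least squares. The sketch below recovers the coefficients of a hypothetical multi-linear model Ra ≈ c0 + c1·Py + c2·f + c3·r from synthetic trials; the regression form, parameter ranges and coefficient values are assumptions, since the abstract does not give the paper's actual model.

```python
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            k = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= k * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

rng = random.Random(0)
true_c = [1.2, -0.004, 8.0, -0.05]            # hypothetical coefficients
rows, ys = [], []
for _ in range(40):                           # 40 synthetic treatment trials
    Py, f, r = rng.uniform(50, 250), rng.uniform(0.05, 0.2), rng.uniform(2, 8)
    rows.append([1.0, Py, f, r])
    ys.append(sum(c * v for c, v in zip(true_c, rows[-1])))

# normal equations: (X^T X) c = X^T y
XtX = [[sum(row[i] * row[j] for row in rows) for j in range(4)] for i in range(4)]
Xty = [sum(row[i] * y for row, y in zip(rows, ys)) for i in range(4)]
c_hat = solve(XtX, Xty)
```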
Parameter estimation and optimal experimental design in flow reactors
Carraro, Thomas
2005-01-01
In this work we present numerical techniques, based on the finite element method, for the simulation of reactive flows in a chemical flow reactor, as well as for the identification of the kinetics of the reactions using measurements of observable quantities. We present the case of a real experiment in which the reaction rate is estimated by means of concentration measurements. We introduce methods for the optimal design of experiments in the context of reactive flows modeled by par...
Sukanta Nama
2016-04-01
Differential evolution (DE) is an effective and powerful approach that has been widely used in different environments. However, the performance of DE is sensitive to the choice of control parameters, so time-consuming parameter tuning is necessary to obtain optimal performance. The Backtracking Search Optimization Algorithm (BSA) is a new evolutionary algorithm (EA) for solving real-valued numerical optimization problems. An ensemble algorithm called E-BSADE is proposed which incorporates concepts from DE and BSA. The performance of E-BSADE is evaluated on several benchmark functions and is compared with basic DE, BSA and a conventional DE mutation strategy.
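A sketch of an ensemble of DE/rand/1 and a BSA-style mutation (which uses a historical population): per individual, one of the two mutation strategies is chosen at random. The exact E-BSADE combination rule and all constants are assumptions based only on the abstract.

```python
import random

def e_bsade(f, dim, bounds=(-5, 5), np_=30, gens=200, seed=4):
    """Ensemble sketch: each trial vector uses either DE/rand/1 or a
    BSA-style mutation toward a shuffled historical population."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    fit = [f(x) for x in pop]
    hist = [x[:] for x in pop]                # BSA historical population
    for _ in range(gens):
        if rng.random() < 0.5:                # BSA: sometimes refresh history
            hist = [x[:] for x in pop]
        rng.shuffle(hist)
        for i in range(np_):
            F = 0.5
            if rng.random() < 0.5:            # DE/rand/1 mutation
                r1, r2, r3 = rng.sample([j for j in range(np_) if j != i], 3)
                mut = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                       for d in range(dim)]
            else:                             # BSA-style mutation
                mut = [pop[i][d] + F * (hist[i][d] - pop[i][d])
                       for d in range(dim)]
            jr = rng.randrange(dim)           # binomial crossover
            trial = [mut[d] if (rng.random() < 0.9 or d == jr) else pop[i][d]
                     for d in range(dim)]
            tv = f(trial)
            if tv <= fit[i]:                  # greedy selection
                pop[i], fit[i] = trial, tv
    b = min(range(np_), key=lambda i: fit[i])
    return pop[b], fit[b]

best, val = e_bsade(lambda p: sum(t * t for t in p), dim=5)
```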
GPU based Monte Carlo for PET image reconstruction: parameter optimization
This paper presents the optimization of a fully Monte Carlo (MC) based iterative image reconstruction of Positron Emission Tomography (PET) measurements. With our MC reconstruction method all the physical effects in a PET system are taken into account, so superior image quality is achieved in exchange for increased computational effort. The method is feasible because we utilize the enormous processing power of Graphical Processing Units (GPUs) to solve the inherently parallel problem of photon transport. The MC approach regards the simulated positron decays as samples in the mathematical sums required by the iterative reconstruction algorithm, so to complement the fast architecture, our optimization work focuses on the number of simulated positron decays required to obtain sufficient image quality. We have achieved significant results in determining the optimal number of samples for arbitrary measurement data, which makes it possible to achieve the best image quality with the least computational effort. Based on this research, recommendations can be given for effective partitioning of computational effort into the iterations of limited-time reconstructions. (author)
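The core question, how many MC samples are enough, can be sketched as growing the sample count until the standard error of the estimate falls below a target. The batch size and threshold are assumptions, and the toy integrand merely stands in for photon transport.

```python
import math
import random

def samples_needed(estimate_once, target_rel_err=0.01, batch=1000, seed=6):
    """Grow the MC sample count until the standard error of the mean
    drops below the target relative error (thresholds are assumptions)."""
    rng = random.Random(seed)
    vals = []
    while True:
        vals.extend(estimate_once(rng) for _ in range(batch))
        n = len(vals)
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / (n - 1)
        sem = math.sqrt(var / n)              # standard error of the mean
        if sem / abs(mean) < target_rel_err:
            return n, mean

# toy integrand: probability a random point lies in the unit quarter-circle
n, est = samples_needed(
    lambda r: 1.0 if r.random() ** 2 + r.random() ** 2 < 1.0 else 0.0)
```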
Design of Digital Imaging System for Optimization of Control Parameters
SONG Yong; HAO Qun; YANG Guang; SUN Hong-wei
2007-01-01
The design of an experimental digital imaging system for control-parameter optimization is discussed in detail. Signal processing in the digital CCD imaging system is first analyzed. Real-time control of the CCD driver, the digital processing circuit and the man-machine interaction are then achieved through the design of the digital CCD imaging module and the control module. Experimental results indicate that the image quality of the CCD experimental system responds well to changes in the control parameters. The system provides an important basis for improving image quality and the applicability of micro imaging systems in complex environments.
Optimization of control parameters for petroleum waste composting
(No author listed)
2001-01-01
Composting is widely employed in the treatment of petroleum waste. The purpose of this study was to find the optimum control parameters for in-vessel composting of petroleum waste. Various physical and chemical parameters were monitored to evaluate their influence on the microbial communities present in composting. CO2 evolution and the number of microorganisms were measured as indicators of composting activity. The results demonstrated that the optimum temperature, pH and moisture content were 56.5-59.5 °C, 7.0-8.5 and 55%-60%, respectively. Under the optimum conditions, the removal efficiency of petroleum hydrocarbons reached 83.29% after 30 days of composting.
Model of Stochastic Automation Asymptotically Optimal Behavior for Inter-budget Regulation
Elena D. Streltsova
2013-01-01
This paper is focused on the topical issue of inter-budget control in the structure ↔ by applying econometric models. To create the decision-making model, the mathematical tools of the theory of stochastic automata operating in random environments were used. On this basis, an adaptive, trainable economic-mathematical model was developed that is able to adapt to an environment driven by the income from federal and regional taxes and fees payable to the budget of a constituent entity of the RF and paid to a lower-level budget as a form of budget regulation. The authors developed the structure of the automaton, described its behavior in a random environment, and derived expressions for the final probabilities of the automaton being in each of its states. The behavior of the automaton was analyzed through a mathematically rigorous proof of a theorem on the feasibility and asymptotic optimality of the proposed design.
Serag, Ahmed; Wenzel, Fabian; Thiele, Frank; Buchert, Ralph; Young, Stewart
2009-02-01
FDG-PET is increasingly used for the evaluation of dementia patients, as major neurodegenerative disorders, such as Alzheimer's disease (AD), Lewy body dementia (LBD), and Frontotemporal dementia (FTD), have been shown to induce specific patterns of regional hypo-metabolism. However, the interpretation of FDG-PET images of patients with suspected dementia is not straightforward, since patients are imaged at different stages of progression of neurodegenerative disease, and the indications of reduced metabolism due to neurodegenerative disease appear slowly over time. Furthermore, different diseases can cause rather similar patterns of hypo-metabolism. Therefore, classification of FDG-PET images of patients with suspected dementia may lead to misdiagnosis. This work aims to find an optimal subset of features for automated classification, in order to improve classification accuracy of FDG-PET images in patients with suspected dementia. A novel feature selection method is proposed, and performance is compared to existing methods. The proposed approach adopts a combination of balanced class distributions and feature selection methods. This is demonstrated to provide high classification accuracy for classification of FDG-PET brain images of normal controls and dementia patients, comparable with alternative approaches, and yields a compact set of selected features.
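A sketch of the general recipe, feature selection over balanced class distributions: greedy forward selection on balanced synthetic data, scored with a nearest-centroid classifier on a held-out split. The data, classifier and stopping rule are illustrative assumptions, not the paper's actual method.

```python
import random

rng = random.Random(5)

def sample(cls):
    # features 0 and 3 are informative; the other four are pure noise
    base = [0.0] * 6
    if cls:
        base[0], base[3] = 2.0, -2.0
    return [b + rng.gauss(0.0, 1.0) for b in base], cls

data = [sample(i % 2) for i in range(200)]     # balanced class distribution

def accuracy(feats):
    """Nearest-centroid classifier accuracy on a held-out split."""
    train, test = data[:150], data[150:]
    cent = {0: [0.0] * len(feats), 1: [0.0] * len(feats)}
    cnt = {0: 0, 1: 0}
    for x, y in train:
        cnt[y] += 1
        for k, fi in enumerate(feats):
            cent[y][k] += x[fi]
    for y in (0, 1):
        cent[y] = [v / cnt[y] for v in cent[y]]
    ok = 0
    for x, y in test:
        d = {c: sum((x[fi] - cent[c][k]) ** 2 for k, fi in enumerate(feats))
             for c in (0, 1)}
        ok += min(d, key=d.get) == y
    return ok / len(test)

# greedy forward selection: add features while held-out accuracy improves
selected, best_acc, remaining = [], 0.0, list(range(6))
while remaining:
    fi, acc = max(((f, accuracy(selected + [f])) for f in remaining),
                  key=lambda t: t[1])
    if acc <= best_acc:
        break
    selected.append(fi)
    remaining.remove(fi)
    best_acc = acc
```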