WorldWideScience

Sample records for automated parameter optimization

  1. An automated analysis workflow for optimization of force-field parameters using neutron scattering data

    Energy Technology Data Exchange (ETDEWEB)

    Lynch, Vickie E.; Borreguero, Jose M. [Neutron Data Analysis & Visualization Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Bhowmik, Debsindhu [Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Ganesh, Panchapakesan; Sumpter, Bobby G. [Center for Nanophase Material Sciences, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Proffen, Thomas E. [Neutron Data Analysis & Visualization Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Goswami, Monojoy, E-mail: goswamim@ornl.gov [Center for Nanophase Material Sciences, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States)

    2017-07-01

    Graphical abstract: - Highlights: • An automated workflow to optimize force-field parameters. • Used the workflow to optimize force-field parameters for a system containing nanodiamond and tRNA. • The mechanism relies on molecular dynamics simulation and neutron scattering experimental data. • The workflow can be generalized to any other experimental and simulation techniques. - Abstract: Large-scale simulations and data analysis are often required to explain neutron scattering experiments and to establish a connection between the fundamental physics at the nanoscale and the data probed by neutrons. However, to perform simulations at experimental conditions it is critical to use correct force-field (FF) parameters, which are unfortunately not available for most complex experimental systems. In this work, we have developed a workflow optimization technique to provide optimized FF parameters by comparing molecular dynamics (MD) to neutron scattering data. We describe the workflow in detail using an example system consisting of tRNA and hydrophilic nanodiamonds (ND) in a deuterated water (D₂O) environment. Quasi-elastic neutron scattering (QENS) data show a faster motion of the tRNA in the presence of the nanodiamond than without it. To compare the QENS and MD results quantitatively, a proper choice of FF parameters is necessary. We use an efficient workflow to optimize the FF parameters between the hydrophilic nanodiamond and water by comparing to the QENS data. Our results show that we can obtain accurate FF parameters by using this technique. The workflow can be generalized to other types of neutron data for FF optimization, such as vibrational spectroscopy and spin echo.
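The loop this record describes (simulate, compare to the neutron observable, adjust the FF parameters, repeat) can be sketched as below. This is purely illustrative, not the authors' workflow: `simulated_observable` is a toy stand-in for an MD run plus scattering calculation, and `QENS_TARGET`, the parameter bounds, and the stochastic hill climb are all assumptions made for the sketch.

```python
import random

QENS_TARGET = 1.85  # stand-in for a quantity extracted from the QENS data

def simulated_observable(epsilon, sigma):
    """Stand-in for the expensive step: run an MD simulation with the
    candidate force-field parameters and compute the same observable
    that the neutron experiment measures."""
    return 0.5 * epsilon + 2.0 * sigma  # toy smooth dependence

def misfit(params):
    epsilon, sigma = params
    return (simulated_observable(epsilon, sigma) - QENS_TARGET) ** 2

def optimize_ff(n_iter=2000, seed=0):
    """Simple stochastic hill climb over the two FF parameters:
    keep a candidate only if it brings simulation closer to experiment."""
    rng = random.Random(seed)
    best = (rng.uniform(0.1, 2.0), rng.uniform(0.1, 2.0))
    best_err = misfit(best)
    for _ in range(n_iter):
        cand = (best[0] + rng.gauss(0, 0.05), best[1] + rng.gauss(0, 0.05))
        err = misfit(cand)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

params, err = optimize_ff()
print("optimized (epsilon, sigma):", params, "misfit:", err)
```

In the real workflow each `misfit` evaluation costs an MD simulation, which is why the paper wraps the loop in an automated analysis pipeline rather than a naive search.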

  2. Development of a neuro-fuzzy technique for automated parameter optimization of inverse treatment planning

    International Nuclear Information System (INIS)

    Stieler, Florian; Yan, Hui; Lohr, Frank; Wenz, Frederik; Yin, Fang-Fang

    2009-01-01

    Parameter optimization in the process of inverse treatment planning for intensity modulated radiation therapy (IMRT) is mainly conducted by human planners in order to create a plan with the desired dose distribution. To automate this tedious process, an artificial intelligence (AI) guided system was developed and examined. The AI system can automatically accomplish the optimization process based on prior knowledge operated by several fuzzy inference systems (FIS). Prior knowledge, which was collected from human planners during their routine trial-and-error process of inverse planning, has first to be 'translated' to a set of 'if-then rules' for driving the FISs. To minimize subjective error which could be costly during this knowledge acquisition process, it is necessary to find a quantitative method to automatically accomplish this task. A well-developed machine learning technique, based on an adaptive neuro fuzzy inference system (ANFIS), was introduced in this study. Based on this approach, prior knowledge of a fuzzy inference system can be quickly collected from observation data (clinically used constraints). The learning capability and the accuracy of such a system were analyzed by generating multiple FIS from data collected from an AI system with known settings and rules. Multiple analyses showed good agreements of FIS and ANFIS according to rules (error of the output values of ANFIS based on the training data from FIS of 7.77 ± 0.02%) and membership functions (3.9%), thus suggesting that the 'behavior' of an FIS can be propagated to another, based on this process. The initial experimental results on a clinical case showed that ANFIS is an effective way to build FIS from practical data, and analysis of ANFIS and FIS with clinical cases showed good planning results provided by ANFIS. OAR volumes encompassed by characteristic percentages of isodoses were reduced by a mean of between 0 and 28%. The study demonstrated a feasible way

  3. Development of a neuro-fuzzy technique for automated parameter optimization of inverse treatment planning

    Directory of Open Access Journals (Sweden)

    Wenz Frederik

    2009-09-01

    Full Text Available Abstract Background Parameter optimization in the process of inverse treatment planning for intensity modulated radiation therapy (IMRT) is mainly conducted by human planners in order to create a plan with the desired dose distribution. To automate this tedious process, an artificial intelligence (AI) guided system was developed and examined. Methods The AI system can automatically accomplish the optimization process based on prior knowledge operated by several fuzzy inference systems (FIS). Prior knowledge, which was collected from human planners during their routine trial-and-error process of inverse planning, has first to be "translated" to a set of "if-then rules" for driving the FISs. To minimize subjective error which could be costly during this knowledge acquisition process, it is necessary to find a quantitative method to automatically accomplish this task. A well-developed machine learning technique, based on an adaptive neuro fuzzy inference system (ANFIS), was introduced in this study. Based on this approach, prior knowledge of a fuzzy inference system can be quickly collected from observation data (clinically used constraints). The learning capability and the accuracy of such a system were analyzed by generating multiple FIS from data collected from an AI system with known settings and rules. Results Multiple analyses showed good agreements of FIS and ANFIS according to rules (error of the output values of ANFIS based on the training data from FIS of 7.77 ± 0.02%) and membership functions (3.9%), thus suggesting that the "behavior" of an FIS can be propagated to another, based on this process. The initial experimental results on a clinical case showed that ANFIS is an effective way to build FIS from practical data, and analysis of ANFIS and FIS with clinical cases showed good planning results provided by ANFIS. OAR volumes encompassed by characteristic percentages of isodoses were reduced by a mean of between 0 and 28%. Conclusion The

  4. Parameter Extraction for PSpice Models by means of an Automated Optimization Tool – An IGBT model Study Case

    DEFF Research Database (Denmark)

    Suárez, Carlos Gómez; Reigosa, Paula Diaz; Iannuzzo, Francesco

    2016-01-01

    An original tool for parameter extraction of PSpice models has been released, enabling simple parameter identification. A physics-based IGBT model is used to demonstrate that the optimization tool is capable of generating a set of parameters which predicts the steady-state and switching behavior...

  5. Infrared Drying Parameter Optimization

    Science.gov (United States)

    Jackson, Matthew R.

    In recent years, much research has been done to explore direct printing methods, such as screen and inkjet printing, as alternatives to the traditional lithographic process. The primary motivation is reduction of the material costs associated with producing common electronic devices. Much of this research has focused on developing inkjet or screen paste formulations that can be printed on a variety of substrates, and which have similar conductivity performance to the materials currently used in the manufacturing of circuit boards and other electronic devices. Very little research has been done to develop a process that would use direct printing methods to manufacture electronic devices in high volumes. This study focuses on developing and optimizing a drying process for conductive copper ink in a high-volume manufacturing setting. Using an infrared (IR) dryer, it was determined that conductive copper prints could be dried in seconds or minutes, as opposed to the tens of minutes or hours that other drying devices, such as a vacuum oven, would take. In addition, this study identifies significant parameters that can affect the conductivity of IR-dried prints. Using designed experiments and statistical analysis, the dryer parameters were optimized to produce the best conductivity performance for a specific ink formulation and substrate combination. It was determined that for an ethylene glycol, butanol, 1-methoxy-2-propanol ink formulation printed on Kapton, the optimal drying parameters consisted of a dryer height of 4 inches, a temperature setting between 190 and 200 °C, and a dry time of 50-65 seconds depending on the printed film thickness as determined by the number of print passes. It is important to note that these parameters are optimized specifically for the ink formulation and substrate used in this study. There is still much research that needs to be done into optimizing the IR dryer for different ink and substrate combinations, as well as developing a

  6. Optimization of automation: III. Development of optimization method for determining automation rate in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun

    2016-01-01

    Highlights: • We propose an appropriate automation rate that enables the best human performance. • We analyze the shortest working time considering Situation Awareness Recovery (SAR). • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods. • The process to derive the optimized automation rate is demonstrated through case studies. - Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed to promise greater efficiency, lower workloads, and fewer operator errors by enhancing operator and system performance. However, the excessive introduction of automation has deteriorated operator performance due to the side effects of automation, referred to as the Out-of-the-Loop (OOTL) problem, and this is a critical issue that must be resolved. Thus, in order to determine the level of automation that assures the best human operator performance, a quantitative method for optimizing the automation is proposed in this paper. To develop an optimization method for determining appropriate automation levels that enable the best human performance, the automation rate and ostracism rate, which are estimation methods that quantitatively analyze the positive and negative effects of automation, respectively, are integrated. The integration was conducted to derive the shortest working time based on the concept of situation awareness recovery (SAR), which states that the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency operation scenario-based case study. In this case study, four types of procedures are assumed by redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the

  7. Buncher system parameter optimization

    International Nuclear Information System (INIS)

    Wadlinger, E.A.

    1981-01-01

    A least-squares algorithm is presented to calculate the RF amplitudes and cavity spacings for a series of buncher cavities, each resonating at a frequency that is a multiple of a fundamental frequency of interest. The longitudinal phase-space distribution, obtained by particle tracing through the bunching system, is compared to a desired distribution function of energy and phase. The buncher cavity parameters are adjusted to minimize the difference between these two distributions. Examples are given for zero space charge. The manner in which the method can be extended to include space charge using a 3-D space-charge calculation procedure is indicated.
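The fitting idea in this record (adjust cavity parameters to minimize the squared difference between a traced distribution and a desired one) can be illustrated in miniature. The sketch below is not the paper's algorithm: particle tracing is replaced by a toy harmonic-kick model, only amplitudes (not spacings) are fitted, and a crude coordinate-descent least squares stands in for the actual solver.

```python
import math

def energy_kick(phi, amplitudes):
    """Toy stand-in for particle tracing: the energy modulation imparted
    at phase phi by cavities resonating at harmonics of a fundamental."""
    return sum(a * math.sin((h + 1) * phi) for h, a in enumerate(amplitudes))

PHASES = [-math.pi + 2 * math.pi * i / 64 for i in range(64)]
DESIRED = [0.6 * p for p in PHASES]  # idealized linear energy ramp

def residual_sum(amps):
    """Sum of squared differences between traced and desired modulation."""
    return sum((energy_kick(p, amps) - d) ** 2 for p, d in zip(PHASES, DESIRED))

def fit_amplitudes(n_cavities=4, n_sweeps=5, step=0.01):
    """Crude coordinate-descent least squares over the cavity amplitudes."""
    amps = [0.0] * n_cavities
    for _ in range(n_sweeps):
        for i in range(n_cavities):
            for delta in (step, -step):
                while True:  # walk this amplitude while the residual drops
                    trial = amps[:]
                    trial[i] += delta
                    if residual_sum(trial) < residual_sum(amps):
                        amps = trial
                    else:
                        break
    return amps, residual_sum(amps)

amps, err = fit_amplitudes()
print("fitted amplitudes:", [round(a, 2) for a in amps])
```

The fitted amplitudes approach the Fourier coefficients of the target ramp (roughly 1.2, -0.6, 0.4, -0.3 for the first four harmonics), which is what a least-squares harmonic buncher fit should recover.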

  8. Multivariate optimization of ILC parameters

    International Nuclear Information System (INIS)

    Bazarov, I.V.; Padamsee, H.S.

    2005-01-01

    We present results of a multiobjective optimization of the International Linear Collider (ILC) which seeks to maximize luminosity at each given total cost of the linac (capital and operating costs of cryomodules, refrigeration and RF). Evolutionary algorithms allow quick exploration of optimal sets of parameters in a complicated system such as the ILC in the presence of realistic constraints, as well as investigation of various what-if scenarios for potential performance. The parameters we varied included the accelerating gradient and Q of the cavities (in a coupled manner following a realistic Q vs. E curve), the number of particles per bunch, the bunch length, the number of bunches in the train, etc. We find an optimum which decreases (relative to the TESLA TDR baseline) the total linac cost by 22% and the capital cost by 25% at the same luminosity of 3 x 10^38 m^-2 s^-1. For this optimum the gradient is 35 MV/m, the final spot size is 3.6 nm, and the beam power is 15.9 MW. Changing the luminosity by 10^38 m^-2 s^-1 results in a 10% change in the total linac cost and 4% in the capital cost. We have also explored the optimal fronts of luminosity vs. cost for several other scenarios using the same approach. (orig.)

  9. Optimization-based Method for Automated Road Network Extraction

    International Nuclear Information System (INIS)

    Xiong, D

    2001-01-01

    Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper reviews the literature on road extraction and describes a study of an optimization-based method for automated road network extraction.

  10. Optimization of Parameters of Asymptotically Stable Systems

    Directory of Open Access Journals (Sweden)

    Anna Guerman

    2011-01-01

    Full Text Available This work deals with numerical methods of parameter optimization for asymptotically stable systems. We formulate a special mathematical programming problem that allows us to determine the optimal parameters of a stabilizer. This problem involves solutions to a differential equation. We show how to choose the mesh in order to obtain a discrete problem guaranteeing the necessary accuracy. The developed methodology is illustrated by an example concerning optimization of parameters for a satellite stabilization system.
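The two ingredients this abstract names, choosing a mesh fine enough for the required accuracy and then optimizing a stabilizer parameter through the resulting discrete problem, can be sketched on a toy damped oscillator. Everything here (the system x'' = -x - c x', the quadratic cost, the integrator, the tolerances) is an invented stand-in, not the paper's satellite model.

```python
def cost(c, h, t_end=20.0):
    """Quadratic cost of the stabilized oscillator x'' = -x - c*x',
    x(0)=1, x'(0)=0, integrated with semi-implicit Euler at step h."""
    x, v, t, j = 1.0, 0.0, 0.0, 0.0
    while t < t_end:
        v += h * (-x - c * v)
        x += h * v
        j += h * x * x
        t += h
    return j

def choose_mesh(c=1.0, h0=0.1, tol=1e-3):
    """Halve the step size until the discrete cost stabilizes, i.e. the
    mesh is fine enough to guarantee the required accuracy."""
    h, prev = h0, cost(c, h0)
    while True:
        h /= 2
        cur = cost(c, h)
        if abs(cur - prev) < tol:
            return h, cur
        prev = cur

h, j = choose_mesh()
# Optimize the damping parameter on the validated mesh (grid search).
best_c = min((round(0.1 * i, 1) for i in range(1, 41)), key=lambda c: cost(c, h))
print("mesh:", h, "optimal damping:", best_c)
```

For this system the cost has the closed form c/2 + 1/(2c), minimized at c = 1, so the discrete optimization should land near that value, which gives a quick sanity check on the mesh-selection step.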

  11. Evaluation of GCC optimization parameters

    Directory of Open Access Journals (Sweden)

    Rodrigo D. Escobar

    2012-12-01

    Full Text Available Compile-time optimization of code can result in significant performance gains. The magnitude of these gains varies widely depending upon the code being optimized, the hardware being compiled for, the specific performance metric targeted (e.g., speed, throughput, memory utilization), and the compiler used. We used the latest version of the SPEC CPU 2006 benchmark suite to help gain an understanding of possible performance improvements using GCC (GNU Compiler Collection) options, focusing mainly on speed gains made possible by tuning the compiler with the standard compiler optimization levels as well as a compiler option specific to the hardware processor. We compared the best standardized tuning options obtained for a Core i7 processor to the same relative options used on a Pentium 4 to determine whether the GNU project has improved its performance tuning capabilities for specific hardware over time.

  12. Parameters control in GAs for dynamic optimization

    Directory of Open Access Journals (Sweden)

    Khalid Jebari

    2013-02-01

    Full Text Available Controlling the parameters of genetic algorithms makes it possible to optimize the search process and improves the performance of the algorithm. Moreover, it frees the user from a trial-and-error game of finding the optimal parameters by hand.
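One classic instance of the parameter control this record refers to is adapting the mutation step size during the run. The sketch below applies Rechenberg's 1/5 success rule to a toy one-dimensional problem; the objective, population scheme, and all constants are illustrative assumptions, not taken from the paper.

```python
import random

def fitness(x):
    # Toy objective to maximize: a smooth bump peaking at x = 0.7.
    return -(x - 0.7) ** 2

def adaptive_ga(pop_size=30, generations=200, seed=1):
    """Tiny GA whose mutation step size is controlled online."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    sigma = 0.3  # mutation step size, adapted during the run
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection (elitist)
        successes = 0
        children = []
        for p in parents:
            child = min(1.0, max(0.0, p + rng.gauss(0, sigma)))
            if fitness(child) > fitness(p):
                successes += 1
            children.append(child)
        pop = parents + children
        # 1/5 success rule: widen the search while mutations keep paying
        # off, narrow it once they rarely improve on the parent.
        sigma *= 1.1 if successes / len(parents) > 0.2 else 0.9
    return max(pop, key=fitness)

best = adaptive_ga()
print("best individual:", best)
```

The point of the adaptation is exactly the one the abstract makes: the user never has to pick a fixed mutation rate, because the algorithm steers it from the observed success rate.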

  13. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge, no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and to create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on the infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  14. Optimization of electrospinning parameters for chitosan nanofibres

    CSIR Research Space (South Africa)

    Jacobs, V

    2011-06-01

    Full Text Available Electrospinning of chitosan, a naturally occurring polysaccharide biopolymer, has been investigated. In this paper, the authors report the optimization of the electrospinning process and solution parameters using a factorial design approach to obtain...

  15. Automated Robust Maneuver Design and Optimization

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA is seeking improvements to the current technologies related to Position, Navigation and Timing. In particular, it is desired to automate precise maneuver...

  16. Optimization of Agrobacterium -mediated transformation parameters ...

    African Journals Online (AJOL)

    Agrobacterium-mediated transformation factors for sweet potato embryogenic calli were optimized using β-glucuronidase (GUS) as a reporter. The binary vector pTCK303 harboring the modified GUS gene driven by the CaMV 35S promoter was used. Transformation parameters were optimized including bacterial ...

  17. Basic MR sequence parameters systematically bias automated brain volume estimation

    International Nuclear Information System (INIS)

    Haller, Sven; Falkovskiy, Pavel; Roche, Alexis; Marechal, Benedicte; Meuli, Reto; Thiran, Jean-Philippe; Krueger, Gunnar; Lovblad, Karl-Olof; Kober, Tobias

    2016-01-01

    Automated brain MRI morphometry, including hippocampal volumetry for Alzheimer disease, is increasingly recognized as a biomarker. Consequently, a rapidly increasing number of software tools have become available. We tested whether modifications of simple MR protocol parameters typically used in clinical routine systematically bias automated brain MRI segmentation results. The study was approved by the local ethical committee and included 20 consecutive patients (13 females, mean age 75.8 ± 13.8 years) undergoing clinical brain MRI at 1.5 T for workup of cognitive decline. We compared three 3D T1 magnetization prepared rapid gradient echo (MPRAGE) sequences with the following parameter settings: ADNI-2, 1.2 mm iso-voxel, no image filtering; LOCAL-, 1.0 mm iso-voxel, no image filtering; LOCAL+, 1.0 mm iso-voxel with image edge enhancement. Brain segmentation was performed by two different and established analysis tools, FreeSurfer and MorphoBox, using standard parameters. Spatial resolution (1.0 versus 1.2 mm iso-voxel) and modification in contrast resulted in relative estimated volume differences of up to 4.28 % (p < 0.001) in cortical gray matter and 4.16 % (p < 0.01) in the hippocampus. Image data filtering resulted in estimated volume differences of up to 5.48 % (p < 0.05) in cortical gray matter. A simple change of MR parameters, notably spatial resolution, contrast, and filtering, may systematically bias results of automated brain MRI morphometry by up to 4-5 %. This is in the same range as early disease-related brain volume alterations, for example, in Alzheimer disease. Automated brain segmentation software packages should therefore require strict MR parameter selection or include compensatory algorithms to avoid MR parameter-related bias of brain morphometry results. (orig.)

  18. Basic MR sequence parameters systematically bias automated brain volume estimation

    Energy Technology Data Exchange (ETDEWEB)

    Haller, Sven [University of Geneva, Faculty of Medicine, Geneva (Switzerland); Affidea Centre de Diagnostique Radiologique de Carouge CDRC, Geneva (Switzerland); Falkovskiy, Pavel; Roche, Alexis; Marechal, Benedicte [Siemens Healthcare HC CEMEA SUI DI BM PI, Advanced Clinical Imaging Technology, Lausanne (Switzerland); University Hospital (CHUV), Department of Radiology, Lausanne (Switzerland); Meuli, Reto [University Hospital (CHUV), Department of Radiology, Lausanne (Switzerland); Thiran, Jean-Philippe [LTS5, Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland); Krueger, Gunnar [Siemens Medical Solutions USA, Inc., Boston, MA (United States); Lovblad, Karl-Olof [University of Geneva, Faculty of Medicine, Geneva (Switzerland); University Hospitals of Geneva, Geneva (Switzerland); Kober, Tobias [Siemens Healthcare HC CEMEA SUI DI BM PI, Advanced Clinical Imaging Technology, Lausanne (Switzerland); LTS5, Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland)

    2016-11-15

    Automated brain MRI morphometry, including hippocampal volumetry for Alzheimer disease, is increasingly recognized as a biomarker. Consequently, a rapidly increasing number of software tools have become available. We tested whether modifications of simple MR protocol parameters typically used in clinical routine systematically bias automated brain MRI segmentation results. The study was approved by the local ethical committee and included 20 consecutive patients (13 females, mean age 75.8 ± 13.8 years) undergoing clinical brain MRI at 1.5 T for workup of cognitive decline. We compared three 3D T1 magnetization prepared rapid gradient echo (MPRAGE) sequences with the following parameter settings: ADNI-2, 1.2 mm iso-voxel, no image filtering; LOCAL-, 1.0 mm iso-voxel, no image filtering; LOCAL+, 1.0 mm iso-voxel with image edge enhancement. Brain segmentation was performed by two different and established analysis tools, FreeSurfer and MorphoBox, using standard parameters. Spatial resolution (1.0 versus 1.2 mm iso-voxel) and modification in contrast resulted in relative estimated volume differences of up to 4.28 % (p < 0.001) in cortical gray matter and 4.16 % (p < 0.01) in the hippocampus. Image data filtering resulted in estimated volume differences of up to 5.48 % (p < 0.05) in cortical gray matter. A simple change of MR parameters, notably spatial resolution, contrast, and filtering, may systematically bias results of automated brain MRI morphometry by up to 4-5 %. This is in the same range as early disease-related brain volume alterations, for example, in Alzheimer disease. Automated brain segmentation software packages should therefore require strict MR parameter selection or include compensatory algorithms to avoid MR parameter-related bias of brain morphometry results. (orig.)

  19. Automated modal parameter estimation using correlation analysis and bootstrap sampling

    Science.gov (United States)

    Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.

    2018-02-01

    The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences with the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement, dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to
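The core idea, that physical modes recur across bootstrap resamples while noise modes scatter, can be shown with a deliberately tiny numeric stand-in. The subspace identification, correlation metrics, and eigenvector handling of the paper are all replaced here by toy functions; only the bootstrap-then-cluster logic (including a "trash box" for strays) is illustrated.

```python
import random
import statistics

rng = random.Random(3)

def identify_modes(sample):
    """Stand-in for subspace identification: returns one 'physical' mode
    estimate (stable across resamples) and one 'noise' mode (scattered)."""
    return [statistics.mean(sample), rng.uniform(0.0, 100.0)]

def bootstrap_modes(data, n_resamples=200):
    estimates = []
    for _ in range(n_resamples):
        resample = [rng.choice(data) for _ in data]  # bootstrap resample
        estimates.extend(identify_modes(resample))
    return estimates

def cluster_modes(estimates, tol=0.2, min_count=50):
    """Greedy 1-D clustering: estimates recurring across resamples form
    dense clusters (kept as physical modes); sparse strays go to a
    trash box, mirroring the paper's Trashbox cluster."""
    clusters = []
    for f in sorted(estimates):
        if clusters and f - clusters[-1][-1] <= tol:
            clusters[-1].append(f)
        else:
            clusters.append([f])
    physical = [statistics.mean(c) for c in clusters if len(c) >= min_count]
    trash = [f for c in clusters if len(c) < min_count for f in c]
    return physical, trash

# Synthetic 'measurement': one true mode near 42 Hz, observed with noise.
data = [rng.gauss(42.0, 0.2) for _ in range(100)]
physical, trash = cluster_modes(bootstrap_modes(data))
print("physical modes:", physical, "| noise estimates discarded:", len(trash))
```

The single dense cluster survives near 42 Hz while the scattered noise estimates land in the trash box, which is the qualitative behavior the paper's correlation-based clustering achieves on real modal data.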

  20. Optimal Laser Phototherapy Parameters for Pain Relief.

    Science.gov (United States)

    Kate, Rohit J; Rubatt, Sarah; Enwemeka, Chukuka S; Huddleston, Wendy E

    2018-03-27

    Studies on laser phototherapy for pain relief have used parameters that vary widely and have reported varying outcomes. The purpose of this study was to determine the optimal parameter ranges of laser phototherapy for pain relief by analyzing data aggregated from the existing primary literature. Original studies were gathered from available sources and were screened to meet the pre-established inclusion criteria. The included articles were then subjected to meta-analysis using Cohen's d statistic for determining treatment effect size. From these studies, ranges of the reported parameters that always resulted in large effect sizes were determined. These optimal ranges were evaluated for their accuracy using a leave-one-article-out cross-validation procedure. A total of 96 articles met the inclusion criteria for meta-analysis and yielded 232 effect sizes. The average effect size was highly significant: d = +1.36 (95% confidence interval [CI] = 1.04-1.68). Among all the parameters, total energy was found to have the greatest effect on pain relief and had the most prominent optimal ranges of 120-162 J and 15.36-20.16 J, which always resulted in large effect sizes. The cross-validation accuracy of the optimal ranges for total energy was 68.57% (95% CI = 53.19-83.97). Fewer and less-prominent optimal ranges were obtained for the energy density and duration parameters. None of the remaining parameters was found to be independently related to pain relief outcomes. The findings of the meta-analysis indicate that laser phototherapy is highly effective for pain relief. Based on the analysis of parameters, total energy can be optimized to yield the largest effect on pain relief.
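Cohen's d, the effect-size statistic this meta-analysis aggregates, is the difference between group means divided by a pooled standard deviation (d above 0.8 is conventionally "large"). The data below are invented for illustration and are not from any of the included studies.

```python
import math
import statistics

def cohens_d(treatment, control):
    """Cohen's d with a pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled

# Hypothetical pain-reduction scores (arbitrary units), not study data.
laser = [5.1, 4.8, 5.5, 5.0, 4.9, 5.3]
placebo = [3.9, 4.1, 3.8, 4.2, 4.0, 3.7]
print("Cohen's d:", round(cohens_d(laser, placebo), 2))
```

A meta-analysis like this one computes such a d for each study arm and then pools the 232 resulting effect sizes, weighting and aggregating them to obtain the overall d = +1.36.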

  1. Optimal design criteria - prediction vs. parameter estimation

    Science.gov (United States)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction: it tries to minimize the kriging variance over the whole design region, so a G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that, in practice, the G-optimal design cannot really be found with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation: a D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on this Pareto frontier yields almost as good results as searching for it in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
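The D-criterion mentioned here (maximize the determinant of the information matrix) is easiest to see in the simplest possible setting: a straight-line regression on [0, 1] rather than the kriging models of the abstract. The exhaustive search below is only feasible because the toy candidate set is tiny.

```python
import itertools

def d_criterion(design):
    """det(X^T X) for the straight-line model y = b0 + b1*x; maximizing
    this determinant of the information matrix is the D-criterion."""
    n = len(design)
    s1 = sum(design)
    s2 = sum(x * x for x in design)
    return n * s2 - s1 * s1  # determinant of [[n, s1], [s1, s2]]

candidates = [i / 10 for i in range(11)]  # candidate points in [0, 1]
best = max(itertools.combinations_with_replacement(candidates, 4),
           key=d_criterion)
print("D-optimal 4-run design:", best)
```

The search recovers the textbook answer, two runs at each endpoint of the interval, because spreading the design points as far apart as possible maximizes the information about the slope. For kriging models the same determinant principle applies, but the information matrix involves the covariance parameters, which is why trend-D-optimality and covariance-D-optimality give different designs.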

  2. Cosmological parameter estimation using Particle Swarm Optimization

    Science.gov (United States)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and they make the problem of parameter estimation challenging. It is common practice to employ a Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
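A minimal PSO looks as follows: each particle is pulled toward its own best position and the swarm's best position. The bowl-shaped negative log-likelihood, the three "cosmological" parameters, and all coefficients below are toy stand-ins for the actual CMB likelihood the paper explores.

```python
import random

def pso(objective, dim, bounds, n_particles=30, n_iter=200, seed=7):
    """Minimal Particle Swarm Optimization (minimization)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy negative log-likelihood: a smooth bowl with its minimum at the
# 'true' parameter values (invented, not WMAP constraints).
truth = [0.3, 0.7, 0.96]
nll = lambda p: sum((x - t) ** 2 for x, t in zip(p, truth))
best = pso(nll, dim=3, bounds=(0.0, 1.0))
print("swarm best:", best)
```

On a smooth surface like this, PSO and MCMC both succeed; the paper's point is that a swarm needs no proposal tuning and can cope better when the surface is rough or high-dimensional.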

  3. Cosmological parameter estimation using Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Prasad, J; Souradeep, T

    2014-01-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence and called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.

  4. Optimal parameters uncoupling vibration modes of oscillators

    Science.gov (United States)

    Le, K. C.; Pieper, A.

    2017-07-01

    This paper proposes a novel optimization concept for an oscillator with two degrees of freedom. By using specially defined motion ratios, we control the action of the springs on each degree of freedom of the oscillator. We aim to show that, if the potential action of the springs over one period of vibration, used as the payoff function for the conservative oscillator, is maximized among all admissible parameters and motions satisfying Lagrange's equations, then the optimal motion ratios uncouple the vibration modes. A similar result holds true for a dissipative oscillator with dampers. The application to the optimal design of vehicle suspensions is discussed.

  5. Mixed integer evolution strategies for parameter optimization.

    Science.gov (United States)

    Li, Rui; Emmerich, Michael T M; Eggermont, Jeroen; Bäck, Thomas; Schütz, M; Dijkstra, J; Reiber, J H C

    2013-01-01

    Evolution strategies (ESs) are powerful probabilistic search and optimization algorithms gleaned from biological evolution theory. They have been successfully applied to a wide range of real world applications. The modern ESs are mainly designed for solving continuous parameter optimization problems. Their ability to adapt the parameters of the multivariate normal distribution used for mutation during the optimization run makes them well suited for this domain. In this article we describe and study mixed integer evolution strategies (MIES), which are natural extensions of ES for mixed integer optimization problems. MIES can deal with parameter vectors consisting not only of continuous variables but also with nominal discrete and integer variables. Following the design principles of the canonical evolution strategies, they use specialized mutation operators tailored for the aforementioned mixed parameter classes. For each type of variable, the choice of mutation operators is governed by a natural metric for this variable type, maximal entropy, and symmetry considerations. All distributions used for mutation can be controlled in their shape by means of scaling parameters, allowing self-adaptation to be implemented. After introducing and motivating the conceptual design of the MIES, we study the optimality of the self-adaptation of step sizes and mutation rates on a generalized (weighted) sphere model. Moreover, we prove global convergence of the MIES on a very general class of problems. The remainder of the article is devoted to performance studies on artificial landscapes (barrier functions and mixed integer NK landscapes), and a case study in the optimization of medical image analysis systems. In addition, we show that with proper constraint handling techniques, MIES can also be applied to classical mixed integer nonlinear programming problems.
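A minimal sketch (our illustration, not the authors' code) of the three MIES mutation operators described above: Gaussian perturbation for continuous variables, a symmetric difference of two geometrically distributed variables for integers (a maximal-entropy choice on the integers), and uniform resampling with a small probability for nominal variables. The step-size parameters are fixed here, whereas the article self-adapts them.

```python
import random

def geometric(q):
    """Number of failures before the first success (success probability q)."""
    k = 0
    while random.random() >= q:
        k += 1
    return k

def mies_mutate(cont, ints, noms, sigma=0.1, q=0.5, p=0.1, domains=None):
    """One MIES-style mutation of a mixed parameter vector (simplified:
    fixed step sizes instead of the self-adaptation used in the article)."""
    new_cont = [x + random.gauss(0.0, sigma) for x in cont]
    new_ints = [x + geometric(q) - geometric(q) for x in ints]  # symmetric step
    new_noms = [random.choice(domains[i]) if random.random() < p else x
                for i, x in enumerate(noms)]
    return new_cont, new_ints, new_noms

random.seed(0)
child = mies_mutate([0.5], [10], ["relu"], domains=[["relu", "tanh", "sigmoid"]])
print(child)
```

All three operators are unbiased: on average the child stays at the parent, which is the symmetry property the article requires of its mutation distributions.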

  6. Optimization of Robotic Spray Painting process Parameters using Taguchi Method

    Science.gov (United States)

    Chidhambara, K. V.; Latha Shankar, B.; Vijaykumar

    2018-02-01

    The automated spray painting process has recently gained interest in industry and research due to the extensive application of spray painting in the automobile industry. Automating the spray painting process offers improved quality, productivity, reduced labor, a cleaner environment and, in particular, cost effectiveness. This study investigates the performance characteristics of an industrial robot Fanuc 250ib in an automated painting process using Taguchi's Design of Experiments technique. The experiment is designed using Taguchi's L25 orthogonal array, considering three factors and five levels for each factor. The objective of this work is to explore the major control parameters and to optimize them for improved quality of the paint coating, measured in terms of Dry Film Thickness (DFT), which also reduces rejection. Analysis of Variance (ANOVA) is then performed to determine the influence of the individual factors on DFT. It is observed that shaping air and paint flow are the most influential parameters. A multiple regression model is formulated for estimating predicted values of DFT. A confirmation test is then conducted, and the comparison shows that the error is within an acceptable level.
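For context, Taguchi analysis ranks factor-level settings by a signal-to-noise (S/N) ratio. A sketch with hypothetical DFT readings is shown below, using the larger-the-better form; a nominal-the-best form is also common for coating thickness, and the abstract does not say which the authors used.

```python
import math

def sn_larger_is_better(ys):
    """Taguchi S/N ratio for a larger-the-better response:
    S/N = -10 * log10(mean(1 / y^2))."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

# Hypothetical DFT measurements (microns) for two factor-level settings:
print(sn_larger_is_better([38.0, 40.0, 42.0]))  # higher S/N = better setting
print(sn_larger_is_better([25.0, 30.0, 55.0]))  # same mean, more scatter
```

The setting with the higher S/N ratio is preferred: the ratio rewards both a large response and low variability across replicates.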

  7. A Novel adaptative Discrete Cuckoo Search Algorithm for parameter optimization in computer vision

    Directory of Open Access Journals (Sweden)

    loubna benchikhi

    2017-10-01

    Full Text Available Computer vision applications require choosing operators and their parameters in order to produce the best outcomes. Often, users draw on expert knowledge and must try many combinations to find the best one manually. As performance, time and accuracy are important, it is necessary to automate parameter optimization, at least for crucial operators. In this paper, a novel approach based on an adaptive discrete cuckoo search algorithm (ADCS) is proposed. It automates the setting of algorithms and provides optimal parameters for vision applications. This work reconsiders a discretization problem to adapt the cuckoo search algorithm and presents the procedure of parameter optimization. Experiments on real examples and comparisons to other metaheuristic-based approaches, namely particle swarm optimization (PSO), reinforcement learning (RL) and ant colony optimization (ACO), show the efficiency of this novel method.
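The cuckoo search algorithm referenced above generates new candidate solutions via Lévy flights. A common way to draw Lévy-distributed steps is Mantegna's algorithm, sketched here; this is standard continuous cuckoo-search machinery, not the authors' discrete adaptation.

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-flight step via Mantegna's algorithm:
    step = u / |v|^(1/beta), with u ~ N(0, sigma^2) and v ~ N(0, 1)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

random.seed(0)
steps = [levy_step() for _ in range(1000)]
print(max(abs(s) for s in steps))  # heavy tail: occasional very large jumps
```

The heavy-tailed step distribution is what lets cuckoo search mix local refinement with occasional long exploratory jumps.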

  8. Chickpea seeds germination rational parameters optimization

    Science.gov (United States)

    Safonova, Yu A.; Ivliev, M. N.; Lemeshkin, A. V.

    2018-05-01

    The paper presents experimental results on the influence of chickpea seed bioactivation parameters on enzymatic activity. Optimal bioactivation process modes were obtained by regression-factor analysis: a process temperature of 13.6 °C and a process duration of 71.5 h. It was found that during germination the activity of the proteolytic, amylolytic and lipolytic enzymes increased, while the activity of the urease enzyme decreased. The dependences of enzyme activity on the chickpea seed germination conditions were obtained by mathematical processing of the experimental data. The calculated data are in good agreement with the experimental ones, which confirms the efficiency of optimization based on mathematical planning of experiments for determining the optimal germination parameters of bioactivated chickpea seeds.

  9. Automated firewall analytics design, configuration and optimization

    CERN Document Server

    Al-Shaer, Ehab

    2014-01-01

    This book provides a comprehensive and in-depth study of automated firewall policy analysis for designing, configuring and managing distributed firewalls in large-scale enterprise networks. It presents methodologies, techniques and tools for researchers as well as professionals to understand the challenges and improve the state of the art of managing firewalls systematically in both research and application domains. Chapters explore set theory, managing firewall configuration globally and consistently, access control lists with encryption, and authentication such as IPSec policies. The author

  10. Automated beam steering using optimal control

    Energy Technology Data Exchange (ETDEWEB)

    Allen, C. K. (Christopher K.)

    2004-01-01

    We present a steering algorithm which, with the aid of a model, allows the user to specify beam behavior throughout a beamline, rather than just at specified beam position monitor (BPM) locations. The model is used primarily to compute the values of the beam phase vectors from BPM measurements and to define cost functions that describe the steering objectives. The steering problem is formulated as a constrained optimization problem; however, by applying optimal control theory we can reduce it to an unconstrained optimization whose dimension is the number of control signals.
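The unconstrained reduction described in the abstract can be illustrated with the simplest flavour of model-based steering (our toy numbers, not the paper's formulation): given a response matrix relating corrector kicks to BPM readings, the least-squares kicks that flatten the orbit follow from a pseudoinverse.

```python
import numpy as np

# Hypothetical response matrix R: BPM reading change per unit corrector kick
# (3 BPMs, 2 correctors).
R = np.array([[1.0, 0.5],
              [0.8, 1.2],
              [0.3, 0.9]])
x0 = np.array([2.0, -1.0, 0.5])  # measured orbit at the BPMs

# Unconstrained least-squares steering: choose kicks u minimizing ||x0 + R u||^2.
u = -np.linalg.pinv(R) @ x0
residual = x0 + R @ u
print(u, np.linalg.norm(residual))
```

The optimization dimension is the number of control signals (here two corrector kicks), exactly as in the reduction the abstract describes.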

  11. Towards automated diffraction tomography. Part II-Cell parameter determination

    International Nuclear Information System (INIS)

    Kolb, U.; Gorelik, T.; Otten, M.T.

    2008-01-01

    Automated diffraction tomography (ADT) allows the collection of three-dimensional (3D) diffraction data sets from crystals down to a size of only a few nanometres. Imaging is done in STEM mode, and diffraction data are collected with quasi-parallel-beam nanoelectron diffraction (NED). Here, we present a set of processing steps necessary for automatic unit-cell parameter determination from the collected 3D diffraction data. Cell parameter determination is done via extraction of peak positions from a recorded data set (called the data reduction path), followed by cluster analysis of difference vectors. The procedure of lattice parameter determination is presented in detail for a beam-sensitive organic material. Independently, we demonstrate the potential of an alternative approach (called the full integration path) based on 3D reconstruction of reciprocal space, visualising special structural features of materials such as partial disorder. Furthermore, we describe new features implemented into the acquisition part

  12. Optimization of parameters of heat exchangers vehicles

    Directory of Open Access Journals (Sweden)

    Andrei MELEKHIN

    2014-09-01

    Full Text Available The relevance of the topic stems from the need to save resources in the heating systems of vehicles. To solve this problem we have developed an integrated research method that makes it possible to optimize the parameters of vehicle heat exchangers. The method solves a multicriteria optimization problem using nonlinear optimization software, taking as input an array of temperatures obtained by thermography. The authors have developed a mathematical model of the heat exchange process on the heat-exchange surfaces of the apparatus, solved the multicriteria optimization problem and checked its adequacy against an experimental stand with visualization of the thermal fields. They determined an optimal range of controlled parameters influencing the heat exchange process with minimal metal consumption and maximum heat output of the finned heat exchanger, established the regularities of the heat exchange process by deriving generalized dependences for the temperature distribution on the heat-release surface of vehicle heat exchangers, and confirmed the convergence of the results computed from the theoretical dependences and the mathematical model with the experimental data.

  13. Parameter optimization for surface flux transport models

    Science.gov (United States)

    Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.

    2017-11-01

    Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.
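To make the genetic-algorithm step concrete, here is a toy version of such a fitting loop. The model and data are entirely hypothetical (the paper fits observed butterfly diagrams with a latitude-weighted error): a minimal GA with tournament selection and Gaussian mutation recovers two parameters of a decaying signal, loosely analogous to the initial field strength and radial decay time searched for above.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 200)

def model(params):
    """Toy stand-in for a simulation output: amplitude and decay time."""
    amp, tau = params
    return amp * np.exp(-t / tau)

observed = model((2.0, 3.0))  # synthetic 'observations'

def fitness(p):
    return -np.mean((model(p) - observed) ** 2)  # higher is better

pop = rng.uniform([0.5, 0.5], [5.0, 8.0], (40, 2))
best, best_score = pop[0].copy(), fitness(pop[0])
for _ in range(150):
    scores = np.array([fitness(p) for p in pop])
    i = int(np.argmax(scores))
    if scores[i] > best_score:
        best, best_score = pop[i].copy(), scores[i]
    # Tournament selection: each child slot pits two random individuals.
    a, b = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((scores[a] > scores[b])[:, None], pop[a], pop[b])
    # Gaussian mutation (crossover omitted for brevity).
    pop = parents + rng.normal(0.0, 0.05, parents.shape)
print(best)  # should approach (2.0, 3.0)
```

A real application replaces `model` with a surface flux transport run and `fitness` with the latitude-weighted misfit to magnetogram data, which is why each generation is expensive and the search is worth automating.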

  14. Controller Design Automation for Aeroservoelastic Design Optimization of Wind Turbines

    NARCIS (Netherlands)

    Ashuri, T.; Van Bussel, G.J.W.; Zaayer, M.B.; Van Kuik, G.A.M.

    2010-01-01

    The purpose of this paper is to integrate the controller design of wind turbines with structure and aerodynamic analysis and use the final product in the design optimization process (DOP) of wind turbines. To do that, the controller design is automated and integrated with an aeroelastic simulation

  15. Cosmological parameter estimation using particle swarm optimization

    Science.gov (United States)

    Prasad, Jayanti; Souradeep, Tarun

    2012-06-01

    Constraining theoretical models, which are represented by a set of parameters, using observational data is an important exercise in cosmology. In the Bayesian framework this is done by finding the probability distribution of the parameters that best fits the observational data, using sampling-based methods like Markov chain Monte Carlo (MCMC). It has been argued that MCMC may not be the best option for certain problems in which the target function (likelihood) has local maxima or very high dimensionality. Apart from this, there are cases in which we are mainly interested in finding the point in parameter space at which the probability distribution attains its largest value. In this situation the problem of parameter estimation becomes an optimization problem. In the present work we show that particle swarm optimization (PSO), an artificial-intelligence-inspired population-based search procedure, can also be used for cosmological parameter estimation. Using PSO we were able to recover the best-fit Λ cold dark matter (LCDM) model parameters from the WMAP seven-year data without using any prior guess value or any other property of the probability distribution of the parameters, such as the standard deviation, as is common in MCMC. We also report the results of an exercise in which we consider a binned primordial power spectrum (to increase the dimensionality of the problem) and find that a power spectrum with features gives a lower chi-square than the standard power law. Since PSO does not sample the likelihood surface in a fair way, we follow a fitting procedure to find the spread of the likelihood function around the best-fit point.
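The PSO update rule (each particle's velocity is pulled toward its personal best and the swarm's global best) can be sketched on a toy chi-square surface. In the real application each particle evaluation is a CMB likelihood computation; the quadratic stand-in below is our own simplification.

```python
import numpy as np

rng = np.random.default_rng(0)

def chi2(theta):
    """Toy stand-in for a likelihood surface: chi-square of a model
    with true parameters (2.0, -1.0, 0.5)."""
    return float(np.sum((theta - np.array([2.0, -1.0, 0.5])) ** 2))

# Standard PSO update: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
n_particles, n_dim, n_iter = 30, 3, 200
w, c1, c2 = 0.7, 1.5, 1.5
x = rng.uniform(-5, 5, (n_particles, n_dim))
v = np.zeros_like(x)
pbest = x.copy()
pbest_val = np.array([chi2(p) for p in x])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, n_dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    vals = np.array([chi2(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(gbest)  # converges toward (2.0, -1.0, 0.5)
```

Note that the swarm concentrates around the best-fit point rather than sampling the surface fairly, which is why the paper follows up with a separate fitting procedure to estimate the spread of the likelihood.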

  16. Self-optimizing approach for automated laser resonator alignment

    Science.gov (United States)

    Brecher, C.; Schmitt, R.; Loosen, P.; Guerrero, V.; Pyschny, N.; Pavim, A.; Gatej, A.

    2012-02-01

    Nowadays, the assembly of laser systems is dominated by manual operations, involving elaborate alignment by means of adjustable mountings. From a competition perspective, the most challenging problem in laser source manufacturing is price pressure, a result of cost competition exerted mainly from Asia. From an economical point of view, an automated assembly of laser systems defines a better approach to produce more reliable units at lower cost. However, the step from today's manual solutions towards an automated assembly requires parallel developments regarding product design, automation equipment and assembly processes. This paper introduces briefly the idea of self-optimizing technical systems as a new approach towards highly flexible automation. Technically, the work focuses on the precision assembly of laser resonators, which is one of the final and most crucial assembly steps in terms of beam quality and laser power. The paper presents a new design approach for miniaturized laser systems and new automation concepts for a robot-based precision assembly, as well as passive and active alignment methods, which are based on a self-optimizing approach. Very promising results have already been achieved, considerably reducing the duration and complexity of the laser resonator assembly. These results as well as future development perspectives are discussed.

  17. An efficient automated parameter tuning framework for spiking neural networks.

    Science.gov (United States)

    Carlson, Kristofor D; Nageswaran, Jayram Moorkanikara; Dutt, Nikil; Krichmar, Jeffrey L

    2014-01-01

    As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EA) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation was carried out and showed a speedup of 65× of the GPU implementation over the CPU implementation, or 0.35 h per generation for GPU vs. 23.5 h per generation for CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.

  18. A fully-automated software pipeline for integrating breast density and parenchymal texture analysis for digital mammograms: parameter optimization in a case-control breast cancer risk assessment study

    Science.gov (United States)

    Zheng, Yuanjie; Wang, Yan; Keller, Brad M.; Conant, Emily; Gee, James C.; Kontos, Despina

    2013-02-01

    Estimating a woman's risk of breast cancer is becoming increasingly important in clinical practice. Mammographic density, estimated as the percentage of dense (PD) tissue area within the breast, has been shown to be a strong risk factor. Studies also support a relationship between mammographic texture and breast cancer risk. We have developed a fully-automated software pipeline for computerized analysis of digital mammography parenchymal patterns that quantitatively measures both breast density and texture properties. Our pipeline combines advanced computer algorithms of pattern recognition, computer vision, and machine learning and offers a standardized tool for breast cancer risk assessment studies. Unlike many existing methods that perform parenchymal texture analysis within specific breast subregions, our pipeline extracts texture descriptors for points on a regular spatial lattice, and from a window surrounding each lattice point, to characterize the local mammographic appearance throughout the whole breast. To demonstrate the utility of our pipeline, and to optimize its parameters, we perform a case-control study by retrospectively analyzing a total of 472 digital mammography studies. Specifically, we investigate the window size, a lattice-related parameter, and compare the performance of texture features to that of breast PD in classifying case-control status. Our results suggest that different window sizes may be optimal for raw (12.7 mm2) versus vendor post-processed images (6.3 mm2). We also show that the combination of PD and texture features outperforms PD alone. The improvement is significant (p=0.03) when raw images and a window size of 12.7 mm2 are used, yielding an ROC AUC of 0.66. The combination of PD and our texture features computed from post-processed images with a window size of 6.3 mm2 achieves an ROC AUC of 0.75.
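The ROC AUC figures quoted above have a simple rank interpretation: the probability that a randomly chosen case scores higher than a randomly chosen control. A minimal computation on hypothetical classifier scores:

```python
def roc_auc(case_scores, control_scores):
    """Empirical ROC AUC = P(case score > control score); ties count 1/2."""
    wins = sum((c > n) + 0.5 * (c == n)
               for c in case_scores for n in control_scores)
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical risk scores for 3 cases and 3 controls:
print(roc_auc([0.9, 0.7, 0.6], [0.5, 0.4, 0.8]))
```

An AUC of 0.5 means the classifier is no better than chance; 1.0 means perfect separation, so the study's jump from 0.66 (PD alone, raw images) to 0.75 (PD plus texture, post-processed images) is a meaningful gain.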

  19. Automated Design Framework for Synthetic Biology Exploiting Pareto Optimality.

    Science.gov (United States)

    Otero-Muras, Irene; Banga, Julio R

    2017-07-21

    In this work we consider Pareto optimality for automated design in synthetic biology. We present a generalized framework based on a mixed-integer dynamic optimization formulation that, given design specifications, allows the computation of Pareto optimal sets of designs, that is, the set of best trade-offs for the metrics of interest. We show how this framework can be used for (i) forward design, that is, finding the Pareto optimal set of synthetic designs for implementation, and (ii) reverse design, that is, analyzing and inferring motifs and/or design principles of gene regulatory networks from the Pareto set of optimal circuits. Finally, we illustrate the capabilities and performance of this framework considering four case studies. In the first problem we consider the forward design of an oscillator. In the remaining problems, we illustrate how to apply the reverse design approach to find motifs for stripe formation, rapid adaption, and fold-change detection, respectively.

  20. Automated inference procedure for the determination of cell growth parameters

    Science.gov (United States)

    Harris, Edouard A.; Koh, Eun Jee; Moffat, Jason; McMillen, David R.

    2016-01-01

    The growth rate and carrying capacity of a cell population are key to the characterization of the population's viability and to the quantification of its responses to perturbations such as drug treatments. Accurate estimation of these parameters necessitates careful analysis. Here, we present a rigorous mathematical approach for the robust analysis of cell count data, in which all the experimental stages of the cell counting process are investigated in detail with the machinery of Bayesian probability theory. We advance a flexible theoretical framework that permits accurate estimates of the growth parameters of cell populations and of the logical correlations between them. Moreover, our approach naturally produces an objective metric of avoidable experimental error, which may be tracked over time in a laboratory to detect instrumentation failures or lapses in protocol. We apply our method to the analysis of cell count data in the context of a logistic growth model by means of a user-friendly computer program that automates this analysis, and present some samples of its output. Finally, we note that a traditional least squares fit can provide misleading estimates of parameter values, because it ignores available information with regard to the way in which the data have actually been collected.
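The logistic growth model referenced here has a closed form, N(t) = K / (1 + (K/N0 − 1) e^(−rt)). A minimal sketch recovers the growth rate r and carrying capacity K from synthetic, noise-free counts by a coarse grid search on the squared error; the paper's point is that on real, noisy count data a full Bayesian treatment is more reliable than such least-squares fitting, so this only illustrates the model, not the authors' method.

```python
import numpy as np

def logistic(t, n0, r, K):
    """Closed-form solution of logistic growth dN/dt = r N (1 - N/K)."""
    return K / (1.0 + (K / n0 - 1.0) * np.exp(-r * t))

t = np.arange(0.0, 48.0, 4.0)                  # hours
counts = logistic(t, 1e4, 0.25, 1e6)           # synthetic, noise-free counts

# Coarse grid search for (r, K); real analyses use MCMC or nonlinear solvers.
rs = np.linspace(0.1, 0.5, 81)
Ks = np.linspace(5e5, 2e6, 61)
sse = [[np.sum((logistic(t, 1e4, r, K) - counts) ** 2) for K in Ks] for r in rs]
i, j = np.unravel_index(np.argmin(sse), (len(rs), len(Ks)))
print(rs[i], Ks[j])  # recovers r = 0.25, K = 1e6
```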

  1. Optimizing transformations for automated, high throughput analysis of flow cytometry data.

    Science.gov (United States)

    Finak, Greg; Perez, Juan-Manuel; Weng, Andrew; Gottardo, Raphael

    2010-11-04

    In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. 
We find that parameter-optimized transformations improve visualization, reduce

  2. Optimizing transformations for automated, high throughput analysis of flow cytometry data

    Directory of Open Access Journals (Sweden)

    Weng Andrew

    2010-11-01

    Full Text Available Abstract Background In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. Results We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. 
We find that parameter-optimized
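Of the transformations listed in these two records, the hyperbolic arcsine is the easiest to illustrate: it is roughly linear near zero (so negative and zero events survive) and logarithmic for large intensities, with a single adjustable "cofactor" of the kind the authors optimize by maximum likelihood. The cofactor value below is an arbitrary placeholder, not an optimized one.

```python
import numpy as np

def asinh_transform(x, cofactor=150.0):
    """Hyperbolic arcsine transform as used in flow cytometry:
    ~ x/cofactor near zero, ~ log(2x/cofactor) for large x."""
    return np.arcsinh(np.asarray(x) / cofactor)

x = np.array([-100.0, 0.0, 100.0, 1e4, 1e5])
print(asinh_transform(x))
```

Choosing the cofactor well is exactly the non-intuitive parameter decision these papers automate: too small and low-intensity noise is stretched out, too large and genuinely distinct dim populations are compressed together.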

  3. Geometry Based Design Automation : Applied to Aircraft Modelling and Optimization

    OpenAIRE

    Amadori, Kristian

    2012-01-01

    Product development processes are continuously challenged by demands for increased efficiency. As engineering products become more and more complex, efficient tools and methods for integrated and automated design are needed throughout the development process. Multidisciplinary Design Optimization (MDO) is one promising technique that has the potential to drastically improve concurrent design. MDO frameworks combine several disciplinary models with the aim of gaining a holistic perspective of ...

  4. Simulation based optimization on automated fibre placement process

    Science.gov (United States)

    Lei, Shi

    2018-02-01

    In this paper, a software-simulation-based method (using Autodesk TruPlan & TruFiber) is proposed to optimize the automated fibre placement (AFP) process. Different types of manufacturability analysis are introduced to predict potential defects. Advanced fibre path generation algorithms are compared with respect to geometrically different parts. Major manufacturing data have been taken into consideration prior to tool path generation to achieve a high manufacturing success rate.

  5. Optimization of nonlinear wave function parameters

    International Nuclear Information System (INIS)

    Shepard, R.; Minkoff, M.; Chemistry

    2006-01-01

    An energy-based optimization method is presented for our recently developed nonlinear wave function expansion form for electronic wave functions. This expansion form is based on spin eigenfunctions, using the graphical unitary group approach (GUGA). The wave function is expanded in a basis of product functions, allowing application to closed-shell and open-shell systems and to ground and excited electronic states. Each product basis function is itself a multiconfigurational function that depends on a relatively small number of nonlinear parameters called arc factors. The energy-based optimization is formulated in terms of analytic arc factor gradients and orbital-level Hamiltonian matrices that correspond to a specific kind of uncontraction of each of the product basis functions. These orbital-level Hamiltonian matrices give an intuitive representation of the energy in terms of disjoint subsets of the arc factors, they provide for an efficient computation of gradients of the energy with respect to the arc factors, and they allow optimal arc factors to be determined in closed form for subspaces of the full variation problem. Timings for energy and arc factor gradient computations involving expansion spaces of more than 10^24 configuration state functions are reported. Preliminary convergence studies and molecular dissociation curves are presented for some small molecules.

  6. Automated parameter tuning applied to sea ice in a global climate model

    Science.gov (United States)

    Roach, Lettie A.; Tett, Simon F. B.; Mineter, Michael J.; Yamazaki, Kuniko; Rae, Cameron D.

    2018-01-01

    This study investigates the hypothesis that a significant portion of spread in climate model projections of sea ice is due to poorly constrained model parameters. New automated methods for optimization are applied to historical sea ice in a global coupled climate model (HadCM3) in order to calculate the combination of parameters required to reduce the difference between simulation and observations to within the range of model noise. The optimized parameters result in a simulated sea-ice time series which is more consistent with Arctic observations throughout the satellite record (1980-present), particularly in the September minimum, than the standard configuration of HadCM3. Divergence from observed Antarctic trends and mean regional sea ice distribution reflects broader structural uncertainty in the climate model. We also find that the optimized parameters do not cause adverse effects on the model climatology. This simple approach provides evidence for the contribution of parameter uncertainty to spread in sea ice extent trends and could be customized to investigate uncertainties in other climate variables.
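    The tuning loop described above — perturb parameters, rerun the model, and keep configurations that shrink the simulation-observation misfit measured against the model-noise level — can be sketched as follows. Everything here is illustrative: the toy `simulate` function, the parameter names, and the noise level stand in for a HadCM3 run and its real tunables.

```python
import random

# Toy stand-in for a climate-model run: maps two hypothetical tunable
# parameters to a simulated sea-ice time series (NOT the paper's model).
def simulate(albedo, drag):
    return [albedo * 10.0 + drag * t for t in range(5)]

OBS = [6.0, 6.5, 7.0, 7.5, 8.0]   # synthetic "observations"
SIGMA = 0.2                        # assumed model-noise level

def misfit(params):
    # Chi-square-style cost: squared misfit scaled by model noise.
    sim = simulate(*params)
    return sum(((s - o) / SIGMA) ** 2 for s, o in zip(sim, OBS))

def tune(n_iter=2000, seed=1):
    # Simple random-search tuner over the unit box, keeping the best.
    rng = random.Random(seed)
    best = (0.5, 0.1)              # default configuration
    best_cost = misfit(best)
    for _ in range(n_iter):
        cand = (rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0))
        c = misfit(cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost
```

A real application would replace `simulate` with a model run and use a more sample-efficient optimizer, but the accept-if-misfit-shrinks structure is the same.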

  7. Automated Multivariate Optimization Tool for Energy Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, P. G.; Griffith, B. T.; Long, N.; Torcellini, P. A.; Crawley, D.

    2006-07-01

    Building energy simulations are often used for trial-and-error evaluation of "what-if" options in building design: a limited search for an optimal solution, or "optimization". Computerized searching has the potential to automate the input and output, evaluate many options, and perform enough simulations to account for the complex interactions among combinations of options. This paper describes ongoing efforts to develop such a tool. The optimization tool employs multiple modules, including a graphical user interface, a database, a preprocessor, the EnergyPlus simulation engine, an optimization engine, and a simulation run manager. Each module is described and the overall application architecture is summarized.

  8. Hybrid Disease Diagnosis Using Multiobjective Optimization with Evolutionary Parameter Optimization

    Directory of Open Access Journals (Sweden)

    MadhuSudana Rao Nalluri

    2017-01-01

    Full Text Available With the widespread adoption of e-Healthcare and telemedicine applications, accurate, intelligent disease diagnosis systems have been profoundly coveted. In recent years, numerous individual machine-learning-based classifiers have been proposed and tested, and it is now widely accepted that a single classifier cannot effectively classify and diagnose all diseases. This has led to a number of recent research attempts to arrive at a consensus using ensemble classification techniques. In this paper, a hybrid system is proposed to diagnose ailments by optimizing individual classifier parameters for two classifier techniques, namely, the support vector machine (SVM) and the multilayer perceptron (MLP). We employ three recent evolutionary algorithms to optimize the parameters of these classifiers, leading to six alternative hybrid disease diagnosis systems, also referred to as hybrid intelligent systems (HISs). Multiple objectives, namely, prediction accuracy, sensitivity, and specificity, have been considered to assess the efficacy of the proposed hybrid systems against existing ones. The proposed model is evaluated on 11 benchmark datasets, and the obtained results demonstrate that our proposed hybrid diagnosis systems perform better in terms of disease prediction accuracy, sensitivity, and specificity. Pertinent statistical tests were carried out to substantiate the efficacy of the obtained results.
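    The core idea — an evolutionary algorithm searching classifier hyperparameters to maximize a validation score — can be sketched with a simple (1+λ) evolution strategy. The fitness function below is a hypothetical smooth surrogate for cross-validated accuracy of an SVM over (C, gamma), peaking at C=10, gamma=0.01; in the paper this would be an actual classifier evaluation, and the evolutionary algorithms used are more sophisticated.

```python
import random

# Hypothetical surrogate for cross-validated accuracy as a function of
# log10(C) and log10(gamma); higher is better, optimum at (1, -2).
def fitness(log_c, log_gamma):
    return -((log_c - 1.0) ** 2 + (log_gamma + 2.0) ** 2)

def evolve(generations=200, lam=10, sigma=0.3, seed=7):
    # (1+lambda) evolution strategy in log-hyperparameter space:
    # each generation, sample lam Gaussian mutants and keep any improvement.
    rng = random.Random(seed)
    parent = (0.0, 0.0)            # start at C=1, gamma=1
    best_f = fitness(*parent)
    for _ in range(generations):
        for _ in range(lam):
            kid = (parent[0] + rng.gauss(0.0, sigma),
                   parent[1] + rng.gauss(0.0, sigma))
            f = fitness(*kid)
            if f > best_f:
                parent, best_f = kid, f
    return 10.0 ** parent[0], 10.0 ** parent[1], best_f
```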

  9. Automated Planning of Tangential Breast Intensity-Modulated Radiotherapy Using Heuristic Optimization

    International Nuclear Information System (INIS)

    Purdie, Thomas G.; Dinniwell, Robert E.; Letourneau, Daniel; Hill, Christine; Sharpe, Michael B.

    2011-01-01

    Purpose: To present an automated technique for two-field tangential breast intensity-modulated radiotherapy (IMRT) treatment planning. Method and Materials: A total of 158 planned patients with Stage 0, I, and II breast cancer treated using whole-breast IMRT were retrospectively replanned using automated treatment planning tools. The tools developed are integrated into the existing clinical treatment planning system (Pinnacle³) and are designed to perform the manual volume delineation, beam placement, and IMRT treatment planning steps carried out by the treatment planning radiation therapist. The automated algorithm, using only the radio-opaque markers placed at CT simulation as inputs, optimizes the tangential beam parameters to geometrically minimize the amount of lung and heart treated while covering the whole-breast volume. The IMRT parameters are optimized according to the automatically delineated whole-breast volume. Results: The mean time to generate a complete treatment plan was 6 min 50 s ± 1 min 12 s. For the automated plans, 157 of 158 plans (99%) were deemed clinically acceptable, and 138 of 158 plans (87%) were deemed clinically improved or equal to the corresponding clinical plan when reviewed in a randomized, double-blinded study by one experienced breast radiation oncologist. In addition, the automated plans were overall dosimetrically equivalent to the clinical plans when scored for target coverage and lung and heart doses. Conclusion: We have developed robust and efficient automated tools for fully inverse-planned tangential breast IMRT that can be readily integrated into clinical practice. The tools produce clinically acceptable plans using only the common anatomic landmarks from the CT simulation process as an input. We anticipate the tools will improve patient access to high-quality IMRT treatment by simplifying the planning process and will reduce the effort and cost of incorporating more advanced planning into clinical practice.

  10. GA based CNC turning center exploitation process parameters optimization

    Directory of Open Access Journals (Sweden)

    Z. Car

    2009-01-01

    Full Text Available This paper presents machining parameter (turning process) optimization based on the use of artificial intelligence. To obtain greater efficiency and productivity of the machine tool, optimal cutting parameters have to be obtained. In order to find optimal cutting parameters, the genetic algorithm (GA) has been used as an optimal solution finder. The optimization has to yield minimum machining time and minimum production cost, while considering technological and material constraints.
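    A minimal GA of this kind can be sketched as below. The cost model (a machining-time term plus a tool-wear term via a Taylor tool-life equation) and every constant in it are illustrative placeholders, not the paper's actual machining model; the constraint is handled with a simple penalty.

```python
import random

V_LO, V_HI, F_LO, F_HI = 50.0, 400.0, 0.05, 0.50   # speed (m/min), feed (mm/rev)
F_MAX = 0.30                    # assumed feed cap from a surface-finish limit

def cost(v, f):
    t_machine = 1000.0 / (v * f)             # machining-time term
    tool_life = (300.0 / v) ** (1 / 0.25)    # Taylor tool-life equation VT^n = C
    c = 0.5 * t_machine + 15.0 * t_machine / tool_life
    if f > F_MAX:                            # technological constraint as penalty
        c += 1000.0 * (f - F_MAX)
    return c

def ga(pop_size=40, generations=80, seed=3):
    rng = random.Random(seed)
    pop = [(rng.uniform(V_LO, V_HI), rng.uniform(F_LO, F_HI))
           for _ in range(pop_size)]
    best = min(pop, key=lambda ind: cost(*ind))
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            p1 = min(rng.sample(pop, 2), key=lambda ind: cost(*ind))  # tournament
            p2 = min(rng.sample(pop, 2), key=lambda ind: cost(*ind))
            w = rng.random()                 # blend crossover
            v = w * p1[0] + (1.0 - w) * p2[0]
            f = w * p1[1] + (1.0 - w) * p2[1]
            if rng.random() < 0.2:           # clamped Gaussian mutation
                v = min(V_HI, max(V_LO, v + rng.gauss(0.0, 20.0)))
                f = min(F_HI, max(F_LO, f + rng.gauss(0.0, 0.03)))
            nxt.append((v, f))
        pop = nxt
        gen_best = min(pop, key=lambda ind: cost(*ind))
        if cost(*gen_best) < cost(*best):    # elitist best-ever tracking
            best = gen_best
    return best
```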

  11. SAE2.py: a python script to automate parameter studies using SCREAMER with application to magnetic switching on Z

    International Nuclear Information System (INIS)

    Orndorff-Plunkett, Franklin

    2011-01-01

    The SCREAMER simulation code is widely used at Sandia National Laboratories for designing and simulating pulsed power accelerator experiments on super power accelerators. A preliminary parameter study of Z with a magnetic switching retrofit illustrates the utility of the automating script for optimizing pulsed power designs. SCREAMER is a circuit-based code commonly used in pulsed-power design and requires numerous iterations to find optimal configurations. System optimization using simulations like SCREAMER is by nature inefficient and incomplete when done manually. This is especially the case when the system has many interacting elements whose emergent effects may be unforeseeable and complicated. For increased completeness, efficiency and robustness, investigators should probe a suitably confined parameter space using deterministic, genetic, cultural, ant-colony or other computational intelligence methods. I have developed SAE2, a user-friendly, deterministic script that automates the search for optima of pulsed-power designs with SCREAMER. This manual demonstrates how to make input decks for SAE2 and optimize any pulsed-power design that can be modeled using SCREAMER. Application of SAE2 to magnetic switching on a model of a potential Z refurbishment illustrates the power of SAE2. With respect to the manual optimization, the automated optimization resulted in 5% greater peak current (10% greater energy) and a 25% increase in safety factor for the most highly stressed element.
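    The deterministic sweep such a script automates amounts to enumerating a confined parameter grid, running the simulator at each point, and keeping the configuration with the best figure of merit. In this sketch `run_screamer` is a hypothetical stand-in for launching the circuit code and parsing its output (the parameter names and the analytic response are invented for illustration).

```python
import itertools

def run_screamer(switch_time_ns, inductance_nh):
    # Hypothetical stand-in for a SCREAMER run: returns "peak load
    # current" (MA) as a smooth response with a single optimum.
    return (20.0
            - 0.01 * (switch_time_ns - 40) ** 2
            - 0.05 * (inductance_nh - 15) ** 2)

def sweep(times, inductances):
    # Exhaustive deterministic sweep over the Cartesian parameter grid.
    best_cfg, best_peak = None, float("-inf")
    for t, ind in itertools.product(times, inductances):
        peak = run_screamer(t, ind)
        if peak > best_peak:
            best_cfg, best_peak = (t, ind), peak
    return best_cfg, best_peak

cfg, peak = sweep(range(20, 61, 5), range(5, 26, 5))
# cfg -> (40, 15), peak -> 20.0 for this toy response
```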

  12. Optimal Control of Connected and Automated Vehicles at Roundabouts

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Liuhui [University of Delaware]; Malikopoulos, Andreas [ORNL]; Rios-Torres, Jackeline [ORNL]

    2018-01-01

    Connectivity and automation in vehicles provide the most intriguing opportunity for enabling users to better monitor transportation network conditions and make better operating decisions to improve safety and reduce pollution, energy consumption, and travel delays. This study investigates the implications of optimally coordinating vehicles that are wirelessly connected to each other and to an infrastructure in roundabouts to achieve a smooth traffic flow without stop-and-go driving. We apply an optimization framework and an analytical solution that allows optimal coordination of vehicles for merging in such traffic scenarios. The effectiveness of the proposed approach is validated through simulation, and it is shown that coordination of vehicles can reduce total travel time by 3-49% and fuel consumption by 2-27%, depending on the traffic level. In addition, network throughput is improved by up to 25% due to the elimination of stop-and-go driving behavior.

  13. Retinal blood vessel segmentation in high resolution fundus photographs using automated feature parameter estimation

    Science.gov (United States)

    Orlando, José Ignacio; Fracchia, Marcos; del Río, Valeria; del Fresno, Mariana

    2017-11-01

    Several ophthalmological and systemic diseases are manifested through pathological changes in the properties and the distribution of the retinal blood vessels. The characterization of such alterations requires the segmentation of the vasculature, which is a tedious and time-consuming task that is infeasible to be performed manually. Numerous attempts have been made to propose automated methods for segmenting the retinal vasculature from fundus photographs, although their application in real clinical scenarios is usually limited by their ability to deal with images taken at different resolutions. This is likely due to the large number of parameters that have to be properly calibrated according to each image scale. In this paper we propose to apply a novel strategy for automated feature parameter estimation, combined with a vessel segmentation method based on fully connected conditional random fields. The estimation model is learned by linear regression from structural properties of the images and known optimal configurations, that were previously obtained for low resolution data sets. Our experiments in high resolution images show that this approach is able to estimate appropriate configurations that are suitable for performing the segmentation task without requiring to re-engineer parameters. Furthermore, our combined approach reported state of the art performance on the benchmark data set HRF, as measured in terms of the F1-score and the Matthews correlation coefficient.
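    The estimation model described — a linear regression from structural properties of the images (here, simply resolution) to known optimal parameter configurations — can be sketched with a closed-form least-squares fit. All data points below are synthetic and the linear relation is assumed for illustration; the paper learns from real low-resolution data sets and richer image properties.

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = slope*x + intercept (closed form).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Known-good parameter values at low resolutions (synthetic examples):
res = [565, 700, 999, 1200]          # training image widths (pixels)
param = [2.13, 2.40, 3.00, 3.40]     # calibrated parameter at each width

slope, intercept = fit_line(res, param)
pred = slope * 3504 + intercept      # extrapolated setting for an HRF-sized image
```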

  14. Optimization of rotational arc station parameter optimized radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Dong, P.; Ungun, B. [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Boyd, S. [Department of Electrical Engineering, Stanford University, Stanford, California 94305 (United States); Xing, L., E-mail: lei@stanford.edu [Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Electrical Engineering, Stanford University, Stanford, California 94305 (United States)

    2016-09-15

    Purpose: To develop a fast optimization method for station parameter optimized radiation therapy (SPORT) and show that SPORT is capable of matching VMAT in both plan quality and delivery efficiency by using three clinical cases of different disease sites. Methods: The angular space from 0° to 360° was divided into 180 station points (SPs). A candidate aperture was assigned to each of the SPs based on the calculation results using a column generation algorithm. The weights of the apertures were then obtained by optimizing the objective function using a state-of-the-art GPU based proximal operator graph solver. To avoid being trapped in a local minimum in beamlet-based aperture selection using the gradient descent algorithm, a stochastic gradient descent was employed here. Apertures with zero or low weight were thrown out. To find out whether there was room to further improve the plan by adding more apertures or SPs, the authors repeated the above procedure with consideration of the existing dose distribution from the last iteration. At the end of the second iteration, the weights of all the apertures were reoptimized, including those of the first iteration. The above procedure was repeated until the plan could not be improved any further. The optimization technique was assessed by using three clinical cases (prostate, head and neck, and brain) with the results compared to that obtained using conventional VMAT in terms of dosimetric properties, treatment time, and total MU. Results: Marked dosimetric quality improvement was demonstrated in the SPORT plans for all three studied cases. For the prostate case, the volume of the 50% prescription dose was decreased by 22% for the rectum and 6% for the bladder. For the head and neck case, SPORT improved the mean dose for the left and right parotids by 15% each. The maximum dose was lowered from 72.7 to 71.7 Gy for the mandible, and from 30.7 to 27.3 Gy for the spinal cord. The mean dose for the pharynx and larynx was

  15. Optimization of rotational arc station parameter optimized radiation therapy

    International Nuclear Information System (INIS)

    Dong, P.; Ungun, B.; Boyd, S.; Xing, L.

    2016-01-01

    Purpose: To develop a fast optimization method for station parameter optimized radiation therapy (SPORT) and show that SPORT is capable of matching VMAT in both plan quality and delivery efficiency by using three clinical cases of different disease sites. Methods: The angular space from 0° to 360° was divided into 180 station points (SPs). A candidate aperture was assigned to each of the SPs based on the calculation results using a column generation algorithm. The weights of the apertures were then obtained by optimizing the objective function using a state-of-the-art GPU based proximal operator graph solver. To avoid being trapped in a local minimum in beamlet-based aperture selection using the gradient descent algorithm, a stochastic gradient descent was employed here. Apertures with zero or low weight were thrown out. To find out whether there was room to further improve the plan by adding more apertures or SPs, the authors repeated the above procedure with consideration of the existing dose distribution from the last iteration. At the end of the second iteration, the weights of all the apertures were reoptimized, including those of the first iteration. The above procedure was repeated until the plan could not be improved any further. The optimization technique was assessed by using three clinical cases (prostate, head and neck, and brain) with the results compared to that obtained using conventional VMAT in terms of dosimetric properties, treatment time, and total MU. Results: Marked dosimetric quality improvement was demonstrated in the SPORT plans for all three studied cases. For the prostate case, the volume of the 50% prescription dose was decreased by 22% for the rectum and 6% for the bladder. For the head and neck case, SPORT improved the mean dose for the left and right parotids by 15% each. The maximum dose was lowered from 72.7 to 71.7 Gy for the mandible, and from 30.7 to 27.3 Gy for the spinal cord. The mean dose for the pharynx and larynx was

  16. [Research and Design of a System for Detecting Automated External Defbrillator Performance Parameters].

    Science.gov (United States)

    Wang, Kewu; Xiao, Shengxiang; Jiang, Lina; Hu, Jingkai

    2017-09-30

    To allow regular verification of the performance parameters of automated external defibrillators (AEDs), and to ensure an instrument is safe before use, a system for detecting AED performance parameters was researched and designed. Based on a study of the characteristics of these performance parameters, and combining the stability and high speed of the STM32 with PWM modulation control, the system produces a variety of normal and abnormal ECG signals through digital sampling methods. The hardware and software were designed and a prototype was completed. The system can accurately detect an AED's discharge energy, synchronous defibrillation time, charging time and other key performance parameters.

  17. Optimal parameters for laser tissue soldering

    Science.gov (United States)

    McNally-Heintzelman, Karen M.; Sorg, Brian S.; Chan, Eric K.; Welch, Ashley J.; Dawes, Judith M.; Owen, Earl R.

    1998-07-01

    Variations in laser irradiance, exposure time, solder composition, chromophore type and concentration have led to inconsistencies in published results of laser-solder repair of tissue. To determine optimal parameters for laser tissue soldering, an in vitro study was performed using an 808-nm diode laser in conjunction with an indocyanine green (ICG)- doped albumin protein solder to weld bovine aorta specimens. Liquid and solid protein solders prepared from 25% and 60% bovine serum albumin (BSA), respectively, were compared. The effects of laser irradiance and exposure time on tensile strength of the weld and temperature rise as well as the effect of hydration on bond stability were investigated. Optimum irradiance and exposure times were identified for each solder type. Increasing the BSA concentration from 25% to 60% greatly increased the tensile strength of the weld. A reduction in dye concentration from 2.5 mg/ml to 0.25 mg/ml was also found to result in an increase in tensile strength. The strongest welds were produced with an irradiance of 6.4 W/cm² for 50 s using a solid protein solder composed of 60% BSA and 0.25 mg/ml ICG. Steady-state solder surface temperatures were observed to reach 85 ± 5 °C with a temperature gradient across the solid protein solder strips of 15 to 20 °C. Finally, tensile strength was observed to decrease significantly (20 to 25%) after the first hour of hydration in phosphate-buffered saline. No appreciable change was observed in the strength of the tissue bonds with further hydration.

  18. PARAMETER COORDINATION AND ROBUST OPTIMIZATION FOR MULTIDISCIPLINARY DESIGN

    Institute of Scientific and Technical Information of China (English)

    HU Jie; PENG Yinghong; XIONG Guangleng

    2006-01-01

    A new parameter coordination and robust optimization approach for multidisciplinary design is presented. Firstly, the constraints network model is established to support engineering change, coordination and optimization. In this model, interval boxes are adopted to describe the uncertainty of design parameters quantitatively to enhance the design robustness. Secondly, the parameter coordination method is presented to solve the constraints network model, monitor the potential conflicts due to engineering changes, and obtain the consistent solution space corresponding to the given product specifications. Finally, the robust parameter optimization model is established, and a genetic algorithm is used to obtain the robust optimized parameters. An example of bogie design is analyzed to show that the scheme is effective.

  19. Optimalization of selected RFID systems Parameters

    Directory of Open Access Journals (Sweden)

    Peter Vestenicky

    2004-01-01

    Full Text Available This paper describes a procedure for maximizing the read range of an RFID transponder. This is done by optimizing the magnetic field intensity at the transponder location and the coupling factor between the antenna and transponder coils. The results of this paper can be used for RFID systems with an inductive loop, i.e., systems working in the near electromagnetic field.

  20. Automated dual-wavelength spectrophotometer optimized for phytochrome assay

    International Nuclear Information System (INIS)

    Pratt, L.H.; Wampler, J.E.; Rich, E.S. Jr.

    1985-01-01

    A microcomputer-controlled dual-wavelength spectrophotometer suitable for automated phytochrome assay is described. The optomechanical unit provides for sequential irradiation of the sample by the two measuring wavelengths with intervening dark intervals and for actinic irradiation to interconvert phytochrome between its two forms. Photomultiplier current is amplified, converted to a digital value and transferred into the computer using a custom-designed IEEE-488 bus interface. The microcomputer calculates mathematically both absorbance and absorbance difference values with dynamic correction for photomultiplier dark current. In addition, the computer controls the operating parameters of the spectrophotometer via a separate interface. These parameters include control of the durations of measuring and actinic irradiation intervals and their sequence. 14 references, 4 figures
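    The central computation the microcomputer performs — absorbance at each measuring wavelength with dynamic dark-current correction, then the absorbance difference — follows directly from the Beer-Lambert definition. The signal values below are illustrative, not instrument data.

```python
import math

def absorbance(sample_signal, reference_signal, dark_current):
    # A = -log10(I / I0), with the photomultiplier dark current
    # subtracted from both the sample and reference readings.
    return -math.log10((sample_signal - dark_current)
                       / (reference_signal - dark_current))

# Illustrative readings at the two measuring wavelengths:
a1 = absorbance(80.0, 100.0, 2.0)   # measuring wavelength 1
a2 = absorbance(60.0, 100.0, 2.0)   # measuring wavelength 2
delta_a = a2 - a1                   # dual-wavelength absorbance difference
```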

  1. An optimized method for automated analysis of algal pigments by HPLC

    NARCIS (Netherlands)

    van Leeuwe, M. A.; Villerius, L. A.; Roggeveld, J.; Visser, R. J. W.; Stefels, J.

    2006-01-01

    A recent development in algal pigment analysis by high-performance liquid chromatography (HPLC) is the application of automation. An optimization of a complete sampling and analysis protocol applied specifically in automation has not yet been performed. In this paper we show that automation can only

  2. GA BASED GLOBAL OPTIMAL DESIGN PARAMETERS FOR ...

    African Journals Online (AJOL)

    Journal of Modeling, Design and Management of Engineering Systems ... DESIGN PARAMETERS FOR CONSECUTIVE REACTIONS IN SERIALLY CONNECTED ... for the process equipments such as chemical reactors used in industries.

  3. Analysis Balance Parameter of Optimal Ramp metering

    Science.gov (United States)

    Li, Y.; Duan, N.; Yang, X.

    2018-05-01

    Ramp metering is a motorway control method that avoids the onset of congestion by limiting the access of ramp inflows into the main road of the motorway. The optimization model of ramp metering is developed based upon the cell transmission model (CTM). With the piecewise linear structure of CTM, the corresponding motorway traffic optimization problem can be formulated as a linear programming (LP) problem. An LP problem can be solved to global optimality by established solution algorithms such as the SIMPLEX or interior-point methods. Commercial software (CPLEX) is adopted in this study to solve the LP problem within reasonable computational time. The concept is illustrated through a case study of the United Kingdom's M25 Motorway. The optimal solution provides useful insights and guidance on how to manage motorway traffic in order to maximize the corresponding efficiency.
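    The LP structure referred to above can be sketched in a textbook-style form under a triangular fundamental diagram (symbols and constraints are simplified and not taken from the paper; the key step is relaxing CTM's min-of-demand-and-supply flow rule into separate linear inequalities):

```latex
\begin{aligned}
\min_{f,\,r}\quad & T\sum_{k=0}^{K-1}\sum_{i}\bigl(n_i(k)+q_i(k)\bigr)
  && \text{(total time spent)}\\
\text{s.t.}\quad
 & n_i(k+1)=n_i(k)+T\bigl(f_{i-1}(k)+r_i(k)-f_i(k)\bigr)
  && \text{(cell conservation)}\\
 & q_i(k+1)=q_i(k)+T\bigl(d_i(k)-r_i(k)\bigr)
  && \text{(ramp queue)}\\
 & f_i(k)\le v\,n_i(k)/L_i,\qquad f_i(k)\le Q_i
  && \text{(demand, capacity)}\\
 & f_i(k)\le w\,\bigl(\bar n_{i+1}-n_{i+1}(k)\bigr)/L_{i+1}
  && \text{(downstream supply)}\\
 & 0\le r_i(k)\le r^{\max}_i
  && \text{(metering rate)}
\end{aligned}
```

    Here $n_i(k)$ is the vehicle count in cell $i$ at step $k$, $q_i$ the ramp queue, $f_i$ the mainline outflow of cell $i$, $r_i$ the metered ramp inflow, $d_i$ the ramp demand, $v$ the free-flow speed, $w$ the congestion wave speed, $Q_i$ the capacity, $\bar n_i$ the jam vehicle count, $L_i$ the cell length, and $T$ the time step. Because every constraint and the objective are linear, the whole horizon can be handed to an LP solver such as CPLEX.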

  4. Optimization of regeneration and transformation parameters in ...

    African Journals Online (AJOL)

    PRECIOUS

    transformation and regeneration therefore optimization of these two factors is .... An analysis of variance was conducted using explants types x construct ... and significant differences between means were assessed by the. Tukey's test at 1 and ...

  5. Optimal Design of Shock Tube Experiments for Parameter Inference

    KAUST Repository

    Bisetti, Fabrizio; Knio, Omar

    2014-01-01

    We develop a Bayesian framework for the optimal experimental design of the shock tube experiments which are being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation

  6. Optimization of process and solution parameters in electrospinning polyethylene oxide

    CSIR Research Space (South Africa)

    Jacobs, V

    2011-11-01

    Full Text Available This paper reports the optimization of electrospinning process and solution parameters using factorial design approach to obtain uniform polyethylene oxide (PEO) nanofibers. The parameters studied were distance between nozzle and collector screen...

  7. Nanohydroxyapatite synthesis using optimized process parameters ...

    Indian Academy of Sciences (India)

    3Energy Research Group, School of Engineering, Taylor's University, 47500 ... influence of different ultrasonication parameters on the prop- ... to evaluate multiple process parameters and their interaction. ..... dent and dependent variables by a 3-D representation of .... The intensities of O–H functional groups are seen to.

  8. Parameter identification using optimization techniques in the continuous simulation programs FORSIM and MACKSIM

    International Nuclear Information System (INIS)

    Carver, M.B.; Austin, C.F.; Ross, N.E.

    1980-02-01

    This report discusses the mechanics of automated parameter identification in simulation packages, and reviews available integration and optimization algorithms and their interaction within the recently developed optimization options in the FORSIM and MACKSIM simulation packages. In the MACKSIM mass-action chemical kinetics simulation package, the form and structure of the ordinary differential equations involved is known, so the implementation of an optimizing option is relatively straightforward. FORSIM, however, is designed to integrate ordinary and partial differential equations of arbitrary definition. As the form of the equations is not known in advance, the design of the optimizing option is more intricate, but the philosophy could be applied to most simulation packages. In either case, however, the invocation of the optimizing interface is simple and user-oriented. Full details for the use of the optimizing mode for each program are given; specific applications are used as examples. (O.T.)
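    The bare idea behind such an optimizing option — wrap the ODE integrator in an objective that measures misfit to data, then search the parameter space — can be sketched as follows. The decay model, the Euler integrator, and the brute-force parameter scan are all illustrative simplifications; FORSIM and MACKSIM use proper integration and optimization algorithms.

```python
def integrate(k, y0=1.0, dt=0.01, steps=200):
    # Explicit Euler integration of dy/dt = -k*y (illustrative model).
    y, out = y0, []
    for _ in range(steps):
        y += dt * (-k * y)
        out.append(y)
    return out

TRUE_K = 0.5
data = integrate(TRUE_K)            # synthetic "measurements"

def sse(k):
    # Objective: sum of squared errors between simulation and data.
    return sum((a - b) ** 2 for a, b in zip(integrate(k), data))

# Identify the rate constant by scanning a confined parameter range.
k_est = min((i / 1000.0 for i in range(1001)), key=sse)
```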

  9. Future xenon system operational parameter optimization

    International Nuclear Information System (INIS)

    Lowrey, J.D.; Eslinger, P.W.; Miley, H.S.

    2016-01-01

    Any atmospheric monitoring network will have practical limitations in the density of its sampling stations. The classical approach to network optimization has been to use 12- or 24-h integration of air samples at the highest station density possible to improve minimum detectable concentrations. The authors present here considerations on optimizing sampler integration time to make the best use of any network and maximize the likelihood of collecting quality samples at any given location. In particular, this work makes the case that shorter duration sample integration (i.e. <12 h) enhances critical isotopic information and improves the source location capability of a radionuclide network, or even just one station. (author)

  10. Nanohydroxyapatite synthesis using optimized process parameters

    Indian Academy of Sciences (India)

    Nanohydroxyapatite; ultrasonication; response surface methodology; calcination; ... Three independent process parameters: temperature () (70, 80 and 90°C), ... Bangi, Selangor, Malaysia; Energy Research Group, School of Engineering, ...

  11. Automated Design and Optimization of Pebble-bed Reactor Cores

    International Nuclear Information System (INIS)

    Gougar, Hans D.; Ougouag, Abderrafi M.; Terry, William K.

    2010-01-01

    We present a conceptual design approach for high-temperature gas-cooled reactors using recirculating pebble-bed cores. The design approach employs PEBBED, a reactor physics code specifically designed to solve for and analyze the asymptotic burnup state of pebble-bed reactors, in conjunction with a genetic algorithm to obtain a core that maximizes a fitness value that is a function of user-specified parameters. The uniqueness of the asymptotic core state and the small number of independent parameters that define it suggest that core geometry and fuel cycle can be efficiently optimized toward a specified objective. PEBBED exploits a novel representation of the distribution of pebbles that enables efficient coupling of the burnup and neutron diffusion solvers. With this method, even complex pebble recirculation schemes can be expressed in terms of a few parameters that are amenable to modern optimization techniques. With PEBBED, the user chooses the type and range of core physics parameters that represent the design space. A set of traits, each with acceptable and preferred values expressed by a simple fitness function, is used to evaluate the candidate reactor cores. The stochastic search algorithm automatically drives the generation of core parameters toward the optimal core as defined by the user. The optimized design can then be modeled and analyzed in greater detail using higher resolution and more computationally demanding tools to confirm the desired characteristics. For this study, the design of pebble-bed high temperature reactor concepts subjected to demanding physical constraints demonstrated the efficacy of the PEBBED algorithm.

  12. Applications of the Automated SMAC Modal Parameter Extraction Package

    International Nuclear Information System (INIS)

    MAYES, RANDALL L.; DORRELL, LARRY R.; KLENKE, SCOTT E.

    1999-01-01

    An algorithm known as SMAC (Synthesize Modes And Correlate), based on principles of modal filtering, has been in development for a few years. The new capabilities of the automated version are demonstrated on test data from a complex shell/payload system. Examples of extractions from impact and shaker data are shown. The automated algorithm extracts 30 to 50 modes in the bandwidth from each column of the frequency response function matrix. Examples of the synthesized Mode Indicator Functions (MIFs) compared with the actual MIFs show the accuracy of the technique. A data set for one input and 170 accelerometer outputs can typically be reduced in an hour. Application to a test with some complex modes is also demonstrated

  13. Hybrid computer optimization of systems with random parameters

    Science.gov (United States)

    White, R. C., Jr.

    1972-01-01

    A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.

  14. Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly

    Science.gov (United States)

    Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.

    2013-01-01

    Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…

  15. Towards automatic parameter tuning of stream processing systems

    KAUST Repository

    Bilal, Muhammad; Canini, Marco

    2017-01-01

    for automating parameter tuning for stream-processing systems. Our framework supports standard black-box optimization algorithms as well as a novel gray-box optimization algorithm. We demonstrate the multiple benefits of automated parameter tuning in optimizing

  16. Optimization of Milling Parameters Employing Desirability Functions

    Science.gov (United States)

    Ribeiro, J. L. S.; Rubio, J. C. Campos; Abrão, A. M.

    2011-01-01

    The principal aim of this paper is to investigate the influence of tool material (one cermet and two coated carbide grades), cutting speed and feed rate on the machinability of hardened AISI H13 hot work steel, in order to identify the cutting conditions which lead to optimal performance. A multiple response optimization procedure based on tool life, surface roughness, milling forces and the machining time (required to produce a sample cavity) was employed. The results indicated that the TiCN-TiN coated carbide and cermet presented similar results concerning the global optimum values for cutting speed and feed rate per tooth, outperforming the TiN-TiCN-Al2O3 coated carbide tool.
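The multiple-response procedure rests on desirability functions, which can be sketched as follows; the Derringer–Suich linear forms are standard, but the response values and limits below are invented for illustration, not the paper's data.

```python
def desirability_smaller(y, low, high):
    # Derringer-Suich "smaller is better": 1 at or below low,
    # 0 at or above high, linear in between.
    if y <= low:
        return 1.0
    if y >= high:
        return 0.0
    return (high - y) / (high - low)

def desirability_larger(y, low, high):
    # "Larger is better" is the mirror image.
    return 1.0 - desirability_smaller(y, low, high)

def composite(ds):
    # Overall desirability: geometric mean of the individual values.
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Illustrative responses (invented numbers):
d_life = desirability_larger(22.0, low=10.0, high=30.0)   # tool life, min
d_ra = desirability_smaller(0.8, low=0.4, high=1.2)       # roughness Ra, um
print(round(composite([d_life, d_ra]), 3))
```

The cutting conditions maximizing the composite desirability are then taken as the global optimum across all responses.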

  17. Optimizing wireless LAN for longwall coal mine automation

    Energy Technology Data Exchange (ETDEWEB)

    Hargrave, C.O.; Ralston, J.C.; Hainsworth, D.W. [Exploration & Mining Commonwealth Science & Industrial Research Organisation, Pullenvale, Qld. (Australia)

    2007-01-15

    A significant development in underground longwall coal mining automation has been achieved with the successful implementation of wireless LAN (WLAN) technology for communication on a longwall shearer. Wireless Fidelity (Wi-Fi) was selected to meet the bandwidth requirements of the underground data network, and several configurations were installed on operating longwalls to evaluate their performance. Although these efforts demonstrated the feasibility of using WLAN technology in longwall operation, it was clear that new research and development was required in order to establish optimal full-face coverage. By undertaking an accurate characterization of the target environment, it has been possible to achieve great improvements in WLAN performance over a nominal Wi-Fi installation. This paper discusses the impact of Fresnel zone obstructions and multipath effects on radio frequency propagation and reports an optimal antenna and system configuration. Many of the lessons learned in the longwall case are immediately applicable to other underground mining operations, particularly wherever there is a high degree of obstruction from mining equipment.

  18. Parameters Optimization and Application to Glutamate Fermentation Model Using SVM

    OpenAIRE

    Zhang, Xiangsheng; Pan, Feng

    2015-01-01

    Aimed at the parameters optimization in support vector machine (SVM) for glutamate fermentation modelling, a new method is developed. It optimizes the SVM parameters via an improved particle swarm optimization (IPSO) algorithm, which has better global searching ability. The algorithm includes detecting and handling local convergence and exhibits strong ability to avoid being trapped in local minima. The main steps of the method are presented. Simulation experiments demonstrate the effective...

  19. Parameters Optimization and Application to Glutamate Fermentation Model Using SVM

    Directory of Open Access Journals (Sweden)

    Xiangsheng Zhang

    2015-01-01

    Full Text Available Aimed at the parameters optimization in support vector machine (SVM) for glutamate fermentation modelling, a new method is developed. It optimizes the SVM parameters via an improved particle swarm optimization (IPSO) algorithm, which has better global searching ability. The algorithm includes detecting and handling local convergence and exhibits strong ability to avoid being trapped in local minima. The main steps of the method are presented. Simulation experiments demonstrate the effectiveness of the proposed algorithm.

  20. Automated Modal Parameter Estimation of Civil Engineering Structures

    DEFF Research Database (Denmark)

    Andersen, Palle; Brincker, Rune; Goursat, Maurice

    In this paper the problem of automatic modal parameter extraction from ambient-excited civil engineering structures is considered. Two different approaches for obtaining the modal parameters automatically are presented: the Frequency Domain Decomposition (FDD) technique and a correlation

  1. Optimization of exposure parameters in full field digital mammography

    International Nuclear Information System (INIS)

    Williams, Mark B.; Raghunathan, Priya; More, Mitali J.; Seibert, J. Anthony; Kwan, Alexander; Lo, Joseph Y.; Samei, Ehsan; Ranger, Nicole T.; Fajardo, Laurie L.; McGruder, Allen; McGruder, Sandra M.; Maidment, Andrew D. A.; Yaffe, Martin J.; Bloomquist, Aili; Mawdsley, Gordon E.

    2008-01-01

    Optimization of exposure parameters (target, filter, and kVp) in digital mammography necessitates maximization of the image signal-to-noise ratio (SNR), while simultaneously minimizing patient dose. The goal of this study is to compare, for each of the major commercially available full field digital mammography (FFDM) systems, the impact of the selection of technique factors on image SNR and radiation dose for a range of breast thickness and tissue types. This phantom study is an update of a previous investigation and includes measurements on recent versions of two of the FFDM systems discussed in that article, as well as on three FFDM systems not available at that time. The five commercial FFDM systems tested, the Senographe 2000D from GE Healthcare, the Mammomat Novation DR from Siemens, the Selenia from Hologic, the Fischer Senoscan, and Fuji's 5000MA used with a Lorad M-IV mammography unit, are located at five different university test sites. Performance was assessed using all available x-ray target and filter combinations and nine different phantom types (three compressed thicknesses and three tissue composition types). Each phantom type was also imaged using the automatic exposure control (AEC) of each system to identify the exposure parameters used under automated image acquisition. The figure of merit (FOM) used to compare technique factors is the ratio of the square of the image SNR to the mean glandular dose. The results show that, for a given target/filter combination, in general FOM is a slowly changing function of kVp, with stronger dependence on the choice of target/filter combination. In all cases the FOM was a decreasing function of kVp at the top of the available range of kVp settings, indicating that higher tube voltages would produce no further performance improvement. 
For a given phantom type, the exposure parameter set resulting in the highest FOM value was system specific, depending on both the set of available target/filter combinations, and
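The study's figure of merit is straightforward to compute; the SNR and dose numbers below are invented for illustration, not measurements from the tested systems.

```python
def figure_of_merit(snr, mean_glandular_dose_mgy):
    # FOM as defined in the study: image SNR squared per unit
    # mean glandular dose.
    return snr ** 2 / mean_glandular_dose_mgy

# Hypothetical comparison of two techniques on the same phantom
# (illustrative numbers only):
fom_26kvp = figure_of_merit(snr=50.0, mean_glandular_dose_mgy=1.50)
fom_30kvp = figure_of_merit(snr=55.0, mean_glandular_dose_mgy=1.95)
print(fom_26kvp, fom_30kvp)  # the lower-kVp technique wins here
```

Because dose grows faster than SNR at high tube voltages, the FOM flattens or falls at the top of the kVp range, which is what the study observed.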

  2. Automated Portfolio Optimization Based on a New Test for Structural Breaks

    Directory of Open Access Journals (Sweden)

    Tobias Berens

    2014-04-01

    Full Text Available We present a completely automated optimization strategy which combines classical Markowitz mean-variance portfolio theory with a recently proposed test for structural breaks in covariance matrices. With respect to equity portfolios, global minimum-variance optimizations, which are based solely on the covariance matrix, yielded considerable results in previous studies. However, financial assets cannot be assumed to have a constant covariance matrix over longer periods of time. Hence, we estimate the covariance matrix of the assets by respecting potential change points. The resulting approach resolves the issue of determining a sample for parameter estimation. Moreover, we investigate whether this approach is also appropriate for timing the reoptimizations. Finally, we apply the approach to two datasets and compare the results to relevant benchmark techniques by means of an out-of-sample study. It is shown that the new approach outperforms equally weighted portfolios and plain minimum-variance portfolios on average.
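The plain global minimum-variance building block is compact enough to sketch; the covariance matrix below is a toy example, and the structural-break test that decides which sample to estimate it from is not shown.

```python
import numpy as np

def gmv_weights(cov):
    # Global minimum-variance weights: w = inv(S) 1 / (1' inv(S) 1).
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Toy covariance matrix. The paper's refinement is to estimate this
# matrix only from the sample after the last detected change point.
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = gmv_weights(cov)
print(w.round(3), float(w.sum()))  # lower-variance assets get more weight
```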

  3. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  4. Optimization of an NLEO-based algorithm for automated detection of spontaneous activity transients in early preterm EEG

    International Nuclear Information System (INIS)

    Palmu, Kirsi; Vanhatalo, Sampsa; Stevenson, Nathan; Wikström, Sverre; Hellström-Westas, Lena; Palva, J Matias

    2010-01-01

    We propose here a simple algorithm for automated detection of spontaneous activity transients (SATs) in early preterm electroencephalography (EEG). The parameters of the algorithm were optimized by supervised learning using a gold standard created from visual classification data obtained from three human raters. The generalization performance of the algorithm was estimated by leave-one-out cross-validation. The mean sensitivity of the optimized algorithm was 97% (range 91–100%) and specificity 95% (76–100%). The optimized algorithm makes it possible to systematically study brain state fluctuations of preterm infants. (note)
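A minimal version of an NLEO-based detector can be sketched as follows; the test signal, smoothing window, and threshold are invented here, whereas the paper learns such parameters from human-rated EEG.

```python
import math

def nleo(x):
    # Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

def detect_events(x, threshold):
    # Flag samples whose smoothed NLEO output exceeds a fixed threshold.
    # The 3-point smoother and the threshold value are arbitrary choices.
    psi = nleo(x)
    sm = [sum(psi[max(0, i - 1):i + 2]) / len(psi[max(0, i - 1):i + 2])
          for i in range(len(psi))]
    return [i + 1 for i, v in enumerate(sm) if v > threshold]

# Quiet background with a short high-amplitude burst (a crude stand-in
# for a spontaneous activity transient).
sig = [0.01 * math.sin(0.2 * n) for n in range(50)]
sig[20:26] = [1.5 * math.sin(2.0 * (n - 20)) for n in range(20, 26)]
hits = detect_events(sig, threshold=0.5)
print(hits)  # sample indices inside the burst
```

Supervised optimization, as in the paper, would adjust the smoothing and threshold to match a gold standard of rater-marked transients.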

  5. Automated magnetic divertor design for optimal power exhaust

    Energy Technology Data Exchange (ETDEWEB)

    Blommaert, Maarten

    2017-07-01

    The so-called divertor is the standard particle and power exhaust system of nuclear fusion tokamaks. In essence, the magnetic configuration hereby 'diverts' the plasma to a specific divertor structure. The design of this divertor is still a key issue to be resolved to evolve from experimental fusion tokamaks to commercial power plants. The focus of this dissertation is on one particular design requirement: avoiding excessive heat loads on the divertor structure. The divertor design process is assisted by plasma edge transport codes that simulate the plasma and neutral particle transport in the edge of the reactor. These codes are computationally extremely demanding, not least due to the complex collisional processes between plasma and neutrals that lead to strong radiation sinks and macroscopic heat convection near the vessel walls. One way of improving the heat exhaust is by modifying the magnetic confinement that governs the plasma flow. In this dissertation, automated design of the magnetic configuration is pursued using adjoint based optimization methods. A simple and fast perturbation model is used to compute the magnetic field in the vacuum vessel. A stable optimal design method of the nested type is then elaborated that strictly accounts for several nonlinear design constraints and code limitations. Using appropriate cost function definitions, the heat is spread more uniformly over the high-heat load plasma-facing components in a practical design example. Furthermore, practical in-parts adjoint sensitivity calculations are presented that provide a way to an efficient optimization procedure. Results are elaborated for a fictitious JET (Joint European Torus) case. The heat load is strongly reduced by exploiting an expansion of the magnetic flux towards the solid divertor structure. Subsequently, shortcomings of the perturbation model for magnetic field calculations are discussed in comparison to a free boundary equilibrium (FBE) simulation

  6. Automated magnetic divertor design for optimal power exhaust

    International Nuclear Information System (INIS)

    Blommaert, Maarten

    2017-01-01

    The so-called divertor is the standard particle and power exhaust system of nuclear fusion tokamaks. In essence, the magnetic configuration hereby 'diverts' the plasma to a specific divertor structure. The design of this divertor is still a key issue to be resolved to evolve from experimental fusion tokamaks to commercial power plants. The focus of this dissertation is on one particular design requirement: avoiding excessive heat loads on the divertor structure. The divertor design process is assisted by plasma edge transport codes that simulate the plasma and neutral particle transport in the edge of the reactor. These codes are computationally extremely demanding, not least due to the complex collisional processes between plasma and neutrals that lead to strong radiation sinks and macroscopic heat convection near the vessel walls. One way of improving the heat exhaust is by modifying the magnetic confinement that governs the plasma flow. In this dissertation, automated design of the magnetic configuration is pursued using adjoint based optimization methods. A simple and fast perturbation model is used to compute the magnetic field in the vacuum vessel. A stable optimal design method of the nested type is then elaborated that strictly accounts for several nonlinear design constraints and code limitations. Using appropriate cost function definitions, the heat is spread more uniformly over the high-heat load plasma-facing components in a practical design example. Furthermore, practical in-parts adjoint sensitivity calculations are presented that provide a way to an efficient optimization procedure. Results are elaborated for a fictitious JET (Joint European Torus) case. The heat load is strongly reduced by exploiting an expansion of the magnetic flux towards the solid divertor structure. Subsequently, shortcomings of the perturbation model for magnetic field calculations are discussed in comparison to a free boundary equilibrium (FBE) simulation. 
These flaws

  7. Optimizing a Water Simulation based on Wavefront Parameter Optimization

    OpenAIRE

    Lundgren, Martin

    2017-01-01

    DICE, a Swedish game company, wanted a more realistic water simulation. Currently, most large scale water simulations used in games are based upon ocean simulation technology. These techniques falter when used in other scenarios, such as coastlines. In order to produce a more realistic simulation, a new one was created based upon the water simulation technique "Wavefront Parameter Interpolation". This technique involves a rather extensive preprocess that enables ocean simulations to have inte...

  8. Automated IMRT planning with regional optimization using planning scripts.

    Science.gov (United States)

    Xhaferllari, Ilma; Wong, Eugene; Bzdusek, Karl; Lock, Michael; Chen, Jeff

    2013-01-07

    Intensity-modulated radiation therapy (IMRT) has become a standard technique in radiation therapy for treating different types of cancers. Various class solutions have been developed for simple cases (e.g., localized prostate, whole breast) to generate IMRT plans efficiently. However, for more complex cases (e.g., head and neck, pelvic nodes), it can be time-consuming for a planner to generate optimized IMRT plans. To generate optimal plans in these more complex cases, which generally have multiple target volumes and organs at risk, it is often necessary to add IMRT optimization structures such as dose-limiting ring structures, adjust the beam geometry, select inverse planning objectives and associated weights, and add IMRT objectives to reduce cold and hot spots in the dose distribution. These parameters are generally adjusted manually with a repeated trial-and-error approach during the optimization process. To improve IMRT planning efficiency in these more complex cases, an iterative method that incorporates some of these adjustment processes automatically in a planning script is designed, implemented, and validated. In particular, regional optimization has been implemented in an iterative way to reduce various hot or cold spots during the optimization process: it begins with the definition and automatic segmentation of hot and cold spots, introduces new objectives and their relative weights into inverse planning, and turns this into an iterative process with termination criteria. The method has been applied to three clinical sites: prostate with pelvic nodes, head and neck, and anal canal cancers, and has been shown to reduce IMRT planning time significantly for clinical applications with improved plan quality. The IMRT planning scripts have been used for more than 500 clinical cases.

  9. Optimizing a Drone Network to Deliver Automated External Defibrillators.

    Science.gov (United States)

    Boutilier, Justin J; Brooks, Steven C; Janmohamed, Alyf; Byers, Adam; Buick, Jason E; Zhan, Cathy; Schoellig, Angela P; Cheskes, Sheldon; Morrison, Laurie J; Chan, Timothy C Y

    2017-06-20

    Public access defibrillation programs can improve survival after out-of-hospital cardiac arrest, but automated external defibrillators (AEDs) are rarely available for bystander use at the scene. Drones are an emerging technology that can deliver an AED to the scene of an out-of-hospital cardiac arrest for bystander use. We hypothesize that a drone network designed with the aid of a mathematical model combining both optimization and queuing can reduce the time to AED arrival. We applied our model to 53 702 out-of-hospital cardiac arrests that occurred in the 8 regions of the Toronto Regional RescuNET between January 1, 2006, and December 31, 2014. Our primary analysis quantified the drone network size required to deliver an AED 1, 2, or 3 minutes faster than historical median 911 response times for each region independently. A secondary analysis quantified the reduction in drone resources required if RescuNET was treated as a large coordinated region. The region-specific analysis determined that 81 bases and 100 drones would be required to deliver an AED ahead of median 911 response times by 3 minutes. In the most urban region, the 90th percentile of the AED arrival time was reduced by 6 minutes and 43 seconds relative to historical 911 response times in the region. In the most rural region, the 90th percentile was reduced by 10 minutes and 34 seconds. A single coordinated drone network across all regions required 39.5% fewer bases and 30.0% fewer drones to achieve similar AED delivery times. An optimized drone network designed with the aid of a novel mathematical model can substantially reduce the AED delivery time to an out-of-hospital cardiac arrest event. © 2017 American Heart Association, Inc.

  10. Optimization of surface roughness parameters in dry turning

    OpenAIRE

    R.A. Mahdavinejad; H. Sharifi Bidgoli

    2009-01-01

    Purpose: The precision of machine tools on the one hand and the input setup parameters on the other strongly influence the main output machining parameters, such as stock removal, tool wear ratio and surface roughness. Design/methodology/approach: There are a lot of input parameters which are effective in the variations of these output parameters. In CNC machines, the optimization of the machining process in order to predict surface roughness is very important. Findings: From this point of view...

  11. Towards automating the discovery of certain innovative design principles through a clustering-based optimization technique

    Science.gov (United States)

    Bandaru, Sunith; Deb, Kalyanmoy

    2011-09-01

    In this article, a methodology is proposed for automatically extracting innovative design principles which make a system or process (subject to conflicting objectives) optimal using its Pareto-optimal dataset. Such 'higher knowledge' would not only help designers to execute the system better, but also enable them to predict how changes in one variable would affect other variables if the system has to retain its optimal behaviour. This in turn would help solve other similar systems with different parameter settings easily without the need to perform a fresh optimization task. The proposed methodology uses a clustering-based optimization technique and is capable of discovering hidden functional relationships between the variables, objective and constraint functions and any other function that the designer wishes to include as a 'basis function'. A number of engineering design problems are considered for which the mathematical structure of these explicit relationships exists and has been revealed by a previous study. A comparison with the multivariate adaptive regression splines (MARS) approach reveals the practicality of the proposed approach due to its ability to find meaningful design principles. The success of this procedure for automated innovization is highly encouraging and indicates its suitability for further development in tackling more complex design scenarios.

  12. Optimal parameters of the SVM for temperature prediction

    Directory of Open Access Journals (Sweden)

    X. Shi

    2015-05-01

    Full Text Available This paper established three different optimization models in order to predict the temperature at the Foping station. Principal component analysis (PCA) was used to reduce the multivariate climate factors to a few variables, and the parameters of the support vector machine (SVM) were optimized with a genetic algorithm (GA), particle swarm optimization (PSO), and a developed genetic algorithm. The most suitable method was identified by comparing the results of the three models. The results are as follows: the predictions obtained with the parameters optimized by the developed genetic algorithm were closest to the measured values, fitted the measured trend best, and converged relatively fast.
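A minimal global-best PSO of the kind compared here can be sketched as follows; the smooth surrogate objective stands in for the SVM cross-validation error over (log C, log gamma), and the swarm constants are generic textbook values, not the paper's.

```python
import random

def surrogate_error(log_c, log_gamma):
    # Invented smooth stand-in for the SVM cross-validation error;
    # the real objective would train an SVM for each candidate.
    return (log_c - 2.0) ** 2 + (log_gamma + 1.0) ** 2

def pso(f, n=30, iters=200, seed=2):
    # Plain global-best PSO; an "improved" variant would additionally
    # detect premature convergence and perturb the swarm.
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(*p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(*pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

best, val = pso(surrogate_error)
print(best, val)  # near the minimum at (2, -1)
```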

  13. The optimal extraction parameters and anti-diabetic activity of ...

    African Journals Online (AJOL)

    diabetic activity of FIBL on alloxan induced diabetic mice were studied. The optimal extraction parameters of FIBL were obtained by single factor test and orthogonal test, as follows: ethanol concentration 60 %, ratio of solvent to raw material 30 ...

  14. Optimization of process parameters for synthesis of silica–Ni ...

    Indian Academy of Sciences (India)

    Optimization of process parameters for synthesis of silica–Ni nanocomposite by design of experiment ... Sol–gel; Ni; design of experiments; nanocomposites. ... Kolkata 700 032, India; Rustech Products Pvt. Ltd., Kolkata 700 045, India ...

  15. Setting of the Optimal Parameters of Melted Glass

    Czech Academy of Sciences Publication Activity Database

    Luptáková, Natália; Matejíčka, L.; Krečmer, N.

    2015-01-01

    Vol. 10, No. 1 (2015), pp. 73-79, ISSN 1802-2308. Institutional support: RVO:68081723. Keywords: Striae * Glass * Glass melting * Regression * Optimal parameters. Subject RIV: JH - Ceramics, Fire-Resistant Materials and Glass

  16. Development of An Optimization Method for Determining Automation Rate in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun

    2014-01-01

    Since automation was introduced in various industrial fields, it has been known that automation provides positive effects, such as greater efficiency and fewer human errors, as well as a negative effect known as out-of-the-loop (OOTL). Thus, before introducing automation in the nuclear field, the positive and negative effects of automation on human operators should be estimated. In this paper, focusing on CPS, an optimization method to find an appropriate proportion of automation is suggested by integrating the suggested cognitive automation rate and the concept of the level of ostracism. The cognitive automation rate estimation method was suggested to express the reduced amount of human cognitive load, and the level of ostracism was suggested to express the difficulty in obtaining information from the automation system and the increased uncertainty of human operators' diagnoses. The maximum proportion of automation that maintains a high level of attention for monitoring the situation is derived by an experiment, and the automation rate is estimated by the suggested automation rate estimation method. This is expected to yield an appropriate proportion of automation that avoids the OOTL problem while having maximum efficacy at the same time

  17. Search Parameter Optimization for Discrete, Bayesian, and Continuous Search Algorithms

    Science.gov (United States)

    2017-09-01

    Naval Postgraduate School, Monterey, California, thesis, September 2017. Title: Search Parameter Optimization for Discrete, Bayesian, and Continuous Search Algorithms. Applications range from simple search and rescue acts to prosecuting aerial/surface/submersible targets on mission. This research looks at varying the known discrete and

  18. Optimization Design of Multi-Parameters in Rail Launcher System

    OpenAIRE

    Yujiao Zhang; Weinan Qin; Junpeng Liao; Jiangjun Ruan

    2014-01-01

    Today's energy storage systems are still cumbersome, so it is useful to think about optimizing a railgun system in order to achieve the best performance with the lowest energy input. In this paper, an optimal design method considering 5 parameters is proposed to improve the energy conversion efficiency of a simple railgun. In order to avoid costly trials, the field-circuit method is employed to analyze the operations of different structural railguns with different paramete...

  19. Selection of an optimal neural network architecture for computer-aided detection of microcalcifications - Comparison of automated optimization techniques

    International Nuclear Information System (INIS)

    Gurcan, Metin N.; Sahiner, Berkman; Chan Heangping; Hadjiiski, Lubomir; Petrick, Nicholas

    2001-01-01

    Many computer-aided diagnosis (CAD) systems use neural networks (NNs) for either detection or classification of abnormalities. Currently, most NNs are 'optimized' by manual search in a very limited parameter space. In this work, we evaluated the use of automated optimization methods for selecting an optimal convolution neural network (CNN) architecture. Three automated methods, the steepest descent (SD), the simulated annealing (SA), and the genetic algorithm (GA), were compared. We used as an example the CNN that classifies true and false microcalcifications detected on digitized mammograms by a prescreening algorithm. Four parameters of the CNN architecture were considered for optimization, the numbers of node groups and the filter kernel sizes in the first and second hidden layers, resulting in a search space of 432 possible architectures. The area Az under the receiver operating characteristic (ROC) curve was used to design a cost function. The SA experiments were conducted with four different annealing schedules. Three different parent selection methods were compared for the GA experiments. An available data set was split into two groups with approximately equal numbers of samples. By using the two groups alternately for training and testing, two different cost surfaces were evaluated. For the first cost surface, the SD method was trapped in a local minimum 91% (392/432) of the time. The SA using the Boltzmann schedule selected the best architecture after evaluating, on average, 167 architectures. The GA achieved its best performance with linearly scaled roulette-wheel parent selection; however, it evaluated 391 different architectures, on average, to find the best one. The second cost surface contained no local minimum. For this surface, a simple SD algorithm could quickly find the global minimum, but the SA with the very fast reannealing schedule was still the most efficient. The same SA scheme, however, was trapped in a local minimum on the first cost
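A minimal simulated-annealing search over such a discrete architecture grid can be sketched as follows; the grid and the quadratic cost surface are invented stand-ins for training a CNN and measuring the ROC area.

```python
import math
import random

def cost(arch):
    # Invented cost surface over (groups1, kernel1, groups2, kernel2);
    # in the paper the cost comes from the ROC area of a trained CNN.
    g1, k1, g2, k2 = arch
    return (g1 - 8) ** 2 + (k1 - 5) ** 2 + (g2 - 4) ** 2 + (k2 - 3) ** 2

GRID = [range(2, 17, 2), range(3, 10, 2), range(2, 9, 2), range(3, 8, 2)]

def neighbor(arch, rng):
    # Move one architecture parameter to an adjacent grid value.
    arch = list(arch)
    d = rng.randrange(4)
    vals = list(GRID[d])
    i = vals.index(arch[d])
    arch[d] = vals[max(0, min(len(vals) - 1, i + rng.choice([-1, 1])))]
    return tuple(arch)

def anneal(t0=10.0, cooling=0.95, steps=500, seed=3):
    # Accept uphill moves with probability exp(-delta/T), cooling T
    # geometrically; keep the best architecture seen so far.
    rng = random.Random(seed)
    cur = tuple(rng.choice(list(v)) for v in GRID)
    best, t = cur, t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        delta = cost(cand) - cost(cur)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur = cand
        if cost(cur) < cost(best):
            best = cur
        t *= cooling
    return best

best_arch = anneal()
print(best_arch, cost(best_arch))
```

The annealing schedule (here geometric) is exactly the kind of choice the study compared across four variants.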

  20. Architecture of Automated Database Tuning Using SGA Parameters

    Directory of Open Access Journals (Sweden)

    Hitesh KUMAR SHARMA

    2012-05-01

    Full Text Available Business data grows continually, from kilobytes through megabytes, gigabytes, terabytes, petabytes, and beyond; as long as the business keeps running, this growth cannot be avoided. Because of this, database tuning is a critical part of an information system. Tuning a database in a cost-effective manner is a growing challenge. The total cost of ownership (TCO) of information technology needs to be significantly reduced by minimizing people costs. In fact, mistakes in the operation and administration of information systems are the single most common reason for system outages and unacceptable performance [3]. One way of addressing the challenge of total cost of ownership is by making information systems more self-managing. A particularly difficult piece of the ambitious vision of making database systems self-managing is the automation of database performance tuning. In this paper, we will explain the progress made thus far on this important problem. Specifically, we will propose an architecture and algorithm for this problem.

  1. On the role of modeling parameters in IMRT plan optimization

    International Nuclear Information System (INIS)

    Krause, Michael; Scherrer, Alexander; Thieke, Christian

    2008-01-01

    The formulation of optimization problems in intensity-modulated radiotherapy (IMRT) planning comprises the choice of various values such as function-specific parameters or constraint bounds. In current inverse planning programs that yield a single treatment plan for each optimization, it is often unclear how strongly these modeling parameters affect the resulting plan. This work investigates the mathematical concepts of elasticity and sensitivity to deal with this problem. An artificial planning case with a horse-shoe formed target with different opening angles surrounding a circular risk structure is studied. As evaluation functions the generalized equivalent uniform dose (EUD) and the average underdosage below and average overdosage beyond certain dose thresholds are used. A single IMRT plan is calculated for an exemplary parameter configuration. The elasticity and sensitivity of each parameter are then calculated without re-optimization, and the results are numerically verified. The results show the following. (1) elasticity can quantify the influence of a modeling parameter on the optimization result in terms of how strongly the objective function value varies under modifications of the parameter value. It also can describe how strongly the geometry of the involved planning structures affects the optimization result. (2) Based on the current parameter settings and corresponding treatment plan, sensitivity analysis can predict the optimization result for modified parameter values without re-optimization, and it can estimate the value intervals in which such predictions are valid. In conclusion, elasticity and sensitivity can provide helpful tools in inverse IMRT planning to identify the most critical parameters of an individual planning problem and to modify their values in an appropriate way
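In compact form (our notation, not necessarily the paper's), the elasticity of the objective F with respect to a modeling parameter p, and the first-order sensitivity prediction used to estimate the effect of changing p without re-optimization, read:

```latex
e_p \;=\; \frac{p}{F(p)}\,\frac{\partial F}{\partial p},
\qquad
F(p + \Delta p) \;\approx\; F(p) \;+\; \frac{\partial F}{\partial p}\,\Delta p .
```

The elasticity is dimensionless (relative change of F per relative change of p), which is what makes parameters with different units comparable; the linear prediction is valid only within the parameter interval estimated by the sensitivity analysis.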

  2. An optimization method for parameters in reactor nuclear physics

    International Nuclear Information System (INIS)

    Jachic, J.

    1982-01-01

    An optimization method for two basic problems of reactor physics was developed. The first is the optimization of the plutonium critical mass and the breeding ratio for fast reactors as a function of the radial enrichment distribution of the fuel, used as the control parameter. The second is the maximization of plutonium generation and burnup by optimizing the temporal power distribution. (E.G.) [pt

  3. Parameter optimization toward optimal microneedle-based dermal vaccination.

    Science.gov (United States)

    van der Maaden, Koen; Varypataki, Eleni Maria; Yu, Huixin; Romeijn, Stefan; Jiskoot, Wim; Bouwstra, Joke

    2014-11-20

    Microneedle-based vaccination has several advantages over vaccination by using conventional hypodermic needles. Microneedles are used to deliver a drug into the skin in a minimally-invasive and potentially pain free manner. Besides, the skin is a potent immune organ that is highly suitable for vaccination. However, there are several factors that influence the penetration ability of the skin by microneedles and the immune responses upon microneedle-based immunization. In this study we assessed several different microneedle arrays for their ability to penetrate ex vivo human skin by using trypan blue and (fluorescently or radioactively labeled) ovalbumin. Next, these different microneedles and several factors, including the dose of ovalbumin, the effect of using an impact-insertion applicator, skin location of microneedle application, and the area of microneedle application, were tested in vivo in mice. The penetration ability and the dose of ovalbumin that is delivered into the skin were shown to be dependent on the use of an applicator and on the microneedle geometry and size of the array. Besides microneedle penetration, the above described factors influenced the immune responses upon microneedle-based vaccination in vivo. It was shown that the ovalbumin-specific antibody responses upon microneedle-based vaccination could be increased up to 12-fold when an impact-insertion applicator was used, up to 8-fold when microneedles were applied over a larger surface area, and up to 36-fold dependent on the location of microneedle application. Therefore, these influencing factors should be considered to optimize microneedle-based dermal immunization technologies. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Design of an optimal automation system: Finding a balance between a human's task engagement and exhaustion

    NARCIS (Netherlands)

    Klein, Michel; van Lambalgen, Rianne

    2011-01-01

    In demanding tasks, human performance can seriously degrade as a consequence of increased workload and limited resources. In such tasks it is very important to maintain an optimal performance quality; therefore, automation assistance is required. On the other hand, automation can also impose

  5. Optimization of hydraulic turbine governor parameters based on WPA

    Science.gov (United States)

    Gao, Chunyang; Yu, Xiangyang; Zhu, Yong; Feng, Baohao

    2018-01-01

    The parameters of a hydraulic turbine governor directly affect the dynamic characteristics of the hydraulic unit, and thus the regulation capacity and the power quality of the power grid. The governor of a conventional hydropower unit is mainly a PID governor with three adjustable parameters, which are difficult to tune. In order to optimize the hydraulic turbine governor, this paper proposes the wolf pack algorithm (WPA) for intelligent tuning, given WPA's good global optimization capability. Compared with the traditional optimization method and the PSO algorithm, the results show that the PID controller designed by WPA achieves good dynamic quality of the hydraulic system and inhibits overshoot.

  6. Integral Optimization of Systematic Parameters of Flip-Flow Screens

    Institute of Scientific and Technical Information of China (English)

    翟宏新

    2004-01-01

    The synthetic index Ks for evaluating flip-flow screens is proposed and systematically optimized in view of the whole system. A series of optimized values of the relevant parameters are found and then compared with those of the current industrial specifications. The results show that the optimized value of Ks approaches that of the well-known flip-flow screens in the world. Some new findings on geometric and kinematic parameters are useful for improving flip-flow screens with a low Ks value, which is helpful in developing clean coal technology.

  7. Investigation and validation of optimal cutting parameters for least ...

    African Journals Online (AJOL)

    The cutting parameters were analyzed and optimized using the Box-Behnken procedure in the DESIGN EXPERT environment. The effects of the process parameters on the output variable were predicted, indicating that the cutting speed plays the most significant role in producing the least surface roughness, followed by feed and ...

  8. VISUALIZATION SOFTWARE DEVELOPMENT FOR PROCEDURE OF MULTI-DIMENSIONAL OPTIMIZATION OF TECHNOLOGICAL PROCESS FUNCTIONAL PARAMETERS

    Directory of Open Access Journals (Sweden)

    E. N. Ishakova

    2016-05-01

    Full Text Available A method for multi-criteria optimization of the design parameters of a technological object is described. The existing optimization methods are overviewed, and works in the field of basic research and applied problems are analyzed. Based on the process requirements, the problem is formulated so as to choose the geometrical dimensions of machine tips and the flow rate of the process such that the resulting technical and economic parameters are optimal. The problem formulation describes the application of the performance method adapted to a particular domain. The implementation of the task is shown: the construction of characteristics for the studied object, subject to certain parameter restrictions, in both analytical and graphical representation. On the basis of this theoretical research, a software system is developed that makes it possible to automate the discovery of optimal solutions for specific problems. Using the available information sources that characterize the object of study, it is possible to establish identifiers and to add restrictions, both one-sided and on an interval. The obtained result is a visual depiction of the dependence of the main study parameters on the others, which may have an impact both on the flow of the process and on the quality of products. The resulting optimal area shows the use of different design options for the technological object in an acceptable kinematic range, which makes it possible for the researcher to choose the best design solution.

  9. Parameter optimization of electrochemical machining process using black hole algorithm

    Science.gov (United States)

    Singh, Dinesh; Shukla, Rajkamal

    2017-12-01

    Advanced machining processes are significant as higher accuracy in machined components is required in the manufacturing industries. Parameter optimization of machining processes gives optimum control to achieve the desired goals. In this paper, the electrochemical machining (ECM) process is considered to evaluate the performance of the process using the black hole algorithm (BHA). BHA builds on the fundamental idea of black hole theory and has few operating parameters to tune. The two performance parameters, material removal rate (MRR) and overcut (OC), are considered separately to obtain optimum machining parameter settings using BHA. The variations of the process parameters with respect to the performance parameters are reported for a better and more effective understanding of the process, using a single objective at a time. The results obtained using BHA are found to be better than the results of other metaheuristic algorithms, such as the genetic algorithm (GA), artificial bee colony (ABC) and biogeography-based optimization (BBO), attempted by previous researchers.
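
    The black hole search loop mentioned above can be sketched in a few lines. This is a generic BHA sketch, assuming the standard move-toward-black-hole update and a fitness-proportional event horizon; the quadratic objective is a made-up stand-in for the paper's MRR/OC models.

```python
import random

def black_hole_optimize(f, bounds, n_stars=20, iters=200, seed=0):
    """Minimize f over a box via the black hole algorithm (generic sketch)."""
    rng = random.Random(seed)
    rand_star = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    stars = [rand_star() for _ in range(n_stars)]
    bh = min(stars, key=f)  # best star so far acts as the black hole
    for _ in range(iters):
        for i, s in enumerate(stars):
            # each star drifts a random fraction of the way toward the black hole
            stars[i] = [x + rng.random() * (b - x) for x, b in zip(s, bh)]
            if f(stars[i]) < f(bh):
                bh = stars[i]
        # event horizon radius proportional to the black hole's share of fitness
        total = sum(f(s) for s in stars) or 1e-12
        radius = f(bh) / total
        for i in range(len(stars)):
            if stars[i] is not bh:
                dist = sum((x - b) ** 2 for x, b in zip(stars[i], bh)) ** 0.5
                if dist < radius:
                    stars[i] = rand_star()  # swallowed: replace with a new star
    return bh

# toy objective standing in for an MRR/OC regression model (hypothetical)
best = black_hole_optimize(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                           [(-5, 5), (-5, 5)])
```

    In the paper's setting, f would instead evaluate a fitted model of MRR or OC at a candidate machining parameter setting.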

  10. Multi-parameter optimization design of parabolic trough solar receiver

    International Nuclear Information System (INIS)

    Guo, Jiangfeng; Huai, Xiulan

    2016-01-01

    Highlights: • The optimal condition can be obtained by multi-parameter optimization. • Exergy and thermal efficiencies are employed as objective functions. • Exergy efficiency increases at the expense of heat losses. • The heat obtained by the working fluid increases as thermal efficiency grows. - Abstract: The design parameters of a parabolic trough solar receiver are interrelated and interact with one another, so the optimal performance of the solar receiver cannot be obtained by conventional single-parameter optimization. To overcome this shortcoming of single-parameter optimization, a multi-parameter optimization of the parabolic trough solar receiver based on a genetic algorithm is employed in the present work. When the thermal efficiency is taken as the objective function, the heat obtained by the working fluid increases while the average temperature of the working fluid and the wall temperatures of the solar receiver decrease. When the exergy efficiency is taken as the objective function, the average temperature of the working fluid and the wall temperatures of the solar receiver increase while the heat obtained by the working fluid generally decreases. Assuming that the solar radiation intensity remains constant, the exergy obtained by the working fluid increases when exergy efficiency is taken as the objective function, which comes at the expense of heat losses of the solar receiver.

  11. Genetic Algorithm Optimizes Q-LAW Control Parameters

    Science.gov (United States)

    Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard

    2008-01-01

    A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools. When good initial solutions are used, the high-fidelity optimization tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performance of the Q-law control parameters is evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method, which assigns a better fitness value to solutions that are dominated by fewer other solutions. With the ranking result, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions in the evolution process. The population of solutions converges to the Pareto front that is permitted within the Q-law control parameter space.
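
    The dominance-count ranking described above can be illustrated with a minimal sketch (a simplified stand-in for full NSGA-style sorting; the sample (flight time, propellant mass) pairs are made up):

```python
def non_dominated_rank(points):
    """Rank minimization solutions by how many other points dominate them;
    fewer dominators means a fitter solution (rank 0 = non-dominated front)."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [sum(dominates(q, p) for q in points) for p in points]

# hypothetical (flight time, propellant mass) pairs
ranks = non_dominated_rank([(1, 5), (2, 4), (3, 3), (2, 6), (4, 4)])
# the first three points form the Pareto front (rank 0);
# (2, 6) and (4, 4) are each dominated by two others (rank 2)
```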

  12. Nonlinearity Analysis and Parameters Optimization for an Inductive Angle Sensor

    Directory of Open Access Journals (Sweden)

    Lin Ye

    2014-02-01

    Full Text Available Using the finite element method (FEM) and particle swarm optimization (PSO), a nonlinearity analysis based on parameter optimization is proposed to design an inductive angle sensor. Due to the structural complexity of the sensor, understanding the influence of the structure parameters on the nonlinearity errors is a critical step in designing an effective sensor. Key parameters are selected for the design based on their effects on the nonlinearity errors. The finite element method and particle swarm optimization are combined in the sensor design to minimize the nonlinearity error. In the simulation, the nonlinearity error of the optimized sensor is 0.053% in the angle range from −60° to 60°. A prototype sensor was manufactured and measured experimentally, and the experimental nonlinearity error is 0.081% in the angle range from −60° to 60°.

  13. Improving the automated optimization of profile extrusion dies by applying appropriate optimization areas and strategies

    Science.gov (United States)

    Hopmann, Ch.; Windeck, C.; Kurth, K.; Behr, M.; Siegbert, R.; Elgeti, S.

    2014-05-01

    The rheological design of profile extrusion dies is one of the most challenging tasks in die design. As no analytical solution is available, the quality of and the development time for a new design highly depend on the empirical knowledge of the die manufacturer. Usually, prior to starting production, several time-consuming, iterative running-in trials need to be performed to check the profile accuracy, and the die geometry is reworked accordingly. An alternative is numerical flow simulation. Such simulations make it possible to calculate the melt flow through a die so that the quality of the flow distribution can be analyzed. The objective of a current research project is to improve the automated optimization of profile extrusion dies. Special emphasis is put on choosing a convenient starting geometry and parameterization that allow for the necessary deformations. In this work, three commonly used design features are examined with regard to their influence on the optimization results. Based on the results, a strategy is derived to select the most relevant areas of the flow channels for the optimization. For these characteristic areas, recommendations are given concerning an efficient parameterization setup that still enables adequate deformations of the flow channel geometry. As an example, this approach is applied to an L-shaped profile with different wall thicknesses. The die is optimized automatically and the simulation results are qualitatively compared with experimental results. Furthermore, the strategy is applied to a complex extrusion die for a floor skirting profile to prove its universal adaptability.

  14. Multi-objective optimization in quantum parameter estimation

    Science.gov (United States)

    Gong, BeiLi; Cui, Wei

    2018-04-01

    We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.

  15. Optimizing chirped laser pulse parameters for electron acceleration in vacuum

    Energy Technology Data Exchange (ETDEWEB)

    Akhyani, Mina; Jahangiri, Fazel; Niknam, Ali Reza; Massudi, Reza, E-mail: r-massudi@sbu.ac.ir [Laser and Plasma Research Institute, Shahid Beheshti University, Tehran 1983969411 (Iran, Islamic Republic of)

    2015-11-14

    Electron dynamics in the field of a chirped linearly polarized laser pulse is investigated. Variations of electron energy gain versus chirp parameter, time duration, and initial phase of laser pulse are studied. Based on maximizing laser pulse asymmetry, a numerical optimization procedure is presented, which leads to the elimination of rapid fluctuations of gain versus the chirp parameter. Instead, a smooth variation is observed that considerably reduces the accuracy required for experimentally adjusting the chirp parameter.

  16. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    Science.gov (United States)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: the infiltration parameters are obtained in the first stage and the unit hydrograph ordinates are estimated in the second. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem into an unconstrained one. The proposed approach is designed in such a way that the model first obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using genetic algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during the subsequent generations of the genetic algorithm required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated on two example problems. The evaluation shows that the model is superior, simple in concept, and has potential for field application.
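
    The core penalty-parameter idea, folding constraints into the objective so that a single unconstrained search handles both, can be sketched generically. The objective and constraint below are toy examples, not the paper's infiltration/unit-hydrograph model or its modified reduction-factor scheme.

```python
def penalized(f, constraints, penalty):
    """Convert a constrained problem (g(x) <= 0 for each g) into an
    unconstrained one by adding penalty * (squared constraint violation)."""
    def obj(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + penalty * violation
    return obj

# Toy problem: minimize (x - 3)^2 subject to x <= 2; the constrained
# optimum sits on the boundary at x = 2.
obj = penalized(lambda x: (x - 3) ** 2, [lambda x: x - 2], penalty=1e4)
best = min((i / 100 for i in range(-500, 501)), key=obj)  # coarse scan
```

    In the paper, the unconstrained objective is then searched by a genetic algorithm rather than a grid scan, with the penalty term shaped so the infiltration parameters settle first.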

  17. ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, Scott W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Seth R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bevill, Aaron M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ibrahim, Ahmad M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Daily, Charles R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Evans, Thomas M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wagner, John C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Jeffrey O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Grove, Robert E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-08-01

    The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.

  18. On the use of PGD for optimal control applied to automated fibre placement

    Science.gov (United States)

    Bur, N.; Joyot, P.

    2017-10-01

    Automated Fibre Placement (AFP) is an incipient manufacturing process for composite structures. Despite its conceptual simplicity, it involves many complexities: the thermoplastic must be melted at the tape-substrate interface, consolidation requires the diffusion of molecules, and the installation of residual stresses responsible for the residual deformations of the formed parts must be controlled. The optimisation of the process and the determination of the process window cannot be achieved in a traditional way, since that would require a plethora of trial-and-error runs or numerical simulations: many parameters are involved in the characterisation of the material and the process. Reduced order modelling, such as the so-called Proper Generalised Decomposition (PGD) method, allows the construction of multi-parametric solutions taking many parameters into account. This leads to virtual charts that can be explored on-line in real time in order to perform process optimisation or on-line simulation-based control. Thus, for a given set of parameters, determining the power leading to an optimal temperature becomes easy. However, instead of controlling the power from the known temperature field by particularising an abacus, we propose here an approach based on optimal control: we solve by PGD a dual problem derived from the heat equation and an optimality criterion. To circumvent numerical issues due to the ill-conditioned system, we propose an algorithm based on Uzawa's method. In this way, we are able to solve the dual problem, setting the desired state as an extra coordinate in the PGD framework. In a single computation, we get both the temperature field and the heat flux required to reach a parametric optimal temperature on a given zone.

  19. Fine-Tuning ADAS Algorithm Parameters for Optimizing Traffic ...

    Science.gov (United States)

    With the development of Connected Vehicle technology, which facilitates wireless communication among vehicles and road-side infrastructure, Advanced Driver Assistance Systems (ADAS) can be adopted as an effective tool for accelerating traffic safety and mobility optimization at various highway facilities. To this end, traffic management centers identify the optimal ADAS algorithm parameter set that enables the maximum improvement of traffic safety and mobility performance, and broadcast the optimal parameter set wirelessly to individual ADAS-equipped vehicles. After adopting the optimal parameter set, the ADAS-equipped drivers become active agents in the traffic stream that work collectively and consistently to prevent traffic conflicts, lower the intensity of traffic disturbances, and suppress the development of traffic oscillations into heavy traffic jams. Successful implementation of this objective requires the capability of capturing the impact of the ADAS on driving behaviors, and of measuring traffic safety and mobility performance under the influence of the ADAS. To address this challenge, this research proposes a synthetic methodology that incorporates ADAS-affected driving behavior modeling and state-of-the-art microscopic traffic flow modeling into a virtually simulated environment. Building on such an environment, the optimal ADAS algorithm parameter set is identified through an optimization programming framework to enable th

  20. APPLICATION OF GENETIC ALGORITHMS FOR ROBUST PARAMETER OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    N. Belavendram

    2010-12-01

    Full Text Available Parameter optimization can be achieved by many methods, such as Monte Carlo, full, and fractional factorial designs. Genetic algorithms (GA) are fairly recent in this respect but afford a novel method of parameter optimization. In a GA, there is an initial pool of individuals, each with its own specific phenotypic trait expressed as a ‘genetic chromosome’. Different genes enable individuals with different fitness levels to reproduce according to natural reproductive gene theory. This reproduction is established in terms of selection, crossover and mutation of the reproducing genes. The resulting child generation of individuals has a better fitness level, akin to natural selection, namely evolution. Populations evolve towards the fittest individuals. Such a mechanism has a parallel application in parameter optimization. Factors in a parameter design can be expressed as a genetic analogue in a pool of sub-optimal random solutions. Allowing this pool of sub-optimal solutions to evolve over several generations produces fitter generations converging to a pre-defined engineering optimum. In this paper, a genetic algorithm is used to study a seven-factor non-linear equation for a Wheatstone bridge as the equation to be optimized. A comparison of the full factorial design against the GA method shows that the GA method is about 1200 times faster in finding a comparable solution.
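
    The selection/crossover/mutation loop described above can be sketched as a minimal real-coded GA. This is a generic illustration with made-up operator choices (tournament selection, blend crossover, Gaussian mutation, elitist truncation); the quadratic objective is a placeholder for the seven-factor Wheatstone bridge equation.

```python
import random

def ga_minimize(f, bounds, pop=40, gens=100, pm=0.1, seed=1):
    """Minimal real-coded genetic algorithm (illustrative sketch)."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    def tournament():
        a, b = rng.sample(P, 2)  # selection: fitter of two random parents
        return a if f(a) < f(b) else b
    for _ in range(gens):
        Q = []
        for _ in range(pop):
            p1, p2 = tournament(), tournament()
            w = rng.random()
            # crossover: blend the two parent chromosomes
            child = [w * x + (1 - w) * y for x, y in zip(p1, p2)]
            if rng.random() < pm:  # mutation: perturb one random gene
                j = rng.randrange(dim)
                lo, hi = bounds[j]
                child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.1 * (hi - lo))))
            Q.append(child)
        # elitism: keep the fittest individuals of parents plus children
        P = sorted(P + Q, key=f)[:pop]
    return P[0]

best = ga_minimize(lambda p: (p[0] - 1) ** 2 + (p[1] - 1) ** 2,
                   [(-5, 5), (-5, 5)])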

  1. Automated design and optimization of flexible booster autopilots via linear programming, volume 1

    Science.gov (United States)

    Hauser, F. D.

    1972-01-01

    A nonlinear programming technique was developed for the automated design and optimization of autopilots for large flexible launch vehicles. This technique, which resulted in the COEBRA program, uses the iterative application of linear programming. The method deals directly with the three main requirements of booster autopilot design: to provide (1) good response to guidance commands; (2) response to external disturbances (e.g. wind) to minimize structural bending moment loads and trajectory dispersions; and (3) stability with specified tolerances on the vehicle and flight control system parameters. The method is applicable to very high order systems (30th and greater per flight condition). Examples are provided that demonstrate the successful application of the employed algorithm to the design of autopilots for both single and multiple flight conditions.

  2. Estimating cellular parameters through optimization procedures: elementary principles and applications

    Directory of Open Access Journals (Sweden)

    Akatsuki Kimura

    2015-03-01

    Full Text Available Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus obtain mechanistic insights into phenomena of interest.
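
    The gradient approach to SSE minimization, in its simplest one-parameter form, can be sketched as follows. The exponential-decay model and its synthetic data are hypothetical illustrations, not one of the four cited studies.

```python
import math

def fit_rate(times, observed, lr=0.05, steps=2000):
    """Estimate a decay rate k in the model y = exp(-k t) by gradient
    descent on the sum of squared errors (one-parameter sketch)."""
    k = 0.1  # initial guess
    for _ in range(steps):
        # dSSE/dk = sum 2*(model - obs) * d(model)/dk, with d/dk e^{-kt} = -t e^{-kt}
        grad = sum(2 * (math.exp(-k * t) - y) * (-t) * math.exp(-k * t)
                   for t, y in zip(times, observed))
        k -= lr * grad  # step downhill on the SSE surface
    return k

ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(-0.8 * t) for t in ts]  # noiseless synthetic data, true k = 0.8
k_hat = fit_rate(ts, ys)
```

    With noiseless data the descent recovers the true rate; stochastic or sampling variants, as the article notes, become necessary when the SSE surface has multiple local minima.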

  3. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by previous work, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.
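
    The refined grid search idea, a coarse grid followed by zooming around the incumbent best, can be sketched generically in one dimension. The quadratic "SNR" surrogate and its 0.37 peak are made up for illustration; the paper's search is over the kernel and regularization parameters jointly.

```python
def refined_grid_search(score, lo, hi, steps=3, n=11):
    """Maximize `score` over [lo, hi] by repeated grid refinement:
    evaluate an n-point grid, then zoom to one grid cell around the best."""
    best = None
    for _ in range(steps):
        grid = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
        best = max(grid, key=score)
        span = (hi - lo) / (n - 1)
        lo, hi = best - span, best + span  # zoom in around the incumbent
    return best

# toy unimodal "model SNR" with a peak at 0.37 (hypothetical)
peak = refined_grid_search(lambda s: -(s - 0.37) ** 2, 0.0, 1.0)
```

    Each refinement step shrinks the grid spacing by a factor of (n - 1)/2, so three steps of an 11-point grid already resolve the peak to about a thousandth of the original interval.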

  4. An Automated DAKOTA and VULCAN-CFD Framework with Application to Supersonic Facility Nozzle Flowpath Optimization

    Science.gov (United States)

    Axdahl, Erik L.

    2015-01-01

    Removing human interaction from design processes by using automation may lead to gains in both productivity and design precision. This memorandum describes efforts to incorporate high-fidelity numerical analysis tools into an automated framework and to apply that framework to applications of practical interest. The purpose of this effort was to integrate VULCAN-CFD into an automated, DAKOTA-enabled framework, with a proof-of-concept application being the optimization of supersonic test facility nozzles. It was shown that the optimization framework could be deployed on a high performance computing cluster with the flow of information handled effectively to guide the optimization process. Furthermore, the application of the framework to supersonic test facility nozzle flowpath design and optimization was demonstrated using multiple optimization algorithms.

  5. An optimal generic model for multi-parameters and big data optimizing: a laboratory experimental study

    Science.gov (United States)

    Utama, D. N.; Ani, N.; Iqbal, M. M.

    2018-03-01

    Optimization is a process for finding the parameter or parameters that deliver an optimal value of an objective function. Seeking an optimal generic model for optimization is a computer science problem that has been actively pursued by numerous researchers. A generic model is a model that can be operated to solve any variety of optimization problem. The generic model for optimization was constructed using an object-oriented method. Moreover, two types of optimization method, simulated annealing and hill climbing, were implemented in the model and compared to find the more optimal one. The results showed that both methods gave the same value of the objective function, while the hill-climbing-based model consumed the shorter running time.
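
    The two methods compared in the abstract can be sketched side by side. These are generic one-dimensional versions with made-up step sizes and a linear cooling schedule, not the paper's object-oriented implementation.

```python
import math
import random

def hill_climb(f, x0, step=0.1, iters=500, seed=0):
    """Greedy local search: accept a random neighbour only if it improves f."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        y = x + rng.uniform(-step, step)
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x

def simulated_annealing(f, x0, step=0.5, iters=2000, t0=1.0, seed=0):
    """Like hill climbing, but worse moves are accepted with probability
    exp(-delta/T); the temperature T cools, so escapes become rarer."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for k in range(iters):
        t = t0 * (1 - k / iters) + 1e-9  # linear cooling schedule
        y = x + rng.uniform(-step, step)
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
    return x

x_hc = hill_climb(lambda x: (x - 2) ** 2, 0.0)
x_sa = simulated_annealing(lambda x: (x - 2) ** 2, 0.0)
```

    On a unimodal objective like this both land near the optimum, matching the abstract's observation of equal objective values; hill climbing does less work per iteration, consistent with its shorter running time.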

  6. Control parameter optimization for AP1000 reactor using Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Wang, Pengfei; Wan, Jiashuang; Luo, Run; Zhao, Fuyu; Wei, Xinyu

    2016-01-01

    Highlights: • The PSO algorithm is applied for control parameter optimization of the AP1000 reactor. • Key parameters of the MSHIM control system are optimized. • Optimization results are evaluated through simulations and quantitative analysis. - Abstract: The advanced mechanical shim (MSHIM) core control strategy is implemented in the AP1000 reactor for simultaneous control of core reactivity and axial power distribution. The MSHIM core control system can provide superior reactor control capabilities via automatic rod control only. This enables the AP1000 to perform power change operations automatically, without soluble boron concentration adjustments. In this paper, the Particle Swarm Optimization (PSO) algorithm has been applied to the parameter optimization of the MSHIM control system to acquire better reactor control performance for the AP1000. System requirements such as power control performance, control bank movement and AO control constraints are reflected in the objective function. Dynamic simulations are performed on an AP1000 reactor simulation platform in each iteration of the optimization process to calculate the fitness values of the particles in the swarm. The simulation platform is developed in the Matlab/Simulink environment with an implementation of a nodal core model and the MSHIM control strategy. Based on the simulation platform, the typical 10% step load decrease transient from 100% to 90% full power is simulated, and the objective function used for control parameter tuning is evaluated directly from the simulation results. With the successful implementation of the PSO algorithm in the control parameter optimization of the AP1000 reactor, four key parameters of the MSHIM control system are optimized. The calculation results demonstrate that the optimized MSHIM control system parameters can improve the reactor power control capability and reduce control rod movement without compromising AO control. Therefore, the PSO based optimization
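
    The PSO loop driving such a tuning study can be sketched minimally. This is a generic textbook PSO with assumed inertia and acceleration coefficients; the quadratic objective merely stands in for the paper's fitness, which requires a full transient simulation per evaluation.

```python
import random

def pso_minimize(f, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle is pulled toward its
    personal best and the swarm's global best (illustrative sketch)."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]          # personal best positions
    pval = [f(x) for x in X]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]  # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pval[i]:
                pbest[i], pval[i] = X[i][:], fx
                if fx < gval:
                    gbest, gval = X[i][:], fx
    return gbest

# toy objective standing in for the simulation-based fitness (hypothetical)
best = pso_minimize(lambda p: (p[0] - 1) ** 2 + (p[1] + 3) ** 2,
                    [(-10, 10), (-10, 10)])
```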

  7. Parameters Investigation of Mathematical Model of Productivity for Automated Line with Availability by DMAIC Methodology

    Directory of Open Access Journals (Sweden)

    Tan Chan Sin

    2014-01-01

    Full Text Available Automated lines are widely applied in industry, especially for mass production with low product variety. Productivity is one of the important criteria of an automated line, as in industry generally, since it directly determines outputs and profits. Industry needs to forecast productivity accurately in order to meet customer demand, and the forecast is calculated using a mathematical model. A mathematical model of productivity with availability for automated lines has been introduced to express productivity in terms of a single level of reliability for stations and mechanisms. Since this mathematical model of productivity with availability cannot achieve a productivity close enough to the actual one, owing to the lack of parameters considered, the model needs to be enhanced to consider and add the loss parameters that are not included in the current model. This paper presents the parameters of productivity losses investigated by using the DMAIC (Define, Measure, Analyze, Improve, and Control) concept and the PACE prioritization matrix (Priority, Action, Consider, and Eliminate). The investigated parameters are important for the further improvement of the mathematical model of productivity with availability, in order to develop a robust mathematical model of productivity for automated lines.

  8. Parameter Optimization and Electrode Improvement of Rotary Stepper Micromotor

    Science.gov (United States)

    Sone, Junji; Mizuma, Toshinari; Mochizuki, Shunsuke; Sarajlic, Edin; Yamahata, Christophe; Fujita, Hiroyuki

    We developed a three-phase electrostatic stepper micromotor and performed numerical simulations to optimize its design and improve its performance for practical use. A circuit simulation was conducted on a simplified structure, taking into account the springback force generated by the flexure support mechanism. We also devised a new improvement method for the electrodes. This improvement, together with other parameter optimizations, achieved low-voltage drive of the micromotor.

  9. Complicated problem solution techniques in optimal parameter searching

    International Nuclear Information System (INIS)

    Gergel', V.P.; Grishagin, V.A.; Rogatneva, E.A.; Strongin, R.G.; Vysotskaya, I.N.; Kukhtin, V.V.

    1992-01-01

    An algorithm is presented for a global search in the numerical solution of multidimensional multiextremal multicriteria optimization problems with complicated constraints. Bounded change of the objective characteristics is assumed for restricted changes of its parameters (Lipschitz condition). The algorithm was realized as a computer code, and the program was used in practice to solve various applied optimization problems. 10 refs.; 3 figs
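The Lipschitz assumption in this record is what makes a guaranteed global search possible: a bound L on how fast the objective can change turns every evaluated pair of points into a lower bound on the objective between them. As an illustration only, here is a one-dimensional Piyavskii-Shubert sketch, not the multidimensional multicriteria algorithm described above:

```python
def lipschitz_minimize(f, a, b, L, n_iter=50):
    """Global minimization of f on [a, b] assuming the Lipschitz
    condition |f(x) - f(y)| <= L |x - y| (Piyavskii-Shubert scheme)."""
    # Evaluated sample points, kept sorted by x.
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(n_iter):
        # For each interval, the minimum of the lower envelope is at the
        # intersection of the two cones dropped from its endpoints.
        best_i, best_lb, best_x = 0, float("inf"), a
        for i in range(len(xs) - 1):
            x0, x1, y0, y1 = xs[i], xs[i + 1], ys[i], ys[i + 1]
            x_new = 0.5 * (x0 + x1) + (y0 - y1) / (2 * L)
            lb = 0.5 * (y0 + y1) - 0.5 * L * (x1 - x0)
            if lb < best_lb:
                best_i, best_lb, best_x = i, lb, x_new
        # Refine the interval with the lowest guaranteed bound.
        xs.insert(best_i + 1, best_x)
        ys.insert(best_i + 1, f(best_x))
    j = min(range(len(ys)), key=ys.__getitem__)
    return xs[j], ys[j]
```

Each iteration splits the interval whose lower bound is smallest, so samples concentrate near the global minimizer while the bound certifies global optimality.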

  10. Optimal Parameter Selection of Power System Stabilizer using Genetic Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Hyeng Hwan; Chung, Dong Il; Chung, Mun Kyu [Dong-A University (Korea); Wang, Yong Peel [Canterbury University (New Zealand)

    1999-06-01

    In this paper, a method is suggested for selecting the optimal parameters of a power system stabilizer (PSS) robust to low-frequency oscillation in power systems, using a real-variable elitism genetic algorithm (RVEGA). The optimal parameters were selected for power system stabilizers with one and with two lead compensators. The frequency response characteristics of the PSS, the system eigenvalue criterion and the dynamic characteristics were examined under normal and heavy load, which proved the usefulness of RVEGA compared with Yu's compensator design theory. (author). 20 refs., 15 figs., 8 tabs.

  11. Optimization Design of Multi-Parameters in Rail Launcher System

    Directory of Open Access Journals (Sweden)

    Yujiao Zhang

    2014-05-01

    Full Text Available Today's energy storage systems are still cumbersome, so it is useful to consider optimizing a railgun system in order to achieve the best performance with the lowest energy input. In this paper, an optimal design method considering five parameters is proposed to improve the energy conversion efficiency of a simple railgun. To avoid costly trials, the field-circuit method is employed to analyze the operation of railguns with different structural parameters, and the orthogonal test approach is used to guide the simulation toward better parameter combinations while reducing the computational cost. The research shows that the proposed method improves the energy efficiency of the system. To improve the energy conversion efficiency of electromagnetic rail launchers, more parameters must be considered at the design stage, such as the width, height and length of the rails, the distance between the rail pair, and the pulse-forming inductance. However, the relationship between these parameters and the energy conversion efficiency cannot be described directly by a single mathematical expression, so optimization methods must be applied in the design. In this paper, a rail launcher with five parameters was optimized using the orthogonal test method. Following the arrangement of the orthogonal table, a better parameter combination can be obtained with fewer calculations. Field and circuit simulation analyses were performed for the different parameter values. The results show that the energy conversion efficiency of the system is increased by 71.9% after parameter optimization.
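The orthogonal test idea used above can be illustrated with a minimal sketch: an L4(2^3) orthogonal array covers three two-level factors in four runs instead of the eight of a full factorial, and main effects are estimated from the balanced runs. The objective function in the usage example is a hypothetical stand-in, not the railgun efficiency model:

```python
# L4 orthogonal array: 3 two-level factors covered in 4 balanced runs.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def orthogonal_screen(objective, levels):
    """levels: list of (low, high) values per factor. Returns the best
    run and per-factor main effects estimated from the orthogonal runs."""
    runs = []
    for row in L4:
        x = [levels[f][lv] for f, lv in enumerate(row)]
        runs.append((row, objective(x)))
    # Main effect of a factor = mean response at level 1 minus at level 0;
    # the array's balance makes these estimates unconfounded.
    effects = []
    for f in range(3):
        hi = [y for row, y in runs if row[f] == 1]
        lo = [y for row, y in runs if row[f] == 0]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    best = max(runs, key=lambda r: r[1])
    return best, effects
```

For instance, `orthogonal_screen(lambda x: 2 * x[0] + x[1] - x[2], [(0, 1)] * 3)` recovers the additive effects (2, 1, -1) from only four evaluations.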

  12. Optimal Design of Shock Tube Experiments for Parameter Inference

    KAUST Repository

    Bisetti, Fabrizio

    2014-01-06

    We develop a Bayesian framework for the optimal experimental design of the shock tube experiments being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation energies in the reaction rate expressions. The control parameters are the initial mixture composition and the temperature. The approach is based on first building a polynomial-based surrogate model for the observables relevant to the shock tube experiments. Based on these surrogates, a novel MAP-based approach is used to estimate the expected information gain in the proposed experiments and to select the best experimental set-ups yielding the optimal expected information gains. The validity of the approach is tested using synthetic data generated by sampling the PC surrogate. We finally outline a methodology for validation using actual laboratory experiments, and for extending the experimental design methodology to cases where the control parameters are noisy.
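The expected information gain that drives this kind of design selection can be sketched with a nested Monte Carlo estimator on a toy linear-Gaussian model. The toy forward model below stands in for the paper's polynomial-chaos surrogates, and plain nested sampling stands in for its MAP-based estimator; all model values are illustrative assumptions:

```python
import math, random

def expected_information_gain(design, n_outer=300, n_inner=300, seed=0):
    """Nested Monte Carlo estimate of EIG for a toy model
    y = design * theta + noise, theta ~ N(0, 1), noise ~ N(0, 0.5^2)."""
    rng = random.Random(seed)
    sigma = 0.5
    def lik(y, theta):
        z = (y - design * theta) / sigma
        return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))
    total = 0.0
    for _ in range(n_outer):
        # Draw a synthetic experiment from the prior predictive.
        theta = rng.gauss(0, 1)
        y = design * theta + rng.gauss(0, sigma)
        # Evidence p(y) by inner Monte Carlo over the prior.
        ev = sum(lik(y, rng.gauss(0, 1)) for _ in range(n_inner)) / n_inner
        total += math.log(lik(y, theta)) - math.log(ev)
    return total / n_outer
```

Comparing designs then amounts to comparing these estimates: a design that makes the observable more sensitive to the unknown parameter (here, a larger `design` coefficient) yields a larger expected information gain.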

  13. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    Science.gov (United States)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.

  14. Optimization of parameters of special asynchronous electric drives

    Science.gov (United States)

    Karandey, V. Yu; Popov, B. K.; Popova, O. B.; Afanasyev, V. L.

    2018-03-01

    The article considers the problem of optimizing the parameters of special asynchronous electric drives. Solving this problem will make it possible to design and create special asynchronous electric drives for various industries with optimum mass, dimensional and power parameters. This will allow the set characteristics of technological process control to be realized with an optimum level of electric energy expenditure, process completion time or other specified parameters. The obtained solution not only solves a particular optimization problem, but also allows dependences to be constructed between the optimized parameters of special asynchronous electric drives, for example with changes of power, current in a stator or rotor winding, induction in the gap or in the steel of the magnetic conductors, and other parameters. From the constructed dependences, the necessary optimum values of the parameters of special asynchronous electric drives and their components can be chosen without repeated calculations.

  15. Automated bond order assignment as an optimization problem.

    Science.gov (United States)

    Dehof, Anna Katharina; Rurainski, Alexander; Bui, Quang Bao Anh; Böcker, Sebastian; Lenhof, Hans-Peter; Hildebrandt, Andreas

    2011-03-01

    Numerous applications in Computational Biology process molecular structures and hence strongly rely not only on correct atomic coordinates but also on correct bond order information. For proteins and nucleic acids, bond orders can be easily deduced, but this does not hold for other types of molecules like ligands. For ligands, bond order information is not always provided in molecular databases, and thus a variety of approaches tackling this problem have been developed. In this work, we extend an ansatz proposed by Wang et al. that assigns connectivity-based penalty scores and heuristically approximates their optimum. We present three efficient and exact solvers replacing the heuristic approximation scheme of the original approach: an A*, an integer linear programming (ILP), and a fixed-parameter tractable (FPT) approach. We implemented and evaluated the original implementation and our A*, ILP and FPT formulations on the MMFF94 validation suite and the KEGG Drug database. We show the benefit of computing exact solutions of the penalty minimization problem and the additional gain of computing all optimal (or even suboptimal) solutions. We close with a detailed comparison of our methods. The A* and ILP solvers are integrated into the open-source C++ LGPL library BALL and the molecular visualization and modelling tool BALLView and can be downloaded from our homepage www.ball-project.org. The FPT implementation can be downloaded from http://bio.informatik.uni-jena.de/software/.

  16. Review of Automated Design and Optimization of MEMS

    DEFF Research Database (Denmark)

    Achiche, Sofiane; Fan, Zhun; Bolognini, Francesca

    2007-01-01

    carried out. This paper presents a review of these techniques. The design task of MEMS is usually divided into four main stages: System Level, Device Level, Physical Level and the Process Level. The state of the art of automated MEMS design at each of these levels is investigated....

  17. Comparisons of criteria in the assessment model parameter optimizations

    International Nuclear Information System (INIS)

    Liu Xinhe; Zhang Yongxing

    1993-01-01

    Three criteria (chi-square, relative chi-square and correlation coefficient) used in the model parameter optimization (MPO) process, which aims at a significant reduction of prediction uncertainties, were discussed and compared with each other with the aid of a well-controlled tracer experiment.

  18. Statistical optimization of process parameters for the production of ...

    African Journals Online (AJOL)

    In this study, optimization of process parameters such as moisture content, incubation temperature and initial pH (fixed) for the improvement of citric acid production from oil palm empty fruit bunches through solid state bioconversion was carried out using traditional one-factor-at-a-time (OFAT) method and response surface ...

  19. Optimization of physico-chemical and nutritional parameters for ...

    African Journals Online (AJOL)

    Optimization of physico-chemical and nutritional parameters for pullulan production by a mutant of thermotolerant Aureobasidium pullulans, in fed batch ... minutes, having a killing rate of 70%, produced 6 g l-1 more pullulan compared to the wild type without losing thermotolerance or the non-melanin-producing ability.

  20. Optimization of machining parameters of hard porcelain on a CNC ...

    African Journals Online (AJOL)

    Optimization of machining parameters of hard porcelain on a CNC machine by Taguchi and RSM methods. ... Journal Home > Vol 10, No 1 (2018) > ... Experiments were conducted by employing Taguchi's L27 orthogonal array to ...

  1. Optimization of CNC end milling process parameters using PCA ...

    African Journals Online (AJOL)

    Optimization of CNC end milling process parameters using PCA-based Taguchi method. ... International Journal of Engineering, Science and Technology ... To meet the basic assumption of Taguchi method; in the present work, individual response correlations have been eliminated first by means of Principal Component ...

  2. Parameters optimization for magnetic resonance coupling wireless power transmission.

    Science.gov (United States)

    Li, Changsheng; Zhang, He; Jiang, Xiaohua

    2014-01-01

    Taking maximum power transmission and stable power transmission as research objectives, an optimal design for a wireless power transmission system based on magnetic resonance coupling is carried out in this paper. First, based on the mutual coupling model, mathematical expressions for the optimal coupling coefficients under the maximum power transmission target are deduced. Then, methods of enhancing power transmission stability through optimal parameter design are investigated. It is found that the sensitivity of the load power to the transmission parameters can be reduced, and the power transmission stability enhanced, by increasing the system resonance frequency or the coupling coefficient between the driving/pick-up coil and the transmission/receiving coil. Experimental results conform well to the theoretical analysis.
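The mutual-coupling circuit model referred to above gives, at resonance, a closed-form load power in which an optimal coupling coefficient appears. The sketch below uses the standard two-coil series-resonant expressions; all component values are illustrative assumptions, not taken from the paper:

```python
import math

def load_power(k, V=10.0, f=100e3, L1=24e-6, L2=24e-6,
               R1=0.5, R2=0.5, RL=5.0):
    """Load power of a two-coil magnetically coupled resonant link,
    evaluated at resonance (reactances tuned out), from the standard
    mutual-coupling circuit model."""
    w = 2 * math.pi * f
    wM = w * k * math.sqrt(L1 * L2)        # mutual reactance omega*M
    I1 = V / (R1 + wM**2 / (R2 + RL))      # primary loop current
    I2 = wM * I1 / (R2 + RL)               # induced secondary current
    return I2**2 * RL

def critical_coupling(f=100e3, L1=24e-6, L2=24e-6, R1=0.5, R2=0.5, RL=5.0):
    """Coupling coefficient maximizing load power: (wM)^2 = R1*(R2 + RL)."""
    w = 2 * math.pi * f
    return math.sqrt(R1 * (R2 + RL)) / (w * math.sqrt(L1 * L2))
```

At the critical coupling the reflected resistance (wM)^2/(R2+RL) equals the primary resistance R1, which is exactly where the delivered load power peaks; below or above it the power falls off.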

  3. Parameter assessment for virtual Stackelberg game in aerodynamic shape optimization

    Science.gov (United States)

    Wang, Jing; Xie, Fangfang; Zheng, Yao; Zhang, Jifa

    2018-05-01

    In this paper, parametric studies of the virtual Stackelberg game (VSG) are conducted to assess the impact of critical parameters on aerodynamic shape optimization, including the design cycle, the split of design variables and the role assignment. Typical numerical cases, including the inverse design and drag-reduction design of an airfoil, have been carried out. The numerical results confirm the effectiveness and efficiency of VSG. Furthermore, the most significant parameters are identified; e.g., increasing the number of design cycles can improve the optimization results but also adds computational burden. These studies will maximize the productivity of aerodynamic optimization efforts for more complicated engineering problems, such as multi-element airfoil and wing-body configurations.

  4. METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS

    Directory of Open Access Journals (Sweden)

    V. Panteleev Andrei

    2017-01-01

    Full Text Available The article considers the use of metaheuristic methods of constrained global optimization: "Big Bang - Big Crunch", "Fireworks Algorithm" and "Grenade Explosion Method" for estimating the parameters of dynamic systems described by algebraic-differential equations. Parameter estimation is based upon observations of the mathematical model's behavior: parameter values are derived by minimizing a criterion describing the total squared error between the state vector coordinates and the precise observed values at different moments of time. A parallelepiped-type restriction is imposed on the parameter values. The metaheuristic methods used for the constrained global extremum problem do not guarantee the result, but allow a solution of rather good quality to be obtained in an acceptable amount of time. The algorithm for applying the metaheuristic methods is given. Alongside explicit methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving ordinary differential equation systems. Two parameter estimation problems, differing in their mathematical models, are considered. In the first example, a linear mathematical model describes the change of chemical reaction parameters; in the second, a nonlinear mathematical model describes predator-prey dynamics, characterizing the changes in both populations. For each of the examples, calculation results from all three optimization methods are given, along with recommendations on how to choose the methods' parameters. The obtained numerical results demonstrate the efficiency of the proposed approach. The derived approximate parameter values differ only slightly from the best known solutions, which were obtained differently. To refine the results one should apply hybrid schemes that combine classical methods of optimization of zero, first and second orders and
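The estimation criterion described above, the total squared error between simulated and observed state coordinates under box ("parallelepiped") constraints, can be sketched as follows. Plain random search stands in for the named metaheuristics, and a simple one-state decay model stands in for the chemical-kinetics example; both are illustrative assumptions:

```python
import random

def simulate(params, x0, t_end=5.0, dt=0.01):
    """Euler integration of a simple model dx/dt = -a*x + b,
    a stand-in for the paper's linear chemical-kinetics example."""
    a, b = params
    x, traj, t = x0, [], 0.0
    while t < t_end:
        traj.append(x)
        x += dt * (-a * x + b)
        t += dt
    return traj

def fit_random_search(observed, x0, bounds, n_trials=2000, seed=1):
    """Box-constrained global search minimizing the total squared error
    between simulated and observed states (the criterion above), with
    the metaheuristic replaced by plain uniform random search."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_trials):
        cand = [rng.uniform(lo, hi) for lo, hi in bounds]
        traj = simulate(cand, x0)
        err = sum((s - o) ** 2 for s, o in zip(traj, observed))
        if err < best_err:
            best, best_err = cand, err
    return best, best_err
```

Any of the record's metaheuristics can be dropped in place of the candidate-generation loop; the criterion and the parallelepiped constraints stay the same.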

  5. On the effect of response transformations in sequential parameter optimization.

    Science.gov (United States)

    Wagner, Tobias; Wessing, Simon

    2012-01-01

    Parameter tuning of evolutionary algorithms (EAs) is attracting more and more interest. In particular, the sequential parameter optimization (SPO) framework for the model-assisted tuning of stochastic optimizers has resulted in established parameter tuning algorithms. In this paper, we enhance the SPO framework by introducing transformation steps before the response aggregation and before the actual modeling. Based on design-of-experiments techniques, we empirically analyze the effect of integrating different transformations. We show that in particular, a rank transformation of the responses provides significant improvements. A deeper analysis of the resulting models and additional experiments with adaptive procedures indicates that the rank and the Box-Cox transformation are able to improve the properties of the resultant distributions with respect to symmetry and normality of the residuals. Moreover, model-based effect plots document a higher discriminatory power obtained by the rank transformation.
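The rank transformation found most beneficial above is simple to state: before the surrogate model is fitted, each aggregated response is replaced by its rank, with ties averaged, which compresses outliers and improves the symmetry of the residuals. A minimal sketch:

```python
def rank_transform(responses):
    """Replace each response by its rank (ties receive the average
    rank), as applied before surrogate fitting in SPO-style tuning."""
    order = sorted(range(len(responses)), key=responses.__getitem__)
    ranks = [0.0] * len(responses)
    i = 0
    while i < len(order):
        j = i
        # Group indices whose response values tie.
        while j + 1 < len(order) and responses[order[j + 1]] == responses[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for idx in order[i:j + 1]:
            ranks[idx] = avg
        i = j + 1
    return ranks
```

For example, `rank_transform([10.0, 3.0, 3.0, 7.0])` yields `[4.0, 1.5, 1.5, 3.0]`: the two tied responses share the average of ranks 1 and 2.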

  6. Automation and Optimization of Multipulse Laser Zona Drilling of Mouse Embryos During Embryo Biopsy.

    Science.gov (United States)

    Wong, Christopher Yee; Mills, James K

    2017-03-01

    Laser zona drilling (LZD) is a required step in many embryonic surgical procedures, for example assisted hatching and preimplantation genetic diagnosis. LZD involves the ablation of the zona pellucida (ZP) using a laser while minimizing potentially harmful thermal effects on critical internal cell structures. The objective is to develop a method for the automation and optimization of multipulse LZD, applied to cleavage-stage embryos. A two-stage optimization is used. The first stage uses computer vision algorithms to identify embryonic structures and determines the optimal ablation zone farthest away from critical structures such as blastomeres. The second stage combines a genetic algorithm with a previously reported thermal analysis of LZD to optimize the combination of laser pulse locations and pulse durations. The goal is to minimize the peak temperature experienced by the blastomeres while creating the desired opening in the ZP. A proof of concept of the proposed LZD automation and optimization method is demonstrated through experiments on mouse embryos, with positive results: adequately sized openings are created. Automation of LZD is feasible and is a viable step toward the automation of embryo biopsy procedures. LZD is a common but delicate procedure performed by human operators using subjective methods to gauge the proper LZD procedure. Automation of LZD removes human error and increases the success rate of LZD. Although the proposed methods are developed for cleavage-stage embryos, the same methods may be applied to most types of LZD procedures, embryos at different developmental stages, or nonembryonic cells.

  7. Algorithms of control parameters selection for automation of FDM 3D printing process

    Directory of Open Access Journals (Sweden)

    Kogut Paweł

    2017-01-01

    Full Text Available The paper presents algorithms for the selection of control parameters of the Fused Deposition Modelling (FDM) technology in an open printing solutions environment, for the 3DGence ONE printer. The following parameters were distinguished: model mesh density, material flow speed, cooling performance, retraction and printing speeds. These parameters are in principle independent of the printing system, but in practice they depend to a certain degree on the features of the selected printing equipment. This is the first step toward automation of the 3D printing process in FDM technology.

  8. An optimized routing algorithm for the automated assembly of standard multimode ribbon fibers in a full-mesh optical backplane

    Science.gov (United States)

    Basile, Vito; Guadagno, Gianluca; Ferrario, Maddalena; Fassi, Irene

    2018-03-01

    In this paper a parametric, modular and scalable algorithm allowing fully automated assembly of a backplane fiber-optic interconnection circuit is presented. This approach guarantees the optimization of the optical fiber routing inside the backplane with respect to specific criteria (i.e. bending power losses), addressing both transmission performance and overall cost issues. Graph theory is exploited to simplify the complexity of the NxN full-mesh backplane interconnection topology, first into N independent sub-circuits and then, recursively, into a limited number of loops that are easier to generate. Afterwards, the proposed algorithm selects a set of geometrical and architectural parameters whose optimization identifies the optimal fiber-optic routing for each sub-circuit of the backplane. The topological and numerical information provided by the algorithm is then exploited to control a robot which performs the automated assembly of the backplane sub-circuits. The proposed routing algorithm can be extended to any array architecture and number of connections thanks to its modularity and scalability. Finally, the algorithm has been exploited for the automated assembly of an 8x8 optical backplane realized with standard multimode (MM) 12-fiber ribbons.

  9. Routing Optimization of Intelligent Vehicle in Automated Warehouse

    Directory of Open Access Journals (Sweden)

    Yan-cong Zhou

    2014-01-01

    Full Text Available Routing optimization is a key technology in intelligent warehouse logistics. In order to obtain an optimal route for a warehouse intelligent vehicle, routing optimization in a complex global dynamic environment is studied. A new evolutionary ant colony algorithm based on RFID and knowledge refinement is proposed. The new algorithm obtains environmental information in a timely manner through RFID technology and updates the environment map at the same time. It adopts elite-ant retention, fallback, and pheromone-limit adjustment strategies. The current optimal route in the population space is optimized based on experiential knowledge. The experimental results show that the new algorithm has a higher convergence speed and can easily escape U-type or V-type obstacle traps. It can also find the global optimal route, or an approximately optimal one, with higher probability in a complex dynamic environment. The new algorithm is proved feasible and effective by simulation results.

  10. Optimization of process parameters in precipitation for consistent quality UO{sub 2} powder production

    Energy Technology Data Exchange (ETDEWEB)

    Tiwari, S.K.; Reddy, A.L.V.; Venkataswamy, J.; Misra, M.; Setty, D.S.; Sheela, S.; Saibaba, N., E-mail: misra@nfc.gov.in [Nuclear Fuel Complex, Hyderabad (India)

    2013-07-01

    Nuclear reactor grade natural uranium dioxide powder is produced through the precipitation route and is further processed before being converted into sintered pellets used in the fabrication of PHWR fuel assemblies for 220 and 540 MWe type reactors. Precipitation of Uranyl Nitrate Pure Solution (UNPS) is an important step in the UO{sub 2} powder production line, wherein soluble uranium is transformed into the solid form of Ammonium Uranate (AU), which in turn decides the powder characteristics. Precipitation of UNPS with vapour ammonia is carried out in a semi-batch process, and process parameters such as ammonia flow rate, temperature, concentration of UNPS and free acidity of UNPS are very critical and decide the UO{sub 2} powder quality. Variation in these critical parameters influences the powder characteristics, which in turn influence the sinterability of the UO{sub 2} powder. In order to obtain consistent powder quality and sinterability, the critical ammonia flow rate during precipitation was studied, optimized and validated. The critical process parameters are controlled through PLC-based automated on-line data acquisition systems to achieve consistent powder quality with increased recovery and production. The present paper covers the optimization of process parameters and powder characteristics. (author)

  11. Optimization of process parameters in precipitation for consistent quality UO2 powder production

    International Nuclear Information System (INIS)

    Tiwari, S.K.; Reddy, A.L.V.; Venkataswamy, J.; Misra, M.; Setty, D.S.; Sheela, S.; Saibaba, N.

    2013-01-01

    Nuclear reactor grade natural uranium dioxide powder is produced through the precipitation route and is further processed before being converted into sintered pellets used in the fabrication of PHWR fuel assemblies for 220 and 540 MWe type reactors. Precipitation of Uranyl Nitrate Pure Solution (UNPS) is an important step in the UO2 powder production line, wherein soluble uranium is transformed into the solid form of Ammonium Uranate (AU), which in turn decides the powder characteristics. Precipitation of UNPS with vapour ammonia is carried out in a semi-batch process, and process parameters such as ammonia flow rate, temperature, concentration of UNPS and free acidity of UNPS are very critical and decide the UO2 powder quality. Variation in these critical parameters influences the powder characteristics, which in turn influence the sinterability of the UO2 powder. In order to obtain consistent powder quality and sinterability, the critical ammonia flow rate during precipitation was studied, optimized and validated. The critical process parameters are controlled through PLC-based automated on-line data acquisition systems to achieve consistent powder quality with increased recovery and production. The present paper covers the optimization of process parameters and powder characteristics. (author)

  12. Optimization of virtual source parameters in neutron scattering instrumentation

    International Nuclear Information System (INIS)

    Habicht, K; Skoulatos, M

    2012-01-01

    We report on phase-space optimizations for neutron scattering instruments employing horizontal focussing crystal optics. Defining a figure of merit for a generic virtual source configuration we identify a set of optimum instrumental parameters. In order to assess the quality of the instrumental configuration we combine an evolutionary optimization algorithm with the analytical Popovici description using multidimensional Gaussian distributions. The optimum phase-space element which needs to be delivered to the virtual source by preceding neutron optics may be obtained using the same algorithm which is of general interest in instrument design.

  13. AI-guided parameter optimization in inverse treatment planning

    International Nuclear Information System (INIS)

    Yan Hui; Yin Fangfang; Guan Huaiqun; Kim, Jae Ho

    2003-01-01

    An artificial intelligence (AI)-guided inverse planning system was developed to optimize the combination of parameters in the objective function for intensity-modulated radiation therapy (IMRT). In this system, the empirical knowledge of inverse planning is formulated as fuzzy if-then rules, which guide the parameter modification based on the on-line calculated dose. Three kinds of parameters (weighting factor, dose specification, and dose prescription) were automatically modified using the fuzzy inference system (FIS). The performance of the AI-guided inverse planning system (AIGIPS) was examined using simulated and clinical examples. Preliminary results indicate that the expected dose distribution is automatically achieved using the AI-guided inverse planning system, with the complicated compromises between different parameters accomplished by the fuzzy inference technique. The AIGIPS provides a highly promising method to replace the current trial-and-error approach.
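The fuzzy if-then mechanism described above can be sketched for a single weighting factor. The membership functions and rule consequents below are illustrative assumptions, not the paper's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def adjust_weight(weight, violation):
    """One fuzzy inference step for a single objective-function weight.
    `violation` is the normalized overdose of an organ at risk in [0, 1].
    Illustrative rules:
      IF violation is LOW    THEN keep the weight
      IF violation is MEDIUM THEN increase it moderately
      IF violation is HIGH   THEN increase it strongly."""
    low = tri(violation, -0.4, 0.0, 0.4)
    med = tri(violation, 0.1, 0.5, 0.9)
    high = tri(violation, 0.6, 1.0, 1.4)
    # Defuzzify: weighted average of the rule consequents (multipliers).
    num = low * 1.0 + med * 1.5 + high * 2.5
    den = low + med + high
    return weight * (num / den)
```

Iterating such steps after each dose calculation mimics the trial-and-error loop the record aims to replace: the rule base nudges each parameter until the computed dose approaches the planning goals.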

  14. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
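The EnKF analysis step underlying this design method can be sketched for a scalar parameter: the sample cross-covariance between parameters and predictions yields a Kalman gain that shifts each ensemble member toward agreement with a perturbed observation. A minimal sketch of that analysis step only, not the SEOD method itself:

```python
import random

def enkf_update(ensemble, forward, y_obs, obs_std, seed=0):
    """One ensemble Kalman filter analysis step for parameter estimation.
    ensemble: list of scalar parameter samples; forward: model mapping a
    parameter to a predicted observation."""
    rng = random.Random(seed)
    preds = [forward(p) for p in ensemble]
    n = len(ensemble)
    p_mean = sum(ensemble) / n
    d_mean = sum(preds) / n
    # Sample cross-covariance C_pd and prediction variance C_dd.
    c_pd = sum((p - p_mean) * (d - d_mean)
               for p, d in zip(ensemble, preds)) / (n - 1)
    c_dd = sum((d - d_mean) ** 2 for d in preds) / (n - 1)
    gain = c_pd / (c_dd + obs_std ** 2)
    # Shift each member by its innovation against a perturbed observation.
    return [p + gain * (y_obs + rng.gauss(0, obs_std) - d)
            for p, d in zip(ensemble, preds)]
```

A sampling design can then be scored, as in the record, by how much an information metric (e.g. an entropy difference) improves between the prior and the updated ensembles.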

  15. Identification of metabolic system parameters using global optimization methods

    Directory of Open Access Journals (Sweden)

    Gatzke Edward P

    2006-01-01

    Full Text Available Abstract Background The problem of estimating the parameters of dynamic models of complex biological systems from time series data is becoming increasingly important. Methods and results Particular consideration is given to metabolic systems that are formulated as Generalized Mass Action (GMA models. The estimation problem is posed as a global optimization task, for which novel techniques can be applied to determine the best set of parameter values given the measured responses of the biological system. The challenge is that this task is nonconvex. Nonetheless, deterministic optimization techniques can be used to find a global solution that best reconciles the model parameters and measurements. Specifically, the paper employs branch-and-bound principles to identify the best set of model parameters from observed time course data and illustrates this method with an existing model of the fermentation pathway in Saccharomyces cerevisiae. This is a relatively simple yet representative system with five dependent states and a total of 19 unknown parameters of which the values are to be determined. Conclusion The efficacy of the branch-and-reduce algorithm is illustrated by the S. cerevisiae example. The method described in this paper is likely to be widely applicable in the dynamic modeling of metabolic networks.

  16. Simulation-Based Optimization for Storage Allocation Problem of Outbound Containers in Automated Container Terminals

    Directory of Open Access Journals (Sweden)

    Ning Zhao

    2015-01-01

    Full Text Available Storage allocation of outbound containers is a key factor in the performance of the container handling system in automated container terminals. Improper storage plans for outbound containers make QC waiting inevitable; hence, the vessel handling time is lengthened. A simulation-based optimization method is proposed in this paper for the storage allocation problem of outbound containers in automated container terminals (SAPOBA). A simulation model is built with Timed Colored Petri Nets (TCPN) and used to evaluate the QC waiting time of storage plans. Two optimization approaches, based on Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA), are proposed to form the complete simulation-based optimization method. The effectiveness of this method is verified experimentally through a comparison of the two optimization approaches.
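The PSO half of the proposed method can be illustrated with a minimal particle swarm sketch. The coefficient values are common defaults assumed here, and a plain test function replaces the paper's simulation-based QC-waiting-time objective:

```python
import random

def pso_minimize(f, bounds, n_particles=20, n_iter=100, seed=3):
    """Minimal particle swarm optimization: each particle is pulled
    toward its personal best and the swarm's global best position."""
    rng = random.Random(seed)
    dim = len(bounds)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and attraction coefficients
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                # Clamp positions to the search box.
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]),
                               bounds[d][1])
            fi = f(xs[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = xs[i][:], fi
    return gbest, gbest_f
```

In a simulation-based setting like the record's, `f` would run the TCPN simulation on the storage plan encoded by the particle and return the resulting QC waiting time.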

  17. Optimization of reserve lithium thionyl chloride battery electrochemical design parameters

    Energy Technology Data Exchange (ETDEWEB)

    Doddapaneni, N.; Godshall, N.A.

    1987-01-01

    The performance of Reserve Lithium Thionyl Chloride (RLTC) batteries was optimized by conducting a parametric study of seven electrochemical parameters: electrode compression, carbon thickness, presence of catalyst, temperature, electrode limitation, discharge rate, and electrolyte acidity. Increasing electrode compression (from 0 to 15%) improved battery performance significantly (10% greater carbon capacity density). Although thinner carbon cathodes yielded less absolute capacity than did thicker cathodes, they did so with considerably higher volume efficiencies. The effect of these parameters, and their synergistic interactions, on electrochemical cell performance is illustrated. 5 refs., 9 figs., 3 tabs.

  18. Optimization of reserve lithium thionyl chloride battery electrochemical design parameters

    Science.gov (United States)

    Doddapaneni, N.; Godshall, N. A.

    The performance of Reserve Lithium Thionyl Chloride (RLTC) batteries was optimized by conducting a parametric study of seven electrochemical parameters: electrode compression, carbon thickness, presence of catalyst, temperature, electrode limitation, discharge rate, and electrolyte acidity. Increasing electrode compression (from 0 to 15 percent) improved battery performance significantly (10 percent greater carbon capacity density). Although thinner carbon cathodes yielded less absolute capacity than did thicker cathodes, they did so with considerably higher volume efficiencies. The effect of these parameters, and their synergistic interactions, on electrochemical cell performance is illustrated.

  19. Trafficability Analysis at Traffic Crossing and Parameters Optimization Based on Particle Swarm Optimization Method

    Directory of Open Access Journals (Sweden)

    Bin He

    2014-01-01

    Full Text Available In city traffic, it is important to improve transportation efficiency, and the spacing within a platoon should be shortened when crossing an intersection. The best method to deal with this problem is automatic control of vehicles. In this paper, a mathematical model is established for the platoon's longitudinal movement. A systematic analysis of the longitudinal control law is presented for the platoon of vehicles. However, parameter calibration for the platoon model is relatively difficult because the model is complex and its parameters are coupled with each other. In this paper, the particle swarm optimization method is introduced to effectively optimize the parameters of the platoon model. The proposed method finds the optimal parameters based on simulations and shortens the platoon spacing.
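
    The particle swarm optimization step named above follows a standard pattern, sketched here in generic form (global-best topology with common textbook coefficients; the objective is a stand-in for the platoon-spacing simulation, and the search bounds and coefficients are assumptions, not the paper's values).

```python
import random

def pso(cost, dim, n_particles=20, iters=100, seed=7):
    """Minimal particle swarm optimizer. Each particle tracks its own
    best position (pbest); the swarm tracks a global best (gbest)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia, cognitive, social
    xs = [[rng.uniform(-5, 5) for _ in range(dim)]
          for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_f = [cost(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = cost(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), f
                if f < gbest_f:
                    gbest, gbest_f = list(xs[i]), f
    return gbest, gbest_f
```

    The appeal for coupled-parameter problems like the platoon model is that PSO needs only objective evaluations, never gradients, so the coupling between parameters is handled implicitly by the swarm dynamics.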

  20. PARAMETER ESTIMATION OF VALVE STICTION USING ANT COLONY OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    S. Kalaivani

    2012-07-01

    Full Text Available In this paper, a procedure for quantifying valve stiction in control loops based on ant colony optimization is proposed. Pneumatic control valves are widely used in the process industry. A control valve contains nonlinearities such as stiction, backlash, and deadband that in turn cause oscillations in the process output. Stiction is a long-standing problem and the most severe one in control valves. Thus the measurement data from an oscillating control loop can be used as a diagnostic signal to provide an estimate of the stiction magnitude. Quantification of control valve stiction is still a challenging issue. Before stiction can be detected and quantified, it is necessary to choose a suitable model structure to describe control-valve stiction; here, the Stenman model is used. Ant Colony Optimization (ACO), an intelligent swarm algorithm, has proven effective in various fields. The ACO algorithm is inspired by the natural trail-following behaviour of ants. The parameters of the Stenman model are estimated with ant colony optimization from the input-output data by minimizing the error between the actual stiction model output and the simulated stiction model output. In this way, a Stenman model with known nonlinear structure and unknown parameters can be estimated.

  1. Standardless quantification by parameter optimization in electron probe microanalysis

    International Nuclear Information System (INIS)

    Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.

    2012-01-01

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 in 66% of the cases for POEMA, GENESIS and DTSA, respectively. - Highlights: ► A method for standardless quantification in EPMA is presented. ► It gives better results than the commercial software GENESIS Spectrum. ► It gives better results than the software DTSA. ► It allows the determination of the conductive coating thickness. ► It gives an estimation for the concentration uncertainties.

  2. Optimal construction parameters of electrosprayed trilayer organic photovoltaic devices

    International Nuclear Information System (INIS)

    Shah, S K; Ali, M; Gunnella, R; Abbas, M; Hirsch, L

    2014-01-01

    A detailed investigation of the optimal set of parameters employed in multilayer device fabrication obtained through successive electrospray deposited layers is reported. In this scheme, the donor/acceptor (D/A) bulk heterojunction layer is sandwiched between two thin stacked layers of individual donor and acceptor materials. The stacked-layer geometry with optimal thicknesses plays a decisive role in improving operating characteristics. Among the parameters of the multilayer organic photovoltaic device, the D/A concentration ratio, blend thickness and stacking layer thicknesses are optimized. Other parameters, such as thermal annealing and the role of top metal contacts, are also discussed. Internal photon-to-current efficiency is found to attain a strong response in the 500 nm optical region for the most efficient device architectures. Such an observation indicates a clear interplay between photon harvesting of active layers and transport by ancillary stacking layers, opening up the possibility to engineer both the material fine structure and the device architecture to obtain the best photovoltaic response from a complex organic heterostructure. (paper)

  3. Bacterial growth on surfaces: Automated image analysis for quantification of growth rate-related parameters

    DEFF Research Database (Denmark)

    Møller, S.; Sternberg, Claus; Poulsen, L. K.

    1995-01-01

    …species-specific hybridizations with fluorescence-labelled ribosomal probes to estimate the single-cell concentration of RNA. By automated analysis of digitized images of stained cells, we determined four independent growth rate-related parameters: cellular RNA and DNA contents, cell volume, and the frequency of dividing cells in a cell population. These parameters were used to compare physiological states of liquid-suspended and surface-growing Pseudomonas putida KT2442 in chemostat cultures. The major finding is that the correlation between substrate availability and cellular growth rate found…

  4. An Automated Analysis-Synthesis Package for Design Optimization ...

    African Journals Online (AJOL)

    90 standards is developed for the design optimization of framed structures - continuous beams, plane and space trusses and rigid frames, grids and composite truss-rigid frames. The package will enable the structural engineer to effectively and ...

  5. Optimal Machining Parameters for Achieving the Desired Surface Roughness in Turning of Steel

    Directory of Open Access Journals (Sweden)

    LB Abhang

    2012-06-01

    Full Text Available Due to the widespread use of highly automated machine tools in the metal cutting industry, manufacturing requires highly reliable models and methods for the prediction of output performance in the machining process. The prediction of optimal manufacturing conditions for good surface finish and dimensional accuracy plays a very important role in process planning. In the steel turning process, the tool geometry and cutting conditions determine the time and cost of production, which ultimately affect the quality of the final product. In the present work, experimental investigations have been conducted to determine the effect of the tool geometry (effective tool nose radius) and metal cutting conditions (cutting speed, feed rate and depth of cut) on surface finish during the turning of EN-31 steel. First- and second-order mathematical models are developed in terms of the machining parameters by using response surface methodology on the basis of the experimental results. The surface roughness prediction model has been optimized to obtain minimum surface roughness values by using the LINGO solver, a global optimization package and mathematical modeling language used in linear and nonlinear optimization to formulate large problems concisely, solve them, and analyze the solution in the engineering sciences, operations research, etc. It gives minimum values of surface roughness and their respective optimal conditions.
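
    The core of a response-surface model is an ordinary least-squares polynomial fit. As a self-contained illustration (one factor and a second-order model only; the multi-factor models in the paper extend the same normal-equations idea), the sketch below fits y = b0 + b1·x + b2·x² without any external library.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 by solving the
    normal equations (X^T X) b = X^T y with Gaussian elimination."""
    X = [[1.0, x, x * x] for x in xs]
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    v = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(3)]
    # Forward elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            m = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= m * A[col][c]
            v[r] -= m * v[col]
    # Back substitution
    b = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        b[r] = (v[r] - sum(A[r][c] * b[c] for c in range(r + 1, 3))) / A[r][r]
    return b
```

    Once such a regression surface is in hand, a solver like LINGO (or any nonlinear optimizer) minimizes the fitted surface rather than the raw experimental data, which is what makes the approach cheap.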

  6. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology

    Directory of Open Access Journals (Sweden)

    Rupert Faltermeier

    2015-01-01

    Full Text Available Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU it is essential to tune this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters improves the sensitivity of the method by a factor greater than four in comparison to our first analyses.
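
    The windowing idea underlying the method can be illustrated with plain Pearson correlation over consecutive windows (the paper uses Fourier-based coherence, which is more involved; window length is exactly the kind of parameter the optimization procedure would tune, and the signals here are assumed toy inputs, not ABP/ICP recordings).

```python
import math

def windowed_correlation(a, b, window):
    """Pearson correlation of two equally sampled signals computed over
    consecutive non-overlapping windows of the given length."""
    out = []
    for start in range(0, len(a) - window + 1, window):
        xa, xb = a[start:start + window], b[start:start + window]
        ma, mb = sum(xa) / window, sum(xb) / window
        cov = sum((x - ma) * (y - mb) for x, y in zip(xa, xb))
        sa = math.sqrt(sum((x - ma) ** 2 for x in xa))
        sb = math.sqrt(sum((y - mb) ** 2 for y in xb))
        out.append(cov / (sa * sb) if sa and sb else 0.0)
    return out
```

    Choosing the window too short makes the per-window estimates noisy; too long, and short-lived ABP/ICP coupling episodes are averaged away, which is why the parameter optimization against patient outcome matters.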

  7. Structural parameter optimization design for Halbach permanent maglev rail

    International Nuclear Information System (INIS)

    Guo, F.; Tang, Y.; Ren, L.; Li, J.

    2010-01-01

    Maglev rail is an important part of the magnetic levitation launch system. Reducing the manufacturing cost of magnetic levitation rail is the key problem for the development of the magnetic levitation launch system. The Halbach permanent array has the advantage that the fundamental spatial field is cancelled on one side of the array while the field on the other side is enhanced. Using this array in the design of a high-temperature superconducting permanent maglev rail can therefore improve the surface magnetic field and the levitation force. In order to make the best use of Nd-Fe-B (NdFeB) material and reduce the cost of the maglev rail, the effect of the rail's structural parameters on the levitation force and the utilization rate of NdFeB material are analyzed. The optimal ranges of these structural parameters are obtained. The mutual impact of these parameters is also discussed. The optimization method for these structural parameters is proposed at the end of this paper.

  8. Structural parameter optimization design for Halbach permanent maglev rail

    Energy Technology Data Exchange (ETDEWEB)

    Guo, F., E-mail: guofang19830119@163.co [R and D Center of Applied Superconductivity, Huazhong University of Science and Technology, Wuhan 430074 (China); Tang, Y.; Ren, L.; Li, J. [R and D Center of Applied Superconductivity, Huazhong University of Science and Technology, Wuhan 430074 (China)

    2010-11-01

    Maglev rail is an important part of the magnetic levitation launch system. Reducing the manufacturing cost of magnetic levitation rail is the key problem for the development of the magnetic levitation launch system. The Halbach permanent array has the advantage that the fundamental spatial field is cancelled on one side of the array while the field on the other side is enhanced. Using this array in the design of a high-temperature superconducting permanent maglev rail can therefore improve the surface magnetic field and the levitation force. In order to make the best use of Nd-Fe-B (NdFeB) material and reduce the cost of the maglev rail, the effect of the rail's structural parameters on the levitation force and the utilization rate of NdFeB material are analyzed. The optimal ranges of these structural parameters are obtained. The mutual impact of these parameters is also discussed. The optimization method for these structural parameters is proposed at the end of this paper.

  9. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology.

    Science.gov (United States)

    Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU it is essential to tune this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters improves the sensitivity of the method by a factor greater than four in comparison to our first analyses.

  10. Classical algorithms for automated parameter-search methods in compartmental neural models - A critical survey based on simulations using neuron

    International Nuclear Information System (INIS)

    Mutihac, R.; Mutihac, R.C.; Cicuttin, A.

    2001-09-01

    Parameter-search methods are problem-sensitive. All methods depend on some meta-parameters of their own, which must be determined experimentally in advance. A better choice of these intrinsic parameters for a certain parameter-search method may improve its performance. Moreover, there are various implementations of the same method, which may also affect its performance. The choice of the matching (error) function has a great impact on the search process, in terms both of finding the optimal parameter set and of minimizing the computational cost. An initial assessment of the matching function's ability to distinguish between good and bad models is recommended before launching exhaustive computations. Different runs of a parameter-search method may result in the same optimal parameter set or in different parameter sets (in which case the model is insufficiently constrained to accurately characterize the real system). Robustness of the parameter set is expressed by the extent to which small perturbations in the parameter values do not affect the best solution. A parameter set that is not robust is unlikely to be physiologically relevant. Robustness can also be defined as the stability of the optimal parameter set under small variations of the inputs. When trying to estimate things like the minimum, or the least-squares optimal parameters of a nonlinear system, the existence of multiple local minima can cause problems with the determination of the global optimum. Techniques such as Newton's method, the simplex method and the least-squares linear Taylor differential correction technique can be useful, provided that one is lucky enough to start sufficiently close to the global minimum. All these methods suffer from the inability to distinguish a local minimum from a global one because they follow the local gradients towards the minimum, even if some methods reset the search direction when the search is likely to be stuck in what is presumably a local minimum. Deterministic methods based on…
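
    The robustness notion described above (small parameter perturbations should not change the best solution) lends itself to a simple check. The sketch below is a generic perturbation test, not from the survey; the relative step size is an assumed default, and it does not perturb parameters whose value is exactly zero.

```python
def robustness(f, params, rel_step=0.01):
    """Largest relative change in the objective f when each parameter is
    perturbed by +/- rel_step (one at a time). A large value suggests
    the optimum is fragile and may not be physiologically relevant.
    Note: a parameter equal to 0.0 is left unperturbed by p * s."""
    base = f(params)
    worst = 0.0
    for i, p in enumerate(params):
        for s in (1 + rel_step, 1 - rel_step):
            q = list(params)
            q[i] = p * s
            worst = max(worst, abs(f(q) - base))
    return worst / max(abs(base), 1e-12)
```

    A flat objective near its optimum scores near zero, while a sharply curved one scores high, which operationalizes the survey's point that a non-robust parameter set should be treated with suspicion.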

  11. Designing a fully automated multi-bioreactor plant for fast DoE optimization of pharmaceutical protein production.

    Science.gov (United States)

    Fricke, Jens; Pohlmann, Kristof; Jonescheit, Nils A; Ellert, Andree; Joksch, Burkhard; Luttmann, Reiner

    2013-06-01

    The identification of optimal expression conditions for state-of-the-art production of pharmaceutical proteins is a very time-consuming and expensive process. In this report a method for rapid and reproducible optimization of protein expression in an in-house designed small-scale BIOSTAT® multi-bioreactor plant is described. A newly developed BioPAT® MFCS/win Design of Experiments (DoE) module (Sartorius Stedim Systems, Germany) connects the process control system MFCS/win with the DoE software MODDE® (Umetrics AB, Sweden) and therefore enables the implementation of fully automated optimization procedures. As a proof of concept, a commercial Pichia pastoris strain KM71H was transformed for the expression of potential malaria vaccines. Owing to the DoE optimization procedure, this approach doubled intact-protein secretion productivity compared with the initial cultivation results. In a next step, robustness with respect to process parameter variability was demonstrated around the determined optimum. A significantly improved pharmaceutical production process was thereby established within seven 24-hour cultivation cycles. Specifically, regarding the regulatory demands pointed out in the process analytical technology (PAT) initiative of the United States Food and Drug Administration (FDA), the combination of a highly instrumented, fully automated multi-bioreactor platform with proper cultivation strategies and extended DoE software solutions opens up promising benefits and opportunities for pharmaceutical protein production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
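
    At its simplest, a Design of Experiments run table enumerates factor-level combinations, one cultivation per combination. The sketch below generates a full-factorial table; the factor names and levels are invented for illustration, and a DoE package such as MODDE would typically use a fractional or response-surface design instead of the full factorial shown here.

```python
from itertools import product

def full_factorial(factors):
    """Full-factorial design table: one run (a dict of factor settings)
    per combination of levels. `factors` maps factor name -> levels."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]
```

    For k factors at L levels this generates L**k runs, which is exactly why the paper's seven-cycle result matters: a well-chosen design extracts the optimum from far fewer cultivations than the full grid would require.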

  12. Simple heuristics: A bridge between manual core design and automated optimization methods

    International Nuclear Information System (INIS)

    White, J.R.; Delmolino, P.M.

    1993-01-01

    The primary function of RESCUE is to serve as an aid in the analysis and identification of feasible loading patterns for LWR reload cores. The unique feature of RESCUE is that its physics model is based on some recent advances in generalized perturbation theory (GPT) methods. The high order GPT techniques offer the accuracy, computational efficiency, and flexibility needed for the implementation of a full range of capabilities within a set of compatible interactive (manual and semi-automated) and automated design tools. The basic design philosophy and current features within RESCUE are reviewed, and the new semi-automated capability is highlighted. The online advisor facility appears quite promising and it provides a natural bridge between the traditional trial-and-error manual process and the recent progress towards fully automated optimization sequences. (orig.)

  13. Density-based penalty parameter optimization on C-SVM.

    Science.gov (United States)

    Liu, Yun; Lian, Jie; Bartolacci, Michael R; Zeng, Qing-An

    2014-01-01

    The support vector machine (SVM) is one of the most widely used approaches for data classification and regression. An SVM achieves the largest distance between the positive and negative support vectors, which neglects the remote instances away from the SVM interface. In order to avoid a position change of the SVM interface caused by system outliers, C-SVM was introduced to decrease the influence of such outliers. Traditional C-SVM holds a uniform penalty parameter C for both positive and negative instances; however, given their different proportions and data distributions, positive and negative instances should be assigned different weights for the penalty parameter of the error terms. Therefore, in this paper, we propose density-based penalty parameter optimization for C-SVM. The experimental results indicate that our proposed algorithm has outstanding performance with respect to both precision and recall.
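
    The idea of per-class penalty parameters can be illustrated with the simple count-based ("balanced") weighting below; the paper's density-based scheme goes further by also weighting according to the local data distribution, which this sketch does not attempt.

```python
def class_penalties(labels, C=1.0):
    """Per-class penalty parameters for a C-SVM: rarer classes receive
    a larger C so their misclassifications are penalized more heavily.
    Weight for class y: C * n_samples / (n_classes * count(y))."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return {y: C * n / (len(counts) * c) for y, c in counts.items()}
```

    In a library such as scikit-learn this corresponds to passing per-class values via the SVC `class_weight` argument, so that the error term for the minority class carries the larger effective C.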

  14. Optimization of dissolution process parameters for uranium ore concentrate powders

    Energy Technology Data Exchange (ETDEWEB)

    Misra, M.; Reddy, D.M.; Reddy, A.L.V.; Tiwari, S.K.; Venkataswamy, J.; Setty, D.S.; Sheela, S.; Saibaba, N. [Nuclear Fuel Complex, Hyderabad (India)

    2013-07-01

    The Nuclear Fuel Complex processes Uranium Ore Concentrate (UOC) to produce the uranium dioxide powder required for the fabrication of fuel assemblies for Pressurized Heavy Water Reactors (PHWRs) in India. UOC is dissolved in nitric acid and further purified by a solvent extraction process to produce nuclear-grade UO{sub 2} powder. Dissolution of UOC in nitric acid involves complex nitric oxide based reactions, since the UOC is in the form of uranium octaoxide (U{sub 3}O{sub 8}) or uranium dioxide (UO{sub 2}). The process kinetics of UOC dissolution is largely influenced by parameters such as the concentration and flow rate of nitric acid, temperature and air flow rate, which are also found to affect the recovery of nitric oxide as nitric acid. Plant-scale dissolution of a 2 MT batch in a single reactor was studied, and excellent recovery of oxides of nitrogen (NO{sub x}) as nitric acid was observed. The dissolution process is automated by a PLC-based Supervisory Control and Data Acquisition (SCADA) system for accurate control of the process parameters, and around 200 metric tons of UOC have been successfully dissolved. The paper covers the complex chemistry involved in the UOC dissolution process as well as the SCADA system. The solid- and liquid-phase reactions were studied along with the multiple stoichiometries of the nitrous oxide generated. (author)

  15. FindFoci: a focus detection algorithm with automated parameter training that closely matches human assignments, reduces human inconsistencies and increases speed of analysis.

    Directory of Open Access Journals (Sweden)

    Alex D Herbert

    Full Text Available Accurate and reproducible quantification of the accumulation of proteins into foci in cells is essential for data interpretation and for biological inferences. To improve reproducibility, much emphasis has been placed on the preparation of samples, but less attention has been given to reporting and standardizing the quantification of foci. The current standard to quantitate foci in open-source software is to manually determine a range of parameters based on the outcome of one or a few representative images and then apply the parameter combination to the analysis of a larger dataset. Here, we demonstrate the power and utility of using machine learning to train a new algorithm (FindFoci) to determine optimal parameters. FindFoci closely matches human assignments and allows rapid automated exploration of parameter space. Thus, individuals can train the algorithm to mirror their own assignments and then automate focus counting using the same parameters across a large number of images. Using the training algorithm to match human assignments of foci, we demonstrate that applying an optimal parameter combination from a single image is not broadly applicable to analysis of other images scored by the same experimenter or by other experimenters. Our analysis thus reveals wide variation in human assignment of foci and their quantification. To overcome this, we developed training on multiple images, which reduces the inconsistency of using a single or a few images to set parameters for focus detection. FindFoci is provided as an open-source plugin for ImageJ.
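
    The training step described above, choosing parameters so that automated detections agree with a human's assignments, can be reduced to a one-parameter sketch: pick the intensity threshold that best reproduces the human labels. This is a drastically simplified stand-in for FindFoci's multi-parameter search, with made-up intensities and labels, and simple accuracy in place of a proper matching score.

```python
def train_threshold(intensities, human_marks, candidates):
    """Pick the detection threshold whose calls (intensity >= threshold)
    best agree with the human's focus assignments, scored by accuracy."""
    def accuracy(t):
        hits = sum((v >= t) == m for v, m in zip(intensities, human_marks))
        return hits / len(intensities)
    return max(candidates, key=accuracy)
```

    Training the same routine against two different experimenters' labels will generally select two different thresholds, which is the inconsistency the paper quantifies and then mitigates by training on multiple images.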

  16. Reduced order modeling and parameter identification of a building energy system model through an optimization routine

    International Nuclear Information System (INIS)

    Harish, V.S.K.V.; Kumar, Arun

    2016-01-01

    Highlights: • A BES model based on first principles is developed and solved numerically. • Parameters of the lumped-capacitance model are fitted using the proposed optimization routine. • Validations are shown for different types of building construction elements. • Step-response excitations for outdoor air temperature and relative humidity are analyzed. - Abstract: Different control techniques together with intelligent building technology (Building Automation Systems) are used to improve the energy efficiency of buildings. In almost all control projects, it is crucial to have building energy models with high computational efficiency in order to design and tune the controllers and simulate their performance. In this paper, a set of partial differential equations is formulated accounting for energy flow within the building space. These equations are then solved as conventional finite difference equations using the Crank–Nicolson scheme. This higher-order model is regarded as the benchmark model. An optimization algorithm has been developed, depicted through a flowchart, which minimizes the sum squared error between the step responses of the numerical and the optimal model. The optimal model of a construction element is an RC-network model whose R and C values are estimated using a nonlinear time-invariant constrained optimization routine. The model is validated by comparing its step responses with those of two other RC-network models whose parameter values are selected based on certain criteria. Validations are shown for different types of building construction elements, viz. low, medium and heavy thermal capacity elements. Simulation results show that the optimal model follows the step responses of the numerical model more closely than the other two models do.
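
    The fitting idea can be illustrated on the simplest possible case: a single-R, single-C wall whose step response depends only on the time constant tau = R·C, fitted by grid search against "measured" data. This is a one-parameter toy, not the paper's multi-node RC network or its constrained optimization routine, and the numbers are invented.

```python
import math

def step_response(tau, times, t_out=1.0):
    """First-order (one-R-one-C) response to a unit step in outdoor
    temperature: T(t) = T_out * (1 - exp(-t / tau))."""
    return [t_out * (1 - math.exp(-t / tau)) for t in times]

def fit_tau(times, measured, grid):
    """Choose tau minimizing the sum of squared errors between the
    RC model's step response and the benchmark (measured) response."""
    def sse(tau):
        return sum((a - b) ** 2
                   for a, b in zip(step_response(tau, times), measured))
    return min(grid, key=sse)
```

    The paper does the analogous thing per construction element with several Rs and Cs, matching the reduced model's step response to that of the finite-difference benchmark instead of to synthetic data.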

  17. Optimization of plasma flow parameters of the magnetoplasma compressor

    International Nuclear Information System (INIS)

    Dojcinovic, I P; Kuraica, M M; Obradovc, B M; Cvetanovic, N; Puric, J

    2007-01-01

    Optimization of the working conditions of the magnetoplasma compressor (MPC) has been performed by analysing the discharge and compression plasma flow parameters in hydrogen, nitrogen and argon at different pressures. The energy conversion rate, volt-ampere curve exponent and plasma flow velocities have been studied to optimize the efficiency of energy transfer from the supply source to the plasma. It has been found that energy transfer from the supply to the plasma is most effective with hydrogen as the working gas at a pressure of 1000 Pa. The accelerating regime exists in hydrogen at pressures up to 3000 Pa, in nitrogen up to 2000 Pa and in argon up to 1000 Pa; at higher pressures the MPC works in the decelerating regime in all the gases. At pressures lower than 200 Pa, high cathode erosion is observed. MPC plasma flow parameter optimization is very important because this plasma accelerating system may be of special interest for solid surface modification and other technology applications.

  18. Optimal correction and design parameter search by modern methods of rigorous global optimization

    International Nuclear Information System (INIS)

    Makino, K.; Berz, M.

    2011-01-01

    Frequently the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibit multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, frequently an abundance of optimization runs are carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and by using the underestimators to rigorously iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle

  19. Optimizing the response to surveillance alerts in automated surveillance systems.

    Science.gov (United States)

    Izadi, Masoumeh; Buckeridge, David L

    2011-02-28

    Although much research effort has been directed toward refining algorithms for disease outbreak alerting, considerably less attention has been given to the response to alerts generated from statistical detection algorithms. Given the inherent inaccuracy in alerting, it is imperative to develop methods that help public health personnel identify optimal policies in response to alerts. This study evaluates the application of dynamic decision-making models to the problem of responding to outbreak detection methods, using anthrax surveillance as an example. Adaptive optimization through approximate dynamic programming is used to generate a policy for decision making following outbreak detection. We investigate theoretically the degree of noise the model can tolerate while keeping near-optimal behavior. We also evaluate the policy from our model empirically and compare it with current approaches in routine public health practice for investigating alerts. Timeliness of outbreak confirmation and the total costs associated with the decisions made are used as performance measures. Using our approach, on average, 80 per cent of outbreaks were confirmed prior to the fifth post-attack day, at considerably less cost than response strategies currently in use. Experimental results are also provided to illustrate the robustness of the adaptive optimization approach and to show the realization of the derived error bounds in practice. Copyright © 2011 John Wiley & Sons, Ltd.
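
    The dynamic-programming formulation behind such a policy can be shown on a deliberately tiny model: three alert-severity states, where "wait" pays a growing delay cost and moves the state up, and "investigate" pays a fixed cost and ends the episode. This toy value iteration is a stand-in for the paper's approximate dynamic programming over a realistic surveillance state space; all costs and the discount factor are invented.

```python
def alert_policy(investigate_cost=5.0, delay_costs=(0.0, 2.0, 4.0),
                 gamma=0.9, sweeps=200):
    """Value iteration over alert-severity states 0..len(delay_costs)-1.
    'wait' pays delay_costs[s] and moves to state s+1 (capped);
    'investigate' pays a fixed cost and terminates. Returns the greedy
    action per state under the converged cost-to-go V."""
    n = len(delay_costs)
    V = [0.0] * n
    for _ in range(sweeps):
        V = [min(investigate_cost,
                 delay_costs[s] + gamma * V[min(s + 1, n - 1)])
             for s in range(n)]
    return ["investigate"
            if investigate_cost <= delay_costs[s] + gamma * V[min(s + 1, n - 1)]
            else "wait"
            for s in range(n)]
```

    Even this toy reproduces the qualitative policy the paper argues for: wait while the expected delay cost is low, and commit to an investigation once accumulating delay costs would exceed the one-off cost of confirming the outbreak.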

  20. Optimization of geometric parameters of heat exchange pipes pin finning

    Science.gov (United States)

    Akulov, K. A.; Golik, V. V.; Voronin, K. S.; Zakirzakov, A. G.

    2018-05-01

    The work is devoted to optimization of the geometric parameters of the pin finning of heat-exchange pipes. Pin fins were treated, from the standpoint of the mechanics of deformable solids, as cantilever beams under a uniformly distributed load. It was determined for which geometric parameters of the pin (diameter and length) the stresses induced by the washing fluid do not exceed the yield strength of the material (aluminum). Optimal values of the pin geometry were obtained for different velocities of the washing medium. Water and air were chosen as flow media, and round and square pin cross sections were considered. Pin finning proved to be more than three times as compact as circumferential finning, so its use makes it possible to increase the number of fins per meter of heat-exchange pipe; it is well known that this is the main method of increasing the heat transfer of a convective surface, which gives pin fins an indisputable advantage.
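    The mechanical model described (a cantilever pin under a uniformly distributed load) can be illustrated by computing the root bending stress of a round pin and comparing it against a yield strength; all numerical values below are hypothetical, not taken from the paper.

```python
import math

def max_pin_stress_round(q, L, d):
    """Maximum bending stress (Pa) at the root of a round pin fin,
    modeled as a cantilever under a uniformly distributed load q (N/m)."""
    M = q * L ** 2 / 2.0          # root bending moment, N*m
    I = math.pi * d ** 4 / 64.0   # second moment of area of the circle, m^4
    return M * (d / 2.0) / I      # sigma = M*c/I with c = d/2

# assumed illustrative values: 20 mm long pin, 3 mm diameter, 50 N/m load
sigma = max_pin_stress_round(q=50.0, L=0.02, d=0.003)
YIELD_AL = 95e6                   # assumed yield strength of aluminum, Pa
ok = sigma < YIELD_AL             # the admissibility criterion of the study
```

    Sweeping `q` over values derived from different flow velocities, and `d`/`L` over candidate geometries, reproduces the kind of feasibility map the authors describe.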

  1. Optimization of vibratory welding process parameters using response surface methodology

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Pravin Kumar; Kumar, S. Deepak; Patel, D.; Prasad, S. B. [National Institute of Technology Jamshedpur, Jharkhand (India)

    2017-05-15

    The current investigation was carried out to study the effect of a vibratory welding technique on the mechanical properties of 6 mm thick butt-welded mild steel plates. A new vibratory welding setup was designed and developed which is capable of transferring vibrations, at a resonance frequency of 300 Hz, into the molten weld pool before it solidifies during the shielded metal arc welding (SMAW) process. The important process parameters of the vibratory welding technique, namely welding current, welding speed, and the frequency of the vibrations induced in the molten weld pool, were optimized using Taguchi analysis and response surface methodology (RSM). The effect of the process parameters on tensile strength and hardness was evaluated using these optimization techniques. Applying RSM, the effects of the vibratory welding parameters on tensile strength and hardness were obtained through two separate regression equations. Results showed that the most influential factor for the desired tensile strength and hardness is the frequency at its resonance value, i.e. 300 Hz. The micro-hardness and microstructures of the vibratory welded joints were studied in detail and compared with those of conventional SMAW joints. A comparatively uniform and fine grain structure was found in the vibratory welded joints.

  2. Optimization of cutting parameters for machining time in turning process

    Science.gov (United States)

    Mavliutov, A. R.; Zlotnikov, E. G.

    2018-03-01

    This paper describes the most effective methods for the nonlinearly constrained optimization of cutting parameters in the turning process. Among them are linear programming with the dual-simplex algorithm, the interior point method, and the augmented Lagrangian genetic algorithm (ALGA). Each of them is tested on an actual example: the minimization of machining time in a turning operation. The computations were conducted in the MATLAB environment. The comparative results obtained from these methods show that the optimal values of the linearized objective and of the original function are the same. ALGA gives sufficiently accurate values; however, when the algorithm uses a hybrid function with the interior point algorithm, the resulting values have the highest accuracy.
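    A minimal sketch of the constrained machining-time problem can be written as a brute-force grid search over feasible cutting speed and feed; the time, power, and roughness models and all constants below are simplified textbook-style assumptions, not the paper's MATLAB formulation.

```python
import math

D, L = 50.0, 100.0            # workpiece diameter and length, mm (assumed)
P_MAX, RA_MAX = 3.0, 3.2      # power limit (kW), roughness limit (um) (assumed)
K_P, NOSE_R = 0.025, 0.8      # specific power coeff., tool nose radius (assumed)

def time_min(v, f):
    """Machining time for one turning pass (minutes): pi*D*L/(1000*v*f),
    with cutting speed v in m/min and feed f in mm/rev."""
    return math.pi * D * L / (1000.0 * v * f)

def feasible(v, f):
    power = K_P * v * f                  # crude cutting power model, kW
    ra = 1000.0 * f ** 2 / (8.0 * NOSE_R)  # ideal surface roughness, um
    return power <= P_MAX and ra <= RA_MAX

def grid_search():
    """Exhaustively scan v in (0, 400] m/min and f in (0, 0.6] mm/rev."""
    best = None
    for i in range(1, 401):
        v = i * 1.0
        for j in range(1, 61):
            f = j * 0.01
            if feasible(v, f) and (best is None or time_min(v, f) < best[0]):
                best = (time_min(v, f), v, f)
    return best

best_time, best_v, best_f = grid_search()
```

    As expected for a time objective, the optimum lands on the constraint boundary: the feed is capped by the roughness limit and the speed by its upper bound.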

  3. Optimal CT scanning parameters for commonly used tumor ablation applicators

    International Nuclear Information System (INIS)

    Eltorai, Adam E.M.; Baird, Grayson L.; Monu, Nicholas; Wolf, Farrah; Seidler, Michael; Collins, Scott; Kim, Jeomsoon; Dupuy, Damian E.

    2017-01-01

    Highlights: • This study aimed to determine optimal scanning parameters for commonly used tumor ablation applicators. • The findings illustrate the overall interaction of the effects of kVp, ASiR, and reconstruction algorithm within and between probes, so that radiologists may easily reference optimal imaging performance. • Optimum combinations for each probe are provided. - Abstract: Purpose: CT beam-hardening artifact can make visualization of the tumor margin and its relationship to the ablation applicator tip challenging. The purpose of this study was to determine optimal scanning parameters for commonly used applicators. Materials and methods: Applicators were placed in ex vivo cow livers with implanted mock tumors, surrounded by bolus gel. CT scans were performed at 440 mA with 5 mm slice thickness while varying kVp, scan time, ASiR, scan type, pitch, and reconstruction algorithm. Four radiologists blindly scored the images for image quality and artifact. Results: A significant relationship between probe, kVp level, ASiR level, and reconstruction algorithm was observed for both image artifact and image quality (both p < 0.0001). Specifically, certain combinations of kVp, ASiR, and reconstruction algorithm yield better images than others. In particular, one probe performed equivalently or better than any competing probe considered here, regardless of the kVp, ASiR, and reconstruction algorithm combination. Conclusion: The findings illustrate the overall interaction of the effects of kVp, ASiR, and reconstruction algorithm within and between probes, so that radiologists may easily reference optimal imaging performance for the combinations of kVp, ASiR, reconstruction algorithm, and probes at their disposal. Optimum combinations for each probe are provided.

  4. Optimal CT scanning parameters for commonly used tumor ablation applicators

    Energy Technology Data Exchange (ETDEWEB)

    Eltorai, Adam E.M. [Warren Alpert Medical School of Brown University (United States); Baird, Grayson L. [Department of Diagnostic Imaging (United States); Warren Alpert Medical School of Brown University (United States); Lifespan Biostatistics Core (United States); Rhode Island Hospital (United States); Monu, Nicholas; Wolf, Farrah; Seidler, Michael [Department of Diagnostic Imaging (United States); Warren Alpert Medical School of Brown University (United States); Rhode Island Hospital (United States); Collins, Scott [Department of Diagnostic Imaging (United States); Rhode Island Hospital (United States); Kim, Jeomsoon [Department of Medical Physics (United States); Rhode Island Hospital (United States); Dupuy, Damian E., E-mail: ddupuy@comcast.net [Department of Diagnostic Imaging (United States); Warren Alpert Medical School of Brown University (United States); Rhode Island Hospital (United States)

    2017-04-15

    Highlights: • This study aimed to determine optimal scanning parameters for commonly used tumor ablation applicators. • The findings illustrate the overall interaction of the effects of kVp, ASiR, and reconstruction algorithm within and between probes, so that radiologists may easily reference optimal imaging performance. • Optimum combinations for each probe are provided. - Abstract: Purpose: CT beam-hardening artifact can make visualization of the tumor margin and its relationship to the ablation applicator tip challenging. The purpose of this study was to determine optimal scanning parameters for commonly used applicators. Materials and methods: Applicators were placed in ex vivo cow livers with implanted mock tumors, surrounded by bolus gel. CT scans were performed at 440 mA with 5 mm slice thickness while varying kVp, scan time, ASiR, scan type, pitch, and reconstruction algorithm. Four radiologists blindly scored the images for image quality and artifact. Results: A significant relationship between probe, kVp level, ASiR level, and reconstruction algorithm was observed for both image artifact and image quality (both p < 0.0001). Specifically, certain combinations of kVp, ASiR, and reconstruction algorithm yield better images than others. In particular, one probe performed equivalently or better than any competing probe considered here, regardless of the kVp, ASiR, and reconstruction algorithm combination. Conclusion: The findings illustrate the overall interaction of the effects of kVp, ASiR, and reconstruction algorithm within and between probes, so that radiologists may easily reference optimal imaging performance for the combinations of kVp, ASiR, reconstruction algorithm, and probes at their disposal. Optimum combinations for each probe are provided.

  5. Standardless quantification by parameter optimization in electron probe microanalysis

    Energy Technology Data Exchange (ETDEWEB)

    Limandri, Silvina P. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Bonetto, Rita D. [Centro de Investigacion y Desarrollo en Ciencias Aplicadas Dr. Jorge Ronco (CINDECA), CONICET, 47 Street 257, (1900) La Plata (Argentina); Facultad de Ciencias Exactas, Universidad Nacional de La Plata, 1 and 47 Streets (1900) La Plata (Argentina); Josa, Victor Galvan; Carreras, Alejo C. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Trincavelli, Jorge C., E-mail: trincavelli@famaf.unc.edu.ar [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina)

    2012-11-15

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA on a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 in 66% of the cases for POEMA, GENESIS and DTSA, respectively. - Highlights: • A method for standardless quantification in EPMA is presented. • It gives better results than the commercial software GENESIS Spectrum. • It gives better results than the software DTSA. • It allows the determination of the conductive coating thickness. • It gives an estimation of the concentration uncertainties.
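    The core idea, minimizing the quadratic differences between a measured spectrum and an analytical model by adjusting its parameters, can be illustrated with a toy linear least-squares fit of a single Gaussian peak plus a flat background. The peak position, width, and all constants below are hypothetical; the real POEMA model is far richer.

```python
import math

def model_basis(E, E0=6.40, sigma=0.06):
    """Hypothetical Gaussian peak shape centered at E0 (keV)."""
    return math.exp(-((E - E0) ** 2) / (2.0 * sigma ** 2))

def fit_peak(energies, counts):
    """Linear least squares for counts ~ A*gauss(E) + b, i.e. the
    parameters minimizing the sum of squared residuals."""
    g = [model_basis(E) for E in energies]
    n = len(energies)
    sg, sgg = sum(g), sum(x * x for x in g)
    sc, sgc = sum(counts), sum(x * y for x, y in zip(g, counts))
    det = n * sgg - sg * sg          # normal-equations determinant
    A = (n * sgc - sg * sc) / det    # peak amplitude
    b = (sgg * sc - sg * sgc) / det  # flat background level
    return A, b

# synthetic "experimental" spectrum generated from known parameters
energies = [6.1 + 0.01 * i for i in range(60)]
counts = [500.0 * model_basis(E) + 20.0 for E in energies]
A, b = fit_peak(energies, counts)
```

    In the full method, nonlinear parameters (peak positions, widths, detector response, concentrations) are optimized iteratively rather than solved in closed form, but the objective is the same quadratic misfit.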

  6. Automated Structural Optimization System (ASTROS). Volume 1. Theoretical Manual

    Science.gov (United States)

    1988-12-01

    corresponding frequency list are given by Equation C-9. The second set of parameters is the frequency list used in solving Equation C-3 to obtain the response vector {u(ω)}. This frequency list is: ω = 2πf₀, 2πf₁, 2πf₂, …, 2πfₙ (C-20). The frequency lists ω̂ and ω are not necessarily equal. While setting… alternative methods are used to input the frequency list ω. For the first method, the frequency list ω is input via two parameters: Δf (C-21

  7. Optimization of some electrochemical etching parameters for cellulose derivatives

    International Nuclear Information System (INIS)

    Chowdhury, Annis; Gammage, R.B.

    1978-01-01

    Electrochemical etching of fast neutron induced recoil particle tracks in cellulose derivatives and other polymers provides an inexpensive and sensitive means of fast neutron personnel dosimetry. A study of the shape, clarity, and size of the tracks in Transilwrap polycarbonate indicated that the optimum normality of the potassium hydroxide etching solution is 9 N. Optimizations have also been attempted for cellulose nitrate, triacetate, and acetobutyrate with respect to such electrochemical etching parameters as frequency, voltage gradient, and concentration of the etching solution. The measurement of differential leakage currents between the undamaged and the neutron damaged foils aided in the selection of optimum frequencies. (author)

  8. Robust Optimization for Household Load Scheduling with Uncertain Parameters

    Directory of Open Access Journals (Sweden)

    Jidong Wang

    2018-04-01

    Home energy management systems (HEMS) face many sources of uncertainty, which have a great impact on the scheduling of home appliances. To handle the uncertain parameters in the household load scheduling problem, this paper uses a robust optimization method to rebuild the household load scheduling model for home energy management. The proposed model provides complete robust schedules for customers while accounting for disturbances of the uncertain parameters. These schedules not only guarantee the customers' comfort constraints but also cooperatively schedule the electric devices for cost minimization and load shifting. Moreover, customers can obtain multiple schedules by setting different robustness levels, trading off comfort against economy.

  9. AMMOS: Automated Molecular Mechanics Optimization tool for in silico Screening

    Directory of Open Access Journals (Sweden)

    Pajeva Ilza

    2008-10-01

    Abstract Background Virtual or in silico ligand screening combined with other computational methods is one of the most promising approaches to search for new lead compounds, thereby greatly assisting the drug discovery process. Despite considerable progress in virtual screening methodologies, available computer programs do not easily address problems such as structural optimization of compounds in a screening library, receptor flexibility/induced fit, and accurate prediction of protein-ligand interactions. It has been shown that structural optimization of chemical compounds, and post-docking optimization in multi-step structure-based virtual screening approaches, help to further improve the overall efficiency of the methods. To address some of these points, we developed the program AMMOS for refining both the 3D structures of the small molecules present in chemical libraries and the predicted receptor-ligand complexes, allowing partial to full atom flexibility through molecular mechanics optimization. Results The program AMMOS carries out an automatic procedure that allows the structural refinement of compound collections and energy minimization of protein-ligand complexes using the open source program AMMP. The performance of our package was evaluated by comparing the structures of small chemical entities minimized by AMMOS with those minimized with the Tripos and MMFF94s force fields. Next, AMMOS was used for fully flexible minimization of protein-ligand complexes obtained from a multi-step virtual screening. Enrichment studies of the selected pre-docked complexes containing 60% of the initially added inhibitors were carried out with or without final AMMOS minimization on two protein targets having different binding pocket properties. AMMOS was able to improve the enrichment after the pre-docking stage, with 40 to 60% of the initially added active compounds found in the top 3% to 5% of the entire compound collection

  10. SU-G-TeP4-08: Automating the Verification of Patient Treatment Parameters

    Energy Technology Data Exchange (ETDEWEB)

    DiCostanzo, D; Ayan, A; Woollard, J; Gupta, N [The Ohio State University, Columbus, OH (United States)

    2016-06-15

    Purpose: To automate the daily verification of each patient's treatment by utilizing the trajectory log files (TLs) written by the Varian TrueBeam linear accelerator, while reducing the number of false positives, such as spurious jaw and gantry positioning errors, displayed in the Treatment History tab of Varian's Chart QA module. Methods: Small deviations in treatment parameters are difficult to detect in weekly chart checks, but may be significant in reducing delivery errors, and would be most valuable if detected daily. Software was developed in house to read TLs. Multiple functions were implemented within the software so that it can operate via a GUI to analyze TLs, or as a script run on a regular basis. To determine tolerance levels for the scripted analysis, 15,241 TLs from seven TrueBeams were analyzed. The maximum error of each axis in each TL was written to a CSV file and statistically analyzed to determine, for each axis accessible in the TLs, the tolerance beyond which a record is flagged for manual review. The software and scripts were tested by varying the tolerance values to verify their behavior. After the tolerances were determined, multiple weeks of manual chart checks were performed simultaneously with the automated analysis to ensure validity. Results: The tolerance values for the major axes were determined to be 0.025 degrees for the collimator, 1.0 degree for the gantry, 0.002 cm for the y-jaws, 0.01 cm for the x-jaws, and 0.5 MU for the MU. The automated verification of treatment parameters has been in clinical use for 4 months. During that time, no errors in machine delivery of the patient treatments were found. Conclusion: The process detailed here is a viable and effective alternative to manually checking treatment parameters during weekly chart checks.
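    The scripted tolerance check can be sketched as below. The tolerance values are those quoted in the abstract; the data structure, key names, and function are illustrative assumptions, not Varian's log format.

```python
# Per-axis tolerances from the abstract (units as noted in the key names)
TOLERANCES = {
    "collimator_deg": 0.025,
    "gantry_deg": 1.0,
    "y_jaw_cm": 0.002,
    "x_jaw_cm": 0.01,
    "mu": 0.5,
}

def flag_deviations(max_errors):
    """Given the maximum |expected - actual| per axis extracted from a
    trajectory log, return the axes that exceed their tolerance and
    therefore warrant manual review."""
    return [axis for axis, err in max_errors.items()
            if abs(err) > TOLERANCES[axis]]
```

    A nightly script would parse each day's logs into such per-axis maxima and email the flagged axes, leaving only genuine outliers for the physicist to review.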

  11. EVALUATION OF ANAEMIA USING RED CELL AND RETICULOCYTE PARAMETERS USING AUTOMATED HAEMATOLOGY ANALYSER

    Directory of Open Access Journals (Sweden)

    Vidyadhar Rao

    2016-06-01

    Use of current models of automated haematology analysers helps in calculating the haemoglobin content of mature red cells and reticulocytes and the percentages of microcytic and hypochromic red cells. This has helped clinicians reach early diagnosis and management of different haemopoietic disorders such as iron deficiency anaemia, thalassaemia and anaemia of chronic disease. AIM This study uses an automated haematology analyser to evaluate anaemia using red cell and reticulocyte parameters. Three types of anaemia were evaluated: iron deficiency anaemia, anaemia of long duration, and anaemia associated with chronic disease and iron deficiency. MATERIALS AND METHODS Blood samples were collected from 287 adult patients with anaemia, differentiated according to their iron status, haemoglobinopathies and inflammatory activity: iron deficiency anaemia (n=132), anaemia of long duration (ACD; n=97) and anaemia associated with chronic disease with iron deficiency (ACD Combi; n=58). The percentages of microcytic and hypochromic red cells and the haemoglobin levels in reticulocytes and mature RBCs were calculated. The accuracy of the parameters in differentiating between the types of anaemia was analysed using receiver operating characteristic analysis. OBSERVATIONS AND RESULTS There was no difference in parameters between the iron deficiency group and the group with anaemia of chronic disease and iron deficiency. The hypochromic red cell percentage was the best parameter for differentiating anaemia of chronic disease with or without absolute iron deficiency, with a sensitivity of 72.7% and a specificity of 70.4%. CONCLUSIONS The red cell and reticulocyte parameters were reasonably good indicators for differentiating absolute iron deficiency anaemia from anaemia of chronic disease.

  12. Parameter Optimization of MIMO Fuzzy Optimal Model Predictive Control By APSO

    Directory of Open Access Journals (Sweden)

    Adel Taieb

    2017-01-01

    This paper introduces a new development for designing a Multi-Input Multi-Output (MIMO) Fuzzy Optimal Model Predictive Control (FOMPC) using the Adaptive Particle Swarm Optimization (APSO) algorithm. The aim of the proposed control, called FOMPC-APSO, is to develop an efficient algorithm that achieves good performance while guaranteeing minimal control effort; this is done by determining the optimal weights of the objective function. Our method formulates this as an optimization problem solved by the APSO algorithm. The MIMO system to be controlled is modeled by a Takagi-Sugeno (TS) fuzzy system whose parameters are identified using the weighted recursive least squares method. The utility of the proposed controller is demonstrated by applying it to two nonlinear processes, a Continuous Stirred Tank Reactor (CSTR) and a tank system, where the proposed approach provides better performance compared with other methods.
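    A plain (non-adaptive) particle swarm optimizer of the kind underlying APSO can be sketched as follows; this is a generic PSO for tuning objective-function weights or any other continuous parameters, not the authors' adaptive variant, and all coefficients are conventional defaults.

```python
import random

def pso(f, dim, bounds, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [lo, hi]^dim with a basic particle swarm:
    inertia w, cognitive pull c1 toward each particle's best,
    social pull c2 toward the swarm's best."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [list(x) for x in xs]          # per-particle best positions
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = list(xs[i]), val
                if val < gval:
                    gbest, gval = list(xs[i]), val
    return gbest, gval
```

    An adaptive variant such as APSO additionally adjusts `w`, `c1`, and `c2` during the run based on the swarm's state; the skeleton above stays the same.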

  13. Optimization and automation of quantitative NMR data extraction.

    Science.gov (United States)

    Bernstein, Michael A; Sýkora, Stan; Peng, Chen; Barba, Agustín; Cobas, Carlos

    2013-06-18

    NMR is routinely used to quantitate chemical species. The necessary experimental procedures for acquiring quantitative data are well known, but relatively little attention has been paid to data processing and analysis. We describe here a robust expert system that can be used to automatically choose the best signals in a sample for overall concentration determination and to determine analyte concentration using all accepted methods. The algorithm is based on complete deconvolution of the spectrum, which makes it tolerant of cases where signals are very close to one another, and includes robust methods for the automatic classification of NMR resonances and molecule-to-spectrum multiplet assignment. With this functionality in place and optimized, it is then a relatively simple matter to apply the same workflow to data in a fully automatic way. The procedure is desirable for both its inherent performance and its applicability to NMR data acquired for very large sample sets.

  14. Acoustical characterization and parameter optimization of polymeric noise control materials

    Science.gov (United States)

    Homsi, Emile N.

    2003-10-01

    The sound transmission loss (STL) characteristics of polymer-based materials are considered. Analytical models that predict, characterize and optimize the STL of polymeric materials, with respect to the physical parameters that affect performance, are developed for a single-layer panel configuration and adapted to layered panel construction with a homogeneous core. An optimum set of material parameters is selected and translated into practical applications for validation. Sound-attenuating thermoplastic materials designed to be used as barrier systems in the automotive and consumer industries have acoustical characteristics that vary as a function of the stiffness and density of the selected material. The validity and applicability of existing theory are explored, and since STL is influenced by factors such as the surface mass density of the panel's material, a method is modified to improve STL performance and optimize load-bearing attributes. An experimentally derived function is applied to the model for better correlation. In-phase and out-of-phase motion of the top and bottom layers is considered. It was found that layered construction of the co-injection type exhibits fused planes at the interface and moves in phase. The model for the single-layer case is adapted to the layered case, where the panel behaves as a single layer. The primary physical parameters that affect STL are identified and manipulated, and the theoretical analysis is linked to the attributes of the resin matrix. A high-STL material with representative characteristics was evaluated against standard resins. It was found that high STL could be achieved by altering the material's matrix and by integrating design solutions in the low-frequency range. A numerical approach is suggested for STL evaluation of simple and complex geometries. In practice, validation on actual vehicle systems proved the adequacy of the acoustical characterization process.
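    The dependence of STL on surface mass density mentioned above is commonly approximated, for a single limp panel at normal incidence, by the mass law. The sketch below uses the standard textbook approximation TL ≈ 20·log10(m·f) − 47 dB; it is a generic formula, not a result of this work.

```python
import math

def mass_law_stl(surface_density, frequency):
    """Normal-incidence mass-law estimate of transmission loss (dB):
    TL ~ 20*log10(m*f) - 47, with m in kg/m^2 and f in Hz."""
    return 20.0 * math.log10(surface_density * frequency) - 47.0
```

    The formula makes the two key trends explicit: doubling either the panel's surface density or the frequency raises the predicted STL by about 6 dB, which is why heavier barriers and higher frequencies are easier cases, while the low-frequency range requires the design measures discussed in the abstract.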

  15. Automated electrochemical assembly of the protected potential TMG-chitotriomycin precursor based on rational optimization of the carbohydrate building block.

    Science.gov (United States)

    Nokami, Toshiki; Isoda, Yuta; Sasaki, Norihiko; Takaiso, Aki; Hayase, Shuichi; Itoh, Toshiyuki; Hayashi, Ryutaro; Shimizu, Akihiro; Yoshida, Jun-ichi

    2015-03-20

    The anomeric arylthio group and the hydroxyl-protecting groups of thioglycosides were optimized to construct carbohydrate building blocks for automated electrochemical solution-phase synthesis of oligoglucosamines having 1,4-β-glycosidic linkages. The optimization study included density functional theory calculations, measurements of the oxidation potentials, and the trial synthesis of the chitotriose trisaccharide. The automated synthesis of the protected potential N,N,N-trimethyl-d-glucosaminylchitotriomycin precursor was accomplished by using the optimized building block.

  16. Automated gamma knife radiosurgery treatment planning with image registration, data-mining, and Nelder-Mead simplex optimization

    International Nuclear Information System (INIS)

    Lee, Kuan J.; Barber, David C.; Walton, Lee

    2006-01-01

    Gamma knife treatments are usually planned manually, requiring much expertise and time. We describe a new, fully automatic method of treatment planning. The treatment volume to be planned is first compared with a database of past treatments to find volumes closely matching in size and shape. The treatment parameters of the closest matches are used as starting points for the new treatment plan. Further optimization is performed with the Nelder-Mead simplex method: the coordinates and weights of the isocenters are allowed to vary until a maximally conformal plan specific to the new treatment volume is found. The method was tested on a randomly selected set of 10 acoustic neuromas and 10 meningiomas. Typically, matching a new volume took under 30 seconds. The time for simplex optimization, on a 3 GHz Xeon processor, ranged from under a minute for small volumes to substantially longer for the largest volumes (>30 000 cubic mm, >20 isocenters). In 8/10 acoustic neuromas and 8/10 meningiomas, the automatic method found plans with a conformation number equal to or better than that of the manual plan. In 4/10 acoustic neuromas and 5/10 meningiomas, both overtreatment and undertreatment ratios were equal or better in the automated plans. In conclusion, data mining of past treatments can be used to derive starting parameters for treatment planning. These parameters can then be computer-optimized to give good plans automatically.
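    The Nelder-Mead simplex method used here is derivative-free, which is what makes it suitable for objectives like plan conformity that are evaluated by simulation. A minimal sketch of the algorithm (this simplified version omits the outside-contraction step and uses the conventional reflection, expansion, contraction, and shrink coefficients; it is not the authors' implementation):

```python
def nelder_mead(f, x0, step=0.5, tol=1e-9, max_iter=500):
    """Minimize f starting from x0 with a basic Nelder-Mead simplex."""
    n = len(x0)
    # initial simplex: x0 plus one step along each coordinate axis
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if f(worst) - f(best) < tol:          # vertices agree: converged
            break
        centroid = [sum(p[d] for p in simplex[:-1]) / n for d in range(n)]
        refl = [2.0 * c - w for c, w in zip(centroid, worst)]
        if f(refl) < f(best):                 # great direction: try expanding
            expd = [3.0 * c - 2.0 * w for c, w in zip(centroid, worst)]
            simplex[-1] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):        # decent: accept reflection
            simplex[-1] = refl
        else:                                 # poor: contract toward centroid
            contr = [0.5 * (c + w) for c, w in zip(centroid, worst)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:                             # last resort: shrink to the best
                simplex = [best] + [[0.5 * (b + q) for b, q in zip(best, p)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)
```

    Seeding `x0` from the database of past treatments, as the paper does, matters precisely because Nelder-Mead is a local method: a good starting simplex determines which basin it converges into.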

  17. Automated capacitive spectrometer for measuring the parameters of deep centers in semiconductor materials

    International Nuclear Information System (INIS)

    Shajmeev, S.S.

    1985-01-01

    An automated capacitive spectrometer for determining the parameters of deep centers in semiconductor materials and devices is described. The facility can be used to study electrically active defects (impurity, radiation, thermal) having deep levels in the semiconductor band gap. It permits determination of the following deep-center parameters: the concentration of each individual deep level, within 5×10⁻¹–5×10⁻¹⁵ of the alloying impurity concentration; the energy position of the level in the band gap, in the range from 0.08 eV above the valence band top to 0.08 eV below the conduction band bottom; the hole or electron capture cross-section of the deep center; and the concentration profile of the deep levels

  18. Automated extraction and validation of children's gait parameters with the Kinect.

    Science.gov (United States)

    Motiian, Saeid; Pergami, Paola; Guffey, Keegan; Mancinelli, Corrie A; Doretto, Gianfranco

    2015-12-02

    Gait analysis for therapy regimen prescription and monitoring requires patients to physically access clinics with specialized equipment. The timely availability of such infrastructure at the right frequency is especially important for small children. Besides being very costly, this is a challenge for many children living in rural areas. This is why this work develops a low-cost, portable, and automated approach for in-home gait analysis, based on the Microsoft Kinect. A robust and efficient method for extracting gait parameters is introduced, which copes with the high variability of noisy Kinect skeleton tracking data experienced across the population of young children. This is achieved by temporally segmenting the data with an approach based on coupling a probabilistic matching of stride template models, learned offline, with the estimation of their global and local temporal scaling. A preliminary study conducted on healthy children between 2 and 4 years of age was performed to analyze the accuracy, precision, repeatability, and concurrent validity of the proposed method against the GAITRite when measuring several spatial and temporal children's gait parameters. The method has excellent accuracy and good precision in segmenting temporal sequences of body joint locations into stride and step cycles. Also, the spatial and temporal gait parameters, estimated automatically, exhibit good concurrent validity with those provided by the GAITRite, as well as very good repeatability. In particular, over a range of nine gait parameters, the relative and absolute agreements were found to be good to excellent, and the overall agreements were found to be moderate to good. This work enables and validates the automated use of the Kinect for children's gait analysis in healthy subjects. In particular, the approach takes a step forward towards developing a low-cost, portable, parent-operated in-home tool for clinicians assisting young children.
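    The temporal segmentation step can be caricatured as peak picking on a joint trajectory. The real method uses probabilistic stride-template matching with temporal scaling, so the sketch below, run on synthetic data, is only a simplified stand-in for the idea of splitting a joint trace into gait cycles.

```python
import math

def stride_peaks(signal, min_gap):
    """Mark candidate stride events as local maxima of a joint
    trajectory that are at least min_gap frames apart."""
    peaks, last = [], -min_gap
    for i in range(1, len(signal) - 1):
        if (signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]
                and i - last >= min_gap):
            peaks.append(i)
            last = i
    return peaks

# synthetic "ankle height" trace: three gait cycles of 30 frames each
trace = [math.sin(2.0 * math.pi * t / 30.0) for t in range(90)]
events = stride_peaks(trace, min_gap=10)
```

    On real Kinect skeletons the signal is far noisier, which is exactly why the paper replaces naive peak picking with learned stride templates and per-stride temporal scaling.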

  19. Experimental optimization of a direct injection homogeneous charge compression ignition gasoline engine using split injections with fully automated microgenetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Canakci, M. [Kocaeli Univ., Izmit (Turkey); Reitz, R.D. [Wisconsin Univ., Dept. of Mechanical Engineering, Madison, WI (United States)

    2003-03-01

    Homogeneous charge compression ignition (HCCI) is receiving attention as a new low-emission engine concept. Little is known about the optimal operating conditions for this engine operating mode. Combustion under homogeneous, low-equivalence-ratio conditions results in modest-temperature combustion products, containing very low concentrations of NO{sub x} and particulate matter (PM), while providing high thermal efficiency. However, this combustion mode can produce higher HC and CO emissions than those of conventional engines. An electronically controlled Caterpillar single-cylinder oil test engine (SCOTE), originally designed for heavy-duty diesel applications, was converted to an HCCI direct injection (DI) gasoline engine. The engine features an electronically controlled low-pressure direct-injection gasoline (DI-G) injector with a 60 deg spray angle that is capable of multiple injections. The use of double injection was explored for emission control, and the engine was optimized using fully automated experiments and a microgenetic algorithm optimization code. The variables changed during the optimization include the intake air temperature, the start-of-injection timing, and the split injection parameters (the percentage of fuel mass in each injection and the dwell between the pulses). The engine performance and emissions were determined at 700 r/min with a constant fuel flow rate at 10 MPa fuel injection pressure. The results show that significant emissions reductions are possible with the use of optimal injection strategies. (Author)
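    A microgenetic algorithm of the kind mentioned can be sketched generically: its defining features are a tiny population, elitism, and convergence-triggered restarts in place of mutation. The test objective and all constants below are illustrative, not the authors' engine-response code.

```python
import random

def micro_ga(f, dim, bounds, pop_size=5, gens=200, seed=1):
    """Micro-GA sketch: tiny population, binary tournament selection,
    uniform crossover, elitism, and restarts instead of mutation."""
    rng = random.Random(seed)
    lo, hi = bounds

    def rand_ind():
        return [rng.uniform(lo, hi) for _ in range(dim)]

    pop = [rand_ind() for _ in range(pop_size)]
    elite = min(pop, key=f)
    for _ in range(gens):
        # restart: keep only the elite and re-randomize the rest once
        # the population has nearly converged
        spread = max(abs(x - e) for p in pop for x, e in zip(p, elite))
        if spread < 1e-3 * (hi - lo):
            pop = [elite] + [rand_ind() for _ in range(pop_size - 1)]
        nxt = [elite]                              # elitism
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 2), key=f)    # binary tournaments
            p2 = min(rng.sample(pop, 2), key=f)
            nxt.append([x if rng.random() < 0.5 else y
                        for x, y in zip(p1, p2)])  # uniform crossover
        pop = nxt
        elite = min(pop, key=f)
    return elite
```

    The tiny population is the point in a setting like this: each fitness evaluation is a full automated engine experiment, so the budget of evaluations per generation must stay very small.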

  20. Optimal Sensor Networks Scheduling in Identification of Distributed Parameter Systems

    CERN Document Server

    Patan, Maciej

    2012-01-01

    Sensor networks have recently come into prominence because they hold the potential to revolutionize a wide spectrum of both civilian and military applications. An ingenious characteristic of sensor networks is the distributed nature of data acquisition. Therefore they seem to be ideally prepared for the task of monitoring processes with spatio-temporal dynamics, which constitute one of the most general and important classes of systems in the modelling of real-world phenomena. It is clear that careful deployment and activation of sensor nodes are critical for collecting the most valuable information from the observed environment. Optimal Sensor Network Scheduling in Identification of Distributed Parameter Systems discusses the characteristic features of the sensor scheduling problem, analyzes classical and recent approaches, and proposes a wide range of original solutions, dedicated especially to networks with mobile and scanning nodes. Both researchers and practitioners will find the case studies, the proposed al...

  1. Microbial alkaline proteases: Optimization of production parameters and their properties

    Directory of Open Access Journals (Sweden)

    Kanupriya Miglani Sharma

    2017-06-01

    Full Text Available Proteases are hydrolytic enzymes capable of degrading proteins into small peptides and amino acids. They account for nearly 60% of the total industrial enzyme market. Proteases are extensively exploited commercially in the food, pharmaceutical, leather and detergent industries. Given their potential use, there has been renewed interest in the discovery of proteases with novel properties and a constant drive to optimize enzyme production. This review summarizes a fraction of the enormous number of reports available on various aspects of alkaline proteases. Diverse sources for the isolation of alkaline-protease-producing microorganisms are reported. The various nutritional and environmental parameters affecting the production of alkaline proteases in submerged and solid-state fermentation are described. The enzymatic and physicochemical properties of alkaline proteases from several microorganisms are discussed, which can help to identify enzymes with high activity and stability over extreme pH and temperature, so that they can be developed for industrial applications.

  2. Optimization-based particle filter for state and parameter estimation

    Institute of Scientific and Technical Information of China (English)

    Li Fu; Qi Fei; Shi Guangming; Zhang Li

    2009-01-01

    In recent years, the theory of particle filters has been developed and widely used for state and parameter estimation in nonlinear/non-Gaussian systems. Choosing a good importance density is a critical issue in particle filter design. In order to improve the approximation of the posterior distribution, this paper provides an optimization-based algorithm (the steepest descent method) to generate the proposal distribution and then sample particles from it. The algorithm is applied to a 1-D case, and the simulation results show that the proposed particle filter performs better than the extended Kalman filter (EKF), the standard particle filter (PF), the extended Kalman particle filter (PF-EKF) and the unscented particle filter (UPF), both in efficiency and in estimation precision.
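    For context, the baseline that such proposal-design work improves on is the bootstrap particle filter, which simply proposes from the transition model. A minimal 1-D sketch (the linear-Gaussian model and noise levels are assumed for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D model: x_k = 0.9 x_{k-1} + process noise,  y_k = x_k + measurement noise
T, N = 100, 500            # time steps, number of particles
q, r = 0.1, 0.5            # process / measurement noise std
x_true = np.zeros(T)
for k in range(1, T):
    x_true[k] = 0.9 * x_true[k - 1] + q * rng.normal()
y = x_true + r * rng.normal(size=T)

particles = rng.normal(0.0, 1.0, N)    # samples from a broad prior
est = np.zeros(T)
for k in range(T):
    # propagate through the transition model (the bootstrap proposal)
    particles = 0.9 * particles + q * rng.normal(size=N)
    # weight by the Gaussian measurement likelihood
    w = np.exp(-0.5 * ((y[k] - particles) / r) ** 2)
    w /= w.sum()
    est[k] = float(np.dot(w, particles))
    # systematic resampling to avoid weight degeneracy
    idx = np.searchsorted(np.cumsum(w), (rng.random() + np.arange(N)) / N)
    particles = particles[np.minimum(idx, N - 1)]

rmse_pf = float(np.sqrt(np.mean((est - x_true) ** 2)))
rmse_meas = float(np.sqrt(np.mean((y - x_true) ** 2)))
```

The paper's contribution replaces the bootstrap proposal with one shaped by steepest-descent optimization; even the plain version above should beat the raw measurements in RMSE.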

  3. OPTIMIZATION OF OPERATION PARAMETERS OF 80-KEV ELECTRON GUN

    Directory of Open Access Journals (Sweden)

    JEONG DONG KIM

    2014-06-01

    As a first step, the electron generator of an 80-keV electron gun was manufactured. In order to produce high beam power from an electron linear accelerator, a proper beam current is required from the electron generator. In this study, the beam current was measured by evaluating the performance of the electron generator. The beam current is determined by five parameters: the high voltage at the electron gun, the cathode voltage, the pulse width, the pulse amplitude, and the bias voltage at the grid. From the experimental results under optimal conditions, the high voltage was determined to be 80 kV, the pulse width 500 ns, and the cathode voltage from 4.2 V to 4.6 V. The beam current was measured as 1.9 A at maximum. These results satisfy the beam current required for the operation of an electron linear accelerator.

  4. High Temperature Epoxy Foam: Optimization of Process Parameters

    Directory of Open Access Journals (Sweden)

    Samira El Gazzani

    2016-06-01

    Full Text Available For many years, reduction of fuel consumption has been a major aim in terms of both costs and environmental concerns. One option is to reduce the weight of fuel consumers. For this purpose, the use of a lightweight material based on rigid foams is a relevant choice. This paper deals with a new high-temperature epoxy expanded material as a substitute for phenolic resin, which is classified as potentially mutagenic under the European REACH regulation. The optimization of a thermoset foam depends on two major parameters, the reticulation process and the expansion of the foaming agent. Controlling these two phenomena can lead to a fully expanded and cured material. The rheological behavior of the epoxy resin is studied and the gel time is determined at various temperatures. The expansion of the foaming agent is investigated by thermomechanical analysis. Results are correlated and compared with samples foamed under the same temperature conditions. The ideal foaming/gelation temperature is then determined. The second part of this research concerns the optimization of the curing cycle of a high-temperature trifunctional epoxy resin. A two-step curing cycle was defined by considering the influence of different curing schedules on the glass transition temperature of the material. The final foamed material has a glass transition temperature of 270 °C.

  5. Laser Processing of Multilayered Thermal Spray Coatings: Optimal Processing Parameters

    Science.gov (United States)

    Tewolde, Mahder; Zhang, Tao; Lee, Hwasoo; Sampath, Sanjay; Hwang, David; Longtin, Jon

    2017-12-01

    Laser processing offers an innovative approach for the fabrication and transformation of a wide range of materials. As a rapid, non-contact, and precision material removal technology, lasers are natural tools to process thermal spray coatings. Recently, a thermoelectric generator (TEG) was fabricated using thermal spray and laser processing. The TEG device represents a multilayer, multimaterial functional thermal spray structure, with laser processing serving an essential role in its fabrication. Several unique challenges are presented when processing such multilayer coatings, and the focus of this work is on the selection of laser processing parameters for optimal feature quality and device performance. A parametric study is carried out using three short-pulse lasers, where laser power, repetition rate and processing speed are varied to determine the laser parameters that result in high-quality features. The resulting laser patterns are characterized using optical and scanning electron microscopy, energy-dispersive x-ray spectroscopy, and electrical isolation tests between patterned regions. The underlying laser interaction and material removal mechanisms that affect the feature quality are discussed. Feature quality was found to improve both by using a multiscanning approach and an optional assist gas of air or nitrogen. Electrically isolated regions were also patterned in a cylindrical test specimen.

  6. Real parameter optimization by an effective differential evolution algorithm

    Directory of Open Access Journals (Sweden)

    Ali Wagdy Mohamed

    2013-03-01

    Full Text Available This paper introduces an Effective Differential Evolution (EDE) algorithm for solving real parameter optimization problems over a continuous domain. The algorithm introduces a new mutation rule based on the best and the worst individuals among the entire population of a particular generation. The mutation rule is combined with the basic mutation strategy through a linear decreasing probability rule. The proposed mutation rule is shown to promote the local search capability of basic DE and to make it faster. Furthermore, a random mutation scheme and a modified Breeder Genetic Algorithm (BGA) mutation scheme are merged to avoid stagnation and/or premature convergence. Additionally, the scaling factor and crossover rate of DE are introduced as uniform random numbers to enrich the search behavior and to enhance the diversity of the population. The effectiveness and benefits of the proposed modifications used in EDE have been experimentally investigated. Numerical experiments on a set of bound-constrained problems have shown that the new approach is efficient, effective and robust. The comparison results between EDE and several classical differential evolution methods and state-of-the-art parameter-adaptive differential evolution variants indicate that the proposed EDE algorithm is competitive with, and in some cases superior to, other algorithms in terms of final solution quality, efficiency, convergence rate, and robustness.
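    For reference, the baseline scheme that EDE modifies is classic DE/rand/1/bin. A self-contained sketch (population size, F, CR, iteration budget and the sphere test function are illustrative choices, not the paper's settings):

```python
import numpy as np

def de(fobj, bounds, pop_size=30, F=0.5, CR=0.9, iters=400, seed=0):
    """Classic DE/rand/1/bin: mutate with three distinct random
    individuals, binomial crossover, greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([fobj(ind) for ind in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # three distinct random individuals, none equal to i
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True     # at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f = fobj(trial)
            if f <= fit[i]:                     # greedy selection
                pop[i], fit[i] = trial, f
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])

sphere = lambda v: float(np.sum(v ** 2))        # bound-constrained test fn
x, f = de(sphere, [(-5, 5)] * 5)
```

EDE's changes (best/worst-based mutation, BGA mutation, randomized F and CR) all slot into the mutation and parameter lines above.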

  7. OPTIMIZATION OF DYEING PARAMETERS TO DYE COTTON WITH CARROT EXTRACTION

    Directory of Open Access Journals (Sweden)

    MIRALLES Verónica

    2017-05-01

    Full Text Available Natural dyes derived from flora and fauna are believed to be safe because of their non-toxic, non-carcinogenic and biodegradable nature. Furthermore, natural dyes do not cause pollution and waste water problems. Natural dyes, like synthetic dyes, need optimum parameters to achieve good dyeing. On some occasions it is necessary to use mordants to increase the affinity between the cellulose fiber and the natural dye, but there are other conditions to optimize in the dyeing process, such as time, temperature and auxiliary products. In addition, the optimum conditions differ depending on the type of dye and the nature of the fiber. The aim of this work is the use of carrot extract to dye cotton fabric by exhaustion under diverse dyeing conditions. Different dyeing processes were carried out to study the effect of pH and temperature, using pH values of 7, 6 and 4 and temperatures of 95 °C and 130 °C for one hour. Images of the dyed samples are shown. Moreover, to evaluate the colour of each sample, CIELAB parameters obtained by a reflection spectrophotometer are analysed. The results showed that the temperature used has an important influence on the colour of the dyed sample.

  8. Robust fluence map optimization via alternating direction method of multipliers with empirical parameter optimization

    International Nuclear Information System (INIS)

    Gao, Hao

    2016-01-01

    For the treatment planning during intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT), beam fluence maps can be first optimized via fluence map optimization (FMO) under the given dose prescriptions and constraints to conformally deliver the radiation dose to the targets while sparing the organs-at-risk, and then segmented into deliverable MLC apertures via leaf or arc sequencing algorithms. The aim of this work is to develop an efficient algorithm for FMO based on the alternating direction method of multipliers (ADMM). Here we consider FMO with the least-square cost function and non-negative fluence constraints, and its solution algorithm is based on ADMM, which is efficient and simple to implement. In addition, an empirical method for optimizing the ADMM parameter is developed to improve the robustness of the ADMM algorithm. The ADMM based FMO solver was benchmarked against the quadratic programming method based on the interior-point (IP) method using the CORT dataset. The comparison results suggested the ADMM solver had a similar plan quality with slightly smaller total objective function value than IP. A simple-to-implement ADMM based FMO solver with empirical parameter optimization is proposed for IMRT or VMAT. (paper)
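    The ADMM iteration for a least-square objective with non-negative fluence constraints can be sketched on a toy problem. Everything below is an illustrative assumption: a random matrix stands in for the dose-influence matrix, and the penalty ρ and iteration count are arbitrary (the paper's contribution is precisely an empirical method for choosing such parameters).

```python
import numpy as np

def admm_nnls(A, b, rho=1.0, iters=500):
    """minimize ||Ax - b||^2  subject to  x >= 0,
    via ADMM with the splitting x = z, z >= 0."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    # the x-subproblem matrix is constant, so Cholesky-factor it once
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update
        z = np.maximum(0.0, x + u)      # z-update: projection onto x >= 0
        u = u + x - z                   # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 10))           # stand-in dose-influence matrix
x_true = np.maximum(0.0, rng.normal(size=10))
b = A @ x_true                          # consistent "prescription"
x_hat = admm_nnls(A, b)
rel_err = float(np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b))
```

Each iteration costs only two triangular solves plus a projection, which is what makes ADMM attractive for large fluence maps.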

  9. OPTIMIZATION OF HEMISPHERICAL RESONATOR GYROSCOPE STANDING WAVE PARAMETERS

    Directory of Open Access Journals (Sweden)

    Olga Sergeevna Khalyutina

    2017-01-01

    Full Text Available Traditionally, the problem of autonomous navigation is solved by dead reckoning of the navigation flight parameters (NFP) of the aircraft (AC). With increasing requirements on the accuracy of NFP determination, the sensors of primary navigation information, gyroscopes and accelerometers, have been improved. Gyroscopes of a new type, the so-called solid-state wave gyroscopes (SSWG), are currently being developed and put into practice. The work deals with the problem of increasing the accuracy of angular velocity measurements of the hemispherical resonator gyroscope (HRG). The reduction in the accuracy characteristics of the HRG is caused by defects in the distribution of mass in the volume of its design. A control system is synthesized for optimal damping of the distortion of the standing-wave parameters caused by the mass defect of the resonator. The research challenge was to examine and analytically offset the impact of the defect on the standing-wave parameters (amplitude and frequency). The research was performed by mathematical modeling in SolidWorks Simulation for the case when the characteristics of the sensitive element of the HRG met the technological drawings of a particular type of resonator. The method of inverse dynamics was chosen for the synthesis. The research results are presented as graphs of the amplitude-frequency characteristics (AFC) of the resonator output signal. Simulation was performed for three cases: perfect distribution of mass; the presence of the mass defect; and the presence of the mass defect with the synthesized control action applied. The effectiveness of the proposed control algorithm is determined from simulation of the resonator output signal for the perfect construction and for operation in the presence of a mass defect. It is assumed that the excitation signals of the standing waves in the two cases are identical in both amplitude and frequency.

  10. A fully automated cell segmentation and morphometric parameter system for quantifying corneal endothelial cell morphology.

    Science.gov (United States)

    Al-Fahdawi, Shumoos; Qahwaji, Rami; Al-Waisy, Alaa S; Ipson, Stanley; Ferdousi, Maryam; Malik, Rayaz A; Brahma, Arun

    2018-07-01

    Corneal endothelial cell abnormalities may be associated with a number of corneal and systemic diseases. Damage to the endothelial cells can significantly affect corneal transparency by altering hydration of the corneal stroma, which can lead to irreversible endothelial cell pathology requiring corneal transplantation. To date, quantitative analysis of endothelial cell abnormalities has been performed manually by ophthalmologists using time-consuming and highly subjective semi-automatic tools that require operator interaction. We developed and applied a fully automated, real-time system, termed the Corneal Endothelium Analysis System (CEAS), for the segmentation and computation of endothelial cells in images of the human cornea obtained by in vivo corneal confocal microscopy. First, a Fast Fourier Transform (FFT) band-pass filter is applied to reduce noise and enhance the image quality to make the cells more visible. Secondly, endothelial cell boundaries are detected using watershed transformations and Voronoi tessellations to accurately quantify the morphological parameters of the human corneal endothelial cells. The performance of the automated segmentation system was tested against manually traced ground-truth images based on a database of 40 corneal confocal endothelial cell images, in terms of segmentation accuracy and obtained clinical features. In addition, the robustness and efficiency of the proposed CEAS system were compared with manually obtained cell densities using a separate database of 40 images from controls (n = 11), obese subjects (n = 16) and patients with diabetes (n = 13). The Pearson correlation coefficient between automated and manual endothelial cell densities is 0.9, demonstrating the robustness of the system and the possibility of utilizing it in a real-world clinical setting to enable rapid diagnosis and patient follow-up, with an execution time of only 6 seconds per image. Copyright © 2018 Elsevier B.V. All rights reserved.
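    The first preprocessing step, an FFT band-pass filter, can be sketched generically as follows. This is not the CEAS implementation: the cut-off radii and the synthetic "cell pattern" image are assumptions chosen so the effect is visible.

```python
import numpy as np

def fft_bandpass(img, low=2, high=20):
    """Keep spatial frequencies whose radius (in FFT bins) lies in
    [low, high]; this suppresses slow illumination drift (low radii)
    and high-frequency noise (large radii) in one pass."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    mask = (r >= low) & (r <= high)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# synthetic "cell" texture: a mid-frequency pattern plus a DC offset
# (uneven illumination stand-in) plus white noise
h = w = 64
yy, xx = np.mgrid[:h, :w]
pattern = np.sin(2 * np.pi * 8 * xx / w)      # 8 cycles across the image
noisy = pattern + 5.0 + 0.5 * np.random.default_rng(0).normal(size=(h, w))
filtered = fft_bandpass(noisy, low=4, high=12)
```

The pattern's frequency (radius 8) sits inside the pass band, so it survives, while the offset (radius 0) and most of the noise are removed.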

  11. Choice of optimal parameters for the superconductive quantum magnetometer

    Energy Technology Data Exchange (ETDEWEB)

    Vasiliev, B V; Ivanenko, A I; Trofimov, V N

    1974-12-31

    The problem of choosing the optimal coupling coefficient and optimal working frequency for a superconductive quantum magnetometer is considered. The measured experimental signal-to-noise dependence confirms the conclusions drawn. (auth)

  12. Optimization of PID Parameters Utilizing Variable Weight Grey-Taguchi Method and Particle Swarm Optimization

    Science.gov (United States)

    Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd

    2018-03-01

    A controller that uses PID parameters requires a good tuning method in order to improve control system performance. PID tuning methods fall into two groups, namely classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods. Previously, researchers had integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on the two PSO optimizing parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods, implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PSO-PID parameter tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method in the hydraulic positioning system.
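    A minimal PSO-PID loop can be sketched as follows. This is not the paper's hydraulic system or its Grey-Taguchi-tuned PSO: the plant is an assumed first-order model, and the PSO constants are generic defaults. Note that `vmax`, the particle velocity limit, is one of the two PSO parameters the paper tunes via the DOE.

```python
import numpy as np

def step_cost(gains, dt=0.01, T=5.0):
    """Integral of squared error for a PID driving the toy first-order
    plant tau*dy/dt = -y + u to a unit step setpoint."""
    kp, ki, kd = gains
    tau, y, integ, prev_e, cost = 0.5, 0.0, 0.0, 1.0, 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        y += dt * (-y + u) / tau
        if not np.isfinite(y) or abs(y) > 1e6:
            return 1e9                  # unstable gains -> huge penalty
        cost += e * e * dt
    return cost

def pso(fobj, lo, hi, n=20, iters=60, vmax=2.0, seed=0):
    """Plain global-best PSO; vmax is the particle velocity limit."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    x = lo + rng.random((n, dim)) * (hi - lo)
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([fobj(p) for p in x])
    g = pbest[np.argmin(pval)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + np.clip(v, -vmax, vmax), lo, hi)
        val = np.array([fobj(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)].copy()
    return g, float(pval.min())

# search kp, ki in [0, 10] and kd in [0, 1] (assumed bounds)
gains, cost = pso(step_cost, lo=[0.0, 0.0, 0.0], hi=[10.0, 10.0, 1.0])
```

The swarm reliably finds gains far better than an untuned proportional controller on this toy plant; the paper's DOE step would additionally optimize `vmax` and the weight distribution before running such a search.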

  13. Automation of data processing and calculation of retention parameters and thermodynamic data for gas chromatography

    Science.gov (United States)

    Makarycheva, A. I.; Faerman, V. A.

    2017-02-01

    An analysis of automation patterns is performed, and a programming solution for automating the processing of chromatographic data and their subsequent storage is developed with the help of a software package, Mathcad and MS Excel spreadsheets. The offered approach allows the data-processing algorithm to be modified and does not require the participation of programming experts. The approach provides measurement of the given retention times and retention volumes, specific retention volumes, differential molar free energy of adsorption, and partial molar solution enthalpies and isosteric heats of adsorption. The developed solution is aimed at use by a small research group and is tested on a series of new gas chromatography sorbents. Retention parameters and thermodynamic sorption quantities were calculated for more than 20 analytes. The resulting data are provided in a form suitable for comparative analysis, and they make it possible to find sorbents with the most favourable properties for specific analytical tasks.

  14. A novel validation algorithm allows for automated cell tracking and the extraction of biologically meaningful parameters.

    Directory of Open Access Journals (Sweden)

    Daniel H Rapoport

    Full Text Available Automated microscopy is currently the only method to observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation, non-invasively and label-free. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automate this process have resulted in ever-improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they lacked validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea of automatically inspecting the tracking results and accepting only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. 
The resulting set of complete paths can be used to automatically extract important biological parameters
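    The spatiotemporal-contiguity assumption behind the validation step can be illustrated with a toy accept/reject check. The thresholds and data are hypothetical, and the actual algorithm searches for tracking errors far more systematically, but the unit of rejection/acceptance, the complete path, is the same.

```python
import math

def validate_path(path, max_step=10.0):
    """Accept a candidate cell path only if it is spatiotemporally
    contiguous: frame numbers increase by exactly 1 and the cell centre
    never jumps farther than max_step pixels between frames.
    path is a list of (frame, x, y) detections."""
    for (f0, x0, y0), (f1, x1, y1) in zip(path, path[1:]):
        if f1 != f0 + 1:                          # temporal gap -> reject
            return False
        if math.hypot(x1 - x0, y1 - y0) > max_step:
            return False                          # implausible jump -> reject
    return True

good = [(0, 5.0, 5.0), (1, 7.0, 6.0), (2, 8.5, 6.5)]
teleport = [(0, 5.0, 5.0), (1, 40.0, 40.0)]       # spatial jump
gap = [(0, 5.0, 5.0), (2, 6.0, 5.5)]              # missing frame
accepted = [p for p in (good, teleport, gap) if validate_path(p)]
```

Rejecting whole paths rather than patching individual errors is what makes the accepted set trustworthy enough for downstream parameter extraction.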

  15. Automated processing of first-pass radioisotope ventriculography data to determine essential central circulation parameters

    Science.gov (United States)

    Krotov, Aleksei; Pankin, Victor

    2017-09-01

    The assessment of central circulation (including heart function) parameters is vital in the preventive diagnostics of congenital and acquired heart failure and during polychemotherapy. The protocols currently applied in Russia do not fully utilize the first-pass assessment (FPRNA), which results in poor data formalization, even though FPRNA is one of the fastest, most affordable and most compact radioisotope diagnostic methods. A non-imaging algorithm based on existing protocols has been designed that uses the readings of an additional detector above the vena subclavia to determine the total blood volume (TBV) without requiring blood sampling, in contrast to current protocols. An automated processing of precordial detector readings is presented, in order to determine the heart stroke volume (SV). Two techniques to estimate the ejection fraction (EF) of the heart are discussed.

  16. Optimization of the reconstruction parameters in [123I]FP-CIT SPECT

    Science.gov (United States)

    Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec

    2018-04-01

    The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data-sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). The reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from Magnetic Resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [-0.5, +0.5] in 87% and 92% of the cases for caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.

  17. The optimal number, type and location of devices in automation of electrical distribution networks

    Directory of Open Access Journals (Sweden)

    Popović Željko N.

    2015-01-01

    Full Text Available This paper presents a mixed-integer linear programming based model for determining the optimal number, type and location of remotely controlled and supervised devices in distribution networks in the presence of distributed generators. The proposed model takes into consideration a number of different devices simultaneously (remotely controlled circuit breakers/reclosers, sectionalizing switches, remotely supervised and local fault passage indicators), along with the following: the expected outage cost to consumers and producers due to momentary and long-term interruptions; automated device expenses (capital investment, installation, and annual operation and maintenance costs); and the number and expenses of crews involved in the isolation and restoration process. Furthermore, other possible benefits of each automated device are also taken into account (e.g., benefits due to decreasing the cost of switching operations in normal conditions). The obtained numerical results emphasize the importance of considering different types of automation devices simultaneously. They also show that the proposed approach has the potential to improve the process of determining the best automation strategy in real-life distribution networks.

  18. Artificial Intelligence Based Selection of Optimal Cutting Tool and Process Parameters for Effective Turning and Milling Operations

    Science.gov (United States)

    Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta

    2016-06-01

    With the increased trend in automation of modern manufacturing industry, the human intervention in routine, repetitive and data specific activities of manufacturing is greatly reduced. In this paper, an attempt has been made to reduce the human intervention in selection of optimal cutting tool and process parameters for metal cutting applications, using Artificial Intelligence techniques. Generally, the selection of appropriate cutting tool and parameters in metal cutting is carried out by experienced technician/cutting tool expert based on his knowledge base or extensive search from huge cutting tool database. The present proposed approach replaces the existing practice of physical search for tools from the databooks/tool catalogues with intelligent knowledge-based selection system. This system employs artificial intelligence based techniques such as artificial neural networks, fuzzy logic and genetic algorithm for decision making and optimization. This intelligence based optimal tool selection strategy is developed using Mathworks Matlab Version 7.11.0 and implemented. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail, the methodology and strategies employed for selection of appropriate cutting tool and optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life and tool cost.

  19. Automated scheme to determine design parameters for a recoverable reentry vehicle

    International Nuclear Information System (INIS)

    Williamson, W.E.

    1976-01-01

    The NRV (Nosetip Recovery Vehicle) program at Sandia Laboratories is designed to recover the nose section from a sphere cone reentry vehicle after it has flown a near ICBM reentry trajectory. Both mass jettison and parachutes are used to reduce the velocity of the RV near the end of the trajectory to a sufficiently low level that the vehicle may land intact. The design problem of determining mass jettison time and parachute deployment time in order to ensure that the vehicle does land intact is considered. The problem is formulated as a min-max optimization problem where the design parameters are to be selected to minimize the maximum possible deviation in the design criteria due to uncertainties in the system. The results of the study indicate that the optimal choice of the design parameters ensures that the maximum deviation in the design criteria is within acceptable bounds. This analytically ensures the feasibility of recovery for NRV

  20. Normalization in Unsupervised Segmentation Parameter Optimization: A Solution Based on Local Regression Trend Analysis

    Directory of Open Access Journals (Sweden)

    Stefanos Georganos

    2018-02-01

    Full Text Available In object-based image analysis (OBIA), the appropriate parametrization of segmentation algorithms is crucial for obtaining satisfactory image classification results. One of the ways this can be done is by unsupervised segmentation parameter optimization (USPO). A popular USPO method does this through the optimization of a “global score” (GS), which minimizes intrasegment heterogeneity and maximizes intersegment heterogeneity. However, the calculated GS values are sensitive to the minimum and maximum ranges of the candidate segmentations. Previous research proposed the use of fixed minimum/maximum threshold values for the intrasegment/intersegment heterogeneity measures to deal with the sensitivity of user-defined ranges, but the performance of this approach has not been investigated in detail. In the context of a very-high-resolution urban remote sensing application, we show the limitations of the fixed-threshold approach, both theoretically and in practice, and instead propose a novel solution that identifies the range of candidate segmentations using local regression trend analysis. We found that the proposed approach showed significant improvements over the use of fixed minimum/maximum values, is less subjective than user-defined threshold values and, thus, can be of merit for a fully automated procedure and big data applications.
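    The range sensitivity of the global score can be illustrated numerically. The heterogeneity values below are hypothetical (a variance that rises with segmentation scale and a Moran's I that falls), but they show how the same candidate receives a different GS when the candidate range changes, which is the problem the paper addresses:

```python
def normalize(vals, lo=None, hi=None):
    """Min-max normalisation over the candidate range (or a fixed one)."""
    lo = min(vals) if lo is None else lo
    hi = max(vals) if hi is None else hi
    return [(v - lo) / (hi - lo) for v in vals]

# hypothetical quality measures for candidate scale parameters 10..60
scales = [10, 20, 30, 40, 50, 60]
wvar   = [2.0, 3.0, 5.0, 8.0, 10.0, 12.0]       # intrasegment variance (rises)
moran  = [0.90, 0.60, 0.32, 0.25, 0.22, 0.21]   # intersegment Moran's I (falls)

def global_scores(idx):
    """GS per candidate: both measures min-max-normalised over the
    supplied candidate range, then summed (lower is better)."""
    idx = list(idx)
    wv = normalize([wvar[i] for i in idx])
    mi = normalize([moran[i] for i in idx])
    return {scales[i]: a + b for i, a, b in zip(idx, wv, mi)}

gs_full  = global_scores(range(6))       # candidates 10..60
gs_trunc = global_scores(range(1, 6))    # candidates 20..60 only
```

Dropping one candidate from the range changes the min-max anchors and therefore the GS assigned to every remaining candidate; fixed thresholds or the paper's local-regression range selection are two ways of stabilising this.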

  1. Automated parameter estimation for biological models using Bayesian statistical model checking.

    Science.gov (United States)

    Hussain, Faraz; Langmead, Christopher J; Mi, Qi; Dutta-Moscato, Joyeeta; Vodovotz, Yoram; Jha, Sumit K

    2015-01-01

    Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. We have developed a new algorithmic technique for discovering parameters in complex stochastic models of biological systems given behavioral specifications written in a formal mathematical logic. Our algorithm uses Bayesian model checking, sequential hypothesis testing, and stochastic optimization to automatically synthesize parameters of probabilistic biological models.
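    One ingredient of such statistical model checking, sequential hypothesis testing, can be illustrated with Wald's SPRT on a Bernoulli "does this run satisfy the specification?" outcome. The probabilities and error bounds below are illustrative assumptions; the paper combines this kind of test with Bayesian model checking and stochastic optimization over the parameter space.

```python
import math
import random

def sprt(sample, p0=0.75, p1=0.85, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test deciding between
    H0: p <= p0 and H1: p >= p1 for a Bernoulli sample() --
    here sample() stands in for one stochastic model simulation
    checked against a behavioural specification."""
    upper = math.log((1 - beta) / alpha)    # crossing -> accept H1
    lower = math.log(beta / (1 - alpha))    # crossing -> accept H0
    llr, n = 0.0, 0
    while lower < llr < upper:
        n += 1
        if sample():
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
    return ("H1" if llr >= upper else "H0"), n

random.seed(2)
# a model that satisfies the specification on 95% of runs
verdict, n_sims = sprt(lambda: random.random() < 0.95)
```

The appeal for parameter synthesis is that the test stops as soon as the evidence is decisive, so candidate parameterizations that clearly satisfy (or violate) a specification need only a handful of simulations.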

  2. Determination of the Optimized Automation Rate considering Effects of Automation on Human Operators in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun; Kim, Man Cheol

    2015-01-01

    Automation refers to the use of a device or a system to perform a function previously performed by a human operator. It is introduced to reduce human errors and to enhance performance in various industrial fields, including the nuclear industry. However, these positive effects are not always achieved in complex systems such as nuclear power plants (NPPs). An excessive introduction of automation can generate new roles for human operators and change activities in unexpected ways. As more automation systems are accepted, the ability of human operators to detect automation failures and resume manual control is diminished. This disadvantage of automation is called the Out-of-the-Loop (OOTL) problem. The positive and negative effects of automation should be considered together when determining the appropriate level of automation to introduce. However, the conventional automation-rate concept is limited in that it does not consider the effects of automation on human operators. Thus, in this paper, a new estimation method for the automation rate is suggested to overcome this problem.

  3. Network synthesis and parameter optimization for vehicle suspension with inerter

    Directory of Open Access Journals (Sweden)

    Long Chen

    2016-12-01

    Full Text Available In order to design a comfort-oriented vehicle suspension structure, the network synthesis method was used to transform the problem into a timing robust control problem and to determine the structure of the “inerter–spring–damper” suspension. A Bilinear Matrix Inequality formulation was used to obtain the timing transfer function. The transfer function of the suspension system can then be physically implemented with passive elements such as springs, dampers, and inerters. Through sensitivity analysis and a quantum genetic algorithm, the optimized parameters of the inerter–spring–damper suspension were determined. A quarter-car model was established, and the performance of the inerter–spring–damper suspension was verified under random road input. The simulation results show that the dynamic performance of the proposed suspension is enhanced compared with a traditional suspension: the root mean square of vehicle body acceleration decreases by 18.9%, and the inerter–spring–damper suspension effectively suppresses vertical vibration in the 1–3 Hz band, significantly improving ride comfort.

  4. Optimizing gelling parameters of gellan gum for fibrocartilage tissue engineering.

    Science.gov (United States)

    Lee, Haeyeon; Fisher, Stephanie; Kallos, Michael S; Hunter, Christopher J

    2011-08-01

    Gellan gum is an attractive biomaterial for fibrocartilage tissue engineering applications because it is cell compatible, can be injected into a defect, and gels at body temperature. However, the gelling parameters of gellan gum have not yet been fully optimized. The aim of this study was to investigate the mechanics, degradation, gelling temperature, and viscosity of low acyl and low/high acyl gellan gum blends. Dynamic mechanical analysis showed that increased concentrations of low acyl gellan gum resulted in increased stiffness and the addition of high acyl gellan gum resulted in greatly decreased stiffness. Degradation studies showed that low acyl gellan gum was more stable than low/high acyl gellan gum blends. Gelling temperature studies showed that increased concentrations of low acyl gellan gum and CaCl₂ increased gelling temperature and low acyl gellan gum concentrations below 2% (w/v) would be most suitable for cell encapsulation. Gellan gum blends were generally found to have a higher gelling temperature than low acyl gellan gum. Viscosity studies showed that increased concentrations of low acyl gellan gum increased viscosity. Our results suggest that 2% (w/v) low acyl gellan gum would have the most appropriate mechanics, degradation, and gelling temperature for use in fibrocartilage tissue engineering applications. Copyright © 2011 Wiley Periodicals, Inc.

  5. Optimal Design of Variable Stiffness Composite Structures using Lamination Parameters

    NARCIS (Netherlands)

    IJsselmuiden, S.T.

    2011-01-01

    Fiber reinforced composite materials have gained widespread acceptance for a multitude of applications in the aerospace, automotive, maritime and wind-energy industries. Automated fiber placement technologies have developed rapidly over the past two decades, driven primarily by a need to reduce

  6. A choice of the parameters of NPP steam generators on the basis of vector optimization

    International Nuclear Information System (INIS)

    Lemeshev, V.U.; Metreveli, D.G.

    1981-01-01

    The optimization of the parameters of designed systems is considered as a problem of multicriterion optimization. It is proposed to choose non-dominated (Pareto-optimal) parameters. An algorithm is built on the basis of necessary and sufficient non-dominance conditions to find non-dominated solutions. This algorithm has been employed to solve the problem of choosing optimal parameters for the counterflow shell-and-tube steam generator of an NPP of the BRGD type [ru]
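    The core of the selection step above, filtering a candidate set down to its non-dominated (Pareto-optimal) members, can be sketched as follows. All objectives are assumed to be minimized, and the two-objective design points are invented for illustration.

```python
def pareto_front(points):
    """Return the points not dominated by any other point (all objectives
    minimized): q dominates p if q <= p in every objective and q < p in
    at least one."""
    def dominates(q, p):
        return (all(qi <= pi for qi, pi in zip(q, p)) and
                any(qi < pi for qi, pi in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Illustrative two-objective design candidates, e.g. (thermal penalty, cost):
designs = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
front = pareto_front(designs)
```

    Here `(3, 3)` and `(4, 4)` are dominated by `(2, 2)` and drop out, while the three trade-off points remain; a designer then picks among the survivors.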

  7. Optimization of automation: I. Estimation method of cognitive automation rates reflecting the effects of automation on human operators in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Seong, Poong Hyun

    2014-01-01

    Highlights: • We propose an estimation method for the automation rate that takes the advantages of automation as the estimation measures. • We conduct experiments to examine the validity of the suggested method. • The higher the cognitive automation rate, the greater the reduction in working time. • The usefulness of the suggested estimation method is proved by statistical analyses. - Abstract: Since automation was introduced in various industrial fields, the concept of the automation rate has been used to indicate the proportion of automation among all work processes or facilities. Expressing this proportion is straightforward, and a higher automation rate has been expected to indicate a greater enhancement of human performance. However, many researchers have found that a high automation rate does not guarantee high performance. Therefore, to reflect the effects of automation on human performance, this paper proposes a new estimation method for the automation rate that considers the effects of automation on human operators in nuclear power plants (NPPs). Automation in NPPs can be divided into two types: system automation and cognitive automation. Some general descriptions and characteristics of each type of automation are provided, and the advantages of automation are investigated. The advantages of each type of automation are used as measures in the estimation method of the automation rate. One advantage was found to be a reduction in the number of tasks, and another a reduction in human cognitive task loads. The system and cognitive automation rates were proposed as quantitative measures based on the aforementioned benefits. To quantify the required human cognitive task loads and thus derive the cognitive automation rate, Conant’s information-theory-based model was applied. The validity of the suggested method, especially as regards the cognitive automation rate, was proven by conducting

  8. Optimization of process parameters for spark plasma sintering of nano structured SAF 2205 composite

    Directory of Open Access Journals (Sweden)

    Samuel Ranti Oke

    2018-04-01

    Full Text Available This research optimized spark plasma sintering (SPS) process parameters in terms of sintering temperature, holding time and heating rate for the development of a nano-structured duplex stainless steel (SAF 2205 grade) reinforced with titanium nitride (TiN). The mixed powders were sintered using an automated spark plasma sintering machine (model HHPD-25, FCT GmbH, Germany). Characterization was performed using X-ray diffraction and scanning electron microscopy. The density and hardness of the composites were investigated. The XRD results showed the formation of FeN0.068. SEM/EDS revealed the presence of nano-ranged TiN particles segregated at the grain boundaries of the duplex matrix. A decrease in hardness and densification was observed when the sintering temperature and heating rate were 1200 °C and 150 °C/min, respectively. The optimum properties were obtained in composites sintered at 1150 °C for 15 min at 100 °C/min. The composite grades, irrespective of the process parameters, exhibited similar shrinkage behavior characterized by three distinctive peaks, which is an indication of good densification. Keywords: Spark plasma sintering, Duplex stainless steel (SAF 2205), Titanium nitride (TiN), Microstructure, Density, Hardness

  9. Automated egg grading system using computer vision: Investigation on weight measure versus shape parameters

    Science.gov (United States)

    Nasir, Ahmad Fakhri Ab; Suhaila Sabarudin, Siti; Majeed, Anwar P. P. Abdul; Ghani, Ahmad Shahrizan Abdul

    2018-04-01

    Chicken eggs are a food in high demand by humans. Human operators cannot work perfectly and continuously when grading eggs. Instead of grading eggs by weight measurement, an automatic egg grading system using computer vision (based on egg shape parameters) can be used to improve the productivity of egg grading. However, an early hypothesis indicated that a number of egg class assignments will change when egg shape parameters are used instead of weight measurement. This paper presents a comparison of egg classification by the two above-mentioned methods. Firstly, 120 images of chicken eggs of various grades (A–D) produced in Malaysia are captured. Then, the egg images are processed using image pre-processing techniques such as cropping, smoothing and segmentation. Thereafter, eight egg shape features, including area, major axis length, minor axis length, volume, diameter and perimeter, are extracted. Lastly, feature selection (information gain ratio) and feature extraction (principal component analysis) are performed with a k-nearest neighbour classifier in the classification process. Two methods, namely supervised learning (using weight measurement, as graded by the egg supplier) and unsupervised learning (using egg shape parameters, as graded by ourselves), are used in the experiment. Clustering results reveal many changes in egg classes after shape-based grading. On average, the best recognition result using shape-based grading labels is 94.16%, while using weight-based labels it is 44.17%. In conclusion, an automated egg grading system using computer vision is better served by shape-based features, since it works from images, whereas the weight parameter is more suitable for a weight-based grading system.
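    The classification step above can be sketched with a minimal majority-vote k-nearest-neighbour classifier over shape features. The two features (major and minor axis length, in cm), the grade labels and all values below are invented for illustration; the study used eight features plus feature selection/extraction.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Majority-vote k-NN on Euclidean distance in feature space."""
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train)
    preds = []
    for x in np.asarray(X_test, dtype=float):
        d = np.linalg.norm(X_train - x, axis=1)     # distance to each sample
        nearest = y_train[np.argsort(d)[:k]]        # k closest labels
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])     # majority vote
    return preds

# Hypothetical (major axis, minor axis) features and supplier grades:
X = [[6.0, 4.5], [6.2, 4.6], [5.0, 3.8], [5.1, 3.9]]
y = ["A", "A", "B", "B"]
grade = knn_predict(X, y, [[6.1, 4.5]])[0]
```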

  10. Sensitivity analysis of reactor safety parameters with automated adjoint function generation

    International Nuclear Information System (INIS)

    Kallfelz, J.M.; Horwedel, J.E.; Worley, B.A.

    1992-01-01

    A project at the Paul Scherrer Institute (PSI) involves the development of simulation models for the transient analysis of the reactors in Switzerland (STARS). This project, funded in part by the Swiss Federal Nuclear Safety Inspectorate, also involves the calculation and evaluation of certain transients for Swiss light water reactors (LWRs). For best-estimate analyses, a key element in quantifying reactor safety margins is uncertainty evaluation to determine the uncertainty in calculated integral values (responses) caused by modeling, calculational methodology, and input data (parameters). The work reported in this paper is a joint PSI/Oak Ridge National Laboratory (ORNL) application to a core transient analysis code of an ORNL software system for automated sensitivity analysis. The Gradient-Enhanced Software System (GRESS) is a software package that can in principle enhance any code so that it can calculate the sensitivity (derivative) to input parameters of any integral value (response) calculated in the original code. The studies reported are the first application of the GRESS capability to core neutronics and safety codes
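    GRESS itself works by source-level transformation of the original Fortran code (adjoint/derivative enhancement), which is beyond a sketch; but the underlying idea of propagating derivatives alongside values through a calculation can be illustrated with forward-mode dual numbers. The "response" function below is made up for illustration.

```python
class Dual:
    """A value paired with its derivative w.r.t. one input parameter;
    arithmetic propagates both (forward-mode automatic differentiation)."""
    def __init__(self, val, der=0.0):
        self.val, self.der = float(val), float(der)
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._lift(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def response(k):
    # Made-up "integral response" of a model parameter k: R(k) = k^2 + 3k
    return k * k + 3 * k

k = Dual(2.0, 1.0)   # seed derivative dk/dk = 1
r = response(k)      # r.val = R(2), r.der = dR/dk at k = 2, i.e. 2k + 3
```

    A derivative-enhanced code computes such sensitivities for every response/parameter pair in a single run, which is what makes systematic uncertainty evaluation tractable.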

  11. GAUFRE: A tool for an automated determination of atmospheric parameters from spectroscopy

    Directory of Open Access Journals (Sweden)

    Fossati L.

    2013-03-01

    Full Text Available We present an automated tool for measuring atmospheric parameters (Teff, log g, [Fe/H]) for F-G-K dwarf and giant stars. The tool, called GAUFRE, is composed of several routines written in C++: GAUFRE-RV measures radial velocity from spectra via cross-correlation against a synthetic template, GAUFRE-EW measures atmospheric parameters through the classic line-by-line technique and GAUFRE-CHI2 performs a χ² fitting to a library of synthetic spectra. A set of F-G-K stars extensively studied in the literature was used as a benchmark for the program: their high signal-to-noise, high-resolution spectra were analyzed using GAUFRE and the results were compared with those in the literature. The tool can also perform the spectral analysis after fixing the surface gravity (log g) to the accurate value provided by asteroseismology. A set of CoRoT stars belonging to the LRc01 and LRa01 fields was used for a first test of the performance and behavior of the program when using the seismic log g.
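    The cross-correlation step behind a radial-velocity routine of this kind can be sketched in pixel space: slide a synthetic template across the observed spectrum and take the shift that maximizes the correlation. Converting a pixel shift to a velocity requires the wavelength solution, which is omitted here; the toy spectra below are synthetic and do not come from GAUFRE.

```python
import numpy as np

def best_shift(template, observed, max_shift):
    """Integer pixel shift of `template` that maximizes its
    cross-correlation with `observed` (circular shifts for simplicity)."""
    shifts = range(-max_shift, max_shift + 1)
    cc = [float(np.dot(observed, np.roll(template, s))) for s in shifts]
    return list(shifts)[int(np.argmax(cc))]

# Toy spectrum: a single Gaussian absorption line on a flat continuum
x = np.arange(50, dtype=float)
template = 1.0 - np.exp(-(x - 25.0) ** 2 / 8.0)
observed = np.roll(template, 3)    # "Doppler-shifted" by 3 pixels
shift = best_shift(template, observed, max_shift=10)
```

    A production routine would interpolate the correlation peak to sub-pixel precision and work in log-wavelength so that a Doppler shift is a uniform translation.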

  12. Parameter Optimization of Multi-Element Synthetic Aperture Imaging Systems

    Directory of Open Access Journals (Sweden)

    Vera Behar

    2007-03-01

    Full Text Available In conventional ultrasound imaging systems with phased arrays, further improvement of lateral resolution requires enlarging the number of array elements, which in turn increases both the complexity and the cost of imaging systems. Multi-element synthetic aperture focusing (MSAF) systems are a very good alternative to conventional systems with phased arrays. The benefit of the synthetic aperture is a reduction in system complexity, cost and acquisition time. In the MSAF system considered in the paper, a group of elements transmits and receives signals simultaneously, and the transmit beam is defocused to emulate a single-element response. The echo received at each element of a receive sub-aperture is recorded in computer memory. The process of transmission/reception is repeated for all positions of the transmit sub-aperture. All the data recordings associated with each corresponding "transmit-receive sub-aperture" pair are then focused synthetically, producing a low-resolution image. The final high-resolution image is formed by summing all the low-resolution images associated with the transmit/receive sub-apertures. A problem of parameter optimization of an MSAF system is considered in this paper. The quality of imaging (lateral resolution and contrast) is expressed in terms of the beam characteristics: beam width and side lobe level. The comparison between the MSAF system described in the paper and an equivalent conventional phased array system shows that the MSAF system acquires images of equivalent quality much faster, using only a small part of the power per image.

  13. Mixed-integer evolution strategies for parameter optimization and their applications to medical image analysis

    NARCIS (Netherlands)

    Li, Rui

    2009-01-01

    The target of this work is to extend the canonical Evolution Strategies (ES) from traditional real-valued parameter optimization domain to mixed-integer parameter optimization domain. This is necessary because there exist numerous practical optimization problems from industry in which the set of
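    A minimal sketch of the mixed-integer idea: Gaussian mutation for the real-valued variables, small integer steps for the integer variables, and greedy (1+1) selection. The evolution strategies in the thesis use self-adaptive step sizes and specialized integer mutation distributions; the simple objective function below is a toy chosen for illustration.

```python
import random

def mixed_es(f, x_real, x_int, sigma=0.3, iters=200, seed=1):
    """(1+1)-style evolution strategy over mixed real/integer variables:
    mutate, keep the candidate if it is no worse (minimization)."""
    random.seed(seed)
    best_r, best_i = list(x_real), list(x_int)
    f_best = f(best_r, best_i)
    for _ in range(iters):
        cand_r = [v + random.gauss(0.0, sigma) for v in best_r]
        cand_i = [v + random.choice([-1, 0, 1]) for v in best_i]
        f_cand = f(cand_r, cand_i)
        if f_cand <= f_best:
            best_r, best_i, f_best = cand_r, cand_i, f_cand
    return best_r, best_i, f_best

# Toy objective: real optimum at 1.0, integer optimum at 3
obj = lambda r, i: (r[0] - 1.0) ** 2 + (i[0] - 3) ** 2
best_r, best_i, f_best = mixed_es(obj, [0.0], [0])
```

    The point of handling the integer part with its own mutation operator, rather than rounding a real variable, is that the search then respects the discrete structure of the parameter space.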

  14. WE-AB-209-09: Optimization of Rotational Arc Station Parameter Optimized Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Dong, P; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Ungun, B [Stanford University School of Medicine, Stanford, CA (United States); Stanford University School of Engineering, Stanford, CA (United States); Boyd, S [Stanford University School of Engineering, Stanford, CA (United States)

    2016-06-15

    Purpose: To develop a fast optimization method for station parameter optimized radiation therapy (SPORT) and show that SPORT is capable of improving VMAT in both plan quality and delivery efficiency. Methods: The angular space from 0° to 360° was divided into 180 station points (SPs). A candidate aperture was assigned to each of the SPs based on the calculation results using a column generation algorithm. The weights of the apertures were then obtained by optimizing the objective function using a state-of-the-art GPU based Proximal Operator Graph Solver (POGS) within seconds. Apertures with zero or low weight were thrown out. To avoid being trapped in a local minimum, a stochastic gradient descent method was employed, which also greatly increased the convergence rate of the objective function. The above procedure was repeated until the plan could not be improved any further. A weighting factor associated with the total plan MU also indirectly controlled the complexities of aperture shapes. The number of apertures for VMAT and SPORT was confined to 180. SPORT allowed the coexistence of multiple apertures in a single SP. The optimization technique was assessed by using three clinical cases (prostate, H&N and brain). Results: Marked dosimetric quality improvement was demonstrated in the SPORT plans for all three studied cases. Prostate case: the volume of the 50% prescription dose was decreased by 22% for the rectum. H&N case: SPORT improved the mean dose for the left and right parotids by 15% each. Brain case: the doses to the eyes, chiasm and inner ears were all improved. SPORT shortened the treatment time by ∼1 min for the prostate case, ∼0.5 min for the brain case, and ∼0.2 min for the H&N case. Conclusion: The superior dosimetric quality and delivery efficiency presented here indicates that SPORT is an intriguing alternative treatment modality.

  15. WE-AB-209-09: Optimization of Rotational Arc Station Parameter Optimized Radiation Therapy

    International Nuclear Information System (INIS)

    Dong, P; Xing, L; Ungun, B; Boyd, S

    2016-01-01

    Purpose: To develop a fast optimization method for station parameter optimized radiation therapy (SPORT) and show that SPORT is capable of improving VMAT in both plan quality and delivery efficiency. Methods: The angular space from 0° to 360° was divided into 180 station points (SPs). A candidate aperture was assigned to each of the SPs based on the calculation results using a column generation algorithm. The weights of the apertures were then obtained by optimizing the objective function using a state-of-the-art GPU based Proximal Operator Graph Solver (POGS) within seconds. Apertures with zero or low weight were thrown out. To avoid being trapped in a local minimum, a stochastic gradient descent method was employed, which also greatly increased the convergence rate of the objective function. The above procedure was repeated until the plan could not be improved any further. A weighting factor associated with the total plan MU also indirectly controlled the complexities of aperture shapes. The number of apertures for VMAT and SPORT was confined to 180. SPORT allowed the coexistence of multiple apertures in a single SP. The optimization technique was assessed by using three clinical cases (prostate, H&N and brain). Results: Marked dosimetric quality improvement was demonstrated in the SPORT plans for all three studied cases. Prostate case: the volume of the 50% prescription dose was decreased by 22% for the rectum. H&N case: SPORT improved the mean dose for the left and right parotids by 15% each. Brain case: the doses to the eyes, chiasm and inner ears were all improved. SPORT shortened the treatment time by ∼1 min for the prostate case, ∼0.5 min for the brain case, and ∼0.2 min for the H&N case. Conclusion: The superior dosimetric quality and delivery efficiency presented here indicates that SPORT is an intriguing alternative treatment modality.

  16. Automated system for calibration and control of the CHSPP-800 multichannel γ detector parameters

    International Nuclear Information System (INIS)

    Avvakumov, N.A.; Belikov, N.I.; Goncharenko, Yu.M.

    1987-01-01

    An automated system for the adjustment, calibration and control of a total-absorption Cherenkov spectrometer is described. The system comprises a mechanical platform capable of moving in two mutually perpendicular directions; movement detectors and limit switches; a power unit; and an automation unit with a remote control board. The automated system can operate both in a manual control regime, with coordinate control via a digital indicator, and in a computer-driven regime under special programs. The platform positioning accuracy is ±0.1 mm. Application of the automated system has sped up counter adjustment work by a factor of 3-5.

  17. A procedure for multi-objective optimization of tire design parameters

    OpenAIRE

    Nikola Korunović; Miloš Madić; Miroslav Trajanović; Miroslav Radovanović

    2015-01-01

    The identification of optimal tire design parameters for satisfying different requirements, i.e. tire performance characteristics, plays an essential role in tire design. In order to improve tire performance characteristics, formulation and solving of multi-objective optimization problem must be performed. This paper presents a multi-objective optimization procedure for determination of optimal tire design parameters for simultaneous minimization of strain energy density at two distinctive zo...

  18. Optimization of Loudspeaker Part Design Parameters by Air Viscosity Damping Effect

    OpenAIRE

    Yue Hu; Xilu Zhao; Takao Yamaguchi; Manabu Sasajima; Yoshio Koike; Akira Hara

    2016-01-01

    This study optimized the design parameters of a cone loudspeaker as an example of high flexibility of the product design. We developed an acoustic analysis software program that considers the impact of damping caused by air viscosity. In sound reproduction, it is difficult to optimize each parameter of the loudspeaker design. To overcome the limitation of the design problem in practice, this study presents an acoustic analysis algorithm to optimize the design parameters of the loudspeaker. Th...

  19. The Spiral Discovery Network as an Automated General-Purpose Optimization Tool

    Directory of Open Access Journals (Sweden)

    Adam B. Csapo

    2018-01-01

    Full Text Available The Spiral Discovery Method (SDM was originally proposed as a cognitive artifact for dealing with black-box models that are dependent on multiple inputs with nonlinear and/or multiplicative interaction effects. Besides directly helping to identify functional patterns in such systems, SDM also simplifies their control through its characteristic spiral structure. In this paper, a neural network-based formulation of SDM is proposed together with a set of automatic update rules that makes it suitable for both semiautomated and automated forms of optimization. The behavior of the generalized SDM model, referred to as the Spiral Discovery Network (SDN, and its applicability to nondifferentiable nonconvex optimization problems are elucidated through simulation. Based on the simulation, the case is made that its applicability would be worth investigating in all areas where the default approach of gradient-based backpropagation is used today.

  20. Automated valve fault detection based on acoustic emission parameters and support vector machine

    Directory of Open Access Journals (Sweden)

    Salah M. Ali

    2018-03-01

    Full Text Available Reciprocating compressors are one of the most used types of compressors, with wide applications in industry. The most common failure in reciprocating compressors is related to the valves. Therefore, a reliable condition monitoring method is required to avoid unplanned shutdowns in this category of machines. The acoustic emission (AE) technique is one of the most effective recent methods in the field of valve condition monitoring. However, a major challenge is the analysis of the AE signal, which often depends solely on the experience and knowledge of technicians. This paper proposes an automated fault detection method using a support vector machine (SVM) and AE parameters in an attempt to reduce human intervention in the process. Experiments were conducted on a single stage reciprocating air compressor, combining healthy and faulty valve conditions to acquire the AE signals. Valve functioning was identified through AE waveform analysis. An SVM fault detection model was subsequently devised and validated on training and testing samples, respectively. The results demonstrate an automatic valve fault detection model with accuracy exceeding 98%. It is believed that valve faults can be detected efficiently without human intervention by employing the proposed model on a single stage reciprocating compressor. Keywords: Condition monitoring, Faults detection, Signal analysis, Acoustic emission, Support vector machine
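    The SVM step above can be sketched with a minimal linear SVM trained by stochastic subgradient descent on the regularized hinge loss (Pegasos-style). The paper used a standard SVM on real AE parameters; the two features here (say, normalized RMS and count rate) and the healthy (-1) / faulty (+1) labels are invented for illustration.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Stochastic subgradient descent on hinge loss + L2 penalty.
    Labels y must be in {-1, +1}."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1.0:   # inside the margin: hinge active
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                          # outside the margin: shrink only
                w -= lr * lam * w
    return w, b

# Invented AE features (normalized RMS, normalized count rate):
X = [[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],   # healthy valve
     [0.90, 1.00], [1.00, 0.80], [0.85, 0.95]]   # faulty valve
y = [-1, -1, -1, 1, 1, 1]
w, b = train_linear_svm(X, y)
pred = np.sign(np.asarray(X) @ w + b)
```

    In practice a kernel SVM library would be used and the features would be the measured AE parameters rather than these toy values.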

  1. A Comparative Experimental Study on the Use of Machine Learning Approaches for Automated Valve Monitoring Based on Acoustic Emission Parameters

    Science.gov (United States)

    Ali, Salah M.; Hui, K. H.; Hee, L. M.; Salman Leong, M.; Al-Obaidi, M. A.; Ali, Y. H.; Abdelrhman, Ahmed M.

    2018-03-01

    Acoustic emission (AE) analysis has become a vital tool for initiating maintenance tasks in many industries. However, the analysis process and its interpretation have been found to be highly dependent on experts. Therefore, an automated monitoring method is required to reduce the cost and time consumed in the interpretation of AE signals. This paper investigates the application of two of the most common machine learning approaches, namely artificial neural networks (ANN) and support vector machines (SVM), to automate the diagnosis of valve faults in reciprocating compressors based on AE signal parameters. Since accuracy is an essential factor in any automated diagnostic system, this paper also provides a comparative study of the predictive performance of ANN and SVM. AE parameter data were acquired from a single stage reciprocating air compressor with different operational and valve conditions. ANN and SVM diagnosis models were subsequently devised by combining AE parameters of the different conditions. Results demonstrate that the ANN and SVM models achieve the same prediction accuracy. However, the SVM model is recommended for automated diagnosis of the valve condition due to its ability to handle a high number of input features with small sample data sets.

  2. Optimization of machining parameters of hard porcelain on a CNC ...

    African Journals Online (AJOL)


  3. Characterization and optimized control by means of multi-parameter controllers

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Carsten; Hoeg, S.; Thoegersen, A. (Dan-Ejendomme, Hellerup (Denmark)) (and others)

    2009-07-01

    Poorly functioning HVAC systems (Heating, Ventilation and Air Conditioning), as well as separate heating, ventilation and air conditioning systems, cost Danish society billions of kroner every year: partly because of increased energy consumption and high operation and maintenance costs, but mainly due to reduced productivity and illness-related absence caused by a poor indoor climate. Today, buildings and installations are typically operated with traditional building automation, which is characterised by 1) being based on static considerations, 2) each sensor being coupled with one actuator/valve, i.e. the sensor's signal is only used in one place in the system, 3) subsystems often being controlled independently of each other, and 4) the dynamics of building constructions and systems, which are very important for system and comfort regulation, not being considered. This, coupled with the widespread tendency to use large glass areas in facades without sufficient sun shading, makes it difficult to optimise comfort and energy consumption. The last 10-20 years have therefore seen a steady increase in complaints about the indoor climate in Danish buildings, and, at the same time, new buildings often turn out to consume considerably more energy than expected. The purpose of the present project is to investigate which types of multi-parameter sensors may be generated for buildings, and further to carry out a preliminary evaluation of how such multi-parameter controllers may be utilized for optimal control of buildings. The aim of the project is not to develop multi-parameter controllers - this requires much more effort than is possible in the present project. The aim is to show the potential of using multi-parameter sensors when controlling buildings. For this purpose a larger office building has been chosen - an office building with a high energy demand and complaints regarding the indoor climate. In order to

  4. An automatic and effective parameter optimization method for model tuning

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2015-11-01

    simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9 %. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
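    A generic sketch of automatic parameter tuning of the kind described above: greedy coordinate search over a model skill score, with step sizes that shrink when a sweep stalls. The record's method for GCM tuning is more sophisticated; the skill function and the parameter names below are toys.

```python
def tune(score, params, steps, iters=50):
    """Greedy coordinate search: try +/- step on each parameter, keep any
    improvement (higher score is better); halve all steps when a full
    sweep yields no improvement."""
    best = dict(params)
    f_best = score(best)
    steps = dict(steps)
    for _ in range(iters):
        improved = False
        for name in list(best):
            for delta in (steps[name], -steps[name]):
                cand = dict(best)
                cand[name] += delta
                f_cand = score(cand)
                if f_cand > f_best:
                    best, f_best, improved = cand, f_cand, True
        if not improved:
            for name in steps:
                steps[name] *= 0.5
    return best, f_best

# Toy "model skill": peaks at a = 2, b = -1 (higher is better)
skill = lambda p: -((p["a"] - 2.0) ** 2 + (p["b"] + 1.0) ** 2)
best, f_best = tune(skill, {"a": 0.0, "b": 0.0}, {"a": 1.0, "b": 1.0})
```

    In a real GCM setting each call to `score` is a model run plus a comparison against observations, so the number of evaluations, not the search logic, dominates the cost.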

  5. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    Energy Technology Data Exchange (ETDEWEB)

    Qiu, J [Taishan Medical University, Taian, Shandong (China); Washington University in St Louis, St Louis, MO (United States); Li, H. Harlod; Zhang, T; Yang, D [Washington University in St Louis, St Louis, MO (United States); Ma, F [Taishan Medical University, Taian, Shandong (China)

    2015-06-15

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip limit parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
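    The "maximize the entropy of the processed result" criterion can be illustrated with a much simpler enhancement than CLAHE: a gamma correction searched over a small candidate grid. The CLAHE pipeline and the interior-point optimizer are beyond a sketch; the under-exposed image below is synthetic.

```python
import numpy as np

def entropy(img, bins=64):
    """Shannon entropy (bits) of the intensity histogram on [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def best_gamma(img, candidates=(0.25, 0.5, 1.0, 2.0)):
    """Pick the gamma-correction exponent that maximizes entropy."""
    return max(candidates, key=lambda g: entropy(img ** g))

# Synthetic under-exposed image: intensities crowded toward black
img = np.linspace(0.0, 1.0, 1024) ** 2
g = best_gamma(img)    # gamma = 0.5 flattens this histogram exactly
```

    The same objective generalizes: replace the gamma map with any parameterized enhancement and search its parameters for maximum output entropy.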

  6. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    International Nuclear Information System (INIS)

    Qiu, J; Li, H. Harlod; Zhang, T; Yang, D; Ma, F

    2015-01-01

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance the 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on the contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed image, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE block size and clip limit. The goal of the optimization is to maximize the entropy of the processed image. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
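    The enhancement pipeline described in this abstract can be sketched end to end. The fragment below is a minimal stand-in, not the authors' code: it substitutes plain global histogram equalization for CLAHE and a coarse grid search for the interior-point optimizer, but keeps the central idea of selecting the smoothing weight and scale by maximizing the entropy of the processed image (all parameter values are illustrative).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def entropy(img, bins=256):
    # Shannon entropy of the intensity histogram (the optimization objective)
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def enhance(img, weight, sigma):
    # step 2: high-pass filtering by subtracting the Gaussian-smoothed image
    hp = np.clip(img - weight * gaussian_filter(img, sigma), 0.0, 1.0)
    # step 3: global histogram equalization (CLAHE stand-in): map through the CDF
    hist, edges = np.histogram(hp, bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()
    return np.interp(hp, edges[:-1], cdf)

rng = np.random.default_rng(0)
img = gaussian_filter(rng.random((64, 64)), 3)   # synthetic low-contrast image
img = (img - img.min()) / (img.max() - img.min())

# coarse grid search (stand-in for the interior-point method), maximizing entropy
best = max((entropy(enhance(img, w, s)), w, s)
           for w in (0.3, 0.5, 0.7) for s in (2, 4, 8))
print(best)
```

    In the paper's setting the search would run over the continuous parameter space with an interior-point method, and CLAHE's block size and clip limit would join the smoothing weight as decision variables.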

  7. Optimal parameters for the FFA-Beddoes dynamic stall model

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerck, A; Mert, M [FFA, The Aeronautical Research Institute of Sweden, Bromma (Sweden); Madsen, H A [Risoe National Lab., Roskilde (Denmark)

    1999-03-01

    Unsteady aerodynamic effects, like dynamic stall, must be considered in the calculation of dynamic forces for wind turbines. The models incorporated in aero-elastic programs are semi-empirical in nature, so the resulting aerodynamic forces depend on the values used for the semi-empirical parameters. This paper discusses a study of finding appropriate parameters to use with the Beddoes-Leishman model. Minimisation of the 'tracking error' between results from 2D wind tunnel tests and simulations with the model is used to find optimum values for the parameters. The resulting optimum parameters show a large variation from case to case. Using these different sets of optimum parameters in the calculation of blade vibrations gives rise to quite different predictions of aerodynamic damping, which is discussed. (au)
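    The 'tracking error' minimisation described here amounts to a nonlinear least-squares fit of semi-empirical parameters against wind-tunnel time series. A hedged sketch with a toy first-order-lag model standing in for the full Beddoes-Leishman equations (the model, data, and time constant are invented for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the semi-empirical model: the dynamic lift responds to the
# static lift through a first-order lag with time constant tau (the parameter).
def simulate(tau, t, cl_static):
    cl = np.empty_like(cl_static)
    cl[0] = cl_static[0]
    dt = t[1] - t[0]
    for i in range(1, len(t)):
        cl[i] = cl[i - 1] + dt / tau * (cl_static[i] - cl[i - 1])
    return cl

t = np.linspace(0.0, 2.0, 200)
cl_static = 1.0 + 0.5 * np.sin(2.0 * np.pi * t)   # oscillating "wind tunnel" input
measured = simulate(0.15, t, cl_static)           # synthetic measurement, tau = 0.15

# minimise the tracking error between simulation and measurement
res = least_squares(lambda p: simulate(p[0], t, cl_static) - measured,
                    x0=[0.5], bounds=(1e-3, 2.0))
print(res.x)
```

    With exact synthetic data the fit recovers the generating time constant; with real tunnel data the recovered optimum varies from case to case, which is exactly the sensitivity the abstract reports.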

  8. Optimization of VPSC Model Parameters for Two-Phase Titanium Alloys: Flow Stress Vs Orientation Distribution Function Metrics

    Science.gov (United States)

    Miller, V. M.; Semiatin, S. L.; Szczepanski, C.; Pilchak, A. L.

    2018-06-01

    The ability to predict the evolution of crystallographic texture during hot work of titanium alloys in the α + β temperature regime is greatly significant to numerous engineering disciplines; however, research efforts are complicated by the rapid changes in phase volume fractions and flow stresses with temperature in addition to topological considerations. The viscoplastic self-consistent (VPSC) polycrystal plasticity model is employed to simulate deformation in the two phase field. Newly developed parameter selection schemes utilizing automated optimization based on two different error metrics are considered. In the first optimization scheme, which is commonly used in the literature, the VPSC parameters are selected based on the quality of fit between experiment and simulated flow curves at six hot-working temperatures. Under the second newly developed scheme, parameters are selected to minimize the difference between the simulated and experimentally measured α textures after accounting for the β → α transformation upon cooling. It is demonstrated that both methods result in good qualitative matches for the experimental α phase texture, but texture-based optimization results in a substantially better quantitative orientation distribution function match.

  9. Optimizing design parameter for light isotopes separation by distillation method

    International Nuclear Information System (INIS)

    Ahmadi, M.

    1999-01-01

    Several methods have been suggested worldwide for producing heavy water; among them, chemical isotopic exchange, distillation and electrolysis are widely used at industrial scale. To select a suitable method for heavy water production in Iran, taking into consideration domestic technology and facilities, a combination of the hydrogen sulphide-water dual temperature process (GS) and distillation (DW) may be proposed. Natural water is first enriched up to 15 a% by the GS process and then enriched by a distillation unit up to the grade necessary for Candu type reactors (99.8 a%). The aim of the present thesis is to achieve the know-how, optimize the design parameters, and execute the basic design for water isotope separation using the distillation process in a plant of the minimum possible scale. In distillation, the vapour phase resulting from heating the liquid phase is evidently composed of the same constituents as the liquid phase. In isotopic distillation, the difference in composition of the constituents is not considerable. In fact, the alteration of constituent composition is so small that it nearly makes the separation process impossible; however, direct separation and production of pure products without further processing, which becomes possible by distillation, makes this process one of the most important separation processes. Using distillation to produce heavy water is based on the difference between the boiling points of heavy and light water. This boiling point difference is inversely dependent on pressure: as the whole system pressure decreases, the difference in boiling points increases. In other words, since by definition the separation factor is equal to the ratio of the vapour pressure of pure light water to that of heavy water, decreasing the system pressure increases the separation factor; accordingly, the separation factor should first be computed as a function of the pressure variable. According to the

  10. Computer controlled automated assay for comprehensive studies of enzyme kinetic parameters.

    Directory of Open Access Journals (Sweden)

    Felix Bonowski

    Full Text Available Stability and biological activity of proteins are highly dependent on their physicochemical environment. The development of realistic models of biological systems necessitates quantitative information on the response to changes of external conditions like pH, salinity and concentrations of substrates and allosteric modulators. Changes in just a few variable parameters rapidly lead to large numbers of experimental conditions, which go beyond the experimental capacity of most research groups. We implemented a computer-aided experimentation framework ("robot lab assistant") that allows us to parameterize abstract, human-readable descriptions of micro-plate-based experiments with variable parameters and execute them on a conventional 8-channel liquid handling robot fitted with a sensitive plate reader. A set of newly developed R packages translates the instructions into machine commands, executes them, collects the data and processes it without user interaction. By combining script-driven experimental planning, execution and data analysis, our system can react to experimental outcomes autonomously, allowing outcome-based iterative experimental strategies. The framework was applied in a response-surface-model-based iterative optimization of buffer conditions and an investigation of substrate, allosteric effector, pH and salt dependent activity profiles of pyruvate kinase (PYK). A diprotic model of enzyme kinetics was used to model the combined effects of changing pH and substrate concentrations. The 8 parameters of the model could be estimated from a single two-hour experiment using nonlinear least-squares regression. The model with the estimated parameters successfully predicted the pH and PEP dependence of initial reaction rates, while the PEP-concentration-dependent shift of optimal pH could only be reproduced with a set of manually tweaked parameters.
    Differences between model predictions and experimental observations at low pH suggest additional protonation
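    A fit of a diprotic-kinetics model like the one described can be sketched with nonlinear least squares. The rate law below is a generic textbook form (Michaelis-Menten in substrate scaled by a two-pKa active-fraction term), not the paper's exact 8-parameter model, and all constants are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic diprotic rate law: Michaelis-Menten in substrate S, scaled by the
# fraction of enzyme in the catalytically active protonation state.
def rate(X, vmax, km, pka1, pka2):
    S, pH = X
    h = 10.0 ** (-pH)
    active = 1.0 / (1.0 + h / 10.0 ** (-pka1) + 10.0 ** (-pka2) / h)
    return vmax * active * S / (km + S)

# synthetic "plate reader" data on a substrate x pH grid (noise-free)
S, pH = np.meshgrid(np.logspace(-2, 1, 10), np.linspace(5.0, 9.0, 10))
X = (S.ravel(), pH.ravel())
true = (2.0, 0.5, 6.0, 8.0)          # vmax, Km, pKa1, pKa2
y = rate(X, *true)

# nonlinear least-squares regression, as in the abstract
popt, _ = curve_fit(rate, X, y, p0=(1.0, 1.0, 6.5, 7.5))
print(popt)
```

    On a two-dimensional substrate-by-pH grid of this kind, the four parameters are well determined from a single automated run, mirroring the single two-hour experiment reported above.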

  11. Optimization of multilayer neural network parameters for speaker recognition

    Science.gov (United States)

    Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka

    2016-05-01

    This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person in a known set of speakers, i.e., to determine whether the voice of an unknown speaker (the wanted person) belongs to a group of reference speakers from the voice database. One of the requirements was to develop a text-independent system, which means classifying the wanted person regardless of content and language. A multilayer neural network has been used for speaker identification in this research. An artificial neural network (ANN) needs parameters such as the activation function of the neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations, and the number of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by the parameter settings, and different tasks require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings. The goal was to find the parameters giving the neural network the highest precision and shortest validation time. The inputs of the neural network are Mel-frequency cepstral coefficients (MFCCs), which describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The data were split into training, testing and validation sets of 70, 15 and 15%, respectively. The result of the research described in this article is a set of multilayer neural network parameter settings for four speakers.

  12. Optimization of Key Parameters of Energy Management Strategy for Hybrid Electric Vehicle Using DIRECT Algorithm

    Directory of Open Access Journals (Sweden)

    Jingxian Hao

    2016-11-01

    Full Text Available The rule-based logic threshold control strategy has been frequently used in energy management strategies for hybrid electric vehicles (HEVs) owing to its convenience in adjusting parameters, real-time performance, stability, and robustness. However, the logic threshold control parameters cannot usually ensure the best vehicle performance across different driving cycles and conditions. For this reason, the optimization of key parameters is important to improve the fuel economy, dynamic performance, and drivability. In principle, this is a multiparameter nonlinear optimization problem. The logic threshold energy management strategy for an all-wheel-drive HEV is comprehensively analyzed and developed in this study, and seven key parameters to be optimized are extracted. The optimization model of the key parameters is formulated from the perspective of fuel economy. The DIRECT global optimization algorithm, which has good real-time performance, low computational burden, and rapid convergence, is selected to optimize the extracted key parameters globally. The results show that with the optimized parameters, the engine operates more often in its high-efficiency range, resulting in a fuel saving of 7% compared with the non-optimized parameters. The proposed method can provide guidance for calibrating the parameters of a vehicle energy management strategy from the perspective of fuel economy.
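    Threshold-parameter selection by DIRECT can be illustrated with SciPy's implementation of the algorithm (`scipy.optimize.direct`). The objective below is a made-up smooth surrogate over two hypothetical threshold parameters; the study's real objective evaluates fuel consumption over complete driving-cycle simulations for seven parameters:

```python
import numpy as np
from scipy.optimize import direct  # SciPy's DIRECT (DIviding RECTangles) solver

# Hypothetical fuel-consumption surrogate over two illustrative thresholds
# (engine-on power threshold in kW, battery SOC lower bound); the real
# objective would run a full driving-cycle simulation per evaluation.
def fuel(x):
    p_on, soc_lo = x
    return (p_on - 12.0) ** 2 + 4.0 * (soc_lo - 0.35) ** 2 + 5.0

# DIRECT needs only box bounds and a budget of function evaluations
res = direct(fuel, bounds=[(0.0, 30.0), (0.1, 0.9)], maxfun=2000)
print(res.x, res.fun)
```

    Because DIRECT is derivative-free and samples the box systematically, it suits objectives like cycle simulations where gradients are unavailable and each evaluation is expensive.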

  13. Optimal Solution for VLSI Physical Design Automation Using Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    I. Hameem Shanavas

    2014-01-01

    Full Text Available In VLSI physical design optimization, area minimization and interconnect-length minimization are important objectives in the physical design automation of very large scale integration chips, since minimizing area and interconnect length scales down the size of integrated chips. To meet this objective, it is necessary to find an optimal solution for the physical design components: partitioning, floorplanning, placement, and routing. This work performs the optimization of benchmark circuits across these components of physical design using a hierarchical approach of evolutionary algorithms. The goals of minimizing the delay in partitioning, the silicon area in floorplanning, the layout area in placement, and the wirelength in routing have indefinite influence on other criteria like power, clock, speed, and cost. A hybrid evolutionary algorithm, which includes one or more local search steps within its evolutionary cycles, is applied in each of these phases to minimize area and interconnect length. This approach combines a genetic algorithm and simulated annealing in a hierarchical design, and can quickly produce optimal solutions for the popular benchmarks.

  14. A Particle Swarm Optimization Algorithm for Optimal Operating Parameters of VMI Systems in a Two-Echelon Supply Chain

    Science.gov (United States)

    Sue-Ann, Goh; Ponnambalam, S. G.

    This paper focuses on the operational issues of a two-echelon single-vendor-multiple-buyers supply chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. To determine the optimal sales quantity for each buyer in the TSVMBSC, a mathematical model is formulated. From the optimal sales quantity, the optimal sales price can be obtained, which in turn determines the optimal channel profit and the contract price between the vendor and buyer. All of these parameters depend upon the understanding of the revenue sharing between the vendor and buyers. A Particle Swarm Optimization (PSO) algorithm is proposed for this problem. Solutions obtained from PSO are compared with the best known results reported in the literature.

  15. Optimization of turning process parameters by using grey-Taguchi

    African Journals Online (AJOL)

    DR OKE

    ... India continue to choose the operating conditions solely on the basis of handbook values .... Surface Roughness Measuring instrument ... process control parameters like spindle speed, feed and depth of cut. ..... and Industrial Engineering.

  16. Optimization of injection moulding process parameters in the ...

    African Journals Online (AJOL)

    In this study, optimal injection moulding conditions for minimum shrinkage during moulding of High Density Polyethylene (HDPE) were obtained by Taguchi method. The result showed that melting temperature of 190OC, injection pressure of 55 MPa, refilling pressure of 85 MPa and cooling time of 11 seconds gave ...

  17. Air Compressor Driving with Synchronous Motors at Optimal Parameters

    Directory of Open Access Journals (Sweden)

    Iuliu Petrica

    2010-10-01

    Full Text Available In this paper a method is presented for optimal compensation of the reactive load by the synchronous motors driving the air compressors used in mining enterprises, taking into account that in this case the great majority of the equipment (compressors, pumps) generally works at constant load.

  18. Optimizing Acquisition Parameters for MASW in Shallow Water

    NARCIS (Netherlands)

    Diaferia, G.; Kruiver, P.P.; Drijkoningen, G.G.

    2013-01-01

    Analogous to the use of Rayleigh waves in MASW on land, Scholte waves can be used to derive shear wave velocity profiles for the subsurface under water. These profiles are useful for dredging operations, offshore wind farms, oil rigs and pipelines. We have determined the optimal acquisition setup

  19. Quantitative assessment of aluminium cast alloys' structural parameters to optimize its properties

    Directory of Open Access Journals (Sweden)

    L. Kuchariková

    2017-01-01

    Full Text Available The present work deals with the evaluation of eutectic Si (its shape, size, and distribution), dendrite cell size and dendrite arm spacing in aluminium cast alloys which were cast into different moulds (sand and metallic). Structural parameters were evaluated using the NIS-Elements image analyser software, which supports the evaluation, capture, archiving and automated measurement of structural parameters. The control of structural parameters with NIS-Elements shows that the optimum mechanical properties of aluminium cast alloys strongly depend on the distribution, morphology and size of the eutectic Si, as well as on the matrix parameters.

  20. Hybrid artificial bee colony algorithm for parameter optimization of five-parameter bidirectional reflectance distribution function model.

    Science.gov (United States)

    Wang, Qianqian; Zhao, Jing; Gong, Yong; Hao, Qun; Peng, Zhong

    2017-11-20

    A hybrid artificial bee colony (ABC) algorithm inspired by the best-so-far solution and bacterial chemotaxis was introduced to optimize the parameters of the five-parameter bidirectional reflectance distribution function (BRDF) model. To verify the performance of the hybrid ABC algorithm, we measured the BRDF of three kinds of samples and estimated the undetermined parameters of the five-parameter BRDF model using the hybrid ABC algorithm and the genetic algorithm, respectively. The experimental results demonstrate that the hybrid ABC algorithm outperforms the genetic algorithm in convergence speed, accuracy, and time efficiency under the same conditions.

  1. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white-noise-loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  2. Optimization of control parameters for petroleum waste composting

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Composting is being widely employed in the treatment of petroleum waste. The purpose of this study was to find the optimum control parameters for petroleum waste in-vessel composting. Various physical and chemical parameters were monitored to evaluate their influence on the microbial communities present in composting. CO2 evolution and the number of microorganisms were measured as indicators of composting activity. The results demonstrated that the optimum temperature, pH and moisture content were 56.5-59.5°C, 7.0-8.5 and 55%-60%, respectively. Under the optimum conditions, the removal efficiency of petroleum hydrocarbon reached 83.29% after 30 days of composting.

  3. Improved Artificial Fish Algorithm for Parameters Optimization of PID Neural Network

    OpenAIRE

    Jing Wang; Yourui Huang

    2013-01-01

    In order to solve problems in the optimization of PID neural network parameters by the traditional BP algorithm, such as initial weights being difficult to determine and training results easily becoming trapped in local minima, this paper proposes a new method based on an improved artificial fish algorithm for parameter optimization of PID neural networks. This improved artificial fish algorithm uses a composite adaptive artificial fish algorithm based on optimal artificial fish and nearest artificial fi...

  4. Optimization of MIS/IL solar cells parameters using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, K.A.; Mohamed, E.A.; Alaa, S.H. [Faculty of Engineering, Alexandria Univ. (Egypt); Motaz, M.S. [Institute of Graduate Studies and Research, Alexandria Univ. (Egypt)

    2004-07-01

    This paper presents a genetic algorithm optimization of MIS/IL solar cell parameters, including the doping concentration N_A, metal work function φ_m, oxide thickness d_ox, mobile charge density N_m, fixed oxide charge density N_ox and the external back bias V applied to the inversion grid. The optimization results are compared with a theoretical optimization and show that the genetic algorithm can be used for determining the optimum parameters of the cell. (orig.)
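    A real-coded genetic algorithm of the kind used here can be sketched in a few lines. The fitness function below is a synthetic surrogate over two illustrative parameters (base-10 log of doping concentration, oxide thickness), not the actual MIS/IL device model, and all GA settings are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "cell efficiency" surrogate over two illustrative parameters:
# log10 of doping concentration and oxide thickness (nm); peak at (17, 2).
def efficiency(x):
    log_na, d_ox = x
    return float(np.exp(-((log_na - 17.0) / 2.0) ** 2
                        - ((d_ox - 2.0) / 0.5) ** 2))

lo, hi = np.array([14.0, 0.5]), np.array([20.0, 5.0])
pop = rng.uniform(lo, hi, (40, 2))
for _ in range(80):
    fit = np.array([efficiency(p) for p in pop])
    best = pop[fit.argmax()].copy()
    # tournament selection: each slot keeps the fitter of two random individuals
    i, j = rng.integers(0, 40, (2, 40))
    parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
    # uniform crossover with a shifted copy of the parent pool, then mutation
    mask = rng.random((40, 2)) < 0.5
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    children += rng.normal(0.0, 0.02, (40, 2)) * (hi - lo)
    pop = np.clip(children, lo, hi)
    pop[0] = best            # elitism: never lose the best individual
fit = np.array([efficiency(p) for p in pop])
print(pop[fit.argmax()])
```

    In the paper's setting, the surrogate would be replaced by the full device model evaluated at each candidate parameter vector, with the six cell parameters forming the chromosome.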

  5. Data Mining and Optimization Tools for Developing Engine Parameters Tools

    Science.gov (United States)

    Dhawan, Atam P.

    1998-01-01

    This project was awarded for understanding the problem and developing a plan for data mining tools for use in designing and implementing an engine condition monitoring system. Tricia Erhardt and I studied the problem domain for developing an engine condition monitoring system using the sparse and non-standardized datasets to be made available through a consortium at NASA Lewis Research Center. We visited NASA three times to discuss additional issues related to a dataset that was not made available to us. We discussed and developed a general framework of data mining and optimization tools to extract useful information from sparse and non-standard datasets. These discussions led to the training of Tricia Erhardt to develop genetic-algorithm-based search programs, which were written in C++ and used to demonstrate the capability of the GA in searching for an optimal solution in noisy datasets. From the study and discussions with NASA LeRC personnel, we then prepared a proposal, which is being submitted to NASA, for future work on the development of data mining algorithms for engine condition monitoring. The proposed set of algorithms uses wavelet processing to create a multi-resolution pyramid of the data for GA-based multi-resolution optimal search.

  6. GPU based Monte Carlo for PET image reconstruction: parameter optimization

    International Nuclear Information System (INIS)

    Cserkaszky, Á; Légrády, D.; Wirth, A.; Bükki, T.; Patay, G.

    2011-01-01

    This paper presents the optimization of a fully Monte Carlo (MC) based iterative image reconstruction of Positron Emission Tomography (PET) measurements. With our MC reconstruction method all the physical effects in a PET system are taken into account, thus superior image quality is achieved in exchange for increased computational effort. The method is feasible because we utilize the enormous processing power of Graphics Processing Units (GPUs) to solve the inherently parallel problem of photon transport. The MC approach regards the simulated positron decays as samples in the mathematical sums required in the iterative reconstruction algorithm, so to complement the fast architecture, our optimization work focuses on the number of simulated positron decays required to obtain sufficient image quality. We have achieved significant results in determining the optimal number of samples for arbitrary measurement data; this allows the best image quality to be achieved with the least possible computational effort. Based on this research, recommendations can be given for effective partitioning of computational effort into the iterations in limited-time reconstructions. (author)
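    The underlying trade-off, image quality versus number of simulated decays, follows the usual Monte Carlo error law: the statistical error of an estimate scales as 1/sqrt(N). A small illustration, with a pi-estimation toy problem standing in for photon transport:

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo error falls as 1/sqrt(N): quadrupling the sample count roughly
# halves the statistical noise. "Image quality" corresponds to the spread of
# the estimate across repeated runs.
def mc_error(n, trials=200):
    x, y = rng.random((2, trials, n))
    est = 4.0 * ((x ** 2 + y ** 2) < 1.0).mean(axis=1)   # one estimate per trial
    return est.std()

errs = [mc_error(n) for n in (100, 400, 1600)]
print(errs)
```

    Because the error improves only as the square root of the effort, the quality gain per extra sample diminishes, which is why a near-optimal sample count can be determined and fixed, as the abstract describes.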

  7. Plug-and-play monitoring and performance optimization for industrial automation processes

    CERN Document Server

    Luo, Hao

    2017-01-01

    Dr.-Ing. Hao Luo demonstrates the developments of advanced plug-and-play (PnP) process monitoring and control systems for industrial automation processes. With aid of the so-called Youla parameterization, a novel PnP process monitoring and control architecture (PnP-PMCA) with modularized components is proposed. To validate the developments, a case study on an industrial rolling mill benchmark is performed, and the real-time implementation on a laboratory brushless DC motor is presented. Contents PnP Process Monitoring and Control Architecture Real-Time Configuration Techniques for PnP Process Monitoring Real-Time Configuration Techniques for PnP Performance Optimization Benchmark Study and Real-Time Implementation Target Groups Researchers and students of Automation and Control Engineering Practitioners in the area of Industrial and Production Engineering The Author Hao Luo received the Ph.D. degree at the Institute for Automatic Control and Complex Systems (AKS) at the University of Duisburg-Essen, Germany, ...

  8. Optimization of Storage Parameters of Selected Fruits in Passive ...

    African Journals Online (AJOL)

    This study was carried out to determine the optimum storage parameters of selected fruit using three sets of four types of passive evaporative cooling structures made of two different materials clay and aluminium. One set consisted of four separate cooling chambers. Two cooling chambers were made with aluminium ...

  9. Optimal Two Parameter Bounds for the Seiffert Mean

    Directory of Open Access Journals (Sweden)

    Hui Sun

    2013-01-01

    Full Text Available We obtain sharp bounds for the Seiffert mean in terms of a two parameter family of means. Our results generalize and extend the recent bounds presented in the Journal of Inequalities and Applications (2012) and Abstract and Applied Analysis (2012).

  10. Investigation and validation of optimal cutting parameters for least ...

    African Journals Online (AJOL)

    user

    Turning is carried on lathe that provides the power to turn the work piece at a given rotational speed and ... The cutting parameters influencing the surface finish in EN24 is to be studied ...... Design from Anna University, Chennai, India in 2004.

  11. Optimization of physico-chemical and nutritional parameters for ...

    African Journals Online (AJOL)

    hope&shola

    2010-10-25

    Oct 25, 2010 ... industrial production in order to reduce the cost of production. ... is of great economic importance with increased appli- ... industries (Seviour et al., 1992; Leathers, 2003). .... The various process parameters influencing pullulan production ..... formation by Aureobasidium pullulans in stirred tanks. Enzyme.

  12. Optimization and Analysis of Cutting Tool Geometrical Parameters ...

    African Journals Online (AJOL)

    ADOWIE PERE

    Bassett et al. (2012) and Kountanya et al. (2016) studied the effect of tool edge geometry and cutting conditions on the chip morphology in orthogonal hard turning of 100Cr6 steel. Their study shows that the edge radius does not influence the geometrical parameters of the chip. Moreover, cutting forces decrease as the cutting.

  13. Microseismic event location using global optimization algorithms: An integrated and automated workflow

    Science.gov (United States)

    Lagos, Soledad R.; Velis, Danilo R.

    2018-02-01

    We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques: Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead from raw 3C data to the location of the microseismic events. First, we carry out the automatic detection, denoising and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after taking proper care of the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for usual 2D and 3D scenarios of hydraulic fracturing processes. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
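    The event-location step can be sketched with a minimal PSO over a toy travel-time misfit. Everything below (receiver geometry, homogeneous velocity, swarm coefficients) is invented for illustration; the paper's workflow operates on real 3C data and also runs VFSA and grid search over the same restricted region:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2D event location: surface receivers, homogeneous P velocity, and a
# misfit equal to the sum of squared arrival-time residuals.
receivers = np.array([[0.0, 0.0], [1000.0, 0.0], [500.0, 800.0]])
v_p, source = 3000.0, np.array([400.0, 300.0])
t_obs = np.linalg.norm(receivers - source, axis=1) / v_p   # "observed" times

def misfit(x):
    t = np.linalg.norm(receivers - x, axis=1) / v_p
    return float(np.sum((t - t_obs) ** 2))

# minimal particle swarm over the (restricted) search region
lo, hi = np.array([0.0, 0.0]), np.array([1000.0, 800.0])
pos = rng.uniform(lo, hi, (30, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([misfit(p) for p in pos])
for _ in range(200):
    gbest = pbest[pbest_f.argmin()]
    r1, r2 = rng.random((2, 30, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([misfit(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
print(pbest[pbest_f.argmin()])
```

    Restricting `lo` and `hi` with the backazimuth estimates, as the workflow does, shrinks the region the swarm must explore and is what produces most of the reported speed-up over grid search.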

  14. Automated procedure for selection of optimal refueling policies for light water reactors

    International Nuclear Information System (INIS)

    Lin, B.I.; Zolotar, B.; Weisman, J.

    1979-01-01

    An automated procedure determining a minimum cost refueling policy has been developed for light water reactors. The procedure is an extension of the equilibrium core approach previously devised for pressurized water reactors (PWRs). Use of 1 1/2-group theory has improved the accuracy of the nuclear model and eliminated tedious fitting of albedos. A simple heuristic algorithm for locating a good starting policy has materially reduced PWR computing time. Inclusion of void effects and use of the Haling principle for axial flux calculations extended the nuclear model to boiling water reactors (BWRs). A good initial estimate of the refueling policy is obtained by recognizing that a nearly uniform distribution of reactivity provides low-power peaking. The initial estimate is improved upon by interchanging groups of four assemblies and is subsequently refined by interchanging individual assemblies. The method yields very favorable results, is simpler than previously proposed BWR fuel optimization schemes, and retains power cost as the objective function

  15. Automated forensic DNA purification optimized for FTA card punches and identifiler STR-based PCR analysis.

    Science.gov (United States)

    Tack, Lois C; Thomas, Michelle; Reich, Karl

    2007-03-01

    Forensic labs globally face the same problem: a growing need to process a greater number and wider variety of samples for DNA analysis. The same forensic lab can be tasked all at once with processing mixed casework samples from crime scenes, convicted offender samples for database entry, and tissue from tsunami victims for identification. Besides flexibility in the robotic system chosen for forensic automation, there is a need, for each sample type, to develop new methodology that is not only faster but also more reliable than past procedures. FTA is a chemical treatment of paper, unique to Whatman Bioscience, used for the stabilization and storage of biological samples. Here, the authors describe the optimization of the Whatman FTA Purification Kit protocol for use with the AmpFlSTR Identifiler PCR Amplification Kit.

  16. Optimizing the balance between task automation and human manual control in simulated submarine track management.

    Science.gov (United States)

    Chen, Stephanie I; Visser, Troy A W; Huf, Samuel; Loft, Shayne

    2017-09-01

    Automation can improve operator performance and reduce workload, but can also degrade operator situation awareness (SA) and the ability to regain manual control. In 3 experiments, we examined the extent to which automation could be designed to benefit performance while ensuring that individuals maintained SA and could regain manual control. Participants completed a simulated submarine track management task under varying task load. The automation was designed to facilitate information acquisition and analysis, but did not make task decisions. Relative to a condition with no automation, the continuous use of automation improved performance and reduced subjective workload, but degraded SA. Automation that was engaged and disengaged by participants as required (adaptable automation) moderately improved performance and reduced workload relative to no automation, but degraded SA. Automation engaged and disengaged based on task load (adaptive automation) provided no benefit to performance or workload, and degraded SA relative to no automation. Automation never led to significant return-to-manual deficits. However, all types of automation led to degraded performance on a nonautomated task that shared information processing requirements with automated tasks. Given these outcomes, further research is urgently required to establish how to design automation to maximize performance while keeping operators cognitively engaged. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Evaluation of a Multi-Parameter Sensor for Automated, Continuous Cell Culture Monitoring in Bioreactors

    Science.gov (United States)

    Pappas, D.; Jeevarajan, A.; Anderson, M. M.

    2004-01-01

    Compact and automated sensors are desired for assessing the health of cell cultures in biotechnology experiments in microgravity. Measurement of the cell culture medium allows for the optimization of culture conditions on orbit to maximize cell growth and minimize unnecessary exchanges of medium. While several discrete sensors exist to measure culture health, a multi-parameter sensor would simplify the experimental apparatus. One such sensor, the Paratrend 7, consists of three optical fibers for measuring pH, dissolved oxygen (pO2) and dissolved carbon dioxide (pCO2), and a thermocouple to measure temperature. The sensor bundle was designed for intra-arterial placement in clinical patients, and potentially can be used in bioreactors in NASA's Space Shuttle and International Space Station biotechnology programs. Methods: A Paratrend 7 sensor was placed at the outlet of a rotating-wall perfused-vessel bioreactor system inoculated with BHK-21 (baby hamster kidney) cells. The cell culture medium (GTSF-2, composed of 40% minimum essential medium and 60% L-15 Leibovitz medium) was also measured manually using a bench-top blood gas analyzer (BGA, Ciba-Corning). Results: A Paratrend 7 sensor was used over a long-term (>120 day) cell culture experiment. The sensor was able to track changes in cell medium pH, pO2, and pCO2 due to the consumption of nutrients by the BHK-21 cells. When compared to manually obtained BGA measurements, the sensor had good agreement for pH, pO2, and pCO2, with bias [and precision] of 0.02 [0.15], 1 mm Hg [18 mm Hg], and -4.0 mm Hg [8.0 mm Hg], respectively. The Paratrend oxygen sensor was recalibrated (offset) periodically due to drift; the bias for the raw (no offset or recalibration) oxygen measurements was 42 mm Hg [38 mm Hg]. The measured response (rise) time of the sensor was 20 +/- 4 s for pH, 81 +/- 53 s for pCO2, and 51 +/- 20 s for pO2. For long-term cell culture measurements, these response times are more than adequate. Based on these findings, the Paratrend sensor could
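
The bias [precision] figures quoted above can be reproduced from paired readings in the Bland-Altman sense: bias is the mean of the sensor-minus-reference differences and precision is their standard deviation. A sketch with hypothetical paired pH readings (the numbers are invented, not the study's data):

```python
import statistics

def bias_precision(sensor, reference):
    """Bland-Altman-style agreement: bias = mean difference,
    precision = standard deviation of the differences."""
    diffs = [s - r for s, r in zip(sensor, reference)]
    return statistics.mean(diffs), statistics.stdev(diffs)

# Hypothetical paired pH readings (multi-parameter sensor vs. bench-top BGA).
sensor_pH = [7.21, 7.18, 7.30, 7.25, 7.10]
bga_pH    = [7.20, 7.15, 7.28, 7.26, 7.08]
b, p = bias_precision(sensor_pH, bga_pH)
print(round(b, 3), round(p, 3))
```

The same two numbers computed for each analyte give exactly the "bias [precision]" pairs reported in such agreement studies.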

  18. Optimization of a centrifugal compressor impeller using CFD: the choice of simulation model parameters

    Science.gov (United States)

    Neverov, V. V.; Kozhukhov, Y. V.; Yablokov, A. M.; Lebedev, A. A.

    2017-08-01

    Nowadays, optimization using computational fluid dynamics (CFD) plays an important role in the design process of turbomachines. However, for successful and productive optimization it is necessary to define the simulation model correctly and rationally. This article deals with the choice of grid and computational-domain parameters for the optimization of centrifugal compressor impellers using CFD. Searching for and applying optimal parameters of the grid model, the computational domain and the solver settings allows engineers to carry out high-accuracy modelling and to use computational capability effectively. The presented research was conducted using the Numeca Fine/Turbo package with the Spalart-Allmaras and Shear Stress Transport turbulence models. Two radial impellers were investigated: a high-pressure impeller at ψT=0.71 and a low-pressure impeller at ψT=0.43. The following parameters of the computational model were considered: the location of the inlet and outlet boundaries, the type of mesh topology, the mesh size and the mesh parameter y+. The results of the investigation demonstrate that the choice of optimal parameters leads to a significant reduction in computational time. Optimal parameters, in comparison with non-optimal but visually similar parameters, can reduce the calculation time by up to a factor of four. It is also established that some parameters have a major impact on the result of the modelling.
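
The mesh parameter y+ mentioned above is commonly translated into a wall-adjacent cell height using a flat-plate skin-friction estimate. A sketch of that standard estimate (the flow values are illustrative, not those of the studied impellers):

```python
import math

def first_cell_height(y_plus, u, rho, mu, L):
    """Estimate the wall-adjacent cell height needed to reach a target y+,
    using the common flat-plate skin-friction correlation Cf = 0.026 Re^(-1/7)."""
    re = rho * u * L / mu              # Reynolds number on length scale L
    cf = 0.026 * re ** (-1.0 / 7.0)   # skin-friction coefficient estimate
    tau_w = 0.5 * cf * rho * u * u    # wall shear stress
    u_tau = math.sqrt(tau_w / rho)    # friction velocity
    return y_plus * mu / (rho * u_tau)

# Air at roughly ambient conditions over a short passage (illustrative only).
h = first_cell_height(y_plus=1.0, u=150.0, rho=1.2, mu=1.8e-5, L=0.05)
print(f"{h:.2e} m")
```

For low-Reynolds-number turbulence modelling (y+ around 1), such an estimate sets the first mesh layer at a few micrometres here; for wall-function meshes a larger y+ target relaxes it proportionally.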

  19. Intelligent Mechatronics Systems for Transport Climate Parameters Optimization Using Fuzzy Logic Control

    OpenAIRE

    Beinarts, I; Ļevčenkovs, A; Kuņicina, N

    2007-01-01

    In this article, interest is concentrated on the optimization of climate parameters in the passenger compartments of public electric transport vehicles. The article presents the mathematical problem of using intelligent agents in mechatronics for the optimal control of climate parameters. The idea is to use fuzzy logic and intelligent algorithms to create a coordination mechanism for climate parameter control that saves electrical energy and increases the level of comfort for passengers. A special interest for...

  20. Air conditioning with methane: Efficiency and economics optimization parameters

    International Nuclear Information System (INIS)

    Mastrullo, R.; Sasso, M.; Sibilio, S.; Vanoli, R.

    1992-01-01

    This paper presents an efficiency and economics evaluation method for methane-fired cooling systems. Focus is on direct-flame two-stage absorption systems and alternative engine-driven compressor sets. Comparisons are made with conventional vapour-compression plants powered by electricity supplied by the national grid. A thermodynamic analysis based on the first and second laws is performed, in which fuel use coefficients and exergy yields are determined. The economic analysis establishes annual energy savings, unit cooling energy production costs, payback periods and economics/efficiency optimization curves useful for preliminary feasibility studies.
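
The economic figures of merit mentioned (simple payback period and unit cooling cost) reduce to elementary ratios; a sketch with invented numbers, not the paper's data:

```python
def simple_payback(extra_capital_cost, annual_energy_savings):
    """Years to recover the additional investment from annual savings."""
    return extra_capital_cost / annual_energy_savings

def unit_cooling_cost(annual_cost, annual_cooling_energy_kwh):
    """Production cost per kWh of cooling delivered."""
    return annual_cost / annual_cooling_energy_kwh

# Hypothetical comparison: gas-fired absorption chiller vs. electric
# vapour-compression plant (all figures illustrative).
payback = simple_payback(extra_capital_cost=60_000.0,
                         annual_energy_savings=12_000.0)
cost_per_kwh = unit_cooling_cost(annual_cost=18_000.0,
                                 annual_cooling_energy_kwh=250_000.0)
print(payback, round(cost_per_kwh, 3))
```

A feasibility screen of this kind simply compares the payback period against the plant's expected service life.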

  1. Parameter selection for the SSC trade-offs and optimization

    International Nuclear Information System (INIS)

    Edwards, D.A.; Syphers, M.J.

    1991-01-01

    In November of 1988, a site was selected in the state of Texas for the SSC. In January of 1989, the SSC Laboratory was established in Texas to adapt the design of the collider to the site and to manage the construction of the project. This paper describes the evolution of the SSC design since site selection, notes the increased concentration on the injector system, and addresses the rationale for choice of parameters

  2. Diagnostics and tolerance optimization of nuclear facilities parameters

    International Nuclear Information System (INIS)

    Novak, M.; Otcenasek, P.

    1988-01-01

    The possibilities are discussed of applying the theory of tolerances to assessing the service life of systems in nuclear power. The approach proceeds from the postulate that the nuclear power plant is, on the one hand, an extremely sophisticated technical system and, on the other, a system with well defined demands on the state and properties of components and of the whole system, with rationally defined limits, conditions of permissible states and limit values. It is stated that the basic ideas of the hot channel theory may be extended and generalized. The theory was initially limited to the study of relations between deviations in nuclear and non-nuclear parameters in the fuel assembly and the temperature field; it can be generalized to the analysis of permissible parameter tolerances of the whole system. The foundations are outlined of a theory of tolerances of general technical system parameters. Brief attention is also paid to the general possibilities of using life curves in designing the system so as to extend its service life. (Z.M.). 10 figs., 5 refs

  3. Optimization of process parameters through GRA, TOPSIS and RSA models

    Directory of Open Access Journals (Sweden)

    Suresh Nipanikar

    2018-01-01

    Full Text Available This article investigates the effect of cutting parameters on surface roughness and flank wear during machining of the titanium alloy Ti-6Al-4V ELI (Extra Low Interstitial) in a minimum-quantity-lubrication environment using a PVD TiAlN insert. A full factorial design of experiments was used, with two factors at three levels and two factors at two levels. The turning parameters studied were cutting speed (50, 65, 80 m/min) and feed (0.08, 0.15, 0.2 mm/rev), with the depth of cut held constant at 0.5 mm. The results show a 44.61% contribution of feed and a 43.57% contribution of cutting speed to surface roughness, and a 53.16% contribution of cutting tool and a 26.47% contribution of cutting speed to tool flank wear. Grey relational analysis and the TOPSIS method suggest the optimum combination of machining parameters as cutting speed: 50 m/min, feed: 0.08 mm/rev, cutting tool: PVD TiAlN, cutting fluid: palm oil.
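
The grey relational grades behind a multi-response ranking of this kind can be sketched as follows. This is a generic sketch with invented response values, not the paper's data; `zeta` is the conventional distinguishing coefficient of 0.5:

```python
def grey_relational_grades(data, criteria, zeta=0.5):
    """data: list of experiments, each a list of response values.
    criteria: per-response 'max' (larger-the-better) or 'min' (smaller-the-better)."""
    n_resp = len(criteria)
    norm = []
    for j in range(n_resp):
        col = [row[j] for row in data]
        lo, hi = min(col), max(col)
        if criteria[j] == 'max':
            norm.append([(v - lo) / (hi - lo) for v in col])
        else:
            norm.append([(hi - v) / (hi - lo) for v in col])
    grades = []
    for i in range(len(data)):
        coeffs = []
        for j in range(n_resp):
            delta = 1.0 - norm[j][i]          # deviation from the ideal sequence
            coeffs.append(zeta / (delta + zeta))
        grades.append(sum(coeffs) / n_resp)   # grade = mean grey relational coefficient
    return grades

# Hypothetical turning runs: [surface roughness (min), flank wear (min)]
runs = [[0.9, 0.12], [1.4, 0.20], [0.7, 0.15], [1.1, 0.09]]
grades = grey_relational_grades(runs, ['min', 'min'])
best = grades.index(max(grades))
print(best, [round(g, 3) for g in grades])
```

The run with the highest grade is taken as the best compromise across all responses.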

  4. Relationships among various parameters for decision tree optimization

    KAUST Repository

    Hussain, Shahid

    2014-01-14

    In this chapter, we study, in detail, the relationships between various pairs of cost functions, and between an uncertainty measure and cost functions, for decision tree optimization. We provide new tools (algorithms) to compute relationship functions, as well as experimental results on decision tables acquired from the UCI ML Repository. The algorithms presented here have already been implemented and are now part of Dagger, a software system for the construction and optimization of decision trees and decision rules. The main results presented in this chapter deal with two types of algorithms for computing relationships. First, we discuss the case where we construct approximate decision trees and are interested in relationships between a certain cost function, such as the depth or number of nodes of a decision tree, and an uncertainty measure, such as the misclassification error (accuracy) of the decision tree. Secondly, relationships between two different cost functions are discussed, for example, the number of misclassifications of a decision tree versus the number of nodes in the tree. The results of the experiments presented in the chapter provide further insight. © 2014 Springer International Publishing Switzerland.

  6. Optimization of Nano-Process Deposition Parameters Based on Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Norlina Mohd Sabri

    2016-06-01

    Full Text Available This research focuses on the radio frequency (RF) magnetron sputtering process, a physical vapor deposition technique which is widely used in thin film production. This process requires an optimized combination of deposition parameters in order to obtain the desired thin film. The conventional method of optimizing the deposition parameters has been reported to be costly and time-consuming due to its trial-and-error nature. Thus, the gravitational search algorithm (GSA) technique has been proposed to solve this nano-process parameter optimization problem. In this research, the optimized parameter combination was expected to produce the desired electrical and optical properties of the thin film. The performance of GSA in this research was compared with that of Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Artificial Immune System (AIS) and Ant Colony Optimization (ACO). Based on the overall results, the GSA-optimized parameter combination generated the best electrical properties and acceptable optical properties of the thin film compared to the others. This computational experiment is expected to overcome the problem of having to conduct repetitive laboratory experiments to obtain the most optimized parameter combination. Based on this initial experiment, the adaptation of GSA to this problem could offer a more efficient and productive way of depositing quality thin film in the fabrication process.
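
A minimal sketch of the gravitational search idea (in the style of Rashedi et al.'s GSA): agents attract each other with a "gravity" proportional to a fitness-derived mass, under a decaying gravitational constant. A toy objective stands in for the deposition-quality measure; `g0` and `alpha` are typical illustrative choices:

```python
import math
import random

def gsa(objective, bounds, n_agents=20, n_iter=100, g0=100.0, alpha=20.0, seed=1):
    """Minimal Gravitational Search Algorithm sketch (minimization)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_agents)]
    vel = [[0.0] * dim for _ in range(n_agents)]
    best_x, best_f = None, float('inf')
    for t in range(n_iter):
        fit = [objective(p) for p in pos]
        for f, p in zip(fit, pos):
            if f < best_f:
                best_f, best_x = f, list(p)
        worst, bestv = max(fit), min(fit)
        if worst == bestv:
            m = [1.0 / n_agents] * n_agents
        else:
            raw = [(worst - f) / (worst - bestv) for f in fit]  # better fit -> heavier
            s = sum(raw)
            m = [r / s for r in raw]
        g = g0 * math.exp(-alpha * t / n_iter)   # decaying gravitational constant
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                dist = math.dist(pos[i], pos[j]) + 1e-12
                for d in range(dim):
                    acc[d] += rng.random() * g * m[j] * (pos[j][d] - pos[i][d]) / dist
            for d in range(dim):
                vel[i][d] = rng.random() * vel[i][d] + acc[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
    return best_x, best_f

# Toy stand-in for a deposition-quality objective (true optimum at the origin).
sphere = lambda x: sum(v * v for v in x)
x, f = gsa(sphere, [(-5.0, 5.0)] * 2)
print([round(v, 2) for v in x], round(f, 4))
```

The published GSA includes refinements (e.g. restricting attraction to the Kbest agents) that are omitted in this sketch.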

  7. Optimization of sampling parameters for standardized exhaled breath sampling.

    Science.gov (United States)

    Doran, Sophie; Romano, Andrea; Hanna, George B

    2017-09-05

    The lack of standardization of breath sampling is a major contributor to the poor repeatability of results and hence represents a barrier to the adoption of breath tests in clinical practice. On-line and bag breath sampling have advantages but do not suit multicentre clinical studies, whereas storage and robust transport are essential for the conduct of wide-scale studies. Several devices have been developed to control sampling parameters and to concentrate volatile organic compounds (VOCs) onto thermal desorption (TD) tubes and subsequently transport those tubes for laboratory analysis. We conducted three experiments to investigate (i) the fraction of breath sampled (whole vs. lower expiratory exhaled breath); (ii) breath sample volume (125, 250, 500 and 1000 ml) and (iii) breath sample flow rate (400, 200, 100 and 50 ml/min). The target VOCs were acetone and potential volatile biomarkers for oesophago-gastric cancer belonging to the aldehyde, fatty acid and phenol chemical classes. We also examined the collection execution time and the impact of environmental contamination. The experiments showed that the use of exhaled breath-sampling devices requires the selection of optimum sampling parameters. Increasing the sample volume improved the levels of VOCs detected. However, the influence of the fraction of exhaled breath and the flow rate depends on the target VOCs measured. The concentration of potential volatile biomarkers for oesophago-gastric cancer was not significantly different between whole and lower-airway exhaled breath. While the recovery of phenols and acetone from TD tubes was lower when breath sampling was performed at a higher flow rate, other VOCs were not affected. A dedicated 'clean air supply' overcomes contamination from ambient air, but the breath collection device itself can be a source of contaminants. In clinical studies using VOCs to diagnose gastro-oesophageal cancer, the optimum parameters are a 500 ml sample volume

  8. Parameter extraction using global particle swarm optimization approach and the influence of polymer processing temperature on the solar cell parameters

    Science.gov (United States)

    Kumar, S.; Singh, A.; Dhar, A.

    2017-08-01

    The accurate estimation of photovoltaic parameters is fundamental to gaining insight into the physical processes occurring inside a photovoltaic device, and thereby to optimizing its design, fabrication processes and quality. A simulative approach to accurately determining the device parameters is crucial for cell array and module simulation in practical on-field applications. In this work, we have developed a global particle swarm optimization (GPSO) approach to estimate the solar cell parameters, viz. the ideality factor (η), short-circuit current (Isc), open-circuit voltage (Voc), shunt resistance (Rsh) and series resistance (Rs), with a wide search range of over ±100% for each model parameter. After validating the accuracy and global search power of the proposed approach with synthetic and noisy data, we applied the technique to extract the PV parameters of ZnO/PCDTBT-based hybrid solar cells (HSCs) prepared under different annealing conditions. Further, we examine the variation of the extracted model parameters to unveil the physical processes occurring when different annealing temperatures are employed during device fabrication, and establish the role of improved charge transport in the polymer films from independent FET measurements. The evolution of the surface morphology, optical absorption and chemical composition of the PCDTBT co-polymer films as a function of processing temperature has also been captured in the study and correlated with the findings from the PV parameters extracted using the GPSO approach.
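
A global-best PSO in the spirit of the paper's GPSO can be sketched on a deliberately simplified diode model. Series and shunt resistances are neglected here so the current equation stays explicit, and all parameter values are synthetic; the paper's full five-parameter model and its exact GPSO variant differ:

```python
import math
import random

VT = 0.0259  # thermal voltage at room temperature (V)

def diode_current(v, iph, eta, i0=1e-9):
    """Simplified explicit single-diode model (Rs and Rsh neglected by assumption)."""
    return iph - i0 * (math.exp(v / (eta * VT)) - 1.0)

def pso_fit(objective, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=5):
    """Generic global-best PSO used as a stand-in for the paper's GPSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [list(p) for p in x]
    pf = [objective(p) for p in x]
    gi = pf.index(min(pf))
    gbest, gf = list(pbest[gi]), pf[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], bounds[d][0]), bounds[d][1])
            fx = objective(x[i])
            if fx < pf[i]:
                pf[i], pbest[i] = fx, list(x[i])
                if fx < gf:
                    gf, gbest = fx, list(x[i])
    return gbest, gf

# Synthetic noiseless I-V data generated with known 'true' parameters.
volts = [i * 0.05 for i in range(13)]             # 0.0 .. 0.6 V
amps = [diode_current(v, iph=0.5, eta=1.5) for v in volts]
sse = lambda p: sum((ia - diode_current(vv, p[0], p[1])) ** 2
                    for vv, ia in zip(volts, amps))
(iph_hat, eta_hat), err = pso_fit(sse, [(0.1, 1.0), (1.0, 2.0)])
print(round(iph_hat, 3), round(eta_hat, 3))
```

On noiseless synthetic data the swarm recovers the generating parameters, which is exactly the validation step the abstract describes before moving to measured I-V curves.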

  9. Optimization of Experimental Model Parameter Identification for Energy Storage Systems

    Directory of Open Access Journals (Sweden)

    Rosario Morello

    2013-09-01

    Full Text Available The smart grid approach is envisioned to take advantage of all available modern technologies in transforming the current power system to provide benefits to all stakeholders, in the fields of efficient energy utilisation and of wide integration of renewable sources. Energy storage systems could help to solve some issues that stem from renewable energy usage, in terms of stabilizing intermittent energy production, improving power quality and mitigating power peaks. With the integration of energy storage systems into the smart grid, their accurate modeling becomes a necessity in order to gain robust real-time control of the network, in terms of stability and energy supply forecasting. In this framework, this paper proposes a procedure to identify the values of the battery model parameters in order to best fit experimental data, and to integrate it, along with models of energy sources and electrical loads, into a complete framework representing a real-time smart grid management system. The proposed method is based on a hybrid optimisation technique, which makes combined use of a stochastic and a deterministic algorithm, with low computational burden, and can therefore be repeated over time in order to account for parameter variations due to the battery's age and usage.
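
The hybrid stochastic/deterministic idea can be sketched as a random global scan followed by a deterministic compass (pattern) search. The Thevenin-style battery model and all numbers below are invented for illustration; the paper's model and algorithms differ:

```python
import random

def pattern_search(f, x0, step=0.1, tol=1e-6):
    """Deterministic local refinement: compass/pattern search."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for d in range(len(x)):
            for s in (step, -step):
                y = list(x)
                y[d] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5               # shrink the stencil when stuck
    return x, fx

def hybrid_fit(f, bounds, n_random=200, seed=3):
    """Stochastic global scan followed by deterministic refinement."""
    rng = random.Random(seed)
    best_x, best_f = None, float('inf')
    for _ in range(n_random):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return pattern_search(f, best_x)

# Hypothetical battery data: terminal voltage under load, Thevenin-style
# model V = OCV - R * I with true OCV = 3.7 V and R = 0.05 ohm.
currents = [0.0, 0.5, 1.0, 1.5, 2.0]
voltages = [3.7 - 0.05 * i for i in currents]
sse = lambda p: sum((v - (p[0] - p[1] * i)) ** 2
                    for i, v in zip(currents, voltages))
(ocv, r), err = hybrid_fit(sse, [(3.0, 4.2), (0.0, 0.5)])
print(round(ocv, 3), round(r, 3), err)
```

The stochastic stage avoids poor local basins; the deterministic stage refines cheaply, so the whole procedure can be re-run periodically as the battery ages.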

  10. Parameters and design optimization of the ring piezoelectric ceramic transformer

    Directory of Open Access Journals (Sweden)

    Jiří Erhart

    2015-09-01

    Full Text Available The main aim of the presented paper is the theoretical analysis and experimental verification of the transformation parameters of a new type of nonhomogeneously poled ring transformer. The input part is poled in the thickness direction and the output part in the radial direction. Two transformer geometries are studied: the input part is either the inner ring segment or the outer ring segment. The optimum electrode size aspect ratios have been found experimentally as d1∕D≈0.60−0.65 for a ring with aspect ratio d∕D=0.2. The fundamental as well as higher overtone resonances were studied for the transformation ratio, the optimum resistive load, the efficiency and the no-load transformation ratio. Higher overtones have better transformation parameters than the fundamental resonance. The new ring transformer exhibits very high transformation ratios, up to 200 under no-load conditions and up to 13.4 at a high efficiency of 97% under the optimum load of 10 kΩ. The strong electric field gradient at the output circuit is applicable to electrical discharge generation.

  11. Joint optimization of collimator and reconstruction parameters in SPECT imaging for lesion quantification

    International Nuclear Information System (INIS)

    McQuaid, Sarah J; Southekal, Sudeepti; Kijewski, Marie Foley; Moore, Stephen C

    2011-01-01

    Obtaining the best possible task performance using reconstructed SPECT images requires optimization of both the collimator and reconstruction parameters. The goal of this study is to determine how to perform this optimization, namely whether the collimator parameters can be optimized solely from projection data, or whether reconstruction parameters should also be considered. In order to answer this question, and to determine the optimal collimation, a digital phantom representing a human torso with 16 mm diameter hot lesions (activity ratio 8:1) was generated and used to simulate clinical SPECT studies with parallel-hole collimation. Two approaches to optimizing the SPECT system were then compared in a lesion quantification task: sequential optimization, where collimation was optimized on projection data using the Cramer–Rao bound, and joint optimization, which simultaneously optimized collimator and reconstruction parameters. For every condition, quantification performance in reconstructed images was evaluated using the root-mean-squared-error of 400 estimates of lesion activity. Compared to the joint-optimization approach, the sequential-optimization approach favoured a poorer resolution collimator, which, under some conditions, resulted in sub-optimal estimation performance. This implies that inclusion of the reconstruction parameters in the optimization procedure is important in obtaining the best possible task performance; in this study, this was achieved with a collimator resolution similar to that of a general-purpose (LEGP) collimator. This collimator was found to outperform the more commonly used high-resolution (LEHR) collimator, in agreement with other task-based studies, using both quantification and detection tasks.
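
Why sequential optimization can be sub-optimal is easy to see with a toy model: a joint grid search over both parameters can never do worse than first fixing the collimator parameter from a projection-domain proxy. The functions below are invented purely for illustration and have no physical meaning:

```python
import math

def image_rmse(a, b):
    """Toy final-image RMSE: bias grows with resolution loss a and smoothing b,
    noise shrinks with both (invented functional forms)."""
    bias = 0.3 * a + 0.5 * b
    noise = 1.0 / (1.0 + 2.0 * a + 4.0 * b)
    return math.hypot(bias, noise)

def projection_proxy(a):
    """Toy projection-domain figure of merit used by the sequential approach
    (it knows nothing about the reconstruction parameter b)."""
    return 0.3 * a + 1.0 / (1.0 + 2.0 * a)

grid = [i / 100 for i in range(201)]   # 0.00 .. 2.00

# Sequential: pick a from the proxy, then optimize b with a fixed.
a_seq = min(grid, key=projection_proxy)
b_seq = min(grid, key=lambda b: image_rmse(a_seq, b))

# Joint: search both parameters together.
a_joint, b_joint = min(((a, b) for a in grid for b in grid),
                       key=lambda ab: image_rmse(*ab))

print(image_rmse(a_seq, b_seq) >= image_rmse(a_joint, b_joint))  # True
```

Since the sequential solution lies inside the joint search grid, the joint RMSE is never worse; in this toy the proxy also picks a strictly larger `a` than the joint search, mirroring the study's observation that the projection-only criterion favoured a poorer-resolution collimator.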

  12. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    Science.gov (United States)

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing. PMID:29385048
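
The local single-parameter sensitivity idea can be sketched as a central finite difference swept across one parameter's range; the quadratic "winding quality" response below is an invented stand-in for the real response surface:

```python
def local_sensitivity(f, base, index, values, h=1e-4):
    """Central finite-difference sensitivity of response f to parameter `index`,
    evaluated at several points `values` across that parameter's range."""
    out = []
    for v in values:
        p_lo, p_hi = list(base), list(base)
        p_lo[index], p_hi[index] = v - h, v + h
        out.append((f(p_hi) - f(p_lo)) / (2 * h))
    return out

# Invented stand-in for a winding-quality response surface:
# p = [temperature, tension]; quality peaks at (200, 60).
quality = lambda p: -((p[0] - 200.0) ** 2) / 400.0 - ((p[1] - 60.0) ** 2) / 100.0
sens = local_sensitivity(quality, base=[200.0, 60.0], index=0,
                         values=[180.0, 200.0, 220.0])
print([round(s, 3) for s in sens])
```

Regions where the magnitude of this sensitivity stays small correspond to the stable parameter intervals the article singles out; steep regions are the instability ranges.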

  13. Parameter estimation for chaotic systems with a Drift Particle Swarm Optimization method

    International Nuclear Information System (INIS)

    Sun Jun; Zhao Ji; Wu Xiaojun; Fang Wei; Cai Yujie; Xu Wenbo

    2010-01-01

    Inspired by the motion of electrons in metal conductors in an electric field, we propose a variant of Particle Swarm Optimization (PSO), called Drift Particle Swarm Optimization (DPSO) algorithm, and apply it in estimating the unknown parameters of chaotic dynamic systems. The principle and procedure of DPSO are presented, and the algorithm is used to identify Lorenz system and Chen system. The experiment results show that for the given parameter configurations, DPSO can identify the parameters of the systems accurately and effectively, and it may be a promising tool for chaotic system identification as well as other numerical optimization problems in physics.
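
The flavour of such an estimation task can be sketched with a loose one-dimensional drift-style swarm search recovering the parameter of a logistic map from a short synthetic trajectory. The update rule below (deterministic drift toward an attractor between personal and global bests, plus a shrinking random diffusion term) is a simplification and differs in detail from the published DPSO:

```python
import random

def drift_search(objective, lo, hi, n=20, iters=100, seed=7):
    """Loose 1-D sketch of a drift-style particle swarm (minimization)."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n)]
    pbest = list(x)
    pf = [objective(v) for v in x]
    g = pbest[pf.index(min(pf))]
    for t in range(iters):
        sigma = 0.5 * (hi - lo) * (1.0 - t / iters)  # shrinking diffusion
        for i in range(n):
            phi = rng.random()
            attractor = phi * pbest[i] + (1.0 - phi) * g
            xi = attractor + rng.gauss(0.0, sigma)
            x[i] = min(max(xi, lo), hi)
            fx = objective(x[i])
            if fx < pf[i]:
                pf[i], pbest[i] = fx, x[i]
        g = pbest[pf.index(min(pf))]
    return g

def trajectory(r, x0=0.2, steps=5):
    """Short logistic-map trajectory x_{n+1} = r x_n (1 - x_n)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

observed = trajectory(3.67)                       # synthetic 'measurement'
sse = lambda r: sum((a - b) ** 2 for a, b in zip(trajectory(r), observed))
r_hat = drift_search(sse, 3.5, 3.9)
print(round(r_hat, 3))
```

A short trajectory keeps the error surface tractable; for long chaotic trajectories the objective becomes highly oscillatory, which is precisely why global methods such as DPSO are attractive here.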

  14. Metallic Fuel Casting Development and Parameter Optimization Simulations

    International Nuclear Information System (INIS)

    Fielding, Randall S.; Kennedy, J.R.; Crapps, J.; Unal, C.

    2013-01-01

    Conclusions: • Gravity casting is a feasible process for casting of metallic fuels: – It may not be as robust as CGIC, being more parameter-dependent in finding the right “sweet spot” for high-quality castings; – Fluid flow is very important and is affected by mold design, vent size, superheat, etc.; – Pressure differential assist was found to be detrimental. • Simulation found that vent location is important to allow adequate filling of the mold; • Surface tension plays an important role in determining casting quality; • Casting trials and simulations highlight the need for better-characterized fluid physical and thermal properties; • Results from the simulations, such as vent location and physical property characterization, will be incorporated in the GACS design

  15. Beyond bixels: Generalizing the optimization parameters for intensity modulated radiation therapy

    International Nuclear Information System (INIS)

    Markman, Jerry; Low, Daniel A.; Beavis, Andrew W.; Deasy, Joseph O.

    2002-01-01

    Intensity modulated radiation therapy (IMRT) treatment planning systems optimize fluence distributions by subdividing the fluence distribution into rectangular bixels. The algorithms typically optimize the fluence intensity directly, often leading to fluence distributions with sharp discontinuities. These discontinuities may cause difficulties in delivering the fluence distribution, leading to inaccurate dose delivery. We have developed a method for decoupling the bixel intensities from the optimization parameters, either by introducing optimization control points from which the bixel intensities are interpolated, or by parametrizing the fluence distribution using basis functions. In either case, the number of optimization search parameters is reduced relative to the direct bixel optimization method. To illustrate the concept, the technique is applied to two-dimensional idealized head and neck treatment plans. The interpolation algorithms investigated were nearest-neighbor, linear and cubic-spline interpolation; radial basis functions served as the basis-function test. The interpolation and basis-function optimization techniques were compared against the direct bixel calculation. The number of optimization parameters was significantly reduced relative to the bixel optimization, and this was evident in a reduction of computation time of as much as 58% from the full bixel optimization. The dose distributions obtained using the reduced optimization parameter sets were very similar to those of the full bixel optimization when examined by dose distributions, statistics, and dose-volume histograms. To evaluate the sensitivity of the fluence calculations to spatial misalignment caused either by delivery errors or patient motion, the doses were recomputed with a 1 mm shift in each beam and compared to the unshifted distributions. Except for the nearest-neighbor algorithm, the reduced-optimization-parameter dose distributions were generally less sensitive to spatial shifts than the full bixel optimization.
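
The control-point idea can be sketched as linear interpolation from a few optimization parameters to a full bixel profile. This is a one-dimensional illustration with invented intensities; the paper also considers nearest-neighbor, cubic-spline and radial-basis-function variants:

```python
def bixels_from_control_points(control, n_bixels):
    """Linearly interpolate a fluence profile of n_bixels values from a small
    set of evenly spaced optimization control points."""
    m = len(control)
    out = []
    for k in range(n_bixels):
        t = k * (m - 1) / (n_bixels - 1)   # position in control-point units
        i = min(int(t), m - 2)             # left control point index
        frac = t - i                       # fractional distance to the next one
        out.append((1.0 - frac) * control[i] + frac * control[i + 1])
    return out

# 5 optimization parameters generate a smooth 17-bixel fluence profile.
profile = bixels_from_control_points([0.0, 0.8, 1.0, 0.6, 0.2], 17)
print(len(profile), round(profile[8], 3))
```

The optimizer then searches over the 5 control values instead of 17 bixel intensities, and the interpolation itself suppresses the sharp bixel-to-bixel discontinuities.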

  16. Design And Modeling An Automated Digsilent Power System For Optimal New Load Locations

    Directory of Open Access Journals (Sweden)

    Mohamed Saad

    2015-08-01

    Full Text Available Abstract The electric power utilities seek to take advantage of novel approaches to meet growing energy demand. Utilities are under pressure to evolve their classical topologies to increase the usage of distributed generation. Currently, electrical power engineers in many regions of the world use manual methods to measure power consumption for further assessment of voltage violations. Such a process has proved to be time-consuming, costly and inaccurate. Demand response is a grid management technique in which retail or wholesale customers are requested, either electronically or manually, to reduce their load. This paper therefore aims to design and model an automated power system for optimal new load locations using DPL (the DIgSILENT Programming Language). This study is a diagnostic approach that informs the system operator of any voltage violations that would occur when adding a new load to the grid. Identifying the optimal bus bar location involves a complicated calculation of the power consumption at each load bus. The DPL program therefore considers all the internal network data of the IEEE 30-bus system and executes a load-flow simulation to add the new load, beginning with the first bus in the network. The developed model then simulates the new load at each available bus bar in the network and generates three analytical reports for each case, capturing the over/under-voltage and element loading conditions across the grid.

  17. PARAMETERS OPTIMIZATION OF METAL-DIELECTRIC NANOSTRUCTURES FOR SENSOR APPLICATIONS

    Directory of Open Access Journals (Sweden)

    V. I. Egorov

    2014-07-01

    Full Text Available We present calculation results for the optical properties of silver nanoparticles with a dielectric shell, in relation to their applications in chemical and biological sensors. Absorption cross-sections for spherical silver nanoparticles were calculated in the quasi-static dipole approximation. It is shown that a dielectric shell thickness of 2-3 nm and a shell refractive index of 1.5 to 1.75 are optimal. The calculation results were compared to experimental data, and the sensitivity of the metal-dielectric nanostructures to the external refractive index was investigated experimentally. Synthesis of silver nanoparticles with a dielectric shell on a glass surface was performed by nanosecond laser ablation of the near-surface glass layer at a wavelength of 1.06 μm (Solar LQ129). Synthesis of silver nanoparticles without a shell on the surface of glass containing silver ions was performed using thermal treatment in a wet atmosphere. A Cary 500 spectrophotometer (Varian) was used for spectral measurements. When the laser ablation method is applied and the external refractive index changes from 1 (air) to 1.33 (water), a plasmon resonance band shift of 6 nm occurs. With the other method under the same conditions, the registered shift was 13 nm; however, in the latter case the particles can easily be removed from the substrate surface. The results obtained will be useful for developing chemical and biological sensors based on plasmon resonance band shifts.

  18. Steam condenser optimization using Real-parameter Genetic Algorithm for Prototype Fast Breeder Reactor

    International Nuclear Information System (INIS)

    Jayalal, M.L.; Kumar, L. Satish; Jehadeesan, R.; Rajeswari, S.; Satya Murty, S.A.V.; Balasubramaniyan, V.; Chetal, S.C.

    2011-01-01

    Highlights: → We model design optimization of a vital reactor component using a Genetic Algorithm. → A Real-parameter Genetic Algorithm is used for the steam condenser optimization study. → A comparative analysis is done with various Genetic Algorithm related mechanisms. → The results obtained are validated against the reference study results. - Abstract: This work explores the use of a Real-parameter Genetic Algorithm and analyses its performance in the steam condenser (or Circulating Water System) optimization study of a 500 MW fast breeder nuclear reactor. The choice of optimum condenser design parameters for a power plant from among a large number of technically viable combinations is a complex task. This is primarily due to the conflicting nature of the economic implications of the different system parameters for maximizing the capitalized profit. In order to find the optimum design parameters, a Real-parameter Genetic Algorithm model is developed and applied. The results obtained are validated against the reference study results.
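
A real-parameter (real-coded) GA of this general kind can be sketched as follows. The operators (tournament selection, BLX-alpha blend crossover, Gaussian mutation) and the toy condenser cost surrogate are generic illustrative choices, not the paper's exact formulation:

```python
import random

def real_ga(cost, bounds, pop_size=40, gens=80, cx_alpha=0.5,
            mut_sigma=0.1, mut_rate=0.2, seed=11):
    """Minimal real-parameter GA (minimization) with elitism."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda v, d: min(max(v, bounds[d][0]), bounds[d][1])
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(gens):
        new_pop = [list(best)]                       # elitism: keep the best
        while len(new_pop) < pop_size:
            p1 = min(rng.sample(pop, 3), key=cost)   # tournament of 3
            p2 = min(rng.sample(pop, 3), key=cost)
            child = []
            for d in range(dim):
                lo, hi = sorted((p1[d], p2[d]))
                span = (hi - lo) * cx_alpha          # BLX-alpha blend crossover
                v = rng.uniform(lo - span, hi + span)
                if rng.random() < mut_rate:          # Gaussian mutation
                    v += rng.gauss(0.0, mut_sigma * (bounds[d][1] - bounds[d][0]))
                child.append(clip(v, d))
            new_pop.append(child)
        pop = new_pop
        best = min(pop + [best], key=cost)
    return best

# Toy stand-in for a condenser cost model: penalize deviation from a
# hypothetical optimal tube length (12 m) and cooling-water velocity (2 m/s).
cost = lambda x: (x[0] - 12.0) ** 2 + 4.0 * (x[1] - 2.0) ** 2
tube_len, velocity = real_ga(cost, [(5.0, 20.0), (0.5, 4.0)])
print(round(tube_len, 2), round(velocity, 2))
```

Operating directly on real-valued genes avoids the discretization that binary-coded GAs impose on continuous design parameters such as these.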

  19. Optimization of process parameters in drilling of fibre hybrid composite using Taguchi and grey relational analysis

    Science.gov (United States)

    Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.

    2017-03-01

    Nowadays quality plays a vital role in all products. Hence, developments in manufacturing processes focus on fabricating composites with high dimensional accuracy at low manufacturing cost. In this work, an investigation of machining parameters was performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized over three machining input parameters: drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing the individual tool parameters, and Analysis of Variance is used to find the significance of the individual parameters. Simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the most effect on material removal rate and surface roughness, followed by feed rate.
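The grey relational step described above turns the two responses into a single grade per experimental run: normalize each response to [0, 1] (smaller-the-better for roughness, larger-the-better for removal rate), compute grey relational coefficients against the ideal, and average them. The drilling data below are invented for illustration; only the procedure follows the abstract.

```python
# Grey relational analysis on a made-up drilling data set
# (Ra is smaller-the-better, MRR is larger-the-better).
runs = {            # run id: (surface roughness Ra [um], MRR [mm^3/min])
    1: (3.2, 110.0),
    2: (2.1, 95.0),
    3: (2.8, 140.0),
    4: (1.9, 130.0),
}

def normalize(values, larger_better):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if larger_better else (hi - v) / (hi - lo)
            for v in values]

ra_n = normalize([v[0] for v in runs.values()], larger_better=False)
mrr_n = normalize([v[1] for v in runs.values()], larger_better=True)

def grey_coeff(xs, zeta=0.5):
    """Grey relational coefficient with distinguishing coefficient zeta."""
    deltas = [1.0 - x for x in xs]            # deviation from the ideal (=1)
    dmin, dmax = min(deltas), max(deltas)
    return [(dmin + zeta * dmax) / (d + zeta * dmax) for d in deltas]

# Grey relational grade = (equally weighted) mean coefficient over responses.
grades = [(g1 + g2) / 2.0 for g1, g2 in zip(grey_coeff(ra_n), grey_coeff(mrr_n))]
best_run = max(zip(runs, grades), key=lambda t: t[1])[0]
print(dict(zip(runs, (round(g, 3) for g in grades))), "best:", best_run)
```

The run with the highest grade is the best compromise between the two conflicting responses, which is exactly what the single-response Taguchi analysis cannot provide on its own.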

  20. Steam condenser optimization using Real-parameter Genetic Algorithm for Prototype Fast Breeder Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Jayalal, M.L., E-mail: jayalal@igcar.gov.in [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India); Kumar, L. Satish, E-mail: satish@igcar.gov.in [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India); Jehadeesan, R., E-mail: jeha@igcar.gov.in [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India); Rajeswari, S., E-mail: raj@igcar.gov.in [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India); Satya Murty, S.A.V., E-mail: satya@igcar.gov.in [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India); Balasubramaniyan, V.; Chetal, S.C. [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India)

    2011-10-15

    Highlights: > We model design optimization of a vital reactor component using a Genetic Algorithm. > A Real-parameter Genetic Algorithm is used for the steam condenser optimization study. > A comparative analysis is done with various Genetic Algorithm related mechanisms. > The results obtained are validated against the reference study results. - Abstract: This work explores the use of a Real-parameter Genetic Algorithm and analyses its performance in the steam condenser (or Circulating Water System) optimization study of a 500 MW fast breeder nuclear reactor. Choosing optimum condenser design parameters for a power plant from among a large number of technically viable combinations is a complex task, primarily because of the conflicting economic implications of the different system parameters for maximizing the capitalized profit. To find the optimum design parameters, a Real-parameter Genetic Algorithm model is developed and applied. The results obtained are validated against the reference study results.

  1. Optimization of Performance Parameters for Large Area Silicon Photomultipliers

    Science.gov (United States)

    Janzen, Kathryn

    2008-10-01

    The goal of the GlueX experiment is to search for exotic hybrid mesons as evidence of gluonic excitations in an effort to better understand confinement. A key component of the GlueX detector is the electromagnetic barrel calorimeter (BCAL) located immediately inside a superconducting solenoid of approximately 2.5 T. Because of this arrangement, traditional vacuum photomultiplier tubes (PMTs), which are affected significantly by magnetic fields, cannot be used on the BCAL. The use of silicon photomultipliers (SiPMs) as front-end detectors has been proposed. While the largest SiPMs previously employed by other experiments are 1x1 mm^2, GlueX proposes to use large-area SiPMs, each composed of sixteen 3x3 mm^2 cells in a 4x4 array. This puts the GlueX collaboration in the unique position of driving the technology for larger-area sensors. In this talk I will discuss tests done in Regina regarding performance parameters of prototype SiPM arrays delivered by SensL, a photonics research and development company based in Ireland, as well as sample 1x1 mm^2 and 3x3 mm^2 SiPMs.

  2. OPTIMIZING THE DESIGN OF THE SYSTEMS OF INFORMATION PROTECTION IN AUTOMATED INFORMATIONAL SYSTEMS OF INDUSTRIAL ENTERPRISES

    Directory of Open Access Journals (Sweden)

    I. E. L'vovich

    2014-01-01

    Full Text Available Summary. Automation is now widely applied to increase the efficiency and operability of complex systems. The growing role of information, which has become a universal commodity in the relationships between various structures, makes the question of its protection especially topical. Particular value is attached to optimal design in the creation of protection systems, which allows the best decisions to be chosen from a set of alternatives with the greatest probability. This has become relevant for the majority of industrial enterprises, since a correctly designed and deployed protection system underpins the successful functioning and competitiveness of the whole organization. The stages of work on creating an information security system for an industrial enterprise are presented. Attention is focused on one of the most important approaches to optimal design: multi-alternative optimization. The article considers the structure of a protection system from the point of view of several models, each of which gives an idea of particular features of the design of the system as a whole. Special attention is paid to the problem of creating an information security system, as it has the most complex structure. Tasks are formulated for automating each design stage of the information security system of an industrial enterprise, and an outline of each stage of the design work is given, which clarifies the internal structure of building a protection system and makes it possible to state the requirements for a reliable information security complex for an industrial enterprise. This, in turn, makes it possible to mitigate risks at the early design stages of protection systems and to organize and define the necessary types of hardware-software complexes for the future system.

  3. Understanding Innovation Engines: Automated Creativity and Improved Stochastic Optimization via Deep Learning.

    Science.gov (United States)

    Nguyen, A; Yosinski, J; Clune, J

    2016-01-01

    The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search mitigates this problem by encouraging exploration in all interesting directions by replacing the performance objective with a reward for novel behaviors. This reward for novel behaviors has traditionally required a human-crafted, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and herons instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a DNN-based novelty search in the image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g., churches, mosques, obelisks, etc.). Here, we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: for example, producing intelligent software, robot controllers, optimized physical components, and art.
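The core Novelty Search mechanic the abstract builds on can be shown in a toy setting: genomes are 2D points, the "behavior" is the point itself, and selection rewards only the mean distance to the k nearest behaviors in an archive, with no performance objective at all. This is a hand-crafted behavioral distance, i.e. exactly what the Innovation Engine replaces with a DNN; the setup below is illustrative only.

```python
import random

random.seed(0)

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def novelty(b, archive, k=5):
    """Mean distance to the k nearest archived behaviors (higher = more novel)."""
    ds = sorted(dist(b, other) for other in archive)
    return sum(ds[:k]) / min(k, len(ds))

archive = [(0.0, 0.0)]
pop = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
for _ in range(50):
    scored = sorted(pop, key=lambda b: novelty(b, archive), reverse=True)
    archive.extend(scored[:3])            # archive the most novel behaviors
    parents = scored[:10]                 # select for novelty, not fitness
    pop = [(p[0] + random.gauss(0, 0.2), p[1] + random.gauss(0, 0.2))
           for p in parents for _ in range(2)]

spread = max(dist(a, b) for a in archive for b in archive)
print(len(archive), round(spread, 2))
```

Because novelty is the only selection pressure, the archive steadily spreads outward from the starting region instead of converging on a single optimum.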

  4. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, M; Li, R; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Ye, Y [Stanford University, Management Science and Engineering, Stanford, CA (United States); Boyd, S [Stanford University, Electrical Engineering, Stanford, CA (United States)

    2014-06-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques, namely column generation, the gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles along the gradient. The algorithm continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the plan quality.
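The pattern search component mentioned in the Methods, a derivative-free poll along coordinate directions with mesh contraction, can be sketched as follows. The 2D objective is a toy stand-in for the station-parameter objective, not the treatment-planning model itself.

```python
# Compass/pattern search sketch: poll along coordinate directions and shrink
# the step when no poll point improves. Toy 2D objective with optimum (1, -0.5).
def objective(x):
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

def pattern_search(x, step=1.0, tol=1e-6):
    fx = objective(x)
    while step > tol:
        improved = False
        for d in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
            cand = [x[0] + d[0], x[1] + d[1]]
            fc = objective(cand)
            if fc < fx:                     # accept the first improving poll
                x, fx, improved = cand, fc, True
                break
        if not improved:
            step *= 0.5                     # contract the mesh and re-poll
    return x, fx

x, fx = pattern_search([4.0, 3.0])
print([round(v, 4) for v in x], round(fx, 8))   # → [1.0, -0.5] 0.0
```

Because the poll directions span the space, pattern search can make progress where gradient steps stall, which is why the abstract pairs it with the gradient method.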

  5. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    International Nuclear Information System (INIS)

    Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S

    2014-01-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques, namely column generation, the gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles along the gradient. The algorithm continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the plan quality.

  6. THE PARAMETER OPTIMIZATION MODEL OF INVESTMENT AND CONSTRUCTION PROJECTS AND MANAGERIAL FEASIBILITY OF THEIR BEHAVIOR

    Directory of Open Access Journals (Sweden)

    P. Ye. Uvarov

    2009-09-01

    Full Text Available In this article, the basic problem of substantiating the parameters of an optimization model of organizational-technological solutions for investment and construction projects within a project management system is considered.

  7. An automated optimization tool for high-dose-rate (HDR) prostate brachytherapy with divergent needle pattern

    Science.gov (United States)

    Borot de Battisti, M.; Maenhout, M.; de Senneville, B. Denis; Hautvast, G.; Binnekamp, D.; Lagendijk, J. J. W.; van Vulpen, M.; Moerland, M. A.

    2015-10-01

    Focal high-dose-rate (HDR) for prostate cancer has gained increasing interest as an alternative to whole gland therapy as it may contribute to the reduction of treatment-related toxicity. For focal treatment, optimal needle guidance and placement is warranted. This can be achieved under MR guidance. However, MR-guided needle placement is currently not possible due to space restrictions in the closed MR bore. To overcome this problem, a MR-compatible, single-divergent needle-implant robotic device is under development at the University Medical Centre, Utrecht: placed between the legs of the patient inside the MR bore, this robot will tap the needle in a divergent pattern from a single rotation point into the tissue. This rotation point is just beneath the perineal skin to have access to the focal prostate tumor lesion. Currently, there is no treatment planning system commercially available which allows optimization of the dose distribution with such needle arrangement. The aim of this work is to develop an automatic inverse dose planning optimization tool for focal HDR prostate brachytherapy with needle insertions in a divergent configuration. A complete optimizer workflow is proposed which includes the determination of (1) the position of the center of rotation, (2) the needle angulations and (3) the dwell times. Unlike most currently used optimizers, no prior selection or adjustment of input parameters such as minimum or maximum dose or weight coefficients for treatment region and organs at risk is required. To test this optimizer, a planning study was performed on ten patients (treatment volumes ranged from 8.5 cm3 to 23.3 cm3) by using 2-14 needle insertions. The total computation time of the optimizer workflow was below 20 min and a clinically acceptable plan was reached on average using only four needle insertions.

  8. SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dengwang; Wang, Jie [College of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China); Kapp, Daniel S.; Xing, Lei [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)

    2015-06-15

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation; the remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as an optimization process of an implicit function, in which the liver region was optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers in the panel data were segmented manually by physicians, and we then estimated the parameters of a GMM (Gaussian mixture model) and an MRF (Markov random field); a shape dictionary was built utilizing the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to the rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization; furthermore, H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated between the local and global optimization until the stopping conditions (maximum iterations and changing rate) were satisfied. Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy on the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is

  9. SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization

    International Nuclear Information System (INIS)

    Li, Dengwang; Wang, Jie; Kapp, Daniel S.; Xing, Lei

    2015-01-01

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation; the remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as an optimization process of an implicit function, in which the liver region was optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers in the panel data were segmented manually by physicians, and we then estimated the parameters of a GMM (Gaussian mixture model) and an MRF (Markov random field); a shape dictionary was built utilizing the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to the rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization; furthermore, H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated between the local and global optimization until the stopping conditions (maximum iterations and changing rate) were satisfied. Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy on the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is

  10. Automated selection of the optimal cardiac phase for single-beat coronary CT angiography reconstruction

    International Nuclear Information System (INIS)

    Stassi, D.; Ma, H.; Schmidt, T. G.; Dutta, S.; Soderman, A.; Pazzani, D.; Gros, E.; Okerlund, D.

    2016-01-01

    Purpose: Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. Methods: An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three
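One plausible ingredient of the per-slice vessel image-quality metric described above is circularity, 4πA/P², which is 1.0 for a perfect circle and drops as a through-plane vessel cross-section becomes blurred or elongated by motion. The exact metric in the paper is not specified here; the polygon-based computation below is an illustrative stand-in.

```python
import math

def polygon_area_perimeter(pts):
    """Area (shoelace formula) and perimeter of a closed polygon."""
    area, perim = 0.0, 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        area += x1 * y2 - x2 * y1
        perim += math.hypot(x2 - x1, y2 - y1)
    return abs(area) / 2.0, perim

def circularity(pts):
    area, perim = polygon_area_perimeter(pts)
    return 4.0 * math.pi * area / perim ** 2   # 1.0 for a perfect circle

def ellipse(a, b, n=360):
    return [(a * math.cos(2 * math.pi * i / n), b * math.sin(2 * math.pi * i / n))
            for i in range(n)]

# A crisp through-plane vessel scores near 1.0; a motion-elongated one scores lower.
print(round(circularity(ellipse(1.0, 1.0)), 3),
      round(circularity(ellipse(3.0, 1.0)), 3))
```

Aggregating such a score across slices, as the abstract describes, lets the phase with the sharpest vessel cross-sections be picked without any interphase motion estimation.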

  11. A Parameter Estimation Method for Nonlinear Systems Based on Improved Boundary Chicken Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Shaolong Chen

    2016-01-01

    Full Text Available Parameter estimation is an important problem in nonlinear system modeling and control. By constructing an appropriate fitness function, parameter estimation of a system can be converted into a multidimensional parameter optimization problem. As a novel swarm intelligence algorithm, chicken swarm optimization (CSO) has attracted much attention owing to its good global convergence and robustness. In this paper, a method based on improved boundary chicken swarm optimization (IBCSO) is proposed for parameter estimation of nonlinear systems, demonstrated and tested on the Lorenz system and a coupled motor system. Furthermore, we have analyzed the influence of the time series on the estimation accuracy. Computer simulation results show that the method is feasible and achieves desirable performance for parameter estimation of nonlinear systems.
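The fitness-function construction the abstract relies on can be shown concretely: simulate the Lorenz system, compare against "observed" data, and search the bounded parameter interval for the value minimizing the error. A plain grid search stands in below for the paper's improved boundary chicken swarm optimizer; the point is the problem formulation, not the search algorithm.

```python
# Parameter estimation as fitness minimization on the Lorenz system.
def lorenz_traj(sigma, rho=28.0, beta=8.0 / 3.0, dt=0.01, steps=300):
    x, y, z = 1.0, 1.0, 1.0
    traj = []
    for _ in range(steps):                  # forward-Euler integration
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        traj.append(x)
    return traj

observed = lorenz_traj(sigma=10.0)          # synthetic "measurement" data

def fitness(sigma):
    """Mean squared error between simulated and observed x(t)."""
    sim = lorenz_traj(sigma)
    return sum((a - b) ** 2 for a, b in zip(sim, observed)) / len(observed)

candidates = [5.0 + 0.1 * i for i in range(101)]   # bounded search interval [5, 15]
best = min(candidates, key=fitness)
print(best, fitness(best))
```

Any bounded global optimizer (CSO, PSO, a GA) can be dropped in place of the grid once the fitness function is defined; the abstract's point about the time series is visible here too, since a longer trajectory sharpens the fitness landscape around the true parameter.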

  12. Adjoint Parameter Sensitivity Analysis for the Hydrodynamic Lattice Boltzmann Method with Applications to Design Optimization

    DEFF Research Database (Denmark)

    Pingen, Georg; Evgrafov, Anton; Maute, Kurt

    2009-01-01

    We present an adjoint parameter sensitivity analysis formulation and solution strategy for the lattice Boltzmann method (LBM). The focus is on design optimization applications, in particular topology optimization. The lattice Boltzmann method is briefly described with an in-depth discussion...

  13. Real-time parameter optimization based on neural network for smart injection molding

    Science.gov (United States)

    Lee, H.; Liau, Y.; Ryu, K.

    2018-03-01

    The manufacturing industry has been facing several challenges, including the sustainability, performance and quality of production. Manufacturers attempt to enhance their competitiveness by implementing CPS (Cyber-Physical Systems) at the manufacturing process level through the convergence of IoT (Internet of Things) and ICT (Information & Communication Technology). The injection molding process has a short cycle time and high productivity, features that make it suitable for mass production. In addition, the process is used to produce precise parts in various industries such as automobiles, optics and medical devices. Injection molding involves a mixture of discrete and continuous variables, all of which must be considered in order to optimize quality. Furthermore, optimal parameter setting is time-consuming work, since process parameters cannot easily be corrected during process execution. In this research, we propose a neural-network-based real-time process parameter optimization methodology that sets optimal process parameters by using mold data, molding machine data, and response data. This paper is expected to make an academic contribution as a novel study of parameter optimization during production, in contrast to the pre-production parameter optimization of typical studies.
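The core idea, learning a process model quality = f(parameters) with a neural network and then choosing the parameter setting whose predicted quality is best, can be sketched with a tiny 1-4-1 network trained by manual backpropagation. The "process" below is a synthetic function, not real molding data, and the single scalar parameter is a deliberate simplification of the abstract's multi-variable setting.

```python
import math
import random

random.seed(42)

def process(p):                       # hidden ground truth: best quality near p = 0.6
    return (p - 0.6) ** 2             # lower = better (e.g., a defect score)

data = [(p / 20.0, process(p / 20.0)) for p in range(21)]

H = 4                                  # one hidden layer of 4 tanh units
w1 = [random.gauss(0, 1) for _ in range(H)]; b1 = [0.0] * H
w2 = [random.gauss(0, 1) for _ in range(H)]; b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def epoch(lr=0.05):
    """One pass of per-sample gradient descent on squared error."""
    global b2
    loss = 0.0
    for x, t in data:
        h, y = forward(x)
        e = y - t
        loss += e * e
        for j in range(H):
            gh = e * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * e * h[j]
            w1[j] -= lr * gh * x
            b1[j] -= lr * gh
        b2 -= lr * e
    return loss / len(data)

first = epoch()
for _ in range(2000):
    last = epoch()

# Use the learned model to pick the parameter with the best predicted quality.
best_p = min((p / 100.0 for p in range(101)), key=lambda p: forward(p)[1])
print(round(first, 4), round(last, 5), best_p)
```

Once trained, the surrogate is cheap to query, which is what makes parameter selection feasible in real time while the physical process is running.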

  14. Network optimization including gas lift and network parameters under subsurface uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Schulze-Riegert, R.; Baffoe, J.; Pajonk, O. [SPT Group GmbH, Hamburg (Germany); Badalov, H.; Huseynov, S. [Technische Univ. Clausthal, Clausthal-Zellerfeld (Germany). ITE; Trick, M. [SPT Group, Calgary, AB (Canada)

    2013-08-01

    Optimization of oil and gas field production systems poses a great challenge to field development due to complex and multiple interactions between various operational design parameters and subsurface uncertainties. Conventional analytical methods are capable of finding local optima based on single deterministic models. They are less applicable for efficiently generating alternative design scenarios in a multi-objective context. Practical implementations of robust optimization workflows integrate the evaluation of alternative design scenarios and multiple realizations of subsurface uncertainty descriptions. Production or economic performance indicators such as NPV (Net Present Value) are linked to a risk-weighted objective function definition to guide the optimization processes. This work focuses on an integrated workflow using a reservoir-network simulator coupled to an optimization framework. The work will investigate the impact of design parameters while considering the physics of the reservoir, wells, and surface facilities. Subsurface uncertainties are described by well parameters such as inflow performance. Experimental design methods are used to investigate parameter sensitivities and interactions. Optimization methods are used to find optimal design parameter combinations which improve key performance indicators of the production network system. The proposed workflow will be applied to a representative oil reservoir coupled to a network which is modelled by an integrated reservoir-network simulator. Gas-lift will be included as an explicit measure to improve production. An objective function will be formulated for the net present value of the integrated system including production revenue and facility costs. Facility and gas lift design parameters are tuned to maximize NPV. Well inflow performance uncertainties are introduced with an impact on gas lift performance. 
Resulting variances in NPV are identified as a risk measure for the optimized system design.

  15. Integrated optimization of location assignment and sequencing in multi-shuttle automated storage and retrieval systems under modified 2n-command cycle pattern

    Science.gov (United States)

    Yang, Peng; Peng, Yongfei; Ye, Bin; Miao, Lixin

    2017-09-01

    This article explores the integrated optimization problem of location assignment and sequencing in multi-shuttle automated storage/retrieval systems under the modified 2n-command cycle pattern. The decisions of storage and retrieval (S/R) location assignment and S/R request sequencing are jointly considered. An integer quadratic programming model is formulated to describe this integrated optimization problem. The optimal travel cycles for multi-shuttle S/R machines can be obtained to process S/R requests in the storage and retrieval request order lists by solving the model. The small-sized instances are optimally solved using CPLEX. For large-sized problems, two tabu search algorithms are proposed, in which first come, first served and nearest neighbour are used to generate initial solutions. Various numerical experiments are conducted to examine the heuristics' performance and the sensitivity of the algorithm parameters. Furthermore, the experimental results are analysed from the viewpoint of practical application, and a parameter list for applying the proposed heuristics is recommended under different real-life scenarios.
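The tabu search idea used for the large instances can be sketched on a one-dimensional simplification: requests are rack positions, the cost is the total travel of a single S/R machine visiting them in sequence from a depot at 0, and a first-come-first-served order seeds the search, as in the abstract. The data and cost model are made up for illustration; the real problem is an integer quadratic program over locations and sequences.

```python
# Tabu search sketch for S/R request sequencing with a swap neighborhood.
requests = [8, 2, 9, 1, 7, 3, 10, 4]         # rack positions, FCFS order

def travel(seq):
    cost, pos = 0.0, 0
    for r in seq:
        cost += abs(r - pos)
        pos = r
    return cost + pos                         # return to the depot at 0

def tabu_search(seq, iters=60, tenure=5):
    best, cur = list(seq), list(seq)
    tabu = {}                                 # move -> iteration at which it expires
    for it in range(iters):
        move, cand, cand_cost = None, None, float("inf")
        for i in range(len(cur)):
            for j in range(i + 1, len(cur)):
                nbr = cur[:]
                nbr[i], nbr[j] = nbr[j], nbr[i]
                c = travel(nbr)
                # Aspiration: a tabu move is allowed if it beats the best so far.
                if (tabu.get((i, j), -1) < it or c < travel(best)) and c < cand_cost:
                    move, cand, cand_cost = (i, j), nbr, c
        if cand is None:
            break
        cur = cand
        tabu[move] = it + tenure              # forbid re-swapping for a while
        if cand_cost < travel(best):
            best = cand
    return best, travel(best)

best, cost = tabu_search(requests)
print(requests, travel(requests), "->", best, cost)
```

The tabu list lets the search accept non-improving swaps without cycling, which is what distinguishes it from plain local descent on the same swap neighborhood.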

  16. Optimization of Indoor Thermal Comfort Parameters with the Adaptive Network-Based Fuzzy Inference System and Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Li

    2017-01-01

    Full Text Available The goal of this study is to improve thermal comfort and indoor air quality with the adaptive network-based fuzzy inference system (ANFIS model and improved particle swarm optimization (PSO algorithm. A method to optimize air conditioning parameters and installation distance is proposed. The methodology is demonstrated through a prototype case, which corresponds to a typical laboratory in colleges and universities. A laboratory model is established, and simulated flow field information is obtained with the CFD software. Subsequently, the ANFIS model is employed instead of the CFD model to predict indoor flow parameters, and the CFD database is utilized to train ANN input-output “metamodels” for the subsequent optimization. With the improved PSO algorithm and the stratified sequence method, the objective functions are optimized. The functions comprise PMV, PPD, and mean age of air. The optimal installation distance is determined with the hemisphere model. Results show that most of the staff obtain a satisfactory degree of thermal comfort and that the proposed method can significantly reduce the cost of building an experimental device. The proposed methodology can be used to determine appropriate air supply parameters and air conditioner installation position for a pleasant and healthy indoor environment.
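A minimal PSO loop in the spirit of the abstract: search over two air-supply parameters (temperature in °C, velocity in m/s) to minimize a discomfort score. The quadratic score below is a made-up stand-in for the PMV/PPD/mean-age-of-air objectives that the paper evaluates through the ANFIS metamodel; the PSO mechanics are the standard inertia-plus-attraction update.

```python
import random

random.seed(7)

def discomfort(t, v):
    """Made-up comfort penalty with a best setting at 24 degC, 0.25 m/s."""
    return (t - 24.0) ** 2 / 4.0 + (v - 0.25) ** 2 * 40.0

BOUNDS = [(18.0, 30.0), (0.05, 1.0)]          # supply temperature, air velocity

def clip(x, lo, hi):
    return max(lo, min(hi, x))

n, w, c1, c2 = 15, 0.7, 1.5, 1.5              # swarm size, inertia, attraction
pos = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pos, key=lambda p: discomfort(*p))[:]

for _ in range(100):
    for i in range(n):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] = clip(pos[i][d] + vel[i][d], *BOUNDS[d])
        if discomfort(*pos[i]) < discomfort(*pbest[i]):
            pbest[i] = pos[i][:]
        if discomfort(*pos[i]) < discomfort(*gbest):
            gbest = pos[i][:]

print([round(x, 3) for x in gbest], round(discomfort(*gbest), 6))
```

In the paper the expensive CFD evaluation is replaced by the trained ANFIS metamodel inside exactly this kind of loop, which is what makes repeated objective evaluations affordable.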

  17. Optimization of parameters for semiempirical methods VI: more modifications to the NDDO approximations and re-optimization of parameters.

    Science.gov (United States)

    Stewart, James J P

    2013-01-01

    Modern semiempirical methods are of sufficient accuracy when used in the modeling of molecules of the same type as used as reference data in the parameterization. Outside that subset, however, there is an abundance of evidence that these methods are of very limited utility. In an attempt to expand the range of applicability, a new method called PM7 has been developed. PM7 was parameterized using experimental and high-level ab initio reference data, augmented by a new type of reference data intended to better define the structure of parameter space. The resulting method was tested by modeling crystal structures and heats of formation of solids. Two changes were made to the set of approximations: a modification was made to improve the description of noncovalent interactions, and two minor errors in the NDDO formalism were rectified. Average unsigned errors (AUEs) in geometry and ΔHf for PM7 were reduced relative to PM6; for simple gas-phase organic systems, the AUE in bond lengths decreased by about 5% and the AUE in ΔHf decreased by about 10%; for organic solids, the AUE in ΔHf dropped by 60% and the reduction was 33.3% for geometries. A two-step process (PM7-TS) for calculating the heights of activation barriers has been developed. Using PM7-TS, the AUE in the barrier heights for simple organic reactions was decreased from values of 12.6 kcal/mol in PM6 and 10.8 kcal/mol in PM7 to 3.8 kcal/mol. The origins of the errors in NDDO methods have been examined, and were found to be attributable to inadequate and inaccurate reference data. This conclusion provides insight into how these methods can be improved.

  18. Optimizing human-system interface automation design based on a skill-rule-knowledge framework

    International Nuclear Information System (INIS)

    Lin, Chiuhsiang Joe; Yenn, T.-C.; Yang, C.-W.

    2010-01-01

    This study considers the technological change that has occurred in complex systems within the past 30 years. The role of human operators in controlling and interacting with complex systems following the technological change was also investigated. Modernization of instrumentation and control systems and components leads to a new issue of human-automation interaction, in which human operational performance must be considered in automated systems. The human-automation interaction can differ in its types and levels. A system design issue is usually raised: given these technical capabilities, which system functions should be automated, and to what extent? A good automation design can be achieved by making an appropriate human-automation function allocation. To our knowledge, only a few studies have been published on how to achieve appropriate automation design with a systematic procedure. Further, there is a surprising lack of information on examining and validating the influences of levels of automation (LOAs) on instrumentation and control systems in the advanced control room (ACR). The study we present in this paper proposed a systematic framework to help in making an appropriate decision towards types of automation (TOA) and LOAs based on a 'Skill-Rule-Knowledge' (SRK) model. From the evaluation results, it was shown that the use of either automatic mode or semiautomatic mode alone is insufficient to prevent human errors. To prevent the occurrence of human errors and ensure safety in the ACR, the proposed framework can be valuable for making decisions in human-automation allocation.

  19. Sensitivity of the optimal parameter settings for a LTE packet scheduler

    NARCIS (Netherlands)

    Fernandez-Diaz, I.; Litjens, R.; van den Berg, C.A.; Dimitrova, D.C.; Spaey, K.

    Advanced packet scheduling schemes in 3G/3G+ mobile networks provide one or more parameters to optimise the trade-off between QoS and resource efficiency. In this paper we study the sensitivity of the optimal parameter setting for packet scheduling in LTE radio networks with respect to various

  20. The primary ion source for construction and optimization of operation parameters

    International Nuclear Information System (INIS)

    Synowiecki, A.; Gazda, E.

    1986-01-01

    The construction of a primary ion source for SIMS is presented. The influence of individual operating parameters on the properties of the ion source has been investigated. Optimization of these parameters made it possible to assess the usefulness of the ion source for SIMS studies. 14 refs., 8 figs., 2 tabs. (author)

  1. Experimental Verification of Statistically Optimized Parameters for Low-Pressure Cold Spray Coating of Titanium

    Directory of Open Access Journals (Sweden)

    Damilola Isaac Adebiyi

    2016-06-01

    Full Text Available The cold spray coating process involves many process parameters, which make the process very complex and highly sensitive to small changes in these parameters. This results in a small operational window for the parameters. Consequently, mathematical optimization of the process parameters is key, not only to achieving deposition but also to improving coating quality. This study focuses on the mathematical identification and experimental justification of the optimum process parameters for cold spray coating of a titanium alloy with silicon carbide (SiC). The continuity, momentum, and energy equations governing the flow through the low-pressure cold spray nozzle were solved by introducing a constitutive equation to close the system. This was used to calculate the critical velocity for the deposition of SiC. In order to determine the input temperature that yields the calculated velocity, the distributions of velocity, temperature, and pressure in the cold spray nozzle were analyzed, and the exit values were predicted using the meshing tool of SolidWorks. Coatings fabricated using the optimized parameters and some non-optimized parameters are compared. The coating produced with the CFD-optimized parameters yielded lower porosity and higher hardness.

  2. Study on feed forward neural network convex optimization for LiFePO4 battery parameters

    Science.gov (United States)

    Liu, Xuepeng; Zhao, Dongmei

    2017-08-01

    Parameter identification is analyzed for the LiFePO4 battery used in modern automated agricultural equipment. An improved process model for the lithium battery is proposed, and an on-line estimation algorithm is presented. The parameters of the battery are identified using a feed-forward neural network convex optimization algorithm.

  3. Intermolecular Force Field Parameters Optimization for Computer Simulations of CH4 in ZIF-8

    Directory of Open Access Journals (Sweden)

    Phannika Kanthima

    2016-01-01

    Full Text Available The differential evolution (DE) algorithm is applied for obtaining the optimized intermolecular interaction parameters between CH4 and 2-methylimidazolate ([C4N2H5]−) using quantum binding energies of CH4-[C4N2H5]− complexes. The initial parameters and their upper/lower bounds are obtained from the general AMBER force field. The DE-optimized and the AMBER parameters are then used in molecular dynamics (MD) simulations of CH4 molecules in the frameworks of ZIF-8. The results show that the DE parameters represent the quantum interaction energies better than the AMBER parameters. The dynamical and structural behaviors obtained from MD simulations with the two sets of parameters also differ notably.
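
    The fitting loop described above can be sketched in miniature: a pure-Python DE/rand/1/bin implementation that recovers Lennard-Jones parameters from synthetic "reference" energies. The potential form, the reference data, and the bounds below are illustrative stand-ins, not the paper's AMBER or quantum values.

```python
import random

def lj(r, eps, sigma):
    # Lennard-Jones pair energy (a stand-in for the CH4...[C4N2H5]- interaction)
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# synthetic "quantum binding energies" generated from known parameters
r_grid = [3.2 + 0.1 * i for i in range(12)]
ref = [lj(r, 0.30, 3.40) for r in r_grid]

def cost(p):
    eps, sigma = p
    return sum((lj(r, eps, sigma) - e) ** 2 for r, e in zip(r_grid, ref))

def diff_evolution(cost, bounds, pop_size=25, f=0.8, cr=0.9, gens=250, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [cost(p) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)            # guaranteed crossover index
            trial = list(pop[i])
            for j in range(dim):
                if j == jr or rng.random() < cr:
                    lo, hi = bounds[j]
                    trial[j] = min(max(pop[a][j] + f * (pop[b][j] - pop[c][j]), lo), hi)
            fc = cost(trial)
            if fc <= fit[i]:                   # greedy one-to-one selection
                pop[i], fit[i] = trial, fc
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

best, err = diff_evolution(cost, [(0.05, 1.0), (2.5, 4.5)])
```

    With a smooth two-parameter objective like this one, the loop recovers the generating (eps, sigma) essentially exactly; the real problem differs mainly in the number of parameters and the cost of each energy evaluation.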

  4. Chemical Reactor Automation as a way to Optimize a Laboratory Scale Polymerization Process

    Science.gov (United States)

    Cruz-Campa, Jose L.; Saenz de Buruaga, Isabel; Lopez, Raymundo

    2004-10-01

    The automation of the registration and control of the variables involved in a chemical reactor improves the reaction process by making it faster, optimized, and free of human error. The objective of this work is to register and control the variables involved (temperatures, reactant fluxes, weights, etc.) in an emulsion polymerization reaction. The programs and control algorithms were developed in the G language in LabVIEW®. The designed software is able to send and receive RS232-codified data between the devices (pumps, temperature sensors, mixer, balances, and so on) and a personal computer. The transduction from digital information to movement or measurement actions of the devices is done by electronic components included in the devices. Once the programs were completed and validated, emulsion polymerization reactions were run to validate the system. Moreover, some advanced heat-estimation algorithms were implemented in order to determine the heat released by the reaction and to estimate and control chemical variables in-line. All the information obtained from the reaction is stored on the PC and is then available and ready to use in any commercial data-processing software. This system is now being used in a research center to run emulsion polymerizations under efficient, controlled conditions with reproducible results. The experience gained from this project may be used in the implementation of chemical estimation algorithms at pilot-plant or industrial scale.
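
    The heat-estimation step can be sketched with a simplified reactor energy balance, Qr = m·cp·dT/dt + UA·(Tr − Tj), inverted from logged temperatures. All numerical values below are made up for illustration and are not taken from the actual rig.

```python
# simplified energy balance: m_cp * dT/dt = Qr - UA * (Tr - Tj)
m_cp = 2000.0   # J/K  - heat capacity of reactor contents (assumed value)
UA   = 15.0     # W/K  - jacket heat-transfer coefficient x area (assumed)
Tj   = 60.0     # degC - constant jacket temperature (assumed)
dt   = 1.0      # s    - sampling period
q_true = 500.0  # W    - constant heat released by the polymerization (assumed)

# forward-simulate the "measured" reactor temperature log
T = [60.0]
for _ in range(600):
    T.append(T[-1] + dt * (q_true - UA * (T[-1] - Tj)) / m_cp)

def estimate_qr(T, k):
    # invert the energy balance from logged temperatures (central difference)
    dTdt = (T[k + 1] - T[k - 1]) / (2.0 * dt)
    return m_cp * dTdt + UA * (T[k] - Tj)

q_est = estimate_qr(T, 300)   # in-line estimate of the reaction heat
```

    In practice the same inversion runs on each new sample as it arrives, with filtering of the temperature derivative; this sketch omits sensor noise and losses to the surroundings.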

  5. An optimization design proposal of automated guided vehicles for mixed type transportation in hospital environments.

    Science.gov (United States)

    González, Domingo; Romero, Luis; Espinosa, María Del Mar; Domínguez, Manuel

    2017-01-01

    The aim of this paper is to present an optimization proposal for the design of the automated guided vehicles used in hospital logistics, as well as to analyze the impact of its implementation in a real environment. This proposal is based on the design of those elements that would allow the vehicles to deliver an extra cart by the towing method. Thus, the proposal aims to improve the productivity and the performance of the current vehicles by using a transportation method of combined carts. The study has been developed following concurrent engineering premises from three different viewpoints. First, the sequence of operations is described, and second, a design proposal for the equipment is undertaken. Finally, the impact of the proposal is analyzed according to real data from the Hospital Universitario Rio Hortega in Valladolid (Spain). In this particular case, implementing the analyzed proposal in the hospital can reduce the current time of use by over 35%. This result may allow new tasks to be added to the vehicles, and accordingly, both a new kind of vehicle and a specific module can be developed in order to achieve better performance.

  6. An optimization design proposal of automated guided vehicles for mixed type transportation in hospital environments.

    Directory of Open Access Journals (Sweden)

    Domingo González

    Full Text Available The aim of this paper is to present an optimization proposal for the design of the automated guided vehicles used in hospital logistics, as well as to analyze the impact of its implementation in a real environment. This proposal is based on the design of those elements that would allow the vehicles to deliver an extra cart by the towing method. Thus, the proposal aims to improve the productivity and the performance of the current vehicles by using a transportation method of combined carts. The study has been developed following concurrent engineering premises from three different viewpoints. First, the sequence of operations is described, and second, a design proposal for the equipment is undertaken. Finally, the impact of the proposal is analyzed according to real data from the Hospital Universitario Rio Hortega in Valladolid (Spain). In this particular case, implementing the analyzed proposal in the hospital can reduce the current time of use by over 35%. This result may allow new tasks to be added to the vehicles, and accordingly, both a new kind of vehicle and a specific module can be developed in order to achieve better performance.

  7. Combustion Model and Control Parameter Optimization Methods for Single Cylinder Diesel Engine

    Directory of Open Access Journals (Sweden)

    Bambang Wahono

    2014-01-01

    Full Text Available This research presents a method to construct a combustion model and a method to optimize some control parameters of a diesel engine in order to develop a model-based control system. The purpose of constructing the model is to appropriately manage control parameters so as to obtain the target values of fuel consumption and emissions as the engine output objectives. A stepwise method considering multicollinearity was applied to construct the combustion model with a polynomial model. Using the experimental data of a single-cylinder diesel engine, models of power, BSFC, NOx, and soot for multiple-injection diesel engines were built. The proposed method successfully developed a model that relates the control parameters to the engine outputs. Although many control devices can be mounted on a diesel engine, an optimization technique is required to find optimal engine operating conditions efficiently, besides the existing development of individual emission control methods. Particle swarm optimization (PSO) was used to calculate control parameters that optimize fuel consumption and emissions based on the model. The proposed method is able to calculate control parameters efficiently to optimize the evaluation items based on the model. Finally, the model combined with PSO was compiled onto a microcontroller.
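
    A minimal PSO of the kind used here can be run on a made-up quadratic surrogate standing in for the fitted BSFC model. The parameter names (injection timing, rail pressure), the optimum location, and all coefficients are invented for illustration.

```python
import random

def pso(cost, bounds, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=3):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [cost(p) for p in pos]
    g = pval.index(min(pval))
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # clamp each particle to the feasible operating range
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            v = cost(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

# surrogate "BSFC" model: a quadratic bowl with an assumed optimum at
# injection timing 12 degCA and rail pressure 80 MPa (hypothetical numbers)
def bsfc(p):
    timing, rail = p
    return 200.0 + 0.8 * (timing - 12.0) ** 2 + 0.05 * (rail - 80.0) ** 2

best, val = pso(bsfc, [(0.0, 30.0), (40.0, 160.0)])
```

    On the real problem the cost function is the stepwise-fitted polynomial model rather than this bowl, but the search loop is unchanged.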

  8. Optimization of machining parameters of turning operations based on multi performance criteria

    Directory of Open Access Journals (Sweden)

    N.K. Mandal

    2013-01-01

    Full Text Available The selection of optimum machining parameters plays a significant role in ensuring product quality, reducing manufacturing cost, and increasing productivity in computer-controlled manufacturing processes. For many years, multi-objective optimization of turning, owing to the inherent complexity of the process, has been a competitive engineering issue. This study investigates multi-response optimization of the turning process for an optimal parametric combination to yield the minimum power consumption, surface roughness, and frequency of tool vibration using Grey relational analysis (GRA). A confirmation test is conducted for the optimal machining parameters to validate the result. Various turning parameters, such as spindle speed, feed, and depth of cut, are considered. Experiments are designed and conducted based on a full factorial design of experiments.
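
    The grey relational grade computation can be sketched as follows for smaller-the-better responses: normalize each response, measure the deviation from the ideal sequence, convert to grey relational coefficients, and average into a grade per run. The four runs and their response values below are invented for illustration.

```python
# rows: experimental runs; columns: power (W), roughness (um), vibration (Hz)
# all three responses are smaller-the-better
runs = [
    [410, 2.1, 95],
    [380, 1.6, 80],   # dominates every column in this toy data
    [450, 2.8, 120],
    [395, 1.9, 88],
]

def gra_grades(data, zeta=0.5):
    cols = list(zip(*data))
    # smaller-the-better normalization to [0, 1]
    norm_cols = [[(max(c) - x) / (max(c) - min(c)) for x in c] for c in cols]
    norm = list(zip(*norm_cols))
    # deviation from the ideal sequence (all ones)
    dev = [[1.0 - x for x in row] for row in norm]
    dmin = min(min(r) for r in dev)
    dmax = max(max(r) for r in dev)
    # grey relational coefficients with distinguishing coefficient zeta
    coef = [[(dmin + zeta * dmax) / (d + zeta * dmax) for d in row] for row in dev]
    # grade = mean coefficient across responses (equal weights assumed)
    return [sum(row) / len(row) for row in coef]

grades = gra_grades(runs)
best = grades.index(max(grades))
```

    The run with the highest grade is the best compromise across all three responses; in a full factorial study the grades are then averaged per factor level to pick the optimal setting of each parameter.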

  9. Prediction Model of Battery State of Charge and Control Parameter Optimization for Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Bambang Wahono

    2015-07-01

    Full Text Available This paper presents the construction of a battery state of charge (SOC) prediction model and an optimization method for the said model to appropriately manage the control parameters in compliance with the SOC as the battery output objective. The Research Centre for Electrical Power and Mechatronics, Indonesian Institute of Sciences, has tested its electric vehicle research prototype on the road, monitoring its voltage, current, temperature, time, vehicle velocity, motor speed, and SOC during operation. Using this experimental data, the prediction model of battery SOC was built. A stepwise method considering multicollinearity was able to efficiently develop a battery prediction model that describes the multiple control parameters in relation to characteristic values such as SOC. It was demonstrated that particle swarm optimization (PSO) successfully and efficiently calculated the optimal control parameters to optimize an evaluation item such as SOC based on the model.
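
    One common way to "consider multicollinearity" before stepwise selection is to screen predictors with variance inflation factors (VIF). The sketch below, on synthetic battery-log-like data, flags a power channel that is nearly a linear function of voltage; the data, channel names, and the VIF > 10 rule of thumb are illustrative assumptions, not taken from the paper.

```python
import random

def ols(X, y):
    # solve the normal equations (X^T X) beta = X^T y by Gaussian elimination
    n, p = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    for i in range(p):
        for j in range(i + 1, p):
            f = A[j][i] / A[i][i]
            for k in range(i, p):
                A[j][k] -= f * A[i][k]
            b[j] -= f * b[i]
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][k] * beta[k] for k in range(i + 1, p))) / A[i][i]
    return beta

def r_squared(X, y):
    beta = ols(X, y)
    pred = [sum(b * x for b, x in zip(beta, row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def vif(cols, j):
    # variance inflation factor: regress predictor j on the others (plus intercept)
    y = cols[j]
    X = [[1.0] + [cols[k][i] for k in range(len(cols)) if k != j]
         for i in range(len(y))]
    return 1.0 / (1.0 - r_squared(X, y))

rng = random.Random(0)
volt  = [rng.gauss(350.0, 10.0) for _ in range(50)]      # battery voltage log
temp  = [rng.gauss(25.0, 3.0) for _ in range(50)]        # independent channel
power = [2.0 * v + rng.gauss(0.0, 1.0) for v in volt]    # nearly a function of voltage
vifs = [vif([volt, temp, power], j) for j in range(3)]
# volt and power inflate each other (VIF >> 10); temp stays near 1
```

    A stepwise procedure would drop or merge the inflated channels before fitting the SOC polynomial, since coefficients estimated from collinear predictors are unstable.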

  10. Optimization of parameters for the inline-injection system at Brookhaven Accelerator Test Facility

    International Nuclear Information System (INIS)

    Parsa, Z.; Ko, S.K.

    1995-01-01

    We present some of our parameter optimization results, obtained with the code PARMELA, for the ATF inline-injection system. The new solenoid-gun-solenoid-drift-linac scheme would improve the beam quality needed for FEL and other experiments at ATF, as compared to the beam quality of the original injection system design. To optimize the gain in beam quality we have considered various parameters, including the accelerating field gradient on the photocathode, the solenoid field strengths, the separation between the gun and the entrance to the linac, as well as the initial charge distributions. The effect of changes in these parameters on the beam emittance is also given.

  11. Warpage improvement on wheel caster by optimizing the process parameters using genetic algorithm (GA)

    Science.gov (United States)

    Safuan, N. S.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.

    2017-09-01

    In the injection moulding process, defects are always encountered and affect the final product's shape and functionality. This study concerns minimizing warpage and optimizing the process parameters of an injection-moulded part. Apart from eliminating product waste, this project also provides the best recommended parameter settings. This research studied five parameters. The optimization showed that warpage improved by 42.64%, from 0.6524 mm to 0.30879 mm, according to the Autodesk Moldflow Insight (AMI) simulation result and the Genetic Algorithm (GA), respectively.

  12. Nonlinear Time Series Prediction Using LS-SVM with Chaotic Mutation Evolutionary Programming for Parameter Optimization

    International Nuclear Information System (INIS)

    Xu Ruirui; Chen Tianlun; Gao Chengfeng

    2006-01-01

    Nonlinear time series prediction is studied by using an improved least squares support vector machine (LS-SVM) regression based on a chaotic mutation evolutionary programming (CMEP) approach for parameter optimization. We analyze how the prediction error varies with different parameters (σ, γ) of the LS-SVM. In order to select appropriate parameters for the prediction model, we employ the CMEP algorithm. Finally, Nasdaq stock data are predicted by using this LS-SVM regression based on CMEP, and satisfactory results are obtained.
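
    LS-SVM regression reduces training to a single linear system, [[0, 1ᵀ], [1, K + I/γ]]·[b; α] = [0; y], where K is the kernel matrix. A self-contained sketch with an RBF kernel on a toy sine series follows; the hyperparameter values are illustrative and are precisely the kind of (σ, γ) choices the CMEP search in the paper is meant to make.

```python
import math

def rbf(x, z, sigma2):
    return math.exp(-(x - z) ** 2 / (2.0 * sigma2))

def lssvm_fit(x, y, gamma=100.0, sigma2=0.5):
    n = len(x)
    # assemble the (n+1) x (n+1) LS-SVM system:
    # [0   1^T          ] [b    ]   [0]
    # [1   K + I/gamma  ] [alpha] = [y]
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    rhs = [0.0] * (n + 1)
    for i in range(n):
        A[0][i + 1] = A[i + 1][0] = 1.0
        rhs[i + 1] = y[i]
        for j in range(n):
            A[i + 1][j + 1] = rbf(x[i], x[j], sigma2) + (1.0 / gamma if i == j else 0.0)
    # Gaussian elimination with partial pivoting (the top-left entry is 0)
    for col in range(n + 1):
        piv = max(range(col, n + 1), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n + 1):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    sol = [0.0] * (n + 1)
    for r in range(n, -1, -1):
        sol[r] = (rhs[r] - sum(A[r][c] * sol[c] for c in range(r + 1, n + 1))) / A[r][r]
    b, alpha = sol[0], sol[1:]
    return lambda q: b + sum(a * rbf(q, xi, sigma2) for a, xi in zip(alpha, x))

x = [i * 0.2 for i in range(30)]     # toy "time series" sample points
y = [math.sin(t) for t in x]
f = lssvm_fit(x, y)
err = max(abs(f(t) - math.sin(t)) for t in x)
```

    Because the loss is quadratic, there is no QP to solve as in a standard SVM; the entire training cost is this one linear solve, which is why the outer parameter search can afford many evaluations.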

  13. Automation of POST Cases via External Optimizer and "Artificial p2" Calculation

    Science.gov (United States)

    Dees, Patrick D.; Zwack, Mathew R.

    2017-01-01

    optimizer functions like any other gradient-based optimizer. It has a specified variable to optimize whose value is represented as optval, a set of dependent constraints to meet with associated forms and tolerances whose value is represented as p2, and a set of independent variables known as the u-vector to modify in pursuit of optimality. Each of these quantities are calculated or manipulated at a certain phase within the trajectory. The optimizer is further constrained by the requirement that the input u-vector must result in a trajectory which proceeds through each of the prescribed events in the input file. For example, if the input u-vector causes the vehicle to crash before it can achieve the orbital parameters required for a parking orbit, then the run will fail without engaging the optimizer, and a p2 value of exactly zero is returned. This poses a problem, as this "non-connecting" region of the u-vector space is far larger than the "connecting" region which returns a non-zero value of p2 and can be worked on by the internal optimizer. Finding this connecting region and more specifically the global optimum within this region has traditionally required the use of an expert analyst.

  14. A procedure for multi-objective optimization of tire design parameters

    Directory of Open Access Journals (Sweden)

    Nikola Korunović

    2015-04-01

    Full Text Available The identification of optimal tire design parameters for satisfying different requirements, i.e. tire performance characteristics, plays an essential role in tire design. In order to improve tire performance characteristics, formulation and solving of multi-objective optimization problem must be performed. This paper presents a multi-objective optimization procedure for determination of optimal tire design parameters for simultaneous minimization of strain energy density at two distinctive zones inside the tire. It consists of four main stages: pre-analysis, design of experiment, mathematical modeling and multi-objective optimization. Advantage of the proposed procedure is reflected in the fact that multi-objective optimization is based on the Pareto concept, which enables design engineers to obtain a complete set of optimization solutions and choose a suitable tire design. Furthermore, modeling of the relationships between tire design parameters and objective functions based on multiple regression analysis minimizes computational and modeling effort. The adequacy of the proposed tire design multi-objective optimization procedure has been validated by performing experimental trials based on finite element method.

  15. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization

    Directory of Open Access Journals (Sweden)

    Huanqing Cui

    2017-03-01

    Full Text Available Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors’ memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.
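
    The constriction-coefficient variant singled out above multiplies the velocity update by the Clerc-Kennedy factor χ = 2/|2 − φ − √(φ² − 4φ)| with φ = c1 + c2 > 4; for the customary φ = 4.1 this gives χ ≈ 0.7298. A quick computation:

```python
import math

def constriction(phi):
    # Clerc-Kennedy constriction coefficient; requires phi = c1 + c2 > 4
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

chi = constriction(4.1)   # the customary setting c1 = c2 = 2.05
# velocity update with constriction:
#   v <- chi * (v + c1*r1*(pbest - x) + c2*r2*(gbest - x))
```

    Larger φ shrinks χ (e.g. φ = 4.5 gives χ = 0.5), damping the swarm more aggressively; the constriction form guarantees convergence of the deterministic particle dynamics without an explicit velocity clamp.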

  16. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization.

    Science.gov (United States)

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-03-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.

  17. Analysis of parameter estimation and optimization application of ant colony algorithm in vehicle routing problem

    Science.gov (United States)

    Xu, Quan-Li; Cao, Yu-Wei; Yang, Kun

    2018-03-01

    Ant Colony Optimization (ACO) is among the most widely used artificial intelligence algorithms at present. This study introduced the principle and mathematical model of the ACO algorithm for solving the Vehicle Routing Problem (VRP) and designed a vehicle routing optimization model based on ACO. A vehicle routing optimization simulation system was then developed using the C++ programming language, and sensitivity analyses, estimations, and improvements of the three key parameters of ACO were carried out. The results indicated that the ACO algorithm designed in this paper can efficiently solve the rational planning and optimization of VRP, that different values of the key parameters have a significant influence on the performance and optimization effects of the algorithm, and that the improved algorithm does not easily converge prematurely to local optima and has good robustness.
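
    A minimal ant colony loop of the kind analyzed can be shown on a tiny routing instance (a single-vehicle tour, i.e. a TSP). The instance, the parameter values α, β, ρ, and the colony size below are illustrative, not the paper's settings.

```python
import math
import random

def aco_tsp(pts, ants=10, iters=80, alpha=1.0, beta=3.0, rho=0.5, q=1.0, seed=7):
    n = len(pts)
    d = [[math.dist(pts[i], pts[j]) for j in range(n)] for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    rng = random.Random(seed)
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(ants):
            start = rng.randrange(n)
            tour, seen = [start], {start}
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in seen]
                # transition weights: pheromone^alpha * (1/distance)^beta
                w = [tau[i][j] ** alpha * (1.0 / d[i][j]) ** beta for j in cand]
                j = rng.choices(cand, weights=w)[0]
                tour.append(j)
                seen.add(j)
            length = sum(d[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporation, then deposit proportional to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length
    return best_tour, best_len

# 6 customer sites on a unit circle: the optimal round trip is the perimeter, length 6.0
pts = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3)) for k in range(6)]
tour, length = aco_tsp(pts)
```

    The three parameters the paper's sensitivity analysis targets correspond to alpha, beta, and rho here: alpha and beta trade off trail-following against greedy distance bias, and rho controls how quickly old trails are forgotten.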

  18. Oyster Creek cycle 10 nodal model parameter optimization study using PSMS

    International Nuclear Information System (INIS)

    Dougher, J.D.

    1987-01-01

    The power shape monitoring system (PSMS) is an on-line core monitoring system that uses a three-dimensional nodal code (NODE-B) to perform nodal power calculations and compute thermal margins. The PSMS contains a parameter optimization function that improves the ability of NODE-B to accurately monitor core power distributions. This function iterates on the model normalization parameters (albedos and mixing factors) to obtain the best agreement between predicted and measured traversing in-core probe (TIP) readings on a statepoint-by-statepoint basis. Following several statepoint optimization runs, an average set of optimized normalization parameters can be determined and implemented into the current or subsequent cycle core model for on-line core monitoring. A statistical analysis of 19 high-power steady-state statepoints throughout Oyster Creek cycle 10 operation has shown consistently poor virgin model performance. The normalization parameters used in the cycle 10 NODE-B model were based on a cycle 8 study, which evaluated only Exxon fuel types. The introduction of General Electric (GE) fuel into cycle 10 (172 assemblies) was a significant fuel/core design change that could have altered the optimum set of normalization parameters. Based on the need to evaluate a potential change in the model normalization parameters for cycle 11, and in an attempt to account for the poor cycle 10 model performance, a parameter optimization study was performed.

  19. SV-AUTOPILOT: optimized, automated construction of structural variation discovery and benchmarking pipelines.

    Science.gov (United States)

    Leung, Wai Yi; Marschall, Tobias; Paudel, Yogesh; Falquet, Laurent; Mei, Hailiang; Schönhuth, Alexander; Maoz Moss, Tiffanie Yael

    2015-03-25

    Many tools exist to predict structural variants (SVs), utilizing a variety of algorithms. However, they have largely been developed and tested on human germline or somatic (e.g. cancer) variation. It seems appropriate to exploit this wealth of technology available for humans also for other species. Objectives of this work included: a) Creating an automated, standardized pipeline for SV prediction. b) Identifying the best tool(s) for SV prediction through benchmarking. c) Providing a statistically sound method for merging SV calls. The SV-AUTOPILOT meta-tool platform is an automated pipeline for standardization of SV prediction and SV tool development in paired-end next-generation sequencing (NGS) analysis. SV-AUTOPILOT comes in the form of a virtual machine, which includes all datasets, tools and algorithms presented here. The virtual machine easily allows one to add, replace and update genomes, SV callers and post-processing routines and therefore provides an easy, out-of-the-box environment for complex SV discovery tasks. SV-AUTOPILOT was used to make a direct comparison between 7 popular SV tools on the Arabidopsis thaliana genome using the Landsberg (Ler) ecotype as a standardized dataset. Recall and precision measurements suggest that Pindel and Clever were the most adaptable to this dataset across all size ranges while Delly performed well for SVs larger than 250 nucleotides. A novel, statistically-sound merging process, which can control the false discovery rate, reduced the false positive rate on the Arabidopsis benchmark dataset used here by >60%. SV-AUTOPILOT provides a meta-tool platform for future SV tool development and the benchmarking of tools on other genomes using a standardized pipeline. It optimizes detection of SVs in non-human genomes using statistically robust merging. The benchmarking in this study has demonstrated the power of 7 different SV tools for analyzing different size classes and types of structural variants. The optional merge

  20. Quantitative traits for the tail suspension test: automation, optimization, and BXD RI mapping.

    Science.gov (United States)

    Lad, Heena V; Liu, Lin; Payá-Cano, José L; Fernandes, Cathy; Schalkwyk, Leonard C

    2007-07-01

    Immobility in the tail suspension test (TST) is considered a model of despair in a stressful situation, and acute treatment with antidepressants reduces immobility. Inbred strains of mouse exhibit widely differing baseline levels of immobility in the TST and several quantitative trait loci (QTLs) have been nominated. The labor of manual scoring and various scoring criteria make obtaining robust data and comparisons across different laboratories problematic. Several studies have validated strain gauge and video analysis methods by comparison with manual scoring. We set out to find objective criteria for automated scoring parameters that maximize the biological information obtained, using a video tracking system on tapes of tail suspension tests of 24 lines of the BXD recombinant inbred panel and the progenitor strains C57BL/6J and DBA/2J. The maximum genetic effect size is captured using the highest time resolution and a low mobility threshold. Dissecting the trait further by comparing genetic association of multiple measures reveals good evidence for loci involved in immobility on chromosomes 4 and 15. These are best seen when using a high threshold for immobility, despite the overall better heritability at the lower threshold. A second trial of the test has greater duration of immobility and a completely different genetic profile. Frequency of mobility is also an independent phenotype, with a distal chromosome 1 locus.
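
    The effect of the mobility threshold on the scored phenotype can be shown directly: the same tracking trace yields different immobility totals under a low and a high threshold, which is why the study treats them as partly independent measures. The speed trace and threshold values below are invented for illustration.

```python
# toy 1 Hz tracking-speed trace (arbitrary units); a frame counts as
# immobile when speed falls below the chosen threshold
speeds = [0.0, 0.2, 5.0, 7.5, 0.1, 0.0, 0.3, 6.0, 0.0, 0.0]

def immobile_seconds(speeds, threshold, dt=1.0):
    # total time spent below threshold, at sampling period dt
    return sum(dt for s in speeds if s < threshold)

low  = immobile_seconds(speeds, 0.5)   # strict definition: only near-zero frames
high = immobile_seconds(speeds, 6.5)   # permissive: slow struggling also counts
```

    Here the strict threshold scores 7 s of immobility while the permissive one scores 9 s from the identical trace; with real data, heritability and QTL signal can differ between the two definitions, as the abstract reports.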

  1. Assessing FPAR Source and Parameter Optimization Scheme in Application of a Diagnostic Carbon Flux Model

    Energy Technology Data Exchange (ETDEWEB)

    Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A

    2009-02-26

    The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.
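
    The single-site vs. cross-site contrast can be sketched with a light-use-efficiency toy model, GPP = ε·FPAR·PAR, fitting ε by least squares through the origin per site and then pooled. All numbers are invented, and CFLUX itself has more parameters than this one-coefficient caricature.

```python
def fit_lue(x, y):
    # least-squares light-use efficiency through the origin: GPP = eps * (FPAR * PAR)
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

# toy per-site data: x = FPAR * PAR (MJ m^-2 d^-1), y = "observed" GPP (gC m^-2 d^-1)
site1_x = [4.8, 7.0, 9.6, 6.0]
site1_y = [1.9 * v for v in site1_x]      # site 1 behaves like eps = 1.9
site2_x = [3.0, 5.5, 8.0, 6.5]
site2_y = [1.5 * v for v in site2_x]      # site 2 behaves like eps = 1.5

eps1 = fit_lue(site1_x, site1_y)                            # single-site fits
eps2 = fit_lue(site2_x, site2_y)
eps_cross = fit_lue(site1_x + site2_x, site1_y + site2_y)   # cross-site fit
```

    The pooled ε lands between the two site-specific values: it fits either site slightly worse than its own parameter, but generalizes better to a site whose data were not used in fitting, which is the trade-off the study quantifies.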

  2. Use of multilevel modeling for determining optimal parameters of heat supply systems

    Science.gov (United States)

    Stennikov, V. A.; Barakhtenko, E. A.; Sokolov, D. V.

    2017-07-01

    The problem of finding optimal parameters of a heat-supply system (HSS) is in ensuring the required throughput capacity of a heat network by determining pipeline diameters and characteristics and location of pumping stations. Effective methods for solving this problem, i.e., the method of stepwise optimization based on the concept of dynamic programming and the method of multicircuit optimization, were proposed in the context of the hydraulic circuit theory developed at Melentiev Energy Systems Institute (Siberian Branch, Russian Academy of Sciences). These methods enable us to determine optimal parameters of various types of piping systems due to flexible adaptability of the calculation procedure to intricate nonlinear mathematical models describing features of used equipment items and methods of their construction and operation. The new and most significant results achieved in developing methodological support and software for finding optimal parameters of complex heat supply systems are presented: a new procedure for solving the problem based on multilevel decomposition of a heat network model that makes it possible to proceed from the initial problem to a set of interrelated, less cumbersome subproblems with reduced dimensionality; a new algorithm implementing the method of multicircuit optimization and focused on the calculation of a hierarchical model of a heat supply system; the SOSNA software system for determining optimum parameters of intricate heat-supply systems and implementing the developed methodological foundation. The proposed procedure and algorithm enable us to solve engineering problems of finding the optimal parameters of multicircuit heat supply systems having large (real) dimensionality, and are applied in solving urgent problems related to the optimal development and reconstruction of these systems. 
The developed methodological foundation and software can be used for designing heat supply systems in the Central and the Admiralty regions in

  3. Chaotic invasive weed optimization algorithm with application to parameter estimation of chaotic systems

    International Nuclear Information System (INIS)

    Ahmadi, Mohamadreza; Mojallali, Hamed

    2012-01-01

Highlights: ► A new meta-heuristic optimization algorithm. ► Integration of invasive weed optimization and chaotic search methods. ► A novel parameter identification scheme for chaotic systems. - Abstract: This paper introduces a novel hybrid optimization algorithm that takes advantage of the stochastic properties of chaotic search and the invasive weed optimization (IWO) method. In order to deal with the weaknesses of the conventional method, the proposed chaotic invasive weed optimization (CIWO) algorithm incorporates the capabilities of chaotic search methods. The functionality of the proposed optimization algorithm is investigated through several benchmark multi-dimensional functions. Furthermore, an identification technique for chaotic systems based on the CIWO algorithm is outlined and validated by several examples. The results obtained with the proposed scheme demonstrate superior performance with respect to other conventional methods.
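The record above couples a population method (IWO) with chaotic search. As a hedged illustration only, not the authors' algorithm, the sketch below uses a logistic map to drive a chaotic probe of a candidate's neighborhood, with a sphere function standing in for the benchmark objectives:

```python
def logistic_map(x, r=4.0):
    # Fully chaotic logistic map; chaotic-search hybrids use its dense,
    # non-repeating orbit in place of pseudo-random draws.
    return r * x * (1.0 - x)

def chaotic_local_search(f, center, radius, x0=0.37, steps=200):
    """Probe a neighborhood of `center` with a chaotic sequence and keep
    the best point found (minimization). Illustrative sketch only."""
    best, best_f = list(center), f(center)
    x = x0
    for _ in range(steps):
        cand = []
        for c in center:
            x = logistic_map(x)
            cand.append(c + radius * (2.0 * x - 1.0))  # map (0,1) -> (-radius, radius)
        fc = f(cand)
        if fc < best_f:
            best, best_f = cand, fc
    return best, best_f

sphere = lambda v: sum(u * u for u in v)  # stand-in benchmark objective
best, best_f = chaotic_local_search(sphere, [0.5, -0.4], radius=1.0)
print(best, best_f)
```

In a hybrid such as CIWO, a search like this would refine the seeds produced by the population step rather than run stand-alone.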

  4. A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process

    Science.gov (United States)

    Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa

    2017-06-01

High-density polyethylene (HDPE) pipes find versatile applicability for the transportation of water, sewage and slurry from one place to another. Hence, these pipes must withstand substantial pressure from the fluid carried. The present work entails the optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology stresses a trial-and-error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters, which results in the setting of less-than-optimal values. Hence, there arises a need to determine optimal process control parameters for the pipe extrusion process that can ensure robust pipe quality and process reliability. In the proposed optimization strategy, design of experiments (DoE) is conducted wherein different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise ratio (S/N ratio) is applied, and ultimately the optimum values of the process control parameters are obtained as: a pushing zone temperature of 166 °C, a dimmer speed of 8 rpm, and a die head temperature of 192 °C. A confirmation experimental run was also conducted to verify the analysis; its results proved to be in agreement with the main experimental findings, and the withstanding pressure showed a significant improvement from 0.60 to 1.004 MPa.
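The larger-the-better S/N ratio underlying this kind of Taguchi analysis is easy to compute. The sketch below uses made-up pressure readings, not the paper's data, to show how two parameter settings would be ranked:

```python
import math

def sn_larger_is_better(values):
    """Taguchi larger-the-better signal-to-noise ratio in dB:
    eta = -10 * log10( (1/n) * sum(1/y^2) ).
    A higher eta indicates a stronger, more robust response."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / n)

# Hypothetical replicated withstanding-pressure readings (MPa) for two
# control-parameter combinations; the numbers are illustrative only.
trial_a = [0.60, 0.62, 0.59]
trial_b = [1.00, 1.05, 0.95]

print(sn_larger_is_better(trial_a))
print(sn_larger_is_better(trial_b))
```

The setting with the higher S/N ratio (here `trial_b`) would be preferred; averaging such ratios per factor level is what yields the optimum combination reported in studies like this one.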

  5. Selection of Near Optimal Laser Cutting Parameters in CO2 Laser Cutting by the Taguchi Method

    Directory of Open Access Journals (Sweden)

    Miloš MADIĆ

    2013-12-01

Full Text Available Identification of laser cutting conditions that are insensitive to parameter variations and noise is of great importance. This paper demonstrates the application of the Taguchi method for optimization of surface roughness in CO2 laser cutting of stainless steel. The laser cutting experiment was planned and conducted according to Taguchi's experimental design using the L27 orthogonal array. Four laser cutting parameters (laser power, cutting speed, assist gas pressure, and focus position) were considered in the experiment. Using the analysis of means and analysis of variance, the significant laser cutting parameters were identified, and subsequently the optimal combination of laser cutting parameter levels was determined. The results showed that the cutting speed is the most significant parameter affecting the surface roughness, whereas the influence of the assist gas pressure can be neglected. It was observed, however, that interaction effects have predominant influence over the main effects on the surface roughness.

  6. Application of Factorial Design for Gas Parameter Optimization in CO2 Laser Welding

    DEFF Research Database (Denmark)

    Gong, Hui; Dragsted, Birgitte; Olsen, Flemming Ove

    1997-01-01

The effect of different gas process parameters involved in CO2 laser welding has been studied by applying two sets of three-level complete factorial designs. In this work 5 gas parameters (gas type, gas flow rate, gas blowing angle, gas nozzle diameter, and gas blowing point-offset) are optimized. The bead-on-plate welding specimens are evaluated by a number of quality characteristics, such as the penetration depth and the seam width. The significance of the gas parameters and their interactions is assessed based on the data found by the Analysis of Variance (ANOVA). This statistical methodology is proven to be a very useful tool for parameter optimization in the laser welding process. Keywords: CO2 laser welding, gas parameters, factorial design, Analysis of Variance.

  7. Error propagation of partial least squares for parameters optimization in NIR modeling.

    Science.gov (United States)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-05

A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water content in corn and geniposide content in Gardenia was characterized by both type I and type II errors. For example, when the variable importance in projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weight varied from 5% to 65%, 55% and 15%, respectively, compared with synergy interval partial least squares (SiPLS). The results demonstrate how, and to what extent, the different modeling parameters affect error propagation of PLS for parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials completed a rigorous process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this work provides significant guidance for the selection of modeling parameters of other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.
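For readers unfamiliar with the underlying regression, a minimal PLS1 (NIPALS) fit can be sketched on synthetic data. This is an illustrative sketch only, not the authors' code, and it does not use the corn or Gardenia datasets:

```python
import numpy as np

def pls1_nipals(X, y, n_components):
    """Minimal PLS1 via NIPALS deflation. Returns regression coefficients
    mapping centered X to centered y. Illustrative, not a production tool."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                 # weight: direction of max covariance
        w = w / np.linalg.norm(w)
        t = Xc @ w                    # scores
        tt = float(t @ t)
        p = Xc.T @ t / tt             # X loadings
        q = float(yc @ t) / tt        # y loading
        Xc = Xc - np.outer(t, p)      # deflate X and y
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 10))
true_b = np.zeros(10)
true_b[:3] = [1.0, -2.0, 0.5]         # only three informative variables
y = X @ true_b + 0.01 * rng.normal(size=50)

b = pls1_nipals(X, y, n_components=3)
pred = (X - X.mean(axis=0)) @ b + y.mean()
```

Choices such as the number of latent variables (here fixed at 3) are exactly the modeling parameters whose error propagation the record above investigates.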

  8. Error propagation of partial least squares for parameters optimization in NIR modeling

    Science.gov (United States)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-01

A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water content in corn and geniposide content in Gardenia was characterized by both type I and type II errors. For example, when the variable importance in projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weight varied from 5% to 65%, 55% and 15%, respectively, compared with synergy interval partial least squares (SiPLS). The results demonstrate how, and to what extent, the different modeling parameters affect error propagation of PLS for parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials completed a rigorous process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this work provides significant guidance for the selection of modeling parameters of other multivariate calibration models.

  9. Grinding Parameter Optimization of Ultrasound-Aided Electrolytic in Process Dressing for Finishing Nanocomposite Ceramics

    Directory of Open Access Journals (Sweden)

    Fan Chen

    2016-01-01

Full Text Available In order to achieve precise and efficient processing of nanocomposite ceramics, the ultrasound-aided electrolytic in-process dressing method was proposed. However, how to optimize the grinding parameters, that is, maximize processing efficiency while guaranteeing the best workpiece quality, is a problem that needs to be solved urgently. Firstly, this research investigated the influence of grinding parameters on the material removal rate and the critical ductile depth, and mathematical models based on existing models were developed to simulate the material removal process. Then, on the basis of parameter sensitivity analysis using partial derivatives, sensitivity models of the material removal rate with respect to the grinding parameters were established and computed quantitatively in MATLAB, and the key grinding parameter for an optimal grinding process was found. Finally, the theoretical analyses were verified by experiments: the material removal rate increases with the increase of the grinding parameters, including grinding depth (ap), axial feeding speed (fa), workpiece speed (Vw), and wheel speed (Vs); the parameter sensitivity of the material removal rate was in descending order as ap > fa > Vw > Vs; and the most sensitive parameter (ap) was optimized, a better machining result being obtained when ap was about 3.73 μm.
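The partial-derivative sensitivity ranking described above can be sketched numerically. The power-law removal-rate model and its exponents below are invented solely to mimic the reported ordering ap > fa > Vw; they are not taken from the paper:

```python
def removal_rate(ap, fa, vw):
    # Illustrative power-law removal-rate model; the exponents are made-up
    # assumptions chosen to reproduce the reported sensitivity ordering.
    return (ap ** 1.2) * (fa ** 0.8) * (vw ** 0.5)

def relative_sensitivity(f, args, index, h=1e-6):
    """Dimensionless sensitivity (x/f) * df/dx via a central difference,
    so parameters with different units can be ranked on a common scale."""
    x = list(args)
    x[index] += h
    hi = f(*x)
    x[index] -= 2.0 * h
    lo = f(*x)
    deriv = (hi - lo) / (2.0 * h)
    return args[index] * deriv / f(*args)

base = (3.73e-3, 0.5, 2.0)  # ap [mm], fa [mm/s], vw [m/s]; made-up operating point
ranks = [relative_sensitivity(removal_rate, base, i) for i in range(3)]
print(ranks)
```

For a power law, each relative sensitivity equals the corresponding exponent, which is why ranking them identifies the parameter whose change moves the removal rate the most.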

  10. Fault detection of feed water treatment process using PCA-WD with parameter optimization.

    Science.gov (United States)

    Zhang, Shirong; Tang, Qian; Lin, Yu; Tang, Yuling

    2017-05-01

The feed water treatment process (FWTP) is an essential part of utility boilers, and fault detection is expected to improve its reliability. Classical principal component analysis (PCA) was applied to FWTPs in our previous work; however, the noise in the T2 and SPE statistics results in false detections and missed detections. In this paper, wavelet denoising (WD) is combined with PCA to form a new algorithm, PCA-WD, where WD is intentionally employed to deal with the noise. The parameter selection of PCA-WD is further formulated as an optimization problem, and PSO is employed for its solution. A FWTP sustaining two 1000 MW generation units in a coal-fired power plant is taken as a study case, and its operation data are collected for the following verification study. The results show that the optimized WD is effective in restraining the noise in the T2 and SPE statistics, so as to improve the performance of the PCA-WD algorithm. Moreover, the parameter optimization enables PCA-WD to obtain its optimal parameters in an automatic way rather than from individual experience. The optimized PCA-WD is further compared with classical PCA and sliding window PCA (SWPCA) in terms of four cases: bias fault, drift fault, broken-line fault and normal condition. The advantages of the optimized PCA-WD over classical PCA and SWPCA are finally confirmed by the results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
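The T2 and SPE statistics at the heart of PCA-based fault detection can be computed in a few lines. The sketch below uses synthetic data and omits the wavelet-denoising and PSO steps; it simply shows a simulated sensor-bias fault inflating the SPE value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal operation" data: 2 latent factors driving 5 sensors.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.05 * rng.normal(size=(200, 5))

mean, std = X.mean(axis=0), X.std(axis=0)
Xs = (X - mean) / std

# PCA via SVD; keep k principal components.
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 2
P = Vt[:k].T                          # loadings (5 x k)
lam = (S[:k] ** 2) / (len(Xs) - 1)    # variances of the scores

def t2_spe(x):
    """Hotelling T^2 (variation inside the model plane) and SPE/Q
    (squared residual outside it) for one raw sample."""
    xs = (x - mean) / std
    t = xs @ P
    t2 = float(np.sum(t * t / lam))
    resid = xs - t @ P.T
    return t2, float(resid @ resid)

normal_sample = X[0]
faulty_sample = X[0] + np.array([0.0, 0.0, 4.0, 0.0, 0.0])  # bias on sensor 3
print(t2_spe(normal_sample))
print(t2_spe(faulty_sample))
```

In practice both statistics are monitored against control limits; the record above adds wavelet denoising so that noise on T2 and SPE does not trigger false alarms.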

  11. Development and Application of a Tool for Optimizing Composite Matrix Viscoplastic Material Parameters

    Science.gov (United States)

    Murthy, Pappu L. N.; Naghipour Ghezeljeh, Paria; Bednarcyk, Brett A.

    2018-01-01

This document describes a recently developed analysis tool that enhances the resident capabilities of the Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) and its application. MAC/GMC is a composite material and laminate analysis software package developed at NASA Glenn Research Center. The primary focus of the current effort is to provide a graphical user interface (GUI) capability that helps users optimize highly nonlinear viscoplastic constitutive law parameters by fitting experimentally observed/measured stress-strain responses under various thermo-mechanical conditions for braided composites. The tool has been developed using the MATrix LABoratory (MATLAB) (The Mathworks, Inc., Natick, MA) programming language. The illustrative examples shown are for a specific braided composite system wherein the matrix viscoplastic behavior is represented by a constitutive law described by seven parameters. The tool is general enough to fit any number of experimentally observed stress-strain responses of the material. The number of parameters to be optimized, as well as the importance given to each stress-strain response, are user choices. Three different optimization algorithms are included: (1) optimization based on the gradient method, (2) genetic algorithm (GA) based optimization and (3) particle swarm optimization (PSO). The user can mix and match the three algorithms; for example, one can start optimization with either approach 2 or 3 and then use the optimized solution to further fine-tune with approach 1. The secondary focus of this paper is to demonstrate the application of this tool to optimize/calibrate parameters for a nonlinear viscoplastic matrix to predict stress-strain curves (at constituent and composite levels) at different rates, temperatures and/or loading conditions using the Generalized Method of Cells. After preliminary validation of the tool through comparison with experimental results, a detailed virtual parametric study is

  12. Evaluation and optimization of footwear comfort parameters using finite element analysis and a discrete optimization algorithm

    Science.gov (United States)

    Papagiannis, P.; Azariadis, P.; Papanikos, P.

    2017-10-01

Footwear is subject to bending and torsion deformations that affect comfort perception. Following a review of finite element analysis studies of sole rigidity and comfort, a three-dimensional, linear multi-material finite element sole model for quasi-static bending and torsion simulation, overcoming boundary and optimisation limitations, is described. Common footwear material properties and boundary conditions from gait biomechanics are used. The use of normalised strain energy for product benchmarking is demonstrated, along with comfort level determination through strain energy density stratification. The sensitivity of strain energy to material thickness is greater for bending than for torsion, with the results of both deformations showing positive correlation. Optimization for a targeted performance level and given layer thickness is demonstrated, with bending simulations sufficing for overall comfort assessment. An algorithm for comfort optimization w.r.t. bending is presented, based on a discrete approach with thickness values set in line with practical manufacturing accuracy. This work illustrates the potential of the developed finite element analysis applications to offer viable and proven aids to modern footwear sole design assessment and optimization.

  13. Automated Modal Parameter Estimation for Operational Modal Analysis of Large Systems

    DEFF Research Database (Denmark)

    Andersen, Palle; Brincker, Rune; Goursat, Maurice

    2007-01-01

In this paper the problems of performing automatic modal parameter extraction and of handling the large amounts of data to be processed are considered. Two different approaches for obtaining the modal parameters automatically using OMA are presented: The Frequency Domain Decomposition (FDD) technique and...

  14. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    Science.gov (United States)

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in object-oriented information extraction from high-resolution remote sensing images, and the accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, the following processes were conducted in this study to determine optimal segmentation parameters for object-oriented image segmentation and high-resolution image information extraction. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and a combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  15. Application of an Evolutionary Algorithm for Parameter Optimization in a Gully Erosion Model

    Energy Technology Data Exchange (ETDEWEB)

    Rengers, Francis; Lunacek, Monte; Tucker, Gregory

    2016-06-01

    Herein we demonstrate how to use model optimization to determine a set of best-fit parameters for a landform model simulating gully incision and headcut retreat. To achieve this result we employed the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an iterative process in which samples are created based on a distribution of parameter values that evolve over time to better fit an objective function. CMA-ES efficiently finds optimal parameters, even with high-dimensional objective functions that are non-convex, multimodal, and non-separable. We ran model instances in parallel on a high-performance cluster, and from hundreds of model runs we obtained the best parameter choices. This method is far superior to brute-force search algorithms, and has great potential for many applications in earth science modeling. We found that parameters representing boundary conditions tended to converge toward an optimal single value, whereas parameters controlling geomorphic processes are defined by a range of optimal values.
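CMA-ES itself adapts a full covariance matrix and is too long to reproduce here. The hedged sketch below shows the same sample/evaluate/adapt loop with its simplest relative, a (1+1) evolution strategy with the 1/5th-success step-size rule, on a stand-in sphere objective rather than the gully-erosion model:

```python
import math, random

random.seed(1)

def sphere(x):
    # Stand-in misfit; a real application would wrap the landform model's
    # objective function (e.g. simulated vs. observed topography) here.
    return sum(v * v for v in x)

def one_plus_one_es(f, x0, sigma=1.0, iters=500):
    """(1+1) evolution strategy with the 1/5th-success rule: a minimal
    relative of CMA-ES showing the sample/evaluate/adapt loop (CMA-ES
    additionally adapts a full covariance matrix over a population)."""
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        cand = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
        fc = f(cand)
        if fc < fx:                  # success: accept and widen the search
            x, fx = cand, fc
            sigma *= math.exp(0.4)
        else:                        # failure: shrink the step size
            sigma *= math.exp(-0.1)  # equilibrium near a 1/5 success rate
    return x, fx

best_x, best_f = one_plus_one_es(sphere, [3.0, -2.0, 1.5])
print(best_f)
```

The covariance adaptation that CMA-ES adds on top of this loop is what lets it handle the non-convex, non-separable objectives mentioned in the record.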

  16. Optimization of submerged arc welding process parameters using quasi-oppositional based Jaya algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Rao, R. Venkata; Rai, Dhiraj P. [Sardar Vallabhbhai National Institute of Technology, Gujarat (India)

    2017-05-15

    Submerged arc welding (SAW) is characterized as a multi-input process. Selection of optimum combination of process parameters of SAW process is a vital task in order to achieve high quality of weld and productivity. The objective of this work is to optimize the SAW process parameters using a simple optimization algorithm, which is fast, robust and convenient. Therefore, in this work a very recently proposed optimization algorithm named Jaya algorithm is applied to solve the optimization problems in SAW process. In addition, a modified version of Jaya algorithm with oppositional based learning, named “Quasi-oppositional based Jaya algorithm” (QO-Jaya) is proposed in order to improve the performance of the Jaya algorithm. Three optimization case studies are considered and the results obtained by Jaya algorithm and QO-Jaya algorithm are compared with the results obtained by well-known optimization algorithms such as Genetic algorithm (GA), Particle swarm optimization (PSO), Imperialist competitive algorithm (ICA) and Teaching learning based optimization (TLBO).
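The Jaya update rule is simple enough to state in full: each candidate moves toward the current best solution and away from the current worst, with no algorithm-specific tuning parameters beyond population size and iterations. A minimal sketch on a stand-in objective (not the SAW weld model) follows; the quasi-oppositional learning of QO-Jaya is omitted:

```python
import random

random.seed(42)

def objective(x):
    # Stand-in for a weld-quality cost; a real study would evaluate the
    # SAW process model or regression here.
    return sum((v - 1.0) ** 2 for v in x)

def jaya(f, bounds, pop_size=20, iters=200):
    """Minimal Jaya (minimization): x' = x + r1*(best - |x|) - r2*(worst - |x|),
    with greedy acceptance and clamping to the variable bounds."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        scores = [f(p) for p in pop]
        best = pop[scores.index(min(scores))]
        worst = pop[scores.index(max(scores))]
        for i, p in enumerate(pop):
            cand = []
            for j, (lo, hi) in enumerate(bounds):
                r1, r2 = random.random(), random.random()
                v = (p[j] + r1 * (best[j] - abs(p[j]))
                          - r2 * (worst[j] - abs(p[j])))
                cand.append(min(hi, max(lo, v)))  # clamp to bounds
            if f(cand) < f(p):                    # greedy replacement
                pop[i] = cand
    scores = [f(p) for p in pop]
    return pop[scores.index(min(scores))]

sol = jaya(objective, [(0.0, 2.0)] * 3)
print(sol, objective(sol))
```

This parameter-free character is the "simple, fast, robust and convenient" property the record emphasizes when comparing against GA, PSO, ICA and TLBO.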

  17. Optimization of submerged arc welding process parameters using quasi-oppositional based Jaya algorithm

    International Nuclear Information System (INIS)

    Rao, R. Venkata; Rai, Dhiraj P.

    2017-01-01

    Submerged arc welding (SAW) is characterized as a multi-input process. Selection of optimum combination of process parameters of SAW process is a vital task in order to achieve high quality of weld and productivity. The objective of this work is to optimize the SAW process parameters using a simple optimization algorithm, which is fast, robust and convenient. Therefore, in this work a very recently proposed optimization algorithm named Jaya algorithm is applied to solve the optimization problems in SAW process. In addition, a modified version of Jaya algorithm with oppositional based learning, named “Quasi-oppositional based Jaya algorithm” (QO-Jaya) is proposed in order to improve the performance of the Jaya algorithm. Three optimization case studies are considered and the results obtained by Jaya algorithm and QO-Jaya algorithm are compared with the results obtained by well-known optimization algorithms such as Genetic algorithm (GA), Particle swarm optimization (PSO), Imperialist competitive algorithm (ICA) and Teaching learning based optimization (TLBO).

  18. Themoeconomic optimization of triple pressure heat recovery steam generator operating parameters for combined cycle plants

    Directory of Open Access Journals (Sweden)

    Mohammd Mohammed S.

    2015-01-01

Full Text Available The aim of this work is to develop a method for optimization of the operating parameters of a triple pressure heat recovery steam generator. Two types of optimization, (a) thermodynamic and (b) thermoeconomic, were performed. The purpose of the thermodynamic optimization is to maximize the efficiency of the plant; the objective selected for this purpose is minimization of the exergy destruction in the heat recovery steam generator (HRSG). The purpose of the thermoeconomic optimization is to decrease the production cost of electricity; here, the total annual cost of the HRSG, defined as the sum of the annual values of the capital costs and the cost of the exergy destruction, is selected as the objective function. The optimal values of the most influential variables are obtained by minimizing the objective function while satisfying a group of constraints. The optimization algorithm is developed and tested on a case of a CCGT plant with a complex configuration. Six operating parameters were the subject of optimization: the pressures and pinch point temperatures of each of the three (high, intermediate and low) pressure steam streams in the HRSG. The influence of these variables on the objective function and production cost is investigated in detail. The differences between the results of the thermodynamic and the thermoeconomic optimization are discussed.
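The thermoeconomic objective described above, annualized capital cost plus the cost of exergy destruction, can be sketched in a few lines. All cost figures, rates, hours and exergy values below are illustrative assumptions, not the paper's data:

```python
def annualized_capital(capital_cost, rate, years):
    """Capital recovery factor: converts an investment into an equivalent
    uniform annual payment over the plant's economic life."""
    crf = rate * (1.0 + rate) ** years / ((1.0 + rate) ** years - 1.0)
    return capital_cost * crf

def total_annual_cost(capital_cost, exergy_destroyed_kw, fuel_cost_per_kwh,
                      hours_per_year=8000, rate=0.08, years=20):
    """Thermoeconomic objective: annual capital charge plus the fuel cost
    of the exergy destroyed in the HRSG. All inputs are illustrative."""
    destruction_cost = exergy_destroyed_kw * hours_per_year * fuel_cost_per_kwh
    return annualized_capital(capital_cost, rate, years) + destruction_cost

# A smaller pinch point needs more heat-transfer surface (capital up) but
# destroys less exergy; the objective trades the two off.
tight_pinch = total_annual_cost(6.0e6, 1200, 0.05)
loose_pinch = total_annual_cost(4.0e6, 2500, 0.05)
print(tight_pinch, loose_pinch)
```

Minimizing this total over the six pressures and pinch temperatures, subject to the feasibility constraints, is the thermoeconomic problem the record describes; dropping the capital term recovers the purely thermodynamic objective.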

  19. GEMSFITS: Code package for optimization of geochemical model parameters and inverse modeling

    International Nuclear Information System (INIS)

    Miron, George D.; Kulik, Dmitrii A.; Dmytrieva, Svitlana V.; Wagner, Thomas

    2015-01-01

Highlights: • Tool for generating consistent parameters against various types of experiments. • Handles a large number of experimental data and parameters (is parallelized). • Has a graphical interface and can perform statistical analysis on the parameters. • Tested on fitting the standard state Gibbs free energies of aqueous Al species. • Example on fitting interaction parameters of mixing models and thermobarometry. - Abstract: GEMSFITS is a new code package for fitting internally consistent input parameters of GEM (Gibbs Energy Minimization) geochemical–thermodynamic models against various types of experimental or geochemical data, and for performing inverse modeling tasks. It consists of the gemsfit2 (parameter optimizer) and gfshell2 (graphical user interface) programs, both accessing a NoSQL database, all developed with flexibility, generality, efficiency, and user friendliness in mind. The parameter optimizer gemsfit2 includes the GEMS3K chemical speciation solver (http://gems.web.psi.ch/GEMS3K), which features a comprehensive suite of non-ideal activity and equation-of-state models of solution phases (aqueous electrolyte, gas and fluid mixtures, solid solutions, (ad)sorption). The gemsfit2 code uses the robust open-source NLopt library for parameter fitting, which provides a selection between several nonlinear optimization algorithms (global, local, gradient-based), and supports large-scale parallelization. The gemsfit2 code can also perform comprehensive statistical analysis of the fitted parameters (basic statistics, sensitivity, Monte Carlo confidence intervals), thus supporting the user with powerful tools for evaluating the quality of the fits and the physical significance of the model parameters. The gfshell2 code provides menu-driven setup of optimization options (data selection, properties to fit and their constraints, measured properties to compare with computed counterparts, and statistics). The practical utility, efficiency, and

  20. Optimization of process parameters in welding of dissimilar steels using robot TIG welding

    Science.gov (United States)

    Navaneeswar Reddy, G.; VenkataRamana, M.

    2018-03-01

Robot TIG welding is a modern technique used for joining two workpieces with high precision. Design of experiments is used to conduct experiments by varying weld parameters such as current, wire feed and travel speed. The welding parameters play an important role in the joining of the dissimilar stainless steels SS 304L and SS 430. In this work, the influence of the welding parameters on robot TIG welded specimens is investigated using response surface methodology. The micro-Vickers hardness of the weldments is measured, and the process parameters are optimized to maximize the hardness of the weldments.

  1. Comparative study for different statistical models to optimize cutting parameters of CNC end milling machines

    International Nuclear Information System (INIS)

    El-Berry, A.; El-Berry, A.; Al-Bossly, A.

    2010-01-01

In machining operations, the quality of the surface finish is an important requirement for many workpieces. Thus, it is very important to optimize cutting parameters for controlling the required manufacturing quality. The surface roughness parameter (Ra) of mechanical parts depends on the turning parameters during the turning process. In the development of predictive models, the cutting parameters of feed, cutting speed, and depth of cut are considered as model variables. For this purpose, this study focuses on comparing various machining experiments conducted on a CNC vertical machining center; the workpieces were aluminum 6061. Multiple regression models are used to predict the surface roughness in the different experiments.
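A first-order multiple-regression roughness model of the kind described can be fitted by ordinary least squares. The DoE runs below are hypothetical numbers chosen for illustration, not the study's measurements:

```python
import numpy as np

# Hypothetical runs: feed [mm/rev], cutting speed [m/min], depth of cut [mm],
# and measured roughness Ra [um]. Values are illustrative only.
X = np.array([
    [0.10, 100.0, 0.50],
    [0.10, 150.0, 1.00],
    [0.20, 100.0, 1.00],
    [0.20, 150.0, 0.50],
    [0.15, 125.0, 0.75],
    [0.25, 120.0, 0.60],
])
ra = np.array([1.1, 1.0, 1.9, 1.8, 1.4, 2.2])

# First-order model: Ra = b0 + b1*feed + b2*speed + b3*depth
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, ra, rcond=None)

def predict(feed, speed, depth):
    """Predicted Ra for an untested parameter combination."""
    return float(coef @ np.array([1.0, feed, speed, depth]))

print(coef)
print(predict(0.12, 130.0, 0.80))
```

With such synthetic data the fitted feed coefficient dominates, mirroring the common finding that feed rate drives Ra in turning; interaction or quadratic terms would be added the same way as extra columns of `A`.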

  2. Laser Welding Process Parameters Optimization Using Variable-Fidelity Metamodel and NSGA-II

    Directory of Open Access Journals (Sweden)

    Wang Chaochao

    2017-01-01

Full Text Available An optimization methodology based on variable-fidelity (VF) metamodels and the nondominated sorting genetic algorithm II (NSGA-II) for laser bead-on-plate welding of stainless steel 316L is presented. The relationships between the input process parameters (laser power, welding speed and laser focal position) and the output responses (weld width and weld depth) are constructed by VF metamodels. In the VF metamodels, information from models of two fidelity levels is integrated: the low-fidelity (LF) model is a finite element simulation model used to capture the general trend of the metamodels, and the high-fidelity (HF) model, built from physical experiments, is used to ensure the accuracy of the metamodels. The accuracy of the VF metamodel is verified by actual experiments. To solve the optimization problem, NSGA-II is used to search for multi-objective Pareto optimal solutions. The results of verification experiments show that the obtained optimal parameters are effective and reliable.
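NSGA-II's first phase is non-dominated sorting of the population into Pareto fronts. The sketch below implements a simple version of that sort (without NSGA-II's fast bookkeeping or crowding distance) on hypothetical objective pairs, not the weld responses from the study:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated_sort(points):
    """Split points into successive Pareto fronts, as in NSGA-II's
    first ranking phase (naive implementation)."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical (weld-width error, weld-depth error) pairs to minimize.
pts = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
print(nondominated_sort(pts))
```

The first front returned is the Pareto-optimal set the record refers to; NSGA-II then uses front rank plus crowding distance to select parents for the next generation.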

  3. Parameter estimation of fractional-order chaotic systems by using quantum parallel particle swarm optimization algorithm.

    Directory of Open Access Journals (Sweden)

    Yu Huang

    Full Text Available Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and could be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic increases the calculation of each generation exponentially. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.
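For contrast with the quantum-parallel variant described above, the classical PSO baseline fits in one short function. The sphere objective below is a stand-in; a real run would embed the fractional-order system's parameter-estimation error instead:

```python
import random

random.seed(7)

def sphere(x):
    # Stand-in objective; the paper's use case would compute the mismatch
    # between simulated and observed chaotic trajectories here.
    return sum(v * v for v in x)

def pso(f, dim, n=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    """Classical PSO (minimization): each velocity blends inertia, a pull
    toward the particle's own best, and a pull toward the swarm's best."""
    pos = [[random.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    gbest_f = min(pbest_f)
    gbest = pbest[pbest_f.index(gbest_f)][:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

best, best_f = pso(sphere, dim=2)
print(best_f)
```

QPPSO replaces the random draws and position updates with quantum rotation angles evolved per particle, but the overall personal-best/global-best structure is the same as here.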

  4. Optimization of design parameters for bulk micromachined silicon membranes for piezoresistive pressure sensing application

    International Nuclear Information System (INIS)

    Belwanshi, Vinod; Topkar, Anita

    2016-01-01

    Finite element analysis study has been carried out to optimize the design parameters for bulk micro-machined silicon membranes for piezoresistive pressure sensing applications. The design is targeted for measurement of pressure up to 200 bar for nuclear reactor applications. The mechanical behavior of bulk micro-machined silicon membranes in terms of deflection and stress generation has been simulated. Based on the simulation results, optimization of the membrane design parameters in terms of length, width and thickness has been carried out. Subsequent to optimization of membrane geometrical parameters, the dimensions and location of the high stress concentration region for implantation of piezoresistors have been obtained for sensing of pressure using piezoresistive sensing technique.

  5. Optimization of TRPO process parameters for americium extraction from high level waste

    International Nuclear Information System (INIS)

    Chen Jing; Wang Jianchen; Song Chongli

    2001-01-01

The numerical calculations for multistage fractional extraction of Am by trialkyl phosphine oxide (TRPO) were verified by a hot test. 1750 L/t-U high level waste (HLW) was used as the feed to the TRPO process. The analysis used a simple objective function to minimize the total waste content in the TRPO process streams. Some process parameters were optimized after the other parameters were selected. The optimal process parameters for Am extraction by TRPO are: 10 stages for extraction and 2 stages for scrubbing; a flow rate ratio of 0.931 for extraction and 4.42 for scrubbing; and a nitric acid concentration of 1.35 mol/L for the feed and 0.5 mol/L for the scrubbing solution. Finally, the nitric acid and Am concentration profiles in the optimal TRPO extraction process are given

  6. Thermo-mechanical simulation and parameters optimization for beam blank continuous casting

    International Nuclear Information System (INIS)

    Chen, W.; Zhang, Y.Z.; Zhang, C.J.; Zhu, L.G.; Lu, W.G.; Wang, B.X.; Ma, J.H.

    2009-01-01

The objective of this work is to optimize the process parameters of beam blank continuous casting in order to ensure high quality and productivity. A transient thermo-mechanical finite element model is developed to compute the temperature and stress profiles in beam blank continuous casting. By comparing the calculated data with the metallurgical constraints, the key factors causing defects in the beam blank can be found. Then, based on the subproblem approximation method, an optimization program is developed to search for the optimum cooling parameters. These optimum parameters make it possible to run the caster at maximum productivity and minimum cost while reducing defects. Online verification of this optimization approach has been put into practice, demonstrating that it is very useful for controlling the actual production

  7. Optimization of design parameters for bulk micromachined silicon membranes for piezoresistive pressure sensing application

    Science.gov (United States)

    Belwanshi, Vinod; Topkar, Anita

    2016-05-01

A finite element analysis study has been carried out to optimize the design parameters for bulk micro-machined silicon membranes for piezoresistive pressure sensing applications. The design is targeted for measurement of pressures up to 200 bar for nuclear reactor applications. The mechanical behavior of bulk micro-machined silicon membranes in terms of deflection and stress generation has been simulated. Based on the simulation results, optimization of the membrane design parameters in terms of length, width and thickness has been carried out. Subsequent to optimization of the membrane geometrical parameters, the dimensions and location of the high stress concentration region for implantation of piezoresistors have been obtained for pressure sensing using the piezoresistive technique.

  8. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 (United States); Chen, Ken Chung [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Stomatology, National Cheng Kung University Medical College and Hospital, Tainan, Taiwan 70403 (China); Shen, Steve G. F.; Yan, Jin [Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People' s Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Lee, Philip K. M.; Chow, Ben [Hong Kong Dental Implant and Maxillofacial Centre, Hong Kong, China 999077 (China); Liu, Nancy X. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China 100050 (China); Xia, James J. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People' s Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul, 136701 (Korea, Republic of)

    2014-04-15

Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step in generating three-dimensional (3D) models for the diagnosis and treatment planning of patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject, and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparison with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy in comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  9. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    International Nuclear Information System (INIS)

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang; Chen, Ken Chung; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Liu, Nancy X.; Xia, James J.; Shen, Dinggang

    2014-01-01

Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step in generating three-dimensional (3D) models for the diagnosis and treatment planning of patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject, and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparison with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy in comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  10. Software integration for automated stability analysis and design optimization of a bearingless rotor blade

    Science.gov (United States)

    Gunduz, Mustafa Emre

Many government agencies and corporations around the world have found the unique capabilities of rotorcraft indispensable. Incorporating such capabilities into rotorcraft design poses extra challenges because it is a complicated multidisciplinary process. The concept of applying several disciplines to the design and optimization processes may not be new, but it does not currently seem to be widely accepted in industry. The reason for this might be the lack of well-known tools for realizing a complete multidisciplinary design and analysis of a product. This study aims to propose a method that enables engineers in some design disciplines to perform a fairly detailed analysis and optimization of a design using commercially available software as well as codes developed at Georgia Tech. The ultimate goal is that, once the system is set up properly, the CAD model of the design, including all subsystems, will be automatically updated as soon as a new part or assembly is added to the design, or when an analysis and/or an optimization is performed and the geometry needs to be modified. Designers and engineers will be involved only in checking the latest design for errors or adding/removing features. Such a design process will take dramatically less time to complete; therefore, it should reduce development time and costs. The optimization method is demonstrated on an existing helicopter rotor originally designed in the 1960s. The rotor is already an effective design with novel features. However, application of the optimization principles together with high-speed computing resulted in an even better design. The objective function to be minimized is related to the vibrations of the rotor system under gusty wind conditions. The design parameters are all continuous variables. Optimization is performed in a number of steps. First, the most crucial design variables of the objective function are identified. With these variables, the Latin Hypercube Sampling method is used
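The Latin Hypercube Sampling step mentioned above can be sketched in a few lines. This is a generic stratified sampler written for illustration, not the code used in the study; the function name and bounds are made up:

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin Hypercube Sampling: each variable's range is split into
    n_samples equal strata, and exactly one sample falls in each stratum."""
    rng = random.Random(seed)
    samples = [[0.0] * len(bounds) for _ in range(n_samples)]
    for j, (lo, hi) in enumerate(bounds):
        # One random point per stratum, then shuffle strata across samples
        # so the pairing between variables is randomized.
        points = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(points)
        for i in range(n_samples):
            samples[i][j] = lo + (hi - lo) * points[i]
    return samples

# Example: 5 design points over two design variables.
design = latin_hypercube(5, [(0.0, 1.0), (10.0, 20.0)])
```

Compared with plain random sampling, this guarantees coverage of every stratum of every variable, which is why it is popular for seeding expensive design studies.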

  11. PI Stabilization for Congestion Control of AQM Routers with Tuning Parameter Optimization

    Directory of Open Access Journals (Sweden)

    S. Chebli

    2016-09-01

Full Text Available In this paper, we consider the problem of stabilizing a network using a new proportional-integral (PI) based congestion controller in an active queue management (AQM) router; with an appropriate model approximation by a first-order delay system, we seek the stability region of the controller by using the Hermite-Biehler theorem, which is applicable to quasi-polynomials. A genetic algorithm technique is employed to derive optimal or near-optimal PI controller parameters.

  12. Automated property optimization via ab initio O(N) elongation method: Application to (hyper-)polarizability in DNA

    Energy Technology Data Exchange (ETDEWEB)

    Orimoto, Yuuichi, E-mail: orimoto.yuuichi.888@m.kyushu-u.ac.jp [Department of Material Sciences, Faculty of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Aoki, Yuriko [Department of Material Sciences, Faculty of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Japan Science and Technology Agency, CREST, 4-1-8 Hon-chou, Kawaguchi, Saitama 332-0012 (Japan)

    2016-07-14

An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a randomly sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first-order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between “choose-maximum” (choose a base pair giving the maximum β for each step) and “choose-minimum” (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second-order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer-aided material design method as long as the numerical limitation of the FF method is taken into account.
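The finite-field (FF) treatment and its numerical limitation can be illustrated with central differences on a synthetic dipole response μ(F) = μ0 + αF + (β/2)F² + (γ/6)F³. The coefficients below are arbitrary stand-ins, not values from the paper:

```python
def dipole(F):
    # Synthetic dipole response with known mu0, alpha, beta, gamma.
    mu0, alpha, beta, gamma = 1.0, 10.0, 2.0, 0.5
    return mu0 + alpha * F + beta / 2.0 * F ** 2 + gamma / 6.0 * F ** 3

h = 0.01  # field step: too small amplifies rounding, too large truncation

# Central-difference FF estimates of the field derivatives of mu:
alpha_ff = (dipole(h) - dipole(-h)) / (2 * h)
beta_ff = (dipole(h) - 2 * dipole(0.0) + dipole(-h)) / h ** 2
gamma_ff = (dipole(2 * h) - 2 * dipole(h)
            + 2 * dipole(-h) - dipole(-2 * h)) / (2 * h ** 3)
```

Because γ sits on a third difference divided by F³, rounding error grows rapidly as the field step shrinks, which is exactly the numerical limitation for small γ that the authors note.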

  13. Automated property optimization via ab initio O(N) elongation method: Application to (hyper-)polarizability in DNA

    International Nuclear Information System (INIS)

    Orimoto, Yuuichi; Aoki, Yuriko

    2016-01-01

An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a randomly sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first-order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between “choose-maximum” (choose a base pair giving the maximum β for each step) and “choose-minimum” (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second-order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer-aided material design method as long as the numerical limitation of the FF method is taken into account.

  14. A multicriteria framework with voxel-dependent parameters for radiotherapy treatment plan optimization

    International Nuclear Information System (INIS)

    Zarepisheh, Masoud; Uribe-Sanchez, Andres F.; Li, Nan; Jia, Xun; Jiang, Steve B.

    2014-01-01

Purpose: To establish a new mathematical framework for radiotherapy treatment optimization with voxel-dependent optimization parameters. Methods: In the treatment plan optimization problem for radiotherapy, a clinically acceptable plan is usually generated by an optimization process with weighting factors or reference doses adjusted for a set of objective functions associated with the organs. Recent discoveries indicate that adjusting parameters associated with each voxel may lead to better plan quality. However, the mathematical reasons behind this remain unclear. Furthermore, questions about objective function selection and parameter adjustment to assure Pareto optimality, as well as the relationship between the optimal solutions obtained from the organ-based and voxel-based models, remain unanswered. To answer these questions, the authors establish in this work a new mathematical framework equipped with two theorems. Results: The new framework clarifies the different consequences of adjusting organ-dependent and voxel-dependent parameters for the treatment plan optimization of radiation therapy, as well as the impact of using different objective functions on plan qualities and Pareto surfaces. The main discoveries are threefold: (1) While in the organ-based model the selection of the objective function has an impact on the quality of the optimized plans, this is no longer an issue for the voxel-based model, since the Pareto surface is independent of the objective function selection and the entire Pareto surface can be generated as long as the objective function satisfies certain mathematical conditions; (2) All Pareto solutions generated by the organ-based model with different objective functions are part of a unique Pareto surface generated by the voxel-based model with any appropriate objective function; (3) A much larger Pareto surface is explored by adjusting voxel-dependent parameters than by adjusting organ-dependent parameters, possibly

  15. Multiscale analysis of the correlation of processing parameters on viscidity of composites fabricated by automated fiber placement

    Science.gov (United States)

    Han, Zhenyu; Sun, Shouzheng; Fu, Yunzhong; Fu, Hongya

    2017-10-01

Viscidity is an important physical indicator for assessing the fluidity of resin, which helps the resin contact the fibers effectively and reduces manufacturing defects during the automated fiber placement (AFP) process. However, the effect of processing parameters on viscidity evolution is rarely studied for the AFP process. In this paper, viscidities at different scales are analyzed based on a multi-scale analysis method. Firstly, viscous dissipation energy (VDE) within a meso-unit under different processing parameters is assessed by using the finite element method (FEM). According to a multi-scale energy transfer model, the meso-unit energy is used as the boundary condition for the microscopic analysis. Furthermore, the molecular structure of the micro-system is built by the molecular dynamics (MD) method. Viscosity curves are then obtained by integrating the stress autocorrelation function (SACF) over time. Finally, the correlation characteristics of the processing parameters to viscosity are revealed by using the gray relational analysis method (GRAM). A group of processing parameters is identified that achieves stable viscosity and better fluidity of the resin.
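Obtaining a viscosity by integrating a stress autocorrelation function over time follows the Green-Kubo relation η = (V / k_B T) ∫ ⟨σ(0)σ(t)⟩ dt. A minimal sketch with a synthetic, exponentially decaying SACF; the function name, constants, and decay form are illustrative, not from the paper:

```python
import math

def green_kubo_viscosity(sacf, dt, volume, kB_T):
    """Integrate the stress autocorrelation function (trapezoidal rule)
    and scale by V / (kB * T), per the Green-Kubo relation."""
    integral = 0.0
    for i in range(len(sacf) - 1):
        integral += 0.5 * (sacf[i] + sacf[i + 1]) * dt
    return volume / kB_T * integral

# Synthetic SACF: C(t) = C0 * exp(-t / tau); its exact integral is C0 * tau.
C0, tau, dt = 2.0, 0.5, 0.001
sacf = [C0 * math.exp(-i * dt / tau) for i in range(20000)]
eta = green_kubo_viscosity(sacf, dt, volume=1.0, kB_T=1.0)  # ~ C0 * tau
```

In a real MD workflow the SACF comes from simulated stress fluctuations and is noisy at long times, so the upper integration limit has to be chosen with care.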

  16. Study of dose calculation and beam parameters optimization with genetic algorithm in IMRT

    International Nuclear Information System (INIS)

    Chen Chaomin; Tang Mutao; Zhou Linghong; Lv Qingwen; Wang Zhuoyu; Chen Guangjie

    2006-01-01

Objective: To study the construction of a dose calculation model and a method of automatic beam parameter selection in IMRT. Methods: The three-dimensional photon convolution dose calculation model was constructed using Fast Fourier Transform methods. An objective function based on dose constraints was used to evaluate the fitness of individuals. The beam weights were optimized with a genetic algorithm. Results: After 100 iterations, the treatment planning system produced highly conformal and homogeneous dose distributions. Conclusion: The three-dimensional photon convolution dose calculation model gave more accurate results than conventional models; the genetic algorithm is valid and efficient for IMRT beam parameter optimization. (authors)
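The genetic-algorithm weight optimization described above can be sketched on a toy problem: a tiny linear dose model and a squared-deviation objective. The dose matrix, prescription, and GA settings below are invented for illustration, not taken from the paper:

```python
import random

rng = random.Random(42)

# Toy dose model: dose at 3 calculation points from 2 beam weights.
A = [[1.0, 0.2],
     [0.3, 1.0],
     [0.6, 0.6]]
prescribed = [1.2, 1.3, 1.2]   # achieved exactly by weights (1, 1)

def fitness(w):
    # Negative squared deviation from the prescription (GA maximizes).
    err = 0.0
    for row, d in zip(A, prescribed):
        dose = sum(a * wi for a, wi in zip(row, w))
        err += (dose - d) ** 2
    return -err

def crossover(p1, p2):
    t = rng.random()  # blend crossover between two parents
    return [t * a + (1 - t) * b for a, b in zip(p1, p2)]

def mutate(w, sigma=0.05):
    # Gaussian mutation, clamped so beam weights stay non-negative.
    return [max(0.0, wi + rng.gauss(0, sigma)) for wi in w]

pop = [[rng.uniform(0, 2) for _ in range(2)] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]  # elitism: keep the best plans unchanged
    pop = elite + [mutate(crossover(*rng.sample(elite, 2))) for _ in range(20)]

best = max(pop, key=fitness)
```

A clinical system evaluates fitness through the full convolution dose engine instead of the 3x2 matrix used here; the GA loop itself is unchanged.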

  17. Iterative choice of the optimal regularization parameter in TV image deconvolution

    International Nuclear Information System (INIS)

    Sixou, B; Toma, A; Peyrin, F; Denis, L

    2013-01-01

We present an iterative method for choosing the optimal regularization parameter for the linear inverse problem of Total Variation image deconvolution. This approach is based on the Morozov discrepancy principle and on an exponential model function for the data term. The Total Variation image deconvolution is performed with the Alternating Direction Method of Multipliers (ADMM). With a smoothed l2 norm, the differentiability of the value of the Lagrangian at the saddle point can be shown and an approximate model function obtained. The choice of the optimal parameter can be refined with a Newton method. The efficiency of the method is demonstrated on a blurred and noisy bone CT cross section.
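The Morozov discrepancy principle picks the regularization parameter λ so that the data residual matches the known noise level. A toy diagonal (SVD-space) Tikhonov sketch, using log-space bisection instead of the paper's model-function/Newton refinement; all numbers are synthetic:

```python
import math, random

rng = random.Random(1)
s = [1.0, 0.5, 0.1, 0.05]          # singular values of the forward operator
x_true = [1.0, -2.0, 3.0, 1.5]
noise = [0.01 * rng.gauss(0, 1) for _ in s]
d = [si * xi + ni for si, xi, ni in zip(s, x_true, noise)]
delta2 = sum(ni ** 2 for ni in noise)   # squared noise norm (assumed known)

def residual2(lam):
    # Tikhonov solution x_i = s_i d_i / (s_i^2 + lam); residual in data
    # space reduces to lam * d_i / (s_i^2 + lam) per component.
    return sum((lam * di / (si ** 2 + lam)) ** 2 for si, di in zip(s, d))

# Morozov: residual2 increases monotonically with lam, so bisect for the
# lam where the residual equals the noise level.
lo, hi = 1e-12, 1e3
for _ in range(200):
    mid = math.sqrt(lo * hi)   # bisection in log space
    if residual2(mid) < delta2:
        lo = mid
    else:
        hi = mid
lam_opt = math.sqrt(lo * hi)
```

The paper's exponential model function serves the same purpose as this bisection but converges in far fewer (expensive) ADMM solves.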

  18. Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.

    Science.gov (United States)

    Zaitsev, M; Steinhoff, S; Shah, N J

    2003-06-01

    A methodology is presented for the reduction of both systematic and random errors in T(1) determination using TAPIR, a Look-Locker-based fast T(1) mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T(1) determination with TAPIR. An effective remedy is demonstrated which includes extension of the measurement protocol to include a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.
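TAPIR is Look-Locker-based: the train of α-pulses accelerates the apparent recovery, and the measured T1* must be corrected back to the true T1. A sketch of the standard Look-Locker relation 1/T1* = 1/T1 − ln(cos α)/TR; this is the generic correction, not the TAPIR-specific protocol, and the numbers are illustrative:

```python
import math

def apparent_t1(t1, flip_deg, tr):
    """Look-Locker: repeated alpha-pulses shorten the apparent relaxation,
    1/T1* = 1/T1 - ln(cos(alpha)) / TR  (note ln cos(alpha) < 0)."""
    return 1.0 / (1.0 / t1 - math.log(math.cos(math.radians(flip_deg))) / tr)

def corrected_t1(t1_star, flip_deg, tr):
    # Invert the relation to recover the true T1 from the measured T1*.
    return 1.0 / (1.0 / t1_star + math.log(math.cos(math.radians(flip_deg))) / tr)

t1_true = 1000.0                                        # ms
t1_star = apparent_t1(t1_true, flip_deg=10.0, tr=20.0)  # TR in ms
```

The flip-angle optimization in the paper balances exactly this trade-off: larger α gives more signal per readout but drives T1* further from T1, making the correction more sensitive to pulse imperfections.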

  19. Optimization of Cutting Parameters of the Haynes 718 Nickel Alloy With Gas CO2 Laser

    Directory of Open Access Journals (Sweden)

    Jana PETRŮ

    2011-06-01

Full Text Available This article deals with the application of laser technology and the optimization of parameters in the area of nickel alloy laser cutting intended for application in the aircraft industry. The main goal is to outline possibilities of use of the laser technology, primarily its application in the area of 3D material cutting. This experiment is focused on the optimization of cutting parameters of the Haynes 718 alloy with a gas CO2 laser. The resulting cuts are evaluated primarily from the point of view of cut quality and the accompanying undesirable phenomena occurring in the cutting process. In conclusion, the results obtained in the metallographic laboratory are described and analyzed.

  20. Optimal Design of Measurement Programs for the Parameter Identification of Dynamic Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

The design of a measurement program devoted to parameter identification of structural dynamic systems is considered. The design problem is formulated as an optimization problem to minimize the total expected cost of the measurement program. All the calculations are based on a priori knowledge...... and engineering judgement. One contribution of the approach is that the optimal number of sensors can be estimated. This is shown in a numerical example where the proposed approach is demonstrated. The example is concerned with the design of a measurement program for estimating the modal damping parameters...
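Estimating an optimal number of sensors by minimizing total expected cost can be sketched with a toy cost model in which hardware cost grows linearly with n while the identification variance shrinks as 1/n (independent measurements). The cost figures and the 1/n assumption are illustrative, not from the paper:

```python
def optimal_sensor_count(sensor_cost, error_penalty, sigma2_single, n_max=100):
    """Pick n minimizing total expected cost: hardware cost plus a penalty
    on the parameter-estimate variance, assumed to scale as sigma2 / n."""
    def total_cost(n):
        return sensor_cost * n + error_penalty * sigma2_single / n
    return min(range(1, n_max + 1), key=total_cost)

# With these toy numbers the continuous optimum is sqrt(100 / 1) = 10.
n_best = optimal_sensor_count(sensor_cost=1.0, error_penalty=100.0,
                              sigma2_single=1.0)
```

The paper's formulation replaces the 1/n variance model with a prior-based estimate of identification uncertainty, but the trade-off being minimized has this same shape.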

  1. Zener Diode Compact Model Parameter Extraction Using Xyce-Dakota Optimization.

    Energy Technology Data Exchange (ETDEWEB)

    Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wilcox, Ian Zachary [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sandoval, Andrew J [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reza, Shahed [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

This report presents a detailed process for compact model parameter extraction for DC circuit Zener diodes. Following the traditional approach of Zener diode parameter extraction, a circuit model representation is defined and then used to capture the different operational regions of a real diode's electrical behavior. The circuit model contains 9 parameters represented by resistors and characteristic diodes as circuit model elements. The process of initial parameter extraction, the identification of parameter values for the circuit model elements, is presented in a way that isolates the dependencies between certain electrical parameters and highlights both the empirical nature of the extraction and the portions of the real diode's physical behavior that the parameters are intended to represent. Optimization of the parameters, a necessary part of a robust parameter extraction process, is demonstrated using a 'Xyce-Dakota' workflow, discussed in more detail in the report. Among other realizations during this systematic approach to electrical model parameter extraction, non-physical solutions are possible and can be difficult to avoid because of the interdependencies between the different parameters. The process steps described are fairly general and can be leveraged for other types of semiconductor device model extractions. Also included in the report are recommendations for experiment setups for generating an optimal dataset for model extraction, and the Parameter Identification and Ranking Table (PIRT) for Zener diodes.
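The optimization stage of such an extraction can be illustrated on a much smaller problem: fitting the two parameters of an ideal Shockley diode to synthetic I-V data by brute-force search. This is a stand-in for the 9-parameter Xyce-Dakota workflow; the thermal voltage, the log-space objective, and the grid are all assumptions of this sketch:

```python
import math

VT = 0.02585  # thermal voltage at ~300 K, volts

def diode_current(v, i_s, n):
    # Ideal Shockley diode: I = Is * (exp(V / (n * VT)) - 1).
    return i_s * (math.exp(v / (n * VT)) - 1.0)

# Synthetic "measured" forward-bias I-V data from known parameters.
true_is, true_n = 1e-12, 1.5
volts = [0.4 + 0.02 * k for k in range(10)]
meas = [diode_current(v, true_is, true_n) for v in volts]

def sse(i_s, n):
    # Fit in log-current space so every decade of current counts equally.
    return sum((math.log(diode_current(v, i_s, n)) - math.log(im)) ** 2
               for v, im in zip(volts, meas))

# Coarse grid search over (Is, n), a crude stand-in for the Dakota optimizer.
best = min(((1e-13 * 10 ** (k / 10.0), 1.0 + j / 100.0)
            for k in range(20) for j in range(100)),
           key=lambda p: sse(*p))
```

Even here the Is-n interdependence the report warns about is visible: both parameters shift the log-current curve, and only the combination of slope and intercept over several voltages pins each one down.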

  2. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    Science.gov (United States)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

The forecasting skill of complex weather and climate models has been improved by tuning the sensitive parameters that exert the greatest impact on simulated results, using increasingly effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary remains a scientific question deserving study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area, because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions. The optimal values of the nine parameters were determined from 127 parameter samples using the ASMO method, which shows that the ASMO method is highly efficient for optimizing WRF
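Surrogate-model-based optimization in miniature: fit a cheap model to a handful of expensive evaluations, minimize the surrogate, evaluate the true model there, and refit. The sketch below uses successive parabolic interpolation as the surrogate, which is far simpler than ASMO's adaptive surrogate, and the 1-D objective is a stand-in for a WRF skill score:

```python
def expensive_model(x):
    # Stand-in for a WRF run scored against observations (lower is better).
    return (x - 0.37) ** 2 + 1.0

def parabola_vertex(pts):
    # Fit an exact parabola through three (x, f) points; return its vertex x.
    (x1, f1), (x2, f2), (x3, f3) = pts
    num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

samples = [(x, expensive_model(x)) for x in (0.0, 0.5, 1.0)]
for _ in range(5):
    best3 = sorted(samples, key=lambda p: p[1])[:3]  # fit near the optimum
    best3.sort()                                      # order by x for the fit
    x_new = parabola_vertex(best3)
    if any(abs(x_new - x) < 1e-9 for x, _ in samples):
        break  # surrogate minimum already sampled: converged
    samples.append((x_new, expensive_model(x_new)))

x_opt = min(samples, key=lambda p: p[1])[0]
```

The point of the surrogate loop, here as in ASMO, is the evaluation budget: the optimum is located with a handful of expensive model runs rather than a dense parameter sweep.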

  3. A Particle Swarm Optimization of Natural Ventilation Parameters in a Greenhouse with Continuous Roof Vents

    Directory of Open Access Journals (Sweden)

    Abdelhafid HASNI

    2009-03-01

Full Text Available Although natural ventilation plays an important role in affecting greenhouse climate, as defined by temperature, humidity and CO2 concentration, particularly in Mediterranean countries, little information and data are presently available on full-scale greenhouse ventilation mechanisms. In this paper, we present a new method for selecting the parameters based on a particle swarm optimization (PSO) algorithm, which optimizes the choice of parameters by minimizing a cost function. The simulator was based on a published model with some minor modifications, as we were interested in the ventilation parameters. The function is defined by a reduced model that can be used to simulate and predict the greenhouse environment, together with the tuning methods used to compute its parameters. This study focuses on the dynamic behavior of the inside air temperature and humidity during ventilation. Our approach is validated by comparison with experimental results. Various experimental techniques were used to make full-scale measurements of the air exchange rate in a 400 m2 plastic greenhouse. The model we propose, based on natural ventilation parameters optimized by particle swarm optimization, was compared with the measurement results.
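A minimal particle swarm optimizer of the kind described, minimizing a toy two-parameter cost function in place of the greenhouse-model fit error; the coefficients and bounds are conventional defaults, not values from the paper:

```python
import random

rng = random.Random(7)

def cost(params):
    # Toy stand-in for the model-vs-measurement fit error (minimum at (3, -1)).
    x, y = params
    return (x - 3.0) ** 2 + (y + 1.0) ** 2

W, C1, C2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
swarm = [[rng.uniform(-10, 10), rng.uniform(-10, 10)] for _ in range(20)]
vel = [[0.0, 0.0] for _ in swarm]
pbest = [p[:] for p in swarm]       # each particle's best-known position
gbest = min(pbest, key=cost)[:]     # swarm's best-known position

for _ in range(200):
    for i, p in enumerate(swarm):
        for d in range(2):
            r1, r2 = rng.random(), rng.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - p[d])
                         + C2 * r2 * (gbest[d] - p[d]))
            p[d] += vel[i][d]
        if cost(p) < cost(pbest[i]):
            pbest[i] = p[:]
    gbest = min(pbest + [gbest], key=cost)[:]
```

In the paper's setting, `cost` would run the reduced greenhouse model and compare the predicted temperature and humidity traces against the full-scale measurements.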

  4. A New Method for Optimal Regularization Parameter Determination in the Inverse Problem of Load Identification

    Directory of Open Access Journals (Sweden)

    Wei Gao

    2016-01-01

    Full Text Available According to the regularization method in the inverse problem of load identification, a new method for determining the optimal regularization parameter is proposed. Firstly, quotient function (QF is defined by utilizing the regularization parameter as a variable based on the least squares solution of the minimization problem. Secondly, the quotient function method (QFM is proposed to select the optimal regularization parameter based on the quadratic programming theory. For employing the QFM, the characteristics of the values of QF with respect to the different regularization parameters are taken into consideration. Finally, numerical and experimental examples are utilized to validate the performance of the QFM. Furthermore, the Generalized Cross-Validation (GCV method and the L-curve method are taken as the comparison methods. The results indicate that the proposed QFM is adaptive to different measuring points, noise levels, and types of dynamic load.

  5. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.

    Science.gov (United States)

    Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei

    2017-09-21

In order to utilize the distributed characteristics of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors.
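The idea of relaxing strict synchronization while bounding how far workers drift apart can be sketched as a bounded-staleness clock. This is a generic stale-synchronous-style toy, not the paper's DSP with its dynamic performance monitoring:

```python
class BoundedStalenessServer:
    """Toy parameter-server clock: a worker may start its next iteration
    only if it is at most `staleness` iterations ahead of the slowest
    worker; otherwise it must wait for the stragglers."""

    def __init__(self, n_workers, staleness):
        self.clocks = [0] * n_workers
        self.staleness = staleness

    def can_proceed(self, worker):
        return self.clocks[worker] - min(self.clocks) <= self.staleness

    def tick(self, worker):
        if not self.can_proceed(worker):
            return False       # too far ahead: block until others catch up
        self.clocks[worker] += 1
        return True

ps = BoundedStalenessServer(n_workers=3, staleness=2)
progress = [ps.tick(0) for _ in range(5)]  # fast worker 0, others idle
```

With staleness 2, the fast worker completes three iterations and is then blocked; DSP's contribution is adjusting such a bound at runtime from measured worker performance instead of fixing it in advance.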

  6. Factorization and the synthesis of optimal feedback gains for distributed parameter systems

    Science.gov (United States)

    Milman, Mark H.; Scheid, Robert E.

    1990-01-01

    An approach based on Volterra factorization leads to a new methodology for the analysis and synthesis of the optimal feedback gain in the finite-time linear quadratic control problem for distributed parameter systems. The approach circumvents the need for solving and analyzing Riccati equations and provides a more transparent connection between the system dynamics and the optimal gain. The general results are further extended and specialized for the case where the underlying state is characterized by autonomous differential-delay dynamics. Numerical examples are given to illustrate the second-order convergence rate that is derived for an approximation scheme for the optimal feedback gain in the differential-delay problem.

  7. Optimized convective transport with automated pressure control in on-line postdilution hemodiafiltration.

    Science.gov (United States)

    Joyeux, V; Sijpkens, Y; Haddj-Elmrabet, A; Bijvoet, A J; Nilsson, L-G

    2008-11-01

    In a stable patient population we evaluated on-line postdilution hemodiafiltration (HDF) on the incremental improvement in blood purification versus high-flux HD, using the same dialyzer and blood flow rate. For HDF we used a new way of controlling HDF treatments based on the concept of constant pressure control, where the trans-membrane pressure is automatically set by the machine using a feedback loop on the achieved filtration (HDF UC). We enrolled 20 patients on on-line HDF treatment and during a 4-week study period recorded key treatment parameters in HDF UC. For one mid-week study treatment performed in HD and one mid-week HDF UC treatment we sampled blood and spent dialysate to evaluate the removal of small- and middle-sized solutes. We achieved 18±3 liters of ultrafiltration in four-hour HDF UC treatments, corresponding to 27±3% of the treated blood volume. That percentage varied by patient hematocrit level. The ultrafiltration amounted to 49±4% of the estimated plasma water volume treated. We noted few machine alarms. For β2m and factor D the effective reduction in plasma level by HDF (76±6% and 43±9%, respectively) was significantly greater than in HD, and a similar relation was seen in mass recovered in spent dialysate. Small solute removal was similar in HDF and HD. Albumin loss was low. The additional convective transport provided by on-line HDF significantly improved the removal of middle molecules when all other treatment settings were equal. Using the automated pressure control mode in HDF, the convective volume depended on the blood volume processed and the patient hematocrit level.
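
    The reported percentages can be sanity-checked with simple volume arithmetic, taking the plasma water fraction as the standard 0.93 approximation and assuming an illustrative hematocrit of 0.40 and 66 liters of blood processed (both assumptions for illustration, not the study's individual data):

```python
# Back-of-the-envelope check of the record's percentages: convective volume as
# a share of processed blood volume and of estimated plasma water. The plasma
# water fraction of 0.93 is a standard approximation; the hematocrit (0.40)
# and processed blood volume (66 L) are assumed for illustration.
def convection_fractions(uf_liters, blood_liters, hematocrit):
    plasma_water = blood_liters * (1.0 - hematocrit) * 0.93
    return uf_liters / blood_liters, uf_liters / plasma_water

blood_frac, plasma_frac = convection_fractions(18.0, 66.0, 0.40)
print(round(blood_frac, 2), round(plasma_frac, 2))  # 0.27 0.49
```

    With these assumed inputs the two fractions reproduce the reported 27% of blood volume and 49% of plasma water.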

  8. Optimization of basic parameters of cyclic operation of underground gas storages

    Directory of Open Access Journals (Sweden)

    Віктор Олександрович Заєць

    2015-04-01

    Full Text Available The article formulates the problem of optimizing the process parameters of cyclic operation of underground gas storages in gas mode. The target function is defined, expressing the required capacity of the compressor station for gas injection into the storage. Its minimization yields the necessary technological parameters, such as flow rate and the change of reservoir pressure over time. The constraints and the target function are reduced to linear form, and the problem is solved by the simplex method.
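
    A linearized injection-scheduling problem of this kind can be posed directly as a linear program. The sketch below uses SciPy's HiGHS-based `linprog`; the per-period power coefficients, flow limits, and injection target are invented for illustration and are not from the article:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical linearized model: choose injection flows q1..q4 over four
# periods to meet a total storage target while minimizing compressor power,
# whose linearized cost grows as reservoir pressure rises period by period.
power_coeff = np.array([1.0, 1.2, 1.5, 1.9])   # assumed MW per unit flow
A_eq = np.ones((1, 4))
b_eq = [40.0]                                  # total gas to inject
bounds = [(0.0, 15.0)] * 4                     # per-period flow limits

res = linprog(power_coeff, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x)  # cheapest periods are filled first: [15. 15. 10.  0.]
```

    The solver fills the cheapest periods to their limits before touching more expensive ones, which is exactly the behavior a linear compressor-power model implies.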

  9. Crystallization of SHARPIN using an automated two-dimensional grid screen for optimization.

    Science.gov (United States)

    Stieglitz, Benjamin; Rittinger, Katrin; Haire, Lesley F

    2012-07-01

    An N-terminal fragment of human SHARPIN was recombinantly expressed in Escherichia coli, purified and crystallized. Crystals suitable for X-ray diffraction were obtained by a one-step optimization of seed dilution and protein concentration using a two-dimensional grid screen. The crystals belonged to the primitive tetragonal space group P4(3)2(1)2, with unit-cell parameters a = b = 61.55, c = 222.81 Å. Complete data sets were collected from native and selenomethionine-substituted protein crystals at 100 K to 2.6 and 2.0 Å resolution, respectively.

  10. Crystallization of SHARPIN using an automated two-dimensional grid screen for optimization

    International Nuclear Information System (INIS)

    Stieglitz, Benjamin; Rittinger, Katrin; Haire, Lesley F.

    2012-01-01

    The expression, purification and crystallization of an N-terminal fragment of SHARPIN are reported. Diffraction-quality crystals were obtained using a two-dimensional grid-screen seeding technique. An N-terminal fragment of human SHARPIN was recombinantly expressed in Escherichia coli, purified and crystallized. Crystals suitable for X-ray diffraction were obtained by a one-step optimization of seed dilution and protein concentration using a two-dimensional grid screen. The crystals belonged to the primitive tetragonal space group P4(3)2(1)2, with unit-cell parameters a = b = 61.55, c = 222.81 Å. Complete data sets were collected from native and selenomethionine-substituted protein crystals at 100 K to 2.6 and 2.0 Å resolution, respectively.

  11. A parameter estimation for DC servo motor by using optimization process

    International Nuclear Information System (INIS)

    Arjoni Amir

    2010-01-01

    Modeling and simulation of DC servo motor parameters using Matlab Simulink software have been carried out. The objective of the DC servo motor parameter estimation is to obtain significant values of the motor parameters (B, La, Ra, Km, J) that can be used in the actuation process of control systems. In control system analysis the DC servo motor is expressed by a transfer function equation, so that it can be analyzed more quickly as an actuator component. To obtain the model parameters and initial conditions of the DC servo motor, modeling and simulation are carried out in which the DC servo motor is combined with other components. Preliminary estimates of the DC servo motor parameters are taken from the manufacturer's data for the motor. These initial parameter values are used in an optimization process based on a nonlinear least squares algorithm that minimizes the cost function value, so that significant DC servo motor parameter values are obtained. The results of the optimization process are B = 0.039881, J = 1.2608e-007, Km = 0.069648, La = 2.3242e-006 and Ra = 1.8837. (author)
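
    The nonlinear least squares step can be sketched as follows, assuming a simplified first-order motor model with the electrical time constant neglected (La ~ 0) and with Km and Ra taken as known from the datasheet; the numerical values and the synthetic "measurement" are illustrative, not the paper's data:

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch of the record's idea: recover DC servo motor parameters by nonlinear
# least squares against a measured speed step response. With La neglected the
# speed obeys a first-order model; here (B, J) are estimated while Km and Ra
# are assumed known. All numerical values are illustrative.
Km, Ra, V = 0.07, 1.9, 12.0
t = np.linspace(0.0, 0.03, 300)

def speed(params, t):
    B, J = params
    a = (Km**2 / Ra + B) / J              # inverse time constant 1/tau
    w_ss = Km * V / (Km**2 + Ra * B)      # steady-state speed
    return w_ss * (1.0 - np.exp(-a * t))

rng = np.random.default_rng(0)
measured = speed([0.04, 1.2e-4], t) + rng.normal(0, 0.05, t.size)

fit = least_squares(lambda p: speed(p, t) - measured,
                    x0=[0.01, 1e-3], bounds=([0, 0], [1, 1]))
print(fit.x)  # expected to land near the true (B, J) = (0.04, 1.2e-4)
```

    The steady-state speed pins down B and the rise time pins down J, so the two parameters are identifiable from a single step response under this simplified model.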

  12. Optimization of the Automated Spray Layer-by-Layer Technique for Thin Film Deposition

    Science.gov (United States)

    2010-06-01

    air-pumped spray-paint cans to fully automated systems using high-pressure gas. This work uses the automated spray system previously ... spray solutions were delivered by ultra-high-purity nitrogen gas (AirGas) regulated to 25 psi, except when examining air pressure effects. The PAH solution ... polyelectrolyte solution feed tube; the resulting Venturi effect causes the liquid solution to be drawn up into the airbrush nozzle, where it is

  13. Automation of reverse engineering process in aircraft modeling and related optimization problems

    Science.gov (United States)

    Li, W.; Swetits, J.

    1994-01-01

    During 1994, engineering problems in aircraft modeling were studied. The initial concern was to obtain a surface model with desirable geometric characteristics. Much of the effort during the first half of the year was to find an efficient way of solving a computationally difficult optimization model. Since the smoothing technique in the proposal 'Surface Modeling and Optimization Studies of Aerodynamic Configurations' requires solutions of a sequence of large-scale quadratic programming problems, it is important to design algorithms that can solve each quadratic program in a few iterations. This research led to three papers by Dr. W. Li, which were submitted to SIAM Journal on Optimization and Mathematical Programming. Two of these papers have been accepted for publication. Even though significant progress was made during this phase of research and computation time was reduced from 30 min to 2 min for a sample problem, it was not good enough for on-line processing of digitized data points. After discussion with Dr. Robert E. Smith Jr., it was decided not to enforce shape constraints in order to simplify the model. As a consequence, P. Dierckx's nonparametric spline fitting approach was adopted, where one has only one control parameter for the fitting process: the error tolerance. At the same time the surface modeling software developed by Imageware was tested. Research indicated that a substantially improved fit of digitized data points can be achieved if a proper parameterization of the spline surface is chosen. A winning strategy is to incorporate Dierckx's surface fitting with a natural parameterization for aircraft parts. The report consists of 4 chapters. Chapter 1 provides an overview of reverse engineering related to aircraft modeling and some preliminary findings of the effort in the second half of the year. Chapters 2-4 are the research results by Dr. W. Li on penalty functions and conjugate gradient methods for

  14. Optimization of the dressing parameters in cylindrical grinding based on a generalized utility function

    Science.gov (United States)

    Aleksandrova, Irina

    2016-01-01

    The existing studies, concerning the dressing process, focus on the major influence of the dressing conditions on the grinding response variables. However, the choice of the dressing conditions is often made, based on the experience of the qualified staff or using data from reference books. The optimal dressing parameters, which are only valid for the particular methods and dressing and grinding conditions, are also used. The paper presents a methodology for optimization of the dressing parameters in cylindrical grinding. The generalized utility function has been chosen as an optimization parameter. It is a complex indicator determining the economic, dynamic and manufacturing characteristics of the grinding process. The developed methodology is implemented for the dressing of aluminium oxide grinding wheels by using experimental diamond roller dressers with different grit sizes made of medium- and high-strength synthetic diamonds type ??32 and ??80. To solve the optimization problem, a model of the generalized utility function is created which reflects the complex impact of dressing parameters. The model is built based on the results from the conducted complex study and modeling of the grinding wheel lifetime, cutting ability, production rate and cutting forces during grinding. They are closely related to the dressing conditions (dressing speed ratio, radial in-feed of the diamond roller dresser and dress-out time), the diamond roller dresser grit size/grinding wheel grit size ratio, the type of synthetic diamonds and the direction of dressing. Some dressing parameters are determined for which the generalized utility function has a maximum and which guarantee an optimum combination of the following: the lifetime and cutting ability of the abrasive wheels, the tangential cutting force magnitude and the production rate of the grinding process. The results obtained prove the possibility of control and optimization of grinding by selecting particular dressing

  15. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    Science.gov (United States)

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
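
    A minimal sketch of such a hybrid strategy uses SciPy's differential evolution as the global (GA-like) stage and its truncated-Newton (TNC) routine as the local stage, with the Rosenbrock function standing in for the groundwater calibration objective (the real objective and its data are not reproduced here):

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Hybrid global-then-local search: an evolutionary stage seeds a
# truncated-Newton refinement, mirroring the GA + truncated-Newton idea.
# The Rosenbrock function is only a stand-in test objective.
def objective(x):
    x = np.asarray(x)
    return float(np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2))

bounds = [(-5.0, 5.0)] * 4
coarse = differential_evolution(objective, bounds, maxiter=50, seed=1,
                                tol=1e-6, polish=False)
refined = minimize(objective, coarse.x, method="TNC", bounds=bounds)
print(coarse.fun, refined.fun)  # local stage should not worsen the seed
```

    The evolutionary stage supplies a basin-of-attraction estimate, so the truncated-Newton stage converges quickly without needing good hand-picked initial values.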

  16. Optimization of the Machining parameter of LM6 Aluminium alloy in CNC Turning using Taguchi method

    Science.gov (United States)

    Arunkumar, S.; Muthuraman, V.; Baskaralal, V. P. M.

    2017-03-01

    Due to the widespread use of highly automated machine tools in industry, manufacturing requires reliable models and methods for predicting the output performance of machining processes. In the machining of parts, surface quality is one of the most frequently specified customer requirements. In order for manufacturers to maximize their gains from utilizing CNC turning, accurate predictive models for surface roughness must be constructed. The prediction of optimum machining conditions for good surface finish plays an important role in process planning. This work deals with the study and development of a surface roughness prediction model for machining LM6 aluminium alloy. Two important tools used in parameter design are Taguchi orthogonal arrays and the signal-to-noise (S/N) ratio. Speed, feed, depth of cut and coolant are taken as process parameters at three levels. Taguchi's parameter design is employed here to perform the experiments at the various levels of the chosen parameters. The statistical analysis yields the optimum combination of speed, feed, depth of cut and coolant for obtaining good roughness for the cylindrical components. The result obtained through the Taguchi method is confirmed with real-time experimental work.
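
    For a smaller-the-better response such as surface roughness, the Taguchi S/N ratio is -10 log10 of the mean squared response, and the preferred factor level is the one with the highest S/N. A sketch with fabricated roughness readings (not the paper's measurements):

```python
import numpy as np

# Smaller-the-better signal-to-noise ratio used in Taguchi analysis, with a
# made-up data set: three levels of one factor, three replicate
# surface-roughness readings each (values are illustrative).
def sn_smaller_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

roughness = {1: [1.8, 1.7, 1.9], 2: [1.2, 1.3, 1.1], 3: [1.5, 1.6, 1.4]}
sn = {level: sn_smaller_better(vals) for level, vals in roughness.items()}
best = max(sn, key=sn.get)   # highest S/N marks the preferred level
print(best)                  # level 2 (lowest roughness)
```

    Repeating this per factor over an L9 array gives the level-average S/N table from which the optimal parameter combination is read off.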

  17. Parameter optimization method for longitudinal vibration absorber of ship shaft system

    Directory of Open Access Journals (Sweden)

    LIU Jinlin

    2017-05-01

    Full Text Available The longitudinal vibration of the ship shaft system is one of the most important sources of hull stern vibration, and it can be effectively reduced by installing a longitudinal vibration absorber, bringing the vibration and noise of ships under control. However, the parameters of longitudinal vibration absorbers have a great influence on the vibration characteristics of the shaft system. A shafting test platform was therefore studied, for which a finite element model was built, and the relationship between longitudinal stiffness and longitudinal vibration in the shaft system was analyzed in a straight alignment state. Furthermore, a longitudinal damping model of the shaft system was built in which the parameters of the vibration absorber were non-dimensionalized, the weight of the vibration absorber was set as a constant, and an optimization algorithm was used to calculate the optimized stiffness and damping coefficient of the absorber. Finally, the longitudinal vibration frequency responses of the shafting test platform before and after optimizing the parameters of the longitudinal vibration absorber were compared, and the results indicate that the longitudinal vibration of the shafting test platform was decreased effectively, which suggests that this approach can provide a theoretical foundation for the parameter optimization of longitudinal vibration absorbers.
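
    The record's numerical optimization is not reproduced here, but for the classical case of a damped absorber on an undamped primary system, the optimal tuning and damping admit the well-known Den Hartog closed-form expressions, which make a useful sanity check for such an optimizer (shown for an assumed 5% mass ratio):

```python
import math

# Classical Den Hartog tuning for a dynamic vibration absorber, given here as
# a hedged stand-in for the paper's numerical optimization: with the absorber
# mass fixed, the optimal frequency ratio and damping ratio follow closed-form
# expressions in the mass ratio mu (absorber mass / primary modal mass).
def den_hartog(mu):
    f_opt = 1.0 / (1.0 + mu)                            # tuning ratio wa/wn
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu)**3))
    return f_opt, zeta_opt

f, z = den_hartog(0.05)   # a 5% absorber mass, chosen for illustration
print(round(f, 3), round(z, 3))  # 0.952 0.127
```

    A numerical optimizer on the same idealized model should land near these values; deviations then quantify the effect of the real shafting dynamics.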

  18. High-resolution MRI of the labyrinth. Optimization of scan parameters with 3D-FSE

    International Nuclear Information System (INIS)

    Sakata, Motomichi; Harada, Kuniaki; Shirase, Ryuji; Kumagai, Akiko; Ogasawara, Masashi

    2005-01-01

    The aim of our study was to optimize the parameters of high-resolution MRI of the labyrinth with a 3D fast spin-echo (3D-FSE) sequence. We investigated repetition time (TR), echo time (TE), matrix, field of view (FOV), and coil selection in terms of CNR (contrast-to-noise ratio) and SNR (signal-to-noise ratio) by comparing axial images and/or three-dimensional images. The optimal 3D-FSE sequence parameters were as follows: 1.5 Tesla MR unit (Signa LX, GE Medical Systems), 3D-FSE sequence, dual 3-inch surface coil, acquisition time = 12.08 min, TR = 5000 msec, TE = 300 msec, number of excitations (NEX) = 3, FOV = 12 cm, matrix = 256 x 256, slice thickness = 0.5 mm/0.0 sp, echo train = 64, bandwidth = ±31.5 kHz. High-resolution MRI of the labyrinth using the optimized 3D-FSE sequence parameters permits visualization of important anatomic details (such as the scala tympani and scala vestibuli), making it possible to determine inner ear anomalies and the patency of cochlear turns. To obtain excellent heavily T2-weighted axial and three-dimensional images of the labyrinth, high CNR, SNR, and spatial resolution are significant factors at the present time. Furthermore, it is important not only to optimize the scan parameters of 3D-FSE but also to select an appropriate coil for high-resolution MRI of the labyrinth. (author)
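
    CNR and SNR comparisons of this kind are typically computed from region-of-interest statistics. The functions below use the standard ROI definitions, which may differ in detail from the authors' exact measurement procedure:

```python
import numpy as np

# Standard region-of-interest estimators: SNR as mean signal over noise
# standard deviation, CNR as the absolute mean difference between two tissue
# ROIs over the same noise standard deviation.
def snr(signal_roi, noise_roi):
    return np.mean(signal_roi) / np.std(noise_roi)

def cnr(roi_a, roi_b, noise_roi):
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi)
```

    Comparing these figures across candidate TR/TE/coil settings is what drives the parameter choice described in the record.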

  19. Optimization of Temperature Schedule Parameters on Heat Supply in Power-and-Heat Supply Systems

    Directory of Open Access Journals (Sweden)

    V. A. Sednin

    2009-01-01

    Full Text Available The paper considers problems concerning the optimization of the temperature schedule in district heating systems with steam-turbine thermal power stations having average initial steam parameters. It is shown that maintaining an optimal network water temperature increases the energy efficiency of heat supply through additional systematic fuel savings.

  20. Fast reactor parameter optimization taking into account changes in fuel charge type during reactor operation time

    International Nuclear Information System (INIS)

    Afrin, B.A.; Rechnov, A.V.; Usynin, G.B.

    1987-01-01

    The optimization problem is formulated and solved for the parameters determining the layout of the central part of a sodium-cooled power reactor, taking into account possible changes in fuel charge type during the reactor operation time. The losses incurred by a change of fuel composition type are estimated for two reactor modifications, designed for minimum doubling time with oxide and carbide fuels, respectively.

  1. Is transverse feedback necessary for the SSC emittance preservation? (Vibration noise analysis and feedback parameters optimization)

    International Nuclear Information System (INIS)

    Parkhomchuk, V.V.; Shiltsev, V.D.

    1993-06-01

    The paper considers the Superconducting Super Collider (SSC) site ground motion measurements as well as data from accelerators worldwide about noises that worsen beam performance. Unacceptably fast emittance growth due to these noises is predicted for the SSC. A transverse feedback system was found to be the only satisfactory means of preventing this emittance growth. Optimization of the primary feedback parameters was carried out.

  2. Bounds on Entanglement Dimensions and Quantum Graph Parameters via Noncommutative Polynomial Optimization

    NARCIS (Netherlands)

    Gribling, Sander; de Laat, David; Laurent, Monique

    2017-01-01

    In this paper we study bipartite quantum correlations using techniques from tracial polynomial optimization. We construct a hierarchy of semidefinite programming lower bounds on the minimal entanglement dimension of a bipartite correlation. This hierarchy converges to a new parameter: the minimal

  3. Cellular Neural Networks: A genetic algorithm for parameters optimization in artificial vision applications

    Energy Technology Data Exchange (ETDEWEB)

    Taraglio, S. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dipt. Innovazione; Zanela, A. [Rome Univ. 'La Sapienza' (Italy). Dipt. di Fisica

    1997-03-01

    An optimization method, based on evolutionary strategies, is proposed for some of the parameters of CNNs (Cellular Neural Networks). The new class of feedback templates found is more effective in extracting features from the images that an autonomous vehicle acquires than those previously reported in the CNN literature.

  4. Optimization of WEDM process parameters using deep cryo-treated Inconel 718 as work material

    Directory of Open Access Journals (Sweden)

    Bijaya Bijeta Nayak

    2016-03-01

    Full Text Available The present work proposes an experimental investigation and optimization of various process parameters during taper cutting of deep cryo-treated Inconel 718 in the wire electrical discharge machining process. Taguchi's design of experiments is used to gather information regarding the process with a smaller number of experimental runs, considering six input parameters such as part thickness, taper angle, pulse duration, discharge current, wire speed and wire tension. Since the traditional Taguchi method fails to optimize multiple performance characteristics, maximum deviation theory is applied to convert multiple performance characteristics into an equivalent single performance characteristic. Due to the complexity and non-linearity involved in this process, a good functional relationship with reasonable accuracy between performance characteristics and process parameters is difficult to obtain. To address this issue, the present study proposes an artificial neural network (ANN) model to determine the relationship between input parameters and performance characteristics. Finally, the process model is optimized to obtain the best parametric combination by a new meta-heuristic approach known as the bat algorithm. The results show that the proposed method is an effective tool for simultaneous optimization of performance characteristics during taper cutting in the WEDM process.

  5. An analysis to optimize the process parameters of friction stir welded ...

    African Journals Online (AJOL)

    The friction stir welding (FSW) of steel is a challenging task. Experiments are conducted here, with a tool having a conical pin of 0.4 mm clearance. The process parameters are optimized by using the Taguchi technique based on Taguchi's L9 orthogonal array. Experiments have been conducted based on three process ...

  6. Cellular Neural Networks: A genetic algorithm for parameters optimization in artificial vision applications

    International Nuclear Information System (INIS)

    Taraglio, S.; Zanela, A.

    1997-03-01

    An optimization method, based on evolutionary strategies, is proposed for some of the parameters of CNNs (Cellular Neural Networks). The new class of feedback templates found is more effective in extracting features from the images that an autonomous vehicle acquires than those previously reported in the CNN literature.

  7. Multi Objective Optimization of Weld Parameters of Boiler Steel Using Fuzzy Based Desirability Function

    Directory of Open Access Journals (Sweden)

    M. Satheesh

    2014-01-01

    Full Text Available The high pressure differential across the wall of pressure vessels is potentially dangerous and has caused many fatal accidents in the history of their development and operation. For this reason the structural integrity of weldments is critical to the performance of pressure vessels. In recent years much research has been devoted to the study of the effects of variations in welding parameters and consumables on the mechanical properties of pressure vessel steel weldments, to optimize weld integrity and ensure pressure vessels are safe. The quality of the weld is a very important working aspect for the manufacturing and construction industries. Because of its high quality and reliability, Submerged Arc Welding (SAW) is one of the chief metal joining processes employed in industry. This paper addresses the application of the desirability function approach combined with fuzzy logic analysis to optimize the multiple quality characteristics (bead reinforcement, bead width, bead penetration and dilution) of submerged arc welding of SA 516 Grade 70 steel (boiler steel). Experiments were conducted using Taguchi's L27 orthogonal array, varying the weld parameters of welding current, arc voltage, welding speed and electrode stick-out. By analyzing the response table and response graph of the fuzzy reasoning grade, optimal parameters were obtained. Solutions from this method can be useful for pressure vessel manufacturers and operators searching for an optimal welding condition.

  8. Optimization of CVD parameters for long ZnO NWs grown on ITO

    Indian Academy of Sciences (India)

    The optimization of chemical vapour deposition (CVD) parameters for long and vertically aligned (VA) ZnO nanowires (NWs) was investigated. Typical single-crystal ZnO NWs grown on an indium tin oxide (ITO)-coated glass substrate were successfully synthesized. First, the conductive side of the ITO–glass substrate was ...

  9. Parameter Optimization for Quantitative Signal-Concentration Mapping Using Spoiled Gradient Echo MRI

    Directory of Open Access Journals (Sweden)

    Gasser Hathout

    2012-01-01

    Full Text Available Rationale and Objectives. Accurate signal to tracer concentration maps are critical to quantitative MRI. The purpose of this study was to evaluate and optimize spoiled gradient echo (SPGR) MR sequences for the use of gadolinium (Gd-DTPA) as a kinetic tracer. Methods. Water-gadolinium phantoms were constructed for a physiologic range of gadolinium concentrations. Observed and calculated SPGR signal to concentration curves were generated. Using a percentage error determination, optimal pulse parameters for signal to concentration mapping were obtained. Results. The accuracy of the SPGR equation is a function of the chosen MR pulse parameters, particularly the repetition time (TR) and the flip angle (FA). At all experimental values of TR, increasing FA decreases the ratio between observed and calculated signals. Conversely, for a constant FA, increasing TR increases this ratio. Using optimized pulse parameter sets, it is possible to achieve excellent accuracy (approximately 5%) over a physiologic range of tracer concentrations. Conclusion. Optimal pulse parameter sets exist and their use is essential for deriving accurate signal to concentration curves in quantitative MRI.
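
    The SPGR signal-to-concentration relationship follows from the spoiled gradient echo signal equation, with gadolinium shortening T1 as 1/T1 = 1/T1_0 + r1*C. The sketch below uses typical literature values for T1_0 and the relaxivity r1 (assumptions for illustration, not the study's phantom calibration):

```python
import numpy as np

# Spoiled gradient echo (SPGR) signal model with gadolinium shortening T1 via
# 1/T1 = 1/T1_0 + r1*C. TR, flip angle, T1_0 and r1 below are typical
# illustrative values, not the paper's optimized parameter set.
def spgr_signal(C, TR=0.010, flip_deg=30.0, T1_0=1.4, r1=4.5, M0=1.0):
    # C in mM, TR and T1 in seconds, r1 in 1/(mM*s); TE decay neglected
    T1 = 1.0 / (1.0 / T1_0 + r1 * np.asarray(C, dtype=float))
    E1 = np.exp(-TR / T1)
    a = np.radians(flip_deg)
    return M0 * np.sin(a) * (1.0 - E1) / (1.0 - E1 * np.cos(a))

conc = np.linspace(0.0, 5.0, 6)
s = spgr_signal(conc)
print(np.round(s, 4))  # signal rises monotonically with concentration here
```

    Because the curve is monotone over this range, it can be inverted numerically to map a measured signal back to a tracer concentration, which is exactly the mapping whose accuracy the study optimizes.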

  10. A New Method for Determining Optimal Regularization Parameter in Near-Field Acoustic Holography

    Directory of Open Access Journals (Sweden)

    Yue Xiao

    2018-01-01

    Full Text Available The Tikhonov regularization method is effective in stabilizing the reconstruction process of near-field acoustic holography (NAH) based on the equivalent source method (ESM), and the selection of the optimal regularization parameter is a key problem that determines the regularization effect. In this work, a new method for determining the optimal regularization parameter is proposed. The transfer matrix relating the source strengths of the equivalent sources to the measured pressures on the hologram surface is augmented by adding a fictitious point source with zero strength. The minimization of the norm of this fictitious point source strength serves as the criterion for choosing the optimal regularization parameter, since the reconstructed value should tend to zero. The original inverse problem of calculating the source strengths is converted into a univariate optimization problem, which is solved by a one-dimensional search technique. Two numerical simulations, with a point-driven simply supported plate and a pulsating sphere, are investigated to validate the performance of the proposed method by comparison with the L-curve method. The results demonstrate that the proposed method can determine the regularization parameter correctly and effectively for the reconstruction in NAH.
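
    The fictitious-source criterion can be illustrated on a synthetic system: augment the transfer matrix with one extra column whose true strength is zero, sweep the Tikhonov parameter, and keep the value that minimizes the reconstructed fictitious strength. The matrix, data, and noise level below are invented stand-ins for the ESM setup:

```python
import numpy as np

# Toy illustration of the record's criterion: the transfer matrix G is
# augmented with a fictitious point source column; the Tikhonov parameter
# minimizing the reconstructed fictitious strength is selected. All matrices
# and noise levels are synthetic stand-ins.
rng = np.random.default_rng(3)
G = rng.normal(size=(40, 20))
q_true = rng.normal(size=20)
p = G @ q_true + 0.05 * rng.normal(size=40)      # noisy hologram pressures

g_fict = rng.normal(size=(40, 1))                # fictitious source column
G_aug = np.hstack([G, g_fict])

def fict_strength(lam):
    A = G_aug.T @ G_aug + lam * np.eye(21)       # regularized normal equations
    q = np.linalg.solve(A, G_aug.T @ p)
    return abs(q[-1])                            # should tend to zero

lams = np.logspace(-6, 2, 60)
best = min(lams, key=fict_strength)              # one-dimensional search
print(best)
```

    Here the one-dimensional search is a simple grid scan; the paper's golden-section or similar line search would refine the same univariate objective.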

  11. Optimization of control parameters of a hot cold controller by means of Simplex type methods

    Science.gov (United States)

    Porte, C.; Caron-Poussin, M.; Carot, S.; Couriol, C.; Moreno, M. Martin; Delacroix, A.

    1997-01-01

    This paper describes a hot/cold controller for regulating crystallization operations. The system was identified with a common method (the Broida method) and the parameters were obtained by the Ziegler-Nichols method. The paper shows that this empirical method only allows a qualitative approach to regulation and that, in some instances, the parameters obtained are unreliable and therefore cannot be used to cancel variations between the set point and the actual values. Optimization methods were used to determine the regulation parameters and solve this identification problem. It was found that the weighted centroid method was the best one. PMID:18924791
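
    A simplex-type tuning loop of this kind can be sketched with SciPy's standard Nelder-Mead search (not the weighted-centroid variant the authors found best), minimizing an integral-of-absolute-error cost for a PI controller on a toy first-order plant; all plant and cost details are illustrative assumptions:

```python
from scipy.optimize import minimize

# Simplex-type controller tuning: Nelder-Mead minimizes the closed-loop
# integral of absolute error (IAE) over the PI gains. The first-order plant,
# time step, and horizon are toy values for illustration.
def closed_loop_cost(gains, dt=0.1, n=300, tau=5.0, k=2.0):
    kp, ki = gains
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(n):
        e = 1.0 - y                       # unit set-point step
        integ += e * dt
        u = kp * e + ki * integ           # PI control law
        y += dt * (-y + k * u) / tau      # first-order plant, Euler step
        cost += abs(e) * dt               # IAE criterion
    return cost

res = minimize(closed_loop_cost, x0=[0.5, 0.1], method="Nelder-Mead")
print(res.x, res.fun)
```

    The simplex search needs only cost evaluations, no gradients, which is why it suits experimental tuning of a physical hot/cold loop where each evaluation is a run of the process.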

  12. Process parameter optimization based on principal components analysis during machining of hardened steel

    Directory of Open Access Journals (Sweden)

    Suryakant B. Chandgude

    2015-09-01

    Full Text Available The optimum selection of process parameters plays an important role in improving the surface finish, minimizing tool wear, increasing the material removal rate and reducing the machining time of any machining process. In this paper, optimum parameters for machining AISI D2 hardened steel using a solid carbide TiAlN-coated end mill have been investigated. For the optimization of process parameters along with multiple quality characteristics, the principal components analysis method has been adopted in this work. The confirmation experiments revealed that the principal components analysis method is a useful tool for improving cutting performance.
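
    One common way to fold several quality characteristics into a single index with principal components, in the spirit of this record (the paper's exact weighting scheme may differ), is to standardize the responses and weight the component scores by explained variance:

```python
import numpy as np

# Sketch of a PCA-based composite quality index: standardized machining
# responses are projected onto principal components and the scores are
# weighted by explained variance. The data matrix is fabricated.
rng = np.random.default_rng(7)
X = rng.normal(size=(9, 3))               # 9 runs x 3 quality characteristics
Z = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each response

cov = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]         # components by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

weights = eigvals / eigvals.sum()         # explained-variance weights
scores = Z @ eigvecs                      # principal component scores
composite = scores @ weights              # one index per experimental run
best_run = int(np.argmax(composite))
print(best_run)
```

    Ranking runs by the composite index reduces the multi-response problem to the single-response selection that standard Taguchi-style analysis handles.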

  13. Parameter optimization for reproducible cardiac 1H-MR spectroscopy at 3 Tesla.

    Science.gov (United States)

    de Heer, Paul; Bizino, Maurice B; Lamb, Hildo J; Webb, Andrew G

    2016-11-01

    To optimize data acquisition parameters in cardiac proton MR spectroscopy, and to evaluate the intra- and intersession variability in myocardial triglyceride content. Data acquisition parameters at 3 Tesla (T) were optimized and reproducibility measured using, in total, 49 healthy subjects. The signal-to-noise-ratio (SNR) and the variance in metabolite amplitude between averages were measured for: (i) global versus local power optimization; (ii) static magnetic field (B0) shimming performed during free-breathing or within breathholds; (iii) post R-wave peak measurement times between 50 and 900 ms; (iv) without respiratory compensation, with breathholds and with navigator triggering; and (v) frequency selective excitation, Chemical Shift Selective (CHESS) and Multiply Optimized Insensitive Suppression Train (MOIST) water suppression techniques. Using the optimized parameters intra- and intersession myocardial triglyceride content reproducibility was measured. Two cardiac proton spectra were acquired with the same parameters and compared (intrasession reproducibility), after which the subject was removed from the scanner and placed back in the scanner and a third spectrum was acquired which was compared with the first measurement (intersession reproducibility). Local power optimization increased SNR on average by 22% compared with global power optimization (P = 0.0002). The average linewidth was not significantly different for pencil beam B0 shimming using free-breathing or breathholds (19.1 Hz versus 17.5 Hz; P = 0.15). The highest signal stability occurred at a cardiac trigger delay around 240 ms. The mean amplitude variation was significantly lower for breathholds versus free-breathing (P = 0.03) and for navigator triggering versus free-breathing (P = 0.03) as well as for navigator triggering versus breathhold (P = 0.02). The mean residual water signal using CHESS (1.1%, P = 0.01) or MOIST (0.7%, P = 0.01) water suppression was significantly lower than using

  14. Combination of Compensations and Multi-Parameter Coil for Efficiency Optimization of Inductive Power Transfer System

    Directory of Open Access Journals (Sweden)

    Guozhen Hu

    2017-12-01

    Full Text Available A loosely coupled inductive power transfer (IPT) system for industrial track applications is investigated in this paper. The IPT converter, using a primary Inductor-Capacitor-Inductor (LCL) network and secondary parallel compensation, is analyzed together with the coil design for optimal operating efficiency. An accurate analytical model and expressions for self-inductance and mutual inductance are proposed to obtain the coil parameters. The optimization is then performed by combining the proposed resonant compensations with the coil parameters. The results are evaluated and discussed using finite element analysis (FEA). Finally, an experimental prototype is constructed to verify the proposed approach, and the experimental results show that the optimization is well suited to distributed IPT systems for industrial tracks.

  15. Q-Learning Multi-Objective Sequential Optimal Sensor Parameter Weights

    Directory of Open Access Journals (Sweden)

    Raquel Cohen

    2016-04-01

    Full Text Available The goal of our solution is to deliver trustworthy decision-making analysis tools that evaluate situations and the potential impacts of decisions from acquired information, adding efficiency for continuing mission operations and analyst information. We discuss the use of cooperation in modeling and simulation and show quantitative results for design choices in resource allocation. The key contribution of our paper is to combine remote sensing decision making with Nash Equilibrium for sensor parameter weighting optimization. By calculating all Nash Equilibrium possibilities per period, sensor allocation is optimized for higher overall system efficiency. Our tool provides insight into the most important or optimal weights for sensor parameters and can be used to tune those weights efficiently.

  16. Optimal selection of LQR parameter using AIS for LFC in a multi-area power system

    Directory of Open Access Journals (Sweden)

    Muhammad Abdillah

    2016-12-01

    Full Text Available This paper proposes a method to optimize the parameters of the linear quadratic regulator (LQR) using an artificial immune system (AIS) via clonal selection. The LQR parameters considered in this paper are the weighting matrices Q and R. The optimal LQR control for load frequency control (LFC) is installed in each area as a decentralized control scheme. The aim of this control design is to automatically improve the dynamic performance of LFC when an unexpected load change occurs in the power system network. A load demand change of 0.01 p.u., used as a disturbance, is applied to the LFC in Area 1. The proposed method guarantees the stability of the overall closed-loop system. The simulation results show that the proposed method reduces the overshoot of the system and shortens the settling time, performing better than the trial-and-error method (TEM) and the case without optimal LQR control.

  17. Parameter optimization of differential evolution algorithm for automatic playlist generation problem

    Science.gov (United States)

    Alamag, Kaye Melina Natividad B.; Addawe, Joel M.

    2017-11-01

    With the digitalization of music, music collections have grown substantially, and there is a need to create lists of music that filter a collection according to user preferences, giving rise to the Automatic Playlist Generation Problem (APGP). Previous attempts to solve this problem include the use of search and optimization algorithms. If a music database is very large, the algorithm used must be able to search the lists thoroughly while taking into account the quality of the playlist given a set of user constraints. In this paper we apply an evolutionary meta-heuristic optimization algorithm, Differential Evolution (DE), with different combinations of parameter values, and select the best-performing set based on four standard test functions. The performance of the proposed algorithm is then compared with a standard Genetic Algorithm (GA) and a hybrid GA with Tabu Search. Numerical simulations show better results for the Differential Evolution approach with the optimized parameter values.
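
    The DE loop the abstract relies on (mutation with factor F, binomial crossover with rate CR, greedy selection) can be sketched in a few lines. The sphere test function and all parameter values below are illustrative assumptions, not the settings used by the authors.

```python
import random

def sphere(x):
    # Standard test function: global minimum 0 at the origin.
    return sum(v * v for v in x)

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [f(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: donor vector from three distinct other members.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            donor = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            # Binomial crossover; j_rand guarantees at least one donor gene.
            j_rand = rng.randrange(dim)
            trial = [donor[d] if (rng.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fitness[i]:  # Greedy selection keeps the better vector.
                pop[i], fitness[i] = trial, ft
    best = min(range(pop_size), key=fitness.__getitem__)
    return pop[best], fitness[best]

best_x, best_f = differential_evolution(sphere, [(-5.0, 5.0)] * 3)
```

    Tuning DE then amounts to repeating this run over a grid of (F, CR, pop_size) values on each test function and keeping the combination with the best final fitness.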

  18. Optimization of the blade trailing edge geometric parameters for a small scale ORC turbine

    Science.gov (United States)

    Zhang, L.; Zhuge, W. L.; Peng, J.; Liu, S. J.; Zhang, Y. J.

    2013-12-01

    In general, the method proposed by Whitfield and Baines is adopted for turbine preliminary design. In this design procedure for the turbine blade trailing edge geometry, two assumptions (ideal gas and zero discharge swirl) and two experience values (WR and γ) are used to obtain the three blade trailing edge geometric parameters: the relative exit flow angle β6, the exit tip radius R6t and the hub radius R6h, with the aim of maximizing the rotor total-to-static isentropic efficiency. The method above is based on experience and test results obtained with air as the working fluid, so it neither provides a mathematical optimal solution to guide the optimization of the geometric parameters nor considers the real-gas effects of the organic working fluid, which must be taken into account in the ORC turbine design procedure. In this paper, a new preliminary design and optimization method is established to reduce the exit kinetic energy loss and thereby improve the turbine efficiency ηts, and the blade trailing edge geometric parameters of a small scale ORC turbine with working fluid R123 are optimized based on this method. The mathematical optimal solution minimizing the exit kinetic energy is deduced, and can be used to design and optimize the exit shroud/hub radius and exit blade angle. The influence of the blade trailing edge geometric parameters on the turbine efficiency ηts is then analysed, and optimal working ranges of these parameters are recommended for working fluid R123. This method is used to reduce the exit kinetic energy loss of an existing ORC turbine from 11.7% to 7%, which indicates the effectiveness of the method. However, the internal passage loss increases from 7.9% to 9.4%, so the only way to account for the influence of the geometric parameters on the internal passage loss is to give empirical ranges for these parameters, such as the recommended range of 0.3 to 0.4 for γ, and the value

  19. Optimization of the blade trailing edge geometric parameters for a small scale ORC turbine

    International Nuclear Information System (INIS)

    Zhang, L; Zhuge, W L; Liu, S J; Zhang, Y J; Peng, J

    2013-01-01

    In general, the method proposed by Whitfield and Baines is adopted for turbine preliminary design. In this design procedure for the turbine blade trailing edge geometry, two assumptions (ideal gas and zero discharge swirl) and two experience values (WR and γ) are used to obtain the three blade trailing edge geometric parameters: the relative exit flow angle β6, the exit tip radius R6t and the hub radius R6h, with the aim of maximizing the rotor total-to-static isentropic efficiency. The method above is based on experience and test results obtained with air as the working fluid, so it neither provides a mathematical optimal solution to guide the optimization of the geometric parameters nor considers the real-gas effects of the organic working fluid, which must be taken into account in the ORC turbine design procedure. In this paper, a new preliminary design and optimization method is established to reduce the exit kinetic energy loss and thereby improve the turbine efficiency ηts, and the blade trailing edge geometric parameters of a small scale ORC turbine with working fluid R123 are optimized based on this method. The mathematical optimal solution minimizing the exit kinetic energy is deduced, and can be used to design and optimize the exit shroud/hub radius and exit blade angle. The influence of the blade trailing edge geometric parameters on the turbine efficiency ηts is then analysed, and optimal working ranges of these parameters are recommended for working fluid R123. This method is used to reduce the exit kinetic energy loss of an existing ORC turbine from 11.7% to 7%, which indicates the effectiveness of the method. However, the internal passage loss increases from 7.9% to 9.4%, so the only way to account for the influence of the geometric parameters on the internal passage loss is to give empirical ranges for these parameters, such as the recommended range of 0.3 to 0.4 for γ, and the

  20. A new framework for analysing automated acoustic species-detection data: occupancy estimation and optimization of recordings post-processing

    Science.gov (United States)

    Chambert, Thierry A.; Waddle, J. Hardin; Miller, David A.W.; Walls, Susan; Nichols, James D.

    2018-01-01

    The development and use of automated species-detection technologies, such as acoustic recorders, for monitoring wildlife are rapidly expanding. Automated classification algorithms provide a cost- and time-effective means to process information-rich data, but often at the cost of additional detection errors. Appropriate methods are necessary to analyse such data while dealing with the different types of detection errors. We developed a hierarchical modelling framework for estimating species occupancy from automated species-detection data. We explore the design and optimization of data post-processing procedures to account for detection errors and generate accurate estimates. Our proposed method accounts for both imperfect detection and false positive errors, and utilizes information about both the occurrence and the abundance of detections to improve estimation. Using simulations, we show that our method provides much more accurate estimates than models ignoring the abundance of detections. The same findings are reached when we apply the methods to two real datasets on North American frogs surveyed with acoustic recorders. When false positives occur, estimator accuracy can be improved by having a human observer post-validate a subset of the detections produced by the classification algorithm. We use simulations to investigate the relationship between accuracy and effort spent on post-validation, and find that very accurate occupancy estimates can be obtained with as little as 1% of the data being validated. Automated monitoring of wildlife provides both opportunities and challenges. Our methods for analysing automated species-detection data help to meet key challenges unique to these data and will prove useful for many wildlife monitoring programs.

  1. A shot parameter specification subsystem for automated control of PBFA II accelerator shots

    International Nuclear Information System (INIS)

    Spiller, J.L.

    1987-01-01

    The author reports on the shot parameter specification subsystem (SPSS), an integral part of the automatic control system developed for the Particle Beam Fusion Accelerator II (PBFA II). This system has been designed to fully utilize the accelerator by tailoring shot parameters to the needs of the experimenters, and the SPSS is the key to this flexibility. Automatic systems will be required on many pulsed power machines for the fastest turnaround, the highest reliability, and the most cost-effective operation; these systems will require the flexibility and ease of use that are part of the SPSS. The author discusses how the PBFA II control system has proved to be an effective modular system, flexible enough to meet the demands of both the fast-track construction of PBFA II and the control needs of Hermes III. This system is expected to meet the demands of most future machine changes.

  2. Global parameter optimization of a Mather-type plasma focus in the framework of the Gratton–Vargas two-dimensional snowplow model

    International Nuclear Information System (INIS)

    Auluck, S K H

    2014-01-01

    Dense plasma focus (DPF) is known to produce highly energetic ions, electrons and plasma environment which can be used for breeding short-lived isotopes, plasma nanotechnology and other material processing applications. Commercial utilization of DPF in such areas would need a design tool that can be deployed in an automatic search for the best possible device configuration for a given application. The recently revisited (Auluck 2013 Phys. Plasmas 20 112501) Gratton–Vargas (GV) two-dimensional analytical snowplow model of plasma focus provides a numerical formula for dynamic inductance of a Mather-type plasma focus fitted to thousands of automated computations, which enables the construction of such a design tool. This inductance formula is utilized in the present work to explore global optimization, based on first-principles optimality criteria, in a four-dimensional parameter-subspace of the zero-resistance GV model. The optimization process is shown to reproduce the empirically observed constancy of the drive parameter over eight decades in capacitor bank energy. The optimized geometry of plasma focus normalized to the anode radius is shown to be independent of voltage, while the optimized anode radius is shown to be related to capacitor bank inductance. (paper)

  3. Optimization of process parameters for a quasi-continuous tablet coating system using design of experiments.

    Science.gov (United States)

    Cahyadi, Christine; Heng, Paul Wan Sia; Chan, Lai Wah

    2011-03-01

    The aim of this study was to identify and optimize the critical process parameters of the newly developed Supercell quasi-continuous coater for optimal tablet coat quality. Design of experiments, aided by multivariate analysis techniques, was used to quantify the effects of various coating process conditions and their interactions on the quality of film-coated tablets. The process parameters varied included batch size, inlet temperature, atomizing pressure, plenum pressure, spray rate and coating level. An initial screening stage was carried out using a 2^(6-1) resolution IV fractional factorial design. Following these preliminary experiments, an optimization study was carried out using the Box-Behnken design. The main response variables measured included drug-loading efficiency, coat thickness variation, and the extent of tablet damage. Apparent optimum conditions were determined using response surface plots. The process parameters exerted various effects on the different response variables; hence, trade-offs between individual optima were necessary to obtain the best compromised set of conditions. The adequacy of the optimized process conditions in meeting the combined goals for all responses was indicated by the composite desirability value. By using response surface methodology and optimization, coating conditions were defined which produced coated tablets with high drug-loading efficiency, low incidence of tablet damage and low coat thickness variation. Optimal conditions were found to vary over a large spectrum when different responses were considered. Changes in processing parameters across the design space did not result in drastic changes to coat quality, thereby demonstrating the robustness of the Supercell coating process. © 2010 American Association of Pharmaceutical Scientists

  4. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    Science.gov (United States)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

    Support vector machine (SVM) is a popular classification method known for strong generalization capability. SVM can solve both classification and regression problems, using linear or nonlinear kernels. However, a weakness of SVM is that its optimal parameter values are difficult to determine. SVM calculates the best linear separator in the input feature space according to the training data. To classify data which are not linearly separable, SVM uses the kernel trick to transform the data into linearly separable data in a higher-dimensional feature space. The kernel trick uses various kernel functions, such as the linear, polynomial, radial basis function (RBF) and sigmoid kernels. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, genetic algorithms are proposed as the search algorithm for the optimal parameter values, thus increasing the best classification accuracy of SVM. Data were taken from the UCI machine learning repository: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selecting kernel parameters. The best accuracy has been improved from linear kernel: 85.12%, polynomial: 81.76%, RBF: 77.22% and sigmoid: 78.70%. However, for larger data sizes, this method is not practical because it takes a lot of time.
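
    The GA-over-SVM-parameters idea can be sketched as a small genetic algorithm searching over (log C, log gamma). To keep the sketch self-contained, cv_accuracy below is a hypothetical surrogate standing in for cross-validated accuracy; a real run would instead train and score something like sklearn.svm.SVC(C=10**log_C, gamma=10**log_gamma). The GA operators and all constants are illustrative choices, not the authors'.

```python
import random

def cv_accuracy(log_C, log_gamma):
    # Stand-in for cross-validated SVM accuracy; this toy surrogate
    # peaks at log_C = 1, log_gamma = -2 (an assumed "best" setting).
    return 1.0 / (1.0 + (log_C - 1.0) ** 2 + (log_gamma + 2.0) ** 2)

def ga_tune(fitness, bounds, pop_size=30, generations=40,
            mut_rate=0.2, seed=7):
    rng = random.Random(seed)
    pop = [tuple(rng.uniform(lo, hi) for lo, hi in bounds)
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda ind: fitness(*ind), reverse=True)
        parents = scored[:pop_size // 2]          # Truncation selection.
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            child = tuple((a + b) / 2 for a, b in zip(p1, p2))  # Blend crossover.
            if rng.random() < mut_rate:           # Gaussian mutation, clamped.
                child = tuple(min(max(g + rng.gauss(0, 0.3), lo), hi)
                              for g, (lo, hi) in zip(child, bounds))
            children.append(child)
        pop = parents + children                  # Elitism: parents survive.
    return max(pop, key=lambda ind: fitness(*ind))

best = ga_tune(cv_accuracy, bounds=[(-2.0, 4.0), (-6.0, 2.0)])
```

    Searching in log space is the usual choice for C and gamma because sensible values span several orders of magnitude.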

  5. Multi-parameter geometrical scaledown study for energy optimization of MTJ and related spintronics nanodevices

    Science.gov (United States)

    Farhat, I. A. H.; Alpha, C.; Gale, E.; Atia, D. Y.; Stein, A.; Isakovic, A. F.

    The scaledown of magnetic tunnel junctions (MTJ) and related nanoscale spintronics devices poses unique challenges for energy optimization of their performance. We demonstrate the dependence of the switching current on the scaledown variable, while considering the influence of geometric parameters of the MTJ, such as the free layer thickness, tfree, the lateral size of the MTJ, w, and the anisotropy parameter of the MTJ. At the same time, we point out which values of the saturation magnetization, Ms, and anisotropy field, Hk, can lower the switching current and decrease the overall energy needed to operate an MTJ. It is demonstrated that scaledown via decreasing the lateral size of the MTJ, while allowing some other parameters to be unconstrained, can improve energy performance by a measurable factor, shown to be a function of both the geometric and physical parameters above. Given the complex interdependencies among both families of parameters, we developed a particle swarm optimization (PSO) algorithm that can simultaneously lower the energy of operation and the switching current density. Results obtained in the scaledown study and via PSO optimization are compared to experimental results. Support by Mubadala-SRC 2012-VJ-2335 is acknowledged, as are staff at Cornell-CNF and BNL-CFN.
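
    The PSO mechanism mentioned above, velocities updated from inertia, personal-best and global-best terms, can be sketched generically. The energy_cost objective below is a placeholder quadratic over (tfree, w), since the abstract does not give the actual MTJ energy model; all coefficients and PSO constants are assumptions for illustration.

```python
import random

def energy_cost(params):
    # Placeholder objective standing in for the MTJ energy/switching-current
    # model (not given in the abstract); minimum at tfree = 1.5, w = 40.
    t_free, w = params
    return (t_free - 1.5) ** 2 + (w - 40.0) ** 2

def pso(f, bounds, n_particles=25, iters=100, w_inertia=0.7,
        c1=1.5, c2=1.5, seed=3):
    rng = random.Random(seed)
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_f = [f(xi) for xi in x]
    g = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive + social terms.
                v[i][d] = (w_inertia * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                lo, hi = bounds[d]
                x[i][d] = min(max(x[i][d] + v[i][d], lo), hi)
            fi = f(x[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = x[i][:], fi
    return gbest, gbest_f

best, cost = pso(energy_cost, [(0.5, 3.0), (10.0, 80.0)])
```

    A multi-objective variant, as in the paper, would replace the single gbest with a Pareto archive, but the velocity update stays the same.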

  6. Optimization of injection molding process parameters for a plastic cell phone housing component

    Science.gov (United States)

    Rajalingam, Sokkalingam; Vasant, Pandian; Khe, Cheng Seong; Merican, Zulkifli; Oo, Zeya

    2016-11-01

    Injection molding is one of the most widely used processes for producing thin-walled plastic items. However, setting optimal process parameters is difficult, as poor settings may produce faulty molded items with defects such as shrinkage. This study aims to determine optimal injection molding process parameters that reduce shrinkage defects in a plastic cell phone cover. The currently used machine settings produced shrinkage, with mis-specified length and width dimensions below the limit. Thus, to identify optimal process parameters that keep length and width closer to their target magnitudes with minimal variation, further experiments were needed. The mold temperature, injection pressure and screw rotation speed were used as the process parameters in this research. Response Surface Methodology (RSM) was applied to find the optimal molding process parameters. The major contributing factors influencing the responses were identified using analysis of variance (ANOVA). Verification runs showed that the shrinkage defect can be minimized with the optimal settings found by RSM.

  7. Sensitivity of Calibrated Parameters and Water Resource Estimates on Different Objective Functions and Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Delaram Houshmand Kouchi

    2017-05-01

    Full Text Available The successful application of hydrological models relies on careful calibration and uncertainty analysis. However, there are many different calibration/uncertainty analysis algorithms, and each can be run with different objective functions. In this paper, we highlight the fact that each combination of optimization algorithm and objective function may lead to a different set of optimum parameters while achieving the same performance; this makes the interpretation of the dominant hydrological processes in a watershed highly uncertain. We used three different optimization algorithms (SUFI-2, GLUE, and PSO) and eight different objective functions (R2, bR2, NSE, MNS, RSR, SSQR, KGE, and PBIAS) in a SWAT model to calibrate the monthly discharges in two watersheds in Iran. The results show that all three algorithms, using the same objective function, produced acceptable calibration results, however with significantly different parameter ranges. Similarly, an algorithm using different objective functions also produced acceptable calibration results, but with different parameter ranges. The different calibrated parameter ranges consequently resulted in significantly different water resource estimates. Hence, the parameters and the outputs that they produce in a calibrated model are “conditioned” on the choices of the optimization algorithm and objective function. This adds another level of non-negligible uncertainty to watershed models, calling for more attention and investigation in this area.
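
    The point that different objective functions reward different aspects of fit is easy to see from their definitions. Two of the listed metrics, NSE and KGE, are sketched below from their standard formulas; the observed/simulated series are made-up numbers for illustration, not data from the study.

```python
from math import sqrt
from statistics import mean, pstdev

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    # performs no better than the mean of the observations.
    m = mean(obs)
    return 1.0 - (sum((o - s) ** 2 for o, s in zip(obs, sim))
                  / sum((o - m) ** 2 for o in obs))

def kge(obs, sim):
    # Kling-Gupta efficiency: combines correlation r, variability ratio
    # alpha, and bias ratio beta; 1 is a perfect fit.
    mo, ms = mean(obs), mean(sim)
    so, ss = pstdev(obs), pstdev(sim)
    r = (sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
         / (len(obs) * so * ss))
    alpha, beta = ss / so, ms / mo
    return 1.0 - sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = [3.1, 4.0, 5.2, 4.8, 3.9, 4.4]  # Hypothetical monthly discharges.
sim = [3.0, 4.2, 5.0, 4.9, 4.1, 4.3]
```

    NSE penalizes squared errors only, so it is dominated by peak flows, while KGE separately penalizes bias and variability mismatch; calibrating against one or the other can therefore pull parameters in different directions, which is exactly the effect the paper reports.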

  8. Model Optimization Identification Method Based on Closed-loop Operation Data and Process Characteristics Parameters

    Directory of Open Access Journals (Sweden)

    Zhiqiang GENG

    2014-01-01

    Full Text Available Output noise is strongly related to the input in a closed-loop control system, which makes closed-loop model identification difficult, and sometimes impossible, in practice. The forward channel model is chosen to isolate the disturbance in the output noise from the input, and is identified by optimizing the dynamic characteristics of the process based on closed-loop operation data. The characteristic parameters of the process, such as dead time and time constant, are calculated and estimated from the PI/PID controller parameters and the closed-loop process input/output data. These characteristic parameters are adopted to define the search space of the optimization identification algorithm. A PSO-SQP optimization algorithm is applied, integrating the global search ability of PSO with the local search ability of SQP, to identify the model parameters of the forward channel. The validity of the proposed method has been verified by simulation, and its practicability is checked with PI/PID controller parameter tuning based on the identified forward channel model.

  9. Parameter estimation of photovoltaic cells using an improved chaotic whale optimization algorithm

    International Nuclear Information System (INIS)

    Oliva, Diego; Abd El Aziz, Mohamed; Ella Hassanien, Aboul

    2017-01-01

    Highlights: •We modify the whale algorithm using chaotic maps. •We apply a chaotic algorithm to estimate parameters of photovoltaic cells. •We perform a study of chaos in the whale algorithm. •Several comparisons and metrics support the experimental results. •We test the method with data from real solar cells. -- Abstract: The use of solar energy has increased since it is a clean source of energy. Consequently, the design of photovoltaic cells has attracted the attention of researchers around the world. There are two main problems in this field: having a useful model to characterize the solar cells, and the absence of data about photovoltaic cells. This situation even affects the performance of photovoltaic modules (panels). The current vs. voltage characteristics are used to describe the behavior of solar cells, and given such values, the design problem involves the solution of complex non-linear and multi-modal objective functions. Different algorithms have been proposed to identify the parameters of photovoltaic cells and panels, but most of them commonly fail to find the optimal solutions. This paper proposes the Chaotic Whale Optimization Algorithm (CWOA) for the parameter estimation of solar cells. The main advantage of the proposed approach is the use of chaotic maps to compute and automatically adapt the internal parameters of the optimization algorithm. This is beneficial in complex problems because, along the iterative process, the proposed algorithm improves its capability to search for the best solution. The modified method is able to optimize complex and multimodal objective functions, for example the function for the estimation of solar cell parameters. To illustrate the capabilities of the proposed algorithm in solar cell design, it is compared with other optimization methods over different datasets. Moreover, the experimental results support the improved performance of the proposed approach
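
    The core trick, replacing a metaheuristic's uniform random draws with a chaotic map, can be illustrated with a logistic map driving a plain random search. This is not the CWOA itself: the two-parameter "solar cell" objective is a toy stand-in for the single-diode model fit, and the seeds, bounds and target values are assumptions for illustration.

```python
def logistic_map(x0, r=4.0):
    # Chaotic logistic map on (0, 1); at r = 4 it is fully chaotic and is a
    # common substitute for uniform random draws in chaotic metaheuristics.
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

def model_error(i_ph, r_s):
    # Toy objective: in a real solar-cell fit this would be the error
    # between measured and single-diode-model I-V curves.
    return (i_ph - 0.76) ** 2 + (r_s - 0.036) ** 2

def chaotic_search(f, bounds, iters=5000):
    # One chaotic stream per parameter; seeds are arbitrary values in (0, 1).
    streams = [logistic_map(0.3 + 0.17 * d) for d in range(len(bounds))]
    best, best_f = None, float("inf")
    for _ in range(iters):
        # Map the chaotic values into the parameter box and keep the best.
        cand = [lo + next(s) * (hi - lo)
                for s, (lo, hi) in zip(streams, bounds)]
        fc = f(*cand)
        if fc < best_f:
            best, best_f = cand, fc
    return best, best_f

best, err = chaotic_search(model_error, [(0.0, 2.0), (0.0, 0.1)])
```

    In the paper the chaotic sequence feeds the whale algorithm's internal coefficients rather than a blind search, but the substitution of chaotic for uniform draws is the same idea.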

  10. Selecting Optimal Parameters of Random Linear Network Coding for Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Heide, J; Zhang, Qi; Fitzek, F H P

    2013-01-01

    This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant reduction in the number of transmitted packets can be achieved. However, NC introduces additional computations and potentially a non-negligible transmission overhead, both of which depend on the chosen coding parameters. Therefore it is necessary to consider the trade-off that these coding parameters present in order to obtain the lowest energy consumption per transmitted bit. This problem is analyzed and suitable coding parameters are determined for the popular Tmote Sky platform. Compared to the use of traditional RLNC, these parameters enable a reduction in the energy spent per bit which grows...

  11. Development of a parameter optimization technique for the design of automatic control systems

    Science.gov (United States)

    Whitaker, P. H.

    1977-01-01

    Parameter optimization techniques for the design of linear automatic control systems that are applicable to both continuous and digital systems are described. The model performance index is used as the optimization criterion because of the physical insight that can be attached to it. The design emphasis is to start with the simplest system configuration that experience indicates would be practical. Design parameters are specified, and a digital computer program is used to select that set of parameter values which minimizes the performance index. The resulting design is examined, and complexity, through the use of more complex information processing or more feedback paths, is added only if performance fails to meet operational specifications. System performance specifications are assumed to be such that the desired step function time response of the system can be inferred.

  12. Efficiency Optimization Control of IPM Synchronous Motor Drives with Online Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Sadegh Vaez-Zadeh

    2011-04-01

    Full Text Available This paper describes an efficiency optimization control method for high performance interior permanent magnet synchronous motor drives with online estimation of motor parameters. The control system is based on an input-output feedback linearization method which provides high performance control and simultaneously ensures the minimization of motor losses. The controllable electrical loss can be minimized by optimal control of the armature current vector. It is shown that parameter variations, except near nominal conditions, have an undesirable effect on controller performance. Therefore, a parameter estimation method based on the second method of Lyapunov is presented which guarantees the stability and convergence of the estimation. Extensive simulation results show the feasibility of the proposed controller and observer and their desirable performance.

  13. Optimization of process parameter for graft copolymerization of glycidyl methacrylate onto delignified banana fibers

    International Nuclear Information System (INIS)

    Selambakkannu, S.; Nor Azillah Fatimah Othman; Siti Fatahiyah Mohamad

    2016-01-01

    This paper focuses on pre-treated banana fibers as a trunk polymer and the optimization of radiation-induced graft copolymerization process parameters. Pre-treated banana fiber was grafted with glycidyl methacrylate (GMA) via electron beam irradiation. The grafting parameters were optimized in terms of grafting yield over a range of radiation doses, monomer concentrations and reaction times. Grafting yield was calculated gravimetrically for all process parameters. At 40 kGy, the grafting yield increased from 14% at 1 h of reaction time to 22.5% at 24 h. The grafting yield was about 58% at 1% GMA and increased to 187% at 3% GMA. The grafting of GMA onto the pre-treated banana fibers was successfully carried out and confirmed by characterization using FTIR, SEM and TGA. (author)

  14. Optimization of cryogenic cooled EDM process parameters using grey relational analysis

    International Nuclear Information System (INIS)

    Kumar, S Vinoth; Kumar, M Pradeep

    2014-01-01

    This paper presents an experimental investigation of cryogenic cooling of a copper electrode with liquid nitrogen (LN2) in the electrical discharge machining (EDM) process. The optimization of the EDM process parameters, such as the electrode environment (conventional versus cryogenically cooled electrode), discharge current, pulse-on time, and gap voltage, with respect to material removal rate, electrode wear, and surface roughness in machining an AlSiCp metal matrix composite was investigated using multiple performance characteristics in grey relational analysis. An L18 orthogonal array was utilized to examine the process parameters, and the optimal levels of the process parameters were identified through grey relational analysis. Experimental data were analyzed through analysis of variance. Scanning electron microscopy analysis was conducted to study the characteristics of the machined surface.

  15. Optimal allocation of sensors for state estimation of distributed parameter systems

    International Nuclear Information System (INIS)

    Sunahara, Yoshifumi; Ohsumi, Akira; Mogami, Yoshio.

    1978-01-01

    The purpose of this paper is to present a method for finding the optimal allocation of sensors for state estimation of linear distributed parameter systems. This method is based on the criterion that the error covariance associated with the state estimate becomes minimal with respect to the allocation of the sensors. A theorem is established, giving the sufficient condition for optimizing the allocation of sensors to make minimal the error covariance approximated by a modal expansion. The remainder of this paper is devoted to illustrate important phases of the general theory of the optimal measurement allocation problem. To do this, several examples are demonstrated, including extensive discussions on the mutual relation between the optimal allocation and the dynamics of sensors. (author)

  16. OPTIMIZATION OF TRANSESTERIFICATION PARAMETERS FOR OPTIMAL BIODIESEL YIELD FROM CRUDE JATROPHA OIL USING A NEWLY SYNTHESIZED SEASHELL CATALYST

    Directory of Open Access Journals (Sweden)

    A. N. R. REDDY

    2017-10-01

    Full Text Available Heterogeneous catalysts are promising for achieving optimal biodiesel yield from the transesterification of vegetable oils. In this work a calcium oxide (CaO) heterogeneous catalyst was synthesized from Polymedosa erosa seashell. Calcination was carried out at 900 °C for 2 h and the product was characterized using Fourier transform infrared spectroscopy. The catalytic efficiency of the CaO was tested in the transesterification of crude Jatropha oil (CJO). A response surface methodology (RSM) based on a five-level, two-factor central composite design (CCD) was employed to optimize two critical transesterification parameters: the catalyst concentration relative to pretreated CJO (0.01-0.03 w/w%) and the reaction time (90-150 min). A JB yield of 96.48% was estimated at 0.023 w/w% catalyst and 125.76 min reaction time using the response optimizer. The validity of the predicted model was verified through experiments: the validation runs confirmed a yield of 96.4% ± 0.01% JB as optimal at a 0.023 w/w% catalyst-to-pretreated-oil ratio and a 126 min reaction time.
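
    The RSM/CCD optimization described here amounts to fitting a full quadratic model over the two coded factors and locating its stationary point. A minimal sketch with hypothetical coded data (the yields below are invented for illustration, not the paper's measurements):

```python
import numpy as np

# Illustrative CCD layout: 4 factorial, 4 axial (+/-1.414), 3 center points,
# in coded levels of catalyst concentration (x1) and reaction time (x2).
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],
              [0, 0], [0, 0], [0, 0]])
y = np.array([88.0, 92.5, 90.1, 93.0, 87.5, 91.8, 89.0, 92.0,
              96.2, 96.4, 96.1])          # hypothetical biodiesel yields (%)

x1, x2 = X[:, 0], X[:, 1]
# Full quadratic design matrix: 1, x1, x2, x1^2, x2^2, x1*x2
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Locate the maximum of the fitted surface on a fine grid inside the
# design region (the "response optimizer" step).
g = np.linspace(-1.414, 1.414, 200)
G1, G2 = np.meshgrid(g, g)
pred = (coef[0] + coef[1] * G1 + coef[2] * G2
        + coef[3] * G1**2 + coef[4] * G2**2 + coef[5] * G1 * G2)
i = np.unravel_index(np.argmax(pred), pred.shape)
print("optimal coded levels:", round(G1[i], 2), round(G2[i], 2),
      "predicted yield:", round(pred[i], 2))
```

    Negative quadratic coefficients indicate the fitted surface has an interior maximum, which is what makes the predicted optimum (and its later experimental validation) meaningful.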

  17. Coastal aquifer management under parameter uncertainty: Ensemble surrogate modeling based simulation-optimization

    Science.gov (United States)

    Janardhanan, S.; Datta, B.

    2011-12-01

    Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence based models are most often used for this purpose, trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain, and under these circumstances the applicability of such approximation surrogates becomes limited. In our study we develop a surrogate model based coupled simulation-optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple-realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem with two conflicting objectives. Hydraulic conductivity and aquifer recharge are treated as uncertain parameters. The three-dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters, generating input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives, viz., maximizing total pumping from beneficial wells and minimizing the total pumping from barrier wells for hydraulic control of

  18. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems.

    Science.gov (United States)

    Cho, Ming-Yuan; Hoang, Thi Thom

    2017-01-01

    Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.
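
    The combined feature selection and parameter search described in this record can be illustrated with a binary PSO. The sketch below substitutes a leave-one-out 1-NN error on synthetic data for the paper's SVM cross-validation fitness, so the dataset, the classifier, and the PSO constants are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the TDR fault dataset: 200 samples, 12 features,
# of which only the first 3 carry class information; the rest are noise.
n = 200
labels = rng.integers(0, 2, n)
data = rng.normal(size=(n, 12))
data[:, :3] += 2.0 * labels[:, None]

def cv_error(mask):
    """Leave-one-out 1-NN error on the selected features -- a cheap stand-in
    for the SVM cross-validation accuracy used as the PSO fitness."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 1.0
    X = data[:, cols]
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # exclude self-matches
    pred = labels[np.argmin(d, axis=1)]
    return float(np.mean(pred != labels))

# Binary PSO: real-valued velocities, bits resampled through a sigmoid.
n_particles, dim, iters = 20, 12, 30
w, c1, c2 = 0.7, 1.5, 1.5
bits = (rng.random((n_particles, dim)) < 0.5).astype(int)
vel = np.zeros((n_particles, dim))
pbest = bits.copy()
pbest_err = np.array([cv_error(b) for b in bits])
gbest = pbest[np.argmin(pbest_err)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - bits) + c2 * r2 * (gbest - bits)
    bits = (rng.random((n_particles, dim)) < 1 / (1 + np.exp(-vel))).astype(int)
    err = np.array([cv_error(b) for b in bits])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = bits[improved], err[improved]
    gbest = pbest[np.argmin(pbest_err)].copy()

print("selected features:", np.flatnonzero(gbest), "CV error:", pbest_err.min())
```

    In the paper's setting the same particle would additionally carry continuous dimensions for the SVM hyperparameters (C, gamma), evaluated jointly with the feature mask.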

  19. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    Ming-Yuan Cho

    2017-01-01

    Full Text Available Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.

  20. Multi-objective optimization for an automated and simultaneous phase and baseline correction of NMR spectral data

    Science.gov (United States)

    Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus

    2018-04-01

    Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
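
    The modified Whittaker smoother mentioned in this record can be sketched as a penalized least-squares solve. The asymmetric reweighting loop, the penalty weight, and the synthetic spectrum below are illustrative assumptions rather than the authors' exact formulation:

```python
import numpy as np

def whittaker(y, lam, w):
    """Whittaker smoother: minimize sum(w*(y-z)^2) + lam*sum((D2 z)^2),
    i.e. solve (W + lam*D'D) z = W*y with a second-difference penalty D."""
    n = y.size
    D = np.diff(np.eye(n), 2, axis=0)    # (n-2) x n second-difference operator
    return np.linalg.solve(np.diag(w) + lam * D.T @ D, w * y)

# Synthetic "spectrum": two narrow Lorentzian peaks on a smooth baseline.
x = np.linspace(0, 10, 500)
baseline = 0.5 + 0.05 * x + 0.2 * np.sin(0.3 * x)
peaks = (1.0 / (1 + ((x - 3) / 0.05) ** 2)
         + 0.6 / (1 + ((x - 7) / 0.05) ** 2))
y = baseline + peaks

# Asymmetric reweighting: points lying above the current fit (the peaks)
# are strongly down-weighted, so the smoother tracks only the baseline.
w = np.ones_like(y)
for _ in range(5):
    z = whittaker(y, lam=1e6, w=w)
    w = np.where(y > z, 0.01, 1.0)

print("max |estimated - true| baseline error:", float(np.abs(z - baseline).max()))
```

    In the paper this baseline term is one component of a joint Pareto objective that also includes the phase angles of the Fourier-transformed FID, rather than a separate preprocessing pass as sketched here.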

  1. Comparison of direct machine parameter optimization versus fluence optimization with sequential sequencing in IMRT of hypopharyngeal carcinoma

    International Nuclear Information System (INIS)

    Dobler, Barbara; Pohl, Fabian; Bogner, Ludwig; Koelbl, Oliver

    2007-01-01

    To evaluate the effects of direct machine parameter optimization in the treatment planning of intensity-modulated radiation therapy (IMRT) for hypopharyngeal cancer as compared to subsequent leaf sequencing in Oncentra Masterplan v1.5. For 10 hypopharyngeal cancer patients, IMRT plans were generated in Oncentra Masterplan v1.5 (Nucletron BV, Veenendaal, the Netherlands) for a Siemens Primus linear accelerator. For optimization, the dose volume objectives (DVO) for the planning target volume (PTV) were set to 53 Gy minimum dose and 59 Gy maximum dose, in order to reach a dose of 56 Gy to the average of the PTV. For the parotids a median dose of 22 Gy was allowed, and for the spinal cord a maximum dose of 35 Gy. The maximum DVO for the external contour of the patient was set to 59 Gy. The treatment plans were optimized with the direct machine parameter optimization ('Direct Step & Shoot', DSS, RaySearch Laboratories, Sweden) newly implemented in Masterplan v1.5 and the fluence modulation technique ('Intensity Modulation', IM), which was already available in previous versions of Masterplan. The two techniques were compared with regard to compliance with the DVO, plan quality, and the number of monitor units (MU) required per fraction dose. The plans optimized with the DSS technique met the DVO for the PTV significantly better than the plans optimized with IM (p = 0.007 for the min DVO and p < 0.0005 for the max DVO). No significant difference could be observed for compliance with the DVO for the organs at risk (OAR) (p > 0.05). Plan quality, target coverage and dose homogeneity inside the PTV were superior for the plans optimized with DSS, for similar dose to the spinal cord and lower dose to the normal tissue. The mean dose to the parotids was lower for the plans optimized with IM. Treatment plan efficiency was higher for the DSS plans with (901 ± 160) MU compared to (1151 ± 157) MU for IM (p-value < 0.05). Renormalization of the IM plans to the mean of the

  2. Application-Oriented Optimal Shift Schedule Extraction for a Dual-Motor Electric Bus with Automated Manual Transmission

    Directory of Open Access Journals (Sweden)

    Mingjie Zhao

    2018-02-01

    Full Text Available The conventional battery electric buses (BEBs) have limited potential to optimize energy consumption and reach a better dynamic performance. A practical dual-motor propulsion system equipped with a 4-speed Automated Manual Transmission (AMT) is proposed, which can eliminate the traction interruption of a conventional AMT. A discrete model of the dual-motor-AMT electric bus (DMAEB) is built and used to optimize the gear shift schedule. A dynamic programming (DP) algorithm is applied to find the optimal results, where the efficiency and shift time of each gear are considered to handle the application problem of global optimization. A rational penalty factor and a proper shift time delay based on bench test results are set to reduce the shift frequency by 82.5% in the Chinese-World Transient Vehicle Cycle (C-WTVC). Two applicable shift rule extraction methods, i.e., a classification method based on optimal operating points and a clustering method based on optimal shifting points, are explored and compared. Eventually, the hardware-in-the-loop (HIL) simulation results demonstrate that the proposed structure and extracted shift schedule achieve a significant improvement, reducing energy loss by 20.13% compared to traditional empirical strategies.

  3. An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models

    Directory of Open Access Journals (Sweden)

    Daniel Santana-Cedrés

    2016-12-01

    Full Text Available We present a method for the automatic estimation of two-parameter radial distortion models, considering polynomial as well as division models. The method first detects the longest distorted lines within the image by applying the Hough transform enriched with a radial distortion parameter. From these lines, the first distortion parameter is estimated, then we initialize the second distortion parameter to zero and the two-parameter model is embedded into an iterative nonlinear optimization process to improve the estimation. This optimization aims at reducing the distance from the edge points to the lines, adjusting two distortion parameters as well as the coordinates of the center of distortion. Furthermore, this allows detecting more points belonging to the distorted lines, so that the Hough transform is iteratively repeated to extract a better set of lines until no improvement is achieved. We present some experiments on real images with significant distortion to show the ability of the proposed approach to automatically correct this type of distortion as well as a comparison between the polynomial and division models.
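
    The two-parameter division model and the line-straightening objective described in this record can be sketched as follows. A coarse-to-fine grid search stands in for the paper's iterative nonlinear optimization, and the distortion coefficients and line geometry are invented for illustration:

```python
import numpy as np

def undistort(pts, k1, k2):
    """Two-parameter division model: p_u = p_d / (1 + k1*r^2 + k2*r^4),
    with r measured from the distortion center (taken as the origin here)."""
    r2 = np.sum(pts**2, axis=1)
    return pts / (1 + k1 * r2 + k2 * r2**2)[:, None]

def distort(pts, k1, k2):
    # Invert the division model per point by fixed-point iteration.
    d = pts.copy()
    for _ in range(50):
        r2 = np.sum(d**2, axis=1)
        d = pts * (1 + k1 * r2 + k2 * r2**2)[:, None]
    return d

def line_residual(pts):
    """RMS distance of points to their best-fit line (via SVD/PCA)."""
    c = pts - pts.mean(axis=0)
    _, s, _ = np.linalg.svd(c, full_matrices=False)
    return s[-1] / np.sqrt(len(pts))

# Synthetic straight line, distorted with known coefficients.
true_k1, true_k2 = -0.2, 0.05
line = np.column_stack([np.linspace(-0.8, 0.8, 50), np.full(50, 0.4)])
observed = distort(line, true_k1, true_k2)

# Coarse-to-fine grid search for the (k1, k2) that straightens the line.
best = (0.0, 0.0)
for _ in range(3):
    k1g = np.linspace(best[0] - 0.2, best[0] + 0.2, 21)
    k2g = np.linspace(best[1] - 0.1, best[1] + 0.1, 21)
    errs = [(line_residual(undistort(observed, a, b)), a, b)
            for a in k1g for b in k2g]
    _, a, b = min(errs)
    best = (a, b)

print("estimated k1, k2:", best)
```

    The paper's method additionally re-detects edge points and re-runs the distortion-aware Hough transform between optimization passes, and also adjusts the distortion center, which this single-line sketch omits.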

  4. Dynamic optimization of a biped model: Energetic walking gaits with different mechanical and gait parameters

    Directory of Open Access Journals (Sweden)

    Kang An

    2015-05-01

    Full Text Available Energy consumption is one of the key problems in bipedal robot walking. For the purpose of studying parameter effects on the design of energetically efficient walking bipeds with strong adaptability, we use a dynamic optimization method on our new walking model to first investigate the effects of the mechanical parameters, including mass and length distribution, on walking efficiency. Then, we study the energetic walking gait features for combinations of walking speed and step length. Our walking model is designed upon Srinivasan's model. Dynamic optimization is used for a free search with minimal constraints. The results show that the cost of transport of a given gait increases with the mass and length distribution parameters, except that the cost of transport decreases with a large length distribution parameter and a long step length. We also find a corresponding range of walking speed and step length in which variation in either of the two parameters has no obvious effect on the cost of transport. With fixed mechanical parameters, the cost of transport increases with walking speed. There is a speed-step length relationship for walking with minimal cost of transport. The hip torque output strategy is adjusted in two situations to meet the walking requirements.

  5. Tailored parameter optimization methods for ordinary differential equation models with steady-state constraints.

    Science.gov (United States)

    Fiedler, Anna; Raeth, Sebastian; Theis, Fabian J; Hausser, Angelika; Hasenauer, Jan

    2016-08-22

    Ordinary differential equation (ODE) models are widely used to describe (bio-)chemical and biological processes. To enhance the predictive power of these models, their unknown parameters are estimated from experimental data. These experimental data are mostly collected in perturbation experiments, in which the processes are pushed out of steady state by applying a stimulus. The information that the initial condition is a steady state of the unperturbed process is valuable, as it restricts the dynamics of the process and thereby the parameters. However, implementing steady-state constraints in the optimization often results in convergence problems. In this manuscript, we propose two new methods for solving optimization problems with steady-state constraints. The first method exploits ideas from optimization algorithms on manifolds and introduces a retraction operator, essentially reducing the dimension of the optimization problem. The second method is based on the continuous analogue of the optimization problem. This continuous analogue is an ODE whose equilibrium points are the optima of the constrained optimization problem. This equivalence enables the use of adaptive numerical methods for solving optimization problems with steady-state constraints. Both methods are tailored to the problem structure and exploit the local geometry of the steady-state manifold and its stability properties. A parameterization of the steady-state manifold is not required. The efficiency and reliability of the proposed methods are evaluated using one toy example and two applications. The first application example uses published data while the second uses a novel dataset for Raf/MEK/ERK signaling. The proposed methods demonstrated better convergence properties than state-of-the-art methods employed in systems and computational biology. Furthermore, the average computation time per converged start is significantly lower. In addition to the theoretical results, the

  6. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis

    Directory of Open Access Journals (Sweden)

    Tashkova Katerina

    2011-10-01

    Full Text Available Abstract Background We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. Results We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., the differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Conclusions Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of
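
    A minimal differential evolution (DE/rand/1/bin) sketch of this kind of ODE parameter-estimation task: a toy logistic ODE stands in for the much larger endocytosis model, and the data, parameter bounds, and DE constants are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(a, b, x0=0.05, dt=0.05, steps=200):
    """Forward-Euler integration of a toy logistic ODE dx/dt = a*x*(1 - x/b),
    standing in for the (much larger) endocytosis ODE model."""
    x = np.empty(steps)
    x[0] = x0
    for i in range(1, steps):
        x[i] = x[i - 1] + dt * a * x[i - 1] * (1 - x[i - 1] / b)
    return x

true_a, true_b = 1.5, 0.8
data = simulate(true_a, true_b) + rng.normal(0.0, 0.01, 200)  # noisy observations

def cost(theta):
    """Sum-of-squares misfit between simulated and observed trajectories."""
    a, b = theta
    if a <= 0 or b <= 0:
        return np.inf
    return float(np.sum((simulate(a, b) - data) ** 2))

# DE/rand/1/bin over theta = (a, b).
pop_size, F, CR, gens = 20, 0.7, 0.9, 100
pop = rng.uniform([0.1, 0.1], [5.0, 5.0], size=(pop_size, 2))
fit = np.array([cost(p) for p in pop])
for _ in range(gens):
    for i in range(pop_size):
        r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        cross = rng.random(2) < CR
        if not cross.any():
            cross[rng.integers(2)] = True     # force at least one crossed gene
        trial = np.where(cross, mutant, pop[i])
        c = cost(trial)
        if c < fit[i]:
            pop[i], fit[i] = trial, c

best = pop[np.argmin(fit)]
print("estimated (a, b):", np.round(best, 3), "vs true", (true_a, true_b))
```

    The same mutate-cross-select loop scales to the many-parameter endocytosis model; only the simulator and the dimension of theta change.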

  7. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis.

    Science.gov (United States)

    Tashkova, Katerina; Korošec, Peter; Silc, Jurij; Todorovski, Ljupčo; Džeroski, Sašo

    2011-10-11

    We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. 
These results hold for both real and

  8. Parameters-tuning of PID controller for automatic voltage regulators using the African buffalo optimization

    Science.gov (United States)

    Mohmad Kahar, Mohd Nizam; Noraziah, A.

    2017-01-01

    In this paper, an attempt is made to apply the African Buffalo Optimization (ABO) to tune the parameters of a PID controller for an effective Automatic Voltage Regulator (AVR). Existing metaheuristic tuning methods have proven quite successful, but there remain observable areas needing improvement, especially the system's overshoot and steady-state error. Using the ABO algorithm, where each buffalo location in the herd is a candidate solution for the Proportional-Integral-Derivative parameters, was very helpful in addressing these two areas of concern. The encouraging results obtained from the simulation of the PID controller parameter tuning using the ABO, when compared with the performance of Genetic Algorithm PID (GA-PID), Particle Swarm Optimization PID (PSO-PID), Ant Colony Optimization PID (ACO-PID), PID, Bacteria-Foraging Optimization PID (BFO-PID), etc., make ABO-PID a good addition for solving PID controller tuning problems using metaheuristics. PMID:28441390
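
    The tuning loop described here, i.e., simulate the closed-loop step response for candidate gains and minimize a time-weighted error, can be sketched as follows. A simple evolution strategy stands in for ABO, and the second-order plant is a toy stand-in for the AVR model, so every constant below is an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

def step_response(kp, ki, kd, dt=0.002, t_end=2.0):
    """Closed-loop unit-step response of a toy second-order plant
    G(s) = 1/(s^2 + 3s + 2) under PID control (forward-Euler integration).
    This plant is a stand-in for the AVR loop, not the paper's model."""
    n = int(t_end / dt)
    y = v = integ = 0.0
    e_prev = 1.0                      # e(0) = 1 - y(0); avoids a derivative kick
    out = np.empty(n)
    for i in range(n):
        e = 1.0 - y
        integ += e * dt
        deriv = (e - e_prev) / dt
        e_prev = e
        u = kp * e + ki * integ + kd * deriv
        acc = u - 3.0 * v - 2.0 * y   # plant: y'' + 3y' + 2y = u
        v += acc * dt
        y += v * dt
        out[i] = y
    return out

def itae(gains):
    """Integral of time-weighted absolute error, the tuning objective."""
    dt = 0.002
    y = step_response(*gains, dt=dt)
    t = np.arange(y.size) * dt
    val = float(np.sum(t * np.abs(1.0 - y)) * dt)
    return val if np.isfinite(val) else 1e9   # unstable candidates -> huge cost

# A simple (1+lambda) evolution strategy as a generic stand-in for ABO.
best = np.array([1.0, 1.0, 0.1])              # initial (Kp, Ki, Kd)
best_cost = itae(best)
sigma = 0.5
for _ in range(60):
    cand = np.abs(best + rng.normal(0.0, sigma, (15, 3)))  # keep gains >= 0
    costs = np.array([itae(c) for c in cand])
    if costs.min() < best_cost:
        best, best_cost = cand[np.argmin(costs)].copy(), float(costs.min())
    sigma *= 0.95

print("tuned (Kp, Ki, Kd):", np.round(best, 2), "ITAE:", round(best_cost, 4))
```

    ABO's herd update rules differ from this mutation scheme, but the structure is identical: each candidate gain vector is scored by simulating the loop, and the population contracts toward low overshoot and low steady-state error.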

  9. Fractional Order Controller Designing with Firefly Algorithm and Parameter Optimization for Hydroturbine Governing System

    Directory of Open Access Journals (Sweden)

    Li Junyi

    2015-01-01

    Full Text Available A fractional order PID (FOPID) controller, which is insensitive to variations in system parameters and thus well suited for control system design, is proposed for the hydroturbine governing system in this paper. The simultaneous optimization of several controller parameters, namely Ki, Kd, Kp, λ, and μ, is performed, for the first time, by a recently developed metaheuristic nature-inspired algorithm, the firefly algorithm (FA), in which the selection and movement behavior, the attractiveness between fireflies, and the updating of brightness and decision range are studied in detail to simulate the optimization process. The investigation clearly reveals the advantages of the FOPID controller over integer-order controllers in terms of reduced oscillations and settling time. The present work also explores the superiority of the FA-based optimization technique in finding the optimal parameters of the controller. Further, the convergence characteristics of the FA are compared with the optimum integer order PID (IOPID) controller to justify its efficiency. Moreover, the analysis confirms the robustness of the FOPID controller under isolated load operation conditions.

  10. [Simulation of vegetation indices optimizing under retrieval of vegetation biochemical parameters based on PROSPECT + SAIL model].

    Science.gov (United States)

    Wu, Ling; Liu, Xiang-Nan; Zhou, Bo-Tian; Liu, Chuan-Hao; Li, Lu-Feng

    2012-12-01

    This study analyzed the sensitivities of three vegetation biochemical parameters [chlorophyll content (Cab), leaf water content (Cw), and leaf area index (LAI)] to changes in canopy reflectance, taking into account the wavelength regions of canopy reflectance affected by each parameter, and selected three vegetation indices as the optimization comparison targets of the cost function. The Cab, Cw, and LAI were then estimated based on the particle swarm optimization algorithm and the PROSPECT + SAIL model. The results showed that the retrieval efficiency with vegetation indices as the optimization comparison targets of the cost function was better than that with the full spectral reflectance. The correlation coefficients (R2) between the measured and estimated values of Cab, Cw, and LAI were 90.8%, 95.7%, and 99.7%, and the root mean square errors of Cab, Cw, and LAI were 4.73 microg x cm(-2), 0.001 g x cm(-2), and 0.08, respectively. It was suggested that adopting vegetation indices as the optimization comparison targets of the cost function could effectively improve the efficiency and precision of the retrieval of biochemical parameters based on the PROSPECT + SAIL model.

  11. Three-dimensional optimization and sensitivity analysis of dental implant thread parameters using finite element analysis.

    Science.gov (United States)

    Geramizadeh, Maryam; Katoozian, Hamidreza; Amid, Reza; Kadkhodazadeh, Mahdi

    2018-04-01

    This study aimed to optimize the thread depth and pitch of a recently designed dental implant to provide uniform stress distribution by means of a response surface optimization method available in finite element (FE) software. The sensitivity of simulation to different mechanical parameters was also evaluated. A three-dimensional model of a tapered dental implant with micro-threads in the upper area and V-shaped threads in the rest of the body was modeled and analyzed using finite element analysis (FEA). An axial load of 100 N was applied to the top of the implants. The model was optimized for thread depth and pitch to determine the optimal stress distribution. In this analysis, micro-threads had 0.25 to 0.3 mm depth and 0.27 to 0.33 mm pitch, and V-shaped threads had 0.405 to 0.495 mm depth and 0.66 to 0.8 mm pitch. The optimized depth and pitch were 0.307 and 0.286 mm for micro-threads and 0.405 and 0.808 mm for V-shaped threads, respectively. In this design, the most effective parameters on stress distribution were the depth and pitch of the micro-threads based on sensitivity analysis results. Based on the results of this study, the optimal implant design has micro-threads with 0.307 and 0.286 mm depth and pitch, respectively, in the upper area and V-shaped threads with 0.405 and 0.808 mm depth and pitch in the rest of the body. These results indicate that micro-thread parameters have a greater effect on stress and strain values.

  12. Automated detection of sleep apnea from electrocardiogram signals using nonlinear parameters

    International Nuclear Information System (INIS)

    Acharya, U Rajendra; Faust, Oliver; Chua, Eric Chern-Pin; Lim, Teik-Cheng; Lim, Liang Feng Benjamin

    2011-01-01

    Sleep apnoea is a very common sleep disorder which can cause symptoms such as daytime sleepiness, irritability and poor concentration. To monitor patients with this sleep disorder, we measured the electrical activity of the heart. The resulting electrocardiography (ECG) signals are both non-stationary and nonlinear. Therefore, we used nonlinear parameters such as approximate entropy, fractal dimension, correlation dimension, largest Lyapunov exponent and Hurst exponent to extract physiological information. This information was used to train an artificial neural network (ANN) classifier to categorize ECG signal segments into one of the following groups: apnoea, hypopnoea and normal breathing. ANN classification tests produced an average classification accuracy of 90%; specificity and sensitivity were 100% and 95%, respectively. We have also proposed unique recurrence plots for the normal, hypopnoea and apnoea classes. Detecting sleep apnoea with this level of accuracy can potentially reduce the need for polysomnography (PSG). This brings advantages to patients, because the proposed system is less cumbersome when compared to PSG.
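
    Approximate entropy, one of the nonlinear parameters listed in this record, can be computed as sketched below. The template length m = 2 and tolerance r = 0.2 standard deviations are common defaults, assumed here rather than taken from the paper:

```python
import numpy as np

def approximate_entropy(x, m=2, r_frac=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D signal: low for regular,
    predictable signals, high for irregular ones."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def phi(m):
        # All length-m templates, as rows of an (n x m) view.
        emb = np.lib.stride_tricks.sliding_window_view(x, m)
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = np.mean(d <= r, axis=1)      # fraction of matches per template
        return np.mean(np.log(c))        # self-matches keep c > 0

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(3)
t = np.linspace(0, 4 * np.pi, 300)
regular = np.sin(t)                      # highly regular signal
noisy = rng.normal(size=300)             # irregular signal

print("ApEn(sine): ", round(approximate_entropy(regular), 3))
print("ApEn(noise):", round(approximate_entropy(noisy), 3))
```

    Applied to ECG segments, a feature vector of such measures (ApEn alongside fractal dimension, Lyapunov exponent, etc.) is what feeds the ANN classifier; note the O(n^2) pairwise distance makes this sketch suitable only for short segments.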

  13. Determination of radial profile of ICF hot spot's state by multi-objective parameters optimization

    International Nuclear Information System (INIS)

    Dong Jianjun; Deng Bo; Cao Zhurong; Ding Yongkun; Jiang Shaoen

    2014-01-01

    A method using multi-objective parameter optimization is presented to determine the radial profiles of hot-spot temperature and density. A parameter space containing five variables is used to describe the radial temperature and density profiles of the hot spot: the temperatures at the center and at the interface between the fuel and the remaining ablator, the maximum model density of the remaining ablator, the mass ratio of the remaining ablator to the initial ablator, and the position of the interface between the fuel and the remaining ablator. Two objective functions are defined as the variances between the normalized intensity profiles from experimental X-ray images and the theoretical calculation. A third objective function is defined as the variance between the experimental average temperature of the hot spot and the average temperature calculated by the theoretical model. The optimized parameters are obtained by a multi-objective genetic algorithm searching the five-dimensional parameter space, from which the optimized radial temperature and density profiles are determined. The radial temperature and density profiles of the hot spot obtained from experimental data measured by a KB microscope coupled with X-ray film are presented. It is observed that the temperature profile is strongly correlated to the objective functions. (authors)

  14. Optimization of design and operating parameters in a pilot scale Jameson cell for slime coal cleaning

    Energy Technology Data Exchange (ETDEWEB)

    Hacifazlioglu, Hasan; Toroglu, Ihsan [Department of Mining Engineering, University of Karaelmas, 67100 (Turkey)

    2007-07-15

    The Jameson flotation cell has been commonly used to treat a variety of ores (lead, zinc, copper etc.), coal and industrial minerals at commercial scale since 1989. It is especially known to be highly efficient at fine and ultrafine coal recovery. However, although the Jameson cell has quite a simple structure, it may be largely inefficient if the design and operating parameters chosen are not appropriate. In this study, the design and operating parameters of a pilot scale Jameson cell were optimized to obtain a desired metallurgical performance in the slime coal flotation. The optimized design parameters are the nozzle type, the height of the nozzle above the pulp level, the downcomer diameter and the immersion depth of the downcomer. Among the operating parameters optimized are the collector dosage, the frother dosage, the percentage of solids and the froth height. In the optimum conditions, a clean coal with an ash content of 14.90% was obtained from the sample slime having 45.30% ash with a combustible recovery of 74.20%. In addition, a new type nozzle was developed for the Jameson cell, which led to an increase of about 9% in the combustible recovery value.

  15. Prediction and optimization of friction welding parameters for super duplex stainless steel (UNS S32760) joints

    International Nuclear Information System (INIS)

    Udayakumar, T.; Raja, K.; Afsal Husain, T.M.; Sathiya, P.

    2014-01-01

    Highlights: • Corrosion resistance and impact strength – predicted by response surface methodology. • Burn off length has highest significance on corrosion resistance. • Friction force is a strong determinant in changing impact strength. • Pareto front points generated by genetic algorithm aid to fix input control variable. • Pareto front will be a trade-off between corrosion resistance and impact strength. - Abstract: Friction welding finds widespread industrial use as a mass production process for joining materials. Friction welding process allows welding of several materials that are extremely difficult to fusion weld. Friction welding process parameters play a significant role in making good quality joints. To produce a good quality joint it is important to set up proper welding process parameters. This can be done by employing optimization techniques. This paper presents a multi objective optimization method for optimizing the process parameters during friction welding process. The proposed method combines the response surface methodology (RSM) with an intelligent optimization algorithm, i.e. genetic algorithm (GA). Corrosion resistance and impact strength of friction welded super duplex stainless steel (SDSS) (UNS S32760) joints were investigated considering three process parameters: friction force (F), upset force (U) and burn off length (B). Mathematical models were developed and the responses were adequately predicted. Direct and interaction effects of process parameters on responses were studied by plotting graphs. Burn off length has high significance on corrosion current followed by upset force and friction force. In the case of impact strength, friction force has high significance followed by upset force and burn off length. Multi objective optimization for maximizing the impact strength and minimizing the corrosion current (maximizing corrosion resistance) was carried out using GA with the RSM model. The optimization procedure resulted in

  16. An intelligent approach to optimize the EDM process parameters using utility concept and QPSO algorithm

    Directory of Open Access Journals (Sweden)

    Chinmaya P. Mohanty

    2017-04-01

    Full Text Available Although significant research has gone into the field of electrical discharge machining (EDM), analysis related to the machining efficiency of the process with different electrodes has not been adequately made. Copper and brass are frequently used as electrode materials, but graphite can be used as a potential electrode material due to its high melting point and good electrical conductivity. In view of this, the present work attempts to compare the machinability of copper, graphite and brass electrodes while machining Inconel 718 super alloy. Taguchi's L27 orthogonal array has been employed to collect data for the study and to analyze the effect of machining parameters on performance measures. The important performance measures selected for this study are material removal rate, tool wear rate, surface roughness and radial overcut. Machining parameters considered for analysis are open circuit voltage, discharge current, pulse-on-time, duty factor, flushing pressure and electrode material. From the experimental analysis, it is observed that electrode material, discharge current and pulse-on-time are the important parameters for all the performance measures. The utility concept has been implemented to transform the multiple performance characteristics into an equivalent single performance characteristic. Non-linear regression analysis is carried out to develop a model relating the process parameters and the overall utility index. Finally, the quantum-behaved particle swarm optimization (QPSO) and particle swarm optimization (PSO) algorithms have been used to compare the optimal level of cutting parameters. Results demonstrate the elegance of QPSO in terms of convergence and computational effort. The optimal parametric setting obtained through both approaches is validated by conducting confirmation experiments.
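The utility concept maps each response onto a common preference scale and sums the weighted preferences into a single index that can then be optimized. A sketch using the common logarithmic 0-9 preference scale; the response values, bounds and weights below are hypothetical, not the paper's:

```python
import math

def utility_index(responses, best, worst, weights):
    """Combine multiple responses into one overall utility index.
    Each response is mapped to a 0-9 preference number on a logarithmic
    scale (0 at its worst value, 9 at its best), then weight-averaged.
    Because 'best'/'worst' are given per response, the same formula handles
    both higher-the-better and lower-the-better characteristics."""
    total = 0.0
    for x, b, w, wt in zip(responses, best, worst, weights):
        pref = 9.0 * math.log(x / w) / math.log(b / w)
        total += wt * pref
    return total

# Hypothetical: response 1 higher-the-better (best 5.0, worst 1.0),
# response 2 lower-the-better (best 0.1, worst 1.0), weights 0.6/0.4.
u = utility_index([4.0, 0.3], [5.0, 0.1], [1.0, 1.0], [0.6, 0.4])
```

By construction the index is 9 when every response sits at its best value and 0 when every response sits at its worst, so intermediate settings can be ranked on one scale.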

  17. Optimization-Based Inverse Identification of the Parameters of a Concrete Cap Material Model

    Science.gov (United States)

    Král, Petr; Hokeš, Filip; Hušek, Martin; Kala, Jiří; Hradil, Petr

    2017-10-01

    Issues concerning the advanced numerical analysis of concrete building structures in sophisticated computing systems currently require the involvement of nonlinear mechanics tools. The efforts to design safer, more durable and, above all, more economically efficient concrete structures are supported by the use of advanced nonlinear concrete material models and the geometrically nonlinear approach. The application of nonlinear mechanics tools undoubtedly presents another step towards approximating the real behaviour of concrete building structures within the framework of computer numerical simulations. However, the success of this application depends on a thorough understanding of the behaviour of the concrete material models used and of the meaning of their parameters. The effective application of nonlinear concrete material models within computer simulations often becomes very problematic because these material models frequently contain parameters (material constants) whose values are difficult to obtain. Obtaining correct values for these material parameters is nevertheless essential to ensure the proper function of the concrete material model used. Today, one possibility that permits a successful solution of this problem is the use of optimization algorithms for optimization-based inverse material parameter identification. Parameter identification goes hand in hand with experimental investigation: it seeks the parameter values of the material model for which the data obtained from the computer simulation best approximate the experimental data. This paper is focused on the optimization-based inverse identification of the parameters of a concrete cap material model known under the name Continuous Surface Cap Model. Within this paper, material parameters of the model are identified on the basis of interaction between nonlinear computer simulations

  18. A Hybrid Least Square Support Vector Machine Model with Parameters Optimization for Stock Forecasting

    Directory of Open Access Journals (Sweden)

    Jian Chai

    2015-01-01

    Full Text Available This paper proposes an EMD-LSSVM (empirical mode decomposition least squares support vector machine) model to analyze the CSI 300 index. A WD-LSSVM (wavelet denoising least squares support vector machine) is also proposed as a benchmark to compare with the performance of EMD-LSSVM. Since parameter selection is vital to the performance of the model, different optimization methods are used, including simplex, GS (grid search), PSO (particle swarm optimization), and GA (genetic algorithm). Experimental results show that the EMD-LSSVM model with the GS algorithm outperforms the other methods in predicting stock market movement direction.
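Of the tuning methods listed, grid search (GS) is the simplest: exhaustively score every parameter combination on a validation criterion and keep the best. A generic sketch; the parameter names, grid values and the toy stand-in for the LSSVM validation error are illustrative assumptions, not the paper's setup:

```python
import itertools

def grid_search(objective, param_grid):
    """Exhaustive grid search: evaluate the objective at every combination
    of candidate values and return the best (lowest-scoring) combination.
    param_grid maps parameter names to lists of candidate values."""
    names = sorted(param_grid)
    best_params, best_score = None, float("inf")
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective standing in for an LSSVM validation error over
# (regularization gamma, kernel width sigma2); minimum at (10.0, 0.5).
def toy_error(p):
    return (p["gamma"] - 10.0) ** 2 + (p["sigma2"] - 0.5) ** 2

best, err = grid_search(toy_error, {"gamma": [1.0, 10.0, 100.0],
                                    "sigma2": [0.1, 0.5, 2.0]})
```

The cost grows as the product of the grid sizes, which is why GS is usually paired with coarse logarithmic grids for kernel hyperparameters.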

  19. Optimal Input Design for Aircraft Parameter Estimation using Dynamic Programming Principles

    Science.gov (United States)

    Morelli, Eugene A.; Klein, Vladislav

    1990-01-01

    A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.

  20. Multi-parameter optimization of a nanomagnetic system for spintronic applications

    Energy Technology Data Exchange (ETDEWEB)

    Morales Meza, Mishel [Centro de Investigación en Materiales Avanzados, S.C. (CIMAV), Chihuahua/Monterrey, 120 Avenida Miguel de Cervantes, 31109 Chihuahua (Mexico); Zubieta Rico, Pablo F. [Centro de Investigación en Materiales Avanzados, S.C. (CIMAV), Chihuahua/Monterrey, 120 Avenida Miguel de Cervantes, 31109 Chihuahua (Mexico); Centro de Investigación y de Estudios Avanzados del IPN (CINVESTAV) Querétaro, Libramiento Norponiente 2000, Fracc. Real de Juriquilla, 76230 Querétaro (Mexico); Horley, Paul P., E-mail: paul.horley@cimav.edu.mx [Centro de Investigación en Materiales Avanzados, S.C. (CIMAV), Chihuahua/Monterrey, 120 Avenida Miguel de Cervantes, 31109 Chihuahua (Mexico); Sukhov, Alexander [Institut für Physik, Martin-Luther Universität Halle-Wittenberg, 06120 Halle (Saale) (Germany); Vieira, Vítor R. [Centro de Física das Interacções Fundamentais (CFIF), Instituto Superior Técnico, Universidade Técnica de Lisboa, Avenida Rovisco Pais, 1049-001 Lisbon (Portugal)

    2014-11-15

    Magnetic properties of nano-particles feature many interesting physical phenomena that are essentially important for the creation of a new generation of spin-electronic devices. The magnetic stability of the nano-particles can be improved by the formation of ordered particle arrays, which should be optimized over several parameters. Here we report a successful optimization with respect to inter-particle distance and applied field frequency, yielding about a three-fold reduction of the coercivity of a particle array compared to that of a single particle, which opens new perspectives for the development of new spintronic devices.

  1. Multi-parameter optimization of a nanomagnetic system for spintronic applications

    International Nuclear Information System (INIS)

    Morales Meza, Mishel; Zubieta Rico, Pablo F.; Horley, Paul P.; Sukhov, Alexander; Vieira, Vítor R.

    2014-01-01

    Magnetic properties of nano-particles feature many interesting physical phenomena that are essentially important for the creation of a new generation of spin-electronic devices. The magnetic stability of the nano-particles can be improved by the formation of ordered particle arrays, which should be optimized over several parameters. Here we report a successful optimization with respect to inter-particle distance and applied field frequency, yielding about a three-fold reduction of the coercivity of a particle array compared to that of a single particle, which opens new perspectives for the development of new spintronic devices.

  2. The optimization of the nonlinear parameters in the transcorrelated method: the hydrogen molecule

    International Nuclear Information System (INIS)

    Huggett, J.P.; Armour, E.A.G.

    1976-01-01

    The nonlinear parameters in a transcorrelated calculation of the groundstate energy and wavefunction of the hydrogen molecule are optimized using the method of Boys and Handy (Proc. R. Soc. A.; 309:195 and 209, 310:43 and 63, 311:309 (1969)). The method gives quite accurate results in all cases and in some cases the results are highly accurate. This is the first time the method has been applied to the optimization of a term in the correlation function which depends linearly on the interelectronic distance. (author)

  3. The same number of optimized parameters scheme for determining intermolecular interaction energies

    DEFF Research Database (Denmark)

    Kristensen, Kasper; Ettenhuber, Patrick; Eriksen, Janus Juul

    2015-01-01

    We propose the Same Number Of Optimized Parameters (SNOOP) scheme as an alternative to the counterpoise method for treating basis set superposition errors in calculations of intermolecular interaction energies. The key point of the SNOOP scheme is to enforce that the number of optimized wave...... as numerically. Numerical results for second-order Møller-Plesset perturbation theory (MP2) and coupled-cluster with single, double, and approximate triple excitations (CCSD(T)) show that the SNOOP scheme in general outperforms the uncorrected and counterpoise approaches. Furthermore, we show that SNOOP...

  4. Optimization of the Process Parameters for Controlling Residual Stress and Distortion in Friction Stir Welding

    DEFF Research Database (Denmark)

    Tutum, Cem Celal; Schmidt, Henrik Nikolaj Blicher; Hattel, Jesper Henri

    2008-01-01

    In the present paper, numerical optimization of the process parameters, i.e. tool rotation speed and traverse speed, aiming minimization of the two conflicting objectives, i.e. the residual stresses and welding time, subjected to process-specific thermal constraints in friction stir welding......, is investigated. The welding process is simulated in 2-dimensions with a sequentially coupled transient thermo-mechanical model using ANSYS. The numerical optimization problem is implemented in modeFRONTIER and solved using the Multi-Objective Genetic Algorithm (MOGA-II). An engineering-wise evaluation or ranking...

  5. Optimal Design of Measurement Programs for the Parameter Identification of Dynamic Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

    The design of measurement programs devoted to parameter identification of structural dynamic systems is considered. The design problem is formulated as an optimization problem to minimize the total expected cost, that is, the cost of failure and the cost of the measurement program. All...... the calculations are based on a priori knowledge and engineering judgement. One of the contributions of the approach is that the optimal number of sensors can be estimated. This is shown in a numerical example where the proposed approach is demonstrated. The example is concerned with design of a measurement program...

  6. Optimal Design of Measurement Programs for the Parameter Identification of Dynamic Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

    1993-01-01

    The design of a measurement program devoted to parameter identification of structural dynamic systems is considered. The design problem is formulated as an optimization problem to minimize the total expected cost, that is, the cost of failure and the cost of the measurement program. All...... the calculations are based on a priori knowledge and engineering judgement. One of the contributions of the approach is that the optimal number of sensors can be estimated. This is shown in a numerical example where the proposed approach is demonstrated. The example is concerned with design of a measurement...

  7. Optimal Design of Measurement Programs for the Parameter Identification of Dynamic Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

    1991-01-01

    The design of a measurement program devoted to parameter identification of structural dynamic systems is considered. The design problem is formulated as an optimization problem to minimize the total expected cost, i.e. the cost of failure and the cost of the measurement program. All...... the calculations are based on a priori knowledge and engineering judgement. One of the contributions of the approach is that the optimal number of sensors can be estimated. This is shown in a numerical example where the proposed approach is demonstrated. The example is concerned with design of a measurement...

  8. Optimized and Automated Radiosynthesis of [18F]DHMT for Translational Imaging of Reactive Oxygen Species with Positron Emission Tomography

    Directory of Open Access Journals (Sweden)

    Wenjie Zhang

    2016-12-01

    Full Text Available Reactive oxygen species (ROS) play important roles in cell signaling and homeostasis. However, an abnormally high level of ROS is toxic, and is implicated in a number of diseases. Positron emission tomography (PET) imaging of ROS can assist in the detection of these diseases. For the purpose of clinical translation of [18F]6-(4-((1-(2-fluoroethyl)-1H-1,2,3-triazol-4-yl)methoxy)phenyl)-5-methyl-5,6-dihydrophenanthridine-3,8-diamine ([18F]DHMT), a promising ROS PET radiotracer, we first manually optimized the large-scale radiosynthesis conditions and then implemented them in an automated synthesis module. Our manual synthesis procedure afforded [18F]DHMT in 120 min with an overall radiochemical yield (RCY) of 31.6% ± 9.3% (n = 2, decay-uncorrected) and specific activity of 426 ± 272 GBq/µmol (n = 2). Fully automated radiosynthesis of [18F]DHMT was achieved within 77 min with an overall isolated RCY of 6.9% ± 2.8% (n = 7, decay-uncorrected) and specific activity of 155 ± 153 GBq/µmol (n = 7) at the end of synthesis. This study is the first demonstration of producing 2-[18F]fluoroethyl azide with an automated module, which can be used for a variety of PET tracers through click chemistry. It is also the first time that [18F]DHMT was successfully tested for PET imaging in a healthy beagle dog.

  9. Automated Sperm Head Detection Using Intersecting Cortical Model Optimised by Particle Swarm Optimization.

    Science.gov (United States)

    Tan, Weng Chun; Mat Isa, Nor Ashidi

    2016-01-01

    In human sperm motility analysis, sperm segmentation plays an important role in determining the location of multiple sperms. To ensure an improved segmentation result, the Laplacian of Gaussian filter is implemented as a kernel in a pre-processing step before applying the image segmentation process to automatically segment and detect human spermatozoa. This study proposes an intersecting cortical model (ICM), which was derived from several visual cortex models, to segment the sperm head region. However, the proposed method suffered from the parameter selection problem; thus, the ICM network is optimised using particle swarm optimization, where feature mutual information is introduced as the new fitness function. The final results showed that the proposed method is more accurate and robust than four state-of-the-art segmentation methods. The proposed method resulted in rates of 98.14%, 98.82%, 86.46% and 99.81% in accuracy, sensitivity, specificity and precision, respectively, after testing with 1200 sperms. The proposed algorithm is expected to be implemented in analysing sperm motility because of its robustness and capability.
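The abstract above tunes the ICM parameters with particle swarm optimization under a mutual-information fitness. A minimal global-best PSO sketch; the sphere test objective, swarm size and coefficients are illustrative choices, not the paper's configuration:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=60, seed=1):
    """Minimal particle swarm optimizer (global-best topology) for a
    continuous objective f over box bounds [(lo, hi), ...]."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp the updated position to the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: known minimum 0 at the origin.
best, val = pso_minimize(lambda x: sum(v * v for v in x), [(-5, 5), (-5, 5)])
```

In the paper's setting, f would evaluate segmentation quality (feature mutual information) for a candidate ICM parameter vector instead of the sphere function used here.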

  10. Analysis and optimization of machining parameters of laser cutting for polypropylene composite

    Science.gov (United States)

    Deepa, A.; Padmanabhan, K.; Kuppan, P.

    2017-11-01

    The present work describes the machining of a self-reinforced polypropylene composite fabricated using the hot compaction method. The objective of the experiment is to find optimum machining parameters for polypropylene (PP). Laser power and machining speed were the parameters considered, with tensile and flexural tests as the responses. The Taguchi method is used for experimentation. Grey Relational Analysis (GRA) is used for multiple process parameter optimization. ANOVA (Analysis of Variance) is used to determine the impact of each process parameter. Polypropylene has wide application in various fields: it is used in the form of foam in model aircraft and other radio-controlled vehicles, as thin sheets (∼2-20 μm) used as a dielectric, in piping systems, and it has also been used in hernia and pelvic organ repair or to prevent new hernias at the same location.
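Grey Relational Analysis collapses several responses into one grade per experimental run so that a single best setting can be picked. A sketch under the assumption that both responses are larger-the-better; the tensile/flexural values for four runs are hypothetical:

```python
def grey_relational_grades(data, weights=None, zeta=0.5):
    """Grey relational analysis for larger-the-better responses:
    normalize each column to [0, 1], compute grey relational coefficients
    against the ideal sequence (all ones), then weight-average the
    coefficients of each run into a single grade. zeta is the usual
    distinguishing coefficient (commonly 0.5)."""
    m = len(data[0])
    cols = list(zip(*data))
    norm = [[(x - min(c)) / (max(c) - min(c)) for x, c in zip(row, cols)]
            for row in data]
    deltas = [[1.0 - x for x in row] for row in norm]   # distance to ideal
    dmin = min(min(r) for r in deltas)
    dmax = max(max(r) for r in deltas)
    w = weights or [1.0 / m] * m
    grades = []
    for row in deltas:
        coeffs = [(dmin + zeta * dmax) / (d + zeta * dmax) for d in row]
        grades.append(sum(wi * c for wi, c in zip(w, coeffs)))
    return grades

# Hypothetical tensile (MPa) and flexural (MPa) responses for four runs.
grades = grey_relational_grades([[320, 40], [350, 42], [300, 45], [340, 44]])
best_run = grades.index(max(grades))
```

The run with the highest grade is the GRA-optimal compromise across both responses; ANOVA on the grades then apportions each parameter's contribution.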

  11. Parameter Identification of Static Friction Based on An Optimal Exciting Trajectory

    Science.gov (United States)

    Tu, X.; Zhao, P.; Zhou, Y. F.

    2017-12-01

    In this paper, we focus on how to improve the identification efficiency of the friction parameters in a robot joint. First, a static friction model that has only linear dependencies with respect to its parameters is adopted so that the servomotor dynamics can be linearized. In this case, the traditional exciting trajectory based on a Fourier series is modified by replacing the constant term with a quintic polynomial to ensure boundary continuity of speed and acceleration. Then, the Fourier-related parameters are optimized by a genetic algorithm (GA) in which the condition number of the regression matrix is set as the fitness function. Finally, compared with the constant-velocity tracking experiment, the exciting trajectory experiment yields similar friction parameters with the advantage of reduced time.
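The modified exciting trajectory described above (a finite Fourier series whose constant term is replaced by a polynomial) can be sketched as follows. The fundamental frequency and coefficient values are hypothetical placeholders, and the paper's quintic boundary-matching step is not reproduced, only the trajectory form:

```python
import math

def exciting_trajectory(t, a, b, poly, wf=0.1 * 2 * math.pi):
    """Finite Fourier-series exciting trajectory whose constant term is
    replaced by a polynomial (coefficients in 'poly', lowest order first).
    In the paper the polynomial is quintic so that position, speed and
    acceleration can be matched at the trajectory boundaries."""
    q = sum(c * t ** i for i, c in enumerate(poly))
    for k, (ak, bk) in enumerate(zip(a, b), start=1):
        q += (ak / (wf * k)) * math.sin(wf * k * t) \
             - (bk / (wf * k)) * math.cos(wf * k * t)
    return q

# Hypothetical one-harmonic trajectory with a linear polynomial part.
q3 = exciting_trajectory(3.0, [0.0], [0.0], [1.0, 2.0])
```

Sampling this trajectory (and its derivatives) at many time points builds the regression matrix whose condition number the GA then minimizes over the Fourier coefficients a and b.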

  12. Application of Powell's optimization method to surge arrester circuit models' parameters

    Energy Technology Data Exchange (ETDEWEB)

    Christodoulou, C.A.; Stathopulos, I.A. [National Technical University of Athens, School of Electrical and Computer Engineering, 9 Iroon Politechniou St., Zografou Campus, 157 80 Athens (Greece); Vita, V.; Ekonomou, L.; Chatzarakis, G.E. [A.S.PE.T.E. - School of Pedagogical and Technological Education, Department of Electrical Engineering Educators, N. Heraklion, 141 21 Athens (Greece)

    2010-08-15

    Powell's optimization method has been used for the evaluation of surge arrester model parameters. The proper modelling of metal-oxide surge arresters and the right selection of equivalent circuit parameters are very significant issues, since the quality and reliability of lightning performance studies can be improved with a more efficient representation of the arresters' dynamic behavior. The proposed approach selects optimum arrester model equivalent circuit parameter values, minimizing the error between the simulated peak residual voltage value and that given by the manufacturer. Application of the method is performed on a 120 kV metal oxide arrester. The use of the obtained optimum parameter values significantly reduces the relative error between the simulated and the manufacturer's peak residual voltage values, demonstrating the effectiveness of the method. (author)
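Powell's method is a derivative-free direction-set search, which suits this problem because the residual-voltage error comes from a circuit simulation with no analytic gradient. As an illustration, here is a simplified cyclic coordinate-descent variant (not the full Powell direction update); the toy error surface and the parameter names are hypothetical:

```python
def coordinate_descent(f, x0, step=1.0, tol=1e-6, max_iter=500):
    """Derivative-free minimizer in the spirit of Powell's direction-set
    method, reduced to cyclic coordinate moves with step halving.
    (Full Powell additionally builds conjugate search directions.)"""
    x = list(x0)
    while step > tol and max_iter > 0:
        improved = False
        for d in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x[:]
                trial[d] += sign * step
                if f(trial) < f(x):
                    x = trial
                    improved = True
        if not improved:
            step *= 0.5          # refine once no coordinate move helps
        max_iter -= 1
    return x

# Toy residual-voltage error with a known minimum at hypothetical
# equivalent-circuit values L1 = 0.2, R1 = 65 (second term rescaled).
err = lambda p: (p[0] - 0.2) ** 2 + ((p[1] - 65.0) / 100.0) ** 2
opt = coordinate_descent(err, [1.0, 10.0])
```

In the paper's setting, f would rerun the arrester circuit simulation and return the squared deviation of the simulated peak residual voltage from the manufacturer's datasheet value.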

  13. Optimal routing in an automated storage/retrieval system with dedicated storage

    NARCIS (Netherlands)

    Berg, van den J.P.; Gademann, A.J.R.M.

    1999-01-01

    We address the sequencing of requests in an automated storage/retrieval system with dedicated storage. We consider the block sequencing approach, where a set of storage and retrieval requests is given beforehand and no new requests come in during operation. The objective for this static problem is

  14. Guiding automated NMR structure determination using a global optimization metric, the NMR DP score

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Yuanpeng Janet, E-mail: yphuang@cabm.rutgers.edu; Mao, Binchen; Xu, Fei; Montelione, Gaetano T., E-mail: gtm@rutgers.edu [Rutgers, The State University of New Jersey, Department of Molecular Biology and Biochemistry, Center for Advanced Biotechnology and Medicine, and Northeast Structural Genomics Consortium (United States)

    2015-08-15

    ASDP is an automated NMR NOE assignment program. It uses a distinct bottom-up topology-constrained network anchoring approach for NOE interpretation, with 2D, 3D and/or 4D NOESY peak lists and resonance assignments as input, and generates unambiguous NOE constraints for iterative structure calculations. ASDP is designed to function interactively with various structure determination programs that use distance restraints to generate molecular models. In the CASD–NMR project, ASDP was tested and further developed using blinded NMR data, including resonance assignments, either raw or manually-curated (refined) NOESY peak list data, and in some cases {sup 15}N–{sup 1}H residual dipolar coupling data. In these blinded tests, in which the reference structure was not available until after structures were generated, the fully-automated ASDP program performed very well on all targets using both the raw and refined NOESY peak list data. Improvements of ASDP relative to its predecessor program for automated NOESY peak assignments, AutoStructure, were driven by challenges provided by these CASD–NMR data. These algorithmic improvements include (1) using a global metric of structural accuracy, the discriminating power score, for guiding model selection during the iterative NOE interpretation process, and (2) identifying incorrect NOESY cross peak assignments caused by errors in the NMR resonance assignment list. These improvements provide a more robust automated NOESY analysis program, ASDP, with the unique capability of being utilized with alternative structure generation and refinement programs including CYANA, CNS, and/or Rosetta.

  15. Automated Soil Physical Parameter Assessment Using Smartphone and Digital Camera Imagery

    Directory of Open Access Journals (Sweden)

    Matt Aitkenhead

    2016-12-01

    Full Text Available Here we present work on using different types of soil profile imagery (topsoil profiles captured with a smartphone camera and full-profile images captured with a conventional digital camera) to estimate the structure, texture and drainage of the soil. The method is adapted from earlier work on developing smartphone apps for estimating topsoil organic matter content in Scotland and uses an existing visual soil structure assessment approach. Colour and image texture information was extracted from the imagery. This information was linked, using geolocation information derived from the smartphone GPS system or from field notes, with existing collections of topography, land cover, soil and climate data for Scotland. A neural network model was developed that was capable of estimating soil structure (on a five-point scale), soil texture (sand, silt, clay), bulk density, pH and drainage category using this information. The model is sufficiently accurate to provide estimates of these parameters for soils in the field. We discuss potential improvements to the approach and plans to integrate the model into a set of smartphone apps for estimating health and fertility indicators for Scottish soils.

  16. Automated criterion-based analysis for Cole parameters assessment from cerebral neonatal electrical bioimpedance spectroscopy measurements

    International Nuclear Information System (INIS)

    Seoane, F; Lindecrantz, Kaj; Ward, L C; Lingwood, B E

    2012-01-01

    Hypothermia has been proven as an effective rescue therapy for infants with moderate or severe neonatal hypoxic ischemic encephalopathy. Hypoxia-ischemia alters the electrical impedance characteristics of the brain in neonates; therefore, spectroscopic analysis of the cerebral bioimpedance of the neonate may be useful for the detection of candidate neonates eligible for hypothermia treatment. Currently, in addition to the lack of reference bioimpedance data obtained from healthy neonates, there is no standardized approach established for bioimpedance spectroscopy data analysis. In this work, cerebral bioimpedance measurements (12 h postpartum) in a cross-section of 84 term and near-term healthy neonates were performed at the bedside in the post-natal ward. To characterize the impedance spectra, Cole parameters (R{sub 0}, R{sub ∞}, f{sub C} and α) were extracted from the obtained measurements using an analysis process based on best-measurement and highest-likelihood selection. The results obtained in this study complement previously reported work and provide a standardized criterion-based method for data analysis. The availability of electrical bioimpedance spectroscopy reference data and the automatic criterion-based analysis method might support the development of a non-invasive method for prompt selection of neonates eligible for cerebral hypothermic rescue therapy. (paper)
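The four Cole parameters describe the whole impedance spectrum through the standard Cole model, Z(f) = R∞ + (R0 − R∞)/(1 + (jf/f{sub C})^α). A minimal evaluator is sketched below; the numeric values in the example are illustrative, not neonatal reference data:

```python
def cole_impedance(f, r0, rinf, fc, alpha):
    """Complex impedance of the Cole model:
    Z(f) = R_inf + (R_0 - R_inf) / (1 + (j f / f_c)^alpha).
    r0 is the zero-frequency resistance, rinf the infinite-frequency
    resistance, fc the characteristic frequency and alpha the
    dispersion broadening exponent (alpha = 1 gives a pure Debye arc)."""
    jw = complex(0.0, f / fc) ** alpha
    return rinf + (r0 - rinf) / (1.0 + jw)

# Illustrative parameters: at f = fc with alpha = 1 the model reduces to
# rinf + (r0 - rinf)/(1 + j), i.e. the top of the semicircular arc.
z_fc = cole_impedance(50e3, 100.0, 40.0, 50e3, 1.0)
```

Fitting measured spectra to this function (e.g. by least squares over r0, rinf, fc, alpha) is what "extracting the Cole parameters" amounts to.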

  17. GENPLAT: an automated platform for biomass enzyme discovery and cocktail optimization.

    Science.gov (United States)

    Walton, Jonathan; Banerjee, Goutami; Car, Suzana

    2011-10-24

    The high cost of enzymes for biomass deconstruction is a major impediment to the economic conversion of lignocellulosic feedstocks to liquid transportation fuels such as ethanol. We have developed an integrated high throughput platform, called GENPLAT, for the discovery and development of novel enzymes and enzyme cocktails for the release of sugars from diverse pretreatment/biomass combinations. GENPLAT comprises four elements: individual pure enzymes, statistical design of experiments, robotic pipetting of biomass slurries and enzymes, and automated colorimetric determination of released Glc and Xyl. Individual enzymes are produced by expression in Pichia pastoris or Trichoderma reesei, or by chromatographic purification from commercial cocktails or from extracts of novel microorganisms. Simplex lattice (fractional factorial) mixture models are designed using commercial Design of Experiment statistical software. Enzyme mixtures of high complexity are constructed using robotic pipetting into a 96-well format. The measurement of released Glc and Xyl is automated using enzyme-linked colorimetric assays. Optimized enzyme mixtures containing as many as 16 components have been tested on a variety of feedstock and pretreatment combinations. GENPLAT is adaptable to mixtures of pure enzymes, mixtures of commercial products (e.g., Accellerase 1000 and Novozyme 188), extracts of novel microbes, or combinations thereof. To make and test mixtures of ˜10 pure enzymes requires less than 100 μg of each protein and fewer than 100 total reactions, when operated at a final total loading of 15 mg protein/g glucan. We use enzymes from several sources. Enzymes can be purified from natural sources such as fungal cultures (e.g., Aspergillus niger, Cochliobolus carbonum, and Galerina marginata), or they can be made by expression of the encoding genes (obtained from the increasing number of microbial genome sequences) in hosts such as E. coli, Pichia pastoris, or a filamentous fungus such
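The simplex-lattice mixture designs mentioned above enumerate blends whose component proportions are multiples of 1/m and sum to 1 (so each row of the design is one candidate enzyme cocktail). A small generator, independent of any DOE software:

```python
from itertools import combinations_with_replacement

def simplex_lattice(q, m):
    """Generate a {q, m} simplex-lattice mixture design: every blend of q
    components whose proportions are multiples of 1/m and sum to 1.
    The design has C(q + m - 1, m) distinct points."""
    points = set()
    for combo in combinations_with_replacement(range(q), m):
        # combo assigns m unit shares to components; count shares per component
        counts = tuple(combo.count(i) / m for i in range(q))
        points.add(counts)
    return sorted(points)

# {3, 2} lattice: 3 pure components plus the 3 binary 50/50 blends.
design = simplex_lattice(3, 2)
```

For a 16-component cocktail a full lattice is large, which is why fractional subsets of it are used in practice, as the abstract notes.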

  18. A novel optimization approach to estimating kinetic parameters of the enzymatic hydrolysis of corn stover

    Directory of Open Access Journals (Sweden)

    Fenglei Qi

    2016-01-01

    Full Text Available Enzymatic hydrolysis is an integral step in the conversion of lignocellulosic biomass to ethanol. The conversion of cellulose to fermentable sugars in the presence of inhibitors is a complex kinetic problem. In this study, we describe a novel approach to estimating the kinetic parameters underlying this process. This study employs experimental data measuring substrate and enzyme loadings, sugar and acid inhibitions for the production of glucose. Multiple objectives to minimize the difference between model predictions and experimental observations are developed and optimized by adopting multi-objective particle swarm optimization method. Model reliability is assessed by exploring likelihood profile in each parameter space. Compared to previous studies, this approach improved the prediction of sugar yields by reducing the mean squared errors by 34% for glucose and 2.7% for cellobiose, suggesting improved agreement between model predictions and the experimental data. Furthermore, kinetic parameters such as K2IG2, K1IG, K2IG, K1IA, and K3IA are identified as contributors to the model non-identifiability and wide parameter confidence intervals. Model reliability analysis indicates possible ways to reduce model non-identifiability and tighten parameter confidence intervals. These results could help improve the design of lignocellulosic biorefineries by providing higher fidelity predictions of fermentable sugars under inhibitory conditions.

  19. Optimization of operating parameters in polysilicon chemical vapor deposition reactor with response surface methodology

    Science.gov (United States)

    An, Li-sha; Liu, Chun-jiao; Liu, Ying-wen

    2018-05-01

    In the polysilicon chemical vapor deposition reactor, the operating parameters interact in complex ways to affect the polysilicon output. Therefore, it is very important to address the coupling of multiple parameters and solve the optimization problem in a computationally efficient manner. Here, we adopted Response Surface Methodology (RSM) to analyze the complex coupling effects of different operating parameters on the silicon deposition rate (R) and further achieve effective optimization of the silicon CVD system. Based on finite numerical experiments, an accurate RSM regression model is obtained and applied to predict R for different operating parameters, including temperature (T), pressure (P), inlet velocity (V), and inlet mole fraction of H2 (M). An analysis of variance is conducted to assess the adequacy of the regression model and examine the statistical significance of each factor. Consequently, the optimum combination of operating parameters for the silicon CVD reactor is: T = 1400 K, P = 3.82 atm, V = 3.41 m/s, M = 0.91. The validation tests and optimum solution show that the results are in good agreement with those from the CFD model and that the deviations of the predicted values are less than 4.19%. This work provides theoretical guidance for operating the polysilicon CVD process.
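An RSM regression model has the standard second-order polynomial form in the coded variables: intercept, linear, pure-quadratic and two-factor interaction terms. A minimal evaluator; the coefficient values in the example are hypothetical, not the paper's fitted model:

```python
def rsm_predict(x, b0, b_lin, b_quad, b_int):
    """Second-order response-surface model:
    y = b0 + sum_i b_i x_i + sum_i b_ii x_i^2 + sum_{i<j} b_ij x_i x_j.
    b_int lists the interaction coefficients in (i, j) order with i < j."""
    n = len(x)
    y = b0
    y += sum(bi * xi for bi, xi in zip(b_lin, x))
    y += sum(bii * xi * xi for bii, xi in zip(b_quad, x))
    k = 0
    for i in range(n):
        for j in range(i + 1, n):
            y += b_int[k] * x[i] * x[j]
            k += 1
    return y

# Hypothetical two-factor model in coded variables (e.g. T and P at +1/-1).
y = rsm_predict([1.0, -1.0], 2.0, [0.5, 0.3], [-0.1, -0.2], [0.05])
```

Once such a surrogate is fitted to the CFD runs, the optimum operating point can be found by optimizing this cheap polynomial instead of rerunning the full reactor model.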

  20. Optimizing parameters of a technical system using quality function deployment method

    Science.gov (United States)

    Baczkowicz, M.; Gwiazda, A.

    2015-11-01

    The article shows the practical use of Quality Function Deployment (QFD) on the example of a mechanized mining support. First, it gives a short description of the method and shows what the design process looks like from the designer's point of view. The proposed method allows construction parameters to be optimized and compared, as well as adapted to customer requirements. QFD helps to determine the full set of crucial construction parameters, and then their importance and the difficulty of their execution. Second, it presents the chosen technical system and its construction, with figures of the existing model and the future optimized model. The construction parameters were selected from the designer's point of view. The method helps to specify a complete set of construction parameters from the point of view of the designed technical system and customer requirements. The QFD matrix can be adjusted depending on design needs, and not every part of it has to be considered; designers can choose which parts are the most important. This makes QFD a very flexible tool. Most important is defining the relationships between parameters; that part cannot be eliminated from the analysis.

  1. Optimization of Squeeze Casting Parameters for 2017 A Wrought Al Alloy Using Taguchi Method

    Directory of Open Access Journals (Sweden)

    Najib Souissi

    2014-04-01

    Full Text Available This study applies the Taguchi method to investigate the relationship between the ultimate tensile strength, hardness and process variables in squeeze casting of a 2017 A wrought aluminium alloy. The effects of various casting parameters, including squeeze pressure, melt temperature and die temperature, were studied. The objectives of applying the Taguchi method to the squeeze casting process are to establish the optimal combination of process parameters using only a few experiments and to reduce the variation in quality. The experimental results show that the squeeze pressure significantly affects the microstructure and the mechanical properties of the 2017 A Al alloy.
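
    For a larger-the-better response such as tensile strength, the Taguchi analysis ranks factor levels by the signal-to-noise ratio S/N = -10*log10(mean(1/y^2)). A sketch with hypothetical UTS replicates follows; the pressure levels and strength values are invented, not the study's data.

```python
import math

def sn_larger_better(ys):
    """Taguchi larger-the-better S/N ratio: -10*log10(mean(1/y^2))."""
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

# Hypothetical ultimate tensile strength (MPa) replicates
# at each squeeze-pressure level (MPa).
uts_by_level = {50: [310, 318], 75: [342, 347], 100: [351, 349]}
sn = {lvl: sn_larger_better(ys) for lvl, ys in uts_by_level.items()}
best_level = max(sn, key=sn.get)   # the level with the highest S/N is preferred
```

    The same computation is repeated for each factor of the orthogonal array, and the best level of each factor is combined into the predicted optimum.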

  2. Stepwise optimization and global chaos of nonlinear parameters in exact calculations of few-particle systems

    International Nuclear Information System (INIS)

    Frolov, A.M.

    1986-01-01

    The problem of exact variational calculations of few-particle systems in the exponential basis of the relative coordinates using nonlinear parameters is studied. The techniques of stepwise optimization and global chaos of nonlinear parameters are used to calculate the S and P states of homonuclear muonic molecules with an error of no more than ±0.001 eV. The global-chaos technique has also proved successful in the case of the nuclear systems ³H and ³He.

  3. Optimization of Cutting Parameters on Delamination of Drilling Glass-Polyester Composites

    Directory of Open Access Journals (Sweden)

    Majid Habeeb Faidh-Allah

    2018-02-01

    Full Text Available This paper studies the effect of cutting parameters (spindle speed and feed rate) on the delamination phenomenon during the drilling of glass-polyester composites. The drilling was performed on a CNC machine with a 10 mm diameter high-speed steel (HSS) drill bit. The Taguchi technique with an L16 orthogonal layout was used to analyze the parameters affecting the delamination factor. The optimal experiment was no. 13, with a spindle speed of 1273 rpm and a feed of 0.05 mm/rev, giving a minimum delamination factor of 1.28.

  4. Saturne II synchrotron injector parameters operation and control: computerization and optimization

    International Nuclear Information System (INIS)

    Lagniel, J.M.

    1983-01-01

    The injector control system has been studied with the aims of improving beam quality, increasing versatility, and achieving better machine availability. It was chosen to realize the three following functions: - acquisition of the principal parameters of the process, so as to check them quickly and to be warned if one of them is wrong (monitoring); - control of those parameters, one by one or by families (start-up, operating point); - the search for an optimal control (on a model or on the process itself) [fr

  5. An optimal autonomous microgrid cluster based on distributed generation droop parameter optimization and renewable energy sources using an improved grey wolf optimizer

    Science.gov (United States)

    Moazami Goodarzi, Hamed; Kazemi, Mohammad Hosein

    2018-05-01

    Microgrid (MG) clustering is regarded as an important driver in improving the robustness of MGs. However, little research has been conducted on providing appropriate MG clustering. This article addresses this shortfall. It proposes a novel multi-objective optimization approach for finding optimal clustering of autonomous MGs by focusing on variables such as distributed generation (DG) droop parameters, the location and capacity of DG units, renewable energy sources, capacitors and powerline transmission. Power losses are minimized and voltage stability is improved while virtual cut-set lines with minimum power transmission for clustering MGs are obtained. A novel chaotic grey wolf optimizer (CGWO) algorithm is applied to solve the proposed multi-objective problem. The performance of the approach is evaluated by utilizing a 69-bus MG in several scenarios.
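
    The chaotic grey wolf optimizer used in the study builds on the standard GWO update rule. A minimal plain GWO on a toy 2-D sphere function is sketched below to illustrate the mechanics; the population size, iteration count, and test function are illustrative choices, not the authors' settings.

```python
import random

random.seed(1)

def sphere(x):
    """Toy objective with minimum 0 at the origin."""
    return sum(v * v for v in x)

def gwo(obj, dim=2, wolves=20, iters=200, lo=-5.0, hi=5.0):
    pack = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=obj)
        alpha, beta, delta = pack[0], pack[1], pack[2]   # three best wolves
        a = 2 - 2 * t / iters                            # decays from 2 to 0
        for i in range(wolves):
            new = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = 2 * a * random.random() - a      # exploration/exploitation
                    C = 2 * random.random()
                    x += leader[d] - A * abs(C * leader[d] - pack[i][d])
                new.append(min(hi, max(lo, x / 3)))      # average of three pulls
            pack[i] = new
    return min(pack, key=obj)

best = gwo(sphere)
```

    A chaotic variant such as CGWO typically replaces the uniform random numbers above with a chaotic map sequence to improve exploration.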

  6. Optimal relations of the parameters ensuring safety during reactor start-up

    International Nuclear Information System (INIS)

    Yurkevich, G.P.

    2004-01-01

    A procedure and equations are suggested for determining the optimal ratios between the parameters that allow the reactor to be brought safely to a critical state. The initial pulse frequency of the pulsed start-up channel and the power of the neutron source are decreased by a reduced rate of reactivity change during automatic start-up, by placing the pulsed neutron detector in a region with a neutron flux density of up to 5·10¹² s⁻¹cm⁻² at rated power, by a separate period signal for use in the automatic start-up and emergency protection circuits, and by a reduced pulse frequency of the start-up channel (equal to 4000 s⁻¹). The procedure and equations for determining the optimal parameters take into account the statistical character of the pulsed detector frequency and of false output signals [ru

  7. Parameter optimization for transitions between memory states in small arrays of Josephson junctions

    Energy Technology Data Exchange (ETDEWEB)

    Rezac, Jacob D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Computer Science and Mathematics Division; Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Computing and Computational Sciences Directorate; Univ. of Delaware, Newark, DE (United States). Dept. of Mathematical Sciences; Imam, Neena [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Computing and Computational Sciences Directorate; Braiman, Yehuda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Computer Science and Mathematics Division; Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Computing and Computational Sciences Directorate; ; Univ. of Tennessee, Knoxville, TN (United States). Dept. of Mechanical, Aerospace, and Biomedical Engineering

    2017-01-11

    Coupled arrays of Josephson junctions possess multiple stable zero-voltage states. Such states can store information and consequently can be utilized for cryogenic memory applications. Basic memory operations can be implemented by sending a pulse to one of the junctions and studying transitions between the states. To be suitable for memory operations, such transitions between the states have to be fast and energy efficient. In this article we employed simulated annealing, a stochastic optimization algorithm, to optimize the array parameters so as to minimize the times and energies of transitions between specifically chosen states that can be utilized for memory operations (Read, Write, and Reset). Simulation results show that such transitions occur with access times on the order of 10–100 ps and access energies on the order of 10⁻¹⁹–5×10⁻¹⁸ J. Numerical simulations are validated with approximate analytical results.
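
    The simulated-annealing search the authors apply can be sketched generically; the toy objective, proposal distribution, and cooling schedule below are invented stand-ins, not the junction-array transition model.

```python
import math, random

random.seed(0)

def objective(x):
    """Toy 'transition cost' landscape with local minima; global min near x = 2.16."""
    return (x - 2) ** 2 + 0.3 * math.sin(8 * x)

def anneal(obj, x0=0.0, T0=2.0, cooling=0.995, steps=4000):
    x, fx = x0, obj(x0)
    best_x, best_f = x, fx
    T = T0
    for _ in range(steps):
        cand = x + random.gauss(0, 0.3)          # local random move
        fc = obj(cand)
        # Always accept downhill moves; accept uphill with Boltzmann probability.
        if fc < fx or random.random() < math.exp((fx - fc) / T):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        T *= cooling                              # geometric cooling schedule
    return best_x, best_f

x_best, f_best = anneal(objective)
```

    The high initial temperature lets the search escape local minima; as the temperature decays, the walk settles into the deepest basin it has found.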

  8. Physiochemical parameters optimization for enhanced nisin production by Lactococcus lactis (MTCC 440)

    Directory of Open Access Journals (Sweden)

    Puspadhwaja Mall

    2010-02-01

    Full Text Available The influence of various physiochemical parameters on the growth of Lactococcus lactis subsp. lactis MTCC 440 was studied at shake-flask level for 20 h. Media optimization (MRS broth) was studied to achieve enhanced growth of the organism and also nisin production. Bioassay of nisin was done by the agar diffusion method using Streptococcus agalactae NCIM 2401 as the indicator strain. MRS broth (6%, w/v) with 0.15 μg/ml of nisin, supplemented with 0.5% (v/v) skimmed milk, was found to be the best for nisin production as well as for the growth of L. lactis. The production of nisin was strongly influenced by the presence of skimmed milk and nisin in the MRS broth. The production of nisin was also affected by the physical parameters: maximum nisin production was at 30 °C, while the optimal temperature for biomass production was 37 °C.

  9. Multi-criteria optimization of chassis parameters of Nissan 200 SX for drifting competitions

    Science.gov (United States)

    Maniowski, M.

    2016-09-01

    The work objective is to increase performance of Nissan 200sx S13 prepared for a quasi-static state of drifting on a circular path with given constant radius (R=15 m) and tyre-road friction coefficient (μ = 0.9). First, a high fidelity “miMA” multibody model of the vehicle is formulated. Then, a multicriteria optimization problem is solved with one of the goals to maximize a stable drift angle (β) of the vehicle. The decision variables contain 11 parameters of the vehicle chassis (describing the wheel suspension stiffness and geometry) and 2 parameters responsible for a driver steering and accelerator actions, that control this extreme closed-loop manoeuvre. The optimized chassis setup results in the drift angle increase by 14% from 35 to 40 deg.

  10. A novel membrane-based process to isolate peroxidase from horseradish roots: optimization of operating parameters.

    Science.gov (United States)

    Liu, Jianguo; Yang, Bo; Chen, Changzhen

    2013-02-01

    The optimization of operating parameters for the isolation of peroxidase from horseradish (Armoracia rusticana) roots with ultrafiltration (UF) technology was systemically studied. The effects of UF operating conditions on the transmission of proteins were quantified using the parameter scanning UF. These conditions included solution pH, ionic strength, stirring speed and permeate flux. Under optimized conditions, the purity of horseradish peroxidase (HRP) obtained was greater than 84 % after a two-stage UF process and the recovery of HRP from the feedstock was close to 90 %. The resulting peroxidase product was then analysed by isoelectric focusing, SDS-PAGE and circular dichroism, to confirm its isoelectric point, molecular weight and molecular secondary structure. The effects of calcium ion on HRP specific activities were also experimentally determined.

  11. Parameter Optimization for Enhancement of Ethanol Yield by Atmospheric Pressure DBD-Treated Saccharomyces cerevisiae

    International Nuclear Information System (INIS)

    Dong Xiaoyu; Yuan Yulian; Tang Qian; Dou Shaohua; Di Lanbo; Zhang Xiuling

    2014-01-01

    In this study, Saccharomyces cerevisiae (S. cerevisiae) was exposed to dielectric barrier discharge plasma (DBD) to improve its ethanol production capacity during fermentation. Response surface methodology (RSM) was used to optimize the discharge-associated parameters of DBD for the purpose of maximizing the ethanol yield achieved by DBD-treated S. cerevisiae. According to single factor experiments, a mathematical model was established using Box-Behnken central composite experiment design, with plasma exposure time, power supply voltage, and exposed-sample volume as impact factors and ethanol yield as the response. This was followed by response surface analysis. Optimal experimental parameters for plasma discharge-induced enhancement in ethanol yield were plasma exposure time of 1 min, power voltage of 26 V, and an exposed sample volume of 9 mL. Under these conditions, the resulting yield of ethanol was 0.48 g/g, representing an increase of 33% over control. (plasma technology)

  12. Optimization of the parameters of power sources excited by β-radiation

    Energy Technology Data Exchange (ETDEWEB)

    Bulyarskiy, S. V., E-mail: bulyar2954@mail.ru; Lakalin, A. V. [Russian Academy of Sciences, Institute of Nanotechnology of Microelectronics (Russian Federation); Abanin, I. E.; Amelichev, V. V. [Technological Center (Russian Federation); Svetuhin, V. V. [Ulyanovsk State University (Russian Federation)

    2017-01-15

    The experimental results and calculations of the efficiency of the energy conversion of Ni-63 β-radiation sources to electricity using silicon p–i–n diodes are presented. All calculations are performed taking into account the energy distribution of β-electrons. An expression for the converter open-circuit voltage is derived taking into account the distribution of high-energy electrons in the space-charge region of the p–i–n diode. Ways of optimizing the converter parameters by improving the technology of diodes and optimizing the emitter active layer and i-region thicknesses of the semiconductor converter are shown. The distribution of the conversion losses to the source and radiation detector and the losses to high-energy electron entry into the semiconductor is calculated. Experimental values of the conversion efficiency of 0.4–0.7% are in good agreement with the calculated parameters.

  13. Parameters estimation online for Lorenz system by a novel quantum-behaved particle swarm optimization

    International Nuclear Information System (INIS)

    Gao Fei; Tong Hengqing; Li Zhuoqiu

    2008-01-01

    This paper proposes a novel quantum-behaved particle swarm optimization (NQPSO) for estimating the unknown parameters of chaotic systems by transforming the estimation into the optimization of nonlinear functions. By means of techniques in three aspects: contracting the search space self-adaptively, a boundary restriction strategy, and substituting a convex combination of the particles for their centre of mass, the paper achieves a quite effective search mechanism with a fine equilibrium between exploitation and exploration. Details of applying the proposed method and other methods to Lorenz systems are given, and the experiments show that NQPSO has better adaptability, dependability and robustness. It is a successful approach to the online estimation of unknown parameters, especially in cases with white noise
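
    The idea of casting parameter estimation as nonlinear function optimization can be illustrated with a plain PSO (not the paper's quantum-behaved variant) recovering the control parameter of a logistic map from an observed trajectory; the map, the swarm settings, and the search range are all illustrative choices.

```python
import random

random.seed(42)

def trajectory(r, x0=0.3, n=6):
    """Short logistic-map trajectory x_{k+1} = r*x_k*(1 - x_k)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

observed = trajectory(3.7)                       # "measured" data; true r = 3.7

def cost(r):                                     # trajectory mismatch to minimize
    return sum((a - b) ** 2 for a, b in zip(trajectory(r), observed))

# Minimal PSO over the scalar parameter r in [3.5, 4.0].
n_part, iters, w, c1, c2 = 20, 100, 0.6, 1.5, 1.5
pos = [random.uniform(3.5, 4.0) for _ in range(n_part)]
vel = [0.0] * n_part
pbest = pos[:]
gbest = min(pos, key=cost)
for _ in range(iters):
    for i in range(n_part):
        vel[i] = (w * vel[i]
                  + c1 * random.random() * (pbest[i] - pos[i])
                  + c2 * random.random() * (gbest - pos[i]))
        pos[i] = min(4.0, max(3.5, pos[i] + vel[i]))
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=cost)
```

    Short trajectories keep the cost surface smooth enough for the swarm; with long chaotic trajectories, the sensitivity to initial conditions makes the basin around the true parameter extremely narrow, which is the difficulty the paper's refinements target.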

  14. Performance Evaluation and Parameter Optimization of SoftCast Wireless Video Broadcast

    Directory of Open Access Journals (Sweden)

    Dongxue Yang

    2015-08-01

    Full Text Available Wireless video broadcast plays an important role in multimedia communication with the emergence of mobile video applications. However, conventional video broadcast designs suffer from a cliff effect due to separated source and channel encoding. The newly proposed SoftCast scheme employs a cross-layer design, whose reconstructed video quality is proportional to the channel condition. In this paper, we provide the performance evaluation and the parameter optimization of the SoftCast system. Optimization principles on parameter selection are suggested to obtain a better video quality, occupy less bandwidth and/or utilize lower complexity. In addition, we compare SoftCast with H.264 in the LTE EPA scenario. The simulation results show that SoftCast provides a better performance in the scalability to channel conditions and the robustness to packet losses.

  15. Slot Parameter Optimization for Multiband Antenna Performance Improvement Using Intelligent Systems

    Directory of Open Access Journals (Sweden)

    Erdem Demircioglu

    2015-01-01

    Full Text Available This paper discusses bandwidth enhancement for multiband microstrip patch antennas (MMPAs) using symmetrical rectangular/square slots etched on the patch, together with the substrate properties. The slot parameters on the MMPA are modeled using the soft computing technique of artificial neural networks (ANN). To achieve the best ANN performance, Particle Swarm Optimization (PSO) and Differential Evolution (DE) are applied with the ANN's conventional training algorithm to optimize the modeling performance. In this study, the slot parameters are taken as the slot distance to the radiating patch edge, the slot width, and the slot length. Bandwidth enhancement is applied to a formerly designed MMPA fed by a microstrip transmission line attached to the center pin of a 50 ohm SMA connector. The simulated antennas are fabricated and measured, and the measurement results are utilized for training the artificial intelligence models. The ANN provides 98% model accuracy for rectangular slots and 97% for square slots; however, the adaptive neuro-fuzzy inference system (ANFIS) offers 90% accuracy and lacks resonance-frequency tracking.

  16. Production of sintered alumina from powder; optimization of the sintering parameters for maximum mechanical resistance

    International Nuclear Information System (INIS)

    Rocha, J.C. da.

    1981-02-01

    Pure sintered alumina and the optimization of the sintering parameters to obtain the highest mechanical resistance are discussed. Test specimens are sintered from a fine powder of pure alumina (Al₂O₃), α phase, at different temperatures and times, in air. The microstructures are analysed with respect to porosity and grain size. Depending on the temperature or the time of sintering, there is a maximum in the mechanical resistance. (A.R.H.) [pt

  17. Accuracy Analysis and Parameters Optimization in Urban Flood Simulation by PEST Model

    Science.gov (United States)

    Keum, H.; Han, K.; Kim, H.; Ha, C.

    2017-12-01

    The risk of urban flooding has been increasing due to heavy rainfall, flash flooding and rapid urbanization. Rainwater pumping stations and underground reservoirs are used to actively take measures against flooding; however, flood damage in lowlands continues to occur. Inundation in urban areas results from the overflow of sewers. For accurate two-dimensional flood analysis, it is therefore important to model the network system that is intricately entangled within a city close to its actual physical layout, together with accurate terrain, because of the effects of buildings and roads. The purpose of this study is to propose an optimal scenario-construction procedure for watershed partitioning and parameterization for urban runoff analysis and pipe network analysis, and to increase the accuracy of flooded-area prediction through a coupled model. The procedure was verified by applying it to an actual drainage basin in Seoul. In this study, optimization was performed using four parameters: Manning's roughness coefficient for conduits, watershed width, Manning's roughness coefficient for impervious areas, and Manning's roughness coefficient for pervious areas. The calibration ranges of the parameters were determined using the SWMM manual and the ranges used in previous studies, and the parameters were estimated using the automatic calibration tool PEST. The correlation coefficient was high for the scenarios using PEST, and the RPE and RMSE also showed high accuracy for those scenarios. In the case of RPE, the error was in the range of 13.9-28.9% for the scenarios without parameter estimation, but for the scenarios using PEST the error range was reduced to 6.8-25.7%. 
Based on the results of this study, it can be concluded that more accurate flood analysis is possible when the optimal scenario is selected by determining the appropriate reference conduit for future urban flooding analysis and if the results are applied to various
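
    The accuracy metrics used in the study can be expressed as short functions. The hydrograph values below are hypothetical, and the RPE formula shown (the relative difference of peak flows) is one common definition assumed here for illustration.

```python
import math

def rmse(obs, sim):
    """Root-mean-square error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def rpe(obs, sim):
    """Relative peak error: percent difference between peak flows."""
    return abs(max(obs) - max(sim)) / max(obs) * 100

observed  = [0.0, 1.2, 3.5, 6.8, 4.1, 2.0, 0.7]  # hypothetical flows, m^3/s
simulated = [0.1, 1.0, 3.1, 6.2, 4.4, 2.3, 0.9]
```

    A calibration tool such as PEST repeatedly runs the model with adjusted parameters to drive metrics like these toward their best achievable values.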

  18. Characterization of PV panel and global optimization of its model parameters using genetic algorithm

    International Nuclear Information System (INIS)

    Ismail, M.S.; Moghavvemi, M.; Mahlia, T.M.I.

    2013-01-01

    Highlights: • The optimization ability of a genetic algorithm was utilized to extract the parameters of a PV panel model. • The effect of solar radiation and temperature variations was taken into account in the fitness-function evaluation. • Matlab-Simulink was used to simulate the operation of the PV panel to validate the results. • Different cases were analyzed to ascertain which of them gives more accurate results. • The accuracy and applicability of this approach as a valuable tool for PV modeling were clearly validated. - Abstract: This paper details an improved modeling technique for a photovoltaic (PV) module that utilizes the optimization ability of a genetic algorithm, with the different parameters of the PV module being computed via this approach. The accurate modeling of any PV module depends on the values of these parameters, which is imperative in the context of further studies concerning different PV applications; simulation, optimization and the design of hybrid systems that include PV are examples of these applications. Global optimization of the parameters and applicability over the entire range of solar radiation and a wide range of temperatures are achievable via this approach. The manufacturer's data-sheet information is used as the basis for the parameter optimization, an average-absolute-error fitness function is formulated, and a numerical iterative method is used to solve the voltage-current relation of the PV module. The results of single-diode and two-diode models are evaluated in order to ascertain which of them is more accurate. Other cases are also analyzed for the purpose of comparison. The Matlab-Simulink environment is used to simulate the operation of the PV module, depending on the extracted parameters. The results of the simulation are compared with the data-sheet information, which is obtained via experimentation, in order to validate the reliability of the approach. Three types of PV modules
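
    The parameter-extraction idea can be sketched with a small genetic algorithm fitting an idealized single-diode model (series and shunt resistances neglected, so the I-V relation is explicit) to synthetic I-V data. The model simplification, parameter ranges, and GA settings are assumptions for illustration, not the paper's method.

```python
import math, random

random.seed(7)
VT = 0.0257                                      # thermal voltage at ~25 C, volts

def model(V, Iph, Is, n):
    """Ideal single-diode model: I = Iph - Is*(exp(V/(n*VT)) - 1)."""
    return Iph - Is * math.expm1(V / (n * VT))

TRUE = (5.0, 1e-9, 1.3)                          # synthetic "true" module parameters
volts = [0.05 * k for k in range(15)]            # 0 ... 0.7 V
data = [model(v, *TRUE) for v in volts]

def fitness(p):
    """Negative sum of squared current errors; Is is searched in log10 space."""
    Iph, log_Is, n = p
    return -sum((model(v, Iph, 10 ** log_Is, n) - i) ** 2
                for v, i in zip(volts, data))

def random_ind():
    return [random.uniform(4, 6), random.uniform(-12, -6), random.uniform(1, 2)]

pop = [random_ind() for _ in range(60)]
for _ in range(200):                             # elitist GA generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]                           # keep the best (elitism)
    children = []
    while len(children) < 40:
        a, b = random.sample(parents, 2)         # blend crossover + mutation
        children.append([(x + y) / 2 + random.gauss(0, 0.02)
                         for x, y in zip(a, b)])
    pop = parents + children

best = max(pop, key=fitness)
```

    Searching the saturation current in log space is a common trick because Is spans many orders of magnitude and is strongly correlated with the ideality factor n.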

  19. SVM classification model in depression recognition based on mutation PSO parameter optimization

    Directory of Open Access Journals (Sweden)

    Zhang Ming

    2017-01-01

    Full Text Available At present, the clinical diagnosis of depression is made mainly through structured interviews by psychiatrists, which lack objective diagnostic methods and therefore lead to a higher rate of misdiagnosis. In this paper, a method of depression recognition based on an SVM and a mutation particle swarm optimization algorithm is proposed. To address the problem that the particle swarm optimization (PSO) algorithm is easily trapped in local optima, we propose a feedback mutation PSO algorithm (FBPSO) to balance local search and global exploration ability, so that the parameters of the classification model are optimal. We compared the depression classification accuracy of different PSO mutation algorithms and found that the classification accuracy of a support vector machine (SVM) classifier based on the feedback mutation PSO algorithm is the highest. Our study provides an important reference for establishing auxiliary diagnostics for depression recognition in clinical diagnosis.

  20. Multi-objective optimization problems concepts and self-adaptive parameters with mathematical and engineering applications

    CERN Document Server

    Lobato, Fran Sérgio

    2017-01-01

    This book is aimed at undergraduate and graduate students in applied mathematics or computer science, as a tool for solving real-world design problems. The present work covers fundamentals in multi-objective optimization and applications in mathematical and engineering system design using a new optimization strategy, namely the Self-Adaptive Multi-objective Optimization Differential Evolution (SA-MODE) algorithm. This strategy is proposed in order to reduce the number of evaluations of the objective function through dynamic update of canonical Differential Evolution parameters (population size, crossover probability and perturbation rate). The methodology is applied to solve mathematical functions considering test cases from the literature and various engineering systems design, such as cantilevered beam design, biochemical reactor, crystallization process, machine tool spindle design, rotary dryer design, among others.