WorldWideScience

Sample records for automated parameter optimization

  1. An automated analysis workflow for optimization of force-field parameters using neutron scattering data

    Energy Technology Data Exchange (ETDEWEB)

    Lynch, Vickie E.; Borreguero, Jose M. [Neutron Data Analysis & Visualization Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Bhowmik, Debsindhu [Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Ganesh, Panchapakesan; Sumpter, Bobby G. [Center for Nanophase Material Sciences, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Proffen, Thomas E. [Neutron Data Analysis & Visualization Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Goswami, Monojoy, E-mail: goswamim@ornl.gov [Center for Nanophase Material Sciences, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States)

    2017-07-01

    Graphical abstract: - Highlights: • An automated workflow to optimize force-field parameters. • Used the workflow to optimize force-field parameters for a system containing nanodiamond and tRNA. • The mechanism relies on molecular dynamics simulation and neutron scattering experimental data. • The workflow can be generalized to any other experimental and simulation techniques. - Abstract: Large-scale simulations and data analysis are often required to explain neutron scattering experiments and to establish a connection between the fundamental physics at the nanoscale and the data probed by neutrons. However, to perform simulations at experimental conditions it is critical to use correct force-field (FF) parameters, which are unfortunately not available for most complex experimental systems. In this work, we have developed a workflow optimization technique to provide optimized FF parameters by comparing molecular dynamics (MD) to neutron scattering data. We describe the workflow in detail by using an example system consisting of tRNA and hydrophilic nanodiamonds (ND) in a deuterated water (D₂O) environment. Quasi-elastic neutron scattering (QENS) data show a faster motion of the tRNA in the presence of the nanodiamonds than without them. To compare the QENS and MD results quantitatively, a proper choice of FF parameters is necessary. We use an efficient workflow to optimize the FF parameters between the hydrophilic nanodiamond and water by comparing to the QENS data. Our results show that we can obtain accurate FF parameters by using this technique. The workflow can be generalized to other types of neutron data for FF optimization, such as vibrational spectroscopy and spin echo.
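
    The core of such a workflow is a closed loop: propose FF parameters, run a simulation, score it against the measured scattering data, and let an optimizer propose the next set. A minimal sketch of that loop is below; the simulate_isf stand-in and all parameter values are hypothetical placeholders for the paper's MD-plus-scattering pipeline.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the expensive MD + scattering pipeline: given
# force-field parameters (epsilon, sigma), return a simulated intermediate
# scattering function on the experimental (Q, t) grid.
def simulate_isf(params, q_t_grid):
    epsilon, sigma = params
    q, t = q_t_grid
    # toy decay model standing in for an MD-derived I(Q, t)
    return np.exp(-epsilon * q**2 * t / sigma)

def chi_square(params, q_t_grid, i_exp, sigma_exp):
    """Goodness of fit between simulated and experimental scattering data."""
    i_sim = simulate_isf(params, q_t_grid)
    return np.sum(((i_sim - i_exp) / sigma_exp) ** 2)

# synthetic "experimental" QENS-like data generated with known parameters
q, t = np.meshgrid(np.linspace(0.3, 1.9, 9), np.linspace(0.1, 10.0, 50))
i_exp = simulate_isf((0.5, 1.2), (q, t)) + np.random.normal(0, 0.01, q.shape)

# derivative-free search; a production loop would add bounds or log-scaling
result = minimize(chi_square, x0=[1.0, 1.0], args=((q, t), i_exp, 0.01),
                  method='Nelder-Mead')
print("optimized parameters:", result.x)
```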

  2. Optimization of automation: III. Development of optimization method for determining automation rate in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun

    2016-01-01

    Highlights: • We propose an appropriate automation rate that enables the best human performance. • We analyze the shortest working time considering Situation Awareness Recovery (SAR). • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods. • The process to derive the optimized automation rate is demonstrated through case studies. - Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed to promise greater efficiency, lower workloads, and fewer operator errors by enhancing operator and system performance. However, the excessive introduction of automation has deteriorated operator performance due to the side effects of automation, referred to as Out-of-the-Loop (OOTL), and this is a critical issue that must be resolved. Thus, in order to determine the optimal level of automation that assures the best human operator performance, a quantitative method of optimizing the automation is proposed in this paper. To determine appropriate automation levels, the automation rate and ostracism rate, which are estimation methods that quantitatively analyze the positive and negative effects of automation, respectively, are integrated. The integration derives the shortest working time by considering the concept of situation awareness recovery (SAR), under the premise that the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency operation scenario-based case study. In this case study, four types of procedures are assumed through redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the
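
    The trade-off described above can be illustrated with a toy working-time model: the manual share of the task shrinks as the automation rate rises, while an SAR penalty grows with it, so total working time has an interior minimum. Everything in the sketch below (the linear and quadratic forms and all constants) is invented for illustration and is not the paper's model.

```python
import numpy as np

# Hypothetical working-time model: automation shortens the manual task time,
# but Situation Awareness Recovery (SAR) adds a penalty that grows with the
# automation rate (the "ostracism" side effect). All constants are invented.
def working_time(automation_rate, manual_time=100.0, sar_cost=60.0):
    automated = manual_time * (1.0 - automation_rate)  # remaining manual work
    recovery = sar_cost * automation_rate ** 2         # OOTL/SAR penalty
    return automated + recovery

rates = np.linspace(0.0, 1.0, 101)
times = [working_time(r) for r in rates]
best = rates[int(np.argmin(times))]
print(f"optimal automation rate ~ {best:.2f}, working time {min(times):.1f}")
```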

  3. Optimizing transformations for automated, high throughput analysis of flow cytometry data.

    Science.gov (United States)

    Finak, Greg; Perez, Juan-Manuel; Weng, Andrew; Gottardo, Raphael

    2010-11-04

    In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter-optimized transformations improve visualization, reduce
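
    As an illustration of the maximum-likelihood idea, the sketch below fits the cofactor of a simple arcsinh transformation by maximizing a normal log-likelihood of the transformed values, including the change-of-variables Jacobian term. It is a minimal stand-in, not flowCore's estimator, and the lognormal test data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(b, x):
    """Profile negative log-likelihood for y = arcsinh(x / b) under a
    normal model, including the Jacobian term sum(log |dy/dx|)."""
    y = np.arcsinh(x / b)
    mu, s = y.mean(), y.std()
    ll_normal = -0.5 * np.sum(((y - mu) / s) ** 2) - len(y) * np.log(s)
    # d/dx arcsinh(x/b) = 1 / sqrt(x^2 + b^2)
    ll_jacobian = -0.5 * np.sum(np.log(x ** 2 + b ** 2))
    return -(ll_normal + ll_jacobian)

# synthetic, heavily right-skewed "fluorescence" values
x = np.random.lognormal(mean=4.0, sigma=1.0, size=5000)
res = minimize_scalar(neg_log_likelihood, bounds=(1.0, 1e4),
                      args=(x,), method='bounded')
print("optimized arcsinh cofactor b =", res.x)
```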

  4. Optimizing transformations for automated, high throughput analysis of flow cytometry data

    Directory of Open Access Journals (Sweden)

    Weng Andrew

    2010-11-01

    Full Text Available Abstract Background In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. Results We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter-optimized

  5. A Novel adaptative Discrete Cuckoo Search Algorithm for parameter optimization in computer vision

    Directory of Open Access Journals (Sweden)

    Loubna Benchikhi

    2017-10-01

    Full Text Available Computer vision applications require choosing operators and their parameters in order to provide the best outcomes. Often, users draw on expert knowledge and must try many combinations to find the best one manually. As performance, time and accuracy are important, it is necessary to automate parameter optimization at least for crucial operators. In this paper, a novel approach based on an adaptive discrete cuckoo search algorithm (ADCS) is proposed. It automates the process of algorithm setting and provides optimal parameters for vision applications. This work reconsiders a discretization problem to adapt the cuckoo search algorithm and presents the procedure of parameter optimization. Some experiments on real examples and comparisons to other metaheuristic-based approaches, particle swarm optimization (PSO), reinforcement learning (RL) and ant colony optimization (ACO), show the efficiency of this novel method.
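
    A bare-bones discrete cuckoo search is easy to state: keep a population of candidate integer parameter vectors ("nests"), generate new candidates via heavy-tailed (Levy-like) moves rounded back onto the grid, and abandon a fraction of the worst nests each generation. The sketch below uses a Cauchy step as a common simple stand-in for Levy flights and an invented two-parameter vision cost; it is not the authors' ADCS.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_step(scale=1.0):
    # heavy-tailed step drawn from a Cauchy distribution, a common simple
    # stand-in for Levy flights in cuckoo search
    return scale * rng.standard_cauchy()

def discrete_cuckoo_search(cost, lower, upper, n_nests=15, n_iter=200, pa=0.25):
    dim = len(lower)
    nests = rng.integers(lower, upper, size=(n_nests, dim), endpoint=True)
    fitness = np.array([cost(n) for n in nests])
    for _ in range(n_iter):
        # generate a cuckoo by a rounded Levy move from a random nest
        i = rng.integers(n_nests)
        trial = np.clip(np.rint(nests[i] + [levy_step(2.0) for _ in range(dim)]),
                        lower, upper).astype(int)
        j = rng.integers(n_nests)
        if cost(trial) < fitness[j]:
            nests[j], fitness[j] = trial, cost(trial)
        # abandon a fraction pa of the worst nests and resample them
        n_drop = int(pa * n_nests)
        worst = np.argsort(fitness)[-n_drop:]
        nests[worst] = rng.integers(lower, upper, size=(n_drop, dim), endpoint=True)
        fitness[worst] = [cost(n) for n in nests[worst]]
    best = np.argmin(fitness)
    return nests[best], fitness[best]

# toy "vision operator" cost over two integer parameters (kernel size, threshold)
cost = lambda p: (p[0] - 5) ** 2 + (p[1] - 37) ** 2
print(discrete_cuckoo_search(cost, np.array([1, 0]), np.array([15, 255])))
```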

  6. Automated Planning of Tangential Breast Intensity-Modulated Radiotherapy Using Heuristic Optimization

    International Nuclear Information System (INIS)

    Purdie, Thomas G.; Dinniwell, Robert E.; Letourneau, Daniel; Hill, Christine; Sharpe, Michael B.

    2011-01-01

    Purpose: To present an automated technique for two-field tangential breast intensity-modulated radiotherapy (IMRT) treatment planning. Method and Materials: A total of 158 patients with Stage 0, I, and II breast cancer treated using whole-breast IMRT were retrospectively replanned using automated treatment planning tools. The tools developed are integrated into the existing clinical treatment planning system (Pinnacle³) and are designed to perform the manual volume delineation, beam placement, and IMRT treatment planning steps carried out by the treatment planning radiation therapist. The automated algorithm, using only the radio-opaque markers placed at CT simulation as inputs, optimizes the tangential beam parameters to geometrically minimize the amount of lung and heart treated while covering the whole-breast volume. The IMRT parameters are optimized according to the automatically delineated whole-breast volume. Results: The mean time to generate a complete treatment plan was 6 min, 50 s ± 1 min 12 s. For the automated plans, 157 of 158 plans (99%) were deemed clinically acceptable, and 138 of 158 plans (87%) were deemed clinically improved or equal to the corresponding clinical plan when reviewed in a randomized, double-blinded study by one experienced breast radiation oncologist. In addition, overall the automated plans were dosimetrically equivalent to the clinical plans when scored for target coverage and lung and heart doses. Conclusion: We have developed robust and efficient automated tools for fully inverse-planned tangential breast IMRT planning that can be readily integrated into clinical practice. The tools produce clinically acceptable plans using only the common anatomic landmarks from the CT simulation process as an input. We anticipate the tools will improve patient access to high-quality IMRT treatment by simplifying the planning process and will reduce the effort and cost of incorporating more advanced planning into clinical practice.

  7. Automated parameter tuning applied to sea ice in a global climate model

    Science.gov (United States)

    Roach, Lettie A.; Tett, Simon F. B.; Mineter, Michael J.; Yamazaki, Kuniko; Rae, Cameron D.

    2018-01-01

    This study investigates the hypothesis that a significant portion of spread in climate model projections of sea ice is due to poorly-constrained model parameters. New automated methods for optimization are applied to historical sea ice in a global coupled climate model (HadCM3) in order to calculate the combination of parameters required to reduce the difference between simulation and observations to within the range of model noise. The optimized parameters result in a simulated sea-ice time series which is more consistent with Arctic observations throughout the satellite record (1980-present), particularly in the September minimum, than the standard configuration of HadCM3. Divergence from observed Antarctic trends and mean regional sea ice distribution reflects broader structural uncertainty in the climate model. We also find that the optimized parameters do not cause adverse effects on the model climatology. This simple approach provides evidence for the contribution of parameter uncertainty to spread in sea ice extent trends and could be customized to investigate uncertainties in other climate variables.

  8. Development of a neuro-fuzzy technique for automated parameter optimization of inverse treatment planning

    International Nuclear Information System (INIS)

    Stieler, Florian; Yan, Hui; Lohr, Frank; Wenz, Frederik; Yin, Fang-Fang

    2009-01-01

    Parameter optimization in the process of inverse treatment planning for intensity modulated radiation therapy (IMRT) is mainly conducted by human planners in order to create a plan with the desired dose distribution. To automate this tedious process, an artificial intelligence (AI) guided system was developed and examined. The AI system can automatically accomplish the optimization process based on prior knowledge operated by several fuzzy inference systems (FIS). Prior knowledge, which was collected from human planners during their routine trial-and-error process of inverse planning, has first to be 'translated' to a set of 'if-then rules' for driving the FISs. To minimize subjective error which could be costly during this knowledge acquisition process, it is necessary to find a quantitative method to automatically accomplish this task. A well-developed machine learning technique, based on an adaptive neuro fuzzy inference system (ANFIS), was introduced in this study. Based on this approach, prior knowledge of a fuzzy inference system can be quickly collected from observation data (clinically used constraints). The learning capability and the accuracy of such a system were analyzed by generating multiple FIS from data collected from an AI system with known settings and rules. Multiple analyses showed good agreements of FIS and ANFIS according to rules (error of the output values of ANFIS based on the training data from FIS of 7.77 ± 0.02%) and membership functions (3.9%), thus suggesting that the 'behavior' of an FIS can be propagated to another, based on this process. The initial experimental results on a clinical case showed that ANFIS is an effective way to build FIS from practical data, and analysis of ANFIS and FIS with clinical cases showed good planning results provided by ANFIS. OAR volumes encompassed by characteristic percentages of isodoses were reduced by a mean of between 0 and 28%. The study demonstrated a feasible way

  9. Development of a neuro-fuzzy technique for automated parameter optimization of inverse treatment planning

    Directory of Open Access Journals (Sweden)

    Wenz Frederik

    2009-09-01

    Full Text Available Abstract Background Parameter optimization in the process of inverse treatment planning for intensity modulated radiation therapy (IMRT) is mainly conducted by human planners in order to create a plan with the desired dose distribution. To automate this tedious process, an artificial intelligence (AI) guided system was developed and examined. Methods The AI system can automatically accomplish the optimization process based on prior knowledge operated by several fuzzy inference systems (FIS). Prior knowledge, which was collected from human planners during their routine trial-and-error process of inverse planning, has first to be "translated" to a set of "if-then rules" for driving the FISs. To minimize subjective error which could be costly during this knowledge acquisition process, it is necessary to find a quantitative method to automatically accomplish this task. A well-developed machine learning technique, based on an adaptive neuro fuzzy inference system (ANFIS), was introduced in this study. Based on this approach, prior knowledge of a fuzzy inference system can be quickly collected from observation data (clinically used constraints). The learning capability and the accuracy of such a system were analyzed by generating multiple FIS from data collected from an AI system with known settings and rules. Results Multiple analyses showed good agreements of FIS and ANFIS according to rules (error of the output values of ANFIS based on the training data from FIS of 7.77 ± 0.02%) and membership functions (3.9%), thus suggesting that the "behavior" of an FIS can be propagated to another, based on this process. The initial experimental results on a clinical case showed that ANFIS is an effective way to build FIS from practical data, and analysis of ANFIS and FIS with clinical cases showed good planning results provided by ANFIS. OAR volumes encompassed by characteristic percentages of isodoses were reduced by a mean of between 0 and 28%. Conclusion The

  10. Optimization of Robotic Spray Painting process Parameters using Taguchi Method

    Science.gov (United States)

    Chidhambara, K. V.; Latha Shankar, B.; Vijaykumar

    2018-02-01

    The automated spray painting process is gaining interest in industry and research due to the extensive application of spray painting in automobile industries. Automating the spray painting process has the advantages of improved quality, productivity, reduced labor, a clean environment and, in particular, cost effectiveness. This study investigates the performance characteristics of an industrial robot Fanuc 250ib for an automated painting process using the statistical tool of Taguchi's Design of Experiment technique. The experiment is designed using Taguchi's L25 orthogonal array by considering three factors and five levels for each factor. The objective of this work is to explore the major control parameters and to optimize them for improved quality of the paint coating, measured in terms of Dry Film Thickness (DFT), which also results in reduced rejection. Analysis of Variance (ANOVA) is then performed to determine the influence of individual factors on DFT. It is observed that shaping air and paint flow are the most influential parameters. A multiple regression model is formulated for estimating predicted values of DFT. A confirmation test is then conducted, and the comparison results show that the error is within an acceptable level.
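
    The Taguchi analysis reduces to computing a signal-to-noise (S/N) ratio per run and averaging it per factor level; for a larger-is-better response such as coating thickness, SN = -10·log10(mean(1/y²)). The sketch below runs that arithmetic on an invented two-factor fragment (the study above used an L25 array with three factors); all level codes and DFT values are illustrative.

```python
import numpy as np

# Hypothetical fragment of a Taguchi analysis: rows are runs of an orthogonal
# array (coded factor levels), y is the measured Dry Film Thickness per run.
levels = np.array([
    [1, 1], [1, 2], [2, 1], [2, 2], [3, 1],
    [3, 2], [4, 1], [4, 2], [5, 1], [5, 2],
])
y = np.array([38.0, 42.5, 40.1, 45.2, 41.9, 47.3, 40.8, 46.1, 39.5, 44.0])

# larger-is-better S/N ratio, one response per run: SN = -10*log10(mean(1/y^2))
sn = -10.0 * np.log10(1.0 / y ** 2)

for factor in range(levels.shape[1]):
    print(f"factor {factor}:")
    for lv in np.unique(levels[:, factor]):
        mean_sn = sn[levels[:, factor] == lv].mean()
        print(f"  level {lv}: mean S/N = {mean_sn:.2f} dB")
# the optimal setting picks, for each factor, the level with the highest mean S/N
```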

  11. SAE2.py: a python script to automate parameter studies using SCREAMER with application to magnetic switching on Z

    International Nuclear Information System (INIS)

    Orndorff-Plunkett, Franklin

    2011-01-01

    The SCREAMER simulation code is widely used at Sandia National Laboratories for designing and simulating pulsed power accelerator experiments on super power accelerators. SCREAMER is a circuit-based code commonly used in pulsed-power design and requires numerous iterations to find optimal configurations. System optimization using simulations like SCREAMER is by nature inefficient and incomplete when done manually, especially when the system has many interacting elements whose emergent effects may be unforeseeable and complicated. For increased completeness, efficiency and robustness, investigators should probe a suitably confined parameter space using deterministic, genetic, cultural, ant-colony or other computational intelligence methods. I have developed SAE2 - a user-friendly, deterministic script that automates the search for optima of pulsed-power designs with SCREAMER. This manual demonstrates how to make input decks for SAE2 and optimize any pulsed-power design that can be modeled using SCREAMER. Application of SAE2 to magnetic switching on a model of a potential Z refurbishment illustrates the power of SAE2. Compared with the manual optimization, the automated optimization resulted in 5% greater peak current (10% greater energy) and a 25% increase in safety factor for the most highly stressed element.
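
    The pattern SAE2 automates (render an input deck per parameter combination, run the code, score the result) looks roughly like the sketch below. The deck template, parameter names and toy figure of merit are all invented; run_screamer is a stub where a real subprocess call to SCREAMER and an output parser would go.

```python
import itertools

# Hypothetical deck template with two swept parameters
TEMPLATE = "switch_radius {radius}\nswitch_turns {turns}\n"

def run_screamer(deck_text):
    # stand-in for writing the deck to disk, invoking SCREAMER via
    # subprocess, and parsing peak current from its output files
    radius = float(deck_text.split()[1])
    turns = float(deck_text.split()[3])
    return -(radius - 3.0) ** 2 - (turns - 12.0) ** 2  # toy peak current

best = None
for radius, turns in itertools.product([2.0, 2.5, 3.0, 3.5], range(8, 17, 2)):
    deck = TEMPLATE.format(radius=radius, turns=turns)
    score = run_screamer(deck)
    if best is None or score > best[0]:
        best = (score, radius, turns)

print("best configuration (score, radius, turns):", best)
```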

  12. Optimization-based Method for Automated Road Network Extraction

    International Nuclear Information System (INIS)

    Xiong, D

    2001-01-01

    Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper reviews the literature on road extraction and describes a study of an optimization-based method for automated road network extraction.

  13. Towards automatic parameter tuning of stream processing systems

    KAUST Repository

    Bilal, Muhammad; Canini, Marco

    2017-01-01

    In this paper, we present a framework for automating parameter tuning for stream-processing systems. Our framework supports standard black-box optimization algorithms as well as a novel gray-box optimization algorithm. We demonstrate the multiple benefits of automated parameter tuning in optimizing three benchmark applications in Apache Storm.

  14. Self-optimizing approach for automated laser resonator alignment

    Science.gov (United States)

    Brecher, C.; Schmitt, R.; Loosen, P.; Guerrero, V.; Pyschny, N.; Pavim, A.; Gatej, A.

    2012-02-01

    Nowadays, the assembly of laser systems is dominated by manual operations, involving elaborate alignment by means of adjustable mountings. From a competition perspective, the most challenging problem in laser source manufacturing is price pressure, a result of cost competition originating mainly in Asia. From an economic point of view, automated assembly of laser systems is a better approach to producing more reliable units at lower cost. However, the step from today's manual solutions towards automated assembly requires parallel developments regarding product design, automation equipment and assembly processes. This paper briefly introduces the idea of self-optimizing technical systems as a new approach towards highly flexible automation. Technically, the work focuses on the precision assembly of laser resonators, which is one of the final and most crucial assembly steps in terms of beam quality and laser power. The paper presents a new design approach for miniaturized laser systems and new automation concepts for robot-based precision assembly, as well as passive and active alignment methods based on a self-optimizing approach. Very promising results have already been achieved, considerably reducing the duration and complexity of laser resonator assembly. These results as well as future development perspectives are discussed.

  15. An optimized method for automated analysis of algal pigments by HPLC

    NARCIS (Netherlands)

    van Leeuwe, M. A.; Villerius, L. A.; Roggeveld, J.; Visser, R. J. W.; Stefels, J.

    2006-01-01

    A recent development in algal pigment analysis by high-performance liquid chromatography (HPLC) is the application of automation. An optimization of a complete sampling and analysis protocol applied specifically in automation has not yet been performed. In this paper we show that automation can only

  16. Optimization of an NLEO-based algorithm for automated detection of spontaneous activity transients in early preterm EEG

    International Nuclear Information System (INIS)

    Palmu, Kirsi; Vanhatalo, Sampsa; Stevenson, Nathan; Wikström, Sverre; Hellström-Westas, Lena; Palva, J Matias

    2010-01-01

    We propose here a simple algorithm for automated detection of spontaneous activity transients (SATs) in early preterm electroencephalography (EEG). The parameters of the algorithm were optimized by supervised learning using a gold standard created from visual classification data obtained from three human raters. The generalization performance of the algorithm was estimated by leave-one-out cross-validation. The mean sensitivity of the optimized algorithm was 97% (range 91–100%) and specificity 95% (76–100%). The optimized algorithm makes it possible to systematically study brain state fluctuations of preterm infants. (note)
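
    A minimal version of such a detector is sketched below: a Teager-style nonlinear energy operator, a smoothing window, and an amplitude threshold, with the two free parameters tuned against labeled data by maximizing Youden's index (sensitivity + specificity - 1). This is a generic stand-in on synthetic data, not the authors' exact NLEO variant or gold standard.

```python
import numpy as np

def nleo(x):
    """Teager-style nonlinear energy operator (a simple NLEO variant)."""
    e = np.zeros_like(x)
    e[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return np.abs(e)

def detect_sats(x, window, threshold):
    # smooth the energy trace with a moving average, then threshold it
    smoothed = np.convolve(nleo(x), np.ones(window) / window, mode='same')
    return smoothed > threshold

def youden(pred, truth):
    tp = np.sum(pred & truth); fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth); fp = np.sum(pred & ~truth)
    return tp / (tp + fn) + tn / (tn + fp) - 1.0

# synthetic EEG-like trace: high-amplitude bursts (SATs) on a quiet background
rng = np.random.default_rng(1)
truth = np.zeros(10000, dtype=bool)
truth[2000:2600] = truth[6000:7000] = True
x = rng.normal(0, 1.0 + 4.0 * truth)

# supervised grid search over the two detector parameters
best = max(((youden(detect_sats(x, w, th), truth), w, th)
            for w in (50, 100, 200, 400) for th in (2.0, 5.0, 10.0, 20.0)))
print("best (Youden index, window, threshold):", best)
```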

  17. [Research and Design of a System for Detecting Automated External Defibrillator Performance Parameters].

    Science.gov (United States)

    Wang, Kewu; Xiao, Shengxiang; Jiang, Lina; Hu, Jingkai

    2017-09-30

    In order to verify the performance parameters of an automated external defibrillator (AED) regularly, and to make sure the instrument is safe before use, a system for detecting AED performance parameters was researched and designed. Based on a study of the characteristics of these performance parameters, and combining the STM32's stability and high speed with PWM modulation control, the system produces a variety of normal and abnormal ECG signals through digital sampling methods. The hardware and software designs were completed and a prototype was built. This system can accurately detect an AED's discharge energy, synchronous defibrillation time, charging time and other key performance parameters.

  18. Basic MR sequence parameters systematically bias automated brain volume estimation

    International Nuclear Information System (INIS)

    Haller, Sven; Falkovskiy, Pavel; Roche, Alexis; Marechal, Benedicte; Meuli, Reto; Thiran, Jean-Philippe; Krueger, Gunnar; Lovblad, Karl-Olof; Kober, Tobias

    2016-01-01

    Automated brain MRI morphometry, including hippocampal volumetry for Alzheimer disease, is increasingly recognized as a biomarker. Consequently, a rapidly increasing number of software tools have become available. We tested whether modifications of simple MR protocol parameters typically used in clinical routine systematically bias automated brain MRI segmentation results. The study was approved by the local ethical committee and included 20 consecutive patients (13 females, mean age 75.8 ± 13.8 years) undergoing clinical brain MRI at 1.5 T for workup of cognitive decline. We compared three 3D T1 magnetization prepared rapid gradient echo (MPRAGE) sequences with the following parameter settings: ADNI-2 (1.2 mm iso-voxel, no image filtering), LOCAL- (1.0 mm iso-voxel, no image filtering), and LOCAL+ (1.0 mm iso-voxel, with image edge enhancement). Brain segmentation was performed by two different and established analysis tools, FreeSurfer and MorphoBox, using standard parameters. Spatial resolution (1.0 versus 1.2 mm iso-voxel) and modification in contrast resulted in relative estimated volume differences of up to 4.28 % (p < 0.001) in cortical gray matter and 4.16 % (p < 0.01) in hippocampus. Image data filtering resulted in estimated volume differences of up to 5.48 % (p < 0.05) in cortical gray matter. A simple change of MR parameters, notably spatial resolution, contrast, and filtering, may systematically bias results of automated brain MRI morphometry by up to 4-5 %. This is in the same range as early disease-related brain volume alterations, for example, in Alzheimer disease. Automated brain segmentation software packages should therefore require strict MR parameter selection or include compensatory algorithms to avoid MR parameter-related bias of brain morphometry results. (orig.)

  19. Basic MR sequence parameters systematically bias automated brain volume estimation

    Energy Technology Data Exchange (ETDEWEB)

    Haller, Sven [University of Geneva, Faculty of Medicine, Geneva (Switzerland); Affidea Centre de Diagnostique Radiologique de Carouge CDRC, Geneva (Switzerland); Falkovskiy, Pavel; Roche, Alexis; Marechal, Benedicte [Siemens Healthcare HC CEMEA SUI DI BM PI, Advanced Clinical Imaging Technology, Lausanne (Switzerland); University Hospital (CHUV), Department of Radiology, Lausanne (Switzerland); Meuli, Reto [University Hospital (CHUV), Department of Radiology, Lausanne (Switzerland); Thiran, Jean-Philippe [LTS5, Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland); Krueger, Gunnar [Siemens Medical Solutions USA, Inc., Boston, MA (United States); Lovblad, Karl-Olof [University of Geneva, Faculty of Medicine, Geneva (Switzerland); University Hospitals of Geneva, Geneva (Switzerland); Kober, Tobias [Siemens Healthcare HC CEMEA SUI DI BM PI, Advanced Clinical Imaging Technology, Lausanne (Switzerland); LTS5, Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland)

    2016-11-15

    Automated brain MRI morphometry, including hippocampal volumetry for Alzheimer disease, is increasingly recognized as a biomarker. Consequently, a rapidly increasing number of software tools have become available. We tested whether modifications of simple MR protocol parameters typically used in clinical routine systematically bias automated brain MRI segmentation results. The study was approved by the local ethical committee and included 20 consecutive patients (13 females, mean age 75.8 ± 13.8 years) undergoing clinical brain MRI at 1.5 T for workup of cognitive decline. We compared three 3D T1 magnetization prepared rapid gradient echo (MPRAGE) sequences with the following parameter settings: ADNI-2 (1.2 mm iso-voxel, no image filtering), LOCAL- (1.0 mm iso-voxel, no image filtering), and LOCAL+ (1.0 mm iso-voxel, with image edge enhancement). Brain segmentation was performed by two different and established analysis tools, FreeSurfer and MorphoBox, using standard parameters. Spatial resolution (1.0 versus 1.2 mm iso-voxel) and modification in contrast resulted in relative estimated volume differences of up to 4.28 % (p < 0.001) in cortical gray matter and 4.16 % (p < 0.01) in hippocampus. Image data filtering resulted in estimated volume differences of up to 5.48 % (p < 0.05) in cortical gray matter. A simple change of MR parameters, notably spatial resolution, contrast, and filtering, may systematically bias results of automated brain MRI morphometry by up to 4-5 %. This is in the same range as early disease-related brain volume alterations, for example, in Alzheimer disease. Automated brain segmentation software packages should therefore require strict MR parameter selection or include compensatory algorithms to avoid MR parameter-related bias of brain morphometry results. (orig.)

  20. Parameter identification using optimization techniques in the continuous simulation programs FORSIM and MACKSIM

    International Nuclear Information System (INIS)

    Carver, M.B.; Austin, C.F.; Ross, N.E.

    1980-02-01

    This report discusses the mechanics of automated parameter identification in simulation packages, and reviews available integration and optimization algorithms and their interaction within the recently developed optimization options in the FORSIM and MACKSIM simulation packages. In the MACKSIM mass-action chemical kinetics simulation package, the form and structure of the ordinary differential equations involved are known, so the implementation of an optimizing option is relatively straightforward. FORSIM, however, is designed to integrate ordinary and partial differential equations of arbitrary definition. As the form of the equations is not known in advance, the design of the optimizing option is more intricate, but the philosophy could be applied to most simulation packages. In either case, however, the invocation of the optimizing interface is simple and user-oriented. Full details for the use of the optimizing mode for each program are given; specific applications are used as examples. (O.T.)
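
    The FORSIM/MACKSIM pairing of an integrator with an optimizer maps directly onto modern tooling: integrate the model, form residuals against measured data, and hand them to a least-squares routine. Below is a small sketch for a hypothetical first-order mass-action system A -> B with one unknown rate constant; the data are synthetic.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Hypothetical mass-action example in the spirit of MACKSIM: A -> B with
# unknown rate constant k, identified by fitting the integrated model to data.
def model(t, y, k):
    a, b = y
    return [-k * a, k * a]

def residuals(params, t_data, a_data):
    k, = params
    sol = solve_ivp(model, (t_data[0], t_data[-1]), [1.0, 0.0],
                    t_eval=t_data, args=(k,))
    return sol.y[0] - a_data   # mismatch in the concentration of species A

# synthetic measurements of [A](t) with a true rate constant of 0.7
t_data = np.linspace(0.0, 5.0, 20)
a_data = np.exp(-0.7 * t_data) + np.random.normal(0, 0.01, t_data.size)

fit = least_squares(residuals, x0=[0.1], args=(t_data, a_data), bounds=(0, 10))
print("identified rate constant k =", fit.x[0])
```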

  1. Selection of an optimal neural network architecture for computer-aided detection of microcalcifications - Comparison of automated optimization techniques

    International Nuclear Information System (INIS)

    Gurcan, Metin N.; Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir; Petrick, Nicholas

    2001-01-01

    Many computer-aided diagnosis (CAD) systems use neural networks (NNs) for either detection or classification of abnormalities. Currently, most NNs are 'optimized' by manual search in a very limited parameter space. In this work, we evaluated the use of automated optimization methods for selecting an optimal convolution neural network (CNN) architecture. Three automated methods, the steepest descent (SD), the simulated annealing (SA), and the genetic algorithm (GA), were compared. We used as an example the CNN that classifies true and false microcalcifications detected on digitized mammograms by a prescreening algorithm. Four parameters of the CNN architecture were considered for optimization, the numbers of node groups and the filter kernel sizes in the first and second hidden layers, resulting in a search space of 432 possible architectures. The area Az under the receiver operating characteristic (ROC) curve was used to design a cost function. The SA experiments were conducted with four different annealing schedules. Three different parent selection methods were compared for the GA experiments. An available data set was split into two groups with approximately equal number of samples. By using the two groups alternately for training and testing, two different cost surfaces were evaluated. For the first cost surface, the SD method was trapped in a local minimum 91% (392/432) of the time. The SA using the Boltzman schedule selected the best architecture after evaluating, on average, 167 architectures. The GA achieved its best performance with linearly scaled roulette-wheel parent selection; however, it evaluated 391 different architectures, on average, to find the best one. The second cost surface contained no local minimum. For this surface, a simple SD algorithm could quickly find the global minimum, but the SA with the very fast reannealing schedule was still the most efficient. The same SA scheme, however, was trapped in a local minimum on the first cost
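
    Of the three methods compared, simulated annealing is the easiest to sketch: propose a random neighbor in the discrete architecture grid, always accept improvements, and accept worse moves with probability exp(-delta/T) under a cooling schedule. The four-parameter space and stand-in cost below are invented (a real run would train the CNN and score 1 - Az), and the geometric cooling shown is a simplification of the schedules tested in the paper.

```python
import math, random

random.seed(0)

# Hypothetical discrete search space mirroring the paper's four architecture
# parameters (node groups and kernel sizes in two hidden layers).
SPACE = {"n1": [2, 4, 6, 8], "n2": [2, 4, 6],
         "k1": [3, 5, 7, 9], "k2": [3, 5, 7]}

def cost(arch):
    # stand-in for 1 - Az obtained by training/testing this architecture
    return ((arch["n1"] - 6) ** 2 + (arch["n2"] - 4) ** 2
            + (arch["k1"] - 5) ** 2 + (arch["k2"] - 7) ** 2) / 100.0

def neighbor(arch):
    new = dict(arch)
    p = random.choice(list(SPACE))
    new[p] = random.choice(SPACE[p])   # re-draw one parameter at random
    return new

def simulated_annealing(t0=1.0, cooling=0.95, steps=300):
    current = {p: random.choice(v) for p, v in SPACE.items()}
    best, t = current, t0
    for _ in range(steps):
        cand = neighbor(current)
        delta = cost(cand) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t *= cooling   # simple geometric cooling schedule
    return best, cost(best)

print(simulated_annealing())
```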

  2. Automated Portfolio Optimization Based on a New Test for Structural Breaks

    Directory of Open Access Journals (Sweden)

    Tobias Berens

    2014-04-01

    Full Text Available We present a completely automated optimization strategy which combines the classical Markowitz mean-variance portfolio theory with a recently proposed test for structural breaks in covariance matrices. With respect to equity portfolios, global minimum-variance optimizations, which are based solely on the covariance matrix, have yielded considerable results in previous studies. However, financial assets cannot be assumed to have a constant covariance matrix over longer periods of time. Hence, we estimate the covariance matrix of the assets by respecting potential change points. The resulting approach resolves the issue of determining a sample for parameter estimation. Moreover, we investigate whether this approach is also appropriate for timing the reoptimizations. Finally, we apply the approach to two datasets and compare the results to relevant benchmark techniques by means of an out-of-sample study. It is shown that the new approach outperforms equally weighted portfolios and plain minimum-variance portfolios on average.
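
    The mechanics are compact: detect the most recent structural break, estimate the covariance matrix only from observations after it, and form the global minimum-variance weights w = C⁻¹1 / (1ᵀC⁻¹1). The sketch below uses a naive Frobenius-distance split finder as a stand-in for the paper's formal break test, on synthetic returns.

```python
import numpy as np

def gmv_weights(returns):
    """Global minimum-variance weights w = C^{-1} 1 / (1' C^{-1} 1)."""
    cov = np.cov(returns, rowvar=False)
    inv_one = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return inv_one / inv_one.sum()

def last_break(returns, candidates):
    # stand-in for the structural-break test: naively pick the candidate
    # split maximizing the Frobenius distance between the covariance
    # matrices estimated before and after the split
    def dist(i):
        return np.linalg.norm(np.cov(returns[:i], rowvar=False)
                              - np.cov(returns[i:], rowvar=False))
    return max(candidates, key=dist)

rng = np.random.default_rng(7)
# synthetic returns whose covariance changes halfway through the sample
r1 = rng.multivariate_normal([0, 0, 0], np.eye(3) * 0.01, 250)
r2 = rng.multivariate_normal([0, 0, 0],
                             np.full((3, 3), 0.008) + np.eye(3) * 0.004, 250)
returns = np.vstack([r1, r2])

bp = last_break(returns, candidates=range(100, 400, 50))
print("estimated break at", bp, "weights:", gmv_weights(returns[bp:]))
```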

  3. Optimization of process parameters in precipitation for consistent quality UO2 powder production

    International Nuclear Information System (INIS)

    Tiwari, S.K.; Reddy, A.L.V.; Venkataswamy, J.; Misra, M.; Setty, D.S.; Sheela, S.; Saibaba, N.

    2013-01-01

    Nuclear reactor grade natural uranium dioxide powder is being produced through the precipitation route, and is further processed before being converted into the sintered pellets used in the fabrication of PHWR fuel assemblies for 220 and 540 MWe type reactors. The precipitation of Uranyl Nitrate Pure Solution (UNPS) is an important step in the UO₂ powder production line, wherein soluble uranium is transformed into the solid form of Ammonium Uranate (AU), which in turn reflects and decides the powder characteristics. Precipitation of UNPS with vapour ammonia is carried out in a semi-batch process, and process parameters such as ammonia flow rate, temperature, concentration of UNPS and free acidity of UNPS are very critical and decide the UO₂ powder quality. Variation in these critical parameters influences the powder characteristics, which in turn influence the sinterability of UO₂ powder. In order to get consistent powder quality and sinterability, the critical parameter of ammonia flow rate during precipitation is studied, optimized and validated. The critical process parameters are controlled through PLC-based automated on-line data acquisition systems for achieving consistent powder quality with increased recovery and production. The present paper covers the optimization of process parameters and powder characteristics. (author)

  4. Retinal blood vessel segmentation in high resolution fundus photographs using automated feature parameter estimation

    Science.gov (United States)

    Orlando, José Ignacio; Fracchia, Marcos; del Río, Valeria; del Fresno, Mariana

    2017-11-01

    Several ophthalmological and systemic diseases are manifested through pathological changes in the properties and the distribution of the retinal blood vessels. The characterization of such alterations requires segmentation of the vasculature, a tedious and time-consuming task that is infeasible to perform manually. Numerous attempts have been made to propose automated methods for segmenting the retinal vasculature from fundus photographs, although their application in real clinical scenarios is usually limited by their ability to deal with images taken at different resolutions. This is likely due to the large number of parameters that have to be properly calibrated according to each image scale. In this paper we propose to apply a novel strategy for automated feature parameter estimation, combined with a vessel segmentation method based on fully connected conditional random fields. The estimation model is learned by linear regression from structural properties of the images and known optimal configurations that were previously obtained for low resolution data sets. Our experiments in high resolution images show that this approach is able to estimate appropriate configurations that are suitable for performing the segmentation task without requiring parameters to be re-engineered. Furthermore, our combined approach reported state-of-the-art performance on the benchmark data set HRF, as measured in terms of the F1-score and the Matthews correlation coefficient.
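
    The estimation model itself is ordinary linear regression: structural properties of an image are the predictors and previously known optimal parameter values the targets, so a configuration can be predicted for unseen resolutions. The sketch below shows the idea with invented property/parameter pairs; the high-resolution test row is likewise illustrative.

```python
import numpy as np

# Hypothetical training pairs: simple structural properties of an image
# (width in pixels, estimated vessel width in pixels) against a known-good
# value of one segmentation parameter for that dataset. All values invented.
props = np.array([
    [565, 5.0], [584, 5.2], [700, 6.5], [999, 9.1],
])
optimal_param = np.array([2.1, 2.2, 2.7, 3.9])

X = np.hstack([props, np.ones((len(props), 1))])   # add intercept column
coef, *_ = np.linalg.lstsq(X, optimal_param, rcond=None)

# predict the parameter for a new high-resolution image (HRF-like size)
new_image = np.array([3504, 31.0, 1.0])
print("predicted parameter:", new_image @ coef)
```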

  5. An optimized routing algorithm for the automated assembly of standard multimode ribbon fibers in a full-mesh optical backplane

    Science.gov (United States)

    Basile, Vito; Guadagno, Gianluca; Ferrario, Maddalena; Fassi, Irene

    2018-03-01

    In this paper a parametric, modular and scalable algorithm allowing a fully automated assembly of a backplane fiber-optic interconnection circuit is presented. This approach guarantees the optimization of the optical fiber routing inside the backplane with respect to specific criteria (i.e. bending power losses), addressing both transmission performance and overall costs issues. Graph theory has been exploited to simplify the complexity of the NxN full-mesh backplane interconnection topology, firstly, into N independent sub-circuits and then, recursively, into a limited number of loops easier to be generated. Afterwards, the proposed algorithm selects a set of geometrical and architectural parameters whose optimization allows to identify the optimal fiber optic routing for each sub-circuit of the backplane. The topological and numerical information provided by the algorithm are then exploited to control a robot which performs the automated assembly of the backplane sub-circuits. The proposed routing algorithm can be extended to any array architecture and number of connections thanks to its modularity and scalability. Finally, the algorithm has been exploited for the automated assembly of an 8x8 optical backplane realized with standard multimode (MM) 12-fiber ribbons.

  6. VISUALIZATION SOFTWARE DEVELOPMENT FOR PROCEDURE OF MULTI-DIMENSIONAL OPTIMIZATION OF TECHNOLOGICAL PROCESS FUNCTIONAL PARAMETERS

    Directory of Open Access Journals (Sweden)

    E. N. Ishakova

    2016-05-01

    Full Text Available A method for multi-criteria optimization of the design parameters of a technological object is described. Existing optimization methods are reviewed, and work on both basic research and applied problems is analyzed. The problem is formulated from the process requirements so that the geometrical dimensions of the machine tips and the flow rate of the process can be chosen to make the resulting technical and economic parameters optimal. The formulation applies a performance method adapted to the particular domain. The implementation is shown: characteristics of the studied object are constructed, subject to restrictions on the parameters, in both analytical and graphical form. On the basis of this theoretical research, a software system was developed that automates the discovery of optimal solutions for specific problems. Using the available information sources that characterize the object of study, identifiers can be established and restrictions added, either one-sided or as intervals. The result is a visual depiction of the dependence of the main study parameters on the others, which may affect both the flow of the process and the quality of the products. The resulting optimal region shows how different design options for the technological object behave over an acceptable kinematic range, making it possible for the researcher to choose the best design solution.

  7. Parameter Extraction for PSpice Models by means of an Automated Optimization Tool – An IGBT model Study Case

    DEFF Research Database (Denmark)

    Suárez, Carlos Gómez; Reigosa, Paula Diaz; Iannuzzo, Francesco

    2016-01-01

    An original tool for parameter extraction for PSpice models has been released, enabling simple parameter identification. A physics-based IGBT model is used to demonstrate that the optimization tool is capable of generating a set of parameters which predicts the steady-state and switching behavior...

  8. Towards automatic parameter tuning of stream processing systems

    KAUST Repository

    Bilal, Muhammad

    2017-09-27

    Optimizing the performance of big-data streaming applications has become a daunting and time-consuming task: parameters may be tuned from a space of hundreds or even thousands of possible configurations. In this paper, we present a framework for automating parameter tuning for stream-processing systems. Our framework supports standard black-box optimization algorithms as well as a novel gray-box optimization algorithm. We demonstrate the multiple benefits of automated parameter tuning in optimizing three benchmark applications in Apache Storm. Our results show that a hill-climbing algorithm that uses a new heuristic sampling approach based on Latin Hypercube provides the best results. Our gray-box algorithm provides comparable results while being two to five times faster.
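
    The hill-climbing strategy the paper reports as strongest, seeded by Latin Hypercube sampling, can be sketched as follows. The two-parameter configuration space, bounds and smooth toy response surface are invented; a real benchmark function would deploy the Storm topology with that configuration and measure its throughput or latency.

```python
import numpy as np

rng = np.random.default_rng(3)

def latin_hypercube(n_samples, bounds):
    """Permutation-based Latin Hypercube sampling over box bounds."""
    dim = len(bounds)
    u = (rng.permuted(np.tile(np.arange(n_samples), (dim, 1)), axis=1).T
         + rng.random((n_samples, dim))) / n_samples
    lo, hi = np.array(bounds).T
    return lo + u * (hi - lo)

def benchmark(cfg):
    # stand-in for running the streaming application with this configuration;
    # here a smooth toy response surface (higher is better)
    x, y = cfg
    return -((x - 24) ** 2 / 50.0 + (y - 6) ** 2)

def hill_climb(start, bounds, step=1.0, iters=100):
    best, best_val = np.array(start, float), benchmark(start)
    for _ in range(iters):
        cand = np.clip(best + rng.normal(0, step, size=len(bounds)),
                       *np.array(bounds).T)
        val = benchmark(cand)
        if val > best_val:
            best, best_val = cand, val
    return best, best_val

bounds = [(1, 64), (1, 16)]          # e.g. executor count, batch size
samples = latin_hypercube(20, bounds)
seed = max(samples, key=benchmark)   # best LHS sample seeds the local search
print(hill_climb(seed, bounds))
```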

  9. Automated Multivariate Optimization Tool for Energy Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, P. G.; Griffith, B. T.; Long, N.; Torcellini, P. A.; Crawley, D.

    2006-07-01

    Building energy simulations are often used for trial-and-error evaluation of "what-if" options in building design--a limited search for an optimal solution, or "optimization". Computerized searching has the potential to automate the input and output, evaluate many options, and perform enough simulations to account for the complex interactions among combinations of options. This paper describes ongoing efforts to develop such a tool. The optimization tool employs multiple modules, including a graphical user interface, a database, a preprocessor, the EnergyPlus simulation engine, an optimization engine, and a simulation run manager. Each module is described and the overall application architecture is summarized.

  10. An Automated DAKOTA and VULCAN-CFD Framework with Application to Supersonic Facility Nozzle Flowpath Optimization

    Science.gov (United States)

    Axdahl, Erik L.

    2015-01-01

    Removing human interaction from design processes by using automation may lead to gains in both productivity and design precision. This memorandum describes efforts to incorporate high fidelity numerical analysis tools into an automated framework and to apply that framework to applications of practical interest. The purpose of this effort was to integrate VULCAN-CFD into an automated, DAKOTA-enabled framework, with a proof-of-concept application being the optimization of supersonic test facility nozzles. It was shown that the optimization framework could be deployed on a high performance computing cluster with the flow of information handled effectively to guide the optimization process. Furthermore, the application of the framework to supersonic test facility nozzle flowpath design and optimization was demonstrated using multiple optimization algorithms.

  11. Control parameter optimization for AP1000 reactor using Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Wang, Pengfei; Wan, Jiashuang; Luo, Run; Zhao, Fuyu; Wei, Xinyu

    2016-01-01

    Highlights: • The PSO algorithm is applied for control parameter optimization of AP1000 reactor. • Key parameters of the MSHIM control system are optimized. • Optimization results are evaluated though simulations and quantitative analysis. - Abstract: The advanced mechanical shim (MSHIM) core control strategy is implemented in the AP1000 reactor for core reactivity and axial power distribution control simultaneously. The MSHIM core control system can provide superior reactor control capabilities via automatic rod control only. This enables the AP1000 to perform power change operations automatically without the soluble boron concentration adjustments. In this paper, the Particle Swarm Optimization (PSO) algorithm has been applied for the parameter optimization of the MSHIM control system to acquire better reactor control performance for AP1000. System requirements such as power control performance, control bank movement and AO control constraints are reflected in the objective function. Dynamic simulations are performed based on an AP1000 reactor simulation platform in each iteration of the optimization process to calculate the fitness values of particles in the swarm. The simulation platform is developed in Matlab/Simulink environment with implementation of a nodal core model and the MSHIM control strategy. Based on the simulation platform, the typical 10% step load decrease transient from 100% to 90% full power is simulated and the objective function used for control parameter tuning is directly incorporated in the simulation results. With successful implementation of the PSO algorithm in the control parameter optimization of AP1000 reactor, four key parameters of the MSHIM control system are optimized. It has been demonstrated by the calculation results that the optimized MSHIM control system parameters can improve the reactor power control capability and reduce the control rod movement without compromising AO control. Therefore, the PSO based optimization
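
    A generic global-best PSO, the core of the tuning loop described above, fits in a few lines. The sketch below minimizes a stand-in objective over four hypothetical controller gains; the real fitness came from load-decrease transients simulated on the Matlab/Simulink platform, which is far too heavy to reproduce here.

```python
import numpy as np

rng = np.random.default_rng(42)

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO; not the paper's exact MSHIM setup."""
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull (personal best) + social pull (global best)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()

# stand-in objective: in the paper this would penalize power-tracking error,
# rod travel and AO deviation as functions of four controller parameters
def objective(gains):
    target = np.array([2.0, 0.5, 1.2, 0.8])   # invented "good" gains
    return float(np.sum((gains - target) ** 2))

print(pso(objective, bounds=[(0, 5)] * 4))
```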

  12. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis.

    Science.gov (United States)

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

    Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although on the one hand, digital pathology and new bioimaging technologies find their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as the parameter spaces can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often result in a local maximum.

  13. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis

    Directory of Open Access Journals (Sweden)

    Christian Held

    2013-01-01

    Full Text Available Introduction: Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although on the one hand, digital pathology and new bioimaging technologies find their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. Methods: In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. Results: This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. Conclusion: The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as the parameter spaces can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often result in a local maximum.

  14. Automation and Optimization of Multipulse Laser Zona Drilling of Mouse Embryos During Embryo Biopsy.

    Science.gov (United States)

    Wong, Christopher Yee; Mills, James K

    2017-03-01

    Laser zona drilling (LZD) is a required step in many embryonic surgical procedures, for example, assisted hatching and preimplantation genetic diagnosis. LZD involves the ablation of the zona pellucida (ZP) using a laser while minimizing potentially harmful thermal effects on critical internal cell structures. The objective was to develop a method for the automation and optimization of multipulse LZD, applied to cleavage-stage embryos. A two-stage optimization is used. The first stage uses computer vision algorithms to identify embryonic structures and determines the optimal ablation zone farthest away from critical structures such as blastomeres. The second stage combines a genetic algorithm with a previously reported thermal analysis of LZD to optimize the combination of laser pulse locations and pulse durations. The goal is to minimize the peak temperature experienced by the blastomeres while creating the desired opening in the ZP. A proof of concept of the proposed LZD automation and optimization method is demonstrated through experiments on mouse embryos with positive results, as adequately sized openings are created. Automation of LZD is feasible and is a viable step toward the automation of embryo biopsy procedures. LZD is a common but delicate procedure performed by human operators using subjective methods to gauge proper technique. Automation of LZD removes human error and increases the success rate of LZD. Although the proposed methods are developed for cleavage-stage embryos, the same methods may be applied to most types of LZD procedures, embryos at different developmental stages, or nonembryonic cells.

  15. Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly

    Science.gov (United States)

    Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.

    2013-01-01

    Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…

  16. On the use of PGD for optimal control applied to automated fibre placement

    Science.gov (United States)

    Bur, N.; Joyot, P.

    2017-10-01

    Automated Fibre Placement (AFP) is an incipient manufacturing process for composite structures. Despite its conceptual simplicity, it involves many complexities related to the necessity of melting the thermoplastic at the tape-substrate interface, ensuring consolidation (which requires the diffusion of molecules), and controlling the build-up of residual stresses responsible for the residual deformations of the formed parts. The optimisation of the process and the determination of the process window cannot be achieved in a traditional way, since that requires a plethora of trials/errors or numerical simulations, because there are many parameters involved in the characterisation of the material and the process. Using reduced order modelling, such as the so-called Proper Generalised Decomposition method, allows the construction of multi-parametric solutions taking many parameters into account. This leads to virtual charts that can be explored on-line in real time in order to perform process optimisation or on-line simulation-based control. Thus, for a given set of parameters, determining the power leading to an optimal temperature becomes easy. However, instead of controlling the power knowing the temperature field by particularizing an abacus, we propose here an approach based on optimal control: we solve by PGD a dual problem derived from the heat equation and an optimality criterion. To circumvent numerical issues due to the ill-conditioned system, we propose an algorithm based on Uzawa's method. That way, we are able to solve the dual problem, setting the desired state as an extra coordinate in the PGD framework. In a single computation, we get both the temperature field and the required heat flux to reach a parametric optimal temperature on a given zone.
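
    Uzawa's method itself is a simple primal-dual iteration on a saddle-point system: solve the primal problem for fixed multipliers, then take a gradient-ascent step on the constraint residual. The sketch below applies it to a generic equality-constrained quadratic program, which is only a finite-dimensional stand-in for the PGD-discretized heat-equation problem in the paper; the matrices are invented.

```python
import numpy as np

# Minimal Uzawa iteration for an equality-constrained quadratic problem,
#   min 0.5 x'Ax - b'x   subject to   Bx = c,
# the saddle-point structure tackled above (there, within a PGD solver for
# the heat equation with a prescribed target temperature).
def uzawa(A, b, B, c, rho=0.5, iters=500):
    lam = np.zeros(B.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(A, b - B.T @ lam)   # primal update
        lam = lam + rho * (B @ x - c)           # dual ascent on the constraint
    return x, lam

A = np.array([[4.0, 1.0], [1.0, 3.0]])          # SPD "stiffness" matrix
b = np.array([1.0, 2.0])
B = np.array([[1.0, 1.0]])                      # constraint: x0 + x1 = 1
c = np.array([1.0])

x, lam = uzawa(A, b, B, c)
print("x =", x, "constraint residual =", B @ x - c)
```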

  17. Optimization of process parameters in precipitation for consistent quality UO{sub 2} powder production

    Energy Technology Data Exchange (ETDEWEB)

    Tiwari, S.K.; Reddy, A.L.V.; Venkataswamy, J.; Misra, M.; Setty, D.S.; Sheela, S.; Saibaba, N., E-mail: misra@nfc.gov.in [Nuclear Fuel Complex, Hyderabad (India)

    2013-07-01

    Nuclear reactor grade natural uranium dioxide powder is being produced through the precipitation route, and is further processed before being converted into the sintered pellets used in the fabrication of PHWR fuel assemblies for 220 and 540 MWe type reactors. The precipitation of Uranyl Nitrate Pure Solution (UNPS) is an important step in the UO₂ powder production line, wherein soluble uranium is transformed into the solid form of Ammonium Uranate (AU), which in turn reflects and decides the powder characteristics. Precipitation of UNPS with vapour ammonia is carried out in a semi-batch process, and process parameters such as ammonia flow rate, temperature, concentration of UNPS and free acidity of UNPS are very critical and decide the UO₂ powder quality. Variation in these critical parameters influences the powder characteristics, which in turn influence the sinterability of UO₂ powder. In order to get consistent powder quality and sinterability, the critical parameter of ammonia flow rate during precipitation is studied, optimized and validated. The critical process parameters are controlled through PLC-based automated on-line data acquisition systems for achieving consistent powder quality with increased recovery and production. The present paper covers the optimization of process parameters and powder characteristics. (author)

  18. Optimization of VPSC Model Parameters for Two-Phase Titanium Alloys: Flow Stress Vs Orientation Distribution Function Metrics

    Science.gov (United States)

    Miller, V. M.; Semiatin, S. L.; Szczepanski, C.; Pilchak, A. L.

    2018-06-01

    The ability to predict the evolution of crystallographic texture during hot working of titanium alloys in the α + β temperature regime is of great significance to numerous engineering disciplines; however, research efforts are complicated by the rapid changes in phase volume fractions and flow stresses with temperature, in addition to topological considerations. The viscoplastic self-consistent (VPSC) polycrystal plasticity model is employed to simulate deformation in the two-phase field. Newly developed parameter selection schemes utilizing automated optimization based on two different error metrics are considered. In the first optimization scheme, which is commonly used in the literature, the VPSC parameters are selected based on the quality of fit between experimental and simulated flow curves at six hot-working temperatures. Under the second, newly developed scheme, parameters are selected to minimize the difference between the simulated and experimentally measured α textures after accounting for the β → α transformation upon cooling. It is demonstrated that both methods result in good qualitative matches to the experimental α phase texture, but texture-based optimization yields a substantially better quantitative orientation distribution function match.
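
    As an illustration of the first, flow-curve-based scheme, the parameter search can be posed as a least-squares fit of simulated to measured flow curves. In the sketch below, run_vpsc is a hypothetical Voce-type stand-in for a real VPSC call (a real model would depend on temperature), and all parameter names and values are assumptions.

      import numpy as np
      from scipy.optimize import least_squares

      def run_vpsc(params, strains, temperature):
          """Hypothetical stand-in for a VPSC run returning a flow curve;
          here a Voce-type hardening law that ignores temperature."""
          tau0, tau1, theta0 = params
          return tau0 + tau1 * (1.0 - np.exp(-theta0 * strains / tau1))

      def residuals(params, datasets):
          """Stack flow-stress misfits over all hot-working temperatures."""
          return np.concatenate([run_vpsc(params, eps, T) - sigma
                                 for eps, sigma, T in datasets])

      # synthetic "experimental" curves at three temperatures
      strains = np.linspace(0.0, 0.5, 20)
      datasets = [(strains, run_vpsc([55.0, 35.0, 220.0], strains, T), T)
                  for T in (850.0, 900.0, 950.0)]
      fit = least_squares(residuals, x0=[50.0, 30.0, 200.0], args=(datasets,))
      print(fit.x)                                    # recovers ~[55, 35, 220]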

  19. Artificial Intelligence Based Selection of Optimal Cutting Tool and Process Parameters for Effective Turning and Milling Operations

    Science.gov (United States)

    Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta

    2016-06-01

    With the increased trend towards automation in modern manufacturing industry, human intervention in routine, repetitive and data-specific activities is greatly reduced. In this paper, an attempt has been made to reduce human intervention in the selection of optimal cutting tools and process parameters for metal cutting applications using Artificial Intelligence techniques. Generally, the selection of appropriate cutting tools and parameters in metal cutting is carried out by an experienced technician or cutting tool expert, based on his knowledge base or an extensive search through a huge cutting tool database. The proposed approach replaces the existing practice of physically searching for tools in databooks/tool catalogues with an intelligent knowledge-based selection system. This system employs artificial-intelligence-based techniques such as artificial neural networks, fuzzy logic and genetic algorithms for decision making and optimization. This intelligence-based optimal tool selection strategy was developed and implemented using Mathworks Matlab Version 7.11.0. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for the selection of appropriate cutting tools and the optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life and tool cost.

  20. FindFoci: a focus detection algorithm with automated parameter training that closely matches human assignments, reduces human inconsistencies and increases speed of analysis.

    Directory of Open Access Journals (Sweden)

    Alex D Herbert

    Full Text Available Accurate and reproducible quantification of the accumulation of proteins into foci in cells is essential for data interpretation and for biological inferences. To improve reproducibility, much emphasis has been placed on the preparation of samples, but less attention has been given to reporting and standardizing the quantification of foci. The current standard to quantitate foci in open-source software is to manually determine a range of parameters based on the outcome of one or a few representative images and then apply the parameter combination to the analysis of a larger dataset. Here, we demonstrate the power and utility of using machine learning to train a new algorithm (FindFoci to determine optimal parameters. FindFoci closely matches human assignments and allows rapid automated exploration of parameter space. Thus, individuals can train the algorithm to mirror their own assignments and then automate focus counting using the same parameters across a large number of images. Using the training algorithm to match human assignments of foci, we demonstrate that applying an optimal parameter combination from a single image is not broadly applicable to analysis of other images scored by the same experimenter or by other experimenters. Our analysis thus reveals wide variation in human assignment of foci and their quantification. To overcome this, we developed training on multiple images, which reduces the inconsistency of using a single or a few images to set parameters for focus detection. FindFoci is provided as an open-source plugin for ImageJ.

  1. Simulation-Based Optimization for Storage Allocation Problem of Outbound Containers in Automated Container Terminals

    Directory of Open Access Journals (Sweden)

    Ning Zhao

    2015-01-01

    Full Text Available Storage allocation of outbound containers is a key factor in the performance of the container handling system in automated container terminals. Improper storage plans for outbound containers make quay crane (QC) waiting inevitable and hence lengthen the vessel handling time. A simulation-based optimization method is proposed in this paper for the storage allocation problem of outbound containers in automated container terminals (SAPOBA). A simulation model is built with a Timed Colored Petri Net (TCPN) and used to evaluate the QC waiting time of storage plans. Two optimization approaches, based on Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA), are proposed to form the complete simulation-based optimization method. The effectiveness of this method is verified by experiments comparing the two optimization approaches.
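
    A bare-bones sketch of the PSO component follows; qc_waiting_time stands in for the TCPN simulation that evaluates a candidate storage plan, and the continuous encoding and constants are illustrative assumptions.

      import numpy as np

      def pso(qc_waiting_time, dim, n_particles=30, iters=100,
              w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=1.0):
          """Minimize the simulated QC waiting time over a continuous
          encoding of a storage plan (decoded inside the objective)."""
          rng = np.random.default_rng(0)
          x = rng.uniform(lo, hi, (n_particles, dim))   # positions
          v = np.zeros_like(x)                          # velocities
          pbest = x.copy()
          pbest_f = np.array([qc_waiting_time(p) for p in x])
          g = pbest[np.argmin(pbest_f)]                 # global best
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([qc_waiting_time(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              g = pbest[np.argmin(pbest_f)]
          return g, pbest_f.min()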

  2. Optimization of Parameters of Asymptotically Stable Systems

    Directory of Open Access Journals (Sweden)

    Anna Guerman

    2011-01-01

    Full Text Available This work deals with numerical methods of parameter optimization for asymptotically stable systems. We formulate a special mathematical programming problem that allows us to determine the optimal parameters of a stabilizer. This problem involves solutions to a differential equation. We show how to choose the mesh in order to obtain a discrete problem guaranteeing the necessary accuracy. The developed methodology is illustrated by an example concerning optimization of parameters for a satellite stabilization system.

  3. Designing a fully automated multi-bioreactor plant for fast DoE optimization of pharmaceutical protein production.

    Science.gov (United States)

    Fricke, Jens; Pohlmann, Kristof; Jonescheit, Nils A; Ellert, Andree; Joksch, Burkhard; Luttmann, Reiner

    2013-06-01

    The identification of optimal expression conditions for state-of-the-art production of pharmaceutical proteins is a very time-consuming and expensive process. In this report a method for rapid and reproducible optimization of protein expression in an in-house designed small-scale BIOSTAT® multi-bioreactor plant is described. A newly developed BioPAT® MFCS/win Design of Experiments (DoE) module (Sartorius Stedim Systems, Germany) connects the process control system MFCS/win with the DoE software MODDE® (Umetrics AB, Sweden) and therefore enables the implementation of fully automated optimization procedures. As a proof of concept, a commercial Pichia pastoris strain KM71H was transformed for the expression of potential malaria vaccines. This approach doubled intact protein secretion productivity compared with initial cultivation results, owing to the DoE optimization procedure. In a next step, robustness with respect to process parameter variability was proven around the determined optimum. Thereby, a significantly improved pharmaceutical production process was established within seven 24-hour cultivation cycles. Specifically, regarding the regulatory demands pointed out in the process analytical technology (PAT) initiative of the United States Food and Drug Administration (FDA), the combination of a highly instrumented, fully automated multi-bioreactor platform with proper cultivation strategies and extended DoE software solutions opens up promising benefits and opportunities for pharmaceutical protein production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Computer simulation and automation of data processing

    International Nuclear Information System (INIS)

    Tikhonov, A.N.

    1981-01-01

    The principles of computerized simulation and automation of data processing are presented. The automated processing system is constructed according to the module-hierarchical principle. The main operating modes of the system are: preprocessing, installation analysis, interpretation, accuracy analysis and parameter control. A quasireal experiment, which makes it possible to plan the real experiment, is defined. It is pointed out that performing the quasireal experiment by means of the computerized installation model, with subsequent automated processing, makes it possible to examine the quantitative behaviour of the system as a whole and provides optimal design of installation parameters for obtaining maximum resolution.

  5. Genetic Algorithm Optimizes Q-LAW Control Parameters

    Science.gov (United States)

    Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard

    2008-01-01

    A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These are propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools; with such initial solutions, the high-fidelity tools quickly converge to a locally optimal solution near the initial one. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performance of the Q-law control parameters is evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method, which assigns a better fitness value to solutions that are dominated by fewer other solutions. With this ranking, the genetic algorithm encourages solutions with higher fitness values to participate in the reproduction process, improving the solutions over the course of evolution. The population of solutions converges to the Pareto front that is attainable within the Q-law control parameter space.
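
    The non-dominated sorting step described above can be sketched generically for the two objectives (flight time, propellant mass); this is a minimal domination-count implementation, not the code from the document.

      import numpy as np

      def domination_counts(objs):
          """objs: (n, 2) array of (flight time, propellant mass), both to be
          minimized. Returns, for each solution, how many others dominate it;
          a count of 0 marks the current Pareto front."""
          n = len(objs)
          counts = np.zeros(n, dtype=int)
          for i in range(n):
              for j in range(n):
                  if i != j and np.all(objs[j] <= objs[i]) \
                            and np.any(objs[j] < objs[i]):
                      counts[i] += 1
          return counts

      objs = np.array([[10.0, 5.0], [8.0, 6.0], [12.0, 5.0], [9.0, 9.0]])
      fitness = 1.0 / (1.0 + domination_counts(objs))  # fewer dominators -> fitter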

  6. Infrared Drying Parameter Optimization

    Science.gov (United States)

    Jackson, Matthew R.

    In recent years, much research has been done to explore direct printing methods, such as screen and inkjet printing, as alternatives to the traditional lithographic process. The primary motivation is reduction of the material costs associated with producing common electronic devices. Much of this research has focused on developing inkjet or screen paste formulations that can be printed on a variety of substrates and that have conductivity performance similar to the materials currently used in the manufacturing of circuit boards and other electronic devices. Very little research has been done to develop a process that would use direct printing methods to manufacture electronic devices in high volumes. This study focuses on developing and optimizing a drying process for conductive copper ink in a high-volume manufacturing setting. Using an infrared (IR) dryer, it was determined that conductive copper prints could be dried in seconds or minutes, as opposed to the tens of minutes or hours required by other drying devices such as a vacuum oven. In addition, this study also identifies significant parameters that can affect the conductivity of IR-dried prints. Using designed experiments and statistical analysis, the dryer parameters were optimized to produce the best conductivity performance for a specific ink formulation and substrate combination. It was determined that for an ethylene glycol, butanol, 1-methoxy-2-propanol ink formulation printed on Kapton, the optimal drying parameters consisted of a dryer height of 4 inches, a temperature setting between 190-200°C, and a dry time of 50-65 seconds, depending on the printed film thickness as determined by the number of print passes. It is important to note that these parameters are optimized specifically for the ink formulation and substrate used in this study. There is still much research that needs to be done into optimizing the IR dryer for different ink-substrate combinations, as well as developing a

  7. Parameters Investigation of Mathematical Model of Productivity for Automated Line with Availability by DMAIC Methodology

    Directory of Open Access Journals (Sweden)

    Tan Chan Sin

    2014-01-01

    Full Text Available Automated lines are widely applied in industry, especially for mass production with low product variety. Productivity is one of the important criteria for automated lines, as it directly determines outputs and profits, so industry needs to forecast productivity accurately in order to meet customer demand, and the forecast is calculated using a mathematical model. A mathematical model of productivity with availability for automated lines has been introduced to express productivity in terms of a single level of reliability for stations and mechanisms. Since this model cannot achieve a productivity close enough to the actual one, owing to the loss parameters it leaves out of consideration, an enhancement of the mathematical model is required to include those losses. This paper presents the productivity loss parameters investigated by using the DMAIC (Define, Measure, Analyze, Improve, Control) concept and the PACE Prioritization Matrix (Priority, Action, Consider, Eliminate). The investigated parameters are important for the further improvement of the mathematical model of productivity with availability, towards a robust mathematical model of productivity for automated lines.

  8. PARAMETER COORDINATION AND ROBUST OPTIMIZATION FOR MULTIDISCIPLINARY DESIGN

    Institute of Scientific and Technical Information of China (English)

    HU Jie; PENG Yinghong; XIONG Guangleng

    2006-01-01

    A new parameter coordination and robust optimization approach for multidisciplinary design is presented. Firstly, a constraint network model is established to support engineering change, coordination and optimization. In this model, interval boxes are adopted to describe the uncertainty of design parameters quantitatively and thereby enhance design robustness. Secondly, a parameter coordination method is presented to solve the constraint network model, monitor potential conflicts due to engineering changes, and obtain the consistent solution space corresponding to the given product specifications. Finally, a robust parameter optimization model is established, and a genetic algorithm is used to obtain the robust optimal parameters. An example of bogie design is analyzed to demonstrate the effectiveness of the scheme.

  9. Simulation based optimization on automated fibre placement process

    Science.gov (United States)

    Lei, Shi

    2018-02-01

    In this paper, a software-simulation-based method (using Autodesk TruPlan & TruFiber) is proposed to optimize the automated fibre placement (AFP) process. Different types of manufacturability analysis are introduced to predict potential defects. Advanced fibre path generation algorithms are compared with respect to geometrically different parts. Major manufacturing data are taken into consideration prior to tool path generation to achieve a high manufacturing success rate.

  10. Automated gamma knife radiosurgery treatment planning with image registration, data-mining, and Nelder-Mead simplex optimization

    International Nuclear Information System (INIS)

    Lee, Kuan J.; Barber, David C.; Walton, Lee

    2006-01-01

    Gamma knife treatments are usually planned manually, requiring much expertise and time. We describe a new, fully automatic method of treatment planning. The treatment volume to be planned is first compared with a database of past treatments to find volumes closely matching in size and shape. The treatment parameters of the closest matches are used as starting points for the new treatment plan. Further optimization is performed with the Nelder-Mead simplex method: the coordinates and weights of the isocenters are allowed to vary until a maximally conformal plan specific to the new treatment volume is found. The method was tested on a randomly selected set of 10 acoustic neuromas and 10 meningiomas. Typically, matching a new volume took under 30 seconds. The time for simplex optimization, on a 3 GHz Xeon processor, ranged from under a minute for small volumes to substantially longer for large volumes (>30 000 cubic mm, >20 isocenters). In 8/10 acoustic neuromas and 8/10 meningiomas, the automatic method found plans with a conformation number equal to or better than that of the manual plan. In 4/10 acoustic neuromas and 5/10 meningiomas, both overtreatment and undertreatment ratios were equal or better in automated plans. In conclusion, data-mining of past treatments can be used to derive starting parameters for treatment planning. These parameters can then be computer-optimized to give good plans automatically.
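
    The simplex refinement stage can be reproduced in outline with SciPy's Nelder-Mead implementation; the toy Gaussian "shot" model below merely stands in for the real dose calculation, and all names and constants are assumptions.

      import numpy as np
      from scipy.optimize import minimize

      # toy dose model: each isocenter deposits a Gaussian "shot"; the cost
      # penalizes deviation from unit dose at a few target points
      target = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])

      def cost(flat_params):
          iso = flat_params.reshape(-1, 3)             # (x, y, weight) per shot
          dose = np.zeros(len(target))
          for x, y, w in iso:
              dose += w * np.exp(-((target[:, 0] - x) ** 2
                                   + (target[:, 1] - y) ** 2))
          return np.sum((dose - 1.0) ** 2)             # want unit dose on target

      x0 = np.array([0.2, 0.1, 1.0, 0.8, 0.2, 1.0])    # e.g. from a database match
      res = minimize(cost, x0, method="Nelder-Mead", options={"maxiter": 5000})
      print(res.x.reshape(-1, 3), res.fun)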

  11. Development of An Optimization Method for Determining Automation Rate in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun

    2014-01-01

    Since automation was introduced in various industrial fields, it has been known that automation provides positive effects, such as greater efficiency and fewer human errors, and a negative effect known as out-of-the-loop (OOTL). Thus, before introducing automation into the nuclear field, the positive and negative effects of automation on human operators should be estimated. In this paper, focusing on CPS, an optimization method to find an appropriate proportion of automation is suggested by integrating the proposed cognitive automation rate and the concept of the level of ostracism. The cognitive automation rate estimation method was suggested to express the reduction in human cognitive load, and the level of ostracism was suggested to express the difficulty in obtaining information from the automation system and the increased uncertainty of the human operators' diagnosis. The maximum proportion of automation that maintains a high level of attention for monitoring the situation is derived by an experiment, and the automation rate is estimated by the suggested estimation method. This approach is expected to yield an automation proportion that avoids the OOTL problem while achieving maximum efficacy.

  12. Optimal Laser Phototherapy Parameters for Pain Relief.

    Science.gov (United States)

    Kate, Rohit J; Rubatt, Sarah; Enwemeka, Chukuka S; Huddleston, Wendy E

    2018-03-27

    Studies on laser phototherapy for pain relief have used parameters that vary widely and have reported varying outcomes. The purpose of this study was to determine the optimal parameter ranges of laser phototherapy for pain relief by analyzing data aggregated from the existing primary literature. Original studies were gathered from available sources and screened against pre-established inclusion criteria. The included articles were then subjected to meta-analysis using Cohen's d statistic to determine treatment effect size. From these studies, ranges of the reported parameters that always resulted in large effect sizes were determined. These optimal ranges were evaluated for their accuracy using a leave-one-article-out cross-validation procedure. A total of 96 articles met the inclusion criteria for meta-analysis and yielded 232 effect sizes. The average effect size was highly significant: d = +1.36 (confidence interval [95% CI] = 1.04-1.68). Among all the parameters, total energy was found to have the greatest effect on pain relief, with the most prominent optimal ranges of 120-162 J and 15.36-20.16 J, which always resulted in large effect sizes. The cross-validation accuracy of the optimal ranges for total energy was 68.57% (95% CI = 53.19-83.97). Fewer and less prominent optimal ranges were obtained for the energy density and duration parameters. None of the remaining parameters was found to be independently related to pain relief outcomes. The findings of the meta-analysis indicate that laser phototherapy is highly effective for pain relief. Based on the analysis of parameters, total energy can be optimized to yield the largest effect on pain relief.
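
    For reference, Cohen's d as used in such a meta-analysis is the standardized mean difference between treatment and control groups; the direct computation below is a generic sketch with made-up example scores, not the authors' code.

      import numpy as np

      def cohens_d(treatment, control):
          """Standardized mean difference using the pooled standard deviation."""
          t, c = np.asarray(treatment, float), np.asarray(control, float)
          nt, nc = len(t), len(c)
          pooled_var = ((nt - 1) * t.var(ddof=1)
                        + (nc - 1) * c.var(ddof=1)) / (nt + nc - 2)
          return (t.mean() - c.mean()) / np.sqrt(pooled_var)

      # e.g. pain-score reductions; d > 0.8 is conventionally a "large" effect
      print(cohens_d([6.1, 5.8, 7.0, 6.5], [4.2, 3.9, 4.8, 4.5]))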

  13. Parameters control in GAs for dynamic optimization

    Directory of Open Access Journals (Sweden)

    Khalid Jebari

    2013-02-01

    Full Text Available Controlling the parameters of genetic algorithms allows the search process to be optimized and improves the performance of the algorithm. Moreover, it relieves the user from a trial-and-error game of finding the optimal parameters by hand.

  14. Controller Design Automation for Aeroservoelastic Design Optimization of Wind Turbines

    NARCIS (Netherlands)

    Ashuri, T.; Van Bussel, G.J.W.; Zaayer, M.B.; Van Kuik, G.A.M.

    2010-01-01

    The purpose of this paper is to integrate the controller design of wind turbines with structure and aerodynamic analysis and use the final product in the design optimization process (DOP) of wind turbines. To do that, the controller design is automated and integrated with an aeroelastic simulation

  15. Automated design and optimization of flexible booster autopilots via linear programming, volume 1

    Science.gov (United States)

    Hauser, F. D.

    1972-01-01

    A nonlinear programming technique was developed for the automated design and optimization of autopilots for large flexible launch vehicles. This technique, which resulted in the COEBRA program, uses the iterative application of linear programming. The method deals directly with the three main requirements of booster autopilot design: (1) good response to guidance commands; (2) response to external disturbances (e.g. wind) that minimizes structural bending moment loads and trajectory dispersions; and (3) stability with specified tolerances on the vehicle and flight control system parameters. The method is applicable to very high order systems (30th order and greater per flight condition). Examples are provided that demonstrate the successful application of the algorithm to the design of autopilots for both single and multiple flight conditions.

  16. GA based CNC turning center exploitation process parameters optimization

    Directory of Open Access Journals (Sweden)

    Z. Car

    2009-01-01

    Full Text Available This paper presents machining parameter optimization for the turning process based on the use of artificial intelligence. To obtain greater efficiency and productivity of the machine tool, optimal cutting parameters have to be determined. A genetic algorithm (GA) is used as the optimal solution finder. The optimization has to yield minimum machining time and minimum production cost while respecting technological and material constraints.

  17. Automated Design Framework for Synthetic Biology Exploiting Pareto Optimality.

    Science.gov (United States)

    Otero-Muras, Irene; Banga, Julio R

    2017-07-21

    In this work we consider Pareto optimality for automated design in synthetic biology. We present a generalized framework based on a mixed-integer dynamic optimization formulation that, given design specifications, allows the computation of Pareto optimal sets of designs, that is, the set of best trade-offs for the metrics of interest. We show how this framework can be used for (i) forward design, that is, finding the Pareto optimal set of synthetic designs for implementation, and (ii) reverse design, that is, analyzing and inferring motifs and/or design principles of gene regulatory networks from the Pareto set of optimal circuits. Finally, we illustrate the capabilities and performance of this framework considering four case studies. In the first problem we consider the forward design of an oscillator. In the remaining problems, we illustrate how to apply the reverse design approach to find motifs for stripe formation, rapid adaptation, and fold-change detection, respectively.

  18. Integrated optimization of location assignment and sequencing in multi-shuttle automated storage and retrieval systems under modified 2n-command cycle pattern

    Science.gov (United States)

    Yang, Peng; Peng, Yongfei; Ye, Bin; Miao, Lixin

    2017-09-01

    This article explores the integrated optimization problem of location assignment and sequencing in multi-shuttle automated storage/retrieval systems under the modified 2n-command cycle pattern. The decisions on storage and retrieval (S/R) location assignment and S/R request sequencing are jointly considered. An integer quadratic programming model is formulated to describe this integrated optimization problem. The optimal travel cycles for multi-shuttle S/R machines to process the S/R requests in the storage and retrieval order lists can be obtained by solving the model. Small-sized instances are solved optimally using CPLEX. For large-sized problems, two tabu search algorithms are proposed, in which first-come-first-served and nearest-neighbour rules are used to generate initial solutions. Various numerical experiments are conducted to examine the heuristics' performance and the sensitivity of the algorithm parameters. Furthermore, the experimental results are analysed from the viewpoint of practical application, and a parameter list for applying the proposed heuristics under different real-life scenarios is recommended.

  19. Simple heuristics: A bridge between manual core design and automated optimization methods

    International Nuclear Information System (INIS)

    White, J.R.; Delmolino, P.M.

    1993-01-01

    The primary function of RESCUE is to serve as an aid in the analysis and identification of feasible loading patterns for LWR reload cores. The unique feature of RESCUE is that its physics model is based on some recent advances in generalized perturbation theory (GPT) methods. The high order GPT techniques offer the accuracy, computational efficiency, and flexibility needed for the implementation of a full range of capabilities within a set of compatible interactive (manual and semi-automated) and automated design tools. The basic design philosophy and current features within RESCUE are reviewed, and the new semi-automated capability is highlighted. The online advisor facility appears quite promising and it provides a natural bridge between the traditional trial-and-error manual process and the recent progress towards fully automated optimization sequences. (orig.)

  20. Optimal parameters of the SVM for temperature prediction

    Directory of Open Access Journals (Sweden)

    X. Shi

    2015-05-01

    Full Text Available This paper establishes three different optimization models in order to predict the temperature at the Foping station. Principal component analysis (PCA) was used to reduce the dimensionality, condensing multivariate climate factors into a few variables. The parameters of a support vector machine (SVM) were then optimized with a genetic algorithm (GA), particle swarm optimization (PSO) and a developed genetic algorithm. The most suitable parameter optimization method was identified by comparing the results of the three models. The results are as follows: the predicted values obtained with parameters optimized by the developed genetic algorithm were closest to the measured values, fitted the measured trend best, and converged relatively fast.
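
    The PCA-plus-SVM parameter search can be reproduced with standard tooling; the sketch below substitutes scikit-learn's exhaustive grid search for the paper's GA/PSO optimizers, with an illustrative parameter grid and synthetic placeholder data.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import GridSearchCV
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVR

      # placeholders: X = multivariate climate factors, y = station temperature
      rng = np.random.default_rng(0)
      X, y = rng.normal(size=(200, 12)), rng.normal(size=200)

      model = make_pipeline(PCA(n_components=4), SVR())
      grid = {"svr__C": [1, 10, 100], "svr__gamma": [0.01, 0.1, 1.0],
              "svr__epsilon": [0.01, 0.1]}
      search = GridSearchCV(model, grid, cv=5,
                            scoring="neg_mean_absolute_error")
      search.fit(X, y)
      print(search.best_params_)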

  1. Chickpea seeds germination rational parameters optimization

    Science.gov (United States)

    Safonova, Yu A.; Ivliev, M. N.; Lemeshkin, A. V.

    2018-05-01

    The paper presents experimental results on the influence of chickpea seed bioactivation parameters on their enzymatic activity. Optimal bioactivation process modes were obtained by regression-factor analysis: a process temperature of 13.6 °C and a process duration of 71.5 h. It was found that in the germination process the proteolytic, amylolytic and lipolytic enzyme activities increased, while the urease enzyme activity decreased. The dependences of enzyme activity on chickpea seed germination conditions were obtained by mathematical processing of the experimental data. The calculated data are in good agreement with the experimental ones. This confirms the efficiency of optimization based on mathematical planning of experiments in determining the optimal germination parameters for bioactivated chickpea seeds.

  2. Towards automating the discovery of certain innovative design principles through a clustering-based optimization technique

    Science.gov (United States)

    Bandaru, Sunith; Deb, Kalyanmoy

    2011-09-01

    In this article, a methodology is proposed for automatically extracting innovative design principles which make a system or process (subject to conflicting objectives) optimal using its Pareto-optimal dataset. Such 'higher knowledge' would not only help designers to execute the system better, but also enable them to predict how changes in one variable would affect other variables if the system has to retain its optimal behaviour. This in turn would help solve other similar systems with different parameter settings easily without the need to perform a fresh optimization task. The proposed methodology uses a clustering-based optimization technique and is capable of discovering hidden functional relationships between the variables, objective and constraint functions and any other function that the designer wishes to include as a 'basis function'. A number of engineering design problems are considered for which the mathematical structure of these explicit relationships exists and has been revealed by a previous study. A comparison with the multivariate adaptive regression splines (MARS) approach reveals the practicality of the proposed approach due to its ability to find meaningful design principles. The success of this procedure for automated innovization is highly encouraging and indicates its suitability for further development in tackling more complex design scenarios.

  3. Mixed integer evolution strategies for parameter optimization.

    Science.gov (United States)

    Li, Rui; Emmerich, Michael T M; Eggermont, Jeroen; Bäck, Thomas; Schütz, M; Dijkstra, J; Reiber, J H C

    2013-01-01

    Evolution strategies (ESs) are powerful probabilistic search and optimization algorithms gleaned from biological evolution theory. They have been successfully applied to a wide range of real world applications. The modern ESs are mainly designed for solving continuous parameter optimization problems. Their ability to adapt the parameters of the multivariate normal distribution used for mutation during the optimization run makes them well suited for this domain. In this article we describe and study mixed integer evolution strategies (MIES), which are natural extensions of ES for mixed integer optimization problems. MIES can deal with parameter vectors consisting not only of continuous variables but also with nominal discrete and integer variables. Following the design principles of the canonical evolution strategies, they use specialized mutation operators tailored for the aforementioned mixed parameter classes. For each type of variable, the choice of mutation operators is governed by a natural metric for this variable type, maximal entropy, and symmetry considerations. All distributions used for mutation can be controlled in their shape by means of scaling parameters, allowing self-adaptation to be implemented. After introducing and motivating the conceptual design of the MIES, we study the optimality of the self-adaptation of step sizes and mutation rates on a generalized (weighted) sphere model. Moreover, we prove global convergence of the MIES on a very general class of problems. The remainder of the article is devoted to performance studies on artificial landscapes (barrier functions and mixed integer NK landscapes), and a case study in the optimization of medical image analysis systems. In addition, we show that with proper constraint handling techniques, MIES can also be applied to classical mixed integer nonlinear programming problems.
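
    A condensed sketch of the type-specific mutation operators described above follows, one per variable class; the distributions follow the standard ES/MIES literature, but the exact parameterizations here are simplified assumptions.

      import numpy as np

      rng = np.random.default_rng()

      def mutate_continuous(x, sigma):
          """Gaussian mutation for real-valued variables."""
          return x + sigma * rng.standard_normal(len(x))

      def mutate_integer(z, s):
          """Symmetric mutation for integer variables: difference of two
          geometric steps (a maximal-entropy choice on the integers)."""
          r = s / len(z)
          p = 1.0 - r / (1.0 + np.sqrt(1.0 + r ** 2))
          g1 = rng.geometric(p, len(z)) - 1
          g2 = rng.geometric(p, len(z)) - 1
          return z + g1 - g2

      def mutate_nominal(d, rate, domains):
          """Reset each nominal variable to a uniform random value with
          probability `rate` (the self-adaptable mutation rate)."""
          d = list(d)
          for i, dom in enumerate(domains):
              if rng.random() < rate:
                  d[i] = rng.choice(dom)
          return d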

  4. Optimization of Agrobacterium -mediated transformation parameters ...

    African Journals Online (AJOL)

    Agrobacterium-mediated transformation factors for sweet potato embryogenic calli were optimized using β-glucuronidase (GUS) as a reporter. The binary vector pTCK303 harboring the modified GUS gene driven by the CaMV 35S promoter was used. Transformation parameters were optimized including bacterial ...

  5. Optimization of the reconstruction parameters in [123I]FP-CIT SPECT

    Science.gov (United States)

    Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec

    2018-04-01

    The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction, and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). The reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from magnetic resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that a set of optimal parameters can be obtained for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [-0.5, +0.5] in 87% and 92% of the cases for caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.

  6. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    Science.gov (United States)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: in the first stage the infiltration parameters are obtained, and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem into an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using genetic algorithms. A reduction factor is used in the penalty parameter approach so that the optimal infiltration parameters already obtained are not destroyed during the subsequent genetic algorithm generations required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior, simple in concept, and has potential for field application.
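
    The core idea, a penalty weight that decays by a reduction factor over GA generations so that previously found infiltration parameters are not destroyed, can be sketched as follows; the convolution model is a toy stand-in for the real rainfall-runoff simulation, and all names and constants are illustrative.

      import numpy as np

      def penalty_weight(p0, reduction, generation):
          """Penalty weight that decays each generation, so infiltration
          parameters found early are not destroyed later in the GA run."""
          return p0 * reduction ** generation

      def penalized_fitness(uh, phi, rainfall, runoff, weight):
          """Fit error of a toy linear rainfall-runoff model (effective
          rainfall = rainfall minus a constant loss rate phi, convolved
          with the UH), plus a penalty on negative UH ordinates."""
          effective = np.maximum(rainfall - phi, 0.0)
          sim = np.convolve(effective, uh)[: len(runoff)]
          fit_error = np.sum((sim - runoff) ** 2)
          violation = np.sum(np.minimum(uh, 0.0) ** 2)   # constraint violation
          return fit_error + weight * violation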

  7. Design of an optimal automation system : Finding a balance between a human's task engagement and exhaustion

    NARCIS (Netherlands)

    Klein, Michel; van Lambalgen, Rianne

    2011-01-01

    In demanding tasks, human performance can seriously degrade as a consequence of increased workload and limited resources. In such tasks it is very important to maintain an optimal performance quality; therefore, automation assistance is required. On the other hand, automation can also impose

  8. Sequential ensemble-based optimal design for parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
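
    Of the information metrics listed, the relative entropy (RE) between prior and posterior parameter ensembles has a closed form under a Gaussian approximation; the sketch below is a generic implementation, not the paper's code.

      import numpy as np

      def gaussian_relative_entropy(prior, posterior):
          """KL divergence D(posterior || prior), with both ensembles
          (rows = members) approximated as multivariate Gaussians; needs
          more members than parameters for invertible covariances."""
          m0, m1 = posterior.mean(axis=0), prior.mean(axis=0)
          S0 = np.cov(posterior, rowvar=False)
          S1 = np.cov(prior, rowvar=False)
          k = len(m0)
          S1_inv = np.linalg.inv(S1)
          d = m1 - m0
          return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                        + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

    A sampling design yielding a larger RE has extracted more information about the parameters, which is the quantity such a design criterion seeks to maximize.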

  9. Optimal Machining Parameters for Achieving the Desired Surface Roughness in Turning of Steel

    Directory of Open Access Journals (Sweden)

    LB Abhang

    2012-06-01

    Full Text Available Due to the widespread use of highly automated machine tools in the metal cutting industry, manufacturing requires highly reliable models and methods for the prediction of output performance in the machining process. The prediction of optimal manufacturing conditions for good surface finish and dimensional accuracy plays a very important role in process planning. In the steel turning process, the tool geometry and cutting conditions determine the time and cost of production, which ultimately affect the quality of the final product. In the present work, experimental investigations have been conducted to determine the effect of the tool geometry (effective tool nose radius) and metal cutting conditions (cutting speed, feed rate and depth of cut) on surface finish during the turning of EN-31 steel. First- and second-order mathematical models are developed in terms of the machining parameters by using response surface methodology on the basis of the experimental results. The surface roughness prediction model has been optimized to obtain minimum surface roughness values by using the LINGO solver. LINGO is a mathematical modeling language used in linear and nonlinear optimization to formulate large problems concisely, solve them, and analyze the solution, in engineering sciences, operations research, etc. The LINGO solver is global optimization software; it gives minimum values of surface roughness and the respective optimal conditions.
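
    The second-order response surface model mentioned here is a quadratic polynomial in the machining parameters fitted by ordinary least squares; a generic version (factor names assumed) is sketched below.

      import numpy as np

      def fit_second_order_rsm(X, y):
          """Fit Ra = b0 + sum(bi xi) + sum(bii xi^2) + sum(bij xi xj)
          by least squares. X: (n_runs, n_factors); y: surface roughness."""
          n, k = X.shape
          cols = [np.ones(n)]
          cols += [X[:, i] for i in range(k)]                  # linear terms
          cols += [X[:, i] ** 2 for i in range(k)]             # pure quadratics
          cols += [X[:, i] * X[:, j]
                   for i in range(k) for j in range(i + 1, k)] # interactions
          A = np.column_stack(cols)
          coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
          return coeffs

      # X columns might be: cutting speed, feed rate, depth of cut, nose radius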

  10. Automated Prescription of Oblique Brain 3D MRSI

    Science.gov (United States)

    Ozhinsky, Eugene; Vigneron, Daniel B.; Chang, Susan M.; Nelson, Sarah J.

    2012-01-01

    Two major difficulties encountered in implementing Magnetic Resonance Spectroscopic Imaging (MRSI) in a clinical setting are limited coverage and difficulty in prescription. The goal of this project was to completely automate the process of 3D PRESS MRSI prescription, including placement of the selection box, saturation bands and shim volume, while maximizing the coverage of the brain. The automated prescription technique included acquisition of an anatomical MRI image, optimization of the oblique selection box parameters, optimization of the placement of OVS saturation bands, and loading of the calculated parameters into a customized 3D MRSI pulse sequence. To validate the technique and compare its performance with existing protocols, 3D MRSI data were acquired from 6 exams from 3 healthy volunteers. To assess the performance of the automated 3D MRSI prescription for patients with brain tumors, the data were collected from 16 exams from 8 subjects with gliomas. This technique demonstrated robust coverage of the tumor, high consistency of prescription and very good data quality within the T2 lesion. PMID:22692829

  11. Hybrid computer optimization of systems with random parameters

    Science.gov (United States)

    White, R. C., Jr.

    1972-01-01

    A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.

  12. Optimization of parameters of special asynchronous electric drives

    Science.gov (United States)

    Karandey, V. Yu; Popov, B. K.; Popova, O. B.; Afanasyev, V. L.

    2018-03-01

    The article considers the problem of parameter optimization for special asynchronous electric drives. Solving this problem will make it possible to design and create special asynchronous electric drives for various industries. The resulting electric drives will have optimal mass-dimensional and power parameters, allowing the prescribed characteristics of technological process control to be realized with an optimal level of electric energy expenditure, process completion time, or other specified parameters. The obtained solution allows one not only to solve a particular optimization problem, but also to construct dependences between the optimized parameters of special asynchronous electric drives, for example with changes in power, current in the stator or rotor winding, induction in the gap or in the steel of the magnetic conductors, and other parameters. From the constructed dependences, it is possible to choose the necessary optimal values of the parameters of special asynchronous electric drives and their components without carrying out repeated calculations.

  13. Fine-Tuning ADAS Algorithm Parameters for Optimizing Traffic ...

    Science.gov (United States)

    With the development of Connected Vehicle technology that facilitates wireless communication among vehicles and road-side infrastructure, Advanced Driver Assistance Systems (ADAS) can be adopted as an effective tool for accelerating traffic safety and mobility optimization at various highway facilities. To this end, traffic management centers identify the optimal ADAS algorithm parameter set that enables the maximum improvement of traffic safety and mobility performance, and broadcast the optimal parameter set wirelessly to individual ADAS-equipped vehicles. After adopting the optimal parameter set, ADAS-equipped drivers become active agents in the traffic stream that work collectively and consistently to prevent traffic conflicts, lower the intensity of traffic disturbances, and suppress the development of traffic oscillations into heavy traffic jams. Successful implementation of this objective requires the capability of capturing the impact of the ADAS on driving behaviors and of measuring traffic safety and mobility performance under the influence of the ADAS. To address this challenge, this research proposes a synthetic methodology that incorporates ADAS-affected driving behavior modeling and state-of-the-art microscopic traffic flow modeling into a virtually simulated environment. Building on such an environment, the optimal ADAS algorithm parameter set is identified through an optimization programming framework.

  14. Optimal Control of Connected and Automated Vehicles at Roundabouts

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Liuhui [University of Delaware; Malikopoulos, Andreas [ORNL; Rios-Torres, Jackeline [ORNL

    2018-01-01

    Connectivity and automation in vehicles provide the most intriguing opportunity for enabling users to better monitor transportation network conditions and make better operating decisions to improve safety and reduce pollution, energy consumption, and travel delays. This study investigates the implications of optimally coordinating vehicles that are wirelessly connected to each other and to the infrastructure in roundabouts, in order to achieve a smooth traffic flow without stop-and-go driving. We apply an optimization framework and an analytical solution that allow optimal coordination of vehicles for merging in such traffic scenarios. The effectiveness of the proposed approach is validated through simulation; it is shown that coordination of vehicles can reduce total travel time by 3~49% and fuel consumption by 2~27%, depending on the traffic level. In addition, network throughput is improved by up to 25% due to the elimination of stop-and-go driving behavior.

  15. Automated dual-wavelength spectrophotometer optimized for phytochrome assay

    International Nuclear Information System (INIS)

    Pratt, L.H.; Wampler, J.E.; Rich, E.S. Jr.

    1985-01-01

    A microcomputer-controlled dual-wavelength spectrophotometer suitable for automated phytochrome assay is described. The optomechanical unit provides for sequential irradiation of the sample by the two measuring wavelengths with intervening dark intervals and for actinic irradiation to interconvert phytochrome between its two forms. Photomultiplier current is amplified, converted to a digital value and transferred into the computer using a custom-designed IEEE-488 bus interface. The microcomputer calculates mathematically both absorbance and absorbance difference values with dynamic correction for photomultiplier dark current. In addition, the computer controls the operating parameters of the spectrophotometer via a separate interface. These parameters include control of the durations of measuring and actinic irradiation intervals and their sequence. 14 references, 4 figures

  16. Towards automated diffraction tomography. Part II-Cell parameter determination

    International Nuclear Information System (INIS)

    Kolb, U.; Gorelik, T.; Otten, M.T.

    2008-01-01

    Automated diffraction tomography (ADT) allows the collection of three-dimensional (3d) diffraction data sets from crystals down to a size of only a few nanometres. Imaging is done in STEM mode, and diffraction data are collected with quasi-parallel-beam nanoelectron diffraction (NED). Here, we present the set of processing steps developed for automatic unit-cell parameter determination from the collected 3d diffraction data. Cell parameter determination proceeds via extraction of peak positions from a recorded data set (called the data reduction path) followed by subsequent cluster analysis of difference vectors. The procedure of lattice parameter determination is presented in detail for a beam-sensitive organic material. Independently, we demonstrate a potential approach (called the full integration path) based on 3d reconstruction of the reciprocal space, visualising special structural features of materials such as partial disorder. Furthermore, we describe new features implemented into the acquisition part.

  17. Automated electrochemical assembly of the protected potential TMG-chitotriomycin precursor based on rational optimization of the carbohydrate building block.

    Science.gov (United States)

    Nokami, Toshiki; Isoda, Yuta; Sasaki, Norihiko; Takaiso, Aki; Hayase, Shuichi; Itoh, Toshiyuki; Hayashi, Ryutaro; Shimizu, Akihiro; Yoshida, Jun-ichi

    2015-03-20

    The anomeric arylthio group and the hydroxyl-protecting groups of thioglycosides were optimized to construct carbohydrate building blocks for automated electrochemical solution-phase synthesis of oligoglucosamines having 1,4-β-glycosidic linkages. The optimization study included density functional theory calculations, measurements of the oxidation potentials, and the trial synthesis of the chitotriose trisaccharide. The automated synthesis of the protected potential N,N,N-trimethyl-d-glucosaminylchitotriomycin precursor was accomplished by using the optimized building block.

  18. Optimal design criteria - prediction vs. parameter estimation

    Science.gov (United States)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region, i.e. a G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, the computation of the kriging variance, and even more so of the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that the G-optimal design cannot really be found in practice with the computer equipment available today. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation: a D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield fundamentally different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on this Pareto frontier yields almost as good results as searching in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.

  19. Global parameter optimization of a Mather-type plasma focus in the framework of the Gratton–Vargas two-dimensional snowplow model

    International Nuclear Information System (INIS)

    Auluck, S K H

    2014-01-01

    Dense plasma focus (DPF) is known to produce highly energetic ions, electrons and plasma environment which can be used for breeding short-lived isotopes, plasma nanotechnology and other material processing applications. Commercial utilization of DPF in such areas would need a design tool that can be deployed in an automatic search for the best possible device configuration for a given application. The recently revisited (Auluck 2013 Phys. Plasmas 20 112501) Gratton–Vargas (GV) two-dimensional analytical snowplow model of plasma focus provides a numerical formula for dynamic inductance of a Mather-type plasma focus fitted to thousands of automated computations, which enables the construction of such a design tool. This inductance formula is utilized in the present work to explore global optimization, based on first-principles optimality criteria, in a four-dimensional parameter-subspace of the zero-resistance GV model. The optimization process is shown to reproduce the empirically observed constancy of the drive parameter over eight decades in capacitor bank energy. The optimized geometry of plasma focus normalized to the anode radius is shown to be independent of voltage, while the optimized anode radius is shown to be related to capacitor bank inductance. (paper)

  20. Geometry Based Design Automation : Applied to Aircraft Modelling and Optimization

    OpenAIRE

    Amadori, Kristian

    2012-01-01

    Product development processes are continuously challenged by demands for increased efficiency. As engineering products become more and more complex, efficient tools and methods for integrated and automated design are needed throughout the development process. Multidisciplinary Design Optimization (MDO) is one promising technique that has the potential to drastically improve concurrent design. MDO frameworks combine several disciplinary models with the aim of gaining a holistic perspective of ...

  1. An efficient automated parameter tuning framework for spiking neural networks.

    Science.gov (United States)

    Carlson, Kristofor D; Nageswaran, Jayram Moorkanikara; Dutt, Nikil; Krichmar, Jeffrey L

    2014-01-01

    As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EA) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation showed a speedup of 65× for the GPU implementation over the CPU implementation, or 0.35 h per generation for the GPU vs. 23.5 h per generation for the CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.

  2. APPLICATION OF GENETIC ALGORITHMS FOR ROBUST PARAMETER OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    N. Belavendram

    2010-12-01

    Full Text Available Parameter optimization can be achieved by many methods, such as Monte Carlo, full factorial and fractional factorial designs. Genetic algorithms (GA) are fairly recent in this respect but afford a novel method of parameter optimization. In GA, there is an initial pool of individuals, each with its own specific phenotypic trait expressed as a ‘genetic chromosome’. Different genes enable individuals with different fitness levels to reproduce according to natural reproductive gene theory. This reproduction is established in terms of selection, crossover and mutation of the reproducing genes. The resulting child generation of individuals has a better fitness level, akin to natural selection, namely evolution. Populations evolve towards the fittest individuals. Such a mechanism has a parallel application in parameter optimization. Factors in a parameter design can be expressed as a genetic analogue in a pool of sub-optimal random solutions. Allowing this pool of sub-optimal solutions to evolve over several generations produces fitter generations converging to a pre-defined engineering optimum. In this paper, a genetic algorithm is used to study a seven-factor non-linear equation for a Wheatstone bridge as the equation to be optimized. A comparison of the full factorial design against the GA method shows that the GA method is about 1200 times faster in finding a comparable solution.
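
    The selection-crossover-mutation loop described in this abstract is compact enough to sketch directly. The following minimal Python illustration is not the paper's implementation: the fitness function, population size and rates are hypothetical placeholders standing in for the seven-factor Wheatstone bridge equation.

        import random

        def fitness(x):
            # Hypothetical objective; the paper optimizes a seven-factor
            # non-linear Wheatstone bridge equation instead.
            return -sum((xi - 0.5) ** 2 for xi in x)

        def evolve(pop_size=50, n_genes=7, generations=100,
                   crossover_rate=0.8, mutation_rate=0.05):
            pop = [[random.random() for _ in range(n_genes)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                # Selection: fitter individuals reproduce preferentially.
                pop.sort(key=fitness, reverse=True)
                parents = pop[:pop_size // 2]
                children = []
                while len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    # Crossover: splice gene segments from two parents.
                    cut = random.randrange(1, n_genes)
                    child = (a[:cut] + b[cut:]
                             if random.random() < crossover_rate else a[:])
                    # Mutation: randomly perturb individual genes.
                    children.append([random.random()
                                     if random.random() < mutation_rate else g
                                     for g in child])
                pop = children
            return max(pop, key=fitness)

        best = evolve()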

  3. Multi-parameter optimization design of parabolic trough solar receiver

    International Nuclear Information System (INIS)

    Guo, Jiangfeng; Huai, Xiulan

    2016-01-01

    Highlights: • The optimal condition can be obtained by multi-parameter optimization. • Exergy and thermal efficiencies are employed as objective functions. • Exergy efficiency increases at the expense of heat losses. • The heat obtained by the working fluid increases as thermal efficiency grows. - Abstract: The design parameters of a parabolic trough solar receiver are interrelated and interact with one another, so the optimal performance of the solar receiver cannot be obtained by conventional single-parameter optimization. To overcome this shortcoming, a multi-parameter optimization of the parabolic trough solar receiver based on a genetic algorithm is employed in the present work. When the thermal efficiency is taken as the objective function, the heat obtained by the working fluid increases while the average temperature of the working fluid and the wall temperatures of the solar receiver decrease. When the exergy efficiency is taken as the objective function, the average temperature of the working fluid and the wall temperatures of the solar receiver increase while the heat obtained by the working fluid generally decreases. Assuming that the solar radiation intensity remains constant, the exergy obtained by the working fluid increases when exergy efficiency is taken as the objective function, which comes at the expense of heat losses of the solar receiver.

  4. On the role of modeling parameters in IMRT plan optimization

    International Nuclear Information System (INIS)

    Krause, Michael; Scherrer, Alexander; Thieke, Christian

    2008-01-01

    The formulation of optimization problems in intensity-modulated radiotherapy (IMRT) planning comprises the choice of various values such as function-specific parameters or constraint bounds. In current inverse planning programs that yield a single treatment plan for each optimization, it is often unclear how strongly these modeling parameters affect the resulting plan. This work investigates the mathematical concepts of elasticity and sensitivity to deal with this problem. An artificial planning case with a horseshoe-shaped target with different opening angles surrounding a circular risk structure is studied. As evaluation functions the generalized equivalent uniform dose (EUD) and the average underdosage below and average overdosage beyond certain dose thresholds are used. A single IMRT plan is calculated for an exemplary parameter configuration. The elasticity and sensitivity of each parameter are then calculated without re-optimization, and the results are numerically verified. The results show the following. (1) Elasticity can quantify the influence of a modeling parameter on the optimization result in terms of how strongly the objective function value varies under modifications of the parameter value. It can also describe how strongly the geometry of the involved planning structures affects the optimization result. (2) Based on the current parameter settings and corresponding treatment plan, sensitivity analysis can predict the optimization result for modified parameter values without re-optimization, and it can estimate the value intervals in which such predictions are valid. In conclusion, elasticity and sensitivity can provide helpful tools in inverse IMRT planning to identify the most critical parameters of an individual planning problem and to modify their values in an appropriate way.
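
    To first order, the sensitivity and elasticity concepts above reduce to derivatives of the objective with respect to a modeling parameter. A minimal sketch, assuming a scalar evaluation function F(p); the toy quadratic is illustrative, not the paper's EUD-based functions.

        def sensitivity(objective, p0, h=1e-4):
            # dF/dp at the current parameter value, estimated with a
            # central finite difference.
            return (objective(p0 + h) - objective(p0 - h)) / (2 * h)

        def elasticity(objective, p0, h=1e-4):
            # Relative change in F per relative change in p (dimensionless).
            return sensitivity(objective, p0, h) * p0 / objective(p0)

        # Toy evaluation function of one modeling parameter.
        F = lambda p: (p - 2.0) ** 2 + 1.0
        p = 1.5
        # First-order prediction of F after changing p by 0.1,
        # without re-optimization:
        F_pred = F(p) + sensitivity(F, p) * 0.1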

  5. Trafficability Analysis at Traffic Crossing and Parameters Optimization Based on Particle Swarm Optimization Method

    Directory of Open Access Journals (Sweden)

    Bin He

    2014-01-01

    Full Text Available In city traffic, it is important to improve transportation efficiency, and the spacing of a platoon should be shortened when crossing the street. The best method to deal with this problem is automatic control of the vehicles. In this paper, a mathematical model is established for the platoon's longitudinal movement, and a systematic analysis of the longitudinal control law is presented for the platoon of vehicles. However, parameter calibration for the platoon model is relatively difficult because the model is complex and the parameters are coupled with each other. The particle swarm optimization method is therefore introduced to optimize the parameters of the platoon effectively. The proposed method finds the optimal parameters based on simulations and makes the spacing of the platoon shorter.
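
    A minimal particle swarm optimization loop of the kind referred to above can be sketched as follows; the inertia and acceleration coefficients and the toy objective are illustrative assumptions, not the paper's platoon model.

        import random

        def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            # Random initial positions and zero velocities in [0, 1]^dim.
            x = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
            v = [[0.0] * dim for _ in range(n_particles)]
            pbest = [p[:] for p in x]              # personal best positions
            gbest = min(x, key=f)[:]               # global best position
            for _ in range(iters):
                for i in range(n_particles):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        # Velocity update: inertia + cognitive + social terms.
                        v[i][d] = (w * v[i][d]
                                   + c1 * r1 * (pbest[i][d] - x[i][d])
                                   + c2 * r2 * (gbest[d] - x[i][d]))
                        x[i][d] += v[i][d]
                    if f(x[i]) < f(pbest[i]):
                        pbest[i] = x[i][:]
                        if f(x[i]) < f(gbest):
                            gbest = x[i][:]
            return gbest

        # Toy objective standing in for the platoon spacing error.
        best = pso(lambda p: sum((pi - 0.3) ** 2 for pi in p), dim=4)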

  6. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    International Nuclear Information System (INIS)

    Qiu, J; Li, H. Harlod; Zhang, T; Yang, D; Ma, F

    2015-01-01

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance the 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip limit parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.

  7. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    Energy Technology Data Exchange (ETDEWEB)

    Qiu, J [Taishan Medical University, Taian, Shandong (China); Washington University in St Louis, St Louis, MO (United States); Li, H. Harlod; Zhang, T; Yang, D [Washington University in St Louis, St Louis, MO (United States); Ma, F [Taishan Medical University, Taian, Shandong (China)

    2015-06-15

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance the 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip limit parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
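
    The three-step pipeline and the entropy objective described in these two records map naturally onto standard image libraries. A rough sketch using OpenCV follows; the file name and parameter grids are assumptions, step 1 (background and noise removal) is omitted for brevity, and a brute-force grid search stands in for the authors' interior-point optimization.

        import cv2
        import numpy as np

        def entropy(img):
            # Shannon entropy of the grey-level histogram; the paper
            # maximizes this quantity to select the processing parameters.
            hist, _ = np.histogram(img, bins=256, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        def enhance(img, gauss_weight=0.5, block_size=8, clip_limit=2.0):
            # One pass of the pipeline for a fixed parameter triple.
            f = img.astype(np.float32)
            # Step 2: high-pass filter by subtracting a Gaussian-smoothed copy.
            smooth = cv2.GaussianBlur(f, (0, 0), sigmaX=5.0)
            hp = f - gauss_weight * smooth
            hp = cv2.normalize(hp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
            # Step 3: contrast limited adaptive histogram equalization.
            clahe = cv2.createCLAHE(clipLimit=clip_limit,
                                    tileGridSize=(block_size, block_size))
            return clahe.apply(hp)

        # Brute-force stand-in for the interior-point parameter search.
        img = cv2.imread("setup.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
        best = max(((w, b, c) for w in (0.3, 0.5, 0.7)
                              for b in (4, 8, 16)
                              for c in (1.0, 2.0, 4.0)),
                   key=lambda p: entropy(enhance(img, *p)))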

  8. Algorithms of control parameters selection for automation of FDM 3D printing process

    Directory of Open Access Journals (Sweden)

    Kogut Paweł

    2017-01-01

    Full Text Available The paper presents algorithms for the selection of control parameters of the Fused Deposition Modelling (FDM) technology in the case of an open printing solutions environment and the 3DGence ONE printer. The following parameters were distinguished: model mesh density, material flow speed, cooling performance, and retraction and printing speeds. These parameters are independent of the printing system in principle, but in practice they depend to a certain degree on the features of the selected printing equipment. This is the first step towards automation of the 3D printing process in FDM technology.

  9. Optimization of parameters of heat exchangers vehicles

    Directory of Open Access Journals (Sweden)

    Andrei MELEKHIN

    2014-09-01

    Full Text Available The topic is relevant to solving problems of resource economy in the heating systems of vehicles. To address this problem we have developed an integrated research method that allows the parameters of vehicle heat exchangers to be optimized. The method solves a multicriteria optimization problem using nonlinear programming software, with an array of temperatures obtained by thermography as input. The authors have developed a mathematical model of the heat exchange process on the heat exchange surfaces of the apparatus, solved the multicriteria optimization problem, and checked its adequacy against an experimental stand with visualization of the thermal fields. The work determines an optimal range of controlled parameters influencing the heat exchange process with minimal metal consumption and maximum heat output of the finned heat exchanger, establishes the regularities of the heat exchange process with generalizing dependencies for the temperature distribution on the heat-release surface of vehicle heat exchangers, and confirms the convergence of results calculated from the theoretical dependencies with those of the mathematical model.

  10. Optimizing wireless LAN for longwall coal mine automation

    Energy Technology Data Exchange (ETDEWEB)

    Hargrave, C.O.; Ralston, J.C.; Hainsworth, D.W. [Exploration & Mining Commonwealth Science & Industrial Research Organisation, Pullenvale, Qld. (Australia)

    2007-01-15

    A significant development in underground longwall coal mining automation has been achieved with the successful implementation of wireless LAN (WLAN) technology for communication on a longwall shearer. WIreless-FIdelity (Wi-Fi) was selected to meet the bandwidth requirements of the underground data network, and several configurations were installed on operating longwalls to evaluate their performance. Although these efforts demonstrated the feasibility of using WLAN technology in longwall operation, it was clear that new research and development was required in order to establish optimal full-face coverage. By undertaking an accurate characterization of the target environment, it has been possible to achieve great improvements in WLAN performance over a nominal Wi-Fi installation. This paper discusses the impact of Fresnel zone obstructions and multipath effects on radio frequency propagation and reports an optimal antenna and system configuration. Many of the lessons learned in the longwall case are immediately applicable to other underground mining operations, particularly wherever there is a high degree of obstruction from mining equipment.

  11. Reduced order modeling and parameter identification of a building energy system model through an optimization routine

    International Nuclear Information System (INIS)

    Harish, V.S.K.V.; Kumar, Arun

    2016-01-01

    Highlights: • A BES model based on first principles is developed and solved numerically. • Parameters of the lumped capacitance model are fitted using the proposed optimization routine. • Validations are shown for different types of building construction elements. • Step response excitations for outdoor air temperature and relative humidity are analyzed. - Abstract: Different control techniques together with intelligent building technology (Building Automation Systems) are used to improve the energy efficiency of buildings. In almost all control projects, it is crucial to have building energy models with high computational efficiency in order to design and tune the controllers and simulate their performance. In this paper, a set of partial differential equations is formulated accounting for energy flow within the building space. These equations are then solved as conventional finite difference equations using the Crank–Nicolson scheme. Such a higher-order model is regarded as the benchmark model. An optimization algorithm has been developed, depicted through a flowchart, which minimizes the sum squared error between the step responses of the numerical and the optimal model. The optimal model of a construction element is an RC-network model with the values of the Rs and Cs estimated using a non-linear time-invariant constrained optimization routine. The model is validated by comparing its step responses with those of two other RC-network models whose parameter values are selected based on certain criteria. Validations are shown for different types of building construction elements, viz. low, medium and heavy thermal capacity elements. Simulation results show that the optimal model follows the step responses of the numerical model more closely than the other two models.
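
    The parameter-fitting step, minimizing the sum squared error between the step responses of the benchmark and the lumped RC model, can be expressed in a few lines. A sketch with SciPy, using a synthetic first-order benchmark instead of the paper's Crank–Nicolson solution; note that in this degenerate one-node case only the product R*C is identifiable.

        import numpy as np
        from scipy.optimize import least_squares

        # Benchmark step response (synthetic here; in the paper it comes
        # from the finite-difference model of the wall element).
        t = np.linspace(0, 3600 * 24, 500)
        T_benchmark = 1.0 - np.exp(-t / 7200.0)

        def step_response(params, t):
            # First-order RC-network model: time constant tau = R * C.
            R, C = params
            return 1.0 - np.exp(-t / (R * C))

        def residuals(params):
            return step_response(params, t) - T_benchmark

        # Constrained non-linear least squares, as in the proposed routine.
        fit = least_squares(residuals, x0=[1.0, 1.0],
                            bounds=([1e-6, 1e-6], [np.inf, np.inf]))
        R_opt, C_opt = fit.x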

  12. Multivariate optimization of ILC parameters

    International Nuclear Information System (INIS)

    Bazarov, I.V.; Padamsee, H.S.

    2005-01-01

    We present results of multiobjective optimization of the International Linear Collider (ILC) which seeks to maximize luminosity at each given total cost of the linac (capital and operating costs of cryomodules, refrigeration and RF). Evolutionary algorithms allow quick exploration of optimal sets of parameters in a complicated system such as the ILC in the presence of realistic constraints, as well as investigation of various what-if scenarios in potential performance. Among the parameters we varied were the accelerating gradient and Q of the cavities (in a coupled manner following a realistic Q vs. E curve), the number of particles per bunch, the bunch length, the number of bunches in the train, etc. We find an optimum which decreases (relative to the TESLA TDR baseline) the total linac cost by 22% and the capital cost by 25% at the same luminosity of 3 × 10^38 m^-2 s^-1. For this optimum the gradient is 35 MV/m, the final spot size is 3.6 nm, and the beam power is 15.9 MW. Changing the luminosity by 10^38 m^-2 s^-1 results in a 10% change in the total linac cost and a 4% change in the capital cost. We have also explored the optimal fronts of luminosity vs. cost for several other scenarios using the same approach. (orig.)

  13. Classical algorithms for automated parameter-search methods in compartmental neural models - A critical survey based on simulations using neuron

    International Nuclear Information System (INIS)

    Mutihac, R.; Mutihac, R.C.; Cicuttin, A.

    2001-09-01

    Parameter-search methods are problem-sensitive. All methods depend on some meta-parameters of their own, which must be determined experimentally in advance. A better choice of these intrinsic parameters for a certain parameter-search method may improve its performance. Moreover, there are various implementations of the same method, which may also affect its performance. The choice of the matching (error) function has a great impact on the search process in terms of finding the optimal parameter set and minimizing the computational cost. An initial assessment of the ability of the matching function to distinguish between good and bad models is recommended before launching exhaustive computations. Different runs of a parameter-search method may result in the same optimal parameter set or in different parameter sets (in which case the model is insufficiently constrained to accurately characterize the real system). Robustness of the parameter set is expressed by the extent to which small perturbations in the parameter values do not affect the best solution. A parameter set that is not robust is unlikely to be physiologically relevant. Robustness can also be defined as the stability of the optimal parameter set to small variations of the inputs. When trying to estimate things like the minimum, or the least-squares optimal parameters of a nonlinear system, the existence of multiple local minima can cause problems with the determination of the global optimum. Techniques such as Newton's method, the simplex method and the least-squares linear Taylor differential correction technique can be useful, provided that one is lucky enough to start sufficiently close to the global minimum. All these methods suffer from the inability to distinguish a local minimum from a global one because they follow the local gradients towards the minimum, even when resetting the search direction upon getting stuck in what is presumably a local minimum. Deterministic methods based on
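
    One common pragmatic remedy for the local-minima problem described above is a multi-start strategy: run a gradient-based local search from many random initial points and keep the best result. A minimal sketch; the toy matching function is an assumption.

        import numpy as np
        from scipy.optimize import minimize

        def error(p):
            # Toy matching (error) function with many local minima.
            return np.sin(5 * p[0]) + 0.1 * p[0] ** 2

        # A single gradient-based search only finds the minimum nearest
        # its starting point, so restart from many random points.
        rng = np.random.default_rng(0)
        starts = rng.uniform(-10, 10, size=(50, 1))
        results = [minimize(error, x0) for x0 in starts]
        best = min(results, key=lambda r: r.fun)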

  14. Optimization Design of Multi-Parameters in Rail Launcher System

    Directory of Open Access Journals (Sweden)

    Yujiao Zhang

    2014-05-01

    Full Text Available Today's energy storage systems are still cumbersome, so it is useful to think about the optimization of a railgun system in order to achieve the best performance with the lowest energy input. In this paper, an optimal design method considering five parameters is proposed to improve the energy conversion efficiency of a simple railgun. In order to avoid costly trials, the field-circuit method is employed to analyze the operation of structurally different railguns with different parameters. The orthogonal test approach is used to guide the simulation in choosing better parameter combinations, as well as to reduce the calculation cost. To improve the energy conversion efficiency of electromagnetic rail launchers, more parameters must be considered in the design stage, such as the width, height and length of the rails, the distance between the rail pair, and the pulse forming inductance. However, the relationship between these parameters and the energy conversion efficiency cannot be directly described by one mathematical expression, so optimization methods must be applied. In this paper, a rail launcher with five parameters was optimized using the orthogonal test method. According to the arrangement of the orthogonal table, a better parameter combination can be obtained with less calculation. Field and circuit simulation analyses were made for different parameter values. The results show that the energy conversion efficiency of the system is increased by 71.9% after parameter optimization.

  15. A procedure for multi-objective optimization of tire design parameters

    OpenAIRE

    Nikola Korunović; Miloš Madić; Miroslav Trajanović; Miroslav Radovanović

    2015-01-01

    The identification of optimal tire design parameters for satisfying different requirements, i.e. tire performance characteristics, plays an essential role in tire design. In order to improve tire performance characteristics, formulation and solving of multi-objective optimization problem must be performed. This paper presents a multi-objective optimization procedure for determination of optimal tire design parameters for simultaneous minimization of strain energy density at two distinctive zo...

  16. An optimal generic model for multi-parameters and big data optimizing: a laboratory experimental study

    Science.gov (United States)

    Utama, D. N.; Ani, N.; Iqbal, M. M.

    2018-03-01

    Optimization is a process for finding the parameter or parameters that deliver an optimal value of an objective function. Seeking an optimal generic model for optimization is a computer science problem that has been pursued by numerous researchers. A generic model is a model that can be operated to solve any variety of optimization problem. Using an object-oriented method, a generic model for optimization was constructed. Moreover, two types of optimization method, simulated annealing and hill climbing, were used in constructing the model and then compared to find the more optimal one. The results show that both methods gave the same objective function value, while the hill-climbing-based model consumed the shortest running time.
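
    The two methods compared in the study differ only in their acceptance rule, which a short sketch makes concrete; the objective, step size and linear cooling schedule below are illustrative assumptions rather than details from the paper.

        import math
        import random

        def f(x):
            # Toy objective with several local minima.
            return math.sin(5 * x) + 0.1 * x * x

        def hill_climb(x, step=0.1, iters=5000):
            for _ in range(iters):
                cand = x + random.uniform(-step, step)
                if f(cand) < f(x):        # accept only improvements
                    x = cand
            return x

        def simulated_annealing(x, step=0.1, iters=5000, t0=1.0):
            for k in range(iters):
                t = t0 * (1 - k / iters) + 1e-9   # cooling schedule
                cand = x + random.uniform(-step, step)
                d = f(cand) - f(x)
                # Also accept some uphill moves, with prob. exp(-d/t).
                if d < 0 or random.random() < math.exp(-d / t):
                    x = cand
            return x

        x0 = random.uniform(-10, 10)
        print(hill_climb(x0), simulated_annealing(x0))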

  17. Automated Design and Optimization of Pebble-bed Reactor Cores

    International Nuclear Information System (INIS)

    Gougar, Hans D.; Ougouag, Abderrafi M.; Terry, William K.

    2010-01-01

    We present a conceptual design approach for high-temperature gas-cooled reactors using recirculating pebble-bed cores. The design approach employs PEBBED, a reactor physics code specifically designed to solve for and analyze the asymptotic burnup state of pebble-bed reactors, in conjunction with a genetic algorithm to obtain a core that maximizes a fitness value that is a function of user-specified parameters. The uniqueness of the asymptotic core state and the small number of independent parameters that define it suggest that core geometry and fuel cycle can be efficiently optimized toward a specified objective. PEBBED exploits a novel representation of the distribution of pebbles that enables efficient coupling of the burnup and neutron diffusion solvers. With this method, even complex pebble recirculation schemes can be expressed in terms of a few parameters that are amenable to modern optimization techniques. With PEBBED, the user chooses the type and range of core physics parameters that represent the design space. A set of traits, each with acceptable and preferred values expressed by a simple fitness function, is used to evaluate the candidate reactor cores. The stochastic search algorithm automatically drives the generation of core parameters toward the optimal core as defined by the user. The optimized design can then be modeled and analyzed in greater detail using higher resolution and more computationally demanding tools to confirm the desired characteristics. For this study, the design of pebble-bed high temperature reactor concepts subjected to demanding physical constraints demonstrated the efficacy of the PEBBED algorithm.

  18. Optimal correction and design parameter search by modern methods of rigorous global optimization

    International Nuclear Information System (INIS)

    Makino, K.; Berz, M.

    2011-01-01

    Frequently the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibit multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, frequently an abundance of optimization runs are carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and by using the underestimators to rigorously iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle
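
    The branch-and-bound idea sketched in this abstract, namely pruning any region whose rigorous lower bound already exceeds the best known upper bound, can be illustrated in one dimension. The following sketch uses a simple Lipschitz-constant underestimator; the paper's rigorous underestimators come instead from Taylor-model and differential-algebraic methods.

        import math

        def branch_and_bound(f, lo, hi, lipschitz, tol=1e-4):
            # Global minimum of f on [lo, hi]: with Lipschitz constant L,
            # f(m) - L*(b-a)/2 rigorously underestimates f on [a, b].
            best_x, best_f = lo, f(lo)
            boxes = [(lo, hi)]
            while boxes:
                a, b = boxes.pop()
                m = 0.5 * (a + b)
                fm = f(m)
                if fm < best_f:                 # improve the upper bound
                    best_x, best_f = m, fm
                lower = fm - lipschitz * (b - a) / 2
                # Discard boxes whose lower bound exceeds the best known
                # upper bound; otherwise split and recurse.
                if lower < best_f - tol and (b - a) > tol:
                    boxes += [(a, m), (m, b)]
            return best_x, best_f

        f = lambda x: math.sin(5 * x) + 0.1 * x * x   # multimodal test
        print(branch_and_bound(f, -10, 10, lipschitz=7.0))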

  19. Beyond bixels: Generalizing the optimization parameters for intensity modulated radiation therapy

    International Nuclear Information System (INIS)

    Markman, Jerry; Low, Daniel A.; Beavis, Andrew W.; Deasy, Joseph O.

    2002-01-01

    Intensity modulated radiation therapy (IMRT) treatment planning systems optimize fluence distributions by subdividing the fluence distribution into rectangular bixels. The algorithms typically optimize the fluence intensity directly, often leading to fluence distributions with sharp discontinuities. These discontinuities may yield difficulties in delivery of the fluence distribution, leading to inaccurate dose delivery. We have developed a method for decoupling the bixel intensities from the optimization parameters; either by introducing optimization control points from which the bixel intensities are interpolated or by parametrizing the fluence distribution using basis functions. In either case, the number of optimization search parameters is reduced from the direct bixel optimization method. To illustrate the concept, the technique is applied to two-dimensional idealized head and neck treatment plans. The interpolation algorithms investigated were nearest-neighbor, linear and cubic spline, and radial basis functions serve as the basis function test. The interpolation and basis function optimization techniques were compared against the direct bixel calculation. The number of optimization parameters were significantly reduced relative to the bixel optimization, and this was evident in the reduction of computation time of as much as 58% from the full bixel optimization. The dose distributions obtained using the reduced optimization parameter sets were very similar to the full bixel optimization when examined by dose distributions, statistics, and dose-volume histograms. To evaluate the sensitivity of the fluence calculations to spatial misalignment caused either by delivery errors or patient motion, the doses were recomputed with a 1 mm shift in each beam and compared to the unshifted distributions. Except for the nearest-neighbor algorithm, the reduced optimization parameter dose distributions were generally less sensitive to spatial shifts than the bixel
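
    A minimal sketch of the control-point idea above: optimize a few control weights and interpolate the bixel intensities from them, here with a cubic spline from SciPy. The counts and weights are illustrative, and clipping enforces non-negative fluence.

        import numpy as np
        from scipy.interpolate import CubicSpline

        n_bixels = 64
        bixel_pos = np.arange(n_bixels)

        # Optimize a handful of control-point weights instead of 64 bixels.
        ctrl_pos = np.linspace(0, n_bixels - 1, 8)
        ctrl_weights = np.array([0.0, 0.2, 0.9, 1.0, 1.0, 0.7, 0.3, 0.0])

        # The fluence of every bixel is interpolated from the control
        # points, giving a smooth profile without sharp discontinuities.
        fluence = np.clip(CubicSpline(ctrl_pos, ctrl_weights)(bixel_pos),
                          0, None)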

  20. Parameter optimization for surface flux transport models

    Science.gov (United States)

    Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.

    2017-11-01

    Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.

  1. Optimization of Nano-Process Deposition Parameters Based on Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Norlina Mohd Sabri

    2016-06-01

    Full Text Available This research focuses on the radio frequency (RF) magnetron sputtering process, a physical vapor deposition technique which is widely used in thin film production. This process requires an optimized combination of deposition parameters in order to obtain the desired thin film. The conventional method for optimizing the deposition parameters has been reported to be costly and time consuming due to its trial-and-error nature. Thus, the gravitational search algorithm (GSA) technique is proposed to solve this nano-process parameter optimization problem. In this research, the optimized parameter combination was expected to produce the desired electrical and optical properties of the thin film. The performance of GSA in this research was compared with that of Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Artificial Immune System (AIS) and Ant Colony Optimization (ACO). Based on the overall results, the GSA-optimized parameter combination generated the best electrical properties and acceptable optical properties of the thin film compared to the others. This computational experiment is expected to overcome the problem of having to conduct repetitive laboratory experiments to obtain the most optimized parameter combination. Based on this initial experiment, the adaptation of GSA to this problem could offer a more efficient and productive way of depositing quality thin film in the fabrication process.

  2. Automated prescription of oblique brain 3D magnetic resonance spectroscopic imaging.

    Science.gov (United States)

    Ozhinsky, Eugene; Vigneron, Daniel B; Chang, Susan M; Nelson, Sarah J

    2013-04-01

    Two major difficulties encountered in implementing Magnetic Resonance Spectroscopic Imaging (MRSI) in a clinical setting are limited coverage and difficulty in prescription. The goal of this project was to automate completely the process of 3D PRESS MRSI prescription, including placement of the selection box, saturation bands and shim volume, while maximizing the coverage of the brain. The automated prescription technique included acquisition of an anatomical MRI image, optimization of the oblique selection box parameters, optimization of the placement of outer-volume suppression saturation bands, and loading of the calculated parameters into a customized 3D MRSI pulse sequence. To validate the technique and compare its performance with existing protocols, 3D MRSI data were acquired from six exams from three healthy volunteers. To assess the performance of the automated 3D MRSI prescription for patients with brain tumors, the data were collected from 16 exams from 8 subjects with gliomas. This technique demonstrated robust coverage of the tumor, high consistency of prescription and very good data quality within the T2 lesion. Copyright © 2012 Wiley Periodicals, Inc.

  3. A procedure for multi-objective optimization of tire design parameters

    Directory of Open Access Journals (Sweden)

    Nikola Korunović

    2015-04-01

    Full Text Available The identification of optimal tire design parameters for satisfying different requirements, i.e. tire performance characteristics, plays an essential role in tire design. In order to improve tire performance characteristics, formulation and solving of multi-objective optimization problem must be performed. This paper presents a multi-objective optimization procedure for determination of optimal tire design parameters for simultaneous minimization of strain energy density at two distinctive zones inside the tire. It consists of four main stages: pre-analysis, design of experiment, mathematical modeling and multi-objective optimization. Advantage of the proposed procedure is reflected in the fact that multi-objective optimization is based on the Pareto concept, which enables design engineers to obtain a complete set of optimization solutions and choose a suitable tire design. Furthermore, modeling of the relationships between tire design parameters and objective functions based on multiple regression analysis minimizes computational and modeling effort. The adequacy of the proposed tire design multi-objective optimization procedure has been validated by performing experimental trials based on finite element method.
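
    Computationally, the Pareto concept used in the procedure reduces to filtering out dominated designs. A hedged sketch with random stand-in data; the real objectives are the strain energy densities at the two tire zones predicted by the regression models.

        import numpy as np

        def pareto_front(points):
            # Non-dominated subset of a set of objective vectors,
            # assuming all objectives are to be minimized.
            pts = np.asarray(points)
            keep = []
            for i, p in enumerate(pts):
                dominated = any(np.all(q <= p) and np.any(q < p)
                                for j, q in enumerate(pts) if j != i)
                if not dominated:
                    keep.append(i)
            return pts[keep]

        # Two objective values per candidate design (stand-in data).
        candidates = np.random.rand(200, 2)
        front = pareto_front(candidates)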

  4. Laboratory automation in clinical bacteriology: what system to choose?

    Science.gov (United States)

    Croxatto, A; Prod'hom, G; Faverjon, F; Rochais, Y; Greub, G

    2016-03-01

    Automation was introduced many years ago in several diagnostic disciplines such as chemistry, haematology and molecular biology. The first laboratory automation system for clinical bacteriology was released in 2006, and it rapidly proved its value by increasing productivity, allowing a continuous increase in sample volumes despite limited budgets and personnel shortages. Today, two major manufacturers, BD Kiestra and Copan, are commercializing partial or complete laboratory automation systems for bacteriology. The laboratory automation systems are rapidly evolving to provide improved hardware and software solutions to optimize laboratory efficiency. However, the complex parameters of the laboratory and automation systems must be considered to determine the best system for each given laboratory. We address several topics on laboratory automation that may help clinical bacteriologists to understand the particularities and operative modalities of the different systems. We present (a) a comparison of the engineering and technical features of the various elements composing the two different automated systems currently available, (b) the system workflows of partial and complete laboratory automation, which define the basis for laboratory reorganization required to optimize system efficiency, (c) the concept of digital imaging and telebacteriology, (d) the connectivity of laboratory automation to the laboratory information system, (e) the general advantages and disadvantages as well as the expected impacts provided by laboratory automation and (f) the laboratory data required to conduct a workflow assessment to determine the best configuration of an automated system for the laboratory activities and specificities. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Automated modal parameter estimation using correlation analysis and bootstrap sampling

    Science.gov (United States)

    Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.

    2018-02-01

    The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences by the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to
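
    The bootstrap step named above (generating several subsets by resampling the same basic dataset and identifying a model for each) is a standard primitive. A generic sketch, with a trivial stand-in for the subspace identification step:

        import numpy as np

        rng = np.random.default_rng(1)
        data = rng.normal(5.0, 0.5, size=200)   # stand-in modal estimates

        def estimate(sample):
            return sample.mean()                # stand-in identification

        # Bootstrap: re-identify the model on many resampled subsets and
        # study the spread of the resulting estimates.
        boot = [estimate(rng.choice(data, size=data.size, replace=True))
                for _ in range(1000)]
        print(np.percentile(boot, [2.5, 97.5]))  # confidence interval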

  6. Automated extraction and validation of children's gait parameters with the Kinect.

    Science.gov (United States)

    Motiian, Saeid; Pergami, Paola; Guffey, Keegan; Mancinelli, Corrie A; Doretto, Gianfranco

    2015-12-02

    Gait analysis for therapy regimen prescription and monitoring requires patients to physically access clinics with specialized equipment. The timely availability of such infrastructure at the right frequency is especially important for small children. Besides being very costly, this is a challenge for many children living in rural areas. This is why this work develops a low-cost, portable, and automated approach for in-home gait analysis, based on the Microsoft Kinect. A robust and efficient method for extracting gait parameters is introduced, which copes with the high variability of noisy Kinect skeleton tracking data experienced across the population of young children. This is achieved by temporally segmenting the data with an approach based on coupling a probabilistic matching of stride template models, learned offline, with the estimation of their global and local temporal scaling. A preliminary study of healthy children between 2 and 4 years of age was performed to analyze the accuracy, precision, repeatability, and concurrent validity of the proposed method against the GAITRite when measuring several spatial and temporal children's gait parameters. The method has excellent accuracy and good precision in segmenting temporal sequences of body joint locations into stride and step cycles. Also, the spatial and temporal gait parameters, estimated automatically, exhibit good concurrent validity with those provided by the GAITRite, as well as very good repeatability. In particular, on a range of nine gait parameters, the relative and absolute agreements were found to be good and excellent, and the overall agreements were found to be good and moderate. This work enables and validates the automated use of the Kinect for children's gait analysis in healthy subjects. In particular, the approach makes a step forward towards developing a low-cost, portable, parent-operated in-home tool for clinicians assisting young children.

  7. Integral Optimization of Systematic Parameters of Flip-Flow Screens

    Institute of Scientific and Technical Information of China (English)

    翟宏新

    2004-01-01

    A synthetic index Ks for evaluating flip-flow screens is proposed and systematically optimized with respect to the whole system. A series of optimized values of the relevant parameters is found and then compared with current industrial specifications. The results show that the optimized value of Ks approaches that of the well-known flip-flow screens in use worldwide. Some new findings on geometric and kinematic parameters are useful for improving flip-flow screens with a low Ks value, which is helpful in developing clean coal technology.

  8. Towards full automation of accelerators through computer control

    CERN Document Server

    Gamble, J; Kemp, D; Keyser, R; Koutchouk, Jean-Pierre; Martucci, P P; Tausch, Lothar A; Vos, L

    1980-01-01

    The computer control system of the Intersecting Storage Rings (ISR) at CERN has always laid emphasis on two particular operational aspects, the first being the reproducibility of machine conditions and the second that of giving the operators the possibility to work in terms of machine parameters such as the tune. Already certain phases of the operation are optimized by the control system, whilst others are automated with a minimum of manual intervention. The authors describe the present control system with emphasis on the existing automated facilities and the features of the control system which make it possible. It then discusses the steps needed to completely automate the operational procedure of accelerators. (7 refs).

  9. Towards full automation of accelerators through computer control

    International Nuclear Information System (INIS)

    Gamble, J.; Hemery, J.-Y.; Kemp, D.; Keyser, R.; Koutchouk, J.-P.; Martucci, P.; Tausch, L.; Vos, L.

    1980-01-01

    The computer control system of the Intersecting Storage Rings (ISR) at CERN has always laid emphasis on two particular operational aspects, the first being the reproducibility of machine conditions and the second that of giving the operators the possibility to work in terms of machine parameters such as the tune. Already certain phases of the operation are optimized by the control system, whilst others are automated with a minimum of manual intervention. The paper describes the present control system with emphasis on the existing automated facilities and the features of the control system which make it possible. It then discusses the steps needed to completely automate the operational procedure of accelerators. (Auth.)

  10. Parameters Optimization and Application to Glutamate Fermentation Model Using SVM

    OpenAIRE

    Zhang, Xiangsheng; Pan, Feng

    2015-01-01

    Aimed at parameter optimization in support vector machines (SVM) for glutamate fermentation modelling, a new method is developed. It optimizes the SVM parameters via an improved particle swarm optimization (IPSO) algorithm which has better global searching ability. The algorithm includes detecting and handling the local convergence and exhibits strong ability to avoid being trapped in local minima. The main steps of the method are presented. Simulation experiments demonstrate the effective...

  11. EVALUATION OF ANAEMIA USING RED CELL AND RETICULOCYTE PARAMETERS USING AUTOMATED HAEMATOLOGY ANALYSER

    Directory of Open Access Journals (Sweden)

    Vidyadhar Rao

    2016-06-01

    Full Text Available Current models of automated haematology analysers help in calculating the haemoglobin content of mature red cells and reticulocytes and the percentages of microcytic and hypochromic red cells. This has helped clinicians reach early diagnosis and management of different haemopoietic disorders such as iron deficiency anaemia, thalassaemia and anaemia of chronic disease. AIM This study was conducted using an automated haematology analyser to evaluate anaemia using red cell and reticulocyte parameters. Three types of anaemia were evaluated: iron deficiency anaemia, anaemia of long duration and anaemia associated with chronic disease and iron deficiency. MATERIALS AND METHODS Blood samples were collected from 287 adult patients with anaemia, differentiated depending upon their iron status, haemoglobinopathies and inflammatory activity: iron deficiency anaemia (n=132), anaemia of long duration (ACD, n=97) and anaemia associated with chronic disease with iron deficiency (ACD Combi, n=58). The percentages of microcytic and hypochromic red cells and the haemoglobin levels in reticulocytes and mature RBCs were calculated. The accuracy of the parameters in differentiating between the types of anaemia was analysed using receiver operating characteristic analysis. OBSERVATIONS AND RESULTS There was no difference in parameters between the iron deficiency group and the group with anaemia of chronic disease plus iron deficiency. The hypochromic red cell percentage was the best parameter for differentiating anaemia of chronic disease with or without absolute iron deficiency, with a sensitivity of 72.7% and a specificity of 70.4%. CONCLUSIONS The red cell and reticulocyte parameters were reasonably good indicators for differentiating absolute iron deficiency anaemia from anaemia of chronic disease.

  12. Multi-objective optimization in quantum parameter estimation

    Science.gov (United States)

    Gong, BeiLi; Cui, Wei

    2018-04-01

    We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.

  13. SU-G-TeP4-08: Automating the Verification of Patient Treatment Parameters

    Energy Technology Data Exchange (ETDEWEB)

    DiCostanzo, D; Ayan, A; Woollard, J; Gupta, N [The Ohio State University, Columbus, OH (United States)

    2016-06-15

    Purpose: To automate the daily verification of each patient's treatment by utilizing the trajectory log files (TLs) written by the Varian TrueBeam linear accelerator, while reducing the number of false positives, including jaw and gantry positioning errors, that are displayed in the Treatment History tab of Varian's Chart QA module. Methods: Small deviations in treatment parameters are difficult to detect in weekly chart checks but may be significant in reducing delivery errors, and would be critical if detected daily. Software was developed in house to read TLs. Multiple functions were implemented within the software that allow it to operate via a GUI to analyze TLs, or as a script to run on a regular basis. In order to determine tolerance levels for the scripted analysis, 15,241 TLs from seven TrueBeams were analyzed. The maximum error of each axis for each TL was written to a CSV file and statistically analyzed to determine the tolerance for each axis accessible in the TLs to flag for manual review. The software/scripts developed were tested by varying the tolerance values to ensure veracity. After tolerances were determined, multiple weeks of manual chart checks were performed simultaneously with the automated analysis to ensure validity. Results: The tolerance values for the major axes were determined to be 0.025 degrees for the collimator, 1.0 degree for the gantry, 0.002 cm for the y-jaws, 0.01 cm for the x-jaws, and 0.5 MU for the MU. The automated verification of treatment parameters has been in clinical use for 4 months. During that time, no errors in machine delivery of the patient treatments were found. Conclusion: The process detailed here is a viable and effective alternative to manually checking treatment parameters during weekly chart checks.
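
    A scripted check of this kind reduces to comparing per-axis maximum errors against the tolerances reported above. A hedged sketch, assuming a hypothetical CSV layout with one row per trajectory log and columns named after the axes:

        import csv

        # Tolerances per axis as reported in the abstract (degrees, cm, MU).
        TOLERANCES = {"collimator": 0.025, "gantry": 1.0,
                      "jaw_y": 0.002, "jaw_x": 0.01, "mu": 0.5}

        def flag_deliveries(csv_path):
            # Yield (field_id, axis, max_error) for every axis whose
            # maximum deviation exceeds its tolerance. Column names are
            # hypothetical, not the authors' actual file format.
            with open(csv_path, newline="") as fh:
                for row in csv.DictReader(fh):
                    for axis, tol in TOLERANCES.items():
                        err = abs(float(row[axis]))
                        if err > tol:
                            yield row["field_id"], axis, err

        for field, axis, err in flag_deliveries("max_errors.csv"):
            print(f"review {field}: {axis} deviated by {err}")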

  14. Optimization of hydraulic turbine governor parameters based on WPA

    Science.gov (United States)

    Gao, Chunyang; Yu, Xiangyang; Zhu, Yong; Feng, Baohao

    2018-01-01

    The parameters of a hydraulic turbine governor directly affect the dynamic characteristics of the hydraulic unit, thus affecting the regulation capacity and the power quality of the grid. The governor of a conventional hydropower unit is mainly a PID governor with three adjustable parameters, which are difficult to set. In order to optimize the hydraulic turbine governor, this paper proposes the wolf pack algorithm (WPA) for intelligent tuning, owing to the good global optimization capability of WPA. Compared with the traditional optimization method and the PSO algorithm, the results show that the PID controller designed by WPA achieves good dynamic quality of the hydraulic system and suppresses overshoot.

  15. RF Gun Optimization Study

    International Nuclear Information System (INIS)

    Alicia Hofler; Pavel Evtushenko

    2007-01-01

    Injector gun design is an iterative process where the designer optimizes a few nonlinearly interdependent beam parameters to achieve the required beam quality for a particle accelerator. Few tools exist to automate the optimization process and thoroughly explore the parameter space. The challenging beam requirements of new accelerator applications such as light sources and electron cooling devices drive the development of RF and SRF photo injectors. A genetic algorithm (GA) has been successfully used to optimize DC photo injector designs at Cornell University [1] and Jefferson Lab [2]. We propose to apply GA techniques to the design of RF and SRF gun injectors. In this paper, we report on the initial phase of the study, in which we model and optimize a system that has been benchmarked with beam measurements and simulation.

  16. Optimization of Loudspeaker Part Design Parameters by Air Viscosity Damping Effect

    OpenAIRE

    Yue Hu; Xilu Zhao; Takao Yamaguchi; Manabu Sasajima; Yoshio Koike; Akira Hara

    2016-01-01

    This study optimized the design parameters of a cone loudspeaker as an example of high flexibility of the product design. We developed an acoustic analysis software program that considers the impact of damping caused by air viscosity. In sound reproduction, it is difficult to optimize each parameter of the loudspeaker design. To overcome the limitation of the design problem in practice, this study presents an acoustic analysis algorithm to optimize the design parameters of the loudspeaker. Th...

  17. Estimating cellular parameters through optimization procedures: elementary principles and applications

    Directory of Open Access Journals (Sweden)

    Akatsuki Kimura

    2015-03-01

    Full Text Available Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can be identified efficiently using gradient approaches. Adding a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus obtain mechanistic insights into phenomena of interest.
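
    As a concrete instance of combining a likelihood with a stochastic search, the following sketch applies random-walk Metropolis sampling to a toy one-parameter model; the Gaussian noise model, step size and data are illustrative assumptions.

        import math
        import random

        def log_likelihood(theta, data):
            # Gaussian measurement model: likelihood falls off with the SSE.
            sse = sum((y - model(theta, x)) ** 2 for x, y in data)
            return -0.5 * sse / SIGMA ** 2

        def metropolis(data, theta0, n_steps=10000, step=0.1):
            # Random-walk Metropolis sampling of the parameter posterior.
            theta, ll = theta0, log_likelihood(theta0, data)
            samples = []
            for _ in range(n_steps):
                cand = theta + random.gauss(0, step)
                ll_cand = log_likelihood(cand, data)
                # Accept with probability min(1, exp(ll_cand - ll)).
                if math.log(random.random()) < ll_cand - ll:
                    theta, ll = cand, ll_cand
                samples.append(theta)
            return samples

        SIGMA = 0.1
        model = lambda theta, x: theta * x     # toy one-parameter model
        data = [(x, 2.0 * x + random.gauss(0, SIGMA)) for x in range(10)]
        posterior_samples = metropolis(data, theta0=0.0)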

  18. Low-dose cone-beam CT via raw counts domain low-signal correction schemes: Performance assessment and task-based parameter optimization (Part II. Task-based parameter optimization).

    Science.gov (United States)

    Gomez-Cardona, Daniel; Hayes, John W; Zhang, Ran; Li, Ke; Cruz-Bastida, Juan Pablo; Chen, Guang-Hong

    2018-05-01

    Different low-signal correction (LSC) methods have been shown to efficiently reduce noise streaks and noise level in CT to provide acceptable images at low radiation dose levels. These methods usually result in CT images with highly shift-variant and anisotropic spatial resolution and noise, which makes the parameter optimization process highly nontrivial. The purpose of this work was to develop a local task-based parameter optimization framework for LSC methods. Two well-known LSC methods, the adaptive trimmed mean (ATM) filter and the anisotropic diffusion (AD) filter, were used as examples to demonstrate how to use the task-based framework to optimize filter parameter selection. Two parameters, denoted by the set P, for each LSC method were included in the optimization problem. For the ATM filter, these parameters are the low- and high-signal threshold levels p_l and p_h; for the AD filter, the parameters are the exponents δ and γ in the brightness gradient function. The detectability index d' under the non-prewhitening (NPW) mathematical observer model was selected as the metric for parameter optimization. The optimization problem was formulated as an unconstrained optimization problem that consisted of maximizing an objective function d'_ij(P), where i and j correspond to the i-th imaging task and j-th spatial location, respectively. Since there is no explicit mathematical function to describe the dependence of d' on the set of parameters P for each LSC method, the optimization problem was solved via an experimentally measured d' map over a densely sampled parameter space. In this work, three high-contrast, high-frequency discrimination imaging tasks were defined to explore the parameter space of each of the LSC methods: a vertical bar pattern (task I), a horizontal bar pattern (task II), and a multidirectional feature (task III). Two spatial locations were considered for the analysis, a posterior region-of-interest (ROI) located within the noise streaks region
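
    Because d'_ij(P) is only available as a measured map over a sampled parameter grid, the optimization step amounts to an exhaustive lookup. A sketch with NumPy, using random numbers as a stand-in for the measured detectability map:

        import numpy as np

        # Detectability map measured over a densely sampled 2-D parameter
        # space (random stand-in; the paper measures d' experimentally).
        p1 = np.linspace(0.0, 1.0, 50)   # e.g. AD-filter exponent delta
        p2 = np.linspace(0.0, 1.0, 50)   # e.g. AD-filter exponent gamma
        d_prime = np.random.rand(p1.size, p2.size)

        # Exhaustive search of the sampled space, since no closed form
        # for d'(P) is available.
        i, j = np.unravel_index(np.argmax(d_prime), d_prime.shape)
        print("optimal parameters:", p1[i], p2[j])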

  19. Quantitative assessment of aluminium cast alloys' structural parameters to optimize its properties

    Directory of Open Access Journals (Sweden)

    L. Kuchariková

    2017-01-01

    Full Text Available The present work deals with evaluation of eutectic Si (its shape, size, and distribution), dendrite cell size and dendrite arm spacing in aluminium cast alloys which were cast into different moulds (sand and metallic). Structural parameters were evaluated using the NIS-Elements image analyser software, which provides evaluation, capture, archiving and automated measurement of structural parameters. The control of structural parameters by NIS-Elements shows that the optimum mechanical properties of aluminium cast alloys strongly depend on the distribution, morphology and size of eutectic Si and on the matrix parameters.

  20. Network optimization including gas lift and network parameters under subsurface uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Schulze-Riegert, R.; Baffoe, J.; Pajonk, O. [SPT Group GmbH, Hamburg (Germany); Badalov, H.; Huseynov, S. [Technische Univ. Clausthal, Clausthal-Zellerfeld (Germany). ITE; Trick, M. [SPT Group, Calgary, AB (Canada)

    2013-08-01

    Optimization of oil and gas field production systems poses a great challenge to field development due to complex and multiple interactions between various operational design parameters and subsurface uncertainties. Conventional analytical methods are capable of finding local optima based on single deterministic models. They are less applicable for efficiently generating alternative design scenarios in a multi-objective context. Practical implementations of robust optimization workflows integrate the evaluation of alternative design scenarios and multiple realizations of subsurface uncertainty descriptions. Production or economic performance indicators such as NPV (Net Present Value) are linked to a risk-weighted objective function definition to guide the optimization processes. This work focuses on an integrated workflow using a reservoir-network simulator coupled to an optimization framework. The work investigates the impact of design parameters while considering the physics of the reservoir, wells, and surface facilities. Subsurface uncertainties are described by well parameters such as inflow performance. Experimental design methods are used to investigate parameter sensitivities and interactions. Optimization methods are used to find optimal design parameter combinations which improve key performance indicators of the production network system. The proposed workflow is applied to a representative oil reservoir coupled to a network which is modelled by an integrated reservoir-network simulator. Gas lift is included as an explicit measure to improve production. An objective function is formulated for the net present value of the integrated system including production revenue and facility costs. Facility and gas lift design parameters are tuned to maximize NPV. Well inflow performance uncertainties are introduced with an impact on gas lift performance. Resulting variances on NPV are identified as a risk measure for the optimized system design.
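
    The risk-weighted objective described in this record can be sketched as follows; the npv function is a hypothetical stand-in for the coupled reservoir-network simulator, and the design variables, cost coefficients, and lognormal inflow uncertainty are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def npv(design, realization):
    """Hypothetical stand-in for the reservoir-network simulator: maps a
    (gas-lift rate, facility capacity) design and one subsurface realization
    to a net present value. Purely illustrative."""
    gas_lift, capacity = design
    inflow = realization                    # uncertain well inflow performance
    revenue = 80.0 * np.minimum(inflow * (1 + 0.3 * gas_lift), capacity)
    cost = 5.0 * gas_lift + 2.0 * capacity  # gas-lift and facility costs
    return revenue - cost

# Multiple realizations of the uncertain inflow description.
realizations = rng.lognormal(mean=3.0, sigma=0.4, size=100)

def risk_weighted_objective(design, lam=0.5):
    """Mean NPV penalized by its spread across realizations (risk measure)."""
    values = np.array([npv(design, r) for r in realizations])
    return values.mean() - lam * values.std()

# Coarse grid search over design alternatives.
designs = [(g, c) for g in np.linspace(0, 2, 9) for c in np.linspace(10, 60, 11)]
best = max(designs, key=risk_weighted_objective)
print("best (gas-lift rate, capacity):", best)
```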

  1. Parameters Optimization and Application to Glutamate Fermentation Model Using SVM

    Directory of Open Access Journals (Sweden)

    Xiangsheng Zhang

    2015-01-01

    Full Text Available Aimed at parameter optimization in support vector machines (SVM) for glutamate fermentation modelling, a new method is developed. It optimizes the SVM parameters via an improved particle swarm optimization (IPSO) algorithm with better global searching ability. The algorithm detects and handles local convergence and exhibits a strong ability to avoid being trapped in local minima. The essential steps of the method are described. Simulation experiments demonstrate the effectiveness of the proposed algorithm.
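
    A minimal sketch of PSO-driven SVM parameter selection is shown below, assuming a synthetic regression data set in place of fermentation data; the restart and convergence-handling logic that distinguishes the paper's improved PSO is omitted, leaving a plain PSO over log-scaled (C, gamma).

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Stand-in data for a fermentation-style regression task (illustrative).
X, y = make_regression(n_samples=120, n_features=4, noise=0.2, random_state=0)

def fitness(params):
    """Cross-validated mean squared error of an SVR for log10-scaled (C, gamma)."""
    C, gamma = 10.0 ** params
    model = SVR(C=C, gamma=gamma)
    return -cross_val_score(model, X, y, cv=3,
                            scoring="neg_mean_squared_error").mean()

# Plain PSO over the 2-D log-parameter space.
rng = np.random.default_rng(1)
n, dim, iters = 20, 2, 25
pos = rng.uniform(-3, 3, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    # Inertia plus cognitive and social attraction terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("optimized (C, gamma):", 10.0 ** gbest)
```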

  2. Optimization of MIS/IL solar cells parameters using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, K.A.; Mohamed, E.A.; Alaa, S.H. [Faculty of Engineering, Alexandria Univ. (Egypt); Motaz, M.S. [Institute of Graduate Studies and Research, Alexandria Univ. (Egypt)

    2004-07-01

    This paper presents a genetic algorithm optimization for MIS/IL solar cell parameters including doping concentration N_A, metal work function φ_m, oxide thickness d_ox, mobile charge density N_m, fixed oxide charge density N_ox and the external back bias V applied to the inversion grid. The optimization results are compared with theoretical optimization and show that the genetic algorithm can be used for determining the optimum parameters of the cell. (orig.)
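
    A compact genetic-algorithm loop of the kind this record applies might look as follows; the parameter bounds and the efficiency function are hypothetical placeholders for the actual MIS/IL cell model, which the paper evaluates physically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bounds for a few cell parameters (illustrative units).
bounds = np.array([[1e15, 1e18],   # doping concentration
                   [4.0, 5.5],     # metal work function (eV)
                   [1.0, 10.0]])   # oxide thickness (nm)

def efficiency(p):
    """Stand-in for the cell model: peaks at an arbitrary interior point."""
    target = np.array([1e17, 4.8, 3.0])
    scale = bounds[:, 1] - bounds[:, 0]
    return -np.sum(((p - target) / scale) ** 2)

def ga(pop_size=40, gens=60, mut=0.1):
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], (pop_size, len(bounds)))
    for _ in range(gens):
        fit = np.array([efficiency(p) for p in pop])
        # Tournament selection: the fitter of two random individuals survives.
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
        # Uniform crossover between consecutive parent pairs.
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped back into the feasible box.
        children += mut * (bounds[:, 1] - bounds[:, 0]) * rng.normal(size=children.shape)
        pop = np.clip(children, bounds[:, 0], bounds[:, 1])
    return pop[np.argmax([efficiency(p) for p in pop])]

print("best parameters:", ga())
```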

  3. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  4. The optimal number, type and location of devices in automation of electrical distribution networks

    Directory of Open Access Journals (Sweden)

    Popović Željko N.

    2015-01-01

    Full Text Available This paper presents a mixed integer linear programming based model for determining the optimal number, type and location of remotely controlled and supervised devices in distribution networks in the presence of distributed generators. The proposed model takes into consideration a number of different devices simultaneously (remotely controlled circuit breakers/reclosers, sectionalizing switches, remotely supervised and local fault passage indicators), along with the following: expected outage cost to consumers and producers due to momentary and long-term interruptions; automated device expenses (capital investment, installation, and annual operation and maintenance costs); and the number and expenses of crews involved in the isolation and restoration process. Furthermore, the other possible benefits of each automated device are also taken into account (e.g., benefits due to decreasing the cost of switching operations in normal conditions). The obtained numerical results emphasize the importance of considering different types of automation devices simultaneously. They also show that the proposed approach has the potential to improve the process of determining the best automation strategy in real-life distribution networks.
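
    The flavor of such a mixed integer linear program can be sketched with the PuLP modeling library; the sections, costs, budget, and the assumed relative effectiveness of reclosers versus switches below are all illustrative, and the model in the record is far richer.

```python
import pulp

sections = ["s1", "s2", "s3", "s4", "s5"]
devices = ["recloser", "switch"]
# Hypothetical avoided annual outage cost (k$) if a section is automated,
# assumed device effectiveness factors, and installed device costs (k$).
avoided = {"s1": 40, "s2": 25, "s3": 60, "s4": 15, "s5": 30}
effect = {"recloser": 1.0, "switch": 0.5}
cost = {"recloser": 20, "switch": 8}
budget = 50

x = pulp.LpVariable.dicts("place", (devices, sections), cat="Binary")
prob = pulp.LpProblem("device_placement", pulp.LpMaximize)

# Objective: maximize the avoided outage cost across all placements.
prob += pulp.lpSum(effect[d] * avoided[s] * x[d][s]
                   for d in devices for s in sections)
# At most one device per section.
for s in sections:
    prob += pulp.lpSum(x[d][s] for d in devices) <= 1
# Total investment within the automation budget.
prob += pulp.lpSum(cost[d] * x[d][s] for d in devices for s in sections) <= budget

prob.solve(pulp.PULP_CBC_CMD(msg=0))
for d in devices:
    print(d, "->", [s for s in sections if x[d][s].varValue == 1])
```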

  5. Nonlinearity Analysis and Parameters Optimization for an Inductive Angle Sensor

    Directory of Open Access Journals (Sweden)

    Lin Ye

    2014-02-01

    Full Text Available Using the finite element method (FEM) and particle swarm optimization (PSO), a nonlinearity analysis based on parameter optimization is proposed to design an inductive angle sensor. Due to the structural complexity of the sensor, understanding the influences of the structure parameters on the nonlinearity errors is a critical step in designing an effective sensor. Key parameters are selected for the design based on their effects on the nonlinearity errors. The finite element method and particle swarm optimization are combined in the sensor design to obtain the minimal nonlinearity error. In the simulation, the nonlinearity error of the optimized sensor is 0.053% in the angle range from −60° to 60°. A prototype sensor was manufactured and measured experimentally, and the experimental nonlinearity error is 0.081% in the angle range from −60° to 60°.

  6. Optimal parameters uncoupling vibration modes of oscillators

    Science.gov (United States)

    Le, K. C.; Pieper, A.

    2017-07-01

    This paper proposes a novel optimization concept for an oscillator with two degrees of freedom. By using specially defined motion ratios, we control the action of the springs on each degree of freedom of the oscillator. We aim to show that, if the potential action of the springs in one period of vibration, used as the payoff function for the conservative oscillator, is maximized among all admissible parameters and motions satisfying Lagrange's equations, then the optimal motion ratios uncouple the vibration modes. A similar result holds true for the dissipative oscillator with dampers. The application to the optimal design of vehicle suspension is discussed.

  7. Coupled Low-thrust Trajectory and System Optimization via Multi-Objective Hybrid Optimal Control

    Science.gov (United States)

    Vavrina, Matthew A.; Englander, Jacob Aldo; Ghosh, Alexander R.

    2015-01-01

    The optimization of low-thrust trajectories is tightly coupled with the spacecraft hardware. Trading trajectory characteristics against system parameters to identify viable solutions and determine mission sensitivities across discrete hardware configurations is labor intensive. Local independent optimization runs can sample the design space, but a global exploration that resolves the relationships between the system variables across multiple objectives enables a full mapping of the optimal solution space. A multi-objective, hybrid optimal control algorithm is formulated using a multi-objective genetic algorithm as an outer-loop systems optimizer around a global trajectory optimizer. The coupled problem is solved simultaneously to generate Pareto-optimal solutions in a single execution. The automated approach is demonstrated on two boulder return missions.
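
    Generating Pareto-optimal solutions, as this record describes, reduces at its core to filtering non-dominated objective vectors; a minimal sketch with hypothetical (flight time, propellant mass) pairs follows, with both objectives minimized.

```python
import numpy as np

def pareto_front(points):
    """Keep the non-dominated objective vectors (all objectives minimized)."""
    pts = np.asarray(points, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    for i, p in enumerate(pts):
        if not keep[i]:
            continue
        # Points dominated by p: no better in any objective, worse in at least one.
        dominated = np.all(pts >= p, axis=1) & np.any(pts > p, axis=1)
        keep &= ~dominated
    return pts[keep]

# Hypothetical (flight time [days], propellant mass [kg]) pairs for candidate
# trajectory/hardware combinations; values are purely illustrative.
candidates = [(300, 110), (320, 95), (280, 140), (325, 105), (330, 90)]
print(pareto_front(candidates))
```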

  8. Optimization of surface roughness parameters in dry turning

    OpenAIRE

    R.A. Mahdavinejad; H. Sharifi Bidgoli

    2009-01-01

    Purpose: The precision of machine tools on one hand and the input setup parameters on the other strongly influence the main output machining parameters, such as stock removal, tool wear ratio and surface roughness. Design/methodology/approach: Many input parameters affect the variation of these output parameters. In CNC machines, optimization of the machining process in order to predict surface roughness is very important. Findings: From this point of view...

  9. Optimization Design of Multi-Parameters in Rail Launcher System

    OpenAIRE

    Yujiao Zhang; Weinan Qin; Junpeng Liao; Jiangjun Ruan

    2014-01-01

    Today the energy storage systems are still cumbersome; it is therefore useful to consider the optimization of a railgun system in order to achieve the best performance with the lowest energy input. In this paper, an optimal design method considering 5 parameters is proposed to improve the energy conversion efficiency of a simple railgun. In order to avoid costly trials, the field-circuit method is employed to analyze the operation of different structural railguns with different paramete...

  10. Optimization of electrospinning parameters for chitosan nanofibres

    CSIR Research Space (South Africa)

    Jacobs, V

    2011-06-01

    Full Text Available Electrospinning of chitosan, a naturally occurring polysaccharide biopolymer, has been investigated. In this paper, the authors report the optimization of the electrospinning process and solution parameters using a factorial design approach to obtain...

  11. Plug-and-play monitoring and performance optimization for industrial automation processes

    CERN Document Server

    Luo, Hao

    2017-01-01

    Dr.-Ing. Hao Luo demonstrates the development of advanced plug-and-play (PnP) process monitoring and control systems for industrial automation processes. With the aid of the so-called Youla parameterization, a novel PnP process monitoring and control architecture (PnP-PMCA) with modularized components is proposed. To validate the development, a case study on an industrial rolling mill benchmark is performed, and a real-time implementation on a laboratory brushless DC motor is presented. Contents: PnP Process Monitoring and Control Architecture; Real-Time Configuration Techniques for PnP Process Monitoring; Real-Time Configuration Techniques for PnP Performance Optimization; Benchmark Study and Real-Time Implementation. Target Groups: researchers and students of automation and control engineering; practitioners in the area of industrial and production engineering. The Author: Hao Luo received the Ph.D. degree at the Institute for Automatic Control and Complex Systems (AKS) at the University of Duisburg-Essen, Germany, ...

  12. A Comparative Experimental Study on the Use of Machine Learning Approaches for Automated Valve Monitoring Based on Acoustic Emission Parameters

    Science.gov (United States)

    Ali, Salah M.; Hui, K. H.; Hee, L. M.; Salman Leong, M.; Al-Obaidi, M. A.; Ali, Y. H.; Abdelrhman, Ahmed M.

    2018-03-01

    Acoustic emission (AE) analysis has become a vital tool for initiating maintenance tasks in many industries. However, the analysis process and interpretation have been found to be highly dependent on experts. Therefore, an automated monitoring method is required to reduce the cost and time consumed in the interpretation of AE signals. This paper investigates the application of two of the most common machine learning approaches, namely the artificial neural network (ANN) and the support vector machine (SVM), to automate the diagnosis of valve faults in reciprocating compressors based on AE signal parameters. Since accuracy is an essential factor in any automated diagnostic system, this paper also provides a comparative study based on the predictive performance of ANN and SVM. AE parameter data were acquired from a single-stage reciprocating air compressor under different operational and valve conditions. ANN and SVM diagnosis models were subsequently devised by combining AE parameters of different conditions. Results demonstrate that the ANN and SVM models achieve the same prediction accuracy. However, the SVM model is recommended for automated valve condition diagnosis due to its ability to handle a high number of input features with small training data sets.
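
    A comparison of this kind can be sketched with scikit-learn as below, using a synthetic stand-in for the AE parameter data; the model sizes and hyperparameters are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for AE signal parameters (e.g. amplitude, counts, energy)
# labelled with valve-condition classes; the real study used measured data.
X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

models = {
    "ANN": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(16,),
                                       max_iter=2000, random_state=0)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)),
}

# 5-fold cross-validated accuracy as the basis for the comparison.
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```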

  13. Search Parameter Optimization for Discrete, Bayesian, and Continuous Search Algorithms

    Science.gov (United States)

    2017-09-01

    Naval Postgraduate School thesis, Monterey, California. Search applications range from simple search and rescue to prosecuting aerial/surface/submersible targets on mission. This research looks at varying the known discrete and

  14. Optimization of exposure parameters in full field digital mammography

    International Nuclear Information System (INIS)

    Williams, Mark B.; Raghunathan, Priya; More, Mitali J.; Seibert, J. Anthony; Kwan, Alexander; Lo, Joseph Y.; Samei, Ehsan; Ranger, Nicole T.; Fajardo, Laurie L.; McGruder, Allen; McGruder, Sandra M.; Maidment, Andrew D. A.; Yaffe, Martin J.; Bloomquist, Aili; Mawdsley, Gordon E.

    2008-01-01

    Optimization of exposure parameters (target, filter, and kVp) in digital mammography necessitates maximization of the image signal-to-noise ratio (SNR), while simultaneously minimizing patient dose. The goal of this study is to compare, for each of the major commercially available full field digital mammography (FFDM) systems, the impact of the selection of technique factors on image SNR and radiation dose for a range of breast thickness and tissue types. This phantom study is an update of a previous investigation and includes measurements on recent versions of two of the FFDM systems discussed in that article, as well as on three FFDM systems not available at that time. The five commercial FFDM systems tested, the Senographe 2000D from GE Healthcare, the Mammomat Novation DR from Siemens, the Selenia from Hologic, the Fischer Senoscan, and Fuji's 5000MA used with a Lorad M-IV mammography unit, are located at five different university test sites. Performance was assessed using all available x-ray target and filter combinations and nine different phantom types (three compressed thicknesses and three tissue composition types). Each phantom type was also imaged using the automatic exposure control (AEC) of each system to identify the exposure parameters used under automated image acquisition. The figure of merit (FOM) used to compare technique factors is the ratio of the square of the image SNR to the mean glandular dose. The results show that, for a given target/filter combination, in general FOM is a slowly changing function of kVp, with stronger dependence on the choice of target/filter combination. In all cases the FOM was a decreasing function of kVp at the top of the available range of kVp settings, indicating that higher tube voltages would produce no further performance improvement. For a given phantom type, the exposure parameter set resulting in the highest FOM value was system specific, depending on both the set of available target/filter combinations, and
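
    The figure of merit itself is straightforward to compute; in the sketch below the SNR and mean-glandular-dose numbers attached to each candidate technique are invented placeholders, not measured values from the study.

```python
# Hypothetical measurements for one phantom type: each candidate technique
# (target/filter/kVp) with its measured image SNR and mean glandular dose
# (MGD, mGy). All values illustrative.
techniques = {
    "Mo/Mo 26 kVp": (55.0, 1.60),
    "Mo/Rh 28 kVp": (58.0, 1.45),
    "Rh/Rh 30 kVp": (60.0, 1.50),
    "W/Rh 28 kVp":  (57.0, 1.20),
}

def fom(snr, mgd):
    """Dose-normalized figure of merit: SNR^2 per unit mean glandular dose."""
    return snr ** 2 / mgd

for name, (snr, mgd) in techniques.items():
    print(f"{name}: FOM = {fom(snr, mgd):.0f}")
best = max(techniques, key=lambda k: fom(*techniques[k]))
print("highest-FOM technique:", best)
```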

  15. Cosmological parameter estimation using particle swarm optimization

    Science.gov (United States)

    Prasad, Jayanti; Souradeep, Tarun

    2012-06-01

    Constraining theoretical models, which are represented by a set of parameters, using observational data is an important exercise in cosmology. In the Bayesian framework this is done by finding the probability distribution of parameters which best fits the observational data using sampling-based methods like Markov chain Monte Carlo (MCMC). It has been argued that MCMC may not be the best option for certain problems in which the target function (likelihood) has local maxima or very high dimensionality. Apart from this, there are cases in which we are mainly interested in finding the point in parameter space at which the probability distribution has its largest value. In this situation the problem of parameter estimation becomes an optimization problem. In the present work we show that particle swarm optimization (PSO), an artificial-intelligence-inspired population-based search procedure, can also be used for cosmological parameter estimation. Using PSO we were able to recover the best-fit Λ cold dark matter (LCDM) model parameters from the WMAP seven-year data without using any prior guess value or any other property of the probability distribution of parameters, such as the standard deviation, as is common in MCMC. We also report the results of an exercise in which we consider a binned primordial power spectrum (to increase the dimensionality of the problem) and find that a power spectrum with features gives a lower chi-square than the standard power law. Since PSO does not sample the likelihood surface in a fair way, we follow a fitting procedure to find the spread of the likelihood function around the best-fit point.

  16. Parameter Optimization of MIMO Fuzzy Optimal Model Predictive Control By APSO

    Directory of Open Access Journals (Sweden)

    Adel Taieb

    2017-01-01

    Full Text Available This paper introduces a new development for designing a Multi-Input Multi-Output (MIMO) Fuzzy Optimal Model Predictive Control (FOMPC) using the Adaptive Particle Swarm Optimization (APSO) algorithm. The aim of this proposed control, called FOMPC-APSO, is to develop an efficient algorithm that achieves good performance while guaranteeing minimal control effort. This is done by determining the optimal weights of the objective function. Our method is formulated as an optimization problem based on the APSO algorithm. The MIMO system to be controlled is modeled by a Takagi-Sugeno (TS) fuzzy system whose parameters are identified using the weighted recursive least squares method. The utility of the proposed controller is demonstrated by applying it to two nonlinear processes, a Continuous Stirred Tank Reactor (CSTR) and a tank system, where the proposed approach provides better performance compared with other methods.

  17. Real-time parameter optimization based on neural network for smart injection molding

    Science.gov (United States)

    Lee, H.; Liau, Y.; Ryu, K.

    2018-03-01

    The manufacturing industry has been facing several challenges, including sustainability, performance and quality of production. Manufacturers attempt to enhance the competitiveness of companies by implementing CPS (Cyber-Physical Systems) through the convergence of IoT (Internet of Things) and ICT (Information & Communication Technology) at the manufacturing process level. The injection molding process has a short cycle time and high productivity, features which make it suitable for mass production. In addition, this process is used to produce precise parts in various industry fields such as automobiles, optics and medical devices. The injection molding process has a mixture of discrete and continuous variables. In order to optimize quality, the variables generated in the injection molding process must be considered. Furthermore, optimal parameter setting is time-consuming work, since process parameters cannot easily be corrected during process execution. In this research, we propose a neural network based real-time process parameter optimization methodology that sets optimal process parameters by using mold data, molding machine data, and response data. This paper is expected to make an academic contribution as a novel study of parameter optimization during production, in contrast to the pre-production parameter optimization of typical studies.
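
    One plausible reading of such a methodology, sketched under assumed variable names and a synthetic process history, is a neural-network surrogate from process parameters to quality that is then searched for the best setting; the record does not specify these details, so everything below is illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic history: (melt temperature, injection pressure, cooling time)
# -> measured quality index. A real system would use logged machine data.
X = rng.uniform([200, 50, 10], [280, 150, 40], size=(400, 3))
quality = (-((X[:, 0] - 245) / 30) ** 2 - ((X[:, 1] - 95) / 40) ** 2
           - ((X[:, 2] - 22) / 10) ** 2 + rng.normal(0, 0.05, 400))

# Train the surrogate mapping process parameters to predicted quality.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(X, quality)

# On-line step: search the surrogate for the best parameter setting.
cand = rng.uniform([200, 50, 10], [280, 150, 40], size=(5000, 3))
best = cand[np.argmax(net.predict(cand))]
print("suggested (temp, pressure, cooling time):", np.round(best, 1))
```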

  18. Bacterial growth on surfaces: Automated image analysis for quantification of growth rate-related parameters

    DEFF Research Database (Denmark)

    Møller, S.; Sternberg, Claus; Poulsen, L. K.

    1995-01-01

    species-specific hybridizations with fluorescence-labelled ribosomal probes to estimate the single-cell concentration of RNA. By automated analysis of digitized images of stained cells, we determined four independent growth rate-related parameters: cellular RNA and DNA contents, cell volume, and the frequency of dividing cells in a cell population. These parameters were used to compare physiological states of liquid-suspended and surface-growing Pseudomonas putida KT2442 in chemostat cultures. The major finding is that the correlation between substrate availability and cellular growth rate found...

  19. Optimal Solution for VLSI Physical Design Automation Using Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    I. Hameem Shanavas

    2014-01-01

    Full Text Available In optimization of VLSI physical design, area minimization and interconnect length minimization are important objectives in the physical design automation of very large scale integration chips. Minimizing the area and interconnect length scales down the size of integrated chips. To meet this objective, it is necessary to find an optimal solution for physical design components such as partitioning, floorplanning, placement, and routing. This work performs optimization of benchmark circuits across these physical design components using a hierarchical approach of evolutionary algorithms. The goal of minimizing delay in partitioning, silicon area in floorplanning, layout area in placement, and wirelength in routing has an indirect influence on other criteria like power, clock, speed, and cost. A hybrid evolutionary algorithm, which includes one or more local search steps within its evolutionary cycle, is applied in each phase to achieve the minimization of area and interconnect length. This approach combines a hierarchical design like genetic algorithm with simulated annealing to attain the objective. The hybrid approach can quickly produce optimal solutions for the popular benchmarks.
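
    The hybrid idea, a local search step embedded in the evolutionary cycle, can be sketched as follows; the cost function is a generic multimodal stand-in rather than a real layout-area or wirelength model, and the loop sizes are arbitrary.

```python
import math
import random

random.seed(0)

def cost(x):
    """Stand-in layout cost with many local minima (illustrative)."""
    return sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def anneal(x, t0=5.0, cooling=0.95, steps=300):
    """Simulated-annealing local search used inside the evolutionary cycle."""
    cur, t = list(x), t0
    for _ in range(steps):
        cand = [xi + random.gauss(0, 0.3) for xi in cur]
        delta = cost(cand) - cost(cur)
        # Accept improvements always, uphill moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            cur = cand
        t *= cooling
    return cur

# Tiny GA outer loop; each generation's best individual is refined by SA.
pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(20)]
for _ in range(30):
    pop.sort(key=cost)
    pop[0] = anneal(pop[0])                      # hybrid local-search step
    elite = pop[:10]
    pop = elite + [[a + random.gauss(0, 0.5) for a in random.choice(elite)]
                   for _ in range(10)]
print("best cost:", round(cost(min(pop, key=cost)), 4))
```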

  20. Optimal Design of Shock Tube Experiments for Parameter Inference

    KAUST Repository

    Bisetti, Fabrizio; Knio, Omar

    2014-01-01

    We develop a Bayesian framework for the optimal experimental design of the shock tube experiments which are being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation energies in the reaction rate expressions.

  1. Joint optimization of collimator and reconstruction parameters in SPECT imaging for lesion quantification

    International Nuclear Information System (INIS)

    McQuaid, Sarah J; Southekal, Sudeepti; Kijewski, Marie Foley; Moore, Stephen C

    2011-01-01

    Obtaining the best possible task performance using reconstructed SPECT images requires optimization of both the collimator and reconstruction parameters. The goal of this study is to determine how to perform this optimization, namely whether the collimator parameters can be optimized solely from projection data, or whether reconstruction parameters should also be considered. In order to answer this question, and to determine the optimal collimation, a digital phantom representing a human torso with 16 mm diameter hot lesions (activity ratio 8:1) was generated and used to simulate clinical SPECT studies with parallel-hole collimation. Two approaches to optimizing the SPECT system were then compared in a lesion quantification task: sequential optimization, where collimation was optimized on projection data using the Cramer–Rao bound, and joint optimization, which simultaneously optimized collimator and reconstruction parameters. For every condition, quantification performance in reconstructed images was evaluated using the root-mean-squared-error of 400 estimates of lesion activity. Compared to the joint-optimization approach, the sequential-optimization approach favoured a poorer resolution collimator, which, under some conditions, resulted in sub-optimal estimation performance. This implies that inclusion of the reconstruction parameters in the optimization procedure is important in obtaining the best possible task performance; in this study, this was achieved with a collimator resolution similar to that of a general-purpose (LEGP) collimator. This collimator was found to outperform the more commonly used high-resolution (LEHR) collimator, in agreement with other task-based studies, using both quantification and detection tasks.

  2. Parameters optimization for magnetic resonance coupling wireless power transmission.

    Science.gov (United States)

    Li, Changsheng; Zhang, He; Jiang, Xiaohua

    2014-01-01

    Taking maximum power transmission and stable power transmission as research objectives, an optimal design for a wireless power transmission system based on magnetic resonance coupling is carried out in this paper. First, based on the mutual coupling model, mathematical expressions for the optimal coupling coefficients under the maximum power transmission objective are deduced. Thereafter, methods of enhancing power transmission stability through optimal parameter design are investigated. It is found that the sensitivity of the load power to the transmission parameters can be reduced, and the power transmission stability enhanced, by improving the system resonance frequency or the coupling coefficient between the driving/pick-up coil and the transmission/receiving coil. Experimental results conform well to the theoretical conclusions.

  3. Q-Learning Multi-Objective Sequential Optimal Sensor Parameter Weights

    Directory of Open Access Journals (Sweden)

    Raquel Cohen

    2016-04-01

    Full Text Available The goal of our solution is to deliver trustworthy decision-making analysis tools which evaluate situations and the potential impacts of decisions through acquired information, and which add efficiency to continuing mission operations and analyst work. We discuss the use of cooperation in modeling and simulation and show quantitative results for design choices in resource allocation. The key contribution of our paper is to combine remote sensing decision making with Nash equilibrium for sensor parameter weighting optimization. By calculating all Nash equilibrium possibilities per period, optimization of sensor allocation is achieved for higher overall system efficiency. Our tool provides insight into the most important or optimal weights for sensor parameters and can be used to tune those weights efficiently.
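
    For the pure-strategy case, computing the Nash equilibria of a small sensor-weighting game amounts to checking unilateral-deviation conditions; the 2x2 payoff matrices below are notional, standing in for the detection utilities of two sensors each choosing between two parameter weightings.

```python
import numpy as np

# Hypothetical 2x2 game: entry [i, j] is the payoff when sensor 1 plays
# weighting i and sensor 2 plays weighting j. Values are invented.
A = np.array([[3, 1],    # payoffs to sensor 1
              [2, 4]])
B = np.array([[2, 1],    # payoffs to sensor 2
              [3, 4]])

# A profile (i, j) is a pure-strategy Nash equilibrium when neither player
# can gain by unilaterally deviating to their other strategy.
equilibria = []
for i in range(2):
    for j in range(2):
        if A[i, j] >= A[1 - i, j] and B[i, j] >= B[i, 1 - j]:
            equilibria.append((i, j))
print("pure-strategy Nash equilibria:", equilibria)
```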

  4. Improved Artificial Fish Algorithm for Parameters Optimization of PID Neural Network

    OpenAIRE

    Jing Wang; Yourui Huang

    2013-01-01

    To address problems of the traditional BP algorithm in optimizing PID neural network parameters, such as the difficulty of determining initial weights and training that easily becomes trapped in local minima, this paper proposes a new method for parameter optimization of PID neural networks based on an improved artificial fish algorithm. This improved artificial fish algorithm uses a composite adaptive artificial fish algorithm based on optimal artificial fish and nearest artificial fi...

  5. Cosmological parameter estimation using Particle Swarm Optimization

    Science.gov (United States)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.

  6. Cosmological parameter estimation using Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Prasad, J; Souradeep, T

    2014-01-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.

  7. An optimization method for parameters in reactor nuclear physics

    International Nuclear Information System (INIS)

    Jachic, J.

    1982-01-01

    An optimization method for two basic problems of reactor physics was developed. The first is the optimization of the plutonium critical mass and the breeding ratio for fast reactors as a function of the radial enrichment distribution of the fuel, used as the control parameter. The second is the maximization of plutonium generation and burnup by optimization of the temporal power distribution. (E.G.) [pt

  8. Oyster Creek cycle 10 nodal model parameter optimization study using PSMS

    International Nuclear Information System (INIS)

    Dougher, J.D.

    1987-01-01

    The power shape monitoring system (PSMS) is an on-line core monitoring system that uses a three-dimensional nodal code (NODE-B) to perform nodal power calculations and compute thermal margins. The PSMS contains a parameter optimization function that improves the ability of NODE-B to accurately monitor core power distributions. This function iterates on the model normalization parameters (albedos and mixing factors) to obtain the best agreement between predicted and measured traversing in-core probe (TIP) readings on a statepoint-by-statepoint basis. Following several statepoint optimization runs, an average set of optimized normalization parameters can be determined and implemented into the current or subsequent cycle core model for on-line core monitoring. A statistical analysis of 19 high-power steady-state statepoints throughout Oyster Creek cycle 10 operation has shown consistently poor virgin model performance. The normalization parameters used in the cycle 10 NODE-B model were based on a cycle 8 study, which evaluated only Exxon fuel types. The introduction of General Electric (GE) fuel into cycle 10 (172 assemblies) was a significant fuel/core design change that could have altered the optimum set of normalization parameters. Based on the need to evaluate a potential change in the model normalization parameters for cycle 11, and in an attempt to account for the poor cycle 10 model performance, a parameter optimization study was performed

  9. GEMSFITS: Code package for optimization of geochemical model parameters and inverse modeling

    International Nuclear Information System (INIS)

    Miron, George D.; Kulik, Dmitrii A.; Dmytrieva, Svitlana V.; Wagner, Thomas

    2015-01-01

    Highlights: • Tool for generating consistent parameters against various types of experiments. • Handles a large number of experimental data and parameters (is parallelized). • Has a graphical interface and can perform statistical analysis on the parameters. • Tested on fitting the standard state Gibbs free energies of aqueous Al species. • Example on fitting interaction parameters of mixing models and thermobarometry. - Abstract: GEMSFITS is a new code package for fitting internally consistent input parameters of GEM (Gibbs Energy Minimization) geochemical-thermodynamic models against various types of experimental or geochemical data, and for performing inverse modeling tasks. It consists of the gemsfit2 (parameter optimizer) and gfshell2 (graphical user interface) programs, both accessing a NoSQL database, all developed with flexibility, generality, efficiency, and user friendliness in mind. The parameter optimizer gemsfit2 includes the GEMS3K chemical speciation solver (http://gems.web.psi.ch/GEMS3K), which features a comprehensive suite of non-ideal activity and equation-of-state models of solution phases (aqueous electrolyte, gas and fluid mixtures, solid solutions, (ad)sorption). The gemsfit2 code uses the robust open-source NLopt library for parameter fitting, which provides a selection between several nonlinear optimization algorithms (global, local, gradient-based), and supports large-scale parallelization. The gemsfit2 code can also perform comprehensive statistical analysis of the fitted parameters (basic statistics, sensitivity, Monte Carlo confidence intervals), thus supporting the user with powerful tools for evaluating the quality of the fits and the physical significance of the model parameters. The gfshell2 code provides menu-driven setup of optimization options (data selection, properties to fit and their constraints, measured properties to compare with computed counterparts, and statistics). The practical utility, efficiency, and

  10. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and to create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on the infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  11. The Spiral Discovery Network as an Automated General-Purpose Optimization Tool

    Directory of Open Access Journals (Sweden)

    Adam B. Csapo

    2018-01-01

    Full Text Available The Spiral Discovery Method (SDM) was originally proposed as a cognitive artifact for dealing with black-box models that are dependent on multiple inputs with nonlinear and/or multiplicative interaction effects. Besides directly helping to identify functional patterns in such systems, SDM also simplifies their control through its characteristic spiral structure. In this paper, a neural network-based formulation of SDM is proposed together with a set of automatic update rules that makes it suitable for both semiautomated and automated forms of optimization. The behavior of the generalized SDM model, referred to as the Spiral Discovery Network (SDN), and its applicability to nondifferentiable nonconvex optimization problems are elucidated through simulation. Based on the simulation, the case is made that its applicability would be worth investigating in all areas where the default approach of gradient-based backpropagation is used today.

  12. Normalization in Unsupervised Segmentation Parameter Optimization: A Solution Based on Local Regression Trend Analysis

    Directory of Open Access Journals (Sweden)

    Stefanos Georganos

    2018-02-01

    Full Text Available In object-based image analysis (OBIA), the appropriate parametrization of segmentation algorithms is crucial for obtaining satisfactory image classification results. One of the ways this can be done is by unsupervised segmentation parameter optimization (USPO). A popular USPO method does this through the optimization of a “global score” (GS), which minimizes intrasegment heterogeneity and maximizes intersegment heterogeneity. However, the calculated GS values are sensitive to the minimum and maximum ranges of the candidate segmentations. Previous research proposed the use of fixed minimum/maximum threshold values for the intrasegment/intersegment heterogeneity measures to deal with the sensitivity of user-defined ranges, but the performance of this approach has not been investigated in detail. In the context of a very-high-resolution remote sensing urban application, we show the limitations of the fixed threshold approach, both in a theoretical and applied manner, and instead propose a novel solution to identify the range of candidate segmentations using local regression trend analysis. We found that the proposed approach showed significant improvements over the use of fixed minimum/maximum values, is less subjective than user-defined threshold values and, thus, can be of merit for a fully automated procedure and big data applications.

  13. Bulbous Bow Shape Optimization

    OpenAIRE

    Blanchard , Louis; Berrini , Elisa; Duvigneau , Régis; Roux , Yann; Mourrain , Bernard; Jean , Eric

    2013-01-01

    The aim of this study is to prove the usefulness of a bulbous bow for a fishing vessel, in terms of drag reduction, using an automated shape optimization procedure including hydrodynamic simulations. A bulbous bow is an appendage that is known to reduce the drag, thanks to its influence on the bow wave system. However, the definition of the geometrical parameters of the bulb, such as its length and thickness, is not intuitive, as both parameters are coupled with regard...

  14. Combustion Model and Control Parameter Optimization Methods for Single Cylinder Diesel Engine

    Directory of Open Access Journals (Sweden)

    Bambang Wahono

    2014-01-01

    Full Text Available This research presents a method to construct a combustion model and to optimize control parameters of a diesel engine in order to develop a model-based control system. The purpose of constructing the model is to manage control parameters appropriately so as to obtain target values of fuel consumption and emissions as the engine outputs. A stepwise method considering multicollinearity was applied to construct the combustion model as a polynomial model. Using experimental data from a single-cylinder diesel engine, models of power, BSFC, NOx, and soot for multiple-injection diesel engines were built. The proposed method successfully developed models that describe the control parameters in relation to the engine outputs. Although many control devices can be mounted on a diesel engine, an optimization technique is required to find optimal engine operating conditions efficiently, beside the existing development of individual emission control methods. Particle swarm optimization (PSO) was used to calculate control parameters that optimize fuel consumption and emissions based on the model. The proposed method is able to calculate control parameters efficiently to optimize the evaluation items based on the model. Finally, the model together with PSO was compiled onto a microcontroller.
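
    A sketch of the model-then-optimize pattern follows, with a plain quadratic polynomial standing in for the stepwise-selected combustion model and synthetic engine-map data in place of test-bench measurements; note that the record uses PSO, whereas this sketch substitutes SciPy's differential evolution as the global optimizer.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Synthetic engine map: (injection timing [deg], EGR rate [%]) -> noisy
# BSFC-like response. Real work would use measured engine data.
X = rng.uniform([-20, 0], [5, 40], size=(200, 2))
bsfc = (210 + 0.08 * (X[:, 0] + 8) ** 2 + 0.02 * (X[:, 1] - 25) ** 2
        + rng.normal(0, 0.5, 200))

# Quadratic polynomial model fitted to the data (stand-in for stepwise fit).
poly = PolynomialFeatures(degree=2)
model = LinearRegression().fit(poly.fit_transform(X), bsfc)

def predicted_bsfc(p):
    """Model prediction for a candidate (timing, EGR) setting."""
    return model.predict(poly.transform(np.atleast_2d(p)))[0]

# Global search over the control-parameter box for minimum predicted BSFC.
res = differential_evolution(predicted_bsfc, bounds=[(-20, 5), (0, 40)], seed=1)
print("optimal (timing, EGR):", np.round(res.x, 2), "BSFC:", round(res.fun, 1))
```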

  15. METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS

    Directory of Open Access Journals (Sweden)

    V. Panteleev Andrei

    2017-01-01

    Full Text Available The article considers the use of metaheuristic methods of constrained global optimization ("Big Bang - Big Crunch", "Fireworks Algorithm", "Grenade Explosion Method") for estimating the parameters of dynamic systems described by algebraic-differential equations. Parameter estimation is based on observations of the mathematical model's behavior. Parameter values are obtained by minimizing a criterion that describes the total squared error of the state vector coordinates from the observed ones, measured precisely at different moments of time. Parallelepiped-type restrictions are imposed on the parameter values. The metaheuristic methods of constrained global optimization used for solving such problems do not guarantee an exact result, but allow a solution of rather good quality to be obtained in an acceptable amount of time. The algorithm for applying the metaheuristic methods is given. Alongside explicit methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving ordinary differential equation systems. Two examples of the parameter estimation problem are given, differing in their mathematical models. In the first example, a linear mathematical model describes the change of chemical reaction parameters; in the second, a nonlinear mathematical model describes predator-prey dynamics, which characterize the changes in both populations. For each of the examples, calculation results from all three optimization methods are given, along with recommendations on choosing method parameters. The obtained numerical results demonstrate the efficiency of the proposed approach. The estimated parameter points differ only slightly from the best known solutions, which were obtained by other means. To refine the results one should apply hybrid schemes that combine classical optimization methods of zero, first and second orders and
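
    A minimal version of this parameter estimation task, using the predator-prey example and substituting SciPy's differential evolution for the metaheuristics named in the record, might read as follows; the true parameters, noise level, and bounds are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def rhs(t, z, a, b, c, d):
    """Lotka-Volterra predator-prey model; (a, b, c, d) are sought."""
    x, y = z
    return [a * x - b * x * y, c * x * y - d * y]

# Synthetic "observations" from known true parameters plus noise.
true = (1.0, 0.2, 0.1, 0.8)
t_obs = np.linspace(0, 15, 40)
sol = solve_ivp(rhs, (0, 15), [10, 5], t_eval=t_obs, args=true)
rng = np.random.default_rng(0)
z_obs = sol.y + rng.normal(0, 0.2, sol.y.shape)

def criterion(params):
    """Total squared error between observed and simulated state coordinates."""
    sim = solve_ivp(rhs, (0, 15), [10, 5], t_eval=t_obs, args=tuple(params))
    if not sim.success or sim.y.shape != z_obs.shape:
        return 1e12          # penalize failed integrations
    return np.sum((sim.y - z_obs) ** 2)

# Box (parallelepiped) constraints on the parameters; stochastic global search.
res = differential_evolution(criterion, bounds=[(0.1, 2)] * 4, seed=1, maxiter=30)
print("estimated parameters:", np.round(res.x, 3))
```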

  16. Optimal construction parameters of electrosprayed trilayer organic photovoltaic devices

    International Nuclear Information System (INIS)

    Shah, S K; Ali, M; Gunnella, R; Abbas, M; Hirsch, L

    2014-01-01

    A detailed investigation of the optimal set of parameters employed in multilayer device fabrication obtained through successive electrospray deposited layers is reported. In this scheme, the donor/acceptor (D/A) bulk heterojunction layer is sandwiched between two thin stacked layers of individual donor and acceptor materials. The stacked layers geometry with optimal thicknesses plays a decisive role in improving operation characteristics. Among the parameters of the multilayer organic photovoltaics device, the D/A concentration ratio, blend thickness and stacking layers thicknesses are optimized. Other parameters, such as thermal annealing and the role of top metal contacts, are also discussed. Internal photon to current efficiency is found to attain a strong response in the 500 nm optical region for the most efficient device architectures. Such an observation indicates a clear interplay between photon harvesting of active layers and transport by ancillary stacking layers, opening up the possibility to engineer both the material fine structure and the device architecture to obtain the best photovoltaic response from a complex organic heterostructure. (paper)

  17. Optimization of automation: I. Estimation method of cognitive automation rates reflecting the effects of automation on human operators in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Seong, Poong Hyun

    2014-01-01

    Highlights: • We propose an estimation method of the automation rate by taking the advantages of automation as the estimation measures. • We conduct the experiments to examine the validity of the suggested method. • The higher the cognitive automation rate is, the greater the decreased rate of the working time will be. • The usefulness of the suggested estimation method is proved by statistical analyses. - Abstract: Since automation was introduced in various industrial fields, the concept of the automation rate has been used to indicate the inclusion proportion of automation among all work processes or facilities. Expressions of the inclusion proportion of automation are predictable, as is the ability to express the degree of the enhancement of human performance. However, many researchers have found that a high automation rate does not guarantee high performance. Therefore, to reflect the effects of automation on human performance, this paper proposes a new estimation method of the automation rate that considers the effects of automation on human operators in nuclear power plants (NPPs). Automation in NPPs can be divided into two types: system automation and cognitive automation. Some general descriptions and characteristics of each type of automation are provided, and the advantages of automation are investigated. The advantages of each type of automation are used as measures of the estimation method of the automation rate. One advantage was found to be a reduction in the number of tasks, and another was a reduction in human cognitive task loads. The system and the cognitive automation rate were proposed as quantitative measures by taking advantage of the aforementioned benefits. To quantify the required human cognitive task loads and thus suggest the cognitive automation rate, Conant’s information-theory-based model was applied. The validity of the suggested method, especially as regards the cognitive automation rate, was proven by conducting

  18. Automated titration method for use on blended asphalts

    Science.gov (United States)

    Pauli, Adam T [Cheyenne, WY; Robertson, Raymond E [Laramie, WY; Branthaver, Jan F [Chatham, IL; Schabron, John F [Laramie, WY

    2012-08-07

    A system for determining parameters and compatibility of a substance such as an asphalt or other petroleum substance uses titration to determine one or more flocculation occurrences with high accuracy, and is especially applicable to the determination or use of Heithaus parameters and the optimal mixing of various asphalt stocks. In a preferred embodiment, automated titration in an oxygen-excluding system, further using spectrophotometric analysis of solution turbidity, is presented. A reversible titration technique enabling in-situ titration measurement of various solution concentrations is also presented.

  19. Experimental optimization of a direct injection homogeneous charge compression ignition gasoline engine using split injections with fully automated microgenetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Canakci, M. [Kocaeli Univ., Izmit (Turkey); Reitz, R.D. [Wisconsin Univ., Dept. of Mechanical Engineering, Madison, WI (United States)

    2003-03-01

    Homogeneous charge compression ignition (HCCI) is receiving attention as a new low-emission engine concept. Little is known about the optimal operating conditions for this engine operation mode. Combustion under homogeneous, low equivalence ratio conditions results in modest temperature combustion products, containing very low concentrations of NO{sub x} and particulate matter (PM) as well as providing high thermal efficiency. However, this combustion mode can produce higher HC and CO emissions than those of conventional engines. An electronically controlled Caterpillar single-cylinder oil test engine (SCOTE), originally designed for heavy-duty diesel applications, was converted to an HCCI direct injection (DI) gasoline engine. The engine features an electronically controlled low-pressure direct injection gasoline (DI-G) injector with a 60 deg spray angle that is capable of multiple injections. The use of double injection was explored for emission control and the engine was optimized using fully automated experiments and a microgenetic algorithm optimization code. The variables changed during the optimization include the intake air temperature, start of injection timing and the split injection parameters (per cent mass of fuel in each injection, dwell between the pulses). The engine performance and emissions were determined at 700 r/min with a constant fuel flowrate at 10 MPa fuel injection pressure. The results show that significant emissions reductions are possible with the use of optimal injection strategies. (Author)

  20. Optimization of a centrifugal compressor impeller using CFD: the choice of simulation model parameters

    Science.gov (United States)

    Neverov, V. V.; Kozhukhov, Y. V.; Yablokov, A. M.; Lebedev, A. A.

    2017-08-01

    Nowadays optimization using computational fluid dynamics (CFD) plays an important role in the design process of turbomachines. However, for successful and productive optimization it is necessary to define the simulation model correctly and rationally. The article deals with the choice of grid and computational domain parameters for optimization of centrifugal compressor impellers using computational fluid dynamics. Searching for and applying optimal parameters of the grid model, the computational domain and the solver settings allows engineers to carry out high-accuracy modelling and to use computational capability effectively. The presented research was conducted using the Numeca Fine/Turbo package with the Spalart-Allmaras and Shear Stress Transport turbulence models. Two radial impellers were investigated: a high-pressure impeller at ψT=0.71 and a low-pressure impeller at ψT=0.43. The following parameters of the computational model were considered: the location of the inlet and outlet boundaries, the type of mesh topology, the mesh size and the mesh parameter y+. Results of the investigation demonstrate that the choice of optimal parameters leads to a significant reduction of the computational time. Optimal parameters, in comparison with non-optimal but visually similar parameters, can reduce the calculation time up to 4 times. Besides, it is established that some parameters have a major impact on the result of modelling.

  1. Improving the automated optimization of profile extrusion dies by applying appropriate optimization areas and strategies

    Science.gov (United States)

    Hopmann, Ch.; Windeck, C.; Kurth, K.; Behr, M.; Siegbert, R.; Elgeti, S.

    2014-05-01

    The rheological design of profile extrusion dies is one of the most challenging tasks in die design. As no analytical solution is available, the quality of and the development time for a new design highly depend on the empirical knowledge of the die manufacturer. Usually, prior to the start of production, several time-consuming, iterative running-in trials need to be performed to check the profile accuracy, and the die geometry is reworked. An alternative is numerical flow simulation. Such simulations enable calculation of the melt flow through a die so that the quality of the flow distribution can be analyzed. The objective of a current research project is to improve the automated optimization of profile extrusion dies. Special emphasis is put on choosing a convenient starting geometry and parameterization, which allow for the necessary deformations. In this work, three commonly used design features are examined with regard to their influence on the optimization results. Based on the results, a strategy is derived to select the most relevant areas of the flow channels for the optimization. For these characteristic areas, recommendations are given concerning an efficient parameterization setup that still enables adequate deformations of the flow channel geometry. As an example, this approach is applied to an L-shaped profile with different wall thicknesses. The die is optimized automatically and simulation results are qualitatively compared with experimental results. Furthermore, the strategy is applied to a complex extrusion die for a floor skirting profile to prove its universal adaptability.

  2. Complicated problem solution techniques in optimal parameter searching

    International Nuclear Information System (INIS)

    Gergel', V.P.; Grishagin, V.A.; Rogatneva, E.A.; Strongin, R.G.; Vysotskaya, I.N.; Kukhtin, V.V.

    1992-01-01

    An algorithm is presented for a global search for the numerical solution of multidimensional multiextremal multicriteria optimization problems with complicated constraints. Boundedness of changes in the object characteristics is assumed for restricted changes of its parameters (Lipschitz condition). The algorithm was realized as a computer code, and the program was used to solve various applied optimization problems in practice. 10 refs.; 3 figs

  3. Investigation and validation of optimal cutting parameters for least ...

    African Journals Online (AJOL)

    The cutting parameters were analyzed and optimized using the Box-Behnken procedure in the DESIGN EXPERT environment. The effects of the process parameters on the output variable were predicted, indicating that the cutting speed plays the most significant role in producing the least surface roughness, followed by feed and ...

  4. Guiding automated left ventricular chamber segmentation in cardiac imaging using the concept of conserved myocardial volume.

    Science.gov (United States)

    Garson, Christopher D; Li, Bing; Acton, Scott T; Hossack, John A

    2008-06-01

    The active surface technique using gradient vector flow allows semi-automated segmentation of ventricular borders. The accuracy of the algorithm depends on the optimal selection of several key parameters. We investigated the use of conservation of myocardial volume for quantitative assessment of each of these parameters using synthetic and in vivo data. We predicted that for a given set of model parameters, strong conservation of volume would correlate with accurate segmentation. The metric was most useful when applied to the gradient vector field weighting and temporal step-size parameters, but less effective in guiding an optimal choice of the active surface tension and rigidity parameters.

  5. Structural parameter optimization design for Halbach permanent maglev rail

    International Nuclear Information System (INIS)

    Guo, F.; Tang, Y.; Ren, L.; Li, J.

    2010-01-01

    Maglev rail is an important part of the magnetic levitation launch system, and reducing its manufacturing cost is the key problem for the development of such systems. The Halbach permanent array has the advantage that the fundamental spatial field is cancelled on one side of the array while the field on the other side is enhanced, so using this array in the design of a high-temperature superconducting permanent maglev rail can improve the surface magnetic field and the levitation force. In order to make the best use of the Nd-Fe-B (NdFeB) material and reduce the cost of the maglev rail, the effects of the rail's structural parameters on the levitation force and on the utilization rate of the NdFeB material are analyzed, and the optimal ranges of these structural parameters are obtained. The mutual influence of these parameters is also discussed. An optimization method for these structural parameters is proposed at the end of the paper.

  6. Structural parameter optimization design for Halbach permanent maglev rail

    Energy Technology Data Exchange (ETDEWEB)

    Guo, F., E-mail: guofang19830119@163.co [R and D Center of Applied Superconductivity, Huazhong University of Science and Technology, Wuhan 430074 (China); Tang, Y.; Ren, L.; Li, J. [R and D Center of Applied Superconductivity, Huazhong University of Science and Technology, Wuhan 430074 (China)

    2010-11-01

    Maglev rail is an important part of the magnetic levitation launch system, and reducing its manufacturing cost is the key problem for the development of such systems. The Halbach permanent array has the advantage that the fundamental spatial field is cancelled on one side of the array while the field on the other side is enhanced, so using this array in the design of a high-temperature superconducting permanent maglev rail can improve the surface magnetic field and the levitation force. In order to make the best use of the Nd-Fe-B (NdFeB) material and reduce the cost of the maglev rail, the effects of the rail's structural parameters on the levitation force and on the utilization rate of the NdFeB material are analyzed, and the optimal ranges of these structural parameters are obtained. The mutual influence of these parameters is also discussed. An optimization method for these structural parameters is proposed at the end of the paper.

  7. Optimal Design of Shock Tube Experiments for Parameter Inference

    KAUST Repository

    Bisetti, Fabrizio

    2014-01-06

    We develop a Bayesian framework for the optimal experimental design of shock tube experiments carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential factors and the activation energies in the reaction rate expressions; the control parameters are the initial mixture composition and the temperature. The approach is based on first building a polynomial-based surrogate model for the observables relevant to the shock tube experiments. Based on these surrogates, a novel MAP-based approach is used to estimate the expected information gain of the proposed experiments and to select the experimental set-ups yielding the optimal expected information gains. The validity of the approach is tested using synthetic data generated by sampling the PC surrogate. We finally outline a methodology for validation using actual laboratory experiments, and for extending the experimental design methodology to cases where the control parameters are noisy.
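
    The expected information gain at the heart of this design problem can be estimated for a toy one-parameter model with a nested Monte Carlo sum, EIG(d) = E_y[log p(y|theta) - log p(y)]. The Python sketch below is only a stand-in for the paper's polynomial-chaos-surrogate, MAP-based estimator; the Gaussian prior, the noise level and the Arrhenius-like observable are all illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def expected_information_gain(d, n_outer=400, n_inner=400, sigma=0.1):
            # Toy observable: activation-energy-like parameter divided by a
            # temperature-like control d.
            def model(theta):
                return theta / d

            theta = rng.normal(1.0, 0.2, n_outer)          # prior draws
            y = model(theta) + rng.normal(0.0, sigma, n_outer)

            # log p(y_i | theta_i), dropping constants that cancel below.
            log_lik = -0.5 * ((y - model(theta)) / sigma) ** 2
            # log p(y_i) ~ log mean_j p(y_i | theta_j) over fresh prior draws.
            theta_in = rng.normal(1.0, 0.2, n_inner)
            diff = y[:, None] - theta_in[None, :] / d
            log_evid = np.log(np.mean(np.exp(-0.5 * (diff / sigma) ** 2), axis=1))
            return float(np.mean(log_lik - log_evid))

        # Pick the control value with the largest estimated information gain.
        designs = np.linspace(0.5, 2.0, 16)
        best_design = max(designs, key=expected_information_gain)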

  8. Automated inference procedure for the determination of cell growth parameters

    Science.gov (United States)

    Harris, Edouard A.; Koh, Eun Jee; Moffat, Jason; McMillen, David R.

    2016-01-01

    The growth rate and carrying capacity of a cell population are key to the characterization of the population's viability and to the quantification of its responses to perturbations such as drug treatments. Accurate estimation of these parameters necessitates careful analysis. Here, we present a rigorous mathematical approach for the robust analysis of cell count data, in which all the experimental stages of the cell counting process are investigated in detail with the machinery of Bayesian probability theory. We advance a flexible theoretical framework that permits accurate estimates of the growth parameters of cell populations and of the logical correlations between them. Moreover, our approach naturally produces an objective metric of avoidable experimental error, which may be tracked over time in a laboratory to detect instrumentation failures or lapses in protocol. We apply our method to the analysis of cell count data in the context of a logistic growth model by means of a user-friendly computer program that automates this analysis, and present some samples of its output. Finally, we note that a traditional least squares fit can provide misleading estimates of parameter values, because it ignores available information with regard to the way in which the data have actually been collected.
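
    As a point of comparison for the Bayesian treatment described here, a plain least-squares fit of the logistic model can be written in a few lines of Python; fitting the log-counts is one simple way to acknowledge that counting error grows with the count itself, which a naive fit ignores. The data and parameter values below are synthetic assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, n0, r, K):
            # Logistic growth: n(t) = K / (1 + (K/n0 - 1) * exp(-r t)),
            # with growth rate r and carrying capacity K.
            return K / (1.0 + (K / n0 - 1.0) * np.exp(-r * t))

        # Synthetic cell counts with multiplicative (lognormal) noise.
        t = np.linspace(0.0, 48.0, 13)                     # hours
        rng = np.random.default_rng(1)
        counts = logistic(t, 5e4, 0.15, 2e6) * rng.lognormal(0.0, 0.1, t.size)

        # Least squares on the log scale; p0 is a rough initial guess.
        popt, pcov = curve_fit(lambda t, n0, r, K: np.log(logistic(t, n0, r, K)),
                               t, np.log(counts), p0=[1e5, 0.1, 1e6])
        n0_hat, r_hat, K_hat = popt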

  9. Optimization of process parameters for spark plasma sintering of nano structured SAF 2205 composite

    Directory of Open Access Journals (Sweden)

    Samuel Ranti Oke

    2018-04-01

    Full Text Available This research optimized the spark plasma sintering (SPS) process parameters, namely sintering temperature, holding time and heating rate, for the development of a nano-structured duplex stainless steel (SAF 2205 grade) reinforced with titanium nitride (TiN). The mixed powders were sintered using an automated spark plasma sintering machine (model HHPD-25, FCT GmbH, Germany). Characterization was performed using X-ray diffraction (XRD) and scanning electron microscopy (SEM). The density and hardness of the composites were investigated. The XRD results showed the formation of FeN0.068. SEM/EDS revealed the presence of nano-sized TiN particles segregated at the grain boundaries of the duplex matrix. A decrease in hardness and densification was observed when the sintering temperature and heating rate reached 1200 °C and 150 °C/min, respectively. The optimum properties were obtained in composites sintered at 1150 °C for 15 min at 100 °C/min. Irrespective of the process parameters, the composite grades exhibited similar shrinkage behavior, characterized by three distinctive peaks that indicate good densification. Keywords: Spark plasma sintering, Duplex stainless steel (SAF 2205), Titanium nitride (TiN), Microstructure, Density, Hardness

  10. Multi-objective optimization for an automated and simultaneous phase and baseline correction of NMR spectral data

    Science.gov (United States)

    Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus

    2018-04-01

    Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
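
    A drastically simplified, single-objective stand-in for the phase-correction part of such a routine is sketched below in Python: zero- and first-order phase terms are optimized against a strong penalty on negative intensities plus a weak roughness regularizer, echoing the paper's combination of penalty constraints and regularization. The actual method is a Pareto optimization and also fits a baseline with a modified Whittaker smoother, which is omitted here.

        import numpy as np
        from scipy.optimize import minimize

        def apply_phase(spectrum, phi0, phi1):
            # Zero-order (phi0) and first-order (phi1, linear in frequency)
            # phase correction of a complex NMR spectrum.
            ramp = np.arange(spectrum.size) / spectrum.size
            return spectrum * np.exp(1j * (phi0 + phi1 * ramp))

        def objective(p, spectrum, lam=1e-3):
            real_part = apply_phase(spectrum, *p).real
            negativity = np.sum(np.minimum(real_part, 0.0) ** 2)  # strong penalty
            roughness = lam * np.sum(np.diff(real_part, 2) ** 2)  # weak regularizer
            return negativity + roughness

        def autophase(spectrum):
            # `spectrum` is assumed to be the Fourier-transformed FID.
            res = minimize(objective, x0=[0.0, 0.0], args=(spectrum,),
                           method="Nelder-Mead")
            return apply_phase(spectrum, *res.x)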

  11. Parameter optimization of electrochemical machining process using black hole algorithm

    Science.gov (United States)

    Singh, Dinesh; Shukla, Rajkamal

    2017-12-01

    Advanced machining processes are important because manufacturing industries require ever higher accuracy in machined components. Parameter optimization of machining processes provides the control needed to achieve the desired goals. In this paper, the electrochemical machining (ECM) process is considered, and its performance is evaluated using the black hole algorithm (BHA). BHA is based on the fundamental idea of black hole theory and has few operating parameters to tune. The two performance parameters, material removal rate (MRR) and overcut (OC), are considered separately to obtain optimum machining parameter settings using BHA. The variations of the process parameters with respect to the performance parameters are reported for a better and more effective understanding of the process, using a single objective at a time. The results obtained using BHA are found to be better than the results of other metaheuristic algorithms, such as the genetic algorithm (GA), artificial bee colony (ABC) and biogeography-based optimization (BBO), reported by previous researchers.
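
    Since the abstract only names the method, the following Python sketch of the black hole algorithm (after Hatamlou's 2013 formulation) may help: the best candidate acts as a black hole that attracts the remaining stars, and stars crossing the event horizon are replaced by fresh random ones. The quadratic objective standing in for an empirical MRR or overcut model, and the bounds on the three ECM settings, are assumptions.

        import numpy as np

        def black_hole_minimize(f, bounds, n_stars=30, n_iter=200, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, dtype=float).T
            stars = rng.uniform(lo, hi, (n_stars, lo.size))
            fit = np.apply_along_axis(f, 1, stars)
            bh = int(np.argmin(fit))                     # best star = black hole
            for _ in range(n_iter):
                # Every star drifts toward the black hole by a random fraction.
                stars += rng.random((n_stars, 1)) * (stars[bh] - stars)
                stars = np.clip(stars, lo, hi)
                fit = np.apply_along_axis(f, 1, stars)
                bh = int(np.argmin(fit))
                # Stars inside the event horizon are destroyed and reborn.
                radius = fit[bh] / (np.sum(fit) + 1e-12)
                eaten = np.linalg.norm(stars - stars[bh], axis=1) < radius
                eaten[bh] = False
                stars[eaten] = rng.uniform(lo, hi, (int(eaten.sum()), lo.size))
            return stars[bh], float(fit[bh])

        # Toy stand-in for an empirical model of overcut vs. (voltage, feed
        # rate, electrolyte concentration) -- illustrative values only.
        x_opt, f_opt = black_hole_minimize(
            lambda x: float(np.sum((x - np.array([12.0, 0.5, 20.0])) ** 2)),
            bounds=[(5.0, 20.0), (0.1, 1.0), (10.0, 30.0)])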

  12. Optimization of Key Parameters of Energy Management Strategy for Hybrid Electric Vehicle Using DIRECT Algorithm

    Directory of Open Access Journals (Sweden)

    Jingxian Hao

    2016-11-01

    Full Text Available The rule-based logic threshold control strategy has been frequently used in energy management strategies for hybrid electric vehicles (HEVs) owing to its convenience in adjusting parameters, its real-time performance, stability, and robustness. However, the logic threshold control parameters cannot usually ensure the best vehicle performance across different driving cycles and conditions. For this reason, the optimization of key parameters is important for improving fuel economy, dynamic performance, and drivability. In principle, this is a multiparameter nonlinear optimization problem. The logic threshold energy management strategy for an all-wheel-drive HEV is comprehensively analyzed and developed in this study, and seven key parameters to be optimized are extracted. The optimization model of the key parameters is formulated from the perspective of fuel economy. The global optimization method, the DIRECT algorithm, which has good real-time performance, low computational burden, and rapid convergence, is selected to optimize the extracted key parameters globally. The results show that with the optimized parameters the engine operates more often in the high-efficiency range, resulting in fuel savings of 7% compared with the non-optimized parameters. The proposed method can provide guidance for calibrating the parameters of a vehicle energy management strategy from the perspective of fuel economy.
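
    For readers who want to experiment with the same optimizer, SciPy ships a DIRECT implementation (scipy.optimize.direct, SciPy >= 1.9). The sketch below minimizes a hypothetical fuel-consumption surrogate over two of the seven logic-threshold parameters; in the paper the objective is the fuel economy evaluated by a full drive-cycle simulation, and all names and bounds here are assumptions.

        from scipy.optimize import direct  # DIRECT algorithm, SciPy >= 1.9

        def fuel_consumption(x):
            # Hypothetical smooth surrogate: engine-on power threshold (kW)
            # and target battery state of charge; minimum near (18, 0.62).
            p_on, soc_target = x
            return (p_on - 18.0) ** 2 / 50.0 + 40.0 * (soc_target - 0.62) ** 2 + 5.0

        result = direct(fuel_consumption, bounds=[(5.0, 40.0), (0.3, 0.9)],
                        maxfun=2000)
        print(result.x, result.fun)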

  13. Setting of the Optimal Parameters of Melted Glass

    Czech Academy of Sciences Publication Activity Database

    Luptáková, Natália; Matejíčka, L.; Krečmer, N.

    2015-01-01

    Roč. 10, č. 1 (2015), s. 73-79 ISSN 1802-2308 Institutional support: RVO:68081723 Keywords : Striae * Glass * Glass melting * Regression * Optimal parameters Subject RIV: JH - Ceramics, Fire-Resistant Materials and Glass

  14. A multicriteria framework with voxel-dependent parameters for radiotherapy treatment plan optimization

    International Nuclear Information System (INIS)

    Zarepisheh, Masoud; Uribe-Sanchez, Andres F.; Li, Nan; Jia, Xun; Jiang, Steve B.

    2014-01-01

    Purpose: To establish a new mathematical framework for radiotherapy treatment optimization with voxel-dependent optimization parameters. Methods: In the treatment plan optimization problem for radiotherapy, a clinically acceptable plan is usually generated by an optimization process in which weighting factors or reference doses are adjusted for a set of objective functions associated with the organs. Recent discoveries indicate that adjusting the parameters associated with each voxel may lead to better plan quality, but the mathematical reasons behind this remain unclear. Furthermore, questions about the objective function selection and parameter adjustment needed to assure Pareto optimality, as well as the relationship between the optimal solutions obtained from the organ-based and voxel-based models, remain unanswered. To answer these questions, the authors establish in this work a new mathematical framework equipped with two theorems. Results: The new framework clarifies the different consequences of adjusting organ-dependent and voxel-dependent parameters for the treatment plan optimization of radiation therapy, as well as the impact of using different objective functions on plan quality and Pareto surfaces. The main discoveries are threefold: (1) While in the organ-based model the selection of the objective function has an impact on the quality of the optimized plans, this is no longer an issue for the voxel-based model, since the Pareto surface is independent of the objective function selection and the entire Pareto surface can be generated as long as the objective function satisfies certain mathematical conditions; (2) All Pareto solutions generated by the organ-based model with different objective functions are parts of a unique Pareto surface generated by the voxel-based model with any appropriate objective function; (3) A much larger Pareto surface is explored by adjusting voxel-dependent parameters than by adjusting organ-dependent parameters, possibly

  15. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    Science.gov (United States)

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different objective values of winding products, a mechanical performance (tensile strength) and a physical property (void content), were calculated. The paper then presents an integrated methodology combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. A verification test validated that the optimized intervals of the process parameters were reliable and stable for the manufacture of winding products. PMID:29385048

  16. ToTem: a tool for variant calling pipeline optimization.

    Science.gov (United States)

    Tom, Nikola; Tom, Ondrej; Malcikova, Jitka; Pavlova, Sarka; Kubesova, Blanka; Rausch, Tobias; Kolarik, Miroslav; Benes, Vladimir; Bystry, Vojtech; Pospisilova, Sarka

    2018-06-26

    High-throughput bioinformatics analyses of next generation sequencing (NGS) data often require challenging pipeline optimization. The key problem is choosing appropriate tools and selecting the best parameters for optimal precision and recall. Here we introduce ToTem, a tool for automated pipeline optimization. ToTem is a stand-alone web application with a comprehensive graphical user interface (GUI). ToTem is written in Java and PHP with an underlying connection to a MySQL database. Its primary role is to automatically generate, execute and benchmark different variant calling pipeline settings. Our tool allows an analysis to be started from any level of the process, with the possibility of plugging in almost any tool or code. To prevent over-fitting of pipeline parameters, ToTem ensures their reproducibility by using cross-validation techniques that penalize the final precision, recall and F-measure. The results are interpreted as interactive graphs and tables, allowing an optimal pipeline to be selected based on the user's priorities. Using ToTem, we were able to optimize somatic variant calling from ultra-deep targeted gene sequencing (TGS) data and germline variant detection in whole genome sequencing (WGS) data. ToTem is a tool for automated pipeline optimization which is freely available as a web application at https://totem.software.

  17. Automated capacitive spectrometer for measuring the parameters of deep centers in semiconductor materials

    International Nuclear Information System (INIS)

    Shajmeev, S.S.

    1985-01-01

    An automated capacitive spectrometer for determining the parameters of deep centers in semiconductor materials and devices is described. The facility can be used to study electrically active defects (impurity, radiation, thermal) having deep levels in the semiconductor band gap. The facility can determine the following parameters of the deep centers: the concentration of each deep level taken separately, within 5×10⁻¹ to 5×10⁻¹⁵ of the doping impurity concentration; the energy position of the level in the band gap, in the range from 0.08 eV above the valence band top to 0.08 eV below the conduction band bottom; the hole or electron capture cross-section of the deep center; and the concentration profile of the deep levels

  18. Optimization of machining parameters of turning operations based on multi performance criteria

    Directory of Open Access Journals (Sweden)

    N.K.Mandal

    2013-01-01

    Full Text Available The selection of optimum machining parameters plays a significant role in ensuring product quality, reducing manufacturing cost and increasing productivity in computer-controlled manufacturing processes. Owing to the inherent complexity of the process, multi-objective optimization of turning has been a challenging engineering issue for many years. This study investigates multi-response optimization of the turning process for an optimal parametric combination yielding minimum power consumption, surface roughness and frequency of tool vibration, using Grey relational analysis (GRA). A confirmation test is conducted for the optimal machining parameters to validate the result. Various turning parameters, such as spindle speed, feed and depth of cut, are considered. Experiments are designed and conducted based on a full factorial design of experiments.

  19. Application of an Evolutionary Algorithm for Parameter Optimization in a Gully Erosion Model

    Energy Technology Data Exchange (ETDEWEB)

    Rengers, Francis; Lunacek, Monte; Tucker, Gregory

    2016-06-01

    Herein we demonstrate how to use model optimization to determine a set of best-fit parameters for a landform model simulating gully incision and headcut retreat. To achieve this result we employed the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an iterative process in which samples are created based on a distribution of parameter values that evolve over time to better fit an objective function. CMA-ES efficiently finds optimal parameters, even with high-dimensional objective functions that are non-convex, multimodal, and non-separable. We ran model instances in parallel on a high-performance cluster, and from hundreds of model runs we obtained the best parameter choices. This method is far superior to brute-force search algorithms, and has great potential for many applications in earth science modeling. We found that parameters representing boundary conditions tended to converge toward an optimal single value, whereas parameters controlling geomorphic processes are defined by a range of optimal values.
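
    In Python, the same ask-and-tell loop can be reproduced with Hansen's reference `cma` package (pip install cma); the misfit function below is a toy stand-in for the topographic mismatch between the simulated and observed gully, and in the study each batch of candidates would be evaluated in parallel on the cluster.

        import numpy as np
        import cma  # Hansen's reference CMA-ES implementation

        def misfit(params):
            # Placeholder for the objective: run the landform model with
            # `params` and return the mismatch to the observed topography.
            target = np.array([0.8, 1.5, 0.02, 3.0])     # toy "true" parameters
            return float(np.sum((np.asarray(params) - target) ** 2))

        es = cma.CMAEvolutionStrategy(x0=4 * [1.0], sigma0=0.5)
        while not es.stop():
            candidates = es.ask()          # sample from the current distribution
            es.tell(candidates, [misfit(c) for c in candidates])  # update it
        best_params = es.result.xbest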

  20. Parameter assessment for virtual Stackelberg game in aerodynamic shape optimization

    Science.gov (United States)

    Wang, Jing; Xie, Fangfang; Zheng, Yao; Zhang, Jifa

    2018-05-01

    In this paper, parametric studies of the virtual Stackelberg game (VSG) are conducted to assess the impact of critical parameters on aerodynamic shape optimization, including the number of design cycles, the split of design variables and the role assignment. Typical numerical cases, including the inverse design and drag-reduction design of an airfoil, have been carried out. The numerical results confirm the effectiveness and efficiency of VSG. Furthermore, the most significant parameters are identified; e.g., increasing the number of design cycles can improve the optimization results but also adds computational burden. These studies will maximize the productivity of aerodynamic optimization efforts for more complicated engineering problems, such as multi-element airfoils and wing-body configurations.

  1. Parameter estimation for chaotic systems with a Drift Particle Swarm Optimization method

    International Nuclear Information System (INIS)

    Sun Jun; Zhao Ji; Wu Xiaojun; Fang Wei; Cai Yujie; Xu Wenbo

    2010-01-01

    Inspired by the motion of electrons in metal conductors under an electric field, we propose a variant of Particle Swarm Optimization (PSO), called the Drift Particle Swarm Optimization (DPSO) algorithm, and apply it to estimating the unknown parameters of chaotic dynamic systems. The principle and procedure of DPSO are presented, and the algorithm is used to identify the Lorenz system and the Chen system. The experimental results show that, for the given parameter configurations, DPSO can identify the parameters of the systems accurately and effectively, and that it may be a promising tool for chaotic system identification as well as for other numerical optimization problems in physics.
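
    The core of any such scheme is the trajectory-mismatch objective, which a global optimizer then minimizes. The Python sketch below builds that objective for the Lorenz system and hands it to SciPy's differential evolution as a stand-in for the paper's DPSO; the short time window, initial state and bounds are illustrative assumptions, and chaos limits how long a window remains usable.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import differential_evolution

        def lorenz(t, s, sigma, rho, beta):
            x, y, z = s
            return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

        def simulate(params, t_eval):
            sol = solve_ivp(lorenz, (t_eval[0], t_eval[-1]), [1.0, 1.0, 1.0],
                            t_eval=t_eval, args=tuple(params), rtol=1e-8)
            return sol.y

        # "Observed" trajectory from the true parameters (10, 28, 8/3).
        t = np.linspace(0.0, 2.0, 201)
        observed = simulate((10.0, 28.0, 8.0 / 3.0), t)

        def objective(params):
            # Sum of squared deviations between candidate and observed states.
            return float(np.sum((simulate(params, t) - observed) ** 2))

        res = differential_evolution(objective, seed=1,
                                     bounds=[(5, 15), (20, 35), (1, 5)])
        print(res.x)   # should approach (10, 28, 2.667)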

  2. Optimization of process and solution parameters in electrospinning polyethylene oxide

    CSIR Research Space (South Africa)

    Jacobs, V

    2011-11-01

    Full Text Available This paper reports the optimization of electrospinning process and solution parameters using factorial design approach to obtain uniform polyethylene oxide (PEO) nanofibers. The parameters studied were distance between nozzle and collector screen...

  3. Hybrid Disease Diagnosis Using Multiobjective Optimization with Evolutionary Parameter Optimization

    Directory of Open Access Journals (Sweden)

    MadhuSudana Rao Nalluri

    2017-01-01

    Full Text Available With the widespread adoption of e-Healthcare and telemedicine applications, accurate, intelligent disease diagnosis systems have been profoundly coveted. In recent years, numerous individual machine learning-based classifiers have been proposed and tested, and it is now widely accepted that a single classifier cannot effectively classify and diagnose all diseases. This has led to a number of recent research attempts to arrive at a consensus using ensemble classification techniques. In this paper, a hybrid system is proposed to diagnose ailments by optimizing the individual classifier parameters of two classifier techniques, namely, the support vector machine (SVM) and the multilayer perceptron (MLP). We employ three recent evolutionary algorithms to optimize the parameters of the classifiers above, leading to six alternative hybrid disease diagnosis systems, also referred to as hybrid intelligent systems (HISs). Multiple objectives, namely, prediction accuracy, sensitivity, and specificity, have been considered to assess the efficacy of the proposed hybrid systems against existing ones. The proposed model is evaluated on 11 benchmark datasets, and the obtained results demonstrate that our proposed hybrid diagnosis systems perform better in terms of disease prediction accuracy, sensitivity, and specificity. Pertinent statistical tests were carried out to substantiate the efficacy of the obtained results.

  4. Machine assisted reaction optimization: A self-optimizing reactor system for continuous-flow photochemical reactions

    KAUST Repository

    Poscharny, K.; Fabry, D.C.; Heddrich, S.; Sugiono, E.; Liauw, M.A.; Rueping, Magnus

    2018-01-01

    A methodology for the synthesis of oxetanes from benzophenone and furan derivatives is presented. UV-light irradiation in batch and flow systems allowed the [2 + 2] cycloaddition reaction to proceed and a broad range of oxetanes could be synthesized in manual and automated fashion. The identification of high-yielding reaction parameters was achieved through a new self-optimizing photoreactor system.

  5. Machine assisted reaction optimization: A self-optimizing reactor system for continuous-flow photochemical reactions

    KAUST Repository

    Poscharny, K.

    2018-04-07

    A methodology for the synthesis of oxetanes from benzophenone and furan derivatives is presented. UV-light irradiation in batch and flow systems allowed the [2 + 2] cycloaddition reaction to proceed and a broad range of oxetanes could be synthesized in manual and automated fashion. The identification of high-yielding reaction parameters was achieved through a new self-optimizing photoreactor system.

  6. Intelligent Mechatronics Systems for Transport Climate Parameters Optimization Using Fuzzy Logic Control

    OpenAIRE

    Beinarts, I; Ļevčenkovs, A; Kuņicina, N

    2007-01-01

    This article focuses on the optimization of climate parameters in the passenger compartments of public electric transport vehicles. It presents a mathematical formulation for using intelligent agents in mechatronic problems of optimal climate parameter control. The idea is to use fuzzy logic and intelligent algorithms to create a coordination mechanism for climate parameter control that saves electrical energy and increases the level of passenger comfort. A special interest for...

  7. Use of multilevel modeling for determining optimal parameters of heat supply systems

    Science.gov (United States)

    Stennikov, V. A.; Barakhtenko, E. A.; Sokolov, D. V.

    2017-07-01

    The problem of finding the optimal parameters of a heat-supply system (HSS) consists in ensuring the required throughput capacity of a heat network by determining pipeline diameters and the characteristics and locations of pumping stations. Effective methods for solving this problem, i.e., the method of stepwise optimization based on the concept of dynamic programming and the method of multicircuit optimization, were proposed in the context of the hydraulic circuit theory developed at the Melentiev Energy Systems Institute (Siberian Branch, Russian Academy of Sciences). These methods make it possible to determine the optimal parameters of various types of piping systems owing to the flexible adaptability of the calculation procedure to the intricate nonlinear mathematical models that describe the features of the equipment used and the methods of its construction and operation. The new and most significant results achieved in developing methodological support and software for finding the optimal parameters of complex heat supply systems are presented: a new procedure for solving the problem based on multilevel decomposition of a heat network model, which makes it possible to proceed from the initial problem to a set of interrelated, less cumbersome subproblems of reduced dimensionality; a new algorithm implementing the method of multicircuit optimization and focused on the calculation of a hierarchical model of a heat supply system; and the SOSNA software system for determining the optimal parameters of intricate heat-supply systems, implementing the developed methodological foundation. The proposed procedure and algorithm make it possible to solve engineering problems of finding the optimal parameters of multicircuit heat supply systems of large (real) dimensionality, and they are applied in solving urgent problems related to the optimal development and reconstruction of these systems. The developed methodological foundation and software can be used for designing heat supply systems in the Central and the Admiralty regions in

  8. Thermo-mechanical simulation and parameters optimization for beam blank continuous casting

    International Nuclear Information System (INIS)

    Chen, W.; Zhang, Y.Z.; Zhang, C.J.; Zhu, L.G.; Lu, W.G.; Wang, B.X.; Ma, J.H.

    2009-01-01

    The objective of this work is to optimize the process parameters of beam blank continuous casting in order to ensure high quality and productivity. A transient thermo-mechanical finite element model is developed to compute the temperature and stress profiles in beam blank continuous casting. By comparing the calculated data with the metallurgical constraints, the key factors causing defects in the beam blank can be identified. Then, based on the subproblem approximation method, an optimization program is developed to search for the optimum cooling parameters. These optimum parameters make it possible to run the caster at maximum productivity and minimum cost, and to reduce defects. Online verification of this optimization scheme has now been put into practice, demonstrating its usefulness for controlling actual production

  9. Robust fluence map optimization via alternating direction method of multipliers with empirical parameter optimization

    International Nuclear Information System (INIS)

    Gao, Hao

    2016-01-01

    For treatment planning in intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT), beam fluence maps can first be optimized via fluence map optimization (FMO), under the given dose prescriptions and constraints, to conformally deliver the radiation dose to the targets while sparing the organs-at-risk, and then segmented into deliverable MLC apertures via leaf or arc sequencing algorithms. This work develops an efficient algorithm for FMO based on the alternating direction method of multipliers (ADMM). Here we consider FMO with a least-squares cost function and non-negative fluence constraints; the solution algorithm based on ADMM is efficient and simple to implement. In addition, an empirical method for optimizing the ADMM parameter is developed to improve the robustness of the ADMM algorithm. The ADMM-based FMO solver was benchmarked against a quadratic programming method based on the interior-point (IP) method using the CORT dataset. The comparison results suggested that the ADMM solver achieves a similar plan quality with a slightly smaller total objective function value than IP. A simple-to-implement ADMM-based FMO solver with empirical parameter optimization is proposed for IMRT or VMAT. (paper)
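
    The ADMM iteration for this least-squares, non-negativity-constrained problem is short enough to sketch directly: split x = z, restrict z to the nonnegative orthant, and alternate a linear solve, a projection, and a dual update. The Python below fixes the penalty parameter rho, whereas the paper tunes it empirically; the random dose-influence matrix is purely illustrative.

        import numpy as np

        def admm_nnls(A, b, rho=1.0, n_iter=300):
            # minimize 0.5 * ||A x - b||^2  subject to  x >= 0, via the
            # splitting x = z with z constrained to the nonnegative orthant.
            n = A.shape[1]
            AtA, Atb = A.T @ A, A.T @ b
            L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once
            z = np.zeros(n)
            u = np.zeros(n)
            for _ in range(n_iter):
                rhs = Atb + rho * (z - u)
                x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update
                z = np.maximum(x + u, 0.0)                  # projection (z-update)
                u += x - z                                  # scaled dual update
            return z

        # Toy fluence problem: A maps beamlet weights to voxel doses.
        rng = np.random.default_rng(2)
        A = rng.random((200, 50))
        b = A @ np.maximum(rng.normal(1.0, 0.5, 50), 0.0)  # achievable dose
        fluence = admm_nnls(A, b)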

  10. Determination of the Optimized Automation Rate considering Effects of Automation on Human Operators in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun; Kim, Man Cheol

    2015-01-01

    Automation refers to the use of a device or a system to perform a function previously performed by a human operator. It is introduced to reduce human errors and to enhance performance in various industrial fields, including the nuclear industry. However, these positive effects are not always achieved in complex systems such as nuclear power plants (NPPs). An excessive introduction of automation can generate new roles for human operators and change their activities in unexpected ways. As more automation systems are adopted, the ability of human operators to detect automation failures and resume manual control is diminished. This disadvantage of automation is called the Out-of-the-Loop (OOTL) problem. The positive and negative effects of automation should be considered at the same time when determining the appropriate level of automation. Thus, in this paper, we suggest an estimation method that considers the positive and negative effects of automation simultaneously in order to determine its appropriate introduction. The conventional concept of the automation rate is limited in that it does not consider the effects of automation on human operators, so a new estimation method for the automation rate is suggested to overcome this problem

  11. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    Science.gov (United States)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    Forecasting skills of complex weather and climate models have been improved by tuning the sensitive parameters that exert the greatest impact on simulated results using effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary is a scientific problem deserving study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from six years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area, because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions. The optimal values of the nine parameters were determined from 127 parameter samples using the ASMO method, which shows that the ASMO method is highly efficient for optimizing WRF

  12. Automated magnetic divertor design for optimal power exhaust

    Energy Technology Data Exchange (ETDEWEB)

    Blommaert, Maarten

    2017-07-01

    The so-called divertor is the standard particle and power exhaust system of nuclear fusion tokamaks. In essence, the magnetic configuration 'diverts' the plasma to a specific divertor structure. The design of this divertor is still a key issue to be resolved in evolving from experimental fusion tokamaks to commercial power plants. The focus of this dissertation is on one particular design requirement: avoiding excessive heat loads on the divertor structure. The divertor design process is assisted by plasma edge transport codes that simulate the plasma and neutral particle transport in the edge of the reactor. These codes are computationally extremely demanding, not least due to the complex collisional processes between plasma and neutrals that lead to strong radiation sinks and macroscopic heat convection near the vessel walls. One way of improving the heat exhaust is by modifying the magnetic confinement that governs the plasma flow. In this dissertation, automated design of the magnetic configuration is pursued using adjoint-based optimization methods. A simple and fast perturbation model is used to compute the magnetic field in the vacuum vessel. A stable optimal design method of the nested type is then elaborated that strictly accounts for several nonlinear design constraints and code limitations. Using appropriate cost function definitions, the heat is spread more uniformly over the high-heat-load plasma-facing components in a practical design example. Furthermore, practical in-parts adjoint sensitivity calculations are presented that provide a way to an efficient optimization procedure. Results are elaborated for a fictitious JET (Joint European Torus) case. The heat load is strongly reduced by exploiting an expansion of the magnetic flux towards the solid divertor structure. Subsequently, shortcomings of the perturbation model for magnetic field calculations are discussed in comparison to a free boundary equilibrium (FBE) simulation

  13. Automated magnetic divertor design for optimal power exhaust

    International Nuclear Information System (INIS)

    Blommaert, Maarten

    2017-01-01

    The so-called divertor is the standard particle and power exhaust system of nuclear fusion tokamaks. In essence, the magnetic configuration 'diverts' the plasma to a specific divertor structure. The design of this divertor is still a key issue to be resolved in evolving from experimental fusion tokamaks to commercial power plants. The focus of this dissertation is on one particular design requirement: avoiding excessive heat loads on the divertor structure. The divertor design process is assisted by plasma edge transport codes that simulate the plasma and neutral particle transport in the edge of the reactor. These codes are computationally extremely demanding, not least due to the complex collisional processes between plasma and neutrals that lead to strong radiation sinks and macroscopic heat convection near the vessel walls. One way of improving the heat exhaust is by modifying the magnetic confinement that governs the plasma flow. In this dissertation, automated design of the magnetic configuration is pursued using adjoint-based optimization methods. A simple and fast perturbation model is used to compute the magnetic field in the vacuum vessel. A stable optimal design method of the nested type is then elaborated that strictly accounts for several nonlinear design constraints and code limitations. Using appropriate cost function definitions, the heat is spread more uniformly over the high-heat-load plasma-facing components in a practical design example. Furthermore, practical in-parts adjoint sensitivity calculations are presented that provide a way to an efficient optimization procedure. Results are elaborated for a fictitious JET (Joint European Torus) case. The heat load is strongly reduced by exploiting an expansion of the magnetic flux towards the solid divertor structure. Subsequently, shortcomings of the perturbation model for magnetic field calculations are discussed in comparison to a free boundary equilibrium (FBE) simulation. These flaws

  14. Microseismic event location using global optimization algorithms: An integrated and automated workflow

    Science.gov (United States)

    Lagos, Soledad R.; Velis, Danilo R.

    2018-02-01

    We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques, Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead from raw 3C data to the microseismic event locations. First, we carry out the automatic detection, denoising and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after properly restricting the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for typical 2D and 3D scenarios of hydraulic fracturing processes. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.

  15. Optimization of plasma flow parameters of the magnetoplasma compressor

    International Nuclear Information System (INIS)

    Dojcinovic, I P; Kuraica, M M; Obradovc, B M; Cvetanovic, N; Puric, J

    2007-01-01

    Optimization of the working conditions of the magnetoplasma compressor (MPC) has been performed by analysing the discharge and compression plasma flow parameters in hydrogen, nitrogen and argon at different pressures. The energy conversion rate, the volt-ampere curve exponent and the plasma flow velocities have been studied to optimize the efficiency of energy transfer from the supply source to the plasma. The most effective energy transfer from the supply to the plasma was found in hydrogen as the working gas at a pressure of 1000 Pa. The accelerating regime exists in hydrogen at pressures up to 3000 Pa, in nitrogen up to 2000 Pa and in argon up to 1000 Pa; at higher pressures the MPC works in the decelerating regime in all gases. At pressures lower than 200 Pa, high cathode erosion is observed. MPC plasma flow parameter optimization is important because this plasma accelerating system may be of special interest for solid surface modification and other technological applications

  16. Mixed-integer evolution strategies for parameter optimization and their applications to medical image analysis

    NARCIS (Netherlands)

    Li, Rui

    2009-01-01

    The target of this work is to extend the canonical Evolution Strategies (ES) from traditional real-valued parameter optimization domain to mixed-integer parameter optimization domain. This is necessary because there exist numerous practical optimization problems from industry in which the set of

  17. Parameter optimization method for longitudinal vibration absorber of ship shaft system

    Directory of Open Access Journals (Sweden)

    LIU Jinlin

    2017-05-01

    Full Text Available The longitudinal vibration of the ship shaft system is one of the most important sources of hull stern vibration, and it can be effectively reduced by installing a longitudinal vibration absorber. In this way, the vibration and noise of ships can be brought under control. However, the parameters of longitudinal vibration absorbers have a great influence on the vibration characteristics of the shaft system. Accordingly, a shafting testing platform was studied as the object, a finite element model of it was built, and the relationship between longitudinal stiffness and longitudinal vibration in the shaft system was analyzed in a straight alignment state. Furthermore, a longitudinal damping model of the shaft system was built in which the parameters of the vibration absorber were non-dimensionalized, the weight of the vibration absorber was set as a constant, and an optimization algorithm was used to calculate the optimized stiffness and damping coefficient of the vibration absorber. Finally, the longitudinal vibration frequency responses of the shafting testing platform before and after optimizing the parameters of the longitudinal vibration absorber were compared. The results indicated that the longitudinal vibration of the shafting testing platform was decreased effectively, suggesting that this approach could provide a theoretical foundation for the parameter optimization of longitudinal vibration absorbers.

  18. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization.

    Science.gov (United States)

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-03-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.

  19. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization

    Directory of Open Access Journals (Sweden)

    Huanqing Cui

    2017-03-01

    Full Text Available Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors’ memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.
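
    The best-performing variant reported here, constriction-coefficient PSO with a ring topology, is compact enough to sketch in Python. With c1 = c2 = 2.05 (phi = 4.1), the Clerc-Kennedy constriction factor is chi = 2 / |2 - phi - sqrt(phi^2 - 4 phi)|, approximately 0.7298. The range-based localization objective at the end is a toy assumption, not a benchmark from the paper.

        import numpy as np

        def pso_ring(f, bounds, n_particles=30, n_iter=200, seed=0):
            phi, c = 4.1, 2.05
            chi = 2.0 / abs(2.0 - phi - np.sqrt(phi ** 2 - 4.0 * phi))  # ~0.7298
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, dtype=float).T
            x = rng.uniform(lo, hi, (n_particles, lo.size))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
            idx = np.arange(n_particles)
            for _ in range(n_iter):
                # Ring topology: each particle sees itself and its two neighbours.
                neigh = np.stack([np.roll(idx, 1), idx, np.roll(idx, -1)])
                lbest = pbest[neigh[np.argmin(pbest_f[neigh], axis=0), idx]]
                r1 = rng.random(x.shape)
                r2 = rng.random(x.shape)
                v = chi * (v + c * r1 * (pbest - x) + c * r2 * (lbest - x))
                x = np.clip(x + v, lo, hi)
                fx = np.apply_along_axis(f, 1, x)
                improved = fx < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], fx[improved]
            k = int(np.argmin(pbest_f))
            return pbest[k], float(pbest_f[k])

        # Toy localization: recover a node position from ranges to 3 anchors.
        anchors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 9.0]])
        ranges = np.linalg.norm(anchors - np.array([6.0, 4.0]), axis=1)
        pos, err = pso_ring(lambda p: float(np.sum(
            (np.linalg.norm(anchors - p, axis=1) - ranges) ** 2)),
            bounds=[(0.0, 10.0), (0.0, 10.0)])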

  20. Characterization and optimized control by means of multi-parameter controllers

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Carsten; Hoeg, S.; Thoegersen, A. (Dan-Ejendomme, Hellerup (Denmark)) (and others)

    2009-07-01

    Poorly functioning HVAC systems (Heating, Ventilation and Air Conditioning), as well as separate heating, ventilation and air conditioning systems, cost Danish society billions of kroner every year: partly because of increased energy consumption and high operational and maintenance costs, but mainly due to reduced productivity and absence due to illness caused by a poor indoor climate. Typically, the operation of buildings and installations today relies on traditional building automation, which is characterised by 1) being based on static considerations, 2) each sensor being coupled with one actuator/valve, i.e., the sensor's signal is only used in one place in the system, 3) subsystems often being controlled independently of each other, and 4) the dynamics of building constructions and systems, which are very important for system and comfort regulation, not being considered. This, coupled with the widespread tendency to use large glass areas in facades without sufficient sun shading, makes it difficult to optimise comfort and energy consumption. The last 10-20 years have therefore seen a steady increase in complaints about the indoor climate in Danish buildings, and, at the same time, new buildings often turn out to consume considerably more energy than expected. The purpose of the present project is to investigate what type of multi-parameter sensors may be generated for buildings and, further, to carry out a preliminary evaluation of how such multi-parameter controllers may be utilized for optimal control of buildings. The aim of the project isn't to develop multi-parameter controllers - this requires much more effort than is possible in the present project. The aim is to show the potential of using multi-parameter sensors when controlling buildings. For this purpose a larger office building has been chosen - an office building with a high energy demand and complaints regarding the indoor climate. In order to

  1. Parameter Optimization for Quantitative Signal-Concentration Mapping Using Spoiled Gradient Echo MRI

    Directory of Open Access Journals (Sweden)

    Gasser Hathout

    2012-01-01

    Full Text Available Rationale and Objectives. Accurate signal-to-tracer-concentration maps are critical to quantitative MRI. The purpose of this study was to evaluate and optimize spoiled gradient echo (SPGR) MR sequences for the use of gadolinium (Gd-DTPA) as a kinetic tracer. Methods. Water-gadolinium phantoms were constructed for a physiologic range of gadolinium concentrations. Observed and calculated SPGR signal-to-concentration curves were generated. Using a percentage error determination, optimal pulse parameters for signal-to-concentration mapping were obtained. Results. The accuracy of the SPGR equation is a function of the chosen MR pulse parameters, particularly the repetition time (TR) and the flip angle (FA). At all experimental values of TR, increasing FA decreases the ratio between observed and calculated signals. Conversely, for a constant FA, increasing TR increases this ratio. Using optimized pulse parameter sets, it is possible to achieve excellent accuracy (approximately 5%) over a physiologic range of tracer concentrations. Conclusion. Optimal pulse parameter sets exist, and their use is essential for deriving accurate signal-to-concentration curves in quantitative MRI.
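
    The underlying physics is compact: relaxivity makes R1 = 1/T1_0 + r1*C a linear function of tracer concentration C, and the ideal spoiled gradient echo signal is S = M0 sin(a)(1 - E1)/(1 - cos(a) E1) with E1 = exp(-TR*R1). The Python sketch below scans a few (TR, FA) pairs for the most nearly linear signal-to-concentration mapping, a related but simpler criterion than the study's observed-to-calculated ratio; the relaxivity, baseline T1 and candidate values are illustrative assumptions rather than the study's phantom values.

        import numpy as np

        def spgr_signal(C, TR, flip_deg, T1_0=1.4, r1=4.5, M0=1.0):
            # C in mM, TR in s, r1 in 1/(mM*s); TE/T2* effects neglected.
            R1 = 1.0 / T1_0 + r1 * C
            E1 = np.exp(-TR * R1)
            a = np.deg2rad(flip_deg)
            return M0 * np.sin(a) * (1.0 - E1) / (1.0 - np.cos(a) * E1)

        C = np.linspace(0.0, 2.0, 50)          # physiologic-range concentrations

        def linearity_error(TR, fa):
            # Worst-case relative deviation from the best linear fit S(C).
            s = spgr_signal(C, TR, fa)
            fit = np.polyval(np.polyfit(C, s, 1), C)
            return float(np.max(np.abs(s - fit)) / s.max())

        candidates = [(TR, fa) for TR in (0.005, 0.010, 0.020)
                      for fa in (10.0, 20.0, 40.0)]
        best_TR, best_fa = min(candidates, key=lambda p: linearity_error(*p))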

  2. Identification of metabolic system parameters using global optimization methods

    Directory of Open Access Journals (Sweden)

    Gatzke Edward P

    2006-01-01

    Full Text Available Abstract Background The problem of estimating the parameters of dynamic models of complex biological systems from time series data is becoming increasingly important. Methods and results Particular consideration is given to metabolic systems that are formulated as Generalized Mass Action (GMA) models. The estimation problem is posed as a global optimization task, for which novel techniques can be applied to determine the best set of parameter values given the measured responses of the biological system. The challenge is that this task is nonconvex. Nonetheless, deterministic optimization techniques can be used to find a global solution that best reconciles the model parameters and measurements. Specifically, the paper employs branch-and-bound principles to identify the best set of model parameters from observed time course data, and illustrates this method with an existing model of the fermentation pathway in Saccharomyces cerevisiae. This is a relatively simple yet representative system with five dependent states and a total of 19 unknown parameters whose values are to be determined. Conclusion The efficacy of the branch-and-reduce algorithm is illustrated by the S. cerevisiae example. The method described in this paper is likely to be widely applicable in the dynamic modeling of metabolic networks.

  3. Image Segmentation Parameter Optimization Considering Within- and Between-Segment Heterogeneity at Multiple Scale Levels: Test Case for Mapping Residential Areas Using Landsat Imagery

    Directory of Open Access Journals (Sweden)

    Brian A. Johnson

    2015-10-01

    Full Text Available Multi-scale/multi-level geographic object-based image analysis (MS-GEOBIA) methods are becoming widely used in remote sensing because single-scale/single-level (SS-GEOBIA) methods are often unable to obtain an accurate segmentation and classification of all land use/land cover (LULC) types in an image. However, there have been few comparisons between SS-GEOBIA and MS-GEOBIA approaches for the purpose of mapping a specific LULC type, so it is not well understood which is more appropriate for this task. In addition, there are few methods for automating the selection of segmentation parameters for MS-GEOBIA, while manual selection (i.e., a trial-and-error approach) of parameters can be quite challenging and time-consuming. In this study, we examined SS-GEOBIA and MS-GEOBIA approaches for extracting residential areas in Landsat 8 imagery, and compared naïve and parameter-optimized segmentation approaches to assess whether unsupervised segmentation parameter optimization (USPO) could improve the extraction of residential areas. Our main findings were: (i) the MS-GEOBIA approaches achieved higher classification accuracies than the SS-GEOBIA approach, and (ii) USPO resulted in more accurate MS-GEOBIA classification results while considerably reducing the number of segmentation levels and classification variables.

  4. Error propagation of partial least squares for parameters optimization in NIR modeling

    Science.gov (United States)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-01

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water content in corn and geniposide content in Gardenia was characterized by both type I and type II errors. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weight varied from 5% to 65%, 55% and 15%, respectively, compared with synergy interval partial least squares (SiPLS). The results demonstrated how, and to what extent, the different modeling parameters affect the error propagation of PLS for parameter optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials completed a powerful process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters for other multivariate calibration models.

  5. Error propagation of partial least squares for parameters optimization in NIR modeling.

    Science.gov (United States)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-05

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water content in corn and geniposide content in Gardenia was characterized by both type I and type II errors. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weight varied from 5% to 65%, 55% and 15%, respectively, compared with synergy interval partial least squares (SiPLS). The results demonstrated how, and to what extent, the different modeling parameters affect the error propagation of PLS for parameter optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials completed a powerful process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters for other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.
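
    One common way to guard against the error propagation described here is to choose the modeling parameters by cross-validation rather than by fit alone. A minimal scikit-learn sketch, assuming a generic spectral matrix X and analyte vector y, is given below; only the number of latent variables is scanned, whereas pretreatment and variable selection would be wrapped in the same loop.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in for an NIR calibration set:
        # 80 samples x 200 wavelengths, analyte driven by the first 10 channels.
        rng = np.random.default_rng(3)
        X = rng.normal(size=(80, 200))
        y = X[:, :10] @ rng.normal(size=10) + 0.1 * rng.normal(size=80)

        # Score each number of latent variables by cross-validated MSE.
        scores = {lv: cross_val_score(PLSRegression(n_components=lv), X, y,
                                      cv=5, scoring="neg_mean_squared_error").mean()
                  for lv in range(1, 16)}
        best_lv = max(scores, key=scores.get)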

  6. Automated property optimization via ab initio O(N) elongation method: Application to (hyper-)polarizability in DNA

    Energy Technology Data Exchange (ETDEWEB)

    Orimoto, Yuuichi, E-mail: orimoto.yuuichi.888@m.kyushu-u.ac.jp [Department of Material Sciences, Faculty of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Aoki, Yuriko [Department of Material Sciences, Faculty of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Japan Science and Technology Agency, CREST, 4-1-8 Hon-chou, Kawaguchi, Saitama 332-0012 (Japan)

    2016-07-14

    An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between “choose-maximum” (choose a base pair giving the maximum β for each step) and “choose-minimum” (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer aided material design method as long as the numerical limitation of the FF method is taken into account.

  7. Automated property optimization via ab initio O(N) elongation method: Application to (hyper-)polarizability in DNA

    International Nuclear Information System (INIS)

    Orimoto, Yuuichi; Aoki, Yuriko

    2016-01-01

    An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between “choose-maximum” (choose a base pair giving the maximum β for each step) and “choose-minimum” (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer aided material design method as long as the numerical limitation of the FF method is taken into account.

  8. Steam condenser optimization using Real-parameter Genetic Algorithm for Prototype Fast Breeder Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Jayalal, M.L., E-mail: jayalal@igcar.gov.in [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India); Kumar, L. Satish, E-mail: satish@igcar.gov.in [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India); Jehadeesan, R., E-mail: jeha@igcar.gov.in [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India); Rajeswari, S., E-mail: raj@igcar.gov.in [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India); Satya Murty, S.A.V., E-mail: satya@igcar.gov.in [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India); Balasubramaniyan, V.; Chetal, S.C. [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India)

    2011-10-15

    Highlights: > We model design optimization of a vital reactor component using a Genetic Algorithm. > A Real-parameter Genetic Algorithm is used for the steam condenser optimization study. > Comparison analysis is done with various Genetic Algorithm related mechanisms. > The results obtained are validated against the reference study results. - Abstract: This work explores the use of a Real-parameter Genetic Algorithm and analyses its performance in the steam condenser (or Circulating Water System) optimization study of a 500 MW fast breeder nuclear reactor. Choosing optimum condenser design parameters for a power plant from among a large number of technically viable combinations is a complex task, primarily due to the conflicting economic implications of the different system parameters for maximizing the capitalized profit. In order to find the optimum design parameters, a Real-parameter Genetic Algorithm model is developed and applied. The results obtained are validated against the reference study results.
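
    As a rough illustration of the real-parameter (real-coded) genetic algorithm idea, the sketch below evolves a floating-point population with tournament selection, blend crossover and Gaussian mutation. The condenser profit model from the study is not public, so objective() is a placeholder; bounds, rates and population size are illustrative assumptions.

        # A minimal real-parameter GA sketch (tournament selection, blend
        # crossover, Gaussian mutation). The condenser profit model is not
        # public, so a placeholder objective is minimized instead.
        import numpy as np

        rng = np.random.default_rng(1)
        LOW, HIGH = np.array([0.0, 0.0]), np.array([10.0, 10.0])  # parameter bounds

        def objective(x):                 # placeholder for the condenser model
            return (x[0] - 3.2) ** 2 + (x[1] - 7.5) ** 2

        def tournament(pop, fit, k=3):
            idx = rng.choice(len(pop), size=k, replace=False)
            return pop[idx[np.argmin(fit[idx])]]

        pop = rng.uniform(LOW, HIGH, size=(40, 2))
        for gen in range(100):
            fit = np.array([objective(ind) for ind in pop])
            children = []
            for _ in range(len(pop)):
                p1, p2 = tournament(pop, fit), tournament(pop, fit)
                alpha = rng.uniform(-0.5, 1.5)       # BLX-style blend crossover
                child = alpha * p1 + (1 - alpha) * p2
                child += rng.normal(0, 0.1, size=2)  # Gaussian mutation
                children.append(np.clip(child, LOW, HIGH))
            pop = np.array(children)

        best = pop[np.argmin([objective(ind) for ind in pop])]
        print("best parameters found:", best)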

  9. Optimal Parameter Selection of Power System Stabilizer using Genetic Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Hyeng Hwan; Chung, Dong Il; Chung, Mun Kyu [Dong-A University (Korea); Wang, Yong Peel [Canterbury University (New Zealand)

    1999-06-01

    In this paper, a method is suggested for selecting the optimal parameters of a power system stabilizer (PSS) that is robust to low-frequency oscillation in power systems, using a real variable elitism genetic algorithm (RVEGA). The optimal parameters were selected for power system stabilizers with one lead compensator and with two lead compensators. Also, the frequency response characteristics of the PSS, the system eigenvalue criterion and the dynamic characteristics were considered under normal load and heavy load, which proved the usefulness of the RVEGA compared with Yu's compensator design theory. (author). 20 refs., 15 figs., 8 tabs.

  10. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology

    Directory of Open Access Journals (Sweden)

    Rupert Faltermeier

    2015-01-01

    Full Text Available Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of the cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to adjust this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients as represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters improves the sensitivity of the method by a factor greater than four in comparison to our first analyses.

  11. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology.

    Science.gov (United States)

    Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of the cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to adjust this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients as represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters improves the sensitivity of the method by a factor greater than four in comparison to our first analyses.

  12. Parameter Optimization and Electrode Improvement of Rotary Stepper Micromotor

    Science.gov (United States)

    Sone, Junji; Mizuma, Toshinari; Mochizuki, Shunsuke; Sarajlic, Edin; Yamahata, Christophe; Fujita, Hiroyuki

    We developed a three-phase electrostatic stepper micromotor and performed numerical simulations to improve its performance for practical use and to optimize its design. We carried out a circuit simulation of a simplified structure, taking into account the springback force generated by the flexure-based support mechanism, and investigated a new method for improving the electrodes. This improvement, together with other parameter optimizations, achieved low-voltage drive of the micromotor.

  13. Sensitive parameters' optimization of the permanent magnet supporting mechanism

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yongguang; Gao, Xiaohui; Wang, Yixuan; Yang, Xiaowei [Beihang University, Beijing (China)

    2014-07-15

    The fast development of ultra-high-speed vertical rotors promotes the study and exploration of supporting mechanisms. How to improve the speed and overcome the vibration when the rotors pass through the low-order critical frequencies has become a focus of research. This paper introduces a kind of permanent magnet (PM) supporting mechanism and describes an optimization method for its sensitive parameters, which enables the vertical rotor system to reach 80,000 r/min smoothly. First, we identify the sensitive parameters by analyzing the rotor's behavior in the process of reaching high speed; then we study these sensitive parameters and summarize their regularities by combining experiments with the finite element method (FEM); finally, we arrive at an optimization method for these parameters. This not only yields a stable speed-raising effect and greatly shortens the debugging time, but also promotes the wider application of the PM supporting mechanism in ultra-high-speed vertical rotors.

  14. Luminosity Optimization Feedback in the SLC

    International Nuclear Information System (INIS)

    1999-01-01

    The luminosity optimization at the SLC has been limited by the precision with which one can measure the micron size beams at the Interaction Point. Ten independent tuning parameters must be adjusted. An automated application has been used to scan each parameter over a significant range and set the minimum beam size as measured with a beam-beam deflection scan. Measurement errors limited the accuracy of this procedure and degraded the resulting luminosity. A new luminosity optimization feedback system has been developed using novel dithering techniques to maximize the luminosity with respect to the 10 parameters, which are adjusted one at a time. Control devices are perturbed around nominal setpoints, while the averaged readout of a digitized luminosity monitor measurement is accumulated for each setting. Results are averaged over many pulses to achieve high precision and then fitted to determine the optimal setting. The dithering itself causes a small loss in luminosity, but the improved optimization is expected to significantly enhance the performance of the SLC. Commissioning results are reported
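
    A minimal sketch of the dithering idea described above: one tuning parameter is perturbed around its setpoint, a noisy luminosity readout is averaged at each perturbation, and a parabola fit locates the optimum. The luminosity() function below is a simulated stand-in for the digitized luminosity monitor; all numbers are illustrative.

        # Dithering sketch: perturb a setpoint, average noisy readings,
        # fit a parabola and move to its vertex.
        import numpy as np

        rng = np.random.default_rng(2)

        def luminosity(x):                   # simulated monitor readout
            return 1.0 - 0.05 * (x - 0.3) ** 2 + 0.002 * rng.normal()

        setpoint = 0.0
        for _ in range(5):                   # repeated dither cycles
            offsets = np.linspace(-0.2, 0.2, 5)
            readings = [np.mean([luminosity(setpoint + d) for _ in range(200)])
                        for d in offsets]
            a, b, c = np.polyfit(offsets, readings, 2)
            if a < 0:                        # parabola opens downward -> maximum
                setpoint += float(np.clip(-b / (2 * a), -0.2, 0.2))

        print("converged setpoint:", round(setpoint, 3))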

  15. A choice of the parameters of NPP steam generators on the basis of vector optimization

    International Nuclear Information System (INIS)

    Lemeshev, V.U.; Metreveli, D.G.

    1981-01-01

    The optimization problem for the parameters of designed systems is considered as a multicriterion optimization problem. It is proposed to choose non-dominated, Pareto-optimal parameters. An algorithm is built on the basis of the necessary and sufficient non-dominance conditions to find non-dominated solutions. This algorithm has been employed to solve the problem of choosing optimal parameters for the counterflow shell-and-tube steam generator of an NPP of the BRGD type [ru

  16. The Realization of the Most Economical and Optimized Control System

    Institute of Scientific and Technical Information of China (English)

    WU Bin

    2002-01-01

    In order to open the way to low-cost automation, the method of setting up the most economical and optimized control system is studied. Such a system is achieved by adopting field bus technologies based on network connection to form a hierarchical architecture, and by employing a genetic algorithm to intelligently optimize the parameters of the topology structure at the field execution level and the parameters of the local controller. Practice has proved that this realization can shorten the system development cycle, improve the system's reliability, and achieve conspicuous social and economic benefits.

  17. PARAMETER ESTIMATION OF VALVE STICTION USING ANT COLONY OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    S. Kalaivani

    2012-07-01

    Full Text Available In this paper, a procedure for quantifying valve stiction in control loops based on ant colony optimization is proposed. Pneumatic control valves are widely used in the process industry. A control valve contains nonlinearities such as stiction, backlash, and deadband that in turn cause oscillations in the process output. Stiction is one of the long-standing problems and the most severe problem in control valves. Thus the measurement data from an oscillating control loop can be used as a diagnostic signal to provide an estimate of the stiction magnitude. Quantification of control valve stiction is still a challenging issue. Prior to stiction detection and quantification, it is necessary to choose a suitable model structure to describe control-valve stiction; to capture the stiction phenomenon, the Stenman model is used. Ant Colony Optimization (ACO), an intelligent swarm algorithm, has proven effective in various fields. The ACO algorithm is inspired by the natural trail-following behaviour of ants. The parameters of the Stenman model are estimated using ant colony optimization from the input-output data, by minimizing the error between the actual stiction model output and the simulated stiction model output. Using ant colony optimization, the Stenman model, with known nonlinear structure and unknown parameters, can be estimated.
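
    The estimation step can be sketched as follows: simulate the one-parameter Stenman stiction model and recover its dead-band parameter by minimizing the output error. SciPy's differential evolution is used here purely as a stand-in for the paper's ant colony optimizer, and the controller signal is synthetic.

        # Output-error estimation of the Stenman stiction band d.
        import numpy as np
        from scipy.optimize import differential_evolution

        def stenman(u, d):
            """One-parameter Stenman model: the valve sticks until the control
            signal moves more than d away from the current stem position."""
            x = np.empty_like(u)
            x[0] = u[0]
            for t in range(1, len(u)):
                x[t] = u[t] if abs(u[t] - x[t - 1]) > d else x[t - 1]
            return x

        rng = np.random.default_rng(3)
        u = np.cumsum(rng.normal(size=500)) * 0.05   # synthetic controller output
        measured = stenman(u, d=0.4) + 0.01 * rng.normal(size=500)

        result = differential_evolution(
            lambda p: np.mean((stenman(u, p[0]) - measured) ** 2),
            bounds=[(0.0, 2.0)], maxiter=60, seed=3)
        print("estimated stiction band:", round(result.x[0], 3))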

  18. Automated Robust Maneuver Design and Optimization

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA is seeking improvements to the current technologies related to Position, Navigation and Timing. In particular, it is desired to automate precise maneuver...

  19. Steam condenser optimization using Real-parameter Genetic Algorithm for Prototype Fast Breeder Reactor

    International Nuclear Information System (INIS)

    Jayalal, M.L.; Kumar, L. Satish; Jehadeesan, R.; Rajeswari, S.; Satya Murty, S.A.V.; Balasubramaniyan, V.; Chetal, S.C.

    2011-01-01

    Highlights: → We model design optimization of a vital reactor component using a Genetic Algorithm. → A Real-parameter Genetic Algorithm is used for the steam condenser optimization study. → Comparison analysis is done with various Genetic Algorithm related mechanisms. → The results obtained are validated against the reference study results. - Abstract: This work explores the use of a Real-parameter Genetic Algorithm and analyses its performance in the steam condenser (or Circulating Water System) optimization study of a 500 MW fast breeder nuclear reactor. Choosing optimum condenser design parameters for a power plant from among a large number of technically viable combinations is a complex task, primarily due to the conflicting economic implications of the different system parameters for maximizing the capitalized profit. In order to find the optimum design parameters, a Real-parameter Genetic Algorithm model is developed and applied. The results obtained are validated against the reference study results.

  20. Experimental Verification of Statistically Optimized Parameters for Low-Pressure Cold Spray Coating of Titanium

    Directory of Open Access Journals (Sweden)

    Damilola Isaac Adebiyi

    2016-06-01

    Full Text Available The cold spray coating process involves many process parameters, which make the process very complex and highly dependent on, and sensitive to, small changes in these parameters. This results in a small operational window of the parameters. Consequently, mathematical optimization of the process parameters is key, not only to achieving deposition but also to improving the coating quality. This study focuses on the mathematical identification and experimental justification of the optimum process parameters for cold spray coating of a titanium alloy with silicon carbide (SiC). The continuity, momentum and energy equations governing the flow through the low-pressure cold spray nozzle were solved by introducing a constitutive equation to close the system. This was used to calculate the critical velocity for the deposition of SiC. In order to determine the input temperature that yields the calculated velocity, the distributions of velocity, temperature, and pressure in the cold spray nozzle were analyzed, and the exit values were predicted using the meshing tool of SolidWorks. Coatings fabricated using the optimized parameters are compared with those fabricated using non-optimized parameters. The coating produced with the CFD-optimized parameters exhibited lower porosity and higher hardness.

  1. Sensitivity analysis of reactor safety parameters with automated adjoint function generation

    International Nuclear Information System (INIS)

    Kallfelz, J.M.; Horwedel, J.E.; Worley, B.A.

    1992-01-01

    A project at the Paul Scherrer Institute (PSI) involves the development of simulation models for the transient analysis of the reactors in Switzerland (STARS). This project, funded in part by the Swiss Federal Nuclear Safety Inspectorate, also involves the calculation and evaluation of certain transients for Swiss light water reactors (LWRs). For best-estimate analyses, a key element in quantifying reactor safety margins is uncertainty evaluation to determine the uncertainty in calculated integral values (responses) caused by modeling, calculational methodology, and input data (parameters). The work reported in this paper is a joint PSI/Oak Ridge National Laboratory (ORNL) application to a core transient analysis code of an ORNL software system for automated sensitivity analysis. The Gradient-Enhanced Software System (GRESS) is a software package that can in principle enhance any code so that it can calculate the sensitivity (derivative) to input parameters of any integral value (response) calculated in the original code. The studies reported are the first application of the GRESS capability to core neutronics and safety codes

  2. A Parameter Estimation Method for Nonlinear Systems Based on Improved Boundary Chicken Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Shaolong Chen

    2016-01-01

    Full Text Available Parameter estimation is an important problem in nonlinear system modeling and control. By constructing an appropriate fitness function, the parameter estimation of a system can be converted into a multidimensional parameter optimization problem. As a novel swarm intelligence algorithm, chicken swarm optimization (CSO) has attracted much attention owing to its good global convergence and robustness. In this paper, a method based on improved boundary chicken swarm optimization (IBCSO) is proposed for parameter estimation of nonlinear systems, and is demonstrated and tested on the Lorenz system and a coupled motor system. Furthermore, we have analyzed the influence of the time series on the estimation accuracy. Computer simulation results show that the method is feasible and performs well for parameter estimation of nonlinear systems.

  3. Artificial neural networks for automation of Rutherford backscattering spectroscopy experiments and data analysis

    International Nuclear Information System (INIS)

    Barradas, N.P.; Vieira, A.; Patricio, R.

    2002-01-01

    We present an algorithm based on artificial neural networks able to determine optimized experimental conditions for Rutherford backscattering measurements of Ge-implanted Si. The algorithm can be implemented for any other element implanted into a lighter substrate, and it is foreseeable that the method developed in this work can be applied to many other systems. The algorithm presented is a push-button black box, and does not require any human intervention. It is thus suited for automated control of an experimental setup, given an interface to the relevant hardware. Once the experimental conditions are optimized, the algorithm analyzes the final data obtained and determines the desired parameters. The method is thus also suited for automated analysis of the data. The algorithm presented can be easily extended to other ion beam analysis techniques. Finally, it is suggested how the artificial neural networks required for automated control and analysis of experiments could be automatically generated, which would be suited for automated generation of the required computer code. Thus, RBS could be done without experimentalists, data analysts, or programmers, with only technicians to keep the machines running

  4. Optimization of physico-chemical and nutritional parameters for ...

    African Journals Online (AJOL)

    Optimization of physico-chemical and nutritional parameters for pullulan production by a mutant of thermotolerant Aureobasidium pullulans, in fed batch ... minutes, having a killing rate of 70% level, produced 6 g l-1 more pullulan than the wild type without losing thermotolerance and non-melanin-producing ability.

  5. Optimization of process parameters for synthesis of silica–Ni ...

    Indian Academy of Sciences (India)

    Optimization of process parameters for synthesis of silica–Ni nanocomposite by design of experiment ... Sol–gel; Ni; design of experiments; nanocomposites. ... Kolkata 700 032, India; Rustech Products Pvt. Ltd., Kolkata 700 045, India ...

  6. On the effect of response transformations in sequential parameter optimization.

    Science.gov (United States)

    Wagner, Tobias; Wessing, Simon

    2012-01-01

    Parameter tuning of evolutionary algorithms (EAs) is attracting more and more interest. In particular, the sequential parameter optimization (SPO) framework for the model-assisted tuning of stochastic optimizers has resulted in established parameter tuning algorithms. In this paper, we enhance the SPO framework by introducing transformation steps before the response aggregation and before the actual modeling. Based on design-of-experiments techniques, we empirically analyze the effect of integrating different transformations. We show that in particular, a rank transformation of the responses provides significant improvements. A deeper analysis of the resulting models and additional experiments with adaptive procedures indicates that the rank and the Box-Cox transformation are able to improve the properties of the resultant distributions with respect to symmetry and normality of the residuals. Moreover, model-based effect plots document a higher discriminatory power obtained by the rank transformation.
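
    A minimal sketch of the rank transformation discussed above, assuming a Gaussian process surrogate as the SPO model: the raw, skewed responses are replaced by their ranks before the surrogate is fitted. The data and the model choice are illustrative, not taken from the paper.

        # Rank-transform responses before fitting a surrogate model.
        import numpy as np
        from scipy.stats import rankdata
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(4)
        X = rng.uniform(0, 1, size=(30, 2))       # sampled tuning parameters
        y = np.exp(5 * X[:, 0]) + rng.normal(scale=0.5, size=30)  # skewed responses

        y_rank = rankdata(y)                      # the rank transformation
        model = GaussianProcessRegressor().fit(X, y_rank)
        print("surrogate fitted on ranks; R^2 on training data:",
              round(model.score(X, y_rank), 3))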

  7. Hybrid artificial bee colony algorithm for parameter optimization of five-parameter bidirectional reflectance distribution function model.

    Science.gov (United States)

    Wang, Qianqian; Zhao, Jing; Gong, Yong; Hao, Qun; Peng, Zhong

    2017-11-20

    A hybrid artificial bee colony (ABC) algorithm inspired by the best-so-far solution and bacterial chemotaxis was introduced to optimize the parameters of the five-parameter bidirectional reflectance distribution function (BRDF) model. To verify the performance of the hybrid ABC algorithm, we measured the BRDF of three kinds of samples and estimated the undetermined parameters of the five-parameter BRDF model using the hybrid ABC algorithm and the genetic algorithm, respectively. The experimental results demonstrate that the hybrid ABC algorithm outperforms the genetic algorithm in convergence speed, accuracy, and time efficiency under the same conditions.

  8. Optimization of TRPO process parameters for americium extraction from high level waste

    International Nuclear Information System (INIS)

    Chen Jing; Wang Jianchen; Song Chongli

    2001-01-01

    The numerical calculations for Am multistage fractional extraction by trialkyl phosphine oxide (TRPO) were verified by a hot test. 1750 L/t-U high level waste (HLW) was used as the feed to the TRPO process. The analysis used a simple objective function to minimize the total waste content in the TRPO process streams. Some process parameters were optimized after the other parameters were selected. The optimal process parameters for Am extraction by TRPO are: 10 stages for extraction and 2 stages for scrubbing; a flow rate ratio of 0.931 for extraction and 4.42 for scrubbing; and nitric acid concentrations of 1.35 mol/L for the feed and 0.5 mol/L for the scrubbing solution. Finally, the nitric acid and Am concentration profiles in the optimal TRPO extraction process are given

  9. Automated scheme to determine design parameters for a recoverable reentry vehicle

    International Nuclear Information System (INIS)

    Williamson, W.E.

    1976-01-01

    The NRV (Nosetip Recovery Vehicle) program at Sandia Laboratories is designed to recover the nose section from a sphere cone reentry vehicle after it has flown a near ICBM reentry trajectory. Both mass jettison and parachutes are used to reduce the velocity of the RV near the end of the trajectory to a sufficiently low level that the vehicle may land intact. The design problem of determining mass jettison time and parachute deployment time in order to ensure that the vehicle does land intact is considered. The problem is formulated as a min-max optimization problem where the design parameters are to be selected to minimize the maximum possible deviation in the design criteria due to uncertainties in the system. The results of the study indicate that the optimal choice of the design parameters ensures that the maximum deviation in the design criteria is within acceptable bounds. This analytically ensures the feasibility of recovery for NRV

  10. AI-guided parameter optimization in inverse treatment planning

    International Nuclear Information System (INIS)

    Yan Hui; Yin Fangfang; Guan Huaiqun; Kim, Jae Ho

    2003-01-01

    An artificial intelligence (AI)-guided inverse planning system was developed to optimize the combination of parameters in the objective function for intensity-modulated radiation therapy (IMRT). In this system, the empirical knowledge of inverse planning was formulated with fuzzy if-then rules, which then guide the parameter modification based on the on-line calculated dose. Three kinds of parameters (weighting factor, dose specification, and dose prescription) were automatically modified using the fuzzy inference system (FIS). The performance of the AI-guided inverse planning system (AIGIPS) was examined using simulated and clinical examples. Preliminary results indicate that the expected dose distribution was automatically achieved using the AI-guided inverse planning system, with the complicated compromise between different parameters accomplished by the fuzzy inference technique. The AIGIPS provides a highly promising method to replace the current trial-and-error approach

  11. A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process

    Science.gov (United States)

    Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa

    2017-06-01

    High-density polyethylene (HDPE) pipes find versatile applicability for the transportation of water, sewage and slurry from one place to another, and hence undergo tremendous pressure from the fluid carried. The present work entails the optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology stresses a trial-and-error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters, which results in less-than-optimal settings. Hence there arose a necessity to determine the optimal process control parameters for the pipe extrusion process, which can ensure robust pipe quality and process reliability. In the proposed optimization strategy, design of experiments (DoE) is conducted wherein different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise ratio (S/N ratio) is applied, and the optimum values of the process control parameters are ultimately obtained as: a pushing zone temperature of 166 °C, a dimmer speed of 8 rpm, and a die head temperature of 192 °C. A confirmation experimental run was also conducted to verify the analysis; its results proved to be in agreement with the main experimental findings, and the withstanding pressure showed a significant improvement from 0.60 to 1.004 MPa.
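
    For reference, the larger-the-better signal-to-noise ratio used in such Taguchi analyses is S/N = -10 log10((1/n) * sum(1/y_i^2)). The sketch below computes it per experimental run and picks the best level of each factor; the design matrix and pressure readings are invented for illustration, not the study's data.

        # Taguchi S/N ratios for a larger-the-better response.
        import numpy as np

        # each row: (temperature level, speed level, die level), 3 replicates (MPa)
        levels = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]])
        y = np.array([[0.61, 0.63, 0.60],
                      [0.82, 0.85, 0.80],
                      [0.95, 0.97, 0.99],
                      [0.70, 0.72, 0.69]])

        sn = -10 * np.log10(np.mean(1.0 / y ** 2, axis=1))  # larger-the-better S/N

        for factor in range(levels.shape[1]):
            means = [sn[levels[:, factor] == lv].mean() for lv in (0, 1)]
            print(f"factor {factor}: best level = {int(np.argmax(means))}")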

  12. Optimization of nonlinear wave function parameters

    International Nuclear Information System (INIS)

    Shepard, R.; Minkoff, M.; Chemistry

    2006-01-01

    An energy-based optimization method is presented for our recently developed nonlinear wave function expansion form for electronic wave functions. This expansion form is based on spin eigenfunctions, using the graphical unitary group approach (GUGA). The wave function is expanded in a basis of product functions, allowing application to closed-shell and open-shell systems and to ground and excited electronic states. Each product basis function is itself a multiconfigurational function that depends on a relatively small number of nonlinear parameters called arc factors. The energy-based optimization is formulated in terms of analytic arc factor gradients and orbital-level Hamiltonian matrices that correspond to a specific kind of uncontraction of each of the product basis functions. These orbital-level Hamiltonian matrices give an intuitive representation of the energy in terms of disjoint subsets of the arc factors, they provide for an efficient computation of gradients of the energy with respect to the arc factors, and they allow optimal arc factors to be determined in closed form for subspaces of the full variation problem. Timings for energy and arc factor gradient computations involving expansion spaces of >10^24 configuration state functions are reported. Preliminary convergence studies and molecular dissociation curves are presented for some small molecules

  13. Integrating Test-Form Formatting into Automated Test Assembly

    Science.gov (United States)

    Diao, Qi; van der Linden, Wim J.

    2013-01-01

    Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…

  14. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, M; Li, R; Xing, L [Stanford UniversitySchool of Medicine, Stanford, CA (United States); Ye, Y [Stanford Univ, Management Science and Engineering, Stanford, Ca (United States); Boyd, S [Stanford University, Electrical Engineering, Stanford, CA (United States)

    2014-06-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, the gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles along the gradient. The algorithm continues with a pattern search method to explore the parts of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan, while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves

  15. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    International Nuclear Information System (INIS)

    Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S

    2014-01-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, the gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles along the gradient. The algorithm continues with a pattern search method to explore the parts of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan, while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves

  16. Standardless quantification by parameter optimization in electron probe microanalysis

    International Nuclear Information System (INIS)

    Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.

    2012-01-01

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 in 66% of the cases for POEMA, GENESIS and DTSA, respectively. - Highlights: ► A method for standardless quantification in EPMA is presented. ► It gives better results than the commercial software GENESIS Spectrum. ► It gives better results than the software DTSA. ► It allows the determination of the conductive coating thickness. ► It gives an estimation of the concentration uncertainties.

  17. Bioprocessing automation in cell therapy manufacturing: Outcomes of special interest group automation workshop.

    Science.gov (United States)

    Ball, Oliver; Robinson, Sarah; Bure, Kim; Brindley, David A; Mccall, David

    2018-04-01

    Phacilitate held a Special Interest Group workshop event in Edinburgh, UK, in May 2017. The event brought together leading stakeholders in the cell therapy bioprocessing field to identify present and future challenges and propose potential solutions to automation in cell therapy bioprocessing. Here, we review and summarize discussions from the event. Deep biological understanding of a product, its mechanism of action and indication pathogenesis underpin many factors relating to bioprocessing and automation. To fully exploit the opportunities of bioprocess automation, therapeutics developers must closely consider whether an automation strategy is applicable, how to design an 'automatable' bioprocess and how to implement process modifications with minimal disruption. Major decisions around bioprocess automation strategy should involve all relevant stakeholders; communication between technical and business strategy decision-makers is of particular importance. Developers should leverage automation to implement in-process testing, in turn applicable to process optimization, quality assurance (QA)/quality control (QC), batch failure control, adaptive manufacturing and regulatory demands, but a lack of precedent and technical opportunities can complicate such efforts. Sparse standardization across product characterization, hardware components and software platforms is perceived to complicate efforts to implement automation. The use of advanced algorithmic approaches such as machine learning may have application to bioprocess and supply chain optimization. Automation can substantially de-risk the wider supply chain, including tracking and traceability, cryopreservation and thawing and logistics. The regulatory implications of automation are currently unclear because few hardware options exist and novel solutions require case-by-case validation, but automation can present attractive regulatory incentives. Copyright © 2018 International Society for Cellular Therapy.

  18. Parameter optimization for reproducible cardiac 1H-MR spectroscopy at 3 Tesla.

    Science.gov (United States)

    de Heer, Paul; Bizino, Maurice B; Lamb, Hildo J; Webb, Andrew G

    2016-11-01

    To optimize data acquisition parameters in cardiac proton MR spectroscopy, and to evaluate the intra- and intersession variability in myocardial triglyceride content. Data acquisition parameters at 3 Tesla (T) were optimized and reproducibility measured using, in total, 49 healthy subjects. The signal-to-noise ratio (SNR) and the variance in metabolite amplitude between averages were measured for: (i) global versus local power optimization; (ii) static magnetic field (B0) shimming performed during free breathing or within breathholds; (iii) post-R-wave-peak measurement times between 50 and 900 ms; (iv) without respiratory compensation, with breathholds and with navigator triggering; and (v) frequency-selective excitation, Chemical Shift Selective (CHESS) and Multiply Optimized Insensitive Suppression Train (MOIST) water suppression techniques. Using the optimized parameters, intra- and intersession myocardial triglyceride content reproducibility was measured. Two cardiac proton spectra were acquired with the same parameters and compared (intrasession reproducibility), after which the subject was removed from the scanner, placed back in the scanner, and a third spectrum was acquired and compared with the first measurement (intersession reproducibility). Local power optimization increased SNR on average by 22% compared with global power optimization (P = 0.0002). The average linewidth was not significantly different for pencil-beam B0 shimming using free breathing or breathholds (19.1 Hz versus 17.5 Hz; P = 0.15). The highest signal stability occurred at a cardiac trigger delay around 240 ms. The mean amplitude variation was significantly lower for breathholds versus free breathing (P = 0.03) and for navigator triggering versus free breathing (P = 0.03), as well as for navigator triggering versus breathhold (P = 0.02). The mean residual water signal using CHESS (1.1%, P = 0.01) or MOIST (0.7%, P = 0.01) water suppression was significantly lower than using

  19. Optimization of cryogenic cooled EDM process parameters using grey relational analysis

    International Nuclear Information System (INIS)

    Kumar, S Vinoth; Kumar, M Pradeep

    2014-01-01

    This paper presents an experimental investigation of cryogenic cooling of a liquid nitrogen (LN2) cooled copper electrode in the electrical discharge machining (EDM) process. The optimization of the EDM process parameters, namely the electrode environment (conventional versus cryogenically cooled electrode), discharge current, pulse-on time, and gap voltage, with respect to the material removal rate, electrode wear, and surface roughness in machining of AlSiCp metal matrix composite was investigated using multiple performance characteristics in grey relational analysis. The L18 orthogonal array was utilized to examine the process parameters, and the optimal levels of the process parameters were identified through grey relational analysis. Experimental data were analyzed through analysis of variance. Scanning electron microscopy analysis was conducted to study the characteristics of the machined surface.
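
    A minimal sketch of the grey relational grade computation that folds several responses into a single rank, assuming the usual distinguishing coefficient of 0.5. The response matrix below is invented for illustration, not the study's data.

        # Grey relational analysis: normalize, compute coefficients and grade.
        import numpy as np

        # rows = experiments; columns = MRR (larger better), wear and Ra (smaller better)
        resp = np.array([[12.0, 0.30, 3.2],
                         [15.0, 0.45, 2.8],
                         [ 9.0, 0.20, 4.0]])

        norm = np.empty_like(resp)
        norm[:, 0] = (resp[:, 0] - resp[:, 0].min()) / np.ptp(resp[:, 0])  # larger-better
        for j in (1, 2):                                                   # smaller-better
            norm[:, j] = (resp[:, j].max() - resp[:, j]) / np.ptp(resp[:, j])

        delta = 1.0 - norm                 # deviation from the ideal sequence
        zeta = 0.5                         # distinguishing coefficient
        coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
        grade = coeff.mean(axis=1)         # grey relational grade per experiment
        print("best experiment:", int(np.argmax(grade)) + 1)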

  20. The optimal extraction parameters and anti-diabetic activity of ...

    African Journals Online (AJOL)

    diabetic activity of FIBL on alloxan-induced diabetic mice were studied. The optimal extraction parameters of FIBL were obtained by single-factor and orthogonal tests, as follows: ethanol concentration 60 %, ratio of solvent to raw material 30 ...

  1. Optimizing chirped laser pulse parameters for electron acceleration in vacuum

    Energy Technology Data Exchange (ETDEWEB)

    Akhyani, Mina; Jahangiri, Fazel; Niknam, Ali Reza; Massudi, Reza, E-mail: r-massudi@sbu.ac.ir [Laser and Plasma Research Institute, Shahid Beheshti University, Tehran 1983969411 (Iran, Islamic Republic of)

    2015-11-14

    Electron dynamics in the field of a chirped linearly polarized laser pulse is investigated. Variations of electron energy gain versus chirp parameter, time duration, and initial phase of laser pulse are studied. Based on maximizing laser pulse asymmetry, a numerical optimization procedure is presented, which leads to the elimination of rapid fluctuations of gain versus the chirp parameter. Instead, a smooth variation is observed that considerably reduces the accuracy required for experimentally adjusting the chirp parameter.

  2. A parameter estimation for DC servo motor by using optimization process

    International Nuclear Information System (INIS)

    Arjoni Amir

    2010-01-01

    Modeling and simulation of DC servo motor parameters using Matlab Simulink software have been carried out. The objective of the DC servo motor parameter estimation is to obtain parameter values (B, La, Ra, Km, J) that are significant and can be used in the actuation process of control systems. In control system analysis, the DC servo motor is expressed by a transfer function equation so that it can be analyzed more quickly as an actuator component. To obtain the model parameters and initial conditions of the DC servo motor, modeling and simulation are carried out in which the DC servo motor is combined with other components. The preliminary values of the DC servo motor parameters, used as the starting point of the estimation, were taken from the motor manufacturer's data. These initial parameters were then applied in an optimization process using a nonlinear least squares algorithm, minimizing the cost function value so that significant DC servo motor parameter values are obtained. The result of the optimization process is the parameter values B = 0.039881, J = 1.2608e-007, Km = 0.069648, La = 2.3242e-006 and Ra = 1.8837. (author)
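
    A sketch of the fitting step under the standard DC motor transfer function omega(s)/V(s) = Km / (La*J*s^2 + (Ra*J + La*B)*s + (Ra*B + Km^2)), with SciPy's nonlinear least squares minimizing the step-response error. The "measured" speed data are synthetic; note that several parameter combinations can produce nearly the same step response, which is why a good initial guess from the manufacturer's data matters.

        # Nonlinear least squares fit of DC motor parameters to a step response.
        import numpy as np
        from scipy import signal
        from scipy.optimize import least_squares

        t = np.linspace(0, 0.05, 500)

        def speed_response(params, v=12.0):
            B, J, Km, La, Ra = params
            # omega(s)/V(s) = Km / (La*J s^2 + (Ra*J + La*B) s + (Ra*B + Km^2))
            num = [Km]
            den = [La * J, Ra * J + La * B, Ra * B + Km ** 2]
            _, y = signal.step((num, den), T=t)
            return v * y                     # step input of amplitude v volts

        true = np.array([0.04, 1.3e-4, 0.07, 2.3e-3, 1.9])
        rng = np.random.default_rng(5)
        measured = speed_response(true) + rng.normal(scale=0.5, size=t.size)

        # the five parameters are not uniquely identifiable from one step
        # response, so the initial guess x0 (e.g., data-sheet values) matters
        fit = least_squares(lambda p: speed_response(p) - measured,
                            x0=[0.01, 1e-4, 0.05, 1e-3, 1.0],
                            bounds=(1e-6, 10.0))
        print("estimated [B, J, Km, La, Ra]:", np.round(fit.x, 4))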

  3. Development and Application of a Tool for Optimizing Composite Matrix Viscoplastic Material Parameters

    Science.gov (United States)

    Murthy, Pappu L. N.; Naghipour Ghezeljeh, Paria; Bednarcyk, Brett A.

    2018-01-01

    This document describes a recently developed analysis tool that enhances the resident capabilities of the Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) and its application. MAC/GMC is a composite material and laminate analysis software package developed at NASA Glenn Research Center. The primary focus of the current effort is to provide a graphical user interface (GUI) capability that helps users optimize highly nonlinear viscoplastic constitutive law parameters by fitting experimentally observed/measured stress-strain responses under various thermo-mechanical conditions for braided composites. The tool has been developed in the MATrix LABoratory (MATLAB) (The MathWorks, Inc., Natick, MA) programming language. The illustrative examples shown are for a specific braided composite system wherein the matrix viscoplastic behavior is represented by a constitutive law described by seven parameters. The tool is general enough to fit any number of experimentally observed stress-strain responses of the material. The number of parameters to be optimized, as well as the importance given to each stress-strain response, are user choices. Three different optimization algorithms are included: (1) optimization based on the gradient method, (2) genetic algorithm (GA) based optimization, and (3) particle swarm optimization (PSO). The user can mix and match the three algorithms; for example, one can start optimization with either 2 or 3 and then use the optimized solution to further fine-tune with approach 1. The secondary focus of this paper is to demonstrate the application of this tool to optimize/calibrate parameters for a nonlinear viscoplastic matrix to predict stress-strain curves (at constituent and composite levels) at different rates, temperatures and/or loading conditions utilizing the Generalized Method of Cells. After preliminary validation of the tool through comparison with experimental results, a detailed virtual parametric study is

  4. Optimization of submerged arc welding process parameters using quasi-oppositional based Jaya algorithm

    International Nuclear Information System (INIS)

    Rao, R. Venkata; Rai, Dhiraj P.

    2017-01-01

    Submerged arc welding (SAW) is characterized as a multi-input process. Selection of the optimum combination of SAW process parameters is a vital task for achieving high weld quality and productivity. The objective of this work is to optimize the SAW process parameters using a simple optimization algorithm that is fast, robust and convenient. Therefore, in this work the very recently proposed optimization algorithm named the Jaya algorithm is applied to solve the optimization problems in the SAW process. In addition, a modified version of the Jaya algorithm with opposition-based learning, named the “Quasi-oppositional based Jaya algorithm” (QO-Jaya), is proposed in order to improve the performance of the Jaya algorithm. Three optimization case studies are considered, and the results obtained by the Jaya algorithm and the QO-Jaya algorithm are compared with the results obtained by well-known optimization algorithms such as the Genetic algorithm (GA), Particle swarm optimization (PSO), the Imperialist competitive algorithm (ICA) and Teaching-learning based optimization (TLBO).

  5. Optimization of submerged arc welding process parameters using quasi-oppositional based Jaya algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Rao, R. Venkata; Rai, Dhiraj P. [Sardar Vallabhbhai National Institute of Technology, Gujarat (India)

    2017-05-15

    Submerged arc welding (SAW) is characterized as a multi-input process. Selection of the optimum combination of SAW process parameters is a vital task for achieving high weld quality and productivity. The objective of this work is to optimize the SAW process parameters using a simple optimization algorithm that is fast, robust and convenient. Therefore, in this work the very recently proposed optimization algorithm named the Jaya algorithm is applied to solve the optimization problems in the SAW process. In addition, a modified version of the Jaya algorithm with opposition-based learning, named the “Quasi-oppositional based Jaya algorithm” (QO-Jaya), is proposed in order to improve the performance of the Jaya algorithm. Three optimization case studies are considered, and the results obtained by the Jaya algorithm and the QO-Jaya algorithm are compared with the results obtained by well-known optimization algorithms such as the Genetic algorithm (GA), Particle swarm optimization (PSO), the Imperialist competitive algorithm (ICA) and Teaching-learning based optimization (TLBO).
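
    The basic Jaya update is simple enough to sketch in a few lines: each candidate moves toward the best solution and away from the worst, with greedy acceptance. The weld-quality model of the study is not public, so a placeholder objective is minimized; the quasi-oppositional extension of QO-Jaya is omitted here.

        # A minimal Jaya algorithm sketch with a placeholder objective.
        import numpy as np

        rng = np.random.default_rng(6)
        LOW, HIGH = np.zeros(3), np.ones(3) * 10.0

        def objective(x):             # placeholder for the SAW quality model
            return np.sum((x - np.array([4.0, 2.5, 7.0])) ** 2)

        pop = rng.uniform(LOW, HIGH, size=(20, 3))
        for _ in range(200):
            fit = np.array([objective(p) for p in pop])
            best, worst = pop[np.argmin(fit)], pop[np.argmax(fit)]
            for k in range(len(pop)):
                r1, r2 = rng.random(3), rng.random(3)
                # move toward the best and away from the worst solution
                cand = (pop[k] + r1 * (best - np.abs(pop[k]))
                        - r2 * (worst - np.abs(pop[k])))
                cand = np.clip(cand, LOW, HIGH)
                if objective(cand) < fit[k]:    # greedy acceptance, as in Jaya
                    pop[k] = cand

        print("best parameters:",
              np.round(pop[np.argmin([objective(p) for p in pop])], 3))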

  6. Comparisons of criteria in the assessment model parameter optimizations

    International Nuclear Information System (INIS)

    Liu Xinhe; Zhang Yongxing

    1993-01-01

    Three criteria (chi square, relative chi square and correlation coefficient) used in model parameter optimization (MPO) process that aims at significant reduction of prediction uncertainties were discussed and compared to each other with the aid of a well-controlled tracer experiment

  7. Parameter evaluation and fully-automated radiosynthesis of [(11)C]harmine for imaging of MAO-A for clinical trials.

    Science.gov (United States)

    Philippe, C; Zeilinger, M; Mitterhauser, M; Dumanic, M; Lanzenberger, R; Hacker, M; Wadsak, W

    2015-03-01

    The aim of the present study was the evaluation and automation of the radiosynthesis of [(11)C]harmine for clinical trials. The following parameters were investigated: amount of base, precursor concentration, solvent, reaction temperature and reaction time. The optimum reaction conditions were determined to be 2-3 mg/mL precursor activated with 1 eq. 5 M NaOH in DMSO, a reaction temperature of 80 °C and a reaction time of 2 min. Under these conditions 6.1±1 GBq (51.0±11% based on [(11)C]CH3I, corrected for decay) of [(11)C]harmine (n=72) were obtained. The specific activity was 101.32±28.2 GBq/µmol (at EOS). All quality control parameters were in accordance with the standards for parenteral human application. Due to its reliability and high yields, this fully automated synthesis method can be used as a routine set-up. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Engineering systems for novel automation methods

    International Nuclear Information System (INIS)

    Fischer, H.D.

    1997-01-01

    Modern automation methods for optimal control, state reconstruction, or parameter identification require a discrete dynamic path model. This is established, among other ways, by time and location discretisation of a system of partial differential equations. The digital wave filter principle is particularly suitable for this purpose, since the numeric stability of the derived algorithms can be easily guaranteed, and their robustness to the effects of word-length limitations can be proven. This principle is also particularly attractive in that it can be excellently integrated into currently existing engineering systems for instrumentation and control. (orig./CB) [de

  9. WARACS: Wrappers to Automate the Reconstruction of Ancestral Character States.

    Science.gov (United States)

    Gruenstaeudl, Michael

    2016-02-01

    Reconstructions of ancestral character states are among the most widely used analyses for evaluating the morphological, cytological, or ecological evolution of an organismic lineage. The software application Mesquite remains the most popular application for such reconstructions among plant scientists, even though its support for automating complex analyses is limited. A software tool is needed that automates the reconstruction and visualization of ancestral character states with Mesquite and similar applications. A set of command line-based Python scripts was developed that (a) communicates standardized input to and output from the software applications Mesquite, BayesTraits, and TreeGraph2; (b) automates the process of ancestral character state reconstruction; and (c) facilitates the visualization of reconstruction results. WARACS provides a simple tool that streamlines the reconstruction and visualization of ancestral character states over a wide array of parameters, including tree distribution, character state, and optimality criterion.

  10. Parameter estimation of photovoltaic cells using an improved chaotic whale optimization algorithm

    International Nuclear Information System (INIS)

    Oliva, Diego; Abd El Aziz, Mohamed; Ella Hassanien, Aboul

    2017-01-01

    Highlights: •We modify the whale algorithm using chaotic maps. •We apply a chaotic algorithm to estimate parameters of photovoltaic cells. •We perform a study of chaos in the whale algorithm. •Several comparisons and metrics support the experimental results. •We test the method with data from real solar cells. -- Abstract: The use of solar energy has increased because it is a clean source of energy. Accordingly, the design of photovoltaic cells has attracted the attention of researchers around the world. There are two main problems in this field: the need for a useful model to characterize solar cells, and the absence of data about photovoltaic cells. This situation even affects the performance of photovoltaic modules (panels). The current-versus-voltage characteristics are used to describe the behavior of solar cells. Considering such values, the design problem involves the solution of complex non-linear and multimodal objective functions. Different algorithms have been proposed to identify the parameters of photovoltaic cells and panels, but most of them commonly fail to find the optimal solutions. This paper proposes the Chaotic Whale Optimization Algorithm (CWOA) for the parameter estimation of solar cells. The main advantage of the proposed approach is the use of chaotic maps to compute and automatically adapt the internal parameters of the optimization algorithm. This is beneficial in complex problems, because along the iterative process the algorithm improves its capability to search for the best solution. The modified method is able to optimize complex and multimodal objective functions, such as the function for the estimation of solar cell parameters. To illustrate the capabilities of the proposed algorithm in solar cell design, it is compared with other optimization methods over different datasets. Moreover, the experimental results support the improved performance of the proposed approach.
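
    As an illustration of the central idea, the sketch below drives the coefficients of a stripped-down whale position update with a logistic chaotic map instead of uniform random numbers. It omits the spiral bubble-net phase of the full WOA, and all names and constants are illustrative rather than taken from the paper.

```python
import numpy as np

def chaotic_woa(obj, bounds, n_whales=30, n_iter=200, seed=1):
    """Minimal whale-style search whose coefficients A and C are generated
    by a logistic chaotic map rather than by uniform random draws."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, (n_whales, len(lo)))
    best = min(X, key=obj).copy()
    z = 0.7                                # chaotic state in (0, 1)
    for t in range(n_iter):
        a = 2.0 * (1.0 - t / n_iter)       # linearly decreasing control parameter
        for i in range(n_whales):
            z = 4.0 * z * (1.0 - z)        # logistic map update
            A, C = 2.0 * a * z - a, 2.0 * z
            if abs(A) < 1:                 # exploitation: encircle the best whale
                X[i] = best - A * np.abs(C * best - X[i])
            else:                          # exploration: move w.r.t. a random whale
                rand = X[rng.integers(n_whales)]
                X[i] = rand - A * np.abs(C * rand - X[i])
            X[i] = np.clip(X[i], lo, hi)
            if obj(X[i]) < obj(best):
                best = X[i].copy()
    return best
```

    For a single-diode solar cell model, `obj` would be the error between measured and computed current-voltage points, and `bounds` the physical ranges of the photocurrent, saturation current, ideality factor and resistances.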

  11. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by this work, we give a simple method for finding optimal parameters in a regularized version of kernel MNF analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.
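
    A hedged sketch of the search described above: `model_snr` is a stand-in for the regularized kernel MNF model SNR (not implemented here), and the loop refines a logarithmic grid over a kernel scale parameter and the regularization parameter, mirroring the paper's 2-4 step procedure.

```python
import numpy as np

def refined_grid_search(model_snr, s_range, r_range, steps=3, n=9):
    """Maximize model_snr(kernel_scale, reg) by increasingly refined grids."""
    (s_lo, s_hi), (r_lo, r_hi) = s_range, r_range
    for _ in range(steps):
        sigmas = np.logspace(np.log10(s_lo), np.log10(s_hi), n)
        regs = np.logspace(np.log10(r_lo), np.log10(r_hi), n)
        snr = np.array([[model_snr(s, r) for r in regs] for s in sigmas])
        i, j = np.unravel_index(np.argmax(snr), snr.shape)
        # shrink the search box around the best grid point found so far
        s_lo, s_hi = sigmas[max(i - 1, 0)], sigmas[min(i + 1, n - 1)]
        r_lo, r_hi = regs[max(j - 1, 0)], regs[min(j + 1, n - 1)]
    return sigmas[i], regs[j]
```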

  12. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.

    Science.gov (United States)

    Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei

    2017-09-21

    In order to utilize the distributed nature of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. This paper describes a parameter communication optimization strategy that balances the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and avoids situations in which model training is disturbed by tasks unrelated to the sensors.
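
    The DSP strategy itself is not published as code; the fragment below only illustrates the general shape of such a scheme, in which a fast worker keeps computing while it stays within a bounded clock distance of the slowest worker. The parameter-server interface (`pull`, `push`, `min_clock`) is an assumed API, not the authors'.

```python
import time

class DynamicSyncWorker:
    """Illustrative worker-side loop for a dynamically adjusted synchronous-
    parallel scheme: synchronization tightens or relaxes via the staleness
    bound instead of a fixed barrier at every iteration."""

    def __init__(self, ps, staleness_bound=3):
        self.ps = ps                       # parameter-server handle (assumed API)
        self.bound = staleness_bound
        self.clock = 0

    def step(self, grad_fn):
        params = self.ps.pull()            # fetch current global parameters
        self.ps.push(grad_fn(params), self.clock)
        self.clock += 1
        # block only when this worker runs too far ahead of the slowest one
        while self.clock - self.ps.min_clock() > self.bound:
            time.sleep(0.01)
```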

  13. Standardless quantification by parameter optimization in electron probe microanalysis

    Energy Technology Data Exchange (ETDEWEB)

    Limandri, Silvina P. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Bonetto, Rita D. [Centro de Investigacion y Desarrollo en Ciencias Aplicadas Dr. Jorge Ronco (CINDECA), CONICET, 47 Street 257, (1900) La Plata (Argentina); Facultad de Ciencias Exactas, Universidad Nacional de La Plata, 1 and 47 Streets (1900) La Plata (Argentina); Josa, Victor Galvan; Carreras, Alejo C. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Trincavelli, Jorge C., E-mail: trincavelli@famaf.unc.edu.ar [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina)

    2012-11-15

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the proposed method are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 for 66% of the cases for POEMA, GENESIS and DTSA, respectively. - Highlights: • A method for standardless quantification in EPMA is presented. • It gives better results than the commercial software GENESIS Spectrum. • It gives better results than the software DTSA. • It allows the determination of the conductive coating thickness. • It gives an estimation of the concentration uncertainties.

  14. Optimization of injection molding process parameters for a plastic cell phone housing component

    Science.gov (United States)

    Rajalingam, Sokkalingam; Vasant, Pandian; Khe, Cheng Seong; Merican, Zulkifli; Oo, Zeya

    2016-11-01

    Injection molding is one of the most widely used processes for producing thin-walled plastic items. However, setting optimal process parameters is difficult, since improper settings can produce defects such as shrinkage in the molded part. This study aims to determine injection molding process parameters that reduce shrinkage defects in a plastic cell phone cover. The currently used machine settings produced shrinkage, with length and width dimensions below the specified limits. Thus, further experiments were needed to identify process parameters that keep the length and width close to their targets with minimal variation. The mold temperature, injection pressure and screw rotation speed are used as process parameters in this research. Response Surface Methodology (RSM) is applied to find the optimal molding process parameters. The major contributing factors influencing the responses were identified with the analysis of variance (ANOVA) technique. Verification runs showed that the shrinkage defect can be minimized with the optimal settings found by RSM.

  15. A New Method for Determining Optimal Regularization Parameter in Near-Field Acoustic Holography

    Directory of Open Access Journals (Sweden)

    Yue Xiao

    2018-01-01

    The Tikhonov regularization method is effective in stabilizing the reconstruction process of near-field acoustic holography (NAH) based on the equivalent source method (ESM), and the selection of the optimal regularization parameter is a key problem that determines the regularization effect. In this work, a new method for determining the optimal regularization parameter is proposed. The transfer matrix relating the source strengths of the equivalent sources to the measured pressures on the hologram surface is augmented by adding a fictitious point source with zero strength. The minimization of the norm of this fictitious point source strength serves as the criterion for choosing the optimal regularization parameter, since the reconstructed value should tend to zero. The original inverse problem of calculating the source strengths is thus converted into a univariate optimization problem, which is solved by a one-dimensional search technique. Two numerical simulations, with a point-driven simply supported plate and a pulsating sphere, are investigated to validate the performance of the proposed method by comparison with the L-curve method. The results demonstrate that the proposed method determines the regularization parameter correctly and effectively for reconstruction in NAH.
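
    A minimal sketch of the criterion, under stated assumptions: a Tikhonov-regularized least-squares solve for the equivalent source strengths, a precomputed column `g_fict` for a fictitious point source whose true strength is zero (its Green's-function construction is omitted), and a bounded one-dimensional search over the regularization parameter.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def optimal_lambda(G, g_fict, p):
    """Pick the Tikhonov parameter that minimizes the reconstructed strength
    of a fictitious point source appended to the ESM transfer matrix G.
    p holds the measured hologram pressures; g_fict is the transfer vector
    from the fictitious source to the measurement points."""
    G_aug = np.hstack([G, g_fict[:, None]])

    def fictitious_strength(log_lam):
        lam = 10.0 ** log_lam
        A = G_aug.conj().T @ G_aug + lam * np.eye(G_aug.shape[1])
        q = np.linalg.solve(A, G_aug.conj().T @ p)   # regularized source strengths
        return abs(q[-1])                            # strength of fictitious source

    res = minimize_scalar(fictitious_strength, bounds=(-12.0, 0.0), method="bounded")
    return 10.0 ** res.x
```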

  16. Optimization of CNC end milling process parameters using PCA ...

    African Journals Online (AJOL)

    Optimization of CNC end milling process parameters using PCA-based Taguchi method. ... International Journal of Engineering, Science and Technology ... To meet the basic assumption of Taguchi method; in the present work, individual response correlations have been eliminated first by means of Principal Component ...

  17. Optimization of Indoor Thermal Comfort Parameters with the Adaptive Network-Based Fuzzy Inference System and Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Li

    2017-01-01

    The goal of this study is to improve thermal comfort and indoor air quality with the adaptive network-based fuzzy inference system (ANFIS) model and an improved particle swarm optimization (PSO) algorithm. A method to optimize air conditioning parameters and installation distance is proposed. The methodology is demonstrated through a prototype case, which corresponds to a typical laboratory in colleges and universities. A laboratory model is established, and simulated flow field information is obtained with CFD software. Subsequently, the ANFIS model is employed instead of the CFD model to predict indoor flow parameters, and the CFD database is utilized to train ANN input-output “metamodels” for the subsequent optimization. With the improved PSO algorithm and the stratified sequence method, the objective functions are optimized. The functions comprise PMV, PPD, and mean age of air. The optimal installation distance is determined with the hemisphere model. Results show that most of the staff obtain a satisfactory degree of thermal comfort and that the proposed method can significantly reduce the cost of building an experimental device. The proposed methodology can be used to determine appropriate air supply parameters and air conditioner installation position for a pleasant and healthy indoor environment.

  18. The primary ion source for construction and optimization of operation parameters

    International Nuclear Information System (INIS)

    Synowiecki, A.; Gazda, E.

    1986-01-01

    The construction of a primary ion source for SIMS is presented. The influence of the individual operating parameters on the properties of the ion source has been investigated. Optimization of these parameters has allowed the usefulness of the ion source for SIMS studies to be assessed. 14 refs., 8 figs., 2 tabs. (author)

  19. Optimization of virtual source parameters in neutron scattering instrumentation

    International Nuclear Information System (INIS)

    Habicht, K; Skoulatos, M

    2012-01-01

    We report on phase-space optimizations for neutron scattering instruments employing horizontal focussing crystal optics. Defining a figure of merit for a generic virtual source configuration we identify a set of optimum instrumental parameters. In order to assess the quality of the instrumental configuration we combine an evolutionary optimization algorithm with the analytical Popovici description using multidimensional Gaussian distributions. The optimum phase-space element which needs to be delivered to the virtual source by preceding neutron optics may be obtained using the same algorithm which is of general interest in instrument design.

  20. A Transistor Sizing Tool for Optimization of Analog CMOS Circuits: TSOp

    OpenAIRE

    Y.C.Wong; Syafeeza A. R; N. A. Hamid

    2015-01-01

    Optimization of a circuit by transistor sizing is often a slow, tedious and iterative manual process which relies on designer intuition. It is highly desirable to automate the transistor sizing process so that high-performance integrated circuits can be designed rapidly. Presented here is a simple but effective algorithm for automatically optimizing the circuit parameters by exploiting the relationships among the genetic algorithm's coefficient values derived from the analog circuit desig...

  1. Application of Factorial Design for Gas Parameter Optimization in CO2 Laser Welding

    DEFF Research Database (Denmark)

    Gong, Hui; Dragsted, Birgitte; Olsen, Flemming Ove

    1997-01-01

    The effect of different gas process parameters involved in CO2 laser welding has been studied by applying two sets of three-level complete factorial designs. In this work 5 gas parameters, gas type, gas flow rate, gas blowing angle, gas nozzle diameter, and gas blowing point-offset, are optimized. The bead-on-plate welding specimens are evaluated by a number of quality characteristics, such as the penetration depth and the seam width. The significance of the gas parameters and their interactions is assessed from the data by the Analysis of Variance (ANOVA). This statistical methodology is proven to be a very useful tool for parameter optimization in the laser welding process. Keywords: CO2 laser welding, gas parameters, factorial design, Analysis of Variance.

  2. Optimization of PID Parameters Utilizing Variable Weight Grey-Taguchi Method and Particle Swarm Optimization

    Science.gov (United States)

    Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd

    2018-03-01

    A controller that uses PID parameters requires a good tuning method to improve control system performance. PID tuning methods fall into two groups, namely classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods. Previously, researchers had integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on the two PSO optimizing parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods, implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved the PSO-PID parameters by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method as a tuning method in the hydraulic positioning system.
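
    A bare-bones sketch of the PSO-PID idea: a first-order plant stands in for the hydraulic positioning system, the cost is the ITAE of the step response, and `vmax` plays the role of the particle velocity limit that the paper tunes with the Grey-Taguchi DOE. All constants are arbitrary illustrations, not the tuned values.

```python
import numpy as np

def itae_cost(gains, dt=0.01, T=5.0):
    """ITAE of a unit-step response of a PID loop around the toy plant
    dy/dt = -y + u, integrated with forward Euler."""
    kp, ki, kd = gains
    y = e_int = e_prev = 0.0
    cost = 0.0
    for k in range(int(T / dt)):
        e = 1.0 - y
        if abs(e) > 1e6:                  # diverged: return a large penalty
            return 1e12
        e_int += e * dt
        u = kp * e + ki * e_int + kd * (e - e_prev) / dt
        e_prev = e
        y += dt * (-y + u)                # plant state update
        cost += (k * dt) * abs(e) * dt    # time-weighted absolute error
    return cost

def pso(cost, n=20, iters=60, lo=0.0, hi=20.0, w=0.7, c1=1.5, c2=1.5,
        vmax=2.0, seed=0):
    """Plain PSO over (Kp, Ki, Kd); vmax is the particle velocity limit."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, 3))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pcost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, 3))
        v = np.clip(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x), -vmax, vmax)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[np.argmin(pcost)].copy()
    return g

# kp, ki, kd = pso(itae_cost)
```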

  3. Sensitivity of the optimal parameter settings for a LTE packet scheduler

    NARCIS (Netherlands)

    Fernandez-Diaz, I.; Litjens, R.; van den Berg, C.A.; Dimitrova, D.C.; Spaey, K.

    Advanced packet scheduling schemes in 3G/3G+ mobile networks provide one or more parameters to optimise the trade-off between QoS and resource efficiency. In this paper we study the sensitivity of the optimal parameter setting for packet scheduling in LTE radio networks with respect to various

  4. Automated Test-Form Generation

    Science.gov (United States)

    van der Linden, Wim J.; Diao, Qi

    2011-01-01

    In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…

  5. Improved object optimal synthetic description, modeling, learning, and discrimination by GEOGINE computational kernel

    Science.gov (United States)

    Fiorini, Rodolfo A.; Dacquino, Gianfranco

    2005-03-01

    GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for n-Dimensional shape/texture optimal synthetic representation, description and learning, was presented at previous conferences. Improved computational algorithms based on the computational invariant theory of finite groups in Euclidean space and a demo application are presented here. Progressive automatic model generation is discussed. GEOGINE can be used as an efficient computational kernel for fast, reliable application development and delivery, mainly in advanced biomedical engineering, biometrics, intelligent computing, target recognition, content-based image retrieval, and data mining. An ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of world objects. According to this approach, "n-D Tensor Calculus" can be considered a "Formal Language" to reliably compute optimized "n-Dimensional Tensor Invariants" as specific object "invariant parameter and attribute words" for automated n-Dimensional shape/texture optimal synthetic object description by incremental model generation. The class of those "invariant parameter and attribute words" can be thought of as a specific "Formal Vocabulary" learned from a "Generalized Formal Dictionary" of the "Computational Tensor Invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour shape invariant characteristics. In fact, any highly sophisticated application needing effective, robust capture and parameterization of object geometric/colour invariant attributes for reliable automated object learning and discrimination can benefit greatly from the performance of the GEOGINE progressive automated model generation computational kernel. Main operational advantages over previous

  6. Iterative choice of the optimal regularization parameter in TV image deconvolution

    International Nuclear Information System (INIS)

    Sixou, B; Toma, A; Peyrin, F; Denis, L

    2013-01-01

    We present an iterative method for choosing the optimal regularization parameter for the linear inverse problem of Total Variation image deconvolution. This approach is based on the Morozov discrepancy principle and on an exponential model function for the data term. The Total Variation image deconvolution is performed with the Alternating Direction Method of Multipliers (ADMM). With a smoothed l2 norm, the differentiability of the value of the Lagrangian at the saddle point can be shown and an approximate model function obtained. The choice of the optimal parameter can be refined with a Newton method. The efficiency of the method is demonstrated on a blurred and noisy bone CT cross section
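
    A simplified sketch of the parameter choice: Tikhonov regularization replaces the paper's TV/ADMM solver so the example stays self-contained, and log-space bisection replaces the model-function Newton refinement. What carries over is the Morozov criterion itself, i.e. picking the parameter whose residual matches the noise level.

```python
import numpy as np

def morozov_lambda(A, b, noise_level, lo=1e-8, hi=1e2, tol=1e-3):
    """Choose the regularization parameter by the discrepancy principle.
    The Tikhonov residual norm grows monotonically with lambda, so
    bisection in log-space converges to ||A x_lam - b|| = noise_level."""
    def residual(lam):
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
        return np.linalg.norm(A @ x - b)

    while hi / lo > 1.0 + tol:
        mid = np.sqrt(lo * hi)
        if residual(mid) < noise_level:
            lo = mid                       # residual too small: regularize more
        else:
            hi = mid
    return np.sqrt(lo * hi)
```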

  7. Automated parameter estimation for biological models using Bayesian statistical model checking.

    Science.gov (United States)

    Hussain, Faraz; Langmead, Christopher J; Mi, Qi; Dutta-Moscato, Joyeeta; Vodovotz, Yoram; Jha, Sumit K

    2015-01-01

    Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. We have developed a new algorithmic technique for discovering parameters in complex stochastic models of biological systems given behavioral specifications written in a formal mathematical logic. Our algorithm uses Bayesian model checking, sequential hypothesis testing, and stochastic optimization to automatically synthesize parameters of probabilistic biological models.
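
    The statistical model checking ingredient can be illustrated with Wald's sequential probability ratio test, which decides from as few simulated trajectories as possible whether the model satisfies a property with at least a target probability; `simulate` and `spec` are placeholders for the agent-based model and the formal-logic specification. The paper's synthesis loop then wraps a stochastic optimizer around such a test, searching parameter space for points the test accepts.

```python
import math

def sprt_satisfies(simulate, spec, theta=0.9, delta=0.05, alpha=0.01, beta=0.01):
    """Wald's SPRT: accept H0 'P(spec holds) >= theta + delta' or reject in
    favor of 'P <= theta - delta', with error probabilities alpha and beta."""
    p0, p1 = theta + delta, theta - delta
    accept = math.log(beta / (1.0 - alpha))    # lower threshold: accept H0
    reject = math.log((1.0 - beta) / alpha)    # upper threshold: reject H0
    llr = 0.0
    while accept < llr < reject:
        ok = spec(simulate())                  # one Bernoulli sample per run
        llr += math.log((p1 if ok else 1.0 - p1) / (p0 if ok else 1.0 - p0))
    return llr <= accept
```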

  8. Automation of data processing and calculation of retention parameters and thermodynamic data for gas chromatography

    Science.gov (United States)

    Makarycheva, A. I.; Faerman, V. A.

    2017-02-01

    An analysis of automation patterns is performed, and a programming solution for automating the processing of chromatographic data and their subsequent storage is developed with the help of a software package comprising Mathcad and MS Excel spreadsheets. The proposed approach allows the data processing algorithm to be modified and does not require the participation of programming experts. The approach provides measurement of retention times and retention volumes, specific retention volumes, differential molar free energies of adsorption, partial molar solution enthalpies, and isosteric heats of adsorption. The developed solution is intended for use in a small research group and was tested on a series of new gas chromatography sorbents. Retention parameters and thermodynamic sorption quantities were calculated for more than 20 analytes. The resulting data are presented in a form suitable for comparative analysis and make it possible to identify sorbents with the most favourable properties for specific analytical problems.

  9. Optimizing parameters of a technical system using quality function deployment method

    Science.gov (United States)

    Baczkowicz, M.; Gwiazda, A.

    2015-11-01

    The article shows the practical use of Quality Function Deployment (QFD) on the example of a mechanized mining support. First, it gives a short description of the method and shows what the design process looks like from the designer's point of view. The proposed method allows construction parameters to be optimized and compared, as well as adapted to customer requirements. QFD helps to determine the full set of crucial construction parameters, and then their importance and the difficulty of their execution. Second, it presents the chosen technical system and its construction, with figures of the existing model and the future optimized model. The construction parameters were selected from the designer's point of view. The method helps to specify a complete set of construction parameters from the point of view of the designed technical system and of customer requirements. The QFD matrix can be adjusted depending on design needs, and not every part of it has to be considered; designers can choose which parts are the most important. This makes QFD a very flexible tool. Most important is defining the relationships between parameters, and that part cannot be eliminated from the analysis.

  10. An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models

    Directory of Open Access Journals (Sweden)

    Daniel Santana-Cedrés

    2016-12-01

    We present a method for the automatic estimation of two-parameter radial distortion models, considering polynomial as well as division models. The method first detects the longest distorted lines within the image by applying the Hough transform enriched with a radial distortion parameter. From these lines, the first distortion parameter is estimated, then we initialize the second distortion parameter to zero and the two-parameter model is embedded into an iterative nonlinear optimization process to improve the estimation. This optimization aims at reducing the distance from the edge points to the lines, adjusting two distortion parameters as well as the coordinates of the center of distortion. Furthermore, this allows detecting more points belonging to the distorted lines, so that the Hough transform is iteratively repeated to extract a better set of lines until no improvement is achieved. We present some experiments on real images with significant distortion to show the ability of the proposed approach to automatically correct this type of distortion as well as a comparison between the polynomial and division models.
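
    A sketch of the nonlinear refinement step under stated assumptions: a two-parameter polynomial radial model, groups of edge points per detected line (`lines`) coming from the Hough stage, and a least-squares adjustment of the two distortion coefficients plus the distortion center so that every group becomes collinear after undistortion.

```python
import numpy as np
from scipy.optimize import least_squares

def undistort(pts, k1, k2, c):
    """Polynomial radial model: p_u = c + (p_d - c) * (1 + k1*r^2 + k2*r^4)."""
    d = pts - c
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return c + d * (1.0 + k1 * r2 + k2 * r2 ** 2)

def line_residuals(pts):
    """Orthogonal distances of points to their own total-least-squares line."""
    q = pts - pts.mean(axis=0)
    normal = np.linalg.svd(q)[2][-1]       # direction of least variance
    return q @ normal

def refine(lines, k1_init, center):
    """Jointly refine (k1, k2, cx, cy), cf. the iterative step that follows
    the Hough-based initialization."""
    def cost(p):
        k1, k2, cx, cy = p
        c = np.array([cx, cy])
        return np.concatenate([line_residuals(undistort(pts, k1, k2, c))
                               for pts in lines])
    return least_squares(cost, [k1_init, 0.0, center[0], center[1]]).x
```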

  11. Model Optimization Identification Method Based on Closed-loop Operation Data and Process Characteristics Parameters

    Directory of Open Access Journals (Sweden)

    Zhiqiang GENG

    2014-01-01

    In a closed-loop control system the output noise is strongly correlated with the input, which makes closed-loop model identification difficult, and sometimes impossible, in practice. The forward-channel model is chosen to isolate the disturbance path from the output noise to the input, and is identified by optimizing the dynamic characteristics of the process based on closed-loop operation data. The characteristic parameters of the process, such as dead time and time constant, are calculated and estimated from the PI/PID controller parameters and the closed-loop process input/output data. Those characteristic parameters are then adopted to define the search space of the optimization-based identification algorithm. A PSO-SQP optimization algorithm is applied to integrate the global search ability of PSO with the local search ability of SQP to identify the model parameters of the forward channel. The validity of the proposed method has been verified by simulation, and its practicality is checked by PI/PID controller parameter tuning based on the identified forward-channel model.

  12. Application-Oriented Optimal Shift Schedule Extraction for a Dual-Motor Electric Bus with Automated Manual Transmission

    Directory of Open Access Journals (Sweden)

    Mingjie Zhao

    2018-02-01

    Conventional battery electric buses (BEBs) have limited potential to optimize energy consumption and reach better dynamic performance. A practical dual-motor propulsion system equipped with a 4-speed Automated Manual Transmission (AMT) is proposed, which can eliminate the traction interruption of a conventional AMT. A discrete model of the dual-motor-AMT electric bus (DMAEB) is built and used to optimize the gear shift schedule. A dynamic programming (DP) algorithm is applied to find the optimal results, where the efficiency and shift time of each gear are considered to handle the application problem of global optimization. A rational penalty factor and a proper shift time delay based on bench test results are set, reducing the shift frequency by 82.5% in the Chinese-World Transient Vehicle Cycle (C-WTVC). Two perspectives on applicable shift rule extraction, i.e., a classification method based on optimal operating points and a clustering method based on optimal shifting points, are explored and compared. Eventually, hardware-in-the-loop (HIL) simulation results demonstrate that the proposed structure and extracted shift schedule achieve a significant improvement, reducing energy loss by 20.13% compared to traditional empirical strategies.
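
    The DP stage can be sketched as follows, assuming a sampled drive cycle and an `energy` loss-map lookup for the dual-motor drivetrain (both placeholders); the explicit penalty on gear changes plays the role of the paper's penalty factor for limiting shift frequency.

```python
import numpy as np

def dp_shift_schedule(cycle, energy, shift_penalty=0.1, n_gears=4):
    """Backward dynamic programming over a drive cycle.
    cycle  : list of (speed, torque_demand) samples
    energy : energy(gear, speed, torque) -> electrical energy for one step"""
    N = len(cycle)
    J = np.zeros((N + 1, n_gears))                 # cost-to-go
    policy = np.zeros((N, n_gears), dtype=int)
    for k in range(N - 1, -1, -1):
        v, T = cycle[k]
        for g in range(n_gears):
            costs = [energy(ng, v, T) + shift_penalty * (ng != g) + J[k + 1, ng]
                     for ng in range(n_gears)]
            policy[k, g] = int(np.argmin(costs))
            J[k, g] = min(costs)
    gears, g = [], 0                               # roll the policy forward
    for k in range(N):
        g = policy[k, g]
        gears.append(g)
    return gears
```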

  13. Optimization of geometric parameters of heat exchange pipes pin finning

    Science.gov (United States)

    Akulov, K. A.; Golik, V. V.; Voronin, K. S.; Zakirzakov, A. G.

    2018-05-01

    The work is devoted to optimization of the geometric parameters of pin finning on heat-exchange pipes. Pin fins were considered from the point of view of the mechanics of deformable solids, as cantilever beams under a uniformly distributed load. It was determined for which geometric parameters of the pin (diameter and length) the stresses induced in it by the washing fluid do not exceed the yield strength of the material (aluminium). Optimal values of the geometric parameters of the pins were obtained for different velocities of the washing medium. Water and air were chosen as flow media, and round and square pin cross-sections were considered. Pin finning turned out to be more than 3 times more compact than circumferential finning, so its use makes it possible to increase the number of fins per meter of heat-exchange pipe, which is the main method for increasing the heat transfer of a convective surface and gives pin fins an indisputable advantage.
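
    For a cylindrical pin treated as a cantilever under a uniformly distributed fluid load, the feasibility check described above reduces to elementary beam formulas. The yield strength below is an assumed illustrative value, not the paper's material datum.

```python
import math

def max_pin_stress(q, L, d):
    """Root bending stress of a cylindrical cantilever pin under a uniformly
    distributed load q (N/m): M = q*L^2/2, I = pi*d^4/64, sigma = M*(d/2)/I,
    which simplifies to 16*q*L^2 / (pi*d^3)."""
    M = q * L ** 2 / 2.0
    I = math.pi * d ** 4 / 64.0
    return M * (d / 2.0) / I

# Example feasibility check with an assumed aluminium yield strength:
sigma_yield = 95e6                        # Pa, illustrative value
ok = max_pin_stress(q=50.0, L=0.02, d=0.003) < sigma_yield
```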

  14. Fault detection of feed water treatment process using PCA-WD with parameter optimization.

    Science.gov (United States)

    Zhang, Shirong; Tang, Qian; Lin, Yu; Tang, Yuling

    2017-05-01

    The feed water treatment process (FWTP) is an essential part of utility boilers, and fault detection is expected to improve its reliability. Classical principal component analysis (PCA) had been applied to FWTPs in our previous work; however, the noise in the T² and SPE statistics results in false detections and missed detections. In this paper, wavelet denoising (WD) is combined with PCA to form a new algorithm (PCA-WD), where WD is intentionally employed to deal with the noise. The parameter selection of PCA-WD is further formulated as an optimization problem, and PSO is employed to solve it. A FWTP sustaining two 1000 MW generation units in a coal-fired power plant is taken as a case study, and its operation data is collected for the verification study. The results show that the optimized WD is effective in restraining the noise in the T² and SPE statistics, so as to improve the performance of the PCA-WD algorithm. Parameter optimization enables PCA-WD to obtain its optimal parameters automatically rather than from individual experience. The optimized PCA-WD is further compared with classical PCA and sliding window PCA (SWPCA) in four cases: bias fault, drift fault, broken-line fault and normal condition. The results finally confirm the advantages of the optimized PCA-WD over classical PCA and SWPCA. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
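
    The monitoring statistics at the heart of the method are standard and can be sketched directly; the wavelet-denoising step and the PSO parameter search are omitted, so this only shows how the raw T² and SPE series that PCA-WD then filters are produced.

```python
import numpy as np

def pca_monitor(X_train, X_test, n_pc=3):
    """Hotelling's T^2 and SPE (Q) statistics from a PCA model of normal
    operating data; WD would subsequently denoise the returned series."""
    mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
    Z = (X_train - mu) / sd
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_pc].T                            # retained loading vectors
    var = (S[:n_pc] ** 2) / (len(Z) - 1)       # variances of retained scores
    Zt = (X_test - mu) / sd
    T = Zt @ P
    t2 = np.sum(T ** 2 / var, axis=1)          # distance inside the PC subspace
    spe = np.sum((Zt - T @ P.T) ** 2, axis=1)  # residual outside the subspace
    return t2, spe
```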

  15. Optimization of design parameters for bulk micromachined silicon membranes for piezoresistive pressure sensing application

    International Nuclear Information System (INIS)

    Belwanshi, Vinod; Topkar, Anita

    2016-01-01

    A finite element analysis study has been carried out to optimize the design parameters of bulk micro-machined silicon membranes for piezoresistive pressure sensing applications. The design is targeted at the measurement of pressures up to 200 bar for nuclear reactor applications. The mechanical behavior of bulk micro-machined silicon membranes, in terms of deflection and stress generation, has been simulated. Based on the simulation results, the membrane design parameters, namely length, width and thickness, have been optimized. Following the optimization of the membrane geometrical parameters, the dimensions and location of the high-stress-concentration region for the implantation of piezoresistors have been obtained for piezoresistive pressure sensing.

  16. Optimization of design parameters for bulk micromachined silicon membranes for piezoresistive pressure sensing application

    Science.gov (United States)

    Belwanshi, Vinod; Topkar, Anita

    2016-05-01

    A finite element analysis study has been carried out to optimize the design parameters of bulk micro-machined silicon membranes for piezoresistive pressure sensing applications. The design is targeted at the measurement of pressures up to 200 bar for nuclear reactor applications. The mechanical behavior of bulk micro-machined silicon membranes, in terms of deflection and stress generation, has been simulated. Based on the simulation results, the membrane design parameters, namely length, width and thickness, have been optimized. Following the optimization of the membrane geometrical parameters, the dimensions and location of the high-stress-concentration region for the implantation of piezoresistors have been obtained for piezoresistive pressure sensing.

  17. Optimal Design of Measurement Programs for the Parameter Identification of Dynamic Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

    The design of a measurement program devoted to parameter identification of structural dynamic systems is considered. The design problem is formulated as an optimization problem that minimizes the total expected cost of the measurement program. All the calculations are based on a priori knowledge and engineering judgement. One contribution of the approach is that the optimal number of sensors can be estimated. This is shown in a numerical example where the proposed approach is demonstrated. The example is concerned with the design of a measurement program for estimating the modal damping parameters...

  18. Coastal aquifer management under parameter uncertainty: Ensemble surrogate modeling based simulation-optimization

    Science.gov (United States)

    Janardhanan, S.; Datta, B.

    2011-12-01

    Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial-intelligence-based models are most often used for this purpose; they are trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain, and under such circumstances the applicability of such approximation surrogates becomes limited. In our study we develop a surrogate-model-based coupled simulation-optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple-realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem with two conflicting objectives. The hydraulic conductivity and the aquifer recharge are treated as uncertain values. The three-dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters, generating input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions of the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives, viz., maximizing total pumping from beneficial wells and minimizing the total pumping from barrier wells for hydraulic control of

  19. Multiobjective Optimization of Turning Cutting Parameters for J-Steel Material

    Directory of Open Access Journals (Sweden)

    Adel T. Abbas

    2016-01-01

    This paper presents a multiobjective optimization study of cutting parameters in the turning operation of a heat-treated alloy steel material (J-Steel) with Vickers hardness in the range of HV 365–395, using uncoated, unlubricated tungsten-carbide tools. The primary aim is to identify settings of the cutting parameters (cutting speed, feed rate, and depth of cut) that lead to reasonable compromises between good surface quality and a high material removal rate. The range of cutting parameters was explored thoroughly via a five-level full-factorial experimental matrix of samples, and the Pareto trade-off frontier was identified. The trade-off among the objectives was observed to have a “knee” shape, in which certain settings of the cutting parameters achieve both good surface quality and a high material removal rate within certain limits. However, improving one of the objectives beyond these limits can only happen at the expense of a large compromise in the other objective. An alternative approach for identifying the trade-off frontier was also tested via a multiobjective implementation of the Efficient Global Optimization (m-EGO) algorithm. The m-EGO algorithm was successful in identifying two points within the good range of the trade-off frontier with 36% fewer experimental samples.
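
    One common way to locate such a knee, shown below purely as an illustration (the paper identifies the frontier experimentally and with m-EGO, not with this heuristic), is to take the Pareto point farthest from the straight line joining the two extreme solutions.

```python
import numpy as np

def knee_point(front):
    """front: array of Pareto points, e.g. rows of [surface_roughness,
    -material_removal_rate], both to be minimized."""
    f = np.asarray(front, dtype=float)
    f = f[np.argsort(f[:, 0])]               # order along the first objective
    a, b = f[0], f[-1]
    ab = (b - a) / np.linalg.norm(b - a)     # unit vector between the extremes
    rel = f - a
    dist = np.abs(rel[:, 0] * ab[1] - rel[:, 1] * ab[0])  # perpendicular distance
    return f[np.argmax(dist)]
```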

  20. Accuracy Analysis and Parameters Optimization in Urban Flood Simulation by PEST Model

    Science.gov (United States)

    Keum, H.; Han, K.; Kim, H.; Ha, C.

    2017-12-01

    The risk of urban flooding has been increasing due to heavy rainfall, flash flooding and rapid urbanization. Rainwater pumping stations and underground reservoirs are used to actively take measures against flooding; however, flood damage in lowlands continues to occur. Inundation in urban areas has resulted from the overflow of sewers. Therefore, accurate two-dimensional flood analysis requires implementing the intricately entangled network system within a city close to the actual physical situation, together with accurate terrain, because of the effects of buildings and roads. The purpose of this study is to propose a procedure for constructing optimal scenarios for watershed partitioning and parameterization in urban runoff and pipe network analysis, and to increase the accuracy of flooded-area prediction through a coupled model. The optimal scenario procedure was verified by applying it to an actual drainage basin in Seoul. In this study, optimization was performed using four parameters: Manning's roughness coefficient for conduits, watershed width, Manning's roughness coefficient for impervious areas, and Manning's roughness coefficient for pervious areas. The calibration ranges of the parameters were determined using the SWMM manual and the ranges used in previous studies, and the parameters were estimated using the automatic calibration tool PEST. The correlation coefficients were high for the scenarios using PEST. The RPE and RMSE also showed high accuracy for the scenarios using PEST. In the case of RPE, the error was in the range of 13.9-28.9% in the scenarios without parameter estimation, but in the scenarios using PEST, the error range was reduced to 6.8-25.7%. Based on the results of this study, it can be concluded that more accurate flood analysis is possible when the optimal scenario is selected by determining the appropriate reference conduit for future urban flooding analysis and if the results are applied to various

  1. Statistical optimization of process parameters for the production of ...

    African Journals Online (AJOL)

    In this study, optimization of process parameters such as moisture content, incubation temperature and initial pH (fixed) for the improvement of citric acid production from oil palm empty fruit bunches through solid state bioconversion was carried out using traditional one-factor-at-a-time (OFAT) method and response surface ...

  2. Optimization of the blade trailing edge geometric parameters for a small scale ORC turbine

    Science.gov (United States)

    Zhang, L.; Zhuge, W. L.; Peng, J.; Liu, S. J.; Zhang, Y. J.

    2013-12-01

    In general, the method proposed by Whitfield and Baines is adopted for turbine preliminary design. In this design procedure for the turbine blade trailing edge geometry, two assumptions (ideal gas and zero discharge swirl) and two experience values (WR and γ) are used to obtain the three blade trailing edge geometric parameters: the relative exit flow angle β6, the exit tip radius R6t and the hub radius R6h, with the aim of maximizing the rotor total-to-static isentropic efficiency. This method is based on experience and test results obtained with air as the working fluid, so it neither provides a mathematically optimal solution to guide the optimization of the geometric parameters nor considers the real-gas effects of the organic working fluid, which must be taken into account in the ORC turbine design procedure. In this paper, a new preliminary design and optimization method is established with the purpose of reducing the exit kinetic energy loss to improve the turbine efficiency ηts, and the blade trailing edge geometric parameters of a small-scale ORC turbine with working fluid R123 are optimized based on this method. The mathematically optimal solution minimizing the exit kinetic energy is deduced, and it can be used to design and optimize the exit shroud/hub radii and the exit blade angle. The influence of the blade trailing edge geometric parameters on the turbine efficiency ηts is then analysed, and optimal working ranges of these parameters are recommended for working fluid R123. This method is used to reduce the exit kinetic energy loss of an existing ORC turbine from 11.7% to 7%, which indicates the effectiveness of the method. However, the internal passage loss increases from 7.9% to 9.4%, so the only way to account for the influence of the geometric parameters on the internal passage loss is to give empirical ranges for these parameters, such as the recommended ranges that the value of γ is at 0.3 to 0.4, and the value

  3. Optimization of the blade trailing edge geometric parameters for a small scale ORC turbine

    International Nuclear Information System (INIS)

    Zhang, L; Zhuge, W L; Liu, S J; Zhang, Y J; Peng, J

    2013-01-01

    In general, the method proposed by Whitfield and Baines is adopted for turbine preliminary design. In this design procedure for the turbine blade trailing edge geometry, two assumptions (ideal gas and zero discharge swirl) and two experience values (WR and γ) are used to obtain the three blade trailing edge geometric parameters: the relative exit flow angle β6, the exit tip radius R6t and the hub radius R6h, with the aim of maximizing the rotor total-to-static isentropic efficiency. This method is based on experience and test results obtained with air as the working fluid, so it neither provides a mathematically optimal solution to guide the optimization of the geometric parameters nor considers the real-gas effects of the organic working fluid, which must be taken into account in the ORC turbine design procedure. In this paper, a new preliminary design and optimization method is established with the purpose of reducing the exit kinetic energy loss to improve the turbine efficiency ηts, and the blade trailing edge geometric parameters of a small-scale ORC turbine with working fluid R123 are optimized based on this method. The mathematically optimal solution minimizing the exit kinetic energy is deduced, and it can be used to design and optimize the exit shroud/hub radii and the exit blade angle. The influence of the blade trailing edge geometric parameters on the turbine efficiency ηts is then analysed, and optimal working ranges of these parameters are recommended for working fluid R123. This method is used to reduce the exit kinetic energy loss of an existing ORC turbine from 11.7% to 7%, which indicates the effectiveness of the method. However, the internal passage loss increases from 7.9% to 9.4%, so the only way to account for the influence of the geometric parameters on the internal passage loss is to give empirical ranges for these parameters, such as the recommended ranges that the value of γ is at 0.3 to 0.4, and the

  4. Density-based penalty parameter optimization on C-SVM.

    Science.gov (United States)

    Liu, Yun; Lian, Jie; Bartolacci, Michael R; Zeng, Qing-An

    2014-01-01

    The support vector machine (SVM) is one of the most widely used approaches for data classification and regression. The SVM maximizes the distance between the positive and negative support vectors, which neglects remote instances far from the SVM interface. In order to avoid a shift of the SVM interface caused by outliers, C-SVM was introduced to decrease the influence of a system's outliers. Traditional C-SVM holds a uniform penalty parameter C for both positive and negative instances; however, according to their different proportions and data distributions, positive and negative instances should be given different weights in the penalty parameter of the error terms. Therefore, in this paper, we propose density-based penalty parameter optimization for C-SVM. The experimental results indicate that the proposed algorithm has outstanding performance with respect to both precision and recall.
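
    A minimal sketch of class-dependent penalties in a C-SVM with scikit-learn. The inverse-proportion weights below are a simple stand-in for the paper's density-based weighting, which is not reproduced here; the mechanism of scaling C per class is the same.

```python
import numpy as np
from sklearn.svm import SVC

def weighted_csvm(X, y, C=1.0):
    """C-SVM whose per-class penalty is scaled inversely to the class's
    sample proportion, so the sparser class pays more for margin violations."""
    classes, counts = np.unique(y, return_counts=True)
    weights = {c: len(y) / (len(classes) * n) for c, n in zip(classes, counts)}
    return SVC(C=C, kernel="rbf", class_weight=weights).fit(X, y)
```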

  5. Intermolecular Force Field Parameters Optimization for Computer Simulations of CH4 in ZIF-8

    Directory of Open Access Journals (Sweden)

    Phannika Kanthima

    2016-01-01

    The differential evolution (DE) algorithm is applied to obtain optimized intermolecular interaction parameters between CH4 and 2-methylimidazolate ([C4N2H5]−) using quantum binding energies of CH4-[C4N2H5]− complexes. The initial parameters and their upper/lower bounds are obtained from the general AMBER force field. The DE-optimized and AMBER parameters are then used in molecular dynamics (MD) simulations of CH4 molecules in the framework of ZIF-8. The results show that the DE parameters represent the quantum interaction energies better than the AMBER parameters. The dynamical and structural behaviors obtained from MD simulations with the two sets of parameters also show notable differences.
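
    A self-contained sketch of the fitting loop, assuming a 12-6 Lennard-Jones form for the nonbonded interaction and hypothetical binding-energy data in place of the quantum scans; the bounds play the role of the upper/lower limits that the authors take from the general AMBER force field.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical binding energies (kcal/mol) at several separations (Angstrom);
# in the paper these come from quantum calculations on CH4-[C4N2H5]- complexes.
r = np.array([3.0, 3.4, 3.8, 4.2, 5.0])
E_qm = np.array([1.8, -0.9, -1.2, -0.8, -0.3])

def lj(params, r):
    """12-6 Lennard-Jones pair energy, the van der Waals form of AMBER-type
    force fields."""
    eps, sigma = params
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def objective(params):
    return float(np.sum((lj(params, r) - E_qm) ** 2))

result = differential_evolution(objective, bounds=[(0.01, 2.0), (2.5, 4.5)], seed=1)
eps_opt, sigma_opt = result.x
```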

  6. Efficiency Optimization Control of IPM Synchronous Motor Drives with Online Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Sadegh Vaez-Zadeh

    2011-04-01

    This paper describes an efficiency optimization control method for high-performance interior permanent magnet synchronous motor drives with online estimation of motor parameters. The control system is based on an input-output feedback linearization method which provides high-performance control and simultaneously ensures the minimization of motor losses. The controllable electrical loss can be minimized by optimal control of the armature current vector. It is shown that parameter variations, except near nominal conditions, have an undesirable effect on controller performance. Therefore, a parameter estimation method based on the second method of Lyapunov is presented, which guarantees the stability and convergence of the estimation. Extensive simulation results show the feasibility of the proposed controller and observer and their desirable performance.

  7. Grinding Parameter Optimization of Ultrasound-Aided Electrolytic in Process Dressing for Finishing Nanocomposite Ceramics

    Directory of Open Access Journals (Sweden)

    Fan Chen

    2016-01-01

    In order to achieve precise and efficient machining of nanocomposite ceramics, the ultrasound-aided electrolytic in-process dressing method was proposed. However, how to optimize the grinding parameters, that is, to maximize processing efficiency while assuring the best workpiece quality, is a problem that urgently needs to be solved. First, this research investigated the influence of the grinding parameters on the material removal rate and the critical ductile depth, and mathematical models based on existing models were developed to simulate the material removal process. Then, on the basis of partial-derivative parameter sensitivity analysis, sensitivity models of the material removal rate with respect to the grinding parameters were established and computed quantitatively in MATLAB, and the key grinding parameter for an optimal grinding process was found. Finally, the theoretical analyses were verified by experiments: the material removal rate increases with increasing grinding parameters, including grinding depth (ap), axial feed speed (fa), workpiece speed (Vw), and wheel speed (Vs); the parameter sensitivity of the material removal rate is in descending order ap > fa > Vw > Vs; and the most sensitive parameter (ap) was optimized, with the best machining results obtained at ap of about 3.73 μm.

  8. Warpage improvement on wheel caster by optimizing the process parameters using genetic algorithm (GA)

    Science.gov (United States)

    Safuan, N. S.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.

    2017-09-01

    In the injection moulding process, defects are always encountered and affect the final product's shape and functionality. This study concerns minimizing warpage and optimizing the process parameters of an injection-moulded part. Apart from eliminating product waste, this project also provides the best recommended parameter settings. This research studied five parameters. The optimization showed that warpage improved by 42.64%, from 0.6524 mm in the Autodesk Moldflow Insight (AMI) simulation result to 0.30879 mm with the Genetic Algorithm (GA).

  9. Optimization of the isotope separation in columns

    International Nuclear Information System (INIS)

    Kaminskij, V.A.; Vetsko, V.M.; Tevzadze, G.A.; Devdariani, O.A.; Sulaberidze, G.A.

    1982-01-01

    A general method for the multi-parameter optimization of cascade plants of packed columns is proposed. The net cost of the isotopic product is selected as the optimization effectiveness function. The net cost comprehensively characterizes the total capital costs of manufacturing the product, and it determines the choice of the most effective directions for capital investment and the rational limits for improving product quality. The method is based on the main concepts of cascade theory, such as the ideal flow profile and form efficiency, as well as on a mathematical model of the packed column specifying the relationships between its geometric and operating parameters. As a result, the cost function of the isotopic product can be linked to such parameters as the equilibrium stage height, the ultimate packing capacity, its element dimensions, and the column diameter. It is concluded that the suggested approach to the optimization of isotope separation processes is rather general. It permits solving a number of special problems, such as assessing the advisability of using heat-pump circuits and determining the rational level of automation. In addition, the proposed method can be used to optimize the process conditions with regard to temperature and pressure

  10. TH-AB-BRA-02: Automated Triplet Beam Orientation Optimization for MRI-Guided Co-60 Radiotherapy

    International Nuclear Information System (INIS)

    Nguyen, D; Thomas, D; Cao, M; O’Connor, D; Lamb, J; Sheng, K

    2016-01-01

    Purpose: MRI guided Co-60 provides daily and intrafractional MRI soft tissue imaging for improved target tracking and adaptive radiotherapy. To remedy the low output limitation, the system uses three Co-60 sources at 120° apart, but using all three sources in planning is considerably unintuitive. We automate the beam orientation optimization using column generation, and then solve a novel fluence map optimization (FMO) problem while regularizing the number of MLC segments. Methods: Three patients—1 prostate (PRT), 1 lung (LNG), and 1 head-and-neck boost plan (H&NBoost)—were evaluated. The beamlet dose for 180 equally spaced coplanar beams under 0.35 T magnetic field was calculated using Monte Carlo. The 60 triplets were selected utilizing the column generation algorithm. The FMO problem was formulated using an L2-norm minimization with anisotropic total variation (TV) regularization term, which allows for control over the number of MLC segments. Our Fluence Regularized and Optimized Selection of Triplets (FROST) plans were compared against the clinical treatment plans (CLN) produced by an experienced dosimetrist. Results: The mean PTV D95, D98, and D99 differ by −0.02%, +0.12%, and +0.44% of the prescription dose between planning methods, showing same PTV dose coverage. The mean PTV homogeneity (D95/D5) was at 0.9360 (FROST) and 0.9356 (CLN). R50 decreased by 0.07 with FROST. On average, FROST reduced Dmax and Dmean of OARs by 6.56% and 5.86% of the prescription dose. The manual CLN planning required iterative trial and error runs which is very time consuming, while FROST required minimal human intervention. Conclusions: MRI guided Co-60 therapy needs the output of all sources yet suffers from unintuitive and laborious manual beam selection processes. Automated triplet orientation optimization is shown essential to overcome the difficulty and improves the dosimetry. A novel FMO with regularization provides additional controls over the number of MLC segments

  11. TH-AB-BRA-02: Automated Triplet Beam Orientation Optimization for MRI-Guided Co-60 Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, D; Thomas, D; Cao, M; O’Connor, D; Lamb, J; Sheng, K [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA (United States)

    2016-06-15

    Purpose: MRI guided Co-60 provides daily and intrafractional MRI soft tissue imaging for improved target tracking and adaptive radiotherapy. To remedy the low output limitation, the system uses three Co-60 sources at 120° apart, but using all three sources in planning is considerably unintuitive. We automate the beam orientation optimization using column generation, and then solve a novel fluence map optimization (FMO) problem while regularizing the number of MLC segments. Methods: Three patients—1 prostate (PRT), 1 lung (LNG), and 1 head-and-neck boost plan (H&NBoost)—were evaluated. The beamlet dose for 180 equally spaced coplanar beams under 0.35 T magnetic field was calculated using Monte Carlo. The 60 triplets were selected utilizing the column generation algorithm. The FMO problem was formulated using an L2-norm minimization with anisotropic total variation (TV) regularization term, which allows for control over the number of MLC segments. Our Fluence Regularized and Optimized Selection of Triplets (FROST) plans were compared against the clinical treatment plans (CLN) produced by an experienced dosimetrist. Results: The mean PTV D95, D98, and D99 differ by −0.02%, +0.12%, and +0.44% of the prescription dose between planning methods, showing same PTV dose coverage. The mean PTV homogeneity (D95/D5) was at 0.9360 (FROST) and 0.9356 (CLN). R50 decreased by 0.07 with FROST. On average, FROST reduced Dmax and Dmean of OARs by 6.56% and 5.86% of the prescription dose. The manual CLN planning required iterative trial and error runs which is very time consuming, while FROST required minimal human intervention. Conclusions: MRI guided Co-60 therapy needs the output of all sources yet suffers from unintuitive and laborious manual beam selection processes. Automated triplet orientation optimization is shown essential to overcome the difficulty and improves the dosimetry. A novel FMO with regularization provides additional controls over the number of MLC segments

  12. Laser Welding Process Parameters Optimization Using Variable-Fidelity Metamodel and NSGA-II

    Directory of Open Access Journals (Sweden)

    Wang Chaochao

    2017-01-01

    Full Text Available An optimization methodology based on variable-fidelity (VF) metamodels and the nondominated sorting genetic algorithm II (NSGA-II) for laser bead-on-plate welding of stainless steel 316L is presented. The relationships between the input process parameters (laser power, welding speed and laser focal position) and the output responses (weld width and weld depth) are constructed by VF metamodels. In VF metamodels, information from models at two levels of fidelity is integrated: the low-fidelity (LF) model is a finite element simulation model used to capture the general trend of the metamodel, while the high-fidelity (HF) model, built from physical experiments, is used to ensure the accuracy of the metamodel. The accuracy of the VF metamodel is verified by actual experiments. To solve the optimization problem, NSGA-II is used to search for multi-objective Pareto optimal solutions. The results of verification experiments show that the obtained optimal parameters are effective and reliable.
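
    The additive-correction flavor of variable-fidelity metamodeling can be sketched as follows: a cheap low-fidelity model supplies the overall trend, and a low-order discrepancy model fitted at a few high-fidelity points corrects it. Both models and the sampling points below are invented placeholders, not the paper's finite element simulation or welding experiments.

```python
import numpy as np

def lf_model(x):
    """Cheap low-fidelity stand-in (e.g. a coarse finite element trend)."""
    return np.sin(2 * x) + 0.5 * x

def hf_model(x):
    """Expensive high-fidelity stand-in (e.g. physical experiments)."""
    return np.sin(2 * x) + 0.6 * x + 0.2

# Only a handful of HF points are affordable; fit the LF/HF discrepancy there.
x_hf = np.array([0.2, 1.1, 1.9, 2.8])
delta = hf_model(x_hf) - lf_model(x_hf)
coeffs = np.polyfit(x_hf, delta, deg=1)      # low-order additive correction

def vf_model(x):
    """Variable-fidelity prediction: LF trend plus fitted correction."""
    return lf_model(x) + np.polyval(coeffs, x)

x_test = np.linspace(0, 3, 7)
print("HF :", np.round(hf_model(x_test), 3))
print("VF :", np.round(vf_model(x_test), 3))
print("LF :", np.round(lf_model(x_test), 3))
```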

  13. Lighting Automation - Flying an Earthlike Habitat Project

    Science.gov (United States)

    Falker, Jay; Howard, Ricky; Culbert, Christopher; Clark, Toni Anne; Kolomenski, Andrei

    2017-01-01

    Our proposal will enable the development of automated spacecraft habitats for long-duration missions. The majority of spacecraft lighting systems employ lamps or zone-specific switches and dimmers; automation is not in the picture. If we are to build long-duration environments that provide earth-like habitats, minimize crew time, and optimize spacecraft power reserves, innovation in lighting automation is a must. To transform how spacecraft lighting environments are automated, we will provide performance data on a standard lighting communication protocol. We will investigate the utilization and application of an industry-accepted lighting control protocol, DMX512. We will demonstrate how lighting automation can conserve power, assist with lighting countermeasures, and utilize spatial body tracking. By using DMX512 we will show that the wheel does not need to be reinvented in terms of smart lighting, and that future spacecraft can use a standard lighting protocol to produce an effective, optimized and potentially earthlike habitat.

  14. Prediction Model of Battery State of Charge and Control Parameter Optimization for Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Bambang Wahono

    2015-07-01

    Full Text Available This paper presents the construction of a battery state of charge (SOC) prediction model and an optimization method for selecting the model's control parameters, with SOC as the battery output objective. The Research Centre for Electrical Power and Mechatronics, Indonesian Institute of Sciences, has road-tested its electric vehicle research prototype, monitoring voltage, current, temperature, time, vehicle velocity, motor speed, and SOC during operation. Using these experimental data, the prediction model of battery SOC was built. A stepwise method accounting for multicollinearity was able to efficiently develop a battery prediction model relating the multiple control parameters to characteristic values such as SOC. It was demonstrated that particle swarm optimization (PSO) successfully and efficiently calculated the optimal control parameters to optimize the evaluation item, i.e., SOC, based on the model.
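
    A minimal sketch of the PSO step described above, here estimating the coefficients of a linear SOC model from synthetic data; the data, the linear model form and the swarm settings are placeholder assumptions rather than the paper's vehicle measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "measurements": SOC modeled as a linear function of 3 signals.
X = rng.random((50, 3))
true_w = np.array([0.8, -0.3, 0.5])
soc = X @ true_w + 0.01 * rng.standard_normal(50)

def cost(w):                       # squared prediction error of candidate w
    return np.sum((X @ w - soc) ** 2)

n, dim, w_inertia, c1, c2 = 20, 3, 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_val = np.array([cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([cost(p) for p in pos])
    better = vals < pbest_val                  # update personal bests
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()   # update global best

print("estimated parameters:", np.round(gbest, 3))
```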

  15. Computer controlled automated assay for comprehensive studies of enzyme kinetic parameters.

    Directory of Open Access Journals (Sweden)

    Felix Bonowski

    Full Text Available Stability and biological activity of proteins are highly dependent on their physicochemical environment. The development of realistic models of biological systems necessitates quantitative information on the response to changes of external conditions like pH, salinity and concentrations of substrates and allosteric modulators. Changes in just a few variable parameters rapidly lead to large numbers of experimental conditions, which go beyond the experimental capacity of most research groups. We implemented a computer-aided experimenting framework ("robot lab assistant") that allows us to parameterize abstract, human-readable descriptions of micro-plate based experiments with variable parameters and execute them on a conventional 8-channel liquid handling robot fitted with a sensitive plate reader. A set of newly developed R-packages translates the instructions into machine commands, executes them, collects the data and processes it without user interaction. By combining script-driven experimental planning, execution and data analysis, our system can react to experimental outcomes autonomously, allowing outcome-based iterative experimental strategies. The framework was applied in a response-surface model based iterative optimization of buffer conditions and investigation of substrate, allosteric effector, pH and salt dependent activity profiles of pyruvate kinase (PYK). A diprotic model of enzyme kinetics was used to model the combined effects of changing pH and substrate concentrations. The 8 parameters of the model could be estimated from a single two-hour experiment using nonlinear least-squares regression. The model with the estimated parameters successfully predicted the pH and PEP dependence of initial reaction rates, while the PEP concentration dependent shift of optimal pH could only be reproduced with a set of manually tweaked parameters. Differences between model predictions and experimental observations at low pH suggest additional protonation

  16. Parameter optimization of differential evolution algorithm for automatic playlist generation problem

    Science.gov (United States)

    Alamag, Kaye Melina Natividad B.; Addawe, Joel M.

    2017-11-01

    With the digitalization of music, music collections have grown considerably, and there is a need to create playlists that filter a collection according to user preferences, giving rise to the Automatic Playlist Generation Problem (APGP). Previous attempts to solve this problem include the use of search and optimization algorithms. If a music database is very large, the algorithm used must be able to search the lists thoroughly while taking into account the quality of the playlist given a set of user constraints. In this paper we run an evolutionary meta-heuristic optimization algorithm, Differential Evolution (DE), with different combinations of parameter values and select the best-performing set when used to solve four standard test functions. The performance of the proposed algorithm is then compared with a standard Genetic Algorithm (GA) and a hybrid GA with Tabu Search. Numerical simulations are carried out to show the better results obtained with the Differential Evolution approach using the optimized parameter values.
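
    The selection procedure can be sketched as a basic DE/rand/1/bin run under several (F, CR) combinations on standard test functions, keeping the best-performing pair; the two test functions and the small parameter grid below are illustrative, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):      return np.sum(x ** 2)
def rosenbrock(x):  return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

def de(f, dim=5, pop=30, F=0.8, CR=0.9, iters=300):
    """Basic DE/rand/1/bin minimizer; returns the best objective value found."""
    X = rng.uniform(-5, 5, (pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = a + F * (b - c)                      # DE/rand/1 mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True               # guarantee one gene crosses
            trial = np.where(cross, mutant, X[i])
            if (ft := f(trial)) < fit[i]:                 # greedy selection
                X[i], fit[i] = trial, ft
    return fit.min()

# Small grid over (F, CR); the pair with the lowest summed error wins.
for F in (0.5, 0.8):
    for CR in (0.5, 0.9):
        score = de(sphere, F=F, CR=CR) + de(rosenbrock, F=F, CR=CR)
        print(f"F={F}, CR={CR}: total best error {score:.3e}")
```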

  17. A Particle Swarm Optimization Algorithm for Optimal Operating Parameters of VMI Systems in a Two-Echelon Supply Chain

    Science.gov (United States)

    Sue-Ann, Goh; Ponnambalam, S. G.

    This paper focuses on the operational issues of a Two-echelon Single-Vendor-Multiple-Buyers Supply chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. To determine the optimal sales quantity for each buyer in the TSVMBSC, a mathematical model is formulated. From the optimal sales quantity, the optimal sales price can be obtained, which in turn determines the optimal channel profit and the contract price between the vendor and buyers. All of these parameters depend on the revenue sharing between the vendor and buyers. A Particle Swarm Optimization (PSO) is proposed for this problem. Solutions obtained from PSO are compared with the best known results reported in the literature.

  18. Automated IMRT planning with regional optimization using planning scripts.

    Science.gov (United States)

    Xhaferllari, Ilma; Wong, Eugene; Bzdusek, Karl; Lock, Michael; Chen, Jeff

    2013-01-07

    Intensity-modulated radiation therapy (IMRT) has become a standard technique in radiation therapy for treating different types of cancers. Various class solutions have been developed for simple cases (e.g., localized prostate, whole breast) to generate IMRT plans efficiently. However, for more complex cases (e.g., head and neck, pelvic nodes), it can be time-consuming for a planner to generate optimized IMRT plans. To generate optimal plans in these more complex cases, which generally have multiple target volumes and organs at risk, it is often necessary to add IMRT optimization structures such as dose-limiting ring structures, adjust beam geometry, select inverse planning objectives and their associated weights, and introduce additional IMRT objectives to reduce cold and hot spots in the dose distribution. These parameters are generally adjusted manually in a repeated trial-and-error approach during the optimization process. To improve IMRT planning efficiency in these more complex cases, an iterative method that automates some of these adjustment processes in a planning script was designed, implemented, and validated. In particular, regional optimization is implemented iteratively to reduce hot and cold spots during the optimization process: hot and cold spots are defined and automatically segmented, new objectives and their relative weights are introduced into the inverse planning, and the process repeats until termination criteria are met. The method has been applied to three clinical sites (prostate with pelvic nodes, head and neck, and anal canal cancers) and has been shown to reduce IMRT planning time significantly for clinical applications while improving plan quality. The IMRT planning scripts have been used for more than 500 clinical cases.
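
    The regional-optimization loop can be caricatured on synthetic data: solve for beamlet weights, segment voxels that violate dose bounds, attach extra penalty weight to those regions, and repeat until no violations remain or an iteration cap is hit. The dose-influence matrix, thresholds and weighted least-squares solve are toy stand-ins, not the planning system's scripting interface.

```python
import numpy as np

rng = np.random.default_rng(4)

n_vox, n_beamlets = 300, 40
A = rng.random((n_vox, n_beamlets))          # toy dose-influence matrix
target = np.full(n_vox, 60.0)                # prescribed dose
lo, hi = 57.0, 63.0                          # cold/hot thresholds
extra_w = np.zeros(n_vox)                    # per-voxel penalty weights

def solve(weights):
    """Weighted least squares for beamlet intensities (non-negativity by clipping)."""
    W = np.sqrt(1.0 + weights)[:, None]
    x, *_ = np.linalg.lstsq(W * A, W.ravel() * target, rcond=None)
    return np.clip(x, 0, None)

for it in range(10):
    x = solve(extra_w)
    dose = A @ x
    hot, cold = dose > hi, dose < lo
    if not (hot.any() or cold.any()):
        break                                # termination criterion met
    extra_w[hot | cold] += 5.0               # re-weight violating regions
    print(f"iter {it}: {hot.sum()} hot, {cold.sum()} cold voxels")

print("final dose range:", dose.min().round(2), "-", dose.max().round(2))
```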

  19. Tools for a simulation supported commissioning of the automation of HVAC plants. Hardware-in-the-loop in building automation; Werkzeuge fuer eine simulationsgestuetzte Inbetriebnahme der Automation von RLT- Anlagen. Hardware-in-the-Loop in der Gebaeudeautomation

    Energy Technology Data Exchange (ETDEWEB)

    Richter, Andreas; Sokollik, Frank [Hochschule Merseburg (Germany). Fachbereich Informatik und Kommunikationssysteme

    2012-07-01

    Hardware-in-the-loop (HiL) is a method for testing and validating technical automation solutions against virtual processes in a simulation environment. Applied to the automation of indoor air handling systems, commissioning tests of the controller can be performed in advance against a simulated plant. These tests can be used, for example, to find logic errors during program development or to tune controller parameters. Parameter tuning can be performed independently of the season by modifying the ambient climatic conditions, and plant parameters can be tested under dynamic conditions. The control behaviour can be visualized by applying load conditions to dynamic HVAC components and optimized if necessary. Within BMBF-funded projects, a HiL solution was developed in a .NET environment. The coupling of simulation and control takes place via the bus systems CAN and BACnet. The elements of the air conditioner simulation are implemented in an object-oriented manner in the programming language C and are based on the solution of dynamic mass and energy balances. The features of HiL are implemented in a multi-client architecture, covering primarily simulation and communication. Further features are implemented: import of virtual systems from a CAE system, adjustment of simulation parameters using structured parameter sets, distributed simulation of complex systems in the network, a tool for controller dimensioning, and chart and visualization features.

  20. Optimization of machining parameters of hard porcelain on a CNC ...

    African Journals Online (AJOL)

    Optimization of machining parameters of hard porcelain on a CNC machine by Taguchi and RSM methods. ... The experiments were conducted by employing Taguchi's L27 orthogonal array to ...

  1. An objective method to optimize the MR sequence set for plaque classification in carotid vessel wall images using automated image segmentation.

    Directory of Open Access Journals (Sweden)

    Ronald van 't Klooster

    Full Text Available A typical MR imaging protocol to study the status of atherosclerosis in the carotid artery consists of the application of multiple MR sequences. Since scanner time is limited, a balance has to be reached between the duration of the applied MR protocol and the quantity and quality of the resulting images which are needed to assess the disease. In this study an objective method to optimize the MR sequence set for classification of soft plaque in vessel wall images of the carotid artery using automated image segmentation was developed. The automated method employs statistical pattern recognition techniques and was developed based on an extensive set of MR contrast weightings and corresponding manual segmentations of the vessel wall and soft plaque components, which were validated by histological sections. Evaluation of the results from nine contrast weightings showed the tradeoff between scan duration and automated image segmentation performance. For our dataset the best segmentation performance was achieved by selecting five contrast weightings. Similar performance was achieved with a set of three contrast weightings, which resulted in a reduction of scan time by more than 60%. The presented approach can help others to optimize MR imaging protocols by investigating the tradeoff between scan duration and automated image segmentation performance possibly leading to shorter scanning times and better image interpretation. This approach can potentially also be applied to other research fields focusing on different diseases and anatomical regions.

  2. Optimization of parameters for the inline-injection system at Brookhaven Accelerator Test Facility

    International Nuclear Information System (INIS)

    Parsa, Z.; Ko, S.K.

    1995-01-01

    We present some of our parameter optimization results, obtained with the code PARMELA, for the ATF inline-injection system. The new Solenoid-Gun-Solenoid-Drift-Linac scheme would improve the beam quality needed for FEL and other experiments at the ATF compared with the beam quality of the originally designed injection system. To optimize the gain in beam quality we have considered various parameters, including the accelerating field gradient on the photocathode, the solenoid field strengths, the separation between the gun and the entrance to the linac, as well as the type and size of the initial charge distributions. The effect of changes in these parameters on the beam emittance is also given.

  3. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    Science.gov (United States)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

    Support vector machine (SVM) is a popular classification method known to have strong generalization capabilities. SVM can address both classification and regression problems, with linear or nonlinear kernels. However, SVM has the weakness that its optimal parameter values are difficult to determine. SVM calculates the best linear separator in the input feature space according to the training data. To classify data which are not linearly separable, SVM uses the kernel trick to transform the data into a linearly separable representation in a higher-dimensional feature space. The kernel trick uses various kinds of kernel functions, such as linear, polynomial, radial basis function (RBF) and sigmoid. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, genetic algorithms are applied as a search algorithm for the optimal parameter values, thus increasing the best classification accuracy of the SVM. Data were taken from the UCI machine learning repository: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selected kernel parameters. The best accuracy was improved over the baselines of 85.12% (linear kernel), 81.76% (polynomial), 77.22% (RBF) and 78.70% (sigmoid). However, for bigger data sizes, this method is not practical because it takes a lot of time.
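
    A compact sketch of the GA-over-SVM idea: a small genetic algorithm searches log-scaled (C, gamma) for an RBF SVM, scored by cross-validated accuracy. A synthetic dataset stands in for the Australian Credit data, and the GA operators (truncation selection, uniform crossover, Gaussian mutation) are generic choices rather than the paper's.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def fitness(ind):
    C, gamma = 10.0 ** ind                       # genes live on a log10 scale
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    return cross_val_score(clf, X, y, cv=3).mean()

pop = rng.uniform([-2, -4], [3, 1], size=(12, 2))   # log10(C), log10(gamma)
for gen in range(10):
    fit = np.array([fitness(ind) for ind in pop])
    order = np.argsort(fit)[::-1]
    parents = pop[order[:6]]                     # truncation selection
    children = []
    for _ in range(6):
        p1, p2 = parents[rng.integers(6, size=2)]
        child = np.where(rng.random(2) < 0.5, p1, p2)   # uniform crossover
        child += rng.normal(0, 0.2, 2)                  # Gaussian mutation
        children.append(np.clip(child, [-2, -4], [3, 1]))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best C=%.3g, gamma=%.3g" % tuple(10.0 ** best))
```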

  4. Fractional Order Controller Designing with Firefly Algorithm and Parameter Optimization for Hydroturbine Governing System

    Directory of Open Access Journals (Sweden)

    Li Junyi

    2015-01-01

    Full Text Available A fractional order PID (FOPID) controller, attractive for control system design because it is insensitive to variations in system parameters, is proposed for a hydroturbine governing system in this paper. The simultaneous optimization of several controller parameters, namely Kp, Ki, Kd, λ, and μ, is carried out, for the first time, by a recently developed nature-inspired metaheuristic, the firefly algorithm (FA); the selection, movement, and attractiveness behaviour between fireflies, and the updating of brightness and decision range, are studied in detail to simulate the optimization process. The investigation clearly reveals the advantages of the FOPID controller over integer-order controllers in terms of reduced oscillations and settling time. The present work also explores the superiority of the FA-based optimization technique in finding the optimal parameters of the controller. Further, the convergence characteristics of the FA and the resulting FOPID controller are compared with an optimally tuned integer-order PID (IOPID) controller to justify their efficiency. Moreover, the analysis confirms the robustness of the FOPID controller under isolated load operation conditions.
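
    The core firefly moves named in the abstract (attractiveness decaying with distance, movement toward brighter fireflies, a shrinking random walk) in minimal form, minimizing a test function rather than simulating a hydroturbine governing system; beta0, gamma and alpha are illustrative settings.

```python
import numpy as np

rng = np.random.default_rng(6)

def cost(x):                       # stand-in objective (lower cost = brighter)
    return np.sum(x ** 2)

n, dim = 15, 5
beta0, gamma, alpha = 1.0, 1.0, 0.1
X = rng.uniform(-5, 5, (n, dim))

for _ in range(200):
    I = np.array([cost(x) for x in X])        # brightness ranking via cost
    for i in range(n):
        for j in range(n):
            if I[j] < I[i]:                   # firefly i moves toward brighter j
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)   # attractiveness vs. distance
                X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
    alpha *= 0.98                             # shrink the random walk over time

best = X[np.argmin([cost(x) for x in X])]
print("best solution:", np.round(best, 4))
```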

  5. Parameters-tuning of PID controller for automatic voltage regulators using the African buffalo optimization

    Science.gov (United States)

    Mohmad Kahar, Mohd Nizam; Noraziah, A.

    2017-01-01

    In this paper, an attempt is made to apply the African Buffalo Optimization (ABO) to tune the parameters of a PID controller for an effective Automatic Voltage Regulator (AVR). Existing metaheuristic tuning methods have proven quite successful, but there are observable areas that need improvement, especially in terms of the system's overshoot and steady-state errors. The ABO algorithm, in which each buffalo location in the herd is a candidate solution for the Proportional-Integral-Derivative parameters, was very helpful in addressing these two areas of concern. The encouraging results obtained from the simulation of PID controller parameter tuning using the ABO, when compared with the performance of Genetic Algorithm PID (GA-PID), Particle Swarm Optimization PID (PSO-PID), Ant Colony Optimization PID (ACO-PID), PID, Bacteria-Foraging Optimization PID (BFO-PID), etc., make ABO-PID a good addition to solving PID controller tuning problems using metaheuristics. PMID:28441390

  6. Optimization of WEDM process parameters using deep cryo-treated Inconel 718 as work material

    Directory of Open Access Journals (Sweden)

    Bijaya Bijeta Nayak

    2016-03-01

    Full Text Available The present work proposes an experimental investigation and optimization of various process parameters during taper cutting of deep cryo-treated Inconel 718 in the wire electrical discharge machining (WEDM) process. Taguchi's design of experiments is used to gather information regarding the process with a small number of experimental runs, considering six input parameters: part thickness, taper angle, pulse duration, discharge current, wire speed and wire tension. Since the traditional Taguchi method cannot optimize multiple performance characteristics, maximum deviation theory is applied to convert the multiple performance characteristics into an equivalent single performance characteristic. Due to the complexity and non-linearity involved in this process, a good functional relationship with reasonable accuracy between performance characteristics and process parameters is difficult to obtain. To address this issue, the present study proposes an artificial neural network (ANN) model to determine the relationship between input parameters and performance characteristics. Finally, the process model is optimized to obtain the best parametric combination by a new meta-heuristic approach known as the bat algorithm. The results show that the proposed method is an effective tool for the simultaneous optimization of performance characteristics during taper cutting in the WEDM process.
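
    A minimal bat-algorithm sketch standing in for the final optimization stage; the quadratic cost is a placeholder for the ANN process model, and loudness and pulse rate are held fixed for brevity although the full algorithm adapts them over iterations.

```python
import numpy as np

rng = np.random.default_rng(7)

def cost(x):                                  # placeholder for the ANN process model
    return np.sum((x - 0.3) ** 2)

n, dim = 20, 4
fmin_, fmax_ = 0.0, 2.0
loud, pulse = 0.9, 0.5                        # fixed here; adaptive in the full method
X = rng.uniform(0, 1, (n, dim))
V = np.zeros((n, dim))
fit = np.array([cost(x) for x in X])
best = X[fit.argmin()].copy()

for _ in range(300):
    for i in range(n):
        f = fmin_ + (fmax_ - fmin_) * rng.random()     # frequency tuning
        V[i] += (X[i] - best) * f
        cand = X[i] + V[i]
        if rng.random() > pulse:                       # local walk near the best bat
            cand = best + 0.01 * rng.standard_normal(dim)
        cand = np.clip(cand, 0, 1)
        if cost(cand) < fit[i] and rng.random() < loud:
            X[i], fit[i] = cand, cost(cand)            # accept improvement
        if fit[i] < cost(best):
            best = X[i].copy()

print("best parameters:", np.round(best, 3))
```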

  7. ASTROS: A multidisciplinary automated structural design tool

    Science.gov (United States)

    Neill, D. J.

    1989-01-01

    ASTROS (Automated Structural Optimization System) is a finite-element-based multidisciplinary structural optimization procedure developed under Air Force sponsorship to perform automated preliminary structural design. The design task is the determination of the structural sizes that provide an optimal structure while satisfying numerous constraints from many disciplines. In addition to its automated design features, ASTROS provides a general transient and frequency response capability, as well as a special feature to perform a transient analysis of a vehicle subjected to a nuclear blast. The motivation for the development of a single multidisciplinary design tool is that such a tool can provide improved structural designs in less time than is currently needed. The role of such a tool is even more apparent as modern materials come into widespread use. Balancing conflicting requirements for the structure's strength and stiffness while exploiting the benefits of material anisotropy is perhaps an impossible task without assistance from an automated design tool. Finally, the use of a single tool can bring the design task into better focus among design team members, thereby improving their insight into the overall task.

  8. WARACS: Wrappers to Automate the Reconstruction of Ancestral Character States

    Science.gov (United States)

    Gruenstaeudl, Michael

    2016-01-01

    Premise of the study: Reconstructions of ancestral character states are among the most widely used analyses for evaluating the morphological, cytological, or ecological evolution of an organismic lineage. The software application Mesquite remains the most popular application for such reconstructions among plant scientists, even though its support for automating complex analyses is limited. A software tool is needed that automates the reconstruction and visualization of ancestral character states with Mesquite and similar applications. Methods and Results: A set of command line–based Python scripts was developed that (a) communicates standardized input to and output from the software applications Mesquite, BayesTraits, and TreeGraph2; (b) automates the process of ancestral character state reconstruction; and (c) facilitates the visualization of reconstruction results. Conclusions: WARACS provides a simple tool that streamlines the reconstruction and visualization of ancestral character states over a wide array of parameters, including tree distribution, character state, and optimality criterion. PMID:26949580

  9. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    Science.gov (United States)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.

  10. GAUFRE: A tool for an automated determination of atmospheric parameters from spectroscopy

    Directory of Open Access Journals (Sweden)

    Fossati L.

    2013-03-01

    Full Text Available We present an automated tool for measuring atmospheric parameters (Teff, log g, [Fe/H]) for F-G-K dwarf and giant stars. The tool, called GAUFRE, is composed of several routines written in C++: GAUFRE-RV measures radial velocity from spectra via cross-correlation against a synthetic template, GAUFRE-EW measures atmospheric parameters through the classic line-by-line technique, and GAUFRE-CHI2 performs a χ2 fitting to a library of synthetic spectra. A set of F-G-K stars extensively studied in the literature was used as a benchmark for the program: their high signal-to-noise and high resolution spectra were analyzed using GAUFRE and the results were compared with those in the literature. The tool is also implemented to perform the spectral analysis after fixing the surface gravity (log g) to the accurate value provided by asteroseismology. A set of CoRoT stars belonging to the LRc01 and LRa01 fields was used to first test the performance and behaviour of the program when using the seismic log g.

  11. Optimization of vibratory welding process parameters using response surface methodology

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Pravin Kumar; Kumar, S. Deepak; Patel, D.; Prasad, S. B. [National Institute of Technology Jamshedpur, Jharkhand (India)

    2017-05-15

    The current investigation was carried out to study the effect of a vibratory welding technique on the mechanical properties of 6 mm thick butt-welded mild steel plates. A new vibratory welding setup has been designed and developed which is capable of transferring vibrations, at a resonance frequency of 300 Hz, into the molten weld pool before it solidifies during the shielded metal arc welding (SMAW) process. The important process parameters of the vibratory welding technique, namely welding current, welding speed and the frequency of the vibrations induced in the molten weld pool, were optimized using Taguchi analysis and response surface methodology (RSM). The effects of the process parameters on tensile strength and hardness were evaluated using these optimization techniques. Applying RSM, the effects of the vibratory welding parameters on tensile strength and hardness were obtained through two separate regression equations. Results showed that the most influential factor for the desired tensile strength and hardness is the frequency at its resonance value, i.e. 300 Hz. The micro-hardness and microstructures of the vibratory welded joints were studied in detail and compared with those of conventional SMAW joints. A comparatively uniform and fine grain structure was found in the vibratory welded joints.

  12. An interactive and flexible approach to stamping design and optimization

    International Nuclear Information System (INIS)

    Roy, Subir; Kunju, Ravi; Kirby, David

    2004-01-01

    This paper describes an efficient method that integrates finite element analysis (FEA), mesh morphing and response surface based optimization in order to implement an automated and flexible software tool for optimizing stamping tool and process design. For FEA, a robust and extremely fast inverse solver is chosen. For morphing, a state-of-the-art mesh morpher that interactively generates shape variables for optimization studies is used. The optimization algorithm utilized in this study enables a global search over a multitude of parameters and is highly flexible with regard to the choice of objective functions. A quality function that minimizes formability defects resulting from stretching and compression is implemented.

  13. Selecting and optimizing eco-physiological parameters of Biome-BGC to reproduce observed woody and leaf biomass growth of Eucommia ulmoides plantation in China using Dakota optimizer

    Science.gov (United States)

    Miyauchi, T.; Machimura, T.

    2013-12-01

    In simulations using an ecosystem process model, the adjustment of parameters is indispensable for improving the accuracy of prediction. This procedure, however, requires much time and effort to bring the simulation results close to the measurements for models consisting of various ecosystem processes. In this study, we applied a general-purpose optimization tool to the parameter optimization of an ecosystem model, and examined its validity by comparing the simulated and measured biomass growth of a woody plantation. A biometric survey of tree biomass growth was performed in 2009 in an 11-year-old Eucommia ulmoides plantation in Henan Province, China. The climate of the site was dry temperate. Leaf, above- and below-ground woody biomass were measured from three cut trees and converted into carbon mass per area using measured carbon contents and stem density. Yearly woody biomass growth of the plantation was calculated according to allometric relationships determined by tree ring analysis of seven cut trees. We used Biome-BGC (Thornton, 2002) to reproduce the biomass growth of the plantation. Air temperature and humidity from 1981 to 2010 were used as the input climate conditions. The plant functional type was deciduous broadleaf, and non-optimized parameters were left at their defaults. 11-year normal simulations were performed following a spin-up run. In order to select the parameters to optimize, we analyzed the sensitivity of leaf, above- and below-ground woody biomass to the eco-physiological parameters. Following the selection, parameter optimization was performed using the Dakota optimizer. Dakota is an optimizer developed by Sandia National Laboratories to provide a systematic and rapid means of obtaining optimal designs using simulation-based models. As the objective function, we calculated the sum of relative errors between simulated and measured leaf, above- and below-ground woody carbon at each of eleven years. In an alternative run, errors at the last year (at the
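
    The objective described above, a sum of relative errors between simulated and measured carbon pools across the years, can be sketched with a made-up allometric toy model and a generic derivative-free optimizer standing in for Dakota.

```python
import numpy as np
from scipy.optimize import minimize

years = np.arange(1, 12)
# Made-up "measured" carbon pools for leaf, above- and below-ground wood.
measured = np.vstack([0.2 * years ** 0.8,
                      1.0 * years ** 1.1,
                      0.6 * years ** 1.0])

def toy_model(params):
    """Placeholder ecosystem model: three allometric growth curves, not Biome-BGC."""
    a, b, c = params
    return np.vstack([a * years ** 0.8, b * years ** 1.1, c * years ** 1.0])

def objective(params):
    sim = toy_model(params)
    return np.sum(np.abs(sim - measured) / measured)   # summed relative errors

res = minimize(objective, x0=[0.1, 0.5, 0.3], method="Nelder-Mead")
print("optimized parameters:", np.round(res.x, 3), "objective:", round(res.fun, 4))
```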

  14. A Particle Swarm Optimization of Natural Ventilation Parameters in a Greenhouse with Continuous Roof Vents

    Directory of Open Access Journals (Sweden)

    Abdelhafid HASNI

    2009-03-01

    Full Text Available Although natural ventilation plays an important role in affecting greenhouse climate, as defined by temperature, humidity and CO2 concentration, particularly in Mediterranean countries, little information and data are presently available on full-scale greenhouse ventilation mechanisms. In this paper, we present a new method for selecting the parameters based on a particle swarm optimization (PSO) algorithm, which optimizes the choice of parameters by minimizing a cost function. The simulator was based on a published model with some minor modifications, as we were interested in the ventilation parameters. The cost function is defined by a reduced model that can be used to simulate and predict the greenhouse environment, together with the tuning methods used to compute its parameters. This study focuses on the dynamic behavior of the inside air temperature and humidity during ventilation. Our approach is validated by comparison with experimental results. Various experimental techniques were used to make full-scale measurements of the air exchange rate in a 400 m2 plastic greenhouse. The proposed model, with natural ventilation parameters optimized by particle swarm optimization, was compared with the measurement results.

  15. Optimization of the dressing parameters in cylindrical grinding based on a generalized utility function

    Science.gov (United States)

    Aleksandrova, Irina

    2016-01-01

    The existing studies, concerning the dressing process, focus on the major influence of the dressing conditions on the grinding response variables. However, the choice of the dressing conditions is often made, based on the experience of the qualified staff or using data from reference books. The optimal dressing parameters, which are only valid for the particular methods and dressing and grinding conditions, are also used. The paper presents a methodology for optimization of the dressing parameters in cylindrical grinding. The generalized utility function has been chosen as an optimization parameter. It is a complex indicator determining the economic, dynamic and manufacturing characteristics of the grinding process. The developed methodology is implemented for the dressing of aluminium oxide grinding wheels by using experimental diamond roller dressers with different grit sizes made of medium- and high-strength synthetic diamonds type ??32 and ??80. To solve the optimization problem, a model of the generalized utility function is created which reflects the complex impact of dressing parameters. The model is built based on the results from the conducted complex study and modeling of the grinding wheel lifetime, cutting ability, production rate and cutting forces during grinding. They are closely related to the dressing conditions (dressing speed ratio, radial in-feed of the diamond roller dresser and dress-out time), the diamond roller dresser grit size/grinding wheel grit size ratio, the type of synthetic diamonds and the direction of dressing. Some dressing parameters are determined for which the generalized utility function has a maximum and which guarantee an optimum combination of the following: the lifetime and cutting ability of the abrasive wheels, the tangential cutting force magnitude and the production rate of the grinding process. The results obtained prove the possibility of control and optimization of grinding by selecting particular dressing

  16. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis

    Directory of Open Access Journals (Sweden)

    Tashkova Katerina

    2011-10-01

    Full Text Available Abstract Background We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. Results We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA, particle-swarm optimization (PSO, and differential evolution (DE, as well as a local-search derivative-based algorithm 717 (A717 to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Conclusions Overall, the global meta-heuristic methods (DASA, PSO, and DE clearly and significantly outperform the local derivative-based method (A717. Among the three meta-heuristics, differential evolution (DE performs best in terms of the objective function, i.e., reconstructing the output, and in terms of

  17. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis.

    Science.gov (United States)

    Tashkova, Katerina; Korošec, Peter; Silc, Jurij; Todorovski, Ljupčo; Džeroski, Sašo

    2011-10-11

    We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. These results hold for both real and
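
    A sketch of this estimation task using SciPy's differential evolution: a two-state toy system, with one species decaying into another in loose analogy to the Rab5-to-Rab7 switch, is simulated, perturbed with noise, and its rate constants are recovered; the model form and noise level are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

rng = np.random.default_rng(8)
t_obs = np.linspace(0, 10, 25)

def simulate(k1, k2):
    """Toy two-state system: A decays (Rab5-like) and feeds B (Rab7-like)."""
    sol = solve_ivp(lambda t, y: [-k1 * y[0], k1 * y[0] - k2 * y[1]],
                    (0, 10), [1.0, 0.0], t_eval=t_obs)
    return sol.y

true = (0.8, 0.3)
# Noisy pseudo-experimental "measurements" of both concentrations.
data = simulate(*true) + 0.01 * rng.standard_normal((2, t_obs.size))

def misfit(params):
    return np.sum((simulate(*params) - data) ** 2)

res = differential_evolution(misfit, bounds=[(0.01, 2.0), (0.01, 2.0)], seed=0)
print("true:", true, "estimated:", np.round(res.x, 3))
```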

  18. Influence factor on automated synthesis yield of 3'-deoxy-3'-[18F] fluorothymidine

    International Nuclear Information System (INIS)

    Zhang Jinming; Tian Jiahe; Liu Changbin; Liu Jian; Luo Zhigang

    2009-01-01

    3'-deoxy-3'-[18F]fluorothymidine (18F-FLT) was prepared from an N-BOC precursor using a home-made automated synthesis module, with the aim of improving the synthesis yield, chemical purity and radiochemical purity of 18F-FLT. The results showed that residual water in the synthesis system and the amount of precursor affect the synthesis yield dramatically: the greater the amount of the N-BOC precursor, the higher the synthesis yield, while residual water decreases the yield. In the presence of excess base, the precursor was consumed by elimination before substitution was completed; the optimal precursor-to-base ratio was 1:1. The equilibration of the semi-preparative HPLC column can affect the purification of the final 18F-FLT product, and the chemical purity of 18F-FLT could be decreased with 8% EtOH as the mobile phase in semi-preparative HPLC. High chemical purity, radiochemical purity and synthesis yield could be obtained by optimizing the synthesis parameters with the home-made automated synthesis module. (authors)

  19. Design Optimization of Internal Flow Devices

    DEFF Research Database (Denmark)

    Madsen, Jens Ingemann

    The power of computational fluid dynamics is boosted through the use of automated design optimization methodologies. The thesis considers both derivative-based search optimization and the use of response surface methodologies.

  20. Assessing FPAR Source and Parameter Optimization Scheme in Application of a Diagnostic Carbon Flux Model

    Energy Technology Data Exchange (ETDEWEB)

    Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A

    2009-02-26

    The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional- to global-scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization than with parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for the parameterization of diagnostic carbon flux models.
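
    The single-site versus cross-site contrast can be illustrated on synthetic data: parameters fitted to one site absorb that site's quirks, while parameters fitted jointly across sites tend to generalize better to a held-out site. The linear "flux model" below is an invented stand-in for CFLUX.

```python
import numpy as np

rng = np.random.default_rng(9)

# Four synthetic "flux tower sites": same model form, slightly different truths.
n_sites, n_obs = 4, 60
true_p = 2.0 + 0.1 * rng.standard_normal(n_sites)      # site-level parameter scatter
X = rng.random((n_sites, n_obs))
Y = true_p[:, None] * X + 0.05 * rng.standard_normal((n_sites, n_obs))

def fit(x, y):
    """Least-squares slope for the toy one-parameter flux model."""
    return np.sum(x * y) / np.sum(x * x)

for target in range(n_sites):
    others = [s for s in range(n_sites) if s != target]
    p_single = fit(X[others[0]], Y[others[0]])          # parameters from one site
    p_cross = fit(X[others].ravel(), Y[others].ravel()) # cross-site optimization
    err = lambda p: np.mean((p * X[target] - Y[target]) ** 2)
    print(f"site {target}: single-site MSE {err(p_single):.4f}, "
          f"cross-site MSE {err(p_cross):.4f}")
```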

  1. Optimization-Based Inverse Identification of the Parameters of a Concrete Cap Material Model

    Science.gov (United States)

    Král, Petr; Hokeš, Filip; Hušek, Martin; Kala, Jiří; Hradil, Petr

    2017-10-01

    Issues concerning the advanced numerical analysis of concrete building structures in sophisticated computing systems currently require the involvement of nonlinear mechanics tools. Efforts to design safer, more durable and, above all, more economically efficient concrete structures are supported by the use of advanced nonlinear concrete material models and a geometrically nonlinear approach. The application of nonlinear mechanics tools undoubtedly presents another step towards approximating the real behaviour of concrete building structures in computer numerical simulations. However, the success of this application depends on a thorough understanding of the behaviour of the concrete material models used and of the meaning of their parameters. The effective application of nonlinear concrete material models in computer simulations often becomes very problematic because these models frequently contain parameters (material constants) whose values are difficult to obtain. Nevertheless, obtaining the correct values of the material parameters is very important to ensure the proper function of the material model. One possibility for solving this problem is the use of optimization algorithms for optimization-based inverse material parameter identification. Parameter identification goes hand in hand with experimental investigation: it seeks parameter values of the material model such that the results of the computer simulation best approximate the experimental data. This paper focuses on the optimization-based inverse identification of the parameters of a concrete cap material model known as the Continuous Surface Cap Model. Within this paper, the material parameters of the model are identified on the basis of interaction between nonlinear computer simulations

  2. Multi-parameter optimization of a nanomagnetic system for spintronic applications

    International Nuclear Information System (INIS)

    Morales Meza, Mishel; Zubieta Rico, Pablo F.; Horley, Paul P.; Sukhov, Alexander; Vieira, Vítor R.

    2014-01-01

    Magnetic properties of nano-particles feature many interesting physical phenomena that are essential for the creation of a new generation of spin-electronic devices. The magnetic stability of nano-particles can be improved by the formation of ordered particle arrays, which should be optimized over several parameters. Here we report a successful optimization over inter-particle distance and applied field frequency, yielding roughly a three-fold reduction in the coercivity of a particle array compared to that of a single particle, which opens new perspectives for the development of new spintronic devices.

  3. Multi-parameter optimization of a nanomagnetic system for spintronic applications

    Energy Technology Data Exchange (ETDEWEB)

    Morales Meza, Mishel [Centro de Investigación en Materiales Avanzados, S.C. (CIMAV), Chihuahua/Monterrey, 120 Avenida Miguel de Cervantes, 31109 Chihuahua (Mexico); Zubieta Rico, Pablo F. [Centro de Investigación en Materiales Avanzados, S.C. (CIMAV), Chihuahua/Monterrey, 120 Avenida Miguel de Cervantes, 31109 Chihuahua (Mexico); Centro de Investigación y de Estudios Avanzados del IPN (CINVESTAV) Querétaro, Libramiento Norponiente 2000, Fracc. Real de Juriquilla, 76230 Querétaro (Mexico); Horley, Paul P., E-mail: paul.horley@cimav.edu.mx [Centro de Investigación en Materiales Avanzados, S.C. (CIMAV), Chihuahua/Monterrey, 120 Avenida Miguel de Cervantes, 31109 Chihuahua (Mexico); Sukhov, Alexander [Institut für Physik, Martin-Luther Universität Halle-Wittenberg, 06120 Halle (Saale) (Germany); Vieira, Vítor R. [Centro de Física das Interacções Fundamentais (CFIF), Instituto Superior Técnico, Universidade Técnica de Lisboa, Avenida Rovisco Pais, 1049-001 Lisbon (Portugal)

    2014-11-15

    Magnetic properties of nano-particles feature many interesting physical phenomena that are essential for the creation of a new generation of spin-electronic devices. The magnetic stability of nano-particles can be improved by the formation of ordered particle arrays, which should be optimized over several parameters. Here we report a successful optimization over inter-particle distance and applied field frequency, yielding roughly a three-fold reduction in the coercivity of a particle array compared to that of a single particle, which opens new perspectives for the development of new spintronic devices.

  4. Optimization of processing parameters of amaranth grits before grinding into flour

    Science.gov (United States)

    Zharkova, I. M.; Safonova, Yu A.; Slepokurova, Yu I.

    2018-05-01

    We present the results of experimental studies on the influence of infrared (IR) treatment parameters applied to amaranth grits before grinding into flour on the composition and properties of the resulting product. Using regression-factor analysis, the optimal thermal processing conditions for the amaranth grits were obtained: conveyor belt speed, 0.049 m/s; grit temperature in the tempering silo, 65.4 °C; grit layer thickness on the belt, 3-5 mm; lamp power, 69.2 kW/m2. The research confirmed that IR thermal treatment of the amaranth grain yields flour with smaller starch grains, increased water-holding capacity, and a changed glycemic index. Mathematical processing of the experimental data allowed the dependence of the structural and technological characteristics of the amaranth flour on the IR processing parameters of the grits to be established. The obtained results are quite consistent with the experimental ones, which demonstrates the effectiveness of optimization based on mathematical planning of the experiment for determining the influence of the optimal heat treatment parameters on the functional and technological properties of the resulting flour.

  5. Applications of the Automated SMAC Modal Parameter Extraction Package

    International Nuclear Information System (INIS)

    MAYES, RANDALL L.; DORRELL, LARRY R.; KLENKE, SCOTT E.

    1999-01-01

    An algorithm known as SMAC (Synthesize Modes And Correlate), based on principles of modal filtering, has been in development for a few years. The new capabilities of the automated version are demonstrated on test data from a complex shell/payload system. Examples of extractions from impact and shaker data are shown. The automated algorithm extracts 30 to 50 modes in the bandwidth from each column of the frequency response function matrix. Examples of the synthesized Mode Indicator Functions (MIFs) compared with the actual MIFs show the accuracy of the technique. A data set for one input and 170 accelerometer outputs can typically be reduced in an hour. Application to a test with some complex modes is also demonstrated

  6. Nonparametric variational optimization of reaction coordinates

    Energy Technology Data Exchange (ETDEWEB)

    Banushkina, Polina V.; Krivov, Sergei V., E-mail: s.krivov@leeds.ac.uk [Astbury Center for Structural Molecular Biology, Faculty of Biological Sciences, University of Leeds, Leeds LS2 9JT (United Kingdom)

    2015-11-14

    State-of-the-art realistic simulations of complex atomic processes commonly produce trajectories of large size, making the development of automated analysis tools very important. A popular approach aimed at extracting dynamical information consists of projecting these trajectories onto optimally selected reaction coordinates or collective variables. For equilibrium dynamics between any two boundary states, the committor function, also known as the folding probability in protein folding studies, is often considered the optimal coordinate. To determine it, one selects a functional form with many parameters and trains it on the trajectories using various criteria. A major problem with such an approach is that a poor initial choice of the functional form may lead to sub-optimal results. Here, we describe an approach which allows one to optimize the reaction coordinate without selecting its functional form, thus avoiding this source of error.

  7. Dynamic Pressure Gradient Model of Axial Piston Pump and Parameters Optimization

    Directory of Open Access Journals (Sweden)

    Shi Jian

    2014-01-01

    Full Text Available An unsteady pressure gradient can cause flow noise during the pre-pressure rise of a piston pump, and fluid shock arises from the large pressure difference between the piston chamber and the discharge port in the valve plate. Flow fluctuation control was the optimization objective in previous studies, which cannot ensure a steady pressure gradient. Our study aims to stabilize the pressure gradient during the pre-pressure rise and to bring the piston chamber pressure close to the discharge port pressure after the pre-pressure rise. Models for avoiding oil shock and for the dynamic pressure of the piston chamber during the pre-pressure rise are established. The parameters of pre-pressure rise angle, cross angle, wrap angle of the V-groove, vertex angle of the V-groove, and opening angle of the V-groove were optimized, on the basis of which the piston chamber pressure approached the discharge port pressure after the pre-pressure rise and the pressure gradient was steadier than with the original parameters. The maximum pressure gradient decreased by 70.8% and the flow fluctuation declined by 21.4%, which shows the effectiveness of the optimization.

  8. Optimal relations of the parameters ensuring safety during reactor start-up

    International Nuclear Information System (INIS)

    Yurkevich, G.P.

    2004-01-01

    A procedure and equations are suggested for determining the optimal ratios between the parameters that allow the reactor to be brought safely to a critical state. The initial pulse frequency of the pulsed start-up channel and the power of the neutron source are decreased by reducing the rate of reactivity change during automatic start-up, placing the pulsed neutron detector in a region with a neutron flux density of up to 5·10^12 cm^-2·s^-1 at nominal power, using a separate period signal in the automatic start-up and emergency protection chains, and reducing the pulse frequency of the start-up channel (the frequency is equal to 4000 s^-1). The procedure and equations for determining the optimal parameters take into account the statistical character of the pulsed detector frequency and of false output signals.

  9. Optimal allocation of sensors for state estimation of distributed parameter systems

    International Nuclear Information System (INIS)

    Sunahara, Yoshifumi; Ohsumi, Akira; Mogami, Yoshio.

    1978-01-01

    The purpose of this paper is to present a method for finding the optimal allocation of sensors for state estimation of linear distributed parameter systems. The method is based on the criterion that the error covariance associated with the state estimate becomes minimal with respect to the allocation of the sensors. A theorem is established giving a sufficient condition for optimizing the allocation of sensors so as to minimize the error covariance approximated by a modal expansion. The remainder of the paper is devoted to illustrating important phases of the general theory of the optimal measurement allocation problem. To this end, several examples are demonstrated, including extensive discussion of the mutual relation between the optimal allocation and the dynamics of the sensors. (author)

  10. Application of dragonfly algorithm for optimal performance analysis of process parameters in turn-mill operations- A case study

    Science.gov (United States)

    Vikram, K. Arun; Ratnam, Ch; Lakshmi, VVK; Kumar, A. Sunny; Ramakanth, RT

    2018-02-01

    Meta-heuristic multi-response optimization methods are widely used to solve multi-objective problems and obtain Pareto optimal solutions. This work focuses on the optimal multi-response evaluation of process parameters for responses such as surface roughness (Ra), surface hardness (H) and tool vibration displacement amplitude (Vib) in tangential and orthogonal turn-mill operations on an A-axis Computer Numerical Control vertical milling center. Tool speed, feed rate and depth of cut are considered as process parameters; brass was machined under dry conditions with high-speed steel end mill cutters using a Taguchi design of experiments (DOE). The dragonfly meta-heuristic is used to optimize the objectives 'Ra', 'H' and 'Vib' and to identify the optimal multi-response process parameter combination. The results obtained from the multi-objective dragonfly algorithm (MODA) are then compared with those of another multi-response optimization technique, viz. grey relational analysis (GRA).
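
    The GRA comparison step can be sketched directly: each response is normalized toward its preferred direction, grey relational coefficients are formed with a distinguishing coefficient of 0.5, and their mean gives a grade per run; the response table below is invented.

```python
import numpy as np

# Invented response table: rows = experimental runs, columns = (Ra, H, Vib).
R = np.array([[1.8, 210.0, 4.2],
              [1.2, 235.0, 3.1],
              [2.1, 198.0, 5.0],
              [1.5, 224.0, 3.8]])
smaller_better = np.array([True, False, True])   # Ra and Vib down, hardness up

# Normalize each response to [0, 1] with the preferred direction at 1.
lo, hi = R.min(axis=0), R.max(axis=0)
norm = np.where(smaller_better, (hi - R) / (hi - lo), (R - lo) / (hi - lo))

zeta = 0.5                                       # distinguishing coefficient
delta = 1.0 - norm                               # deviation from the ideal run
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = coeff.mean(axis=1)                       # grey relational grade per run
print("grades:", np.round(grade, 3), "best run:", int(grade.argmax()))
```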

  11. Multiscale analysis of the correlation of processing parameters on viscidity of composites fabricated by automated fiber placement

    Science.gov (United States)

    Han, Zhenyu; Sun, Shouzheng; Fu, Yunzhong; Fu, Hongya

    2017-10-01

    Viscidity is an important physical indicator for assessing the fluidity of resin, which helps the resin contact the fibers effectively and reduces manufacturing defects during the automated fiber placement (AFP) process. However, the effect of processing parameters on viscidity evolution during the AFP process has rarely been studied. In this paper, viscidity at different scales is analyzed based on a multi-scale analysis method. Firstly, the viscous dissipation energy (VDE) within a meso-unit under different processing parameters is assessed using the finite element method (FEM). According to a multi-scale energy transfer model, the meso-unit energy is used as the boundary condition for the microscopic analysis. Furthermore, the molecular structure of the micro-system is built by the molecular dynamics (MD) method, and viscosity curves are then obtained by integrating the stress autocorrelation function (SACF) over time. Finally, the correlation characteristics of the processing parameters with respect to viscosity are revealed using the grey relational analysis method (GRAM). A group of processing parameters is found that achieves stable viscosity and better resin fluidity.

  12. Analysis of parameter estimation and optimization application of ant colony algorithm in vehicle routing problem

    Science.gov (United States)

    Xu, Quan-Li; Cao, Yu-Wei; Yang, Kun

    2018-03-01

    Ant Colony Optimization (ACO) is the most widely used artificial intelligence algorithm at present. This study introduced the principle and mathematical model of the ACO algorithm for solving the Vehicle Routing Problem (VRP), designed a vehicle routing optimization model based on ACO, and developed a vehicle routing optimization simulation system in the C++ programming language; sensitivity analyses, estimation and improvement of the three key parameters of ACO were then carried out. The results indicate that the ACO algorithm designed in this paper can efficiently solve the rational planning and optimization of VRP, that different values of the key parameters have a significant influence on the performance and optimization effects of the algorithm, and that the improved algorithm is less prone to premature local convergence and has good robustness.
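
    The three key ACO parameters in such studies are conventionally the pheromone weight α, the heuristic weight β and the evaporation rate ρ. A minimal sketch of how they enter the transition rule and the pheromone update, on an invented three-city instance rather than the paper's VRP data:

```python
# Classic ACO ingredients: probabilistic transition rule and pheromone update.
import numpy as np

rng = np.random.default_rng(1)
D = np.array([[0, 2, 9], [2, 0, 6], [9, 6, 0]], float)  # toy distances
tau = np.ones_like(D)              # pheromone levels
eta = 1.0 / (D + np.eye(3))        # heuristic desirability (diagonal padded)
alpha, beta, rho = 1.0, 2.0, 0.5   # the three key parameters

def next_city(current, unvisited):
    """Sample the next city with probability ~ tau^alpha * eta^beta."""
    w = tau[current, unvisited] ** alpha * eta[current, unvisited] ** beta
    return int(rng.choice(unvisited, p=w / w.sum()))

tour, unvisited = [0], [1, 2]      # one ant builds a tour from city 0
while unvisited:
    c = next_city(tour[-1], unvisited)
    tour.append(c)
    unvisited.remove(c)

length = sum(D[a, b] for a, b in zip(tour, tour[1:] + tour[:1]))
tau *= (1.0 - rho)                               # evaporation
for a, b in zip(tour, tour[1:] + tour[:1]):
    tau[a, b] += 1.0 / length                    # deposit ~ tour quality
print("tour:", tour, "length:", length)
```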

  13. Parametric analysis of parameters for electrical-load forecasting using artificial neural networks

    Science.gov (United States)

    Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael

    1997-04-01

    Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize the cost. Automating this process is a profitable goal and neural networks should provide an excellent means of doing the automation. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially good inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight savings time, length of daylight, season, forecast light index and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index and forecast wind velocity.
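
    A minimal sketch of this kind of parametric input study: add one candidate input at a time to the basic reference set and compare held-out forecast error. Synthetic data and a linear least-squares fit stand in for the utility data and the neural network:

```python
# Parametric study of forecast inputs: does adding a candidate feature to
# the basic (temperature, past load) set reduce held-out error?
import numpy as np

rng = np.random.default_rng(2)
n = 500
temp = rng.normal(20.0, 8.0, n)
load_prev = rng.normal(100.0, 15.0, n)
dewpoint = temp - rng.uniform(2.0, 6.0, n)
day_of_week = rng.integers(0, 7, n).astype(float)
load = 50 + 2.1 * temp + 0.4 * load_prev + 1.5 * dewpoint + rng.normal(0, 3, n)

def holdout_mae(X, y):
    """Fit by least squares on the first half, score on the second half."""
    h = n // 2
    A = np.column_stack([np.ones(h), X[:h]])
    coef, *_ = np.linalg.lstsq(A, y[:h], rcond=None)
    pred = np.column_stack([np.ones(n - h), X[h:]]) @ coef
    return float(np.abs(pred - y[h:]).mean())

base = np.column_stack([temp, load_prev])
print("base MAE:", round(holdout_mae(base, load), 2))
for name, extra in [("dew point", dewpoint), ("day of week", day_of_week)]:
    X = np.column_stack([base, extra])
    print(f"base + {name}: MAE {holdout_mae(X, load):.2f}")
```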

  14. Homogeneous Gaussian Profile P+-Type Emitters: Updated Parameters and Metal-Grid Optimization

    Directory of Open Access Journals (Sweden)

    M. Cid

    2002-10-01

    Full Text Available P+-type emitters were optimized keeping the base parameters constant. Updated internal parameters were considered, and the surface recombination velocity was taken to vary with the surface doping level. Passivated homogeneous emitters were found to have low emitter recombination density and high collection efficiency. A complete p+nn+ structure was analyzed, taking into account optimized shadowing and metal-contacted factors for laboratory cells as a function of the surface doping level and the emitter thickness. The base parameters were kept constant to make the emitter characteristics evident. The most efficient P+-type passivated homogeneous emitters provide efficiencies around 21% for a wide range of emitter sheet resistivity (50-500 Ω/sq) with surface doping levels Ns = 1×10¹⁹ cm⁻³ and 5×10¹⁹ cm⁻³. The output electrical parameters were evaluated considering the recently proposed value ni = 9.65×10⁹ cm⁻³. A non-significant increase of 0.1% in efficiency was obtained, validating all the conclusions obtained in this work, which considered ni = 1×10¹⁰ cm⁻³.

  15. A new framework for analysing automated acoustic species-detection data: occupancy estimation and optimization of recordings post-processing

    Science.gov (United States)

    Chambert, Thierry A.; Waddle, J. Hardin; Miller, David A.W.; Walls, Susan; Nichols, James D.

    2018-01-01

    The development and use of automated species-detection technologies, such as acoustic recorders, for monitoring wildlife are rapidly expanding. Automated classification algorithms provide a cost- and time-effective means to process information-rich data, but often at the cost of additional detection errors. Appropriate methods are necessary to analyse such data while dealing with the different types of detection errors. We developed a hierarchical modelling framework for estimating species occupancy from automated species-detection data. We explore the design and optimization of data post-processing procedures to account for detection errors and generate accurate estimates. Our proposed method accounts for both imperfect detection and false positive errors and utilizes information about both the occurrence and abundance of detections to improve estimation. Using simulations, we show that our method provides much more accurate estimates than models ignoring the abundance of detections. The same findings are reached when we apply the methods to two real datasets on North American frogs surveyed with acoustic recorders. When false positives occur, estimator accuracy can be improved when a subset of detections produced by the classification algorithm is post-validated by a human observer. We use simulations to investigate the relationship between accuracy and effort spent on post-validation, and find that very accurate occupancy estimates can be obtained with as little as 1% of data being validated. Automated monitoring of wildlife provides opportunities and challenges. Our methods for analysing automated species-detection data help to meet key challenges unique to these data and will prove useful for many wildlife monitoring programs.

  16. Optimization of process parameters in drilling of fibre hybrid composite using Taguchi and grey relational analysis

    Science.gov (United States)

    Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.

    2017-03-01

    Nowadays quality plays a vital role in all products. Hence, developments in manufacturing processes focus on fabricating composites with high dimensional accuracy at low manufacturing cost. In this work, an investigation of machining parameters was performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized by employing three machining input parameters: drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing the individual tool parameters, and Analysis of Variance is used to find the significance of the individual parameters. The simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the greatest effect on material removal rate and surface roughness, followed by feed rate.
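
    For reference, the per-response Taguchi step rests on signal-to-noise ratios: larger-the-better for material removal rate and smaller-the-better for surface roughness. A minimal sketch with invented replicate values:

```python
# Taguchi S/N ratios used to rank factor levels in an orthogonal array.
import numpy as np

def sn_larger_the_better(y):
    """S/N = -10 log10(mean(1/y^2)); for responses to maximize (e.g. MRR)."""
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_the_better(y):
    """S/N = -10 log10(mean(y^2)); for responses to minimize (e.g. Ra)."""
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(y**2))

mrr_reps = [12.1, 11.7, 12.4]   # material removal rate replicates (invented)
ra_reps = [1.82, 1.95, 1.88]    # surface roughness replicates (invented)
print("S/N (MRR):", round(sn_larger_the_better(mrr_reps), 2), "dB")
print("S/N (Ra): ", round(sn_smaller_the_better(ra_reps), 2), "dB")
```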

  17. Optimization of reserve lithium thionyl chloride battery electrochemical design parameters

    Energy Technology Data Exchange (ETDEWEB)

    Doddapaneni, N.; Godshall, N.A.

    1987-01-01

    The performance of Reserve Lithium Thionyl Chloride (RLTC) batteries was optimized by conducting a parametric study of seven electrochemical parameters: electrode compression, carbon thickness, presence of catalyst, temperature, electrode limitation, discharge rate, and electrolyte acidity. Increasing electrode compression (from 0 to 15%) improved battery performance significantly (10% greater carbon capacity density). Although thinner carbon cathodes yielded less absolute capacity than did thicker cathodes, they did so with considerably higher volume efficiencies. The effect of these parameters, and their synergistic interactions, on electrochemical cell performance is illustrated. 5 refs., 9 figs., 3 tabs.

  18. Optimization of reserve lithium thionyl chloride battery electrochemical design parameters

    Science.gov (United States)

    Doddapaneni, N.; Godshall, N. A.

    The performance of Reserve Lithium Thionyl Chloride (RLTC) batteries was optimized by conducting a parametric study of seven electrochemical parameters: electrode compression, carbon thickness, presence of catalyst, temperature, electrode limitation, discharge rate, and electrolyte acidity. Increasing electrode compression (from 0 to 15 percent) improved battery performance significantly (10 percent greater carbon capacity density). Although thinner carbon cathodes yielded less absolute capacity than did thicker cathodes, they did so with considerably higher volume efficiencies. The effect of these parameters, and their synergistic interactions, on electrochemical cell performance is illustrated.

  19. The contaminant analysis automation robot implementation for the automated laboratory

    International Nuclear Information System (INIS)

    Younkin, J.R.; Igou, R.E.; Urenda, T.D.

    1995-01-01

    The Contaminant Analysis Automation (CAA) project defines the automated laboratory as a series of standard laboratory modules (SLM) serviced by a robotic standard support module (SSM). These SLMs are designed to allow plug-and-play integration into automated systems that perform standard analysis methods (SAM). While the SLMs are autonomous in the execution of their particular chemical processing task, the SAM concept relies on a high-level task sequence controller (TSC) to coordinate the robotic delivery of materials requisite for SLM operations, initiate an SLM operation with the chemical-method-dependent operating parameters, and coordinate the robotic removal of materials from the SLM when its operation is complete. A standard set of commands and events has been established to ready materials for transport operations; the Supervisor and Subsystems (GENISAS) software governs events from the SLMs and robot. The Intelligent System Operating Environment (ISOE) enables the inter-process communications used by GENISAS. CAA selected the Hewlett-Packard Optimized Robot for Chemical Analysis (ORCA) and its associated Windows-based Methods Development Software (MDS) as the robot SSM. The MDS software is used to teach the robot each SLM position and the required material port motions. To allow the TSC to command these SLM motions, a hardware and software implementation was required that allowed message passing between different operating systems. This implementation involved the use of a Virtual Memory Extended (VME) rack with a Force CPU-30 computer running VxWorks, a real-time multitasking operating system, and a RadiSys PC-compatible VME computer running MDS. A GENISAS server on the Force computer accepts a transport command from the TSC, a GENISAS supervisor, over Ethernet and notifies software on the RadiSys PC of the pending command through VMEbus shared memory. The command is then delivered to the MDS robot control software using a Windows Dynamic Data Exchange conversation.

  20. Automation of analytical systems in power cycles

    International Nuclear Information System (INIS)

    Staub Lukas

    2008-01-01

    'Automation' is a widely used term in instrumentation and is often applied to signal exchange, PLC and SCADA systems. Common usage, however, does not necessarily describe the autonomous operation of analytical devices. We define an automated analytical system as a black box with an input (sample) and an output (measured value). In addition, we need dedicated status lines for assessing the validity of the input to our black box and of the output for subsequent systems. We will discuss input parameters, automated analytical processes and output parameters. Further consideration is given to signal exchange and integration into the operating routine of a power plant. Local control loops (chemical dosing) and the automation of sampling systems are not discussed here. (author)

  1. Zener Diode Compact Model Parameter Extraction Using Xyce-Dakota Optimization.

    Energy Technology Data Exchange (ETDEWEB)

    Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wilcox, Ian Zachary [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sandoval, Andrew J [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reza, Shahed [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    This report presents a detailed process for compact model parameter extraction for DC circuit Zener diodes. Following the traditional approach to Zener diode parameter extraction, a circuit model representation is defined and then used to capture the different operational regions of a real diode's electrical behavior. The circuit model contains 9 parameters represented by resistors and characteristic diodes as circuit model elements. The process of initial parameter extraction, the identification of parameter values for the circuit model elements, is presented in a way that isolates the dependencies between certain electrical parameters and highlights both the empirical nature of the extraction and the portions of the real diode's physical behavior that the parameters are intended to represent. Optimization of the parameters, a necessary part of a robust parameter extraction process, is demonstrated using a 'Xyce-Dakota' workflow, discussed in more detail in the report. Among other realizations during this systematic approach to electrical model parameter extraction, non-physical solutions are possible and can be difficult to avoid because of the interdependencies between the different parameters. The process steps described are fairly general and can be leveraged for other types of semiconductor device model extractions. Also included in the report are recommendations for experiment setups for generating an optimum dataset for model extraction and the Parameter Identification and Ranking Table (PIRT) for Zener diodes.
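
    A minimal sketch of the optimization step in such an extraction, shown on a simple two-parameter forward-diode characteristic rather than the report's nine-parameter Zener circuit or its Xyce-Dakota workflow; the bias points and "measured" currents are synthetic:

```python
# Least-squares extraction of (Is, n) for I = Is*(exp(V/(n*Vt)) - 1).
import numpy as np
from scipy.optimize import least_squares

Vt = 0.02585                                  # thermal voltage at ~300 K
V = np.linspace(0.4, 0.7, 16)                 # bias points (illustrative)
I_meas = 1e-12 * (np.exp(V / (1.8 * Vt)) - 1) # synthetic "measurement"

def residuals(p):
    Is, n = p
    # Fit in log space so the decades of current are weighted evenly.
    return np.log(Is * (np.exp(V / (n * Vt)) - 1)) - np.log(I_meas)

fit = least_squares(residuals, x0=[1e-10, 1.0],
                    bounds=([1e-18, 0.5], [1e-6, 3.0]))
print("extracted Is, n:", fit.x)              # expect ~1e-12 and ~1.8
```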

  2. Poster - Thur Eve - 65: Optimization of an automatic image contouring system for radiation therapy.

    Science.gov (United States)

    Hamilton, T; Nedialkov, N; Wierzbicki, M

    2012-07-01

    Intensity modulated radiation therapy (IMRT) is an advanced technique used to concentrate the prescribed dose in the tumour while minimizing exposure to healthy tissues. Success in IMRT is greatly dependent upon the localization of the target volume and normal tissue, thus accurate contouring is crucial. In this paper, we describe an automated atlas-based image contouring system and our approach for improving the system by performing a full-scale optimization of registration parameters using high-performance computing. To achieve this, we use manually pre-contoured CT images of ten head and neck patients. For any parameter set, each patient data is registered with the remaining patients. Accuracy of the resulting contours is determined automatically by comparing their overlap with manually defined targets using Dice's similarity coefficient (DSC). This allows us to compare all permutations of the image registration parameter sets and input data to investigate their impact on final contour accuracy. Investigating the parameter space required 27,000 image registrations and 216,000 DSC computations. To perform these registrations we introduced a large cluster of high-performance computers and developed a parallel testing harness. The metrics collected from the tests show a wide range of performance, indicating that parameter selection is crucial in our contouring system. By selecting an optimized parameter set, we increased the mean overlap of the automatically contoured regions of interest by 50% and reduced registration time by 50% compared to the original parameters. Our findings illustrate that full-scale optimization is an effective method for improving the performance of the automated image contouring system. © 2012 American Association of Physicists in Medicine.
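
    The Dice similarity coefficient used for scoring overlap is straightforward to compute; a minimal sketch on toy binary masks (real use would compare rasterized contours from the atlas registration):

```python
# Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((64, 64), bool)
auto[20:40, 20:40] = True       # "automatic" contour mask (toy)
manual = np.zeros((64, 64), bool)
manual[24:44, 22:42] = True     # "manual" contour mask (toy)
print("DSC:", round(dice(auto, manual), 3))   # 1.0 would be perfect overlap
```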

  3. Study of the radiolabeling of a urea-based PSMA inhibitor with 68-Gallium: comparative evaluation of automated and non-automated methods

    International Nuclear Information System (INIS)

    Alcarde, Lais Fernanda

    2016-01-01

    The methods for clinical diagnosis of prostate cancer include rectal examination and the measurement of prostate-specific antigen (PSA). However, the PSA level is elevated in about 20 to 30% of cases related to benign pathologies, resulting in false positives and leading patients to unnecessary biopsies. The prostate-specific membrane antigen (PSMA), in contrast, is overexpressed in prostate cancer and found at low levels in healthy organs. As a result, it stimulated the development of small-molecule inhibitors of PSMA, which carry imaging agents to the tumor and are not affected by its microvasculature. Recent studies suggest that the HBED-CC chelator intrinsically contributes to the binding of the urea-based PSMA inhibitor peptide (Glu-urea-Lys) to the pharmacophore group. This work describes the optimization of the radiolabeling conditions of PSMA-HBED-CC with 68Ga, using an automated system (synthesis module) and a non-automated method, seeking to establish an appropriate condition to prepare this new radiopharmaceutical, with emphasis on the labeling yield and radiochemical purity of the product. It also aimed to evaluate the stability of the radiolabeled peptide under transport conditions and to study the biological distribution of the radiopharmaceutical in healthy mice. The study of the radiolabeling parameters enabled the definition of a non-automated method which resulted in high radiochemical purity (> 95%) without the need for purification of the labeled peptide. The automated method was adapted, using a synthesis module and software already available at IPEN, and also resulted in high synthetic yield (≥ 90%), especially when compared with those described in the literature, with the associated benefit of greater control of the production process in compliance with Good Manufacturing Practices. The study of the radiolabeling parameters afforded PSMA-HBED-CC-68Ga with higher specific activity than observed in published clinical studies (≥ 140,0 GBq

  4. Tailored parameter optimization methods for ordinary differential equation models with steady-state constraints.

    Science.gov (United States)

    Fiedler, Anna; Raeth, Sebastian; Theis, Fabian J; Hausser, Angelika; Hasenauer, Jan

    2016-08-22

    Ordinary differential equation (ODE) models are widely used to describe (bio-)chemical and biological processes. To enhance the predictive power of these models, their unknown parameters are estimated from experimental data. These experimental data are mostly collected in perturbation experiments, in which the processes are pushed out of steady state by applying a stimulus. The fact that the initial condition is a steady state of the unperturbed process provides valuable information, as it restricts the dynamics of the process and thereby the parameters. However, implementing steady-state constraints in the optimization often results in convergence problems. In this manuscript, we propose two new methods for solving optimization problems with steady-state constraints. The first method exploits ideas from optimization algorithms on manifolds and introduces a retraction operator, essentially reducing the dimension of the optimization problem. The second method is based on the continuous analogue of the optimization problem. This continuous analogue is an ODE whose equilibrium points are the optima of the constrained optimization problem. This equivalence enables the use of adaptive numerical methods for solving optimization problems with steady-state constraints. Both methods are tailored to the problem structure and exploit the local geometry of the steady-state manifold and its stability properties. A parameterization of the steady-state manifold is not required. The efficiency and reliability of the proposed methods are evaluated using one toy example and two applications. The first application example uses published data while the second uses a novel dataset for Raf/MEK/ERK signaling. The proposed methods demonstrated better convergence properties than state-of-the-art methods employed in systems and computational biology. Furthermore, the average computation time per converged start is significantly lower. In addition to the theoretical results, the
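
    A minimal sketch of the continuous-analogue idea only: integrate an ODE whose equilibria coincide with the optima, here for a toy objective with a penalized steady-state constraint. The paper's methods additionally exploit the geometry and stability of the steady-state manifold, which this sketch does not attempt:

```python
# Gradient flow dx/dt = -grad L(x) whose equilibria are stationary points
# of L(x) = f(x) + (mu/2) g(x)^2, a quadratic-penalty surrogate for the
# steady-state constraint g(x) = 0. Objective and constraint are toys.
import numpy as np
from scipy.integrate import solve_ivp

mu = 10.0                                    # penalty weight (assumed)

def g(x):                                    # toy steady-state constraint
    return x[0] - x[1] ** 2

def flow(t, x):
    grad_f = np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
    grad_pen = mu * g(x) * np.array([1.0, -2.0 * x[1]])
    return -(grad_f + grad_pen)

sol = solve_ivp(flow, (0.0, 50.0), [0.0, 0.0], rtol=1e-10, atol=1e-12)
x_star = sol.y[:, -1]
print("equilibrium:", x_star, "constraint residual:", g(x_star))
```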

  5. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    Science.gov (United States)

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images, and the accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, and to establish an optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and a combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  6. Optimizing a Drone Network to Deliver Automated External Defibrillators.

    Science.gov (United States)

    Boutilier, Justin J; Brooks, Steven C; Janmohamed, Alyf; Byers, Adam; Buick, Jason E; Zhan, Cathy; Schoellig, Angela P; Cheskes, Sheldon; Morrison, Laurie J; Chan, Timothy C Y

    2017-06-20

    Public access defibrillation programs can improve survival after out-of-hospital cardiac arrest, but automated external defibrillators (AEDs) are rarely available for bystander use at the scene. Drones are an emerging technology that can deliver an AED to the scene of an out-of-hospital cardiac arrest for bystander use. We hypothesize that a drone network designed with the aid of a mathematical model combining both optimization and queuing can reduce the time to AED arrival. We applied our model to 53 702 out-of-hospital cardiac arrests that occurred in the 8 regions of the Toronto Regional RescuNET between January 1, 2006, and December 31, 2014. Our primary analysis quantified the drone network size required to deliver an AED 1, 2, or 3 minutes faster than historical median 911 response times for each region independently. A secondary analysis quantified the reduction in drone resources required if RescuNET was treated as a large coordinated region. The region-specific analysis determined that 81 bases and 100 drones would be required to deliver an AED ahead of median 911 response times by 3 minutes. In the most urban region, the 90th percentile of the AED arrival time was reduced by 6 minutes and 43 seconds relative to historical 911 response times in the region. In the most rural region, the 90th percentile was reduced by 10 minutes and 34 seconds. A single coordinated drone network across all regions required 39.5% fewer bases and 30.0% fewer drones to achieve similar AED delivery times. An optimized drone network designed with the aid of a novel mathematical model can substantially reduce the AED delivery time to an out-of-hospital cardiac arrest event. © 2017 American Heart Association, Inc.
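
    A greedy covering heuristic conveys the base-sizing question (how many sites are needed so every historical arrest is reachable within a response-time radius). This is a simple stand-in, not the authors' integrated optimization-queuing model, and all coordinates and the radius are synthetic:

```python
# Greedy set cover: repeatedly pick the candidate base covering the most
# still-uncovered arrest locations within the response radius.
import numpy as np

rng = np.random.default_rng(3)
arrests = rng.uniform(0.0, 50.0, size=(400, 2))    # event coords, km (toy)
candidates = rng.uniform(0.0, 50.0, size=(60, 2))  # candidate bases (toy)
radius = 8.0                                       # km reachable in time (toy)

d = np.linalg.norm(candidates[:, None, :] - arrests[None, :, :], axis=2)
cover = d <= radius                # cover[i, j]: base i reaches arrest j

chosen, uncovered = [], np.ones(len(arrests), bool)
while uncovered.any():
    gains = (cover & uncovered).sum(axis=1)
    best = int(gains.argmax())
    if gains[best] == 0:           # some arrests unreachable at this radius
        break
    chosen.append(best)
    uncovered &= ~cover[best]
print("bases needed:", len(chosen), "uncovered arrests:", int(uncovered.sum()))
```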

  7. Performance Evaluation and Parameter Optimization of SoftCast Wireless Video Broadcast

    Directory of Open Access Journals (Sweden)

    Dongxue Yang

    2015-08-01

    Full Text Available Wireless video broadcast plays an important role in multimedia communication with the emergence of mobile video applications. However, conventional video broadcast designs suffer from a cliff effect due to separated source and channel encoding. The newly proposed SoftCast scheme employs a cross-layer design, whose reconstructed video quality is proportional to the channel condition. In this paper, we provide the performance evaluation and the parameter optimization of the SoftCast system. Optimization principles on parameter selection are suggested to obtain a better video quality, occupy less bandwidth and/or utilize lower complexity. In addition, we compare SoftCast with H.264 in the LTE EPA scenario. The simulation results show that SoftCast provides a better performance in the scalability to channel conditions and the robustness to packet losses.

  8. Multi Objective Optimization of Weld Parameters of Boiler Steel Using Fuzzy Based Desirability Function

    Directory of Open Access Journals (Sweden)

    M. Satheesh

    2014-01-01

    Full Text Available The high pressure differential across the wall of pressure vessels is potentially dangerous and has caused many fatal accidents in the history of their development and operation. For this reason the structural integrity of weldments is critical to the performance of pressure vessels. In recent years much research has been devoted to the effects of variations in welding parameters and consumables on the mechanical properties of pressure vessel steel weldments, to optimize weld integrity and ensure pressure vessels are safe. The quality of the weld is a very important aspect for the manufacturing and construction industries. Because of its high quality and reliability, Submerged Arc Welding (SAW) is one of the chief metal joining processes employed in industry. This paper addresses the application of the desirability function approach combined with fuzzy logic analysis to optimize the multiple quality characteristics (bead reinforcement, bead width, bead penetration and dilution) of the submerged arc welding of SA 516 Grade 70 steel (a boiler steel). Experiments were conducted using Taguchi's L27 orthogonal array, varying the weld parameters of welding current, arc voltage, welding speed and electrode stick-out. By analyzing the response table and response graph of the fuzzy reasoning grade, optimal parameters were obtained. Solutions from this method can be useful for pressure vessel manufacturers and operators searching for an optimal welding condition.
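
    A minimal sketch of the desirability-function step alone: map each response onto [0, 1] and combine by geometric mean. The targets and measured values are invented, and the paper's fuzzy-reasoning layer is not reproduced:

```python
# Composite desirability = geometric mean of per-response desirabilities.
import numpy as np

def d_larger(y, lo, hi):
    """Desirability for larger-the-better responses (e.g. penetration)."""
    return float(np.clip((y - lo) / (hi - lo), 0.0, 1.0))

def d_smaller(y, lo, hi):
    """Desirability for smaller-the-better responses (e.g. dilution)."""
    return float(np.clip((hi - y) / (hi - lo), 0.0, 1.0))

d1 = d_larger(4.2, lo=2.0, hi=5.0)       # bead penetration, mm (invented)
d2 = d_smaller(28.0, lo=20.0, hi=40.0)   # dilution, % (invented)
d3 = d_smaller(11.5, lo=8.0, hi=16.0)    # bead width, mm (invented)

composite = (d1 * d2 * d3) ** (1.0 / 3.0)
print("composite desirability:", round(composite, 3))
```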

  9. Parameter extraction using global particle swarm optimization approach and the influence of polymer processing temperature on the solar cell parameters

    Science.gov (United States)

    Kumar, S.; Singh, A.; Dhar, A.

    2017-08-01

    The accurate estimation of photovoltaic parameters is fundamental to gaining insight into the physical processes occurring inside a photovoltaic device and thereby to optimizing its design, fabrication processes, and quality. A simulative approach to accurately determining the device parameters is crucial for cell array and module simulation in practical on-field applications. In this work, we have developed a global particle swarm optimization (GPSO) approach to estimate the solar cell parameters, viz., ideality factor (η), short circuit current (Isc), open circuit voltage (Voc), shunt resistance (Rsh), and series resistance (Rs), with a wide search range of over ±100% for each model parameter. After validating the accuracy and global search power of the proposed approach with synthetic and noisy data, we applied the technique to extract the PV parameters of ZnO/PCDTBT based hybrid solar cells (HSCs) prepared under different annealing conditions. Further, we examine the variation of the extracted model parameters to unveil the physical processes occurring when different annealing temperatures are employed during device fabrication, and establish the role of improved charge transport in the polymer films from independent FET measurements. The evolution of the surface morphology, optical absorption, and chemical compositional behaviour of the PCDTBT co-polymer films as a function of processing temperature has also been captured in the study and correlated with the findings from the PV parameters extracted using the GPSO approach.
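
    A minimal global particle swarm sketch of such an extraction: minimize the misfit between a model curve and data over wide bounds. A toy two-parameter exponential stands in for the single-diode equation, and the swarm constants are common textbook choices rather than the authors' settings:

```python
# Particle swarm optimization of a curve-fit misfit over box bounds.
import numpy as np

rng = np.random.default_rng(4)
x_grid = np.linspace(0.0, 1.0, 50)
data = 0.5 * np.exp(2.0 * x_grid)          # synthetic "measured" curve

def misfit(p):
    a, b = p
    return float(((a * np.exp(b * x_grid) - data) ** 2).sum())

n_particles, n_iters = 30, 200
lo, hi = np.array([0.0, 0.0]), np.array([5.0, 5.0])
x = rng.uniform(lo, hi, (n_particles, 2))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([misfit(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                  # inertia, cognitive, social weights
for _ in range(n_iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([misfit(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
print("recovered parameters:", gbest)      # expect roughly [0.5, 2.0]
```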

  10. Automated analysis of autoradiographic imagery

    International Nuclear Information System (INIS)

    Bisignani, W.T.; Greenhouse, S.C.

    1975-01-01

    A research programme is described which has as its objective the automated characterization of neurological tissue regions from autoradiographs by utilizing hybrid-resolution image processing techniques. An experimental system is discussed which includes raw imagery, scanning and digitizing equipment, feature-extraction algorithms, and regional characterization techniques. The parameters extracted by these algorithms are presented, as well as the regional characteristics obtained by operating on the parameters with statistical sampling techniques. An approach is presented for validating the techniques, and initial experimental results are obtained from an analysis of an autoradiograph of a region of the hypothalamus. An extension of these automated techniques to other biomedical research areas is discussed, as well as the implications of applying automated techniques to biomedical research problems. (author)

  11. Multi-parameter geometrical scaledown study for energy optimization of MTJ and related spintronics nanodevices

    Science.gov (United States)

    Farhat, I. A. H.; Alpha, C.; Gale, E.; Atia, D. Y.; Stein, A.; Isakovic, A. F.

    The scaledown of magnetic tunnel junctions (MTJ) and related nanoscale spintronics devices poses unique challenges for energy optimization of their performance. We demonstrate the dependence of the switching current on the scaledown variable, while considering the influence of geometric parameters of MTJ, such as the free layer thickness, tfree, lateral size of the MTJ, w, and the anisotropy parameter of the MTJ. At the same time, we point out which values of the saturation magnetization, Ms, and anisotropy field, Hk, can lead to lowering the switching current and overall decrease of the energy needed to operate an MTJ. It is demonstrated that scaledown via decreasing the lateral size of the MTJ, while allowing some other parameters to be unconstrained, can improve energy performance by a measurable factor, shown to be the function of both geometric and physical parameters above. Given the complex interdependencies among both families of parameters, we developed a particle swarm optimization (PSO) algorithm that can simultaneously lower energy of operation and the switching current density. Results we obtained in scaledown study and via PSO optimization are compared to experimental results. Support by Mubadala-SRC 2012-VJ-2335 is acknowledged, as are staff at Cornell-CNF and BNL-CFN.

  12. Millimeter-wave small-signal modeling with optimizing sensitive-parameters for metamorphic high electron mobility transistors

    International Nuclear Information System (INIS)

    Moon, S-W; Baek, Y-H; Han, M; Rhee, J-K; Kim, S-D; Oh, J-H

    2010-01-01

    In this paper, we present a simple and reliable technique for determining the small-signal equivalent circuit model parameters of the 0.1 µm metamorphic high electron mobility transistors (MHEMTs) in a millimeter-wave frequency range. The initial eight extrinsic parameters of the MHEMT are extracted using two S-parameter (scattering parameter) sets measured under the pinched-off and zero-biased cold field-effect transistor conditions by avoiding the forward gate biasing. Furthermore, highly calibration-sensitive values of Rs, Ls and Cpd are optimized by using a gradient optimization method to improve the modeling accuracy. The accuracy enhancement of this procedure is successfully verified with an excellent correlation between the measured and calculated S-parameters up to 65 GHz.

  13. Automation of extrusion of porous cable products based on a digital controller

    Science.gov (United States)

    Chostkovskii, B. K.; Mitroshin, V. N.

    2017-07-01

    This paper presents a new approach to designing an automated system for monitoring and controlling the process of applying porous insulation material on a conductive cable core, which is based on using structurally and parametrically optimized digital controllers of an arbitrary order instead of calculating typical PID controllers using known methods. The digital controller is clocked by signals from the clock length sensor of a measuring wheel, instead of a timer signal, and this provides the robust properties of the system with respect to the changing insulation speed. Digital controller parameters are tuned to provide the operating parameters of the manufactured cable using a simulation model of stochastic extrusion and are minimized by moving a regular simplex in the parameter space of the tuned controller.

  14. Development of a parameter optimization technique for the design of automatic control systems

    Science.gov (United States)

    Whitaker, P. H.

    1977-01-01

    Parameter optimization techniques for the design of linear automatic control systems that are applicable to both continuous and digital systems are described. The model performance index is used as the optimization criterion because of the physical insight that can be attached to it. The design emphasis is to start with the simplest system configuration that experience indicates would be practical. Design parameters are specified, and a digital computer program is used to select that set of parameter values which minimizes the performance index. The resulting design is examined, and complexity, through the use of more complex information processing or more feedback paths, is added only if performance fails to meet operational specifications. System performance specifications are assumed to be such that the desired step function time response of the system can be inferred.

  15. Gravity-Assist Trajectories to the Ice Giants: An Automated Method to Catalog Mass- or Time-Optimal Solutions

    Science.gov (United States)

    Hughes, Kyle M.; Knittel, Jeremy M.; Englander, Jacob A.

    2017-01-01

    This work presents an automated method of calculating mass (or time) optimal gravity-assist trajectories without a priori knowledge of the flyby-body combination. Since gravity assists are particularly crucial for reaching the outer Solar System, we use the Ice Giants, Uranus and Neptune, as example destinations for this work. Catalogs are also provided that list the most attractive trajectories found over launch dates ranging from 2024 to 2038. The tool developed to implement this method, called the Python EMTG Automated Trade Study Application (PEATSA), iteratively runs the Evolutionary Mission Trajectory Generator (EMTG), a NASA Goddard Space Flight Center in-house trajectory optimization tool. EMTG finds gravity-assist trajectories with impulsive maneuvers using a multiple-shooting structure along with stochastic methods (such as monotonic basin hopping) and may be run with or without an initial guess provided. PEATSA runs instances of EMTG in parallel over a grid of launch dates. After each set of runs completes, the best results within a neighborhood of launch dates are used to seed all other cases in that neighborhood---allowing the solutions across the range of launch dates to improve over each iteration. The results here are compared against trajectories found using a grid-search technique, and PEATSA is found to outperform the grid-search results for most launch years considered.

  16. Automation model of sewerage rehabilitation planning.

    Science.gov (United States)

    Yang, M D; Su, T C

    2006-01-01

    The major steps of sewerage rehabilitation include inspection of the sewerage, assessment of structural conditions, computation of structural condition grades, and determination of rehabilitation methods and materials. Conventionally, sewerage rehabilitation planning relies on experts with professional backgrounds, which is tedious and time-consuming. This paper proposes an automation model for planning optimal sewerage rehabilitation strategies by integrating image processing, clustering technology, optimization, and visualization display. Firstly, image processing techniques, such as wavelet transformation and co-occurrence feature extraction, were employed to extract various characteristics of structural failures from CCTV inspection images. Secondly, a classification neural network was established to automatically interpret the structural conditions by comparing the extracted features with typical failures in a databank. Then, to achieve optimal rehabilitation efficiency, a genetic algorithm was used to determine appropriate rehabilitation methods and substitution materials for the pipe sections at risk of malfunction and even collapse. Finally, the result from the automation model can be visualized in a geographic information system in which essential information on the sewer system and sewerage rehabilitation plans is graphically displayed. For demonstration, the automation model of optimal sewerage rehabilitation planning was applied to a sewer system in east Taichung, Chinese Taiwan.

  17. Automated MAD and MIR structure solution

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.; Berendzen, Joel

    1999-01-01

    A fully automated procedure for solving MIR and MAD structures has been developed using a scoring scheme to convert the structure-solution process into an optimization problem. Obtaining an electron-density map from X-ray diffraction data can be difficult and time-consuming even after the data have been collected, largely because MIR and MAD structure determinations currently require many subjective evaluations of the qualities of trial heavy-atom partial structures before a correct heavy-atom solution is obtained. A set of criteria for evaluating the quality of heavy-atom partial solutions in macromolecular crystallography have been developed. These have allowed the conversion of the crystal structure-solution process into an optimization problem and have allowed its automation. The SOLVE software has been used to solve MAD data sets with as many as 52 selenium sites in the asymmetric unit. The automated structure-solution process developed is a major step towards the fully automated structure-determination, model-building and refinement procedure which is needed for genomic scale structure determinations

  18. Laboratory automation: trajectory, technology, and tactics.

    Science.gov (United States)

    Markin, R S; Whalen, S A

    2000-05-01

    Laboratory automation is in its infancy, following a path parallel to the development of laboratory information systems in the late 1970s and early 1980s. Changes on the horizon in healthcare and clinical laboratory service that affect the delivery of laboratory results include the increasing age of the population in North America, the implementation of the Balanced Budget Act (1997), and the creation of disease management companies. Major technology drivers include outcomes optimization and phenotypically targeted drugs. Constant cost pressures in the clinical laboratory have forced diagnostic manufacturers into less than optimal profitability states. Laboratory automation can be a tool for the improvement of laboratory services and may decrease costs. The key to improvement of laboratory services is implementation of the correct automation technology. The design of this technology should be driven by required functionality. Automation design issues should be centered on an understanding of the laboratory and its relationship to healthcare delivery and the business and operational processes in the clinical laboratory. Automation design philosophy has evolved from a hardware-based approach to a software-based approach. Process control software to support repeat testing, reflex testing, and transportation management, and overall computer-integrated manufacturing approaches to laboratory automation implementation are rapidly expanding areas. It is clear that hardware and software are functionally interdependent and that the interface between the laboratory automation system and the laboratory information system is a key component. The cost-effectiveness of automation solutions suggested by vendors, however, has been difficult to evaluate because automation installations are few and the precision with which operational data have been collected to determine payback is suboptimal. The trend in automation has moved from total laboratory automation to a

  19. Optimization of basic parameters of cyclic operation of underground gas storages

    Directory of Open Access Journals (Sweden)

    Віктор Олександрович Заєць

    2015-04-01

    Full Text Available The problem of optimizing the process parameters of the cyclic operation of underground gas storages in gas mode is formulated in the article. The target function is defined, expressing the compressor station capacity required for gas injection into the storage. Its minimization yields the necessary technological parameters, such as the flow and the reservoir pressure change over time. The constraints and the target function are reduced to linear form, and the problem is solved by the simplex method.
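
    A minimal linear-programming sketch in the spirit of the abstract: a linearized capacity objective minimized subject to linear storage constraints. The coefficients and limits are placeholders, and SciPy's linprog is used in place of a hand-rolled simplex:

```python
# LP: minimize a compressor-capacity proxy over injection flows q1..q3.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 1.2, 1.5])        # per-period cost of flow (placeholder)
A_eq = np.array([[1.0, 1.0, 1.0]])   # total injected gas...
b_eq = np.array([300.0])             # ...must meet the storage target
bounds = [(0.0, 150.0)] * 3          # per-period flow limits (placeholder)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("optimal flows:", res.x, "objective:", res.fun)
```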

  20. Optimization of process parameters for a quasi-continuous tablet coating system using design of experiments.

    Science.gov (United States)

    Cahyadi, Christine; Heng, Paul Wan Sia; Chan, Lai Wah

    2011-03-01

    The aim of this study was to identify and optimize the critical process parameters of the newly developed Supercell quasi-continuous coater for optimal tablet coat quality. Design of experiments, aided by multivariate analysis techniques, was used to quantify the effects of various coating process conditions and their interactions on the quality of film-coated tablets. The process parameters varied included batch size, inlet temperature, atomizing pressure, plenum pressure, spray rate and coating level. An initial screening stage was carried out using a 2^(6-1) fractional factorial design of resolution IV. Following these preliminary experiments, an optimization study was carried out using the Box-Behnken design. Main response variables measured included drug-loading efficiency, coat thickness variation, and the extent of tablet damage. Apparent optimum conditions were determined by using response surface plots. The process parameters exerted various effects on the different response variables. Hence, trade-offs between individual optima were necessary to obtain the best compromised set of conditions. The adequacy of the optimized process conditions in meeting the combined goals for all responses was indicated by the composite desirability value. By using response surface methodology and optimization, coating conditions which produced coated tablets of high drug-loading efficiency, low incidences of tablet damage and low coat thickness variation were defined. Optimal conditions were found to vary over a large spectrum when different responses were considered. Changes in processing parameters across the design space did not result in drastic changes to coat quality, thereby demonstrating robustness in the Supercell coating process. © 2010 American Association of Pharmaceutical Scientists

  1. Parameter evaluation and fully-automated radiosynthesis of [11C]harmine for imaging of MAO-A for clinical trials

    International Nuclear Information System (INIS)

    Philippe, C.; Zeilinger, M.; Mitterhauser, M.; Dumanic, M.; Lanzenberger, R.; Hacker, M.; Wadsak, W.

    2015-01-01

    The aim of the present study was the evaluation and automation of the radiosynthesis of [11C]harmine for clinical trials. The following parameters have been investigated: amount of base, precursor concentration, solvent, reaction temperature and time. The optimum reaction conditions were determined to be 2–3 mg/mL precursor activated with 1 eq. 5 M NaOH in DMSO, 80 °C reaction temperature and 2 min reaction time. Under these conditions 6.1±1 GBq (51.0±11% based on [11C]CH3I, corrected for decay) of [11C]harmine (n=72) were obtained. The specific activity was 101.32±28.2 GBq/µmol (at EOS). All quality control parameters were in accordance with the standards for parenteral human application. Due to its reliability and high yields, this fully-automated synthesis method can be used as routine set-up. - Highlights: • Preparation of [11C]harmine on a commercially available synthesizer for the routine application. • High reliability: only 4 out of 72 failed syntheses; 5% due to technical problems. • High yields: 6.1±1 GBq overall yield (EOS). • High specific activities: 101.32±28.2 GBq/µmol

  2. Optimization of IBF parameters based on adaptive tool-path algorithm

    Science.gov (United States)

    Deng, Wen Hui; Chen, Xian Hua; Jin, Hui Liang; Zhong, Bo; Hou, Jin; Li, An Qi

    2018-03-01

    As a kind of Computer Controlled Optical Surfacing (CCOS) technology, Ion Beam Figuring (IBF) has obvious advantages in the control of surface accuracy, surface roughness and subsurface damage. The superiority and characteristics of IBF in optical component processing are analyzed from the point of view of the removal mechanism. To obtain a more effective and automatic tool path carrying dwell-time information, a novel algorithm is proposed in this thesis. Based on the removal functions produced by our IBF equipment and the adaptive tool path, optimized parameters are obtained through analysis of the residual error that would be created in the polishing process. A Φ600 mm plane reflector element was used as a simulation instance. The simulation result shows that after four combinations of processing, the surface accuracy PV (Peak Valley) value and RMS (Root Mean Square) value were reduced to 4.81 nm and 0.495 nm from 110.22 nm and 13.998 nm respectively over 98% of the aperture. The result shows that the algorithm and optimized parameters provide a good theoretical basis for high-precision processing with IBF.

  3. Optimal CT scanning parameters for commonly used tumor ablation applicators

    International Nuclear Information System (INIS)

    Eltorai, Adam E.M.; Baird, Grayson L.; Monu, Nicholas; Wolf, Farrah; Seidler, Michael; Collins, Scott; Kim, Jeomsoon; Dupuy, Damian E.

    2017-01-01

    Highlights: • This study aimed to determine optimal scanning parameters for commonly-used tumor ablation applicators. • The findings illustrate the overall interaction of the effects of kVp, ASiR, and reconstruction algorithm within and between probes, so that radiologists may easily reference optimal imaging performance. • Optimum combinations for each probe are provided. - Abstract: Purpose: CT-beam hardening artifact can make visualization of the tumor margin and its relationship to the ablation applicator tip challenging. The aim was to determine optimal scanning parameters for commonly-used applicators. Materials and methods: Applicators were placed in ex-vivo cow livers with implanted mock tumors, surrounded by bolus gel. Various CT scans were performed at 440 mA with 5 mm slice thickness, varying kVp, scan time, ASiR, scan type, pitch, and reconstruction algorithm. Four radiologists blindly scored the images for image quality and artifact quantitatively. Results: A significant relationship between probe, kVp level, ASiR level, and reconstruction algorithm was observed for both image artifact and image quality (both p < 0.0001). Specifically, certain combinations of kVp, ASiR, and reconstruction algorithm yield better images than other combinations. In particular, one probe performed equivalently or better than any competing probe considered here, regardless of the kVp, ASiR, and reconstruction algorithm combination. Conclusion: The findings illustrate the overall interaction of the effects of kVp, ASiR, and reconstruction algorithm within and between probes, so that radiologists may easily reference optimal imaging performance for the combinations of kVp, ASiR, reconstruction algorithm and probes at their disposal. Optimum combinations for each probe are provided.

  4. Optimal CT scanning parameters for commonly used tumor ablation applicators

    Energy Technology Data Exchange (ETDEWEB)

    Eltorai, Adam E.M. [Warren Alpert Medical School of Brown University (United States); Baird, Grayson L. [Department of Diagnostic Imaging (United States); Warren Alpert Medical School of Brown University (United States); Lifespan Biostatistics Core (United States); Rhode Island Hospital (United States); Monu, Nicholas; Wolf, Farrah; Seidler, Michael [Department of Diagnostic Imaging (United States); Warren Alpert Medical School of Brown University (United States); Rhode Island Hospital (United States); Collins, Scott [Department of Diagnostic Imaging (United States); Rhode Island Hospital (United States); Kim, Jeomsoon [Department of Medical Physics (United States); Rhode Island Hospital (United States); Dupuy, Damian E., E-mail: ddupuy@comcast.net [Department of Diagnostic Imaging (United States); Warren Alpert Medical School of Brown University (United States); Rhode Island Hospital (United States)

    2017-04-15

    Highlights: • This study aimed to determine optimal scanning parameters for commonly-used tumor ablation applicators. • The findings illustrate the overall interaction of the effects of kVp, ASiR, and reconstruction algorithm within and between probes, so that radiologists may easily reference optimal imaging performance. • Optimum combinations for each probe are provided. - Abstract: Purpose: CT-beam hardening artifact can make visualization of the tumor margin and its relationship to the ablation applicator tip challenging. The aim was to determine optimal scanning parameters for commonly-used applicators. Materials and methods: Applicators were placed in ex-vivo cow livers with implanted mock tumors, surrounded by bolus gel. Various CT scans were performed at 440 mA with 5 mm slice thickness, varying kVp, scan time, ASiR, scan type, pitch, and reconstruction algorithm. Four radiologists blindly scored the images for image quality and artifact quantitatively. Results: A significant relationship between probe, kVp level, ASiR level, and reconstruction algorithm was observed for both image artifact and image quality (both p < 0.0001). Specifically, certain combinations of kVp, ASiR, and reconstruction algorithm yield better images than other combinations. In particular, one probe performed equivalently or better than any competing probe considered here, regardless of the kVp, ASiR, and reconstruction algorithm combination. Conclusion: The findings illustrate the overall interaction of the effects of kVp, ASiR, and reconstruction algorithm within and between probes, so that radiologists may easily reference optimal imaging performance for the combinations of kVp, ASiR, reconstruction algorithm and probes at their disposal. Optimum combinations for each probe are provided.

  5. A parameters optimization method for planar joint clearance model and its application for dynamics simulation of reciprocating compressor

    Science.gov (United States)

    Hai-yang, Zhao; Min-qiang, Xu; Jin-dong, Wang; Yong-bo, Li

    2015-05-01

    In order to improve the accuracy of dynamics response simulation for mechanisms with joint clearance, a parameter optimization method for the planar joint clearance contact force model is presented in this paper, and the optimized parameters are applied to the dynamics response simulation of a mechanism with an oversized-joint-clearance fault. By studying the effect of increased clearance on the parameters of the joint clearance contact force model, the relation between the model parameters at different clearances was derived. Then the dynamic equation of a two-stage reciprocating compressor with four joint clearances was developed using the Lagrange method, and a multi-body dynamic model built in ADAMS software was used to solve this equation. To obtain a simulated dynamic response much closer to that of experimental tests, the parameters of the joint clearance model, instead of using the designed values, were optimized by a genetic algorithm approach. Finally, the optimized parameters were applied to simulate the dynamics response of the model with an oversized-joint-clearance fault according to the derived parameter relation. The dynamics response of the experimental test verified the effectiveness of this application.

  6. Determination of radial profile of ICF hot spot's state by multi-objective parameters optimization

    International Nuclear Information System (INIS)

    Dong Jianjun; Deng Bo; Cao Zhurong; Ding Yongkun; Jiang Shaoen

    2014-01-01

    A method using multi-objective parameter optimization is presented to determine the radial profiles of hot spot temperature and density. A parameter space containing five variables is used to describe the hot spot radial temperature and density profiles: the temperatures at the center and at the interface between the fuel and the remaining ablator, the maximum model density of the remaining ablator, the mass ratio of the remaining ablator to the initial ablator, and the position of the interface between the fuel and the remaining ablator. Two objective functions are set as the variances between the normalized intensity profiles from experimental X-ray images and the theoretical calculation. Another objective function is set as the variance between the experimental average temperature of the hot spot and the average temperature calculated by the theoretical model. The optimized parameters are obtained by a multi-objective genetic algorithm searching the five-dimensional parameter space, whereby the optimized radial temperature and density profiles can be determined. The radial temperature and density profiles of the hot spot obtained from experimental data measured by a KB microscope coupled with X-ray film are presented. It is observed that the temperature profile is strongly correlated to the objective functions. (authors)

  7. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white-noise-loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  8. Selection of the optimal Box-Cox transformation parameter for modelling and forecasting age-specific fertility

    OpenAIRE

    Shang, Han Lin

    2015-01-01

    The Box-Cox transformation can sometimes yield noticeable improvements in model simplicity, variance homogeneity and precision of estimation, such as in modelling and forecasting age-specific fertility. Despite its importance, there have been few studies focusing on the optimal selection of Box-Cox transformation parameters in demographic forecasting. A simple method is proposed for selecting the optimal Box-Cox transformation parameter, along with an algorithm based on an in-sample forecast ...
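
    For reference, a minimal sketch of one common way to pick the Box-Cox parameter: SciPy's boxcox returns the lambda that maximizes the profile log-likelihood. The paper's proposal selects lambda by in-sample forecast accuracy instead, which this sketch does not replicate:

```python
# Maximum-likelihood selection of the Box-Cox transformation parameter.
import numpy as np
from scipy.stats import boxcox

rng = np.random.default_rng(5)
rates = rng.lognormal(mean=3.0, sigma=0.4, size=200)  # positive data (toy)

transformed, lam = boxcox(rates)   # lam maximizes the log-likelihood
print("optimal lambda (MLE):", round(lam, 3))
```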

  9. Parameter Identification of Static Friction Based on An Optimal Exciting Trajectory

    Science.gov (United States)

    Tu, X.; Zhao, P.; Zhou, Y. F.

    2017-12-01

    In this paper, we focus on how to improve the identification efficiency of the friction parameters in a robot joint. First, a static friction model that has only linear dependencies with respect to its parameters is adopted so that the servomotor dynamics can be linearized. The traditional exciting trajectory based on a Fourier series is then modified by replacing the constant term with a quintic polynomial to ensure boundary continuity of speed and acceleration. The Fourier-related parameters are optimized by a genetic algorithm (GA) in which the condition number of the regression matrix is set as the fitness function. Finally, compared with a constant-velocity tracking experiment, the exciting-trajectory experiment yields similar friction parameters with the advantage of reduced experiment time.

  10. PI Stabilization for Congestion Control of AQM Routers with Tuning Parameter Optimization

    Directory of Open Access Journals (Sweden)

    S. Chebli

    2016-09-01

    Full Text Available In this paper, we consider the problem of stabilizing a network using a new proportional-integral (PI) congestion controller in an active queue management (AQM) router. With an appropriate model approximation as a first-order delay system, we seek a stability region of the controller by using the Hermite-Biehler theorem, which is applicable to quasipolynomials. A genetic algorithm technique is employed to derive optimal or near-optimal PI controller parameters.

  11. Optimization of the parameters of power sources excited by β-radiation

    Energy Technology Data Exchange (ETDEWEB)

    Bulyarskiy, S. V., E-mail: bulyar2954@mail.ru; Lakalin, A. V. [Russian Academy of Sciences, Institute of Nanotechnology of Microelectronics (Russian Federation); Abanin, I. E.; Amelichev, V. V. [Technological Center (Russian Federation); Svetuhin, V. V. [Ulyanovsk State University (Russian Federation)

    2017-01-15

    Experimental results and calculations of the efficiency of converting the energy of Ni-63 β-radiation sources to electricity using silicon p–i–n diodes are presented. All calculations are performed taking into account the energy distribution of the β-electrons. An expression for the converter open-circuit voltage is derived taking into account the distribution of high-energy electrons in the space-charge region of the p–i–n diode. Ways of optimizing the converter parameters by improving the diode technology and by optimizing the thicknesses of the emitter active layer and the i-region of the semiconductor converter are shown. The distribution of conversion losses between the source and the radiation detector, as well as the losses on entry of high-energy electrons into the semiconductor, is calculated. Experimental conversion efficiencies of 0.4–0.7% are in good agreement with the calculated parameters.

  12. Optimization of rotational arc station parameter optimized radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Dong, P.; Ungun, B. [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Boyd, S. [Department of Electrical Engineering, Stanford University, Stanford, California 94305 (United States); Xing, L., E-mail: lei@stanford.edu [Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Electrical Engineering, Stanford University, Stanford, California 94305 (United States)

    2016-09-15

    Purpose: To develop a fast optimization method for station parameter optimized radiation therapy (SPORT) and to show that SPORT is capable of matching VMAT in both plan quality and delivery efficiency, using three clinical cases from different disease sites. Methods: The angular space from 0° to 360° was divided into 180 station points (SPs). A candidate aperture was assigned to each of the SPs based on the results of a column generation algorithm. The weights of the apertures were then obtained by optimizing the objective function using a state-of-the-art GPU-based proximal operator graph solver. To avoid being trapped in a local minimum during beamlet-based aperture selection with the gradient descent algorithm, stochastic gradient descent was employed. Apertures with zero or low weight were discarded. To find out whether the plan could be further improved by adding more apertures or SPs, the authors repeated the above procedure taking into account the existing dose distribution from the last iteration. At the end of the second iteration, the weights of all the apertures were reoptimized, including those of the first iteration. The procedure was repeated until the plan could not be improved any further. The optimization technique was assessed using three clinical cases (prostate, head and neck, and brain), with the results compared to those obtained using conventional VMAT in terms of dosimetric properties, treatment time, and total MU. Results: Marked dosimetric quality improvement was demonstrated in the SPORT plans for all three studied cases. For the prostate case, the volume of the 50% prescription dose was decreased by 22% for the rectum and 6% for the bladder. For the head and neck case, SPORT improved the mean dose for the left and right parotids by 15% each. The maximum dose was lowered from 72.7 to 71.7 Gy for the mandible, and from 30.7 to 27.3 Gy for the spinal cord. The mean dose for the pharynx and larynx was

  13. Optimization of rotational arc station parameter optimized radiation therapy

    International Nuclear Information System (INIS)

    Dong, P.; Ungun, B.; Boyd, S.; Xing, L.

    2016-01-01

    Purpose: To develop a fast optimization method for station parameter optimized radiation therapy (SPORT) and to show that SPORT is capable of matching VMAT in both plan quality and delivery efficiency, using three clinical cases from different disease sites. Methods: The angular space from 0° to 360° was divided into 180 station points (SPs). A candidate aperture was assigned to each of the SPs based on the results of a column generation algorithm. The weights of the apertures were then obtained by optimizing the objective function using a state-of-the-art GPU-based proximal operator graph solver. To avoid being trapped in a local minimum during beamlet-based aperture selection with the gradient descent algorithm, stochastic gradient descent was employed. Apertures with zero or low weight were discarded. To find out whether the plan could be further improved by adding more apertures or SPs, the authors repeated the above procedure taking into account the existing dose distribution from the last iteration. At the end of the second iteration, the weights of all the apertures were reoptimized, including those of the first iteration. The procedure was repeated until the plan could not be improved any further. The optimization technique was assessed using three clinical cases (prostate, head and neck, and brain), with the results compared to those obtained using conventional VMAT in terms of dosimetric properties, treatment time, and total MU. Results: Marked dosimetric quality improvement was demonstrated in the SPORT plans for all three studied cases. For the prostate case, the volume of the 50% prescription dose was decreased by 22% for the rectum and 6% for the bladder. For the head and neck case, SPORT improved the mean dose for the left and right parotids by 15% each. The maximum dose was lowered from 72.7 to 71.7 Gy for the mandible, and from 30.7 to 27.3 Gy for the spinal cord. The mean dose for the pharynx and larynx was

  14. Optimizing human-system interface automation design based on a skill-rule-knowledge framework

    International Nuclear Information System (INIS)

    Lin, Chiuhsiang Joe; Yenn, T.-C.; Yang, C.-W.

    2010-01-01

    This study considers the technological change that has occurred in complex systems within the past 30 years and the role of human operators in controlling and interacting with complex systems following that change. Modernization of instrumentation and control systems and components raises a new issue of human-automation interaction, in which human operational performance must be considered in automated systems. Human-automation interaction can differ in its types and levels, and a system design issue is usually posed: given these technical capabilities, which system functions should be automated, and to what extent? A good automation design can be achieved by making an appropriate human-automation function allocation. To our knowledge, only a few studies have been published on how to achieve an appropriate automation design with a systematic procedure. Further, there is a surprising lack of information on examining and validating the influence of levels of automation (LOAs) on instrumentation and control systems in the advanced control room (ACR). The study presented in this paper proposes a systematic framework to help make appropriate decisions on types of automation (TOA) and LOAs based on a 'Skill-Rule-Knowledge' (SRK) model. The evaluation results show that the use of either automatic mode or semiautomatic mode alone is insufficient to prevent human errors. For preventing the occurrence of human errors and ensuring safety in the ACR, the proposed framework can be valuable for making decisions on human-automation allocation.

  15. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design-Part I. Model development

    Energy Technology Data Exchange (ETDEWEB)

    He, L., E-mail: li.he@ryerson.ca [Department of Civil Engineering, Faculty of Engineering, Architecture and Science, Ryerson University, 350 Victoria Street, Toronto, Ontario, M5B 2K3 (Canada); Huang, G.H. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada); College of Urban Environmental Sciences, Peking University, Beijing 100871 (China); Lu, H.W. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada)

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the 'true' ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.

  16. Automated diagnostics scoping study. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Quadrel, R.W.; Lash, T.A.

    1994-06-01

    The objective of the Automated Diagnostics Scoping Study was to investigate the needs for diagnostics in building operation and to examine some of the current technologies in automated diagnostics that can address these needs. The study was conducted in two parts. In the needs analysis, the authors interviewed facility managers and engineers at five building sites. In the technology survey, they collected published information on automated diagnostic technologies in commercial and military applications as well as on technologies currently under research. The following are key areas the authors identify for the research, development, and deployment of automated diagnostic technologies: tools and techniques to aid diagnosis during building commissioning, especially those that address issues arising from integrating building systems and diagnosing multiple simultaneous faults; technologies to aid diagnosis for systems and components that are unmonitored or unalarmed; automated capabilities to assist cause-and-effect exploration during diagnosis; inexpensive, reliable sensors, especially those that expand the current range of sensory input; technologies that aid predictive diagnosis through trend analysis; integration of simulation and optimization tools with building automation systems to optimize control strategies and energy performance; and integration of diagnostic, control, and preventive maintenance technologies. By relating existing technologies to perceived and actual needs, the authors reached some conclusions about the opportunities for automated diagnostics in building operation. Some of a building operator's needs can be satisfied by off-the-shelf hardware and software. Other needs are not so easily satisfied, suggesting directions for future research. Their conclusions and suggestions are offered in the final section of this study.

  17. Grey fuzzy logic approach for the optimization of DLC thin film coating process parameters using PACVD technique

    Science.gov (United States)

    Ghadai, R. K.; Das, P. P.; Shivakoti, I.; Mondal, S. C.; Swain, B. P.

    2017-07-01

    Diamond-like carbon (DLC) coatings are widely used in the medical, manufacturing and aerospace industries due to their excellent mechanical, biological, optical and tribological properties. Selecting the optimal process parameters for efficient DLC film characteristics is always a challenging issue for materials science researchers, and the optimal combination of deposition parameters provides a better result that can subsequently help other researchers choose their process parameters. In the present work, grey relational analysis (GRA) and fuzzy logic are used to optimize the process parameters of DLC film coating by the plasma-assisted chemical vapour deposition (PACVD) technique. Bias voltage, bias frequency, deposition pressure and gas composition are taken as the input process parameters, and hardness (GPa), Young's modulus (GPa) and the diamond-to-graphite fraction (Id/Ig ratio) are taken as the response parameters. The input parameters are optimized by grey fuzzy analysis, and the contribution of each input parameter is assessed by ANOVA. The analysis found that bias voltage has the least influence and gas composition the highest influence on the PACVD-deposited DLC films. The grey fuzzy analysis indicated that the optimum bias voltage, bias frequency, deposition pressure and gas composition for the DLC thin films are -50 V, 6 kHz, 4 μbar and 60:40 %, respectively.
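
    The grey relational step of such an analysis reduces to a few array operations, as in the following sketch; the response values, the larger-/smaller-the-better choices and the distinguishing coefficient of 0.5 are illustrative assumptions, and the fuzzy-inference stage of the paper is omitted.

```python
import numpy as np

# Each row is one PACVD run; columns: hardness (GPa), Young's modulus
# (GPa), Id/Ig ratio. Values are illustrative, not the paper's data.
y = np.array([[18.0, 160.0, 0.80],
              [22.0, 180.0, 0.60],
              [20.0, 170.0, 0.70],
              [19.0, 175.0, 0.65]])

# Larger-the-better normalization for hardness and modulus,
# smaller-the-better for the Id/Ig ratio.
norm = np.empty_like(y)
norm[:, :2] = (y[:, :2] - y[:, :2].min(0)) / np.ptp(y[:, :2], axis=0)
norm[:, 2] = (y[:, 2].max() - y[:, 2]) / np.ptp(y[:, 2])

# Grey relational coefficients (distinguishing coefficient 0.5); the
# grade is the mean coefficient across responses, and the run with the
# highest grade is the best compromise.
delta = 1.0 - norm                       # deviation from the ideal sequence
coef = (delta.min() + 0.5 * delta.max()) / (delta + 0.5 * delta.max())
grade = coef.mean(axis=1)
print("grey relational grades:", np.round(grade, 3))
```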

  18. Comparison of direct machine parameter optimization versus fluence optimization with sequential sequencing in IMRT of hypopharyngeal carcinoma

    International Nuclear Information System (INIS)

    Dobler, Barbara; Pohl, Fabian; Bogner, Ludwig; Koelbl, Oliver

    2007-01-01

    To evaluate the effects of direct machine parameter optimization in the treatment planning of intensity-modulated radiation therapy (IMRT) for hypopharyngeal cancer, as compared to subsequent leaf sequencing, in Oncentra Masterplan v1.5. For 10 hypopharyngeal cancer patients, IMRT plans were generated in Oncentra Masterplan v1.5 (Nucletron BV, Veenendal, the Netherlands) for a Siemens Primus linear accelerator. For optimization, the dose-volume objectives (DVO) for the planning target volume (PTV) were set to 53 Gy minimum dose and 59 Gy maximum dose, in order to reach a dose of 56 Gy to the average of the PTV. For the parotids a median dose of 22 Gy was allowed, and for the spinal cord a maximum dose of 35 Gy. The maximum DVO for the external contour of the patient was set to 59 Gy. The treatment plans were optimized with the direct machine parameter optimization ('Direct Step & Shoot', DSS, Raysearch Laboratories, Sweden) newly implemented in Masterplan v1.5 and with the fluence modulation technique ('Intensity Modulation', IM), which was already available in previous versions of Masterplan. The two techniques were compared with regard to compliance with the DVO, plan quality, and the number of monitor units (MU) required per fraction dose. The plans optimized with the DSS technique met the DVO for the PTV significantly better than the plans optimized with IM (p = 0.007 for the min DVO and p < 0.0005 for the max DVO). No significant difference could be observed for compliance with the DVO for the organs at risk (OAR) (p > 0.05). Plan quality, target coverage and dose homogeneity inside the PTV were superior for the plans optimized with DSS, for similar dose to the spinal cord and lower dose to the normal tissue. The mean dose to the parotids was lower for the plans optimized with IM. Treatment plan efficiency was higher for the DSS plans with (901 ± 160) MU compared to (1151 ± 157) MU for IM (p-value < 0.05). Renormalization of the IM plans to the mean of the

  19. Robust Optimization for Household Load Scheduling with Uncertain Parameters

    Directory of Open Access Journals (Sweden)

    Jidong Wang

    2018-04-01

    Full Text Available Home energy management systems (HEMS) face many sources of uncertainty, which have a great impact on the scheduling of home appliances. To handle the uncertain parameters in the household load scheduling problem, this paper uses a robust optimization method to rebuild the household load scheduling model for home energy management. The proposed model provides complete robust schedules for customers while considering the disturbance of uncertain parameters. The complete robust schedules not only guarantee the customers' comfort constraints but also cooperatively schedule the electric devices for cost minimization and load shifting. Moreover, customers can obtain multiple schedules by setting different robustness levels, considering the trade-off between comfort and economy.

  20. High-resolution MRI of the labyrinth. Optimization of scan parameters with 3D-FSE

    International Nuclear Information System (INIS)

    Sakata, Motomichi; Harada, Kuniaki; Shirase, Ryuji; Kumagai, Akiko; Ogasawara, Masashi

    2005-01-01

    The aim of our study was to optimize the parameters of high-resolution MRI of the labyrinth with a 3D fast spin-echo (3D-FSE) sequence. We investigated repetition time (TR), echo time (TE), matrix, field of view (FOV), and coil selection in terms of the contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) by comparing axial and/or three-dimensional images. The optimal 3D-FSE sequence parameters were as follows: 1.5 Tesla MR unit (Signa LX, GE Medical Systems), 3D-FSE sequence, dual 3-inch surface coil, acquisition time = 12.08 min, TR = 5000 msec, TE = 300 msec, 3 excitations (NEX), FOV = 12 cm, matrix = 256 x 256, slice thickness = 0.5 mm/0.0 sp, echo train = 64, bandwidth = ±31.5 kHz. High-resolution MRI of the labyrinth using the optimized 3D-FSE sequence parameters permits visualization of important anatomic details (such as the scala tympani and scala vestibuli), making it possible to assess inner ear anomalies and the patency of the cochlear turns. To obtain excellent heavily T2-weighted axial and three-dimensional images of the labyrinth, high CNR, SNR, and spatial resolution are the significant factors at present. Furthermore, it is important not only to optimize the scan parameters of 3D-FSE but also to select an appropriate coil for high-resolution MRI of the labyrinth. (author)

  1. Manual versus automated blood sampling

    DEFF Research Database (Denmark)

    Teilmann, A C; Kalliokoski, Otto; Sørensen, Dorte B

    2014-01-01

    Facial vein (cheek blood) and caudal vein (tail blood) phlebotomy are two commonly used techniques for obtaining blood samples from laboratory mice, while automated blood sampling through a permanent catheter is a relatively new technique in mice. The present study compared physiological parameters......, glucocorticoid dynamics as well as the behavior of mice sampled repeatedly for 24 h by cheek blood, tail blood or automated blood sampling from the carotid artery. Mice subjected to cheek blood sampling lost significantly more body weight, had elevated levels of plasma corticosterone, excreted more fecal...... corticosterone metabolites, and expressed more anxious behavior than did the mice of the other groups. Plasma corticosterone levels of mice subjected to tail blood sampling were also elevated, although less significantly. Mice subjected to automated blood sampling were less affected with regard to the parameters...

  2. Study of dose calculation and beam parameters optimization with genetic algorithm in IMRT

    International Nuclear Information System (INIS)

    Chen Chaomin; Tang Mutao; Zhou Linghong; Lv Qingwen; Wang Zhuoyu; Chen Guangjie

    2006-01-01

    Objective: To study the construction of a dose calculation model and a method for automatic beam parameter selection in IMRT. Methods: A three-dimensional photon convolution dose calculation model was constructed using fast Fourier transforms. An objective function based on dose constraints was used to evaluate the fitness of individuals, and the beam weights were optimized with a genetic algorithm. Results: After 100 iterations, the treatment planning system produced highly conformal and homogeneous dose distributions. Conclusion: The three-dimensional photon convolution dose calculation model gives more accurate results than conventional models, and the genetic algorithm is valid and efficient for IMRT beam parameter optimization. (authors)
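
    A toy version of the beam-weight search conveys the idea. The influence matrix, prescribed dose and GA operators below are illustrative assumptions; the paper's objective encodes dose constraints rather than this plain least-squares misfit, and its dose engine is an FFT-based convolution rather than a random matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative influence matrix: dose to 40 voxels from 8 beam weights.
D = rng.uniform(0.0, 1.0, size=(40, 8))
d_target = np.full(40, 2.0)   # prescribed dose per voxel (arbitrary units)

def objective(w):
    # Dose-constraint-style objective: squared deviation from the target.
    return np.sum((D @ w - d_target) ** 2)

pop = rng.uniform(0.0, 5.0, size=(50, 8))
for _ in range(300):
    scores = np.array([objective(w) for w in pop])
    elite = pop[np.argsort(scores)[:10]]                 # keep the 10 best
    parents = elite[rng.integers(0, 10, (50, 2))]
    alpha = rng.random((50, 1))
    pop = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # blend crossover
    pop += rng.normal(0.0, 0.05, pop.shape)              # Gaussian mutation
    pop = np.clip(pop, 0.0, None)                        # weights stay non-negative

best = pop[np.argmin([objective(w) for w in pop])]
print("best objective:", round(objective(best), 4))
```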

  3. Automated analysis of small animal PET studies through deformable registration to an atlas

    International Nuclear Information System (INIS)

    Gutierrez, Daniel F.; Zaidi, Habib

    2012-01-01

    This work aims to develop a methodology for automated atlas-guided analysis of small animal positron emission tomography (PET) data through deformable registration to an anatomical mouse model. A non-rigid registration technique is used to put into correspondence relevant anatomical regions of rodent CT images from combined PET/CT studies to corresponding CT images of the Digimouse anatomical mouse model. The latter provides a pre-segmented atlas consisting of 21 anatomical regions suitable for automated quantitative analysis. Image registration is performed using a package based on the Insight Toolkit allowing the implementation of various image registration algorithms. The optimal parameters obtained for deformable registration were applied to simulated and experimental mouse PET/CT studies. The accuracy of the image registration procedure was assessed by segmenting mouse CT images into seven regions: brain, lungs, heart, kidneys, bladder, skeleton and the rest of the body. This was accomplished prior to image registration using a semi-automated algorithm. Each mouse segmentation was transformed using the parameters obtained during CT to CT image registration. The resulting segmentation was compared with the original Digimouse atlas to quantify image registration accuracy using established metrics such as the Dice coefficient and Hausdorff distance. PET images were then transformed using the same technique and automated quantitative analysis of tracer uptake performed. The Dice coefficient and Hausdorff distance show fair to excellent agreement and a mean registration mismatch distance of about 6 mm. The results demonstrate good quantification accuracy in most of the regions, especially the brain, but not in the bladder, as expected. Normalized mean activity estimates were preserved between the reference and automated quantification techniques with relative errors below 10 % in most of the organs considered. The proposed automated quantification technique is
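
    Both validation metrics are simple to compute from binary masks, as sketched below with small synthetic volumes standing in for the segmented organs; SciPy's directed_hausdorff gives the one-sided distance, and the symmetric Hausdorff distance is the maximum over the two directions.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    # Dice coefficient between two binary masks.
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Illustrative 3-D masks standing in for an organ before/after registration.
ref = np.zeros((32, 32, 32), bool); ref[8:20, 8:20, 8:20] = True
reg = np.zeros((32, 32, 32), bool); reg[10:22, 9:21, 8:20] = True
print("Dice:", round(dice(ref, reg), 3))

# Symmetric Hausdorff distance between the voxel sets (surfaces would
# normally be extracted first; all voxels are used here for brevity).
p, q = np.argwhere(ref), np.argwhere(reg)
hd = max(directed_hausdorff(p, q)[0], directed_hausdorff(q, p)[0])
print("Hausdorff distance (voxels):", hd)
```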

  4. Extensible automated dispersive liquid–liquid microextraction

    Energy Technology Data Exchange (ETDEWEB)

    Li, Songqing; Hu, Lu; Chen, Ketao; Gao, Haixiang, E-mail: hxgao@cau.edu.cn

    2015-05-04

    Highlights: • An extensible automated dispersive liquid–liquid microextraction was developed. • A fully automatic SPE workstation with a modified operation program was used. • Ionic liquid-based in situ DLLME was used as model method. • SPE columns packed with nonwoven polypropylene fiber was used for phase separation. • The approach was applied to the determination of benzoylurea insecticides in water. - Abstract: In this study, a convenient and extensible automated ionic liquid-based in situ dispersive liquid–liquid microextraction (automated IL-based in situ DLLME) was developed. 1-Octyl-3-methylimidazolium bis[(trifluoromethane)sulfonyl]imide ([C{sub 8}MIM]NTf{sub 2}) is formed through the reaction between [C{sub 8}MIM]Cl and lithium bis[(trifluoromethane)sulfonyl]imide (LiNTf{sub 2}) to extract the analytes. Using a fully automatic SPE workstation, special SPE columns packed with nonwoven polypropylene (NWPP) fiber, and a modified operation program, the procedures of the IL-based in situ DLLME, including the collection of a water sample, injection of an ion exchange solvent, phase separation of the emulsified solution, elution of the retained extraction phase, and collection of the eluent into vials, can be performed automatically. The developed approach, coupled with high-performance liquid chromatography–diode array detection (HPLC–DAD), was successfully applied to the detection and concentration determination of benzoylurea (BU) insecticides in water samples. Parameters affecting the extraction performance were investigated and optimized. Under the optimized conditions, the proposed method achieved extraction recoveries of 80% to 89% for water samples. The limits of detection (LODs) of the method were in the range of 0.16–0.45 ng mL{sup −1}. The intra-column and inter-column relative standard deviations (RSDs) were <8.6%. Good linearity (r > 0.9986) was obtained over the calibration range from 2 to 500 ng mL{sup −1}. The proposed

  5. Extensible automated dispersive liquid–liquid microextraction

    International Nuclear Information System (INIS)

    Li, Songqing; Hu, Lu; Chen, Ketao; Gao, Haixiang

    2015-01-01

    Highlights: • An extensible automated dispersive liquid–liquid microextraction was developed. • A fully automatic SPE workstation with a modified operation program was used. • Ionic liquid-based in situ DLLME was used as model method. • SPE columns packed with nonwoven polypropylene fiber was used for phase separation. • The approach was applied to the determination of benzoylurea insecticides in water. - Abstract: In this study, a convenient and extensible automated ionic liquid-based in situ dispersive liquid–liquid microextraction (automated IL-based in situ DLLME) was developed. 1-Octyl-3-methylimidazolium bis[(trifluoromethane)sulfonyl]imide ([C 8 MIM]NTf 2 ) is formed through the reaction between [C 8 MIM]Cl and lithium bis[(trifluoromethane)sulfonyl]imide (LiNTf 2 ) to extract the analytes. Using a fully automatic SPE workstation, special SPE columns packed with nonwoven polypropylene (NWPP) fiber, and a modified operation program, the procedures of the IL-based in situ DLLME, including the collection of a water sample, injection of an ion exchange solvent, phase separation of the emulsified solution, elution of the retained extraction phase, and collection of the eluent into vials, can be performed automatically. The developed approach, coupled with high-performance liquid chromatography–diode array detection (HPLC–DAD), was successfully applied to the detection and concentration determination of benzoylurea (BU) insecticides in water samples. Parameters affecting the extraction performance were investigated and optimized. Under the optimized conditions, the proposed method achieved extraction recoveries of 80% to 89% for water samples. The limits of detection (LODs) of the method were in the range of 0.16–0.45 ng mL −1 . The intra-column and inter-column relative standard deviations (RSDs) were <8.6%. Good linearity (r > 0.9986) was obtained over the calibration range from 2 to 500 ng mL −1 . The proposed method opens a new avenue

  6. A COMPARATIVE STUDY OF AUTOMATION STRATEGIES AT VOLKSWAGEN IN GERMANY AND SOUTH AFRICA

    Directory of Open Access Journals (Sweden)

    O. Wessel

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: The final car assembly lines at Volkswagen's production sites in Germany and South Africa are analysed to determine the best automation level for a plant location based on cost, productivity, quality, and flexibility. The methodology used is that proposed by the Fraunhofer Institute. The final assembly processes are analysed and classified according to automation level, and the operations are evaluated at every level of automation based on information from existing factories. If the best levels of automation for all the parameters correspond, the optimal level of automation for a plant is reached; otherwise, improvements and/or additional considerations are required to optimise the automation level. The result of the analysis indicates that the highest automation level is not necessarily the best in terms of cost and quality, and some de-automation is required. The analysis also shows that a low automation level can result in poor product quality and low productivity. The best automation strategy should be based on an analysis of all aspects of the process in the local context.


  7. Process Parameters Optimization of 14nm MOSFET Using 2-D Analytical Modelling

    Directory of Open Access Journals (Sweden)

    Noor Faizah Z.A.

    2016-01-01

    Full Text Available This paper presents the modeling and optimization of a 14nm gate-length CMOS transistor, scaled down from a previous 32nm gate-length device. High-k/metal-gate materials were used in this research, with hafnium dioxide (HfO2) as the dielectric and tungsten silicide (WSi2) and titanium silicide (TiSi2) as the metal gates for NMOS and PMOS, respectively. The devices were fabricated virtually using the ATHENA module and their performance characterized via the ATLAS module, both in the Virtual Wafer Fabrication (VWF) environment of the Silvaco TCAD tools. The devices were then optimized through process-parameter variation using the L9 Taguchi method. Four process parameters with two noise factors of different values were used to analyze the factor effects. The results show that the optimal values for both transistors are well within the ITRS 2013 prediction, with VTH and IOFF of 0.236737 V and 6.995705 nA/um for the NMOS device and 0.248635 V and 5.26 nA/um for the PMOS device, respectively.

  8. Optimal Input Design for Aircraft Parameter Estimation using Dynamic Programming Principles

    Science.gov (United States)

    Morelli, Eugene A.; Klein, Vladislav

    1990-01-01

    A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.

  9. Study on high-speed cutting parameters optimization of AlMn1Cu based on neural network and genetic algorithm

    Directory of Open Access Journals (Sweden)

    Zhenhua Wang

    2016-04-01

    Full Text Available In this article, a cutting-parameter optimization method for the aluminum alloy AlMn1Cu in high-speed milling was studied to enable proper selection of high-speed cutting parameters. First, a back-propagation neural network model for predicting the surface roughness of AlMn1Cu was proposed; the prediction model improves prediction accuracy and captures the higher-order nonlinear relationship between surface roughness and the cutting parameters. Second, considering the constraints imposed by the technical requirements on surface roughness, a mathematical model for optimizing the cutting parameters, based on the neural network prediction model of surface roughness, was established to maximize machining efficiency. A genetic algorithm that adopts homogeneous design to initialize the population and steady-state reproduction without duplicates was also presented; the algorithm can effectively avoid premature convergence, strengthen global optimization, and increase calculation efficiency. Finally, a case study applying the proposed algorithm to optimize the cutting parameters was presented.

  10. User-customized brain computer interfaces using Bayesian optimization.

    Science.gov (United States)

    Bashashati, Hossein; Ward, Rabab K; Bashashati, Ali

    2016-04-01

    The brain characteristics of different people are not the same, so brain computer interfaces (BCIs) should be customized for each individual. In motor-imagery-based synchronous BCIs, a number of parameters (referred to as hyper-parameters), including the EEG frequency bands, the channels, and the time intervals from which the features are extracted, should be pre-determined based on each subject's brain characteristics. To determine the hyper-parameter values, previous work has relied on manual or semi-automatic methods that are not applicable to high-dimensional search spaces. In this paper, we propose a fully automatic, scalable and computationally inexpensive algorithm that uses Bayesian optimization to tune these hyper-parameters. We then build different classifiers trained on the sets of hyper-parameter values proposed by the Bayesian optimization, and a final classifier aggregates their results. We have applied our method to 21 subjects from three BCI competition datasets, conducted rigorous statistical tests, and shown the positive impact of hyper-parameter optimization on BCI accuracy. Furthermore, we have compared our results to those reported in the literature. Unlike the best reported results, which are based on more sophisticated feature extraction and classification methods and rely on pre-studies to determine the hyper-parameter values, our method has the advantage of being fully automated, uses less sophisticated feature extraction and classification methods, and yields similar or superior results compared to the best performing designs in the literature.
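
    A tuning loop of this kind can be written in a few lines with scikit-optimize, one common Bayesian optimization library (its use here is an assumption, not the paper's implementation). The objective below is a smooth synthetic stand-in for the cross-validated error of a subject-specific pipeline, and the three hyper-parameters are illustrative.

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

def cv_error(params):
    # Placeholder for the cross-validated classification error of a BCI
    # pipeline built with the given hyper-parameters; a smooth synthetic
    # surface stands in for the real pipeline.
    band_lo, band_hi, t_start = params
    return (band_lo - 8) ** 2 * 0.01 + (band_hi - 30) ** 2 * 0.005 \
        + (t_start - 0.5) ** 2

space = [Integer(4, 14, name="band_lo_hz"),     # lower band edge
         Integer(20, 40, name="band_hi_hz"),    # upper band edge
         Real(0.0, 2.0, name="window_start_s")] # feature window start

res = gp_minimize(cv_error, space, n_calls=40, random_state=0)
print("best hyper-parameters:", res.x, "error:", round(res.fun, 4))
```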

  11. Characterization of PV panel and global optimization of its model parameters using genetic algorithm

    International Nuclear Information System (INIS)

    Ismail, M.S.; Moghavvemi, M.; Mahlia, T.M.I.

    2013-01-01

    Highlights: • The optimization ability of a genetic algorithm was utilized to extract the parameters of a PV panel model. • The effects of solar radiation and temperature variations were taken into account in the fitness function evaluation. • Matlab–Simulink was used to simulate the operation of the PV panel and validate the results. • Different cases were analyzed to ascertain which of them gives more accurate results. • The accuracy and applicability of this approach as a tool for PV modeling were validated. - Abstract: This paper details an improved modeling technique for a photovoltaic (PV) module that utilizes the optimization ability of a genetic algorithm to compute the module parameters. Accurate modeling of any PV module rests on the values of these parameters, which is imperative for any further studies of PV applications, such as simulation, optimization and the design of hybrid systems that include PV. Global optimization of the parameters, applicable over the entire range of solar radiation and a wide range of temperatures, is achievable via this approach. The manufacturer's data sheet information is used as the basis for parameter optimization, with an average-absolute-error fitness function formulated and a numerical iterative method used to solve the voltage-current relation of the PV module. The results of single-diode and two-diode models are evaluated to ascertain which of them is more accurate, and other cases are also analyzed for comparison. The Matlab–Simulink environment is used to simulate the operation of the PV module with the extracted parameters, and the simulation results are compared with the data sheet information, obtained via experimentation, to validate the reliability of the approach. Three types of PV modules
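
    The model being fitted is the implicit single-diode equation; the sketch below solves it for the current at a given voltage. The five parameter values are illustrative stand-ins for those the genetic algorithm extracts by minimizing the error between such computed points and the datasheet I–V data.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative single-diode parameters of the kind the GA extracts:
# photocurrent, saturation current, ideality factor, series and shunt
# resistance; V_t is the thermal voltage times cells in series (~25 degC).
I_ph, I_0, n, R_s, R_sh = 5.0, 1e-9, 1.3, 0.2, 150.0
V_t = 0.0259 * 36

def current(V):
    # Solve the implicit single-diode equation for the terminal current:
    # I = I_ph - I_0*(exp((V + I*R_s)/(n*V_t)) - 1) - (V + I*R_s)/R_sh
    def f(I):
        return (I_ph - I_0 * (np.exp((V + I * R_s) / (n * V_t)) - 1.0)
                - (V + I * R_s) / R_sh - I)
    return brentq(f, -1.0, I_ph + 1.0)

for v in np.linspace(0.0, 22.0, 5):
    print(f"V = {v:5.2f} V  ->  I = {current(v):.3f} A")
```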

  12. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    Science.gov (United States)

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. 2009 Elsevier B.V. All rights reserved.

  13. A Short-Term and High-Resolution System Load Forecasting Approach Using Support Vector Regression with Hybrid Parameters Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huaiguang [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-25

    This work proposes an approach for distribution-system load forecasting that aims to provide highly accurate short-term load forecasts with high resolution, utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameter optimization method. Specifically, because load profiles in distribution systems contain abrupt deviations, a data normalization is designed as a pretreatment for the collected historical load data. An SVR model is then trained on the load data to forecast the future load. For better SVR performance, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step, a designed grid traverse algorithm (GTA) narrows the parameter search from a global to a local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) determines the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system.
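
    The coarse-to-fine idea can be sketched with scikit-learn (an assumption made purely for illustration): a wide logarithmic grid stands in for the GTA, and a dense local grid is substituted for the PSO refinement; the data are synthetic.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 3))                 # stand-in load features
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

# Step 1: coarse traverse over a wide, log-spaced parameter space.
coarse = GridSearchCV(SVR(),
                      {"C": np.logspace(-2, 3, 6),
                       "gamma": np.logspace(-3, 1, 5)},
                      cv=5).fit(X, y)
C0, g0 = coarse.best_params_["C"], coarse.best_params_["gamma"]

# Step 2: refine around the coarse optimum (the paper uses PSO here;
# a dense local grid is substituted for brevity).
fine = GridSearchCV(SVR(),
                    {"C": np.linspace(0.5 * C0, 2 * C0, 7),
                     "gamma": np.linspace(0.5 * g0, 2 * g0, 7)},
                    cv=5).fit(X, y)
print(fine.best_params_, round(fine.best_score_, 4))
```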

  14. Parameter estimation of fractional-order chaotic systems by using quantum parallel particle swarm optimization algorithm.

    Directory of Open Access Journals (Sweden)

    Yu Huang

    Full Text Available Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization, and it can essentially be formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation problem for fractional-order chaotic systems. QPPSO exploits the parallel characteristic of quantum computing, which exponentially increases the computation performed in each generation. The behavior of particles in quantum space is governed by a quantum evolution equation, which consists of the current rotation angle, the individual optimal quantum rotation angle, and the global optimal quantum rotation angle. Numerical simulations on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.
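
    For orientation, the classical PSO skeleton that QPPSO builds on is sketched below; the paper's quantum rotation-angle update replaces the velocity rule shown here, and the objective is a synthetic stand-in in which the "measured" system is summarized by three known parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_error(theta):
    # Stand-in objective: squared error between a response simulated with
    # candidate parameters and "measured" data (synthetic truth below).
    theta_true = np.array([10.0, 28.0, 8.0 / 3.0])
    return np.sum((theta - theta_true) ** 2)

n, dim = 30, 3
lo, hi = 0.0, 40.0
x = rng.uniform(lo, hi, (n, dim))            # particle positions
v = np.zeros_like(x)                         # particle velocities
pbest = x.copy()
pcost = np.array([model_error(p) for p in x])
gbest = pbest[np.argmin(pcost)]

for _ in range(200):
    r1, r2 = rng.random((2, n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    cost = np.array([model_error(p) for p in x])
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]
    gbest = pbest[np.argmin(pcost)]

print("estimated parameters:", np.round(gbest, 3))
```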

  15. Optimization of Cutting Parameters on Delamination of Drilling Glass-Polyester Composites

    Directory of Open Access Journals (Sweden)

    Majid Habeeb Faidh-Allah

    2018-02-01

    Full Text Available This paper studies the effect of the cutting parameters (spindle speed and feed rate) on the delamination phenomenon when drilling glass-polyester composites. Drilling was performed on a CNC machine with a 10 mm diameter high-speed steel (HSS) drill bit. The Taguchi technique with an L16 orthogonal array was used to analyze the parameters affecting the delamination factor. The optimal experiment was no. 13, with a spindle speed of 1273 rpm and a feed of 0.05 mm/rev, giving a minimum delamination factor of 1.28.
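
    Taguchi analysis ranks the runs by a signal-to-noise ratio; for a delamination factor, smaller is better. The sketch below computes the smaller-the-better S/N ratio for a few hypothetical runs (the values are illustrative, not the paper's measurements).

```python
import numpy as np

# Delamination factors for 4 of the 16 runs (illustrative values only);
# each row holds replicate measurements of one run.
runs = np.array([[1.31, 1.35],
                 [1.42, 1.40],
                 [1.29, 1.28],
                 [1.50, 1.47]])

# Smaller-the-better signal-to-noise ratio used in Taguchi analysis:
# S/N = -10*log10(mean(y^2)); the run with the largest S/N is best.
sn = -10.0 * np.log10(np.mean(runs ** 2, axis=1))
print("S/N per run:", np.round(sn, 2), "-> best run:", int(np.argmax(sn)) + 1)
```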

  16. Saturne II synchrotron injector parameters operation and control: computerization and optimization

    International Nuclear Information System (INIS)

    Lagniel, J.M.

    1983-01-01

    The injector control system has been studied with the aims of improving beam quality, increasing versatility, and achieving better machine availability. It was chosen to implement the following three functions: acquisition of the principal parameters of the process, so as to check them quickly and be warned if one of them is wrong (monitoring); control of those parameters, one by one or by families (start-up, operating point); and the search for an optimal control (on a model or on the process itself). [fr]

  17. Slot Parameter Optimization for Multiband Antenna Performance Improvement Using Intelligent Systems

    Directory of Open Access Journals (Sweden)

    Erdem Demircioglu

    2015-01-01

    Full Text Available This paper discusses bandwidth enhancement for multiband microstrip patch antennas (MMPAs) using symmetrical rectangular/square slots etched on the patch and the substrate properties. The slot parameters on the MMPA are modeled using the soft-computing technique of artificial neural networks (ANN). To achieve the best ANN performance, particle swarm optimization (PSO) and differential evolution (DE) are applied alongside the ANN's conventional training algorithm to optimize the modeling performance. In this study, the slot parameters are taken as the slot distance to the radiating patch edge, the slot width, and the slot length. Bandwidth enhancement is applied to a previously designed MMPA fed by a microstrip transmission line attached to the center pin of a 50 ohm SMA connector. The simulated antennas are fabricated and measured, and the measurement results are used to train the artificial-intelligence models. The ANN provides 98% model accuracy for rectangular slots and 97% for square slots; ANFIS, however, offers only 90% accuracy and fails to track the resonance frequency.

  18. Automation in biological crystallization.

    Science.gov (United States)

    Stewart, Patrick Shaw; Mueller-Dieckmann, Jochen

    2014-06-01

    Crystallization remains the bottleneck in the crystallographic process leading from a gene to a three-dimensional model of the encoded protein or RNA. Automation of the individual steps of a crystallization experiment, from the preparation of crystallization cocktails for initial or optimization screens to the imaging of the experiments, has been the response to address this issue. Today, large high-throughput crystallization facilities, many of them open to the general user community, are capable of setting up thousands of crystallization trials per day. It is thus possible to test multiple constructs of each target for their ability to form crystals on a production-line basis. This has improved success rates and made crystallization much more convenient. High-throughput crystallization, however, cannot relieve users of the task of producing samples of high quality. Moreover, the time gained from eliminating manual preparations must now be invested in the careful evaluation of the increased number of experiments. The latter requires a sophisticated data and laboratory information-management system. A review of the current state of automation at the individual steps of crystallization with specific attention to the automation of optimization is given.

  19. An optimal autonomous microgrid cluster based on distributed generation droop parameter optimization and renewable energy sources using an improved grey wolf optimizer

    Science.gov (United States)

    Moazami Goodarzi, Hamed; Kazemi, Mohammad Hosein

    2018-05-01

    Microgrid (MG) clustering is regarded as an important driver in improving the robustness of MGs. However, little research has been conducted on providing appropriate MG clustering; this article addresses that shortfall. It proposes a novel multi-objective optimization approach for finding the optimal clustering of autonomous MGs by focusing on variables such as distributed generation (DG) droop parameters, the location and capacity of DG units, renewable energy sources, capacitors, and power-line transmission. Power losses are minimized and voltage stability is improved, while virtual cut-set lines with minimum power transmission for clustering the MGs are obtained. A novel chaotic grey wolf optimizer (CGWO) algorithm is applied to solve the proposed multi-objective problem. The performance of the approach is evaluated on a 69-bus MG in several scenarios.
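
    The base grey wolf optimizer underlying the paper's improved (chaotic) variant is compact enough to sketch. Here the sphere function stands in for the multi-objective clustering cost, and the chaotic maps that replace the uniform random draws in CGWO are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    # Sphere function as a stand-in for the MG-clustering cost.
    return np.sum(x ** 2)

dim, n_wolves, iters = 5, 20, 200
lo, hi = -10.0, 10.0
wolves = rng.uniform(lo, hi, (n_wolves, dim))

for t in range(iters):
    fit = np.array([cost(w) for w in wolves])
    leaders = wolves[np.argsort(fit)[:3]]        # alpha, beta, delta
    a = 2.0 - 2.0 * t / iters                    # linearly decreases 2 -> 0
    new_pos = np.zeros_like(wolves)
    for leader in leaders:
        r1, r2 = rng.random((2, n_wolves, dim))
        A, C = 2 * a * r1 - a, 2 * r2
        new_pos += leader - A * np.abs(C * leader - wolves)
    wolves = np.clip(new_pos / 3.0, lo, hi)      # average of the three pulls

best = wolves[np.argmin([cost(w) for w in wolves])]
print("best cost:", round(cost(best), 6))
```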

  20. Optimization of process parameters in welding of dissimilar steels using robot TIG welding

    Science.gov (United States)

    Navaneeswar Reddy, G.; VenkataRamana, M.

    2018-03-01

    Robot TIG welding is a modern technique used for joining two workpieces with high precision. Design of experiments is used to conduct experiments by varying weld parameters such as current, wire feed, and travel speed. The welding parameters play an important role in the joining of the dissimilar stainless steels SS 304L and SS 430. In this work, the influence of the welding parameters on robot-TIG-welded specimens is investigated using response surface methodology. The micro-Vickers hardness of the weldments is measured, and the process parameters are optimized to maximize the hardness of the weldments.

  1. Optimizing the balance between task automation and human manual control in simulated submarine track management.

    Science.gov (United States)

    Chen, Stephanie I; Visser, Troy A W; Huf, Samuel; Loft, Shayne

    2017-09-01

    Automation can improve operator performance and reduce workload, but can also degrade operator situation awareness (SA) and the ability to regain manual control. In 3 experiments, we examined the extent to which automation could be designed to benefit performance while ensuring that individuals maintained SA and could regain manual control. Participants completed a simulated submarine track management task under varying task load. The automation was designed to facilitate information acquisition and analysis, but did not make task decisions. Relative to a condition with no automation, the continuous use of automation improved performance and reduced subjective workload, but degraded SA. Automation that was engaged and disengaged by participants as required (adaptable automation) moderately improved performance and reduced workload relative to no automation, but degraded SA. Automation engaged and disengaged based on task load (adaptive automation) provided no benefit to performance or workload, and degraded SA relative to no automation. Automation never led to significant return-to-manual deficits. However, all types of automation led to degraded performance on a nonautomated task that shared information processing requirements with automated tasks. Given these outcomes, further research is urgently required to establish how to design automation to maximize performance while keeping operators cognitively engaged. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Improved Hybrid Fireworks Algorithm-Based Parameter Optimization in High-Order Sliding Mode Control of Hypersonic Vehicles

    Directory of Open Access Journals (Sweden)

    Xiaomeng Yin

    2018-01-01

    Full Text Available With respect to nonlinear hypersonic vehicle (HV) dynamics, achieving satisfactory tracking control performance under uncertainties is always a challenge. The high-order sliding mode control (HOSMC) method, with its strong robustness, has been applied to HVs. However, there are few methods for determining suitable HOSMC parameters for efficacious control of an HV, given that the uncertainties are randomly distributed. In this study, we introduce hybrid fireworks-algorithm (FWA) based parameter optimization into HV control design to satisfy the design requirements with high probability. First, the complex relation between the design parameters and a cost function that evaluates the likelihood of system instability and violation of design requirements is modeled via stochastic robustness analysis. Subsequently, we propose an efficient hybrid FWA to solve the complex optimization problem concerning the uncertainties. The efficiency of the proposed hybrid-FWA-based optimization method is demonstrated in the search for the optimal HV controller, in which the proposed method exhibits better performance than other algorithms.

  3. Optimization of cutting parameters for machining time in turning process

    Science.gov (United States)

    Mavliutov, A. R.; Zlotnikov, E. G.

    2018-03-01

    This paper describes the most effective methods for nonlinear constrained optimization of cutting parameters in the turning process. Among them are the linear programming method with a dual-simplex algorithm, the interior point method, and the augmented Lagrangian genetic algorithm (ALGA). Each of them is tested on an actual example: the minimization of machining time in the turning process. The computation was conducted in the MATLAB environment. The comparative results obtained from the application of these methods show that the optimal values of the linearized objective and the original function are the same. ALGA gives sufficiently accurate values; however, the maximal accuracy is obtained when the algorithm uses the hybrid function with the interior point algorithm.
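
    The same kind of constrained problem can be posed outside MATLAB, for example with SciPy's SLSQP solver as sketched below; the machining-time model, the roughness constraint and every constant in them are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Decision variables: cutting speed v (m/min) and feed f (mm/rev).
K = 1000.0  # illustrative constant of a time-per-pass model ~ K/(v*f)

def time_per_pass(x):
    v, f = x
    return K / (v * f)

# Nonlinear constraint: surface-roughness limit Ra <= 3.2 um, with Ra
# modeled (illustratively) as 800 * f**2 / v.
cons = [{"type": "ineq", "fun": lambda x: 3.2 - 800.0 * x[1] ** 2 / x[0]}]
bnds = [(50.0, 300.0), (0.05, 0.5)]

res = minimize(time_per_pass, x0=[100.0, 0.1], bounds=bnds,
               constraints=cons, method="SLSQP")
print("optimal v, f:", np.round(res.x, 3), "time:", round(res.fun, 3))
```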

  4. Global parameter optimization for maximizing radioisotope detection probabilities at fixed false alarm rates

    Energy Technology Data Exchange (ETDEWEB)

    Portnoy, David, E-mail: david.portnoy@jhuapl.edu [Johns Hopkins University Applied Physics Laboratory, 11100 Johns Hopkins Road, Laurel, MD 20723 (United States); Feuerbach, Robert; Heimberg, Jennifer [Johns Hopkins University Applied Physics Laboratory, 11100 Johns Hopkins Road, Laurel, MD 20723 (United States)

    2011-10-01

    Today there is a tremendous amount of interest in systems that can detect radiological or nuclear threats. Many of these systems operate in extremely high-throughput situations where delays caused by false alarms can have a significant negative impact. Thus, calculating the tradeoff between detection rates and false alarm rates is critical for their successful operation. Receiver operating characteristic (ROC) curves have long been used to depict this tradeoff; the methodology was first developed in the field of signal detection and has in recent years been used increasingly in machine learning and data mining applications. It follows that this methodology could be applied to radiological/nuclear threat detection systems. However, many of these systems do not fit into the classic principles of statistical detection theory because they tend to lack tractable likelihood functions and have many parameters which, in general, do not have a one-to-one correspondence with the detection classes. This work proposes a strategy to overcome these problems by empirically finding parameter values that maximize the probability of detection for a selected number of probabilities of false alarm. To find these parameter values, a statistical global optimization technique that seeks to estimate portions of a ROC curve is proposed. The optimization combines elements of simulated annealing with elements of genetic algorithms. Genetic algorithms were chosen because they can reduce the risk of getting stuck in local minima; however, classic genetic algorithms operate on arrays of Boolean values or bit strings, so simulated annealing is employed to perform the mutation in the genetic algorithm. The initial results presented were generated using an isotope identification algorithm developed at the Johns Hopkins University Applied Physics Laboratory. The algorithm has 12 parameters: 4 real-valued and 8 Boolean. A simulated dataset was used for the optimization study; the 'threat' set of
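
    A minimal rendering of the hybrid strategy reads as follows. The 4 real-valued and 8 Boolean parameters match the counts quoted above, but the objective is a synthetic stand-in for the probability of detection at a fixed false alarm rate, and the annealing schedule and selection scheme are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_rate(real, flags):
    # Synthetic stand-in for "probability of detection at a fixed false
    # alarm rate" evaluated on a labeled spectral dataset.
    return -np.sum((real - 0.5) ** 2) + 0.05 * np.sum(flags)

def mutate(real, flags, T):
    # Simulated-annealing-style mutation: perturbation size and bit-flip
    # probability both shrink with the temperature T.
    real = np.clip(real + rng.normal(0.0, T, real.shape), 0.0, 1.0)
    flips = rng.random(flags.shape) < 0.1 * T
    return real, np.where(flips, ~flags, flags)

# Population of (4 real-valued, 8 Boolean) parameter vectors.
pop = [(rng.random(4), rng.random(8) < 0.5) for _ in range(30)]
T = 1.0
for _ in range(100):
    ranked = sorted(pop, key=lambda rf: -detection_rate(*rf))
    parents = ranked[:10]                             # truncation selection
    pop = [mutate(*parents[rng.integers(0, 10)], T) for _ in range(30)]
    T *= 0.97                                         # annealing schedule

best = max(pop, key=lambda rf: detection_rate(*rf))
print("best detection rate:", round(detection_rate(*best), 4))
```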

  5. Global parameter optimization for maximizing radioisotope detection probabilities at fixed false alarm rates

    International Nuclear Information System (INIS)

    Portnoy, David; Feuerbach, Robert; Heimberg, Jennifer

    2011-01-01

    Today there is a tremendous amount of interest in systems that can detect radiological or nuclear threats. Many of these systems operate in extremely high-throughput situations where delays caused by false alarms can have a significant negative impact. Thus, calculating the tradeoff between detection rates and false alarm rates is critical for their successful operation. Receiver operating characteristic (ROC) curves have long been used to depict this tradeoff; the methodology was first developed in the field of signal detection and has in recent years been used increasingly in machine learning and data mining applications. It follows that this methodology could be applied to radiological/nuclear threat detection systems. However, many of these systems do not fit into the classic principles of statistical detection theory because they tend to lack tractable likelihood functions and have many parameters which, in general, do not have a one-to-one correspondence with the detection classes. This work proposes a strategy to overcome these problems by empirically finding parameter values that maximize the probability of detection for a selected number of probabilities of false alarm. To find these parameter values, a statistical global optimization technique that seeks to estimate portions of a ROC curve is proposed. The optimization combines elements of simulated annealing with elements of genetic algorithms. Genetic algorithms were chosen because they can reduce the risk of getting stuck in local minima; however, classic genetic algorithms operate on arrays of Boolean values or bit strings, so simulated annealing is employed to perform the mutation in the genetic algorithm. The initial results presented were generated using an isotope identification algorithm developed at the Johns Hopkins University Applied Physics Laboratory. The algorithm has 12 parameters: 4 real-valued and 8 Boolean. A simulated dataset was used for the optimization study; the 'threat' set of spectra

  6. Global parameter optimization for maximizing radioisotope detection probabilities at fixed false alarm rates

    Science.gov (United States)

    Portnoy, David; Feuerbach, Robert; Heimberg, Jennifer

    2011-10-01

    Today there is a tremendous amount of interest in systems that can detect radiological or nuclear threats. Many of these systems operate in extremely high-throughput situations where delays caused by false alarms can have a significant negative impact. Thus, calculating the tradeoff between detection rates and false alarm rates is critical for their successful operation. Receiver operating characteristic (ROC) curves have long been used to depict this tradeoff; the methodology was first developed in the field of signal detection and has in recent years been used increasingly in machine learning and data mining applications. It follows that this methodology could be applied to radiological/nuclear threat detection systems. However, many of these systems do not fit into the classic principles of statistical detection theory because they tend to lack tractable likelihood functions and have many parameters which, in general, do not have a one-to-one correspondence with the detection classes. This work proposes a strategy to overcome these problems by empirically finding parameter values that maximize the probability of detection for a selected number of probabilities of false alarm. To find these parameter values, a statistical global optimization technique that seeks to estimate portions of a ROC curve is proposed. The optimization combines elements of simulated annealing with elements of genetic algorithms. Genetic algorithms were chosen because they can reduce the risk of getting stuck in local minima; however, classic genetic algorithms operate on arrays of Boolean values or bit strings, so simulated annealing is employed to perform the mutation in the genetic algorithm. The initial results presented were generated using an isotope identification algorithm developed at the Johns Hopkins University Applied Physics Laboratory. The algorithm has 12 parameters: 4 real-valued and 8 Boolean. A simulated dataset was used for the optimization study; the "threat" set of spectra

  7. Energetical optimization and parameters selection for a fixed faceted mirror concentrator

    International Nuclear Information System (INIS)

    Nicolas, R.O.; Duran, J.C.; Dawidowski, L.E.

    1990-01-01

    A method that allows the parameters of a cylindrical solar collector to be selected by means of an energetical optimization is presented. In particular, the energy collected by the operating fluid and the collection efficiency of a Fixed Faceted Mirror Concentrator (FFMC) are obtained and compared for different sets of parameters. To this end, the two-dimensional optical analysis for non-perfect cylindrical concentrators presented previously is used. Some graphs analyzing the variation of the yearly efficiency of the FFMC as a function of those parameters are given. Finally, the possibility of using a second concentrator in the receiver plane of the FFMC in order to improve the overall efficiency of the prototype is also analyzed. (Author)

  8. Global optimization framework for solar building design

    Science.gov (United States)

    Silva, N.; Alves, N.; Pascoal-Faria, P.

    2017-07-01

    The generative modeling paradigm is a shift from static models to flexible models. It describes a modeling process using functions, methods and operators. The result is an algorithmic description of the construction process. Each evaluation of such an algorithm creates a model instance, which depends on its input parameters (width, height, volume, roof angle, orientation, location). These values are normally chosen according to aesthetic aspects and style. In this study, the model's parameters are instead generated automatically according to an objective function. Because a generative model can be optimized over its parameters, the best solution for a constrained problem can be determined. Besides establishing the overall framework design, this work consists of the identification of different building shapes and their main parameters, the creation of an algorithmic description for these main shapes, and the formulation of the objective function with respect to a building's energy consumption (solar energy, heating and insulation). Additionally, the conception of an optimization pipeline, combining an energy calculation tool with a geometric scripting engine, is presented. The methods developed lead to an automated and optimized 3D shape generation for the projected building (based on the desired conditions and according to specific constraints). The proposed approach will help in the construction of real buildings that account for less energy consumption and for a more sustainable world.
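
    A minimal sketch of such a pipeline, assuming a toy analytic stand-in for the energy-simulation tool (a real pipeline would call an external engine); the parameter names, bounds and objective are illustrative only:

```python
import numpy as np
from scipy.optimize import minimize

def building_energy(params):
    # Stand-in objective: annual energy use as a function of the generative
    # model's inputs. A real pipeline would invoke an energy-simulation tool.
    width, depth, height, roof_angle, orientation = params
    envelope = 2 * height * (width + depth) + width * depth / np.cos(np.radians(roof_angle))
    solar_gain = np.cos(np.radians(orientation)) * width * height * 0.1
    return envelope * 0.5 - solar_gain  # heating loss minus usable solar gain

x0 = [10.0, 8.0, 6.0, 30.0, 10.0]                      # initial model instance
bounds = [(5, 20), (5, 20), (3, 9), (10, 45), (-90, 90)]
res = minimize(building_energy, x0, bounds=bounds, method="L-BFGS-B")
print(res.x, res.fun)  # optimized shape parameters and objective value
```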

  9. Microprocessor-based integration of microfluidic control for the implementation of automated sensor monitoring and multithreaded optimization algorithms.

    Science.gov (United States)

    Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov

    2015-08-01

    Microfluidic applications range from combinatorial synthesis to high throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, utilizing the MS Gadgeteer open framework that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibrating of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that require different computational challenges including: prioritized data structures in a genetic algorithm, distributed computational efforts in multiple-hill climbing searches and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits.

  10. Automated voxelization of 3D atom probe data through kernel density estimation

    International Nuclear Information System (INIS)

    Srinivasan, Srikant; Kaluskar, Kaustubh; Dumpala, Santoshrupa; Broderick, Scott; Rajan, Krishna

    2015-01-01

    Identifying nanoscale chemical features from atom probe tomography (APT) data routinely involves adjustment of the voxel size as an input parameter through visual supervision, making the final outcome user dependent, reliant on heuristic knowledge, and potentially prone to error. This work utilizes kernel density estimators to select an optimal voxel size in an unsupervised manner and perform feature selection, in particular targeting the resolution of interfacial features and chemistries. The capability of this approach is demonstrated through analysis of the γ/γ′ interface in a Ni–Al–Cr superalloy. - Highlights: • Develop an approach for standardizing aspects of atom probe reconstruction. • Use kernel density estimators to select optimal voxel sizes in an unsupervised manner. • Perform interfacial analysis of a Ni–Al–Cr superalloy using the new automated approach. • Optimize voxel size to preserve the feature of interest while minimizing loss/noise.
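
    The core idea, replacing a hand-tuned voxel size with a data-driven kernel bandwidth, can be sketched in a few lines. This toy example applies scipy's Gaussian KDE with Silverman's rule to synthetic 1D ion positions; it illustrates bandwidth-based smoothing, not the authors' exact estimator:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Toy 1D ion positions along the interface normal (illustrative data only).
rng = np.random.default_rng(0)
positions = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(2.0, 0.5, 500)])

# Silverman's rule gives a data-driven bandwidth, removing the manual
# voxel-size choice; the KDE then yields a smooth composition profile.
kde = gaussian_kde(positions, bw_method="silverman")
grid = np.linspace(positions.min(), positions.max(), 200)
density = kde(grid)

# A derived "voxel size" can be taken proportional to the selected bandwidth.
bandwidth = kde.factor * positions.std(ddof=1)
print(f"selected bandwidth ~ {bandwidth:.3f} nm")
```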

  11. Factorization and the synthesis of optimal feedback gains for distributed parameter systems

    Science.gov (United States)

    Milman, Mark H.; Scheid, Robert E.

    1990-01-01

    An approach based on Volterra factorization leads to a new methodology for the analysis and synthesis of the optimal feedback gain in the finite-time linear quadratic control problem for distributed parameter systems. The approach circumvents the need for solving and analyzing Riccati equations and provides a more transparent connection between the system dynamics and the optimal gain. The general results are further extended and specialized for the case where the underlying state is characterized by autonomous differential-delay dynamics. Numerical examples are given to illustrate the second-order convergence rate that is derived for an approximation scheme for the optimal feedback gain in the differential-delay problem.

  12. Optimal process parameters for phosphorus spin-on-doping of germanium

    Energy Technology Data Exchange (ETDEWEB)

    Boldrini, Virginia [Dipartimento di Fisica e Astronomia, Università degli Studi di Padova, Via Marzolo 8, I-35131 Padova (Italy); INFN-LNL, Viale dell’Università 2, I-35020 Legnaro, Padova (Italy); Carturan, Sara Maria, E-mail: sara.carturan@lnl.infn.it [Dipartimento di Fisica e Astronomia, Università degli Studi di Padova, Via Marzolo 8, I-35131 Padova (Italy); INFN-LNL, Viale dell’Università 2, I-35020 Legnaro, Padova (Italy); Maggioni, Gianluigi; Napolitani, Enrico [Dipartimento di Fisica e Astronomia, Università degli Studi di Padova, Via Marzolo 8, I-35131 Padova (Italy); INFN-LNL, Viale dell’Università 2, I-35020 Legnaro, Padova (Italy); Napoli, Daniel Ricardo [INFN-LNL, Viale dell’Università 2, I-35020 Legnaro, Padova (Italy); Camattari, Riccardo [INFN Sezione di Ferrara, Dipartimento di Fisica, Università di Ferrara, Via Saragat 1, 44122, Ferrara (Italy); De Salvador, Davide [Dipartimento di Fisica e Astronomia, Università degli Studi di Padova, Via Marzolo 8, I-35131 Padova (Italy); INFN-LNL, Viale dell’Università 2, I-35020 Legnaro, Padova (Italy)

    2017-01-15

    Highlights: • Optimized protocol for the application of phosphorus spin-on-doping to the Ge surface. • Homogeneous n-type Ge layers, fully electrically active, are obtained. • Crucial parameters for SOD curing are relative humidity, time and temperature. • Characterization of Ge loss from the surface into the SOD film by diffusion. • Spike annealing in a standard tube-chamber furnace is performed. - Abstract: The fabrication of homogeneously doped germanium layers characterized by total electrical activation is currently a hot topic in many fields, such as microelectronics, photovoltaics, optics and radiation detectors. The phosphorus spin-on-doping technique has been implemented on Ge wafers by developing a protocol for the curing process and subsequent diffusion annealing for optimal doping. Parameters such as relative humidity and curing time turned out to affect the surface morphology, the degree of reticulation reached by the dopant source and the amount of dopant available for diffusion. After spike annealing in a conventional furnace, diffusion profiles and electrical properties have been measured. Ge loss from the surface during high-temperature annealing, due to diffusion into the source film, has been observed and quantified.

  13. Fully Automated Volumetric Modulated Arc Therapy Plan Generation for Prostate Cancer Patients

    International Nuclear Information System (INIS)

    Voet, Peter W.J.; Dirkx, Maarten L.P.; Breedveld, Sebastiaan; Al-Mamgani, Abrahim; Incrocci, Luca; Heijmen, Ben J.M.

    2014-01-01

    Purpose: To develop and evaluate fully automated volumetric modulated arc therapy (VMAT) treatment planning for prostate cancer patients, avoiding manual trial-and-error tweaking of plan parameters by dosimetrists. Methods and Materials: A system was developed for fully automated generation of VMAT plans with our commercial clinical treatment planning system (TPS), linked to the in-house developed Erasmus-iCycle multicriterial optimizer for preoptimization. For 30 randomly selected patients, automatically generated VMAT plans (VMAT{sub auto}) were compared with VMAT plans generated manually by 1 expert dosimetrist in the absence of time pressure (VMAT{sub man}). For all treatment plans, planning target volume (PTV) coverage and sparing of organs-at-risk were quantified. Results: All generated plans were clinically acceptable and had similar PTV coverage (V{sub 95%} > 99%). For VMAT{sub auto} and VMAT{sub man} plans, the organ-at-risk sparing was similar as well, although only the former plans were generated without any planning workload. Conclusions: Fully automated generation of high-quality VMAT plans for prostate cancer patients is feasible and has recently been implemented in our clinic.

  14. Multi-Objective Optimization of Friction Stir Welding Process Parameters of AA6061-T6 and AA7075-T6 Using a Biogeography Based Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Mehran Tamjidy

    2017-05-01

    Full Text Available The development of Friction Stir Welding (FSW) has provided an alternative approach for producing high-quality welds, in a fast and reliable manner. This study focuses on the mechanical properties of the dissimilar friction stir welding of AA6061-T6 and AA7075-T6 aluminum alloys. The FSW process parameters such as tool rotational speed, tool traverse speed, tilt angle, and tool offset influence the mechanical properties of the friction stir welded joints significantly. A mathematical regression model is developed to determine the empirical relationship between the FSW process parameters and mechanical properties, and the results are validated. In order to obtain the optimal values of process parameters that simultaneously optimize the ultimate tensile strength, elongation, and minimum hardness in the heat affected zone (HAZ), a metaheuristic, multi-objective algorithm based on biogeography based optimization is proposed. The Pareto optimal frontiers for triple and dual objective functions are obtained, and the best optimal solution is selected using two different decision making techniques: technique for order of preference by similarity to ideal solution (TOPSIS) and Shannon's entropy.

  15. Multi-Objective Optimization of Friction Stir Welding Process Parameters of AA6061-T6 and AA7075-T6 Using a Biogeography Based Optimization Algorithm.

    Science.gov (United States)

    Tamjidy, Mehran; Baharudin, B T Hang Tuah; Paslar, Shahla; Matori, Khamirul Amin; Sulaiman, Shamsuddin; Fadaeifard, Firouz

    2017-05-15

    The development of Friction Stir Welding (FSW) has provided an alternative approach for producing high-quality welds, in a fast and reliable manner. This study focuses on the mechanical properties of the dissimilar friction stir welding of AA6061-T6 and AA7075-T6 aluminum alloys. The FSW process parameters such as tool rotational speed, tool traverse speed, tilt angle, and tool offset influence the mechanical properties of the friction stir welded joints significantly. A mathematical regression model is developed to determine the empirical relationship between the FSW process parameters and mechanical properties, and the results are validated. In order to obtain the optimal values of process parameters that simultaneously optimize the ultimate tensile strength, elongation, and minimum hardness in the heat affected zone (HAZ), a metaheuristic, multi-objective algorithm based on biogeography based optimization is proposed. The Pareto optimal frontiers for triple and dual objective functions are obtained, and the best optimal solution is selected using two different decision making techniques: technique for order of preference by similarity to ideal solution (TOPSIS) and Shannon's entropy.

  16. Optimization of Minimum Quantity Lubricant Conditions and Cutting Parameters in Hard Milling of AISI H13 Steel

    Directory of Open Access Journals (Sweden)

    The-Vinh Do

    2016-03-01

    Full Text Available As a successful solution applied to hard machining, the minimum quantity lubricant (MQL) technique has already been established as an alternative to flood coolant processing. The optimization of the MQL parameters and of the cutting parameters under MQL conditions is therefore essential and pressing. The study was divided into two parts. In the first part, the Taguchi method was applied to find the optimal values of the MQL condition in the hard milling of AISI H13, with consideration of reduced surface roughness. The L9 orthogonal array, the signal-to-noise (S/N) ratio and analysis of variance (ANOVA) were employed to analyze the effect of the MQL parameters (i.e., cutting fluid type, pressure, and fluid flow) on a good surface finish. Lubricant type and pressure were determined to be the most influential MQL factors, with a statistically significant effect on the machined surface. A verification experiment was conducted to demonstrate the reliability of the results. In the second part, the optimized MQL parameters were applied in a series of experiments to determine the cutting parameters for hard milling. The Taguchi method was also used to optimize the cutting parameters in order to obtain the best surface roughness. The design of experiments (DOE) was implemented using the L27 orthogonal array. Based on an analysis of the signal-to-noise response and ANOVA, optimal values of the cutting parameters (i.e., cutting speed, feed rate, depth of cut and workpiece hardness) were introduced. The results of the present work indicate that feed rate is the factor with the greatest effect on surface roughness.
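
    The Taguchi analysis referred to above rests on a simple quantity: the smaller-the-better signal-to-noise ratio, S/N = -10 log10(mean(y^2)). A minimal sketch with illustrative roughness replicates (not the study's data):

```python
import numpy as np

def sn_smaller_is_better(values):
    # Taguchi smaller-the-better S/N ratio in dB; a higher S/N value
    # corresponds to a better (lower, more consistent) roughness.
    y = np.asarray(values, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Illustrative surface-roughness replicates (um) for three L9 runs.
runs = {
    "run1": [0.42, 0.45, 0.40],
    "run2": [0.35, 0.33, 0.36],
    "run3": [0.51, 0.49, 0.53],
}
for name, ra in runs.items():
    print(name, round(sn_smaller_is_better(ra), 2), "dB")
```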

  17. Hardware-in-loop simulation of electric vehicles automated mechanical transmission system

    Energy Technology Data Exchange (ETDEWEB)

    Liao, C.; Wu, Y.; Wang, L. [Chinese Academy of Sciences, Beijing (China). Inst. of Electrical Engineering

    2009-03-11

    An automated mechanical transmission (AMT) can be used to enhance the performance of hybrid electric vehicles. In this study, hardware-in-the-loop (HIL) simulation was used to develop an AMT control system. HIL was used to simulate the running and fault status of the system as well as to optimize its performance. HIL was combined with a commercial simulation tool and automatic code generation technology in a real-time environment to develop the AMT control system. A hybrid vehicle system dynamics model was generated and then simulated in various real-time vehicle operating environments. Virtual instrument technology was used to implement real-time monitoring, parameter matching calibration, data acquisition and offline analyses for the optimization of the control system. The results of the analyses demonstrated that the AMT control system can be used to optimize the performance of hybrid electric vehicles. 5 refs., 9 figs.

  18. Application of HGSO to security based optimal placement and parameter setting of UPFC

    International Nuclear Information System (INIS)

    Tarafdar Hagh, Mehrdad; Alipour, Manijeh; Teimourzadeh, Saeed

    2014-01-01

    Highlights: • A new method for solving the security-based UPFC placement and parameter setting problem is proposed. • The proposed method is a global method for mixed-integer problems. • The proposed method has the ability to search binary and continuous spaces in parallel. • By using the proposed method, most of the problems caused by line contingencies are solved. • Comparison studies are performed to assess the performance of the proposed method. - Abstract: This paper presents a novel method, based on the hybrid group search optimization (HGSO) technique, to solve the security-based optimal placement and parameter setting of the unified power flow controller (UPFC). Firstly, HGSO is introduced in order to solve mixed-integer problems. Afterwards, the proposed method is applied to the security-based optimal placement and parameter setting of the UPFC. The focus of the paper is to enhance power system security by eliminating or minimizing overloaded lines and bus voltage limit violations under single line contingencies. Simulation studies are carried out on the IEEE 6-bus, IEEE 14-bus and IEEE 30-bus systems in order to verify the accuracy and robustness of the proposed method. The results indicate that by using the proposed method, the power system remains secure under single line contingencies.

  19. Optimization of the fiber laser parameters for local high-temperature impact on metal

    Science.gov (United States)

    Yatsko, Dmitrii S.; Polonik, Marina V.; Dudko, Olga V.

    2016-11-01

    This paper presents a local laser heating process for the surface layer of a metal sample. The aim is to create a molten pool with the required depth by laser thermal treatment. During heating, the metal temperature at any point of the molten zone should not reach the boiling point of the base material. The laser power, exposure time and the spot size of the laser beam are selected as the variable parameters. A mathematical model for heat transfer in a semi-infinite body, applicable to a finite slab, is used for a preliminary theoretical estimate of acceptable parameter values for the laser thermal treatment. The optimization problem is solved using an algorithm based on scanning the search space (a zero-order method of constrained optimization). The calculated values of the parameters (the optimal set of "laser radiation power - exposure time - spot radius") are used to conduct a series of full-scale experiments to obtain a molten pool with the required depth. The two-stage experiment consists of a local laser treatment of a metal (steel) plate followed by examination of the microsection of the laser-irradiated region. The experimental results allow the adequacy of the calculations within the selected models to be judged.

  20. Automated system for calibration and control of the CHSPP-800 multichannel γ detector parameters

    International Nuclear Information System (INIS)

    Avvakumov, N.A.; Belikov, N.I.; Goncharenko, Yu.M.

    1987-01-01

    An automated system for the adjustment, calibration and control of a total absorption Cherenkov spectrometer is described. The system comprises a mechanical platform capable of moving in two mutually perpendicular directions; movement detectors and limit switches; a power unit; and an automation unit with a remote control board. The automated system can operate either in a manual control regime, with coordinates monitored on a digital indicator, or under computer control according to special programs. The platform mounting accuracy is ± 0.1 mm. Application of the automated system has sped up counter adjustment work by a factor of 3-5.

  1. Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.

    Science.gov (United States)

    Zaitsev, M; Steinhoff, S; Shah, N J

    2003-06-01

    A methodology is presented for the reduction of both systematic and random errors in T(1) determination using TAPIR, a Look-Locker-based fast T(1) mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T(1) determination with TAPIR. An effective remedy is demonstrated, which extends the measurement protocol with a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.
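
    For orientation, Look-Locker-type readouts such as TAPIR are commonly analyzed with a three-parameter apparent-relaxation model that is corrected afterwards. A generic form (a standard Look-Locker relation, not necessarily the exact TAPIR model) is:

```latex
S(t_i) = A - B\, e^{-t_i / T_1^{*}}, \qquad T_1 \approx T_1^{*} \left( \frac{B}{A} - 1 \right)
```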

  2. Optimization of dissolution process parameters for uranium ore concentrate powders

    Energy Technology Data Exchange (ETDEWEB)

    Misra, M.; Reddy, D.M.; Reddy, A.L.V.; Tiwari, S.K.; Venkataswamy, J.; Setty, D.S.; Sheela, S.; Saibaba, N. [Nuclear Fuel Complex, Hyderabad (India)

    2013-07-01

    The Nuclear Fuel Complex processes uranium ore concentrate (UOC) to produce the uranium dioxide powder required for the fabrication of fuel assemblies for pressurized heavy water reactors (PHWRs) in India. UOC is dissolved in nitric acid and further purified by a solvent extraction process to produce nuclear-grade UO{sub 2} powder. Dissolution of UOC in nitric acid involves complex nitric oxide based reactions, since the UOC is in the form of uranium octaoxide (U{sub 3}O{sub 8}) or uranium dioxide (UO{sub 2}). The process kinetics of UOC dissolution is largely influenced by parameters such as the concentration and flow rate of nitric acid, temperature and air flow rate, which were found to affect the recovery of nitric oxide as nitric acid. The plant-scale dissolution of a 2 MT batch in a single reactor was studied, and excellent recovery of the oxides of nitrogen (NO{sub x}) as nitric acid was observed. The dissolution process is automated by a PLC-based Supervisory Control and Data Acquisition (SCADA) system for accurate control of the process parameters, and around 200 metric tons of UOC have been successfully dissolved. The paper covers the complex chemistry involved in the UOC dissolution process as well as the SCADA system. The solid and liquid reactions were studied along with the multiple stoichiometries of the nitrous oxide generated. (author)

  3. Combination of Compensations and Multi-Parameter Coil for Efficiency Optimization of Inductive Power Transfer System

    Directory of Open Access Journals (Sweden)

    Guozhen Hu

    2017-12-01

    Full Text Available A loosely coupled inductive power transfer (IPT) system for industrial track applications is investigated in this paper. An IPT converter using a primary inductor-capacitor-inductor (LCL) network and secondary parallel compensation is analyzed together with the coil design for optimal operating efficiency. An accurate mathematical model with analytical expressions for the self-inductance and mutual inductance is proposed to obtain the coil parameters. The optimization is then performed by combining the proposed resonant compensation with the coil parameters. The results are evaluated and discussed using finite element analysis (FEA). Finally, an experimental prototype is constructed to verify the proposed approach, and the experimental results show that the optimization can be applied to industrial distributed-track IPT systems.

  4. Thermoeconomic optimization of triple pressure heat recovery steam generator operating parameters for combined cycle plants

    Directory of Open Access Journals (Sweden)

    Mohammd Mohammed S.

    2015-01-01

    Full Text Available The aim of this work is to develop a method for optimizing the operating parameters of a triple pressure heat recovery steam generator. Two types of optimization were performed: (a) thermodynamic and (b) thermoeconomic. The purpose of the thermodynamic optimization is to maximize the efficiency of the plant; the objective selected for this purpose is minimization of the exergy destruction in the heat recovery steam generator (HRSG). The purpose of the thermoeconomic optimization is to decrease the production cost of electricity; here, the total annual cost of the HRSG, defined as the sum of the annual values of the capital costs and the cost of the exergy destruction, is selected as the objective function. The optimal values of the most influential variables are obtained by minimizing the objective function while satisfying a group of constraints. The optimization algorithm is developed and tested on a CCGT plant with a complex configuration. Six operating parameters were subject to optimization: the pressures and pinch-point temperatures of each of the three (high, intermediate and low) pressure steam streams in the HRSG. The influence of these variables on the objective function and production cost is investigated in detail. The differences between the results of the thermodynamic and thermoeconomic optimizations are discussed.

  5. Prediction and optimization of friction welding parameters for super duplex stainless steel (UNS S32760) joints

    International Nuclear Information System (INIS)

    Udayakumar, T.; Raja, K.; Afsal Husain, T.M.; Sathiya, P.

    2014-01-01

    Highlights: • Corrosion resistance and impact strength are predicted by response surface methodology. • Burn-off length has the highest significance for corrosion resistance. • Friction force is a strong determinant in changing impact strength. • Pareto front points generated by the genetic algorithm aid in fixing the input control variables. • The Pareto front is a trade-off between corrosion resistance and impact strength. - Abstract: Friction welding finds widespread industrial use as a mass production process for joining materials. The friction welding process allows welding of several materials that are extremely difficult to fusion weld. Friction welding process parameters play a significant role in making good quality joints, so to produce a good quality joint it is important to set up proper welding process parameters. This can be done by employing optimization techniques. This paper presents a multi-objective optimization method for the process parameters of friction welding. The proposed method combines response surface methodology (RSM) with an intelligent optimization algorithm, i.e., the genetic algorithm (GA). Corrosion resistance and impact strength of friction welded super duplex stainless steel (SDSS) (UNS S32760) joints were investigated considering three process parameters: friction force (F), upset force (U) and burn-off length (B). Mathematical models were developed and the responses were adequately predicted. Direct and interaction effects of the process parameters on the responses were studied by plotting graphs. Burn-off length has the highest significance for corrosion current, followed by upset force and friction force. In the case of impact strength, friction force has the highest significance, followed by upset force and burn-off length. Multi-objective optimization for maximizing the impact strength and minimizing the corrosion current (maximizing corrosion resistance) was carried out using GA with the RSM model. The optimization procedure resulted in
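
    The RSM step described above amounts to fitting a second-order polynomial in the process variables by least squares. A compact sketch with made-up (F, U, B) runs and hypothetical corrosion-current readings; the basis is kept to squares only so that the nine illustrative runs determine the fit:

```python
import numpy as np

# Hypothetical friction-welding runs: friction force F, upset force U,
# burn-off length B, and a measured response (corrosion current, made up).
X = np.array([[40, 60, 2], [40, 80, 4], [60, 60, 4], [60, 80, 2], [50, 70, 3],
              [40, 70, 3], [60, 70, 3], [50, 60, 4], [50, 80, 2]], dtype=float)
y = np.array([0.82, 0.61, 0.74, 0.58, 0.55, 0.66, 0.63, 0.71, 0.52])

def basis(X):
    # Second-order RSM basis without interaction terms: 1, F, U, B, F^2, U^2, B^2.
    F, U, B = X.T
    return np.column_stack([np.ones(len(X)), F, U, B, F**2, U**2, B**2])

coeffs, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

def predict(x):
    # Evaluate the fitted response surface at a new parameter setting.
    return (basis(np.atleast_2d(np.asarray(x, dtype=float))) @ coeffs).item()

print(predict([50, 70, 3]))
```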

  6. Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set

    Science.gov (United States)

    Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.

    2017-12-01

    In recent years, Full Waveform Inversion (FWI) has been the most researched technique in seismic data processing. It uses the residuals between observed and modeled data as an objective function; thereafter, the final subsurface velocity model is generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by P-velocity; however, in elastic media, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. Further, the elastic media can also be defined by the Lamé constants and density, or by the impedances PI and SI; consequently, research is being carried out to ascertain the optimal parameters. With results from advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, a staggered-grid finite difference method was applied to simulate the OBS survey. For the inversion, the l2-norm was set as the objective function. Further, the accurate computation of the gradient direction was performed using the back-propagation technique, and its scaling was done using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define the inversion in elastic media. Therefore, it is important to ascertain the parameter set that gives the most accurate result for inversion with an OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the two final FWI results. This research was supported by the Basic Research Project (17-3312) of the Korea Institute of
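
    For reference, the l2-norm misfit named above is conventionally written as follows (a standard FWI formulation, not quoted from the abstract), with the model update scaled by the pseudo-Hessian:

```latex
E(\mathbf{m}) = \frac{1}{2} \sum_{s}\sum_{r} \int \left| d_{\mathrm{obs}}(t) - d_{\mathrm{cal}}(t;\mathbf{m}) \right|^{2} \, dt,
\qquad
\mathbf{m}_{k+1} = \mathbf{m}_{k} - \alpha\, \mathbf{H}_{\mathrm{pseudo}}^{-1} \, \nabla_{\mathbf{m}} E
```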

  7. SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dengwang; Wang, Jie [College of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China); Kapp, Daniel S.; Xing, Lei [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)

    2015-06-15

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation; the remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as the optimization of an implicit function, with the liver region refined via local and global optimization during iterations. Our method consists of five steps: 1) The livers from the panel data were segmented manually by physicians, and we estimated the parameters of a GMM (Gaussian mixture model) and an MRF (Markov random field); a shape dictionary was built by utilizing the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization; furthermore, H-PSO (hybrid particle swarm optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated within the local and global optimization until the stopping conditions (maximum iterations and rate of change) were satisfied. Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy with the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is

  8. SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization

    International Nuclear Information System (INIS)

    Li, Dengwang; Wang, Jie; Kapp, Daniel S.; Xing, Lei

    2015-01-01

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation; the remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as the optimization of an implicit function, with the liver region refined via local and global optimization during iterations. Our method consists of five steps: 1) The livers from the panel data were segmented manually by physicians, and we estimated the parameters of a GMM (Gaussian mixture model) and an MRF (Markov random field); a shape dictionary was built by utilizing the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization; furthermore, H-PSO (hybrid particle swarm optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated within the local and global optimization until the stopping conditions (maximum iterations and rate of change) were satisfied. Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy with the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is

  9. Monte Carlo shielding analyses using an automated biasing procedure

    International Nuclear Information System (INIS)

    Tang, J.S.; Hoffman, T.J.

    1988-01-01

    A systematic and automated approach for biasing Monte Carlo shielding calculations is described. In particular, adjoint fluxes from a one-dimensional discrete ordinates calculation are used to generate biasing parameters for a Monte Carlo calculation. The entire procedure of adjoint calculation, biasing parameter generation, and Monte Carlo calculation has been automated. The automated biasing procedure has been applied to several realistic deep-penetration shipping cask problems. The results obtained for neutron and gamma-ray transport indicate that with the automated biasing procedure, Monte Carlo shielding calculations of spent-fuel casks can be easily performed with minimum effort and that accurate results can be obtained at reasonable computing cost.
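
    A toy illustration of the biasing idea (importance sampling with a weight correction), not the adjoint-driven procedure of the paper: estimating transmission through a purely absorbing slab, where sampling path lengths from a smaller fictitious cross-section pushes histories into the deep-penetration region while the weights keep the estimate unbiased. All values are illustrative.

```python
import math
import random

def transmission(sigma_t, thickness, n, sigma_bias=None, seed=42):
    # Monte Carlo estimate of the probability that a particle crosses a
    # purely absorbing slab (analytic answer: exp(-sigma_t * thickness)).
    sigma_s = sigma_bias if sigma_bias is not None else sigma_t
    rnd, total = random.Random(seed), 0.0
    for _ in range(n):
        d = -math.log(1.0 - rnd.random()) / sigma_s                    # biased free path
        w = (sigma_t / sigma_s) * math.exp(-(sigma_t - sigma_s) * d)   # likelihood ratio
        if d > thickness:
            total += w
    return total / n

# Analog sampling rarely scores for a 10-mean-free-path slab; the biased
# run reaches the same answer (~4.54e-5) with far lower variance.
print(transmission(1.0, 10.0, 100_000))                  # analog
print(transmission(1.0, 10.0, 100_000, sigma_bias=0.2))  # biased
```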

  10. Automated procedure for selection of optimal refueling policies for light water reactors

    International Nuclear Information System (INIS)

    Lin, B.I.; Zolotar, B.; Weisman, J.

    1979-01-01

    An automated procedure determining a minimum cost refueling policy has been developed for light water reactors. The procedure is an extension of the equilibrium core approach previously devised for pressurized water reactors (PWRs). Use of 1 1/2-group theory has improved the accuracy of the nuclear model and eliminated tedious fitting of albedos. A simple heuristic algorithm for locating a good starting policy has materially reduced PWR computing time. Inclusion of void effects and use of the Haling principle for axial flux calculations extended the nuclear model to boiling water reactors (BWRs). A good initial estimate of the refueling policy is obtained by recognizing that a nearly uniform distribution of reactivity provides low-power peaking. The initial estimate is improved upon by interchanging groups of four assemblies and is subsequently refined by interchanging individual assemblies. The method yields very favorable results, is simpler than previously proposed BWR fuel optimization schemes, and retains power cost as the objective function

  11. Feasibility evaluation of 3 automated cellular drug screening assays on a robotic workstation.

    Science.gov (United States)

    Soikkeli, Anne; Sempio, Cristina; Kaukonen, Ann Marie; Urtti, Arto; Hirvonen, Jouni; Yliperttula, Marjo

    2010-01-01

    This study presents the implementation and optimization of 3 cell-based assays on a TECAN Genesis workstation, namely the Caspase-Glo 3/7 and sulforhodamine B (SRB) screening assays and the mechanistic Caco-2 permeability protocol, and evaluates their feasibility for automation. During implementation, the dispensing speed used to add drug solutions and the fixative trichloroacetic acid, and the aspiration speed used to remove the supernatant immediately after fixation, were optimized. Decontamination steps for cleaning the tips and pipetting tubing were also added. The automated Caspase-Glo 3/7 screen was successfully optimized with Caco-2 cells (Z' 0.7, signal-to-base ratio [S/B] 1.7) but not with DU-145 cells. In contrast, the automated SRB screen was successfully optimized with the DU-145 cells (Z' 0.8, S/B 2.4) but not with the Caco-2 cells (Z' -0.8, S/B 1.4). The automated bidirectional Caco-2 permeability experiments successfully separated low- and high-permeability compounds (Z' 0.8, S/B 84.2) and passive drug permeation from efflux-mediated transport (Z' 0.5, S/B 8.6). Of the assays, the homogeneous Caspase-Glo 3/7 assay benefits the most from automation, but the heterogeneous SRB assay and the Caco-2 permeability experiments also gain advantages from automation.
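
    The Z' factor and signal-to-base ratio quoted throughout this record are simple plate statistics; a short sketch with made-up control readouts:

```python
import statistics

def z_prime(pos, neg):
    # Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|; values above
    # ~0.5 are conventionally taken to indicate a robust screening assay.
    sp, sn = statistics.stdev(pos), statistics.stdev(neg)
    return 1.0 - 3.0 * (sp + sn) / abs(statistics.mean(pos) - statistics.mean(neg))

def signal_to_base(pos, neg):
    return statistics.mean(pos) / statistics.mean(neg)

# Illustrative plate-control readouts (arbitrary luminescence units).
positive = [920, 980, 950, 1010, 940]
negative = [510, 530, 505, 525, 515]
print(f"Z' = {z_prime(positive, negative):.2f}, "
      f"S/B = {signal_to_base(positive, negative):.2f}")
```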

  12. Adjoint Parameter Sensitivity Analysis for the Hydrodynamic Lattice Boltzmann Method with Applications to Design Optimization

    DEFF Research Database (Denmark)

    Pingen, Georg; Evgrafov, Anton; Maute, Kurt

    2009-01-01

    We present an adjoint parameter sensitivity analysis formulation and solution strategy for the lattice Boltzmann method (LBM). The focus is on design optimization applications, in particular topology optimization. The lattice Boltzmann method is briefly described with an in-depth discussion...

  13. Chaotic invasive weed optimization algorithm with application to parameter estimation of chaotic systems

    International Nuclear Information System (INIS)

    Ahmadi, Mohamadreza; Mojallali, Hamed

    2012-01-01

    Highlights: ► A new meta-heuristic optimization algorithm. ► Integration of invasive weed optimization and chaotic search methods. ► A novel parameter identification scheme for chaotic systems. - Abstract: This paper introduces a novel hybrid optimization algorithm that takes advantage of the stochastic properties of chaotic search and the invasive weed optimization (IWO) method. In order to deal with the weaknesses associated with the conventional method, the proposed chaotic invasive weed optimization (CIWO) algorithm, which incorporates the capabilities of chaotic search methods, is presented. The functionality of the proposed optimization algorithm is investigated through several benchmark multi-dimensional functions. Furthermore, an identification technique for chaotic systems based on the CIWO algorithm is outlined and validated by several examples. Results obtained with the proposed scheme are also provided and demonstrate superior performance with respect to other conventional methods.
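
    A compact sketch of the hybrid idea: an IWO-style population in which seed dispersal is driven by a logistic chaotic map instead of a plain stochastic generator. This is an illustrative reading of the approach on a benchmark function, not the paper's algorithm or tuning:

```python
import numpy as np

def sphere(x):
    # Benchmark objective (minimize).
    return float(np.sum(x ** 2))

def chaotic_iwo(obj=sphere, dim=2, pop=10, max_pop=25, iters=200, c=0.7):
    rng = np.random.default_rng(1)
    weeds = rng.uniform(-5.0, 5.0, size=(pop, dim))
    for it in range(iters):
        fit = np.array([obj(w) for w in weeds])
        worst, best = fit.max(), fit.min()
        sigma = 2.0 * (1.0 - it / iters) ** 2 + 1e-3   # shrinking dispersal radius
        offspring = []
        for w, f in zip(weeds, fit):
            # Fitter weeds (lower f) produce more seeds.
            n_seeds = 1 + int(4 * (worst - f) / (worst - best + 1e-12))
            for _ in range(n_seeds):
                steps = np.empty(dim)
                for d in range(dim):
                    c = 4.0 * c * (1.0 - c)             # logistic map in (0, 1)
                    steps[d] = sigma * (2.0 * c - 1.0)  # chaotic step in [-sigma, sigma]
                offspring.append(w + steps)
        weeds = np.vstack([weeds] + offspring)
        # Competitive exclusion: keep only the fittest max_pop weeds.
        weeds = weeds[np.argsort([obj(w) for w in weeds])[:max_pop]]
    return weeds[0], obj(weeds[0])

best_x, best_f = chaotic_iwo()
print(best_x, best_f)
```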

  14. Automated Detection of HONcode Website Conformity Compared to Manual Detection: An Evaluation.

    Science.gov (United States)

    Boyer, Célia; Dolamic, Ljiljana

    2015-06-02

    than 50% for contact details (100% precision, 69% recall), authority (85% precision, 52% recall), and reference (75% precision, 56% recall). The results also revealed issues for some criteria, such as date. Changing the "document" definition (i.e., using the sentence instead of the whole document as the unit of classification) within the automated system resolved some but not all of them. Study results indicate concordance between automated and expert manual compliance detection for authority, privacy, reference, and contact details. Results also indicate that using the same general parameters for automated detection of each criterion produces suboptimal results. Future work to configure optimal system parameters for each HONcode principle would improve results. The potential utility of integrating automated detection of HONcode conformity into future search engines is also discussed.

  15. A fully automated cell segmentation and morphometric parameter system for quantifying corneal endothelial cell morphology.

    Science.gov (United States)

    Al-Fahdawi, Shumoos; Qahwaji, Rami; Al-Waisy, Alaa S; Ipson, Stanley; Ferdousi, Maryam; Malik, Rayaz A; Brahma, Arun

    2018-07-01

    Corneal endothelial cell abnormalities may be associated with a number of corneal and systemic diseases. Damage to the endothelial cells can significantly affect corneal transparency by altering hydration of the corneal stroma, which can lead to irreversible endothelial cell pathology requiring corneal transplantation. To date, quantitative analysis of endothelial cell abnormalities has been performed manually by ophthalmologists using time-consuming and highly subjective semi-automatic tools that require operator interaction. We developed and applied a fully automated, real-time system, termed the Corneal Endothelium Analysis System (CEAS), for the segmentation and computation of endothelial cells in images of the human cornea obtained by in vivo corneal confocal microscopy. First, a Fast Fourier Transform (FFT) band-pass filter is applied to reduce noise and enhance the image quality to make the cells more visible. Secondly, endothelial cell boundaries are detected using watershed transformations and Voronoi tessellations to accurately quantify the morphological parameters of the human corneal endothelial cells. The performance of the automated segmentation system was tested against manually traced ground-truth images based on a database consisting of 40 corneal confocal endothelial cell images, in terms of segmentation accuracy and obtained clinical features. In addition, the robustness and efficiency of the proposed CEAS system were compared with manually obtained cell densities using a separate database of 40 images from controls (n = 11), obese subjects (n = 16) and patients with diabetes (n = 13). The Pearson correlation coefficient between automated and manual endothelial cell densities is 0.9, demonstrating the reliability of the system and the possibility of utilizing it in a real-world clinical setting to enable rapid diagnosis and patient follow-up, with an execution time of only 6 seconds per image. Copyright © 2018 Elsevier B.V. All rights reserved.
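
    The band-pass + watershed pipeline described above can be approximated in a few lines with scikit-image. This is a generic sketch in the spirit of CEAS, with illustrative filter and marker parameters rather than the published ones:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import difference_of_gaussians, threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_endothelium(image, pixel_area_mm2):
    # Band-pass filtering (difference of Gaussians approximates an FFT
    # band-pass) suppresses noise and slow illumination gradients.
    bandpassed = difference_of_gaussians(image, low_sigma=1, high_sigma=12)
    mask = bandpassed > threshold_otsu(bandpassed)
    # Seed one marker per cell from peaks of the distance transform,
    # then flood with a watershed to recover cell boundaries.
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=7, labels=mask)
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-distance, markers, mask=mask)
    cell_density = labels.max() / (labels.size * pixel_area_mm2)  # cells/mm^2
    return labels, cell_density
```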

  16. An analysis to optimize the process parameters of friction stir welded ...

    African Journals Online (AJOL)

    The friction stir welding (FSW) of steel is a challenging task. Experiments are conducted here with a tool having a conical pin of 0.4 mm clearance. The process parameters are optimized by using the Taguchi technique based on Taguchi's L9 orthogonal array. Experiments have been conducted based on three process ...

  17. Bounds on Entanglement Dimensions and Quantum Graph Parameters via Noncommutative Polynomial Optimization

    NARCIS (Netherlands)

    Gribling, Sander; de Laat, David; Laurent, Monique

    2017-01-01

    In this paper we study bipartite quantum correlations using techniques from tracial polynomial optimization. We construct a hierarchy of semidefinite programming lower bounds on the minimal entanglement dimension of a bipartite correlation. This hierarchy converges to a new parameter: the minimal

  18. Automated processing of first-pass radioisotope ventriculography data to determine essential central circulation parameters

    Science.gov (United States)

    Krotov, Aleksei; Pankin, Victor

    2017-09-01

    The assessment of central circulation (including heart function) parameters is vital in the preventive diagnostics of congenital and acquired heart failure and during polychemotherapy. The protocols currently applied in Russia do not fully utilize the first-pass assessment (FPRNA), which results in poor data formalization, even though FPRNA is one of the fastest, most affordable and most compact of the radioisotope diagnostic protocols. A non-imaging algorithm based on existing protocols has been designed to use the readings of an additional detector above the vena subclavia to determine the total blood volume (TBV), without requiring blood sampling, in contrast to current protocols. Automated processing of the precordial detector readings is presented, in order to determine the stroke volume (SV). Two techniques to estimate the ejection fraction (EF) of the heart are discussed.

  19. How to assess sustainability in automated manufacturing

    DEFF Research Database (Denmark)

    Dijkman, Teunis Johannes; Rödger, Jan-Markus; Bey, Niki

    2015-01-01

    The aim of this paper is to describe how sustainability in automation can be assessed. The assessment method is illustrated using a case study of a robot. Three aspects of sustainability assessment in automation are identified. Firstly, we consider automation as part of a larger system that fulfills the market demand for a given functionality. Secondly, three aspects of sustainability have to be assessed: environment, economy, and society. Thirdly, automation is part of a system with many levels, with different actors on each level, resulting in meeting the market demand. In this system, (sustainability) specifications move top-down, which helps avoid sub-optimization and problem shifting. From these three aspects, sustainable automation is defined as automation that contributes to products that fulfill a market demand in a more sustainable way. The case study presents the carbon footprints...

  20. Application of advanced technology to space automation

    Science.gov (United States)

    Schappell, R. T.; Polhemus, J. T.; Lowrie, J. W.; Hughes, C. A.; Stephens, J. R.; Chang, C. Y.

    1979-01-01

    Automated operations in space provide the key to optimized mission design and data acquisition at minimum cost for the future. The results of this study strongly support this statement and should provide further incentive for immediate development of the specific automation technology defined herein. Essential automation technology requirements were identified for future programs. The study was undertaken to address the future role of automation in the space program, the potential benefits to be derived, and the technology efforts that should be directed toward obtaining these benefits.

  1. Adaptive Multi-Agent Systems for Constrained Optimization

    Science.gov (United States)

    Macready, William; Bieniawski, Stefan; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory is a new framework for analyzing and controlling distributed systems. Here we demonstrate its use for distributed stochastic optimization. First we review one motivation of PD theory: as the information-theoretic extension of conventional full-rationality game theory to the case of bounded-rational agents. In this extension, the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution of the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. The updating of the Lagrange parameters in the Lagrangian can be viewed as a form of automated annealing that focuses the MAS more and more on the optimal pure strategy. This provides a simple way to map the solution of any constrained optimization problem onto the equilibrium of a Multi-Agent System (MAS). We present computer experiments involving both the Queen's problem and K-SAT, validating the predictions of PD theory and its use for off-the-shelf distributed adaptive optimization.

  2. Multi-objective optimization of combustion, performance and emission parameters in a jatropha biodiesel engine using Non-dominated sorting genetic algorithm-II

    Science.gov (United States)

    Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar

    2014-03-01

    The present work studies and identifies the different variables that affect the output parameters of a single-cylinder direct injection compression ignition (CI) engine fueled with jatropha biodiesel. Response surface methodology based on a central composite design (CCD) is used to design the experiments. Mathematical models are developed for the combustion parameters (brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), the performance parameter brake thermal efficiency (BTE), and the emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of the combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC and smoke, a multi-objective optimization problem is formulated. The non-dominated sorting genetic algorithm-II is used to predict the Pareto optimal sets of solutions. Experiments are performed at suitable optimal solutions for predicting the combustion, performance and emission parameters to check the adequacy of the proposed model. The Pareto optimal sets of solutions can be used as guidelines for end users to select an optimal combination of engine output and emission parameters depending upon their own requirements.
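
    At the heart of NSGA-II is a dominance test, and extracting the first non-dominated front takes only a few lines. A sketch with hypothetical (BSFC, NOx, -BTE) triples, with BTE negated so that every objective is minimized:

```python
def dominates(a, b):
    # a dominates b if a is no worse in every objective and strictly
    # better in at least one (all objectives posed as minimization).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Return the non-dominated subset (the first front of NSGA-II's sorting).
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical candidates: (BSFC g/kWh, NOx g/kWh, -BTE %).
candidates = [(250, 5.1, -32.0), (240, 6.0, -31.5), (260, 4.8, -30.9), (245, 5.0, -32.2)]
print(pareto_front(candidates))
```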

  3. Optimizing Parameters of Axial Pressure-Compounded Ultra-Low Power Impulse Turbines at Preliminary Design

    Science.gov (United States)

    Kalabukhov, D. S.; Radko, V. M.; Grigoriev, V. A.

    2018-01-01

    Ultra-low power turbine drives are used as energy sources in auxiliary power systems, energy units, and terrestrial, marine, air and space transport, in the shaft power range N{sub td} = 0.01…10 kW. In this paper we propose a new approach to the development of surrogate models for evaluating the integrated efficiency of a multistage ultra-low power impulse turbine with pressure stages. The method is based on existing mathematical models of ultra-low power turbine stage efficiency and mass, and it has been used in a method for selecting the rational parameters of a two-stage axial ultra-low power turbine. The article describes the basic features of an algorithm for optimizing the two-stage turbine parameters and for evaluating the efficiency criteria. The underlying mathematical models are intended for use in the preliminary design of turbine drives. The optimization method was tested in the preliminary design of an air starter turbine. Validation was carried out by comparing the results of the optimization calculations with numerical gas-dynamic simulation in the Ansys CFX package. The results indicate sufficient accuracy of the surrogate models used for selecting the parameters of an axial two-stage turbine.

  4. Method for semi-automated microscopy of filtration-enriched circulating tumor cells.

    Science.gov (United States)

    Pailler, Emma; Oulhen, Marianne; Billiot, Fanny; Galland, Alexandre; Auger, Nathalie; Faugeroux, Vincent; Laplace-Builhé, Corinne; Besse, Benjamin; Loriot, Yohann; Ngo-Camus, Maud; Hemanda, Merouan; Lindsay, Colin R; Soria, Jean-Charles; Vielh, Philippe; Farace, Françoise

    2016-07-14

    Circulating tumor cell (CTC) filtration methods capture high numbers of CTCs in non-small-cell lung cancer (NSCLC) and metastatic prostate cancer (mPCa) patients, and hold promise as a non-invasive technique for treatment selection and disease monitoring. However, filters have drawbacks that make the automation of microscopy challenging. We report the semi-automated microscopy method we developed to analyze filtration-enriched CTCs from NSCLC and mPCa patients. Spiked cell lines in normal blood and CTCs were enriched by ISET (isolation by size of epithelial tumor cells). Fluorescent staining was carried out using epithelial (pan-cytokeratins, EpCAM), mesenchymal (vimentin, N-cadherin) and leukocyte (CD45) markers and DAPI. Cytomorphological staining was carried out with Mayer-Hemalun or Diff-Quik. ALK, ROS1 and ERG rearrangements were detected by filter-adapted FISH (FA-FISH). Microscopy was carried out using an Ariol scanner. Two combined assays were developed. The first assay sequentially combined four-color fluorescent staining, scanning, automated selection of CD45(-) cells, cytomorphological staining, then scanning and analysis of CD45(-) cell phenotypical and cytomorphological characteristics. CD45(-) cell selection was based on DAPI and CD45 intensity and a nuclear area >55 μm(2). The second assay sequentially combined fluorescent staining, automated selection of CD45(-) cells, FISH scanning on CD45(-) cells, then analysis of CD45(-) cell FISH signals. Specific scanning parameters were developed to deal with the uneven surface of filters and the characteristics of CTCs. Thirty z-stacks spaced 0.6 μm apart were defined as the optimal setting, scanning 82%, 91%, and 95% of CTCs in ALK-, ROS1-, and ERG-rearranged patients respectively. A multi-exposure protocol consisting of three separate exposure times for green and red fluorochromes was optimized to analyze the intensity, size and thickness of FISH signals. The semi-automated microscopy method reported here

  5. An intelligent approach to optimize the EDM process parameters using utility concept and QPSO algorithm

    Directory of Open Access Journals (Sweden)

    Chinmaya P. Mohanty

    2017-04-01

    Full Text Available Although significant research has gone into the field of electrical discharge machining (EDM), analysis of the machining efficiency of the process with different electrodes has not been adequately performed. Copper and brass are frequently used as electrode materials, but graphite can be used as a potential electrode material due to its high melting temperature and good electrical conductivity. In view of this, the present work compares the machinability of copper, graphite and brass electrodes while machining Inconel 718 superalloy. Taguchi's L27 orthogonal array has been employed to collect data for the study and to analyze the effect of the machining parameters on the performance measures. The performance measures selected for this study are material removal rate, tool wear rate, surface roughness and radial overcut. The machining parameters considered for analysis are open circuit voltage, discharge current, pulse-on-time, duty factor, flushing pressure and electrode material. From the experimental analysis, it is observed that electrode material, discharge current and pulse-on-time are the important parameters for all the performance measures. The utility concept has been implemented to transform the multiple performance characteristics into an equivalent single performance characteristic. Non-linear regression analysis is carried out to develop a model relating the process parameters and the overall utility index. Finally, the quantum-behaved particle swarm optimization (QPSO) and particle swarm optimization (PSO) algorithms have been used to compare the optimal levels of the cutting parameters. Results demonstrate the elegance of QPSO in terms of convergence and computational effort. The optimal parametric setting obtained through both approaches is validated by conducting confirmation experiments.
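
    For readers unfamiliar with QPSO, the sketch below shows the characteristic quantum-behaved position update (particles drawn around a local attractor with a log-scaled jump toward or away from the mean best position), applied to a toy objective. It is a generic QPSO with illustrative settings, not the paper's tuned implementation:

```python
import numpy as np

def qpso(obj, dim=4, n_particles=20, iters=100, beta=0.75, bounds=(-5.0, 5.0)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    pbest = x.copy()
    pbest_f = np.array([obj(p) for p in pbest])
    for _ in range(iters):
        gbest = pbest[pbest_f.argmin()]
        mbest = pbest.mean(axis=0)                      # mean of personal bests
        phi = rng.random((n_particles, dim))
        attractor = phi * pbest + (1.0 - phi) * gbest   # local attractor p_i
        u = 1.0 - rng.random((n_particles, dim))        # in (0, 1], avoids log(0)
        sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
        # Quantum-behaved update: log-scaled jump around the attractor.
        x = attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        x = np.clip(x, lo, hi)
        f = np.array([obj(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
    return pbest[pbest_f.argmin()], pbest_f.min()

best, val = qpso(lambda v: float(np.sum(v ** 2)))
print(best, val)
```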

  6. Accelerator optimization using a network control and acquisition system

    International Nuclear Information System (INIS)

    Geddes, Cameron G.R.; Catravas, P.E.; Faure, Jerome; Toth, Csaba; Tilborg, J. van; Leemans, Wim P.

    2002-01-01

    Accelerator optimization requires detailed study of many parameters, indicating the need for remote control and automated data acquisition systems. A control and data acquisition system based on a network of commodity PCs and applications with standards-based inter-application communication is being built for the l'OASIS accelerator facility. This system allows synchronous acquisition of data at high (> 1 Hz) rates and remote control of the accelerator at low cost, allowing detailed study of the acceleration process.

  7. An optimization strategy for a biokinetic model of inhaled radionuclides

    International Nuclear Information System (INIS)

    Shyr, L.J.; Griffith, W.C.; Boecker, B.B.

    1991-01-01

    Models for material disposition and dosimetry involve predictions of the biokinetics of the material among compartments representing organs and tissues in the body. Because of a lack of human data for most toxicants, many of the basic data are derived by modeling the results obtained from studies using laboratory animals. Such a biomathematical model is usually developed by adjusting the model parameters until the model predictions visually match the measured retention and excretion data. The fitting process can be very time-consuming for a complicated model, and visual model selections may be subjective and easily biased by the scale or the data used. With the development of computerized optimization methods, manual fitting can benefit from an automated process. However, for a complicated model, an automated process without an optimization strategy will not be efficient and may not produce fruitful results. In this paper, procedures for, and implementation of, an optimization strategy for a complicated mathematical model are demonstrated by optimizing a biokinetic model for 144Ce in fused aluminosilicate particles inhaled by beagle dogs. The optimized results using SimuSolv were compared to manual fitting results obtained previously using the model simulation software GASP. Statistical criteria provided by SimuSolv, such as likelihood function values, were also used to support or verify visual model selections.
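
    A schematic of the automated fitting step (not the SimuSolv setup itself): a retention curve can be modeled as a sum of clearance exponentials and fit by least squares. The data values below are synthetic placeholders standing in for measured whole-body retention.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def retention(t, a1, lam1, a2, lam2):
        """Two-compartment retention: fractions a_i clearing at rates lam_i."""
        return a1 * np.exp(-lam1 * t) + a2 * np.exp(-lam2 * t)

    # Synthetic retention data (days after inhalation), invented for the sketch.
    t = np.array([1, 7, 30, 90, 180, 365, 730], float)
    y = np.array([0.95, 0.85, 0.62, 0.38, 0.22, 0.09, 0.02])

    p0 = [0.5, 0.05, 0.5, 0.005]                 # initial parameter guess
    popt, pcov = curve_fit(retention, t, y, p0=p0)
    print("fitted parameters:", popt)
    print("residual sum of squares:", np.sum((y - retention(t, *popt))**2))
    ```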

  8. Sensitivity of Calibrated Parameters and Water Resource Estimates on Different Objective Functions and Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Delaram Houshmand Kouchi

    2017-05-01

    Full Text Available The successful application of hydrological models relies on careful calibration and uncertainty analysis. However, there are many different calibration/uncertainty analysis algorithms, and each can be run with different objective functions. In this paper, we highlight the fact that each combination of optimization algorithm and objective function may lead to a different set of optimum parameters while having the same performance; this makes the interpretation of dominant hydrological processes in a watershed highly uncertain. We used three different optimization algorithms (SUFI-2, GLUE, and PSO) and eight different objective functions (R2, bR2, NSE, MNS, RSR, SSQR, KGE, and PBIAS) in a SWAT model to calibrate the monthly discharges in two watersheds in Iran. The results show that all three algorithms, using the same objective function, produced acceptable calibration results, however with significantly different parameter ranges. Similarly, an algorithm using different objective functions also produced acceptable calibration results, but with different parameter ranges. The different calibrated parameter ranges consequently resulted in significantly different water resource estimates. Hence, the parameters, and the outputs that they produce in a calibrated model, are "conditioned" on the choice of optimization algorithm and objective function. This adds another level of non-negligible uncertainty to watershed models, calling for more attention and investigation in this area.
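
    Two of the objective functions named above, NSE and KGE, have standard definitions that a short sketch can make concrete (the arrays below are invented discharge values):

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 is perfect, <0 is worse than the mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

    def kge(obs, sim):
        """Kling-Gupta efficiency: combines correlation, variability and bias."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        r = np.corrcoef(obs, sim)[0, 1]          # linear correlation
        alpha = sim.std() / obs.std()            # variability ratio
        beta = sim.mean() / obs.mean()           # bias ratio
        return 1.0 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

    obs = [3.1, 4.5, 2.2, 6.7, 5.0]   # invented monthly discharges
    sim = [2.9, 4.8, 2.5, 6.1, 5.3]
    print(nse(obs, sim), kge(obs, sim))
    ```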

  9. User-customized brain computer interfaces using Bayesian optimization

    Science.gov (United States)

    Bashashati, Hossein; Ward, Rabab K.; Bashashati, Ali

    2016-04-01

    Objective. The brain characteristics of different people are not the same. Brain computer interfaces (BCIs) should thus be customized for each individual. In motor-imagery based synchronous BCIs, a number of parameters (referred to as hyper-parameters), including the EEG frequency bands, the channels, and the time intervals from which the features are extracted, should be pre-determined based on each subject's brain characteristics. Approach. To determine the hyper-parameter values, previous work has relied on manual or semi-automatic methods that are not applicable to high-dimensional search spaces. In this paper, we propose a fully automatic, scalable and computationally inexpensive algorithm that uses Bayesian optimization to tune these hyper-parameters. We then build different classifiers trained on the sets of hyper-parameter values proposed by the Bayesian optimization. A final classifier aggregates the results of the different classifiers. Main Results. We have applied our method to 21 subjects from three BCI competition datasets. We have conducted rigorous statistical tests and have shown the positive impact of hyper-parameter optimization in improving the accuracy of BCIs. Furthermore, we have compared our results to those reported in the literature. Significance. Unlike the best reported results in the literature, which are based on more sophisticated feature extraction and classification methods and rely on pre-studies to determine the hyper-parameter values, our method has the advantage of being fully automated, uses less sophisticated feature extraction and classification methods, and yields similar or superior results compared to the best performing designs in the literature.
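
    A minimal Gaussian-process Bayesian-optimization loop of the general kind described, here tuning a single hyper-parameter of a stand-in objective; the actual BCI pipeline, search space and acquisition settings are not reproduced from the paper.

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def objective(x):
        """Stand-in for (negative) cross-validated BCI accuracy."""
        return np.sin(3 * x) + 0.1 * (x - 2)**2

    grid = np.linspace(0.0, 5.0, 500).reshape(-1, 1)   # candidate values
    X = np.array([[0.5], [2.5], [4.5]])                # initial evaluations
    y = objective(X).ravel()

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(15):
        gp.fit(X, y)
        mu, sigma = gp.predict(grid, return_std=True)
        best = y.min()
        # Expected-improvement acquisition (we are minimizing).
        z = (best - mu) / np.maximum(sigma, 1e-12)
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
        x_next = grid[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next[0]))

    print("best hyper-parameter:", X[np.argmin(y)][0], "value:", y.min())
    ```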

  10. Physiochemical parameters optimization for enhanced nisin production by Lactococcus lactis (MTCC 440

    Directory of Open Access Journals (Sweden)

    Puspadhwaja Mall

    2010-02-01

    Full Text Available The influence of various physiochemical parameters on the growth of Lactococcus lactis subsp. lactis MTCC 440 was studied at shake-flask level for 20 h. Media optimization (MRS broth) was carried out to achieve enhanced growth of the organism as well as nisin production. Bioassay of nisin was done by the agar diffusion method using Streptococcus agalactiae NCIM 2401 as the indicator strain. MRS broth (6%, w/v) with 0.15 μg/ml of nisin, supplemented with 0.5% (v/v) skimmed milk, was found to be the best for nisin production as well as for growth of L. lactis. The production of nisin was strongly influenced by the presence of skimmed milk and nisin in MRS broth. Nisin production was also affected by the physical parameters: maximum nisin production occurred at 30 °C, while the optimal temperature for biomass production was 37 °C.

  11. Understanding Innovation Engines: Automated Creativity and Improved Stochastic Optimization via Deep Learning.

    Science.gov (United States)

    Nguyen, A; Yosinski, J; Clune, J

    2016-01-01

    The Achilles' heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search mitigates this problem by encouraging exploration in all interesting directions, replacing the performance objective with a reward for novel behaviors. This reward for novel behaviors has traditionally required a human-crafted, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and herons instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a DNN-based novelty search in the image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g., churches, mosques, obelisks, etc.). Here, we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: for example, producing intelligent software, robot controllers, optimized physical components, and art.

  12. Lighting Automation Flying an Earthlike Habitat

    Science.gov (United States)

    Clark, Toni A.; Kolomenski, Andrei

    2017-01-01

    Currently, spacecraft lighting systems are not demonstrating innovations in automation due to perceived costs in designing circuitry for the communication and automation of lights. The majority of spacecraft lighting systems employ lamps or zone-specific manual switches and dimmers. This type of 'hardwired' solution does not easily convert to automation. With advances in solid-state lighting, the potential to enhance a spacecraft habitat is lost if the communication and automation problem is not tackled. If we are to build long-duration environments that provide earth-like habitats, minimize crew time, and optimize spacecraft power reserves, innovation in lighting automation is a must. This project researched the use of the DMX512 communication protocol, originally developed for high channel count lighting systems. DMX512 is an internationally governed, industry-accepted lighting communication protocol with wide industry support. The lighting industry markets a wealth of hardware and software that utilizes DMX512, and there may be incentive to space-certify the system. Our goal in this research is to enable the development of automated spacecraft habitats for long-duration missions. To transform how spacecraft lighting environments are automated, our project conducted a variety of tests to determine a potential scope of capability. We investigated the utilization and application of an industry-accepted lighting control protocol, DMX512, by showcasing how the lighting system could help conserve power, assist with lighting countermeasures, and utilize spatial body tracking. We hope the evaluation and demonstrations we built will inspire other NASA engineers, architects and researchers to consider employing DMX512 "smart lighting" capabilities in their system architecture. By using DMX512 we will prove the 'wheel' does not need to be reinvented in terms of smart lighting, and future spacecraft can use a standard lighting protocol to produce an effective, optimized and

  13. Lighting Automation - Flying an Earthlike Habitat

    Science.gov (United States)

    Clark, Tori A. (Principal Investigator); Kolomenski, Andrei

    2017-01-01

    Currently, spacecraft lighting systems are not demonstrating innovations in automation due to perceived costs in designing circuitry for the communication and automation of lights. The majority of spacecraft lighting systems employ lamps or zone-specific manual switches and dimmers. This type of 'hardwired' solution does not easily convert to automation. With advances in solid-state lighting, the potential to enhance a spacecraft habitat is lost if the communication and automation problem is not tackled. If we are to build long-duration environments that provide earth-like habitats, minimize crew time, and optimize spacecraft power reserves, innovation in lighting automation is a must. This project researched the use of the DMX512 communication protocol, originally developed for high channel count lighting systems. DMX512 is an internationally governed, industry-accepted lighting communication protocol with wide industry support. The lighting industry markets a wealth of hardware and software that utilizes DMX512, and there may be incentive to space-certify the system. Our goal in this research is to enable the development of automated spacecraft habitats for long-duration missions. To transform how spacecraft lighting environments are automated, our project conducted a variety of tests to determine a potential scope of capability. We investigated the utilization and application of an industry-accepted lighting control protocol, DMX512, by showcasing how the lighting system could help conserve power, assist with lighting countermeasures, and utilize spatial body tracking. We hope the evaluation and demonstrations we built will inspire other NASA engineers, architects and researchers to consider employing DMX512 "smart lighting" capabilities in their system architecture. By using DMX512 we will prove the 'wheel' does not need to be reinvented in terms of smart lighting, and future spacecraft can use a standard lighting protocol to produce an effective, optimized and

  14. Automated data acquisition technology development: Automated modeling and control development

    Science.gov (United States)

    Romine, Peter L.

    1995-01-01

    This report documents the completion of, and improvements made to, the software developed for automated data acquisition and automated modeling and control development on the Texas Micro rack-mounted PCs. This research was initiated because the Metal Processing Branch of NASA Marshall Space Flight Center identified a need for a mobile data acquisition and data analysis system customized for welding measurement and calibration. Several hardware configurations were evaluated and a PC-based system was chosen. The Welding Measurement System (WMS) is a dedicated instrument strictly for data acquisition and data analysis. In addition to the data acquisition functions described in this thesis, WMS also supports many functions associated with process control. The hardware and software requirements for an automated acquisition system for welding process parameters, welding equipment checkout, and welding process modeling were determined in 1992. From these recommendations, NASA purchased the necessary hardware and software. The new welding acquisition system is designed to collect welding parameter data and perform analysis to determine the voltage-versus-current arc-length relationship for VPPA welding. Once the results of this analysis are obtained, they can then be used to develop a RAIL function to control welding startup and shutdown without torch crashing.

  15. Optimization of Reversed-Phase Peptide Liquid Chromatography Ultraviolet Mass Spectrometry Analyses Using an Automated Blending Methodology

    Science.gov (United States)

    Chakraborty, Asish B.; Berger, Scott J.

    2005-01-01

    The balance between chromatographic performance and mass spectrometric response has been evaluated using an automated series of experiments where separations are produced by the real-time automated blending of water with organic and acidic modifiers. In this work, the concentration effects of two acidic modifiers (formic acid and trifluoroacetic acid) were studied on the separation selectivity, ultraviolet, and mass spectrometry detector response, using a complex peptide mixture. Peptide retention selectivity differences were apparent between the two modifiers, and under the conditions studied, trifluoroacetic acid produced slightly narrower (more concentrated) peaks, but significantly higher electrospray mass spectrometry suppression. Trifluoroacetic acid suppression of electrospray signal and influence on peptide retention and selectivity was dominant when mixtures of the two modifiers were analyzed. Our experimental results indicate that in analyses where the analyzed components are roughly equimolar (e.g., a peptide map of a recombinant protein), the selectivity of peptide separations can be optimized by choice and concentration of acidic modifier, without compromising the ability to obtain effective sequence coverage of a protein. In some cases, these selectivity differences were explored further, and a rational basis for differentiating acidic modifier effects from the underlying peptide sequences is described. PMID:16522853

  16. Information theoretic methods for image processing algorithm optimization

    Science.gov (United States)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable through manual calibration; thus, an automated approach is a must. We discuss an information-theory-based metric for evaluating algorithm adaptive characteristics (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).

  17. Optimization of ohmic heating parameters for polyphenoloxidase inactivation in not-from-concentrate elstar apple juice using RSM

    DEFF Research Database (Denmark)

    Abedelmaksoud, Tarek; Mohsen, Sobhy Mohamed; Duedahl-Olesen, Lene

    2018-01-01

    In this study, optimization of ohmic heating (OH) process parameters (temperature and voltage gradient) to inactivate polyphenoloxidase (PPO) in not-from-concentrate (NFC) apple juice was conducted. Response surface methodology was used for optimization of the OH parameters, where the effect of voltage gradient...... and temperature on the PPO activity in the NFC apple juice was evaluated. The optimized condition was then used to produce the NFC apple juice, and the quality parameters were evaluated and compared to NFC apple juice prepared by conventional heating (CH). The studied parameters were: PPO activity, total phenolic......, total carotenoids, ascorbic acid, cloud value, color, as well as physical properties (i.e., TSS, acidity, electric conductivity and viscosity). The reduction of PPO activity was 97% and 91% for OH (at 40 V/cm and 80 °C) and CH (at 90 °C and 60 s), respectively. The reduction of the ascorbic acid...
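
    A sketch of the response-surface step under simple assumptions: fit a full second-order polynomial in temperature and voltage gradient to PPO-inactivation data and read predictions off the fitted surface. All design points and responses below are invented placeholders, not the study's data.

    ```python
    import numpy as np

    # Invented design points: (temperature degC, voltage gradient V/cm)
    T = np.array([60, 60, 70, 70, 80, 80, 70])
    V = np.array([20, 40, 20, 40, 20, 40, 30])
    ppo = np.array([45, 38, 25, 18, 9, 3, 17], float)  # residual PPO activity (%)

    # Full second-order model: b0 + b1*T + b2*V + b3*T^2 + b4*V^2 + b5*T*V
    A = np.column_stack([np.ones_like(T), T, V, T**2, V**2, T * V])
    coef, *_ = np.linalg.lstsq(A, ppo, rcond=None)

    def surface(t, v):
        """Predicted residual PPO activity at temperature t and gradient v."""
        return coef @ np.array([1.0, t, v, t**2, v**2, t * v])

    print("predicted residual PPO at 80 degC, 40 V/cm: %.1f%%" % surface(80, 40))
    ```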

  18. A semi-automated volumetric software for segmentation and perfusion parameter quantification of brain tumors using 320-row multidetector computed tomography: a validation study

    Energy Technology Data Exchange (ETDEWEB)

    Chae, Soo Young; Suh, Sangil; Ryoo, Inseon; Park, Arim; Seol, Hae Young [Korea University Guro Hospital, Department of Radiology, Seoul (Korea, Republic of); Noh, Kyoung Jin [Soonchunhyang University, Department of Electronic Engineering, Asan (Korea, Republic of); Shim, Hackjoon [Toshiba Medical Systems Korea Co., Seoul (Korea, Republic of)

    2017-05-15

    We developed semi-automated volumetric software, NPerfusion, to segment brain tumors and quantify perfusion parameters on whole-brain CT perfusion (WBCTP) images. The purpose of this study was to assess the feasibility of the software and to validate its performance compared with manual segmentation. Twenty-nine patients with pathologically proven brain tumors who underwent preoperative WBCTP between August 2012 and February 2015 were included. Three perfusion parameters, arterial flow (AF), equivalent blood volume (EBV), and Patlak flow (PF, which is a measure of permeability of capillaries), of brain tumors were generated by commercial software and then quantified volumetrically by NPerfusion, which also semi-automatically segmented tumor boundaries. The quantification was validated by comparison with that of manual segmentation in terms of the concordance correlation coefficient and Bland-Altman analysis. With NPerfusion, we successfully performed segmentation and quantified whole volumetric perfusion parameters of all 29 brain tumors, which showed perfusion trends consistent with previous studies. The validation of the perfusion parameter quantification exhibited almost perfect agreement with manual segmentation, with Lin concordance correlation coefficients (ρc) for AF, EBV, and PF of 0.9988, 0.9994, and 0.9976, respectively. On Bland-Altman analysis, most differences between this software and manual segmentation on the commercial software were within the limit of agreement. NPerfusion successfully performs segmentation of brain tumors and calculates their perfusion parameters. We validated this semi-automated segmentation software by comparing it with manual segmentation. NPerfusion can be used to calculate volumetric perfusion parameters of brain tumors from WBCTP. (orig.)

  19. A semi-automated volumetric software for segmentation and perfusion parameter quantification of brain tumors using 320-row multidetector computed tomography: a validation study.

    Science.gov (United States)

    Chae, Soo Young; Suh, Sangil; Ryoo, Inseon; Park, Arim; Noh, Kyoung Jin; Shim, Hackjoon; Seol, Hae Young

    2017-05-01

    We developed semi-automated volumetric software, NPerfusion, to segment brain tumors and quantify perfusion parameters on whole-brain CT perfusion (WBCTP) images. The purpose of this study was to assess the feasibility of the software and to validate its performance compared with manual segmentation. Twenty-nine patients with pathologically proven brain tumors who underwent preoperative WBCTP between August 2012 and February 2015 were included. Three perfusion parameters, arterial flow (AF), equivalent blood volume (EBV), and Patlak flow (PF, which is a measure of permeability of capillaries), of brain tumors were generated by commercial software and then quantified volumetrically by NPerfusion, which also semi-automatically segmented tumor boundaries. The quantification was validated by comparison with that of manual segmentation in terms of the concordance correlation coefficient and Bland-Altman analysis. With NPerfusion, we successfully performed segmentation and quantified whole volumetric perfusion parameters of all 29 brain tumors, which showed perfusion trends consistent with previous studies. The validation of the perfusion parameter quantification exhibited almost perfect agreement with manual segmentation, with Lin concordance correlation coefficients (ρc) for AF, EBV, and PF of 0.9988, 0.9994, and 0.9976, respectively. On Bland-Altman analysis, most differences between this software and manual segmentation on the commercial software were within the limit of agreement. NPerfusion successfully performs segmentation of brain tumors and calculates their perfusion parameters. We validated this semi-automated segmentation software by comparing it with manual segmentation. NPerfusion can be used to calculate volumetric perfusion parameters of brain tumors from WBCTP.
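
    The Lin concordance correlation coefficient used in this validation has a closed form; a small sketch follows, with made-up paired measurements standing in for the manual and semi-automated values:

    ```python
    import numpy as np

    def lin_ccc(x, y):
        """Lin's concordance correlation coefficient between paired measurements."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxy = np.mean((x - x.mean()) * (y - y.mean()))   # population covariance
        return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean())**2)

    # Made-up example: perfusion values from manual vs semi-automated segmentation
    manual = [52.1, 47.8, 60.3, 39.5, 44.0]
    auto = [51.7, 48.2, 59.8, 40.1, 43.6]
    print("rho_c = %.4f" % lin_ccc(manual, auto))
    ```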

  20. Multiplex protein pattern unmixing using a non-linear variable-weighted support vector machine as optimized by a particle swarm optimization algorithm.

    Science.gov (United States)

    Yang, Qin; Zou, Hong-Yan; Zhang, Yan; Tang, Li-Juan; Shen, Guo-Li; Jiang, Jian-Hui; Yu, Ru-Qin

    2016-01-15

    Most proteins localize to more than one organelle in a cell. Unmixing the localization patterns of proteins is critical for understanding protein functions and other vital cellular processes. Herein, a non-linear machine learning technique is proposed for the first time for protein pattern unmixing. The variable-weighted support vector machine (VW-SVM) is a demonstrably robust modeling technique with flexible and rational variable selection. When optimized by a global stochastic optimization technique, the particle swarm optimization (PSO) algorithm, VW-SVM becomes an adaptive, parameter-free method for automated unmixing of protein subcellular patterns. Results obtained by pattern unmixing of a set of fluorescence microscope images of cells indicate that VW-SVM as optimized by PSO is able to extract useful pattern features by optimally rescaling each variable for non-linear SVM modeling, consequently leading to improved performance in multiplex protein pattern unmixing compared with conventional SVM and other existing pattern unmixing methods. Copyright © 2015 Elsevier B.V. All rights reserved.
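
    A bare-bones PSO loop of the kind used to tune the variable weights. The VW-SVM coupling itself is not reproduced; the fitness function below is a generic stand-in, and the swarm constants are conventional defaults, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(w):
        """Stand-in for VW-SVM cross-validation error at variable weights w."""
        return np.sum((w - 0.7)**2)

    n_particles, dim, iters = 20, 5, 100
    pos = rng.uniform(0, 1, (n_particles, dim))       # candidate weight vectors
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(fitness, 1, pos)
    gbest = pbest[np.argmin(pbest_val)]

    w_in, c1, c2 = 0.72, 1.49, 1.49                    # common PSO constants
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 1)                 # keep weights in [0, 1]
        val = np.apply_along_axis(fitness, 1, pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)]

    print("best weights:", np.round(gbest, 3))
    ```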

  1. Selecting Optimal Parameters of Random Linear Network Coding for Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Heide, J; Zhang, Qi; Fitzek, F H P

    2013-01-01

    This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant...... reduction in the number of transmitted packets can be achieved. However, NC introduces additional computations and potentially a non-negligible transmission overhead, both of which depend on the chosen coding parameters. Therefore it is necessary to consider the trade-off that these coding parameters...... present in order to obtain the lowest energy consumption per transmitted bit. This problem is analyzed and suitable coding parameters are determined for the popular Tmote Sky platform. Compared to the use of traditional RLNC, these parameters enable a reduction in the energy spent per bit which grows...

  2. Review of Automated Design and Optimization of MEMS

    DEFF Research Database (Denmark)

    Achiche, Sofiane; Fan, Zhun; Bolognini, Francesca

    2007-01-01

    carried out. This paper presents a review of these techniques. The design task of MEMS is usually divided into four main stages: System Level, Device Level, Physical Level and the Process Level. The state of the art of automated MEMS design in each of these levels is investigated.

  3. Parametric modeling and stagger angle optimization of an axial flow fan

    International Nuclear Information System (INIS)

    Li, M X; Zhang, C H; Liu, Y; Zheng, S Y

    2013-01-01

    Axial flow fans are widely used in every field of industrial production, and improving their efficiency is a sustained and urgent demand of domestic industry. Optimization of the stagger angle is an important method to improve fan performance. Parametric modeling and automation of the calculation process are realized in this paper to improve optimization efficiency. Geometric modeling and mesh division are parameterized based on GAMBIT. Parameter setting and flow field calculation are completed in the batch mode of FLUENT. A control program developed in Visual C++ coordinates the data exchange between these tools. It also extracts calculation results for the optimization algorithm module (provided by Matlab), which generates directive optimization control parameters that are fed back to the modeling module. The center line of the blade airfoil, based on the Clark Y profile, is constructed by the non-constant circulation and triangle discharge method. The stagger angles of six airfoil sections are optimized to reduce the influence of inlet shock loss, gas leakage in the blade tip clearance, and hub resistance at the blade root. Finally, an optimal solution is obtained which meets the total pressure requirement under the given conditions and improves total pressure efficiency by about 6%.

  4. Automated beam steering using optimal control

    Energy Technology Data Exchange (ETDEWEB)

    Allen, C. K. (Christopher K.)

    2004-01-01

    We present a steering algorithm which, with the aid of a model, allows the user to specify beam behavior throughout a beamline, rather than just at specified beam position monitor (BPM) locations. The model is used primarily to compute the values of the beam phase vectors from BPM measurements and to define cost functions that describe the steering objectives. The steering problem is formulated as a constrained optimization problem; however, by applying optimal control theory we can reduce it to an unconstrained optimization whose dimension is the number of control signals.
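
    Under the usual linear-optics assumption, this kind of steering reduces to a regularized least-squares problem: given a model response matrix R mapping corrector settings to BPM readings, choose settings that cancel the measured orbit. The matrix, readings and regularization weight below are all hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical model response matrix: 8 BPM readings vs 4 corrector settings.
    R = rng.normal(size=(8, 4))
    y = rng.normal(size=8)            # measured orbit deviations at the BPMs

    # Minimize ||y + R x||^2 + lam*||x||^2 over corrector settings x;
    # the Tikhonov term keeps corrector strengths modest.
    lam = 1e-3
    A = R.T @ R + lam * np.eye(4)
    x = np.linalg.solve(A, -R.T @ y)

    print("corrector settings:", np.round(x, 4))
    print("residual orbit rms:", np.linalg.norm(y + R @ x) / np.sqrt(len(y)))
    ```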

  5. Automated Detection of Clinically Significant Prostate Cancer in mp-MRI Images Based on an End-to-End Deep Neural Network.

    Science.gov (United States)

    Wang, Zhiwei; Liu, Chaoyue; Cheng, Danpeng; Wang, Liang; Yang, Xin; Cheng, Kwang-Ting

    2018-05-01

    Automated methods for detecting clinically significant (CS) prostate cancer (PCa) in multi-parameter magnetic resonance images (mp-MRI) are in high demand. Existing methods typically employ several separate steps, each of which is optimized individually without considering the error tolerance of the other steps. As a result, they can either involve unnecessary computational cost or suffer from errors accumulated over steps. In this paper, we present an automated CS PCa detection system in which all steps are optimized jointly in an end-to-end trainable deep neural network. The proposed network consists of concatenated subnets: 1) a novel tissue deformation network (TDN) for automated prostate detection and multimodal registration, and 2) a dual-path convolutional neural network (CNN) for CS PCa detection. Three types of loss functions, i.e., classification loss, inconsistency loss, and overlap loss, are employed for optimizing all parameters of the proposed TDN and CNN. In the training phase, the two nets mutually affect each other and effectively guide registration and extraction of representative CS PCa-relevant features to achieve results with sufficient accuracy. The entire network is trained in a weakly supervised manner by providing only image-level annotations (i.e., presence/absence of PCa) without exact priors of lesion locations. Compared with most existing systems, which require supervised labels, e.g., manual delineation of PCa lesions, it is much more convenient for clinical usage. Comprehensive evaluation based on fivefold cross-validation using data from 360 patients demonstrates that our system achieves high accuracy for CS PCa detection, i.e., a sensitivity of 0.6374 and 0.8978 at 0.1 and 1 false positives per normal/benign patient.

  6. Are automated molecular dynamics simulations and binding free energy calculations realistic tools in lead optimization? An evaluation of the linear interaction energy (LIE) method

    NARCIS (Netherlands)

    Stjernschantz, E.M.; Marelius, J.; Medina, C.; Jacobsson, M.; Vermeulen, N.P.E.; Oostenbrink, C.

    2006-01-01

    An extensive evaluation of the linear interaction energy (LIE) method for the prediction of binding affinity of docked compounds has been performed, with an emphasis on its applicability in lead optimization. An automated setup is presented, which allows for the use of the method in an industrial

  7. The same number of optimized parameters scheme for determining intermolecular interaction energies

    DEFF Research Database (Denmark)

    Kristensen, Kasper; Ettenhuber, Patrick; Eriksen, Janus Juul

    2015-01-01

    We propose the Same Number Of Optimized Parameters (SNOOP) scheme as an alternative to the counterpoise method for treating basis set superposition errors in calculations of intermolecular interaction energies. The key point of the SNOOP scheme is to enforce that the number of optimized wave...... as numerically. Numerical results for second-order Møller-Plesset perturbation theory (MP2) and coupled-cluster with single, double, and approximate triple excitations (CCSD(T)) show that the SNOOP scheme in general outperforms the uncorrected and counterpoise approaches. Furthermore, we show that SNOOP...

  8. Optimal Design of Measurement Programs for the Parameter Identification of Dynamic Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

    The design of measurement programs devoted to parameter identification of structural dynamic systems is considered. The design problem is formulated as an optimization problem to minimize the total expected cost, that is, the cost of failure and the cost of the measurement program. All...... the calculations are based on a priori knowledge and engineering judgement. One of the contributions of the approach is that the optimal number of sensors can be estimated. This is shown in a numerical example where the proposed approach is demonstrated. The example is concerned with design of a measurement program...

  9. Optimal Design of Measurement Programs for the Parameter Identification of Dynamic Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

    1993-01-01

    The design of a measurement program devoted to parameter identification of structural dynamic systems is considered. The design problem is formulated as an optimization problem to minimize the total expected cost, that is, the cost of failure and the cost of the measurement program. All...... the calculations are based on a priori knowledge and engineering judgement. One of the contributions of the approach is that the optimal number of sensors can be estimated. This is shown in a numerical example where the proposed approach is demonstrated. The example is concerned with design of a measurement...

  10. Optimal Design of Measurement Programs for the Parameter Identification of Dynamic Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

    1991-01-01

    The design of a measurement program devoted to parameter identification of structural dynamic systems is considered. The design problem is formulated as an optimization problem to minimize the total expected cost, i.e. the cost of failure and the cost of the measurement program. All...... the calculations are based on a priori knowledge and engineering judgement. One of the contributions of the approach is that the optimal number of sensors can be estimated. This is shown in a numerical example where the proposed approach is demonstrated. The example is concerned with design of a measurement...
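
    A toy version of the expected-cost trade-off these three records describe: the probability of failure is assumed to drop as more sensors reduce parameter uncertainty, and the optimum balances that against per-sensor cost. The exponential decay model and every number below are illustrative assumptions, not values from the papers.

    ```python
    import numpy as np

    C_failure = 1.0e8        # illustrative cost of structural failure
    c_sensor = 2.0e3         # illustrative cost per sensor in the program
    p0, k = 1e-2, 0.35       # assumed failure-probability model: p(n) = p0*exp(-k*n)

    def expected_total_cost(n):
        """Failure cost weighted by its probability, plus measurement cost."""
        return C_failure * p0 * np.exp(-k * n) + c_sensor * n

    n = np.arange(1, 31)
    costs = expected_total_cost(n)
    print("optimal number of sensors:", n[np.argmin(costs)])   # -> 15 here
    ```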

  11. WE-AB-209-09: Optimization of Rotational Arc Station Parameter Optimized Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Dong, P; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Ungun, B [Stanford University School of Medicine, Stanford, CA (United States); Stanford University School of Engineering, Stanford, CA (United States); Boyd, S [Stanford University School of Engineering, Stanford, CA (United States)

    2016-06-15

    Purpose: To develop a fast optimization method for station parameter optimized radiation therapy (SPORT) and to show that SPORT is capable of improving on VMAT in both plan quality and delivery efficiency. Methods: The angular space from 0° to 360° was divided into 180 station points (SPs). A candidate aperture was assigned to each of the SPs based on the results of a column generation algorithm. The weights of the apertures were then obtained by optimizing the objective function using a state-of-the-art GPU-based Proximal Operator Graph Solver (POGS) within seconds. Apertures with zero or low weight were discarded. To avoid being trapped in a local minimum, a stochastic gradient descent method was employed, which also greatly increased the convergence rate of the objective function. The above procedure was repeated until the plan could not be improved any further. A weighting factor associated with the total plan MU also indirectly controlled the complexity of aperture shapes. The number of apertures for VMAT and SPORT was confined to 180. SPORT allowed the coexistence of multiple apertures in a single SP. The optimization technique was assessed using three clinical cases (prostate, H&N and brain). Results: Marked dosimetric quality improvement was demonstrated in the SPORT plans for all three studied cases. Prostate case: the volume of the 50% prescription dose was decreased by 22% for the rectum. H&N case: SPORT improved the mean dose for the left and right parotids by 15% each. Brain case: the doses to the eyes, chiasm and inner ears were all improved. SPORT shortened the treatment time by ∼1 min for the prostate case, ∼0.5 min for the brain case, and ∼0.2 min for the H&N case. Conclusion: The superior dosimetric quality and delivery efficiency presented here indicate that SPORT is an intriguing alternative treatment modality.

  12. WE-AB-209-09: Optimization of Rotational Arc Station Parameter Optimized Radiation Therapy

    International Nuclear Information System (INIS)

    Dong, P; Xing, L; Ungun, B; Boyd, S

    2016-01-01

    Purpose: To develop a fast optimization method for station parameter optimized radiation therapy (SPORT) and to show that SPORT is capable of improving on VMAT in both plan quality and delivery efficiency. Methods: The angular space from 0° to 360° was divided into 180 station points (SPs). A candidate aperture was assigned to each of the SPs based on the results of a column generation algorithm. The weights of the apertures were then obtained by optimizing the objective function using a state-of-the-art GPU-based Proximal Operator Graph Solver (POGS) within seconds. Apertures with zero or low weight were discarded. To avoid being trapped in a local minimum, a stochastic gradient descent method was employed, which also greatly increased the convergence rate of the objective function. The above procedure was repeated until the plan could not be improved any further. A weighting factor associated with the total plan MU also indirectly controlled the complexity of aperture shapes. The number of apertures for VMAT and SPORT was confined to 180. SPORT allowed the coexistence of multiple apertures in a single SP. The optimization technique was assessed using three clinical cases (prostate, H&N and brain). Results: Marked dosimetric quality improvement was demonstrated in the SPORT plans for all three studied cases. Prostate case: the volume of the 50% prescription dose was decreased by 22% for the rectum. H&N case: SPORT improved the mean dose for the left and right parotids by 15% each. Brain case: the doses to the eyes, chiasm and inner ears were all improved. SPORT shortened the treatment time by ∼1 min for the prostate case, ∼0.5 min for the brain case, and ∼0.2 min for the H&N case. Conclusion: The superior dosimetric quality and delivery efficiency presented here indicate that SPORT is an intriguing alternative treatment modality.

  13. A New Method for Optimal Regularization Parameter Determination in the Inverse Problem of Load Identification

    Directory of Open Access Journals (Sweden)

    Wei Gao

    2016-01-01

    Full Text Available Based on the regularization method for the inverse problem of load identification, a new method for determining the optimal regularization parameter is proposed. Firstly, a quotient function (QF) is defined using the regularization parameter as a variable, based on the least squares solution of the minimization problem. Secondly, the quotient function method (QFM) is proposed to select the optimal regularization parameter based on quadratic programming theory. In employing the QFM, the behavior of the QF values with respect to different regularization parameters is taken into consideration. Finally, numerical and experimental examples are used to validate the performance of the QFM, with the Generalized Cross-Validation (GCV) method and the L-curve method taken as comparison methods. The results indicate that the proposed QFM is adaptive to different measuring points, noise levels, and types of dynamic load.
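
    For reference, the GCV baseline mentioned above (not the authors' QFM) has a compact closed form for Tikhonov-regularized least squares. A sketch scanning a grid of regularization parameters on an invented ill-conditioned system:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Ill-conditioned toy system G f = y, standing in for load identification.
    G = rng.normal(size=(40, 20)) @ np.diag(1.0 / np.arange(1, 21)**2)
    f_true = rng.normal(size=20)
    y = G @ f_true + 0.01 * rng.normal(size=40)

    def gcv(lam):
        """Generalized cross-validation score for Tikhonov parameter lam."""
        H = G @ np.linalg.solve(G.T @ G + lam * np.eye(20), G.T)  # influence matrix
        resid = (np.eye(40) - H) @ y
        return np.linalg.norm(resid)**2 / (40 - np.trace(H))**2

    lams = np.logspace(-10, 0, 60)
    best = lams[np.argmin([gcv(l) for l in lams])]
    print("GCV-optimal regularization parameter: %.2e" % best)
    ```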

  14. Multi-criteria optimization of chassis parameters of Nissan 200 SX for drifting competitions

    Science.gov (United States)

    Maniowski, M.

    2016-09-01

    The objective of this work is to increase the performance of a Nissan 200SX S13 prepared for a quasi-static state of drifting on a circular path with a given constant radius (R = 15 m) and tyre-road friction coefficient (μ = 0.9). First, a high-fidelity “miMA” multibody model of the vehicle is formulated. Then, a multicriteria optimization problem is solved, with one of the goals being to maximize the stable drift angle (β) of the vehicle. The decision variables comprise 11 parameters of the vehicle chassis (describing the wheel suspension stiffness and geometry) and 2 parameters responsible for the driver's steering and accelerator actions, which control this extreme closed-loop manoeuvre. The optimized chassis setup results in a drift angle increase of 14%, from 35 to 40 deg.

  15. Systematic review automation technologies

    Science.gov (United States)

    2014-01-01

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of the systematic review or each of its tasks. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed to realize automation of the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128

  16. Automated egg grading system using computer vision: Investigation on weight measure versus shape parameters

    Science.gov (United States)

    Nasir, Ahmad Fakhri Ab; Suhaila Sabarudin, Siti; Majeed, Anwar P. P. Abdul; Ghani, Ahmad Shahrizan Abdul

    2018-04-01

    Chicken eggs are a food in high demand. Human operators cannot work perfectly and continuously when conducting egg grading. Instead of grading eggs by weight alone, an automatic egg grading system using computer vision (based on egg shape parameters) can be used to improve productivity. However, an early hypothesis indicated that the assigned egg classes change when egg shape parameters are used instead of weight. This paper presents a comparison of egg classification by the two above-mentioned methods. Firstly, 120 images of chicken eggs of various grades (A-D) produced in Malaysia were captured. The egg images were then processed using image pre-processing techniques such as cropping, smoothing and segmentation. Thereafter, eight egg shape features, including area, major axis length, minor axis length, volume, diameter and perimeter, were extracted. Lastly, feature selection (information gain ratio) and feature extraction (principal component analysis) were performed, and a k-nearest neighbour classifier was used in the classification process. Two approaches were taken in the experiment: supervised learning (using the weight-based grades assigned by the egg supplier as labels) and unsupervised learning (using shape-based grades assigned by ourselves). Clustering results reveal many changes in egg classes after performing shape-based grading. On average, the best recognition result using shape-based grading labels is 94.16%, while that using weight-based labels is 44.17%. In conclusion, an automated egg grading system using computer vision is better served by shape-based features, since it works from images, whereas the weight parameter is more suitable for a weight-based grading system.
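
    A compact sketch of the shape-feature classification step. The feature values and grade labels below are invented; the study extracted eight shape features from segmented egg images and also applied feature selection and PCA, which this sketch omits.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Invented (area mm^2, major axis mm, minor axis mm) per egg, plus grades.
    X = np.array([[1650, 57, 43], [1720, 58, 44], [1500, 54, 41],
                  [1450, 53, 40], [1300, 51, 38], [1280, 50, 38]])
    y = ["A", "A", "B", "B", "C", "C"]   # shape-based grade labels

    clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    print(clf.predict([[1480, 54, 40]]))   # -> expected grade 'B'
    ```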

  17. Identification of strategy parameters for particle swarm optimizer through Taguchi method

    Institute of Scientific and Technical Information of China (English)

    KHOSLA Arun; KUMAR Shakti; AGGARWAL K.K.

    2006-01-01

    Particle swarm optimization (PSO), like other evolutionary algorithms, is a population-based stochastic algorithm inspired by the metaphor of social interaction in birds, insects, wasps, etc. It has been used for finding promising solutions in complex search spaces through the interaction of particles in a swarm. It is a well-recognized fact that the performance of evolutionary algorithms depends to a great extent on the choice of appropriate strategy/operating parameters, such as population size, crossover rate, mutation rate, and crossover operator. Generally, these parameters are selected through a hit-and-trial process, which is unsystematic and requires rigorous experimentation. This paper proposes a systematic reasoning scheme based on the Taguchi method for rapidly identifying the strategy parameters of the PSO algorithm. The Taguchi method is a robust design approach that uses fractional factorial designs to study a large number of parameters with a small number of experiments. Computer simulations have been performed on two benchmark functions, the Rosenbrock and Griewank functions, to validate the approach.

  18. STATISTICAL APPROACH FOR MULTI CRITERIA OPTIMIZATION OF CUTTING PARAMETERS OF TURNING ON HEAT TREATED BERYLLIUM COPPER ALLOY

    Directory of Open Access Journals (Sweden)

    K. DEVAKI DEVI

    2017-08-01

    Full Text Available In machining operations, achieving the desired performance features of the machined product is a challenging job, because these quality features are highly correlated and are influenced directly by the process parameters or indirectly by their interactive effects. This paper presents an effective method to determine optimal machining parameters in a turning operation on heat-treated beryllium copper alloy, minimizing surface roughness, cutting forces and work-tool interface temperature while maximizing the metal removal rate. The scope of this work extends to multi-objective optimization. Response surface methodology is adopted for preparing the design matrix, generating the ANOVA, and performing the optimization. An accurate model is obtained to analyse the effect of each parameter on the output. The input parameters considered in this work are cutting speed, feed, depth of cut, work material (annealed and hardened) and tool material (CBN and HSS).

  19. Optimization of process parameter for graft copolymerization of glycidyl methacrylate onto delignified banana fibers

    International Nuclear Information System (INIS)

    Selambakkannu, S.; Nor Azillah Fatimah Othman; Siti Fatahiyah Mohamad

    2016-01-01

    This paper focuses on pre-treated banana fibers as a trunk polymer and on the optimization of radiation-induced graft copolymerization process parameters. Pre-treated banana fiber was grafted with glycidyl methacrylate (GMA) via electron beam irradiation. Grafting parameters were optimized in terms of grafting yield, analyzed over a range of radiation doses, monomer concentrations and reaction times. The grafting yield was calculated gravimetrically for all process parameters. At 40 kGy, the grafting yield increased from 14% to 22.5% between 1 h and 24 h of reaction time. The grafting yield at 1% GMA was about 58%, and it increased to 187% at 3% GMA. Grafting of GMA onto the pre-treated banana fibers was successfully carried out, as confirmed by characterization using FTIR, SEM and TGA. (author)
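
    The gravimetric grafting-yield calculation referred to above is conventionally expressed as follows, with W0 the weight of the trunk fiber before grafting and Wg its weight after grafting and washing (a standard definition, not quoted from the record):

    ```latex
    \mathrm{GY}(\%) = \frac{W_g - W_0}{W_0} \times 100
    ```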

  20. Patient-specific parameter estimation in single-ventricle lumped circulation models under uncertainty

    Science.gov (United States)

    Schiavazzi, Daniele E.; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L.

    2017-01-01

    Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient-specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single-ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify the uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and then apply the framework to a cohort of four single-ventricle patients with Norwood physiology. PMID:27155892

  1. Three-dimensional optimization and sensitivity analysis of dental implant thread parameters using finite element analysis.

    Science.gov (United States)

    Geramizadeh, Maryam; Katoozian, Hamidreza; Amid, Reza; Kadkhodazadeh, Mahdi

    2018-04-01

    This study aimed to optimize the thread depth and pitch of a recently designed dental implant to provide uniform stress distribution, by means of a response surface optimization method available in finite element (FE) software. The sensitivity of the simulation to different mechanical parameters was also evaluated. A three-dimensional model of a tapered dental implant with micro-threads in the upper area and V-shaped threads in the rest of the body was modeled and analyzed using finite element analysis (FEA). An axial load of 100 N was applied to the top of the implants. The model was optimized for thread depth and pitch to determine the optimal stress distribution. In this analysis, micro-threads had 0.25 to 0.3 mm depth and 0.27 to 0.33 mm pitch, and V-shaped threads had 0.405 to 0.495 mm depth and 0.66 to 0.8 mm pitch. The optimized depth and pitch were 0.307 and 0.286 mm for micro-threads and 0.405 and 0.808 mm for V-shaped threads, respectively. In this design, the parameters with the greatest effect on stress distribution were the depth and pitch of the micro-threads, based on the sensitivity analysis results. Based on the results of this study, the optimal implant design has micro-threads with 0.307 mm depth and 0.286 mm pitch in the upper area and V-shaped threads with 0.405 mm depth and 0.808 mm pitch in the rest of the body. These results indicate that micro-thread parameters have a greater effect on stress and strain values.

  2. Problems of complex automation of process at a NPP

    International Nuclear Information System (INIS)

    Naumov, A.V.

    1981-01-01

    The importance of theoretical investigation in determining the level and quality of NPP automation is discussed. Achievements in this direction are briefly reviewed, using domestic NPPs as examples. Two models for solving the problem of distributing functions between the operator and technical means are outlined. The processes subject to automation are enumerated. The development of optimal methods for automatic power control of power units is one of the most important problems of NPP automation. Automation of discrete operations, especially during start-up, shut-down or emergency situations, is becoming important [ru]

  3. Automating calibration, sensitivity and uncertainty analysis of complex models using the R package Flexible Modeling Environment (FME): SWAT as an example

    Science.gov (United States)

    Wu, Y.; Liu, S.

    2012-01-01

    Parameter optimization and uncertainty issues are a great challenge for the application of large environmental models like the Soil and Water Assessment Tool (SWAT), a physically based hydrological model for simulating water and nutrient cycles at the watershed scale. In this study, we present a comprehensive modeling environment for SWAT, including automated calibration and sensitivity and uncertainty analysis capabilities, through integration with the R package Flexible Modeling Environment (FME). To address the challenges (e.g., calling the model in R and transferring variables between Fortran and R) of developing such a two-language coupling framework, 1) we converted the Fortran-based SWAT model to an R function (R-SWAT) using the RFortran platform, and alternatively 2) we compiled SWAT as a Dynamic Link Library (DLL). We then wrapped SWAT (via R-SWAT) with FME to perform complex applications including parameter identifiability, inverse modeling, and sensitivity and uncertainty analysis in the R environment. The final R-SWAT-FME framework has the following key functionalities: automatic initialization of R, running Fortran-based SWAT and R commands in parallel, transferring parameters and model output between SWAT and R, and inverse modeling with visualization. To examine this framework and demonstrate how it works, a case study simulating streamflow in the Cedar River Basin in Iowa in the United States was used, and we compared it with the built-in auto-calibration tool of SWAT for parameter optimization. Results indicate that both methods performed well and similarly in searching for a set of optimal parameters. Nonetheless, R-SWAT-FME is more attractive due to its instant visualization and its potential to take advantage of other R packages (e.g., for inverse modeling and statistical graphics). The methods presented in the paper are readily adaptable to other model applications that require capability for automated calibration, and sensitivity and uncertainty

  4. Parameters estimation online for Lorenz system by a novel quantum-behaved particle swarm optimization

    International Nuclear Information System (INIS)

    Gao Fei; Tong Hengqing; Li Zhuoqiu

    2008-01-01

    This paper proposes a novel quantum-behaved particle swarm optimization (NQPSO) for the estimation of the unknown parameters of chaotic systems, by transforming the estimation into the optimization of nonlinear functions. By means of three techniques, namely self-adaptive contraction of the search space, a boundary restriction strategy, and substituting a convex combination of the particles for their centre of mass, this paper achieves a quite effective search mechanism with a fine balance between exploitation and exploration. Details of applying the proposed method and other methods to the Lorenz system are given, and the experiments show that NQPSO has better adaptability, dependability and robustness. It is a successful approach to online estimation of unknown parameters, especially in cases with white noise.
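
    For the same estimation task, a plain least-squares baseline (not NQPSO itself, whose details the record does not give): recover the Lorenz parameters (sigma, rho, beta) by matching a simulated trajectory to observed data, here assuming noise-free observations of the full state over a short horizon.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def lorenz(t, s, sigma, rho, beta):
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    t_obs = np.linspace(0, 2, 80)
    s0 = [1.0, 1.0, 1.0]
    true = (10.0, 28.0, 8.0 / 3.0)
    obs = solve_ivp(lorenz, (0, 2), s0, args=true, t_eval=t_obs).y

    def residuals(p):
        """Mismatch between the trajectory under parameters p and the data."""
        sim = solve_ivp(lorenz, (0, 2), s0, args=tuple(p), t_eval=t_obs).y
        return (sim - obs).ravel()

    fit = least_squares(residuals, x0=[8.0, 25.0, 2.0])
    print("estimated (sigma, rho, beta):", np.round(fit.x, 3))
    ```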

  5. Optimization of the n-type HPGe detector parameters to theoretical determination of efficiency curves

    International Nuclear Information System (INIS)

    Rodriguez-Rodriguez, A.; Correa-Alfonso, C.M.; Lopez-Pino, N.; Padilla-Cabal, F.; D'Alessandro, K.; Corrales, Y.; Garcia-Alvarez, J. A.; Perez-Mellor, A.; Baly-Gil, L.; Machado, A.

    2011-01-01

    A highly detailed characterization of a 130 cm³ n-type HPGe detector, employed in low-background gamma spectrometry measurements, was done. Precise measured data and several Monte Carlo (MC) calculations were combined to optimize the detector parameters. The HPGe crystal location inside the aluminum end-cap, as well as its dimensions, including the borehole radius and height, were determined from frontal and lateral scans. Additionally, X-ray radiography and computed axial tomography (CT) studies were carried out to complement the information about detector features. Using seven calibrated point sources (²⁴¹Am, ¹³³Ba, ⁵⁷,⁶⁰Co, ¹³⁷Cs, ²²Na and ¹⁵²Eu), photo-peak efficiency curves at three different source-detector distances (SDD) were obtained. Taking the experimental values into account, an optimization procedure by means of MC simulations (MCNPX 2.6 code) was performed. MC efficiency curves were calculated by specifying the optimized detector parameters in the MCNPX input files. The calculated efficiencies agree with the empirical data, with relative deviations of less than 10%. (Author)

  6. [Simulation of vegetation indices optimizing under retrieval of vegetation biochemical parameters based on PROSPECT + SAIL model].

    Science.gov (United States)

    Wu, Ling; Liu, Xiang-Nan; Zhou, Bo-Tian; Liu, Chuan-Hao; Li, Lu-Feng

    2012-12-01

    This study analyzed the sensitivities of three vegetation biochemical parameters [chlorophyll content (Cab), leaf water content (Cw), and leaf area index (LAI)] to changes in canopy reflectance, considering the wavelength regions of canopy reflectance affected by each parameter, and selected three vegetation indices as the optimization comparison targets of the cost function. The Cab, Cw, and LAI were then estimated based on the particle swarm optimization algorithm and the PROSPECT + SAIL model. The results showed that retrieval with vegetation indices as the optimization comparison targets of the cost function was more efficient than retrieval using the full spectral reflectance. The correlation coefficients (R²) between the measured and estimated values of Cab, Cw, and LAI were 90.8%, 95.7%, and 99.7%, and the root mean square errors of Cab, Cw, and LAI were 4.73 μg cm⁻², 0.001 g cm⁻², and 0.08, respectively. These results suggest that adopting vegetation indices as the optimization comparison targets of the cost function can effectively improve the efficiency and precision of biochemical parameter retrieval based on the PROSPECT + SAIL model.
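
    The key idea, comparing vegetation indices rather than the full reflectance spectrum inside the cost function, can be shown compactly. The record does not state which three indices were used, so the sketch below substitutes NDVI, NDWI and a simple ratio index, with nominal band centres; the forward model is passed in as a callable that would be PROSPECT + SAIL in real use.

```python
import numpy as np

def indices(refl, wl):
    """Three illustrative vegetation indices from a reflectance spectrum."""
    band = lambda c: refl[np.argmin(np.abs(wl - c))]   # nearest-band lookup
    ndvi = (band(800) - band(670)) / (band(800) + band(670))
    ndwi = (band(860) - band(1240)) / (band(860) + band(1240))
    ratio = band(800) / band(670)
    return np.array([ndvi, ndwi, ratio])

def cost(params, measured_refl, wl, forward_model):
    """Squared error in index space; forward_model would be PROSPECT+SAIL."""
    sim = forward_model(params)
    return np.sum((indices(sim, wl) - indices(measured_refl, wl)) ** 2)

# Toy check with a fake forward model (a smooth spectrum shaped by params).
wl = np.arange(400, 2401, 10, dtype=float)
toy = lambda p: 0.1 + p[0] * np.exp(-((wl - 800) / 400) ** 2) + p[1] * 1e-5 * wl
print(cost([0.3, 2.0], toy([0.35, 1.8]), wl, toy))
```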

  7. Innovation of Methods for Measurement and Modelling of Twisted Pair Parameters

    Directory of Open Access Journals (Sweden)

    Lukas Cepa

    2011-01-01

    The goal of this paper is to optimize a measurement methodology for the most accurate broadband modelling of the characteristic impedance and other parameters of twisted pairs. Measured values and their comparison are presented in this article. An automated measurement facility was implemented at the Department of Telecommunication, Faculty of Electrical Engineering, Czech Technical University in Prague. The facility contains RF switches allowing measurements up to 300 MHz or 1 GHz. The twisted pair's parameters can be obtained by measurement, but for modelling its fundamental characteristics it is useful to define functions that describe the properties of the twisted pair. Its primary and secondary parameters depend mostly on frequency. For twisted pair deployment, we are interested in the frequency band from 1 MHz to 100 MHz.
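
    For the modelling half of the record, the secondary parameters of a line follow from the frequency-dependent primary parameters through the standard transmission-line relations, which a few lines of Python make concrete. The numeric R, L, G, C values below are illustrative placeholders, not measured twisted-pair data.

```python
import numpy as np

# Standard two-port transmission-line relations: characteristic impedance
# and propagation constant follow from the primary parameters R, L, G, C,
# which for a twisted pair are frequency dependent.
f = np.logspace(6, 8, 200)                 # 1 MHz .. 100 MHz
w = 2 * np.pi * f
R = 170.0 * np.sqrt(1 + f / 1e6)           # ohm/km, toy skin-effect growth
L = 0.6e-3 * np.ones_like(f)               # H/km
G = 1e-9 * f                               # S/km, toy dielectric loss
C = 50e-9 * np.ones_like(f)                # F/km

Z0 = np.sqrt((R + 1j * w * L) / (G + 1j * w * C))     # secondary parameters
gamma = np.sqrt((R + 1j * w * L) * (G + 1j * w * C))
print(f"|Z0| at 1 MHz: {abs(Z0[0]):.1f} ohm, at 100 MHz: {abs(Z0[-1]):.1f} ohm")
```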

  8. Estimating model parameters for an impact-produced shock-wave simulation: Optimal use of partial data with the extended Kalman filter

    International Nuclear Information System (INIS)

    Kao, Jim; Flicker, Dawn; Ide, Kayo; Ghil, Michael

    2006-01-01

    This paper builds upon our recent data assimilation work with the extended Kalman filter (EKF) method [J. Kao, D. Flicker, R. Henninger, S. Frey, M. Ghil, K. Ide, Data assimilation with an extended Kalman filter for an impact-produced shock-wave study, J. Comp. Phys. 196 (2004) 705-723]. The purpose is to test the capability of the EKF in optimizing a model's physical parameters. The problem is to simulate the evolution of a shock produced through a high-speed flyer plate. In the earlier work, we showed that the EKF allows one to estimate the evolving state of the shock wave from a single pressure measurement, assuming that all model parameters are known. In the present paper, we show that imperfectly known model parameters can also be estimated, along with the evolving model state, from the same single measurement. Model parameter optimization with the EKF is achieved through a simple modification of the original EKF formalism: the model parameters are included in an augmented state variable vector. While the regular state variables are governed by both deterministic and stochastic forcing mechanisms, the parameters are subject only to the latter. The optimally estimated model parameters are thus obtained through a unified assimilation operation. We show that improving the accuracy of the model parameters also improves the state estimate. The time variation of the optimized model parameters results from blending the data with the corresponding values generated from the model, and lies within a small range, less than 2%, of the original model's parameter values. The solution computed with the optimized parameters performs considerably better and has a smaller total variance than its counterpart using the original time-constant parameters. These results indicate that the model parameters play a dominant role in the performance of the shock-wave hydrodynamic code at hand.
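
    The augmentation trick described here is generic and worth seeing on a toy system: append the unknown parameter to the state vector, give it stochastic forcing only, and run the usual EKF predict-update cycle. The sketch below estimates the decay parameter of a scalar linear system from noisy observations; it is a minimal illustration of the formalism, not the shock-wave application.

```python
import numpy as np

# Toy system: x_{k+1} = a * x_k + w,  y_k = x_k + v, with decay parameter a
# unknown.  Augmented state z = [x, a]; the parameter row of the transition
# is a random walk, i.e. it is driven by stochastic forcing only.
rng = np.random.default_rng(1)
a_true, n = 0.95, 400
x, ys = 1.0, []
for _ in range(n):                         # simulate noisy "measurements"
    x = a_true * x + rng.normal(0, 0.05)
    ys.append(x + rng.normal(0, 0.05))

z = np.array([ys[0], 0.5])                 # initial state / parameter guess
P = np.diag([1.0, 1.0])
Q = np.diag([0.05 ** 2, 1e-5])             # parameter process noise kept small
Rv = 0.05 ** 2
H = np.array([[1.0, 0.0]])
for y in ys:
    # predict: f(z) = [a * x, a], Jacobian F = [[a, x], [0, 1]]
    F = np.array([[z[1], z[0]], [0.0, 1.0]])
    z = np.array([z[1] * z[0], z[1]])
    P = F @ P @ F.T + Q
    # update with the single available measurement
    S = (H @ P @ H.T)[0, 0] + Rv
    K = (P @ H.T) / S                      # Kalman gain, shape (2, 1)
    z = z + K[:, 0] * (y - z[0])
    P = (np.eye(2) - K @ H) @ P

print(f"estimated decay parameter: {z[1]:.3f} (true value {a_true})")
```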

  9. Automated firewall analytics design, configuration and optimization

    CERN Document Server

    Al-Shaer, Ehab

    2014-01-01

    This book provides a comprehensive and in-depth study of automated firewall policy analysis for designing, configuring and managing distributed firewalls in large-scale enterprise networks. It presents methodologies, techniques and tools for researchers as well as professionals to understand the challenges and improve the state of the art of managing firewalls systematically in both research and application domains. Chapters explore set theory, managing firewall configurations globally and consistently, access control lists with encryption, and authentication such as IPSec policies. The author

  10. [Temporal and spatial heterogeneity analysis of optimal value of sensitive parameters in ecological process model: The BIOME-BGC model as an example].

    Science.gov (United States)

    Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying

    2018-01-01

    Ecological process models are powerful tools for studying the terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable values are chosen for them has an important impact on the simulation results. The sensitivity and optimization of model parameters have been analyzed and discussed in many studies, but the temporal and spatial heterogeneity of the optimal parameter values has received less attention. In this paper, the BIOME-BGC model was used as an example. For evergreen broad-leaved forest, deciduous broad-leaved forest and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. An objective function was constructed using the simulated annealing algorithm combined with flux data to obtain the monthly optimal values of the sensitive parameters at each site. We then constructed a temporal heterogeneity judgment index, a spatial heterogeneity judgment index, and a combined temporal and spatial heterogeneity judgment index to quantitatively analyze the heterogeneity of the optimal values of the model's sensitive parameters. The results showed that the sensitivity of BIOME-BGC parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters mostly presented temporal and spatial heterogeneity to degrees that varied with vegetation type. Sensitive parameters related to vegetation physiology and ecology showed relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally showed larger heterogeneity. In addition, the temporal heterogeneity of the optimal values of the sensitive parameters showed a significant linear correlation
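
    The calibration step (an objective built from model-versus-flux mismatch, minimized by simulated annealing) can be sketched independently of BIOME-BGC itself. In the toy below, a two-parameter exponential stands in for the model and synthetic values stand in for the flux observations; scipy's dual_annealing plays the role of the annealing optimizer.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Synthetic "monthly flux" observations; a real study would use tower data
# and the model below would be a BIOME-BGC run, not a two-parameter toy.
rng = np.random.default_rng(0)
months = np.arange(12)
obs_flux = 2.0 * np.exp(-0.3 * months) + rng.normal(0, 0.05, 12)

def model_flux(theta):
    amp, rate = theta                      # stand-ins for sensitive parameters
    return amp * np.exp(-rate * months)

def objective(theta):
    # sum of squared monthly mismatches between model and flux data
    return np.sum((model_flux(theta) - obs_flux) ** 2)

res = dual_annealing(objective, bounds=[(0.1, 5.0), (0.01, 1.0)], seed=42)
print("monthly-optimal parameter values:", res.x.round(3))
```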

  11. Multi-response optimization of process parameters in friction stir welded AM20 magnesium alloy by Taguchi grey relational analysis

    Directory of Open Access Journals (Sweden)

    Prakash Kumar Sahu

    2015-03-01

    The purpose of this paper is to optimize the process parameters to obtain better mechanical properties of friction stir welded AM20 magnesium alloy using Taguchi grey relational analysis (GRA). The process parameters considered are welding speed, tool rotation speed, shoulder diameter and plunge depth. The experiments were carried out using Taguchi's L18 factorial design of experiments. The process parameters were optimized and ranked based on the GRA, and the percentage influence of each process parameter on the weld quality was quantified. A validation run conducted at the optimal process condition obtained from the analysis confirmed the improvement in the mechanical properties of the joint. The study also shows the feasibility of GRA with the Taguchi technique for improving the welding quality of magnesium alloy.
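
    Grey relational analysis itself is a short computation: normalize each response, compute grey relational coefficients against the ideal sequence, and average them into a grade per run. A minimal sketch, with an invented response matrix in place of the AM20 weld measurements:

```python
import numpy as np

# Hypothetical response matrix: rows = experimental runs, columns = quality
# characteristics measured larger-the-better (e.g. tensile strength, hardness).
y = np.array([[180., 55.], [210., 60.], [195., 62.], [225., 58.]])

# 1) normalize each response to [0, 1] (larger-the-better form)
norm = (y - y.min(0)) / (y.max(0) - y.min(0))

# 2) grey relational coefficient against the ideal sequence (all ones)
delta = np.abs(1.0 - norm)
zeta = 0.5                                 # distinguishing coefficient
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# 3) grey relational grade = mean coefficient per run; highest grade wins
grade = grc.mean(axis=1)
print("grades:", grade.round(3), "-> best run:", grade.argmax() + 1)
```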

  12. Optimization of design and operating parameters in a pilot scale Jameson cell for slime coal cleaning

    Energy Technology Data Exchange (ETDEWEB)

    Hacifazlioglu, Hasan; Toroglu, Ihsan [Department of Mining Engineering, University of Karaelmas, 67100 (Turkey)

    2007-07-15

    The Jameson flotation cell has been commonly used to treat a variety of ores (lead, zinc, copper, etc.), coal and industrial minerals at commercial scale since 1989. It is especially known to be highly efficient in fine and ultrafine coal recovery. However, although the Jameson cell has quite a simple structure, it may be largely inefficient if the design and operating parameters chosen are not appropriate. In this study, the design and operating parameters of a pilot-scale Jameson cell were optimized to obtain the desired metallurgical performance in slime coal flotation. The optimized design parameters are the nozzle type, the height of the nozzle above the pulp level, the downcomer diameter and the immersion depth of the downcomer. Among the operating parameters optimized are the collector dosage, the frother dosage, the percentage of solids and the froth height. Under the optimum conditions, a clean coal with an ash content of 14.90% was obtained from the sample slime of 45.30% ash, with a combustible recovery of 74.20%. In addition, a new type of nozzle was developed for the Jameson cell, which led to an increase of about 9% in the combustible recovery value.

  13. Optimizing supercritical antisolvent process parameters to minimize the particle size of paracetamol nanoencapsulated in L-polylactide

    Directory of Open Access Journals (Sweden)

    Kalani M

    2011-05-01

    Mahshid Kalani, Robiah Yunus, Norhafizah Abdullah. Chemical and Environmental Engineering, Faculty of Engineering, University Putra Malaysia, Selangor Darul Ehsan, Malaysia. Background: The aim of this study was to optimize the different process parameters, including pressure, temperature, and polymer concentration, to produce fine small spherical particles with a narrow particle size distribution using a supercritical antisolvent method for drug encapsulation. The interaction between different process parameters was also investigated. Methods and results: The optimized process parameters resulted in production of nanoencapsulated paracetamol in L-polylactide with a mean diameter of approximately 300 nm at 120 bar, 30°C, and a polymer concentration of 16 ppm. Thermogravimetric analysis illustrated the thermal characteristics of the nanoparticles. The high electrical charge on the surface of the nanoparticles caused the particles to repel each other, with the high negative zeta potential preventing flocculation. Conclusion: Our results illustrate the effect of different process parameters on particle size and morphology, and validate results obtained via RSM statistical software. Furthermore, the in vitro drug-release profile is consistent with a Korsmeyer–Peppas kinetic model. Keywords: supercritical, antisolvent, encapsulation, nanoparticles, biodegradable polymer, optimization, drug delivery
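
    Since the record reports Korsmeyer-Peppas release kinetics, Mt/M∞ = k·tⁿ, the model constants follow from a log-log linear fit to the release data. The fractions below are invented illustration values, not the paper's measurements.

```python
import numpy as np

# Release-fraction data (invented): time in hours, cumulative fraction
# released, restricted to the early portion where the model is valid.
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
frac = np.array([0.08, 0.13, 0.21, 0.33, 0.52])

# log(Mt/Minf) = log k + n log t, so a straight-line fit gives both constants
n, log_k = np.polyfit(np.log(t), np.log(frac), 1)
print(f"k = {np.exp(log_k):.3f}, n = {n:.2f}")   # exponent n indicates the
                                                 # transport mechanism
```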

  14. Optimization of operating parameters in polysilicon chemical vapor deposition reactor with response surface methodology

    Science.gov (United States)

    An, Li-sha; Liu, Chun-jiao; Liu, Ying-wen

    2018-05-01

    In the polysilicon chemical vapor deposition (CVD) reactor, multiple operating parameters interact in complex ways to affect the polysilicon output. It is therefore very important to address the coupling of multiple parameters and solve the optimization problem in a computationally efficient manner. Here, we adopted Response Surface Methodology (RSM) to analyze the complex coupling effects of different operating parameters on the silicon deposition rate (R) and thereby achieve effective optimization of the silicon CVD system. Based on a finite set of numerical experiments, an accurate RSM regression model is obtained and applied to predict R for different operating parameters, including temperature (T), pressure (P), inlet velocity (V), and inlet mole fraction of H2 (M). An analysis of variance is conducted to assess the adequacy of the regression model and examine the statistical significance of each factor. The resulting optimum combination of operating parameters for the silicon CVD reactor is: T = 1400 K, P = 3.82 atm, V = 3.41 m/s, M = 0.91. Validation tests at the optimum solution show that the results are in good agreement with those from the CFD model, with deviations of the predicted values of less than 4.19%. This work provides theoretical guidance for operating the polysilicon CVD process.
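
    The RSM step amounts to fitting a full second-order polynomial surface to the design-point results and searching it for the best predicted setting. The sketch below does this for two coded factors standing in for the four (T, P, V, M), on synthetic data.

```python
import numpy as np
from itertools import combinations_with_replacement

# Fit R(x) = b0 + sum_i bi*xi + sum_{i<=j} bij*xi*xj to design-point data,
# then pick the best predicted setting over the coded region.
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (30, 2))                        # coded factor levels
y = 5 - (X[:, 0] - 0.4) ** 2 - 2 * (X[:, 1] + 0.2) ** 2 \
    + rng.normal(0, 0.05, 30)                          # toy deposition rate

def design_matrix(X):
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# crude optimum search on a grid over the coded region
g = np.linspace(-1, 1, 101)
grid = np.array(np.meshgrid(g, g)).reshape(2, -1).T
pred = design_matrix(grid) @ beta
best = grid[pred.argmax()]
print("predicted optimum (coded units):", best.round(2))
```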

  15. Process parameter optimization based on principal components analysis during machining of hardened steel

    Directory of Open Access Journals (Sweden)

    Suryakant B. Chandgude

    2015-09-01

    The optimum selection of process parameters plays an important role in improving the surface finish, minimizing tool wear, increasing the material removal rate and reducing the machining time of any machining process. In this paper, the optimum parameters for machining AISI D2 hardened steel using a solid carbide TiAlN-coated end mill have been investigated. For the optimization of process parameters with multiple quality characteristics, the principal components analysis method has been adopted in this work. The confirmation experiments revealed that the principal components analysis method is a useful tool for improving cutting performance.
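
    The principal-components step consolidates several quality characteristics into one composite index per run, which is then used for ranking. A minimal sketch with invented responses (the smaller-the-better columns are sign-flipped first so every column is larger-the-better):

```python
import numpy as np

# Invented responses per run: surface roughness Ra, material removal rate,
# tool wear (the Ra and wear columns are smaller-the-better).
y = np.array([[0.8, 120., 35.], [0.6, 140., 30.],
              [0.7, 150., 28.], [0.5, 135., 33.]])
y = y * np.array([-1, 1, -1])              # make every column larger-the-better

z = (y - y.mean(0)) / y.std(0)             # standardize the responses
vals, vecs = np.linalg.eigh(np.cov(z.T))   # eigen-decomposition, ascending
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]

weights = vals / vals.sum()                # explained-variance weights
composite = (z @ vecs) @ weights           # one index per experimental run
print("composite index:", composite.round(3),
      "-> best run:", composite.argmax() + 1)
```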

  16. Automated selection of the optimal cardiac phase for single-beat coronary CT angiography reconstruction

    International Nuclear Information System (INIS)

    Stassi, D.; Ma, H.; Schmidt, T. G.; Dutta, S.; Soderman, A.; Pazzani, D.; Gros, E.; Okerlund, D.

    2016-01-01

    Purpose: Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. Methods: An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three
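
    Once per-slice, per-phase image-quality scores exist (the record builds them from through-plane vessel circularity, 4πA/P², and edge strength), phase selection reduces to masking vessel-free slices and taking the best aggregate. The sketch below assumes a precomputed score array and shows only that aggregation and selection logic.

```python
import numpy as np

# Assume IQ scores per (phase, slice) were computed upstream from vessel
# circularity and edge strength; random values stand in for them here.
rng = np.random.default_rng(7)
phases = np.arange(40, 82, 2)              # candidate phases, % of R-R cycle
iq = rng.random((len(phases), 20))         # rows: phases, columns: slices
has_vessel = iq > 0.2                      # consistency mask: keep only
                                           # slices with through-plane vessels

scores = np.where(has_vessel, iq, np.nan)
phase_score = np.nanmean(scores, axis=1)   # aggregate IQ across valid slices
print(f"selected phase: {phases[phase_score.argmax()]}% of the cardiac cycle")
```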

  17. Automation of RELAP5 input calibration and code validation using genetic algorithm

    International Nuclear Information System (INIS)

    Phung, Viet-Anh; Kööp, Kaspar; Grishchenko, Dmitry; Vorobyev, Yury; Kudinov, Pavel

    2016-01-01

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the

  18. Automation of RELAP5 input calibration and code validation using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)

    2016-04-15

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
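
    The fitness construction that records 17 and 18 emphasize (several normalized, weighted SRQ discrepancies combined into one scalar) can be shown with a bare-bones genetic algorithm. The two-output toy model, weights and bounds below are placeholders; the real application calibrates RELAP5 inputs against natural-circulation data.

```python
import numpy as np

# Experimental SRQs (placeholders), e.g. maximum flow rate and oscillation
# period from a two-phase natural-circulation test.
rng = np.random.default_rng(5)
exp_srq = np.array([1.8, 12.0])

def model_srq(theta):
    """Toy two-output 'code' standing in for a RELAP5 run."""
    a, b = theta
    return np.array([a * np.sqrt(b), 2.0 * np.pi * np.sqrt(b / a)])

def fitness(theta, w=np.array([0.5, 0.5])):
    rel = (model_srq(theta) - exp_srq) / exp_srq   # normalize each SRQ
    return np.sum(w * rel ** 2)                    # weighted discrepancy

lo, hi = np.array([0.1, 0.1]), np.array([5.0, 20.0])
pop = rng.uniform(lo, hi, (40, 2))
for _ in range(60):
    f = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(f)[:20]]              # truncation selection
    i, j = rng.integers(0, 20, 20), rng.integers(0, 20, 20)
    kids = 0.5 * (parents[i] + parents[j])         # arithmetic crossover
    kids += rng.normal(0, 0.05, kids.shape) * (hi - lo)   # Gaussian mutation
    pop = np.clip(np.vstack([parents, kids]), lo, hi)

best = pop[np.argmin([fitness(p) for p in pop])]
print("calibrated input parameters:", best.round(3))
```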

  19. Experimental design approach to the process parameter optimization for laser welding of martensitic stainless steels in a constrained overlap configuration

    Science.gov (United States)

    Khan, M. M. A.; Romoli, L.; Fiaschi, M.; Dini, G.; Sarri, F.

    2011-02-01

    This paper presents an experimental design approach to process parameter optimization for the laser welding of martensitic AISI 416 and AISI 440FSe stainless steels in a constrained overlap configuration in which the outer shell was 0.55 mm thick. To determine the optimal laser-welding parameters, a set of mathematical models was developed relating the welding parameters to each of the weld characteristics; these were validated both statistically and experimentally. The quality criteria set for the weld to determine the optimal parameters were the minimization of weld width and the maximization of weld penetration depth, resistance length and shearing force. Laser power and welding speed in the ranges 855-930 W and 4.50-4.65 m/min, respectively, with a fiber diameter of 300 μm, were identified as the optimal set of process parameters. However, the laser power can be reduced to 800-840 W and the welding speed increased to 4.75-5.37 m/min to obtain stronger and better welds.
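
    One common way to combine the record's competing criteria (minimize weld width, maximize penetration depth, resistance length and shearing force) is a Derringer-style desirability function; the record itself optimizes fitted regression models, so the linear response surfaces below are invented stand-ins.

```python
import numpy as np

def d_max(y, lo, hi):                      # larger-the-better desirability
    return np.clip((y - lo) / (hi - lo), 0, 1)

def d_min(y, lo, hi):                      # smaller-the-better desirability
    return np.clip((hi - y) / (hi - lo), 0, 1)

P = np.linspace(800, 950, 60)              # laser power, W
S = np.linspace(4.4, 5.4, 60)              # welding speed, m/min
PP, SS = np.meshgrid(P, S)

width = 0.4 + 0.0006 * PP - 0.05 * SS      # toy response models, not the
depth = 0.2 + 0.0015 * PP - 0.08 * SS      # paper's fitted equations
force = 900 + 2.0 * PP - 60 * SS

D = (d_min(width, 0.3, 1.0) * d_max(depth, 0.8, 1.6)
     * d_max(force, 2000, 2800)) ** (1 / 3)   # geometric-mean desirability
i = np.unravel_index(D.argmax(), D.shape)
print(f"best setting: P = {PP[i]:.0f} W, speed = {SS[i]:.2f} m/min")
```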

  20. Optimization of CVD parameters for long ZnO NWs grown on ITO

    Indian Academy of Sciences (India)

    The optimization of chemical vapour deposition (CVD) parameters for long and vertically aligned (VA) ZnO nanowires (NWs) was investigated. Single-crystal ZnO NWs were successfully synthesized on an indium tin oxide (ITO)-coated glass substrate. First, the conductive side of the ITO–glass substrate was ...