WorldWideScience

Sample records for automated parameter optimization

  1. IPO: a tool for automated optimization of XCMS parameters.

    Science.gov (United States)

    Libiseller, Gunnar; Dvorzak, Michaela; Kleb, Ulrike; Gander, Edgar; Eisenberg, Tobias; Madeo, Frank; Neumann, Steffen; Trausinger, Gert; Sinner, Frank; Pieber, Thomas; Magnes, Christoph

    2015-04-16

Untargeted metabolomics generates a huge amount of data, and software packages for automated data processing are crucial to process these data successfully. A variety of such packages exist, but the outcome of data processing strongly depends on the algorithm parameter settings; if these are not chosen carefully, suboptimal settings can easily lead to biased results, so the parameter settings themselves require optimization. Several parameter optimization approaches have been proposed, but a software package for parameter optimization that is fast, widely applicable, and free of intricate experimental labeling steps has been missing. We implemented the software package IPO ('Isotopologue Parameter Optimization'), which is fast, requires no labeling steps, and is applicable to data from different kinds of samples, different liquid chromatography - high resolution mass spectrometry methods, and different instruments. IPO optimizes XCMS peak picking parameters by using natural, stable (13)C isotopic peaks to calculate a peak picking score. Retention time correction is optimized by minimizing relative retention time differences within peak groups. Grouping parameters are optimized by maximizing the number of peak groups that show one peak from each injection of a pooled sample. The different parameter settings are generated by design of experiments, and the resulting scores are evaluated using response surface models. IPO was tested on three different data sets, each consisting of a training set and a test set. IPO yielded an increase in reliable groups (146% - 361%), a decrease in non-reliable groups (3% - 8%) and a reduction of the retention time deviation to one third. IPO was successfully applied to data derived from liquid chromatography coupled to high resolution mass spectrometry from three studies with different sample types and different chromatographic methods and devices. 
We were also able to show the potential of IPO to
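The design-of-experiments plus response-surface strategy the abstract describes can be sketched generically. In this sketch the score function is a hypothetical stand-in (in IPO it would come from running XCMS and scoring isotopologue peaks), and the single `ppm` parameter and its values are illustrative, not IPO's actual defaults:

```python
import numpy as np

# Hypothetical stand-in for a peak-picking score; the true optimum is at ppm=15.
def score(ppm):
    return -(ppm - 15.0) ** 2 + 100.0

# Design of experiments: evaluate the score at a few design points.
design = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
responses = np.array([score(p) for p in design])

# Fit a quadratic response-surface model y = a*p^2 + b*p + c to the scores.
a, b, c = np.polyfit(design, responses, 2)

# The surface's stationary point suggests the next (or final) parameter value.
p_opt = -b / (2 * a)
print(round(p_opt, 2))  # -> 15.0
```

In practice the surface is refit around the current optimum over several rounds, narrowing the design region each time.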

  2. An automated analysis workflow for optimization of force-field parameters using neutron scattering data

    Energy Technology Data Exchange (ETDEWEB)

    Lynch, Vickie E.; Borreguero, Jose M. [Neutron Data Analysis & Visualization Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Bhowmik, Debsindhu [Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Ganesh, Panchapakesan; Sumpter, Bobby G. [Center for Nanophase Material Sciences, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Proffen, Thomas E. [Neutron Data Analysis & Visualization Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Goswami, Monojoy, E-mail: goswamim@ornl.gov [Center for Nanophase Material Sciences, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States)

    2017-07-01

Highlights: • An automated workflow to optimize force-field parameters. • The workflow was used to optimize force-field parameters for a system containing nanodiamond and tRNA. • The mechanism relies on molecular dynamics simulation and neutron scattering experimental data. • The workflow can be generalized to other experimental and simulation techniques. - Abstract: Large-scale simulations and data analysis are often required to explain neutron scattering experiments and to establish a connection between the fundamental physics at the nanoscale and the data probed by neutrons. However, to perform simulations at experimental conditions it is critical to use correct force-field (FF) parameters, which are unfortunately not available for most complex experimental systems. In this work, we have developed a workflow optimization technique to provide optimized FF parameters by comparing molecular dynamics (MD) simulations to neutron scattering data. We describe the workflow in detail using an example system consisting of tRNA and hydrophilic nanodiamonds (ND) in a deuterated water (D₂O) environment. Quasi-elastic neutron scattering (QENS) data show a faster motion of the tRNA in the presence of nanodiamonds than without them. To compare the QENS and MD results quantitatively, a proper choice of FF parameters is necessary. We use an efficient workflow to optimize the FF parameters between the hydrophilic nanodiamond and water by comparing to the QENS data. Our results show that we can obtain accurate FF parameters by using this technique. The workflow can be generalized to other types of neutron data for FF optimization, such as vibrational spectroscopy and spin echo.
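The core of such a workflow is an outer loop that varies a force-field parameter, runs a simulation, and scores the simulated observable against the experimental curve. The sketch below is a toy version of that idea: the "simulation" is a QENS-like exponential relaxation whose relaxation time depends on a hypothetical interaction-strength parameter `eps`, and the "experiment" is synthetic data; a real workflow would run MD and compute the scattering function instead:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 50)

def simulate(eps):
    # Assumed mapping from interaction strength to relaxation time (illustrative).
    tau = 2.0 + 3.0 * eps
    return np.exp(-t / tau)

# Synthetic "experimental" data, generated with eps = 0.4.
experiment = simulate(0.4)

# Grid search over candidate eps values, scoring by sum of squared residuals.
candidates = np.linspace(0.0, 1.0, 101)
errors = [np.sum((simulate(e) - experiment) ** 2) for e in candidates]
best = candidates[int(np.argmin(errors))]
print(float(best))  # -> 0.4
```

A production workflow would replace the grid search with a smarter optimizer and dispatch each `simulate` call to an MD engine on a cluster.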

  3. Development of a neuro-fuzzy technique for automated parameter optimization of inverse treatment planning

    Directory of Open Access Journals (Sweden)

    Wenz Frederik

    2009-09-01

Background: Parameter optimization in the process of inverse treatment planning for intensity modulated radiation therapy (IMRT) is mainly conducted by human planners in order to create a plan with the desired dose distribution. To automate this tedious process, an artificial intelligence (AI) guided system was developed and examined. Methods: The AI system can automatically accomplish the optimization process based on prior knowledge operated by several fuzzy inference systems (FIS). Prior knowledge, which was collected from human planners during their routine trial-and-error process of inverse planning, first has to be "translated" into a set of "if-then rules" for driving the FISs. To minimize subjective error, which could be costly during this knowledge acquisition process, it is necessary to find a quantitative method to accomplish this task automatically. A well-developed machine learning technique, based on an adaptive neuro fuzzy inference system (ANFIS), was introduced in this study. Based on this approach, prior knowledge of a fuzzy inference system can be quickly collected from observation data (clinically used constraints). The learning capability and the accuracy of such a system were analyzed by generating multiple FIS from data collected from an AI system with known settings and rules. Results: Multiple analyses showed good agreement of FIS and ANFIS according to rules (error of the output values of ANFIS based on the training data from FIS of 7.77 ± 0.02%) and membership functions (3.9%), thus suggesting that the "behavior" of an FIS can be propagated to another, based on this process. The initial experimental results on a clinical case showed that ANFIS is an effective way to build FIS from practical data, and analysis of ANFIS and FIS with clinical cases showed good planning results provided by ANFIS. OAR volumes encompassed by characteristic percentages of isodoses were reduced by a mean of between 0 and 28%. Conclusion: The

  4. Development of a neuro-fuzzy technique for automated parameter optimization of inverse treatment planning.

    Science.gov (United States)

    Stieler, Florian; Yan, Hui; Lohr, Frank; Wenz, Frederik; Yin, Fang-Fang

    2009-09-25

Parameter optimization in the process of inverse treatment planning for intensity modulated radiation therapy (IMRT) is mainly conducted by human planners in order to create a plan with the desired dose distribution. To automate this tedious process, an artificial intelligence (AI) guided system was developed and examined. The AI system can automatically accomplish the optimization process based on prior knowledge operated by several fuzzy inference systems (FIS). Prior knowledge, which was collected from human planners during their routine trial-and-error process of inverse planning, has first to be "translated" to a set of "if-then rules" for driving the FISs. To minimize subjective error which could be costly during this knowledge acquisition process, it is necessary to find a quantitative method to automatically accomplish this task. A well-developed machine learning technique, based on an adaptive neuro fuzzy inference system (ANFIS), was introduced in this study. Based on this approach, prior knowledge of a fuzzy inference system can be quickly collected from observation data (clinically used constraints). The learning capability and the accuracy of such a system were analyzed by generating multiple FIS from data collected from an AI system with known settings and rules. Multiple analyses showed good agreements of FIS and ANFIS according to rules (error of the output values of ANFIS based on the training data from FIS of 7.77 ± 0.02%) and membership functions (3.9%), thus suggesting that the "behavior" of an FIS can be propagated to another, based on this process. The initial experimental results on a clinical case showed that ANFIS is an effective way to build FIS from practical data, and analysis of ANFIS and FIS with clinical cases showed good planning results provided by ANFIS. OAR volumes encompassed by characteristic percentages of isodoses were reduced by a mean of between 0 and 28%. The study demonstrated a feasible way to automatically perform
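The fuzzy-inference step at the heart of such a system can be illustrated with a minimal zero-order Sugeno FIS. The rules, membership functions, and numbers below are illustrative stand-ins, not the paper's actual clinical rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def adjust_weight(dose_violation):
    """Map a normalized dose-constraint violation (0..1) to a multiplicative
    adjustment of an optimization weight, mimicking a planner's heuristic."""
    # Rule 1: IF violation is small THEN keep the weight (output factor 1.0)
    # Rule 2: IF violation is large THEN raise the weight (output factor 2.0)
    mu_small = tri(dose_violation, -0.5, 0.0, 0.6)
    mu_large = tri(dose_violation, 0.4, 1.0, 1.5)
    # Defuzzify: firing-strength-weighted average of the rule outputs.
    num = mu_small * 1.0 + mu_large * 2.0
    den = mu_small + mu_large
    return num / den if den else 1.0

print(adjust_weight(0.0))  # -> 1.0  (no violation: leave the weight alone)
print(adjust_weight(1.0))  # -> 2.0  (full violation: double the weight)
```

ANFIS's contribution is that the membership-function parameters (the `a, b, c` triples) and rule outputs are learned from observation data rather than hand-written as above.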

  5. Development of a neuro-fuzzy technique for automated parameter optimization of inverse treatment planning

    International Nuclear Information System (INIS)

    Stieler, Florian; Yan, Hui; Lohr, Frank; Wenz, Frederik; Yin, Fang-Fang

    2009-01-01

    Parameter optimization in the process of inverse treatment planning for intensity modulated radiation therapy (IMRT) is mainly conducted by human planners in order to create a plan with the desired dose distribution. To automate this tedious process, an artificial intelligence (AI) guided system was developed and examined. The AI system can automatically accomplish the optimization process based on prior knowledge operated by several fuzzy inference systems (FIS). Prior knowledge, which was collected from human planners during their routine trial-and-error process of inverse planning, has first to be 'translated' to a set of 'if-then rules' for driving the FISs. To minimize subjective error which could be costly during this knowledge acquisition process, it is necessary to find a quantitative method to automatically accomplish this task. A well-developed machine learning technique, based on an adaptive neuro fuzzy inference system (ANFIS), was introduced in this study. Based on this approach, prior knowledge of a fuzzy inference system can be quickly collected from observation data (clinically used constraints). The learning capability and the accuracy of such a system were analyzed by generating multiple FIS from data collected from an AI system with known settings and rules. Multiple analyses showed good agreements of FIS and ANFIS according to rules (error of the output values of ANFIS based on the training data from FIS of 7.77 ± 0.02%) and membership functions (3.9%), thus suggesting that the 'behavior' of an FIS can be propagated to another, based on this process. The initial experimental results on a clinical case showed that ANFIS is an effective way to build FIS from practical data, and analysis of ANFIS and FIS with clinical cases showed good planning results provided by ANFIS. OAR volumes encompassed by characteristic percentages of isodoses were reduced by a mean of between 0 and 28%. The study demonstrated a feasible way

  6. Multiscale Collaborative Optimization of Processing Parameters for Carbon Fiber/Epoxy Laminates Fabricated by High-Speed Automated Fiber Placement

    Directory of Open Access Journals (Sweden)

    Zhenyu Han

    2016-01-01

Processing optimization is an important means to inhibit manufacturing defects efficiently. However, processing optimization based on experiments or macroscopic theories in high-speed automated fiber placement (AFP) suffers from some restrictions, because the multiscale effect of laying tows and their manufacturing defects cannot be considered. In this paper, processing parameters, including compaction force, laying speed, and preheating temperature, are optimized by multiscale collaborative optimization in the AFP process. Firstly, the relation between cracks and strain energy is established so that the likelihood of crack formation can be assessed using strain energy or its density. Following that, an antisequential hierarchical multiscale collaborative optimization method is presented to resolve the multiscale effect of structure and mechanical properties for laying tows or cracks in the high-speed automated fiber placement process. According to the above method, and taking carbon fiber/epoxy tow as an example, multiscale mechanical properties of the laying tow under different processing parameters are investigated through simulation, including recoverable strain energy (ALLSE) at the macroscale, strain energy density (SED) at the mesoscale, and interface absorbability and matrix fluidity at the microscale. Finally, the response surface method (RSM) is used to optimize the processing parameters. Two groups of processing parameters with higher desirability are obtained, achieving the purpose of multiscale collaborative optimization.
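The final response-surface step can be sketched for two of the parameters the abstract names. Here the process response is a synthetic stand-in (a real study would use the simulated ALLSE/SED values), and the parameter ranges are invented for illustration:

```python
import numpy as np

# Hypothetical process response as a function of compaction force and laying speed.
def quality(force, speed):
    return -(force - 1.5) ** 2 - 2.0 * (speed - 0.8) ** 2

# Small factorial design of experiments over the two parameters.
pts = [(f, s) for f in (1.0, 1.5, 2.0) for s in (0.4, 0.8, 1.2)]
X = np.array([[1, f, s, f * f, s * s, f * s] for f, s in pts])
y = np.array([quality(f, s) for f, s in pts])

# Least-squares fit of a full quadratic response surface.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Evaluate the fitted surface on a fine grid and take the maximizer.
grid = [(f, s) for f in np.linspace(1.0, 2.0, 21) for s in np.linspace(0.4, 1.2, 17)]
pred = [np.array([1, f, s, f * f, s * s, f * s]) @ coef for f, s in grid]
f_opt, s_opt = grid[int(np.argmax(pred))]
print(round(f_opt, 2), round(s_opt, 2))  # -> 1.5 0.8
```

With several responses (macro-, meso-, microscale), each would get its own surface and a combined desirability function would be maximized instead of a single prediction.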

  7. Parameter Extraction for PSpice Models by means of an Automated Optimization Tool – An IGBT model Study Case

    DEFF Research Database (Denmark)

    Suárez, Carlos Gómez; Reigosa, Paula Diaz; Iannuzzo, Francesco

    2016-01-01

An original tool for parameter extraction of PSpice models has been released, enabling simple parameter identification. A physics-based IGBT model is used to demonstrate that the optimization tool is capable of generating a set of parameters which predicts the steady-state and switching behavior...

  8. Fully automated radiosynthesis of [18F]Fluoromisonidazole with single neutral alumina column purification: optimization of reaction parameters

    International Nuclear Information System (INIS)

    Nandy, S.K.; Rajan, M.G.R.

    2010-01-01

1-H-1-(3-[18F]fluoro-2-hydroxypropyl)-2-nitroimidazole ([18F]FMISO) is the most used hypoxia-imaging agent in oncology, and we have recently reported a fully automated procedure for its synthesis using the Nuclear Interface FDG module and a single neutral alumina column for purification. Using 1-(2'-nitro-1'-imidazolyl)-2-O-tetrahydropyranyl-3-O-toluenesulfonylpropanediol (NITTP) as the precursor, we investigated the yield of [18F]FMISO using different reaction times, temperatures, and amounts of precursor. The overall yield was 48.4 ± 1.2% (n = 3, without decay correction), obtained using 10 mg NITTP with the radio-fluorination carried out at 145 °C for 3 min followed by acid hydrolysis for 3 min at 125 °C, in a total synthesis time of 32 ± 1 min. Increasing the precursor amount to 25 mg did not improve the overall yield under identical reaction conditions (decay-uncorrected yield 46.8 ± 1.6%, n = 3), but rather made the production less economical. It was also observed that the yield increased linearly with the amount of NITTP used from 2.5 to 10 mg and plateaued from 10 to 25 mg. Radio-fluorination efficiency under four different conditions was also compared. Radio thin layer chromatography (radio-TLC) showed that the duration of radio-fluorination of NITTP, not the radio-fluorination temperature, favoured the formation of a labeled thermally degraded product, but the single neutral alumina column purification was sufficient to obtain [18F]FMISO devoid of any radiochemical as well as cold impurities. (author)

  9. Infrared Drying Parameter Optimization

    Science.gov (United States)

    Jackson, Matthew R.

In recent years, much research has been done to explore direct printing methods, such as screen and inkjet printing, as alternatives to the traditional lithographic process. The primary motivation is reduction of the material costs associated with producing common electronic devices. Much of this research has focused on developing inkjet or screen paste formulations that can be printed on a variety of substrates and have conductivity comparable to the materials currently used in the manufacturing of circuit boards and other electronic devices. Very little research has been done to develop a process that would use direct printing methods to manufacture electronic devices in high volumes. This study focuses on developing and optimizing a drying process for conductive copper ink in a high-volume manufacturing setting. Using an infrared (IR) dryer, it was determined that conductive copper prints could be dried in seconds or minutes, as opposed to the tens of minutes or hours required by other drying devices, such as a vacuum oven. In addition, this study identifies significant parameters that can affect the conductivity of IR-dried prints. Using designed experiments and statistical analysis, the dryer parameters were optimized to produce the best conductivity performance for a specific ink formulation and substrate combination. It was determined that for an ethylene glycol, butanol, 1-methoxy-2-propanol ink formulation printed on Kapton, the optimal drying parameters consisted of a dryer height of 4 inches, a temperature setting between 190 and 200 °C, and a dry time of 50-65 seconds depending on the printed film thickness as determined by the number of print passes. It is important to note that these parameters are optimized specifically for the ink formulation and substrate used in this study. There is still much research that needs to be done into optimizing the IR dryer for different ink-substrate combinations, as well as developing a

  10. Optimization of automation: III. Development of optimization method for determining automation rate in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun

    2016-01-01

Highlights: • We propose an appropriate automation rate that enables the best human performance. • We analyze the shortest working time considering Situation Awareness Recovery (SAR). • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods. • The process to derive the optimized automation rate is demonstrated through case studies. - Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed to promise greater efficiency, lower workloads, and fewer operator errors by enhancing operator and system performance. However, the excessive introduction of automation has deteriorated operator performance due to its side effects, referred to as Out-of-the-Loop (OOTL), and this is a critical issue that must be resolved. Thus, in order to determine the level of automation that assures the best human operator performance, a quantitative method of optimizing automation is proposed in this paper. To determine appropriate automation levels that enable the best human performance, the automation rate and the ostracism rate, estimation methods that quantitatively analyze the positive and negative effects of automation respectively, are integrated. The integration derives the shortest working time by considering the concept of situation awareness recovery (SAR), under which the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency operation scenario-based case study. In this case study, four types of procedures are assumed by redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the

  11. Automated process safety parameters monitoring system

    International Nuclear Information System (INIS)

    Iyudina, O.S.; Solov'eva, A.G.; Syrov, A.A.

    2015-01-01

Based on its expertise in upgrading and creating control systems for NPP process equipment, “Diakont” has developed an automated process safety parameters monitoring system project. The monitoring system is a set of hardware, software and data analysis tools based on a dynamic logical-and-probabilistic model of process safety. The proposed monitoring system can be used for safety monitoring and analysis of the following processes: reactor core reloading; spent nuclear fuel transfer; startup, loading, on-load operation and shutdown of an NPP turbine.

  12. Evaluation of GCC optimization parameters

    Directory of Open Access Journals (Sweden)

    Rodrigo D. Escobar

    2012-12-01

Compile-time optimization of code can result in significant performance gains. The amount of these gains varies widely depending upon the code being optimized, the hardware being compiled for, the specific performance increase attempted (e.g. speed, throughput, memory utilization, etc.) and the compiler used. We used the latest version of the SPEC CPU 2006 benchmark suite to help gain an understanding of possible performance improvements using GCC (GNU Compiler Collection) options, focusing mainly on speed gains made possible by tuning the compiler with the standard compiler optimization levels as well as a specific compiler option for the hardware processor. We compared the best standardized tuning options obtained for a Core i7 processor to the same relative options used on a Pentium 4 to determine whether the GNU project has improved its performance tuning capabilities for specific hardware over time.

  13. Automated Cloud Observation for Ground Telescope Optimization

    Science.gov (United States)

    Lane, B.; Jeffries, M. W., Jr.; Therien, W.; Nguyen, H.

As the number of man-made objects placed in space each year grows with advances in the commercial, academic, and industrial sectors, the number of objects that must be detected, tracked, and characterized continues to grow at an exponential rate. Commercial companies, such as ExoAnalytic Solutions, have deployed ground-based sensors to maintain track custody of these objects. For the ExoAnalytic Global Telescope Network (EGTN), observations of such objects are collected at a rate of over 10 million unique observations per month (as of September 2017). Currently, the EGTN does not optimally collect data on nights with significant cloud levels. However, a majority of these nights prove to be partially cloudy, providing clear portions of the sky for EGTN sensors to observe. It proves useful for a telescope to utilize these clear areas to continue resident space object (RSO) observation. By dynamically updating the tasking with the varying cloud positions, the number of observations could potentially increase dramatically due to increased persistence, cadence, and revisit. This paper discusses the recent algorithms being implemented within the EGTN, including the motivation, need, and general design. How automated image processing and various edge detection methods, including Canny, Sobel, and Marching Squares, applied to real-time large-FOV images of the sky can enhance the tasking and scheduling of a ground-based telescope is discussed in Section 2. Implementations of these algorithms on a single telescope, and their extension to multiple telescopes, are explored. Results of applying these algorithms to the EGTN in real time, and a comparison to non-optimized EGTN tasking, are presented in Section 3. Finally, in Section 4 we explore future work in applying these methods throughout the EGTN as well as to other optical telescopes.
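Of the edge detectors named, Sobel is the simplest to show concretely: convolve the image with two 3x3 kernels and take the gradient magnitude, which peaks at intensity boundaries such as a cloud edge against clear sky. The 6x6 "frame" below is a toy stand-in for a real camera image:

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D list-of-lists image (borders left at 0)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A 6x6 frame: bright "cloud" (255) on the left, clear sky (0) on the right.
img = [[255, 255, 255, 0, 0, 0] for _ in range(6)]
mag = sobel_magnitude(img)
# Strong response only at the cloud/sky boundary.
print(mag[3][1], mag[3][2], mag[3][4])  # -> 0.0 1020.0 0.0
```

Thresholding the magnitude map yields a cloud mask; the scheduler can then task the telescope only at pointings whose sky region falls in the clear part of the mask.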

  14. Parameters control in GAs for dynamic optimization

    Directory of Open Access Journals (Sweden)

    Khalid Jebari

    2013-02-01

Controlling the parameters of genetic algorithms makes it possible to optimize the search process and improves the performance of the algorithm. Moreover, it frees the user from a trial-and-error game of finding the optimal parameters by hand.
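A minimal illustration of parameter control, as opposed to hand tuning: the mutation rate is adapted during the run, shrinking when the best fitness improves and growing when the search stalls. The problem (OneMax), the adaptation factors, and all constants below are illustrative choices, not the paper's scheme:

```python
import random

random.seed(1)
N, POP, GENS = 20, 30, 60

def fitness(bits):
    # OneMax toy problem: maximize the number of 1-bits.
    return sum(bits)

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
mut = 0.10                      # initial per-bit mutation probability
best_prev = max(fitness(ind) for ind in pop)

for _ in range(GENS):
    def parent():
        # Binary tournament selection.
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b
    children = []
    for _ in range(POP):
        p1, p2 = parent(), parent()
        cut = random.randrange(1, N)            # one-point crossover
        child = p1[:cut] + p2[cut:]
        children.append([b ^ (random.random() < mut) for b in child])
    pop = children
    # Parameter control: shrink the rate on progress, grow it on stagnation.
    best = max(fitness(ind) for ind in pop)
    mut = max(0.01, mut * 0.8) if best > best_prev else min(0.5, mut * 1.1)
    best_prev = max(best_prev, best)

print(best_prev)  # typically reaches (or nears) the optimum of 20
```

More elaborate control schemes make the rate a function of population diversity or of elapsed generations, which matters especially for the dynamic optimization setting the article targets.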

  15. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and to create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on the infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  16. Automated Robust Maneuver Design and Optimization

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA is seeking improvements to the current technologies related to Position, Navigation and Timing. In particular, it is desired to automate precise maneuver...

  17. Optimal caliper placement: manual vs automated methods.

    Science.gov (United States)

    Yazdi, B; Zanker, P; Wanger, P; Sonek, J; Pintoffl, K; Hoopmann, M; Kagan, K O

    2014-02-01

To examine the inter- and intra-operator repeatability of manual caliper placement in the assessment of basic biometric measurements and to compare the results with an automated caliper placement system. Stored ultrasound images of 95 normal fetuses between 19 and 25 weeks' gestation were used. Five operators (two experts, one resident and two students) were asked to measure the BPD, OFD and FL twice, both manually and automatically. For each operator, intra-operator repeatability of the manual and automated measurements was assessed by the within-operator standard deviation. For the assessment of inter-operator repeatability, the mean of the four manual measurements by the two experts was used as the gold standard. The relative bias of the manual measurements of the three non-expert operators and of the operator-independent automated measurement was compared with the gold standard by means and 95% confidence intervals. In 88.4% of the 95 cases, the automated measurement algorithm was able to obtain appropriate measurements of the BPD, OFD, AC and FL. Within-operator standard deviations of the manual measurements ranged between 0.15 and 1.56, irrespective of the experience of the operator. Using the automated biometric measurement system, there was no difference between the measurements of each operator. As far as inter-operator repeatability is concerned, the difference between the manual measurements of the two students, the resident, and the gold standard was between -0.10 and 2.53 mm. The automated measurements tended to be closer to the gold standard but did not reach statistical significance. In about 90% of the cases, it was possible to obtain basic biometric measurements with an automated system. The use of automated measurements resulted in a significant improvement in intra-operator, but not inter-operator, repeatability, and the measurements were not significantly closer to the gold standard of the expert examiners. 
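The within-operator standard deviation used here can be computed from duplicate measurements with the standard duplicate-pair formula, sqrt(mean(d²)/2), where d is the difference within each pair. The operators, case IDs, and measurement values below are made up for illustration:

```python
# {operator: {case_id: [measurement_1, measurement_2]}}, values in mm.
data = {
    "expert_1":  {1: [48.2, 48.4], 2: [51.0, 50.8], 3: [47.6, 47.9]},
    "student_1": {1: [48.9, 47.8], 2: [51.6, 50.2], 3: [46.9, 48.3]},
}

def within_operator_sd(cases):
    """Pooled within-operator SD from duplicate pairs: sqrt(mean(d^2) / 2)."""
    diffs = [m[0] - m[1] for m in cases.values()]
    return (sum(d * d for d in diffs) / (2 * len(diffs))) ** 0.5

for op, cases in data.items():
    print(op, round(within_operator_sd(cases), 3))
# expert_1  -> 0.168  (tight replicates, good repeatability)
# student_1 -> 0.925  (looser replicates)
```

Comparing these pooled SDs across operators, and against the automated system (whose replicates are identical by construction, giving SD 0), is exactly the intra-operator comparison the study reports.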

  18. Basic MR sequence parameters systematically bias automated brain volume estimation

    International Nuclear Information System (INIS)

    Haller, Sven; Falkovskiy, Pavel; Roche, Alexis; Marechal, Benedicte; Meuli, Reto; Thiran, Jean-Philippe; Krueger, Gunnar; Lovblad, Karl-Olof; Kober, Tobias

    2016-01-01

Automated brain MRI morphometry, including hippocampal volumetry for Alzheimer disease, is increasingly recognized as a biomarker. Consequently, a rapidly increasing number of software tools have become available. We tested whether modifications of simple MR protocol parameters typically used in clinical routine systematically bias automated brain MRI segmentation results. The study was approved by the local ethical committee and included 20 consecutive patients (13 females, mean age 75.8 ± 13.8 years) undergoing clinical brain MRI at 1.5 T for workup of cognitive decline. We compared three 3D T1 magnetization prepared rapid gradient echo (MPRAGE) sequences with the following parameter settings: ADNI-2, 1.2 mm iso-voxel, no image filtering; LOCAL-, 1.0 mm iso-voxel, no image filtering; LOCAL+, 1.0 mm iso-voxel with image edge enhancement. Brain segmentation was performed by two different and established analysis tools, FreeSurfer and MorphoBox, using standard parameters. Spatial resolution (1.0 versus 1.2 mm iso-voxel) and modification in contrast resulted in relative estimated volume differences of up to 4.28 % (p < 0.001) in cortical gray matter and 4.16 % (p < 0.01) in hippocampus. Image data filtering resulted in estimated volume differences of up to 5.48 % (p < 0.05) in cortical gray matter. A simple change of MR parameters, notably spatial resolution, contrast, and filtering, may systematically bias results of automated brain MRI morphometry by up to 4-5 %. This is in the same range as early disease-related brain volume alterations, for example, in Alzheimer disease. Automated brain segmentation software packages should therefore require strict MR parameter selection or include compensatory algorithms to avoid MR parameter-related bias of brain morphometry results. (orig.)

  19. Basic MR sequence parameters systematically bias automated brain volume estimation

    Energy Technology Data Exchange (ETDEWEB)

    Haller, Sven [University of Geneva, Faculty of Medicine, Geneva (Switzerland); Affidea Centre de Diagnostique Radiologique de Carouge CDRC, Geneva (Switzerland); Falkovskiy, Pavel; Roche, Alexis; Marechal, Benedicte [Siemens Healthcare HC CEMEA SUI DI BM PI, Advanced Clinical Imaging Technology, Lausanne (Switzerland); University Hospital (CHUV), Department of Radiology, Lausanne (Switzerland); Meuli, Reto [University Hospital (CHUV), Department of Radiology, Lausanne (Switzerland); Thiran, Jean-Philippe [LTS5, Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland); Krueger, Gunnar [Siemens Medical Solutions USA, Inc., Boston, MA (United States); Lovblad, Karl-Olof [University of Geneva, Faculty of Medicine, Geneva (Switzerland); University Hospitals of Geneva, Geneva (Switzerland); Kober, Tobias [Siemens Healthcare HC CEMEA SUI DI BM PI, Advanced Clinical Imaging Technology, Lausanne (Switzerland); LTS5, Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland)

    2016-11-15

    Automated brain MRI morphometry, including hippocampal volumetry for Alzheimer disease, is increasingly recognized as a biomarker. Consequently, a rapidly increasing number of software tools have become available. We tested whether modifications of simple MR protocol parameters typically used in clinical routine systematically bias automated brain MRI segmentation results. The study was approved by the local ethical committee and included 20 consecutive patients (13 females, mean age 75.8 ± 13.8 years) undergoing clinical brain MRI at 1.5 T for workup of cognitive decline. We compared three 3D T1 magnetization prepared rapid gradient echo (MPRAGE) sequences with the following parameter settings: ADNI-2, 1.2 mm iso-voxel, no image filtering; LOCAL-, 1.0 mm iso-voxel, no image filtering; LOCAL+, 1.0 mm iso-voxel with image edge enhancement. Brain segmentation was performed by two different, established analysis tools, FreeSurfer and MorphoBox, using standard parameters. Spatial resolution (1.0 versus 1.2 mm iso-voxel) and modification of contrast resulted in relative estimated volume differences of up to 4.28 % (p < 0.001) in cortical gray matter and 4.16 % (p < 0.01) in hippocampus. Image data filtering resulted in estimated volume differences of up to 5.48 % (p < 0.05) in cortical gray matter. A simple change of MR parameters, notably spatial resolution, contrast, and filtering, may thus systematically bias results of automated brain MRI morphometry by up to 4-5 %. This is in the same range as early disease-related brain volume alterations, for example, in Alzheimer disease. Automated brain segmentation software packages should therefore require strict MR parameter selection or include compensatory algorithms to avoid MR parameter-related bias of brain morphometry results. (orig.)

  20. Automated modal parameter estimation using correlation analysis and bootstrap sampling

    Science.gov (United States)

    Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.

    2018-02-01

    The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences with the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement, dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset, and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to
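
    The record does not name its correlation metrics, but a common choice for comparing mode-shape vectors is the Modal Assurance Criterion (MAC). The sketch below is a generic illustration, not the authors' algorithm: the function names and the greedy 0.9 threshold are assumptions.

```python
def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two real mode-shape vectors (0..1);
    1 means the shapes are identical up to scaling."""
    num = sum(a * b for a, b in zip(phi_a, phi_b)) ** 2
    den = sum(a * a for a in phi_a) * sum(b * b for b in phi_b)
    return num / den

def cluster_modes(shapes, threshold=0.9):
    """Greedy MAC-based clustering: a mode joins the first cluster whose
    exemplar it correlates with above `threshold`, otherwise it starts a new
    cluster. Singleton clusters could then be treated as noise modes,
    loosely analogous to the paper's 'Trashbox'."""
    clusters = []
    for shape in shapes:
        for cluster in clusters:
            if mac(shape, cluster[0]) >= threshold:
                cluster.append(shape)
                break
        else:
            clusters.append([shape])
    return clusters
```

    Re-identified modes from different bootstrap subsets that describe the same physical mode have MAC close to 1 and therefore land in the same cluster.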

  1. Optimization of Camera Parameters in Volume Intersection

    Science.gov (United States)

    Sakamoto, Sayaka; Shoji, Kenji; Toyama, Fubito; Miyamichi, Juichi

    Volume intersection is one of the simplest techniques for reconstructing a 3-D shape from 2-D silhouettes. A 3-D shape can be reconstructed from multiple view images by back-projecting them from the corresponding viewpoints and intersecting the resulting solid cones. The camera position and orientation (extrinsic camera parameters) of each viewpoint with respect to the object are needed to accomplish the reconstruction. However, even a small error in the camera parameters makes the reconstructed 3-D shape smaller than that obtained with the exact parameters. The problem of optimizing camera parameters is to determine the exact parameters from multiple silhouette images and approximate initial values. This paper attempts to optimize the camera parameters by reconstructing a 3-D shape via volume intersection and then maximizing the volume of that shape. We tested the proposed method using a VRML model, applying the downhill simplex method for the optimization. The results show that the maximized volume of the reconstructed 3-D shape is a workable criterion for optimizing camera parameters in camera arrangements like the one in this experiment.
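
    The downhill simplex search described above can be sketched as follows. This is a minimal Nelder-Mead implementation with standard coefficients; the three-parameter "volume" function is a hypothetical stand-in for the reconstructed visual-hull volume, which peaks at the true extrinsic parameters (any parameter error carves voxels away).

```python
import math

def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=1000):
    """Minimal downhill-simplex (Nelder-Mead) minimizer with the standard
    reflection/expansion/contraction/shrink coefficients."""
    alpha, gamma, rho, sigma = 1.0, 2.0, 0.5, 0.5
    n = len(x0)
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, second, worst = simplex[0], simplex[-2], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        refl = [centroid[j] + alpha * (centroid[j] - worst[j]) for j in range(n)]
        if f(refl) < f(best):
            expd = [centroid[j] + gamma * (refl[j] - centroid[j]) for j in range(n)]
            simplex[-1] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(second):
            simplex[-1] = refl
        else:
            contr = [centroid[j] + rho * (worst[j] - centroid[j]) for j in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink the whole simplex toward the best vertex
                simplex = [best] + [
                    [best[j] + sigma * (p[j] - best[j]) for j in range(n)]
                    for p in simplex[1:]
                ]
    return min(simplex, key=f)

# hypothetical stand-in for the reconstructed visual-hull volume: it peaks
# when the extrinsic parameters match the (assumed) truth
TRUE_EXTRINSICS = [0.3, -1.2, 2.0]

def neg_volume(params):
    return -math.exp(-sum((p - t) ** 2 for p, t in zip(params, TRUE_EXTRINSICS)))

recovered = nelder_mead(neg_volume, [0.0, 0.0, 0.0])
```

    Minimizing the negative volume starting from approximate parameters drives the simplex back toward the true extrinsics.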

  2. Optimal Formation Trajectory-Planning Using Parameter Optimization Technique

    Directory of Open Access Journals (Sweden)

    Hyung-Chul Lim

    2004-09-01

    Full Text Available Several methods have been presented to obtain optimal formation trajectories during configuration or reconfiguration, subject to constraints of collision avoidance and final configuration. In this study, a method for optimal formation trajectory-planning is introduced with a view to fuel/time minimization, using a parameter optimization technique that has not previously been applied to trajectory-planning for satellite formation flying. New nonlinear equality constraints are derived for the final configuration, and nonlinear inequality constraints are used for collision avoidance. The final configuration constraints require that three or more satellites be placed in an equilateral polygon of the circular horizontal plane orbit. Several examples are given of optimal trajectories obtained from the parameter optimization problem subject to the collision avoidance and final configuration constraints. They show that the introduced method is well suited to trajectory design problems for formation flying missions.
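
    One common way to fold such equality (final configuration) and inequality (collision avoidance) constraints into a parameter optimization is a quadratic penalty. The sketch below is generic, not the paper's formulation; the cost and constraint functions are placeholders.

```python
def penalized_cost(cost, eq_cons, ineq_cons, x, mu=1e3):
    """Quadratic-penalty reformulation of the constrained problem:
    minimize cost(x) subject to g(x) = 0 for g in eq_cons (final
    configuration) and h(x) <= 0 for h in ineq_cons (collision avoidance)."""
    penalty = sum(g(x) ** 2 for g in eq_cons)
    penalty += sum(max(0.0, h(x)) ** 2 for h in ineq_cons)
    return cost(x) + mu * penalty

# toy instance (placeholders, not the paper's dynamics): minimize a fuel-like
# cost x0^2 while ending with x0 + x1 = 1 and keeping x1 >= 0.5
fuel = lambda x: x[0] ** 2
final_config = [lambda x: x[0] + x[1] - 1.0]  # equals 0 when satisfied
collision = [lambda x: 0.5 - x[1]]            # <= 0 when satisfied
```

    Any unconstrained optimizer applied to `penalized_cost` then prefers feasible trajectories, since violations are charged quadratically through `mu`.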

  3. Optimal Laser Phototherapy Parameters for Pain Relief.

    Science.gov (United States)

    Kate, Rohit J; Rubatt, Sarah; Enwemeka, Chukuka S; Huddleston, Wendy E

    2018-03-27

    Studies on laser phototherapy for pain relief have used parameters that vary widely and have reported varying outcomes. The purpose of this study was to determine the optimal parameter ranges of laser phototherapy for pain relief by analyzing data aggregated from the existing primary literature. Original studies were gathered from available sources and were screened to meet the pre-established inclusion criteria. The included articles were then subjected to meta-analysis using Cohen's d statistic for determining treatment effect size. From these studies, ranges of the reported parameters that always resulted in large effect sizes were determined. These optimal ranges were evaluated for their accuracy using a leave-one-article-out cross-validation procedure. A total of 96 articles met the inclusion criteria for meta-analysis and yielded 232 effect sizes. The average effect size was highly significant: d = +1.36 (confidence interval [95% CI] = 1.04-1.68). Among all the parameters, total energy was found to have the greatest effect on pain relief and had the most prominent optimal ranges of 120-162 and 15.36-20.16 J, which always resulted in large effect sizes. The cross-validation accuracy of the optimal ranges for total energy was 68.57% (95% CI = 53.19-83.97). Fewer and less-prominent optimal ranges were obtained for the energy density and duration parameters. None of the remaining parameters was found to be independently related to pain relief outcomes. The findings of the meta-analysis indicate that laser phototherapy is highly effective for pain relief. Based on the analysis of parameters, total energy can be optimized to yield the largest effect on pain relief.

  4. Cosmological parameter estimation using Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Prasad, J; Souradeep, T

    2014-01-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and these make the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence and called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite

  5. Cosmological parameter estimation using Particle Swarm Optimization

    Science.gov (United States)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and these make the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence and called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.

  6. Automated design optimization and trade studies using STK scenarios

    Science.gov (United States)

    Rivera, Mark A.; Hill, Jennifer

    2005-05-01

    Optimizing a constellation of space, air, and ground assets is typically a man-in-the-loop-intensive, iterative process. Designers must originate a few baseline concepts and then intuitively explore design variations within a large multi-dimensional trade space. To keep the search manageable, options and variations are often severely limited. There is a clear advantage to automating this search for optimal design solutions in an intelligent and efficient manner. Such automation would greatly expand the range of analyzable options, increasing insight, improving solutions, and saving money. For this reason, Boeing has initiated a software application that automates STK® scenarios and intelligently searches for optimal solutions. Now in the post-prototype stages of development, "AVA" has provided extremely valuable solutions and insight into several space-based architectures and their proposed payloads. This paper discusses the current state of the AVA tool, its methods and applicability, and the potential for future growth.

  7. Automated firewall analytics design, configuration and optimization

    CERN Document Server

    Al-Shaer, Ehab

    2014-01-01

    This book provides a comprehensive and in-depth study of automated firewall policy analysis for designing, configuring and managing distributed firewalls in large-scale enterprise networks. It presents methodologies, techniques and tools for researchers as well as professionals to understand the challenges and improve the state of the art of managing firewalls systematically in both research and application domains. Chapters explore set theory, managing firewall configurations globally and consistently, access control lists with encryption, and authentication such as IPSec policies. The author

  8. Optimization of Robotic Spray Painting process Parameters using Taguchi Method

    Science.gov (United States)

    Chidhambara, K. V.; Latha Shankar, B.; Vijaykumar

    2018-02-01

    Automated spray painting is gaining interest in industry and research recently due to the extensive application of spray painting in the automobile industry. Automating the spray painting process has the advantages of improved quality, productivity, reduced labor, a clean environment and, particularly, cost effectiveness. This study investigates the performance characteristics of an industrial robot, Fanuc 250ib, for an automated painting process using the statistical tool of Taguchi's Design of Experiment technique. The experiment is designed using Taguchi's L25 orthogonal array, considering three factors with five levels each. The objective of this work is to explore the major control parameters and to optimize them for improved quality of the paint coating, measured in terms of dry film thickness (DFT), which also results in reduced rejection. Analysis of Variance (ANOVA) is then performed to determine the influence of individual factors on DFT. It is observed that shaping air and paint flow are the most influential parameters. A multiple regression model is formulated for estimating predicted values of DFT. A confirmation test is then conducted, and the comparison results show that the error is within an acceptable level.
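
    In a Taguchi analysis like the one described, each orthogonal-array run is scored by a signal-to-noise (S/N) ratio, and per-level averages of S/N reveal each factor's main effect. A minimal sketch for a larger-the-better response such as dry film thickness (the factor indices and array layout are illustrative, not the study's L25 design):

```python
import math

def sn_larger_is_better(ys):
    """Taguchi S/N ratio (dB) for a larger-the-better response:
    S/N = -10 * log10(mean(1 / y^2)) over the replicate measurements."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

def main_effect(runs, sn_values, factor_index):
    """Mean S/N per level of one factor across the orthogonal-array runs;
    the level with the highest mean S/N is the (marginally) optimal setting."""
    by_level = {}
    for run, sn in zip(runs, sn_values):
        by_level.setdefault(run[factor_index], []).append(sn)
    return {level: sum(v) / len(v) for level, v in by_level.items()}
```

    The spread of these per-level means across levels is also what ANOVA then decomposes to rank factor influence.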

  9. Mixed integer evolution strategies for parameter optimization.

    Science.gov (United States)

    Li, Rui; Emmerich, Michael T M; Eggermont, Jeroen; Bäck, Thomas; Schütz, M; Dijkstra, J; Reiber, J H C

    2013-01-01

    Evolution strategies (ESs) are powerful probabilistic search and optimization algorithms inspired by biological evolution. They have been successfully applied to a wide range of real-world applications. Modern ESs are mainly designed for solving continuous parameter optimization problems. Their ability to adapt the parameters of the multivariate normal distribution used for mutation during the optimization run makes them well suited for this domain. In this article we describe and study mixed integer evolution strategies (MIES), which are natural extensions of ES for mixed integer optimization problems. MIES can deal with parameter vectors consisting not only of continuous variables but also of nominal discrete and integer variables. Following the design principles of the canonical evolution strategies, they use specialized mutation operators tailored for the aforementioned mixed parameter classes. For each type of variable, the choice of mutation operators is governed by a natural metric for this variable type, maximal entropy, and symmetry considerations. All distributions used for mutation can be controlled in their shape by means of scaling parameters, allowing self-adaptation to be implemented. After introducing and motivating the conceptual design of the MIES, we study the optimality of the self-adaptation of step sizes and mutation rates on a generalized (weighted) sphere model. Moreover, we prove global convergence of the MIES on a very general class of problems. The remainder of the article is devoted to performance studies on artificial landscapes (barrier functions and mixed integer NK landscapes), and a case study in the optimization of medical image analysis systems. In addition, we show that with proper constraint handling techniques, MIES can also be applied to classical mixed integer nonlinear programming problems.
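
    A minimal sketch of the kind of type-specific mutation such a mixed-integer ES uses: Gaussian steps for continuous genes, symmetric geometric steps for integer genes, and uniform resampling for nominal genes. The parameter names are illustrative assumptions, and the self-adaptation of step sizes and mutation rates described in the article is omitted.

```python
import math
import random

def mutate_mixed(ind, sigma=0.1, p_int=0.3, p_nom=0.2, nom_domain=("a", "b", "c")):
    """One MIES-style mutation over a mixed genotype
    {'real': [...], 'int': [...], 'nom': [...]}."""
    def geometric_step(p):
        # the difference of two geometric draws yields a symmetric,
        # maximal-entropy-motivated integer step
        draw = lambda u: int(math.floor(math.log(1.0 - u) / math.log(1.0 - p)))
        return draw(random.random()) - draw(random.random())
    return {
        "real": [x + random.gauss(0.0, sigma) for x in ind["real"]],
        "int": [x + geometric_step(p_int) for x in ind["int"]],
        "nom": [random.choice(nom_domain) if random.random() < p_nom else x
                for x in ind["nom"]],
    }
```

    Each variable class keeps its natural metric: continuous genes move in a metric space, integer genes by whole steps, and nominal genes have no ordering at all, so they are simply resampled.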

  10. A Novel adaptative Discrete Cuckoo Search Algorithm for parameter optimization in computer vision

    Directory of Open Access Journals (Sweden)

    loubna benchikhi

    2017-10-01

    Full Text Available Computer vision applications require choosing operators and their parameters in order to produce the best outcomes. Users often draw on expert knowledge and must try many combinations to find the best one manually. As performance, time and accuracy are important, it is necessary to automate parameter optimization, at least for crucial operators. In this paper, a novel approach based on an adaptive discrete cuckoo search algorithm (ADCS) is proposed. It automates the process of algorithm setting and provides optimal parameters for vision applications. This work addresses the discretization needed to adapt the cuckoo search algorithm and presents the parameter optimization procedure. Experiments on real examples and comparisons to other metaheuristic-based approaches, particle swarm optimization (PSO), reinforcement learning (RL) and ant colony optimization (ACO), show the efficiency of this novel method.

  11. Automated beam steering using optimal control

    Energy Technology Data Exchange (ETDEWEB)

    Allen, C. K. (Christopher K.)

    2004-01-01

    We present a steering algorithm which, with the aid of a model, allows the user to specify beam behavior throughout a beamline, rather than just at specified beam position monitor (BPM) locations. The model is used primarily to compute the values of the beam phase vectors from BPM measurements and to define cost functions that describe the steering objectives. The steering problem is formulated as a constrained optimization problem; however, by applying optimal control theory we can reduce it to an unconstrained optimization whose dimension is the number of control signals.

  12. Optimized and Automated design of Plasma Diagnostics for Additive Manufacture

    Science.gov (United States)

    Stuber, James; Quinley, Morgan; Melnik, Paul; Sieck, Paul; Smith, Trevor; Chun, Katherine; Woodruff, Simon

    2016-10-01

    Despite having mature designs, diagnostics are usually custom-designed for each experiment. Much of this design work can now be automated to reduce engineering labor and capital cost. We present results from scripted physics modeling and parametric engineering design for common optical and mechanical components found in many plasma diagnostics, and outline the process for automated design optimization, which employs scripts to pass data from online forms through proprietary and open-source CAD and FE codes to produce a design that can be sent directly to a printer. As a demonstration of design automation, an optical beam dump, a baffle and optical components were designed via the automated process and printed. Supported by DOE SBIR Grant DE-SC0011858.

  13. Controller Design Automation for Aeroservoelastic Design Optimization of Wind Turbines

    NARCIS (Netherlands)

    Ashuri, T.; Van Bussel, G.J.W.; Zaayer, M.B.; Van Kuik, G.A.M.

    2010-01-01

    The purpose of this paper is to integrate the controller design of wind turbines with structure and aerodynamic analysis and use the final product in the design optimization process (DOP) of wind turbines. To do that, the controller design is automated and integrated with an aeroelastic simulation

  14. Optimization of parameters of heat exchangers vehicles

    Directory of Open Access Journals (Sweden)

    Andrei MELEKHIN

    2014-09-01

    Full Text Available The relevance of the topic stems from the problem of resource economy in the heating systems of vehicles. To solve this problem we have developed an integrated research method that allows the parameters of vehicle heat exchangers to be optimized. The method solves a multicriteria optimization problem using nonlinear programming software, with an array of temperatures obtained by thermography as input. The authors have developed a mathematical model of the heat exchange process on the heat-exchange surfaces of the apparatus, solved the multicriteria optimization problem, and checked the model's adequacy on an experimental stand with visualization of thermal fields. The results include an optimal range of controlled parameters influencing the heat exchange process with minimal metal consumption and maximum heat output of the finned heat exchanger, regularities of the heat exchange process with generalizing dependencies for the temperature distribution on the heat-release surface of vehicle heat exchangers, and convergence between calculations based on theoretical dependencies and the solution of the mathematical model.

  15. Parameter optimization for surface flux transport models

    Science.gov (United States)

    Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.

    2017-11-01

    Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.
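
    A toy version of the genetic-algorithm parameter search could look like the following. The misfit function is a hypothetical stand-in for the butterfly-diagram error (the real one compares simulated and observed magnetograms), and all GA settings, bounds, and "true" parameter values are illustrative assumptions.

```python
import random

def genetic_search(fitness, bounds, pop_size=30, gens=60, mut_p=0.3,
                   mut_scale=0.2, seed=1):
    """Minimal real-coded genetic algorithm: tournament selection, uniform
    crossover, bounded Gaussian mutation, and one-elite survival."""
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    population = [[rng.uniform(l, h) for l, h in zip(lo, hi)]
                  for _ in range(pop_size)]
    best = min(population, key=fitness)
    for _ in range(gens):
        offspring = [best]  # elitism: keep the incumbent
        while len(offspring) < pop_size:
            p1 = min(rng.sample(population, 3), key=fitness)  # tournament
            p2 = min(rng.sample(population, 3), key=fitness)
            child = [x if rng.random() < 0.5 else y for x, y in zip(p1, p2)]
            child = [
                min(h, max(l, c + rng.gauss(0.0, mut_scale * (h - l))))
                if rng.random() < mut_p else c
                for c, l, h in zip(child, lo, hi)
            ]
            offspring.append(child)
        population = offspring
        best = min(population, key=fitness)
    return best

# hypothetical stand-in for the butterfly-diagram misfit: a bowl around an
# assumed "true" meridional flow speed (m/s) and supergranular diffusivity
def misfit(p):
    return (p[0] - 11.0) ** 2 + ((p[1] - 350.0) / 50.0) ** 2

best_params = genetic_search(misfit, [(0.0, 25.0), (100.0, 800.0)])
```

    In the paper's setting, `fitness` would instead run the surface flux transport model and score the simulated butterfly diagram against the NSO observations with the latitude-dependent error weighting.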

  16. Optimization of machining parameters for green manufacturing

    Directory of Open Access Journals (Sweden)

    Y. Anand

    2016-12-01

    Full Text Available The energy crisis is affecting the world badly. While production in developed countries has stabilized, in the developing world it continues to expand. This results in higher energy use and thus in more CO2 released. A pilot experiment was therefore conducted to assess, and subsequently take corrective measures to reduce, the energy consumption of the manufacturing industry. Here, the emphasis is placed on the cutting parameters of the turning operation, and an effort has been made to optimize them with regard to the energy consumed, using Design Expert. The optimized values of the different parameters under study were then checked against those in general use. For the experimental studies, machining was first carried out on mild steel; aluminum and brass were subsequently also considered. All the values show an appreciable reduction in energy consumption, and thus in carbon emissions, for all the materials.

  17. Optimization of machining parameters for green manufacturing

    OpenAIRE

    Y. Anand; A. Gupta; A. Abrol; Ayush Gupta; V. Kumar; S.K. Tyagi; S. Anand

    2016-01-01

    The energy crisis is affecting the world badly. While production in developed countries has stabilized, in the developing world it continues to expand. This results in higher energy use and thus in more CO2 released. A pilot experiment was therefore conducted to assess, and subsequently take corrective measures to reduce, the energy consumption of the manufacturing industry. Here, the emphasis is placed on the cutting parameters of the turning operation, and an effort has been made to optimize them...

  18. Cosmological parameter estimation using particle swarm optimization

    Science.gov (United States)

    Prasad, Jayanti; Souradeep, Tarun

    2012-06-01

    Constraining theoretical models, which are represented by a set of parameters, using observational data is an important exercise in cosmology. In the Bayesian framework this is done by finding the probability distribution of parameters which best fits the observational data, using sampling-based methods like Markov chain Monte Carlo (MCMC). It has been argued that MCMC may not be the best option for certain problems in which the target function (likelihood) has local maxima or very high dimensionality. Apart from this, there may be cases in which we are mainly interested in finding the point in parameter space at which the probability distribution has its largest value. In this situation the problem of parameter estimation becomes an optimization problem. In the present work we show that particle swarm optimization (PSO), an artificial-intelligence-inspired, population-based search procedure, can also be used for cosmological parameter estimation. Using PSO we were able to recover the best-fit Λ cold dark matter (LCDM) model parameters from the WMAP seven-year data without using any prior guess value or any other property of the probability distribution of parameters, such as the standard deviation, as is common in MCMC. We also report the results of an exercise in which we consider a binned primordial power spectrum (to increase the dimensionality of the problem) and find that a power spectrum with features gives a lower chi-square than the standard power law. Since PSO does not sample the likelihood surface in a fair way, we follow a fitting procedure to find the spread of the likelihood function around the best-fit point.
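
    A minimal global-best PSO illustrates the swarm update that the abstract describes. The two-parameter quadratic "likelihood" surface below is an illustrative stand-in, not the WMAP analysis, and all swarm settings are conventional defaults rather than the paper's values.

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal global-best particle swarm optimizer (minimization):
    velocity = inertia + cognitive pull to pbest + social pull to gbest."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pval = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pval[i]:
                pbest[i], pval[i] = X[i][:], fx
                if fx < gval:
                    gbest, gval = X[i][:], fx
    return gbest, gval

# toy "chi-square" surface in two hypothetical cosmological parameters,
# with its optimum placed at (0.3, 0.7)
best_fit, chi2 = pso(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2,
                     [(0.0, 1.0), (0.0, 1.0)])
```

    In the cosmological application, `f` would evaluate the CMB likelihood (or chi-square) for a candidate parameter vector; note that, as the abstract says, the swarm locates the best-fit point but does not fairly sample the surface around it.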

  19. Automated Design Framework for Synthetic Biology Exploiting Pareto Optimality.

    Science.gov (United States)

    Otero-Muras, Irene; Banga, Julio R

    2017-07-21

    In this work we consider Pareto optimality for automated design in synthetic biology. We present a generalized framework based on a mixed-integer dynamic optimization formulation that, given design specifications, allows the computation of Pareto optimal sets of designs, that is, the set of best trade-offs for the metrics of interest. We show how this framework can be used for (i) forward design, that is, finding the Pareto optimal set of synthetic designs for implementation, and (ii) reverse design, that is, analyzing and inferring motifs and/or design principles of gene regulatory networks from the Pareto set of optimal circuits. Finally, we illustrate the capabilities and performance of this framework considering four case studies. In the first problem we consider the forward design of an oscillator. In the remaining problems, we illustrate how to apply the reverse design approach to find motifs for stripe formation, rapid adaptation, and fold-change detection, respectively.

  20. A fully-automated software pipeline for integrating breast density and parenchymal texture analysis for digital mammograms: parameter optimization in a case-control breast cancer risk assessment study

    Science.gov (United States)

    Zheng, Yuanjie; Wang, Yan; Keller, Brad M.; Conant, Emily; Gee, James C.; Kontos, Despina

    2013-02-01

    Estimating a woman's risk of breast cancer is becoming increasingly important in clinical practice. Mammographic density, estimated as the percent of dense (PD) tissue area within the breast, has been shown to be a strong risk factor. Studies also support a relationship between mammographic texture and breast cancer risk. We have developed a fully-automated software pipeline for computerized analysis of digital mammography parenchymal patterns that quantitatively measures both breast density and texture properties. Our pipeline combines advanced computer algorithms of pattern recognition, computer vision, and machine learning, and offers a standardized tool for breast cancer risk assessment studies. Unlike many existing methods that perform parenchymal texture analysis within specific breast subregions, our pipeline extracts texture descriptors for points on a regular spatial lattice, using a window surrounding each lattice point, to characterize the local mammographic appearance throughout the whole breast. To demonstrate the utility of our pipeline, and to optimize its parameters, we performed a case-control study by retrospectively analyzing a total of 472 digital mammography studies. Specifically, we investigated the window size, a lattice-related parameter, and compared the performance of texture features to that of breast PD in classifying case-control status. Our results suggest that different window sizes may be optimal for raw (12.7 mm²) versus vendor post-processed images (6.3 mm²). We also show that the combination of PD and texture features outperforms PD alone. The improvement is significant (p = 0.03) when raw images and a window size of 12.7 mm² are used, yielding an ROC AUC of 0.66. The combination of PD and our texture features computed from post-processed images with a window size of 6.3 mm² achieves an ROC AUC of 0.75.

  1. Composition structure and main parameters of automated radiographic complexes

    International Nuclear Information System (INIS)

    Majorov, A.N.; Akopov, V.S.; Golenishchev, I.A.; Grachev, A.V.; Zasimov, V.P.

    1977-01-01

    The functions and basic parameters of the individual units of an automated complex for radiographic control (ACRC), which enables the control process to be incorporated into the overall production control loop, are discussed. The composition and structure of the ACRC are determined by the kind of article under control (pivoted or unpivoted), by the mode of article X-raying (panoramic, partial or with picture scanning), and by the type of instrument used for data readout. Radiographic pictures are processed automatically by a system containing a drum optical-mechanical readout device, an analog-to-digital converter, a data transmitter and the ''Minsk-22'' computer. Analysis of the system capacity has shown the data volume to amount to some 1.2×10⁸ bit, while the data readout, transmission and processing rates are 1.2×10⁶, 0.3×10¹¹ and 0.7×10⁴ mm²/h, respectively.

  2. Optimizing transformations for automated, high throughput analysis of flow cytometry data

    Directory of Open Access Journals (Sweden)

    Weng Andrew

    2010-11-01

    Full Text Available Abstract Background In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While the usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. Results We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels.
We find that parameter-optimized

  3. Towards Automated Design, Analysis and Optimization of Declarative Curation Workflows

    Directory of Open Access Journals (Sweden)

    Tianhong Song

    2014-10-01

    Full Text Available Data curation is increasingly important. Our previous work on a Kepler curation package has demonstrated advantages that come from automating data curation pipelines by using workflow systems. However, manually designed curation workflows can be error-prone and inefficient due to a lack of user understanding of the workflow system, misuse of actors, or human error. Correcting problematic workflows is often very time-consuming. A more proactive workflow system can help users avoid such pitfalls. For example, static analysis before execution can be used to detect the potential problems in a workflow and help the user to improve workflow design. In this paper, we propose a declarative workflow approach that supports semi-automated workflow design, analysis and optimization. We show how the workflow design engine helps users to construct data curation workflows, how the workflow analysis engine detects different design problems of workflows and how workflows can be optimized by exploiting parallelism.

  4. Optimization of Bleaching Parameters for Soybean Oil

    Directory of Open Access Journals (Sweden)

    Tomislav Domijan

    2012-01-01

    Full Text Available The final stage of edible soybean oil manufacture is refining, the most delicate phase of which is bleaching. At this step, undesirable substances are removed, such as pigments, traces of metals, phospholipids and certain degradation products. However, certain valuable compounds such as tocopherols and sterols may also be removed, significant loss of oxidative stability can occur, and fatty acid content may increase. To avoid these negative oil changes, bleaching parameters such as the concentration of bleaching clay, temperature and duration should be optimized. Since bleaching conditions depend on the properties of the bleaching clay as well as on the type of crude oil, bleaching parameters should be optimized with different types of clay for each vegetable oil. Since such optimization has not yet been reported for soybean oil treated with Pure-Flo® Supreme Pro-Active bleaching adsorbent, this study investigates the effect of bleaching parameters on bleaching efficiency, oxidative stability and the content and composition of bioactive compounds (tocopherols and sterols using the above mentioned clay in this type of oil. Results show that the amount of clay had the greatest influence on bleaching efficiency, especially according to the Lovibond scale, on transparency, and on phosphorus content. Temperature and clay amount significantly affected oxidative stability, in particular the formation of secondary oxidation products. Increasing the amount of clay decreased tocopherol content of the bleached oil. Neutralized soybean oil bleached for 20 min at 95 °C with 1 % Pure-Flo® Supreme Pro-Active bleaching clay showed the highest oxidative stability, best bleaching efficiency, and most favourable sterol content, although tocopherol content was reduced.

  5. OPTIMIZATION OF DUTY METEOROLOGIST WORK USING FORECASTER AUTOMATED WORKSTATION

    OpenAIRE

    Pylypovych, Hryhorii H.; Shevchenko, Viktor L.; Drovnin, Andrii S.; Musin, Rafil R.; Oliinyk, Oleksandr L.

    2014-01-01

    Information and communication technologies are among the priority directions of science and technology in Ukraine for the period until 2020, so optimization of the duty meteorologist's work should include a number of specific measures to minimize the human factor in the chain “observation – processing – forecasting – transfer – delivery of actual and prognostic meteorological information to consumers”. Work in technical fields is increasingly automated, so meteorologists should be...

  6. Simulation based optimization on automated fibre placement process

    Science.gov (United States)

    Lei, Shi

    2018-02-01

    In this paper, a software-simulation-based method (using Autodesk TruPlan and TruFiber) is proposed to optimize the automated fibre placement (AFP) process. Different types of manufacturability analysis are introduced to predict potential defects. Advanced fibre path generation algorithms are compared on geometrically different parts. Major manufacturing data are taken into consideration prior to tool path generation to achieve a high manufacturing success rate.

  7. A mixed optimization method for automated design of fuselage structures.

    Science.gov (United States)

    Sobieszczanski, J.; Loendorf, D.

    1972-01-01

    A procedure for automating the design of transport aircraft fuselage structures has been developed and implemented in the form of an operational program. The structure is designed in two stages. First, an overall distribution of structural material is obtained by means of optimality criteria to meet strength and displacement constraints. Subsequently, the detailed design of selected rings and panels consisting of skin and stringers is performed by mathematical optimization accounting for a set of realistic design constraints. The practicality and computer efficiency of the procedure is demonstrated on cylindrical and area-ruled large transport fuselages.

  8. SIFT optimization and automation for matching images from multiple temporal sources

    Science.gov (United States)

    Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio

    2017-05-01

    The Scale-Invariant Feature Transform (SIFT) was applied to extract tie-points from multiple source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and less prone to scene changes over time; this constitutes a first step towards automating mapping applications such as geometric correction, orthophoto creation and 3D model generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored across different images and parameter values, and the resulting optimized values are corroborated on independent validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
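The parameter optimization described above can be sketched as an exhaustive search: score every combination of the tuned parameters and keep the best. In this sketch the parameter names follow common SIFT implementations and the scoring function is a toy stand-in (a real pipeline would run SIFT on image pairs and count geometrically verified matches), so everything below is illustrative rather than the paper's exact procedure.

```python
import itertools

# Hypothetical scoring function standing in for a real pipeline that runs
# SIFT with the given parameters and counts verified tie-point matches.
def count_correct_matches(params):
    # Toy stand-in: peaks at one particular parameter combination.
    target = {"n_features": 0, "n_octave_layers": 4, "contrast_threshold": 0.02,
              "edge_threshold": 15, "sigma": 1.2}
    return -sum(abs(params[k] - target[k]) for k in params)

# Candidate values for five tuned parameters (names and ranges are
# assumptions, not the paper's actual grid).
grid = {
    "n_features": [0],
    "n_octave_layers": [3, 4, 5],
    "contrast_threshold": [0.02, 0.04, 0.06],
    "edge_threshold": [10, 15, 20],
    "sigma": [1.2, 1.6, 2.0],
}

def optimize_sift_parameters(score_fn, grid):
    """Exhaustively score every parameter combination and keep the best."""
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = optimize_sift_parameters(count_correct_matches, grid)
```

In practice the score would be validated on independent imagery, as the abstract notes, rather than trusted from a single training pair.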

  9. Automated Multivariate Optimization Tool for Energy Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, P. G.; Griffith, B. T.; Long, N.; Torcellini, P. A.; Crawley, D.

    2006-07-01

    Building energy simulations are often used for trial-and-error evaluation of "what-if" options in building design--a limited search for an optimal solution, or "optimization". Computerized searching has the potential to automate the input and output, evaluate many options, and perform enough simulations to account for the complex interactions among combinations of options. This paper describes ongoing efforts to develop such a tool. The optimization tool employs multiple modules, including a graphical user interface, a database, a preprocessor, the EnergyPlus simulation engine, an optimization engine, and a simulation run manager. Each module is described and the overall application architecture is summarized.

  10. Fast automated airborne electromagnetic data interpretation using parallelized particle swarm optimization

    Science.gov (United States)

    Desmarais, Jacques K.; Spiteri, Raymond J.

    2017-12-01

    A parallelized implementation of the particle swarm optimization algorithm is developed. We use the optimization procedure to speed up a previously published algorithm for airborne electromagnetic data interpretation. This algorithm is the only parametrized automated procedure for extracting the three-dimensionally varying geometrical parameters of conductors embedded in a resistive environment, such as igneous and metamorphic terranes. When compared to the original algorithm, the new optimization procedure is faster by two orders of magnitude (factor of 100). Synthetic model tests show that for the chosen system architecture and objective function, the particle swarm optimization approach depends very weakly on the rate of communication of the processors. Optimal wall-clock times are obtained using three processors. The increased performance means that the algorithm can now easily be used for fast routine interpretation of airborne electromagnetic surveys consisting of several anomalies, as is displayed by a test on MEGATEM field data collected at the Chibougamau site, Québec.
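For readers unfamiliar with the method, a minimal (serial) particle swarm optimizer looks like the sketch below; the paper's contribution is to parallelize the objective evaluations across processors. The sphere objective here is only a stand-in for the electromagnetic misfit function, and all coefficients are conventional defaults rather than the authors' settings.

```python
import random

def pso(objective, dim, bounds, n_particles=30, iters=200, seed=1,
        w=0.7, c1=1.5, c2=1.5):
    """Minimal serial particle swarm optimization (minimization)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])              # the step a parallel version
            if val < pbest_val[i]:               # would farm out to workers
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective standing in for the geophysical misfit function:
sphere = lambda x: sum(xi * xi for xi in x)
best, best_val = pso(sphere, dim=3, bounds=(-5.0, 5.0))
```

Because each particle's objective evaluation is independent within an iteration, the inner loop is the natural place to distribute work across processors, which is consistent with the weak dependence on communication rate reported above.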

  11. Hybrid Disease Diagnosis Using Multiobjective Optimization with Evolutionary Parameter Optimization

    Science.gov (United States)

    Nalluri, MadhuSudana Rao; K., Kannan; M., Manisha

    2017-01-01

    With the widespread adoption of e-Healthcare and telemedicine applications, accurate, intelligent disease diagnosis systems have been profoundly coveted. In recent years, numerous individual machine learning-based classifiers have been proposed and tested, and it is now widely accepted that a single classifier cannot effectively classify and diagnose all diseases. This has prompted a number of recent research attempts to arrive at a consensus using ensemble classification techniques. In this paper, a hybrid system is proposed to diagnose ailments by optimizing the individual classifier parameters of two classifier techniques, namely, the support vector machine (SVM) and the multilayer perceptron (MLP). We employ three recent evolutionary algorithms to optimize the parameters of the classifiers above, leading to six alternative hybrid disease diagnosis systems, also referred to as hybrid intelligent systems (HISs). Multiple objectives, namely, prediction accuracy, sensitivity, and specificity, have been considered to assess the efficacy of the proposed hybrid systems against existing ones. The proposed model is evaluated on 11 benchmark datasets, and the obtained results demonstrate that our proposed hybrid diagnosis systems perform better in terms of disease prediction accuracy, sensitivity, and specificity. Pertinent statistical tests were carried out to substantiate the efficacy of the obtained results. PMID:29065626

  12. Hybrid Disease Diagnosis Using Multiobjective Optimization with Evolutionary Parameter Optimization

    Directory of Open Access Journals (Sweden)

    MadhuSudana Rao Nalluri

    2017-01-01

    Full Text Available With the widespread adoption of e-Healthcare and telemedicine applications, accurate, intelligent disease diagnosis systems have been profoundly coveted. In recent years, numerous individual machine learning-based classifiers have been proposed and tested, and it is now widely accepted that a single classifier cannot effectively classify and diagnose all diseases. This has prompted a number of recent research attempts to arrive at a consensus using ensemble classification techniques. In this paper, a hybrid system is proposed to diagnose ailments by optimizing the individual classifier parameters of two classifier techniques, namely, the support vector machine (SVM) and the multilayer perceptron (MLP). We employ three recent evolutionary algorithms to optimize the parameters of the classifiers above, leading to six alternative hybrid disease diagnosis systems, also referred to as hybrid intelligent systems (HISs). Multiple objectives, namely, prediction accuracy, sensitivity, and specificity, have been considered to assess the efficacy of the proposed hybrid systems against existing ones. The proposed model is evaluated on 11 benchmark datasets, and the obtained results demonstrate that our proposed hybrid diagnosis systems perform better in terms of disease prediction accuracy, sensitivity, and specificity. Pertinent statistical tests were carried out to substantiate the efficacy of the obtained results.

  13. Weighted Constraint Satisfaction for Smart Home Automation and Optimization

    Directory of Open Access Journals (Sweden)

    Noel Nuo Wi Tay

    2016-01-01

    Full Text Available Automation of the smart home binds together services of hardware and software to provide support for its human inhabitants. The rise of web technologies offers applicable concepts and technologies for service composition that can be exploited for automated planning of the smart home, which can be further enhanced by an implementation based on service-oriented architecture (SOA). SOA supports loose coupling and late binding of devices, enabling a more declarative approach to defining services and simplifying home configurations. One such declarative approach is to represent and solve automated planning as a constraint satisfaction problem (CSP), which has the advantage of handling larger domains of home states. However, CSP uses hard constraints and thus cannot perform optimization or handle contradictory goals and partial goal fulfillment, which are practical issues smart environments will face if humans are involved. This paper extends this approach to the Weighted Constraint Satisfaction Problem (WCSP). Branch-and-bound depth-first search is used, where the lower bound is estimated by a bacterial memetic algorithm (BMA) on a relaxed version of the original optimization problem. Experiments on up to 16-step planning of home services demonstrate the applicability and practicality of the approach, with the inclusion of local search for trivial service combinations in BMA producing further performance enhancements. This work also aims to set the groundwork for further research in the field.
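The move from hard to weighted constraints can be illustrated with a toy smart-home example solved by branch-and-bound depth-first search. The device names, the costs, and the trivial lower bound used here (cost already incurred, rather than the paper's BMA-estimated bound) are all illustrative assumptions.

```python
# Toy weighted CSP for smart-home planning: variables are devices, values are
# modes, and soft constraints return violation costs instead of hard failures.
domains = {
    "heater": ["off", "low", "high"],
    "window": ["closed", "open"],
    "ac":     ["off", "on"],
}

def constraint_costs(assign):
    cost = 0
    # Contradictory goals get a weight instead of being forbidden outright.
    if "heater" in assign and "ac" in assign and \
            assign["heater"] != "off" and assign["ac"] == "on":
        cost += 10
    # Partial goal fulfillment: an open window wastes strong heating.
    if assign.get("window") == "open" and assign.get("heater") == "high":
        cost += 5
    # Comfort goal: some climate control should be active.
    if "heater" in assign and "ac" in assign and \
            assign["heater"] == "off" and assign["ac"] == "off":
        cost += 3
    return cost

def branch_and_bound(variables, domains, cost_fn):
    """Depth-first branch and bound minimizing total constraint cost."""
    best = {"assign": None, "cost": float("inf")}
    def dfs(i, assign):
        cost = cost_fn(assign)          # lower bound: cost already incurred
        if cost >= best["cost"]:
            return                      # prune this branch
        if i == len(variables):
            best["assign"], best["cost"] = dict(assign), cost
            return
        var = variables[i]
        for value in domains[var]:
            assign[var] = value
            dfs(i + 1, assign)
            del assign[var]
    dfs(0, {})
    return best["assign"], best["cost"]

plan, cost = branch_and_bound(list(domains), domains, constraint_costs)
```

A tighter lower bound (such as the paper's relaxed-problem estimate) prunes far more of the search tree than the accrued-cost bound used in this sketch.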

  14. SAE2.py: a python script to automate parameter studies using SCREAMER with application to magnetic switching on Z

    International Nuclear Information System (INIS)

    Orndorff-Plunkett, Franklin

    2011-01-01

    The SCREAMER simulation code is widely used at Sandia National Laboratories for designing and simulating pulsed power accelerator experiments on super power accelerators. A preliminary parameter study of Z with a magnetic switching retrofit illustrates the utility of the automating script for optimizing pulsed power designs. SCREAMER is a circuit-based code commonly used in pulsed-power design and requires numerous iterations to find optimal configurations. System optimization using simulations like SCREAMER is by nature inefficient and incomplete when done manually. This is especially the case when the system has many interactive elements whose emergent effects may be unforeseeable and complicated. For increased completeness, efficiency and robustness, investigators should probe a suitably confined parameter space using deterministic, genetic, cultural, ant-colony or other computational intelligence methods. I have developed SAE2 - a user-friendly, deterministic script that automates the search for optima of pulsed-power designs with SCREAMER. This manual demonstrates how to make input decks for SAE2 and optimize any pulsed-power design that can be modeled using SCREAMER. Application of SAE2 to magnetic switching on a model of a potential Z refurbishment illustrates the power of SAE2. With respect to the manual optimization, the automated optimization resulted in 5% greater peak current (10% greater energy) and a 25% increase in the safety factor for the most highly stressed element.
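The deterministic sweep that such a script automates can be sketched as follows: build one input deck per parameter combination, run the simulation on each, and keep the design with the best figure of merit. Here `run_screamer` is a stub with a smooth made-up response; a real driver would invoke SCREAMER on each generated deck and parse its output, and the parameter names are invented for illustration.

```python
import itertools

# Stub standing in for running SCREAMER on an input deck and extracting
# the peak load current from the output (toy smooth response).
def run_screamer(deck):
    L = deck["switch_inductance_nH"]
    C = deck["capacitance_nF"]
    return 1000.0 - (L - 40.0) ** 2 - (C - 25.0) ** 2

# The confined parameter space to probe, one list of levels per parameter.
sweep = {
    "switch_inductance_nH": [20.0, 30.0, 40.0, 50.0],
    "capacitance_nF": [15.0, 20.0, 25.0, 30.0],
}

best_deck, best_current = None, float("-inf")
for combo in itertools.product(*sweep.values()):
    deck = dict(zip(sweep.keys(), combo))   # one "input deck" per combination
    peak = run_screamer(deck)
    if peak > best_current:
        best_deck, best_current = deck, peak
```

With interacting circuit elements, sweeping the full cross-product (rather than one parameter at a time) is what catches the emergent effects the abstract warns about, at the cost of exponentially many runs.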

  15. Retinal blood vessel segmentation in high resolution fundus photographs using automated feature parameter estimation

    Science.gov (United States)

    Orlando, José Ignacio; Fracchia, Marcos; del Río, Valeria; del Fresno, Mariana

    2017-11-01

    Several ophthalmological and systemic diseases are manifested through pathological changes in the properties and the distribution of the retinal blood vessels. The characterization of such alterations requires the segmentation of the vasculature, which is a tedious and time-consuming task that is infeasible to perform manually. Numerous attempts have been made to propose automated methods for segmenting the retinal vasculature from fundus photographs, although their application in real clinical scenarios is usually limited by their ability to deal with images taken at different resolutions. This is likely due to the large number of parameters that have to be properly calibrated according to each image scale. In this paper we propose to apply a novel strategy for automated feature parameter estimation, combined with a vessel segmentation method based on fully connected conditional random fields. The estimation model is learned by linear regression from structural properties of the images and known optimal configurations that were previously obtained for low-resolution data sets. Our experiments on high-resolution images show that this approach is able to estimate appropriate configurations that are suitable for performing the segmentation task without requiring parameters to be re-engineered. Furthermore, our combined approach reported state-of-the-art performance on the benchmark data set HRF, as measured in terms of the F1-score and the Matthews correlation coefficient.
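The core idea of the estimation model can be sketched with a single parameter: fit a linear map from a structural property of the image (here, its width in pixels) to known-good parameter values found on low-resolution sets, then predict a configuration for an unseen high-resolution image. The training pairs and the parameter being estimated are made up for illustration.

```python
# Ordinary least-squares fit of a line y = slope*x + intercept.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical calibration data: image widths of low-resolution data sets
# and the parameter values that were found optimal on each of them.
resolutions = [565, 768, 999, 1024]        # image widths (px)
optimal_width = [5.0, 6.8, 8.9, 9.1]       # calibrated parameter values

slope, intercept = fit_line(resolutions, optimal_width)

def estimate_parameter(resolution):
    """Predict a suitable parameter value for an unseen resolution."""
    return slope * resolution + intercept

predicted = estimate_parameter(3504)        # an HRF-sized image width
```

The real method regresses over several structural properties and several parameters at once, but each follows this same fit-then-extrapolate pattern.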

  16. Optimization of rotational arc station parameter optimized radiation therapy

    International Nuclear Information System (INIS)

    Dong, P.; Ungun, B.; Boyd, S.; Xing, L.

    2016-01-01

    Purpose: To develop a fast optimization method for station parameter optimized radiation therapy (SPORT) and show that SPORT is capable of matching VMAT in both plan quality and delivery efficiency by using three clinical cases of different disease sites. Methods: The angular space from 0° to 360° was divided into 180 station points (SPs). A candidate aperture was assigned to each of the SPs based on the calculation results using a column generation algorithm. The weights of the apertures were then obtained by optimizing the objective function using a state-of-the-art GPU-based proximal operator graph solver. To avoid being trapped in a local minimum in beamlet-based aperture selection using the gradient descent algorithm, a stochastic gradient descent was employed here. Apertures with zero or low weight were thrown out. To find out whether there was room to further improve the plan by adding more apertures or SPs, the authors repeated the above procedure with consideration of the existing dose distribution from the last iteration. At the end of the second iteration, the weights of all the apertures were reoptimized, including those of the first iteration. The above procedure was repeated until the plan could not be improved any further. The optimization technique was assessed by using three clinical cases (prostate, head and neck, and brain) with the results compared to those obtained using conventional VMAT in terms of dosimetric properties, treatment time, and total MU. Results: Marked dosimetric quality improvement was demonstrated in the SPORT plans for all three studied cases. For the prostate case, the volume of the 50% prescription dose was decreased by 22% for the rectum and 6% for the bladder. For the head and neck case, SPORT improved the mean dose for the left and right parotids by 15% each. The maximum dose was lowered from 72.7 to 71.7 Gy for the mandible, and from 30.7 to 27.3 Gy for the spinal cord. The mean dose for the pharynx and larynx was

  17. Optimal criteria for microscopic review of urinalysis following use of automated urine analyzer.

    Science.gov (United States)

    Khejonnit, Varanya; Pratumvinit, Busadee; Reesukumal, Kanit; Meepanya, Suriya; Pattanavin, Chanutchaya; Wongkrajang, Preechaya

    2015-01-15

    The Sysmex UX-2000 is a new, fully automated integrated urine analyzer. This device analyzes all physical and chemical characteristics of urine and of sediments in urine on a single platform. Because sediment analysis by fluorescent flow cytometry has limited ability to classify some formed elements present in urine (e.g., casts), laboratories should develop criteria for manual microscopic examination of urinalysis following the use of the automated urine analyzer. A total of 399 urine samples were collected from the routine workload. All samples were analyzed on the automated analyzer and then compared to the results of the manual microscopic method to establish optimal criteria. Another set of 599 samples was then used to validate the optimized criteria. The efficiency of the criteria and the review rate were calculated, and the false-positive and false-negative cases were enumerated and clarified. Eleven rules were established relating to the parameters categorized by the UX-2000, including cells, casts, crystals, organisms, sperm, and flags. After optimizing the rules, the review rate was 54.1% and the false-negative rate was 2.8%. The combination of the UX-2000 and the manual microscopic method obtains the best results. The UX-2000 improves efficiency by reducing the time and labor associated with the specimen analysis process. Copyright © 2014 Elsevier B.V. All rights reserved.
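A rule set of this kind is naturally expressed as a list of named predicates over the analyzer's results: if any rule fires, the sample is flagged for manual microscopic review, otherwise it is auto-released. The rule names and thresholds below are illustrative assumptions, not the paper's published 11 rules.

```python
# Illustrative review criteria over analyzer output (a dict of results).
# Thresholds are invented for the sketch; real criteria come from the
# validation study described in the abstract.
REVIEW_RULES = [
    ("rbc_high",      lambda r: r.get("rbc_per_ul", 0) > 25),
    ("wbc_high",      lambda r: r.get("wbc_per_ul", 0) > 30),
    ("cast_detected", lambda r: r.get("casts_per_ul", 0) > 1),
    ("crystals_flag", lambda r: r.get("crystals_flag", False)),
    ("sperm_flag",    lambda r: r.get("sperm_flag", False)),
    ("analyzer_flag", lambda r: bool(r.get("flags"))),
]

def needs_microscopic_review(result):
    """Return the list of triggered rules; an empty list means auto-release."""
    return [name for name, rule in REVIEW_RULES if rule(result)]

normal = {"rbc_per_ul": 5, "wbc_per_ul": 8}
abnormal = {"rbc_per_ul": 80, "casts_per_ul": 3, "flags": ["Turbidity"]}
```

Tuning the thresholds trades review rate against false-negative rate, which is exactly the trade-off (54.1% vs. 2.8%) quantified in the abstract.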

  18. Parameter optimization CCPP and coolant system gas turbine

    OpenAIRE

    Клер, Александр Матвеевич; Захаров, Юрий Борисович; Потанина, Юлия Михайловна

    2013-01-01

    Today most researchers optimize the parameters of cycles in combined cycle power plants without detailed calculations of the gas turbine flow path, which often involves separate optimization of the steam cycle and the gas turbine parameters, including the parameters of the gas turbine flow path that are usually known beforehand. This paper is the first to suggest a technique for coordinated optimization of combined cycle power plants, where both the parameters of the steam cycle in the combin...

  19. Implementation and optimization of automated dispensing cabinet technology.

    Science.gov (United States)

    McCarthy, Bryan C; Ferker, Michael

    2016-10-01

    A multifaceted automated dispensing cabinet (ADC) optimization initiative at a large hospital is described. The ADC optimization project, which was launched approximately six weeks after activation of ADCs in 30 patient care unit medication rooms of a newly established adult hospital, included (1) adjustment of par inventory levels (desired on-hand quantities of medications) and par reorder quantities to reduce the risk of ADC supply exhaustion and improve restocking efficiency, (2) expansion of ADC "common stock" (medications assigned to ADC inventories) to increase medication availability at the point of care, and (3) removal of some infrequently prescribed medications from ADCs to reduce the likelihood of product expiration. The purpose of the project was to address organizational concerns regarding widespread ADC medication stockouts, growing reliance on cart-fill medication delivery systems, and suboptimal medication order turnaround times. Leveraging of the ADC technology platform's reporting functionalities for enhanced inventory control yielded a number of benefits, including cost savings resulting from reduced pharmacy technician labor requirements (estimated at $2,728 annually), a substantial reduction in the overall weekly stockout percentage (from 3.2% before optimization to 0.5% eight months after optimization), an improvement in the average medication turnaround time, and estimated cost avoidance of $19,660 attributed to the reduced potential for product expiration. Efforts to optimize ADCs through par level optimization, expansion of common stock, and removal of infrequently used medications reduced pharmacy technician labor, decreased stockout percentages, generated opportunities for cost avoidance, and improved medication turnaround times. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  20. A Framework for Matching User Needs to an Optimal Level of Office Automation

    Science.gov (United States)

    1988-06-01

    This thesis introduces the concept of determining an organization’s optimal office automation strategy by investigating seven characteristics...technology, environment, and employee skill. These seven characteristics form the input into an office automation framework which mathematically determines which of three office automation strategies is best for a particular organization. These three strategy levels are called low level operational

  1. Comparison of haematological parameters determined by the Sysmex KX-21N automated haematology analyzer and the manual counts

    Directory of Open Access Journals (Sweden)

    Shu Elvis N

    2010-04-01

    Full Text Available Abstract Background This study was designed to determine the correlation between haematological parameters measured by the Sysmex KX-21N automated haematology analyzer and by manual methods. Method Sixty (60) subjects were randomly selected from both apparently healthy subjects and those with different blood disorders at the University of Nigeria Teaching Hospital (UNTH), Ituku-Ozalla, Enugu, Enugu State, Nigeria. Three (3) ml of venous blood was collected aseptically from each subject into tri-potassium ethylenediamine tetra-acetic acid (K3EDTA) for the analysis of haematological parameters using the automated and the manual methods. Results The blood film report by the manual method showed that 50% of the subjects were normocytic-normochromic while the other 50% revealed different abnormal blood pictures. Also, there were statistically significant differences (p Conclusion From the present study, it can be concluded that the automated haematology analyzer readings correlated well with readings by the standard manual method, although the latter method gave additional diagnostic information on the blood pictures. While patient care and laboratory operations could be optimized by using manual microscopic examination as a reflective substitute for automated methods, use of the automated method would ease the workload and save time for patients.

  2. Optimization of Selected RFID System Parameters

    Directory of Open Access Journals (Sweden)

    Peter Vestenicky

    2004-01-01

    Full Text Available This paper describes a procedure for maximizing the RFID transponder read range. This is done by optimizing the magnetic field intensity at the transponder location and by optimizing the coupling factor between the antenna and transponder coils. The results of this paper can be used for RFID systems with an inductive loop, i.e. systems working in the near electromagnetic field.

  3. GA BASED GLOBAL OPTIMAL DESIGN PARAMETERS FOR ...

    African Journals Online (AJOL)

    This article uses Genetic Algorithm (GA) for the global design optimization of consecutive reactions taking place in continuous stirred tank reactors (CSTRs) connected in series. GA based optimal design determines the optimum number of CSTRs in series to achieve the maximum conversion, fractional yield and selectivity ...

  4. Automating Initial Guess Generation for High Fidelity Trajectory Optimization Tools

    Science.gov (United States)

    Villa, Benjamin; Lantoine, Gregory; Sims, Jon; Whiffen, Gregory

    2013-01-01

    Many academic studies in spaceflight dynamics rely on simplified dynamical models, such as restricted three-body models or averaged forms of the equations of motion of an orbiter. In practice, the end result of these preliminary orbit studies needs to be transformed into more realistic models, in particular to generate good initial guesses for high-fidelity trajectory optimization tools like Mystic. This paper reviews and extends some of the approaches used in the literature to perform such a task, and explores the inherent trade-offs of such a transformation with a view toward automating it for the case of ballistic arcs. Sample test cases in the libration point regimes and small body orbiter transfers are presented.

  5. Automated Design and Optimization of Pebble-bed Reactor Cores

    International Nuclear Information System (INIS)

    Gougar, Hans D.; Ougouag, Abderrafi M.; Terry, William K.

    2010-01-01

    We present a conceptual design approach for high-temperature gas-cooled reactors using recirculating pebble-bed cores. The design approach employs PEBBED, a reactor physics code specifically designed to solve for and analyze the asymptotic burnup state of pebble-bed reactors, in conjunction with a genetic algorithm to obtain a core that maximizes a fitness value that is a function of user-specified parameters. The uniqueness of the asymptotic core state and the small number of independent parameters that define it suggest that core geometry and fuel cycle can be efficiently optimized toward a specified objective. PEBBED exploits a novel representation of the distribution of pebbles that enables efficient coupling of the burnup and neutron diffusion solvers. With this method, even complex pebble recirculation schemes can be expressed in terms of a few parameters that are amenable to modern optimization techniques. With PEBBED, the user chooses the type and range of core physics parameters that represent the design space. A set of traits, each with acceptable and preferred values expressed by a simple fitness function, is used to evaluate the candidate reactor cores. The stochastic search algorithm automatically drives the generation of core parameters toward the optimal core as defined by the user. The optimized design can then be modeled and analyzed in greater detail using higher resolution and more computationally demanding tools to confirm the desired characteristics. For this study, the design of pebble-bed high temperature reactor concepts subjected to demanding physical constraints demonstrated the efficacy of the PEBBED algorithm.
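The genetic-algorithm loop described above can be sketched generically: encode each candidate core as a small vector of parameters, score it with a fitness function, and evolve the population by selection, crossover, and mutation. The fitness function below is a stub standing in for a PEBBED physics evaluation, and the parameter names, bounds, and GA settings are illustrative assumptions.

```python
import random

# Stub fitness: reward proximity to a fictitious optimum. In real use this
# would run the physics code on the candidate core and score user traits.
def fitness(genome):
    target = [3.5, 11.0, 0.6]   # e.g. core radius (m), height (m), packing
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def genetic_search(fitness, bounds, pop_size=40, generations=60, seed=7,
                   mutation_rate=0.2, mutation_scale=0.1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 4]          # truncation selection
        pop = elite[:]                           # elitism: keep the best
        while len(pop) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(bounds))  # one-point crossover
            child = a[:cut] + b[cut:]
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation_rate:
                    child[i] += rng.gauss(0.0, mutation_scale * (hi - lo))
                    child[i] = min(hi, max(lo, child[i]))
            pop.append(child)
    return max(pop, key=fitness)

bounds = [(1.0, 6.0), (5.0, 15.0), (0.5, 0.7)]   # hypothetical design space
best = genetic_search(fitness, bounds)
```

The small number of independent parameters defining the asymptotic core state is what keeps such a stochastic search tractable despite each evaluation being a full physics solve.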

  6. Parameter identification using optimization techniques in the continuous simulation programs FORSIM and MACKSIM

    International Nuclear Information System (INIS)

    Carver, M.B.; Austin, C.F.; Ross, N.E.

    1980-02-01

    This report discusses the mechanics of automated parameter identification in simulation packages, and reviews available integration and optimization algorithms and their interaction within the recently developed optimization options in the FORSIM and MACKSIM simulation packages. In the MACKSIM mass-action chemical kinetics simulation package, the form and structure of the ordinary differential equations involved are known, so the implementation of an optimizing option is relatively straightforward. FORSIM, however, is designed to integrate ordinary and partial differential equations of arbitrary definition. As the form of the equations is not known in advance, the design of the optimizing option is more intricate, but the philosophy could be applied to most simulation packages. In either case, however, the invocation of the optimizing interface is simple and user-oriented. Full details for the use of the optimizing mode for each program are given; specific applications are used as examples. (O.T.)
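The interaction between integrator and optimizer can be sketched in miniature: integrate a trial model for a candidate parameter, measure the misfit against data, and let a scalar optimizer adjust the parameter. The first-order decay model, the Euler integrator, and the golden-section search below are illustrative stand-ins for the packages' actual solvers.

```python
import math

def simulate(k, t_end=5.0, dt=0.01, y0=1.0):
    """Euler integration of dy/dt = -k*y, sampled at unit times."""
    y, t, out = y0, 0.0, []
    while t <= t_end + 1e-9:
        if abs(t - round(t)) < dt / 2:
            out.append(y)
        y += dt * (-k * y)
        t += dt
    return out

# Synthetic "measurements" generated from the exact solution with k = 0.7.
data = [math.exp(-0.7 * t) for t in range(6)]

def misfit(k):
    return sum((s - d) ** 2 for s, d in zip(simulate(k), data))

# Golden-section search on the single parameter k over [0, 2].
lo, hi = 0.0, 2.0
phi = (math.sqrt(5) - 1) / 2
a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
for _ in range(60):
    if misfit(a) < misfit(b):
        hi, b = b, a
        a = hi - phi * (hi - lo)
    else:
        lo, a = a, b
        b = lo + phi * (hi - lo)
k_est = (lo + hi) / 2
```

In FORSIM the equations are user-defined, so only this outer loop structure carries over; the integrator must treat the model as a black box, which is what makes the general design more intricate than MACKSIM's fixed mass-action form.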

  7. Applications of the Automated SMAC Modal Parameter Extraction Package

    International Nuclear Information System (INIS)

    MAYES, RANDALL L.; DORRELL, LARRY R.; KLENKE, SCOTT E.

    1999-01-01

    An algorithm known as SMAC (Synthesize Modes And Correlate), based on principles of modal filtering, has been in development for a few years. The new capabilities of the automated version are demonstrated on test data from a complex shell/payload system. Examples of extractions from impact and shaker data are shown. The automated algorithm extracts 30 to 50 modes in the bandwidth from each column of the frequency response function matrix. Examples of the synthesized Mode Indicator Functions (MIFs) compared with the actual MIFs show the accuracy of the technique. A data set for one input and 170 accelerometer outputs can typically be reduced in an hour. Application to a test with some complex modes is also demonstrated

  8. Increasing the Efficiency of Automation of Production Processes by Reporting the Parameters of the Parts’ Flow

    Directory of Open Access Journals (Sweden)

    Pancho Tomov

    2017-08-01

    Full Text Available This paper presents an analysis and a proposal for increasing the efficiency of automation of production processes by reporting the parameters of the parts’ flow. The main focus is the correlation and dependence between the input and output parameters of the automated production process. On that basis, contemporary requirements for the development of the production process call for it to be considered as a whole, regardless of the stage at which process automation is performed.

  9. Optimization of electrospinning parameters for chitosan nanofibres

    CSIR Research Space (South Africa)

    Jacobs, V

    2011-06-01

    Full Text Available uniform chitosan nanofibres. The parameters studied were electric field strength, ratio of solvents - trifluoroacetic acid (TFA)/ dichloromethane (DCM), concentration of chitosan in the spinning solution, their individual and interaction effects...

  10. Physical parameter optimization by Response Surface Methodology ...

    African Journals Online (AJOL)

    Response Surface Methodology (RSM) is an empirical technique involving the use of Design Expert software to derive a predictive model similar to regression analysis. The present study demonstrates the application of RSM in the optimization of lipase production by Aspergillus niger. The experimental validation of the ...

  11. Automated Modal Parameter Estimation of Civil Engineering Structures

    DEFF Research Database (Denmark)

    Andersen, Palle; Brincker, Rune; Goursat, Maurice

    In this paper the problem of automatic modal parameter extraction from ambient-excited civil engineering structures is considered. Two different approaches for obtaining the modal parameters automatically are presented: the Frequency Domain Decomposition (FDD) technique and a correlation...

  12. Nanohydroxyapatite synthesis using optimized process parameters ...

    Indian Academy of Sciences (India)

    ted, the solutions were vacuum filtered and washed with water and ethanol. The washed precipitates (NHA) were ... The sample was subjected to low vacuum at an accelerating voltage of 20 kV, current of 60–90 mA and ..... sidering the operability of the ultrasonication machine in the actual process, the parameters were ...

  13. Nanohydroxyapatite synthesis using optimized process parameters ...

    Indian Academy of Sciences (India)

    2016-08-26

    Aug 26, 2016 ... In this study, nanohydroxyapatite (NHA) was synthesized using calcium nitrate tetrahydrate and diammonium hydrogen phosphate via the precipitation method assisted with ultrasonication. Three independent process parameters: temperature (70, 80 and 90°C), ultrasonication time (20, 25 and 30 ...

  14. Optimization of Experimental Parameters in preparing ...

    African Journals Online (AJOL)

    The anodic oxidation method has been applied to the preparation of multinanoporous TiO2 thin films. The experimental parameters, including the electrolyte nature, oxidation voltage, and oxidation time have been carefully controlled. Their influence on the structure, morphology and photocatalytic activity of the prepared ...

  15. Automated magnetic divertor design for optimal power exhaust

    International Nuclear Information System (INIS)

    Blommaert, Maarten

    2017-01-01

    The so-called divertor is the standard particle and power exhaust system of nuclear fusion tokamaks. In essence, the magnetic configuration hereby 'diverts' the plasma to a specific divertor structure. The design of this divertor is still a key issue to be resolved to evolve from experimental fusion tokamaks to commercial power plants. The focus of this dissertation is on one particular design requirement: avoiding excessive heat loads on the divertor structure. The divertor design process is assisted by plasma edge transport codes that simulate the plasma and neutral particle transport in the edge of the reactor. These codes are computationally extremely demanding, not in the least due to the complex collisional processes between plasma and neutrals that lead to strong radiation sinks and macroscopic heat convection near the vessel walls. One way of improving the heat exhaust is by modifying the magnetic confinement that governs the plasma flow. In this dissertation, automated design of the magnetic configuration is pursued using adjoint-based optimization methods. A simple and fast perturbation model is used to compute the magnetic field in the vacuum vessel. A stable optimal design method of the nested type is then elaborated that strictly accounts for several nonlinear design constraints and code limitations. Using appropriate cost function definitions, the heat is spread more uniformly over the high-heat load plasma-facing components in a practical design example. Furthermore, practical in-parts adjoint sensitivity calculations are presented that provide a way to an efficient optimization procedure. Results are elaborated for a fictitious JET (Joint European Torus) case. The heat load is strongly reduced by exploiting an expansion of the magnetic flux towards the solid divertor structure. Subsequently, shortcomings of the perturbation model for magnetic field calculations are discussed in comparison to a free boundary equilibrium (FBE) simulation.

  16. Automated magnetic divertor design for optimal power exhaust

    Energy Technology Data Exchange (ETDEWEB)

    Blommaert, Maarten

    2017-07-01

    The so-called divertor is the standard particle and power exhaust system of nuclear fusion tokamaks. In essence, the magnetic configuration hereby 'diverts' the plasma to a specific divertor structure. The design of this divertor is still a key issue to be resolved to evolve from experimental fusion tokamaks to commercial power plants. The focus of this dissertation is on one particular design requirement: avoiding excessive heat loads on the divertor structure. The divertor design process is assisted by plasma edge transport codes that simulate the plasma and neutral particle transport in the edge of the reactor. These codes are computationally extremely demanding, not in the least due to the complex collisional processes between plasma and neutrals that lead to strong radiation sinks and macroscopic heat convection near the vessel walls. One way of improving the heat exhaust is by modifying the magnetic confinement that governs the plasma flow. In this dissertation, automated design of the magnetic configuration is pursued using adjoint-based optimization methods. A simple and fast perturbation model is used to compute the magnetic field in the vacuum vessel. A stable optimal design method of the nested type is then elaborated that strictly accounts for several nonlinear design constraints and code limitations. Using appropriate cost function definitions, the heat is spread more uniformly over the high-heat load plasma-facing components in a practical design example. Furthermore, practical in-parts adjoint sensitivity calculations are presented that provide a way to an efficient optimization procedure. Results are elaborated for a fictitious JET (Joint European Torus) case. The heat load is strongly reduced by exploiting an expansion of the magnetic flux towards the solid divertor structure. Subsequently, shortcomings of the perturbation model for magnetic field calculations are discussed in comparison to a free boundary equilibrium (FBE) simulation.

  17. Automated IMRT planning with regional optimization using planning scripts.

    Science.gov (United States)

    Xhaferllari, Ilma; Wong, Eugene; Bzdusek, Karl; Lock, Michael; Chen, Jeff

    2013-01-07

    Intensity-modulated radiation therapy (IMRT) has become a standard technique in radiation therapy for treating different types of cancers. Various class solutions have been developed to generate IMRT plans efficiently for simple cases (e.g., localized prostate, whole breast). However, for more complex cases (e.g., head and neck, pelvic nodes), it can be time-consuming for a planner to generate optimized IMRT plans. To generate optimal plans in these more complex cases, which generally have multiple target volumes and organs at risk, it is often necessary to add IMRT optimization structures such as dose-limiting ring structures, adjust the beam geometry, select inverse planning objectives and associated weights, and add IMRT objectives to reduce cold and hot spots in the dose distribution. These parameters are generally adjusted manually through repeated trial and error during the optimization process. To improve IMRT planning efficiency in these more complex cases, an iterative method that incorporates some of these adjustment processes automatically in a planning script is designed, implemented, and validated. In particular, regional optimization is implemented iteratively: hot and cold spots are defined and automatically segmented, new objectives and their relative weights are introduced into inverse planning, and the process is repeated until termination criteria are met. The method has been applied to three clinical sites: prostate with pelvic nodes, head and neck, and anal canal cancers, and has been shown to reduce IMRT planning time significantly for clinical applications with improved plan quality. The IMRT planning scripts have been used for more than 500 clinical cases.
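
The iterative regional-optimization loop described in this record can be sketched abstractly. The toy below is illustrative only (made-up voxel doses and a naive one-sided penalty "optimizer" standing in for a clinical inverse-planning engine): segment violating voxels, attach weighted objectives, re-optimize, and stop when no hot or cold spots remain.

```python
# Toy 1-D dose profile (Gy); the prescription region should sit near 60 Gy.
hot_tol, cold_tol = 62.0, 58.0
dose = [55.0, 66.0, 60.0, 52.0, 64.0, 60.0]  # initial "optimized" plan
objectives = []  # (voxel index, bound, kind, weight)

def reoptimize(dose, objectives, steps=400, lr=0.05):
    """Stand-in for inverse planning: gradient steps on one-sided
    quadratic penalties attached to the flagged voxels."""
    d = list(dose)
    for _ in range(steps):
        for idx, bound, kind, w in objectives:
            if kind == "max" and d[idx] > bound:    # hot-spot objective
                d[idx] -= lr * 2 * w * (d[idx] - bound)
            elif kind == "min" and d[idx] < bound:  # cold-spot objective
                d[idx] += lr * 2 * w * (bound - d[idx])
    return d

for iteration in range(10):
    hot = [i for i, v in enumerate(dose) if v > hot_tol + 1e-3]
    cold = [i for i, v in enumerate(dose) if v < cold_tol - 1e-3]
    if not hot and not cold:
        break  # termination criterion: no remaining hot or cold spots
    objectives += [(i, hot_tol, "max", 1.0) for i in hot]
    objectives += [(i, cold_tol, "min", 1.0) for i in cold]
    dose = reoptimize(dose, objectives)
```

The clinical script wraps the same define-segment-penalize-reoptimize cycle around a full treatment planning system rather than this toy penalty loop.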

  18. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  19. Optimizing a Drone Network to Deliver Automated External Defibrillators.

    Science.gov (United States)

    Boutilier, Justin J; Brooks, Steven C; Janmohamed, Alyf; Byers, Adam; Buick, Jason E; Zhan, Cathy; Schoellig, Angela P; Cheskes, Sheldon; Morrison, Laurie J; Chan, Timothy C Y

    2017-06-20

    Public access defibrillation programs can improve survival after out-of-hospital cardiac arrest, but automated external defibrillators (AEDs) are rarely available for bystander use at the scene. Drones are an emerging technology that can deliver an AED to the scene of an out-of-hospital cardiac arrest for bystander use. We hypothesize that a drone network designed with the aid of a mathematical model combining both optimization and queuing can reduce the time to AED arrival. We applied our model to 53 702 out-of-hospital cardiac arrests that occurred in the 8 regions of the Toronto Regional RescuNET between January 1, 2006, and December 31, 2014. Our primary analysis quantified the drone network size required to deliver an AED 1, 2, or 3 minutes faster than historical median 911 response times for each region independently. A secondary analysis quantified the reduction in drone resources required if RescuNET was treated as a large coordinated region. The region-specific analysis determined that 81 bases and 100 drones would be required to deliver an AED ahead of median 911 response times by 3 minutes. In the most urban region, the 90th percentile of the AED arrival time was reduced by 6 minutes and 43 seconds relative to historical 911 response times in the region. In the most rural region, the 90th percentile was reduced by 10 minutes and 34 seconds. A single coordinated drone network across all regions required 39.5% fewer bases and 30.0% fewer drones to achieve similar AED delivery times. An optimized drone network designed with the aid of a novel mathematical model can substantially reduce the AED delivery time to an out-of-hospital cardiac arrest event. © 2017 American Heart Association, Inc.
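
The base-location subproblem inside such a drone-network model resembles a covering problem. A minimal sketch (hypothetical coordinates and coverage radius, greedy set cover rather than the paper's combined optimization-and-queuing model):

```python
import math

# Hypothetical cardiac-arrest sites and candidate base locations (km grid).
arrests = [(0, 0), (1, 2), (2, 1), (8, 8), (9, 7), (5, 5)]
candidates = [(1, 1), (8, 8), (5, 5), (0, 9)]
radius = 2.5  # assumed drone range within the target response time

def covers(base, site):
    return math.dist(base, site) <= radius

# Greedy set cover: repeatedly open the candidate base that covers the
# most still-uncovered arrest sites.
uncovered = set(arrests)
bases = []
while uncovered:
    best = max(candidates, key=lambda b: sum(covers(b, p) for p in uncovered))
    gained = {p for p in uncovered if covers(best, p)}
    if not gained:
        break  # remaining sites unreachable from any candidate base
    bases.append(best)
    uncovered -= gained
```

Pooling regions helps for the same reason it helps here: one well-placed base can cover demand that two region-restricted bases would otherwise serve separately.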

  20. Numerical optimization methods for controlled systems with parameters

    Science.gov (United States)

    Tyatyushkin, A. I.

    2017-10-01

    First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method, based on a second-order functional increment formula. Next, a general optimal control problem is considered with state constraints and with parameters appearing on the right-hand sides of the controlled system and in the initial conditions. This complicated problem is reduced to a mathematical programming one, followed by the search for optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.

  1. Development of An Optimization Method for Determining Automation Rate in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun

    2014-01-01

    Since automation was introduced in various industrial fields, it has been known that automation provides positive effects, such as greater efficiency and fewer human errors, as well as a negative effect known as being out-of-the-loop (OOTL). Thus, before introducing automation in the nuclear field, the positive and negative effects of automation on human operators should be estimated. In this paper, focusing on CPS, an optimization method for finding an appropriate proportion of automation is suggested by integrating the proposed cognitive automation rate and the concept of the level of ostracism. The cognitive automation rate estimation method is suggested to express the reduced amount of human cognitive load, and the level of ostracism is suggested to express the difficulty of obtaining information from the automation system and the increased uncertainty of human operators' diagnoses. The maximum proportion of automation that maintains a high level of attention for monitoring the situation is derived by an experiment, and the automation rate is estimated by the suggested estimation method. This approach is expected to yield an appropriate proportion of automation that avoids the OOTL problem while having maximum efficacy.

  2. Optimization of surface roughness parameters in dry turning

    OpenAIRE

    R.A. Mahdavinejad; H. Sharifi Bidgoli

    2009-01-01

    Purpose: The precision of machine tools on the one hand and the input setup parameters on the other strongly influence the main output machining parameters, such as stock removal, tool wear ratio and surface roughness. Design/methodology/approach: There are a lot of input parameters which are effective in the variations of these output parameters. In CNC machines, the optimization of the machining process in order to predict surface roughness is very important. Findings: From this point of view...

  3. Selection of an optimal neural network architecture for computer-aided detection of microcalcifications - Comparison of automated optimization techniques

    International Nuclear Information System (INIS)

    Gurcan, Metin N.; Sahiner, Berkman; Chan Heangping; Hadjiiski, Lubomir; Petrick, Nicholas

    2001-01-01

    Many computer-aided diagnosis (CAD) systems use neural networks (NNs) for either detection or classification of abnormalities. Currently, most NNs are 'optimized' by manual search in a very limited parameter space. In this work, we evaluated the use of automated optimization methods for selecting an optimal convolution neural network (CNN) architecture. Three automated methods, the steepest descent (SD), the simulated annealing (SA), and the genetic algorithm (GA), were compared. We used as an example the CNN that classifies true and false microcalcifications detected on digitized mammograms by a prescreening algorithm. Four parameters of the CNN architecture were considered for optimization, the numbers of node groups and the filter kernel sizes in the first and second hidden layers, resulting in a search space of 432 possible architectures. The area Az under the receiver operating characteristic (ROC) curve was used to design a cost function. The SA experiments were conducted with four different annealing schedules. Three different parent selection methods were compared for the GA experiments. An available data set was split into two groups with approximately equal number of samples. By using the two groups alternately for training and testing, two different cost surfaces were evaluated. For the first cost surface, the SD method was trapped in a local minimum 91% (392/432) of the time. The SA using the Boltzmann schedule selected the best architecture after evaluating, on average, 167 architectures. The GA achieved its best performance with linearly scaled roulette-wheel parent selection; however, it evaluated 391 different architectures, on average, to find the best one. The second cost surface contained no local minimum. For this surface, a simple SD algorithm could quickly find the global minimum, but the SA with the very fast reannealing schedule was still the most efficient. The same SA scheme, however, was trapped in a local minimum on the first cost surface.
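
Simulated annealing over a discrete architecture grid, as compared in this record, can be sketched compactly. The score function below is a hypothetical smooth stand-in for the Az cost surface (the real score requires training the CNN on mammogram data); the parameter grids only mimic the four-parameter search space.

```python
import math, random

random.seed(0)

# Discrete grid mimicking the four CNN architecture parameters:
# node groups in the two hidden layers and the two filter kernel sizes.
space = [[2, 4, 6, 8], [2, 4, 6, 8], [3, 5, 7], [3, 5, 7]]

def az_score(arch):
    """Hypothetical stand-in for the ROC area Az of an architecture."""
    g1, g2, k1, k2 = arch
    return 1.0 - 0.01 * ((g1 - 6) ** 2 + (g2 - 4) ** 2
                         + (k1 - 5) ** 2 + (k2 - 5) ** 2)

def neighbor(arch):
    """Re-draw one randomly chosen architecture parameter."""
    i = random.randrange(len(space))
    new = list(arch)
    new[i] = random.choice(space[i])
    return tuple(new)

def anneal(steps=2000, t0=1.0, cooling=0.995):
    arch = (2, 2, 3, 3)
    best, t = arch, t0
    for _ in range(steps):
        cand = neighbor(arch)
        delta = az_score(cand) - az_score(arch)
        # Boltzmann acceptance: always take improvements, sometimes worse moves.
        if delta > 0 or random.random() < math.exp(delta / t):
            arch = cand
        if az_score(arch) > az_score(best):
            best = arch
        t *= cooling
    return best

best_arch = anneal()
```

The annealing schedule (here geometric cooling) is exactly the design choice the study varies across its four SA experiments.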

  4. Optimal parameters of the SVM for temperature prediction

    Directory of Open Access Journals (Sweden)

    X. Shi

    2015-05-01

    Full Text Available This paper established three different optimization models in order to predict the temperature at the Foping station. Principal component analysis (PCA) was used to reduce the dimension, turning the many climate factors into a few variables. The parameters of a support vector machine (SVM) were then optimized with a genetic algorithm (GA), particle swarm optimization (PSO) and an improved genetic algorithm. The most suitable method for parameter optimization was identified by comparing the results of the three models. The results are as follows: the parameters optimized by the improved genetic algorithm produced predicted values closest to the measured values and best fitted the measured trend, and its convergence speed is relatively fast.
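
Evolutionary tuning of SVM hyperparameters, as in this record, can be sketched with a plain genetic algorithm. The objective below is a toy quadratic surface standing in for cross-validation error over log-scaled (C, gamma); a real run would train and validate an SVM at each evaluation.

```python
import random

random.seed(1)

def validation_error(c_exp, g_exp):
    """Hypothetical stand-in for SVM cross-validation error over
    log2(C) and log2(gamma); its minimum sits at C=2**5, gamma=2**-3."""
    return (c_exp - 5) ** 2 + (g_exp + 3) ** 2

def ga(pop_size=30, generations=60, bounds=(-10, 10)):
    lo, hi = bounds
    pop = [(random.uniform(lo, hi), random.uniform(lo, hi))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: validation_error(*ind))
        parents = pop[:pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            w = random.random()                 # arithmetic crossover
            child = [w * a[i] + (1 - w) * b[i] for i in range(2)]
            if random.random() < 0.2:           # Gaussian mutation
                child[random.randrange(2)] += random.gauss(0, 0.5)
            children.append(tuple(child))
        pop = parents + children
    return min(pop, key=lambda ind: validation_error(*ind))

c_exp, g_exp = ga()
```

PSO or an improved GA would swap out only the variation operators; the evaluate-select-vary loop stays the same.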

  5. Automated Estimation of the Orbital Parameters of Jupiter's Moons

    Science.gov (United States)

    Western, Emma; Ruch, Gerald T.

    2016-01-01

    Every semester the Physics Department at the University of St. Thomas has the Physics 104 class complete a Jupiter lab. This involves taking around twenty images of Jupiter and its moons with the telescope at the University of St. Thomas Observatory over the course of a few nights. The students then take each image and find the distance from each moon to Jupiter and plot the distances versus the elapsed time for the corresponding image. Students use the plot to fit four sinusoidal curves of the moons of Jupiter. I created a script that automates this process for the professor. It takes the list of images and creates a region file used by the students to measure the distance from the moons to Jupiter, a png image that is the graph of all the data points and the fitted curves of the four moons, and a csv file that contains the list of images, the date and time each image was taken, the elapsed time since the first image, and the distances to Jupiter for Io, Europa, Ganymede, and Callisto. This is important because it lets the professor spend more time working with the students and answering questions as opposed to spending time fitting the curves of the moons on the graph, which can be time consuming.
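
For a moon with known orbital period, the sinusoid fit the script automates reduces to linear least squares, since d(t) = a·sin(ωt) + b·cos(ωt) is linear in (a, b). A self-contained sketch with synthetic distance data (the amplitude, phase, and sampling are illustrative, not the observatory's values):

```python
import math

# Synthetic moon-distance measurements: amplitude 4.2 (arbitrary units),
# phase 0.6 rad, angular frequency from Io's ~1.77-day period.
omega = 2 * math.pi / 1.77
times = [0.1 * i for i in range(40)]            # elapsed time in days
dist = [4.2 * math.sin(omega * t + 0.6) for t in times]

# Normal equations for the 2-parameter linear model d = a*sin + b*cos.
s = [math.sin(omega * t) for t in times]
c = [math.cos(omega * t) for t in times]
sss = sum(x * x for x in s)
scc = sum(x * x for x in c)
ssc = sum(x * y for x, y in zip(s, c))
sy = sum(x * y for x, y in zip(s, dist))
cy = sum(x * y for x, y in zip(c, dist))
det = sss * scc - ssc * ssc
a = (scc * sy - ssc * cy) / det
b = (sss * cy - ssc * sy) / det

amplitude = math.hypot(a, b)   # recovers 4.2
phase = math.atan2(b, a)       # recovers 0.6 rad
```

Fitting amplitude and phase this way, per moon, is what frees the professor from adjusting curves by hand.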

  6. Architecture of Automated Database Tuning Using SGA Parameters

    Directory of Open Access Journals (Sweden)

    Hitesh KUMAR SHARMA

    2012-05-01

    Full Text Available Business data always grows, from kilobytes to megabytes, gigabytes, terabytes, petabytes, and beyond. There is no way to avoid this increasing rate of data while the business keeps running. Because of this, database tuning has become a critical part of an information system. Tuning a database in a cost-effective manner is a growing challenge. The total cost of ownership (TCO) of information technology needs to be significantly reduced by minimizing people costs. In fact, mistakes in the operation and administration of information systems are the single most common reason for system outage and unacceptable performance [3]. One way of addressing the challenge of total cost of ownership is by making information systems more self-managing. A particularly difficult piece of the ambitious vision of making database systems self-managing is the automation of database performance tuning. In this paper, we explain the progress made thus far on this important problem. Specifically, we propose an architecture and algorithm for this problem.

  7. Setting of the Optimal Parameters of Melted Glass

    Czech Academy of Sciences Publication Activity Database

    Luptáková, Natália; Matejíčka, L.; Krečmer, N.

    2015-01-01

    Vol. 10, No. 1 (2015), pp. 73-79 ISSN 1802-2308 Institutional support: RVO:68081723 Keywords: Striae * Glass * Glass melting * Regression * Optimal parameters Subject RIV: JH - Ceramics, Fire-Resistant Materials and Glass

  8. The optimal extraction parameters and anti-diabetic activity of ...

    African Journals Online (AJOL)

    diabetic activity of FIBL on alloxan-induced diabetic mice were studied. The optimal extraction parameters of FIBL were obtained by single factor test and orthogonal test, as follows: ethanol concentration 60 %, ratio of solvent to raw material 30 ...

  9. Segmentation optimization and stratified object-based analysis for semi-automated geomorphological mapping

    NARCIS (Netherlands)

    Anders, N.S.; Seijmonsbergen, A.C.; Bouten, W.

    2011-01-01

    Semi-automated geomorphological mapping techniques are gradually replacing classical techniques due to increasing availability of high-quality digital topographic data. In order to efficiently analyze such large amounts of data, there is a need for optimizing the processing of automated mapping

  10. Design of an optimal automation system : Finding a balance between a human's task engagement and exhaustion

    NARCIS (Netherlands)

    Klein, Michel; van Lambalgen, Rianne

    2011-01-01

    In demanding tasks, human performance can seriously degrade as a consequence of increased workload and limited resources. In such tasks it is very important to maintain an optimal performance quality, therefore automation assistance is required. On the other hand, automation can also impose

  11. On the role of modeling parameters in IMRT plan optimization

    International Nuclear Information System (INIS)

    Krause, Michael; Scherrer, Alexander; Thieke, Christian

    2008-01-01

    The formulation of optimization problems in intensity-modulated radiotherapy (IMRT) planning comprises the choice of various values such as function-specific parameters or constraint bounds. In current inverse planning programs that yield a single treatment plan for each optimization, it is often unclear how strongly these modeling parameters affect the resulting plan. This work investigates the mathematical concepts of elasticity and sensitivity to deal with this problem. An artificial planning case with a horseshoe-shaped target with different opening angles surrounding a circular risk structure is studied. As evaluation functions the generalized equivalent uniform dose (EUD) and the average underdosage below and average overdosage beyond certain dose thresholds are used. A single IMRT plan is calculated for an exemplary parameter configuration. The elasticity and sensitivity of each parameter are then calculated without re-optimization, and the results are numerically verified. The results show the following. (1) Elasticity can quantify the influence of a modeling parameter on the optimization result in terms of how strongly the objective function value varies under modifications of the parameter value. It also can describe how strongly the geometry of the involved planning structures affects the optimization result. (2) Based on the current parameter settings and corresponding treatment plan, sensitivity analysis can predict the optimization result for modified parameter values without re-optimization, and it can estimate the value intervals in which such predictions are valid. In conclusion, elasticity and sensitivity can provide helpful tools in inverse IMRT planning to identify the most critical parameters of an individual planning problem and to modify their values in an appropriate way.
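
The elasticity concept can be illustrated numerically on the generalized EUD. This sketch uses toy dose values and plain finite differences (the paper derives elasticities from the optimizer's internal state without re-optimization, which this does not reproduce):

```python
def eud(doses, a):
    """Generalized equivalent uniform dose with volume parameter a."""
    return (sum(d ** a for d in doses) / len(doses)) ** (1.0 / a)

def sensitivity(f, p, h=1e-5):
    """Central finite-difference derivative df/dp at parameter value p."""
    return (f(p + h) - f(p - h)) / (2 * h)

def elasticity(f, p):
    """Relative change of f per relative change of p: (p / f(p)) * df/dp."""
    return p / f(p) * sensitivity(f, p)

# Toy voxel doses (Gy) for one structure; a = 8 is an illustrative choice.
doses = [58.0, 60.0, 62.0, 61.0, 54.0]
f = lambda a: eud(doses, a)
e = elasticity(f, 8.0)
# A small elasticity means the plan evaluation is insensitive to the exact
# choice of a near 8, so that parameter is not critical for this structure.
```

Ranking parameters by elasticity is what identifies the "most critical parameters" the conclusion refers to.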

  12. Parameter optimization toward optimal microneedle-based dermal vaccination.

    Science.gov (United States)

    van der Maaden, Koen; Varypataki, Eleni Maria; Yu, Huixin; Romeijn, Stefan; Jiskoot, Wim; Bouwstra, Joke

    2014-11-20

    Microneedle-based vaccination has several advantages over vaccination by using conventional hypodermic needles. Microneedles are used to deliver a drug into the skin in a minimally-invasive and potentially pain free manner. Besides, the skin is a potent immune organ that is highly suitable for vaccination. However, there are several factors that influence the penetration ability of the skin by microneedles and the immune responses upon microneedle-based immunization. In this study we assessed several different microneedle arrays for their ability to penetrate ex vivo human skin by using trypan blue and (fluorescently or radioactively labeled) ovalbumin. Next, these different microneedles and several factors, including the dose of ovalbumin, the effect of using an impact-insertion applicator, skin location of microneedle application, and the area of microneedle application, were tested in vivo in mice. The penetration ability and the dose of ovalbumin that is delivered into the skin were shown to be dependent on the use of an applicator and on the microneedle geometry and size of the array. Besides microneedle penetration, the above described factors influenced the immune responses upon microneedle-based vaccination in vivo. It was shown that the ovalbumin-specific antibody responses upon microneedle-based vaccination could be increased up to 12-fold when an impact-insertion applicator was used, up to 8-fold when microneedles were applied over a larger surface area, and up to 36-fold dependent on the location of microneedle application. Therefore, these influencing factors should be considered to optimize microneedle-based dermal immunization technologies. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Optimization of hydraulic turbine governor parameters based on WPA

    Science.gov (United States)

    Gao, Chunyang; Yu, Xiangyang; Zhu, Yong; Feng, Baohao

    2018-01-01

    The parameters of hydraulic turbine governor directly affect the dynamic characteristics of the hydraulic unit, thus affecting the regulation capacity and the power quality of power grid. The governor of conventional hydropower unit is mainly PID governor with three adjustable parameters, which are difficult to set up. In order to optimize the hydraulic turbine governor, this paper proposes wolf pack algorithm (WPA) for intelligent tuning since the good global optimization capability of WPA. Compared with the traditional optimization method and PSO algorithm, the results show that the PID controller designed by WPA achieves a dynamic quality of hydraulic system and inhibits overshoot.
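
The tuning task in this record is: search PID gain space so that a simulated step response minimizes an error integral. The sketch below uses a toy first-order plant and plain random search as a stand-in for the wolf-pack search (plant time constant, gain ranges, and budget are all illustrative assumptions):

```python
import random

random.seed(2)

def step_cost(kp, ki, kd, dt=0.01, t_end=5.0, tau=0.5):
    """Integrated absolute error of a unit-step response for a
    first-order plant dy/dt = (u - y)/tau under discrete PID control."""
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        y += dt * (u - y) / tau
        cost += abs(err) * dt
    return cost

# Random search stands in here for the wolf-pack algorithm's search step:
# sample candidate gain triples, keep the best-performing one.
best_gains, best_cost = None, float("inf")
for _ in range(300):
    gains = (random.uniform(0, 10), random.uniform(0, 5),
             random.uniform(0, 0.2))
    c = step_cost(*gains)
    if c < best_cost:
        best_gains, best_cost = gains, c
```

WPA, PSO, or a classical tuning rule differ only in how the next candidate gains are proposed; the simulate-and-score inner loop is the same.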

  14. Investigation and validation of optimal cutting parameters for least ...

    African Journals Online (AJOL)

    The cutting parameters were analyzed and optimized using Box Behnken procedure in the DESIGN EXPERT environment. The effect of process parameters with the output variable were predicted which indicates that the highest cutting speed has significant role in producing least surface roughness followed by feed and ...

  15. On the use of PGD for optimal control applied to automated fibre placement

    Science.gov (United States)

    Bur, N.; Joyot, P.

    2017-10-01

    Automated Fibre Placement (AFP) is an incipient manufacturing process for composite structures. Despite its conceptual simplicity, it involves many complexities related to the necessity of melting the thermoplastic at the tape-substrate interface, ensuring the consolidation that needs the diffusion of molecules, and controlling the residual stress installation responsible for the residual deformations of the formed parts. The optimisation of the process and the determination of the process window cannot be achieved in a traditional way, since this requires a plethora of trials/errors or numerical simulations, because there are many parameters involved in the characterisation of the material and the process. Using reduced order modelling such as the so-called Proper Generalised Decomposition method allows the construction of a multi-parametric solution taking many parameters into account. This leads to virtual charts that can be explored on-line in real time in order to perform process optimisation or on-line simulation-based control. Thus, for a given set of parameters, determining the power leading to an optimal temperature becomes easy. However, instead of controlling the power knowing the temperature field by particularising an abacus, we propose here an approach based on optimal control: we solve by PGD a dual problem derived from the heat equation and optimality criteria. To circumvent numerical issues due to an ill-conditioned system, we propose an algorithm based on Uzawa's method. In this way, we are able to solve the dual problem, setting the desired state as an extra coordinate in the PGD framework. In a single computation, we get both the temperature field and the required heat flux to reach a parametric optimal temperature on a given zone.

  16. Parameter optimization of electrochemical machining process using black hole algorithm

    Science.gov (United States)

    Singh, Dinesh; Shukla, Rajkamal

    2017-12-01

    Advanced machining processes are significant as higher accuracy in machined components is required in the manufacturing industries. Parameter optimization of machining processes gives optimum control to achieve the desired goals. In this paper, the electrochemical machining (ECM) process is considered, and its performance is evaluated using the black hole algorithm (BHA). BHA is based on the fundamental idea of black hole theory and has few operating parameters to tune. The two performance parameters, material removal rate (MRR) and overcut (OC), are considered separately to obtain optimum machining parameter settings using BHA. The variations of the process parameters with respect to the performance parameters are reported for a better and more effective understanding of the considered process, using a single objective at a time. The results obtained using BHA are found to be better when compared with the results of other metaheuristic algorithms, such as the genetic algorithm (GA), artificial bee colony (ABC) and biogeography-based optimization (BBO), attempted by previous researchers.
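
The black hole algorithm has a simple structure: the best candidate is the "black hole", the remaining "stars" drift toward it, and any star crossing the event horizon is swallowed and respawned at random. A minimal sketch on a toy objective (standing in for an ECM response model such as MRR or overcut; all settings are illustrative):

```python
import math, random

random.seed(3)

def objective(x):
    """Toy response surface standing in for an ECM performance model."""
    return sum(v * v for v in x)

def black_hole_search(dim=2, n_stars=20, iters=200, lo=-5.0, hi=5.0):
    rand_star = lambda: [random.uniform(lo, hi) for _ in range(dim)]
    stars = [rand_star() for _ in range(n_stars)]
    bh = min(stars, key=objective)[:]          # best star is the black hole
    for _ in range(iters):
        for s in stars:                        # stars drift toward the hole
            for d in range(dim):
                s[d] += random.random() * (bh[d] - s[d])
        cand = min(stars, key=objective)
        if objective(cand) < objective(bh):
            bh = cand[:]
        # Event horizon radius: fitness of the hole over total star fitness.
        total = sum(objective(s) for s in stars)
        horizon = objective(bh) / total if total else 0.0
        for i, s in enumerate(stars):          # respawn swallowed stars
            if math.dist(s, bh) < horizon:
                stars[i] = rand_star()
    return bh

best = black_hole_search()
```

The small number of settings visible here (population size and iteration budget) is the "less operating parameters to tune" advantage the abstract cites.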

  17. Helical tomotherapy optimized planning parameters for nasopharyngeal cancer

    Science.gov (United States)

    Yawichai, K.; Chitapanarux, I.; Wanwilairat, S.

    2016-03-01

    Helical TomoTherapy (HT) planning depends on optimization parameters including the field width (FW), pitch factor (PF) and modulation factor (MF). These parameters affect both plan quality and treatment time. The aim of this study was to find optimized parameters that compromise between plan quality and treatment time. Data sets from six nasopharyngeal cancer patients were used. For each patient data set, 18 treatment plans with different parameter combinations (FW = 5.0, 2.5, 1.0 cm; PF = 0.43, 0.287, 0.215; MF = 2.0, 3.0) were created. The identical optimization procedure followed ICRU 83 recommendations. The average D50 of both parotid glands and the treatment time per fraction were compared for all plans. Treatment plans with FW = 1.0 cm showed the lowest average D50 of both parotid glands. The treatment time increased inversely with FW: at FW = 1.0 cm the average treatment time was four times longer than at FW = 5.0 cm. PF had very little influence on the average D50 of both parotid glands. Finally, as MF increased from 2.0 to 3.0, the average D50 of both parotid glands decreased slightly; however, the average treatment time increased by 22.28%. For routine nasopharyngeal cancer patients treated with HT, we suggest planning optimization parameters of FW = 5.0 cm, PF = 0.43 and MF = 2.0.

  18. Genetic Algorithm Optimizes Q-LAW Control Parameters

    Science.gov (United States)

    Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard

    2008-01-01

    A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools; when such initial solutions are used, the high-fidelity tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performance of each Q-law parameter set is evaluated in the multi-objective space (flight time vs. propellant mass) and ranked by the non-dominated sorting method, which assigns a better fitness value to solutions that are dominated by fewer other solutions. With this ranking, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions over the course of evolution. The population of solutions converges to the Pareto front that is attainable within the Q-law control parameter space.
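
    The non-dominated sorting step described above can be illustrated with a minimal Pareto-front filter; the (flight time, propellant mass) pairs below are invented for the example, not mission data:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical (flight_time, propellant_mass) pairs for candidate Q-law settings
cands = [(10.0, 5.0), (12.0, 3.0), (11.0, 4.0), (13.0, 3.5), (10.5, 6.0)]
front = pareto_front(cands)
```

    Full non-dominated *sorting* then peels off this front, removes it, and repeats, assigning rank 1, 2, ... to successive fronts.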

  19. Multi-objective optimization in quantum parameter estimation

    Science.gov (United States)

    Gong, BeiLi; Cui, Wei

    2018-04-01

    We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.

  20. Topological Optimization and Automated Construction for Lightweight Structures

    Data.gov (United States)

    National Aeronautics and Space Administration — The author proposes the development of an automated construction system for HEDS applications which will implement a game-changing material resource strategy...

  1. Optimal filtering, parameter tracking, and control of nonlinear nuclear reactors

    International Nuclear Information System (INIS)

    March-Leuba, C.; March-Leuba, J.; Perez, R.B.

    1988-01-01

    This paper presents a new formulation of a class of nonlinear optimal control problems in which the system's signals are noisy and some system parameters are changing arbitrarily with time. The methodology is validated with an application to a nonlinear nuclear reactor model. A variational technique based on Pontryagin's Maximum Principle is used to filter the noisy signals, estimate the time-varying parameters, and calculate the optimal controls. The reformulation of the variational technique as an initial value problem allows this microprocessor-based algorithm to perform on-line filtering, parameter tracking, and control

  2. ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, Scott W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Seth R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bevill, Aaron M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ibrahim, Ahmad M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Daily, Charles R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Evans, Thomas M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wagner, John C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Jeffrey O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Grove, Robert E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-08-01

    The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.

  3. Optimizing chirped laser pulse parameters for electron acceleration in vacuum

    Energy Technology Data Exchange (ETDEWEB)

    Akhyani, Mina; Jahangiri, Fazel; Niknam, Ali Reza; Massudi, Reza, E-mail: r-massudi@sbu.ac.ir [Laser and Plasma Research Institute, Shahid Beheshti University, Tehran 1983969411 (Iran, Islamic Republic of)

    2015-11-14

    Electron dynamics in the field of a chirped linearly polarized laser pulse is investigated. Variations of electron energy gain versus chirp parameter, time duration, and initial phase of laser pulse are studied. Based on maximizing laser pulse asymmetry, a numerical optimization procedure is presented, which leads to the elimination of rapid fluctuations of gain versus the chirp parameter. Instead, a smooth variation is observed that considerably reduces the accuracy required for experimentally adjusting the chirp parameter.

  4. Machining Parameters Optimization using Hybrid Firefly Algorithm and Particle Swarm Optimization

    Science.gov (United States)

    Farahlina Johari, Nur; Zain, Azlan Mohd; Haszlinna Mustaffa, Noorfa; Udin, Amirmudin

    2017-09-01

    Firefly Algorithm (FA) is a metaheuristic algorithm inspired by the flashing behavior of fireflies and the phenomenon of bioluminescent communication; in this research, the algorithm is used to optimize the machining parameters (feed rate, depth of cut, and spindle speed). The algorithm is hybridized with Particle Swarm Optimization (PSO) to discover better solutions when exploring the search space. The objective function of previous research is used to optimize the machining parameters in the turning operation. The optimal machining parameters estimated by FA that lead to a minimum surface roughness are validated using an ANOVA test.
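
    The paper's exact hybridization and objective function are not reproduced in the record; the sketch below combines the standard firefly attraction move with a PSO-style global-best pull on a hypothetical surface-roughness model (all constants and the objective are assumptions):

```python
import math, random

def fa_pso(f, bounds, n=15, iters=150, seed=2):
    """Minimize f: firefly attraction move plus a PSO-style global-best pull."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    beta0, gamma, alpha, w = 1.0, 1.0, 0.1, 0.5   # attraction, absorption, noise, PSO weight
    gbest = min(pos, key=f)[:]
    for _ in range(iters):
        cost = [f(p) for p in pos]
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:              # firefly j is brighter (lower cost)
                    r2 = sum((a - b) ** 2 for a, b in zip(pos[i], pos[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pos[i] = [a + beta * (b - a)
                              + w * rng.random() * (g - a)     # PSO global-best pull
                              + alpha * (rng.random() - 0.5)   # random walk term
                              for a, b, g in zip(pos[i], pos[j], gbest)]
            if f(pos[i]) < f(gbest):
                gbest = pos[i][:]
        alpha *= 0.97                              # anneal the random step
    return gbest

# hypothetical surface-roughness model of (feed_rate, depth_of_cut)
rough = lambda p: (p[0] - 0.2) ** 2 + (p[1] - 1.5) ** 2 + 0.3
best = fa_pso(rough, [(0.0, 1.0), (0.5, 3.0)])
```
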

  5. An Automated DAKOTA and VULCAN-CFD Framework with Application to Supersonic Facility Nozzle Flowpath Optimization

    Science.gov (United States)

    Axdahl, Erik L.

    2015-01-01

    Removing human interaction from design processes by using automation may lead to gains in both productivity and design precision. This memorandum describes efforts to incorporate high-fidelity numerical analysis tools into an automated framework and to apply that framework to applications of practical interest. The purpose of this effort was to integrate VULCAN-CFD into an automated, DAKOTA-enabled framework, with a proof-of-concept application being the optimization of supersonic test facility nozzles. It was shown that the optimization framework could be deployed on a high-performance computing cluster with the flow of information handled effectively to guide the optimization process. Furthermore, the application of the framework to supersonic test facility nozzle flowpath design and optimization was demonstrated using multiple optimization algorithms.

  6. Fine-Tuning ADAS Algorithm Parameters for Optimizing Traffic ...

    Science.gov (United States)

    With the development of Connected Vehicle technology, which facilitates wireless communication among vehicles and road-side infrastructure, Advanced Driver Assistance Systems (ADAS) can be adopted as an effective tool for accelerating traffic safety and mobility optimization at various highway facilities. To this end, traffic management centers identify the optimal ADAS algorithm parameter set that enables the maximum improvement of traffic safety and mobility performance, and broadcast the optimal parameter set wirelessly to individual ADAS-equipped vehicles. After adopting the optimal parameter set, the ADAS-equipped drivers become active agents in the traffic stream that work collectively and consistently to prevent traffic conflicts, lower the intensity of traffic disturbances, and suppress the development of traffic oscillations into heavy traffic jams. Successful implementation of this objective requires the analysis capability of capturing the impact of the ADAS on driving behaviors, and of measuring traffic safety and mobility performance under the influence of the ADAS. To address this challenge, this research proposes a synthetic methodology that incorporates ADAS-affected driving behavior modeling and state-of-the-art microscopic traffic flow modeling into a virtually simulated environment. Building on such an environment, the optimal ADAS algorithm parameter set is identified through an optimization programming framework to enable th

  7. APPLICATION OF GENETIC ALGORITHMS FOR ROBUST PARAMETER OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    N. Belavendram

    2010-12-01

    Full Text Available Parameter optimization can be achieved by many methods such as Monte-Carlo, full, and fractional factorial designs. Genetic algorithms (GA) are fairly recent in this respect but afford a novel method of parameter optimization. In GA, there is an initial pool of individuals each with its own specific phenotypic trait expressed as a ‘genetic chromosome’. Different genes enable individuals with different fitness levels to reproduce according to natural reproductive gene theory. This reproduction is established in terms of selection, crossover and mutation of reproducing genes. The resulting child generation of individuals has a better fitness level akin to natural selection, namely evolution. Populations evolve towards the fittest individuals. Such a mechanism has a parallel application in parameter optimization. Factors in a parameter design can be expressed as a genetic analogue in a pool of sub-optimal random solutions. Allowing this pool of sub-optimal solutions to evolve over several generations produces fitter generations converging to a pre-defined engineering optimum. In this paper, a genetic algorithm is used to study a seven factor non-linear equation for a Wheatstone bridge as the equation to be optimized. A comparison of the full factorial design against a GA method shows that the GA method is about 1200 times faster in finding a comparable solution.
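
    The selection, crossover, and mutation loop described above can be sketched as a plain real-coded GA. The two-variable "bridge imbalance" objective below is a hypothetical stand-in, not the paper's seven-factor Wheatstone equation:

```python
import random

def genetic_optimize(f, bounds, pop_size=30, gens=80, seed=3):
    """Minimize f with a real-coded GA: tournament selection, uniform crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    def tournament():
        a, b = rng.sample(pop, 2)
        return a if f(a) < f(b) else b
    for _ in range(gens):
        nxt = [min(pop, key=f)]                   # elitism: keep the fittest individual
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            child = [x if rng.random() < 0.5 else y for x, y in zip(p1, p2)]  # uniform crossover
            if rng.random() < 0.3:                # Gaussian mutation on one gene, clipped to bounds
                k = rng.randrange(len(child))
                lo, hi = bounds[k]
                child[k] = min(hi, max(lo, child[k] + rng.gauss(0, 0.02 * (hi - lo))))
            nxt.append(child)
        pop = nxt
    return min(pop, key=f)

# hypothetical bridge-imbalance objective of two resistor ratios (optimum at (1.0, 1.0))
imbalance = lambda r: (r[0] - r[1]) ** 2 + (r[0] - 1.0) ** 2
best = genetic_optimize(imbalance, [(0.1, 10.0), (0.1, 10.0)])
```
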

  8. Parameters Investigation of Mathematical Model of Productivity for Automated Line with Availability by DMAIC Methodology

    Directory of Open Access Journals (Sweden)

    Tan Chan Sin

    2014-01-01

    Full Text Available Automated lines are widely applied in industry, especially for mass production with low product variety. Productivity is one of the important criteria of an automated line, as well as of industry generally, since it directly reflects outputs and profits. Productivity must be forecast accurately in order to meet customer demand, and the forecast is calculated using a mathematical model. A mathematical model of productivity with availability for automated lines has been introduced to express productivity in terms of a single level of reliability for stations and mechanisms. Since this model cannot match the actual productivity closely enough, owing to parameters it does not consider, it must be enhanced with the loss parameters missing from the current model. This paper presents the productivity-loss parameters investigated using the DMAIC (Define, Measure, Analyze, Improve, and Control) concept and the PACE Prioritization Matrix (Priority, Action, Consider, and Eliminate). The investigated parameters are important for further improving the mathematical model of productivity with availability, in order to develop a robust mathematical model of productivity for automated lines.

  9. Multi-parameter optimization of electrostatic micro-generators using design optimization algorithms

    International Nuclear Information System (INIS)

    Hoffmann, Daniel; Folkmer, Bernd; Manoli, Yiannos

    2010-01-01

    In this paper, the design of an electrostatic micro-generator with an in-plane area-overlap architecture is optimized in a six-dimensional parameter space using multi-parameter optimization algorithms. A parametric model is presented including four geometric and two electrical parameters. The constraints of the design parameters are discussed. The design optimization is carried out in modeFRONTIER using a genetic algorithm. The results show that the displacement limit and the number of electrode elements are essential parameters, which require optimization in the design process. The other parameters take values at the upper or lower bound of their design space. The results also demonstrate that a maximized power output will not be achieved by maximizing the capacitance change per unit displacement

  10. Estimating cellular parameters through optimization procedures: elementary principles and applications

    Directory of Open Access Journals (Sweden)

    Akatsuki Kimura

    2015-03-01

    Full Text Available Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
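
    The gradient-based SSE minimization that the article introduces can be sketched with finite differences; the linear model and the synthetic data set below are illustrative assumptions, not the article's biological examples:

```python
def sse(params, data, model):
    """Sum of squared errors between model predictions and observations."""
    return sum((model(x, params) - y) ** 2 for x, y in data)

def gradient_descent(data, model, p0, lr=0.01, steps=2000, h=1e-6):
    """Minimize the SSE by finite-difference gradient descent (local search sketch)."""
    p = list(p0)
    for _ in range(steps):
        grad = []
        for k in range(len(p)):
            q = p[:]
            q[k] += h                              # forward-difference partial derivative
            grad.append((sse(q, data, model) - sse(p, data, model)) / h)
        p = [v - lr * g for v, g in zip(p, grad)]  # step downhill
    return p

# hypothetical quantified data from y = 2x + 1, fit with the linear model y = a*x + b
line = lambda x, p: p[0] * x + p[1]
data = [(x, 2 * x + 1) for x in range(5)]
a, b = gradient_descent(data, line, [0.0, 0.0])
```

    As the abstract notes, such a gradient search only finds the local minimum nearest the starting point; stochastic or sampling approaches are needed to escape local minima.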

  11. An optimal generic model for multi-parameters and big data optimizing: a laboratory experimental study

    Science.gov (United States)

    Utama, D. N.; Ani, N.; Iqbal, M. M.

    2018-03-01

    Optimization is a process for finding the parameter or parameters that deliver an optimal value of an objective function. Seeking an optimal generic model for optimization is a computer science problem that has been studied in practice by numerous researchers. A generic model is a model that can be operated to solve any variety of optimization problem. Using an object-oriented method, the generic model for optimizing was constructed. Moreover, two types of optimization method, simulated annealing and hill climbing, were used in constructing the model and then compared to find the more optimal one. The result was that both methods gave the same value of the objective function, while the hill-climbing-based model consumed the shortest running time.
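
    The two methods compared in the study can be sketched side by side; the one-dimensional multimodal objective and all tuning constants below are assumptions for illustration. Hill climbing only accepts improvements and so can stall in a local minimum, while simulated annealing occasionally accepts worse moves and escapes:

```python
import math, random

def hill_climb(f, x0, step=0.1, iters=2000, seed=4):
    """Greedy local search: accept a random neighbour only if it improves."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        y = x + rng.uniform(-step, step)
        if f(y) < fx:
            x, fx = y, f(y)
    return x

def simulated_annealing(f, x0, step=0.5, iters=2000, t0=2.0, seed=4):
    """Accept worse moves with probability exp(-delta/T); T cools geometrically."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = x + rng.uniform(-step, step)
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:                        # track the best point ever visited
                best, fbest = x, fx
        t *= 0.995
    return best

# hypothetical multimodal objective: global minimum at x = 0, local traps elsewhere
f = lambda x: x * x + 3 * math.sin(3 * x) ** 2
hc = hill_climb(f, 3.0)
sa = simulated_annealing(f, 3.0)
```

    On a smooth unimodal objective, as apparently in the study's experiment, both would land on the same optimum and plain hill climbing would simply be faster.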

  12. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by we here give a simple method for finding optimal parameters in a regularized version of kernel MNF...... analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given....
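
    The 2-4 step refined grid search can be sketched generically: search a coarse grid, then shrink each range to one cell around the winner and repeat. The "model SNR" surface below is a hypothetical stand-in for the kernel/regularization response, not the DLR 3K data:

```python
def refined_grid_search(f, ranges, levels=3, pts=5):
    """Maximize f(s, r) by repeated grid searches, each zooming in around the best cell."""
    best = None
    for _ in range(levels):
        grids = [[lo + i * (hi - lo) / (pts - 1) for i in range(pts)] for lo, hi in ranges]
        cand = [(f(s, r), s, r) for s in grids[0] for r in grids[1]]
        _, s, r = max(cand)                       # grid point with the highest score
        best = (s, r)
        # shrink each range to one grid cell on either side of the winner
        ranges = [(max(lo, v - (hi - lo) / (pts - 1)), min(hi, v + (hi - lo) / (pts - 1)))
                  for (lo, hi), v in zip(ranges, best)]
    return best

# hypothetical model-SNR surface over (kernel scale, regularization), peaked at (2.0, 0.5)
snr = lambda s, r: -((s - 2.0) ** 2 + (r - 0.5) ** 2)
s, r = refined_grid_search(snr, [(0.1, 10.0), (0.0, 1.0)])
```

    Three levels of a 5x5 grid cost 75 evaluations but locate the peak far more precisely than a single 75-point grid over the full ranges.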

  13. Control parameter optimization for AP1000 reactor using Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Wang, Pengfei; Wan, Jiashuang; Luo, Run; Zhao, Fuyu; Wei, Xinyu

    2016-01-01

    Highlights: • The PSO algorithm is applied for control parameter optimization of AP1000 reactor. • Key parameters of the MSHIM control system are optimized. • Optimization results are evaluated though simulations and quantitative analysis. - Abstract: The advanced mechanical shim (MSHIM) core control strategy is implemented in the AP1000 reactor for core reactivity and axial power distribution control simultaneously. The MSHIM core control system can provide superior reactor control capabilities via automatic rod control only. This enables the AP1000 to perform power change operations automatically without the soluble boron concentration adjustments. In this paper, the Particle Swarm Optimization (PSO) algorithm has been applied for the parameter optimization of the MSHIM control system to acquire better reactor control performance for AP1000. System requirements such as power control performance, control bank movement and AO control constraints are reflected in the objective function. Dynamic simulations are performed based on an AP1000 reactor simulation platform in each iteration of the optimization process to calculate the fitness values of particles in the swarm. The simulation platform is developed in Matlab/Simulink environment with implementation of a nodal core model and the MSHIM control strategy. Based on the simulation platform, the typical 10% step load decrease transient from 100% to 90% full power is simulated and the objective function used for control parameter tuning is directly incorporated in the simulation results. With successful implementation of the PSO algorithm in the control parameter optimization of AP1000 reactor, four key parameters of the MSHIM control system are optimized. It has been demonstrated by the calculation results that the optimized MSHIM control system parameters can improve the reactor power control capability and reduce the control rod movement without compromising AO control. Therefore, the PSO based optimization
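
    The AP1000 simulation platform is not available here, but the PSO update the study applies can be sketched on a hypothetical two-gain cost function (the inertia and acceleration constants are common textbook values, and the quadratic cost is an assumption, not the MSHIM objective):

```python
import random

def pso(f, bounds, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=5):
    """Minimize f with standard global-best PSO (inertia w, cognitive c1, social c2)."""
    rng = random.Random(seed)
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in x]                     # each particle's best-known position
    gbest = min(pbest, key=f)[:]                  # swarm's best-known position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i][:]
                if f(x[i]) < f(gbest):
                    gbest = x[i][:]
    return gbest

# hypothetical control-performance cost over two controller gains (optimum at (0.4, 1.2))
cost = lambda g: (g[0] - 0.4) ** 2 + 2 * (g[1] - 1.2) ** 2
gains = pso(cost, [(0.0, 1.0), (0.0, 3.0)])
```

    In the study, each fitness evaluation corresponds to one dynamic simulation of the 10% step load decrease transient rather than a closed-form cost.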

  14. Review of Automated Design and Optimization of MEMS

    DEFF Research Database (Denmark)

    Achiche, Sofiane; Fan, Zhun; Bolognini, Francesca

    2007-01-01

    In recent years MEMS saw a very rapid development. Although many advances have been reached, due to the multiphysics nature of MEMS, their design is still a difficult task carried on mainly by hand calculation. In order to help to overcome such difficulties, attempts to automate MEMS design were carried out. This paper presents a review of these techniques. The design task of MEMS is usually divided into four main stages: System Level, Device Level, Physical Level and the Process Level. The state of the art of automated MEMS design in each of these levels is investigated.

  15. Complicated problem solution techniques in optimal parameter searching

    International Nuclear Information System (INIS)

    Gergel', V.P.; Grishagin, V.A.; Rogatneva, E.A.; Strongin, R.G.; Vysotskaya, I.N.; Kukhtin, V.V.

    1992-01-01

    An algorithm is presented for a global search for the numerical solution of multidimensional multiextremal multicriteria optimization problems with complicated constraints. Boundedness of changes in the object's characteristics is assumed under restricted changes of its parameters (Lipschitz condition). The algorithm was realized as a computer code. The programme was used to solve in practice different applied optimization problems. 10 refs.; 3 figs

  16. Optimization Design of Multi-Parameters in Rail Launcher System

    Directory of Open Access Journals (Sweden)

    Yujiao Zhang

    2014-05-01

    Full Text Available Today's energy storage systems are still cumbersome, so it is useful to think about optimizing a railgun system in order to achieve the best performance with the lowest energy input. In this paper, an optimal design method considering 5 parameters is proposed to improve the energy conversion efficiency of a simple railgun. In order to avoid costly trials, the field-circuit method is employed to analyze the operation of structurally different railguns with different parameters. The orthogonal test approach is used to guide the simulation, choosing better parameter combinations while reducing the calculation cost. The research shows that the proposed method gives a better result for the energy efficiency of the system. To improve the energy conversion efficiency of electromagnetic rail launchers, more parameters must be considered in the design stage, such as the width, height and length of the rails, the distance between the rail pair, and the pulse-forming inductance. However, the relationship between these parameters and the energy conversion efficiency cannot be directly described by a single mathematical expression, so optimization methods must be applied in the design. In this paper, a rail launcher with five parameters was optimized using the orthogonal test method. According to the arrangement of the orthogonal table, a better parameter combination can be obtained with less calculation. Field and circuit simulation analyses were made for different parameter values. The results show that the energy conversion efficiency of the system is increased by 71.9% after parameter optimization.
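
    The orthogonal-table idea can be sketched with the standard L9(3^4) array: 9 runs cover 4 three-level factors (a full factorial would need 81), and the best level of each factor is read off from per-level mean responses. The additive efficiency model below is a hypothetical stand-in for the field-circuit simulations:

```python
# L9 orthogonal array: every pair of columns contains each level pair exactly once
L9 = [(0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
      (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
      (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0)]

def orthogonal_optimize(response, levels):
    """Pick the best level of each factor by its mean response over the 9 L9 runs."""
    runs = [(run, response([levels[f][l] for f, l in enumerate(run)])) for run in L9]
    best = []
    for f in range(4):
        # mean response of the 3 runs at each level of factor f
        means = [sum(r for run, r in runs if run[f] == l) / 3 for l in range(3)]
        best.append(levels[f][means.index(max(means))])   # maximize the response
    return best

# hypothetical rail-launcher efficiency model over 4 design factors
eff = lambda p: -(p[0] - 2) ** 2 - (p[1] - 1) ** 2 - (p[2] - 3) ** 2 - (p[3] - 2) ** 2
levels = [[1, 2, 3]] * 4
best_levels = orthogonal_optimize(eff, levels)
```

    Because the array is balanced, each other factor takes every level equally often within the 3 runs averaged for a given level, so per-factor effects separate cleanly when interactions are small.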

  17. Optimal Design of Shock Tube Experiments for Parameter Inference

    KAUST Repository

    Bisetti, Fabrizio

    2014-01-06

    We develop a Bayesian framework for the optimal experimental design of the shock tube experiments which are being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation energies in the reaction rate expressions. The control parameters are the initial mixture composition and the temperature. The approach is based on first building a polynomial based surrogate model for the observables relevant to the shock tube experiments. Based on these surrogates, a novel MAP based approach is used to estimate the expected information gain in the proposed experiments, and to select the best experimental set-ups yielding the optimal expected information gains. The validity of the approach is tested using synthetic data generated by sampling the PC surrogate. We finally outline a methodology for validation using actual laboratory experiments, and extending experimental design methodology to the cases where the control parameters are noisy.

  18. Digital simulation of continuous systems with and without parameter optimization

    International Nuclear Information System (INIS)

    Gitt, W.; Herrmann, H.J.

    1977-05-01

    In addition to the simulation of steady systems the simulation system DISIOP (DIgital SImulation with OPtimization) described here, which may still be improved, enables an optimization of, at present, 6 parameters according to a criterion randomly chosen by the user. The examples given show a vast field of possible applications, from simple simulation to optimization and boundary value problems with one boundary value. Some limits of application are: 1) due to the serial working of the digital computer, real-time problem solutions are impossible; 2) in high-frequency runs, the step width number may become critical (long computing time, numerical instabilities). (orig./WB) [de

  19. An optimized routing algorithm for the automated assembly of standard multimode ribbon fibers in a full-mesh optical backplane

    Science.gov (United States)

    Basile, Vito; Guadagno, Gianluca; Ferrario, Maddalena; Fassi, Irene

    2018-03-01

    In this paper a parametric, modular and scalable algorithm allowing a fully automated assembly of a backplane fiber-optic interconnection circuit is presented. This approach guarantees the optimization of the optical fiber routing inside the backplane with respect to specific criteria (i.e. bending power losses), addressing both transmission performance and overall costs issues. Graph theory has been exploited to simplify the complexity of the NxN full-mesh backplane interconnection topology, firstly, into N independent sub-circuits and then, recursively, into a limited number of loops easier to be generated. Afterwards, the proposed algorithm selects a set of geometrical and architectural parameters whose optimization allows to identify the optimal fiber optic routing for each sub-circuit of the backplane. The topological and numerical information provided by the algorithm are then exploited to control a robot which performs the automated assembly of the backplane sub-circuits. The proposed routing algorithm can be extended to any array architecture and number of connections thanks to its modularity and scalability. Finally, the algorithm has been exploited for the automated assembly of an 8x8 optical backplane realized with standard multimode (MM) 12-fiber ribbons.

  20. Routing Optimization of Intelligent Vehicle in Automated Warehouse

    Directory of Open Access Journals (Sweden)

    Yan-cong Zhou

    2014-01-01

    Full Text Available Routing optimization is a key technology in intelligent warehouse logistics. In order to obtain an optimal route for a warehouse intelligent vehicle, routing optimization in a complex global dynamic environment is studied. A new evolutionary ant colony algorithm based on RFID and knowledge refinement is proposed. The new algorithm obtains environmental information in a timely manner through RFID technology and updates the environment map at the same time. It adopts elite-ant retention, fallback, and pheromone-limitation adjustment strategies. The current optimal route in the population space is optimized based on experiential knowledge. The experimental results show that the new algorithm has a higher convergence speed and can easily escape U-type or V-type obstacle traps. It can also find the global optimal route, or an approximately optimal one, with higher probability in a complex dynamic environment. The new algorithm is proved feasible and effective by simulation results.
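
    The RFID map-update and knowledge-refinement steps are specific to the paper, but the underlying ant colony routing loop can be sketched on a toy warehouse graph (the edge weights and all parameters below are assumptions):

```python
import random

def aco_shortest_path(graph, src, dst, ants=10, iters=60, rho=0.3, seed=6):
    """Ant colony sketch: ants walk src->dst; pheromone evaporates and is reinforced on short tours."""
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}     # pheromone per edge
    best, best_len = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            path, node = [src], src
            while node != dst and len(path) < len(graph):
                choices = [v for v in graph[node] if v not in path]
                if not choices:
                    break
                # pick the next node proportional to pheromone / edge cost
                weights = [tau[(node, v)] / graph[node][v] for v in choices]
                node = rng.choices(choices, weights)[0]
                path.append(node)
            if node == dst:
                length = sum(graph[a][b] for a, b in zip(path, path[1:]))
                if length < best_len:
                    best, best_len = path, length
                for a, b in zip(path, path[1:]):             # deposit on walked edges
                    tau[(a, b)] += 1.0 / length
        for e in tau:                                        # evaporation
            tau[e] *= (1 - rho)
    return best, best_len

# hypothetical warehouse graph with edge travel times; shortest A->D is A-B-C-D (cost 3)
g = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 3}, "C": {"D": 1}, "D": {}}
path, cost = aco_shortest_path(g, "A", "D")
```

    The paper's elite-ant retention and pheromone-limitation strategies would replace the plain deposit/evaporation steps above.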

  1. Statistical optimization of process parameters for the production of ...

    African Journals Online (AJOL)

    In this study, optimization of process parameters such as moisture content, incubation temperature and initial pH (fixed) for the improvement of citric acid production from oil palm empty fruit bunches through solid state bioconversion was carried out using traditional one-factor-at-a-time (OFAT) method and response surface ...

  2. Optimization of process parameters for synthesis of silica–Ni ...

    Indian Academy of Sciences (India)

    Optimization of process parameters for synthesis of silica–Ni nanocomposite by design of experiment ... Sol–gel; Ni; design of experiments; nanocomposites. ... Jadavpur University, Kolkata 700 032, India; School of Material Science and Nano-Technology, Jadavpur University, Kolkata 700 032, India; Rustech Products Pvt.

  3. Optimization of physico-chemical and nutritional parameters for ...

    African Journals Online (AJOL)

    Optimization of physico-chemical and nutritional parameters for pullulan production by a mutant of thermotolerant Aureobasidium pullulans, in fed batch ... minutes, having killing rate of 70% level, produced 6 g l-1 higher pullulan as compared to the wild type without losing thermotolerant and non-melanin producing ability.

  4. Optimization of machining parameters of hard porcelain on a CNC ...

    African Journals Online (AJOL)

    In order to build up a relationship between quality and productivity, the present work focuses an optimized approach to establishing the multi-objective machining parameters and mathematical models for Pressure and Voltage on CNC turning machine (SINUMERIK802D). The Pressure and Voltage seem to be known as ...

  5. Optimization of CNC end milling process parameters using PCA ...

    African Journals Online (AJOL)

    Optimization of CNC end milling process parameters using PCA-based Taguchi method. ... International Journal of Engineering, Science and Technology ... To meet the basic assumption of Taguchi method; in the present work, individual response correlations have been eliminated first by means of Principal Component ...

  6. Multi responses optimization of wire EDM process parameters using ...

    African Journals Online (AJOL)

    The wire EDM was known as for its better efficiency to machining hardest material and give precise and accurate result comparing to other machining process. The intent of this experimental paper is to optimize the machining parameters of Wire Electrical Discharge Machining (WEDM) on En45A Alloy Steel with the ...

  7. Optimization of burnishing parameters and determination of select ...

    Indian Academy of Sciences (India)

    Optimization of burnishing parameters and determination of select surface characteristics in engineering materials. P RAVINDRA BABU1, K ANKAMMA2, T SIVA PRASAD3,. A V S RAJU4 and N ESWARA PRASAD5,∗. 1Mechanical Engineering Department, Gudlavalleru Engineering College,. Gudlavalleru 521 356, India.

  8. Optimization of physical and biological parameters for transient ...

    African Journals Online (AJOL)

    2009-08-18

    Majid and Parveez (2007) optimized different physical and biological parameters for transient expression of GUS and GFP reporter genes in oil palm through particle bombardment. Similar experiments were also conducted on selectable markers and reporter gene expressions in banana by Sreeramanan ...

  9. The Determination of Optimal Parameters of Fuzzy PI Sugeno Controller

    Science.gov (United States)

    Kudinov, Y. I.; Kudinov, I. Yu; Volkova, A. A.; Durgarjan, I. S.; Pashchenko, F. F.

    2017-11-01

    The procedure for determining, by means of Matlab and Simulink, the optimal parameters of the fuzzy PI Sugeno controller is described, such that selected indicators of the quality of the transition process in a closed-loop control system with this controller satisfy the specified conditions.

  10. Optimization of process parameter for synthesis of silicon quantum ...

    Indian Academy of Sciences (India)

    Bulletin of Materials Science, Volume 36, Issue 3. Optimization of process parameter for synthesis of silicon quantum dots using low pressure chemical vapour deposition. Dipika Barbadikar, Rashmi Gautam, Sanjay Sahare, Rajendra Patrikar, Jatin Bhatt. Volume 36, Issue 3, June 2013, pp 483-490 ...

  11. Algorithms of control parameters selection for automation of FDM 3D printing process

    Directory of Open Access Journals (Sweden)

    Kogut Paweł

    2017-01-01

    Full Text Available The paper presents algorithms for the selection of control parameters in the Fused Deposition Modelling (FDM) technology, in the setting of an open printing environment and the 3DGence ONE printer. The following parameters were distinguished: model mesh density, material flow speed, cooling performance, retraction and printing speeds. These parameters are in principle independent of the printing system, but in practice depend to a certain degree on the features of the selected printing equipment. This is the first step towards automation of the 3D printing process in FDM technology.

  12. Automated sizing of large structures by mixed optimization methods

    Science.gov (United States)

    Sobieszczanski, J.; Loendorf, D.

    1973-01-01

    A procedure for automating the sizing of wing-fuselage airframes was developed and implemented in the form of an operational program. The program combines fully stressed design to determine an overall material distribution with mass-strength and mathematical programming methods to design structural details accounting for realistic design constraints. The practicality and efficiency of the procedure is demonstrated for transport aircraft configurations. The methodology is sufficiently general to be applicable to other large and complex structures.

  13. METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS

    Directory of Open Access Journals (Sweden)

    V. Panteleev Andrei

    2017-01-01

    Full Text Available The article considers the use of metaheuristic methods of constrained global optimization: “Big Bang - Big Crunch”, “Fireworks Algorithm” and “Grenade Explosion Method” in estimating the parameters of dynamic systems described by algebraic-differential equations. Parameter estimation is based on observations of the mathematical model's behaviour. The parameter values are derived by minimizing a criterion describing the total squared error between the state vector coordinates and the values observed at different moments in time. Parallelepiped-type restrictions are imposed on the parameter values. The metaheuristic methods of constrained global optimization used for solving such problems do not guarantee the result, but allow a solution of rather good quality to be obtained in an acceptable amount of time. The algorithm for applying the metaheuristic methods is given. Alongside the obvious methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving ordinary differential equation systems. Two examples of the parameter estimation problem are given, differing in their mathematical models. In the first example, a linear mathematical model describes the change of chemical reaction parameters, and in the second a nonlinear mathematical model describes predator-prey dynamics, characterizing the changes in both populations. For each of the examples, calculation results from all three optimization methods are presented, along with recommendations on how to choose the methods' parameters. The obtained numerical results have demonstrated the efficiency of the proposed approach. The deduced approximate parameter values differ only slightly from the best known solutions, which were obtained differently. To refine the results one should apply hybrid schemes that combine classical methods of optimization of zero, first and second orders and
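    The estimation scheme described above can be sketched in miniature. The following is a hypothetical illustration, not the paper's code: a two-parameter Lotka-Volterra predator-prey model (the other rate constants are fixed; all values are invented) is fitted to synthetic observations by minimizing the total squared error with a minimal "Big Bang - Big Crunch" search.

```python
import random

def simulate(alpha, beta, steps=200, dt=0.05):
    """Euler integration of a Lotka-Volterra predator-prey model
    (the remaining two rate constants are fixed for illustration)."""
    x, y = 10.0, 5.0                     # prey, predator populations
    traj = []
    for _ in range(steps):
        dx = alpha * x - 0.1 * x * y
        dy = 0.075 * x * y - beta * y
        x, y = x + dt * dx, y + dt * dy
        if abs(x) > 1e6 or abs(y) > 1e6:  # guard against divergent candidates
            return traj + [(1e6, 1e6)] * (steps - len(traj))
        traj.append((x, y))
    return traj

def sse(params, observed):
    """Total squared error of the state coordinates vs. the observations."""
    traj = simulate(*params)
    return sum((xm - xo) ** 2 + (ym - yo) ** 2
               for (xm, ym), (xo, yo) in zip(traj, observed))

def big_bang_big_crunch(observed, bounds, n=40, iters=60, seed=1):
    """Big Bang - Big Crunch: scatter candidates ("Big Bang"), contract to
    the error-weighted centre of mass ("Big Crunch"), shrink the scatter."""
    rng = random.Random(seed)
    centre = [(lo + hi) / 2 for lo, hi in bounds]
    best = (sse(centre, observed), centre[:])
    for k in range(1, iters + 1):
        pop = [best]                      # keep the incumbent in the population
        for _ in range(n):
            cand = [min(max(c + (hi - lo) * rng.gauss(0, 1) / k, lo), hi)
                    for c, (lo, hi) in zip(centre, bounds)]
            pop.append((sse(cand, observed), cand))
        best = min(pop, key=lambda p: p[0])
        w = [1.0 / (f + 1e-12) for f, _ in pop]   # inverse-error weights
        centre = [sum(wi * c[i] for wi, (_, c) in zip(w, pop)) / sum(w)
                  for i in range(len(bounds))]
    return best[1]

observed = simulate(1.0, 1.5)             # synthetic "observations"
alpha_hat, beta_hat = big_bang_big_crunch(observed, [(0.5, 2.0), (0.5, 2.0)])
print(alpha_hat, beta_hat)                # estimates of (alpha, beta)
```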

  14. A split-optimization approach for obtaining multiple solutions in single-objective process parameter optimization.

    Science.gov (United States)

    Rajora, Manik; Zou, Pan; Yang, Yao Guang; Fan, Zhi Wen; Chen, Hung Yi; Wu, Wen Chieh; Li, Beizhi; Liang, Steven Y

    2016-01-01

    It can be observed from the experimental data of different processes that different process parameter combinations can lead to the same performance indicators, but during the optimization of process parameters, using current techniques, only one of these combinations can be found when a given objective function is specified. The combination of process parameters obtained after optimization may not always be applicable in actual production or may lead to undesired experimental conditions. In this paper, a split-optimization approach is proposed for obtaining multiple solutions in a single-objective process parameter optimization problem. This is accomplished by splitting the original search space into smaller sub-search spaces and using a genetic algorithm (GA) in each sub-search space to optimize the process parameters. Two different methods, i.e., cluster centers and hill and valley splitting strategy, were used to split the original search space, and their efficiency was measured against a method in which the original search space is split into equal smaller sub-search spaces. The proposed approach was used to obtain multiple optimal process parameter combinations for electrochemical micro-machining. The result obtained from the case study showed that the cluster centers and hill and valley splitting strategies were more efficient in splitting the original search space than the method in which the original search space is divided into smaller equal sub-search spaces.
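    A minimal sketch of the split-optimization idea, using the equal-splitting baseline described above (the cluster-centre and hill-and-valley variants are omitted). The objective, bounds and GA settings are invented for illustration: the toy objective has two distinct parameter settings with the same optimal value, and splitting the search space lets the GA recover both.

```python
import random

def objective(x):
    """Toy performance indicator: two different parameter settings
    (x = -2 and x = +2) yield the same optimal value."""
    return (x * x - 4.0) ** 2

def ga_minimize(f, lo, hi, pop_size=30, gens=60, seed=0):
    """Tiny real-coded GA (tournament selection, blend crossover,
    Gaussian mutation) restricted to the sub-search space [lo, hi]."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = min(pop, key=f)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a = min(rng.sample(pop, 3), key=f)       # tournament selection
            b = min(rng.sample(pop, 3), key=f)
            child = a + rng.random() * (b - a)       # blend crossover
            child += rng.gauss(0, 0.05 * (hi - lo))  # Gaussian mutation
            nxt.append(min(max(child, lo), hi))
        pop = nxt
        gen_best = min(pop, key=f)
        if f(gen_best) < f(best):
            best = gen_best
    return best

def split_optimize(f, lo, hi, n_splits):
    """Split the original search space into equal sub-search spaces and
    run the GA independently in each, collecting one solution per space."""
    width = (hi - lo) / n_splits
    return [ga_minimize(f, lo + i * width, lo + (i + 1) * width, seed=i)
            for i in range(n_splits)]

solutions = split_optimize(objective, -5.0, 5.0, 4)
print([round(x, 3) for x in solutions])  # one candidate per sub-search space
```

The sub-spaces containing -2 and +2 each contribute a near-optimal solution, whereas a single global run would have returned only one of them.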

  15. Optimization of an automated FI-FT-IR procedure for the determination of o-xylene, toluene and ethyl benzene in n-hexane

    OpenAIRE

    Wells, Ian; Worsfold, Paul J.

    1999-01-01

    The development and optimization of an automated flow injection (FI) manifold coupled with a Fourier transform infrared (FT-IR) detector for the determination of toluene, ethyl benzene and o-xylene in an n-hexane matrix is described. FT-IR parameters optimized were resolution and number of co-added scans; FI parameters optimized were type of pump tubing, carrier flow rate and sample volume. ATR and transmission flow cells were compared for the determination of o-xylene; the ATR cell was easi...

  16. The solution of private problems for optimization heat exchangers parameters

    Science.gov (United States)

    Melekhin, A.

    2017-11-01

    The relevance of the topic stems from the problem of saving resources in the heating systems of buildings. To solve this problem we have developed an integrated research method for optimizing heat exchanger parameters. The method solves a multicriteria optimization problem using nonlinear-optimization software, with input in the form of a temperature array obtained by thermography. The author has developed a mathematical model of the heat exchange process on the heat exchange surfaces of the apparatus, solved the multicriteria optimization problem and checked its adequacy against an experimental stand with visualization of thermal fields; determined an optimal range of controlled parameters influencing the heat exchange process with minimal metal consumption and maximum heat output of the finned heat exchanger; derived regularities of the heat exchange process, with generalizing dependencies for the temperature distribution over the heat-release surface of the heat exchangers; and established the convergence of the research results calculated on the basis of the theoretical dependencies with those of the mathematical model.

  17. Optimizing centrifugation of coagulation samples in laboratory automation.

    Science.gov (United States)

    Suchsland, Juliane; Friedrich, Nele; Grotevendt, Anne; Kallner, Anders; Lüdemann, Jan; Nauck, Matthias; Petersmann, Astrid

    2014-08-01

    High acceleration centrifugation conditions are used in laboratory automation systems to reduce the turnaround time (TAT) of clinical chemistry samples, but not of coagulation samples. This often requires separate sample flows. The CLSI guideline and manufacturers' recommendations for coagulation assays aim at reducing platelet counts. For measurement of prothrombin time (PT) and activated partial thromboplastin time (APTT), platelet counts (Plt) below 200×10⁹/L are recommended. Other coagulation assays may require even lower platelet counts, e.g., less than 10×10⁹/L. Unifying centrifugation conditions can facilitate the integration of coagulation samples in the overall workflow of a laboratory automation system. We evaluated centrifugation conditions of coagulation samples by using high acceleration centrifugation conditions (5 min; 3280×g) in a single and two consecutive runs. Results of coagulation assays [PT, APTT, coagulation factor VIII (F. VIII) and protein S] and platelet counts were compared after the first and second centrifugation. Platelet counts below 200×10⁹/L were obtained in all samples after the first centrifugation, and counts below 10×10⁹/L were obtained in 73% of the samples after a second centrifugation. Passing-Bablok regression analyses showed equal performance of PT, APTT and F. VIII after the first and second centrifugation, whereas protein S measurements require a second centrifugation. Coagulation samples can be integrated into the workflow of a laboratory automation system using high acceleration centrifugation. A single centrifugation was sufficient for PT, APTT and F. VIII, whereas two successive centrifugations appear to be sufficient for protein S activity.

  18. An Automated Tool for Optimizing Waste Transportation Routing and Scheduling

    International Nuclear Information System (INIS)

    Berry, L.E.; Branch, R.D.; White, H.A.; Whitehead, H. D. Jr.; Becker, B.D.

    2006-01-01

    An automated software tool has been developed and implemented to increase the efficiency and overall life-cycle productivity of site cleanup by scheduling vehicle and container movement between waste generators and disposal sites on the Department of Energy's Oak Ridge Reservation. The software tool identifies the best routes or accepts specifically requested routes and transit times, looks at fleet availability, selects the most cost effective route for each waste stream, and creates a transportation schedule in advance of waste movement. This tool was accepted by the customer and has been implemented. (authors)

  19. Sequential ensemble-based optimal design for parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
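    The EnKF parameter-update step at the core of the method can be sketched as follows. The optimal-design layer (choosing sampling locations via information metrics such as SD, DFS or RE) is omitted, and the linear observation model, noise level and all numbers are invented for illustration.

```python
import random

def enkf_parameter_update(ensemble, forward, y_obs, obs_var, rng):
    """One perturbed-observation ensemble Kalman update of a scalar
    parameter: gain = cov(parameter, prediction) / (var(prediction) + R)."""
    n = len(ensemble)
    preds = [forward(a) for a in ensemble]
    am = sum(ensemble) / n
    pm = sum(preds) / n
    cov_ap = sum((a - am) * (p - pm) for a, p in zip(ensemble, preds)) / (n - 1)
    var_p = sum((p - pm) ** 2 for p in preds) / (n - 1)
    gain = cov_ap / (var_p + obs_var)
    return [a + gain * (y_obs + rng.gauss(0, obs_var ** 0.5) - p)
            for a, p in zip(ensemble, preds)]

rng = random.Random(42)
true_a, obs_var = 2.0, 0.01
ensemble = [rng.uniform(0.0, 4.0) for _ in range(100)]   # prior ensemble

# sequentially assimilate observations collected at sampling locations x,
# with a toy linear forward model y = a * x
for x in (1.0, 2.0, 3.0, 4.0, 5.0):
    y = true_a * x + rng.gauss(0, obs_var ** 0.5)
    ensemble = enkf_parameter_update(ensemble, lambda a, x=x: a * x,
                                     y, obs_var, rng)

a_mean = sum(ensemble) / len(ensemble)
print(a_mean)   # posterior mean, close to the true parameter 2.0
```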

  20. Automated estimator parameter selection for an IBM head/disk assembly.

    Science.gov (United States)

    Thein, May-Win L; Rendon, Thomas; Misawa, Eduardo A

    2005-07-01

    This paper presents the application of a discrete adaptive observer (DAO) to an IBM head/disk assembly system. Because of the difficulties in tuning, a genetic algorithm is implemented off-line to obtain optimal observer parameters for the DAO. Simulations show that the genetic algorithm is successful in choosing appropriate observer gains. Furthermore, as a result of these optimal gains, the observer state and parameter estimates converge accurately and quickly.

  1. Multi Objective Optimization of Flux Cored Arc Weld Parameters Using Hybrid Grey - Fuzzy Technique

    Directory of Open Access Journals (Sweden)

    M Satheesh

    2014-06-01

    Full Text Available In the present work, an attempt has been made to use the grey-based fuzzy logic method to solve correlated multiple-response optimization problems in the field of flux cored arc welding. This approach converts the complex multiple objectives into a single grey-fuzzy reasoning grade, on the basis of which the optimum parameters are identified. The significant contributions of the parameters are estimated using analysis of variance (ANOVA). This evaluation procedure can be used in intelligent decision making by a welding operator. The proposed method has good accuracy and competence, and provides manufacturers who develop intelligent manufacturing systems with a method to facilitate the achievement of the highest level of automation.

  2. An Automated Analysis-Synthesis Package for Design Optimization ...

    African Journals Online (AJOL)

    90 standards is developed for the design optimization of framed structures - continuous beams, plane and space trusses and rigid frames, grids and composite truss-rigid frames. The package will enable the structural engineer to effectively and ...

  3. Optimization of reserve lithium thionyl chloride battery electrochemical design parameters

    Energy Technology Data Exchange (ETDEWEB)

    Doddapaneni, N.; Godshall, N.A.

    1987-01-01

    The performance of Reserve Lithium Thionyl Chloride (RLTC) batteries was optimized by conducting a parametric study of seven electrochemical parameters: electrode compression, carbon thickness, presence of catalyst, temperature, electrode limitation, discharge rate, and electrolyte acidity. Increasing electrode compression (from 0 to 15%) improved battery performance significantly (10% greater carbon capacity density). Although thinner carbon cathodes yielded less absolute capacity than did thicker cathodes, they did so with considerably higher volume efficiencies. The effect of these parameters, and their synergistic interactions, on electrochemical cell performance is illustrated. 5 refs., 9 figs., 3 tabs.

  4. Optimization of reserve lithium thionyl chloride battery electrochemical design parameters

    Science.gov (United States)

    Doddapaneni, N.; Godshall, N. A.

    The performance of Reserve Lithium Thionyl Chloride (RLTC) batteries was optimized by conducting a parametric study of seven electrochemical parameters: electrode compression, carbon thickness, presence of catalyst, temperature, electrode limitation, discharge rate, and electrolyte acidity. Increasing electrode compression (from 0 to 15 percent) improved battery performance significantly (10 percent greater carbon capacity density). Although thinner carbon cathodes yielded less absolute capacity than did thicker cathodes, they did so with considerably higher volume efficiencies. The effect of these parameters, and their synergistic interactions, on electrochemical cell performance is illustrated.

  5. Bacterial growth on surfaces: Automated image analysis for quantification of growth rate-related parameters

    DEFF Research Database (Denmark)

    Møller, S.; Sternberg, Claus; Poulsen, L. K.

    1995-01-01

    species-specific hybridizations with fluorescence-labelled ribosomal probes to estimate the single-cell concentration of RNA. By automated analysis of digitized images of stained cells, we determined four independent growth rate-related parameters: cellular RNA and DNA contents, cell volume..., and the frequency of dividing cells in a cell population. These parameters were used to compare the physiological states of liquid-suspended and surface-growing Pseudomonas putida KT2442 in chemostat cultures. The major finding is that the correlation between substrate availability and cellular growth rate found...

  6. Trafficability Analysis at Traffic Crossing and Parameters Optimization Based on Particle Swarm Optimization Method

    Directory of Open Access Journals (Sweden)

    Bin He

    2014-01-01

    Full Text Available In city traffic, it is important to improve transportation efficiency, and the spacing of the platoon should be shortened when crossing the street. The best way to achieve this is automatic control of vehicles. In this paper, a mathematical model is established for the platoon's longitudinal movement, and a systematic analysis of the longitudinal control law is presented for the platoon of vehicles. However, parameter calibration for the platoon model is relatively difficult because the model is complex and the parameters are coupled with each other. In this paper, the particle swarm optimization method is introduced to effectively optimize the parameters of the platoon. The proposed method finds the optimal parameters based on simulations and makes the platoon spacing shorter.
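    A minimal particle swarm optimization sketch in the spirit of the approach above. The platoon model itself is omitted; the cost function is an invented surrogate with an assumed ideal gain pair (1.2, 0.6), standing in for the simulation-based spacing objective.

```python
import random

def spacing_cost(k):
    """Hypothetical surrogate for platoon performance: penalizes deviation
    of two control gains from an (assumed) ideal setting (1.2, 0.6)."""
    kp, kd = k
    return (kp - 1.2) ** 2 + (kd - 0.6) ** 2 + 0.3 * abs(kp * kd - 0.72)

def pso(f, bounds, n=25, iters=100, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Global-best PSO: each particle is pulled toward its personal best
    and the swarm best, with inertia weight w."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

gains = pso(spacing_cost, [(0.0, 3.0), (0.0, 2.0)])
print(gains)   # converges toward the assumed ideal gains (1.2, 0.6)
```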

  7. Classical algorithms for automated parameter-search methods in compartmental neural models - A critical survey based on simulations using neuron

    International Nuclear Information System (INIS)

    Mutihac, R.; Mutihac, R.C.; Cicuttin, A.

    2001-09-01

    Parameter-search methods are problem-sensitive. All methods depend on some meta-parameters of their own, which must be determined experimentally in advance. A better choice of these intrinsic parameters for a given parameter-search method may improve its performance. Moreover, there are various implementations of the same method, which may also affect its performance. The choice of the matching (error) function has a great impact on the search process, both in finding the optimal parameter set and in minimizing the computational cost. An initial assessment of the matching function's ability to distinguish between good and bad models is recommended before launching exhaustive computations. However, different runs of a parameter-search method may result in the same optimal parameter set or in different parameter sets (the model is insufficiently constrained to accurately characterize the real system). Robustness of the parameter set is expressed by the extent to which small perturbations of the parameter values do not affect the best solution. A parameter set that is not robust is unlikely to be physiologically relevant. Robustness can also be defined as the stability of the optimal parameter set under small variations of the inputs. When trying to estimate quantities such as the minimum, or the least-squares optimal parameters of a nonlinear system, the existence of multiple local minima can cause problems in determining the global optimum. Techniques such as Newton's method, the Simplex method and the least-squares linear Taylor differential correction technique can be useful, provided that one is lucky enough to start sufficiently close to the global minimum. All these methods suffer from the inability to distinguish a local minimum from a global one, because they follow the local gradients towards the minimum, even if some of them reset the search direction when the search appears to be stuck in a presumably local minimum. Deterministic methods based on
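    The failure mode described above, and the common multi-start remedy, can be illustrated on a one-dimensional multimodal function. The function, starting points and search scheme below are invented for illustration.

```python
def f(x):
    """Multimodal test function: local minimum near x = 1.13,
    global minimum near x = -1.30."""
    return x ** 4 - 3 * x ** 2 + x

def local_search(f, x, step=0.5, tol=1e-6):
    """Greedy descent: move in whichever direction improves, halving the
    step when neither does -- it stops at the nearest local minimum."""
    while step > tol:
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
        else:
            step /= 2
    return x

def multi_start(f, starts):
    """Restart the local search from several points and keep the best,
    guarding against convergence to a non-global local minimum."""
    return min((local_search(f, x0) for x0 in starts), key=f)

stuck = local_search(f, 2.0)                         # lands in the local minimum
best = multi_start(f, [-2.0, -1.0, 0.0, 1.0, 2.0])   # recovers the global one
print(round(stuck, 3), round(best, 3))
```

Started from x = 2.0, the gradient-following search ends at the local minimum near 1.13; restarting from several points finds the global minimum near -1.30.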

  8. Optimization of process parameters for friction stir processing (FSP ...

    Indian Academy of Sciences (India)

    An Al-5 wt% TiC composite was processed in situ using K2TiF6 and graphite in Al melt and subjected to FSP. Processing parameters for FSP were optimized to get a defect free stir zone and homogenize the particle distribution. It was found that a rotation speed > 800 rpm is needed. A rotation speed of 1000 rpm and a ...

  9. Optimization of process parameters for friction stir processing (FSP ...

    Indian Academy of Sciences (India)

    An Al-5 wt% TiC composite was processed in situ using K2TiF6 and graphite in Al melt and subjected to FSP. Processing ...

  10. PARAMETER ESTIMATION OF VALVE STICTION USING ANT COLONY OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    S. Kalaivani

    2012-07-01

    Full Text Available In this paper, a procedure for quantifying valve stiction in control loops based on ant colony optimization is proposed. Pneumatic control valves are widely used in the process industry. A control valve contains non-linearities such as stiction, backlash and deadband that in turn cause oscillations in the process output. Stiction is one of the long-standing problems and the most severe problem in control valves. Thus the measurement data from an oscillating control loop can be used as a diagnostic signal to provide an estimate of the stiction magnitude. Quantification of control valve stiction is still a challenging issue. Before performing stiction detection and quantification, it is necessary to choose a suitable model structure to describe control-valve stiction. To understand the stiction phenomenon, the Stenman model is used. Ant Colony Optimization (ACO), an intelligent swarm algorithm, has proved effective in various fields. The ACO algorithm is inspired by the natural trail-following behaviour of ants. The parameters of the Stenman model are estimated using ant colony optimization from the input-output data, by minimizing the error between the actual stiction model output and the simulated stiction model output. Using ant colony optimization, a Stenman model with known nonlinear structure and unknown parameters can be estimated.
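    A sketch of the estimation idea, assuming the one-parameter form of the Stenman model (the valve position moves only when the controller output escapes a stick band of width d) and an ACO_R-style continuous-domain ant colony optimizer. The loop data and all settings are invented; this is not the paper's implementation.

```python
import math, random

def stenman(u, d):
    """One-parameter Stenman stiction model: the valve position x jumps to
    the controller output only when it escapes the stick band of width d."""
    x, out = 0.0, []
    for ut in u:
        if abs(ut - x) > d:
            x = ut
        out.append(x)
    return out

def aco_r(f, lo, hi, archive_size=10, ants=10, iters=40, q=0.2, xi=0.85, seed=7):
    """Continuous-domain ACO (ACO_R-style): keep an archive of good solutions
    and sample new ones from Gaussians centred on rank-weighted members."""
    rng = random.Random(seed)
    init = [rng.uniform(lo, hi) for _ in range(archive_size)]
    archive = sorted((f(x), x) for x in init)
    k = archive_size
    weights = [math.exp(-(r ** 2) / (2 * (q * k) ** 2)) for r in range(k)]
    for _ in range(iters):
        new = []
        for _ in range(ants):
            j = rng.choices(range(k), weights=weights)[0]
            xj = archive[j][1]
            sigma = xi * sum(abs(x - xj) for _, x in archive) / (k - 1)
            cand = min(max(rng.gauss(xj, sigma + 1e-12), lo), hi)
            new.append((f(cand), cand))
        archive = sorted(archive + new)[:k]   # keep the k best solutions
    return archive[0][1]

u = [math.sin(0.05 * t) for t in range(400)]   # synthetic oscillating loop input
observed = stenman(u, 0.4)                     # "measured" output, true d = 0.4

def err(d):
    """Squared error between simulated and observed valve outputs."""
    return sum((a - b) ** 2 for a, b in zip(stenman(u, d), observed))

d_hat = aco_r(err, 0.0, 1.0)
print(d_hat)   # estimate of the stick band width
```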

  11. Analysis of optimization parameters in chest radiographs procedures

    International Nuclear Information System (INIS)

    Silva, Davi A.; Oliveira, Karinne M.; Alves, Douglas R.M.; Maia, Ana F.

    2009-01-01

    The risks associated with ionizing radiation became evident soon after the discovery of X radiation. Therefore, any medical practice that makes use of any type of ionizing radiation should be subjected to the basic principles of radiological protection: justification, optimization of protection and application of dose limits. In diagnostic radiology, this means seeking the lowest dose reasonably practicable without compromising image quality. The purpose of this project was to evaluate optimization parameters, specifically image quality, exposure levels and radiograph rejection rates, in radiological chest examinations. The image quality evaluation was performed using two forms, one for adults and one for children, based on European standards. From the results, we can conclude that the evaluated sector is not in agreement with the principle of optimization, and this reality is not different from most health institutions. The entrance surface air kerma (Ka,e) results were below the national diagnostic reference levels. However, several image quality parameters showed insufficient ratings and the film rejection rates were high. The lack of optimization generates poor quality images, causing inaccurate diagnostic reports and increasing operating costs. Therefore, the research warns of the urgency of implementing a Quality Control Assurance Program in all radiology services in the country. (author)

  12. Optimal Machining Parameters for Achieving the Desired Surface Roughness in Turning of Steel

    Directory of Open Access Journals (Sweden)

    LB Abhang

    2012-06-01

    Full Text Available Due to the widespread use of highly automated machine tools in the metal cutting industry, manufacturing requires highly reliable models and methods for the prediction of output performance in the machining process. The prediction of optimal manufacturing conditions for good surface finish and dimensional accuracy plays a very important role in process planning. In the steel turning process, the tool geometry and cutting conditions determine the time and cost of production, which ultimately affect the quality of the final product. In the present work, experimental investigations have been conducted to determine the effect of the tool geometry (effective tool nose radius) and metal cutting conditions (cutting speed, feed rate and depth of cut) on surface finish during the turning of EN-31 steel. First- and second-order mathematical models are developed in terms of the machining parameters by using the response surface methodology on the basis of the experimental results. The surface roughness prediction model has been optimized to obtain the surface roughness values by using LINGO solver programs. LINGO is a mathematical modeling language used in linear and nonlinear optimization to formulate large problems concisely, solve them, and analyze the solution in the engineering sciences, operations research, etc. The LINGO solver program is global optimization software. It gives minimum values of surface roughness and their respective optimal conditions.
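    The fit-then-minimize step of response surface methodology can be sketched for a single hypothetical factor. The paper fits multi-factor models and minimizes them with LINGO; here the data, coefficients and single-factor model are all invented for illustration.

```python
def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for 3x3 systems."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                fac = M[r][i] / M[i][i]
                M[r] = [a - fac * c for a, c in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_quadratic(v, ra):
    """Least-squares fit of the second-order model Ra = b0 + b1*v + b2*v^2
    via the normal equations (X'X) b = X'y."""
    X = [[1.0, vi, vi * vi] for vi in v]
    A = [[sum(x[i] * x[j] for x in X) for j in range(3)] for i in range(3)]
    b = [sum(x[i] * r for x, r in zip(X, ra)) for i in range(3)]
    return solve3(A, b)

# hypothetical roughness readings at several cutting speeds (all data invented)
speeds = [40.0, 60.0, 80.0, 100.0, 120.0, 140.0, 160.0]
roughness = [2.0 - 0.02 * v + 0.0001 * v * v for v in speeds]

b0, b1, b2 = fit_quadratic(speeds, roughness)
v_opt = -b1 / (2 * b2)      # vertex of the fitted parabola (b2 > 0: a minimum)
print(round(v_opt, 3))      # ≈ 100: the speed minimizing predicted roughness
```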

  13. FindFoci: a focus detection algorithm with automated parameter training that closely matches human assignments, reduces human inconsistencies and increases speed of analysis.

    Directory of Open Access Journals (Sweden)

    Alex D Herbert

    Full Text Available Accurate and reproducible quantification of the accumulation of proteins into foci in cells is essential for data interpretation and for biological inferences. To improve reproducibility, much emphasis has been placed on the preparation of samples, but less attention has been given to reporting and standardizing the quantification of foci. The current standard for quantitating foci in open-source software is to manually determine a range of parameters based on the outcome of one or a few representative images and then apply the parameter combination to the analysis of a larger dataset. Here, we demonstrate the power and utility of using machine learning to train a new algorithm (FindFoci) to determine optimal parameters. FindFoci closely matches human assignments and allows rapid automated exploration of parameter space. Thus, individuals can train the algorithm to mirror their own assignments and then automate focus counting using the same parameters across a large number of images. Using the training algorithm to match human assignments of foci, we demonstrate that applying an optimal parameter combination from a single image is not broadly applicable to the analysis of other images scored by the same experimenter or by other experimenters. Our analysis thus reveals wide variation in human assignment of foci and their quantification. To overcome this, we developed training on multiple images, which reduces the inconsistency of using a single image or a few images to set parameters for focus detection. FindFoci is provided as an open-source plugin for ImageJ.

  14. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology

    Directory of Open Access Journals (Sweden)

    Rupert Faltermeier

    2015-01-01

    Full Text Available Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to tune this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients as represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and for a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters improves the sensitivity of the method by a factor greater than four in comparison to our first analyses.

  15. A Combined Method in Parameters Optimization of Hydrocyclone

    Directory of Open Access Journals (Sweden)

    Jing-an Feng

    2016-01-01

    Full Text Available To achieve efficient separation of calcium hydroxide and impurities in carbide slag using a hydrocyclone, the physical granularity of the carbide slag and the hydrocyclone operating parameters, slurry concentration and slurry inlet velocity, were selected for optimization. The optimization combines the Design of Experiments (DOE) method with the Computational Fluid Dynamics (CFD) method. Based on Design Expert software, a central composite design (CCD) with three factors and five levels, amounting to five groups of 20 test responses, was constructed, and the experiments were performed with the numerical simulation software FLUENT. Through analysis of variance of the numerical simulation results, regression equations were obtained for pressure drop, overflow concentration, purity, and the separation efficiencies of the two solid phases, and the influence of each factor on the responses was analysed. Finally, optimized results were obtained with the multiobjective optimization method in the Design Expert software. Under the optimized conditions, a validation test by numerical simulation and a separation experiment were carried out separately. The results proved that the combined method can be used efficiently in studying the hydrocyclone and performs well in engineering applications.
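    A central composite design of the kind mentioned above (three factors, five levels, 20 runs) can be generated in a few lines; taking six centre points is an assumption consistent with the 20-run count, not a detail stated in the abstract.

```python
from itertools import product

def central_composite_design(k, alpha=None, n_center=6):
    """Rotatable central composite design in coded units: 2^k factorial
    corners, 2k axial (star) points at +/-alpha, and n_center centre points."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25          # rotatability criterion
    corners = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    centers = [[0.0] * k for _ in range(n_center)]
    return corners + axial + centers

design = central_composite_design(3)
print(len(design))   # 20 runs: 8 corners + 6 axial + 6 centre points
```

Each factor takes five coded levels (-alpha, -1, 0, +1, +alpha with alpha ≈ 1.682 for three factors), matching the "three factors and five levels" design above.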

  16. Optimizing the response to surveillance alerts in automated surveillance systems.

    Science.gov (United States)

    Izadi, Masoumeh; Buckeridge, David L

    2011-02-28

    Although much research effort has been directed toward refining algorithms for disease outbreak alerting, considerably less attention has been given to the response to alerts generated from statistical detection algorithms. Given the inherent inaccuracy in alerting, it is imperative to develop methods that help public health personnel identify optimal policies in response to alerts. This study evaluates the application of dynamic decision making models to the problem of responding to outbreak detection methods, using anthrax surveillance as an example. Adaptive optimization through approximate dynamic programming is used to generate a policy for decision making following outbreak detection. We investigate theoretically the degree of noise the model can tolerate while keeping near-optimal behavior. We also evaluate the policy from our model empirically and compare it with current approaches in routine public health practice for investigating alerts. Timeliness of outbreak confirmation and the total costs associated with the decisions made are used as performance measures. Using our approach, on average, 80 per cent of outbreaks were confirmed prior to the fifth day post-attack, with considerably less cost compared to response strategies currently in use. Experimental results are also provided to illustrate the robustness of the adaptive optimization approach and to show the realization of the derived error bounds in practice. Copyright © 2011 John Wiley & Sons, Ltd.
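
    The decision model can be illustrated with plain value iteration on a tiny Markov decision process; the states, costs, and transition probabilities below are invented for illustration only, and the study itself uses approximate dynamic programming to cope with realistically large state spaces:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-10):
    """Solve a finite MDP: P[s, a, s'] transition probabilities, R[s, a] rewards.
    Returns the optimal value function and a greedy policy."""
    V = np.zeros(R.shape[0])
    while True:
        Q = R + gamma * (P @ V)          # Q[s, a] = R[s, a] + gamma * E[V(s')]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Toy alert-response MDP (hypothetical numbers): state 0 = open alert,
# state 1 = resolved; action 0 = wait, action 1 = investigate.
P = np.array([[[0.9, 0.1],     # wait: the alert usually stays open
               [0.0, 1.0]],    # investigate: the alert is resolved
              [[0.0, 1.0],
               [0.0, 1.0]]])
R = np.array([[-2.0, -5.0],    # waiting risks delayed response; investigating costs more up front
              [0.0, 0.0]])
V, policy = value_iteration(P, R)
```

    With these numbers the repeated cost of waiting outweighs the one-off cost of investigation, so the greedy policy investigates the open alert — the same trade-off between investigation cost and confirmation delay that the study optimizes at scale.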

  17. ARAM: an automated image analysis software to determine rosetting parameters and parasitaemia in Plasmodium samples.

    Science.gov (United States)

    Kudella, Patrick Wolfgang; Moll, Kirsten; Wahlgren, Mats; Wixforth, Achim; Westerhausen, Christoph

    2016-04-18

    Rosetting is associated with severe malaria and is a primary cause of death in Plasmodium falciparum infections. Detailed understanding of this adhesive phenomenon may enable the development of new therapies interfering with rosette formation. For this, it is crucial to determine parameters such as rosetting and parasitaemia of laboratory strains or patient isolates, a bottleneck in malaria research due to the time-consuming and error-prone manual analysis of specimens. Here, the automated, free, stand-alone analysis software automated rosetting analyzer for micrographs (ARAM), which determines rosetting rate, rosette size distribution as well as parasitaemia with a convenient graphical user interface, is presented. Automated rosetting analyzer for micrographs is an executable with two operation modes for automated identification of objects on images. The default mode detects red blood cells and fluorescently labelled parasitized red blood cells by combining an intensity-gradient filter with a threshold filter. The second mode determines object location and size distribution by a single-contrast method. The obtained results are compared with standardized manual analysis. Automated rosetting analyzer for micrographs calculates statistical confidence probabilities for rosetting rate and parasitaemia. It analyses 25 cell objects per second, reliably delivering results identical to manual analysis. For the first time, rosette size distribution is determined in a precise and quantitative manner by employing ARAM in combination with established inhibition tests. Additionally, ARAM measures the essential observables parasitaemia, rosetting rate, and the size and location of all detected objects, and provides confidence intervals for the determined observables. No other existing software solution offers this range of function. The second, non-malaria-specific analysis mode of ARAM offers the functionality to detect arbitrary objects
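
    The single-threshold detection mode can be approximated in a few lines; the threshold value and the use of plain connected-component labelling are assumptions for illustration, and ARAM's gradient-based default mode is not reproduced here:

```python
import numpy as np
from scipy import ndimage

def detect_objects(image, threshold):
    """Label connected bright regions and return their count and pixel sizes;
    a minimal stand-in for a threshold-based cell detection mode."""
    mask = image > threshold
    labels, n = ndimage.label(mask)                    # connected components
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return n, np.asarray(sizes)
```

    From the per-object sizes, a size cut can separate single cells from multi-cell clusters (rosettes), and counting labelled versus unlabelled objects yields a parasitaemia estimate.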

  18. Optimal correction and design parameter search by modern methods of rigorous global optimization

    International Nuclear Information System (INIS)

    Makino, K.; Berz, M.

    2011-01-01

    Frequently the design of schemes for correction of aberrations, or the determination of possible operating ranges for beamlines and cells in synchrotrons, exhibits multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, an abundance of optimization runs is frequently carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners of adjusting nonlinear parameters to achieve correction of high-order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and in using the underestimators to rigorously and iteratively eliminate regions that lie above already-known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle
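
    The branch-and-bound idea described here — prune any parameter region whose rigorous lower bound exceeds a known upper bound on the minimum, and bisect the rest — can be sketched in one dimension. The objective and its interval bound below are toy stand-ins for the rigorous Taylor-model underestimators of the paper:

```python
import heapq
import math

def branch_and_bound(f, lower_bound, a, b, tol=1e-3):
    """Global minimization of f on [a, b] by branch-and-bound.

    `lower_bound(lo, hi)` must return a rigorous lower bound of f on [lo, hi];
    boxes whose bound exceeds the best value found so far are eliminated,
    the rest are bisected until narrower than `tol`.
    """
    best_x, best_f = a, f(a)
    heap = [(lower_bound(a, b), a, b)]
    while heap:
        lb, lo, hi = heapq.heappop(heap)
        if lb > best_f - tol:              # region cannot beat the incumbent
            continue
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        if fm < best_f:
            best_x, best_f = mid, fm
        if hi - lo > tol:
            heapq.heappush(heap, (lower_bound(lo, mid), lo, mid))
            heapq.heappush(heap, (lower_bound(mid, hi), mid, hi))
    return best_x, best_f

# Toy objective f(x) = x^2 + sin(5x); min of x^2 on the box minus 1 is a valid bound
f = lambda x: x * x + math.sin(5 * x)
bound = lambda lo, hi: (0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)) - 1.0
```

    Because regions are discarded only when their rigorous bound rules them out, the search keeps (up to the tolerance) every region that could contain a global optimum — which is exactly what makes the approach suitable for enumerating all viable correction settings rather than a single local one.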

  19. Optimization of vibratory welding process parameters using response surface methodology

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Pravin Kumar; Kumar, S. Deepak; Patel, D.; Prasad, S. B. [National Institute of Technology Jamshedpur, Jharkhand (India)

    2017-05-15

    The current investigation was carried out to study the effect of a vibratory welding technique on the mechanical properties of 6 mm thick butt-welded mild steel plates. A new vibratory welding setup has been designed and developed which is capable of transferring vibrations, at a resonant frequency of 300 Hz, into the molten weld pool before it solidifies during the shielded metal arc welding (SMAW) process. The important process parameters of the vibratory welding technique, namely welding current, welding speed, and the frequency of the vibrations induced in the molten weld pool, were optimized using Taguchi analysis and Response surface methodology (RSM). The effect of the process parameters on tensile strength and hardness was evaluated using these optimization techniques. Applying RSM, the effects of the vibratory welding parameters on tensile strength and hardness were obtained through two separate regression equations. Results showed that the most influential factor for the desired tensile strength and hardness is the frequency at its resonant value, i.e. 300 Hz. The micro-hardness and microstructures of the vibratory welded joints were studied in detail and compared with those of conventional SMAW joints. A comparatively uniform and fine grain structure was found in the vibratory welded joints.

  20. Determination of optimal parameters for dual-layer cathode of polymer electrolyte fuel cell using computational intelligence-aided design.

    Science.gov (United States)

    Chen, Yi; Huang, Weina; Peng, Bei

    2014-01-01

    Because of the demands for sustainable and renewable energy, fuel cells have become increasingly popular, particularly the polymer electrolyte fuel cell (PEFC). Among the various components, the cathode plays a key role in the operation of a PEFC. In this study, a quantitative dual-layer cathode model was proposed for determining the optimal parameters that minimize the over-potential difference η and improve the efficiency using a newly developed bat swarm algorithm with a variable population embedded in the computational intelligence-aided design. The simulation results were in agreement with previously reported results, suggesting that the proposed technique has potential applications for automating and optimizing the design of PEFCs.

  1. Optimal time window for measurement of renal output parameters.

    Science.gov (United States)

    Kuyvenhoven, Jacob D; Ham, Hamphrey R; Piepsz, Amy

    2002-01-01

    Although normalised residual activity (NORA) and output efficiency (OE) are usually measured at a fixed time t, their dependency on t may affect the prediction of mean transit time (MTT). This study aimed to evaluate their degree of dependency on t and to determine an optimal time of measurement by assessing their relationship with MTT for various times t. A simulation model generated 232 cortical renograms by convolving one plasma disappearance curve with 232 created cortical retention functions. The results show that considerable changes are observed in NORA and OE depending on the time of measurement t. The choice of this time significantly influences the predictive value of these parameters for estimating MTT. The optimal time for measurement of NORA and OE should be close to the MTT, at the moment when emptying takes place. In clinical practice, it should be adapted to the clinical problem under investigation.
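
    The simulation model — a renogram as the convolution of a plasma disappearance curve with a cortical retention function — is easy to reproduce. The curve shapes and times below are illustrative, not the 232 functions of the study:

```python
import numpy as np

def cortical_renogram(plasma, retention, dt):
    """Renogram R = plasma disappearance curve convolved with the retention function."""
    return np.convolve(plasma, retention)[: len(plasma)] * dt

def mean_transit_time(retention, dt):
    """MTT = area under the retention function divided by its initial height."""
    return retention.sum() * dt / retention[0]

dt = 0.01                                   # minutes per sample
t = np.arange(0.0, 30.0, dt)
plasma = np.exp(-t / 3.0)                   # mono-exponential plasma disappearance
# Plateau of 2 min, then linear emptying over the next 2 min: MTT = 2 + 1 = 3 min
retention = np.clip((4.0 - t) / 2.0, 0.0, 1.0)
renogram = cortical_renogram(plasma, retention, dt)
```

    Measures such as NORA and OE can then be read off the simulated renogram at different times t and compared against the known MTT of the retention function, which is the comparison the study performs.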

  2. Optimization of magnetic parameters for toggle magnetoresistance random access memory

    International Nuclear Information System (INIS)

    Wang Shengyuan; Fujiwara, Hideo

    2005-01-01

    The magnetic parameters of the synthetic antiferromagnetic (SAF) elements for toggle-mode magnetoresistance random access memories (Toggle-MRAMs) have been optimized using the critical field curves obtained by an analytical method with the aid of numerical calculations, to maximize the operating field margin while taking into account the required memory density, storage lifetime, half-select disturb robustness, and the available strength of the operating field. The control of an especially low exchange-coupling strength in the SAF, in addition to the increase of the operating field, has been found to be essential for the development of toggle-MRAM in the near future.

  3. Optimization of some electrochemical etching parameters for cellulose derivatives

    International Nuclear Information System (INIS)

    Chowdhury, Annis; Gammage, R.B.

    1978-01-01

    Electrochemical etching of fast neutron induced recoil particle tracks in cellulose derivatives and other polymers provides an inexpensive and sensitive means of fast neutron personnel dosimetry. A study of the shape, clarity, and size of the tracks in Transilwrap polycarbonate indicated that the optimum normality of the potassium hydroxide etching solution is 9 N. Optimizations have also been attempted for cellulose nitrate, triacetate, and acetobutyrate with respect to such electrochemical etching parameters as frequency, voltage gradient, and concentration of the etching solution. The measurement of differential leakage currents between the undamaged and the neutron damaged foils aided in the selection of optimum frequencies. (author)

  4. Maximum gradient method for optimization of some reactor operating parameters

    International Nuclear Information System (INIS)

    Miasnikov, A.

    1976-03-01

    The method and the algorithm ensuing therefrom are described for the determination of the optimum operating state of a reactor. The optimum operating state is considered to be the extreme of the selected functional of the radial power distribution. The functional extreme is determined numerically, using a method which is one of the possible variants of the maximum gradient method. The radial distribution of the neutron absorption in regulating rods and the fuel element burnup are considered to be the variable parameters used in the optimization. (author)

  5. Power Saving Optimization for Linear Collider Interaction Region Parameters

    International Nuclear Information System (INIS)

    Seryi, Andrei

    2009-01-01

    Optimization of Interaction Region parameters of a TeV energy scale linear collider has to take into account constraints defined by phenomena such as beam-beam focusing forces, beamstrahlung radiation, and the hour-glass effect. With those constraints, achieving a desired luminosity of about 2E34 would require use of e+e- beams with about 10 MW average power. Application of the 'travelling focus' regime may allow the required beam power to be reduced by at least a factor of two, helping reduce the cost of the collider, while keeping the beamstrahlung energy loss reasonably low. The technique is illustrated for the 500 GeV CM parameters of the International Linear Collider. This technique may also in principle allow recycling the e+e- beams and/or recuperation of their energy.

  6. Automation for pattern library creation and in-design optimization

    Science.gov (United States)

    Deng, Rock; Zou, Elain; Hong, Sid; Wang, Jinyan; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh; Ding, Hua; Huang, Jason

    2015-03-01

    contain remedies built in so that fixing happens either automatically or in a guided manner. Building a comprehensive library of patterns is a very difficult task especially when a new technology node is being developed or the process keeps changing. The main dilemma is not having enough representative layouts to use for model simulation where pattern locations can be marked and extracted. This paper will present an automatic pattern library creation flow by using a few known yield detractor patterns to systematically expand the pattern library and generate optimized patterns. We will also look at the specific fixing hints in terms of edge movements, additive, or subtractive changes needed during optimization. Optimization will be shown for both the digital physical implementation and custom design methods.

  7. Automated inverse optimization facilitates lower doses to normal tissue in pancreatic stereotactic body radiotherapy.

    Science.gov (United States)

    Mihaylov, Ivaylo B; Mellon, Eric A; Yechieli, Raphael; Portelance, Lorraine

    2018-01-01

    Inverse planning is a trial-and-error, iterative process. This work introduces a fully automated inverse optimization approach, in which the treatment plan is closely tailored to the unique patient anatomy. The auto-optimization is applied to pancreatic stereotactic body radiotherapy (SBRT). The automation is based on stepwise reduction of dose-volume histograms (DVHs). Five uniformly spaced points, from 1% to 70% of the organ-at-risk (OAR) volumes, are used. Doses to those DVH points are iteratively decreased through multiple optimization runs. With each optimization run the doses to the OARs decrease, while the dose homogeneity over the target increases. The iterative process is terminated when a pre-specified dose heterogeneity over the target is reached. Twelve pancreatic cases were retrospectively studied. Doses to the target and maximum doses to duodenum, bowel, stomach, and spinal cord were evaluated. In addition, mean doses to liver and kidneys were tallied. The auto-optimized plans were compared to the actual treatment plans, which are based on national protocols. The prescription dose to 95% of the planning target volume (PTV) is the same for the treatment and the auto-optimized plans. The average differences in maximum doses to duodenum, bowel, stomach, and spinal cord are -4.6 Gy, -1.8 Gy, -1.6 Gy, and -2.4 Gy, respectively; the negative sign indicates lower doses with the auto-optimization. The average differences in mean doses to liver and kidneys are -0.6 Gy and -1.1 Gy to -1.5 Gy, respectively. Automated inverse optimization holds great potential for personalization and tailoring of radiotherapy to particular patient anatomies. It can be utilized for normal tissue sparing or for an isotoxic dose escalation.
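
    The stepwise DVH-reduction loop can be sketched generically; `run_optimizer` is a hypothetical stand-in for one inverse-planning run of a treatment planning system, and the shrink factor, iteration cap, and heterogeneity measure are assumptions:

```python
import numpy as np

DVH_POINTS = np.linspace(0.01, 0.70, 5)     # 5 uniform points, 1%-70% of OAR volume

def dvh_dose(dose, volume_fraction):
    """Dose received by at least `volume_fraction` of the structure."""
    return float(np.quantile(dose, 1.0 - volume_fraction))

def auto_optimize(run_optimizer, max_heterogeneity, shrink=0.95, max_iters=50):
    """Iteratively lower the dose goals at the DVH points until the target
    dose heterogeneity exceeds the pre-specified level, then return the
    last acceptable plan. `run_optimizer(goals)` must return
    (target_dose, oar_dose) arrays; its interface is hypothetical."""
    oar_goals, best = None, None
    for _ in range(max_iters):
        target_dose, oar_dose = run_optimizer(oar_goals)
        heterogeneity = (target_dose.max() - target_dose.min()) / target_dose.mean()
        if heterogeneity > max_heterogeneity:
            break                            # target homogeneity limit reached
        best = (target_dose, oar_dose)
        oar_goals = {float(v): shrink * dvh_dose(oar_dose, v) for v in DVH_POINTS}
    return best
```

    Each pass tightens the OAR goals to slightly below what the previous plan achieved, so OAR doses ratchet downward until the target can no longer be covered homogeneously — the termination criterion described in the abstract.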

  8. EVALUATION OF ANAEMIA USING RED CELL AND RETICULOCYTE PARAMETERS USING AUTOMATED HAEMATOLOGY ANALYSER

    Directory of Open Access Journals (Sweden)

    Vidyadhar Rao

    2016-06-01

    Full Text Available Use of current models of automated haematology analysers helps in calculating the haemoglobin content of mature red cells and reticulocytes and the percentages of microcytic and hypochromic red cells. This has helped the clinician in reaching early diagnosis and management of different haemopoietic disorders such as iron deficiency anaemia, thalassaemia and anaemia of chronic disease. AIM This study was conducted using an automated haematology analyser to evaluate anaemia using red cell and reticulocyte parameters. Three types of anaemia were evaluated: iron deficiency anaemia, anaemia of long duration and anaemia associated with chronic disease and iron deficiency. MATERIALS AND METHODS Blood samples were collected from 287 adult patients with anaemia, differentiated depending on their iron status, haemoglobinopathies and inflammatory activity: iron deficiency anaemia (n=132), anaemia of long duration (ACD, n=97) and anaemia associated with chronic disease with iron deficiency (ACD Combi, n=58). The percentages of microcytic and hypochromic red cells and the haemoglobin levels in reticulocytes and mature RBCs were calculated. The accuracy of the parameters in differentiating between the types of anaemia was analysed using receiver operating characteristic analysis. OBSERVATIONS AND RESULTS There was no difference in parameters between the iron deficiency group and the group with anaemia associated with chronic disease and iron deficiency. The hypochromic red cell percentage was the best parameter for differentiating anaemia of chronic disease with or without absolute iron deficiency, with a sensitivity of 72.7% and a specificity of 70.4%. CONCLUSIONS The red cell and reticulocyte parameters were reasonably good indicators for differentiating absolute iron deficiency anaemia from anaemia of chronic disease.

  9. Optimizing object-based image analysis for semi-automated geomorphological mapping

    NARCIS (Netherlands)

    Anders, N.; Smith, M.; Seijmonsbergen, H.; Bouten, W.; Hengl, T.; Evans, I.S.; Wilson, J.P.; Gould, M.

    2011-01-01

    Object-Based Image Analysis (OBIA) is considered a useful tool for analyzing high-resolution digital terrain data. In the past, both segmentation and classification parameters were optimized manually by trial and error. We propose a method to automatically optimize classification parameters for

  10. Biohydrogen Production from Simple Carbohydrates with Optimization of Operating Parameters.

    Science.gov (United States)

    Muri, Petra; Osojnik-Črnivec, Ilja Gasan; Djinovič, Petar; Pintar, Albin

    2016-01-01

    Hydrogen could be an alternative energy carrier in the future, as well as a source for chemical and fuel synthesis, owing to its high energy content, environmentally friendly technology and zero carbon emissions. In particular, conversion of organic substrates to hydrogen via the dark fermentation process is of great interest. The aim of this study was fermentative hydrogen production by an anaerobic mixed culture using different carbon sources (mono- and disaccharides) and further optimization by varying a number of operating parameters (pH value, temperature, organic loading, mixing intensity). Among all tested mono- and disaccharides, glucose was shown to be the preferred carbon source, exhibiting a hydrogen yield of 1.44 mol H(2)/mol glucose. Further evaluation of selected operating parameters showed that the highest hydrogen yield (1.55 mol H(2)/mol glucose) was obtained at an initial pH value of 6.4, T=37 °C and an organic loading of 5 g/L. The obtained results demonstrate that the lower hydrogen yield under all other conditions was associated with redirection of metabolic pathways from butyric and acetic acid production (accompanied by H(2) production) to lactic acid production (where simultaneous H(2) production is not mandatory). These results therefore represent an important foundation for the optimization and industrial-scale production of hydrogen from organic substrates.

  11. Automated capacitive spectrometer for measuring the parameters of deep centers in semiconductor materials

    International Nuclear Information System (INIS)

    Shajmeev, S.S.

    1985-01-01

    An automated capacitive spectrometer for determining the parameters of deep centers in semiconductor materials and devices is described. The facility can be used in studying electrically active defects (impurity, radiation, thermal) having deep levels in the forbidden band of a semiconductor. The facility makes it possible to determine the following parameters of the deep centers: the concentration of each deep level taken separately, within 5x10^-1 to 5x10^-15 of the doping impurity concentration; the energy position of the level in the forbidden band, in the range from 0.08 eV above the top of the valence band to 0.08 eV below the bottom of the conduction band; the hole or electron capture cross-section of the deep center; and the concentration profile of the deep levels

  12. Automated array-CGH optimized for archival formalin-fixed, paraffin-embedded tumor material

    Directory of Open Access Journals (Sweden)

    Nederlof Petra M

    2007-03-01

    Full Text Available Abstract Background Array Comparative Genomic Hybridization (aCGH) is a rapidly evolving technology that still lacks complete standardization. Yet, it is of great importance to obtain robust and reproducible data to enable meaningful comparisons across multiple hybridizations. Special difficulties arise when aCGH is performed on archival formalin-fixed, paraffin-embedded (FFPE) tissue due to its variable DNA quality. Recently, we have developed an effective DNA quality test that predicts the suitability of archival samples for BAC aCGH. Methods In this report, we first used DNA from a cancer cell line (SKBR3) to optimize the aCGH protocol for automated hybridization, and subsequently optimized and validated the procedure for FFPE breast cancer samples. We aimed for the highest throughput, accuracy, and reproducibility applicable to FFPE samples, which can also be important in future diagnostic use. Results Our protocol of automated array-CGH on archival FFPE ULS-labeled DNA showed very similar results compared with published data and our previous manual hybridization method. Conclusion This report combines automated aCGH on unamplified archival FFPE DNA with non-enzymatic ULS labeling, and describes an optimized protocol for this combination, resulting in improved quality and reproducibility.

  13. Parameter optimization of protein film production using microbial transglutaminase.

    Science.gov (United States)

    Patzsch, Katja; Riedel, Kristin; Pietzsch, Markus

    2010-04-12

    Sodium caseinate films were produced using microbial transglutaminase as a protein cross-linking biocatalyst. Basic parameters for the film production, such as buffer type and concentration, pH, temperature, plasticizer concentration and its influence on transglutaminase activity, mold material for film casting, specimen width, and cutting method, were investigated and compared with standardized methods (DIN EN ISO 527-3). Surprisingly, a previously described sodium phosphate buffer (50 mM, pH 8.0) resulted in crystals after drying the films for 48 h. To avoid this deteriorating effect, the buffer system was optimized and finally a Tris-HCl buffer (20 mM, pH 7.0) was chosen for the production of transparent, smooth films without crystallization. Incubation time and temperature during enzyme treatment had a considerable influence on the mechanical properties of the films.

  14. Optimization of resistance spot welding parameters for microalloyed steel sheets

    Science.gov (United States)

    Viňáš, Ján; Kaščák, Ľuboš; Greš, Miroslav

    2016-11-01

    The paper presents the results of resistance spot welding of hot-dip galvanized microalloyed steel sheets used in car body production. The spot welds were made with various welding currents and welding times, but with a constant pressing force of the welding electrodes. The welding current and welding time are the dominant characteristics in spot welding that affect the quality of spot welds, as well as their dimensions and load-bearing capacity. The load-bearing capacity of the welded joints was evaluated by tensile test according to the STN 05 1122 standard, and the dimensions and inner defects were evaluated by metallographic analysis using a light optical microscope. The welding parameters of the investigated microalloyed steel sheets were optimized for resistance spot welding on the pneumatic welding machine BPK 20.

  15. Optimal Sensor Networks Scheduling in Identification of Distributed Parameter Systems

    CERN Document Server

    Patan, Maciej

    2012-01-01

    Sensor networks have recently come into prominence because they hold the potential to revolutionize a wide spectrum of both civilian and military applications. An ingenious characteristic of sensor networks is the distributed nature of data acquisition. Therefore they seem to be ideally prepared for the task of monitoring processes with spatio-temporal dynamics which constitute one of most general and important classes of systems in modelling of the real-world phenomena. It is clear that careful deployment and activation of sensor nodes are critical for collecting the most valuable information from the observed environment. Optimal Sensor Network Scheduling in Identification of Distributed Parameter Systems discusses the characteristic features of the sensor scheduling problem, analyzes classical and recent approaches, and proposes a wide range of original solutions, especially dedicated for networks with mobile and scanning nodes. Both researchers and practitioners will find the case studies, the proposed al...

  16. OPTIMIZATION OF OPERATION PARAMETERS OF 80-KEV ELECTRON GUN

    Directory of Open Access Journals (Sweden)

    JEONG DONG KIM

    2014-06-01

    As a first step, the electron generator of an 80-keV electron gun was manufactured. In order to produce high beam power from an electron linear accelerator, a proper beam current is required from the electron generator. In this study, the beam current was measured by evaluating the performance of the electron generator. The beam current was determined by five parameters: the high voltage at the electron gun, the cathode voltage, the pulse width, the pulse amplitude, and the bias voltage at the grid. From the experimental results, under optimal conditions the high voltage was determined to be 80 kV, the pulse width 500 ns, and the cathode voltage from 4.2 V to 4.6 V. The beam current was measured as 1.9 A at maximum. These results satisfy the beam current required for the operation of an electron linear accelerator.

  17. Microbial alkaline proteases: Optimization of production parameters and their properties

    Directory of Open Access Journals (Sweden)

    Kanupriya Miglani Sharma

    2017-06-01

    Full Text Available Proteases are hydrolytic enzymes capable of degrading proteins into small peptides and amino acids. They account for nearly 60% of the total industrial enzyme market. Proteases are extensively exploited commercially in the food, pharmaceutical, leather and detergent industries. Given their potential uses, there has been renewed interest in the discovery of proteases with novel properties and a constant thrust to optimize enzyme production. This review summarizes a fraction of the enormous number of reports available on various aspects of alkaline proteases. Diverse sources for the isolation of alkaline protease-producing microorganisms are reported. The various nutritional and environmental parameters affecting the production of alkaline proteases in submerged and solid-state fermentation are described. The enzymatic and physicochemical properties of alkaline proteases from several microorganisms are discussed, which can help to identify enzymes with high activity and stability over extreme pH and temperature, so that they can be developed for industrial applications.

  18. High Temperature Epoxy Foam: Optimization of Process Parameters

    Directory of Open Access Journals (Sweden)

    Samira El Gazzani

    2016-06-01

    Full Text Available For many years, reduction of fuel consumption has been a major aim in terms of both costs and environmental concerns. One option is to reduce the weight of fuel consumers. For this purpose, the use of a lightweight material based on rigid foams is a relevant choice. This paper deals with a new high-temperature epoxy expanded material as a substitute for phenolic resin, which is classified as potentially mutagenic by the European REACH directive. The optimization of a thermoset foam depends on two major phenomena: the reticulation process and the expansion of the foaming agent. Controlling these two phenomena can lead to a fully expanded and cured material. The rheological behavior of the epoxy resin is studied and the gel time is determined at various temperatures. The expansion of the foaming agent is investigated by thermomechanical analysis. The results are correlated and compared with samples foamed under the same temperature conditions. The ideal foaming/gelation temperature is then determined. The second part of this research concerns the optimization of the curing cycle of a high-temperature trifunctional epoxy resin. A two-step curing cycle was defined by considering the influence of different curing schedules on the glass transition temperature of the material. The final foamed material has a glass transition temperature of 270 °C.

  19. Robust fluence map optimization via alternating direction method of multipliers with empirical parameter optimization

    International Nuclear Information System (INIS)

    Gao, Hao

    2016-01-01

    For treatment planning during intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT), beam fluence maps can first be optimized via fluence map optimization (FMO), under the given dose prescriptions and constraints, to conformally deliver the radiation dose to the targets while sparing the organs-at-risk, and then segmented into deliverable MLC apertures via leaf or arc sequencing algorithms. The aim of this work is to develop an efficient algorithm for FMO based on the alternating direction method of multipliers (ADMM). Here we consider FMO with a least-square cost function and non-negative fluence constraints; the solution algorithm is based on ADMM, which is efficient and simple to implement. In addition, an empirical method for optimizing the ADMM parameter is developed to improve the robustness of the ADMM algorithm. The ADMM-based FMO solver was benchmarked against the quadratic programming method based on the interior-point (IP) method using the CORT dataset. The comparison results suggested that the ADMM solver achieves similar plan quality with a slightly smaller total objective function value than IP. A simple-to-implement ADMM-based FMO solver with empirical parameter optimization is proposed for IMRT or VMAT. (paper)
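
    A minimal version of least-square FMO with non-negative fluence via ADMM can be sketched as follows; the dose-influence matrix here is a toy, `rho` plays the role of the ADMM parameter that the paper tunes empirically, and a fixed iteration count replaces a convergence test:

```python
import numpy as np

def fmo_admm(A, b, rho=1.0, iters=300):
    """Solve min ||A x - b||^2 subject to x >= 0 with ADMM.

    Split into an unconstrained least-squares block (x) and a
    non-negativity block (z), coupled by the scaled dual variable u."""
    n = A.shape[1]
    M = A.T @ A + rho * np.eye(n)            # formed and inverted once, reused
    M_inv = np.linalg.inv(M)
    Atb = A.T @ b
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        x = M_inv @ (Atb + rho * (z - u))    # x-update: ridge-type solve
        z = np.maximum(0.0, x + u)           # z-update: project onto x >= 0
        u = u + x - z                        # dual update
    return z
```

    Because the system matrix is fixed across iterations, it can be factored once, which is what makes ADMM attractive against a general-purpose interior-point solver for this problem class. The choice of `rho` affects only the convergence speed, not the fixed point — hence the paper's empirical tuning of it.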

  20. Laser Processing of Multilayered Thermal Spray Coatings: Optimal Processing Parameters

    Science.gov (United States)

    Tewolde, Mahder; Zhang, Tao; Lee, Hwasoo; Sampath, Sanjay; Hwang, David; Longtin, Jon

    2017-12-01

    Laser processing offers an innovative approach for the fabrication and transformation of a wide range of materials. As a rapid, non-contact, and precision material removal technology, lasers are natural tools to process thermal spray coatings. Recently, a thermoelectric generator (TEG) was fabricated using thermal spray and laser processing. The TEG device represents a multilayer, multimaterial functional thermal spray structure, with laser processing serving an essential role in its fabrication. Several unique challenges are presented when processing such multilayer coatings, and the focus of this work is on the selection of laser processing parameters for optimal feature quality and device performance. A parametric study is carried out using three short-pulse lasers, where laser power, repetition rate and processing speed are varied to determine the laser parameters that result in high-quality features. The resulting laser patterns are characterized using optical and scanning electron microscopy, energy-dispersive x-ray spectroscopy, and electrical isolation tests between patterned regions. The underlying laser interaction and material removal mechanisms that affect the feature quality are discussed. Feature quality was found to improve both by using a multiscanning approach and an optional assist gas of air or nitrogen. Electrically isolated regions were also patterned in a cylindrical test specimen.

  1. Real parameter optimization by an effective differential evolution algorithm

    Directory of Open Access Journals (Sweden)

    Ali Wagdy Mohamed

    2013-03-01

    Full Text Available This paper introduces an Effective Differential Evolution (EDE) algorithm for solving real-parameter optimization problems over a continuous domain. The algorithm introduces a new mutation rule based on the best and the worst individuals of the entire population of a particular generation. The mutation rule is combined with the basic mutation strategy through a linearly decreasing probability rule. The proposed mutation rule is shown to promote the local search capability of basic DE and to make it faster. Furthermore, a random mutation scheme and a modified Breeder Genetic Algorithm (BGA) mutation scheme are merged to avoid stagnation and/or premature convergence. Additionally, the scaling factor and crossover rate of DE are introduced as uniform random numbers to enrich the search behavior and to enhance the diversity of the population. The effectiveness and benefits of the proposed modifications used in EDE have been experimentally investigated. Numerical experiments on a set of bound-constrained problems have shown that the new approach is efficient, effective and robust. Comparison results between EDE and several classical differential evolution methods and state-of-the-art parameter-adaptive differential evolution variants indicate that the proposed EDE algorithm is competitive with, and in some cases superior to, other algorithms in terms of final solution quality, efficiency, convergence rate, and robustness.
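
    The combination of a directed best/worst mutation with the basic DE/rand/1 strategy through a linearly decreasing probability can be sketched as below. The paper defines the exact rule; this sketch assumes one plausible form, v = x_r1 + F(x_best - x_worst), together with a uniform-random scaling factor and crossover rate, so it is illustrative only.

```python
import numpy as np

def ede_minimize(f, bounds, pop_size=30, gens=200, cr=0.9, seed=0):
    """DE sketch: directed mutation toward the best and away from the worst
    individual, mixed with classic DE/rand/1 via a linearly decreasing
    probability (illustrative form, not the published rule)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.apply_along_axis(f, 1, pop)
    for g in range(gens):
        p_directed = 1.0 - g / gens          # linearly decreasing switch probability
        best, worst = pop[fit.argmin()], pop[fit.argmax()]
        for i in range(pop_size):
            F = rng.uniform(0.2, 0.8)        # uniform random scaling factor
            r1, r2, r3 = rng.choice(pop_size, 3, replace=False)
            if rng.random() < p_directed:    # directed (best/worst) mutation
                v = pop[r1] + F * (best - worst)
            else:                            # classic DE/rand/1 mutation
                v = pop[r1] + F * (pop[r2] - pop[r3])
            mask = rng.random(dim) < cr      # binomial crossover
            mask[rng.integers(dim)] = True
            trial = np.clip(np.where(mask, v, pop[i]), lo, hi)
            ft = f(trial)
            if ft <= fit[i]:                 # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

x, fx = ede_minimize(lambda v: np.sum(v ** 2), [(-5, 5)] * 5)
print(fx)
```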

  2. OPTIMIZATION OF DYEING PARAMETERS TO DYE COTTON WITH CARROT EXTRACTION

    Directory of Open Access Journals (Sweden)

    MIRALLES Verónica

    2017-05-01

    Full Text Available Natural dyes derived from flora and fauna are believed to be safe because of their non-toxic, non-carcinogenic and biodegradable nature. Furthermore, natural dyes do not cause pollution and waste-water problems. Natural dyes, like synthetic dyes, need optimum parameters to achieve a good dyeing. On some occasions the use of mordants is necessary to increase the affinity between the cellulose fiber and the natural dye, but there are other conditions to optimize in the dyeing process, such as time, temperature and auxiliary products. In addition, the optimum conditions differ depending on the type of dye and the nature of the fiber. The aim of this work is the use of carrot extract to dye cotton fabric by exhaustion under diverse dyeing conditions. Different dyeing processes were carried out to study the effect of pH and temperature, using pH values of 7, 6 and 4 and temperatures of 95 ºC and 130 ºC for one hour. As a result, images of the dyed samples are shown. Moreover, to evaluate the colour of each sample, CIELAB parameters obtained by reflection spectrophotometry are analysed. The results showed that the temperature used has an important influence on the colour of the dyed sample.

  3. Machining parameter optimization in turning process for sustainable manufacturing

    Directory of Open Access Journals (Sweden)

    S. G. Dambhare

    2015-09-01

    Full Text Available There is increasing awareness of sustainable manufacturing processes. Manufacturing industries are the backbone of a country’s economy, but there is great concern about their consumption of resources and creation of waste. The primary aim of this study was to explore sustainability concerns in the turning process in an Indian machining industry. The effects of the cutting parameters (speed, feed, depth of cut), the machining environment (dry/MQL/wet), and the type of cutting tool on the sustainability factors under study were observed. Analysis of Variance (ANOVA) was used to analyse the data obtained from experimentation in a small-scale machining industry, and the process was modelled mathematically using response surface methodology (RSM). Economic and environmental aspects, namely surface roughness, material removal rate and energy consumption, were considered as the sustainability factors. The model helps to understand the effect of the cutting parameters and conditions on surface finish, energy consumption, and material removal rate. The process was optimized for minimum power consumption, treating environmental concern as of prime importance. The study suggests that the cutting environment and tool type influenced the power consumption during the turning process. An extended form of the proposed model could be used to predict the environmental impact of the machining process, which would bring environmental concern into conventional machining.

  4. OPTIMIZATION OF HEMISPHERICAL RESONATOR GYROSCOPE STANDING WAVE PARAMETERS

    Directory of Open Access Journals (Sweden)

    Olga Sergeevna Khalyutina

    2017-01-01

    Full Text Available Traditionally, the problem of autonomous navigation is solved by dead reckoning of the navigation flight parameters (NFP) of the aircraft (AC). With increasing accuracy requirements for NFP determination, the sensors of primary navigation information, gyroscopes and accelerometers, have been improved. Gyroscopes of a new type, the so-called solid-state wave gyroscopes (SSWG), are currently being developed and put into practice. The work deals with the problem of increasing the accuracy of angular velocity measurements of the hemispherical resonator gyroscope (HRG). The reduction in the accuracy characteristics of the HRG is caused by defects in the distribution of mass within the volume of its design. A control system is synthesized for optimal damping of the distortion of the standing-wave parameters caused by the mass defect of the resonator. The research challenge was to examine and analytically offset the impact of the defect on the standing-wave parameters (amplitude and frequency). The research was performed by mathematical modeling in the SolidWorks Simulation environment for the case when the characteristics of the sensitive element of the HRG met the technological drawings of a particular type of resonator. The method of inverse dynamics was chosen for the synthesis. The research results are presented as graphs of the amplitude-frequency characteristics (AFC) of the resonator output signal. Simulation was performed for three cases: a perfect distribution of mass; the presence of the mass defect; and the presence of the mass defect with the synthesized control action applied. The effectiveness of the proposed control algorithm is evaluated from the simulated resonator output signal for the perfect construction compared with its performance in the presence of a mass defect. It is assumed that the signals exciting the standing waves are identical in both amplitude and frequency in the two cases.
In this

  5. A novel validation algorithm allows for automated cell tracking and the extraction of biologically meaningful parameters.

    Directory of Open Access Journals (Sweden)

    Daniel H Rapoport

    Full Text Available Automated microscopy is currently the only method to observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation, non-invasively and label-free. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automate this process have resulted in ever-improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature has prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they lacked validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea to automatically inspect the tracking results and accept only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance.
The resulting set of complete paths can be used to automatically extract important biological parameters
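
    The validation idea, accepting or rejecting complete paths based only on spatiotemporal contiguity, can be illustrated with a toy check. The path representation and the displacement threshold here are hypothetical, not the published algorithm:

```python
import numpy as np

def validate_paths(paths, max_step=25.0):
    """Toy validation: accept a cell path only if it is spatiotemporally
    contiguous, i.e. it covers consecutive frames and no frame-to-frame
    displacement exceeds a plausible cell movement (in pixels).
    Complete paths are the unit of acceptance/rejection."""
    accepted = []
    for path in paths:                      # path: list of (frame, x, y)
        frames = [f for f, _, _ in path]
        contiguous = all(b - a == 1 for a, b in zip(frames, frames[1:]))
        steps = [np.hypot(x2 - x1, y2 - y1)
                 for (_, x1, y1), (_, x2, y2) in zip(path, path[1:])]
        if contiguous and all(s <= max_step for s in steps):
            accepted.append(path)           # keep the whole path ...
        # ... otherwise reject the whole path, not single points
    return accepted

good = [(0, 10, 10), (1, 14, 12), (2, 18, 15)]
jumpy = [(0, 10, 10), (1, 80, 90), (2, 18, 15)]    # implausible jump
gapped = [(0, 10, 10), (2, 12, 11)]                # missing frame
print(len(validate_paths([good, jumpy, gapped])))  # -> 1
```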

  6. Optimization of PID Parameters Utilizing Variable Weight Grey-Taguchi Method and Particle Swarm Optimization

    Science.gov (United States)

    Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd

    2018-03-01

    A controller that uses PID parameters requires a good tuning method in order to improve control system performance. PID tuning methods are divided into two groups, namely classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods. Previously, researchers have integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on two PSO optimization parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols method, implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PSO-PID parameter tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method as a tuning method for the hydraulic positioning system.
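
    The two PSO parameters tuned by the Grey-Taguchi DOE, the particle velocity limit and the weight factor, appear explicitly in a basic PSO loop. The sketch below tunes PID gains for a simple first-order stand-in plant, not the paper's hydraulic system; the bounds and the integral-squared-error cost are illustrative assumptions.

```python
import numpy as np

def step_cost(gains, n=200, dt=0.05):
    """ISE of the closed-loop unit-step response of a discretized
    first-order plant 1/(s+1), a stand-in for the real system."""
    kp, ki, kd = gains
    a = np.exp(-dt)
    b = 1.0 - a
    y = integ = e_prev = 0.0
    cost = 0.0
    for _ in range(n):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        e_prev = e
        y = a * y + b * u
        if not np.isfinite(y) or abs(y) > 1e6:
            return 1e9                     # penalize unstable gain sets
        cost += e * e * dt
    return cost

def pso(cost, dim=3, n_particles=20, iters=60, v_max=0.5, w=0.7,
        c1=1.5, c2=1.5, seed=1):
    """Basic PSO; v_max (velocity limit) and w (inertia weight) are the
    two parameters the paper tunes via the Grey-Taguchi DOE."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.apply_along_axis(cost, 1, x)
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -v_max, v_max)      # velocity limit
        x = np.clip(x + v, 0.0, 10.0)
        val = np.apply_along_axis(cost, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()

gains, j = pso(step_cost)
print(gains, j)
```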

  7. The mass movement routing tool r.randomwalk and its functionalities for parameter sensitivity analysis and optimization

    Science.gov (United States)

    Krenn, Julia; Mergili, Martin

    2016-04-01

    r.randomwalk is a GIS-based, multi-functional conceptual tool for mass movement routing. Starting from one to many release points or release areas, mass points are routed down through the digital elevation model until a defined break criterion is reached. Break criteria are defined by the user and may consist of an angle of reach or a related parameter (empirical-statistical relationships), the drop of the flow velocity to zero (two-parameter friction model), or the exceedance of a maximum runup height. Multiple break criteria may be combined. A constrained random walk approach is applied for the routing procedure, where the slope and the perpetuation of the flow direction determine the probability of the flow to move in a certain direction. r.randomwalk is implemented as a raster module of the GRASS GIS software and, as such, is open source. It can be obtained from http://www.mergili.at/randomwalk.html. Besides other innovative functionalities, r.randomwalk provides built-in functionality for the derivation of an impact indicator index (III) map with values in the range 0-1. III is derived from multiple model runs with different combinations of input parameters varied in a random or controlled way. It represents the fraction of model runs predicting an impact at a given pixel and is evaluated against the observed impact area through an ROC plot. The related tool r.ranger facilitates the automated generation and evaluation of many III maps from a variety of sets of parameter combinations. We employ r.randomwalk and r.ranger for parameter optimization and sensitivity analysis. Thereby we do not focus on parameter values but, accounting for the uncertainty inherent in all parameters, on parameter ranges. In this sense, we demonstrate two strategies for parameter sensitivity analysis and optimization.
We avoid (i) one-at-a-time parameter testing, which would fail to account for interdependencies of the parameters, and (ii) exploring all possible
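
    The constrained random walk routing, slope-weighted direction choice with a persistence bonus and an angle-of-reach break criterion, can be sketched as follows. This is not the r.randomwalk source; the grid representation, weighting and thresholds are illustrative assumptions.

```python
import numpy as np

def random_walk_path(dem, start, reach_angle_deg=11.0, persist=2.0,
                     max_steps=10000, seed=0):
    """Illustrative constrained random walk on a DEM grid: downslope
    neighbors are weighted by slope, the previous direction is favored by
    a persistence factor, and the walk stops when the average reach angle
    from the release point drops below a threshold."""
    rng = np.random.default_rng(seed)
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    r, c = start
    z0 = dem[r, c]
    prev = None
    path = [start]
    for _ in range(max_steps):
        weights, cands = [], []
        for k, (dr, dc) in enumerate(nbrs):
            rr, cc = r + dr, c + dc
            if not (0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]):
                continue
            drop = dem[r, c] - dem[rr, cc]
            if drop <= 0:
                continue                        # downslope moves only
            w = drop / np.hypot(dr, dc)         # slope toward the neighbor
            if prev == k:
                w *= persist                    # favor the previous direction
            weights.append(w)
            cands.append((k, rr, cc))
        if not cands:
            break                               # local sink: deposit here
        p = np.array(weights) / sum(weights)
        k, r, c = cands[rng.choice(len(cands), p=p)]
        prev = k
        path.append((r, c))
        dist = np.hypot(r - start[0], c - start[1])
        reach = np.degrees(np.arctan((z0 - dem[r, c]) / dist))
        if reach < reach_angle_deg:
            break                               # angle-of-reach break criterion
    return path

# Tilted plane: the walk runs downhill from the top-left corner.
dem = np.add.outer(np.arange(20.0)[::-1], np.arange(20.0)[::-1])
path = random_walk_path(dem, (0, 0))
print(len(path))
```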

  8. Automated Gravimetric Calibration to Optimize the Accuracy and Precision of TECAN Freedom EVO Liquid Handler.

    Science.gov (United States)

    Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique

    2016-10-01

    High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems depends on the ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology to set up liquid class pipetting parameters for each solution was to split the process into three steps: (1) screening of predefined liquid classes, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The entire process, from running the appropriate pipetting scripts through data acquisition and reporting to the creation of a new liquid class in EVOware, was fully automated. The calibration and confirmation of the robotic system was simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. © 2016 Society for Laboratory Automation and Screening.
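
    Step (2), adjusting accuracy parameters from a calibration curve, amounts to fitting the commanded-versus-measured relationship and inverting it. The numbers below are hypothetical, not measured TECAN data:

```python
import numpy as np

# Hypothetical calibration data: commanded volume (µL) vs. gravimetrically
# measured volume (mass / density), mimicking step (2) of the procedure.
commanded = np.array([3.0, 10.0, 50.0, 100.0, 300.0, 900.0])
measured = np.array([2.7, 9.3, 47.8, 96.0, 289.0, 871.0])  # under-delivery

# Fit measured = slope * commanded + offset ...
slope, offset = np.polyfit(commanded, measured, 1)

# ... and invert the line so that future commands correct for the bias.
def corrected_command(target_ul):
    return (target_ul - offset) / slope

# Confirmation (step 3): the corrected commands should land on target.
predicted = slope * corrected_command(commanded) + offset
print(np.allclose(predicted, commanded))  # -> True
```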

  9. Automated scheme to determine design parameters for a recoverable reentry vehicle

    International Nuclear Information System (INIS)

    Williamson, W.E.

    1976-01-01

    The NRV (Nosetip Recovery Vehicle) program at Sandia Laboratories is designed to recover the nose section from a sphere cone reentry vehicle after it has flown a near ICBM reentry trajectory. Both mass jettison and parachutes are used to reduce the velocity of the RV near the end of the trajectory to a sufficiently low level that the vehicle may land intact. The design problem of determining mass jettison time and parachute deployment time in order to ensure that the vehicle does land intact is considered. The problem is formulated as a min-max optimization problem where the design parameters are to be selected to minimize the maximum possible deviation in the design criteria due to uncertainties in the system. The results of the study indicate that the optimal choice of the design parameters ensures that the maximum deviation in the design criteria is within acceptable bounds. This analytically ensures the feasibility of recovery for NRV
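
    The min-max formulation, choosing design parameters to minimize the maximum deviation of the design criteria over system uncertainties, can be sketched with a toy model. The deviation function, grids and bounds below are invented for illustration and bear no relation to the NRV dynamics:

```python
import numpy as np

def deviation(t, w):
    """Hypothetical deviation of a design criterion from its target, as a
    function of a design parameter t (e.g. a jettison time) and an
    uncertainty w (e.g. an atmospheric perturbation)."""
    return abs((t - 3.0) * (1.0 + w)) + 0.5 * abs(w)

t_grid = np.linspace(0.0, 6.0, 601)        # candidate design parameters
w_samples = np.linspace(-0.2, 0.2, 41)     # sampled system uncertainties

# For each candidate design, evaluate the worst case over uncertainties,
# then choose the design minimizing that worst case (the min-max solution).
worst = np.array([max(deviation(t, w) for w in w_samples) for t in t_grid])
t_star = t_grid[worst.argmin()]
print(t_star, worst.min())
```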

  10. Determination of the Optimized Automation Rate considering Effects of Automation on Human Operators in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun; Kim, Man Cheol

    2015-01-01

    Automation refers to the use of a device or a system to perform a function previously performed by a human operator. It is introduced to reduce human errors and to enhance performance in various industrial fields, including the nuclear industry. However, these positive effects are not always achieved in complex systems such as nuclear power plants (NPPs). An excessive introduction of automation can generate new roles for human operators and change activities in unexpected ways. As more automation systems are accepted, the ability of human operators to detect automation failures and resume manual control is diminished. This disadvantage of automation is called the Out-of-the-Loop (OOTL) problem. The positive and negative effects of automation should be considered at the same time to determine the appropriate level of automation. However, the conventional concept of the automation rate is limited in that it does not consider the effects of automation on human operators. Thus, in this paper, a new estimation method for the automation rate is suggested to overcome this problem.

  11. Artificial Intelligence Based Selection of Optimal Cutting Tool and Process Parameters for Effective Turning and Milling Operations

    Science.gov (United States)

    Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta

    2016-06-01

    With the increased trend toward automation in modern manufacturing industry, human intervention in routine, repetitive and data-specific activities of manufacturing is greatly reduced. In this paper, an attempt has been made to reduce human intervention in the selection of the optimal cutting tool and process parameters for metal cutting applications using Artificial Intelligence techniques. Generally, the selection of the appropriate cutting tool and parameters in metal cutting is carried out by an experienced technician or cutting tool expert based on his knowledge base or an extensive search of a huge cutting tool database. The proposed approach replaces the existing practice of physically searching for tools in databooks/tool catalogues with an intelligent knowledge-based selection system. This system employs artificial-intelligence-based techniques such as artificial neural networks, fuzzy logic and genetic algorithms for decision making and optimization. This intelligence-based optimal tool selection strategy was developed and implemented using Mathworks Matlab Version 7.11.0. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for the selection of an appropriate cutting tool and the optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life and tool cost.

  12. Optimization of automation: I. Estimation method of cognitive automation rates reflecting the effects of automation on human operators in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Seong, Poong Hyun

    2014-01-01

    Highlights: • We propose an estimation method of the automation rate by taking the advantages of automation as the estimation measures. • We conduct the experiments to examine the validity of the suggested method. • The higher the cognitive automation rate is, the greater the decreased rate of the working time will be. • The usefulness of the suggested estimation method is proved by statistical analyses. - Abstract: Since automation was introduced in various industrial fields, the concept of the automation rate has been used to indicate the inclusion proportion of automation among all work processes or facilities. Expressions of the inclusion proportion of automation are predictable, as is the ability to express the degree of the enhancement of human performance. However, many researchers have found that a high automation rate does not guarantee high performance. Therefore, to reflect the effects of automation on human performance, this paper proposes a new estimation method of the automation rate that considers the effects of automation on human operators in nuclear power plants (NPPs). Automation in NPPs can be divided into two types: system automation and cognitive automation. Some general descriptions and characteristics of each type of automation are provided, and the advantages of automation are investigated. The advantages of each type of automation are used as measures of the estimation method of the automation rate. One advantage was found to be a reduction in the number of tasks, and another was a reduction in human cognitive task loads. The system and the cognitive automation rate were proposed as quantitative measures by taking advantage of the aforementioned benefits. To quantify the required human cognitive task loads and thus suggest the cognitive automation rate, Conant’s information-theory-based model was applied. The validity of the suggested method, especially as regards the cognitive automation rate, was proven by conducting

  13. Normalization in Unsupervised Segmentation Parameter Optimization: A Solution Based on Local Regression Trend Analysis

    Directory of Open Access Journals (Sweden)

    Stefanos Georganos

    2018-02-01

    Full Text Available In object-based image analysis (OBIA), the appropriate parametrization of segmentation algorithms is crucial for obtaining satisfactory image classification results. One of the ways this can be done is by unsupervised segmentation parameter optimization (USPO). A popular USPO method does this through the optimization of a “global score” (GS), which minimizes intrasegment heterogeneity and maximizes intersegment heterogeneity. However, the calculated GS values are sensitive to the minimum and maximum ranges of the candidate segmentations. Previous research proposed the use of fixed minimum/maximum threshold values for the intrasegment/intersegment heterogeneity measures to deal with the sensitivity of user-defined ranges, but the performance of this approach has not been investigated in detail. In the context of a very-high-resolution remote sensing urban application, we show the limitations of the fixed threshold approach, both theoretically and in an applied setting, and instead propose a novel solution to identify the range of candidate segmentations using local regression trend analysis. We found that the proposed approach showed significant improvements over the use of fixed minimum/maximum values, is less subjective than user-defined threshold values and, thus, can be of merit for a fully automated procedure and big data applications.
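
    The global-score construction, min-max normalization of an intrasegment and an intersegment heterogeneity measure over the candidate set, can be sketched as below. The per-segmentation scores are made-up numbers; the range sensitivity the paper discusses is exactly the dependence on the candidate minima/maxima visible in the normalization.

```python
import numpy as np

# Hypothetical per-candidate-segmentation measures: intrasegment weighted
# variance (wv, lower is better) and intersegment spatial autocorrelation
# (Moran's I, mi, lower is better).
wv = np.array([0.90, 0.62, 0.45, 0.38, 0.35, 0.34])
mi = np.array([0.05, 0.12, 0.20, 0.31, 0.45, 0.62])

def global_score(wv, mi):
    """Min-max normalize each criterion over the candidate set and sum;
    the best segmentation maximizes the combined normalized score. Note
    that the result depends on the min/max of the candidate range used
    for normalization, which is the sensitivity the paper addresses."""
    f_wv = (wv.max() - wv) / (wv.max() - wv.min())  # low variance -> high score
    f_mi = (mi.max() - mi) / (mi.max() - mi.min())  # low Moran's I -> high score
    return f_wv + f_mi

gs = global_score(wv, mi)
best = gs.argmax()
print(best, gs.round(3))
```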

  14. Electrical defibrillation optimization: An automated, iterative parallel finite-element approach

    Energy Technology Data Exchange (ETDEWEB)

    Hutchinson, S.A.; Shadid, J.N. [Sandia National Lab., Albuquerque, NM (United States); Ng, K.T. [New Mexico State Univ., Las Cruces, NM (United States); Nadeem, A. [Univ. of Pittsburgh, PA (United States)

    1997-04-01

    To date, optimization of electrode systems for electrical defibrillation has been limited to hand-selected electrode configurations. In this paper we present an automated approach which combines detailed, three-dimensional (3-D) finite element torso models with optimization techniques to provide a flexible analysis and design tool for electrical defibrillation optimization. Specifically, a parallel direct search (PDS) optimization technique is used with a representative objective function to find an electrode configuration which corresponds to the satisfaction of a postulated defibrillation criterion with a minimum amount of power and a low possibility of myocardium damage. For adequate representation of the thoracic inhomogeneities, 3-D finite-element torso models are used in the objective function computations. The CPU-intensive finite-element calculations required for the objective function evaluation have been implemented on a message-passing parallel computer in order to complete the optimization calculations in a timely manner. To illustrate the optimization procedure, it has been applied to a representative electrode configuration for transmyocardial defibrillation, namely the subcutaneous patch-right ventricular catheter (SP-RVC) system. Sensitivity of the optimal solutions to various tissue conductivities has been studied. 39 refs., 9 figs., 2 tabs.

  15. A choice of the parameters of NPP steam generators on the basis of vector optimization

    International Nuclear Information System (INIS)

    Lemeshev, V.U.; Metreveli, D.G.

    1981-01-01

    The problem of optimizing the parameters of designed systems is considered as a multicriterion optimization problem. It is proposed to choose non-dominated (Pareto-optimal) parameters. An algorithm for finding non-dominated solutions is built on the basis of necessary and sufficient non-dominance conditions. This algorithm has been employed to solve the problem of choosing optimal parameters for a counterflow shell-and-tube steam generator of an NPP of the BRGD type [ru
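
    Selecting non-dominated (Pareto-optimal) parameter sets can be illustrated with a small filter; the two objectives and design points below are hypothetical:

```python
import numpy as np

def pareto_front(points):
    """Return the indices of non-dominated (Pareto-optimal) points,
    assuming every objective is to be minimized. A point is dominated if
    some other point is no worse in all objectives and strictly better
    in at least one."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Toy trade-off: steam generator cost vs. a heat-transfer-surface metric.
designs = [(4.0, 1.0), (3.0, 2.0), (5.0, 2.5), (2.0, 4.0), (3.5, 1.5)]
print(pareto_front(designs))  # -> [0, 1, 3, 4]
```

    Design (5.0, 2.5) is dropped because (3.0, 2.0) is better in both objectives; all remaining designs trade one objective against the other.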

  16. Network synthesis and parameter optimization for vehicle suspension with inerter

    Directory of Open Access Journals (Sweden)

    Long Chen

    2016-12-01

    Full Text Available In order to design a comfort-oriented vehicle suspension structure, the network synthesis method was utilized to transform the problem into a timing robust control problem and determine the structure of an “inerter–spring–damper” suspension. A Bilinear Matrix Inequality was utilized to obtain the timing transfer function. The transfer function of the suspension system can then be physically implemented by passive elements such as a spring, a damper, and an inerter. By analyzing the sensitivity and applying a quantum genetic algorithm, the optimized parameters of the inerter–spring–damper suspension were determined. A quarter-car model was established, and the performance of the inerter–spring–damper suspension was verified under random input. The simulation results showed that the dynamic performance of the proposed suspension was enhanced in contrast with a traditional suspension: the root mean square of vehicle body acceleration decreased by 18.9%. The inerter–spring–damper suspension can effectively inhibit vertical vibration within the frequency range of 1–3 Hz and significantly enhance ride comfort.

  17. Optimizing gelling parameters of gellan gum for fibrocartilage tissue engineering.

    Science.gov (United States)

    Lee, Haeyeon; Fisher, Stephanie; Kallos, Michael S; Hunter, Christopher J

    2011-08-01

    Gellan gum is an attractive biomaterial for fibrocartilage tissue engineering applications because it is cell compatible, can be injected into a defect, and gels at body temperature. However, the gelling parameters of gellan gum have not yet been fully optimized. The aim of this study was to investigate the mechanics, degradation, gelling temperature, and viscosity of low acyl and low/high acyl gellan gum blends. Dynamic mechanical analysis showed that increased concentrations of low acyl gellan gum resulted in increased stiffness and the addition of high acyl gellan gum resulted in greatly decreased stiffness. Degradation studies showed that low acyl gellan gum was more stable than low/high acyl gellan gum blends. Gelling temperature studies showed that increased concentrations of low acyl gellan gum and CaCl₂ increased gelling temperature and low acyl gellan gum concentrations below 2% (w/v) would be most suitable for cell encapsulation. Gellan gum blends were generally found to have a higher gelling temperature than low acyl gellan gum. Viscosity studies showed that increased concentrations of low acyl gellan gum increased viscosity. Our results suggest that 2% (w/v) low acyl gellan gum would have the most appropriate mechanics, degradation, and gelling temperature for use in fibrocartilage tissue engineering applications. Copyright © 2011 Wiley Periodicals, Inc.

  18. Automated egg grading system using computer vision: Investigation on weight measure versus shape parameters

    Science.gov (United States)

    Nasir, Ahmad Fakhri Ab; Suhaila Sabarudin, Siti; Majeed, Anwar P. P. Abdul; Ghani, Ahmad Shahrizan Abdul

    2018-04-01

    Chicken eggs are a food in high demand. Human operators cannot work perfectly and continuously when grading eggs. Instead of an egg grading system using weight measurement, an automatic egg grading system using computer vision (based on egg shape parameters) can be used to improve the productivity of egg grading. However, an early hypothesis indicated that a number of egg class assignments would change when using egg shape parameters compared with using weight measurement. This paper presents a comparison of egg classification by the two above-mentioned methods. Firstly, 120 images of chicken eggs of various grades (A–D) produced in Malaysia were captured. Then, the egg images were processed using image pre-processing techniques such as image cropping, smoothing and segmentation. Thereafter, eight egg shape features, including area, major axis length, minor axis length, volume, diameter and perimeter, were extracted. Lastly, feature selection (information gain ratio) and feature extraction (principal component analysis) were performed with a k-nearest neighbour classifier in the classification process. Two methods, namely supervised learning (using the weight measure as graded by the egg supplier) and unsupervised learning (using egg shape parameters as graded by ourselves), were used in the experiment. Clustering results reveal many changes in egg classes after performing shape-based grading. On average, the best recognition result using shape-based grading labels is 94.16%, while that using weight-based labels is 44.17%. In conclusion, an automated egg grading system using computer vision is better implemented with shape-based features, since it works from images, whereas the weight parameter is more suitable for a weight-based grading system.
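
    The shape-feature classification step can be sketched with a plain k-nearest-neighbour vote on synthetic data. The feature values below are invented stand-ins for the measured egg shape features, and the information-gain feature selection and PCA steps of the paper are omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the egg data: two shape features (major/minor
# axis length, mm) per grade; the real feature set also includes area,
# volume, diameter and perimeter.
def make_grade(center, n=30):
    return center + rng.normal(0, 0.4, (n, 2))

X = np.vstack([make_grade(c) for c in [(60, 45), (57, 43), (54, 41), (51, 39)]])
y = np.repeat(np.arange(4), 30)            # grades A-D encoded as 0-3

def knn_predict(X_train, y_train, x, k=5):
    """Plain k-nearest-neighbour majority vote."""
    d = np.linalg.norm(X_train - x, axis=1)
    votes = y_train[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

# Leave-one-out accuracy as a quick sanity check.
correct = sum(knn_predict(np.delete(X, i, 0), np.delete(y, i), X[i]) == y[i]
              for i in range(len(X)))
print(correct / len(X))
```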

  19. Optimal Design of Variable Stiffness Composite Structures using Lamination Parameters

    NARCIS (Netherlands)

    IJsselmuiden, S.T.

    2011-01-01

    Fiber reinforced composite materials have gained widespread acceptance for a multitude of applications in the aerospace, automotive, maritime and wind-energy industries. Automated fiber placement technologies have developed rapidly over the past two decades, driven primarily by a need to reduce

  20. AUTOMATION OF OPTIMAL IDENTIFICATION OF DYNAMIC ELEMENT TRANSFER FUNCTIONS IN COMPLEX TECHNICAL OBJECTS BASED ON ACCELERATION CURVES

    Directory of Open Access Journals (Sweden)

    A. Yu. Alikov

    2017-01-01

    Full Text Available Objectives. The aim of the present paper is to minimise the errors in the approximation of experimentally obtained acceleration curves. Methods. Based on the features and disadvantages of the well-known Simoyu method for calculating transfer functions from acceleration curves, a modified version of the method is developed using the MathLab and MathCad software. It is based on minimising the sum of the squares of the deviations of the experimental points from the solution of the differential equation at the same points. Results. Methods for implementing parametric identification are analysed and the Simoyu method is chosen as the most effective. On the basis of an analysis of its advantages and disadvantages, a modified method is proposed that allows the structure and parameters of the transfer function to be identified from the experimental acceleration curve, with the numerical values of those parameters chosen so as to minimise the errors in approximating the experimentally obtained acceleration curves. Conclusion. The problem of optimal control of a complex technical facility was solved. On the basis of the modified Simoyu method, an algorithm was developed for the automated selection of the optimal form and the calculation of the transfer-function parameters of dynamic elements of complex technical objects from acceleration curves in the impact channels. This increases the efficiency of calculating the dynamic characteristics of control objects by minimising the approximation errors. The efficiency of the proposed calculation method is shown; its simplicity makes it applicable to practical calculations, especially in the design of complex technical objects within a computer-aided design system.
The proposed method makes it possible to increase the accuracy of the approximation by at least 20%, which is an important advantage for its practical
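
    The core of the modified criterion, minimising the sum of squared deviations between the experimental points and the model response, can be sketched as follows (a first-order transfer function and synthetic data are assumed for illustration; the actual method also selects the transfer-function structure):

```python
import math

def step_response(K, T, t):
    # Unit-step response of a first-order transfer function K / (T*s + 1)
    return K * (1.0 - math.exp(-t / T))

def sum_sq_dev(K, T, data):
    # Sum of squared deviations of the model from the experimental points
    return sum((step_response(K, T, t) - y) ** 2 for t, y in data)

def fit_first_order(data, K_range, T_range, steps=200):
    # Coarse grid search minimising the squared-deviation criterion
    best = None
    for i in range(steps + 1):
        K = K_range[0] + (K_range[1] - K_range[0]) * i / steps
        for j in range(steps + 1):
            T = T_range[0] + (T_range[1] - T_range[0]) * j / steps
            err = sum_sq_dev(K, T, data)
            if best is None or err < best[0]:
                best = (err, K, T)
    return best[1], best[2]

# Synthetic "acceleration curve" sampled from a known plant (K=2, T=3)
data = [(0.5 * k, step_response(2.0, 3.0, 0.5 * k)) for k in range(1, 21)]
K_hat, T_hat = fit_first_order(data, (0.5, 4.0), (0.5, 6.0))
```

    A production implementation would replace the grid search with a gradient-based least-squares solver, as the paper does.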

  1. Mixed-integer evolution strategies for parameter optimization and their applications to medical image analysis

    NARCIS (Netherlands)

    Li, Rui

    2009-01-01

    The target of this work is to extend the canonical Evolution Strategies (ES) from traditional real-valued parameter optimization domain to mixed-integer parameter optimization domain. This is necessary because there exist numerous practical optimization problems from industry in which the set of

  2. High-resolution manometry: reliability of automated analysis of upper esophageal sphincter relaxation parameters.

    Science.gov (United States)

    Lee, Tae Hee; Lee, Joon Seong; Hong, Su Jin; Lee, Ji Sung; Jeon, Seong Ran; Kim, Wan Jung; Kim, Hyun Gun; Cho, Joo Young; Kim, Jin Oh; Cho, Jun Hyung; Park, Won Young; Park, Ji Woong; Lee, Yang Gyun

    2014-10-01

    At present, automated analysis of high-resolution manometry (HRM) provides details of upper esophageal sphincter (UES) relaxation parameters. The aim of this study was to assess the accuracy of automated analysis of UES relaxation parameters. One hundred and fifty-three subjects (78 males; mean age 68.6 years, range 26-97) underwent HRM. UES relaxation parameters were interpreted twice, once visually (V) by two experts and once automatically (AS) using the ManoView ESO analysis software. Agreement between the two analysis methods was assessed using Bland-Altman plots and Lin's concordance correlation coefficient (CCC). The agreement between V and AS analyses of basal UES pressure (CCC 0.996; 95% confidence interval (CI) 0.994-0.997) and residual UES pressure (CCC 0.918; 95% CI 0.895-0.936) was good to excellent. Agreement for time to UES relaxation nadir (CCC 0.208; 95% CI 0.068-0.339) and UES relaxation duration (CCC 0.286; 95% CI 0.148-0.413) between V and AS analyses was poor. There was moderate agreement for recovery time of UES relaxation (CCC 0.522; 95% CI 0.397-0.627) and peak pharyngeal pressure (CCC 0.695; 95% CI 0.605-0.767) between V and AS analyses. AS analysis was unreliable, especially regarding the time variables of UES relaxation. Given the resulting difference in the clinical interpretation of pharyngoesophageal dysfunction between V and AS analysis, the use of visual analysis is justified.
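
    Lin's concordance correlation coefficient used above can be computed directly from paired visual and automated readings; a minimal sketch with made-up numbers (not the study's data):

```python
def lins_ccc(x, y):
    # Lin's concordance correlation coefficient between two raters/methods
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) / n        # biased variances, per Lin (1989)
    sy = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

# Perfect agreement gives CCC = 1; a constant offset lowers it
visual    = [10.0, 12.0, 15.0, 20.0]
automated = [10.0, 12.0, 15.0, 20.0]
```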

  3. Practical guide for selection of 1H qNMR acquisition and processing parameters confirmed by automated spectra evaluation.

    Science.gov (United States)

    Monakhova, Yulia B; Diehl, Bernd W K

    2017-11-01

    In our recent paper, a new technique for automated spectra integration and quality control of the acquired results in qNMR was developed and validated (Monakhova & Diehl, Magn. Reson. Chem. 2017, doi: 10.1002/mrc.4591). The present study is focused on the influence of acquisition and postacquisition parameters on the developed automated routine in particular, and on the quantitative NMR (qNMR) results in general, which has not been undertaken previously in a systematic and automated manner. Results are presented for a number of model mixtures and authentic pharmaceutical products measured on 500- and 600-MHz NMR spectrometers. The influence of the most important acquisition (spectral width, transmitter [frequency] offset, number of scans, and time domain) and processing (size of real spectrum, deconvolution, Gaussian window multiplication, and line broadening) parameters for qNMR was automatically investigated. Moderate modification of the majority of the investigated parameters from default instrument settings within the evaluated ranges does not significantly affect the trueness and precision of the qNMR. Lite Gaussian window multiplication resulted in accuracy improvement of the qNMR output and is recommended for routine measurements. In general, given that the acquisition and processing parameters were selected based on the presented guidelines, automated qNMR analysis can be employed for reproducible high-precision concentration measurements in practice. Copyright © 2017 John Wiley & Sons, Ltd.

  4. Automated measurements of metabolic tumor volume and metabolic parameters in lung PET/CT imaging

    Science.gov (United States)

    Orologas, F.; Saitis, P.; Kallergi, M.

    2017-11-01

    Patients with lung tumors or inflammatory lung disease could greatly benefit, in terms of treatment and follow-up, from quantitative PET/CT imaging, namely measurements of metabolic tumor volume (MTV), standardized uptake values (SUVs) and total lesion glycolysis (TLG). The purpose of this study was the development of an unsupervised or partially supervised algorithm using standard image processing tools for measuring MTV, SUV, and TLG from lung PET/CT scans. Automated metabolic lesion volume and metabolic parameter measurements were achieved through a five-step algorithm: (i) segmentation of the lung areas on the CT slices, (ii) registration of the CT-segmented lung regions on the PET images to define the anatomical boundaries of the lungs on the functional data, (iii) segmentation of the regions of interest (ROIs) on the PET images based on adaptive thresholding and clinical criteria, (iv) estimation of the number of pixels and pixel intensities in the PET slices of the segmented ROIs, and (v) estimation of MTV, SUVs, and TLG from the previous step and DICOM header data. Whole-body PET/CT scans of patients with sarcoidosis were used for training and testing the algorithm. Lung area segmentation on the CT slices was better achieved with semi-supervised techniques, which reduced false positive detections significantly. Lung segmentation results agreed with the lung volumes published in the literature, while the agreement between experts and algorithm in the segmentation of the lesions was around 88%. Segmentation results depended on the image resolution selected for processing. The clinical parameters SUV (mean, max or peak) and TLG estimated from the segmented ROIs and DICOM header data provided a way to correlate imaging data with clinical and demographic data. In conclusion, automated MTV, SUV, and TLG measurements offer powerful analysis tools in PET/CT imaging of the lungs. Custom-made algorithms are often a better approach than the manufacturer
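
    Step (v) above reduces to simple arithmetic once the ROI voxels are segmented; a sketch with hypothetical SUV values and voxel volume:

```python
def metabolic_metrics(suv_voxels, voxel_volume_ml, threshold):
    # Keep voxels above an adaptive SUV threshold (step iii), then derive
    # MTV, SUVmean, SUVmax and TLG from the surviving voxels (steps iv-v)
    roi = [v for v in suv_voxels if v >= threshold]
    mtv = len(roi) * voxel_volume_ml              # metabolic tumor volume (ml)
    suv_mean = sum(roi) / len(roi) if roi else 0.0
    suv_max = max(roi) if roi else 0.0
    tlg = suv_mean * mtv                          # total lesion glycolysis
    return {"MTV": mtv, "SUVmean": suv_mean, "SUVmax": suv_max, "TLG": tlg}
```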

  5. Fuel efficiency optimization of tanker with focus on hull parameters

    Directory of Open Access Journals (Sweden)

    Pedram Edalat

    2017-06-01

    Fuel efficiency optimization is of crucial importance in industry, and the marine transportation industry is no exception. Multi-disciplinary optimization is a branch of engineering which uses optimization methods to solve problems in which the objective function is simultaneously affected by several different factors. The genetic algorithm is a well-established and effective tool for this type of optimization. The objective of the present study is to optimize fuel efficiency in tankers; all presented equations and conditions are valid for tankers. Fuel consumption efficiency of tankers is a function of various influential factors. Given the lack of equations for describing and modeling these factors, the unavailability of a valid performance database for inferring such equations, and the lack of literature in this field, the present study optimizes five factors affecting the fuel consumption efficiency of a tanker using the genetic algorithm toolbox of the MATLAB software package.

  6. WE-AB-209-09: Optimization of Rotational Arc Station Parameter Optimized Radiation Therapy

    International Nuclear Information System (INIS)

    Dong, P; Xing, L; Ungun, B; Boyd, S

    2016-01-01

    Purpose: To develop a fast optimization method for station parameter optimized radiation therapy (SPORT) and to show that SPORT is capable of improving on VMAT in both plan quality and delivery efficiency. Methods: The angular space from 0° to 360° was divided into 180 station points (SPs). A candidate aperture was assigned to each of the SPs based on the results of a column generation algorithm. The weights of the apertures were then obtained by optimizing the objective function using a state-of-the-art GPU-based Proximal Operator Graph Solver (POGS) within seconds. Apertures with zero or low weight were discarded. To avoid being trapped in a local minimum, a stochastic gradient descent method was employed, which also greatly increased the convergence rate of the objective function. The above procedure was repeated until the plan could not be improved any further. A weighting factor associated with the total plan MU also indirectly controlled the complexity of the aperture shapes. The number of apertures for VMAT and SPORT was confined to 180. SPORT allowed the coexistence of multiple apertures in a single SP. The optimization technique was assessed using three clinical cases (prostate, H&N and brain). Results: Marked dosimetric quality improvement was demonstrated in the SPORT plans for all three studied cases. Prostate case: the volume of the 50% prescription dose was decreased by 22% for the rectum. H&N case: SPORT improved the mean dose for the left and right parotids by 15% each. Brain case: the doses to the eyes, chiasm and inner ears were all improved. SPORT shortened the treatment time by ∼1 min for the prostate case, ∼0.5 min for the brain case, and ∼0.2 min for the H&N case. Conclusion: The superior dosimetric quality and delivery efficiency presented here indicate that SPORT is an intriguing alternative treatment modality.
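
    The aperture-weighting step can be illustrated with a toy projected-gradient solve (made-up dose matrices; the actual work uses the GPU-based POGS solver and a column-generation step to propose apertures):

```python
def optimize_weights(D, target, iters=500, lr=0.01):
    # D[a][v]: dose to voxel v from aperture a at unit weight (toy numbers).
    # Projected gradient descent on || sum_a w_a * D_a - target ||^2 with w >= 0;
    # apertures whose weight falls to zero are effectively thrown out.
    n_ap, n_vox = len(D), len(target)
    w = [1.0] * n_ap
    for _ in range(iters):
        dose = [sum(w[a] * D[a][v] for a in range(n_ap)) for v in range(n_vox)]
        resid = [dose[v] - target[v] for v in range(n_vox)]
        for a in range(n_ap):
            grad = 2.0 * sum(resid[v] * D[a][v] for v in range(n_vox))
            w[a] = max(0.0, w[a] - lr * grad)     # projection onto w >= 0
    return w

D = [[1.0, 0.0, 0.2],
     [0.0, 1.0, 0.2],
     [0.5, 0.5, 1.0]]
target = [2.0, 2.0, 1.0]
w = optimize_weights(D, target)
dose = [sum(w[a] * D[a][v] for a in range(len(D))) for v in range(len(target))]
```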

  8. Automated Discovery of Elementary Chemical Reaction Steps Using Freezing String and Berny Optimization Methods.

    Science.gov (United States)

    Suleimanov, Yury V; Green, William H

    2015-09-08

    We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps by using double- and single-ended transition-state optimization algorithms, the freezing string and Berny optimization methods respectively, in cooperation. To demonstrate the utility of the proposed approach, the reactivity of several single-molecule systems of importance in combustion and atmospheric chemistry is investigated. The proposed algorithm allowed us to detect, without any human intervention, not only "known" reaction pathways manually identified in previous studies, but also new, previously "unknown" reaction pathways which involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes.

  9. Optimization of Laser Beam Transformation Hardening by One Single Parameter

    NARCIS (Netherlands)

    Meijer, J.; van Sprang, I.

    1991-01-01

    The process of laser beam transformation hardening is principally controlled by two independent parameters, the absorbed laser power on a given area and the interaction time. These parameters can be transformed into two functional parameters: the maximum surface temperature and the hardening depth.

  10. Fractal parameters and well-logs investigation using automated well-to-well correlation

    Science.gov (United States)

    Partovi, Seyyed Mohammad Amin; Sadeghnejad, Saeid

    2017-06-01

    The aim of well-to-well correlation is to detect similar geological boundaries in two or more wells across a formation, a task that is usually done manually. Constructing such a correlation by hand for a field with several wells is complex and time-consuming. The aim of this study is to speed up the well-to-well correlation process by providing an automated approach. The input data for our algorithm are the depths of all geological boundaries in a reference well. The algorithm automatically searches for similar depths associated with those geological boundaries in other wells (i.e., observation wells). The fractal parameters of well-logs, such as the wavelet exponent (Hw), wavelet standard deviation exponent (Hws), and Hausdorff dimension (Ha), which are calculated by wavelet transform, are used as pattern recognition dimensions during the well-to-well correlation. Finding the fractal dimensions that bring the automatic well-to-well correlation closest to the geological depths obtained by manual interpretation is one of the prime aims of this research. To validate the proposed technique, it is implemented on well-log data from one of the Iranian onshore oil fields. Moreover, the capability of gamma ray, density, and sonic logs in automatic detection of geological boundaries by this novel approach is also analyzed in detail. The outcome of this approach shows promising results.

  11. Automated valve fault detection based on acoustic emission parameters and support vector machine

    Directory of Open Access Journals (Sweden)

    Salah M. Ali

    2018-03-01

    Reciprocating compressors are one of the most widely used types of compressors, with broad applications in industry. The most common failure in reciprocating compressors is related to the valves. Therefore, a reliable condition monitoring method is required to avoid unplanned shutdowns in this category of machines. The acoustic emission (AE) technique is one of the most effective recent methods in the field of valve condition monitoring. However, a major challenge is that the analysis of the AE signal often depends solely on the experience and knowledge of technicians. This paper proposes an automated fault detection method using a support vector machine (SVM) and AE parameters in an attempt to reduce human intervention in the process. Experiments were conducted on a single-stage reciprocating air compressor by combining healthy and faulty valve conditions to acquire the AE signals. Valve functioning was identified through AE waveform analysis. An SVM fault detection model was subsequently devised and validated based on training and testing samples, respectively. The results demonstrated an automatic valve fault detection model with an accuracy exceeding 98%. It is believed that valve faults can be detected efficiently without human intervention by employing the proposed model for a single-stage reciprocating compressor. Keywords: Condition monitoring, Fault detection, Signal analysis, Acoustic emission, Support vector machine
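
    The classification stage can be sketched with a simple linear classifier trained on AE features; for brevity a plain perceptron stands in for the SVM, and the feature values (RMS amplitude, hit count rate) are invented for illustration:

```python
def train_perceptron(X, y, max_epochs=100):
    # Simplified linear classifier standing in for the SVM;
    # labels are -1 (healthy valve) / +1 (faulty valve)
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(max_epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) <= 0:
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
                b += yi
                updated = True
        if not updated:            # converged: all samples correctly classified
            break
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical AE features per signal: (RMS amplitude, hit count rate)
X = [(0.20, 0.10), (0.30, 0.20), (0.25, 0.15),   # healthy valve
     (0.90, 0.80), (1.00, 0.90), (0.85, 0.95)]   # faulty valve
y = [-1, -1, -1, 1, 1, 1]
w, b = train_perceptron(X, y)
```

    A real deployment would use a proper SVM solver with kernel support, as in the paper.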

  12. Global optimal hybrid geometric active contour for automated lung segmentation on CT images.

    Science.gov (United States)

    Zhang, Weihang; Wang, Xue; Zhang, Pengbo; Chen, Junfeng

    2017-12-01

    Lung segmentation on thoracic CT images plays an important role in early detection, diagnosis and 3D visualization of lung cancer. The segmentation accuracy, stability, and efficiency of serial CT scans have a significant impact on the performance of computer-aided detection. This paper proposes a global optimal hybrid geometric active contour model for automated lung segmentation on CT images. Firstly, the combination of global region and edge information leads to high segmentation accuracy in lung regions with weak boundaries or narrow bands. Secondly, due to the global optimality of energy functional, the proposed model is robust to the initial position of level set function and requires fewer iterations. Thus, the stability and efficiency of lung segmentation on serial CT slices can be greatly improved by taking advantage of the information between adjacent slices. In addition, to achieve the whole process of automated segmentation for lung cancer, two assistant algorithms based on prior shape and anatomical knowledge are proposed. The algorithms not only automatically separate the left and right lungs, but also include juxta-pleural tumors into the segmentation result. The proposed method was quantitatively validated on subjects from the publicly available LIDC-IDRI and our own data sets. Exhaustive experimental results demonstrate the superiority and competency of our method, especially compared with the typical edge-based geometric active contour model. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. A Comparative Experimental Study on the Use of Machine Learning Approaches for Automated Valve Monitoring Based on Acoustic Emission Parameters

    Science.gov (United States)

    Ali, Salah M.; Hui, K. H.; Hee, L. M.; Salman Leong, M.; Al-Obaidi, M. A.; Ali, Y. H.; Abdelrhman, Ahmed M.

    2018-03-01

    Acoustic emission (AE) analysis has become a vital tool for initiating maintenance tasks in many industries. However, the analysis and interpretation process has been found to be highly dependent on experts. Therefore, an automated monitoring method is required to reduce the cost and time consumed in the interpretation of AE signals. This paper investigates the application of two of the most common machine learning approaches, namely the artificial neural network (ANN) and the support vector machine (SVM), to automate the diagnosis of valve faults in reciprocating compressors based on AE signal parameters. Since accuracy is an essential factor in any automated diagnostic system, this paper also provides a comparative study based on the predictive performance of ANN and SVM. AE parameter data were acquired from a single-stage reciprocating air compressor with different operational and valve conditions. ANN and SVM diagnosis models were subsequently devised by combining AE parameters of different conditions. Results demonstrate that the ANN and SVM models achieve the same prediction accuracy. However, the SVM model is recommended for automated diagnosis of the valve condition due to its ability to handle a large number of input features with small training data sets.

  14. Parameter meta-optimization of metaheuristics of solving specific NP-hard facility location problem

    Science.gov (United States)

    Skakov, E. S.; Malysh, V. N.

    2018-03-01

    The aim of the work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics for solving the NP-hard facility location problem. A system analysis of the process of tuning optimization algorithm parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem. An evolutionary metaheuristic has been chosen to perform the task of meta-optimization; thus, the approach proposed in this work can be called a “meta-metaheuristic”. A computational experiment proving the effectiveness of the procedure for tuning the control parameters of metaheuristics has been performed.
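
    The meta-optimization idea, an outer search over the control parameters of an inner metaheuristic, can be sketched as follows (a toy random local search on a sphere function stands in for the facility-location metaheuristic, and the parameter ranges are invented):

```python
import random

def inner_search(step, iters, rnd):
    # Inner metaheuristic (toy): random local search on a 2-D sphere function
    x = [rnd.uniform(-5.0, 5.0), rnd.uniform(-5.0, 5.0)]
    fx = x[0] ** 2 + x[1] ** 2
    for _ in range(iters):
        cand = [xi + rnd.gauss(0.0, step) for xi in x]
        fc = cand[0] ** 2 + cand[1] ** 2
        if fc < fx:
            x, fx = cand, fc
    return fx

def meta_optimize(trials=30, seed=1):
    # Outer loop: search over the metaheuristic's own control parameter (step size)
    rnd = random.Random(seed)
    best = None
    for _ in range(trials):
        step = rnd.uniform(0.01, 2.0)
        # average a few runs to smooth out stochastic noise
        score = sum(inner_search(step, 200, rnd) for _ in range(5)) / 5
        if best is None or score < best[0]:
            best = (score, step)
    return best

best_score, best_step = meta_optimize()
```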

  15. Characterization and optimized control by means of multi-parameter controllers

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Carsten; Hoeg, S.; Thoegersen, A. (Dan-Ejendomme, Hellerup (Denmark)) (and others)

    2009-07-01

    Poorly functioning HVAC systems (Heating, Ventilation and Air Conditioning), as well as separate heating, ventilation and air conditioning systems, are costing the Danish society billions of kroner every year: partly because of increased energy consumption and high operational and maintenance costs, but mainly due to reduced productivity and absence due to illness caused by a poor indoor climate. Today, the operation of buildings and installations typically relies on traditional building automation, which is characterised by 1) being based on static considerations, 2) the individual sensor being coupled with one actuator/valve, i.e. the sensor's signal is only used in one place in the system, 3) subsystems often being controlled independently of each other, and 4) the dynamics in building constructions and systems, which is very important to the system and comfort regulation, not being considered. This, coupled with the widespread tendency to use large glass areas in the facades without sufficient sun shading, means that it is difficult to optimise comfort and energy consumption. As a result, the last 10-20 years have seen a steady increase in complaints about the indoor climate in Danish buildings and, at the same time, new buildings often turn out to consume considerably more energy than expected. The purpose of the present project is to investigate what types of multi-parameter sensors may be generated for buildings, and further to carry out a preliminary evaluation of how such multi-parameter controllers may be utilised for optimal control of buildings. The aim of the project is not to develop multi-parameter controllers, which would require much more effort than possible in the present project, but to show the potential of using multi-parameter sensors when controlling buildings. For this purpose a larger office building has been chosen: an office building with a high energy demand and complaints regarding the indoor climate. In order to

  16. Generic Protocol for Optimization of Heterologous Protein Production Using Automated Microbioreactor Technology.

    Science.gov (United States)

    Hemmerich, Johannes; Freier, Lars; Wiechert, Wolfgang; von Lieres, Eric; Oldiges, Marco

    2017-12-15

    A core business in industrial biotechnology using microbial production cell factories is the iterative process of strain engineering and optimization of bioprocess conditions. One important aspect is the improvement of the cultivation medium to provide an optimal environment for microbial formation of the product of interest. It is well accepted that the media composition can dramatically influence overall bioprocess performance. Nutrition medium optimization is known to improve recombinant protein production with microbial systems and thus, this is a rewarding step in bioprocess development. However, very often standard media recipes are taken from the literature, since tailor-made design of the cultivation medium is a tedious task that demands microbioreactor technology for sufficient cultivation throughput, fast product analytics, as well as support by lab robotics to enable reliability in liquid handling steps. Furthermore, advanced mathematical methods are required for rationally analyzing measurement data and efficiently designing parallel experiments so as to achieve optimal information content. The generic nature of the presented protocol allows for easy adaptation to different lab equipment, other expression hosts, and target proteins of interest, as well as further bioprocess parameters. Moreover, other optimization objectives like protein production rate, specific yield, or product quality can be chosen to fit the scope of other optimization studies. The applied Kriging Toolbox (KriKit) is a general tool for Design of Experiments (DOE) that contributes to improved holistic bioprocess optimization. It also supports multi-objective optimization, which can be important in optimizing both upstream and downstream processes.
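
    The experiment-design step can be illustrated by generating a simple full-factorial screening design over medium components (hypothetical factors and levels; KriKit itself supports more sophisticated Kriging-based designs):

```python
from itertools import product

def full_factorial(factors):
    # Each factor maps name -> list of levels; returns every combination
    # as one experimental condition (a dict), suitable for parallel runs
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*(factors[n] for n in names))]

# Hypothetical medium components and levels for a screening design
design = full_factorial({
    "glucose_g_per_L": [5, 10, 20],
    "nitrogen_g_per_L": [1, 2],
    "pH": [6.5, 7.0],
})
```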

  18. SpaceScanner: COPASI wrapper for automated management of global stochastic optimization experiments.

    Science.gov (United States)

    Elsts, Atis; Pentjuss, Agris; Stalidzans, Egils

    2017-09-15

    Due to their universal applicability, global stochastic optimization methods are popular for designing improvements of biochemical networks. The drawbacks of global stochastic optimization methods are: (i) no guarantee of finding global optima, (ii) no clear optimization run termination criteria and (iii) no criteria to detect stagnation of an optimization run. The impact of these drawbacks can be partly compensated by manual work that becomes inefficient when the solution space is large due to combinatorial explosion of adjustable parameters or for other reasons. SpaceScanner uses parallel optimization runs for automatic termination of optimization tasks in case of consensus and consecutively applies a pre-defined set of global stochastic optimization methods in case of stagnation in the currently used method. Automatic scan of adjustable parameter combination subsets for best objective function values is possible with a summary file of ranked solutions. https://github.com/atiselsts/spacescanner . egils.stalidzans@lu.lv. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
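
    The consensus-termination criterion can be sketched as follows (a toy random-search optimizer and a hypothetical tolerance stand in for COPASI's global stochastic methods):

```python
import random

def run_once(seed, iters=500):
    # One stochastic optimization run: random search on a 1-D test function
    rnd = random.Random(seed)
    best = float("inf")
    for _ in range(iters):
        x = rnd.uniform(-10.0, 10.0)
        best = min(best, (x - 3.0) ** 2 + 1.0)
    return best

def consensus_reached(n_runs=5, tol=0.5):
    # Terminate automatically when parallel runs agree on the best
    # objective value within a tolerance, as SpaceScanner does
    results = [run_once(seed) for seed in range(n_runs)]
    return max(results) - min(results) < tol, results

agree, results = consensus_reached()
```

    In case of disagreement (stagnation), SpaceScanner would instead switch to the next method in its pre-defined set.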

  19. Computer controlled automated assay for comprehensive studies of enzyme kinetic parameters.

    Directory of Open Access Journals (Sweden)

    Felix Bonowski

    Stability and biological activity of proteins are highly dependent on their physicochemical environment. The development of realistic models of biological systems necessitates quantitative information on the response to changes of external conditions such as pH, salinity, and concentrations of substrates and allosteric modulators. Changes in just a few variable parameters rapidly lead to large numbers of experimental conditions, which go beyond the experimental capacity of most research groups. We implemented a computer-aided experimentation framework ("robot lab assistant") that allows us to parameterize abstract, human-readable descriptions of micro-plate-based experiments with variable parameters and execute them on a conventional 8-channel liquid handling robot fitted with a sensitive plate reader. A set of newly developed R packages translates the instructions into machine commands, executes them, collects the data and processes it without user interaction. By combining script-driven experimental planning, execution and data analysis, our system can react to experimental outcomes autonomously, allowing outcome-based iterative experimental strategies. The framework was applied in a response-surface-model-based iterative optimization of buffer conditions and an investigation of substrate, allosteric effector, pH and salt dependent activity profiles of pyruvate kinase (PYK). A diprotic model of enzyme kinetics was used to model the combined effects of changing pH and substrate concentrations. The eight parameters of the model could be estimated from a single two-hour experiment using nonlinear least-squares regression. The model with the estimated parameters successfully predicted the pH and PEP dependence of initial reaction rates, while the PEP-concentration-dependent shift of the optimal pH could only be reproduced with a set of manually tweaked parameters.
Differences between model-predictions and experimental observations at low pH suggest additional protonation
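
    The diprotic rate law referred to above can be sketched as a Michaelis-Menten term scaled by a bell-shaped pH factor (illustrative parameter values; the paper fits eight parameters by nonlinear least squares):

```python
def diprotic_rate(S, pH, Vmax=1.0, Km=0.5, pKa1=5.0, pKa2=9.0):
    # Michaelis-Menten rate scaled by a diprotic (bell-shaped) pH factor:
    # only the singly protonated enzyme form is assumed active
    H = 10.0 ** (-pH)
    Ka1, Ka2 = 10.0 ** (-pKa1), 10.0 ** (-pKa2)
    ph_factor = 1.0 / (1.0 + H / Ka1 + Ka2 / H)
    return Vmax * S / (Km + S) * ph_factor
```

    With pKa1 = 5 and pKa2 = 9 the activity peaks near pH 7 and falls off symmetrically toward acidic and basic conditions, reproducing the bell-shaped pH profile the model is meant to capture.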

  20. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    International Nuclear Information System (INIS)

    Qiu, J; Li, H. Harlod; Zhang, T; Yang, D; Ma, F

    2015-01-01

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance the 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip-limiting parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by basic window-level adjustment, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and significantly outperforms the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
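
    The entropy-maximization idea behind the parameter selection can be sketched on a 1-D list of grey values (a simple gamma transform stands in for the CLAHE/high-pass pipeline, and the image is synthetic):

```python
import math

def entropy(pixels, bins=16):
    # Shannon entropy of the grey-level histogram (pixels scaled to [0, 1])
    hist = [0] * bins
    for p in pixels:
        hist[min(int(p * bins), bins - 1)] += 1
    n = len(pixels)
    return -sum((h / n) * math.log2(h / n) for h in hist if h)

def stretch(pixels, gamma):
    # Simple gamma-style contrast transform standing in for CLAHE/high-pass
    return [p ** gamma for p in pixels]

def best_gamma(pixels, candidates):
    # Entropy-driven parameter selection: keep the setting whose output
    # histogram has maximal entropy, mirroring the optimization goal above
    return max(candidates, key=lambda g: entropy(stretch(pixels, g)))

pixels = [0.8 + 0.1 * i / 99 for i in range(100)]   # synthetic low-contrast image
```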

  1. Adaptive neuro-fuzzy estimation of optimal lens system parameters

    Science.gov (United States)

    Petković, Dalibor; Pavlović, Nenad T.; Shamshirband, Shahaboddin; Mat Kiah, Miss Laiha; Badrul Anuar, Nor; Idna Idris, Mohd Yamani

    2014-04-01

Due to the popularization of digital technology, the demand for high-quality digital products has become critical. The quantitative assessment of image quality is an important consideration in any type of imaging system. Therefore, developing a design that combines the requirements of good image quality is desirable. Lens system design represents a crucial factor for good image quality. The optimization procedure is the main part of the lens system design methodology. Lens system optimization is a complex non-linear optimization task, often with intricate physical constraints, for which there are no analytical solutions. Lens system design therefore provides ideal problems for intelligent optimization algorithms. There are many tools which can be used to measure optical performance. One very useful tool is the spot diagram. The spot diagram gives an indication of the image of a point object. In this paper, one optimization criterion for the lens system, the spot size radius, is considered. This paper presents new lens optimization methods based on an adaptive neuro-fuzzy inference strategy (ANFIS). This intelligent estimator is implemented using Matlab/Simulink and its performance is investigated.

  2. Parameter optimization method for the water quality dynamic model based on data-driven theory.

    Science.gov (United States)

    Liang, Shuxiu; Han, Songlin; Sun, Zhaochen

    2015-09-15

Parameter optimization is important for developing a water quality dynamic model. In this study, we applied a data-driven method to select and optimize parameters for a complex three-dimensional water quality model. First, a data-driven model was developed to train the response relationship between phytoplankton and environmental factors based on the measured data. Second, an eight-variable water quality dynamic model was established and coupled to a physical model. Parameter sensitivity was analyzed by changing parameter values individually within an assigned range. The above results served as guidelines for the control parameter selection and the verification of simulated results. Finally, using the data-driven model to approximate the computational water quality model, we employed the Particle Swarm Optimization (PSO) algorithm to optimize the control parameters. The optimization routines and results were analyzed and discussed based on the establishment of the water quality model in Xiangshan Bay (XSB). Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Optimal parameters for the FFA-Beddoes dynamic stall model

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerck, A.; Mert, M. [FFA, The Aeronautical Research Institute of Sweden, Bromma (Sweden); Madsen, H.A. [Risoe National Lab., Roskilde (Denmark)

    1999-03-01

Unsteady aerodynamic effects, like dynamic stall, must be considered in the calculation of dynamic forces for wind turbines. Models incorporated in aero-elastic programs are of semi-empirical nature. Resulting aerodynamic forces therefore depend on the values used for the semi-empirical parameters. In this paper, a study on finding appropriate parameters to use with the Beddoes-Leishman model is discussed. Minimisation of the 'tracking error' between results from 2D wind tunnel tests and simulation with the model is used to find optimum values for the parameters. The resulting optimum parameters show a large variation from case to case. Using these different sets of optimum parameters in the calculation of blade vibrations gives rise to quite different predictions of aerodynamic damping, which is discussed. (au)

  4. Optimization of Key Parameters of Energy Management Strategy for Hybrid Electric Vehicle Using DIRECT Algorithm

    Directory of Open Access Journals (Sweden)

    Jingxian Hao

    2016-11-01

Full Text Available The rule-based logic threshold control strategy has been frequently used in energy management strategies for hybrid electric vehicles (HEVs) owing to its convenience in adjusting parameters, real-time performance, stability, and robustness. However, the logic threshold control parameters cannot usually ensure the best vehicle performance at different driving cycles and conditions. For this reason, the optimization of key parameters is important to improve the fuel economy, dynamic performance, and drivability. In principle, this is a multiparameter nonlinear optimization problem. The logic threshold energy management strategy for an all-wheel-drive HEV is comprehensively analyzed and developed in this study. Seven key parameters to be optimized are extracted. The optimization model of the key parameters is proposed from the perspective of fuel economy. The global optimization method, the DIRECT algorithm, which has good real-time performance, low computational burden, and rapid convergence, is selected to optimize the extracted key parameters globally. The results show that with the optimized parameters, the engine operates more often in the high-efficiency range, resulting in fuel savings of 7% compared with non-optimized parameters. The proposed method can provide guidance for calibrating the parameters of the vehicle energy management strategy from the perspective of fuel economy.
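
As a rough illustration of the approach, the DIRECT solver (available in SciPy 1.8+ as `scipy.optimize.direct`) can be run on a hypothetical two-parameter fuel-consumption surrogate; the paper's actual objective is a full HEV simulation over a driving cycle:

```python
from scipy.optimize import direct

def fuel(x):
    """Toy surrogate for cycle fuel consumption as a function of two
    logic-threshold parameters (e.g. an engine-on SOC threshold and a
    torque-split ratio). The real objective would be a vehicle simulation."""
    soc_on, split = x
    return (soc_on - 0.3) ** 2 + 2.0 * (split - 0.6) ** 2 + 5.0

# DIRECT only needs box bounds on each parameter, no gradients.
res = direct(fuel, bounds=[(0.0, 1.0), (0.0, 1.0)])
print(res.x, res.fun)  # minimizer near (0.3, 0.6)
```

DIRECT systematically subdivides the search box, so it is deterministic and derivative-free, which suits simulation-based objectives like the one in the paper.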

  5. A Particle Swarm Optimization Algorithm for Optimal Operating Parameters of VMI Systems in a Two-Echelon Supply Chain

    Science.gov (United States)

    Sue-Ann, Goh; Ponnambalam, S. G.

This paper focuses on the operational issues of a Two-echelon Single-Vendor-Multiple-Buyers Supply chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. To determine the optimal sales quantity for each buyer in the TSVMBSC, a mathematical model is formulated. From the optimal sales quantity, the optimal sales price can be obtained, which in turn determines the optimal channel profit and the contract price between the vendor and buyer. All these parameters depend upon an understanding of the revenue sharing between the vendor and buyers. A Particle Swarm Optimization (PSO) algorithm is proposed for this problem. Solutions obtained from PSO are compared with the best known results reported in the literature.
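
A minimal global-best PSO, applied to a hypothetical concave channel-profit function of two buyers' sales quantities (the paper's actual model and data are not reproduced here):

```python
import random

def pso_maximize(profit, dim, lo, hi, n_particles=20, iters=100, seed=1):
    """Minimal particle swarm: inertia term plus pulls toward the personal
    best and the global best, with positions clipped to [lo, hi]."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [profit(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = profit(pos[i])
            if v > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v > gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Hypothetical concave profit in the sales quantities of two buyers.
profit = lambda q: -(q[0] - 40) ** 2 - (q[1] - 25) ** 2 + 1000
best, val = pso_maximize(profit, dim=2, lo=0.0, hi=100.0)
```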

  6. Optimization of multilayer neural network parameters for speaker recognition

    Science.gov (United States)

    Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka

    2016-05-01

This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person in a known set of speakers, i.e., to determine whether the voice of an unknown speaker (the wanted person) belongs to a group of reference speakers from the voice database. One requirement was to develop a text-independent system, meaning the wanted person is classified regardless of content and language. A multilayer neural network was used for speaker identification in this research. An artificial neural network (ANN) requires setting parameters such as the activation function of the neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations, and the number of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by these parameter settings, and different tasks require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings. The goal was to find the parameters that give the neural network the highest precision and shortest validation time. The input data of the neural network are Mel-frequency cepstral coefficients (MFCCs), which describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The data were split into training, testing and validation sets of 70%, 15% and 15%. The result of the research described in this article is a parameter setting for the multilayer neural network for four speakers.

  7. An Application of Taguchi Parameter Design in Predicting and Optimizing the Machining Parameters for Face Milling Operation

    Directory of Open Access Journals (Sweden)

    Krushnaraj Bodana

    2016-08-01

Full Text Available The required quality of surface finish is application dependent, and the higher the surface finish, the higher the manufacturing cost. This paper exhibits an application of the Taguchi parameter design approach in selecting the major influencing factors in the face milling of an automobile chassis component, and in optimizing those parameters to achieve the required surface finish and cycle time in a CNC face milling operation. The Taguchi parameter design approach is an efficient experimental strategy through which the different parameters affecting the process can be analyzed. An orthogonal L9 array was utilized and experiments were carried out to optimize the machining parameters based on the signal-to-noise ratio. Finally, validation tests were conducted to verify process capability.
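
The signal-to-noise ratio that drives a Taguchi analysis is a one-line formula; for a smaller-the-better response such as surface roughness it is S/N = -10·log10(mean(y²)). A sketch with hypothetical roughness replicates for two L9 trial settings:

```python
import math

def sn_smaller_is_better(ys):
    """Taguchi signal-to-noise ratio for a smaller-the-better response
    (e.g. surface roughness): S/N = -10 * log10(mean(y^2)).
    A larger S/N value indicates a better (more robust) setting."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

# Hypothetical roughness replicates (um) for two trial settings:
trial_a = [1.2, 1.3, 1.1]
trial_b = [0.8, 0.9, 0.85]
# trial_b has the lower roughness, hence the higher S/N.
```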

  8. Parameters optimization of fabric finishing system of a textile industry using teaching–learning-based optimization algorithm

    Directory of Open Access Journals (Sweden)

    Rajiv Kumar

    2017-07-01

Full Text Available In the present work, a recently developed advanced optimization algorithm named teaching-learning-based optimization (TLBO) is used for the parameter optimization of the fabric finishing system of a textile industry. The fabric finishing system has four main subsystems, arranged in a hybrid configuration. For performance modeling and availability analysis, a performance evaluating model of the fabric finishing system has been developed through mathematical formulation based on a Markov birth-death process using a probabilistic approach. The overall performance of the system was then first analyzed and subsequently optimized using TLBO. The results of the proposed algorithm are validated by comparison with those obtained using the genetic algorithm (GA) on the same system; the proposed algorithm yields improved results. The effect of varying the algorithm parameters on the fitness value of the objective function is also reported.
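
A bare-bones TLBO loop (teacher phase plus learner phase, greedy acceptance) can be sketched as follows; the sphere function stands in for the paper's Markov availability objective:

```python
import random

def tlbo_minimize(f, dim, lo, hi, pop=20, iters=150, seed=3):
    """Minimal teaching-learning-based optimization (teacher + learner phases)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    F = [f(x) for x in X]
    for _ in range(iters):
        # Teacher phase: move learners toward the best solution, away from the mean.
        t = min(range(pop), key=lambda i: F[i])
        mean = [sum(x[d] for x in X) / pop for d in range(dim)]
        for i in range(pop):
            TF = rng.choice((1, 2))  # teaching factor
            cand = [min(hi, max(lo, X[i][d] + rng.random() * (X[t][d] - TF * mean[d])))
                    for d in range(dim)]
            fc = f(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
        # Learner phase: move toward a better peer, away from a worse one.
        for i in range(pop):
            j = rng.randrange(pop)
            if j == i:
                continue
            sign = 1.0 if F[i] < F[j] else -1.0
            cand = [min(hi, max(lo, X[i][d] + sign * rng.random() * (X[i][d] - X[j][d])))
                    for d in range(dim)]
            fc = f(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
    b = min(range(pop), key=lambda i: F[i])
    return X[b], F[b]

# Stand-in objective (the paper's is the Markov availability model):
best, val = tlbo_minimize(lambda x: sum(v * v for v in x), dim=3, lo=-5.0, hi=5.0)
```

A notable property of TLBO, relevant to the comparison with GA in the abstract, is that it has no algorithm-specific tuning parameters beyond population size and iteration count.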

  9. Optimizing the processing parameters for modular production of ...

    African Journals Online (AJOL)

For a PCB processing system, the most important processing parameters that can affect the quality and cost of the board are the temperature of the processing solutions and the duration of each processing stage. An evaluation of the ...

  10. Optimization of process and solution parameters in electrospinning polyethylene oxide

    CSIR Research Space (South Africa)

    Jacobs, V

    2011-11-01

    Full Text Available , applied voltage and polyallylamine hydrochloride (PAH) concentration in the spinning solution and its influence on nanofiber diameter. The selected parameters were varied at three levels using Box and Behnken factorial design. The interaction effect...
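
A three-factor Box-Behnken design of the kind mentioned above consists of all ±1 pairs on the edges of the cube with the remaining factor at its mid level, plus center runs; a sketch of the generator in coded units (the helper name is hypothetical):

```python
from itertools import combinations

def box_behnken(k, center_runs=3):
    """Box-Behnken design in coded units (-1, 0, +1) for k factors:
    every (+/-1, +/-1) combination for each factor pair, with the
    remaining factors held at 0, plus replicated center runs."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * k for _ in range(center_runs)]
    return runs

design = box_behnken(3)  # 12 edge-midpoint runs + 3 center runs = 15
```

The coded levels would then be mapped to the actual factor ranges (e.g. applied voltage and solution concentration) before running the experiments.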

  11. Automated Recognition of Oceanic Cloud Patterns and its Application to Remote Sensing of Meteorological Parameters

    Science.gov (United States)

    Garand, Louis Joseph Charles

A scheme is presented for the automated classification of oceanic cloud patterns into twenty classes. A training set is defined by 2000 samples of size 128 x 128 km taken in February 1984 over the Western Atlantic. The method uses visible and infrared images from a geostationary satellite. Class discrimination is obtained from thirteen features representing height, albedo, shape and multi-layering characteristics. Features derived from the two-dimensional power spectrum of the visible images proved essential for the detection of directional patterns (streets, rolls) and open cells. A simple classification algorithm is developed based on the assumption of multivariate normal distributions of the features. From 1020 independent samples, the consensus among three expert nephanalysts is an overall accuracy of 79%, with the machine answer at least second best 89% of the time. The cloud climatologies in twenty classes for January and February 1984 are compared. The physical characteristics of the classes labeled by machine are investigated from the collocation of 2130 cloud patterns with ship observations. It is shown that realistic estimates of the probability of precipitation can be inferred from the cloud patterns. For several meteorological parameters, multiple linear regressions involving satellite features are used to lower the variance within a class. For example, the satellite-retrieved cloud base temperature is shown to be strongly related to the surface air temperature (Ta) and dew point (TD). Single retrievals of Ta and TD have rms errors less than 3.5 K for half of the classes, whereas the seasonal maps over the entire domain show rms errors of 1.45 K and 1.70 K, respectively. Cloud pattern identification also leads to estimates of wind speed and sea-air temperature and humidity difference, with rms errors on seasonal retrievals of 0.92 m/s, 1.27 K and 1.36 g/kg, respectively. The resulting rms errors on the sensible and latent heat fluxes are 26 W/m² and 73 W/m².
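
The classification step described above — assuming multivariate normal feature distributions per class — reduces, under a simplifying diagonal-covariance assumption, to picking the class with the highest Gaussian log-likelihood. A sketch with hypothetical two-feature cloud classes:

```python
import math

def fit_gaussians(samples_by_class):
    """Per-class feature means and variances (diagonal covariance assumption;
    the paper uses full multivariate normal distributions)."""
    model = {}
    for label, rows in samples_by_class.items():
        n, d = len(rows), len(rows[0])
        mu = [sum(r[k] for r in rows) / n for k in range(d)]
        var = [sum((r[k] - mu[k]) ** 2 for r in rows) / n + 1e-9 for k in range(d)]
        model[label] = (mu, var)
    return model

def classify(model, x):
    """Pick the class with the highest Gaussian log-likelihood for x."""
    def loglik(mu, var):
        return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
                   for xi, m, v in zip(x, mu, var))
    return max(model, key=lambda c: loglik(*model[c]))

# Hypothetical 2-feature stand-ins for (cloud-top height km, albedo):
model = fit_gaussians({
    "stratocumulus": [[2.0, 0.6], [2.2, 0.65], [1.9, 0.55]],
    "cirrus":        [[9.0, 0.3], [9.5, 0.25], [8.8, 0.35]],
})
```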

  12. Simple Automated NGS Library Construction Using Optimized NEBNext(R) Reagents and a Caliper Sciclone

    Science.gov (United States)

    Dimalanta, E.; Stewart, F.; Barry, A.; Meek, I.; Apone, L.; Liu, P.; Munafo, D.; Davis, T.; Sumner, Christine

    2012-01-01

While next generation sequencing technologies are continually evolving to increase the data output, sequence-ready library preparation significantly lags behind in scale. The multi-step scheme of library construction and gel-based size selection limits the number of samples that can be processed manually without introducing handling errors. Moreover, processing multiple samples is extremely time consuming. Our objective here was to address these issues by developing an automated library construction process for NGS platforms. Specifically, we optimized a library construction workflow utilizing NEBNext® reagents in conjunction with the Sciclone NGS liquid handling workstation. In addition, specific reagent configuration designs were tested for ease-of-use. Key considerations in the design of the reagent kits included the elimination of manual pipetting steps in setting up the instrument, reagent storage compatibility, the premixing of components for the various enzymatic steps and the reduction of reagent dead-volume. As a result of this work, we have developed a cost-effective automated process that is scalable from 8-96 samples with minimal hands on time.

  13. Plug-and-play monitoring and performance optimization for industrial automation processes

    CERN Document Server

    Luo, Hao

    2017-01-01

Dr.-Ing. Hao Luo demonstrates the development of advanced plug-and-play (PnP) process monitoring and control systems for industrial automation processes. With the aid of the so-called Youla parameterization, a novel PnP process monitoring and control architecture (PnP-PMCA) with modularized components is proposed. To validate the developments, a case study on an industrial rolling mill benchmark is performed, and the real-time implementation on a laboratory brushless DC motor is presented. Contents: PnP Process Monitoring and Control Architecture; Real-Time Configuration Techniques for PnP Process Monitoring; Real-Time Configuration Techniques for PnP Performance Optimization; Benchmark Study and Real-Time Implementation. Target Groups: Researchers and students of Automation and Control Engineering; Practitioners in the area of Industrial and Production Engineering. The Author: Hao Luo received the Ph.D. degree at the Institute for Automatic Control and Complex Systems (AKS) at the University of Duisburg-Essen, Germany, ...

  14. Optimization of Injection Moulding Process Parameters in the ...

    African Journals Online (AJOL)

    ADOWIE PERE

    ABSTRACT: In this study, optimal injection moulding conditions for minimum shrinkage during moulding of. High Density Polyethylene (HDPE) were obtained by Taguchi method. The result showed that melting temperature of 190OC, injection pressure of 55 MPa, refilling pressure of 85 MPa and cooling time of 11 seconds ...

  15. Optimization of injection moulding process parameters in the ...

    African Journals Online (AJOL)

    In this study, optimal injection moulding conditions for minimum shrinkage during moulding of High Density Polyethylene (HDPE) were obtained by Taguchi method. The result showed that melting temperature of 190OC, injection pressure of 55 MPa, refilling pressure of 85 MPa and cooling time of 11 seconds gave ...

  16. Optimization of machining parameters of hard porcelain on a CNC ...

    African Journals Online (AJOL)

(Taguchi Analysis and RSM) was efficient and effective for the multi-attribute decision-making problem in hard turning.

  17. Air Compressor Driving with Synchronous Motors at Optimal Parameters

    Directory of Open Access Journals (Sweden)

    Iuliu Petrica

    2010-10-01

Full Text Available In this paper, a method for the optimal compensation of the reactive load by the synchronous motors driving the air compressors used in mining enterprises is presented, taking into account that in this case the great majority of the equipment (compressors, pumps) is generally working at a constant load.

  18. Microseismic event location using global optimization algorithms: An integrated and automated workflow

    Science.gov (United States)

    Lagos, Soledad R.; Velis, Danilo R.

    2018-02-01

    We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques: Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead to the microseismic events location from raw 3C data. First, we carry out the automatic detection, denoising and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after taking proper care of the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for 2D and 3D usual scenarios of hydraulic fracturing processes. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
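
The location step can be illustrated with plain simulated annealing on a toy travel-time misfit (VFSA differs mainly in its temperature schedule and neighbor distribution; the station layout, velocity, and event position below are hypothetical):

```python
import math
import random

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]  # km
V = 3.0  # assumed uniform P-wave velocity, km/s
true_event = (6.0, 3.0)
t_obs = [math.dist(true_event, s) / V for s in stations]  # synthetic picks

def misfit(x, y):
    """Sum of squared travel-time residuals at all stations."""
    return sum((math.dist((x, y), s) / V - t) ** 2
               for s, t in zip(stations, t_obs))

def anneal(seed=7, iters=5000):
    """Plain simulated annealing over the 2D search box."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, 10), rng.uniform(0, 10)
    e = misfit(x, y)
    best = (x, y, e)
    for k in range(iters):
        T = 0.1 * (1.0 - k / iters) + 1e-6  # linear cooling schedule
        nx = min(10, max(0, x + rng.gauss(0, 0.5)))
        ny = min(10, max(0, y + rng.gauss(0, 0.5)))
        ne = misfit(nx, ny)
        if ne < e or rng.random() < math.exp((e - ne) / T):
            x, y, e = nx, ny, ne
            if e < best[2]:
                best = (x, y, e)
    return best

loc_x, loc_y, loc_misfit = anneal()
```

Compared with a grid search at the same resolution, the annealer evaluates the misfit at far fewer points, which is the speed-up the abstract reports for VFSA and PSO.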

  19. SBROME: a scalable optimization and module matching framework for automated biosystems design.

    Science.gov (United States)

    Huynh, Linh; Tsoukalas, Athanasios; Köppe, Matthias; Tagkopoulos, Ilias

    2013-05-17

    The development of a scalable framework for biodesign automation is a formidable challenge given the expected increase in part availability and the ever-growing complexity of synthetic circuits. To allow for (a) the use of previously constructed and characterized circuits or modules and (b) the implementation of designs that can scale up to hundreds of nodes, we here propose a divide-and-conquer Synthetic Biology Reusable Optimization Methodology (SBROME). An abstract user-defined circuit is first transformed and matched against a module database that incorporates circuits that have previously been experimentally characterized. Then the resulting circuit is decomposed to subcircuits that are populated with the set of parts that best approximate the desired function. Finally, all subcircuits are subsequently characterized and deposited back to the module database for future reuse. We successfully applied SBROME toward two alternative designs of a modular 3-input multiplexer that utilize pre-existing logic gates and characterized biological parts.

  20. Automated procedure for selection of optimal refueling policies for light water reactors

    International Nuclear Information System (INIS)

    Lin, B.I.; Zolotar, B.; Weisman, J.

    1979-01-01

    An automated procedure determining a minimum cost refueling policy has been developed for light water reactors. The procedure is an extension of the equilibrium core approach previously devised for pressurized water reactors (PWRs). Use of 1 1/2-group theory has improved the accuracy of the nuclear model and eliminated tedious fitting of albedos. A simple heuristic algorithm for locating a good starting policy has materially reduced PWR computing time. Inclusion of void effects and use of the Haling principle for axial flux calculations extended the nuclear model to boiling water reactors (BWRs). A good initial estimate of the refueling policy is obtained by recognizing that a nearly uniform distribution of reactivity provides low-power peaking. The initial estimate is improved upon by interchanging groups of four assemblies and is subsequently refined by interchanging individual assemblies. The method yields very favorable results, is simpler than previously proposed BWR fuel optimization schemes, and retains power cost as the objective function

  1. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  2. Optimizing the balance between task automation and human manual control in simulated submarine track management.

    Science.gov (United States)

    Chen, Stephanie I; Visser, Troy A W; Huf, Samuel; Loft, Shayne

    2017-09-01

    Automation can improve operator performance and reduce workload, but can also degrade operator situation awareness (SA) and the ability to regain manual control. In 3 experiments, we examined the extent to which automation could be designed to benefit performance while ensuring that individuals maintained SA and could regain manual control. Participants completed a simulated submarine track management task under varying task load. The automation was designed to facilitate information acquisition and analysis, but did not make task decisions. Relative to a condition with no automation, the continuous use of automation improved performance and reduced subjective workload, but degraded SA. Automation that was engaged and disengaged by participants as required (adaptable automation) moderately improved performance and reduced workload relative to no automation, but degraded SA. Automation engaged and disengaged based on task load (adaptive automation) provided no benefit to performance or workload, and degraded SA relative to no automation. Automation never led to significant return-to-manual deficits. However, all types of automation led to degraded performance on a nonautomated task that shared information processing requirements with automated tasks. Given these outcomes, further research is urgently required to establish how to design automation to maximize performance while keeping operators cognitively engaged. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Microbial alkaline proteases: Optimization of production parameters and their properties

    OpenAIRE

    Kanupriya Miglani Sharma; Rajesh Kumar; Surbhi Panwar; Ashwani Kumar

    2017-01-01

    Proteases are hydrolytic enzymes capable of degrading proteins into small peptides and amino acids. They account for nearly 60% of the total industrial enzyme market. Proteases are extensively exploited commercially, in food, pharmaceutical, leather and detergent industry. Given their potential use, there has been renewed interest in the discovery of proteases with novel properties and a constant thrust to optimize the enzyme production. This review summarizes a fraction of the enormous repor...

  4. Data Mining and Optimization Tools for Developing Engine Parameters Tools

    Science.gov (United States)

    Dhawan, Atam P.

    1998-01-01

This project was awarded for understanding the problem and developing a plan for data mining tools for use in designing and implementing an Engine Condition Monitoring System. Tricia Erhardt and I studied the problem domain for developing an engine condition monitoring system using the sparse and non-standardized datasets to be made available through a consortium at NASA Lewis Research Center. We visited NASA three times to discuss additional issues related to the dataset which was not made available to us. We discussed and developed a general framework of data mining and optimization tools to extract useful information from sparse and non-standard datasets. These discussions led to the training of Tricia Erhardt to develop genetic algorithm based search programs, written in C++, which were used to demonstrate the capability of the GA in searching for an optimal solution in noisy datasets. From the study and discussions with NASA LeRC personnel, we then prepared a proposal, which is being submitted to NASA, for future work on the development of data mining algorithms for engine condition monitoring. The proposed set of algorithms uses wavelet processing for creating a multi-resolution pyramid of the data for GA-based multi-resolution optimal search.
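
A toy real-coded GA of the kind described — here with truncation selection, blend crossover, and Gaussian mutation on a hypothetical noisy objective (the noise is seeded per point so the function is deterministic):

```python
import random

def ga_minimize(f, lo, hi, pop=30, gens=60, seed=5):
    """Tiny real-coded GA: keep the best third (elitism), breed children by
    averaging two elites, and perturb with Gaussian mutation."""
    rng = random.Random(seed)
    P = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(P, key=f)[: pop // 3]
        P = elite[:]
        while len(P) < pop:
            a, b = rng.choice(elite), rng.choice(elite)
            child = 0.5 * (a + b) + rng.gauss(0, 0.1 * (hi - lo))
            P.append(min(hi, max(lo, child)))
    return min(P, key=f)

# Hypothetical noisy objective with a minimum near x = 2:
def noisy(x):
    jitter = random.Random(int(x * 1e6)).uniform(-0.05, 0.05)
    return (x - 2.0) ** 2 + jitter

best = ga_minimize(noisy, lo=-10.0, hi=10.0)
```

The elitist population averages out the per-evaluation noise, which is the behavior the GA demonstration above was meant to show.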

  5. Optimization of turning process parameters by using grey-Taguchi ...

    African Journals Online (AJOL)

The main objective of this study is to improve the toughness and hardness of an engineering material by changing the machining parameters of the turning process. By applying the Taguchi method, variation is studied to improve the quality of manufactured goods and engineering designs. In this work, an attempt has been made to ...

  6. Investigation and validation of optimal cutting parameters for least ...

    African Journals Online (AJOL)


    machining the hard martensite stainless steel and indicated that the surface roughness is a critical parameter to the functionality of machined components and ... Turning is carried on lathe that provides the power to turn the work piece at a given rotational speed and feed to the cutting tool at specified rate and depth of cut.

  7. Optimization of Storage Parameters of Selected Fruits in Passive ...

    African Journals Online (AJOL)

This study was carried out to determine the optimum storage parameters of selected fruits using three sets of four types of passive evaporative cooling structures made of two different materials, clay and aluminium. One set consisted of four separate cooling chambers. Two cooling chambers were made with aluminium ...

  8. Optimal Two Parameter Bounds for the Seiffert Mean

    Directory of Open Access Journals (Sweden)

    Hui Sun

    2013-01-01

Full Text Available We obtain sharp bounds for the Seiffert mean in terms of a two parameter family of means. Our results generalize and extend the recent bounds presented in the Journal of Inequalities and Applications (2012) and Abstract and Applied Analysis (2012).
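
For reference, the first Seiffert mean is P(a,b) = (a − b)/(4·arctan(√(a/b)) − π) for a ≠ b, and it is known to lie strictly between the geometric and arithmetic means; a quick numerical check:

```python
import math

def seiffert(a, b):
    """First Seiffert mean P(a,b) = (a - b) / (4*atan(sqrt(a/b)) - pi), a != b."""
    return (a - b) / (4.0 * math.atan(math.sqrt(a / b)) - math.pi)

# Classical ordering G < P < A for distinct positive arguments:
a, b = 7.0, 2.0
G = math.sqrt(a * b)      # geometric mean
A = (a + b) / 2.0         # arithmetic mean
P = seiffert(a, b)
```

Two-parameter bounds of the kind in the abstract sharpen exactly this ordering by interpolating between G and A.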

  9. Screening for the optimal induction parameters for periplasmic ...

    African Journals Online (AJOL)


    2010-09-20

Accepted 5 August, 2010. Screening for optimum induction parameters to improve the production of periplasmic interferon-α2b (PrIFN-α2b) by recombinant Escherichia coli was conducted using shake flask culture. Recombinant E. coli Rosetta-gami 2(DE3) ...

  10. Optimization of process parameters for synthesis of silica–Ni ...

    Indian Academy of Sciences (India)

ship between the reduction of metal salts in silica powder as a function of experimental parameters, viz. temperature and time of reduction (Roy et al 2007). It was, therefore, thought to be worthwhile to find out such a quantitative relationship, and this paper deals with the ...

  11. Optimization of burnishing parameters and determination of select ...

    Indian Academy of Sciences (India)

    The present study is aimed at filling the gaps in scientific understanding of the burnishing process, and also to aid and arrive at technological solutions for the surface modifications based on burnishing of some of the commonly employed engineering materials. The effects of various burnishing parameters on the surface ...

  12. Optimizing pulsed current micro plasma arc welding parameters to ...

    African Journals Online (AJOL)

    This paper reveals the influences of pulsed current parameters namely peak current, back current, pulse and pulse width on the ultimate tensile strength of Micro Plasma Arc Welded Inconel 625 sheets. Mathematical model is developed to predict ultimate tensile strength of pulsed current micro plasma arc welded Inconel ...

  13. Optimization of dynamic MOSA model parameters using ATP/EMTP software tool

    Directory of Open Access Journals (Sweden)

    Jasika Ranko

    2017-01-01

Full Text Available This paper demonstrates the procedure for estimating the parameters of a dynamic metal-oxide surge arrester model by using a genetic algorithm, implemented in the ATP/EMTP graphical preprocessor (ATPDraw) optimization module. The advantages of the new ATPDraw options that allow optimization of electric circuit elements are shown. The optimization process is applied to two frequency-dependent MOSA models. At the end of the work, a comparison of the results obtained before and after optimization is given.

  14. Air conditioning with methane: Efficiency and economics optimization parameters

    International Nuclear Information System (INIS)

    Mastrullo, R.; Sasso, M.; Sibilio, S.; Vanoli, R.

    1992-01-01

    This paper presents an efficiency and economics evaluation method for methane fired cooling systems. Focus is on direct flame two staged absorption systems and alternative engine driven compressor sets. Comparisons are made with conventional vapour compression plants powered by electricity supplied by the national grid. A first and second law based thermodynamics analysis is made in which fuel use coefficients and exergy yields are determined. The economics analysis establishes annual energy savings, unit cooling energy production costs, payback periods and economics/efficiency optimization curves useful for preliminary feasibility studies

  15. Optimal parameters of monolithic high-contrast grating mirrors.

    Science.gov (United States)

    Marciniak, Magdalena; Gębski, Marcin; Dems, Maciej; Haglund, Erik; Larsson, Anders; Riaziat, Majid; Lott, James A; Czyszanowski, Tomasz

    2016-08-01

    In this Letter a fully vectorial numerical model is used to search for the construction parameters of monolithic high-contrast grating (MHCG) mirrors providing maximal power reflectance. We determine the design parameters of highly reflecting MHCG mirrors where the etching depth of the stripes is less than two wavelengths in free space. We analyze MHCGs in a broad range of real refractive index values corresponding to most of the common optoelectronic materials in use today. Our results comprise a complete image of possible highly reflecting MHCG mirror constructions for potential use in optoelectronic devices and systems. We support the numerical analysis by experimental verification of the high reflectance via a GaAs MHCG designed for a wavelength of 980 nm.

  16. Method for Predicting and Optimizing System Parameters for Electrospinning System

    Science.gov (United States)

    Wincheski, Russell A. (Inventor)

    2011-01-01

    An electrospinning system using a spinneret and a counter electrode is first operated for a fixed amount of time at known system and operational parameters to generate a fiber mat having a measured fiber mat width associated therewith. Next, acceleration of the fiberizable material at the spinneret is modeled to determine values of mass, drag, and surface tension associated with the fiberizable material at the spinneret output. The model is then applied in an inversion process to generate predicted values of an electric charge at the spinneret output and an electric field between the spinneret and electrode required to fabricate a selected fiber mat design. The electric charge and electric field are indicative of design values for system and operational parameters needed to fabricate the selected fiber mat design.

  17. Machining parameter optimization in turning process for sustainable manufacturing

    OpenAIRE

    S. G. Dambhare; S. J. Deshmukh; A. B. Borade

    2015-01-01

    There is increasing awareness of sustainable manufacturing processes. Manufacturing industries are the backbone of a country’s economy. Although manufacturing is important, there is great concern about the consumption of resources and the creation of waste. The primary aim of this study was to explore sustainability concerns in the turning process in an Indian machining industry. The effect of cutting parameters, Speed/Feed/Depth of Cut, the machining environment, Dry/MQL/Wet, and the type of cutting tool on sust...

  18. Optimization of process parameters through GRA, TOPSIS and RSA models

    Directory of Open Access Journals (Sweden)

    Suresh Nipanikar

    2018-01-01

    Full Text Available This article investigates the effect of cutting parameters on the surface roughness and flank wear during machining of the titanium alloy Ti-6Al-4V ELI (Extra Low Interstitial) in a minimum quantity lubrication environment using a PVD TiAlN insert. A full factorial design of experiments was used for the machining (two factors at three levels and two factors at two levels). The turning parameters studied were cutting speed (50, 65, 80 m/min) and feed (0.08, 0.15, 0.2 mm/rev), with the depth of cut held constant at 0.5 mm. The results show a 44.61% contribution of feed and a 43.57% contribution of cutting speed to surface roughness, and a 53.16% contribution of cutting tool and a 26.47% contribution of cutting speed to tool flank wear. Grey relational analysis and the TOPSIS method suggest the optimum combination of machining parameters as cutting speed: 50 m/min, feed: 0.08 mm/rev, cutting tool: PVD TiAlN, cutting fluid: palm oil

  19. Relationships among various parameters for decision tree optimization

    KAUST Repository

    Hussain, Shahid

    2014-01-14

    In this chapter, we study in detail the relationships between various pairs of cost functions, and between uncertainty measures and cost functions, for decision tree optimization. We provide new tools (algorithms) to compute relationship functions, as well as experimental results on decision tables acquired from the UCI ML Repository. The algorithms presented in this chapter have already been implemented and are now a part of Dagger, a software system for the construction and optimization of decision trees and decision rules. The main results presented in this chapter deal with two types of algorithms for computing relationships. First, we discuss the case where we construct approximate decision trees and are interested in relationships between a certain cost function, such as the depth or number of nodes of a decision tree, and an uncertainty measure, such as the misclassification error (accuracy) of the decision tree. Second, relationships between two different cost functions are discussed, for example, the number of misclassifications of a decision tree versus the number of nodes in the tree. The results of experiments presented in the chapter provide further insight. © 2014 Springer International Publishing Switzerland.

  20. Optimization of Nano-Process Deposition Parameters Based on Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Norlina Mohd Sabri

    2016-06-01

    Full Text Available This research focuses on the radio frequency (RF) magnetron sputtering process, a physical vapor deposition technique which is widely used in thin film production. This process requires an optimized combination of deposition parameters in order to obtain the desired thin film. The conventional method of optimizing the deposition parameters has been reported to be costly and time consuming due to its trial-and-error nature. Thus, the gravitational search algorithm (GSA) technique is proposed to solve this nano-process parameter optimization problem. In this research, the optimized parameter combination was expected to produce the desired electrical and optical properties of the thin film. The performance of GSA in this research was compared with that of Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Artificial Immune System (AIS) and Ant Colony Optimization (ACO). Based on the overall results, the GSA-optimized parameter combination generated the best electrical and acceptable optical properties of the thin film compared to the others. This computational experiment is expected to overcome the problem of having to conduct repetitive laboratory experiments to obtain the most optimized parameter combination. Based on this initial experiment, the adaptation of GSA to this problem could offer a more efficient and productive way of depositing a quality thin film in the fabrication process.
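
    The gravitational search algorithm named in this record can be sketched generically. The following is a minimal, simplified GSA (every agent attracts every other; no shrinking Kbest set) minimizing a stand-in sphere function rather than a sputtering-process objective; the agent count, G0 and alpha values are illustrative assumptions, not settings from the study.

    ```python
    import numpy as np

    def gsa(fobj, lower, upper, n_agents=20, n_iter=100, g0=100.0, alpha=20.0, seed=0):
        """Minimal Gravitational Search Algorithm for minimization.

        Simplification: all agents attract (no shrinking Kbest elite set).
        """
        rng = np.random.default_rng(seed)
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        dim = lower.size
        x = rng.uniform(lower, upper, size=(n_agents, dim))
        v = np.zeros((n_agents, dim))
        best_x, best_f = x[0].copy(), np.inf
        for t in range(n_iter):
            f = np.array([fobj(xi) for xi in x])
            i_best = int(f.argmin())
            if f[i_best] < best_f:
                best_f, best_x = float(f[i_best]), x[i_best].copy()
            # Inertial masses: best agent -> 1, worst agent -> 0
            spread = f.min() - f.max()
            m = (f - f.max()) / spread if abs(spread) > 1e-12 else np.ones(n_agents)
            M = m / m.sum()
            G = g0 * np.exp(-alpha * t / n_iter)  # gravitational "constant" decays over time
            a = np.zeros((n_agents, dim))
            for i in range(n_agents):
                for j in range(n_agents):
                    if i == j:
                        continue
                    r = np.linalg.norm(x[j] - x[i])
                    a[i] += rng.random() * G * M[j] * (x[j] - x[i]) / (r + 1e-12)
            v = rng.random((n_agents, dim)) * v + a
            x = np.clip(x + v, lower, upper)
        return best_x, best_f

    best_x, best_f = gsa(lambda z: np.sum(z ** 2), [-5.0, -5.0], [5.0, 5.0])
    ```

    For a real deposition process, `fobj` would wrap an experiment or a surrogate model of film quality rather than a benchmark function.
    
    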

  1. Design And Modeling An Automated Digsilent Power System For Optimal New Load Locations

    Directory of Open Access Journals (Sweden)

    Mohamed Saad

    2015-08-01

    Full Text Available Abstract Electric power utilities seek to take advantage of novel approaches to meet growing energy demand, and are under pressure to evolve their classical topologies to increase the use of distributed generation. Currently, electrical power engineers in many regions of the world use manual methods to measure power consumption for further assessment of voltage violations; such a process has proved time consuming, costly and inaccurate. Demand response, likewise, is a grid management technique in which retail or wholesale customers are requested, either electronically or manually, to reduce their load. This paper therefore aims to design and model an automated power system for optimal new-load locations using DPL (DIgSILENT Programming Language). The study is a diagnostic approach that informs the system operator of any voltage violations that would occur when a new load is added to the grid. Identifying the optimal bus bar location involves a complicated calculation of the power consumption at each load bus. The DPL program therefore considers all the internal network data of the IEEE 30-bus system and executes a load flow simulation, adding the new load to the first bus in the network. The developed model then simulates the new load at each available bus bar in the network and generates three analytical reports for each case, capturing the over/under voltage and the loading of elements across the grid.

  2. The Study of the Optimal Parameter Settings in a Hospital Supply Chain System in Taiwan

    Directory of Open Access Journals (Sweden)

    Hung-Chang Liao

    2014-01-01

    Full Text Available This study proposed optimal parameter settings for a hospital supply chain system (HSCS) when the total system cost (TSC), the patient safety level (PSL), or both simultaneously are considered as the measure of the HSCS’s performance. Four parameters were considered in the HSCS: safety stock, maximum inventory level, transportation capacity, and the reliability of the HSCS. A full-factor experimental design was used to simulate the HSCS for the purpose of collecting data. The response surface method (RSM) was used to construct the regression model, and a genetic algorithm (GA) was applied to obtain the optimal parameter settings for the HSCS. The results show that the best way to obtain the optimal parameter settings for the HSCS is to simultaneously consider both the TSC and the PSL to measure performance. The results of a sensitivity analysis based on the optimal parameter settings were also used to derive adjustable strategies for the decision-makers.
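
    The RSM-then-optimize pipeline in this record can be illustrated with a two-factor toy problem: fit a second-order response surface to simulated "experiments" by least squares, then search the fitted surface for its minimizer (the study uses a GA; a fine grid is substituted here for brevity). The data-generating function and factor ranges below are invented for the sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic "experiments": two coded factors with a noisy quadratic response
    X = rng.uniform(-2, 2, size=(40, 2))
    y = (X[:, 0] - 1.0) ** 2 + (X[:, 1] + 0.5) ** 2 + rng.normal(0, 0.01, 40)

    def design(X):
        """Second-order response-surface design matrix: 1, x1, x2, x1^2, x2^2, x1*x2."""
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([np.ones(len(X)), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

    beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)

    # Optimize the fitted surface on a fine grid (stand-in for the paper's GA stage)
    g = np.linspace(-2, 2, 161)
    G1, G2 = np.meshgrid(g, g)
    P = np.column_stack([G1.ravel(), G2.ravel()])
    pred = design(P) @ beta
    x_opt = P[pred.argmin()]
    ```

    Because the fitted model is cheap to evaluate, the optimizer (grid, GA, or otherwise) never has to re-run the expensive simulation during the search, which is the point of the RSM step.
    
    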

  3. Optimization of sampling parameters for standardized exhaled breath sampling.

    Science.gov (United States)

    Doran, Sophie; Romano, Andrea; Hanna, George B

    2017-09-05

    The lack of standardization of breath sampling is a major contributing factor to the poor repeatability of results and hence represents a barrier to the adoption of breath tests in clinical practice. On-line and bag breath sampling have advantages but do not suit multicentre clinical studies, whereas storage and robust transport are essential for the conduct of wide-scale studies. Several devices have been developed to control sampling parameters and to concentrate volatile organic compounds (VOCs) onto thermal desorption (TD) tubes and subsequently transport those tubes for laboratory analysis. We conducted three experiments to investigate (i) the fraction of breath sampled (whole vs. lower expiratory exhaled breath); (ii) breath sample volume (125, 250, 500 and 1000 ml) and (iii) breath sample flow rate (400, 200, 100 and 50 ml/min). The target VOCs were acetone and potential volatile biomarkers for oesophago-gastric cancer belonging to the aldehyde, fatty acid and phenol chemical classes. We also examined the collection execution time and the impact of environmental contamination. The experiments showed that the use of exhaled breath-sampling devices requires the selection of optimum sampling parameters. The increase in sample volume improved the levels of VOCs detected. However, the influence of the fraction of exhaled breath and the flow rate depends on the target VOCs measured. The concentration of potential volatile biomarkers for oesophago-gastric cancer was not significantly different between the whole and lower airway exhaled breath. While the recovery of phenols and acetone from TD tubes was lower when breath sampling was performed at a higher flow rate, other VOCs were not affected. A dedicated 'clean air supply' overcomes the contamination from ambient air, but the breath collection device itself can be a source of contaminants. In clinical studies using VOCs to diagnose gastro-oesophageal cancer, the optimum parameters are a 500 ml sample volume

  4. Parameter extraction using global particle swarm optimization approach and the influence of polymer processing temperature on the solar cell parameters

    Science.gov (United States)

    Kumar, S.; Singh, A.; Dhar, A.

    2017-08-01

    The accurate estimation of photovoltaic parameters is fundamental to gaining insight into the physical processes occurring inside a photovoltaic device and thereby to optimizing its design, fabrication processes, and quality. A simulative approach to accurately determining the device parameters is crucial for cell array and module simulation when applied in practical on-field applications. In this work, we have developed a global particle swarm optimization (GPSO) approach to estimate the different solar cell parameters, viz., ideality factor (η), short circuit current (Isc), open circuit voltage (Voc), shunt resistance (Rsh), and series resistance (Rs), with a wide search range of over ±100% for each model parameter. After validating the accuracy and global search power of the proposed approach with synthetic and noisy data, we applied the technique to extract the PV parameters of ZnO/PCDTBT based hybrid solar cells (HSCs) prepared under different annealing conditions. Further, we examine the variation of the extracted model parameters to unveil the physical processes occurring when different annealing temperatures are employed during device fabrication, and establish the role of improved charge transport in polymer films from independent FET measurements. The evolution of the surface morphology, optical absorption, and chemical compositional behaviour of PCDTBT co-polymer films as a function of processing temperature has also been captured in the study and correlated with the findings from the PV parameters extracted using the GPSO approach.
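
    The parameter-extraction idea can be sketched with a generic global-best PSO. This is not the authors' GPSO code: the sketch fits a simplified ideal-diode model (series resistance neglected, no shunt path) to synthetic data, so only the photocurrent, saturation current and ideality factor appear, and the bounds and swarm settings are illustrative assumptions.

    ```python
    import numpy as np

    VT = 0.02585  # thermal voltage near 300 K [V]

    def diode_current(v, iph, log10_i0, eta):
        """Simplified ideal single-diode model: I = Iph - I0*(exp(V/(eta*VT)) - 1)."""
        return iph - 10.0 ** log10_i0 * (np.exp(v / (eta * VT)) - 1.0)

    # Synthetic I-V curve whose "true" parameters (1.0 A, 1e-9 A, eta=1.5) are to be recovered
    v_data = np.linspace(0.0, 0.6, 30)
    i_data = diode_current(v_data, 1.0, -9.0, 1.5)

    def cost(p):
        return float(np.mean((diode_current(v_data, *p) - i_data) ** 2))

    def pso(cost, lower, upper, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Generic global-best particle swarm optimizer with position clipping."""
        rng = np.random.default_rng(seed)
        lower, upper = np.asarray(lower), np.asarray(upper)
        x = rng.uniform(lower, upper, (n, lower.size))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
        g = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lower, upper)
            f = np.array([cost(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[pbest_f.argmin()].copy()
        return g, float(pbest_f.min())

    # Search Iph in [0.5, 1.5] A, log10(I0) in [-12, -6], eta in [1, 2]
    params, err = pso(cost, [0.5, -12.0, 1.0], [1.5, -6.0, 2.0])
    ```

    Optimizing log10(I0) rather than I0 itself keeps the search well conditioned across orders of magnitude, a common trick when saturation currents span nanoamp to microamp scales.
    
    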

  5. Optimization of design parameters of low-energy buildings

    Science.gov (United States)

    Vala, Jiří; Jarošová, Petra

    2017-07-01

    Evaluation of temperature development and the related consumption of energy required for heating, air-conditioning, etc. in low-energy buildings requires a proper physical analysis covering heat conduction, convection and radiation, including the beam and diffuse components of solar radiation, on all building parts and interfaces. The system approach and the Fourier multiplicative decomposition, together with the finite element technique, offer the possibility of inexpensive and robust numerical and computational analysis of the corresponding direct problems, as well as of optimization problems with several design variables, using the Nelder-Mead simplex method. A practical example demonstrates the correlation between such numerical simulations and the time series of measurements of energy consumption for a small family house in Ostrov u Macochy (35 km north of Brno).
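
    A Nelder-Mead design search of this kind can be reproduced with SciPy on a stand-in objective. The two design variables (insulation thickness and window area) and the cost model below are invented for illustration; they are not the paper's thermal model.

    ```python
    from scipy.optimize import minimize

    def annual_cost(x):
        """Toy stand-in for an annual energy-plus-investment cost [arbitrary units].

        x[0]: insulation thickness [m]; x[1]: window area [m^2] (both hypothetical).
        """
        t, a = x
        heating = 120.0 / (0.05 + t)   # conduction losses fall with thickness
        invest = 400.0 * t             # insulation cost rises with thickness
        solar = 250.0 / (1.0 + a)      # larger windows admit more solar gain
        glazing = 30.0 * a             # glazing cost rises with window area
        return heating + invest + solar + glazing

    # Derivative-free simplex search, as in the paper's optimization stage
    res = minimize(annual_cost, x0=[0.3, 1.0], method="Nelder-Mead",
                   options={"xatol": 1e-6, "fatol": 1e-6, "maxiter": 2000})
    ```

    For this separable toy cost the optimum can be checked by hand (t* = sqrt(120/400) - 0.05, a* = sqrt(250/30) - 1), which makes it a convenient sanity test before plugging in a real building-energy simulation as the objective.
    
    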

  6. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    Science.gov (United States)

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing. PMID:29385048

  7. Optimization of Experimental Model Parameter Identification for Energy Storage Systems

    Directory of Open Access Journals (Sweden)

    Rosario Morello

    2013-09-01

    Full Text Available The smart grid approach is envisioned to take advantage of all available modern technologies in transforming the current power system to provide benefits to all stakeholders in the fields of efficient energy utilisation and wide integration of renewable sources. Energy storage systems could help to solve some issues that stem from renewable energy usage, in terms of stabilizing the intermittent energy production, power quality and power peak mitigation. With the integration of energy storage systems into the smart grid, their accurate modeling becomes a necessity in order to gain robust real-time control of the network, in terms of stability and energy supply forecasting. In this framework, this paper proposes a procedure to identify the values of the battery model parameters that best fit experimental data and to integrate the model, along with models of energy sources and electrical loads, into a complete framework representing a real-time smart grid management system. The proposed method is based on a hybrid optimisation technique which makes combined use of a stochastic and a deterministic algorithm; it has a low computational burden and can therefore be repeated over time in order to account for parameter variations due to the battery’s age and usage.
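
    A hybrid stochastic/deterministic identification of this general kind can be sketched with SciPy's differential evolution, which by default "polishes" the stochastic result with a deterministic local search (L-BFGS-B). The single-exponential relaxation model and its "true" parameters below are invented synthetic data, not the paper's battery model.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def terminal_voltage(t, v0, a, tau):
        """Toy battery relaxation model: voltage recovers toward v0 after a load step."""
        return v0 - a * np.exp(-t / tau)

    # Synthetic "measurement" with true parameters to be re-identified
    t_data = np.linspace(0.0, 120.0, 60)
    v_data = terminal_voltage(t_data, 4.0, 0.5, 30.0)

    def sse(p):
        return float(np.sum((terminal_voltage(t_data, *p) - v_data) ** 2))

    result = differential_evolution(
        sse,
        bounds=[(3.0, 5.0), (0.0, 2.0), (1.0, 100.0)],
        seed=1,          # reproducible stochastic stage
        polish=True,     # deterministic L-BFGS-B refinement of the DE result
    )
    v0_fit, a_fit, tau_fit = result.x
    ```

    Re-running this fit periodically against fresh measurements, as the abstract suggests, lets the model track parameter drift caused by battery ageing.
    
    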

  8. Parameters and design optimization of the ring piezoelectric ceramic transformer

    Directory of Open Access Journals (Sweden)

    Jiří Erhart

    2015-09-01

    Full Text Available The main aim of this paper is the theoretical analysis and experimental verification of the transformation parameters for a new type of nonhomogeneously poled ring transformer. The input part is poled in the thickness direction and the output part in the radial direction. Two transformer geometries are studied: the input part is either the inner or the outer ring segment. The optimum electrode size aspect ratios have been found experimentally as d1∕D≈0.60−0.65 for a ring with aspect ratio d∕D=0.2. The fundamental as well as higher overtone resonances were studied for the transformation ratio, the optimum resistive load, the efficiency and the no-load transformation ratio. Higher overtones have better transformation parameters than the fundamental resonance. The new ring transformer exhibits very high transformation ratios of up to 200 under no load and up to 13.4 at a high efficiency of 97% under the optimum load condition of 10 kΩ. The strong electric field gradient at the output circuit is applicable to electrical discharge generation.

  9. Beyond bixels: Generalizing the optimization parameters for intensity modulated radiation therapy

    International Nuclear Information System (INIS)

    Markman, Jerry; Low, Daniel A.; Beavis, Andrew W.; Deasy, Joseph O.

    2002-01-01

    Intensity modulated radiation therapy (IMRT) treatment planning systems optimize fluence distributions by subdividing the fluence distribution into rectangular bixels. The algorithms typically optimize the bixel intensities directly, often leading to fluence distributions with sharp discontinuities. These discontinuities may make the fluence distribution difficult to deliver, leading to inaccurate dose delivery. We have developed a method for decoupling the bixel intensities from the optimization parameters, either by introducing optimization control points from which the bixel intensities are interpolated or by parametrizing the fluence distribution using basis functions. In either case, the number of optimization search parameters is reduced relative to the direct bixel optimization method. To illustrate the concept, the technique is applied to two-dimensional idealized head and neck treatment plans. The interpolation algorithms investigated were nearest-neighbor, linear and cubic spline, and radial basis functions served as the basis-function test. The interpolation and basis-function optimization techniques were compared against the direct bixel calculation. The number of optimization parameters was significantly reduced relative to the bixel optimization, and this was evident in a reduction in computation time of as much as 58% from the full bixel optimization. The dose distributions obtained using the reduced optimization parameter sets were very similar to those of the full bixel optimization when examined by dose distributions, statistics, and dose-volume histograms. To evaluate the sensitivity of the fluence calculations to spatial misalignment caused either by delivery errors or patient motion, the doses were recomputed with a 1 mm shift in each beam and compared to the unshifted distributions. Except for the nearest-neighbor algorithm, the reduced optimization parameter dose distributions were generally less sensitive to spatial shifts than the bixel
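
    The control-point idea, interpolating a full bixel row from a few optimization variables, can be shown with a cubic spline: here a smooth 1-D fluence profile of 40 bixels is recovered from 8 control points, a five-fold reduction in search variables. The Gaussian profile is a generic stand-in, not a clinical fluence map.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    n_bixels, n_ctrl = 40, 8
    x_bixel = np.linspace(0.0, 1.0, n_bixels)
    x_ctrl = np.linspace(0.0, 1.0, n_ctrl)

    def fluence(x):
        """Smooth reference profile (stand-in for an optimized row of bixel intensities)."""
        return np.exp(-((x - 0.5) ** 2) / (2 * 0.18 ** 2))

    # Only the 8 control-point intensities would be optimization variables;
    # the 40 bixel intensities are interpolated from them.
    spline = CubicSpline(x_ctrl, fluence(x_ctrl))
    recon = spline(x_bixel)
    max_err = float(np.max(np.abs(recon - fluence(x_bixel))))
    ```

    For a smooth profile the interpolation error stays small, which is consistent with the paper's finding that the reduced parametrizations closely match the full bixel optimization while shrinking the parameter count.
    
    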

  10. Optimization of process parameters in drilling of fibre hybrid composite using Taguchi and grey relational analysis

    Science.gov (United States)

    Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.

    2017-03-01

    Nowadays quality plays a vital role in all products. Hence, developments in the manufacturing process focus on fabricating composites with high dimensional accuracy while incurring low manufacturing cost. In this work, an investigation of machining parameters has been performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized by employing three machining input parameters. The input variables considered are drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi’s L16 orthogonal array is used for optimizing the individual tool parameters. Analysis of variance is used to find the significance of the individual parameters. The simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the greatest effect on material removal rate and surface roughness, followed by feed rate.
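
    The grey relational analysis step, which collapses roughness (lower-the-better) and material removal rate (higher-the-better) into a single grade per experiment, can be sketched as follows. The four-run response table is made-up illustrative data, not the study's measurements, and equal response weights are assumed.

    ```python
    import numpy as np

    # Invented responses for four drilling runs
    ra = np.array([3.2, 2.1, 4.0, 1.8])           # surface roughness [um], lower is better
    mrr = np.array([120.0, 150.0, 90.0, 160.0])   # material removal rate [mm^3/min], higher is better

    def normalize(x, larger_is_better):
        """Map each response onto [0, 1] with 1 = best observed value."""
        if larger_is_better:
            return (x - x.min()) / (x.max() - x.min())
        return (x.max() - x) / (x.max() - x.min())

    def grey_coeff(z, zeta=0.5):
        """Grey relational coefficient; delta is the deviation from the ideal (=1)."""
        delta = 1.0 - z
        return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

    Z = np.column_stack([normalize(ra, False), normalize(mrr, True)])
    coeffs = grey_coeff(Z)
    grades = coeffs.mean(axis=1)      # equal weights for both responses
    best_run = int(grades.argmax())   # with this data, run index 3 has lowest Ra and highest MRR
    ```

    The run with the highest grade is the recommended parameter combination; in multi-response problems the grade replaces the single-response signal-to-noise ratio of a plain Taguchi analysis.
    
    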

  11. Steam condenser optimization using Real-parameter Genetic Algorithm for Prototype Fast Breeder Reactor

    International Nuclear Information System (INIS)

    Jayalal, M.L.; Kumar, L. Satish; Jehadeesan, R.; Rajeswari, S.; Satya Murty, S.A.V.; Balasubramaniyan, V.; Chetal, S.C.

    2011-01-01

    Highlights: → We model design optimization of a vital reactor component using a Genetic Algorithm. → A Real-parameter Genetic Algorithm is used for the steam condenser optimization study. → Comparison analysis is done with various Genetic Algorithm related mechanisms. → The results obtained are validated with the reference study results. - Abstract: This work explores the use of a Real-parameter Genetic Algorithm and analyses its performance in the steam condenser (or Circulating Water System) optimization study of a 500 MW fast breeder nuclear reactor. The choice of optimum condenser design parameters for a power plant from among a large number of technically viable combinations is a complex task. This is primarily due to the conflicting nature of the economic implications of the different system parameters for maximizing the capitalized profit. In order to find the optimum design parameters, a Real-parameter Genetic Algorithm model is developed and applied. The results obtained are validated with the reference study results.
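
    A real-parameter (real-coded) GA of the kind used here can be sketched generically. The objective below is a simple stand-in bowl, not the condenser cost model, and the operators (tournament selection, per-gene blend crossover, Gaussian mutation, one-elite survival) and settings are common textbook choices rather than those of the study.

    ```python
    import numpy as np

    def real_ga(fobj, lower, upper, pop=40, gens=100, sigma=0.1,
                p_mut=0.2, tour=3, seed=0):
        """Minimal real-coded GA: tournament selection, blend crossover,
        Gaussian mutation, and one-elite survival."""
        rng = np.random.default_rng(seed)
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        dim = lower.size
        x = rng.uniform(lower, upper, (pop, dim))
        for _ in range(gens):
            f = np.array([fobj(xi) for xi in x])
            elite = x[f.argmin()].copy()          # best individual survives unchanged
            def pick():
                idx = rng.integers(0, pop, tour)  # tournament among `tour` rivals
                return x[idx[f[idx].argmin()]]
            children = []
            for _ in range(pop - 1):
                p1, p2 = pick(), pick()
                u = rng.random(dim)               # per-gene blend crossover
                child = u * p1 + (1.0 - u) * p2
                mask = rng.random(dim) < p_mut    # Gaussian mutation on a subset of genes
                child[mask] += rng.normal(0.0, sigma, mask.sum())
                children.append(np.clip(child, lower, upper))
            x = np.vstack([elite] + children)
        f = np.array([fobj(xi) for xi in x])
        return x[f.argmin()], float(f.min())

    target = np.array([2.0, -1.0])
    best_x, best_f = real_ga(lambda z: np.sum((z - target) ** 2),
                             [-5.0, -5.0], [5.0, 5.0])
    ```

    Unlike a binary-coded GA, the real-coded variant operates on the design variables directly, which avoids discretization of continuous condenser parameters such as flow rates and tube dimensions.
    
    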

  12. Metallic Fuel Casting Development and Parameter Optimization Simulations

    International Nuclear Information System (INIS)

    Fielding, Randall S.; Kennedy, J.R.; Crapps, J.; Unal, C.

    2013-01-01

    Conclusions: • Gravity casting is a feasible process for casting of metallic fuels: – It may not be as robust as CGIC and is more parameter dependent; the right “sweet spot” must be found for high-quality castings; – Fluid flow is very important and is affected by mold design, vent size, superheat, etc.; – Pressure differential assist was found to be detrimental. • Simulation found that vent location was important to allow adequate filling of the mold; • Surface tension plays an important role in determining casting quality; • Casting trials and simulations highlight the need for better-characterized fluid physical and thermal properties; • Results from the simulations, such as vent location and physical property characterization, will be incorporated in the GACS design

  13. Parameter and cost optimizations for a modular stellarator reactor

    Science.gov (United States)

    Hitchon, W. N. G.; Johnson, P. C.; Watson, C. J. H.

    1983-02-01

    The physical scaling and cost scaling of a modular stellarator reactor are described. It is shown that configurations based on l=2 are best able to support adequate beta, and physical relationships are derived which enable the geometry and parameters of an l=2 modular stellarator to be defined. A cost scaling for the components of the nuclear island is developed using Starfire (tokamak reactor study) engineering as a basis. It is shown that for minimum cost the stellarator should be of small aspect ratio. For a 4000 MWth plant, as in Starfire, the optimum configuration is a 15 coil, 3 field period, l=2 device with a major radius of 16 m and a plasma minor radius of 2 m; with a conservative wall loading of 2 MW/m2 and an average beta of 3.9%, the estimated cost per kilowatt (electrical) is marginally (7%) greater than that of Starfire.

  14. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems

    OpenAIRE

    Cho, Ming-Yuan; Hoang, Thi Thom

    2017-01-01

    Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary seque...

  15. SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization

    International Nuclear Information System (INIS)

    Li, Dengwang; Wang, Jie; Kapp, Daniel S.; Xing, Lei

    2015-01-01

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation. The remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as the optimization of an implicit function. The liver region was optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers from the panel data were segmented manually by physicians, and we then estimated the parameters of a GMM (Gaussian mixture model) and MRF (Markov random field). A shape dictionary was built from the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization. Furthermore, H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated within the local and global optimization until it satisfied the stopping conditions (maximum iterations and changing rate). Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy on the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is

  16. Automated selection of the optimal cardiac phase for single-beat coronary CT angiography reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Stassi, D.; Ma, H.; Schmidt, T. G., E-mail: taly.gilat-schmidt@marquette.edu [Department of Biomedical Engineering, Marquette University, Milwaukee, Wisconsin 53201 (United States); Dutta, S.; Soderman, A.; Pazzani, D.; Gros, E.; Okerlund, D. [GE Healthcare, Waukesha, Wisconsin 53188 (United States)

    2016-01-15

    Purpose: Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. Methods: An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three

  17. Methods of Parametric Optimization of Thin-Walled Structures and Parameters which Influence on it

    Directory of Open Access Journals (Sweden)

    Kibkalo Anton

    2016-01-01

    Full Text Available The question of the efficiency of thin-walled structures involves a number of contradictions: the best of all existing structures must be selected according to chosen optimization criteria. In parametric optimization, the search is conducted by varying the parameters. As a rule, the aim of building-structure optimization is to reduce material consumption, labor input and cost; the given (total) cost most fully describes the expense of a particular variant of construction. There are two types of optimization parameters - immutable and varying. The result of optimizing thin-walled beams is a combination of parameters for each design situation that provides the required strength with the minimum of the objective function - the factory cost of production

  18. Study of human walking patterns based on the parameter optimization of a passive dynamic walking robot.

    Science.gov (United States)

    Zang, Xizhe; Liu, Xinyu; Zhu, Yanhe; Zhao, Jie

    2016-04-29

    The study of human walking patterns mainly focuses on how control affects walking, because control schemes are considered to be dominant in human walking. This study proposes that not only fine control schemes but also optimized body segment parameters are responsible for humans' low-energy walking. A passive dynamic walker provides the possibility of analyzing the effect of parameters on walking efficiency because of its ability to walk without any control. Thus, a passive dynamic walking model with a relatively human-like structure was built, and a parameter optimization process based on the gait sensitivity norm was implemented to determine the optimal mechanical parameters by numerical simulation. The results were close to human body parameters, thus indicating that humans can walk in a passive pattern based on their body segment parameters. A quasi-passive walking prototype was built on the basis of the optimization results. Experiments showed that a passive robot with optimized parameters could walk on level ground with only a simple hip actuation. This result implies that humans, with only a simple control strategy, can opt to walk instinctively in a passive pattern based on their body segment parameters.

  19. Simulation and parameter optimization of polysilicon gate biaxial strained silicon MOSFETs

    CSIR Research Space (South Africa)

    Tsague, HD

    2015-10-01

    Full Text Available Simulation and Parameter Optimization of Polysilicon Gate Biaxial Strained Silicon MOSFETs. Hippolyte Djonon Tsague, Council for Scientific and Industrial Research (CSIR), Modelling and Digital Science (MDS), Pretoria, South Africa. hdjonontsague...

  20. NACP VPRM NEE Parameters Optimized to North American Flux Tower Sites, 2000-2006

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set provides Vegetation Photosynthesis Respiration Model (VPRM) net ecosystem exchange (NEE) parameter values optimized to 65 flux tower sites across North...

  1. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    International Nuclear Information System (INIS)

    Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S

    2014-01-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles along the gradient. The algorithm continues with a pattern search to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage.
Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the resulting plan quality.
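The record above combines column generation, a gradient method, and a pattern search. As a rough illustration of the final stage only, here is a minimal coordinate pattern search in Python; the quadratic objective and all parameter names are hypothetical stand-ins, not the authors' treatment-planning code.

```python
def pattern_search(f, x, step=1.0, shrink=0.5, tol=1e-6, max_iter=10_000):
    """Minimize f by coordinate pattern search: probe +/- step along each
    axis and shrink the step when no probe improves the incumbent."""
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                cand = list(x)
                cand[i] += d
                fc = f(cand)
                if fc < fx:
                    x, fx = cand, fc
                    improved = True
        if not improved:
            step *= shrink
    return x, fx

# Example: a smooth surrogate objective with its optimum at (3, -1).
best_x, best_f = pattern_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2,
                                [0.0, 0.0])
```

Because the method only compares function values, it can make progress in regions where a gradient is unavailable, which is exactly the role it plays after the gradient stage above.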

  2. Automated sensing of thunderstorm characteristics and lightning parameters in the south of the European part of the Russian Federation

    Science.gov (United States)

    Adzhiev, Anatoly Kh.; Boldyreff, Anton S.; Kuliev, Dalkhat D.; Borisov, Igor V.; Korogodova, Irina V.

    2017-10-01

    In the present study, the results of instrument measurements of thunderstorm activity and lightning parameters over the territory of the south of the European part of the Russian Federation were obtained. An automated hardware-software complex for thunderstorm activity sensing was developed. It consists of lightning sensors, electric field mills, a meteorological radar (MRL-5) for sensing the radar characteristics of clouds, meteorological stations, and software for data collection, analysis and processing. Features of the spatial-temporal variations of thunderstorm activity and lightning parameters within 650 km of the measurement center of the High-Mountain Geophysical Institute (Nalchik) were found. The relations between the numbers of intracloud, cloud-to-cloud and cloud-to-ground (positive and negative) discharges were studied. Current parameters of lightning discharges of different polarities over flat and mountainous terrain were obtained and analyzed in the paper.

  3. Persistently-exciting signal generation for Optimal Parameter Estimation of constrained nonlinear dynamical systems.

    Science.gov (United States)

    Honório, Leonardo M; Costa, Exuperry Barros; Oliveira, Edimar J; Fernandes, Daniel de Almeida; Moreira, Antonio Paulo G M

    2018-04-13

    This work presents a novel methodology for sub-optimal excitation signal generation and optimal parameter estimation of constrained nonlinear systems. It is proposed that the evaluation of each signal must also account for the difference between the real and estimated system parameters. However, this metric is not directly obtainable, since the real parameter values are not known. The alternative presented here is to adopt the hypothesis that, if a system can be approximated by a white-box model, this model can be used as a benchmark to indicate the impact of a signal on the parametric estimation. The proposed method therefore uses a dual-layer optimization methodology: (i) at the inner level, for a given excitation signal, a nonlinear optimization method searches for the optimal set of parameters that minimizes the error between the outputs of the optimized and benchmark models; (ii) at the outer level, a metaheuristic optimization method is responsible for constructing the best excitation signal, considering the fitness coming from the inner level, the quadratic difference between its parameters, and the cost related to the time and space required to execute the experiment. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
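The dual-layer idea above can be sketched in a toy setting: an inner level fits a model parameter to data generated by a "white box" benchmark, and an outer level searches over candidate excitation signals, scoring each by parameter error plus an experiment-cost term. Everything here (the linear gain model, the noise level, the cost weights) is an invented stand-in for the paper's method.

```python
import random

TRUE_GAIN = 2.5   # benchmark ("white box") parameter, assumed known here

def benchmark(u, gain=TRUE_GAIN, noise=0.05, rng=None):
    """White-box model y = gain * u with additive measurement noise."""
    rng = rng or random.Random(0)
    return [gain * ui + rng.gauss(0.0, noise) for ui in u]

def inner_fit(u, y):
    """Inner level: closed-form least-squares estimate of the gain."""
    num = sum(ui * yi for ui, yi in zip(u, y))
    den = sum(ui * ui for ui in u)
    return num / den

def outer_search(n_candidates=30, length=50, seed=1):
    """Outer level: keep the excitation signal whose fitted parameter is
    closest to the benchmark value, penalizing signal energy (experiment cost)."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_candidates):
        u = [rng.uniform(-1, 1) for _ in range(length)]
        y = benchmark(u, rng=random.Random(42))
        est = inner_fit(u, y)
        cost = abs(est - TRUE_GAIN) + 1e-3 * sum(ui * ui for ui in u)
        if best is None or cost < best[0]:
            best = (cost, est, u)
    return best

cost, est, signal = outer_search()
```

In the paper the inner fit is a nonlinear optimization and the outer loop is a metaheuristic; the closed-form fit and random outer search here only show how the two levels exchange information.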

  4. A Parameter Estimation Method for Nonlinear Systems Based on Improved Boundary Chicken Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Shaolong Chen

    2016-01-01

    Full Text Available Parameter estimation is an important problem in nonlinear system modeling and control. By constructing an appropriate fitness function, the parameter estimation of a system can be converted into a multidimensional parameter optimization problem. As a novel swarm intelligence algorithm, chicken swarm optimization (CSO) has attracted much attention owing to its good global convergence and robustness. In this paper, a method based on improved boundary chicken swarm optimization (IBCSO) is proposed for the parameter estimation of nonlinear systems, demonstrated and tested on the Lorenz system and a coupling motor system. Furthermore, we have analyzed the influence of the time series on the estimation accuracy. Computer simulation results show that the method is feasible and performs well for the parameter estimation of nonlinear systems.
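The conversion described above, from parameter estimation to optimization via a fitness function, can be illustrated on the logistic map; a plain random search stands in for the paper's improved boundary chicken swarm optimizer, and all constants below are illustrative.

```python
import random

def logistic_series(r, x0=0.2, n=30):
    """Simulate the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

R_TRUE = 3.2
measured = logistic_series(R_TRUE)   # "measured" data from the true system

def fitness(r):
    """Sum of squared errors between simulated and measured series:
    the fitness function that turns estimation into optimization."""
    sim = logistic_series(r)
    return sum((a - b) ** 2 for a, b in zip(sim, measured))

def random_search(lo=2.5, hi=3.9, iters=5000, seed=0):
    """Generic stochastic search; a swarm method such as CSO would
    minimize the same fitness function."""
    rng = random.Random(seed)
    best_r, best_f = None, float("inf")
    for _ in range(iters):
        r = rng.uniform(lo, hi)
        f = fitness(r)
        if f < best_f:
            best_r, best_f = r, f
    return best_r, best_f

r_hat, err = random_search()
```

The abstract's observation about time-series length corresponds here to the parameter `n`: a longer series sharpens the fitness minimum around the true parameter.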

  5. Adjoint Parameter Sensitivity Analysis for the Hydrodynamic Lattice Boltzmann Method with Applications to Design Optimization

    DEFF Research Database (Denmark)

    Pingen, Georg; Evgrafov, Anton; Maute, Kurt

    2009-01-01

    We present an adjoint parameter sensitivity analysis formulation and solution strategy for the lattice Boltzmann method (LBM). The focus is on design optimization applications, in particular topology optimization. The lattice Boltzmann method is briefly described with an in-depth discussion of so...

  6. Real-time parameter optimization based on neural network for smart injection molding

    Science.gov (United States)

    Lee, H.; Liau, Y.; Ryu, K.

    2018-03-01

    The manufacturing industry has been facing several challenges, including sustainability, performance, and quality of production. Manufacturers attempt to enhance their competitiveness by implementing CPS (Cyber-Physical Systems) at the manufacturing process level through the convergence of IoT (Internet of Things) and ICT (Information & Communication Technology). The injection molding process has a short cycle time and high productivity, features that make it suitable for mass production. In addition, this process is used to produce precise parts in various industry fields such as automobiles, optics and medical devices. Injection molding involves a mixture of discrete and continuous variables, all of which must be considered in order to optimize quality. Furthermore, optimal parameter setting is time-consuming work, since the process parameters cannot easily be corrected during process execution. In this research, we propose a neural-network-based real-time process parameter optimization methodology that sets optimal process parameters by using mold data, molding machine data, and response data. This paper is expected to make an academic contribution as a novel study of parameter optimization during production, in contrast to the pre-production parameter optimization of typical studies.

  7. Optimization of parameters for coverage of low molecular weight proteins.

    Science.gov (United States)

    Müller, Stephan A; Kohajda, Tibor; Findeiss, Sven; Stadler, Peter F; Washietl, Stefan; Kellis, Manolis; von Bergen, Martin; Kalkhof, Stefan

    2010-12-01

    Proteins with low molecular weights play important roles in key cellular processes such as cell cycle control. Despite their importance, the coverage of smaller proteins in standard proteome studies is rather sparse. Here we investigated biochemical and mass spectrometric parameters that influence coverage and validity of identification. The underrepresentation of low molecular weight (LMW) proteins may be attributed to the low numbers of proteolytic peptides formed by tryptic digestion as well as their tendency to be lost in protein separation and concentration/desalting procedures. In a systematic investigation of the LMW proteome of Escherichia coli, a total of 455 LMW proteins (27% of the 1672 listed in the SwissProt protein database) were identified, corresponding to a coverage of 62% of the known cytosolic LMW proteins. Of these proteins, 93 had not yet been functionally classified, and five had not previously been confirmed at the protein level. In this study, the influences of protein extraction (either urea or TFA), proteolytic digestion (solely, and the combined usage of trypsin and AspN as endoproteases) and protein separation (gel- or non-gel-based) were investigated. Compared to the standard procedure based solely on the use of urea lysis buffer, in-gel separation and tryptic digestion, the complementary use of TFA for extraction or endoprotease AspN for proteolysis permits the identification of an extra 72 (32%) and 51 proteins (23%), respectively. Regarding mass spectrometry analysis with an LTQ Orbitrap mass spectrometer, collision-induced fragmentation (CID and HCD) and electron transfer dissociation using the linear ion trap (IT) or the Orbitrap as the analyzer were compared. IT-CID was found to yield the best identification rate, whereas IT-ETD provided almost comparable results in terms of LMW proteome coverage. The high overlap between the proteins identified with IT-CID and IT-ETD allowed the validation of 75% of the identified proteins using this orthogonal fragmentation technique.
Furthermore, a new

  8. Optimizing human-system interface automation design based on a skill-rule-knowledge framework

    International Nuclear Information System (INIS)

    Lin, Chiuhsiang Joe; Yenn, T.-C.; Yang, C.-W.

    2010-01-01

    This study considers the technological change that has occurred in complex systems within the past 30 years, and the role of human operators in controlling and interacting with complex systems following that change. Modernization of instrumentation and control systems and components leads to a new issue of human-automation interaction, in which human operational performance must be considered in automated systems. Human-automation interaction can differ in its types and levels. A system design issue is usually realized: given these technical capabilities, which system functions should be automated and to what extent? A good automation design can be achieved by making an appropriate human-automation function allocation. To our knowledge, only a few studies have been published on how to achieve an appropriate automation design with a systematic procedure. Further, there is a surprising lack of information on examining and validating the influences of levels of automation (LOAs) on instrumentation and control systems in the advanced control room (ACR). The study presented in this paper proposes a systematic framework to help in making appropriate decisions on types of automation (TOA) and LOAs based on a 'Skill-Rule-Knowledge' (SRK) model. The evaluation results show that the use of either automatic mode or semiautomatic mode alone is insufficient to prevent human errors. For preventing the occurrence of human errors and ensuring safety in the ACR, the proposed framework can be valuable for making decisions in human-automation allocation.

  9. Computer-automated multi-disciplinary analysis and design optimization of internally cooled turbine blades

    Science.gov (United States)

    Martin, Thomas Joseph

    This dissertation presents the theoretical methodology, organizational strategy, conceptual demonstration and validation of a fully automated computer program for the multi-disciplinary analysis, inverse design and optimization of convectively cooled axial gas turbine blades and vanes. Parametric computer models of the three-dimensional cooled turbine blades and vanes were developed, including the automatic generation of discretized computational grids. Several new analysis programs were written and incorporated with existing computational tools to provide computer models of the engine cycle, aero-thermodynamics, heat conduction and thermofluid physics of the internally cooled turbine blades and vanes. A generalized information transfer protocol was developed to provide the automatic mapping of geometric and boundary condition data between the parametric design tool and the numerical analysis programs. A constrained hybrid optimization algorithm controlled the overall operation of the system and guided the multi-disciplinary internal turbine cooling design process towards the objectives and constraints of engine cycle performance, aerodynamic efficiency, cooling effectiveness and turbine blade and vane durability. Several boundary element computer programs were written to solve the steady-state non-linear heat conduction equation inside the internally cooled and thermal barrier-coated turbine blades and vanes. The boundary element method (BEM) did not require grid generation inside the internally cooled turbine blades and vanes, so the parametric model was very robust. Implicit differentiation of the BEM thermal and thermo-elastic analyses was done to compute design sensitivity derivatives faster and more accurately than via explicit finite differencing. A factor of three savings in computer processing time was realized for two-dimensional thermal optimization problems, and a factor of twenty was obtained for three-dimensional thermal optimization problems.
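The claimed advantage of implicitly differentiated sensitivities over explicit finite differencing is easiest to see in one dimension. The toy response function below is hypothetical, not the BEM analysis itself; it merely shows that an analytic derivative is exact while a forward difference carries an O(h) truncation error.

```python
import math

def response(k):
    """Toy 'thermal' response as a function of one design parameter k."""
    return math.exp(-k) + k ** 2

def d_response(k):
    """Analytic sensitivity derivative (the role played by implicit
    differentiation of the BEM analyses in the dissertation)."""
    return -math.exp(-k) + 2 * k

def fd_response(k, h=1e-5):
    """Explicit forward finite difference, accurate only to O(h) and
    requiring an extra full analysis per design parameter."""
    return (response(k + h) - response(k)) / h

k = 0.7
exact = d_response(k)
approx = fd_response(k)
```

The cost argument is the same in many variables: a forward difference needs one additional full solve per design parameter, whereas a differentiated analysis yields all sensitivities from the already-factored system.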

  10. Chemical Reactor Automation as a way to Optimize a Laboratory Scale Polymerization Process

    Science.gov (United States)

    Cruz-Campa, Jose L.; Saenz de Buruaga, Isabel; Lopez, Raymundo

    2004-10-01

    The automation of the registration and control of the variables involved in a chemical reactor improves the reaction process by making it faster, optimized, and free of human error. The objective of this work is to register and control the variables involved (temperatures, reactant flows, weights, etc.) in an emulsion polymerization reaction. The programs and control algorithms were developed in the G language of LabVIEW®. The designed software is able to send and receive RS232-encoded data between a personal computer and the devices (pumps, temperature sensors, mixer, balances, and so on). The transduction from digital information to the movement or measurement actions of the devices is done by electronic components included in the devices. Once the programs were written and tested, emulsion polymerization reactions were performed to validate the system. Moreover, some advanced heat-estimation algorithms were implemented in order to determine the heat released by the reaction and to estimate and control the chemical variables in-line. All the information obtained from the reaction is stored on the PC, where it is available and ready to use in any commercial data-processing software. This system is now being used in a research center to carry out emulsion polymerizations under efficient and controlled conditions with reproducible results. The experience gained from this project may be used in the implementation of chemical estimation algorithms at pilot-plant or industrial scale.

  11. An optimization design proposal of automated guided vehicles for mixed type transportation in hospital environments.

    Science.gov (United States)

    González, Domingo; Romero, Luis; Espinosa, María Del Mar; Domínguez, Manuel

    2017-01-01

    The aim of this paper is to present an optimization proposal for the design of the automated guided vehicles used in hospital logistics, and to analyze the impact of its implementation in a real environment. The proposal is based on the design of the elements that would allow the vehicles to deliver an extra cart by the towing method; the intention is thus to improve the productivity and performance of the current vehicles by using a combined-cart transportation method. The study was developed following concurrent engineering premises from three different viewpoints. First, the sequence of operations is described; second, a design proposal for the equipment is undertaken; finally, the impact of the proposal is analyzed according to real data from the Hospital Universitario Rio Hortega in Valladolid (Spain). In this particular case, implementing the analyzed proposal in the hospital can reduce the current time of use by over 35%. This result may allow new tasks to be added to the vehicles, and accordingly both a new kind of vehicle and a specific module can be developed in order to achieve better performance.

  12. Optimal affinity ranking for automated virtual screening validated in prospective D3R grand challenges

    Science.gov (United States)

    Wingert, Bentley M.; Oerlemans, Rick; Camacho, Carlos J.

    2018-01-01

    The goal of virtual screening is to generate a substantially reduced and enriched subset of compounds from a large virtual chemistry space. Critical in these efforts are methods to properly rank the binding affinity of compounds. Prospective evaluations of ranking strategies in the D3R grand challenges show that for targets with deep pockets the best correlations (Spearman ρ ≈ 0.5) were obtained by our submissions that docked compounds to the holo-receptors with the most chemically similar ligand. On the other hand, for targets with open pockets, using multiple receptor structures is not a good strategy. Instead, docking to a single optimal receptor led to the best correlations (Spearman ρ ≈ 0.5) and overall performed better than any other method. Yet, choosing a suboptimal receptor for cross-docking can significantly undermine the affinity rankings. Our submissions that evaluated the free energy of congeneric compounds were also among the best in the community experiment; error bars of around 1 kcal/mol are still too large to significantly improve the overall rankings. Collectively, our top-of-the-line predictions show that automated virtual screening with rigid receptors performs better than flexible docking and other more complex methods.
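Spearman ρ, the ranking metric quoted above, is simply the Pearson correlation of the ranks. A self-contained implementation with average-rank tie handling:

```python
def rank(values):
    """Average 1-based ranks, with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho: Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical docking scores vs. measured affinities: any strictly
# monotone relationship ranks identically, giving rho = 1.
scores = [0.1, 0.4, 0.2, 0.9]
affinities = [1.0, 4.0, 2.0, 9.0]
rho = spearman(scores, affinities)
```

Because ρ depends only on ranks, it is the natural figure of merit for affinity *ordering*, which is what matters when selecting a screening subset.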

  13. Network optimization including gas lift and network parameters under subsurface uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Schulze-Riegert, R.; Baffoe, J.; Pajonk, O. [SPT Group GmbH, Hamburg (Germany); Badalov, H.; Huseynov, S. [Technische Univ. Clausthal, Clausthal-Zellerfeld (Germany). ITE; Trick, M. [SPT Group, Calgary, AB (Canada)

    2013-08-01

    Optimization of oil and gas field production systems poses a great challenge to field development due to complex and multiple interactions between various operational design parameters and subsurface uncertainties. Conventional analytical methods are capable of finding local optima based on single deterministic models, but are less applicable for efficiently generating alternative design scenarios in a multi-objective context. Practical implementations of robust optimization workflows integrate the evaluation of alternative design scenarios and multiple realizations of subsurface uncertainty descriptions. Production or economic performance indicators such as NPV (Net Present Value) are linked to a risk-weighted objective function definition to guide the optimization process. This work focuses on an integrated workflow using a reservoir-network simulator coupled to an optimization framework, and investigates the impact of design parameters while considering the physics of the reservoir, wells, and surface facilities. Subsurface uncertainties are described by well parameters such as inflow performance. Experimental design methods are used to investigate parameter sensitivities and interactions, and optimization methods are used to find optimal design parameter combinations which improve key performance indicators of the production network system. The proposed workflow will be applied to a representative oil reservoir coupled to a network which is modelled by an integrated reservoir-network simulator. Gas lift will be included as an explicit measure to improve production. An objective function will be formulated for the net present value of the integrated system, including production revenue and facility costs. Facility and gas-lift design parameters are tuned to maximize NPV. Well inflow performance uncertainties are introduced with an impact on gas-lift performance.
The resulting variances in NPV are identified as a risk measure for the optimized system design.
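A risk-weighted NPV objective of the kind described above can be sketched as the mean NPV across subsurface realizations minus a penalty on its spread; the discount rate, cash flows, and risk weight below are invented for illustration, not taken from the paper.

```python
def npv(cash_flows, rate=0.1):
    """Net present value of a yearly cash-flow series (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def risk_weighted_objective(realizations, lam=0.5, rate=0.1):
    """Mean NPV penalized by its standard deviation across subsurface
    realizations -- one common risk-weighted objective form."""
    npvs = [npv(cf, rate) for cf in realizations]
    n = len(npvs)
    mean = sum(npvs) / n
    var = sum((v - mean) ** 2 for v in npvs) / n
    return mean - lam * var ** 0.5

# Three hypothetical realizations of the same development plan:
# identical capex, different production outcomes.
plans = [
    [-100, 60, 60, 60],
    [-100, 40, 50, 55],
    [-100, 70, 65, 50],
]
score = risk_weighted_objective(plans)
```

An optimizer tuning facility and gas-lift parameters would maximize this score, so a design with a slightly lower mean NPV but much lower variance across realizations can win, which is the point of the risk weighting.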

  14. Automation of POST Cases via External Optimizer and "Artificial p2" Calculation

    Science.gov (United States)

    Dees, Patrick D.; Zwack, Mathew R.

    2017-01-01

    POST's internal optimizer functions like any other gradient-based optimizer. It has a specified variable to optimize whose value is represented as optval, a set of dependent constraints to meet, with associated forms and tolerances, whose value is represented as p2, and a set of independent variables known as the u-vector to modify in pursuit of optimality. Each of these quantities is calculated or manipulated at a certain phase within the trajectory. The optimizer is further constrained by the requirement that the input u-vector must result in a trajectory which proceeds through each of the prescribed events in the input file. For example, if the input u-vector causes the vehicle to crash before it can achieve the orbital parameters required for a parking orbit, then the run will fail without engaging the optimizer, and a p2 value of exactly zero is returned. This poses a problem, as this "non-connecting" region of the u-vector space is far larger than the "connecting" region which returns a non-zero value of p2 and can be worked on by the internal optimizer. Finding this connecting region, and more specifically the global optimum within it, has traditionally required an expert analyst.
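One way to read the "artificial p2" idea in the record's title is that non-connecting runs should return a graded score instead of a flat zero, so an external optimizer can climb toward the connecting region. The toy "trajectory" below is entirely hypothetical and stands in for a POST run; the constants and the shifted-penalty form are assumptions, not the authors' formulation.

```python
TARGET_ALT = 200.0   # required "orbit" altitude in this toy problem

def fly(u):
    """Toy trajectory: peak altitude reached from two control inputs.
    Returns (connected, peak_altitude). Purely illustrative."""
    peak = 120.0 * u[0] + 90.0 * u[1] - 10.0 * (u[0] - u[1]) ** 2
    return peak >= TARGET_ALT, peak

def p2(u):
    """Real constraint residual once the trajectory connects:
    distance from the altitude target (smaller is better)."""
    _, peak = fly(u)
    return abs(peak - TARGET_ALT)

def artificial_p2(u):
    """Graded score for non-connecting runs: the shortfall from the
    target, shifted so every non-connecting score is worse than any
    connecting one. The flat zero is replaced by a usable gradient."""
    connected, peak = fly(u)
    if connected:
        return p2(u)
    return 1000.0 + (TARGET_ALT - peak)

bad = artificial_p2([0.2, 0.2])    # non-connecting: large but informative
good = artificial_p2([1.0, 1.0])   # connecting: the real p2 value
```

With the flat zero, every non-connecting u-vector looks equally bad; with the graded score, an external optimizer can compare two failed runs and move toward the connecting region automatically.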

  15. Optimization of Indoor Thermal Comfort Parameters with the Adaptive Network-Based Fuzzy Inference System and Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Li

    2017-01-01

    Full Text Available The goal of this study is to improve thermal comfort and indoor air quality with an adaptive network-based fuzzy inference system (ANFIS) model and an improved particle swarm optimization (PSO) algorithm. A method to optimize air conditioning parameters and installation distance is proposed. The methodology is demonstrated through a prototype case, which corresponds to a typical laboratory in colleges and universities. A laboratory model is established, and simulated flow field information is obtained with CFD software. Subsequently, the ANFIS model is employed instead of the CFD model to predict indoor flow parameters, and the CFD database is utilized to train ANN input-output “metamodels” for the subsequent optimization. With the improved PSO algorithm and the stratified sequence method, the objective functions, comprising PMV, PPD, and the mean age of air, are optimized. The optimal installation distance is determined with the hemisphere model. Results show that most of the staff obtain a satisfactory degree of thermal comfort and that the proposed method can significantly reduce the cost of building an experimental device. The proposed methodology can be used to determine appropriate air supply parameters and the air conditioner installation position for a pleasant and healthy indoor environment.
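The improvements made to PSO above are not specified in the abstract; a minimal standard PSO (inertia plus cognitive and social terms) conveys the underlying mechanism. The sphere objective is a placeholder for the PMV/PPD/mean-age-of-air functions, and all coefficients are common textbook defaults.

```python
import random

def pso(f, dim, lo, hi, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each velocity blends inertia,
    attraction to the particle's personal best, and to the global best."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [list(x) for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            fi = f(xs[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), fi
                if fi < gbest_f:
                    gbest, gbest_f = list(xs[i]), fi
    return gbest, gbest_f

# Example: minimize a shifted sphere with its optimum at (1, 2).
best, val = pso(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2,
                dim=2, lo=-5, hi=5)
```

In the study, the objective values would come from the ANFIS metamodel rather than an analytic function, which is exactly why the metamodel is trained: PSO needs many cheap objective evaluations.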

  16. Optimal Design of the Transverse Flux Machine Using a Fitted Genetic Algorithm with Real Parameters

    DEFF Research Database (Denmark)

    Argeseanu, Alin; Ritchie, Ewen; Leban, Krisztina Monika

    2012-01-01

    This paper applies a fitted genetic algorithm (GA) to the optimal design of a transverse flux machine (TFM). The main goal is to provide an easy-to-use tool for the optimal design of TFMs. The GA optimizes the analytic basic design of two TFM topologies: the C-core and the U-core. First, the GA was designed with real parameters. A further objective of the fitted GA is minimization of the computation time, related to the number of individuals, the number of generations, and the types of operators and their specific parameters.
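A GA "designed with real parameters" typically means real-coded chromosomes with blend-type crossover and Gaussian mutation. The sketch below is one such variant under those assumptions, not the paper's fitted GA; the quadratic cost stands in for the analytic TFM design model.

```python
import random

def real_ga(f, bounds, pop=30, gens=100, pm=0.2, seed=0):
    """Real-parameter GA: tournament selection, blend (BLX-alpha)
    crossover, Gaussian mutation, and single-individual elitism."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x):
        return [min(hi, max(lo, v)) for v, (lo, hi) in zip(x, bounds)]

    P = [clip([rng.uniform(lo, hi) for lo, hi in bounds]) for _ in range(pop)]
    for _ in range(gens):
        elite = min(P, key=f)
        children = [elite]                       # elitism
        while len(children) < pop:
            a = min(rng.sample(P, 3), key=f)     # tournament of 3
            b = min(rng.sample(P, 3), key=f)
            child = []
            for va, vb in zip(a, b):             # BLX-0.5 crossover
                lo_g, hi_g = min(va, vb), max(va, vb)
                span = hi_g - lo_g
                child.append(rng.uniform(lo_g - 0.5 * span, hi_g + 0.5 * span))
            if rng.random() < pm:                # Gaussian mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0.0, 0.1)
            children.append(clip(child))
        P = children
    return min(P, key=f)

# Example: recover the minimizer of a 2-D quadratic "design cost".
best = real_ga(lambda x: (x[0] - 0.5) ** 2 + (x[1] + 1.5) ** 2,
               bounds=[(-2, 2), (-2, 2)])
```

The abstract's computation-time concern maps onto `pop`, `gens`, and the operator parameters here: total cost is roughly `pop * gens` objective evaluations.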

  17. [Artificial neural network parameters optimization software and its application in the design of sustained release tablets].

    Science.gov (United States)

    Zhang, Xing-Yi; Chen, Da-Wei; Jin, Jie; Lu, Wei

    2009-10-01

    Artificial neural network (ANN) is a multi-objective optimization method that requires mathematical and statistical knowledge, which restricts its application in pharmaceutical research. An artificial neural network parameter optimization software (ANNPOS), programmed in the Visual Basic language, was developed to overcome this shortcoming. In the design of a sustained-release formulation, the suitable parameters of the ANN were estimated by the ANNPOS, and then the Matlab 5.0 Neural Network Toolbox was used to determine the optimal formulation. It was shown that the ANNPOS reduces the complexity and difficulty of applying an ANN.

  18. The primary ion source for construction and optimization of operation parameters

    International Nuclear Information System (INIS)

    Synowiecki, A.; Gazda, E.

    1986-01-01

    The construction of a primary ion source for SIMS is presented. The influence of the individual operating parameters on the properties of the ion source has been investigated. Optimization of these parameters has made it possible to assess the usefulness of the ion source for SIMS studies. 14 refs., 8 figs., 2 tabs. (author)

  19. Sensitivity of the optimal parameter settings for a LTE packet scheduler

    NARCIS (Netherlands)

    Fernandez-Diaz, I.; Litjens, R.; van den Berg, C.A.; Dimitrova, D.C.; Spaey, K.

    Advanced packet scheduling schemes in 3G/3G+ mobile networks provide one or more parameters to optimise the trade-off between QoS and resource efficiency. In this paper we study the sensitivity of the optimal parameter setting for packet scheduling in LTE radio networks with respect to various

  20. Experimental Verification of Statistically Optimized Parameters for Low-Pressure Cold Spray Coating of Titanium

    Directory of Open Access Journals (Sweden)

    Damilola Isaac Adebiyi

    2016-06-01

    Full Text Available The cold spray coating process involves many process parameters, which make the process very complex and highly dependent on, and sensitive to, small changes in these parameters. This results in a small operational window for the parameters. Consequently, mathematical optimization of the process parameters is key, not only to achieving deposition but also to improving coating quality. This study focuses on the mathematical identification and experimental justification of the optimum process parameters for the cold spray coating of a titanium alloy with silicon carbide (SiC). The continuity, momentum and energy equations governing the flow through the low-pressure cold spray nozzle were solved by introducing a constitutive equation to close the system. This was used to calculate the critical velocity for the deposition of SiC. In order to determine the input temperature that yields the calculated velocity, the distributions of velocity, temperature, and pressure in the cold spray nozzle were analyzed, and the exit values were predicted using the meshing tool of SolidWorks. Coatings fabricated using the optimized parameters are compared with coatings fabricated using non-optimized parameters. The coating produced with the CFD-optimized parameters yielded lower porosity and higher hardness.

  1. Differential-Evolution Control Parameter Optimization for Unmanned Aerial Vehicle Path Planning.

    Science.gov (United States)

    Kok, Kai Yit; Rajendran, Parvathy

    2016-01-01

    The differential evolution algorithm has been widely applied to unmanned aerial vehicle (UAV) path planning. At present, four tuning parameters exist for the differential evolution algorithm, namely, population size, differential weight, crossover, and generation number. These tuning parameters must be set, together with the user-defined weighting between path quality and computational cost. However, the optimum settings of these tuning parameters vary by application. Instead of trial and error, this paper presents an optimization method for tuning the parameters of the differential evolution algorithm for UAV path planning. The parameters this research focuses on are population size, differential weight, crossover, and generation number. The developed algorithm enables the user to simply define the desired weighting between path and computational cost and to converge with the minimum number of generations required. In conclusion, the proposed optimization of the tuning parameters in the differential evolution algorithm for UAV path planning expedites convergence and improves the final output path and computational cost.
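For reference, the four tuning parameters named above (population size NP, differential weight F, crossover rate CR, and generation count) appear directly in the classic DE/rand/1/bin scheme, sketched here on a placeholder objective rather than an actual path-planning cost:

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.8, CR=0.9, gens=150, seed=0):
    """DE/rand/1/bin: mutate with a scaled difference of two random
    vectors, apply binomial crossover, keep the trial if it is no worse."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            r1, r2, r3 = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)       # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(hi, max(lo, v)))
            ft = f(trial)
            if ft <= fit[i]:                 # greedy one-to-one replacement
                pop[i], fit[i] = trial, ft
    b = min(range(np_), key=lambda i: fit[i])
    return pop[b], fit[b]

# Example: a 2-D "path cost" surrogate with its optimum at (2, -3).
best_x, best_f = differential_evolution(
    lambda x: (x[0] - 2) ** 2 + (x[1] + 3) ** 2, bounds=[(-5, 5), (-5, 5)])
```

Tuning `np_`, `F`, `CR`, and `gens` trades solution quality against the total evaluation budget (`np_ * gens`), which is exactly the trade-off the record's meta-optimization addresses.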

  2. Combustion Model and Control Parameter Optimization Methods for Single Cylinder Diesel Engine

    Directory of Open Access Journals (Sweden)

    Bambang Wahono

    2014-01-01

    Full Text Available This research presents a method to construct a combustion model and to optimize some control parameters of a diesel engine in order to develop a model-based control system. The purpose of the model is to appropriately manage the control parameters so as to obtain the desired fuel consumption and emission values as the engine output objectives. A stepwise method considering multicollinearity was applied to construct the combustion model in polynomial form. Using experimental data from a single-cylinder diesel engine, models of power, BSFC, NOx, and soot for multiple-injection diesel engines were built. The proposed method successfully developed a model that relates the control parameters to the engine outputs. Although many control devices can be mounted on a diesel engine, an optimization technique is required to find optimal engine operating conditions efficiently, beside the existing development of individual emission control methods. Particle swarm optimization (PSO) was used to calculate the control parameters that optimize fuel consumption and emissions based on the model. The proposed method is able to calculate the control parameters efficiently to optimize the evaluation items based on the model. Finally, the model combined with PSO was compiled on a microcontroller.

  3. Function Optimization and Parameter Performance Analysis Based on Gravitation Search Algorithm

    Directory of Open Access Journals (Sweden)

    Jie-Sheng Wang

    2015-12-01

    Full Text Available The gravitational search algorithm (GSA) is a swarm intelligence optimization algorithm based on the law of gravitation. For all swarm intelligence optimization algorithms, parameter initialization has an important influence on the global optimization ability. From the basic principle of GSA, its convergence rate is determined by the gravitational constant and the acceleration of the particles. The optimization performance is verified by simulation experiments on six typical test functions. The simulation results show that the convergence speed of the GSA algorithm is relatively sensitive to the parameter settings, and that the GSA parameters can be used flexibly to improve the algorithm's convergence velocity and the accuracy of its solutions.
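
    A minimal GSA sketch shows the two quantities the abstract singles out: the gravitational constant, decayed here as G0·exp(−αt/T), and the per-particle accelerations derived from fitness-based masses. G0 = 100 and α = 20 are common defaults from the GSA literature, not necessarily the paper's settings.

```python
import math
import random

def gsa(cost, bounds, n_agents=20, iters=100, G0=100.0, alpha=20.0, seed=1):
    """Minimal gravitational search algorithm for minimization."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_agents)]
    V = [[0.0] * dim for _ in range(n_agents)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fit = [cost(x) for x in X]
        for i, f in enumerate(fit):
            if f < best_f:
                best_f, best_x = f, X[i][:]
        b, w = min(fit), max(fit)
        # Map fitness to normalized masses (the best agent is heaviest).
        m = [1.0] * n_agents if b == w else [(f - w) / (b - w) for f in fit]
        total = sum(m)
        M = [mi / total for mi in m]
        G = G0 * math.exp(-alpha * t / iters)   # decaying gravitational constant
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                dist = math.dist(X[i], X[j]) + 1e-12
                for d in range(dim):
                    acc[d] += rng.random() * G * M[j] * (X[j][d] - X[i][d]) / dist
            for d in range(dim):
                V[i][d] = rng.random() * V[i][d] + acc[d]
                lo, hi = bounds[d]
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
    return best_x, best_f

sphere = lambda x: sum(v * v for v in x)   # classic test function, minimum 0
best, f_best = gsa(sphere, [(-5.0, 5.0)] * 2)
```

    Lowering α keeps G large for longer (more exploration); raising it freezes the swarm sooner (faster but riskier convergence), which is the sensitivity the abstract reports.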

  4. Prediction Model of Battery State of Charge and Control Parameter Optimization for Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Bambang Wahono

    2015-07-01

    Full Text Available This paper presents the construction of a battery state of charge (SOC) prediction model and an optimization method for the said model to appropriately control the number of parameters in compliance with the SOC as the battery output objective. The Research Centre for Electrical Power and Mechatronics, Indonesian Institute of Sciences, has tested its electric vehicle research prototype on the road, monitoring voltage, current, temperature, time, vehicle velocity, motor speed, and SOC during operation. Using these experimental data, the prediction model of battery SOC was built. A stepwise method accounting for multicollinearity was able to efficiently develop a battery prediction model that relates the multiple control parameters to characteristic values such as SOC. It was demonstrated that particle swarm optimization (PSO) successfully and efficiently calculated the optimal control parameters to optimize an evaluation item such as SOC based on the model.

  5. Optimization of machining parameters of turning operations based on multi performance criteria

    Directory of Open Access Journals (Sweden)

    N.K.Mandal

    2013-01-01

    Full Text Available The selection of optimum machining parameters plays a significant role in ensuring product quality, reducing manufacturing cost, and increasing productivity in computer-controlled manufacturing processes. Owing to the inherent complexity of the process, multi-objective optimization of turning has long been a challenging engineering issue. This study investigates multi-response optimization of the turning process for an optimal parametric combination yielding minimum power consumption, surface roughness, and frequency of tool vibration, using Grey relational analysis (GRA). A confirmation test is conducted for the optimal machining parameters to validate the result. Various turning parameters, such as spindle speed, feed, and depth of cut, are considered. Experiments are designed and conducted based on a full factorial design of experiments.
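
    Grey relational analysis as used in such studies can be sketched as follows: normalize each response, compute the deviation from the ideal sequence, convert it to a grey relational coefficient, and average into a grade per run. The runs and response values below are invented for illustration, all three responses are treated as smaller-the-better, and ζ = 0.5 is the customary distinguishing coefficient.

```python
import numpy as np

def grey_relational_grade(responses, zeta=0.5):
    """GRA for smaller-the-better responses.
    responses: (n_runs, n_criteria) array-like; returns one grade per run."""
    R = np.asarray(responses, float)
    # Normalize each column so the best (smallest) value maps to 1.
    norm = (R.max(axis=0) - R) / (R.max(axis=0) - R.min(axis=0))
    delta = 1.0 - norm                      # deviation from the ideal sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)               # grey relational grade

# Hypothetical turning runs: [power (kW), roughness Ra (um), vibration (Hz)]
runs = [[1.2, 3.1, 240],
        [0.9, 2.5, 260],
        [1.5, 2.0, 220],
        [1.0, 2.8, 200]]
grades = grey_relational_grade(runs)
best_run = int(np.argmax(grades))
```

    The run with the highest grade is the recommended parametric combination; with ζ = 0.5 every grade lies in [1/3, 1].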

  6. Parameters-tuning of PID controller for automatic voltage regulators using the African buffalo optimization.

    Science.gov (United States)

    Odili, Julius Beneoluchi; Mohmad Kahar, Mohd Nizam; Noraziah, A

    2017-01-01

    In this paper, an attempt is made to apply the African Buffalo Optimization (ABO) to tune the parameters of a PID controller for an effective Automatic Voltage Regulator (AVR). Existing metaheuristic tuning methods have proven quite successful, but there were observable areas needing improvement, especially the system's overshoot and steady-state error. Using the ABO algorithm, where each buffalo location in the herd is a candidate solution for the Proportional-Integral-Derivative parameters, was very helpful in addressing these two areas of concern. The encouraging results obtained from the simulation of PID controller parameter tuning using the ABO, when compared with the performance of Genetic Algorithm PID (GA-PID), Particle Swarm Optimization PID (PSO-PID), Ant Colony Optimization PID (ACO-PID), plain PID, Bacteria-Foraging Optimization PID (BFO-PID), etc., make ABO-PID a good addition for solving PID controller tuning problems using metaheuristics.
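
    The tuning loop can be sketched as below. A first-order plant and a plain random search stand in for the AVR model and the buffalo herd update (both are simplifications for illustration), and ITAE is one common cost that penalizes both overshoot and slow settling.

```python
import random

def step_itae(kp, ki, kd, tau=1.0, dt=0.01, t_end=5.0):
    """ITAE cost of the unit-step response of a PID loop around a
    first-order plant dy/dt = (u - y)/tau (explicit Euler simulation)."""
    y = integ = itae = t = 0.0
    prev_e = 1.0                         # avoids a derivative kick at t = 0
    while t < t_end:
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        y += dt * (u - y) / tau
        if abs(y) > 1e6:                 # unstable gains get an infinite cost
            return float("inf")
        prev_e = e
        t += dt
        itae += t * abs(e) * dt          # time-weighted absolute error
    return itae

# A plain random search over (Kp, Ki, Kd) stands in for the ABO herd update.
rng = random.Random(42)
gains = min(((rng.uniform(0, 5), rng.uniform(0, 2), rng.uniform(0, 1))
             for _ in range(300)), key=lambda g: step_itae(*g))
```

    Any metaheuristic (ABO, GA, PSO, ...) plugs into the same pattern: it only needs the scalar cost returned by the closed-loop simulation.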

  7. Optimal Design of Material and Process Parameters in Powder Injection Molding

    Science.gov (United States)

    Ayad, G.; Barriere, T.; Gelin, J. C.; Song, J.; Liu, B.

    2007-04-01

    The paper is concerned with optimization and parametric identification for the different stages of the Powder Injection Molding process, which consists first of injecting a powder mixture with a polymer binder and then of sintering the resulting powder part by solid state diffusion. The first part describes an original methodology for optimizing the process and geometry parameters of the injection stage, based on the combination of design of experiments and adaptive Response Surface Modeling. The second part describes the proposed identification strategy for the sintering stage, in which sintering parameters are identified from dilatometric curves and the sintering process is then optimized. The proposed approaches are applied to the optimization of material and process parameters for manufacturing a ceramic femoral implant, and are shown to give satisfactory results.
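
    The response-surface step can be sketched as a least-squares quadratic fit over the design-of-experiments points followed by solving for the stationary point of the fitted surface. The design points and response below are synthetic, and the adaptive refinement of the paper's method is omitted.

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Least-squares quadratic response surface in two factors:
    y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Hypothetical injection-stage DOE on a 5x5 grid; the synthetic response
# has its optimum at (1.0, -0.5).
pts = np.array([[a, b] for a in (-2, -1, 0, 1, 2)
                       for b in (-2, -1, 0, 1, 2)], float)
y = (pts[:, 0] - 1.0) ** 2 + 2.0 * (pts[:, 1] + 0.5) ** 2
beta = fit_quadratic_rsm(pts, y)

# Stationary point of the fitted surface: solve grad(y) = 0.
H = np.array([[2 * beta[3], beta[5]],
              [beta[5], 2 * beta[4]]])
x_opt = np.linalg.solve(H, -beta[1:3])
```

    In an adaptive scheme, new experiments would now be placed around `x_opt` and the surface refitted until convergence.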

  8. LONGEVITY IMPROVEMENT OF DRIVE TOOTHED BELTS USING METHOD FOR OPTIMIZATION OF TECHNOLOGICAL MANUFACTURING PROCESS PARAMETERS

    Directory of Open Access Journals (Sweden)

    A. G. Bakhanovich

    2006-01-01

    Full Text Available The impact of technological process parameters (pressing pressure, duration, and vulcanization temperature) on drive toothed belt longevity has been investigated. Optimum parameters of the technological process that improve belt service life have been determined. A methodology for determining the number of loading cycles of the belt teeth from the test duration and transmission parameters has been developed. The paper presents results of industrial life tests of drive toothed belts manufactured with the optimized technology.

  9. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization

    Directory of Open Access Journals (Sweden)

    Huanqing Cui

    2017-03-01

    Full Text Available Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' memory, computational constraints, and limited energy, particle swarm optimization has been widely applied to localization in wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters must be chosen carefully to achieve the best performance; however, there is a lack of guidance on how to choose them, and no comprehensive performance comparison among particle swarm optimization algorithms exists. The contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection for nine particle swarm optimization variants and six types of swarm topologies through extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that particle swarm optimization with constriction coefficient using a ring topology outperforms the other variants and swarm topologies, and that it performs better than the second-order cone programming algorithm.
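
    The winning configuration the abstract names, constriction-coefficient PSO on a ring (lbest) topology, can be sketched as follows. The constriction factor χ ≈ 0.7298 comes from Clerc's formula with c1 = c2 = 2.05; the three-anchor range-based objective is a toy stand-in for a sensor-network localization problem.

```python
import math
import random

def pso_ring(cost, bounds, n=20, iters=200, c1=2.05, c2=2.05, seed=3):
    """PSO with Clerc's constriction coefficient and a ring (lbest)
    topology: each particle is informed only by its two ring neighbours."""
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                      # personal bests
    pf = [cost(x) for x in X]
    for _ in range(iters):
        for i in range(n):
            # Local best among the ring neighbourhood {i-1, i, i+1}.
            nb = min((i - 1) % n, i, (i + 1) % n, key=lambda k: pf[k])
            for d in range(dim):
                V[i][d] = chi * (V[i][d]
                                 + c1 * rng.random() * (P[i][d] - X[i][d])
                                 + c2 * rng.random() * (P[nb][d] - X[i][d]))
                X[i][d] += V[i][d]
            f = cost(X[i])
            if f < pf[i]:
                pf[i], P[i] = f, X[i][:]
    g = min(range(n), key=lambda k: pf[k])
    return P[g], pf[g]

# Toy localization objective: squared range-error to three known anchors.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, a) for a in anchors]
cost = lambda p: sum((math.dist(p, a) - d) ** 2 for a, d in zip(anchors, dists))
est, err = pso_ring(cost, [(-20, 20), (-20, 20)])
```

    The ring topology slows the spread of the global best through the swarm, which is exactly what makes it more robust on multimodal localization landscapes.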

  10. Thermoeconomic optimization of heat recovery steam generators operating parameters for combined plants

    International Nuclear Information System (INIS)

    Casarosa, C.; Donatini, F.; Franco, A.

    2004-01-01

    The optimization of the heat recovery steam generator (HRSG) is particularly interesting for combined plant design, in order to maximise the work obtained in the vapour cycle. A detailed optimization of the HRSG is a very difficult problem depending on several variables. The first step is the optimization of the operating parameters: the number of pressure levels, the pressures, the mass flow ratio, and the inlet temperatures to the HRSG sections. The operating parameters can be determined by means of both thermodynamic and thermoeconomic analysis, minimising a suitable objective function by analytical or numerical mathematical methods. In the paper, the thermodynamic optimization is based on the minimization of exergy losses, while the thermoeconomic optimization is based on the minimization of the total HRSG cost, after reducing the costs of exergy losses and of installation to a common monetary base.

  11. Analysis of parameter estimation and optimization application of ant colony algorithm in vehicle routing problem

    Science.gov (United States)

    Xu, Quan-Li; Cao, Yu-Wei; Yang, Kun

    2018-03-01

    Ant Colony Optimization (ACO) is among the most widely used artificial intelligence algorithms at present. This study introduces the principle and mathematical model of the ACO algorithm for solving the Vehicle Routing Problem (VRP), designs a vehicle routing optimization model based on ACO, and develops a vehicle routing optimization simulation system in C++. Sensitivity analyses, estimations, and improvements of the three key parameters of ACO are then carried out. The results indicate that the ACO algorithm designed in this paper can efficiently solve the planning and optimization of the VRP, that different values of the key parameters have a significant influence on the performance and optimization effects of the algorithm, and that the improved algorithm is less prone to premature local convergence and shows good robustness.
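
    The three key parameters usually studied in such sensitivity analyses are α (pheromone weight), β (heuristic weight), and ρ (evaporation rate). A minimal Ant System on a symmetric TSP, a simplified stand-in for the full VRP with capacities, shows where each one enters; the values below are common defaults, not the paper's estimates.

```python
import math
import random

def aco_tsp(coords, n_ants=10, iters=50, alpha=1.0, beta=3.0, rho=0.5, seed=7):
    """Ant System on a symmetric TSP; alpha/beta/rho are the pheromone
    weight, heuristic (1/distance) weight, and evaporation rate."""
    rng = random.Random(seed)
    n = len(coords)
    d = [[math.dist(coords[i], coords[j]) or 1e-12 for j in range(n)]
         for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]            # uniform initial pheromone
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = [tau[i][j] ** alpha * (1.0 / d[i][j]) ** beta for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            length = sum(d[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        # Evaporation, then deposit proportional to tour quality.
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for length, tour in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len

# Eight customers on a circle: the optimal route visits them in angular order.
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
       for k in range(8)]
tour, length = aco_tsp(pts)
```

    Raising β makes the ants greedier; raising ρ forgets old pheromone faster; a larger α amplifies early reinforcement, which is the usual cause of premature convergence.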

  12. Automated Modal Parameter Estimation for Operational Modal Analysis of Large Systems

    DEFF Research Database (Denmark)

    Andersen, Palle; Brincker, Rune; Goursat, Maurice

    2007-01-01

    In this paper the problems of automatic modal parameter extraction and of handling the large amounts of data to be processed are considered. Two different approaches for obtaining the modal parameters automatically using OMA are presented: the Frequency Domain Decomposition (FDD) technique...

  13. The ESO-LV project - Automated parameter extraction for 16000 ESO/Uppsala galaxies

    NARCIS (Netherlands)

    Lauberts, Andris; Valentijn, Edwin A.

    1987-01-01

    A program to extract photometric and morphological parameters of the galaxies in the ESO/Uppsala survey (Lauberts and Valentijn, 1982) is discussed. The completeness and accuracy of the survey are evaluated and compared with other surveys. The parameters obtained in the program are listed.

  14. Assessing FPAR Source and Parameter Optimization Scheme in Application of a Diagnostic Carbon Flux Model

    Energy Technology Data Exchange (ETDEWEB)

    Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A

    2009-02-26

    The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization than when using parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.

  15. APPLICATION OF RANKING BASED ATTRIBUTE SELECTION FILTERS TO PERFORM AUTOMATED EVALUATION OF DESCRIPTIVE ANSWERS THROUGH SEQUENTIAL MINIMAL OPTIMIZATION MODELS

    Directory of Open Access Journals (Sweden)

    C. Sunil Kumar

    2014-10-01

    Full Text Available In this paper, we study the performance of various models for automated evaluation of descriptive answers, using rank-based feature selection filters for dimensionality reduction. We quantitatively determine the best among five rank-based feature selection techniques, namely the Chi squared filter, Information gain filter, Gain ratio filter, Relief filter, and Symmetrical uncertainty filter. We use Sequential Minimal Optimization with a polynomial kernel to build models, and we evaluate the models on parameters such as accuracy, time to build the model, Kappa, mean absolute error, and root mean squared error. Except for the Relief filter, the accuracies obtained with all filter-based models are at least 4% better than those obtained with no filter applied. The recorded accuracies are the same across the Chi squared, Information gain, Gain ratio, and Symmetrical uncertainty filters, so accuracy alone cannot determine the best filter; the time taken to build models, Kappa, mean absolute error, and root mean squared error played a major role in determining the effectiveness of the filters. The overall rank aggregation metric of the Symmetrical uncertainty filter is 45, better by 1 rank than that of the Information gain filter, its nearest contender, and better by 3, 6, and 112 ranks, respectively, than those of the Chi squared, Gain ratio, and Relief filters. Through these quantitative measurements, we conclude that Symmetrical uncertainty attribute evaluation is the overall best-performing rank-based feature selection algorithm for automated evaluation of descriptive answers.
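
    One of the ranking filters named above, information gain, can be sketched directly from its definition IG(Y; X) = H(Y) − H(Y|X). The toy answer-grading data (one informative feature, one noise feature) are invented for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - H(Y|X) for a discrete feature column."""
    n = len(labels)
    by_value = {}
    for x, y in zip(feature, labels):
        by_value.setdefault(x, []).append(y)
    cond = sum(len(ys) / n * entropy(ys) for ys in by_value.values())
    return entropy(labels) - cond

# Toy graded-answer data: feature 0 tracks the label, feature 1 is noise.
X = [[1, 0], [1, 1], [1, 0], [0, 1], [0, 0], [0, 1]]
y = ["pass", "pass", "pass", "fail", "fail", "fail"]
gains = [information_gain([row[j] for row in X], y) for j in range(2)]
ranking = sorted(range(2), key=lambda j: -gains[j])
```

    A rank-based filter simply keeps the top-k features of `ranking`; gain ratio and symmetrical uncertainty differ only in how this raw gain is normalized.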

  16. Use of multilevel modeling for determining optimal parameters of heat supply systems

    Science.gov (United States)

    Stennikov, V. A.; Barakhtenko, E. A.; Sokolov, D. V.

    2017-07-01

    The problem of finding optimal parameters of a heat-supply system (HSS) lies in ensuring the required throughput capacity of a heat network by determining pipeline diameters and the characteristics and locations of pumping stations. Effective methods for solving this problem, i.e., the method of stepwise optimization based on the concept of dynamic programming and the method of multicircuit optimization, were proposed in the context of the hydraulic circuit theory developed at the Melentiev Energy Systems Institute (Siberian Branch, Russian Academy of Sciences). These methods make it possible to determine optimal parameters of various types of piping systems owing to the flexible adaptability of the calculation procedure to the intricate nonlinear mathematical models describing the equipment used and the methods of its construction and operation. The new and most significant results achieved in developing methodological support and software for finding optimal parameters of complex heat supply systems are presented: a new procedure for solving the problem based on multilevel decomposition of the heat network model, which makes it possible to proceed from the initial problem to a set of interrelated, less cumbersome subproblems of reduced dimensionality; a new algorithm implementing the method of multicircuit optimization and focused on the calculation of a hierarchical model of a heat supply system; and the SOSNA software system for determining optimal parameters of intricate heat-supply systems, implementing the developed methodological foundation. The proposed procedure and algorithm make it possible to solve engineering problems of finding the optimal parameters of multicircuit heat supply systems of large (real) dimensionality, and are applied in solving urgent problems related to the optimal development and reconstruction of these systems.
The developed methodological foundation and software can be used for designing heat supply systems in the Central and the Admiralty regions in

  17. Application of Factorial Design for Gas Parameter Optimization in CO2 Laser Welding

    DEFF Research Database (Denmark)

    Gong, Hui; Dragsted, Birgitte; Olsen, Flemming Ove

    1997-01-01

    The effect of different gas process parameters involved in CO2 laser welding has been studied by applying two sets of three-level complete factorial designs. In this work 5 gas parameters (gas type, gas flow rate, gas blowing angle, gas nozzle diameter, and gas blowing point-offset) are optimized. The bead-on-plate welding specimens are evaluated by a number of quality characteristics, such as the penetration depth and the seam width. The significance of the gas parameters and their interactions is assessed based on the data found by the Analysis of Variance (ANOVA). This statistical methodology is proven to be a very useful tool for parameter optimization in the laser welding process. Keywords: CO2 laser welding, gas parameters, factorial design, Analysis of Variance.

  18. Multi-response optimization of Micro-EDM process parameters on AISI304 steel using TOPSIS

    Energy Technology Data Exchange (ETDEWEB)

    Manivannan, R.; Kumar, M. Pradeep [CEG, Anna University, Chennai (India)]

    2016-01-15

    The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is used to analyze the process parameters of micro-electrical discharge machining (micro-EDM) of AISI 304 steel with multi-performance characteristics. A Taguchi L27 experimental design is performed to obtain the optimal input parameters, including feed rate, current, pulse on time, and gap voltage. Several output responses, such as material removal rate, electrode wear rate, overcut, taper angle, and circularity at the entry and exit points, are analyzed to find the optimal conditions. Among all the investigated parameters, feed rate exerts the greatest influence on hole quality. ANOVA is employed to identify the contribution of each parameter. The optimal parameter setting is a feed rate of 4 μm/s, a current of 10 A, a pulse on time of 10 μs, and a gap voltage of 10 V. Scanning electron microscope analysis is conducted to examine hole quality. The experimental results indicate that the overall performance of the micro-EDM process is improved at the optimal parameter setting identified through TOPSIS.
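
    The TOPSIS ranking itself is compact enough to sketch in full: vector-normalize the decision matrix, weight it, find the ideal and anti-ideal solutions, and score each run by its relative closeness to the ideal. The run data and weights below are invented; MRR is treated as larger-the-better, wear and overcut as smaller-the-better.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    benefit[j] is True for larger-the-better criteria, False otherwise."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m)))
             for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]                         # weighted normalized matrix
    ideal = [(max if benefit[j] else min)(V[i][j] for i in range(m))
             for j in range(n)]
    worst = [(min if benefit[j] else max)(V[i][j] for i in range(m))
             for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((V[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((V[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))      # closeness coefficient
    return scores

# Hypothetical micro-EDM runs: [MRR, electrode wear, overcut]
runs = [[0.8, 0.05, 12.0],
        [1.1, 0.09, 15.0],
        [0.9, 0.04, 10.0]]
scores = topsis(runs, weights=[0.4, 0.3, 0.3], benefit=[True, False, False])
best = max(range(len(runs)), key=lambda i: scores[i])
```

    The run with the highest closeness coefficient is the recommended multi-response optimum; in a Taguchi L27 study the same scoring is applied to all 27 runs.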

  19. A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process

    Science.gov (United States)

    Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa

    2017-12-01

    High-density polyethylene (HDPE) pipes find versatile applicability for the transportation of water, sewage, and slurry from one place to another, and hence undergo tremendous pressure from the fluid carried. The present work entails optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology stresses a trial-and-error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters, which results in less-than-optimal settings. Hence, there arises a need to determine optimal process control parameters for the pipe extrusion process that can ensure robust pipe quality and process reliability. In the proposed optimization strategy, designed experiments (DoE) are conducted in which different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise ratio (S/N ratio) is applied, and the optimum values of the process control parameters are obtained as: a pushing zone temperature of 166 °C, a dimmer speed of 08 rpm, and a die head temperature of 192 °C. A confirmation experimental run is conducted to verify the analysis, and its results agree with the main experimental findings; the withstanding pressure shows a significant improvement from 0.60 to 1.004 MPa.
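
    Since withstanding pressure is a larger-the-better response, the Taguchi S/N ratio used to pick factor levels is S/N = −10·log₁₀(mean(1/y²)); the level whose replicates give the higher mean S/N wins. The replicated readings below are invented to show the computation, not taken from the paper.

```python
import math

def sn_larger_better(values):
    """Taguchi larger-the-better signal-to-noise ratio in dB:
    S/N = -10 * log10(mean(1 / y^2))."""
    return -10.0 * math.log10(sum(1.0 / v ** 2 for v in values) / len(values))

# Hypothetical replicated pressure readings (MPa) at two temperature levels.
level_166 = [0.98, 1.00, 1.02]
level_150 = [0.60, 0.62, 0.58]
sn_hi = sn_larger_better(level_166)
sn_lo = sn_larger_better(level_150)
```

    The reciprocal-square form rewards both a high mean and low scatter, which is why the S/N ratio, rather than the raw mean, drives the level selection.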

  20. Optimization of long range potential interaction parameters in ion mobility spectrometry

    Science.gov (United States)

    Wu, Tianyang; Derrick, Joseph; Nahin, Minal; Chen, Xi; Larriba-Andaluz, Carlos

    2018-02-01

    The problem of optimizing Lennard-Jones (L-J) potential parameters for collision cross section (CCS) calculations in ion mobility spectrometry has been undertaken. The experimental CCS of 16 small organic molecules containing carbon, hydrogen, oxygen, nitrogen, and fluorine in N2 was compared to numerical calculations using Density Functional Theory (DFT). CCS calculations were performed using the momentum transfer algorithm IMoS and a 4-6-12 potential without incorporating the ion-quadrupole potential. A ceteris paribus optimization method was used to optimize the intercept σ and potential well depth ε for the given atoms. This method yields important information that would otherwise remain concealed. Results show that the optimized L-J parameters are not necessarily unique, with intercept and well depth following an exponential relation along a line of minima. Similarly, the method shows that some molecules containing the atoms of interest may be ill-conditioned candidates for optimizing the L-J parameters. The final calculated CCSs for the chosen parameters differ by 1% on average from their experimental counterparts. This result conveys the notion that DFT calculations can indeed be used as candidates for CCS calculations and that effects such as the ion-quadrupole potential or diffuse scattering can be embedded into the L-J parameters without loss of accuracy but with a large increase in computational efficiency.
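
    For reference, the roles of σ (the intercept, where the potential crosses zero) and ε (the well depth) are easiest to see in the standard 12-6 Lennard-Jones form. The paper's 4-6-12 potential adds an attractive ion-induced-dipole r⁻⁴ term that is omitted in this sketch, and the numeric values are arbitrary.

```python
def lj_12_6(r, sigma, eps):
    """Standard 12-6 Lennard-Jones potential: zero at r = sigma,
    minimum of -eps at r = 2**(1/6) * sigma."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

sigma, eps = 3.5, 0.2          # arbitrary illustrative values
r_min = 2 ** (1 / 6) * sigma   # location of the potential well
```

    Because the CCS integrand depends on σ and ε jointly, many (σ, ε) pairs along a valley of the error surface reproduce the same cross section, which is the non-uniqueness the abstract describes.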

  1. Chaotic invasive weed optimization algorithm with application to parameter estimation of chaotic systems

    International Nuclear Information System (INIS)

    Ahmadi, Mohamadreza; Mojallali, Hamed

    2012-01-01

    Highlights: ► A new meta-heuristic optimization algorithm. ► Integration of invasive weed optimization and chaotic search methods. ► A novel parameter identification scheme for chaotic systems. - Abstract: This paper introduces a novel hybrid optimization algorithm that takes advantage of the stochastic properties of chaotic search and of the invasive weed optimization (IWO) method. To deal with the weaknesses of the conventional method, the proposed chaotic invasive weed optimization (CIWO) algorithm incorporates the capabilities of chaotic search methods. The functionality of the proposed optimization algorithm is investigated through several benchmark multi-dimensional functions. Furthermore, an identification technique for chaotic systems based on the CIWO algorithm is outlined and validated on several examples. The results demonstrate superior performance with respect to conventional methods.
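
    The hybrid idea can be sketched as follows: a standard IWO loop (fitter weeds spawn more seeds, scatter width shrinks over time, competitive exclusion trims the population) in which the seed scatter is driven by a chaotic logistic-map sequence instead of Gaussian noise. This is a simplified illustration, not the paper's exact scheme.

```python
import random

def logistic_map(z, r=4.0):
    """Chaotic logistic map on (0, 1); r = 4 is the fully chaotic regime."""
    return r * z * (1.0 - z)

def ciwo(cost, bounds, pop_max=15, iters=80, seeds_max=5, seed=5):
    """Simplified chaotic IWO: seed counts follow fitness rank, offspring
    are scattered by a chaotic sequence with a shrinking spread, and
    competitive exclusion keeps only the best pop_max plants."""
    rng = random.Random(seed)
    dim = len(bounds)
    z = 0.37                                   # chaotic state
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_max)]
    for t in range(iters):
        pop.sort(key=cost)
        sigma = (1.0 - t / iters) ** 2 + 1e-3  # shrinking scatter width
        children = []
        for rank, plant in enumerate(pop):
            for _ in range(max(seeds_max - rank, 0)):
                child = []
                for d in range(dim):
                    z = logistic_map(z)
                    lo, hi = bounds[d]
                    child.append(min(max(plant[d] + sigma * (2.0 * z - 1.0),
                                         lo), hi))
                children.append(child)
        pop = sorted(pop + children, key=cost)[:pop_max]   # exclusion
    return pop[0], cost(pop[0])

sphere = lambda x: sum(v * v for v in x)       # benchmark, minimum 0 at origin
best, f_best = ciwo(sphere, [(-5.0, 5.0)] * 2)
```

    The appeal of the chaotic driver is its ergodic, non-repeating coverage of (0, 1), which helps the scattered seeds avoid the clustering a poorly seeded pseudo-random generator can produce.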

  2. Error propagation of partial least squares for parameters optimization in NIR modeling

    Science.gov (United States)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-01

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, the number of latent variables, and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water content in corn and geniposide content in Gardenia was characterized by both type I and type II errors. For example, when the variable importance in projection (VIP), interval partial least squares (iPLS), and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55%, and 15%. The results demonstrate how, and to what extent, the different modeling parameters affect the error propagation of PLS for parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials established a robust process for developing PLS models for corn and Gardenia under the optimal modeling parameters, which can provide significant guidance for the selection of modeling parameters of other multivariate calibration models.

  3. Fault detection of feed water treatment process using PCA-WD with parameter optimization.

    Science.gov (United States)

    Zhang, Shirong; Tang, Qian; Lin, Yu; Tang, Yuling

    2017-05-01

    The feed water treatment process (FWTP) is an essential part of utility boilers, and fault detection is expected to improve its reliability. Classical principal component analysis (PCA) was applied to FWTPs in our previous work; however, the noise in the T2 and SPE statistics results in false detections and missed detections. In this paper, wavelet denoising (WD) is combined with PCA to form a new algorithm, PCA-WD, where WD is intentionally employed to deal with the noise. The parameter selection of PCA-WD is further formulated as an optimization problem, and PSO is employed to solve it. A FWTP sustaining two 1000 MW generation units in a coal-fired power plant is taken as a case study, and its operation data are collected for the verification study. The results show that the optimized WD is effective in restraining the noise in the T2 and SPE statistics, thereby improving the performance of the PCA-WD algorithm. Moreover, the parameter optimization enables PCA-WD to obtain its optimal parameters automatically rather than from individual experience. The optimized PCA-WD is further compared with classical PCA and sliding window PCA (SWPCA) in four cases: a bias fault, a drift fault, a broken line fault, and the normal condition. The advantages of the optimized PCA-WD over classical PCA and SWPCA are finally confirmed by the results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
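
    The two monitoring statistics named in the abstract can be sketched as follows: T² measures deviation inside the retained principal-component subspace, while SPE (Q) measures the residual outside it. The three-sensor data set is synthetic, and the wavelet-denoising and PSO layers of the paper are omitted.

```python
import numpy as np

def pca_monitor(train, n_pc):
    """Fit a PCA monitoring model on normal operating data; returns a
    function computing the T^2 and SPE (Q) statistics for a new sample."""
    mu, sd = train.mean(axis=0), train.std(axis=0)
    Z = (train - mu) / sd
    eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
    order = np.argsort(eigval)[::-1]
    lam = eigval[order][:n_pc]              # retained eigenvalues
    P = eigvec[:, order][:, :n_pc]          # retained loadings
    def stats(x):
        z = (np.asarray(x, float) - mu) / sd
        t = P.T @ z
        t2 = float(t @ (t / lam))           # Hotelling T^2 in the PC subspace
        resid = z - P @ t
        spe = float(resid @ resid)          # squared prediction error (Q)
        return t2, spe
    return stats

# Synthetic three-sensor process: sensors 1 and 2 are strongly correlated.
rng = np.random.default_rng(1)
base = rng.normal(0.0, 1.0, 500)
train = np.column_stack([base,
                         base + rng.normal(0.0, 0.1, 500),
                         rng.normal(0.0, 1.0, 500)])
stats = pca_monitor(train, n_pc=2)
_, spe_normal = stats(train[0])
_, spe_fault = stats([3.0, -3.0, 0.0])   # breaks the sensor-1/2 correlation
```

    A fault that violates the learned correlation structure barely moves T² but drives SPE far above its normal level, which is why both statistics are monitored together.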

  4. Software integration for automated stability analysis and design optimization of a bearingless rotor blade

    Science.gov (United States)

    Gunduz, Mustafa Emre

    Many government agencies and corporations around the world have found the unique capabilities of rotorcraft indispensable. Incorporating such capabilities into rotorcraft design poses extra challenges because it is a complicated multidisciplinary process. The concept of applying several disciplines to the design and optimization processes may not be new, but it does not currently seem to be widely accepted in industry. The reason for this might be the lack of well-known tools for realizing a complete multidisciplinary design and analysis of a product. This study aims to propose a method that enables engineers in some design disciplines to perform a fairly detailed analysis and optimization of a design using commercially available software as well as codes developed at Georgia Tech. The ultimate goal is when the system is set up properly, the CAD model of the design, including all subsystems, will be automatically updated as soon as a new part or assembly is added to the design; or it will be updated when an analysis and/or an optimization is performed and the geometry needs to be modified. Designers and engineers will be involved in only checking the latest design for errors or adding/removing features. Such a design process will take dramatically less time to complete; therefore, it should reduce development time and costs. The optimization method is demonstrated on an existing helicopter rotor originally designed in the 1960's. The rotor is already an effective design with novel features. However, application of the optimization principles together with high-speed computing resulted in an even better design. The objective function to be minimized is related to the vibrations of the rotor system under gusty wind conditions. The design parameters are all continuous variables. Optimization is performed in a number of steps. First, the most crucial design variables of the objective function are identified. 
With these variables, Latin Hypercube Sampling method is used
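
    The Latin Hypercube Sampling step the abstract breaks off at can be sketched as follows: each design variable's range is divided into equal strata, every stratum is sampled exactly once, and the per-dimension orders are shuffled independently. This is the standard LHS construction, not necessarily the exact implementation used in the thesis.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample: each dimension is split into n_samples equal
    strata, each stratum is drawn from once, in shuffled order."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        perm = list(range(n_samples))
        rng.shuffle(perm)                    # decorrelates the dimensions
        cols.append([lo + (hi - lo) * (p + rng.random()) / n_samples
                     for p in perm])
    return [[col[i] for col in cols] for i in range(n_samples)]

# Ten design points over two hypothetical rotor design variables.
samples = latin_hypercube(10, [(0.0, 1.0), (-5.0, 5.0)])
```

    Compared with plain random sampling, every one-dimensional projection of an LHS design is evenly spread, which is what makes it efficient for seeding surrogate-based design optimization.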

  5. Development and Application of a Tool for Optimizing Composite Matrix Viscoplastic Material Parameters

    Science.gov (United States)

    Murthy, Pappu L. N.; Naghipour Ghezeljeh, Paria; Bednarcyk, Brett A.

    2018-01-01

    This document describes a recently developed analysis tool that enhances the resident capabilities of the Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC), and its application. MAC/GMC is a composite material and laminate analysis software package developed at NASA Glenn Research Center. The primary focus of the current effort is to provide a graphical user interface (GUI) that helps users optimize highly nonlinear viscoplastic constitutive law parameters by fitting experimentally measured stress-strain responses under various thermo-mechanical conditions for braided composites. The tool was developed in the MATrix LABoratory (MATLAB) (The MathWorks, Inc., Natick, MA) programming language. The illustrative examples shown are for a specific braided composite system in which the matrix viscoplastic behavior is represented by a constitutive law described by seven parameters. The tool is general enough to fit any number of experimentally observed stress-strain responses of the material; the number of parameters to be optimized, as well as the importance given to each stress-strain response, are the user's choice. Three different optimization algorithms are included: (1) gradient-based optimization, (2) genetic algorithm (GA) based optimization, and (3) particle swarm optimization (PSO). The user can mix and match the three algorithms; for example, one can start the optimization with either (2) or (3) and then use the resulting solution for further fine tuning with (1). The secondary focus of this paper is to demonstrate the application of this tool to optimize/calibrate parameters for a nonlinear viscoplastic matrix in order to predict stress-strain curves (at the constituent and composite levels) at different rates, temperatures, and/or loading conditions utilizing the Generalized Method of Cells. After preliminary validation of the tool through comparison with experimental results, a detailed virtual parametric study is

  6. Moving Toward an Optimal and Automated Geospatial Network for CCUS Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Hoover, Brendan Arthur [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-05

    Modifications in the global climate are being driven by the anthropogenic release of greenhouse gases (GHG) including carbon dioxide (CO2) (Middleton et al. 2014). CO2 emissions have, for example, been directly linked to an increase in total global temperature (Seneviratne et al. 2016). Strategies that limit CO2 emissions—like CO2 capture, utilization, and storage (CCUS) technology—can greatly reduce emissions by capturing CO2 before it is released to the atmosphere. However, to date CCUS technology has not been developed at a large commercial scale despite several promising high profile demonstration projects (Middleton et al. 2015). Current CCUS research has often focused on capturing CO2 emissions from coal-fired power plants, but recent research at Los Alamos National Laboratory (LANL) suggests focusing CCUS CO2 capture research upon industrial sources might better encourage CCUS deployment. To further promote industrial CCUS deployment, this project builds off current LANL research by continuing the development of a software tool called SimCCS, which estimates a regional system of transport to inject CO2 into sedimentary basins. The goal of SimCCS, which was first developed by Middleton and Bielicki (2009), is to output an automated and optimal geospatial industrial CCUS pipeline that accounts for industrial source and sink locations by estimating a Delaunay triangle network which also minimizes topographic and social costs (Middleton and Bielicki 2009). Current development of SimCCS is focused on creating a new version that accounts for spatial arrangements that were not available in the previous version. This project specifically addresses the issue of non-unique Delaunay triangles by adding additional triangles to the network, which can affect how the CCUS network is calculated.

  7. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 (United States); Chen, Ken Chung [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Stomatology, National Cheng Kung University Medical College and Hospital, Tainan, Taiwan 70403 (China); Shen, Steve G. F.; Yan, Jin [Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People' s Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Lee, Philip K. M.; Chow, Ben [Hong Kong Dental Implant and Maxillofacial Centre, Hong Kong, China 999077 (China); Liu, Nancy X. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China 100050 (China); Xia, James J. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People' s Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul, 136701 (Korea, Republic of)

    2014-04-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  8. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    International Nuclear Information System (INIS)

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang; Chen, Ken Chung; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Liu, Nancy X.; Xia, James J.; Shen, Dinggang

    2014-01-01

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  9. "Body-In-The-Loop": Optimizing Device Parameters Using Measures of Instantaneous Energetic Cost.

    Science.gov (United States)

    Felt, Wyatt; Selinger, Jessica C; Donelan, J Maxwell; Remy, C David

    2015-01-01

    This paper demonstrates methods for the online optimization of assistive robotic devices such as powered prostheses, orthoses and exoskeletons. Our algorithms estimate the value of a physiological objective in real-time (with a body "in-the-loop") and use this information to identify optimal device parameters. To handle sensor data that are noisy and dynamically delayed, we rely on a combination of dynamic estimation and response surface identification. We evaluated three algorithms (Steady-State Cost Mapping, Instantaneous Cost Mapping, and Instantaneous Cost Gradient Search) with eight healthy human subjects. Steady-State Cost Mapping is an established technique that fits a cubic polynomial to averages of steady-state measures at different parameter settings. The optimal parameter value is determined from the polynomial fit. Using a continuous sweep over a range of parameters and taking into account measurement dynamics, Instantaneous Cost Mapping identifies a cubic polynomial more quickly. Instantaneous Cost Gradient Search uses a similar technique to iteratively approach the optimal parameter value using estimates of the local gradient. To evaluate these methods in a simple and repeatable way, we prescribed step frequency via a metronome and optimized this frequency to minimize metabolic energetic cost. This use of step frequency allows a comparison of our results to established techniques and enables others to replicate our methods. Our results show that all three methods achieve similar accuracy in estimating optimal step frequency. For all methods, the average error between the predicted minima and the subjects' preferred step frequencies was less than 1% with a standard deviation between 4% and 5%. Using Instantaneous Cost Mapping, we were able to reduce subject walking-time from over an hour to less than 10 minutes. While, for a single parameter, the Instantaneous Cost Gradient Search is not much faster than Steady-State Cost Mapping, the Instantaneous
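
    The Steady-State Cost Mapping procedure described above (average steady-state cost at several parameter settings, fit a cubic polynomial, take its interior minimum) can be sketched as follows. The quadratic cost bowl, noise level, and "preferred" step frequency of 110 steps/min below are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical metabolic-cost landscape: a noisy bowl whose minimum sits at
# an assumed preferred step frequency of 110 steps/min (illustrative only).
def measured_cost(freq):
    return 3.0 + 0.002 * (freq - 110.0) ** 2 + rng.normal(0.0, 0.02)

# Steady-State Cost Mapping: average repeated steady-state measurements at
# each parameter setting, fit a cubic, and locate the interior minimum.
freqs = np.linspace(90.0, 130.0, 9)
avg_cost = np.array([np.mean([measured_cost(f) for _ in range(30)])
                     for f in freqs])
coeffs = np.polyfit(freqs, avg_cost, 3)      # cubic fit to the averages
crit = np.roots(np.polyder(coeffs))          # critical points of the cubic
inside = [r.real for r in crit
          if abs(r.imag) < 1e-9 and 90.0 <= r.real <= 130.0]
optimal_freq = min(inside, key=lambda r: np.polyval(coeffs, r))
```

    With averaging over 30 samples per setting, the fitted minimum lands close to the assumed preferred frequency, mirroring the sub-1% errors reported in the abstract.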

  10. Optimization of Allelic Combinations Controlling Parameters of a Peach Quality Model.

    Science.gov (United States)

    Quilot-Turion, Bénédicte; Génard, Michel; Valsesia, Pierre; Memmah, Mohamed-Mahmoud

    2016-01-01

    Process-based models are effective tools to predict the phenotype of an individual in different growing conditions. Combined with a quantitative trait locus (QTL) mapping approach, it is then possible to predict the behavior of individuals with any combination of alleles. However, the number of simulations required to explore the realm of possibilities may become prohibitively large. Therefore, the use of an efficient optimization algorithm to intelligently explore the search space becomes imperative. The optimization algorithm has to solve a multi-objective problem, since the phenotypes of interest are usually a complex of traits, to identify the individuals with the best tradeoffs between those traits. In this study we proposed to unroll such a combined approach in the case of peach fruit quality described through three targeted traits, using a process-based model with seven parameters controlled by QTL. We compared a current approach based on the optimization of the values of the parameters with a more evolved way to proceed, which consists in the direct optimization of the alleles controlling the parameters. The optimization algorithm has been adapted to deal with both continuous and combinatorial problems. We compared the spaces of parameters obtained with different tactics and the phenotypes of the individuals resulting from random simulations and optimization in these spaces. The use of a genetic model enabled the restriction of the dimension of the parameter space toward more feasible combinations of parameter values, reproducing relationships between parameters as observed in a real progeny. The results of this study demonstrated the potential of such an approach to refine the solutions toward more realistic ideotypes. Perspectives of improvement are discussed.
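
    Identifying individuals with the best tradeoffs between traits is a Pareto-dominance filter at its core. A minimal sketch of that selection step, assuming all objectives are minimized and using made-up objective vectors (not the peach-quality data):

```python
def dominates(q, p):
    """q dominates p if it is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return (all(a <= b for a, b in zip(q, p))
            and any(a < b for a, b in zip(q, p)))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# e.g. three fruit-quality objectives to minimize (illustrative values)
candidates = [(1.0, 2.0, 3.0), (2.0, 1.0, 3.0),
              (2.0, 2.0, 3.5), (0.5, 3.0, 3.0)]
front = pareto_front(candidates)   # (2.0, 2.0, 3.5) is dominated and dropped
```

    A multi-objective optimizer such as the one adapted in the study keeps refining this non-dominated set over generations rather than collapsing the traits into one score.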

  11. A local segmentation parameter optimization approach for mapping heterogeneous urban environments using VHR imagery

    Science.gov (United States)

    Grippa, Tais; Georganos, Stefanos; Lennert, Moritz; Vanhuysse, Sabine; Wolff, Eléonore

    2017-10-01

    Mapping large heterogeneous urban areas using object-based image analysis (OBIA) remains challenging, especially with respect to the segmentation process. This could be explained both by the complex arrangement of heterogeneous land-cover classes and by the high diversity of urban patterns which can be encountered throughout the scene. In this context, using a single segmentation parameter to obtain satisfying segmentation results for the whole scene can be impossible. Nonetheless, it is possible to subdivide the whole city into smaller local zones, rather homogeneous according to their urban pattern. These zones can then be used to optimize the segmentation parameter locally, instead of using the whole image or a single representative spatial subset. This paper assesses the contribution of a local approach for the optimization of the segmentation parameter compared to a global approach. Ouagadougou, located in sub-Saharan Africa, is used as a case study. First, the whole scene is segmented using a single globally optimized segmentation parameter. Second, the city is subdivided into 283 local zones, homogeneous in terms of building size and building density. Each local zone is then segmented using a locally optimized segmentation parameter. Unsupervised segmentation parameter optimization (USPO), relying on an optimization function which tends to maximize both intra-object homogeneity and inter-object heterogeneity, is used to select the segmentation parameter automatically for both approaches. Finally, a land-use/land-cover classification is performed using the Random Forest (RF) classifier. The results reveal that the local approach outperforms the global one, especially by limiting confusions between buildings and their bare-soil neighbors.

  12. Evaluation and optimization of footwear comfort parameters using finite element analysis and a discrete optimization algorithm

    Science.gov (United States)

    Papagiannis, P.; Azariadis, P.; Papanikos, P.

    2017-10-01

    Footwear is subject to bending and torsion deformations that affect comfort perception. Following review of Finite Element Analysis studies of sole rigidity and comfort, a three-dimensional, linear multi-material finite element sole model for quasi-static bending and torsion simulation, overcoming boundary and optimisation limitations, is described. Common footwear materials properties and boundary conditions from gait biomechanics are used. The use of normalised strain energy for product benchmarking is demonstrated along with comfort level determination through strain energy density stratification. Sensitivity of strain energy against material thickness is greater for bending than for torsion, with results of both deformations showing positive correlation. Optimization for a targeted performance level and given layer thickness is demonstrated with bending simulations sufficing for overall comfort assessment. An algorithm for comfort optimization w.r.t. bending is presented, based on a discrete approach with thickness values set in line with practical manufacturing accuracy. This work illustrates the potential of the developed finite element analysis applications to offer viable and proven aids to modern footwear sole design assessment and optimization.

  13. Optimal parameters for the Green-Ampt infiltration model under rainfall conditions

    Directory of Open Access Journals (Sweden)

    Chen Li

    2015-06-01

    Full Text Available The Green-Ampt (GA) model is widely used in hydrologic studies as a simple, physically-based method to estimate infiltration processes. The accuracy of the model for applications under rainfall conditions (as opposed to initially ponded situations) has not been studied extensively. We compared calculated rainfall infiltration results for various soils obtained using existing GA parameterizations with those obtained by solving the Richards equation for variably saturated flow. Results provided an overview of GA model performance evaluated by means of a root-mean-square-error-based objective function across a large region in GA parameter space as compared to the Richards equation, which showed a need for seeking optimal GA parameters. Subsequent analysis enabled the identification of optimal GA parameters that provided a close fit with the Richards equation. The optimal parameters were found to substantially outperform the standard theoretical parameters, thus improving the utility and accuracy of the GA model for infiltration simulations under rainfall conditions. A sensitivity analysis indicated that the optimal parameters may change for some rainfall scenarios, but are relatively stable for high-intensity rainfall events.
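
    For reference, the classic ponded Green-Ampt relations that the study builds on can be evaluated in a few lines; the cumulative-infiltration equation is implicit and is solved here by fixed-point iteration. Parameter values in the test are textbook-style illustrations, not the paper's optimized parameters:

```python
import math

def green_ampt_cumulative(t, K, psi, dtheta, tol=1e-10):
    """Cumulative infiltration F(t) from the classic ponded Green-Ampt
    relation F - psi*dtheta*ln(1 + F/(psi*dtheta)) = K*t, solved by
    fixed-point iteration. K: saturated hydraulic conductivity,
    psi: wetting-front suction head, dtheta: soil moisture deficit."""
    pd = psi * dtheta
    F = max(K * t, 1e-9)                       # initial guess
    for _ in range(200):
        F_new = K * t + pd * math.log(1.0 + F / pd)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new
    return F

def infiltration_rate(F, K, psi, dtheta):
    """Instantaneous infiltration capacity f = K * (1 + psi*dtheta/F)."""
    return K * (1.0 + psi * dtheta / F)

# illustrative soil: K = 1.0 cm/h, psi = 10 cm, dtheta = 0.3
F_1h = green_ampt_cumulative(1.0, K=1.0, psi=10.0, dtheta=0.3)
```

    Under rainfall (rather than ponded) conditions the same relations apply only after ponding begins, which is why the standard theoretical parameters can perform poorly and fitted ones do better.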

  14. Automated property optimization via ab initio O(N) elongation method: Application to (hyper-)polarizability in DNA

    International Nuclear Information System (INIS)

    Orimoto, Yuuichi; Aoki, Yuriko

    2016-01-01

    An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between “choose-maximum” (choose a base pair giving the maximum β for each step) and “choose-minimum” (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer aided material design method as long as the numerical limitation of the FF method is taken into account.

  15. Comparative study for different statistical models to optimize cutting parameters of CNC end milling machines

    International Nuclear Information System (INIS)

    El-Berry, A.; El-Berry, A.; Al-Bossly, A.

    2010-01-01

    In machining operations, the quality of surface finish is an important requirement for many workpieces. Thus, it is very important to optimize cutting parameters for controlling the required manufacturing quality. The surface roughness parameter (Ra) of mechanical parts depends on the turning parameters during the turning process. In the development of predictive models, the cutting parameters of feed, cutting speed, and depth of cut are considered as model variables. For this purpose, this study focuses on comparing various machining experiments performed on a CNC vertical machining center; the workpieces were aluminum 6061. Multiple regression models are used to predict the surface roughness in the different experiments.

  16. GEMSFITS: Code package for optimization of geochemical model parameters and inverse modeling

    International Nuclear Information System (INIS)

    Miron, George D.; Kulik, Dmitrii A.; Dmytrieva, Svitlana V.; Wagner, Thomas

    2015-01-01

    Highlights: • Tool for generating consistent parameters against various types of experiments. • Handles a large number of experimental data and parameters (is parallelized). • Has a graphical interface and can perform statistical analysis on the parameters. • Tested on fitting the standard state Gibbs free energies of aqueous Al species. • Example on fitting interaction parameters of mixing models and thermobarometry. - Abstract: GEMSFITS is a new code package for fitting internally consistent input parameters of GEM (Gibbs Energy Minimization) geochemical–thermodynamic models against various types of experimental or geochemical data, and for performing inverse modeling tasks. It consists of the gemsfit2 (parameter optimizer) and gfshell2 (graphical user interface) programs, both accessing a NoSQL database, all developed with flexibility, generality, efficiency, and user friendliness in mind. The parameter optimizer gemsfit2 includes the GEMS3K chemical speciation solver (http://gems.web.psi.ch/GEMS3K), which features a comprehensive suite of non-ideal activity and equation-of-state models of solution phases (aqueous electrolyte, gas and fluid mixtures, solid solutions, (ad)sorption). The gemsfit2 code uses the robust open-source NLopt library for parameter fitting, which provides a selection between several nonlinear optimization algorithms (global, local, gradient-based), and supports large-scale parallelization. The gemsfit2 code can also perform comprehensive statistical analysis of the fitted parameters (basic statistics, sensitivity, Monte Carlo confidence intervals), thus supporting the user with powerful tools for evaluating the quality of the fits and the physical significance of the model parameters. The gfshell2 code provides menu-driven setup of optimization options (data selection, properties to fit and their constraints, measured properties to compare with computed counterparts, and statistics). The practical utility, efficiency, and

  17. Optimization of submerged arc welding process parameters using quasi-oppositional based Jaya algorithm

    International Nuclear Information System (INIS)

    Rao, R. Venkata; Rai, Dhiraj P.

    2017-01-01

    Submerged arc welding (SAW) is characterized as a multi-input process. Selection of optimum combination of process parameters of SAW process is a vital task in order to achieve high quality of weld and productivity. The objective of this work is to optimize the SAW process parameters using a simple optimization algorithm, which is fast, robust and convenient. Therefore, in this work a very recently proposed optimization algorithm named Jaya algorithm is applied to solve the optimization problems in SAW process. In addition, a modified version of Jaya algorithm with oppositional based learning, named “Quasi-oppositional based Jaya algorithm” (QO-Jaya) is proposed in order to improve the performance of the Jaya algorithm. Three optimization case studies are considered and the results obtained by Jaya algorithm and QO-Jaya algorithm are compared with the results obtained by well-known optimization algorithms such as Genetic algorithm (GA), Particle swarm optimization (PSO), Imperialist competitive algorithm (ICA) and Teaching learning based optimization (TLBO).
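
    The Jaya update rule referred to above is attractive precisely because it has no algorithm-specific tuning parameters: each candidate moves toward the current best solution and away from the current worst. A minimal sketch follows, with a generic sphere function standing in for the SAW weld-quality objective (the quasi-oppositional variant additionally seeds opposition-based candidates, which is omitted here):

```python
import random

def jaya(f, bounds, pop_size=20, iters=200, seed=0):
    """Basic Jaya: move toward the best and away from the worst candidate,
    accepting a trial only if it improves (greedy replacement)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(iters):
        scores = [f(x) for x in pop]
        best = pop[scores.index(min(scores))]
        worst = pop[scores.index(max(scores))]
        for i, x in enumerate(pop):
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                r1, r2 = rng.random(), rng.random()
                v = (x[j] + r1 * (best[j] - abs(x[j]))
                          - r2 * (worst[j] - abs(x[j])))
                trial.append(min(max(v, lo), hi))     # clamp to bounds
            if f(trial) < f(x):                       # greedy acceptance
                pop[i] = trial
    scores = [f(x) for x in pop]
    k = scores.index(min(scores))
    return pop[k], scores[k]

# stand-in objective: 2-D sphere function in place of the weld-quality model
best_x, best_val = jaya(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)
```

    In the paper the decision vector would hold the SAW process parameters (e.g. current, voltage, travel speed) and f the weld-quality model being optimized.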

  18. Thermoeconomic optimization of triple pressure heat recovery steam generator operating parameters for combined cycle plants

    Directory of Open Access Journals (Sweden)

    Mohammd Mohammed S.

    2015-01-01

    Full Text Available The aim of this work is to develop a method for optimization of the operating parameters of a triple pressure heat recovery steam generator. Two types of optimization were performed: (a) thermodynamic and (b) thermoeconomic. The purpose of the thermodynamic optimization is to maximize the efficiency of the plant. The objective selected for this purpose is minimization of the exergy destruction in the heat recovery steam generator (HRSG). The purpose of the thermoeconomic optimization is to decrease the production cost of electricity. Here, the total annual cost of the HRSG, defined as the sum of the annual values of the capital costs and the cost of the exergy destruction, is selected as the objective function. The optimal values of the most influential variables are obtained by minimizing the objective function while satisfying a group of constraints. The optimization algorithm is developed and tested on a case of a CCGT plant with complex configuration. Six operating parameters were subject to optimization: the pressures and pinch point temperatures of each of the three (high, intermediate, and low) pressure steam streams in the HRSG. The influence of these variables on the objective function and production cost is investigated in detail. The differences between the results of the thermodynamic and the thermoeconomic optimization are discussed.
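
    The thermoeconomic objective described, total annual cost as annualized capital plus the cost of destroyed exergy, reduces to a simple sum of two competing terms. A sketch with hypothetical names and numbers (the paper's cost correlations and operating data are not reproduced here):

```python
def total_annual_cost(capital_cost, crf, exergy_destruction_kw,
                      c_exergy_per_kwh, hours_per_year=8000.0):
    """Total annual cost of the HRSG: annualized capital cost (via a
    capital recovery factor crf) plus the annual cost of exergy destroyed
    in the heat exchangers. All argument values are illustrative."""
    annual_capital = capital_cost * crf
    annual_destruction = (exergy_destruction_kw
                          * c_exergy_per_kwh * hours_per_year)
    return annual_capital + annual_destruction

# A wider pinch-point temperature difference lowers heat-transfer area and
# capital cost but raises exergy destruction; the optimum balances the two.
cost = total_annual_cost(1.0e6, 0.1, 500.0, 0.02)
```

    The optimizer searches the six operating parameters (three pressures, three pinch points) subject to the plant constraints to minimize this total.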

  19. Optimization of submerged arc welding process parameters using quasi-oppositional based Jaya algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Rao, R. Venkata; Rai, Dhiraj P. [Sardar Vallabhbhai National Institute of Technology, Gujarat (India)

    2017-05-15

    Submerged arc welding (SAW) is characterized as a multi-input process. Selection of optimum combination of process parameters of SAW process is a vital task in order to achieve high quality of weld and productivity. The objective of this work is to optimize the SAW process parameters using a simple optimization algorithm, which is fast, robust and convenient. Therefore, in this work a very recently proposed optimization algorithm named Jaya algorithm is applied to solve the optimization problems in SAW process. In addition, a modified version of Jaya algorithm with oppositional based learning, named “Quasi-oppositional based Jaya algorithm” (QO-Jaya) is proposed in order to improve the performance of the Jaya algorithm. Three optimization case studies are considered and the results obtained by Jaya algorithm and QO-Jaya algorithm are compared with the results obtained by well-known optimization algorithms such as Genetic algorithm (GA), Particle swarm optimization (PSO), Imperialist competitive algorithm (ICA) and Teaching learning based optimization (TLBO).

  20. Multiscale analysis of the correlation of processing parameters on viscidity of composites fabricated by automated fiber placement

    Science.gov (United States)

    Han, Zhenyu; Sun, Shouzheng; Fu, Yunzhong; Fu, Hongya

    2017-10-01

    Viscidity is an important physical indicator for assessing the fluidity of resin, which helps the resin contact the fibers effectively and reduces manufacturing defects during the automated fiber placement (AFP) process. However, the effect of processing parameters on viscidity evolution during the AFP process is rarely studied. In this paper, viscidities at different scales are analyzed based on a multi-scale analysis method. Firstly, the viscous dissipation energy (VDE) within a meso-unit under different processing parameters is assessed by using the finite element method (FEM). According to a multi-scale energy transfer model, the meso-unit energy is used as the boundary condition for the microscopic analysis. Furthermore, the molecular structure of the micro-system is built by the molecular dynamics (MD) method, and viscosity curves are then obtained by integrating the stress autocorrelation function (SACF) with time. Finally, the correlation characteristics of processing parameters to viscosity are revealed by using the gray relational analysis method (GRAM). A group of processing parameters is identified that achieves stable viscosity and better fluidity of the resin.

  1. Parameter estimation of fractional-order chaotic systems by using quantum parallel particle swarm optimization algorithm.

    Directory of Open Access Journals (Sweden)

    Yu Huang

    Full Text Available Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and could be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic increases the calculation of each generation exponentially. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.
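
    Casting parameter estimation as an optimization problem means minimizing the squared error between observed and simulated system states. The sketch below uses plain PSO (the paper's QPPSO layers quantum rotation-angle updates on top of this basic inertia/cognitive/social scheme) and a short logistic-map trajectory as a cheap stand-in for a fractional-order chaotic system:

```python
import random

def pso(f, bounds, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Plain particle swarm optimization: inertia, cognitive, and social
    terms; positions clamped to the search bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = pbest[pval.index(min(pval))][:]
    for _ in range(iters):
        for i in range(n):
            for j, (lo, hi) in enumerate(bounds):
                vel[i][j] = (w * vel[i][j]
                             + c1 * rng.random() * (pbest[i][j] - pos[i][j])
                             + c2 * rng.random() * (g[j] - pos[i][j]))
                pos[i][j] = min(max(pos[i][j] + vel[i][j], lo), hi)
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
        g = pbest[pval.index(min(pval))][:]
    return g, min(pval)

# Estimation demo: recover the control parameter r of a logistic map (a
# stand-in for the fractional-order system) from a short observed trajectory.
def trajectory(r, x0=0.2, steps=5):
    xs, x = [], x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

observed = trajectory(3.7)          # "measurements" from the true parameter
estimate, err = pso(lambda p: sum((a - b) ** 2
                                  for a, b in zip(trajectory(p[0]), observed)),
                    [(3.0, 4.0)])
```

    For genuinely chaotic, longer trajectories the error landscape becomes rugged, which is exactly why the paper augments PSO with quantum-parallel updates.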

  2. Thermo-mechanical simulation and parameters optimization for beam blank continuous casting

    International Nuclear Information System (INIS)

    Chen, W.; Zhang, Y.Z.; Zhang, C.J.; Zhu, L.G.; Lu, W.G.; Wang, B.X.; Ma, J.H.

    2009-01-01

    The objective of this work is to optimize the process parameters of beam blank continuous casting in order to ensure high quality and productivity. A transient thermo-mechanical finite element model is developed to compute the temperature and stress profiles in beam blank continuous casting. By comparing the calculated data with the metallurgical constraints, the key factors causing defects of the beam blank can be found. Then, based on the subproblem approximation method, an optimization program is developed to search for the optimum cooling parameters. These optimum parameters make it possible to run the caster at maximum productivity and minimum cost, and to reduce defects. Online verification of this optimization has been put into practice, confirming its usefulness for controlling actual production.

  3. Numerical Parameter Optimization of the Ignition and Growth Model for HMX Based Plastic Bonded Explosives

    Science.gov (United States)

    Gambino, James; Tarver, Craig; Springer, H. Keo; White, Bradley; Fried, Laurence

    2017-06-01

    We present a novel method for optimizing parameters of the Ignition and Growth reactive flow (I&G) model for high explosives. The I&G model can yield accurate predictions of experimental observations. However, calibrating the model is a time-consuming task especially with multiple experiments. In this study, we couple the differential evolution global optimization algorithm to simulations of shock initiation experiments in the multi-physics code ALE3D. We develop parameter sets for HMX based explosives LX-07 and LX-10. The optimization finds the I&G model parameters that globally minimize the difference between calculated and experimental shock time of arrival at embedded pressure gauges. This work was performed under the auspices of the U.S. DOE by LLNL under contract DE-AC52-07NA27344. LLNS, LLC. LLNL-ABS-724898.
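
    The differential evolution step can be sketched in a few lines as a generic DE/rand/1/bin scheme. In the paper each objective evaluation is a full ALE3D shock-initiation simulation compared against gauge arrival times; a cheap analytic function stands in for that misfit here, and all control settings below are conventional defaults rather than the study's:

```python
import random

def differential_evolution(f, bounds, pop_n=20, F=0.8, CR=0.9,
                           iters=150, seed=2):
    """DE/rand/1/bin sketch: mutate with a scaled difference of two random
    members, crossover per-dimension, keep the trial if it is no worse."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_n)]
    vals = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_n):
            a, b, c = rng.sample([k for k in range(pop_n) if k != i], 3)
            j_rand = rng.randrange(dim)          # force one mutated dimension
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if j == j_rand or rng.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                trial.append(min(max(v, lo), hi))
            tv = f(trial)
            if tv <= vals[i]:                    # greedy selection
                pop[i], vals[i] = trial, tv
    k = vals.index(min(vals))
    return pop[k], vals[k]

# stand-in misfit: 3-D sphere function in place of the shock-arrival error
best_de, val_de = differential_evolution(
    lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
```

    Because DE only ranks candidate parameter sets, it tolerates the noisy, non-differentiable objectives produced by hydrocode simulations.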

  4. Optimization of TRPO process parameters for americium extraction from high level waste

    International Nuclear Information System (INIS)

    Chen Jing; Wang Jianchen; Song Chongli

    2001-01-01

    The numerical calculations for Am multistage fractional extraction by trialkyl phosphine oxide (TRPO) were verified by a hot test. High level waste (HLW) at 1750 L/t-U was used as the feed to the TRPO process. The analysis used a simple objective function to minimize the total waste content in the TRPO process streams. Some process parameters were optimized after the other parameters were selected. The optimal process parameters for Am extraction by TRPO are: 10 stages for extraction and 2 stages for scrubbing; a flow rate ratio of 0.931 for extraction and 4.42 for scrubbing; and nitric acid concentrations of 1.35 mol/L for the feed and 0.5 mol/L for the scrubbing solution. Finally, the nitric acid and Am concentration profiles in the optimal TRPO extraction process are given.

  5. Direct modeling parameter signature analysis and failure mode prediction of physical systems using hybrid computer optimization

    Science.gov (United States)

    Drake, R. L.; Duvoisin, P. F.; Asthana, A.; Mather, T. W.

    1971-01-01

    High-speed automated identification and design of dynamic systems, both linear and nonlinear, are discussed. Special emphasis is placed on developing hardware and techniques applicable to practical problems. The basic modeling experiment and new results are described. Using the improvements developed, successful identification of several systems, including a physical example as well as simulated systems, was obtained. The advantages of parameter signature analysis over signal signature analysis in go/no-go testing of operational systems were demonstrated. The feasibility of using these ideas for failure mode prediction in operating systems was also investigated. An improved digitally controlled nonlinear function generator was developed, debugged, and completely documented.

  6. A multicriteria framework with voxel-dependent parameters for radiotherapy treatment plan optimization

    International Nuclear Information System (INIS)

    Zarepisheh, Masoud; Uribe-Sanchez, Andres F.; Li, Nan; Jia, Xun; Jiang, Steve B.

    2014-01-01

    Purpose: To establish a new mathematical framework for radiotherapy treatment optimization with voxel-dependent optimization parameters. Methods: In the treatment plan optimization problem for radiotherapy, a clinically acceptable plan is usually generated by an optimization process with weighting factors or reference doses adjusted for a set of objective functions associated with the organs. Recent discoveries indicate that adjusting parameters associated with each voxel may lead to better plan quality, but the mathematical reasons behind this remain unclear. Furthermore, questions about the objective function selection and parameter adjustment needed to assure Pareto optimality, as well as the relationship between the optimal solutions obtained from the organ-based and voxel-based models, remain unanswered. To answer these questions, the authors establish in this work a new mathematical framework equipped with two theorems. Results: The new framework clarifies the different consequences of adjusting organ-dependent and voxel-dependent parameters for the treatment plan optimization of radiation therapy, as well as the impact of using different objective functions on plan quality and Pareto surfaces. 
The main discoveries are threefold: (1) While in the organ-based model the selection of the objective function has an impact on the quality of the optimized plans, this is no longer an issue for the voxel-based model, since the Pareto surface is independent of the objective function selection and the entire Pareto surface can be generated as long as the objective function satisfies certain mathematical conditions; (2) all Pareto solutions generated by the organ-based model with different objective functions are parts of a unique Pareto surface generated by the voxel-based model with any appropriate objective function; (3) a much larger Pareto surface is explored by adjusting voxel-dependent parameters than by adjusting organ-dependent parameters, possibly

  7. A multicriteria framework with voxel-dependent parameters for radiotherapy treatment plan optimization.

    Science.gov (United States)

    Zarepisheh, Masoud; Uribe-Sanchez, Andres F; Li, Nan; Jia, Xun; Jiang, Steve B

    2014-04-01

    To establish a new mathematical framework for radiotherapy treatment optimization with voxel-dependent optimization parameters. In the treatment plan optimization problem for radiotherapy, a clinically acceptable plan is usually generated by an optimization process with weighting factors or reference doses adjusted for a set of objective functions associated with the organs. Recent discoveries indicate that adjusting parameters associated with each voxel may lead to better plan quality, but the mathematical reasons behind this remain unclear. Furthermore, questions about the objective function selection and parameter adjustment needed to assure Pareto optimality, as well as the relationship between the optimal solutions obtained from the organ-based and voxel-based models, remain unanswered. To answer these questions, the authors establish in this work a new mathematical framework equipped with two theorems. The new framework clarifies the different consequences of adjusting organ-dependent and voxel-dependent parameters for the treatment plan optimization of radiation therapy, as well as the impact of using different objective functions on plan quality and Pareto surfaces. The main discoveries are threefold: (1) While in the organ-based model the selection of the objective function has an impact on the quality of the optimized plans, this is no longer an issue for the voxel-based model, since the Pareto surface is independent of the objective function selection and the entire Pareto surface can be generated as long as the objective function satisfies certain mathematical conditions; (2) all Pareto solutions generated by the organ-based model with different objective functions are parts of a unique Pareto surface generated by the voxel-based model with any appropriate objective function; (3) a much larger Pareto surface is explored by adjusting voxel-dependent parameters than by adjusting organ-dependent parameters, possibly allowing for the

  8. Zener Diode Compact Model Parameter Extraction Using Xyce-Dakota Optimization.

    Energy Technology Data Exchange (ETDEWEB)

    Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wilcox, Ian Zachary [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sandoval, Andrew J [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reza, Shahed [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    This report presents a detailed process for compact model parameter extraction for DC circuit Zener diodes. Following the traditional approach to Zener diode parameter extraction, a circuit model representation is defined and then used to capture the different operational regions of a real diode's electrical behavior. The circuit model contains 9 parameters represented by resistors and characteristic diodes as circuit model elements. The process of initial parameter extraction, the identification of parameter values for the circuit model elements, is presented in a way that isolates the dependencies between certain electrical parameters and highlights both the empirical nature of the extraction and the portions of the real diode's physical behavior that the parameters are intended to represent. Optimization of the parameters, a necessary part of a robust parameter extraction process, is demonstrated using a 'Xyce-Dakota' workflow, discussed in more detail in the report. Among the realizations from this systematic approach to electrical model parameter extraction is that non-physical solutions are possible and can be difficult to avoid because of the interdependencies between the different parameters. The process steps described are fairly general and can be leveraged for other types of semiconductor device model extraction. Also included in the report are recommendations for experimental setups for generating optimal datasets for model extraction, and the Parameter Identification and Ranking Table (PIRT) for Zener diodes.

  9. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    Science.gov (United States)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    Forecasting skills of complex weather and climate models have been improved by tuning the sensitive parameters that exert the greatest impact on simulated results, using increasingly effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary is a scientific question deserving study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to improve summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from six years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area, because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions. 
The optimal values of the nine parameters were determined from 127 parameter samples using the ASMO method, showing that the ASMO method is highly efficient for optimizing WRF

  10. Parameter Optimization of Single-Diode Model of Photovoltaic Cell Using Memetic Algorithm

    Directory of Open Access Journals (Sweden)

    Yourim Yoon

    2015-01-01

    Full Text Available This study proposes a memetic approach for optimally determining the parameter values of the single-diode-equivalent solar cell model. The memetic algorithm, which combines metaheuristic and gradient-based techniques, has the merit of good performance in both global and local searches. First, 10 single algorithms were considered, including genetic algorithm, simulated annealing, particle swarm optimization, harmony search, differential evolution, cuckoo search, the least squares method, and pattern search; their final solutions were then used as initial vectors for the generalized reduced gradient technique. With this memetic approach, the accuracy of the estimated solar cell parameters could be further improved compared with single-algorithm approaches.
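
    The two-stage memetic idea, a metaheuristic seeding a gradient-based refinement, can be sketched under simplifying assumptions: a zero-series-resistance single-diode law (so the current is explicit), synthetic noiseless data, log10(Is) fitted for numerical scaling, and L-BFGS-B standing in for the generalized reduced gradient step used in the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

VT = 0.02585  # thermal voltage at roughly 300 K (V)

# Simplified single-diode law, I = Iph - Is*(exp(V/(n*VT)) - 1); series and
# shunt resistance are neglected so the current is an explicit function.
def diode_current(V, iph, log_is, n):
    return iph - 10.0 ** log_is * (np.exp(V / (n * VT)) - 1.0)

V = np.linspace(0.0, 0.6, 30)
I_meas = diode_current(V, 3.0, -9.0, 1.3)       # synthetic I-V curve

def sse(p):
    return np.sum((diode_current(V, *p) - I_meas) ** 2)

bounds = [(0.0, 10.0), (-12.0, -6.0), (1.0, 2.0)]

# Stage 1: global metaheuristic search (differential evolution).
seed_point = differential_evolution(sse, bounds, seed=2).x

# Stage 2: gradient-based local refinement of the metaheuristic's result.
polished = minimize(sse, seed_point, method="L-BFGS-B", bounds=bounds)
iph, log_is, n = polished.x                     # near (3.0, -9.0, 1.3)
```

Any of the metaheuristics listed in the abstract could supply `seed_point`; only stage 1 changes.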

  11. An approach to design controllers for MIMO fractional-order plants based on parameter optimization algorithm.

    Science.gov (United States)

    Xue, Dingyü; Li, Tingxue

    2017-04-27

    The parameter optimization method for multivariable systems is extended to the controller design problems for multiple input multiple output (MIMO) square fractional-order plants. The algorithm can be applied to search for the optimal parameters of integer-order controllers for fractional-order plants with or without time delays. Two examples are given to present the controller design procedures for MIMO fractional-order systems. Simulation studies show that the integer-order controllers designed are robust to plant gain variations. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Warpage minimization on wheel caster by optimizing process parameters using response surface methodology (RSM)

    Science.gov (United States)

    Safuan, N. S.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.

    2017-09-01

    In the injection moulding process, it is important to raise productivity steadily while minimizing waste such as warpage defects. This study therefore concerns minimizing the warpage defect on a wheel caster part. Apart from reducing product waste, this work also demonstrates optimization using response surface methodology. Five parameters were studied: A-packing pressure, B-packing time, C-mold temperature, D-melting temperature and E-cooling time. The optimization showed that packing pressure is the most significant parameter. Warpage was improved by 42.64%, from 0.6524 mm to 0.3742 mm.
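
    A minimal sketch of the response-surface step, with made-up two-factor warpage data (not the study's five factors or measurements): fit a second-order polynomial to designed-experiment results, then solve for the stationary point of the fitted quadratic.

```python
import numpy as np

# Hypothetical two-factor warpage data on a 5x5 design grid (coded units);
# the true response is a quadratic bowl with its minimum at (0.3, -0.2).
rng = np.random.default_rng(3)
g1, g2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x1, x2 = g1.ravel(), g2.ravel()
w = (0.65 + 0.1 * (x1 - 0.3) ** 2 + 0.2 * (x2 + 0.2) ** 2
     + rng.normal(0.0, 0.005, x1.size))

# Fit the second-order response surface
# w ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
b, *_ = np.linalg.lstsq(X, w, rcond=None)

# Stationary point of the fitted quadratic: solve grad(w) = 0.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = np.array([b[1], b[2]])
x_opt = np.linalg.solve(H, -g)            # near (0.3, -0.2) in coded units
```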

  13. A Particle Swarm Optimization of Natural Ventilation Parameters in a Greenhouse with Continuous Roof Vents

    Directory of Open Access Journals (Sweden)

    Abdelhafid HASNI

    2009-03-01

    Full Text Available Although natural ventilation plays an important role in determining greenhouse climate, as defined by temperature, humidity and CO2 concentration, particularly in Mediterranean countries, little information and data are presently available on full-scale greenhouse ventilation mechanisms. In this paper, we present a new method for selecting parameters based on a particle swarm optimization (PSO) algorithm, which optimizes the choice of parameters by minimizing a cost function. The simulator was based on a published model with some minor modifications, as we were interested in the ventilation parameters. The cost function is defined by a reduced model that can be used to simulate and predict the greenhouse environment, along with the tuning methods used to compute its parameters. This study focuses on the dynamic behavior of the inside air temperature and humidity during ventilation. Our approach is validated by comparison with experimental results. Various experimental techniques were used to make full-scale measurements of the air exchange rate in a 400 m2 plastic greenhouse. The proposed model, with natural ventilation parameters optimized by particle swarm optimization, was compared with the measurement results.
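
    A minimal PSO of the kind used here can be sketched as follows; the two-parameter cost function is a hypothetical placeholder for the mismatch between the reduced greenhouse model's predictions and measured temperature and humidity.

```python
import numpy as np

# Minimal particle swarm optimizer over normalized parameter bounds [0, 1].
def cost(p):  # placeholder misfit with minimum at (0.7, 0.3)
    return (p[0] - 0.7) ** 2 + (p[1] - 0.3) ** 2

rng = np.random.default_rng(4)
n, dim, iters = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5                 # inertia, cognitive, social weights
pos = rng.uniform(0.0, 1.0, (n, dim))     # particle positions
vel = np.zeros((n, dim))
pbest = pos.copy()                        # per-particle best positions
pbest_val = np.apply_along_axis(cost, 1, pos)
gbest = pbest[pbest_val.argmin()].copy()  # swarm-wide best position

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    vals = np.apply_along_axis(cost, 1, pos)
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()
# gbest converges toward (0.7, 0.3)
```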

  14. Optimization and validation of automated hippocampal subfield segmentation across the lifespan.

    Science.gov (United States)

    Bender, Andrew R; Keresztes, Attila; Bodammer, Nils C; Shing, Yee Lee; Werkle-Bergner, Markus; Daugherty, Ana M; Yu, Qijing; Kühn, Simone; Lindenberger, Ulman; Raz, Naftali

    2018-02-01

    Automated segmentation of hippocampal (HC) subfields from magnetic resonance imaging (MRI) is gaining popularity, but automated procedures that afford high speed and reproducibility have yet to be extensively validated against the standard, manual morphometry. We evaluated the concurrent validity of an automated method for hippocampal subfields segmentation (automated segmentation of hippocampal subfields, ASHS; Yushkevich et al., ) using a customized atlas of the HC body, with manual morphometry as a standard. We built a series of customized atlases comprising the entorhinal cortex (ERC) and subfields of the HC body from manually segmented images, and evaluated the correspondence of automated segmentations with manual morphometry. In samples with age ranges of 6-24 and 62-79 years, 20 participants each, we obtained validity coefficients (intraclass correlations, ICC) and spatial overlap measures (dice similarity coefficient) that varied substantially across subfields. Anterior and posterior HC body evidenced the greatest discrepancies between automated and manual segmentations. Adding anterior and posterior slices for atlas creation and truncating automated output to the ranges manually defined by multiple neuroanatomical landmarks substantially improved the validity of automated segmentation, yielding ICC above 0.90 for all subfields and alleviating systematic bias. We cross-validated the developed atlas on an independent sample of 30 healthy adults (age 31-84) and obtained good to excellent agreement: ICC (2) = 0.70-0.92. Thus, with described customization steps implemented by experts trained in MRI neuroanatomy, ASHS shows excellent concurrent validity, and can become a promising method for studying age-related changes in HC subfield volumes. © 2017 Wiley Periodicals, Inc.

  15. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors

    Directory of Open Access Journals (Sweden)

    Jilin Zhang

    2017-09-01

    Full Text Available In order to exploit the distributed nature of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. This paper describes a parameter communication optimization strategy that balances the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors.

  16. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.

    Science.gov (United States)

    Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei

    2017-09-21

    In order to exploit the distributed nature of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. This paper describes a parameter communication optimization strategy that balances the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors.

  17. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors

    Science.gov (United States)

    Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei

    2017-01-01

    In order to exploit the distributed nature of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. This paper describes a parameter communication optimization strategy that balances the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors. PMID:28934163

  18. A New Method for Optimal Regularization Parameter Determination in the Inverse Problem of Load Identification

    Directory of Open Access Journals (Sweden)

    Wei Gao

    2016-01-01

    Full Text Available Based on the regularization method for the inverse problem of load identification, a new method for determining the optimal regularization parameter is proposed. Firstly, a quotient function (QF) is defined by taking the regularization parameter as a variable, based on the least squares solution of the minimization problem. Secondly, the quotient function method (QFM) is proposed to select the optimal regularization parameter based on quadratic programming theory. In employing the QFM, the behavior of the QF values with respect to different regularization parameters is taken into consideration. Finally, numerical and experimental examples are used to validate the performance of the QFM, with the generalized cross-validation (GCV) method and the L-curve method as comparison methods. The results indicate that the proposed QFM is adaptive to different measuring points, noise levels, and types of dynamic load.
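
    The trade-off that any selection rule (the proposed QFM, GCV, or the L-curve) must resolve can be illustrated with a standard Tikhonov example computed via the SVD; the Gaussian-blur operator, noise level, and lambda grid below are assumptions chosen for illustration, not the paper's load-identification setup.

```python
import numpy as np

# Toy ill-posed problem: recover x_true from b = A @ x_true + noise,
# where A is a (hypothetical) Gaussian-blur forward operator.
rng = np.random.default_rng(5)
npts = 40
i = np.arange(npts)
A = np.exp(-(((i[:, None] - i[None, :]) / 4.0) ** 2))
x_true = np.sin(np.pi * i / (npts - 1))
b = A @ x_true + rng.normal(0.0, 1e-2, npts)

# Tikhonov solution of min ||A x - b||^2 + lam^2 ||x||^2 via the SVD,
# using the filter factors s / (s^2 + lam^2).
U, s, Vt = np.linalg.svd(A)
beta = U.T @ b

def tikhonov(lam):
    return Vt.T @ (s / (s ** 2 + lam ** 2) * beta)

# Reconstruction error is U-shaped in lambda: too little regularization
# amplifies noise, too much over-smooths the solution.
lams = np.logspace(-6, 2, 9)
errors = [np.linalg.norm(tikhonov(lam) - x_true) for lam in lams]
best = int(np.argmin(errors))             # an interior lambda wins
```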

  19. Parameter estimation of a pulp digester model with derivative-free optimization strategies

    Science.gov (United States)

    Seiça, João C.; Romanenko, Andrey; Fernandes, Florbela P.; Santos, Lino O.; Fernandes, Natércia C. P.

    2017-07-01

    This work concerns parameter estimation in the context of mechanistic modelling of a pulp digester. The problem is cast as a box-bounded nonlinear global optimization problem in order to minimize the mismatch between the model outputs and the experimental data observed at a real pulp and paper plant. The MCSFilter and simulated annealing global optimization methods were used to solve the optimization problem. While the former took longer to converge to the global minimum, the latter terminated faster at a significantly higher value of the objective function and thus failed to find the global solution.

  20. Factorization and the synthesis of optimal feedback gains for distributed parameter systems

    Science.gov (United States)

    Milman, Mark H.; Scheid, Robert E.

    1990-01-01

    An approach based on Volterra factorization leads to a new methodology for the analysis and synthesis of the optimal feedback gain in the finite-time linear quadratic control problem for distributed parameter systems. The approach circumvents the need for solving and analyzing Riccati equations and provides a more transparent connection between the system dynamics and the optimal gain. The general results are further extended and specialized for the case where the underlying state is characterized by autonomous differential-delay dynamics. Numerical examples are given to illustrate the second-order convergence rate that is derived for an approximation scheme for the optimal feedback gain in the differential-delay problem.

  1. A parameter estimation for DC servo motor by using optimization process

    International Nuclear Information System (INIS)

    Arjoni Amir

    2010-01-01

    Modeling and simulation of DC servo motor parameters using Matlab Simulink software have been carried out. The objective of the DC servo motor parameter estimation is to obtain parameter values (B, La, Ra, Km, J) that are significant and can be used in the actuation process of control systems. In control system analysis, the DC servo motor is expressed by a transfer function so that it can be analyzed more quickly as an actuator component. To obtain the model parameters and initial conditions of the DC servo motor, modeling and simulation are then carried out in which the DC servo motor is combined with other components. Preliminary estimates of the DC servo motor parameters were taken from the manufacturer's data. These initial parameter values were used in an optimization process based on a nonlinear least squares algorithm, minimizing the cost function value so that significant DC servo motor parameter values were obtained. The results of the optimization process are B = 0.039881, J = 1.2608e-007, Km = 0.069648, La = 2.3242e-006 and Ra = 1.8837. (author)
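
    The nonlinear least-squares step can be sketched on a reduced motor model; here Km and Ra are treated as known datasheet values, the armature inductance La is neglected, and all numbers are illustrative rather than those of the motor in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Reduced DC servo model: with La neglected, the speed step response is
# first-order. Km and Ra are fixed; only J (inertia) and B (friction)
# are estimated from the (synthetic) measured response.
KM, RA, V_STEP = 0.07, 1.9, 12.0

def step_response(t, J, B):
    k = KM ** 2 / RA + B                  # effective damping
    w_ss = (KM * V_STEP / RA) / k         # steady-state speed (rad/s)
    return w_ss * (1.0 - np.exp(-k * t / J))

rng = np.random.default_rng(6)
t = np.linspace(0.0, 0.05, 200)
w_meas = step_response(t, 2e-4, 0.04) + rng.normal(0.0, 0.05, t.size)

# Nonlinear least squares: minimize the residual between model and data.
fit = least_squares(lambda p: step_response(t, *p) - w_meas,
                    x0=[1e-3, 0.1], bounds=([1e-6, 1e-3], [1e-2, 1.0]))
J_fit, B_fit = fit.x                      # near (2e-4, 0.04)
```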

  2. Optimization of the dressing parameters in cylindrical grinding based on a generalized utility function

    Science.gov (United States)

    Aleksandrova, Irina

    2016-01-01

    The existing studies concerning the dressing process focus on the major influence of the dressing conditions on the grinding response variables. However, the choice of dressing conditions is often based on the experience of qualified staff or on data from reference books. Optimal dressing parameters, which are only valid for particular dressing and grinding methods and conditions, are also used. This paper presents a methodology for optimizing the dressing parameters in cylindrical grinding. The generalized utility function has been chosen as the optimization criterion; it is a complex indicator determining the economic, dynamic and manufacturing characteristics of the grinding process. The developed methodology is implemented for the dressing of aluminium oxide grinding wheels using experimental diamond roller dressers with different grit sizes, made of medium- and high-strength synthetic diamonds of type ??32 and ??80. To solve the optimization problem, a model of the generalized utility function is created which reflects the complex impact of the dressing parameters. The model is built from the results of a comprehensive study and modeling of the grinding wheel lifetime, cutting ability, production rate and cutting forces during grinding. These are closely related to the dressing conditions (dressing speed ratio, radial in-feed of the diamond roller dresser and dress-out time), the ratio of diamond roller dresser grit size to grinding wheel grit size, the type of synthetic diamonds and the direction of dressing. Dressing parameters are determined for which the generalized utility function has a maximum and which guarantee an optimum combination of the lifetime and cutting ability of the abrasive wheels, the tangential cutting force magnitude and the production rate of the grinding process. The results obtained prove the possibility of control and optimization of grinding by selecting particular dressing

  3. A new framework for analysing automated acoustic species-detection data: occupancy estimation and optimization of recordings post-processing

    Science.gov (United States)

    Chambert, Thierry A.; Waddle, J. Hardin; Miller, David A.W.; Walls, Susan; Nichols, James D.

    2018-01-01

    The development and use of automated species-detection technologies, such as acoustic recorders, for monitoring wildlife are rapidly expanding. Automated classification algorithms provide a cost- and time-effective means to process information-rich data, but often at the cost of additional detection errors. Appropriate methods are necessary to analyse such data while dealing with the different types of detection errors. We developed a hierarchical modelling framework for estimating species occupancy from automated species-detection data. We explore the design and optimization of data post-processing procedures to account for detection errors and generate accurate estimates. Our proposed method accounts for both imperfect detection and false positive errors, and utilizes information about both the occurrence and abundance of detections to improve estimation. Using simulations, we show that our method provides much more accurate estimates than models ignoring the abundance of detections. The same findings are reached when we apply the methods to two real datasets on North American frogs surveyed with acoustic recorders. When false positives occur, estimator accuracy can be improved when a subset of detections produced by the classification algorithm is post-validated by a human observer. We use simulations to investigate the relationship between accuracy and effort spent on post-validation, and find that very accurate occupancy estimates can be obtained with as little as 1% of the data being validated. Automated monitoring of wildlife presents both opportunities and challenges. Our methods for analysing automated species-detection data help to meet key challenges unique to these data and will prove useful for many wildlife monitoring programs.

  4. Extending a multi-scale parameter regionalization (MPR) method by introducing parameter constrained optimization and flexible transfer functions

    Science.gov (United States)

    Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten

    2015-04-01

    A multi-scale parameter-estimation method, as presented by Samaniego et al. (2010), is implemented and extended for the conceptual hydrological model COSERO. COSERO is an HBV-type model specialized for alpine environments, but it has been applied over a wide range of basins all over the world (see Kling et al., 2014 for an overview). Within the methodology, available small-scale information (DEM, soil texture, land cover, etc.) is used to estimate the coarse-scale model parameters by applying a set of transfer functions (TFs) and subsequent averaging methods, whereby only the TF hyper-parameters are optimized against available observations (e.g. runoff data). The parameter regionalization approach was extended to allow for a more meta-heuristic handling of the transfer functions. The two main novelties are: 1. An explicit introduction of constraints into the parameter estimation scheme: the constraint scheme replaces invalid parts of the transfer-function solution space with valid solutions. It is inspired by applications in evolutionary algorithms and related to the combination of learning and evolution. This allows the consideration of physical and numerical constraints as well as the incorporation of a priori modeller experience into the parameter estimation. 2. Spline-based transfer functions: spline-based functions enable arbitrary forms of transfer functions. This is important because in many cases the general relationship between sub-grid information and parameters is known, but not the form of the transfer function itself. The contribution presents the results of and experiences with the adopted method and the introduced extensions. Simulations are performed for the pre-alpine/alpine Traisen catchment in Lower Austria. References: Samaniego, L., Kumar, R., Attinger, S. (2010): Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale, Water Resour. Res., doi: 10.1029/2008WR007327 Kling, H., Stanzel, P., Fuchs, M., and
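
    The spline-TF idea can be sketched in a few lines: knot values play the role of the optimizable hyper-parameters, the fitted spline maps fine-scale soil data to a fine-scale parameter field, and averaging upscales it to the model grid. Knot positions, values, and field sizes below are hypothetical.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Spline transfer function: the knot values are the hyper-parameters an
# optimizer would tune against runoff observations.
knots_x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # sand fraction
knots_y = np.array([0.1, 0.3, 0.35, 0.6, 0.9])    # tunable knot values
tf = PchipInterpolator(knots_x, knots_y)          # shape-preserving spline

rng = np.random.default_rng(7)
fine_sand = rng.uniform(0.0, 1.0, (8, 100))   # 8 model cells x 100 sub-pixels
fine_param = tf(fine_sand)                    # apply TF at the fine scale
coarse_param = fine_param.mean(axis=1)        # upscale by averaging
```

Because PCHIP interpolation is shape-preserving, monotonically increasing knot values already enforce a monotone transfer function, one simple way to realize the kind of constraint scheme described above.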

  5. A shot parameter specification subsystem for automated control of PBFA II accelerator shots

    International Nuclear Information System (INIS)

    Spiller, J.L.

    1987-01-01

    The author reports on the shot parameter specification subsystem (SPSS), an integral part of the automatic control system developed for the Particle Beam Fusion Accelerator II (PBFA II). This system has been designed to fully utilize the accelerator by tailoring shot parameters to the needs of the experimenters. The SPSS is the key to this flexibility. Automatic systems will be required on many pulsed power machines for the fastest turnaround, the highest reliability, and the most cost-effective operation. These systems will require the flexibility and the ease of use that are part of the SPSS. The author discusses how the PBFA II control system has proved to be an effective modular system, flexible enough to meet the demands of both the fast-track construction of PBFA II and the control needs of Hermes III. This system is expected to meet the demands of most future machine changes.

  6. A novel level set model with automated initialization and controlling parameters for medical image segmentation.

    Science.gov (United States)

    Liu, Qingyi; Jiang, Mingyan; Bai, Peirui; Yang, Guang

    2016-03-01

    In this paper, a level set model that does not require manually generating an initial contour or setting the controlling parameters is proposed for medical image segmentation. The contribution of this paper is threefold. First, we propose a novel adaptive mean shift clustering method based on global image information to guide the evolution of the level set. By simple threshold processing, the results of mean shift clustering can automatically and rapidly generate an initial contour for the level set evolution. Second, we devise several new functions to estimate the controlling parameters of the level set evolution based on the clustering results and image characteristics. Third, the reaction-diffusion method is adopted to supersede the distance regularization term of the RSF level set model, which improves the accuracy and speed of segmentation effectively with less manual intervention. Experimental results demonstrate the performance and efficiency of the proposed model for medical image segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Optimization of the Machining Parameters of LM6 Aluminium Alloy in CNC Turning Using the Taguchi Method

    Science.gov (United States)

    Arunkumar, S.; Muthuraman, V.; Baskaralal, V. P. M.

    2017-03-01

    Due to the widespread use of highly automated machine tools in industry, manufacturing requires reliable models and methods for the prediction of the output performance of machining processes. In the machining of parts, surface quality is one of the most frequently specified customer requirements. In order for manufacturers to maximize their gains from utilizing CNC turning, accurate predictive models for surface roughness must be constructed. The prediction of optimum machining conditions for a good surface finish plays an important role in process planning. This work deals with the study and development of a surface roughness prediction model for machining LM6 aluminium alloy. Two important tools used in parameter design are Taguchi orthogonal arrays and the signal-to-noise (S/N) ratio. Speed, feed, depth of cut and coolant are taken as process parameters at three levels. Taguchi's parameter design is employed here to perform the experiments based on the various levels of the chosen parameters. The statistical analysis yields the optimum combination of speed, feed, depth of cut and coolant for obtaining good surface roughness on the cylindrical components. The result obtained through Taguchi is confirmed with real-time experimental work.
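
    The smaller-the-better S/N ratio used for surface roughness in Taguchi analyses of this kind can be computed directly; the roughness replicates below are hypothetical, not the paper's measurements.

```python
import math

def sn_smaller_is_better(ys):
    """Taguchi signal-to-noise ratio for a 'smaller-the-better' response such
    as surface roughness: S/N = -10 * log10(mean(y_i^2)). The parameter
    combination whose runs give the highest S/N is preferred."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

# Hypothetical roughness replicates (um) for two parameter combinations:
run_a = [1.2, 1.3, 1.1]
run_b = [0.8, 0.9, 0.7]
best = max([("A", run_a), ("B", run_b)],
           key=lambda r: sn_smaller_is_better(r[1]))[0]
```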

  8. Parameter Estimation in Rainfall-Runoff Modelling Using Distributed Versions of Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Michala Jakubcová

    2015-01-01

    Full Text Available The presented paper provides an analysis of selected versions of the particle swarm optimization (PSO) algorithm. The tested versions of the PSO were combined with a shuffling mechanism, which splits the model population into complexes and performs distributed PSO optimization. One of them is a newly proposed PSO modification, APartW, which enhances global exploration and local exploitation in the parameter space during the optimization process through a new updating mechanism applied to the PSO inertia weight. The performances of the four selected PSO methods were tested on 11 benchmark optimization problems prepared for the CEC 2005 special session on single-objective real-parameter optimization. The results confirm that the new APartW PSO variant is comparable with the existing distributed PSO versions AdaptW and LinTimeVarW. The distributed PSO versions were developed for finding the solution of inverse problems related to the estimation of parameters of the hydrological model Bilan. The results of the case study, made on a selected set of 30 catchments obtained from the MOPEX database, show that the tested distributed PSO versions provide suitable estimates of the Bilan model parameters and thus can be used for solving related inverse problems during the calibration process of the studied water balance hydrological model.
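
    A minimal sketch of the PSO velocity update with a linearly decreasing inertia weight (the mechanism that variants such as LinTimeVarW and the proposed APartW modify) is shown below; the acceleration constants and the schedule are common textbook defaults, not the paper's settings, and the shuffling into complexes is omitted.

```python
import random

random.seed(1)

def pso_sphere(dim=2, n_particles=20, iters=200, w0=0.9, w1=0.4, c1=2.0, c2=2.0):
    """Minimal PSO minimizing the sphere function, with the inertia weight
    decreased linearly from w0 to w1 over the run."""
    f = lambda x: sum(v * v for v in x)
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]               # personal best positions
    g = min(P, key=f)[:]                # global best position
    for t in range(iters):
        w = w0 - (w0 - w1) * t / iters  # linearly decreasing inertia weight
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return f(g)
```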

  9. Optimization of control parameters of a hot/cold controller by means of Simplex-type methods

    Science.gov (United States)

    Porte, C.; Caron-Poussin, M.; Carot, S.; Couriol, C.; Moreno, M. Martin; Delacroix, A.

    1997-01-01

    This paper describes a hot/cold controller for regulating crystallization operations. The system was identified with a common method (the Broida method) and the parameters were obtained by the Ziegler-Nichols method. The paper shows that this empirical method only allows a qualitative approach to regulation and that, in some instances, the parameters obtained are unreliable and therefore cannot be used to cancel variations between the set point and the actual values. Optimization methods were used to determine the regulation parameters and solve this identification problem. It was found that the weighted centroid method was the best one. PMID:18924791
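
    The Broida identification yields a first-order-plus-dead-time model, from which the classic Ziegler-Nichols open-loop table gives the PID settings; the numbers in the test are hypothetical.

```python
def zn_pid(K, T, tau):
    """Ziegler-Nichols open-loop (reaction curve) PID settings from a
    first-order-plus-dead-time model: static gain K, time constant T and dead
    time tau, e.g. as identified with the Broida method. Classic table values:
    Kp = 1.2*T/(K*tau), Ti = 2*tau, Td = 0.5*tau."""
    return 1.2 * T / (K * tau), 2.0 * tau, 0.5 * tau
```

    The abstract's point is precisely that such table settings are only a qualitative starting point; the Simplex-type search then refines them.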

  10. Selecting Optimal Parameters of Random Linear Network Coding for Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Heide, J; Zhang, Qi; Fitzek, F H P

    2013-01-01

    This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant reduction in the number of transmitted packets can be achieved. However, NC introduces additional computations and potentially a non-negligible transmission overhead, both of which depend on the chosen coding parameters. Therefore it is necessary to consider the trade-off that these coding parameters...
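
    The coding step whose parameters are being traded off can be sketched for the binary field; generation size and field size are the knobs, and this GF(2) XOR encoder is a generic illustration, not the Rateless Deluge implementation.

```python
import random

random.seed(3)

def rlnc_encode(packets, n_coded):
    """Random linear network coding over GF(2): each coded packet is the XOR
    of a random subset of the source packets (the generation), and the binary
    coefficient vector travels with it as a header."""
    coded = []
    for _ in range(n_coded):
        coeffs = [random.randint(0, 1) for _ in packets]
        if not any(coeffs):
            coeffs[random.randrange(len(packets))] = 1  # avoid the zero vector
        payload = 0
        for c, pkt in zip(coeffs, packets):
            if c:
                payload ^= pkt          # addition in GF(2) is XOR
        coded.append((coeffs, payload))
    return coded
```

    A receiver recovers the generation once it has collected as many linearly independent coefficient vectors as there are source packets; larger fields lower the chance of dependent combinations at the cost of more computation.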

  11. APPLICATION OF EXCEL INFORMATION TECHNOLOGIES FOR SOLVING PROBLEMS ON OPTIMIZATION OF WIRE BRUSHING PARAMETERS

    Directory of Open Access Journals (Sweden)

    S. I. Romanchuk

    2010-01-01

    Full Text Available The paper considers an application of Excel information technologies for the optimization of the parameters of the wire brushing process, depending on the requirements for the quality of the parts' surface, which ensures their required operational characteristics.

  12. THE DETERMINATION OF THE OPTIMAL PARAMETERS OF THE BEARING ALLOYS MICROSTRUCTURE IN CONTACT FRICTION AREA

    Directory of Open Access Journals (Sweden)

    M. O. Kuzin

    2009-03-01

    Full Text Available The possibility of using simulation structure models and variational models of mechanics is shown for finding the quantity and size of the antifriction alloy phase with increased wear resistance. The numerical realization of the models shows that the optimal value of the structure parameters of babbitt B16 is 56% of the hardening SnSb phase with an average size of 47 μm.

  13. An analysis to optimize the process parameters of friction stir welded ...

    African Journals Online (AJOL)

    The friction stir welding (FSW) of steel is a challenging task. Experiments are conducted here, with a tool having a conical pin of 0.4mm clearance. The process parameters are optimized by using the Taguchi technique based on Taguchi's L9 orthogonal array. Experiments have been conducted based on three process ...

  14. Parameter optimization method for longitudinal vibration absorber of ship shaft system

    Directory of Open Access Journals (Sweden)

    LIU Jinlin

    2017-05-01

    Full Text Available The longitudinal vibration of the ship shaft system is one of the most important sources of hull stern vibration, and it can be effectively reduced by installing a longitudinal vibration absorber; in this way, the vibration and noise of ships can be brought under control. However, the parameters of longitudinal vibration absorbers have a great influence on the vibration characteristics of the shaft system. Therefore, a shafting test platform was studied as the object, a finite element model of it was built, and the relationship between longitudinal stiffness and longitudinal vibration in the shaft system was analyzed in a straight alignment state. Furthermore, a longitudinal damping model of the shaft system was built in which the parameters of the vibration absorber were non-dimensionalized, the weight of the vibration absorber was set as a constant, and an optimization algorithm was used to calculate the optimized stiffness and damping coefficient of the absorber. Finally, the longitudinal vibration frequency responses of the shafting test platform before and after optimizing the parameters of the longitudinal vibration absorber were compared. The results indicate that the longitudinal vibration of the shafting test platform was decreased effectively, which suggests that the approach could provide a theoretical foundation for the parameter optimization of longitudinal vibration absorbers.
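
    As a closed-form point of comparison for such a numerical stiffness/damping search, the classical Den Hartog tuning for a tuned absorber on an undamped primary system gives the optimum frequency ratio and damping ratio from the mass ratio alone; this is a textbook result, not the paper's algorithm.

```python
import math

def den_hartog(mu):
    """Den Hartog optimum for a tuned vibration absorber with mass ratio mu
    (absorber mass / primary mass): optimal absorber-to-primary frequency
    ratio f = 1/(1+mu) and damping ratio zeta = sqrt(3*mu / (8*(1+mu)^3))."""
    f_opt = 1.0 / (1.0 + mu)
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_opt, zeta_opt
```

    The optimum frequency ratio fixes the absorber stiffness once its mass is chosen, mirroring the constant-mass assumption in the abstract.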

  15. Modified temporal approach to meta-optimizing an extended Kalman filter's parameters

    CSIR Research Space (South Africa)

    Salmon, BP

    2014-07-01

    Full Text Available A meta-optimization approach has been proposed in the literature for setting the parameters of the non-linear Extended Kalman Filter (EKF) to rapidly and efficiently estimate the features for these triply modulated cosine functions using spatial...

  16. Optimization of CVD parameters for long ZnO NWs grown on ITO ...

    Indian Academy of Sciences (India)

    The optimization of chemical vapour deposition (CVD) parameters for long and vertically aligned (VA) ZnO nanowires (NWs) were investigated. Typical ZnO NWs as a single crystal grown on indium tin oxide (ITO)-coated glass substrate were successfully synthesized. First, the conducted side of ITO–glass substrate was ...

  17. Taguchi optimization of machining parameters in drilling of AISI D2 ...

    Indian Academy of Sciences (India)

    This study focused on using the Taguchi technique to optimize the process parameters in drilling of AISI D2 steel with carbide drills to minimize the surface roughness (Ra) and thrust forces (Ff). The drilling experiments were conducted on a CNC vertical machining centre according to the L18 experimental design. Uncoated ...

  18. Bounds on Entanglement Dimensions and Quantum Graph Parameters via Noncommutative Polynomial Optimization

    NARCIS (Netherlands)

    Gribling, Sander; de Laat, David; Laurent, Monique

    2017-01-01

    In this paper we study bipartite quantum correlations using techniques from tracial polynomial optimization. We construct a hierarchy of semidefinite programming lower bounds on the minimal entanglement dimension of a bipartite correlation. This hierarchy converges to a new parameter: the minimal

  19. Fast reactor parameter optimization taking into account changes in fuel charge type during reactor operation time

    International Nuclear Information System (INIS)

    Afrin, B.A.; Rechnov, A.V.; Usynin, G.B.

    1987-01-01

    The formulation and solution of an optimization problem for the parameters determining the layout of the central part of a sodium-cooled power reactor, taking into account possible changes in fuel charge type during the reactor operation time, are presented. The losses under a change of fuel composition type are estimated for two reactor modifications, providing for the minimum doubling time with oxide and carbide fuels, respectively.

  20. Optimization of Temperature Schedule Parameters on Heat Supply in Power-and-Heat Supply Systems

    Directory of Open Access Journals (Sweden)

    V. A. Sednin

    2009-01-01

    Full Text Available The paper considers problems concerning the optimization of the temperature schedule in district heating systems with steam-turbine thermal power stations having average initial steam parameters. It is shown that maintaining an optimum network water temperature makes it possible to increase the energy efficiency of heat supply due to additional systematic fuel savings.

  1. Optimization of CVD parameters for long ZnO NWs grown on ITO

    Indian Academy of Sciences (India)

    The optimization of chemical vapour deposition (CVD) parameters for long and vertically aligned (VA) ZnO nanowires (NWs) were investigated. Typical ZnO NWs as a single crystal grown on indium tin oxide (ITO)-coated glass substrate were successfully synthesized. First, the conducted side of ITO–glass substrate was ...

  2. Optimization of AVR Parameters of a Multi-machine Power System ...

    African Journals Online (AJOL)

    In this paper, a method for optimizing the parameters of Automatic Voltage Regulation (AVR) system installed on the generators of a multi-machine power system using Artificial Intelligence (AI) techniques is presented. Each AVR system is equipped with a PID (Proportional, Integral and Derivative) controller and a Power ...

  3. Process parameters optimization of needle-punched nonwovens for sound absorption application

    CSIR Research Space (South Africa)

    Mvubu, M

    2015-12-01

    Full Text Available This paper reports a study on the optimization of process parameters of needle-punched nonwoven fabrics for achieving maximum sound absorption by employing a Box-Behnken factorial design. The influence of fiber type, depth of needle penetration...

  4. Optimization of WEDM process parameters using deep cryo-treated Inconel 718 as work material

    Directory of Open Access Journals (Sweden)

    Bijaya Bijeta Nayak

    2016-03-01

    Full Text Available The present work proposes an experimental investigation and optimization of various process parameters during the taper cutting of deep cryo-treated Inconel 718 in the wire electrical discharge machining (WEDM) process. Taguchi's design of experiments is used to gather information regarding the process with a small number of experimental runs, considering six input parameters: part thickness, taper angle, pulse duration, discharge current, wire speed and wire tension. Since the traditional Taguchi method fails to optimize multiple performance characteristics, maximum deviation theory is applied to convert the multiple performance characteristics into an equivalent single performance characteristic. Due to the complexity and non-linearity involved in this process, a good functional relationship with reasonable accuracy between performance characteristics and process parameters is difficult to obtain. To address this issue, the present study proposes an artificial neural network (ANN) model to determine the relationship between input parameters and performance characteristics. Finally, the process model is optimized to obtain the best parametric combination by a new meta-heuristic approach known as the bat algorithm. The results show that the proposed method is an effective tool for the simultaneous optimization of performance characteristics during taper cutting in the WEDM process.
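
    The maximum-deviation weighting step mentioned above can be sketched as follows; the implementation is a generic illustration of the weighting idea, not the paper's exact formulation, and the input matrix is assumed already normalized.

```python
def max_deviation_weights(matrix):
    """Maximum deviation theory weighting: each performance characteristic
    (column) is weighted in proportion to the total absolute deviation of its
    normalized values across alternatives (rows), so characteristics that
    discriminate more between runs receive larger weights."""
    m = len(matrix[0])
    dev = [sum(abs(row[j] - other[j]) for row in matrix for other in matrix)
           for j in range(m)]
    total = sum(dev)
    return [d / total for d in dev]
```

    The weighted sum of the normalized characteristics then serves as the single equivalent performance characteristic.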

  5. 'Adaptive Importance Sampling for Performance Evaluation and Parameter Optimization of Communication Systems'

    NARCIS (Netherlands)

    Remondo Bueno, D.; Srinivasan, R.; Nicola, V.F.; van Etten, Wim; Tattje, H.E.P.

    2000-01-01

    We present new adaptive importance sampling techniques based on stochastic Newton recursions. Their applicability to the performance evaluation of communication systems is studied. Besides bit-error rate (BER) estimation, the techniques are used for system parameter optimization. Two system models

  6. Optimization of AVR Parameters of a Multi-machine Power System ...

    African Journals Online (AJOL)

    In this paper, a method for optimizing the parameters of Automatic Voltage Regulation (AVR) system installed on the generators of a multi-machine power system using Artificial Intelligence. (AI) techniques is presented. Each AVR system is equipped with a PID (Proportional, Integral and Derivative) controller and a Power ...

  7. Optimal parameters of dental ultrasonic instrument diamond coating for enamel removal

    Directory of Open Access Journals (Sweden)

    Yunn-Shiuan Liao

    2015-06-01

    Conclusion: The protrusion, shape, and density of the diamonds of an ultrasonic dental tip are significantly related to the material removal rate (MRR) of enamel, and the optimal combination of these parameters is obtained. Knowledge of the importance of these variables will help in more effective use of ultrasonic technology in dentistry.

  8. A New Method for Determining Optimal Regularization Parameter in Near-Field Acoustic Holography

    Directory of Open Access Journals (Sweden)

    Yue Xiao

    2018-01-01

    Full Text Available The Tikhonov regularization method is effective in stabilizing the reconstruction process of near-field acoustic holography (NAH) based on the equivalent source method (ESM), and the selection of the optimal regularization parameter is a key problem that determines the regularization effect. In this work, a new method for determining the optimal regularization parameter is proposed. The transfer matrix relating the source strengths of the equivalent sources to the measured pressures on the hologram surface is augmented by adding a fictitious point source with zero strength. The minimization of the norm of this fictitious point source strength is used as the criterion for choosing the optimal regularization parameter, since the reconstructed value should tend to zero. The original inverse problem of calculating the source strengths is thus converted into a univariate optimization problem, which is solved by a one-dimensional search technique. Two numerical simulations, with a point-driven simply supported plate and a pulsating sphere, are investigated to validate the performance of the proposed method by comparison with the L-curve method. The results demonstrate that the proposed method can determine the regularization parameter correctly and effectively for the reconstruction in NAH.
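
    The fictitious-source criterion can be illustrated on a toy equivalent-source problem; the matrix, the 'measured' pressures and the search grid below are made up for illustration, and plain Gaussian elimination stands in for whatever solver the authors used.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def tikhonov(G, p, lam):
    """Tikhonov-regularized least squares: min ||G q - p||^2 + lam ||q||^2,
    solved via the normal equations (G^T G + lam I) q = G^T p."""
    m, n = len(G), len(G[0])
    GtG = [[sum(G[k][i] * G[k][j] for k in range(m)) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Gtp = [sum(G[k][i] * p[k] for k in range(m)) for i in range(n)]
    return solve(GtG, Gtp)

# Augment the transfer matrix with one extra column for a fictitious point
# source whose true strength is zero, then pick the regularization parameter
# whose solution drives that strength closest to zero (the paper's criterion).
G = [[1.0, 0.5, 0.3],
     [0.4, 1.0, 0.6],
     [0.2, 0.3, 0.9]]          # last column: fictitious source
p = [1.51, 1.42, 0.52]         # 'measured' pressures (toy numbers with noise)
lams = [10.0 ** e for e in range(-8, 1)]
best = min(lams, key=lambda lam: abs(tikhonov(G, p, lam)[-1]))
```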

  9. Cellular Neural Networks: A genetic algorithm for parameters optimization in artificial vision applications

    International Nuclear Information System (INIS)

    Taraglio, S.; Zanela, A.

    1997-03-01

    An optimization method for some of the CNN's (Cellular Neural Network) parameters, based on evolutionary strategies, is proposed. The new class of feedback templates found is more effective in extracting features from the images that an autonomous vehicle acquires than those in the previous CNN literature.

  10. Global parameter optimization of a Mather-type plasma focus in the framework of the Gratton–Vargas two-dimensional snowplow model

    International Nuclear Information System (INIS)

    Auluck, S K H

    2014-01-01

    Dense plasma focus (DPF) is known to produce highly energetic ions, electrons and plasma environment which can be used for breeding short-lived isotopes, plasma nanotechnology and other material processing applications. Commercial utilization of DPF in such areas would need a design tool that can be deployed in an automatic search for the best possible device configuration for a given application. The recently revisited (Auluck 2013 Phys. Plasmas 20 112501) Gratton–Vargas (GV) two-dimensional analytical snowplow model of plasma focus provides a numerical formula for dynamic inductance of a Mather-type plasma focus fitted to thousands of automated computations, which enables the construction of such a design tool. This inductance formula is utilized in the present work to explore global optimization, based on first-principles optimality criteria, in a four-dimensional parameter-subspace of the zero-resistance GV model. The optimization process is shown to reproduce the empirically observed constancy of the drive parameter over eight decades in capacitor bank energy. The optimized geometry of plasma focus normalized to the anode radius is shown to be independent of voltage, while the optimized anode radius is shown to be related to capacitor bank inductance. (paper)

  11. Global parameter optimization of a Mather-type plasma focus in the framework of the Gratton-Vargas two-dimensional snowplow model

    Science.gov (United States)

    Auluck, S. K. H.

    2014-12-01

    Dense plasma focus (DPF) is known to produce highly energetic ions, electrons and plasma environment which can be used for breeding short-lived isotopes, plasma nanotechnology and other material processing applications. Commercial utilization of DPF in such areas would need a design tool that can be deployed in an automatic search for the best possible device configuration for a given application. The recently revisited (Auluck 2013 Phys. Plasmas 20 112501) Gratton-Vargas (GV) two-dimensional analytical snowplow model of plasma focus provides a numerical formula for dynamic inductance of a Mather-type plasma focus fitted to thousands of automated computations, which enables the construction of such a design tool. This inductance formula is utilized in the present work to explore global optimization, based on first-principles optimality criteria, in a four-dimensional parameter-subspace of the zero-resistance GV model. The optimization process is shown to reproduce the empirically observed constancy of the drive parameter over eight decades in capacitor bank energy. The optimized geometry of plasma focus normalized to the anode radius is shown to be independent of voltage, while the optimized anode radius is shown to be related to capacitor bank inductance.

  12. Q-Learning Multi-Objective Sequential Optimal Sensor Parameter Weights

    Directory of Open Access Journals (Sweden)

    Raquel Cohen

    2016-04-01

    Full Text Available The goal of our solution is to deliver trustworthy decision-making analysis tools which evaluate situations and the potential impacts of such decisions through acquired information, and which add efficiency to continuing mission operations and analyst work. We discuss the use of cooperation in modeling and simulation and show quantitative results for design choices in resource allocation. The key contribution of our paper is to combine remote sensing decision making with Nash Equilibrium for sensor parameter weighting optimization. By calculating all Nash Equilibria per period, optimization of sensor allocation is achieved for higher overall system efficiency. Our tool provides insight into the most important or optimal weights for sensor parameters and can be used to tune those weights efficiently.
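
    For intuition, the pure-strategy Nash equilibria of a small bimatrix game can be found by direct enumeration; the payoff matrices in the test are the textbook prisoner's dilemma, not the paper's sensor-weighting game.

```python
from itertools import product

def pure_nash(A, B):
    """Enumerate pure-strategy Nash equilibria of a bimatrix game: (i, j) is
    an equilibrium iff i is a best response to j under the row player's
    payoffs A, and j is a best response to i under the column player's B."""
    eqs = []
    for i, j in product(range(len(A)), range(len(A[0]))):
        if A[i][j] == max(A[k][j] for k in range(len(A))) and \
           B[i][j] == max(B[i][l] for l in range(len(B[0]))):
            eqs.append((i, j))
    return eqs
```

    Enumerating the equilibria per period, as the abstract describes, then amounts to re-running this search as the payoffs (sensor utilities) change.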

  13. Parameter optimization of differential evolution algorithm for automatic playlist generation problem

    Science.gov (United States)

    Alamag, Kaye Melina Natividad B.; Addawe, Joel M.

    2017-11-01

    With the digitalization of music, the size of music collections has increased greatly, and there is a need to create lists of music that filter a collection according to user preferences, giving rise to the Automatic Playlist Generation Problem (APGP). Previous attempts to solve this problem include the use of search and optimization algorithms. If a music database is very large, the algorithm used must be able to search the lists thoroughly while taking into account the quality of the playlist given a set of user constraints. In this paper we apply an evolutionary meta-heuristic optimization algorithm, Differential Evolution (DE), using different combinations of parameter values, and select the best-performing set when used to solve four standard test functions. The performance of the proposed algorithm is then compared with a standard Genetic Algorithm (GA) and a hybrid GA with Tabu Search. Numerical simulations are carried out to show the better results obtained by the Differential Evolution approach with the optimized parameter values.
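
    A minimal DE/rand/1/bin loop of the kind whose parameters are tuned above looks as follows; F and CR are common textbook defaults, not the selected values, and the sphere function stands in for the standard test functions.

```python
import random

random.seed(7)

def de_sphere(dim=3, pop_size=15, iters=150, F=0.8, CR=0.9):
    """Minimal DE/rand/1/bin minimizing the sphere function. F (differential
    weight) and CR (crossover rate) are the parameters being tuned."""
    f = lambda x: sum(v * v for v in x)
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: three distinct donors, none equal to the target vector.
            a, b, c = random.sample([p for k, p in enumerate(pop) if k != i], 3)
            jr = random.randrange(dim)  # index with guaranteed crossover
            trial = [a[d] + F * (b[d] - c[d])
                     if d == jr or random.random() < CR else pop[i][d]
                     for d in range(dim)]
            if f(trial) <= f(pop[i]):   # greedy one-to-one selection
                pop[i] = trial
    return min(map(f, pop))
```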

  14. Combination of Compensations and Multi-Parameter Coil for Efficiency Optimization of Inductive Power Transfer System

    Directory of Open Access Journals (Sweden)

    Guozhen Hu

    2017-12-01

    Full Text Available A loosely coupled inductive power transfer (IPT system for industrial track applications has been researched in this paper. The IPT converter using primary Inductor-Capacitor-Inductor (LCL network and secondary parallel-compensations is analyzed combined coil design for optimal operating efficiency. Accurate mathematical analytical model and expressions of self-inductance and mutual inductance are proposed to achieve coil parameters. Furthermore, the optimization process is performed combined with the proposed resonant compensations and coil parameters. The results are evaluated and discussed using finite element analysis (FEA. Finally, an experimental prototype is constructed to verify the proposed approach and the experimental results show that the optimization can be better applied to industrial track distributed IPT system.

  15. Study on Parameter Optimization for Support Vector Regression in Solving the Inverse ECG Problem

    Science.gov (United States)

    Jiang, Mingfeng; Jiang, Shanshan; Zhu, Lingyan; Wang, Yaming; Huang, Wenqing; Zhang, Heng

    2013-01-01

    The typical inverse ECG problem is to noninvasively reconstruct the transmembrane potentials (TMPs) from body surface potentials (BSPs). In this study, the inverse ECG problem is treated as a regression problem with multiple inputs (body surface potentials) and multiple outputs (transmembrane potentials), which can be solved by the support vector regression (SVR) method. In order to obtain an effective SVR model with optimal regression accuracy and generalization performance, the hyperparameters of the SVR must be set carefully. Three different optimization methods, namely the genetic algorithm (GA), the differential evolution (DE) algorithm, and particle swarm optimization (PSO), are proposed to determine the optimal hyperparameters of the SVR model. In this paper, we investigate which one is the most effective in reconstructing the cardiac TMPs from BSPs, and a full comparison of their performances is provided. The experimental results show that all three optimization methods perform well in finding the proper parameters of the SVR and can yield good generalization performance in solving the inverse ECG problem. Moreover, compared with DE and GA, the PSO algorithm is more efficient in parameter optimization and performs better in solving the inverse ECG problem, leading to a more accurate reconstruction of the TMPs. PMID:23983808

  16. Optimal Estimation of Phenological Crop Model Parameters for Rice (Oryza sativa)

    Science.gov (United States)

    Sharifi, H.; Hijmans, R. J.; Espe, M.; Hill, J. E.; Linquist, B.

    2015-12-01

    Crop phenology models are important components of crop growth models. In the case of phenology models, generally only a few parameters are calibrated and default cardinal temperatures are used, which can lead to a temperature-dependent systematic phenology prediction error. Our objective was to evaluate different optimization approaches in the Oryza2000 and CERES-Rice phenology sub-models to assess the importance of optimizing cardinal temperatures for model performance and systematic error. We used two optimization approaches: the typical single-stage (planting to heading) and three-stage model optimization (for planting to panicle initiation (PI), PI to heading (HD), and HD to physiological maturity (MT)) to simultaneously optimize all model parameters. Data for this study were collected over three years and six locations on seven California rice cultivars. A temperature-dependent systematic error was found for all cultivars and stages; however, it was generally small (systematic error Oryza2000 and from 6.6 to 3.8 in CERES-Rice. With regards to systematic error, we found a trade-off between RMSE and systematic error when the optimization objective was set to minimize RMSE or systematic error. Therefore, it is important to find the limits within which the trade-offs between RMSE and systematic error are acceptable, especially in climate change studies, where this can prevent erroneous conclusions.

  17. Parameter uncertainties in the design and optimization of cantilever piezoelectric energy harvesters

    Science.gov (United States)

    Franco, V. R.; Varoto, P. S.

    2017-09-01

    A crucial issue in piezoelectric energy harvesting is the efficiency of the mechanical-to-electrical conversion process. Several techniques have been investigated in order to obtain a set of optimum design parameters that will lead to the best performance of the harvester in terms of electrical power generation. Once an optimum design is reached, it is also important to consider uncertainties in the selected parameters, which in turn can lead to a loss of performance in the energy conversion process. The main goal of this paper is to perform a comprehensive discussion of the effects of multi-parameter aleatory uncertainties on the performance and design optimization of a given energy harvesting system. For that, a typical energy harvester consisting of a cantilever beam carrying a tip mass and partially covered by piezoelectric layers on the top and bottom surfaces is considered. A distributed-parameter electromechanical model of the harvesting system is formulated and validated through experimental tests. First, SQP (Sequential Quadratic Programming) optimization is employed to obtain an optimum set of parameters that leads to the best performance of the harvester. Second, once the optimum harvester configuration is found, random perturbations are introduced in the key parameters and Monte Carlo simulations are performed to investigate how these uncertainties propagate and affect the performance of the device studied. Numerically simulated results indicate that small variations in some design parameters can cause a significant variation in the output electrical power, which strongly suggests that uncertainties must be accounted for in the design of beam energy harvesting systems.
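
    The Monte Carlo step (perturbing the optimized parameters and observing the spread of the output power) can be sketched with a toy resonant-power formula; the formula, the parameter names and all numbers are illustrative assumptions, not the paper's distributed-parameter model.

```python
import math
import random
import statistics

random.seed(42)

def harvested_power(k, zeta, m=0.01, Y=1.0, wn=2 * math.pi * 50):
    """Toy stand-in for a harvester model: average power of a base-excited
    resonant oscillator, P ~ k * m * Y^2 * wn^3 / (4 * zeta), where k lumps
    the electromechanical coupling and zeta is the damping ratio."""
    return k * m * Y ** 2 * wn ** 3 / (4.0 * zeta)

# Monte Carlo propagation: perturb the 'optimal' design by +/-5% uniform
# noise and observe the relative spread of the output power.
k0, zeta0 = 0.1, 0.02
samples = [harvested_power(k0 * random.uniform(0.95, 1.05),
                           zeta0 * random.uniform(0.95, 1.05))
           for _ in range(2000)]
spread = statistics.pstdev(samples) / statistics.mean(samples)
```

    Even with only +/-5% parameter noise, the relative spread of the output is a few percent, which is the qualitative effect the abstract reports.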

  18. Multi-objective optimization for an automated and simultaneous phase and baseline correction of NMR spectral data

    Science.gov (United States)

    Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus

    2018-04-01

    Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.

  19. Optimization of injection molding process parameters for a plastic cell phone housing component

    Science.gov (United States)

    Rajalingam, Sokkalingam; Vasant, Pandian; Khe, Cheng Seong; Merican, Zulkifli; Oo, Zeya

    2016-11-01

    Injection molding is one of the most widely used processes for producing thin-walled plastic items. However, setting optimal process parameters is difficult, as poor settings can produce faulty molded items with defects such as shrinkage. This study aims to determine optimum injection molding process parameters that reduce the shrinkage defect on a plastic cell phone cover. The currently used machine settings produced shrinkage, with length and width dimensions below the specification limit. Thus, further experiments were needed to identify optimum process parameters that keep the length and width closer to their target values with minimal variation. The mold temperature, injection pressure and screw rotation speed are used as process parameters in this research. Response Surface Methodology (RSM) is applied to find the optimal molding process parameters. The major contributing factors influencing the responses were identified through analysis of variance (ANOVA). Verification runs confirmed that the shrinkage defect can be minimized with the optimal settings found by RSM.

  20. Sensitivity of Calibrated Parameters and Water Resource Estimates on Different Objective Functions and Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Delaram Houshmand Kouchi

    2017-05-01

    Full Text Available The successful application of hydrological models relies on careful calibration and uncertainty analysis. However, there are many different calibration/uncertainty analysis algorithms, and each could be run with different objective functions. In this paper, we highlight the fact that each combination of optimization algorithm and objective function may lead to a different set of optimum parameters, while having the same performance; this makes the interpretation of dominant hydrological processes in a watershed highly uncertain. We used three different optimization algorithms (SUFI-2, GLUE, and PSO) and eight different objective functions (R2, bR2, NSE, MNS, RSR, SSQR, KGE, and PBIAS) in a SWAT model to calibrate the monthly discharges in two watersheds in Iran. The results show that all three algorithms, using the same objective function, produced acceptable calibration results; however, with significantly different parameter ranges. Similarly, an algorithm using different objective functions also produced acceptable calibration results, but with different parameter ranges. The different calibrated parameter ranges consequently resulted in significantly different water resource estimates. Hence, the parameters and the outputs that they produce in a calibrated model are “conditioned” on the choices of the optimization algorithm and objective function. This adds another level of non-negligible uncertainty to watershed models, calling for more attention and investigation in this area.
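
Two of the objective functions named above have compact closed forms; the following is a minimal sketch of NSE and the 2009 form of KGE, written from their standard textbook definitions rather than from any particular SWAT-CUP implementation:

```python
import math
import statistics

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance about the observed mean."""
    mean_obs = statistics.fmean(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    denom = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / denom

def kge(obs, sim):
    """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)."""
    mo, ms = statistics.fmean(obs), statistics.fmean(sim)
    so, ss = statistics.pstdev(obs), statistics.pstdev(sim)
    r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (len(obs) * so * ss)
    alpha = ss / so   # variability ratio
    beta = ms / mo    # bias ratio
    return 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

Both score a perfect simulation as 1; note how a pure bias of +1 unit leaves KGE's correlation and variability terms untouched but degrades NSE through the squared-error sum, which is one concrete way the choice of objective function "conditions" the calibrated parameters.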

  1. Model Optimization Identification Method Based on Closed-loop Operation Data and Process Characteristics Parameters

    Directory of Open Access Journals (Sweden)

    Zhiqiang GENG

    2014-01-01

    Full Text Available Output noise is strongly correlated with the input in a closed-loop control system, which makes closed-loop model identification difficult, and sometimes impossible in practice. The forward channel model is chosen to isolate the disturbance from the output noise to the input, and is identified by optimizing the dynamic characteristics of the process based on closed-loop operation data. The characteristic parameters of the process, such as dead time and time constant, are calculated and estimated from the PI/PID controller parameters and closed-loop process input/output data. These characteristic parameters are then used to define the search space of the optimization-based identification algorithm. A PSO-SQP optimization algorithm is applied, integrating the global search ability of PSO with the local search ability of SQP, to identify the model parameters of the forward channel. The validity of the proposed method has been verified by simulation. Its practicability is checked with PI/PID controller parameter tuning based on the identified forward channel model.

  2. Multi-parameter geometrical scaledown study for energy optimization of MTJ and related spintronics nanodevices

    Science.gov (United States)

    Farhat, I. A. H.; Alpha, C.; Gale, E.; Atia, D. Y.; Stein, A.; Isakovic, A. F.

    The scaledown of magnetic tunnel junctions (MTJ) and related nanoscale spintronics devices poses unique challenges for energy optimization of their performance. We demonstrate the dependence of the switching current on the scaledown variable, while considering the influence of geometric parameters of MTJ, such as the free layer thickness, tfree, lateral size of the MTJ, w, and the anisotropy parameter of the MTJ. At the same time, we point out which values of the saturation magnetization, Ms, and anisotropy field, Hk, can lead to lowering the switching current and overall decrease of the energy needed to operate an MTJ. It is demonstrated that scaledown via decreasing the lateral size of the MTJ, while allowing some other parameters to be unconstrained, can improve energy performance by a measurable factor, shown to be the function of both geometric and physical parameters above. Given the complex interdependencies among both families of parameters, we developed a particle swarm optimization (PSO) algorithm that can simultaneously lower energy of operation and the switching current density. Results we obtained in scaledown study and via PSO optimization are compared to experimental results. Support by Mubadala-SRC 2012-VJ-2335 is acknowledged, as are staff at Cornell-CNF and BNL-CFN.
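
The particle swarm optimizer used above can be sketched generically; the inertia and acceleration constants below are conventional textbook choices, and the quadratic test objective is only a stand-in for the paper's energy/switching-current cost, which the abstract does not specify:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=60, seed=7):
    """Minimal particle swarm optimizer (global-best topology).

    f: objective to minimize; bounds: [(lo, hi), ...] per dimension.
    Standard inertia + cognitive/social velocity update with clamping
    to the search box.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy quadratic standing in for the energy objective (minimum at (1, 2)).
best, val = pso_minimize(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2,
                         [(-5, 5), (-5, 5)])
```

In a real scaledown study, `f` would evaluate the switching current and operating energy from the geometric and material parameters (tfree, w, Ms, Hk) rather than this toy quadratic.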

  3. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    Science.gov (United States)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

    Support vector machine (SVM) is a popular classification method known for its strong generalization capability. SVM can solve both classification and regression problems, using either linear or nonlinear kernels. However, SVM also has a weakness: it is difficult to determine the optimal parameter values. SVM computes the best linear separator in the input feature space according to the training data. To classify data that are not linearly separable, SVM uses the kernel trick to transform the data into a linearly separable representation in a higher-dimensional feature space. The kernel trick uses various kinds of kernel functions, such as the linear, polynomial, radial basis function (RBF) and sigmoid kernels. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, genetic algorithms are proposed as the search algorithm for optimal parameter values, thus increasing the best classification accuracy of SVM. Data were taken from the UCI repository of machine learning databases: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selecting them. The best accuracy has been upgraded from linear kernel: 85.12%, polynomial: 81.76%, RBF: 77.22%, sigmoid: 78.70%. However, for bigger data sizes this method is not practical because it takes a lot of time.
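
The GA-over-kernel-parameters idea can be sketched in a few lines. To keep the sketch self-contained, the expensive cross-validated SVM training is replaced by a smooth toy surrogate with a known optimum; everything about the surrogate (its shape, its optimum at (2, -1)) is an assumption for illustration only:

```python
import random

def fitness(log_c, log_gamma):
    """Toy surrogate for cross-validated SVM accuracy.

    A real run would train an RBF-kernel SVM with C = 10**log_c and
    gamma = 10**log_gamma and return CV accuracy; here a smooth bump
    with a known optimum at (2, -1) stands in for that expensive call.
    """
    return 1.0 / (1.0 + (log_c - 2.0) ** 2 + (log_gamma + 1.0) ** 2)

def ga_search(pop_size=30, generations=40, seed=1):
    """Tiny genetic algorithm over (log10 C, log10 gamma)."""
    rng = random.Random(seed)
    pop = [(rng.uniform(-3, 4), rng.uniform(-4, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        elite = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()                   # blend crossover
            child = [w * a[i] + (1.0 - w) * b[i] for i in range(2)]
            if rng.random() < 0.3:             # Gaussian mutation
                j = rng.randrange(2)
                child[j] += rng.gauss(0.0, 0.3)
            children.append(tuple(child))
        pop = elite + children
    return max(pop, key=lambda ind: fitness(*ind))

best = ga_search()
```

Searching in log-space for C and gamma is the usual convention because both parameters span several orders of magnitude; the runtime concern raised in the abstract comes from the fact that every fitness evaluation is a full cross-validated SVM training.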

  4. Application-Oriented Optimal Shift Schedule Extraction for a Dual-Motor Electric Bus with Automated Manual Transmission

    Directory of Open Access Journals (Sweden)

    Mingjie Zhao

    2018-02-01

    Full Text Available Conventional battery electric buses (BEBs) have limited potential to optimize energy consumption and reach a better dynamic performance. A practical dual-motor propulsion system equipped with a 4-speed Automated Manual Transmission (AMT) is proposed, which can eliminate the traction interruption of conventional AMT. A discrete model of the dual-motor-AMT electric bus (DMAEB) is built and used to optimize the gear shift schedule. A dynamic programming (DP) algorithm is applied to find the optimal results, where the efficiency and shift time of each gear are considered to handle the application problem of global optimization. A rational penalty factor and a proper shift time delay based on bench test results are set to reduce the shift frequency by 82.5% in the Chinese-World Transient Vehicle Cycle (C-WTVC). Two perspectives on applicable shift rule extraction methods, i.e., a classification method based on optimal operating points and a clustering method based on optimal shifting points, are explored and compared. Eventually, hardware-in-the-loop (HIL) simulation results demonstrate that the proposed structure and extracted shift schedule can realize a significant improvement, reducing energy loss by 20.13% compared to traditional empirical strategies.
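
The DP-with-shift-penalty idea can be illustrated on a two-gear toy problem. The efficiency curves, power-demand formula, and penalty value below are invented for the sketch and are not the paper's motor maps or its tuned penalty factor:

```python
def optimal_gears(speeds, efficiency, shift_penalty=0.05):
    """Dynamic-programming gear schedule for a toy drive cycle.

    speeds: vehicle speed per time step; efficiency[g](v) returns the
    drivetrain efficiency of gear g at speed v. Energy per step is
    demand / efficiency, plus a penalty whenever the gear changes,
    which suppresses busy shifting like the paper's penalty factor.
    """
    gears = range(len(efficiency))
    cost = {g: 0.0 for g in gears}  # minimal cumulative energy ending in gear g
    back = []                       # back-pointers for path recovery
    for v in speeds:
        demand = 0.2 + 0.01 * v * v            # toy power demand, not a motor map
        step = {g: demand / efficiency[g](v) for g in gears}
        new_cost, choice = {}, {}
        for g in gears:
            best_p = min(gears,
                         key=lambda p: cost[p] + (shift_penalty if p != g else 0.0))
            pen = shift_penalty if best_p != g else 0.0
            new_cost[g] = cost[best_p] + pen + step[g]
            choice[g] = best_p
        cost = new_cost
        back.append(choice)
    # Recover the optimal gear sequence by walking the back-pointers.
    g = min(cost, key=cost.get)
    total = cost[g]
    seq = [g]
    for choice in reversed(back[1:]):
        g = choice[g]
        seq.append(g)
    return list(reversed(seq)), total

eff = [lambda v: 0.85 if v < 30 else 0.60,   # low gear: efficient at low speed
       lambda v: 0.65 if v < 30 else 0.88]   # high gear: efficient at high speed
cycle = [10, 12, 15, 35, 40, 38, 12, 10]
schedule, energy = optimal_gears(cycle, eff)
```

With the toy curves, the DP keeps the low gear at low speed and upshifts once for the fast segment; shift-rule extraction then amounts to reading a speed threshold off sequences like this one.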

  5. Effects of AV-delay optimization on hemodynamic parameters in patients with VDD pacemakers.

    Science.gov (United States)

    Krychtiuk, Konstantin A; Nürnberg, Michael; Volker, Romana; Pachinger, Linda; Jarai, Rudolf; Freynhofer, Matthias K; Wojta, Johann; Huber, Kurt; Weiss, Thomas W

    2014-05-01

    Atrioventricular (AV) delay optimization improves hemodynamics and clinical parameters in patients treated with cardiac resynchronization therapy and dual-chamber pacemakers (PM). However, data on optimizing the AV delay in patients treated with VDD-PMs are scarce. We therefore investigated the acute and chronic effects of AV-delay optimization on hemodynamics in patients treated with VDD-PMs due to AV-conduction disturbances. In this prospective, single-center interventional trial, we included 64 patients (38 men, 26 women, median age: 77 (70-82) years) with implanted VDD-PMs. AV-delay optimization was performed using a formula based on the surface electrocardiogram (ECG). Hemodynamic parameters (stroke volume (SV), cardiac output (CO), heart rate (HR), and blood pressure (BP)) were measured at baseline and at follow-up after 3 months using impedance cardiography. Using the ECG formula for AV-delay optimization, the AV interval was decreased from 180 (180-180) to 75 (75-100) ms. At baseline, AV-delay optimization led to a significant increase of both SV (71.3 ± 15.8 vs. 55.3 ± 12.7 ml, optimized AV delay vs. nominal AV interval) and CO (5.1 ± 1.4 vs. 3.9 ± 1.0 l/min). AV-delay optimization in patients treated with VDD-PMs exhibits immediate beneficial effects on hemodynamic parameters that are sustained for 3 months.

  6. An optimal estimation algorithm to derive Ice and Ocean parameters from AMSR Microwave radiometer observations

    DEFF Research Database (Denmark)

    Pedersen, Leif Toudal; Tonboe, Rasmus T.; Høyer, Jacob

    .e. horizontal and vertical polarization at channels between 6 and 89 GHz as a function of a limited set of physical parameters, i.e. atmospheric water vapor, cloud liquid water, wind speed, surface and air temperature. This type of model is ideal for optimal estimation applications because of its limited set...... channels as well as the combination of data from multiple sources such as microwave radiometry, scatterometry and numerical weather prediction. Optimal estimation is data assimilation without a numerical model for retrieving physical parameters from remote sensing using a multitude of available information....... The methodology is observation driven and model innovation is limited to the translation between observation space and physical parameter space Over open water we use a semi-empirical radiative transfer model developed by Meissner & Wentz that estimates the multispectral AMSR brightness temperatures, i...

  7. Crosstalk of High-precision Optical Pickup Actuator with Optimal Structure Parameters

    Directory of Open Access Journals (Sweden)

    Qingxi JIA

    2014-02-01

    Full Text Available The crosstalk characteristic is a key factor that affects the dynamic performance of the pickup actuator, and consequently the accuracy of reading and writing operations in future ultra-high-density optical storage systems. In this paper, the actuator's spatial magnetic field distribution model is first established. Then the crosstalk movement of the actuator is analyzed and simulated in CST software based on the FDTD principle. Moreover, the crosstalk degrees in both the tracking and focusing directions are defined with respect to the produced crosstalk forces. The relationship between the crosstalk degree and the structure parameters of the actuator, such as the height and width of the permanent magnet, is analyzed. The Taguchi orthogonal method is further used to obtain the optimal structure parameters. It is concluded that crosstalk can be effectively reduced by an optimal design of the structure parameters, and thereby the dynamic performance of the actuator can be improved.

  8. Optimization of cryogenic cooled EDM process parameters using grey relational analysis

    International Nuclear Information System (INIS)

    Kumar, S Vinoth; Kumar, M Pradeep

    2014-01-01

    This paper presents an experimental investigation on cryogenic cooling of a copper electrode with liquid nitrogen (LN2) in the electrical discharge machining (EDM) process. The optimization of the EDM process parameters, such as the electrode environment (conventional versus cryogenically cooled electrode), discharge current, pulse-on time, and gap voltage, with respect to material removal rate, electrode wear, and surface roughness in machining of AlSiCp metal matrix composite was investigated using multiple performance characteristics in grey relational analysis. The L18 orthogonal array was utilized to examine the process parameters, and the optimal levels of the process parameters were identified through grey relational analysis. Experimental data were analyzed through analysis of variance. Scanning electron microscopy analysis was conducted to study the characteristics of the machined surface.

  9. Optimization of process parameter for graft copolymerization of glycidyl methacrylate onto delignified banana fibers

    International Nuclear Information System (INIS)

    Selambakkannu, S.; Nor Azillah Fatimah Othman; Siti Fatahiyah Mohamad

    2016-01-01

    This paper focuses on pre-treated banana fibers as a trunk polymer and the optimization of radiation-induced graft copolymerization process parameters. Pre-treated banana fiber was grafted with glycidyl methacrylate (GMA) via electron beam irradiation. Grafting parameters were optimized in terms of grafting yield, analyzed over a range of radiation doses, monomer concentrations and reaction times. Grafting yield was calculated gravimetrically against all the process parameters. The grafting yield at 40 kGy increased from 14% to 22.5% between 1 h and 24 h of reaction time. The grafting yield at 1% GMA was about 58%, and it increased to 187% at 3% GMA. The grafting of GMA onto the pre-treated banana fibers was successfully carried out and confirmed by characterization using FTIR, SEM and TGA. (author)

  10. Choice of scans and optimization of instrument parameters in neutron diffraction

    International Nuclear Information System (INIS)

    Sequeira, A.

    1975-01-01

    With the neutron intensities available at medium-flux reactors, the study of crystal and molecular structures is currently restricted to molecules having fewer than about 50 atoms per asymmetric unit. This limit could perhaps be extended to structures having up to about 100 atoms in the asymmetric unit if all the experimental parameters associated with the neutron diffractometer could be ideally optimized. Since most structures of current biological interest fall in this category, such as mono-, di-, and oligonucleotides as well as small peptides, it is important that all the instrument parameters are chosen so as to stretch the power of a given neutron source to its limit. Some ways of optimizing the various instrument parameters in order to obtain the maximum neutron intensity at a given resolution are discussed. The small effects of vertical divergences on the resolution are ignored.

  11. Optimization of marine biogeochemical parameters against climatologies of nutrients and oxygen

    Science.gov (United States)

    Kriest, Iris; Khatiwala, Samar; Sauerland, Volkmar; Oschlies, Andreas

    2017-04-01

    Global biogeochemical ocean models usually contain a variety of different biogeochemical components, which are described by many parameters. The values of many of these parameters are empirically difficult to constrain, because in the models they represent processes for different groups of organisms. Therefore, these models are subject to a high level of parametric uncertainty. This may have consequences for their skill in accurately describing the relevant features of the present ocean, as well as for their sensitivity to possible environmental changes. We here present a framework for the optimization of global biogeochemical ocean models on short and long time scales. The framework combines an offline approach for transport of biogeochemical tracers with an Estimation of Distribution Algorithm, a type of evolutionary algorithm in which the probability distribution is parameterized. We explore the performance and capability of this framework through optimizations of different biogeochemical parameters against different data sets. Optimization of six parameters, mostly tied to surface biogeochemical processes, against a climatology of observations of annual mean dissolved nutrients and oxygen reveals that parameters that act on large spatial and temporal scales are determined earliest, and with the least spread. Parameters more closely tied to surface biology, which act on shorter time scales, are more difficult to determine. Encouragingly, optimized models show a better fit to estimates of global mean biogeochemical fluxes such as production, export, and grazing, although these fluxes did not enter the misfit function. We finally investigate whether, and to what extent, we can achieve an equally good fit to observed tracer fields with a model of strongly reduced biogeochemical complexity.
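
An Estimation of Distribution Algorithm in its simplest continuous form (a univariate Gaussian per parameter, refitted each generation to the best samples) can be sketched as follows; the sphere objective is only a stand-in for the model-data misfit function, and all population sizes are illustrative:

```python
import random
import statistics

def umda_minimize(f, dim, n_samples=60, n_select=15, iters=40, seed=3):
    """Minimal continuous EDA (univariate marginal, Gaussian).

    Each parameter keeps an independent Gaussian; every generation the
    distribution is refitted to the best-scoring samples, so the search
    concentrates where the misfit is low.
    """
    rng = random.Random(seed)
    mean = [0.0] * dim
    std = [2.0] * dim
    best_x, best_val = None, float("inf")
    for _ in range(iters):
        pop = [[rng.gauss(mean[d], std[d]) for d in range(dim)]
               for _ in range(n_samples)]
        pop.sort(key=f)
        if f(pop[0]) < best_val:
            best_x, best_val = pop[0], f(pop[0])
        sel = pop[:n_select]                  # truncation selection
        for d in range(dim):
            col = [x[d] for x in sel]
            mean[d] = statistics.fmean(col)
            std[d] = max(statistics.pstdev(col), 1e-6)  # avoid collapse
    return best_x, best_val

# Toy misfit with minimum at (1, 1, 1), standing in for the tracer misfit.
best_x, best_val = umda_minimize(lambda x: sum((v - 1.0) ** 2 for v in x), dim=3)
```

The per-parameter standard deviations shrink as the algorithm converges, which mirrors the abstract's observation that large-scale parameters are pinned down early "with the least spread" while others retain wide distributions.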

  12. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    Science.gov (United States)

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. 

  13. An improved swarm optimization for parameter estimation and biological model selection.

    Directory of Open Access Journals (Sweden)

    Afnizanfaizal Abdullah

    Full Text Available One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete

  14. Optimal allocation of sensors for state estimation of distributed parameter systems

    International Nuclear Information System (INIS)

    Sunahara, Yoshifumi; Ohsumi, Akira; Mogami, Yoshio.

    1978-01-01

    The purpose of this paper is to present a method for finding the optimal allocation of sensors for state estimation of linear distributed parameter systems. This method is based on the criterion that the error covariance associated with the state estimate becomes minimal with respect to the allocation of the sensors. A theorem is established, giving the sufficient condition for optimizing the allocation of sensors to make minimal the error covariance approximated by a modal expansion. The remainder of this paper is devoted to illustrate important phases of the general theory of the optimal measurement allocation problem. To do this, several examples are demonstrated, including extensive discussions on the mutual relation between the optimal allocation and the dynamics of sensors. (author)

  15. Dermatology Disease Prediction Based on Two Step Cascade Genetic Algorithm Optimization of ANFIS Parameters.

    Science.gov (United States)

    Avdagic, Aja; Begic Fazlic, Lejla

    2017-01-01

    The aim of this study is to present novel algorithms for the prediction of dermatological disease using only dermatological clinical features and diagnoses collected in real conditions. A combination of the Adaptive Neuro-Fuzzy Inference System (ANFIS) and a genetic algorithm (GA) for optimizing the ANFIS subtractive clustering parameters is suggested for the first level of fuzzy model optimization. After that, the genetically optimized ANFIS fuzzy structure is used as input to the GA for the second level of fuzzy model optimization. We used double 2-fold cross-validation to generate different validation sets for model improvement. Our approach is implemented in the MATLAB environment. We compared our results with those of other studies. The results confirm that the proposed model achieves accuracy rates higher than those of the previous model.

  16. OPTIMIZATION OF TRANSESTERIFICATION PARAMETERS FOR OPTIMAL BIODIESEL YIELD FROM CRUDE JATROPHA OIL USING A NEWLY SYNTHESIZED SEASHELL CATALYST

    Directory of Open Access Journals (Sweden)

    A. N. R. REDDY

    2017-10-01

    Full Text Available Heterogeneous catalysts are promising for achieving optimal biodiesel yield from transesterification of vegetable oils. In this work, a calcium oxide (CaO) heterogeneous catalyst was synthesized from Polymedosa erosa seashell. Calcination was carried out at 900 ºC for 2 h and the product was characterized using Fourier transform infrared spectroscopy. The catalytic efficiency of the CaO was tested in transesterification of crude Jatropha oil (CJO). A response surface methodology (RSM) based on a five-level two-factor central composite design (CCD) was employed to optimize two critical transesterification parameters: the catalyst concentration relative to pretreated CJO (0.01-0.03 w/w%) and the reaction time (90-150 min). A JB yield of 96.48% was estimated at 0.023 w/w% catalyst and 125.76 min reaction time using the response optimizer. The legitimacy of the predicted model was verified through experiments: the validation runs confirmed a yield of 96.4% ± 0.01% as optimal at a 0.023 w/w% catalyst-to-pretreated-oil ratio and 126 min reaction time.

  17. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems.

    Science.gov (United States)

    Cho, Ming-Yuan; Hoang, Thi Thom

    2017-01-01

    Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.

  18. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    Ming-Yuan Cho

    2017-01-01

    Full Text Available Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.

  19. Automated detection of sleep apnea from electrocardiogram signals using nonlinear parameters

    International Nuclear Information System (INIS)

    Acharya, U Rajendra; Faust, Oliver; Chua, Eric Chern-Pin; Lim, Teik-Cheng; Lim, Liang Feng Benjamin

    2011-01-01

    Sleep apnoea is a very common sleep disorder which can cause symptoms such as daytime sleepiness, irritability and poor concentration. To monitor patients with this sleep disorder, we measured the electrical activity of the heart. The resulting electrocardiography (ECG) signals are both non-stationary and nonlinear. Therefore, we used nonlinear parameters such as approximate entropy, fractal dimension, correlation dimension, largest Lyapunov exponent and Hurst exponent to extract physiological information. This information was used to train an artificial neural network (ANN) classifier to categorize ECG signal segments into one of the following groups: apnoea, hypopnoea and normal breathing. ANN classification tests produced an average classification accuracy of 90%; specificity and sensitivity were 100% and 95%, respectively. We have also proposed unique recurrence plots for the normal, hypopnoea and apnoea classes. Detecting sleep apnoea with this level of accuracy can potentially reduce the need for polysomnography (PSG). This brings advantages to patients, because the proposed system is less cumbersome than PSG.

  20. A Framework for Automated Database Tuning Using Dynamic SGA Parameters and Basic Operating System Utilities

    Directory of Open Access Journals (Sweden)

    Hitesh KUMAR SHARMA

    2012-12-01

    Full Text Available In the present scenario, manual work (done by humans) costs an organization more than automated work (done by machines), and the gap is widening day by day with the tremendous growth in machine (hardware + software) intelligence. We are moving towards a world where machines will perform better than they do today through their own intelligence, adjusting themselves to customers' performance needs. To make this a reality, however, a great deal of human effort (theoretical and practical) is needed to increase the capability of machines to make their own decisions, freeing the future from manual work and reducing operating costs. Our lives are surrounded by many types of systems, and the information system is one of them: all businesses rest on such systems. It is therefore a high-priority task for IT researchers to make information systems self-manageable, and the development of well-established frameworks for auto-tuning them is a basic need of current business. DBMS vendors do provide auto-tune packages with their DBMS applications, but they charge for these packages. This extra cost can be eliminated by using basic operating system utilities (e.g. VB Script, Task Scheduler, batch files, graphical utilities, etc.). We have designed a working framework for automatic tuning of a DBMS using the basic utilities of the operating system (e.g. Windows). These utilities collect statistics on the dynamic SGA parameters; the framework then automatically analyzes these statistics and gives suggestions for diagnosing the problem. In this paper we present that framework together with a practical implementation.

  1. Optimized and Automated Radiosynthesis of [18F]DHMT for Translational Imaging of Reactive Oxygen Species with Positron Emission Tomography

    Directory of Open Access Journals (Sweden)

    Wenjie Zhang

    2016-12-01

    Full Text Available Reactive oxygen species (ROS) play important roles in cell signaling and homeostasis. However, an abnormally high level of ROS is toxic, and is implicated in a number of diseases. Positron emission tomography (PET) imaging of ROS can assist in the detection of these diseases. For the purpose of clinical translation of [18F]6-(4-((1-(2-fluoroethyl)-1H-1,2,3-triazol-4-yl)methoxy)phenyl)-5-methyl-5,6-dihydrophenanthridine-3,8-diamine ([18F]DHMT), a promising ROS PET radiotracer, we first manually optimized the large-scale radiosynthesis conditions and then implemented them in an automated synthesis module. Our manual synthesis procedure afforded [18F]DHMT in 120 min with an overall radiochemical yield (RCY) of 31.6% ± 9.3% (n = 2, decay-uncorrected) and specific activity of 426 ± 272 GBq/µmol (n = 2). Fully automated radiosynthesis of [18F]DHMT was achieved within 77 min with an overall isolated RCY of 6.9% ± 2.8% (n = 7, decay-uncorrected) and specific activity of 155 ± 153 GBq/µmol (n = 7) at the end of synthesis. This study is the first demonstration of producing 2-[18F]fluoroethyl azide by an automated module, which can be used for a variety of PET tracers through click chemistry. It is also the first time that [18F]DHMT was successfully tested for PET imaging in a healthy beagle dog.

  2. An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models

    Directory of Open Access Journals (Sweden)

    Daniel Santana-Cedrés

    2016-12-01

    Full Text Available We present a method for the automatic estimation of two-parameter radial distortion models, considering polynomial as well as division models. The method first detects the longest distorted lines within the image by applying the Hough transform enriched with a radial distortion parameter. From these lines, the first distortion parameter is estimated, then we initialize the second distortion parameter to zero and the two-parameter model is embedded into an iterative nonlinear optimization process to improve the estimation. This optimization aims at reducing the distance from the edge points to the lines, adjusting two distortion parameters as well as the coordinates of the center of distortion. Furthermore, this allows detecting more points belonging to the distorted lines, so that the Hough transform is iteratively repeated to extract a better set of lines until no improvement is achieved. We present some experiments on real images with significant distortion to show the ability of the proposed approach to automatically correct this type of distortion as well as a comparison between the polynomial and division models.
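The two-parameter division model at the core of the method can be written down directly. The sketch below applies such a model to undistort points; it shows only the model family, not the paper's estimation pipeline (Hough transform plus iterative nonlinear optimization), and the parameter values are illustrative:

```python
import numpy as np

def undistort_division(points, k1, k2, center):
    """Correct radial distortion with a two-parameter division model:
    p_u = c + (p_d - c) / (1 + k1*r^2 + k2*r^4), with r = |p_d - c|.
    A sketch of the model family only; the paper's Hough-based estimation
    of k1, k2 and the distortion center is not reproduced here."""
    p = np.asarray(points, dtype=float) - center
    r2 = np.sum(p ** 2, axis=1, keepdims=True)
    return center + p / (1.0 + k1 * r2 + k2 * r2 ** 2)

# Points far from the assumed distortion center move more than nearby ones.
center = np.array([320.0, 240.0])
pts = np.array([[100.0, 50.0], [300.0, 230.0]])
print(undistort_division(pts, 1e-6, 1e-13, center))
```

With k1 = k2 = 0 the mapping is the identity; positive coefficients pull points towards the center, which is the typical correction for barrel distortion.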

  3. Process Parameter Optimization for Wobbling Laser Spot Welding of Ti6Al4V Alloy

    Science.gov (United States)

    Vakili-Farahani, F.; Lungershausen, J.; Wasmer, K.

    Laser beam welding (LBW) coupled with the "wobble effect" (fast oscillation of the laser beam) is very promising for the high-precision micro-joining industry. For this process, as in conventional LBW, the laser welding process parameters play a very significant role in determining the quality of a weld joint. Consequently, four process parameters (laser power, wobble frequency, number of rotations within a single laser pulse and focal position) and five responses (penetration, width, area of the fusion zone, area of the heat-affected zone (HAZ) and hardness) were investigated for spot welding of Ti6Al4V alloy (grade 5) using a design of experiments (DoE) approach. This paper presents experimental results showing the effects of varying the most important process parameters on the spot weld quality of Ti6Al4V alloy. Semi-empirical mathematical models were developed to correlate the laser welding parameters with each of the measured weld responses, and the adequacy of the models was examined by methods such as ANOVA. These models not only allow a better understanding of the wobble laser welding process and prediction of process performance, but also yield optimal process parameters. Accordingly, the optimal combination of process parameters was determined with respect to a set of quality criteria.

  4. Tailored parameter optimization methods for ordinary differential equation models with steady-state constraints.

    Science.gov (United States)

    Fiedler, Anna; Raeth, Sebastian; Theis, Fabian J; Hausser, Angelika; Hasenauer, Jan

    2016-08-22

    Ordinary differential equation (ODE) models are widely used to describe (bio-)chemical and biological processes. To enhance the predictive power of these models, their unknown parameters are estimated from experimental data. These experimental data are mostly collected in perturbation experiments, in which the processes are pushed out of steady state by applying a stimulus. The knowledge that the initial condition is a steady state of the unperturbed process is valuable, as it restricts the dynamics of the process and thereby the parameters. However, implementing steady-state constraints in the optimization often results in convergence problems. In this manuscript, we propose two new methods for solving optimization problems with steady-state constraints. The first method exploits ideas from optimization algorithms on manifolds and introduces a retraction operator, essentially reducing the dimension of the optimization problem. The second method is based on the continuous analogue of the optimization problem. This continuous analogue is an ODE whose equilibrium points are the optima of the constrained optimization problem. This equivalence enables the use of adaptive numerical methods for solving optimization problems with steady-state constraints. Both methods are tailored to the problem structure and exploit the local geometry of the steady-state manifold and its stability properties. A parameterization of the steady-state manifold is not required. The efficiency and reliability of the proposed methods are evaluated using one toy example and two applications. The first application example uses published data while the second uses a novel dataset for Raf/MEK/ERK signaling. The proposed methods demonstrated better convergence properties than state-of-the-art methods employed in systems and computational biology. Furthermore, the average computation time per converged start is significantly lower. In addition to the theoretical results, the
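The continuous-analogue idea can be illustrated on a toy problem: integrate the gradient flow dθ/dt = −∇f(θ), whose equilibrium points are optima of f. This sketch uses an unconstrained quadratic objective and plain Euler integration; the paper's steady-state constraints and retraction operator are omitted:

```python
import numpy as np

# Toy illustration of a "continuous analogue": an ODE whose equilibria are
# the optima of the objective. Here f(theta) = ||theta - target||^2 is a
# stand-in; the actual method additionally enforces steady-state constraints.
target = np.array([2.0, -1.0])

def neg_grad(theta):
    return -2.0 * (theta - target)   # -grad f for the quadratic objective

theta = np.zeros(2)
dt = 0.05
for _ in range(400):                  # explicit Euler integration of the flow
    theta = theta + dt * neg_grad(theta)
print(theta)
```

In practice an adaptive ODE solver replaces the fixed-step Euler loop, which is precisely what makes the continuous formulation attractive.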

  5. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis

    Directory of Open Access Journals (Sweden)

    Tashkova Katerina

    2011-10-01

    Full Text Available Abstract Background We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. Results We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., the differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Conclusions Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of

  6. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis

    Science.gov (United States)

    2011-01-01

    Background We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. Results We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Conclusions Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. 
These
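The core loop of differential evolution applied to ODE parameter estimation is compact enough to sketch. The example below fits the single rate constant of a toy decay model dx/dt = −kx to noisy pseudo-experimental data; the model, bounds and DE settings are illustrative stand-ins (real DE also uses crossover, and the Rab5/Rab7 model is far more complex):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(k, x0=1.0, dt=0.05, steps=60):
    # Forward-Euler simulation of the toy dynamics dx/dt = -k*x, standing in
    # for the (much more complex) endocytosis ODE model.
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * (-k * xs[-1]))
    return np.array(xs)

k_true = 0.8
data = simulate(k_true) + 0.01 * rng.standard_normal(61)  # pseudo-experimental data

def cost(k):
    # Sum-of-squares misfit between simulated trajectory and data.
    return float(np.sum((simulate(k) - data) ** 2))

# Minimal differential evolution (mutation a + F*(b - c), greedy selection)
# over the single parameter k in [0, 5].
pop = rng.uniform(0.0, 5.0, size=20)
fit = np.array([cost(k) for k in pop])
F = 0.7
for _ in range(60):
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), size=3, replace=False)]
        trial = float(np.clip(a + F * (b - c), 0.0, 5.0))
        f_trial = cost(trial)
        if f_trial < fit[i]:                 # greedy one-to-one selection
            pop[i], fit[i] = trial, f_trial
k_hat = pop[np.argmin(fit)]
print(k_hat)
```

The same pattern scales to many parameters by making each population member a vector and adding binomial crossover.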

  7. Parameters-tuning of PID controller for automatic voltage regulators using the African buffalo optimization

    Science.gov (United States)

    Mohmad Kahar, Mohd Nizam; Noraziah, A.

    2017-01-01

    In this paper, an attempt is made to apply the African Buffalo Optimization (ABO) to tune the parameters of a PID controller for an effective Automatic Voltage Regulator (AVR). Existing metaheuristic tuning methods have proven quite successful, but there are observable areas that need improvement, especially the system's overshoot and steady-state error. Using the ABO algorithm, where each buffalo location in the herd is a candidate solution for the Proportional-Integral-Derivative parameters, was very helpful in addressing these two areas of concern. The encouraging results obtained from the simulation of PID controller parameter tuning using the ABO, when compared with the performance of Genetic Algorithm PID (GA-PID), Particle-Swarm Optimization PID (PSO-PID), Ant Colony Optimization PID (ACO-PID), plain PID, Bacteria-Foraging Optimization PID (BFO-PID) and others, make ABO-PID a good addition for solving PID controller tuning problems with metaheuristics. PMID:28441390
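Metaheuristic PID tuning of this kind reduces to minimizing a step-response cost over candidate gain vectors. The sketch below evaluates an ITAE cost on a toy first-order plant and uses plain random search as a stand-in for the buffalo herd; the plant, bounds and cost are assumptions, and the ABO update equations themselves are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

def itae(gains, tau=0.5, dt=0.01, t_end=5.0):
    """Integral of time-weighted absolute error (ITAE) for the unit-step
    response of a first-order plant dy/dt = (u - y)/tau under PID control.
    The plant is a stand-in for the AVR loop, not the paper's model."""
    kp, ki, kd = gains
    y, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    for k in range(int(t_end / dt)):
        e = 1.0 - y
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        y += dt * (u - y) / tau
        cost += (k * dt) * abs(e) * dt
        prev_e = e
        if abs(y) > 1e6:                    # diverged: penalize heavily
            return np.inf
    return cost

# Population-based search stand-in: sample candidate gain vectors and keep
# the best ITAE (ABO adds memory-guided herd movement on top of this idea).
best_gains, best_cost = None, np.inf
for _ in range(300):
    cand = rng.uniform([0.1, 0.0, 0.0], [10.0, 10.0, 0.3])
    c = itae(cand)
    if c < best_cost:
        best_gains, best_cost = cand, c
print(best_gains, best_cost)
```

Any of the compared metaheuristics (GA, PSO, ACO, BFO, ABO) can be dropped in place of the random sampler; only the candidate-generation rule changes.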

  8. Optimization of digital breast tomosynthesis (DBT) acquisition parameters for human observers: effect of reconstruction algorithms

    Science.gov (United States)

    Zeng, Rongping; Badano, Aldo; Myers, Kyle J.

    2017-04-01

    We showed in our earlier work that the choice of reconstruction methods does not affect the optimization of DBT acquisition parameters (angular span and number of views) using simulated breast phantom images in detecting lesions with a channelized Hotelling observer (CHO). In this work we investigate whether the model-observer based conclusion is valid when using humans to interpret images. We used previously generated DBT breast phantom images and recruited human readers to find the optimal geometry settings associated with two reconstruction algorithms, filtered back projection (FBP) and simultaneous algebraic reconstruction technique (SART). The human reader results show that image quality trends as a function of the acquisition parameters are consistent between FBP and SART reconstructions. The consistent trends confirm that the optimization of DBT system geometry is insensitive to the choice of reconstruction algorithm. The results also show that humans perform better in SART reconstructed images than in FBP reconstructed images. In addition, we applied CHOs with three commonly used channel models, Laguerre-Gauss (LG) channels, square (SQR) channels and sparse difference-of-Gaussian (sDOG) channels. We found that LG channels predict human performance trends better than SQR and sDOG channel models for the task of detecting lesions in tomosynthesis backgrounds. Overall, this work confirms that the choice of reconstruction algorithm is not critical for optimizing DBT system acquisition parameters.

  9. Guiding automated NMR structure determination using a global optimization metric, the NMR DP score

    International Nuclear Information System (INIS)

    Huang, Yuanpeng Janet; Mao, Binchen; Xu, Fei; Montelione, Gaetano T.

    2015-01-01

    ASDP is an automated NMR NOE assignment program. It uses a distinct bottom-up topology-constrained network anchoring approach for NOE interpretation, with 2D, 3D and/or 4D NOESY peak lists and resonance assignments as input, and generates unambiguous NOE constraints for iterative structure calculations. ASDP is designed to function interactively with various structure determination programs that use distance restraints to generate molecular models. In the CASD–NMR project, ASDP was tested and further developed using blinded NMR data, including resonance assignments, either raw or manually-curated (refined) NOESY peak list data, and in some cases 15N–1H residual dipolar coupling data. In these blinded tests, in which the reference structure was not available until after structures were generated, the fully-automated ASDP program performed very well on all targets using both the raw and refined NOESY peak list data. Improvements of ASDP relative to its predecessor program for automated NOESY peak assignments, AutoStructure, were driven by challenges provided by these CASD–NMR data. These algorithmic improvements include (1) using a global metric of structural accuracy, the discriminating power score, for guiding model selection during the iterative NOE interpretation process, and (2) identifying incorrect NOESY cross peak assignments caused by errors in the NMR resonance assignment list. These improvements provide a more robust automated NOESY analysis program, ASDP, with the unique capability of being utilized with alternative structure generation and refinement programs including CYANA, CNS, and/or Rosetta

  10. Multiobjective Optimization of Precision Forging Process Parameters Based on Response Surface Method

    Directory of Open Access Journals (Sweden)

    Fayuan Zhu

    2015-01-01

    Full Text Available In order to control the precision forging forming quality and improve the service life of die, a multiobjective optimization method for process parameters design was presented by applying the Latin hypercube design method and response surface model approach. Meanwhile the deformation homogeneity and material damage of forging parts were proposed for evaluating the forming quality. The forming load of die was proposed for evaluating the service life of die. Then as a case of study, the radial precision forging for a hollow shaft with variable cross section and wall thickness was carried out. The 3D rigid-plastic finite element (FE) model of the hollow shaft radial precision forging was established. The multiobjective optimization forecast model was established by adopting finite element results and response surface methodology. Nondominated sorting genetic algorithm-II (NSGA-II) was adopted to obtain the Pareto-optimal solutions. A compromise solution was selected from the Pareto solutions by using the mapping method. In the finite element study on the forming quality of forging parts and the service life of dies by multiobjective optimization process parameters, the feasibility of the multiobjective optimization method presented by this work was verified.
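The response-surface step amounts to fitting a second-order polynomial to sampled responses by least squares. The sketch below does this for two factors; the design points and the "true" response are illustrative stand-ins for the FE forging results, and NSGA-II itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fit a second-order response surface
#   eta = b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1^2 + b5*x2^2
# to sampled responses by ordinary least squares.
X = rng.uniform(-1.0, 1.0, size=(30, 2))            # coded design points
y = (1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1]
     + 0.5 * X[:, 0] * X[:, 1] + 0.8 * X[:, 0] ** 2
     + 0.01 * rng.standard_normal(30))              # small "simulation" noise

A = np.column_stack([np.ones(30), X[:, 0], X[:, 1],
                     X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(beta)   # should be close to [1, 2, -1.5, 0.5, 0.8, 0]
```

Once such surrogate surfaces exist for each objective, a multiobjective optimizer such as NSGA-II searches them instead of the expensive FE model.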

  11. GENPLAT: an automated platform for biomass enzyme discovery and cocktail optimization.

    Science.gov (United States)

    Walton, Jonathan; Banerjee, Goutami; Car, Suzana

    2011-10-24

    The high cost of enzymes for biomass deconstruction is a major impediment to the economic conversion of lignocellulosic feedstocks to liquid transportation fuels such as ethanol. We have developed an integrated high throughput platform, called GENPLAT, for the discovery and development of novel enzymes and enzyme cocktails for the release of sugars from diverse pretreatment/biomass combinations. GENPLAT comprises four elements: individual pure enzymes, statistical design of experiments, robotic pipetting of biomass slurries and enzymes, and automated colorimetric determination of released Glc and Xyl. Individual enzymes are produced by expression in Pichia pastoris or Trichoderma reesei, or by chromatographic purification from commercial cocktails or from extracts of novel microorganisms. Simplex lattice (fractional factorial) mixture models are designed using commercial Design of Experiments statistical software. Enzyme mixtures of high complexity are constructed using robotic pipetting into a 96-well format. The measurement of released Glc and Xyl is automated using enzyme-linked colorimetric assays. Optimized enzyme mixtures containing as many as 16 components have been tested on a variety of feedstock and pretreatment combinations. GENPLAT is adaptable to mixtures of pure enzymes, mixtures of commercial products (e.g., Accellerase 1000 and Novozyme 188), extracts of novel microbes, or combinations thereof. To make and test mixtures of ~10 pure enzymes requires less than 100 μg of each protein and fewer than 100 total reactions, when operated at a final total loading of 15 mg protein/g glucan. We use enzymes from several sources. Enzymes can be purified from natural sources such as fungal cultures (e.g., Aspergillus niger, Cochliobolus carbonum, and Galerina marginata), or they can be made by expression of the encoding genes (obtained from the increasing number of microbial genome sequences) in hosts such as E. coli, Pichia pastoris, or a filamentous fungus such

  12. An intelligent approach to optimize the EDM process parameters using utility concept and QPSO algorithm

    Directory of Open Access Journals (Sweden)

    Chinmaya P. Mohanty

    2017-04-01

    Full Text Available Although significant research has gone into the field of electrical discharge machining (EDM), analysis related to the machining efficiency of the process with different electrodes has not been adequately made. Copper and brass are frequently used as electrode materials but graphite can be used as a potential electrode material due to its high melting point temperature and good electrical conductivity. In view of this, the present work attempts to compare the machinability of copper, graphite and brass electrodes while machining Inconel 718 super alloy. Taguchi’s L27 orthogonal array has been employed to collect data for the study and analyze the effect of machining parameters on performance measures. The important performance measures selected for this study are material removal rate, tool wear rate, surface roughness and radial overcut. Machining parameters considered for analysis are open circuit voltage, discharge current, pulse-on-time, duty factor, flushing pressure and electrode material. From the experimental analysis, it is observed that electrode material, discharge current and pulse-on-time are the important parameters for all the performance measures. The utility concept has been implemented to transform the multiple performance characteristics into an equivalent single performance characteristic. Non-linear regression analysis is carried out to develop a model relating process parameters and overall utility index. Finally, the quantum-behaved particle swarm optimization (QPSO) and particle swarm optimization (PSO) algorithms have been used to compare the optimal level of cutting parameters. Results demonstrate the elegance of QPSO in terms of convergence and computational effort. The optimal parametric setting obtained through both the approaches is validated by conducting confirmation experiments.
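The utility concept mentioned above maps each response onto a common preference scale and combines them as a weighted sum, turning the multi-response problem into a single objective. A minimal sketch, with a linear 0-9 scale and illustrative example values (the paper derives its own preference numbers and weights):

```python
import numpy as np

def utility_index(responses, best, worst, weights, larger_is_better):
    """Overall utility index: each response is mapped to a 0-9 preference
    scale (9 = best, 0 = worst) and combined as a weighted sum. The linear
    scale and the values used below are illustrative assumptions."""
    u = []
    for r, b, w, flag in zip(responses, best, worst, larger_is_better):
        if not flag:                      # smaller-is-better: negate all three
            r, b, w = -r, -b, -w
        u.append(9.0 * (r - w) / (b - w))
    return float(np.dot(weights, u))

# MRR is larger-the-better; tool wear rate, surface roughness and radial
# overcut are smaller-the-better (hypothetical ranges and weights).
best_vals  = [10.0, 0.1, 1.0, 0.05]
worst_vals = [2.0, 1.0, 5.0, 0.50]
weights = [0.4, 0.2, 0.2, 0.2]
flags = [True, False, False, False]
print(utility_index([8.0, 0.3, 2.0, 0.1], best_vals, worst_vals, weights, flags))
```

The resulting scalar index is what the regression model and the QPSO/PSO optimizers then work with.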

  13. Prediction and optimization of friction welding parameters for super duplex stainless steel (UNS S32760) joints

    International Nuclear Information System (INIS)

    Udayakumar, T.; Raja, K.; Afsal Husain, T.M.; Sathiya, P.

    2014-01-01

    Highlights: • Corrosion resistance and impact strength – predicted by response surface methodology. • Burn off length has highest significance on corrosion resistance. • Friction force is a strong determinant in changing impact strength. • Pareto front points generated by genetic algorithm aid to fix input control variable. • Pareto front will be a trade-off between corrosion resistance and impact strength. - Abstract: Friction welding finds widespread industrial use as a mass production process for joining materials. Friction welding process allows welding of several materials that are extremely difficult to fusion weld. Friction welding process parameters play a significant role in making good quality joints. To produce a good quality joint it is important to set up proper welding process parameters. This can be done by employing optimization techniques. This paper presents a multi objective optimization method for optimizing the process parameters during friction welding process. The proposed method combines the response surface methodology (RSM) with an intelligent optimization algorithm, i.e. genetic algorithm (GA). Corrosion resistance and impact strength of friction welded super duplex stainless steel (SDSS) (UNS S32760) joints were investigated considering three process parameters: friction force (F), upset force (U) and burn off length (B). Mathematical models were developed and the responses were adequately predicted. Direct and interaction effects of process parameters on responses were studied by plotting graphs. Burn off length has high significance on corrosion current followed by upset force and friction force. In the case of impact strength, friction force has high significance followed by upset force and burn off length. Multi objective optimization for maximizing the impact strength and minimizing the corrosion current (maximizing corrosion resistance) was carried out using GA with the RSM model. The optimization procedure resulted in

  14. Optimization-Based Inverse Identification of the Parameters of a Concrete Cap Material Model

    Science.gov (United States)

    Král, Petr; Hokeš, Filip; Hušek, Martin; Kala, Jiří; Hradil, Petr

    2017-10-01

    Advanced numerical analysis of concrete building structures in sophisticated computing systems currently requires the tools of nonlinear mechanics. Efforts to design safer, more durable and, above all, more economical concrete structures are supported by the use of advanced nonlinear concrete material models and the geometrically nonlinear approach. Applying nonlinear mechanics tools undoubtedly represents a further step towards approximating the real behaviour of concrete building structures in computer numerical simulations. The success of this application, however, depends on a thorough understanding of the behaviour of the concrete material models used and of the meaning of their parameters. The effective application of nonlinear concrete material models in computer simulations is often problematic because these models contain parameters (material constants) whose values are difficult to obtain, yet correct parameter values are essential for the model to function properly. One current possibility for solving this problem is to use optimization algorithms for inverse material parameter identification. Parameter identification goes hand in hand with experimental investigation: it seeks the parameter values of the material model for which the results of the computer simulation best approximate the experimental data. This paper focuses on the optimization-based inverse identification of the parameters of a concrete cap material model known as the Continuous Surface Cap Model. Within this paper, material parameters of the model are identified on the basis of interaction between nonlinear computer simulations

  15. Automated Soil Physical Parameter Assessment Using Smartphone and Digital Camera Imagery

    Directory of Open Access Journals (Sweden)

    Matt Aitkenhead

    2016-12-01

    Full Text Available Here we present work on using different types of soil profile imagery (topsoil profiles captured with a smartphone camera and full-profile images captured with a conventional digital camera) to estimate the structure, texture and drainage of the soil. The method is adapted from earlier work on developing smartphone apps for estimating topsoil organic matter content in Scotland and uses an existing visual soil structure assessment approach. Colour and image texture information was extracted from the imagery. This information was linked, using geolocation information derived from the smartphone GPS system or from field notes, with existing collections of topography, land cover, soil and climate data for Scotland. A neural network model was developed that was capable of estimating soil structure (on a five-point scale), soil texture (sand, silt, clay), bulk density, pH and drainage category using this information. The model is sufficiently accurate to provide estimates of these parameters from soils in the field. We discuss potential improvements to the approach and plans to integrate the model into a set of smartphone apps for estimating health and fertility indicators for Scottish soils.
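The colour and texture extraction step can be sketched as a small feature function over an RGB array. The specific descriptors below (per-channel mean/standard deviation and mean gradient magnitude) are illustrative assumptions, not the paper's exact feature set:

```python
import numpy as np

def image_features(img):
    """Colour and texture features from an RGB image array (H, W, 3), of the
    kind fed to a soil-property model. Descriptor choice is illustrative."""
    img = np.asarray(img, dtype=float)
    feats = []
    for c in range(3):                       # colour statistics per channel
        feats += [img[:, :, c].mean(), img[:, :, c].std()]
    gy, gx = np.gradient(img.mean(axis=2))   # simple texture: gradient magnitude
    feats.append(np.mean(np.hypot(gx, gy)))
    return np.array(feats)

# Example: a flat mid-grey patch has zero colour spread and zero texture.
flat = np.full((16, 16, 3), 128.0)
print(image_features(flat))
```

The resulting feature vector, concatenated with topography, land cover and climate covariates, is the kind of input a neural network model would be trained on.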

  16. Optimization of the Process Parameters for Controlling Residual Stress and Distortion in Friction Stir Welding

    DEFF Research Database (Denmark)

    Tutum, Cem Celal; Schmidt, Henrik Nikolaj Blicher; Hattel, Jesper Henri

    2008-01-01

    In the present paper, numerical optimization of the process parameters, i.e. tool rotation speed and traverse speed, aiming minimization of the two conflicting objectives, i.e. the residual stresses and welding time, subjected to process-specific thermal constraints in friction stir welding......, is investigated. The welding process is simulated in 2-dimensions with a sequentially coupled transient thermo-mechanical model using ANSYS. The numerical optimization problem is implemented in modeFRONTIER and solved using the Multi-Objective Genetic Algorithm (MOGA-II). An engineering-wise evaluation or ranking...

  17. The same number of optimized parameters scheme for determining intermolecular interaction energies

    DEFF Research Database (Denmark)

    Kristensen, Kasper; Ettenhuber, Patrick; Eriksen, Janus Juul

    2015-01-01

    We propose the Same Number Of Optimized Parameters (SNOOP) scheme as an alternative to the counterpoise method for treating basis set superposition errors in calculations of intermolecular interaction energies. The key point of the SNOOP scheme is to enforce that the number of optimized wave...... as numerically. Numerical results for second-order Møller-Plesset perturbation theory (MP2) and coupled-cluster with single, double, and approximate triple excitations (CCSD(T)) show that the SNOOP scheme in general outperforms the uncorrected and counterpoise approaches. Furthermore, we show that SNOOP...

  18. Simultaneous parameter and tolerance optimization of structures via probability-interval mixed reliability model

    DEFF Research Database (Denmark)

    Luo, Yangjun; Wu, Xiaoxiang; Zhou, Mingdong

    2015-01-01

    on a probability-interval mixed reliability model, the imprecision of design parameters is modeled as interval uncertainties fluctuating within allowable tolerance bounds. The optimization model is defined as to minimize the total manufacturing cost under mixed reliability index constraints, which are further...... transformed into their equivalent formulations by using the performance measure approach. The optimization problem is then solved with the sequential approximate programming. Meanwhile, a numerically stable algorithm based on the trust region method is proposed to efficiently update the target performance...

  19. Optimal Input Design for Aircraft Parameter Estimation using Dynamic Programming Principles

    Science.gov (United States)

    Morelli, Eugene A.; Klein, Vladislav

    1990-01-01

    A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.

  20. The optimization of the nonlinear parameters in the transcorrelated method: the hydrogen molecule

    International Nuclear Information System (INIS)

    Huggett, J.P.; Armour, E.A.G.

    1976-01-01

    The nonlinear parameters in a transcorrelated calculation of the groundstate energy and wavefunction of the hydrogen molecule are optimized using the method of Boys and Handy (Proc. R. Soc. A.; 309:195 and 209, 310:43 and 63, 311:309 (1969)). The method gives quite accurate results in all cases and in some cases the results are highly accurate. This is the first time the method has been applied to the optimization of a term in the correlation function which depends linearly on the interelectronic distance. (author)

  1. Fully automated segmentation of a hip joint using the patient-specific optimal thresholding and watershed algorithm.

    Science.gov (United States)

    Kim, Jung Jin; Nam, Jimin; Jang, In Gwun

    2018-02-01

    Automated segmentation with high accuracy and speed is a prerequisite for FEA-based quantitative assessment with a large population. However, hip joint segmentation has remained challenging due to a narrow articular cartilage and thin cortical bone with a marked interindividual variance. To overcome this challenge, this paper proposes a fully automated segmentation method for a hip joint that uses the complementary characteristics between the thresholding technique and the watershed algorithm. Using the golden section method and load path algorithm, the proposed method first determines the patient-specific optimal threshold value that enables reliably separating a femur from a pelvis while removing cortical and trabecular bone in the femur at the minimum. This provides regional information on the femur. The watershed algorithm is then used to obtain boundary information on the femur. The proximal femur can be extracted by merging the complementary information on a target image. For eight CT images, compared with the manual segmentation and other segmentation methods, the proposed method offers a high accuracy in terms of the Dice overlap coefficient (97.24 ± 0.44%) and average surface distance (0.36 ± 0.07 mm) within a fast timeframe in terms of processing time per slice (1.25 ± 0.27 s). The proposed method also delivers structural behavior which is close to that of the manual segmentation, with a small mean average relative error in the risk factor (4.99%). The segmentation results show that, without the aid of a prerequisite dataset and users' manual intervention, the proposed method can segment a hip joint as fast as the simplified Kang (SK)-based automated segmentation, while maintaining the segmentation accuracy at a level similar to that of the snake-based semi-automated segmentation. Copyright © 2017 Elsevier B.V. All rights reserved.
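The golden section method used to find the patient-specific threshold is a standard one-dimensional search over a unimodal objective. A generic sketch (the CT-specific objective built from the load path algorithm is not reproduced; a toy quadratic stands in):

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Golden-section search for a minimizer of a unimodal f on [a, b].
    In the paper such a search runs over candidate threshold values; here
    it is shown on a toy objective."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:                           # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                 # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

x_star = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
print(x_star)
```

Each iteration shrinks the bracket by the golden ratio while reusing one function evaluation, which keeps the number of (expensive) objective evaluations low.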

  2. Parameter estimation and uncertainty quantification in a biogeochemical model using optimal experimental design methods

    Science.gov (United States)

    Reimer, Joscha; Piwonski, Jaroslaw; Slawig, Thomas

    2016-04-01

    The statistical significance of any model-data comparison strongly depends on the quality of the data used and on the criterion used to measure the model-to-data misfit. The statistical properties of the data (such as mean values, variances and covariances) should be taken into account by choosing a criterion such as ordinary, weighted or generalized least squares. Moreover, the criterion can be restricted to regions or model quantities of special interest. This choice influences the quality of the model output (including unmeasured quantities) and the results of a parameter estimation or optimization process. We have estimated the parameters of a three-dimensional, time-dependent marine biogeochemical model describing the phosphorus cycle in the ocean. For this purpose, we have developed a statistical model for measurements of phosphate and dissolved organic phosphorus. This statistical model includes variances and correlations varying with the time and location of the measurements. We compared the obtained estimates of model output and parameters for different criteria. Another question is whether (and which) further measurements would increase the model's quality at all. Using experimental design criteria, the information content of measurements can be quantified. This may refer to the uncertainty in unknown model parameters as well as the uncertainty regarding which model is closer to reality. By (another) optimization, optimal measurement properties such as locations, time instants and quantities to be measured can be identified. We have optimized such properties for additional measurements for the parameter estimation of the marine biogeochemical model. For this purpose, we have quantified the uncertainty in the optimal model parameters and in the model output itself with respect to the uncertainty in the measurement data, using the (Fisher) information matrix. Furthermore, we have calculated the uncertainty reduction by additional measurements depending on time.
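    The role of the Fisher information matrix can be sketched with a toy least-squares problem. The sensitivities and measurement variances below are invented for illustration; the paper's biogeochemical model is of course far larger:

```python
# Minimal sketch: for a least-squares fit, the Fisher information matrix is
# F = J^T W J, where J holds model sensitivities d(output_i)/d(parameter_j)
# and W is the inverse measurement covariance. The parameter covariance is
# approximated by F^{-1}: more informative measurements -> smaller covariance.

J = [[1.0, 0.5],   # sensitivities of 3 measurements w.r.t. 2 parameters
     [0.2, 1.0],
     [0.8, 0.3]]
sigma2 = [0.1, 0.2, 0.1]          # measurement variances (uncorrelated)

# F[j][k] = sum_i J[i][j] * J[i][k] / sigma2[i]
F = [[sum(J[i][j] * J[i][k] / sigma2[i] for i in range(3))
      for k in range(2)] for j in range(2)]

# invert the 2x2 Fisher matrix to obtain the parameter covariance
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
cov = [[ F[1][1] / det, -F[0][1] / det],
       [-F[1][0] / det,  F[0][0] / det]]
```

    Adding a candidate measurement appends a row to J (and an entry to sigma2); recomputing cov quantifies the uncertainty reduction that the measurement would buy, which is the core of the experimental-design criteria mentioned above.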

  3. Optimization of soil hydraulic model parameters using Synthetic Aperture Radar data: an integrated multidisciplinary approach

    Science.gov (United States)

    Mattia, F.; Pauwels, V. R.; Balenzano, A.; Satalino, G.; Skriver, H.; Verhoest, N. E.

    2008-12-01

    It is widely recognized that Synthetic Aperture Radar (SAR) data are a very valuable source of information for modeling the interactions between the land surface and the atmosphere. During the last couple of decades, most research on the use of SAR data in hydrologic applications has focused on the retrieval of land and bio-geophysical parameters (e.g. soil moisture contents). One relatively unexplored issue is the optimization of soil hydraulic model parameters, such as hydraulic conductivity values, through remote sensing, because no direct relationships between the remote sensing observations (more specifically, radar backscatter values) and the parameter values can be derived. However, land surface models can provide these relationships. The objective of this study is to retrieve a number of soil physical model parameters through a combination of remote sensing and land surface modeling. Spatially distributed, multitemporal SAR-based soil moisture maps form the basis of the study. The surface soil moisture values are used in a parameter estimation procedure based on the Extended Kalman Filter equations; the land surface model is thus used to determine the relationship between the soil physical parameters and the remote sensing data. An analysis is then performed relating the retrieved soil parameters to the soil texture data available over the study area. The results of the study show that there is potential to retrieve soil physical model parameters through a combination of land surface modeling and remote sensing.
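    A scalar toy version of such EKF-based parameter estimation is sketched below. The observation operator, noise levels and "true" parameter are invented; the actual land surface model and SAR observations are far richer, but the predict/update structure is the same:

```python
# Scalar sketch (not the paper's model): estimate a soil parameter theta
# from noisy "observations" y = h(theta) + noise, using the EKF update
# with a random-walk parameter state.
import random
random.seed(1)

def h(theta):            # hypothetical observation operator
    return 2.0 * theta + 0.5 * theta ** 2

theta_true = 1.5
theta, P = 0.5, 1.0      # initial estimate and its variance
Q, R = 1e-4, 0.05        # process and observation noise variances

for _ in range(200):
    y = h(theta_true) + random.gauss(0.0, R ** 0.5)   # synthetic measurement
    P += Q                              # predict (random-walk parameter)
    H = 2.0 + theta                     # Jacobian dh/dtheta at the estimate
    K = P * H / (H * P * H + R)         # Kalman gain
    theta += K * (y - h(theta))         # measurement update
    P *= (1.0 - K * H)                  # variance update
```

    After enough updates the estimate settles near theta_true and P reflects the remaining parameter uncertainty.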

  4. OPTESIM, a versatile toolbox for numerical simulation of electron spin echo envelope modulation (ESEEM) that features hybrid optimization and statistical assessment of parameters

    Science.gov (United States)

    Sun, Li; Hernandez-Guzman, Jessica; Warncke, Kurt

    2009-09-01

    Electron spin echo envelope modulation (ESEEM) is a technique of pulsed-electron paramagnetic resonance (EPR) spectroscopy. The analysis of ESEEM data to extract information about the nuclear and electronic structure of a disordered (powder) paramagnetic system requires accurate and efficient numerical simulations. A single coupled nucleus of known nuclear g value (gN) and spin I = 1 can have up to eight adjustable parameters in the nuclear part of the spin Hamiltonian. We have developed OPTESIM, an ESEEM simulation toolbox, for automated numerical simulation of powder two- and three-pulse one-dimensional ESEEM for an arbitrary number (N) and type (I, gN) of coupled nuclei, and arbitrary mutual orientations of the hyperfine tensor principal axis systems for N > 1. OPTESIM is based in the Matlab environment, and includes the following features: (1) a fast algorithm for translation of the spin Hamiltonian into simulated ESEEM, (2) different optimization methods that can be hybridized to achieve an efficient coarse-to-fine grained search of the parameter space and convergence to a global minimum, (3) statistical analysis of the simulation parameters, which allows the identification of simultaneous confidence regions at specific confidence levels. OPTESIM also includes a geometry-preserving spherical averaging algorithm as default for N > 1, and global optimization over multiple experimental conditions, such as the dephasing time (τ) for three-pulse ESEEM, and external magnetic field values. Application examples for simulation of 14N coupling (N = 1, N = 2) in biological and chemical model paramagnets are included. Automated, optimized simulations using OPTESIM lead to convergence on dramatically shorter time scales relative to manual simulations.

  5. IDENTIFICATION OF OPTIMAL PARAMETERS OF REINFORCED CONCRETE STRUCTURES WITH ACCOUNT FOR THE PROBABILITY OF FAILURE

    Directory of Open Access Journals (Sweden)

    Filimonova Ekaterina Aleksandrovna

    2012-10-01

    The author suggests splitting the aforementioned parameters into two groups, namely natural parameters and value-related parameters; the latter are introduced to assess the costs of development, transportation, construction and operation of a structure, as well as the costs of its potential failure. The author proposes a new, improved methodology for the identification of the above parameters that ensures optimal solutions to non-linear objective functions accompanied by the non-linear restrictions that are critical to the design of reinforced concrete structures. Any structural failure may be interpreted as the bounce of a random process associated with the surplus bearing capacity into the negative domain. Monte Carlo numerical methods make it possible to assess these bounces into the unacceptable domain.
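    The Monte Carlo assessment of such bounces can be sketched as follows. The distributions and their parameters are purely illustrative; failure is simply the event that the surplus bearing capacity R - S drops below zero:

```python
# Monte Carlo estimate of the failure probability P(R - S < 0), where R is
# the (random) resistance and S the (random) load effect. Numbers are
# hypothetical, not taken from the paper.
import random
random.seed(42)

def sample_margin():
    R = random.gauss(30.0, 3.0)   # resistance (capacity)
    S = random.gauss(20.0, 2.5)   # load effect (demand)
    return R - S

N = 100_000
failures = sum(1 for _ in range(N) if sample_margin() < 0.0)
p_fail = failures / N
```

    For these numbers the margin is Gaussian with mean 10 and standard deviation ≈ 3.9, so the exact failure probability is about 0.005; the sampling estimate converges to it at a rate of 1/√N.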

  6. Parameter Identification of Static Friction Based on An Optimal Exciting Trajectory

    Science.gov (United States)

    Tu, X.; Zhao, P.; Zhou, Y. F.

    2017-12-01

    In this paper, we focus on how to improve the identification efficiency of friction parameters in a robot joint. First, a static friction model that has only linear dependencies with respect to its parameters is adopted so that the servomotor dynamics can be linearized. The traditional exciting trajectory based on a Fourier series is modified by replacing the constant term with a quintic polynomial to ensure boundary continuity of speed and acceleration. Then, the Fourier-related parameters are optimized by a genetic algorithm (GA) in which the condition number of the regression matrix is set as the fitness function. Finally, compared with a constant-velocity tracking experiment, the friction parameters obtained from the exciting trajectory experiment yield similar results, with the advantage of reduced experiment time.
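    The fitness function here, the condition number of the regression matrix, can be computed directly. Below is a sketch for a two-parameter friction model F = Fc·sign(v) + Fv·v, which is linear in (Fc, Fv); the velocity trajectory is a placeholder, not an optimized exciting trajectory:

```python
# For a regressor A with columns [sign(v), v], cond(A) = sqrt(lmax/lmin),
# where lmax, lmin are the eigenvalues of the 2x2 normal matrix A^T A.
# A condition number near 1 makes the least-squares identification robust;
# the GA would tune the trajectory coefficients to minimize this value.
import math

def sign(x):
    return (x > 0) - (x < 0)

# velocities sampled along a hypothetical (non-optimized) trajectory
v = [0.1 * math.sin(2 * math.pi * k / 50) + 0.2 for k in range(50)]

a11 = sum(sign(x) ** 2 for x in v)     # entries of A^T A
a12 = sum(sign(x) * x for x in v)
a22 = sum(x * x for x in v)

tr, det = a11 + a22, a11 * a22 - a12 ** 2
lmax = (tr + math.sqrt(tr * tr - 4 * det)) / 2
lmin = (tr - math.sqrt(tr * tr - 4 * det)) / 2
cond = math.sqrt(lmax / lmin)
```

    A GA fitness evaluation would rebuild v from candidate Fourier coefficients and return cond; lower is fitter.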

  7. Application of Powell's optimization method to surge arrester circuit models' parameters

    Energy Technology Data Exchange (ETDEWEB)

    Christodoulou, C.A.; Stathopulos, I.A. [National Technical University of Athens, School of Electrical and Computer Engineering, 9 Iroon Politechniou St., Zografou Campus, 157 80 Athens (Greece); Vita, V.; Ekonomou, L.; Chatzarakis, G.E. [A.S.PE.T.E. - School of Pedagogical and Technological Education, Department of Electrical Engineering Educators, N. Heraklion, 141 21 Athens (Greece)

    2010-08-15

    Powell's optimization method has been used for the evaluation of surge arrester model parameters. The proper modelling of metal-oxide surge arresters and the right selection of equivalent circuit parameters are very significant issues, since the quality and reliability of lightning performance studies can be improved with a more efficient representation of the arresters' dynamic behavior. The proposed approach selects optimum arrester model equivalent circuit parameter values by minimizing the error between the simulated peak residual voltage value and that given by the manufacturer. Application of the method is performed on a 120 kV metal oxide arrester. The use of the obtained optimum parameter values significantly reduces the relative error between the simulated and the manufacturer's peak residual voltage values, demonstrating the effectiveness of the method. (author)
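    A minimal derivative-free direction-set minimizer in the spirit of Powell's method is sketched below, applied to a stand-in residual-voltage error. The toy "simulate" model and the target value are hypothetical; the actual arrester equivalent circuit is not reproduced here:

```python
# Direction-set minimization: repeatedly perform a 1D line search along each
# coordinate direction. (Full Powell's method also updates the direction set;
# that refinement is omitted in this sketch.)

def line_min(f, x, d, step=0.5, shrink=0.5, iters=40):
    """Crude derivative-free line search along direction d from point x."""
    a = 0.0
    for _ in range(iters):
        for cand in (a + step, a - step):
            if f([xi + cand * di for xi, di in zip(x, d)]) < \
               f([xi + a * di for xi, di in zip(x, d)]):
                a = cand
                break
        else:
            step *= shrink        # no improvement: refine the step
    return [xi + a * di for xi, di in zip(x, d)]

def powell_like(f, x, rounds=20):
    dirs = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(rounds):
        for d in dirs:
            x = line_min(f, x, d)
    return x

target = 332.0                                              # hypothetical datasheet value
simulate = lambda p: 300.0 + 2.0 * p[0] + 0.5 * p[1] ** 2   # toy "residual voltage" model
error = lambda p: (simulate(p) - target) ** 2

p_opt = powell_like(error, [0.0, 0.0])
```

    The appeal of Powell-style methods here is that the circuit simulation is a black box: only error evaluations are needed, no derivatives.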

  8. Analysis and optimization of machining parameters of laser cutting for polypropylene composite

    Science.gov (United States)

    Deepa, A.; Padmanabhan, K.; Kuppan, P.

    2017-11-01

    The present work describes the machining of a self-reinforced polypropylene composite fabricated using the hot compaction method. The objective of the experiment is to find optimum machining parameters for polypropylene (PP). Laser power and machining speed were the parameters considered, with tensile and flexural tests as responses. The Taguchi method is used for experimentation, Grey Relational Analysis (GRA) for multiple process parameter optimization, and ANOVA (Analysis of Variance) to find the impact of each process parameter. Polypropylene has a wide range of applications: it is used in the form of foam in model aircraft and other radio-controlled vehicles, as thin sheets (∼2-20 μm) used as a dielectric, and in piping systems; it has also been used in hernia and pelvic organ repair, or to protect against new hernias in the same location.

  9. OPTIMIZATION OF PROCESS PARAMETERS TO MINIMIZE ANGULAR DISTORTION IN GAS TUNGSTEN ARC WELDED STAINLESS STEEL 202 GRADE PLATES USING PARTICLE SWARM OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    R. SUDHAKARAN

    2012-04-01

    This paper presents a study on the optimization of process parameters using particle swarm optimization to minimize angular distortion in 202 grade stainless steel gas tungsten arc welded plates. Angular distortion is a major problem, and the most pronounced among the different types of distortion in butt welded plates. The process control parameters chosen for the study are welding gun angle, welding speed, plate length, welding current and gas flow rate. The experiments were conducted using the design of experiments technique, with a five-factor, five-level central composite rotatable design with full replication. A mathematical model was developed correlating the process parameters with angular distortion, and a source code was developed in MATLAB 7.6 to perform the optimization. The optimal process parameters gave a value of 0.0305° for angular distortion, which demonstrates the accuracy of the model developed. The results indicate that the optimized values of the process parameters are capable of producing welds with minimum distortion.
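    A bare-bones PSO loop of the kind described can be sketched as follows, minimizing a quadratic surrogate that stands in for the regression model of angular distortion. The surrogate, its optimum, and the swarm settings are all invented for illustration:

```python
# Particle swarm optimization: each particle tracks its personal best (pbest)
# and is attracted toward it and toward the global best (gbest).
import random
random.seed(7)

def objective(x):                      # hypothetical distortion surrogate
    return (x[0] - 1.2) ** 2 + (x[1] + 0.4) ** 2 + 0.03

dim, n_particles, iters = 2, 20, 100
w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration weights

pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=objective)

for _ in range(iters):
    for i in range(n_particles):
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
            if objective(pos[i]) < objective(gbest):
                gbest = pos[i][:]
```

    In the paper's setting the objective would be the fitted regression model of distortion evaluated over the five process parameters, with bounds on each.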

  10. Recurrent oligomers in proteins: an optimal scheme reconciling accurate and concise backbone representations in automated folding and design studies.

    Science.gov (United States)

    Micheletti, C; Seno, F; Maritan, A

    2000-09-01

    A novel scheme is introduced to capture the spatial correlations of consecutive amino acids in naturally occurring proteins. This knowledge-based strategy is able to carry out optimally automated subdivisions of protein fragments into classes of similarity. The goal is to provide the minimal set of protein oligomers (termed "oligons" for brevity) that is able to represent any other fragment. At variance with previous studies in which recurrent local motifs were classified, our concern is to provide simplified protein representations that have been optimised for use in automated folding and/or design attempts. In such contexts, it is paramount to limit the number of degrees of freedom per amino acid without incurring loss of accuracy of structural representations. The suggested method finds, by construction, the optimal compromise between these needs. Several possible oligon lengths are considered. It is shown that meaningful classifications cannot be done for lengths greater than six or smaller than four. Different contexts are considered for which oligons of length five or six are recommendable. With only a few dozen oligons of such length, virtually any protein can be reproduced within typical experimental uncertainties. Structural data for the oligons are made publicly available.

  11. Gravity-Assist Trajectories to the Ice Giants: An Automated Method to Catalog Mass- or Time-Optimal Solutions

    Science.gov (United States)

    Hughes, Kyle M.; Knittel, Jeremy M.; Englander, Jacob A.

    2017-01-01

    This work presents an automated method of calculating mass- (or time-) optimal gravity-assist trajectories without a priori knowledge of the flyby-body combination. Since gravity assists are particularly crucial for reaching the outer Solar System, we use the Ice Giants, Uranus and Neptune, as example destinations for this work. Catalogs are also provided that list the most attractive trajectories found over launch dates ranging from 2024 to 2038. The tool developed to implement this method, called the Python EMTG Automated Trade Study Application (PEATSA), iteratively runs the Evolutionary Mission Trajectory Generator (EMTG), a NASA Goddard Space Flight Center in-house trajectory optimization tool. EMTG finds gravity-assist trajectories with impulsive maneuvers using a multiple-shooting structure along with stochastic methods (such as monotonic basin hopping) and may be run with or without an initial guess provided. PEATSA runs instances of EMTG in parallel over a grid of launch dates. After each set of runs completes, the best results within a neighborhood of launch dates are used to seed all other cases in that neighborhood, allowing the solutions across the range of launch dates to improve over each iteration. The results here are compared against trajectories found using a grid-search technique, and PEATSA is found to outperform the grid-search results for most launch years considered.

  12. Parameter Estimation for Coupled Hydromechanical Simulation of Dynamic Compaction Based on Pareto Multiobjective Optimization

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2015-01-01

    This paper presents a parameter estimation method based on a coupled hydromechanical model of dynamic compaction and the Pareto multiobjective optimization technique. The hydromechanical model of dynamic compaction is established in the FEM program LS-DYNA. The multiobjective optimization algorithm, the Nondominated Sorting Genetic Algorithm (NSGA-II), is integrated with the numerical model to identify soil parameters from multiple sources of field data. A field case study is used to demonstrate the capability of the proposed method. The observed pore water pressure and crater depth at the early blows of dynamic compaction are used simultaneously to estimate the soil parameters. The robustness of the back-estimated parameters is further illustrated by a forward prediction. Results show that the back-analyzed soil parameters can reasonably predict lateral displacements and give generally acceptable predictions of dynamic compaction for an adjacent location. In addition, for prediction of the ground response at continuous blows, the prediction based on the second blow is more accurate than that based on the first blow, due to the hardening and strengthening of the soil during continuous compaction.
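    The Pareto-dominance test at the heart of NSGA-II can be sketched in a few lines. The two objective values per candidate below are hypothetical misfits, not the paper's data:

```python
# Candidate parameter sets are compared on several objectives at once (here,
# two misfit values to be minimized); the nondominated front is retained.

def dominates(a, b):
    """a dominates b if it is no worse on every objective and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# (misfit_pore_pressure, misfit_crater_depth) for five candidate parameter sets
candidates = [(0.9, 0.2), (0.3, 0.8), (0.5, 0.5), (0.6, 0.6), (1.0, 1.0)]

front = [c for c in candidates
         if not any(dominates(other, c) for other in candidates if other != c)]
```

    No single candidate minimizes both misfits at once, so NSGA-II returns the whole front and the analyst picks a trade-off from it.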

  13. Optimizing parameters of a technical system using quality function deployment method

    Science.gov (United States)

    Baczkowicz, M.; Gwiazda, A.

    2015-11-01

    The article shows the practical use of Quality Function Deployment (QFD) on the example of a mechanized mining support. First, it gives a short description of the method and shows what the design process looks like from the constructor's point of view. The proposed method allows optimizing construction parameters, comparing them, and adapting them to customer requirements. QFD helps to determine the full set of crucial construction parameters, and then their importance and the difficulty of their execution. Second, it presents the chosen technical system and its construction, with figures of the existing and the future optimized model. The construction parameters were selected from the designer's point of view. The method helps to specify a complete set of construction parameters from the point of view of the designed technical system and customer requirements. The QFD matrix can be adjusted depending on design needs, and not every part of it has to be considered: designers can choose which parts are the most important. This makes QFD a very flexible tool. Most important is to define the relationships occurring between parameters; that part cannot be eliminated from the analysis.

  14. Multiobjective Optimization of Injection Molding Process Parameters for the Precision Manufacturing of Plastic Optical Lens

    Directory of Open Access Journals (Sweden)

    Junhui Liu

    2017-01-01

    Injection molding process parameters (IMPP) have a significant effect on the optical performance and surface waviness of precision plastic optical lenses. This paper presents a set of procedures for the optimization of IMPP, with the haze ratio (HR), reflecting the optical performance, and peak-to-valley 20 (PV20), reflecting the surface waviness, as the optimization objectives. First, an orthogonal experiment was carried out with the Taguchi method, and the results were analyzed by ANOVA to screen out the IMPP having a significant effect on the objectives. Then, a 3⁴ full-factorial experiment was conducted on the key IMPP, and the experimental results were used as the training and testing samples. The BPNN algorithm and the M-SVR algorithm were applied to establish the mapping relationships between the IMPP and the objectives. Finally, multiple-objective optimization was performed by applying the nondominated sorting genetic algorithm (NSGA-II), with the built M-SVR models as the fitness function of the objectives, to obtain a Pareto-optimal set, which improved the quality of the plastic optical lens comprehensively. Experimental verification of the optimization results gave a mean prediction error (MPE) of 7.16% for HR and 9.78% for PV20, indicating that the optimization method has high accuracy.
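    A 3⁴ full-factorial design of the kind described enumerates every combination of four parameters at three levels. The parameter names and level values below are placeholders, not the paper's settings:

```python
# Enumerate a 3^4 full-factorial design: 3 levels for each of 4 parameters
# gives 81 experimental runs.
from itertools import product

levels = {
    "melt_temp":     [240, 260, 280],   # degC, hypothetical
    "pack_pressure": [60, 80, 100],     # MPa, hypothetical
    "pack_time":     [2, 4, 6],         # s, hypothetical
    "cool_time":     [10, 20, 30],      # s, hypothetical
}

runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
# len(runs) == 3**4 == 81 experimental settings
```

    Each run's measured HR and PV20 then become one training sample for the surrogate models.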

  15. Optimization of Cutting Parameters on Delamination of Drilling Glass-Polyester Composites

    Directory of Open Access Journals (Sweden)

    Majid Habeeb Faidh-Allah

    2018-02-01

    This paper studies the effect of cutting parameters (spindle speed and feed rate) on delamination during the drilling of glass-polyester composites. The drilling was done on a CNC machine with a 10 mm diameter high-speed steel (HSS) drill bit. The Taguchi technique with an L16 orthogonal layout was used to analyze the effect of the parameters on the delamination factor. The optimal experiment was no. 13, with a spindle speed of 1273 rpm and a feed of 0.05 mm/rev, giving a minimum delamination factor of 1.28.
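    In a Taguchi analysis of delamination, trials are typically ranked with a smaller-the-better signal-to-noise ratio. A minimal sketch, with invented replicate values:

```python
# Smaller-the-better S/N ratio: S/N = -10 * log10(mean of y^2).
# A higher S/N means less (and less variable) delamination.
import math

def sn_smaller_is_better(ys):
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

trial_a = [1.28, 1.30, 1.29]   # hypothetical replicate delamination factors
trial_b = [1.45, 1.50, 1.48]

# the trial with the higher S/N ratio is preferred
best = max([trial_a, trial_b], key=sn_smaller_is_better)
```

    Averaging these S/N values per factor level is what lets the L16 layout attribute the response to spindle speed and feed rate separately.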

  16. Machining parameters optimization during machining of Al/5 wt% alumina metal matrix composite by fiber laser

    Science.gov (United States)

    Ghosal, Arindam; Patil, Pravin

    2017-06-01

    This experimental work presents a study of the machining parameters of an Ytterbium fiber laser during machining of 5 mm thick Aluminium/5 wt% Alumina MMC (Metal Matrix Composite). Response surface methodology (RSM) is used for the optimization, i.e. to minimize hole taper and maximize the Material Removal Rate (MRR). A mathematical model has been developed and ANOVA performed, correlating the interactive and higher-order influences of the Ytterbium fiber laser machining parameters (laser power, modulation frequency, gas pressure, wait time, pulse width) on the MRR and hole taper during the machining process.
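    The core idea of RSM, fitting a low-order model to measured responses and taking its stationary point as the predicted optimum, can be shown in one dimension. The power and taper numbers below are illustrative only, not the paper's data:

```python
# Fit a quadratic exactly through three (setting, response) points and
# return the stationary point of the fitted parabola (Lagrange form).

def parabola_vertex(points):
    """Vertex x-coordinate of the quadratic through three (x, y) points."""
    (x1, y1), (x2, y2), (x3, y3) = points
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    return -b / (2 * a)

# hole taper (deg) measured at three laser-power settings (W), hypothetical
obs = [(200.0, 0.80), (300.0, 0.55), (400.0, 0.70)]
best_power = parabola_vertex(obs)
```

    A full RSM study does the same thing with a multi-factor second-order polynomial fitted by least squares, but the "fit, then locate the stationary point" logic is identical.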

  17. Optimization of Squeeze Casting Parameters for 2017 A Wrought Al Alloy Using Taguchi Method

    Directory of Open Access Journals (Sweden)

    Najib Souissi

    2014-04-01

    This study applies the Taguchi method to investigate the relationship between the ultimate tensile strength, hardness and process variables in squeeze casting of a 2017 A wrought aluminium alloy. The effects of various casting parameters, including squeeze pressure, melt temperature and die temperature, were studied. The objectives of the Taguchi method for the squeeze casting process are to establish the optimal combination of process parameters and to reduce the variation in quality using only a few experiments. The experimental results show that the squeeze pressure significantly affects the microstructure and the mechanical properties of the 2017 A Al alloy.

  18. Saturne II synchrotron injector parameters operation and control: computerization and optimization

    International Nuclear Information System (INIS)

    Lagniel, J.M.

    1983-01-01

    The injector control system has been studied with the aims of improving beam quality, increasing versatility, and providing better machine availability. It was chosen to realize the three following functions: acquisition of the principal parameters of the process, so as to control them quickly and to be warned if one of them is wrong (monitoring); the control of those parameters, one by one or by families (starting, operating point); and the search for an optimal control (on a model or on the process itself).

  19. Stepwise optimization and global chaos of nonlinear parameters in exact calculations of few-particle systems

    International Nuclear Information System (INIS)

    Frolov, A.M.

    1986-01-01

    The problem of exact variational calculations of few-particle systems in the exponential basis of the relative coordinates using nonlinear parameters is studied. The techniques of stepwise optimization and global chaos of nonlinear parameters are used to calculate the S and P states of homonuclear muonic molecules with an error of no more than +0.001 eV. The global-chaos technique has also proved successful in the case of the nuclear systems ³H and ³He.

  20. The optimal extraction parameters and anti-diabetic activity of flavonoids from Ipomoea batatas leaf.

    Science.gov (United States)

    Li, Fenglin; Li, Qingwang; Gao, Dawei; Peng, Yong

    2009-03-07

    Extraction parameters of flavonoids from Ipomoea batatas leaf (FIBL) and the anti-diabetic activity of FIBL on alloxan-induced diabetic mice were studied. The optimal extraction parameters of FIBL, obtained by a single-factor test and an orthogonal test, were as follows: ethanol concentration 60%, ratio of solvent to raw material 30, extraction temperature 75 degrees and extraction time 1.5 h, giving an FIBL extraction yield of 5.94%. FIBL treatment (50, 100, and 150 mg/kg body weight) for 28 days resulted in a significant decrease in the concentrations of fasting blood glucose (FBG), total cholesterol (TC) and triglyceride (TG) in diabetes mellitus mice. Furthermore, FIBL significantly increased body weight (bw) and the serum high-density lipoprotein cholesterol (HDL-c) level. The data demonstrated that FIBL at a dose of 100 mg/kg bw exhibited the optimal effect. These results suggest that FIBL can control blood glucose and modulate blood lipid metabolism in diabetes mellitus mice.

  1. Performance Evaluation and Parameter Optimization of SoftCast Wireless Video Broadcast

    Directory of Open Access Journals (Sweden)

    Dongxue Yang

    2015-08-01

    Wireless video broadcast plays an important role in multimedia communication with the emergence of mobile video applications. However, conventional video broadcast designs suffer from a cliff effect due to separated source and channel encoding. The newly proposed SoftCast scheme employs a cross-layer design, whose reconstructed video quality is proportional to the channel condition. In this paper, we provide the performance evaluation and the parameter optimization of the SoftCast system. Optimization principles on parameter selection are suggested to obtain a better video quality, occupy less bandwidth and/or utilize lower complexity. In addition, we compare SoftCast with H.264 in the LTE EPA scenario. The simulation results show that SoftCast provides a better performance in the scalability to channel conditions and the robustness to packet losses.

  2. Slot Parameter Optimization for Multiband Antenna Performance Improvement Using Intelligent Systems

    Directory of Open Access Journals (Sweden)

    Erdem Demircioglu

    2015-01-01

    This paper discusses bandwidth enhancement for multiband microstrip patch antennas (MMPAs) using symmetrical rectangular/square slots etched on the patch, together with the substrate properties. The slot parameters on the MMPA are modeled using the soft-computing technique of artificial neural networks (ANN). To achieve the best ANN performance, Particle Swarm Optimization (PSO) and Differential Evolution (DE) are applied alongside the ANN's conventional training algorithm to optimize the modeling performance. In this study, the slot parameters are taken as the slot distance to the radiating patch edge, the slot width, and the slot length. Bandwidth enhancement is applied to a formerly designed MMPA fed by a microstrip transmission line attached to the center pin of a 50-ohm SMA connector. The simulated antennas are fabricated and measured, and the measurement results are used for training the artificial intelligence models. The ANN provides 98% model accuracy for rectangular slots and 97% for square slots; ANFIS, by contrast, offers 90% accuracy and lacks resonance-frequency tracking.

  3. Multi-criteria optimization of chassis parameters of Nissan 200 SX for drifting competitions

    Science.gov (United States)

    Maniowski, M.

    2016-09-01

    The objective of this work is to increase the performance of a Nissan 200sx S13 prepared for a quasi-static state of drifting on a circular path with a given constant radius (R=15 m) and tyre-road friction coefficient (μ = 0.9). First, a high-fidelity "miMA" multibody model of the vehicle is formulated. Then, a multicriteria optimization problem is solved, with one of the goals being to maximize the stable drift angle (β) of the vehicle. The decision variables contain 11 parameters of the vehicle chassis (describing the wheel suspension stiffness and geometry) and 2 parameters responsible for the driver's steering and accelerator actions, which control this extreme closed-loop manoeuvre. The optimized chassis setup increases the drift angle by 14%, from 35 to 40 deg.

  4. A Class of Parameter Estimation Methods for Nonlinear Muskingum Model Using Hybrid Invasive Weed Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Aijia Ouyang

    2015-01-01

    Nonlinear Muskingum models are important tools in hydrological forecasting. In this paper, we propose a class of new discretization schemes, including a parameter θ, to approximate the nonlinear Muskingum model based on general trapezoid formulas. These schemes are second-order accurate if θ≠1/3 but, interestingly, when θ=1/3 the accuracy of the presented scheme improves to third order. The schemes are then transformed into an unconstrained optimization problem which can be solved by a hybrid invasive weed optimization (HIWO) algorithm. Finally, a numerical example is provided to illustrate the effectiveness of the present methods. The numerical results substantiate the fact that the presented methods have better precision in estimating the parameters of nonlinear Muskingum models.
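    For orientation, the classic linear Muskingum routing recurrence, a simpler relative of the nonlinear model treated in the paper (the paper's θ-schemes are not reproduced here), can be written in a few lines; the inflow hydrograph and routing constants are illustrative:

```python
# Linear Muskingum routing: O_t = c0*I_t + c1*I_{t-1} + c2*O_{t-1},
# derived from storage S = K*[x*I + (1-x)*O] and dS/dt = I - O.

def muskingum_route(inflow, K, x, dt):
    denom = 2.0 * K * (1.0 - x) + dt
    c0 = (dt - 2.0 * K * x) / denom
    c1 = (dt + 2.0 * K * x) / denom
    c2 = (2.0 * K * (1.0 - x) - dt) / denom   # note: c0 + c1 + c2 == 1
    out = [inflow[0]]                          # assume an initial steady state
    for t in range(1, len(inflow)):
        out.append(c0 * inflow[t] + c1 * inflow[t - 1] + c2 * out[-1])
    return out

# illustrative inflow hydrograph (m^3/s) and routing constants
hydro_in = [22, 23, 35, 71, 103, 111, 109, 100, 86, 71, 59, 47, 39, 32, 28, 24, 22]
hydro_out = muskingum_route(hydro_in, K=2.0, x=0.2, dt=1.0)
# the routed peak is attenuated relative to the inflow peak
```

    Parameter estimation, whether by HIWO or anything else, amounts to choosing K, x (and the exponent of the nonlinear variant) so that the routed outflow matches an observed hydrograph.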

  5. Optimal relations of the parameters ensuring safety during reactor start-up

    International Nuclear Information System (INIS)

    Yurkevich, G.P.

    2004-01-01

    A procedure and equations are suggested for determining the optimal ratios between the parameters that allow the reactor to be brought safely to a critical state. The initial pulse frequency of the pulsed start-up channel and the power of the neutron source can be decreased by a reduced rate of reactivity change during automatic start-up, by placing the pulsed neutron detector in a region with a neutron flux density of up to 5·10¹² s⁻¹cm⁻² at standard power, by a separate period signal for use in the automatic start-up and emergency protection chains, and by a reduced pulse frequency of the start-up channel (equal to 4000 s⁻¹). The procedure and equations for determining the optimal parameters take into account the statistical character of the pulsed detector frequency and of false outlet signals.

  6. Physiochemical parameters optimization for enhanced nisin production by Lactococcus lactis (MTCC 440)

    Directory of Open Access Journals (Sweden)

    Puspadhwaja Mall

    2010-02-01

    The influence of various physiochemical parameters on the growth of Lactococcus lactis subsp. lactis MTCC 440 was studied at shake-flask level for 20 h. Media optimization (MRS broth) was studied to achieve enhanced growth of the organism and also nisin production. The bioassay of nisin was done with the agar diffusion method, using Streptococcus agalactae NCIM 2401 as the indicator strain. MRS broth (6%, w/v) with 0.15 μg/ml of nisin, supplemented with 0.5% (v/v) skimmed milk, was found to be the best for nisin production as well as for the growth of L. lactis. The production of nisin was strongly influenced by the presence of skimmed milk and nisin in the MRS broth. The production of nisin was also affected by the physical parameters: maximum nisin production was at 30 °C, while the optimal temperature for biomass production was 37 °C.

  7. Parameter Optimization for Enhancement of Ethanol Yield by Atmospheric Pressure DBD-Treated Saccharomyces cerevisiae

    International Nuclear Information System (INIS)

    Dong Xiaoyu; Yuan Yulian; Tang Qian; Dou Shaohua; Di Lanbo; Zhang Xiuling

    2014-01-01

    In this study, Saccharomyces cerevisiae (S. cerevisiae) was exposed to a dielectric barrier discharge (DBD) plasma to improve its ethanol production capacity during fermentation. Response surface methodology (RSM) was used to optimize the discharge-associated parameters of the DBD for the purpose of maximizing the ethanol yield achieved by DBD-treated S. cerevisiae. Based on single-factor experiments, a mathematical model was established using a Box-Behnken central composite experimental design, with plasma exposure time, power supply voltage, and exposed-sample volume as impact factors and ethanol yield as the response, followed by response surface analysis. The optimal experimental parameters for the plasma discharge-induced enhancement in ethanol yield were a plasma exposure time of 1 min, a power supply voltage of 26 V, and an exposed sample volume of 9 mL. Under these conditions, the resulting ethanol yield was 0.48 g/g, representing an increase of 33% over the control. (plasma technology)

  8. Improvement of properties of aluminosilicate pastes based on optimization of curing parameters

    Science.gov (United States)

    Kočí, Václav; Rovnaníková, Pavla; Černý, Robert

    2017-07-01

    Alkali-activated binders represent a low-energy alternative to traditional binders based on lime or cement. In this paper, a new binder of this type is designed and the influence of curing parameters on its mechanical properties, namely 7-day compressive strength, is investigated. The curing parameters include the curing temperature and the period of exposure. To maximize the compressive strength of the binder, a simplex optimization procedure is applied in order to demonstrate its applicability for this research. The preliminary results indicate that the procedure is able to reach positive results, as the compressive strength is found to increase by ~11%. As this improvement is achieved already after the first optimization step, it can be concluded that this approach has the potential to be more effective than the traditional empirical design that is common in building materials engineering.

  9. Optimizing Decision Preparedness by Adapting Scenario Complexity and Automating Scenario Generation

    Science.gov (United States)

    Dunne, Rob; Schatz, Sae; Flore, Stephen M.; Nicholson, Denise

    2011-01-01

    Klein's recognition-primed decision (RPD) framework proposes that experts make decisions by recognizing similarities between current decision situations and previous decision experiences. Unfortunately, military personnel are often presented with situations that they have not experienced before. Scenario-based training (SBT) can help mitigate this gap. However, SBT remains a challenging and inefficient training approach. To address these limitations, the authors present an innovative formulation of scenario complexity that contributes to the larger research goal of developing an automated scenario generation system. This system will enable trainees to effectively advance through a variety of increasingly complex decision situations and experiences. By adapting scenario complexities and automating generation, trainees will be provided with a greater variety of appropriately calibrated training events, thus broadening their repositories of experience. Preliminary results from empirical testing (N=24) of the proof-of-concept formula are presented, and future avenues of scenario complexity research are also discussed.

  10. Accuracy Analysis and Parameters Optimization in Urban Flood Simulation by PEST Model

    Science.gov (United States)

    Keum, H.; Han, K.; Kim, H.; Ha, C.

    2017-12-01

    The risk of urban flooding has been increasing due to heavy rainfall, flash flooding and rapid urbanization. Rainwater pumping stations and underground reservoirs are used to actively take measures against flooding; however, flood damage in lowlands continues to occur. Inundation in urban areas results from overflow of the sewer network. Therefore, for accurate two-dimensional flood analysis it is important to implement a network system that is intricately entangled within a city, close to the actual physical situation, together with accurate terrain, because of the effects of buildings and roads. The purpose of this study is to propose an optimal scenario construction procedure for watershed partitioning and parameterization for urban runoff analysis and pipe network analysis, and to increase the accuracy of flooded-area prediction through a coupled model. The procedure was verified by applying it to an actual drainage basin in Seoul. In this study, optimization was performed using four parameters: Manning's roughness coefficient for conduits, watershed width, Manning's roughness coefficient for impervious areas, and Manning's roughness coefficient for pervious areas. The calibration range of the parameters was determined using the SWMM manual and ranges used in previous studies, and the parameters were estimated using the automatic calibration tool PEST. The correlation coefficient was high for the scenarios using PEST. The RPE and RMSE also showed high accuracy for the scenarios using PEST. In the case of RPE, the error was in the range of 13.9-28.9% in the no-parameter-estimation scenarios, but in the scenarios using PEST, the error range was reduced to 6.8-25.7%. Based on the results of this study, it can be concluded that more accurate flood analysis is possible when the optimum scenario is selected by determining the appropriate reference conduit for future urban flooding analysis and if the results are applied to various
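
    The PEST-style calibration loop described above — perturb parameters, run the model, score against observations — can be sketched with a toy stand-in for the hydraulic model. The rainfall series, release rule, and roughness value below are all illustrative, not SWMM's; the point is the structure: search a documented parameter range for the value minimizing RMSE.

```python
import numpy as np

def toy_runoff(rain, n):
    """Toy linear-reservoir routing: higher roughness n slows release.
    A stand-in for a real SWMM run, for illustration only."""
    q, store = [], 0.0
    k = 1.0 / (1.0 + 5.0 * n)   # release coefficient derived from roughness
    for r in rain:
        store += r
        out = k * store
        store -= out
        q.append(out)
    return np.array(q)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

rain = np.array([0, 5, 12, 8, 3, 1, 0, 0], dtype=float)
obs = toy_runoff(rain, n=0.015)   # synthetic "observed" hydrograph

# Grid search over a plausible range of Manning's n for conduits
candidates = np.linspace(0.010, 0.030, 201)
errors = [rmse(toy_runoff(rain, n), obs) for n in candidates]
best_n = candidates[int(np.argmin(errors))]
print(f"calibrated n = {best_n:.4f}")   # recovers the true value 0.015
```

    Real PEST replaces the grid search with a Gauss-Levenberg-Marquardt update over all four parameters simultaneously, but the objective (weighted least squares against observations) is the same idea.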

  11. Characterization of PV panel and global optimization of its model parameters using genetic algorithm

    International Nuclear Information System (INIS)

    Ismail, M.S.; Moghavvemi, M.; Mahlia, T.M.I.

    2013-01-01

    Highlights: • The optimization ability of a genetic algorithm is utilized to extract the parameters of a PV panel model. • The effects of solar radiation and temperature variations are taken into account in the fitness function evaluation. • Matlab-Simulink is used to simulate the operation of the PV panel and validate the results. • Different cases are analyzed to ascertain which of them gives more accurate results. • The accuracy and applicability of this approach as a valuable tool for PV modeling are validated. - Abstract: This paper details an improved modeling technique for a photovoltaic (PV) module, utilizing the optimization ability of a genetic algorithm to compute the different parameters of the PV module. The accurate modeling of any PV module depends on the values of these parameters, which are essential for further studies of different PV applications, such as simulation, optimization and the design of hybrid systems that include PV. Global optimization of the parameters, and applicability over the entire range of solar radiation and a wide range of temperatures, are achievable via this approach. The manufacturer's data sheet information is used as the basis for parameter optimization, with an average absolute error fitness function formulated, and a numerical iterative method used to solve the voltage-current relation of the PV module. The results of single-diode and two-diode models are evaluated in order to ascertain which of them is more accurate. Other cases are also analyzed for the purpose of comparison. The Matlab-Simulink environment is used to simulate the operation of the PV module based on the extracted parameters. The simulation results are compared with the data sheet information, which is obtained via experimentation, in order to validate the reliability of the approach. Three types of PV modules
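
    A compact sketch of the approach — a real-coded genetic algorithm minimizing the average absolute error between a single-diode model and measured I-V points — might look like this. The synthetic "measurements", parameter bounds, panel size, and fixed-point solver are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
VT = 0.0258 * 36  # thermal voltage times an assumed 36 series cells

def panel_current(V, p):
    """Single-diode model I = Iph - I0*(exp((V+I*Rs)/(n*VT))-1) - (V+I*Rs)/Rsh,
    solved by simple fixed-point iteration. p = (Iph, I0, n, Rs, Rsh)."""
    Iph, I0, n, Rs, Rsh = p
    I = np.full_like(V, Iph)
    for _ in range(50):
        expo = np.clip((V + I * Rs) / (n * VT), -50.0, 50.0)  # avoid overflow
        I = Iph - I0 * (np.exp(expo) - 1.0) - (V + I * Rs) / Rsh
    return I

V = np.linspace(0.0, 20.0, 40)
true_p = np.array([5.0, 1e-7, 1.3, 0.2, 200.0])   # hidden "true" parameters
I_meas = panel_current(V, true_p)                  # synthetic data sheet curve

def fitness(p):
    """Average absolute error between modelled and 'measured' current."""
    return float(np.mean(np.abs(panel_current(V, p) - I_meas)))

# Minimal real-coded GA: elitism + blend crossover + Gaussian mutation
lo = np.array([1.0, 1e-9, 1.0, 0.0, 50.0])
hi = np.array([8.0, 1e-5, 2.0, 1.0, 500.0])
pop = rng.uniform(lo, hi, size=(60, 5))
for gen in range(100):
    f = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(f)[:10]]
    children = []
    while len(children) < 50:
        a, b = elite[rng.integers(10)], elite[rng.integers(10)]
        w = rng.random(5)
        child = w * a + (1 - w) * b                     # blend crossover
        child += rng.normal(0.0, 0.02, 5) * (hi - lo)   # Gaussian mutation
        children.append(np.clip(child, lo, hi))
    pop = np.vstack([elite, np.array(children)])

best = pop[np.argmin([fitness(p) for p in pop])]
print("best average absolute error (A):", round(fitness(best), 4))
```

    A two-diode variant only changes `panel_current` (a second saturation-current/ideality pair); the GA machinery is unchanged.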

  12. Optimization of operational parameters and bath control for electrodeposition of Ni-Mo-B amorphous alloys

    OpenAIRE

    Marinho,Fabiano A.; Santana,François S. M.; Vasconcelos,André L. S.; Santana,Renato A. C.; Prasad,Shiva

    2002-01-01

    Optimization of the operational parameters of an electrodeposition process for deposition of a boron-containing amorphous metallic layer of nickel-molybdenum alloy onto a cathode, from an electrolytic bath containing nickel sulfate, sodium molybdate, boron phosphate, sodium citrate, sodium 1-dodecylsulfate and ammonia for pH adjustment to 9.5, has been studied. Detailed studies of the effects of bath temperature, mechanical agitation, cathode current density and anode format have led to optimum operation...

  13. Production of sintered alumina from powder; optimization of the sintering parameters for maximum mechanical resistance

    International Nuclear Information System (INIS)

    Rocha, J.C. da.

    1981-02-01

    Pure sintered alumina and the optimization of the sintering parameters to obtain the highest mechanical resistance are discussed. Test specimens are sintered from a fine powder of pure α-phase alumina (Al2O3) at different temperatures and times, in air. The microstructures are analysed with respect to porosity and grain size. Depending on the temperature and the time of sintering, there is a maximum in the mechanical resistance. (A.R.H.) [pt

  14. Optimization of ridge parameters in multivariate generalized ridge regression by plug-in methods

    OpenAIRE

    Nagai, Isamu; Yanagihara, Hirokazu; Satoh, Kenichi

    2012-01-01

    Generalized ridge (GR) regression for a univariate linear model was proposed simultaneously with ridge regression by Hoerl and Kennard (1970). In this paper, we deal with a GR regression for a multivariate linear model, referred to as a multivariate GR (MGR) regression. From the viewpoint of reducing the mean squared error (MSE) of a predicted value, many authors have proposed several GR estimators consisting of ridge parameters optimized by non-iterative methods. By expanding...
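
    As a concrete illustration of ridge parameters optimized by a non-iterative plug-in rule, here is a sketch of generalized ridge for a univariate response, using the Hoerl-Kennard plug-in k_i = σ̂²/α̂_i² in canonical (SVD) coordinates. The data are synthetic; the multivariate (MGR) case of the record generalizes the same shrinkage to a matrix of responses.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 4
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Canonical form: X = U diag(d) W'
U, d, Wt = np.linalg.svd(X, full_matrices=False)
alpha_ols = (U.T @ y) / d            # canonical OLS coefficients
beta_ols = Wt.T @ alpha_ols
sigma2 = np.sum((y - X @ beta_ols) ** 2) / (n - p)

# Plug-in ridge parameters, one per canonical coordinate (Hoerl-Kennard)
k = sigma2 / alpha_ols**2

# Generalized ridge shrinks each coordinate by its own factor d_i^2/(d_i^2+k_i)
alpha_gr = (d**2 / (d**2 + k)) * alpha_ols
beta_gr = Wt.T @ alpha_gr
print("GR estimate:", np.round(beta_gr, 2))
```

    Because every k_i ≥ 0, each canonical coordinate is shrunk toward zero; coordinates with small estimated signal (large k_i) are shrunk hardest, which is where the MSE reduction over OLS comes from.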

  15. Multi-objective optimization problems concepts and self-adaptive parameters with mathematical and engineering applications

    CERN Document Server

    Lobato, Fran Sérgio

    2017-01-01

    This book is aimed at undergraduate and graduate students in applied mathematics or computer science, as a tool for solving real-world design problems. The present work covers fundamentals in multi-objective optimization and applications in mathematical and engineering system design using a new optimization strategy, namely the Self-Adaptive Multi-objective Optimization Differential Evolution (SA-MODE) algorithm. This strategy is proposed in order to reduce the number of evaluations of the objective function through dynamic update of canonical Differential Evolution parameters (population size, crossover probability and perturbation rate). The methodology is applied to solve mathematical functions considering test cases from the literature and various engineering systems design, such as cantilevered beam design, biochemical reactor, crystallization process, machine tool spindle design, rotary dryer design, among others.

  16. SVM classification model in depression recognition based on mutation PSO parameter optimization

    Directory of Open Access Journals (Sweden)

    Zhang Ming

    2017-01-01

    Full Text Available At present, the clinical diagnosis of depression is made mainly through structured interviews by psychiatrists, which lack objective diagnostic measures and thus lead to a higher rate of misdiagnosis. In this paper, a method of depression recognition based on SVM and a mutation particle swarm optimization algorithm is proposed. To address the problem that the particle swarm optimization (PSO) algorithm is easily trapped in local optima, we propose a feedback mutation PSO algorithm (FBPSO) that balances local search and global exploration ability, so that the parameters of the classification model are optimal. We compared the depression classification accuracy of different PSO mutation algorithms, and found that the support vector machine (SVM) classifier based on the feedback mutation PSO algorithm achieves the highest accuracy. Our study provides an important reference for establishing auxiliary diagnostic tools for depression recognition in clinical diagnosis.
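
    A minimal sketch of the general idea — PSO whose stagnation feedback triggers mutation — on a standard multimodal test function. The actual FBPSO update rules, and the SVM hyperparameter search the paper applies it to, differ; the constants below are common textbook choices, not the paper's:

```python
import numpy as np

def rastrigin(x):
    """Classic multimodal benchmark; global minimum 0 at the origin."""
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

rng = np.random.default_rng(2)
n, dim = 30, 2
pos = rng.uniform(-5.12, 5.12, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), rastrigin(pos)
g = pbest[np.argmin(pbest_f)].copy()
g_f, stall = float(np.min(pbest_f)), 0

for it in range(300):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, -5.12, 5.12)
    f = rastrigin(pos)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    if np.min(pbest_f) < g_f:
        g = pbest[np.argmin(pbest_f)].copy()
        g_f, stall = float(np.min(pbest_f)), 0
    else:
        stall += 1
    if stall >= 15:           # feedback: stagnation triggers mutation
        idx = rng.choice(n, n // 5, replace=False)
        pos[idx] = rng.uniform(-5.12, 5.12, (len(idx), dim))
        vel[idx] = 0.0
        stall = 0

print("best value found:", round(g_f, 3))
```

    For the paper's use case, the 2-D search space would instead be the SVM hyperparameters (e.g. penalty C and kernel width), with cross-validated classification accuracy as the fitness.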

  17. Model Predictive Optimal Control of Time-Delay Distributed-Parameter Systems

    Science.gov (United States)

    Nguyen, Nhan

    2006-01-01

    This paper presents an optimal control method for a class of distributed-parameter systems governed by first order, quasilinear hyperbolic partial differential equations that arise in many physical systems. Such systems are characterized by time delays since information is transported from one state to another by wave propagation. A general closed-loop hyperbolic transport model is controlled by a boundary control embedded in a periodic boundary condition. The boundary control is subject to a nonlinear differential equation constraint that models actuator dynamics of the system. The hyperbolic equation is thus coupled with the ordinary differential equation via the boundary condition. Optimality of this coupled system is investigated using variational principles to seek an adjoint formulation of the optimal control problem. The results are then applied to implement a model predictive control design for a wind tunnel to eliminate a transport delay effect that causes a poor Mach number regulation.

  18. SU-E-T-478: IMRT Delivery Parameter Dependence of Dose-Mass Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Couto, M; Mihaylov, I [Univ Miami, Miami, FL (United States)

    2015-06-15

    Purpose: To compare DMH and DVH optimization sensitivity to changes in IMRT delivery parameters. Methods: Two lung and two head and neck (HN) cases were retrospectively optimized using DVH and DMH optimization. For both optimization approaches, changes to two parameters were studied: the number of IMRT segments (5 and 10 per beam) and the minimum segment area (2 and 6 cm²). The number of beams, beam angles, and minimum MUs per segment were the same for both optimization approaches for each patient. During optimization, doses to the organs at risk (OARs) were iteratively lowered until the standard deviation across the PTV was above ~3.0%. For each patient, DVH and DMH plans were normalized such that 95% of the PTV received the same dose. Plan quality was evaluated by dose indices (DIs), which represent the dose delivered to a certain anatomical structure volume. For the lung cases, the DIs assessed included: 1% cord, 33% heart, both lungs 20% and 30%, and 50% esophagus. In the HN cases: 1% cord, 1% brainstem, left/right parotids 50%, 50% larynx, and 50% esophagus. Results: When increasing the number of segments while keeping a small segment area (2 cm²), the average percent changes of all DIs for DVH/DMH optimizations for each patient were: −4.66/4.71, 3.21/3.46, −9.62/21.69 and −3.28/−7.62. For a large segment area (6 cm²): −0.26/−1.46, −5.04/−1.92, −5.23/−2.19 and 4.12/19.63. Results from increasing the segment area while keeping a small number of segments (5 segments/beam) were: 1.41/7.90, 8.17/11.66, 0.09/33.58 and −4.83/−11.60 for each case. For a large number of segments (10 segments/beam): 8.35/1.30, −0.91/5.77, 6.29/7.08 and 2.62/5.16. Conclusion: This preliminary study showed case-dependent results. Changes in IMRT parameters did not show consistent DI changes for either optimization approach. A larger population of patients is warranted for such a comparison.

  19. Study on Parameter Optimization Design of Drum Brake Based on Hybrid Cellular Multiobjective Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Yi Zhang

    2012-01-01

    Full Text Available In consideration of the significant role the brake plays in ensuring the fast and safe running of vehicles, and since present parameter optimization design models of brakes are far from practical application, this paper proposes a multiobjective optimization model of a drum brake, aiming at maximizing the braking efficiency and minimizing the volume and temperature rise of the drum brake. As the commonly used optimization algorithms have some deficiencies, we present a differential evolution cellular multiobjective genetic algorithm (DECell) by introducing a differential evolution strategy into the canonical cellular genetic algorithm for tackling this problem. For DECell, the obtained Pareto front can be as close as possible to the exact Pareto front, and the diversity of nondominated individuals is better maintained. Experiments on the test functions reveal that DECell performs well in solving high-dimension nonlinear multiobjective problems. And the results of optimizing the new brake model indicate that DECell obviously outperforms the compared popular algorithm NSGA-II concerning the number of obtained brake design parameter sets, and the speed and stability of finding them.

  20. Real-time stroke volume measurements for the optimization of cardiac resynchronization therapy parameters.

    Science.gov (United States)

    Dizon, José M; Quinn, T Alexander; Cabreriza, Santos E; Wang, Daniel; Spotnitz, Henry M; Hickey, Kathleen; Garan, Hasan

    2010-09-01

    We investigated the utility of real-time stroke volume (SV) monitoring via the arterial pulse power technique to optimize cardiac resynchronization therapy (CRT) parameters at implant and prospectively evaluated the clinical and echocardiographic results. Fifteen patients with ischaemic or non-ischaemic dilated cardiomyopathy, sinus rhythm, Class III congestive heart failure, and QRS >150 ms underwent a baseline 2D echocardiogram (echo), 6 min walk distance, and quality of life (QOL) questionnaire within 1 week of implant. Following implant, 0.3 mmol lithium chloride was injected to calibrate SV via dilution curve. Atrioventricular (AV) delay (90, 120, 200 ms, baseline: atrial pacing only) and V-V delay (-80 to 80 ms in 20 ms increments) were varied every 60 s. The radial artery pulse power autocorrelation method (PulseCO algorithm, LiDCO, Ltd.) was used to monitor SV on a beat-to-beat basis. Optimal parameters were programmed and echo, 6 min walk, and QOL were repeated at 6-8 weeks post-implant. Nine patients had >5% increase in SV after optimization (Group A); the remaining six patients did not. Real-time SV measurements can be used to optimize CRT at the time of implant. Improvement in SV correlates with improvements in LVEF, LVEDD, 6 min walk distance, and echocardiographic dyssynchrony.

  1. Optimization and Modeling of Quadrupole Orbitrap Parameters for Sensitive Analysis toward Single-Cell Proteomics.

    Science.gov (United States)

    Sun, Bingyun; Kovatch, Jessica Rae; Badiong, Albert; Merbouh, Nabyl

    2017-10-06

    Single-cell proteomics represents a field of extremely sensitive proteomic analysis, owing to the minute amount of yet complex proteins in a single cell. Without the amplification potential of nucleic acids, single-cell mass spectrometry (MS) analysis demands special instrumentation running with optimized parameters to maximize the sensitivity and throughput for comprehensive proteomic discovery. To facilitate such analysis, we here investigated two factors critical to peptide sequencing and protein detection in shotgun proteomics, i.e., the precursor ion isolation window (IW) and the maximum precursor ion injection time (ITmax), on an ultrahigh-field quadrupole Orbitrap (Q-Exactive HF). Counterintuitive to the frequently used proteomic parameters for bulk samples (>100 ng), our experimental data and subsequent modeling suggested a universally optimal IW of 4.0 Th for sample quantities ranging from 100 ng to 1 ng, and a sample-quantity-dependent ITmax of more than 250 ms for 1-ng samples. Compared with the benchmark condition of IW = 2.0 Th and ITmax = 50 ms, our optimization generated up to a 300% increase in detected protein groups for 1-ng samples. The additionally identified proteins allowed deeper penetration of the proteome, better revealing crucial cellular functions such as signaling and cell adhesion. We hope this effort can promote single-cell and trace proteomic analysis and enable a rational selection of MS parameters.

  2. Optimal Parameter Exploration for Online Change-Point Detection in Activity Monitoring Using Genetic Algorithms.

    Science.gov (United States)

    Khan, Naveed; McClean, Sally; Zhang, Shuai; Nugent, Chris

    2016-10-26

    In recent years, smart phones with inbuilt sensors have become popular devices to facilitate activity recognition. The sensors capture a large amount of data, containing meaningful events, in a short period of time. The change points in this data are used to specify transitions to distinct events and can be used in various scenarios such as identifying change in a patient's vital signs in the medical domain or requesting activity labels for generating real-world labeled activity datasets. Our work focuses on change-point detection to identify a transition from one activity to another. Within this paper, we extend our previous work on multivariate exponentially weighted moving average (MEWMA) algorithm by using a genetic algorithm (GA) to identify the optimal set of parameters for online change-point detection. The proposed technique finds the maximum accuracy and F_measure by optimizing the different parameters of the MEWMA, which subsequently identifies the exact location of the change point from an existing activity to a new one. Optimal parameter selection facilitates an algorithm to detect accurate change points and minimize false alarms. Results have been evaluated based on two real datasets of accelerometer data collected from a set of different activities from two users, with a high degree of accuracy from 99.4% to 99.8% and F_measure of up to 66.7%.
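
    The MEWMA statistic at the core of the paper can be sketched as follows; the smoothing weight λ and the control limit h are exactly the kind of parameters the GA would tune. The values below, and the simulated 3-axis data, are illustrative assumptions:

```python
import numpy as np

def mewma_alarm(X, lam=0.2, h=20.0):
    """Return the index of the first MEWMA alarm, or -1 if none.
    Assumes X is standardized against the in-control segment, so the
    in-control covariance is the identity."""
    p = X.shape[1]
    # Asymptotic covariance of the MEWMA vector Z under control
    inv_sigma_z = np.linalg.inv(lam / (2 - lam) * np.eye(p))
    z = np.zeros(p)
    for t, x in enumerate(X):
        z = lam * x + (1 - lam) * z          # exponentially weighted update
        t2 = z @ inv_sigma_z @ z             # Hotelling-type T^2 statistic
        if t2 > h:
            return t
    return -1

rng = np.random.default_rng(3)
activity_a = rng.normal(0.0, 1.0, (200, 3))   # in-control activity
activity_b = rng.normal(1.5, 1.0, (100, 3))   # mean shift: new activity
data = np.vstack([activity_a, activity_b])
print("first alarm index:", mewma_alarm(data))  # the shift starts at t = 200
```

    In the GA-tuned setting of the paper, λ and h would be chromosome genes, and the fitness would trade detection accuracy (F-measure) against false alarms on labeled accelerometer data.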

  3. Optimal Parameter Design of Coarse Alignment for Fiber Optic Gyro Inertial Navigation System.

    Science.gov (United States)

    Lu, Baofeng; Wang, Qiuying; Yu, Chunmei; Gao, Wei

    2015-06-25

    Two different coarse alignment algorithms for Fiber Optic Gyro (FOG) Inertial Navigation System (INS) based on inertial reference frame are discussed in this paper. Both of them are based on gravity vector integration, therefore, the performance of these algorithms is determined by integration time. In previous works, integration time is selected by experience. In order to give a criterion for the selection process, and make the selection of the integration time more accurate, optimal parameter design of these algorithms for FOG INS is performed in this paper. The design process is accomplished based on the analysis of the error characteristics of these two coarse alignment algorithms. Moreover, this analysis and optimal parameter design allow us to make an adequate selection of the most accurate algorithm for FOG INS according to the actual operational conditions. The analysis and simulation results show that the parameter provided by this work is the optimal value, and indicate that in different operational conditions, the coarse alignment algorithms adopted for FOG INS are different in order to achieve better performance. Lastly, the experiment results validate the effectiveness of the proposed algorithm.

  4. Optimal Parameter Design of Coarse Alignment for Fiber Optic Gyro Inertial Navigation System

    Directory of Open Access Journals (Sweden)

    Baofeng Lu

    2015-06-01

    Full Text Available Two different coarse alignment algorithms for Fiber Optic Gyro (FOG) Inertial Navigation System (INS) based on an inertial reference frame are discussed in this paper. Both of them are based on gravity vector integration; therefore, the performance of these algorithms is determined by the integration time. In previous works, the integration time is selected by experience. In order to give a criterion for the selection process, and make the selection of the integration time more accurate, optimal parameter design of these algorithms for FOG INS is performed in this paper. The design process is accomplished based on the analysis of the error characteristics of these two coarse alignment algorithms. Moreover, this analysis and optimal parameter design allow us to make an adequate selection of the most accurate algorithm for FOG INS according to the actual operational conditions. The analysis and simulation results show that the parameter provided by this work is the optimal value, and indicate that in different operational conditions, the coarse alignment algorithms adopted for FOG INS are different in order to achieve better performance. Lastly, the experiment results validate the effectiveness of the proposed algorithm.

  5. [Study of optimal parameters of scalp electroacupuncture for rehabilitation effect on children of cerebral palsy].

    Science.gov (United States)

    Jin, Bingxu; Fu, Wenjie; Li, Nuo; Xin, Zhixiong; Liu, Chen

    2018-02-12

    To analyze the effect differences of wave form, intensity, treatment time and treatment frequency by orthogonal design, so as to explore the optimal parameters of scalp electroacupuncture (EA) for the rehabilitation of children with cerebral palsy. Ninety children with cerebral palsy were assigned to 9 groups by orthogonal design, 10 cases in each one. The acupoints were the bilateral excitable area, foot motor sensory area, speech two area, speech three area, balance area, and the intelligent nine acupoints, including Shenting (GV 24), Sishencong (EX-HN 1), and bilateral Benshen (GB 13) and Touwei (ST 8). EA was applied at the bilateral excitable area and speech three area. We designed an orthogonal experiment with four factors and three levels: wave form (sparse wave of 2 Hz, dense wave of 100 Hz, sparse-dense wave of 2 Hz/100 Hz), intensity (1 mA, 2 mA, intensity based on tolerance), time (10 min, 20 min, 30 min), and frequency (once a day, once every other day, twice a week). The Gesell developmental scale was used to evaluate the developmental quotient (DQ), and the gross motor function measure (GMFM) was used to evaluate motor function before and after treatment. The optimal parameters for DQ and GMFM were 2 Hz/100 Hz, 20 min, once every other day. The optimal parameters of scalp EA for cerebral palsy may be 2 Hz/100 Hz, 20 min, once every other day.

  6. Optimizing Parameters of Axial Pressure-Compounded Ultra-Low Power Impulse Turbines at Preliminary Design

    Science.gov (United States)

    Kalabukhov, D. S.; Radko, V. M.; Grigoriev, V. A.

    2018-01-01

    Ultra-low power turbine drives are used as energy sources in auxiliary power systems, energy units, and terrestrial, marine, air and space transport, within the shaft power range N_td = 0.01…10 kW. In this paper we propose a new approach to the development of surrogate models for evaluating the integrated efficiency of a multistage ultra-low power impulse turbine with pressure stages. This method is based on existing mathematical models of ultra-low power turbine stage efficiency and mass. It has been used in a method for selecting the rational parameters of a two-stage axial ultra-low power turbine. The article describes the basic features of an algorithm for optimizing the two-stage turbine parameters and evaluating the efficiency criteria. The underlying mathematical models are intended for use in the preliminary design of turbine drives. The optimization method was tested in the preliminary design of an air starter turbine. Validation was carried out by comparing the results of the optimization calculations with numerical gas-dynamic simulation in the Ansys CFX package. The results indicate sufficient accuracy of the surrogate models used for axial two-stage turbine parameter selection.

  7. Biological optimization of simultaneous boost on intra-prostatic lesions (DILs): sensitivity to TCP parameters.

    Science.gov (United States)

    Azzeroni, R; Maggio, A; Fiorino, C; Mangili, P; Cozzarini, C; De Cobelli, F; Di Muzio, N G; Calandrino, R

    2013-11-01

    The aim of this investigation was to explore the potential of biological optimization in the case of a simultaneous integrated boost on intra-prostatic dominant lesions (DILs), evaluating the impact of TCP parameter uncertainty. Different combinations of TCP parameters (TD50 and γ50 in the Poisson-like model) were considered for the DILs and the prostate outside the DILs (CTV) for 7 intermediate/high-risk prostate patients. The aim was to maximize TCP while constraining NTCPs below 5% for all organs at risk. TCP values depended strongly on the parameters used and ranged between 38.4% and 99.9%; the optimized median physical doses were in the range 94-116 Gy and 69-77 Gy for the DIL and CTV respectively. TCP values were correlated with the PTV-rectum overlap and the minimum distance between the rectum and the DIL. In conclusion, biological optimization for selective dose escalation is feasible and suggests prescribed doses around 90-120 Gy to the DILs. The obtained result depends critically on the assumptions concerning the higher radioresistance of the DILs. In case of very resistant clonogens in the DIL, it may be difficult to maximize TCP to acceptable levels without violating NTCP constraints. Copyright © 2012 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
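
    One common Poisson-type TCP parameterization in terms of TD50 and γ50 (assumed here for illustration; the paper's exact model may differ in detail) makes the sensitivity argument easy to reproduce — at D = TD50 the control probability is 50%, and γ50 sets the normalized slope there:

```python
import math

def tcp_poisson(D, td50, gamma50):
    """Poisson-type TCP: TCP(D) = 0.5 ** exp((2*gamma50/ln 2)*(1 - D/TD50))."""
    return 0.5 ** math.exp((2.0 * gamma50 / math.log(2.0)) * (1.0 - D / td50))

# Sensitivity to the assumed radioresistance of the DIL: raising TD50
# (more resistant clonogens) demands a much higher boost dose for the
# same control probability. Parameter values are illustrative only.
for td50 in (60.0, 80.0):
    row = [round(tcp_poisson(d, td50, 2.0), 3) for d in (70.0, 90.0, 110.0)]
    print(f"TD50={td50:.0f} Gy, gamma50=2:", row)
```

    This is why the record's conclusion is conditional: with a DIL TD50 well above the achievable boost dose, TCP stays low no matter how the NTCP-constrained optimizer redistributes dose.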

  8. Image Segmentation Parameter Optimization Considering Within- and Between-Segment Heterogeneity at Multiple Scale Levels: Test Case for Mapping Residential Areas Using Landsat Imagery

    Directory of Open Access Journals (Sweden)

    Brian A. Johnson

    2015-10-01

    Full Text Available Multi-scale/multi-level geographic object-based image analysis (MS-GEOBIA) methods are becoming widely used in remote sensing because single-scale/single-level (SS-GEOBIA) methods are often unable to obtain an accurate segmentation and classification of all land use/land cover (LULC) types in an image. However, there have been few comparisons between SS-GEOBIA and MS-GEOBIA approaches for the purpose of mapping a specific LULC type, so it is not well understood which is more appropriate for this task. In addition, there are few methods for automating the selection of segmentation parameters for MS-GEOBIA, while manual selection (i.e., a trial-and-error approach) of parameters can be quite challenging and time-consuming. In this study, we examined SS-GEOBIA and MS-GEOBIA approaches for extracting residential areas in Landsat 8 imagery, and compared naïve and parameter-optimized segmentation approaches to assess whether unsupervised segmentation parameter optimization (USPO) could improve the extraction of residential areas. Our main findings were: (i) the MS-GEOBIA approaches achieved higher classification accuracies than the SS-GEOBIA approach, and (ii) USPO resulted in more accurate MS-GEOBIA classification results while considerably reducing the number of segmentation levels and classification variables.

  9. Global parameter optimization for maximizing radioisotope detection probabilities at fixed false alarm rates

    International Nuclear Information System (INIS)

    Portnoy, David; Feuerbach, Robert; Heimberg, Jennifer

    2011-01-01

    Today there is a tremendous amount of interest in systems that can detect radiological or nuclear threats. Many of these systems operate in extremely high throughput situations where delays caused by false alarms can have a significant negative impact. Thus, calculating the tradeoff between detection rates and false alarm rates is critical for their successful operation. Receiver operating characteristic (ROC) curves have long been used to depict this tradeoff. The methodology was first developed in the field of signal detection. In recent years it has been used increasingly in machine learning and data mining applications. It follows that this methodology could be applied to radiological/nuclear threat detection systems. However, many of these systems do not fit into the classic principles of statistical detection theory because they tend to lack tractable likelihood functions and have many parameters, which, in general, do not have a one-to-one correspondence with the detection classes. This work proposes a strategy to overcome these problems by empirically finding parameter values that maximize the probability of detection for a selected number of probabilities of false alarm. To find these parameter values, a statistical global optimization technique that seeks to estimate portions of a ROC curve is proposed. The optimization combines elements of simulated annealing with elements of genetic algorithms. Genetic algorithms were chosen because they can reduce the risk of getting stuck in local minima. However, classic genetic algorithms operate on arrays of Boolean values or bit strings, so simulated annealing is employed to perform mutation in the genetic algorithm. The presented initial results were generated using an isotope identification algorithm developed at Johns Hopkins University Applied Physics Laboratory. The algorithm has 12 parameters: 4 real-valued and 8 Boolean. A simulated dataset was used for the optimization study; the 'threat' set of spectra
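
    The hybrid search the record describes — a genetic algorithm over a mixed vector of 4 real and 8 Boolean parameters, with simulated-annealing-style mutation whose step size and flip rate decay with a cooling temperature — can be sketched as below. The detection-probability objective is a toy stand-in, and crossover is omitted for brevity; none of the constants come from the APL algorithm:

```python
import random

random.seed(4)

def score(params):
    """Toy stand-in for 'probability of detection at fixed false-alarm rate'."""
    reals, bools = params[:4], params[4:]
    base = -sum((r - 0.3) ** 2 for r in reals)   # peaks when each real = 0.3
    bonus = 0.05 * sum(1 for b in bools if b)    # prefers flags switched on
    return base + bonus

def random_individual():
    return [random.random() for _ in range(4)] + \
           [random.random() < 0.5 for _ in range(8)]

def mutate(ind, temp):
    """SA-style mutation: Gaussian steps and bit flips scaled by temperature."""
    out = list(ind)
    for i in range(4):                            # real-valued genes
        if random.random() < 0.3:
            out[i] = min(1.0, max(0.0, out[i] + random.gauss(0.0, temp)))
    for i in range(4, 12):                        # Boolean genes
        if random.random() < 0.3 * temp:          # annealed flip probability
            out[i] = not out[i]
    return out

pop = [random_individual() for _ in range(40)]
for gen in range(60):
    temp = 0.95 ** gen                            # geometric cooling schedule
    pop.sort(key=score, reverse=True)
    elite = pop[:10]                              # elitist selection
    pop = elite + [mutate(random.choice(elite), temp) for _ in range(30)]

best = max(pop, key=score)
print("best score:", round(best and score(best), 3))
```

    Repeating this search once per target false-alarm probability traces out the empirical ROC points the record describes.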

  10. Global parameter optimization for maximizing radioisotope detection probabilities at fixed false alarm rates

    Science.gov (United States)

    Portnoy, David; Feuerbach, Robert; Heimberg, Jennifer

    2011-10-01

    Today there is a tremendous amount of interest in systems that can detect radiological or nuclear threats. Many of these systems operate in extremely high-throughput situations where delays caused by false alarms can have a significant negative impact. Thus, calculating the tradeoff between detection rates and false alarm rates is critical for their successful operation. Receiver operating characteristic (ROC) curves have long been used to depict this tradeoff. The methodology was first developed in the field of signal detection and has in recent years been used increasingly in machine learning and data mining applications. It follows that this methodology could be applied to radiological/nuclear threat detection systems. However, many of these systems do not fit into the classic framework of statistical detection theory because they tend to lack tractable likelihood functions and have many parameters which, in general, do not have a one-to-one correspondence with the detection classes. This work proposes a strategy to overcome these problems by empirically finding parameter values that maximize the probability of detection for a selected number of probabilities of false alarm. To find these parameter values, a statistical global optimization technique that seeks to estimate portions of a ROC curve is proposed. The optimization combines elements of simulated annealing with elements of genetic algorithms. Genetic algorithms were chosen because they can reduce the risk of getting stuck in local minima. However, classic genetic algorithms operate on arrays of Boolean values or bit strings, so simulated annealing is employed to perform mutation in the genetic algorithm. The presented initial results were generated using an isotope identification algorithm developed at the Johns Hopkins University Applied Physics Laboratory. The algorithm has 12 parameters: 4 real-valued and 8 Boolean. A simulated dataset was used for the optimization study; the "threat" set of spectra
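
    As a rough illustration of the hybrid scheme described above, the sketch below runs a small genetic algorithm over 4 real-valued and 8 Boolean genes and applies a temperature-controlled, simulated-annealing-style mutation that cools each generation. The objective `toy_score`, the population size, and the cooling schedule are all hypothetical stand-ins; the paper's actual objective is an empirically estimated detection probability at a fixed false-alarm rate.

```python
import math
import random

random.seed(0)

def toy_score(reals, bools):
    # Stand-in objective; the real objective in the paper is an
    # empirically estimated probability of detection.
    return -sum(x * x for x in reals) + sum(bools)

def sa_mutate(reals, bools, temperature):
    # Simulated-annealing-style mutation: perturbation size shrinks as
    # the temperature cools, so the search gradually turns local.
    new_reals = [x + random.gauss(0.0, temperature) for x in reals]
    new_bools = [(not b) if random.random() < 0.1 * temperature else b
                 for b in bools]
    return new_reals, new_bools

def optimize(pop_size=20, generations=60, t0=1.0, cooling=0.95):
    # 4 real-valued and 8 Boolean parameters, mirroring the abstract.
    pop = [([random.uniform(-2.0, 2.0) for _ in range(4)],
            [random.random() < 0.5 for _ in range(8)])
           for _ in range(pop_size)]
    temperature = t0
    for _ in range(generations):
        pop.sort(key=lambda ind: toy_score(*ind), reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            pa, pb = random.sample(parents, 2)
            # uniform crossover on each gene vector
            reals = [random.choice(pair) for pair in zip(pa[0], pb[0])]
            bools = [random.choice(pair) for pair in zip(pa[1], pb[1])]
            children.append(sa_mutate(reals, bools, temperature))
        pop = parents + children
        temperature *= cooling
    return max(pop, key=lambda ind: toy_score(*ind))

best = optimize()
```

Because the parents are carried over unmutated, the best score never decreases between generations, which matters when each evaluation is an expensive empirical detection run.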

  11. The Parkinsonian Gait Spatiotemporal Parameters Quantified by a Single Inertial Sensor before and after Automated Mechanical Peripheral Stimulation Treatment

    Directory of Open Access Journals (Sweden)

    Ana Kleiner

    2015-01-01

    Full Text Available This study aims to evaluate the change in gait spatiotemporal parameters in subjects with Parkinson’s disease (PD) before and after Automated Mechanical Peripheral Stimulation (AMPS) treatment. Thirty-five subjects with PD and 35 healthy age-matched subjects took part in this study. A dedicated medical device (Gondola) was used to administer the AMPS. All patients with PD were treated in the off-levodopa phase, and their gait performances were evaluated by an inertial measurement system before and after the intervention. One-way ANOVA for repeated measures was performed to assess the differences between pre- and post-AMPS conditions, and one-way ANOVA to assess the differences between PD patients and the control group. Spearman’s correlations assessed the associations between the clinical status of patients with PD (H&Y) and the percentage of improvement in the gait variables after AMPS (α<0.05 for all tests). The PD group had an improvement of 14.85% in stride length, 14.77% in gait velocity, and 29.91% in gait propulsion. The correlation results showed that the higher the H&Y classification, the higher the percentage of improvement in stride length. Treatment based on AMPS intervention seems to induce better performance in the gait pattern of PD patients, mainly in intermediate and advanced stages of the condition.

  12. Rapid mapping of compound eye visual sampling parameters with FACETS, a highly automated wide-field goniometer.

    Science.gov (United States)

    Douglass, John K; Wehling, Martin F

    2016-12-01

    A highly automated goniometer instrument (called FACETS) has been developed to facilitate rapid mapping of compound eye parameters for investigating regional visual field specializations. The instrument demonstrates the feasibility of analyzing the complete field of view of an insect eye in a fraction of the time required if using non-motorized, non-computerized methods. Faster eye mapping makes it practical for the first time to employ sample sizes appropriate for testing hypotheses about the visual significance of interspecific differences in regional specializations. Example maps of facet sizes are presented from four dipteran insects representing the Asilidae, Calliphoridae, and Stratiomyidae. These maps provide the first quantitative documentation of the frontal enlarged-facet zones (EFZs) that typify asilid eyes, which, together with the EFZs in male Calliphoridae, are likely to be correlated with high-spatial-resolution acute zones. The presence of EFZs contrasts sharply with the almost homogeneous distribution of facet sizes in the stratiomyid. Moreover, the shapes of EFZs differ among species, suggesting functional specializations that may reflect differences in visual ecology. Surveys of this nature can help identify species that should be targeted for additional studies, which will elucidate fundamental principles and constraints that govern visual field specializations and their evolution.

  13. Protein standardization III: Method optimization basic principles for quantitative determination of human serum proteins on automated instruments based on turbidimetry or nephelometry.

    Science.gov (United States)

    Blirup-Jensen, S

    2001-11-01

    Quantitative protein determinations in routine laboratories are today most often carried out on automated instruments. However, slight variations in the assay principle, in the programming of the instrument or in the reagents may lead to different results. This makes method optimization and standardization a prerequisite. The basic principles of turbidimetry and nephelometry are discussed. The different reading principles are illustrated and investigated. Various problems are identified, and a suggestion is made for an integrated, fast and convenient test system for the determination of a number of different proteins on the same instrument. An optimized test system for turbidimetry and nephelometry should comprise high-quality antibodies, calibrators, controls, and buffers, and a protocol with detailed parameter settings in order to program the instrument correctly. A good user program takes full advantage of the optimal reading principles for the different instruments. This implies, for all suitable instruments, sample preincubation followed by real sample blanking, which automatically corrects for initial turbidity in the sample. Likewise, it is recommended to measure the reagent blank, which represents any turbidity caused by the antibody itself. By correcting all signals with these two blank values, the best possible signal is obtained for the specific analyte. An optimized test system should preferably offer a wide measuring range combined with a wide security range, which for the user means few re-runs and maximum security against antigen excess. A non-linear calibration curve based on six standards is obtained using a suitable mathematical fitting model, which is normally part of the instrument software.
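
    The double blank correction described above can be stated in a few lines. The sketch below subtracts the sample blank and the reagent blank from the raw reading and then converts the corrected signal to a concentration via a calibration curve through six standards. The piecewise-linear interpolant and all numeric values are illustrative stand-ins for the instrument's non-linear fitting model.

```python
def corrected_signal(raw, sample_blank, reagent_blank):
    # Real sample blanking removes the sample's initial turbidity;
    # the reagent blank removes turbidity caused by the antibody itself.
    return raw - sample_blank - reagent_blank

def calibration_curve(standards):
    # `standards` is a list of (signal, concentration) pairs from six
    # calibrators; a piecewise-linear interpolant stands in here for the
    # instrument's non-linear fitting model.
    pts = sorted(standards)
    def concentration(signal):
        for (s0, c0), (s1, c1) in zip(pts, pts[1:]):
            if s0 <= signal <= s1:
                t = (signal - s0) / (s1 - s0)
                return c0 + t * (c1 - c0)
        # Signals beyond the top calibrator would need a re-run with
        # dilution (antigen-excess security range).
        raise ValueError("signal outside calibrated range")
    return concentration

conc = calibration_curve([(0.0, 0.0), (0.1, 5.0), (0.25, 10.0),
                          (0.45, 20.0), (0.7, 40.0), (1.0, 80.0)])
```

A reading of 1.20 with a sample blank of 0.15 and a reagent blank of 0.05, for example, would be evaluated on the curve as `conc(corrected_signal(1.20, 0.15, 0.05))`.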

  14. Diagnostic accuracy of uriSed automated urine microscopic sediment analyzer and dipstick parameters in predicting urine culture test results.

    Science.gov (United States)

    Huysal, Kağan; Budak, Yasemin U; Karaca, Ayse Ulusoy; Aydos, Murat; Kahvecioğlu, Serdar; Bulut, Mehtap; Polat, Murat

    2013-01-01

    Urinary tract infection (UTI) is one of the most common types of infection. Currently, diagnosis is primarily based on microbiological culture, which is time-consuming and labor-intensive. The aim of this study was to assess the diagnostic accuracy of urinalysis results from UriSed (77 Electronica, Budapest, Hungary), an automated microscopic image-based sediment analyzer, in predicting positive urine cultures. We examined a total of 384 urine specimens from hospitalized patients and outpatients attending our hospital on the same day for urinalysis, dipstick tests and semi-quantitative urine culture. The urinalysis results were compared with those of conventional semi-quantitative urine culture. Of the 384 urinary specimens, 68 were positive for bacteriuria by culture and were thus considered true positives. Comparison of these results with those obtained from the UriSed analyzer indicated that the analyzer had a specificity of 91.1%, a sensitivity of 47.0%, a positive predictive value (PPV) of 53.3% (95% confidence interval (CI) = 40.8-65.3%), and a negative predictive value (NPV) of 88.8% (95% CI = 85.0-91.8%). The accuracy was 83.3% when the urine leukocyte parameter was used, 76.8% when bacteriuria analysis of urinary sediment was used, and 85.1% when the bacteriuria and leukocyturia parameters were combined. The presence of nitrite was the best indicator of culture positivity (99.3% specificity) but had a negative likelihood ratio of 0.7, indicating that it was not a reliable clinical test. Although the specificity of the UriSed analyzer was within acceptable limits, its sensitivity was low. Thus, UriSed urinalysis results do not accurately predict the outcome of culture.
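
    The reported figures follow from a standard 2x2 confusion matrix. In the sketch below, the counts TP = 32, FP = 28, TN = 288, FN = 36 are back-calculated approximations from the stated rates (68 culture positives out of 384 specimens), not values given directly in the abstract.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    # Standard confusion-matrix measures for a screening test
    # evaluated against the culture reference method.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, ppv, npv, accuracy

# Counts reconstructed approximately from the study's reported rates:
sens, spec, ppv, npv, acc = diagnostic_metrics(tp=32, fp=28, tn=288, fn=36)
```

With these counts, sensitivity is 32/68 ≈ 47.0% and specificity 288/316 ≈ 91.1%, matching the values quoted above.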

  15. Homogeneous Gaussian Profile P+-Type Emitters: Updated Parameters and Metal-Grid Optimization

    Directory of Open Access Journals (Sweden)

    M. Cid

    2002-10-01

    Full Text Available P+-type emitters were optimized keeping the base parameters constant. Updated internal parameters were considered. The surface recombination velocity was considered variable with the surface doping level. Passivated homogeneous emitters were found to have low emitter recombination density and high collection efficiency. A complete p+nn+ structure was analyzed, taking into account optimized shadowing and metal-contacted factors for laboratory cells as a function of the surface doping level and the emitter thickness. The base parameters were kept constant to make the emitter characteristics evident. The most efficient P+-type passivated homogeneous emitters provide efficiencies around 21% for a wide range of emitter sheet resistivity (50-500 Ω/sq) with surface doping levels Ns = 1×10^19 cm^-3 and 5×10^19 cm^-3. The output electrical parameters were evaluated considering the recently proposed value ni = 9.65×10^9 cm^-3. A non-significant increase of 0.1% in the efficiency was obtained, validating all the conclusions obtained in this work considering ni = 1×10^10 cm^-3.

  16. Multiobjective Optimization of Turning Cutting Parameters for J-Steel Material

    Directory of Open Access Journals (Sweden)

    Adel T. Abbas

    2016-01-01

    Full Text Available This paper presents a multiobjective optimization study of cutting parameters in turning operation for a heat-treated alloy steel material (J-Steel with Vickers hardness in the range of HV 365–395 using uncoated, unlubricated Tungsten-Carbide tools. The primary aim is to identify proper settings of the cutting parameters (cutting speed, feed rate, and depth of cut that lead to reasonable compromises between good surface quality and high material removal rate. Thorough exploration of the range of cutting parameters was conducted via a five-level full-factorial experimental matrix of samples and the Pareto trade-off frontier is identified. The trade-off among the objectives was observed to have a “knee” shape, in which certain settings for the cutting parameters can achieve both good surface quality and high material removal rate within certain limits. However, improving one of the objectives beyond these limits can only happen at the expense of a large compromise in the other objective. An alternative approach for identifying the trade-off frontier was also tested via multiobjective implementation of the Efficient Global Optimization (m-EGO algorithm. The m-EGO algorithm was successful in identifying two points within the good range of the trade-off frontier with 36% fewer experimental samples.
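
    A minimal sketch of identifying the Pareto trade-off frontier from sampled data, assuming each sample is a (surface roughness, material removal rate) pair with roughness minimized and removal rate maximized; the sample values below are invented for illustration, not taken from the study.

```python
def pareto_front(points):
    # Non-dominated settings among sampled (roughness, removal_rate)
    # pairs: a point is dominated if some other point is at least as
    # good on both objectives.
    front = []
    for p in points:
        dominated = any(q != p and q[0] <= p[0] and q[1] >= p[1]
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (roughness, material removal rate) samples:
samples = [(1.0, 10.0), (2.0, 20.0), (3.0, 15.0),
           (1.5, 25.0), (2.5, 30.0)]
front = pareto_front(samples)
```

A "knee" like the one described above would show up on this frontier as a region where roughness rises sharply for little extra removal rate.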

  17. Han's model parameters for microalgae grown under intermittent illumination: Determined using particle swarm optimization.

    Science.gov (United States)

    Pozzobon, Victor; Perre, Patrick

    2018-01-21

    This work provides a model and the associated set of parameters allowing microalgae population growth to be computed under intermittent illumination. Han's model is coupled with a simple microalgae growth model to yield a relationship between illumination and population growth. The model parameters were obtained by fitting a dataset available in the literature using the Particle Swarm Optimization method. In that work, the authors grew microalgae in excess of nutrients under flashing conditions. The light/dark cycles used for these experiments are quite close to those found in photobioreactors, i.e. ranging from several seconds to one minute. In this work, in addition to producing the set of parameters, the robustness of Particle Swarm Optimization was assessed. To do so, two different swarm initialization techniques were used, i.e. uniform and random distribution throughout the search space. Both yielded the same results. In addition, analysis of the swarm distribution reveals that the swarm converges to a unique minimum. Thus, the produced set of parameters can be trustfully used to link light intensity to population growth rate. Furthermore, the set is capable of describing the effects of photodamage on population growth, hence accounting for the effect of light overexposure on algal growth. Copyright © 2017 Elsevier Ltd. All rights reserved.
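
    A compact, generic Particle Swarm Optimization sketch in the spirit of the fitting described above. The saturating light-response curve, its synthetic "data", and all swarm hyperparameters are hypothetical stand-ins; the actual work fits the coupled Han/growth model to published experimental data.

```python
import random

random.seed(1)

def pso(objective, bounds, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5):
    # Canonical particle swarm optimizer (minimization) with inertia w
    # and cognitive/social weights c1, c2.
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical saturating light-response curve standing in for the
# coupled Han/growth model; "data" generated with mu_max=1.2, K=50.
data = [(I, 1.2 * I / (50.0 + I)) for I in (10.0, 25.0, 50.0, 100.0, 200.0)]

def residual(theta):
    mu_max, half_sat = theta
    return sum((mu_max * I / (half_sat + I) - y) ** 2 for I, y in data)

best, err = pso(residual, [(0.1, 5.0), (1.0, 200.0)])
```

Repeating the run from different initial swarm distributions and checking that the recovered parameters agree mirrors the robustness check reported above.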

  18. Process Parameters Optimization for Producing AA6061/TiB2 Composites by Friction Stir Processing

    Directory of Open Access Journals (Sweden)

    Rao Santha Dakarapu

    2017-04-01

    Full Text Available Friction stir processing (FSP) is a novel solid-state technique developed to refine microstructure and improve mechanical properties, and it can be used to fabricate aluminium alloy matrix composites. An attempt was made to fabricate AA6061/TiB2 aluminium matrix composites (AMCs), and the influence of process parameters such as rotational speed, transverse feed, axial load and percentage of reinforcement on microstructure and mechanical properties was studied. Microstructural observations revealed that the reinforcement particles (TiB2) were uniformly dispersed in the nugget zone. The tensile strength and hardness of the composites were evaluated; both increased with increasing rotational speed and percentage of reinforcement particles. The process parameters were optimized using Taguchi analysis (single variable) and Grey analysis (multi-variable). Rotational speed was the most influential parameter in both the single-variable and the multi-variable optimization methods. ANOVA was also performed to determine the percentage contribution of each parameter.
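
    The Taguchi single-variable analysis mentioned above ranks factors by their effect on a signal-to-noise ratio. The sketch below computes the standard larger-the-better S/N (appropriate for responses such as tensile strength or hardness) and a factor's main-effect range (delta), as in a Taguchi response table; the factor names and response values are invented for illustration.

```python
import math

def sn_larger_is_better(values):
    # Taguchi larger-the-better signal-to-noise ratio in dB:
    # S/N = -10 * log10( (1/n) * sum 1/y_i^2 )
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / n)

def main_effect_delta(runs, factor):
    # Rank a factor's influence by the range (delta) of its mean S/N
    # across levels; larger delta means a more influential factor.
    by_level = {}
    for levels, response in runs:
        by_level.setdefault(levels[factor], []).append(
            sn_larger_is_better([response]))
    means = [sum(v) / len(v) for v in by_level.values()]
    return max(means) - min(means)

# Invented two-factor, two-level runs (tensile strength in MPa):
runs = [({"speed": 1, "feed": 1}, 100.0),
        ({"speed": 1, "feed": 2}, 110.0),
        ({"speed": 2, "feed": 1}, 150.0),
        ({"speed": 2, "feed": 2}, 160.0)]
delta_speed = main_effect_delta(runs, "speed")
delta_feed = main_effect_delta(runs, "feed")
```

In this toy table, `speed` has the larger delta, so it would be ranked the most influential factor, analogous to the rotational-speed result above.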

  19. A parameter optimization method to determine ski stiffness properties from ski deformation data.

    Science.gov (United States)

    Heinrich, Dieter; Mössner, Martin; Kaps, Peter; Nachbauer, Werner

    2011-02-01

    The deformation of skis and the contact pressure between skis and snow are crucial factors for carved turns in alpine skiing. The purpose of the current study was to develop and to evaluate an optimization method to determine the bending and torsional stiffness that lead to a given bending and torsional deflection of the ski. Euler-Bernoulli beam theory and classical torsion theory were applied to model the deformation of the ski. Bending and torsional stiffness were approximated as linear combinations of B-splines. To compute the unknown coefficients, a parameter optimization problem was formulated and successfully solved by multiple shooting and least squares data fitting. The proposed optimization method was evaluated based on ski stiffness data and ski deformation data taken from a recently published simulation study. The ski deformation data were used as input data to the optimization method. The optimization method was capable of successfully reproducing the shape of the original bending and torsional stiffness data of the ski with a root mean square error below 1 N m2. In conclusion, the proposed computational method offers the possibility to calculate ski stiffness properties with respect to a given ski deformation.
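
    A minimal sketch of the least-squares idea above: approximate a stiffness profile as a linear combination of B-splines (for brevity, degree-1 "hat" B-splines here) and solve the normal equations for the coefficients. The full method additionally couples the fit to the beam and torsion equations via multiple shooting, which is omitted in this sketch.

```python
def hat(j, knots, x):
    # Degree-1 B-spline (hat function) centred at knots[j].
    left = knots[j - 1] if j > 0 else knots[j]
    right = knots[j + 1] if j < len(knots) - 1 else knots[j]
    if x < left or x > right:
        return 0.0
    if x <= knots[j]:
        return 1.0 if knots[j] == left else (x - left) / (knots[j] - left)
    return 1.0 if knots[j] == right else (right - x) / (right - knots[j])

def solve(a, b):
    # Gaussian elimination with partial pivoting for the normal equations.
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j]
                              for j in range(i + 1, n))) / m[i][i]
    return x

def fit_stiffness(xs, ys, knots):
    # Least-squares coefficients c so that sum_j c_j * hat_j(x)
    # fits the sampled stiffness data (xs, ys).
    n = len(knots)
    A = [[hat(j, knots, x) for j in range(n)] for x in xs]
    AtA = [[sum(A[i][p] * A[i][q] for i in range(len(xs)))
            for q in range(n)] for p in range(n)]
    Atb = [sum(A[i][p] * ys[i] for i in range(len(xs))) for p in range(n)]
    return solve(AtA, Atb)

# Toy check: the linear profile y = 2x is exactly representable.
coeffs = fit_stiffness([0.0, 0.25, 0.5, 0.75, 1.0],
                       [0.0, 0.5, 1.0, 1.5, 2.0],
                       [0.0, 0.5, 1.0])
```

The toy profile and knot placement are invented; the study fits bending and torsional stiffness to ski deflection data rather than to stiffness samples directly.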

  20. Improvement of the fringe analysis algorithm for wavelength scanning interferometry based on filter parameter optimization.

    Science.gov (United States)

    Zhang, Tao; Gao, Feng; Muhamedsalih, Hussam; Lou, Shan; Martin, Haydn; Jiang, Xiangqian

    2018-03-20

    The phase slope method, which estimates height from the fringe pattern frequency, and the phase method, which estimates height from the fringe phase, are fringe analysis algorithms widely used in interferometry. Generally, both extract the phase information by filtering the signal in the frequency domain after a Fourier transform. Among the numerous papers in the literature about these algorithms, it is found that the design of the filter, which plays an important role, has never been discussed in detail. This paper focuses on the filter design in these algorithms for wavelength scanning interferometry (WSI), optimizing the parameters to acquire the best results. The spectral characteristics of the interference signal are analyzed first. The effective signal is found to be narrow-band (near single frequency), and the central frequency is calculated theoretically; this determines the position of the filter pass-band. The width of the filter window is optimized in simulation to balance noise elimination against filter ringing. Experimental validation of the approach is provided, and the results agree very well with the simulation. The experiments show that accuracy can be improved by optimizing the filter design, especially when the signal quality, i.e. the signal-to-noise ratio (SNR), is low. The proposed method also shows the potential to improve immunity to environmental noise by adapting the filter to the signal, acquiring optimal results through an adaptive filter design once the signal SNR can be estimated accurately.
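
    The filtering step described above can be sketched as a frequency-domain band-pass: transform, keep a window of bins around the (theoretically known) carrier frequency and its conjugate mirror, and transform back. The naive DFT, the specific bin numbers, and the window half-width below are illustrative only, not the paper's optimized design.

```python
import cmath
import math

def dft(x):
    # Naive O(n^2) discrete Fourier transform (illustrative only).
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def bandpass(signal, centre_bin, half_width):
    # Keep only bins within half_width of the carrier (and of its
    # conjugate mirror at n - centre_bin), zeroing everything else.
    X = dft(signal)
    n = len(X)
    Y = [X[k] if min(abs(k - centre_bin),
                     abs(k - (n - centre_bin))) <= half_width else 0j
         for k in range(n)]
    return [v.real for v in idft(Y)]

# Narrow-band "interference" tone at bin 16 plus an out-of-band tone:
n, carrier = 128, 16
signal = [math.cos(2 * math.pi * carrier * t / n)
          + 0.5 * math.cos(2 * math.pi * 40 * t / n)
          for t in range(n)]
filtered = bandpass(signal, centre_bin=carrier, half_width=2)
```

Widening `half_width` admits more noise while narrowing it increases ringing, which is exactly the trade-off the window-width optimization above addresses.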